How do you come up with a storyline? Let's say you're an author writing a book. How do you start with a plot? I have always wondered how this process works. How do you come up with something interesting to read so consistently, churning out best-seller after best-seller? If it's a series, then sure: you have an existing story to build upon, but that is not always the case!
Personally, I think you start with something that has happened to you, or perhaps a variation of something you've already read or watched (or heard). You just twist the story and add your characters; imagine the existing setting and augment it. But what happens when you're unable to do that? I mean, it's hard alright - not everything comes together so easily, and sometimes you just forget the little details: like what happened to the Duffer brothers recently. Well, that is something people just need to be more careful with. I mean, I can totally imagine myself as a director/story-writer who forgot the birthday of a character mentioned earlier in a scene!
Recently, I accidentally discovered another way to figure out a plotline: AI. I am pretty sure I am not the first one to do this: someone used AI to write an entire book of poems, while another student used GPT-3 (the same model I used to get these results) to write blog posts that ended up at the top of Hacker News. However, I did not do the same thing they did: I didn't ask GPT-3 to write a book or a poem. All I did was ask it to summarize the plot of a book that doesn't exist! Some of the summaries were pretty generic, but sometimes GPT-3 went into detail about the story. Another surprisingly good way to get it to generate a storyline was to give it the name of an existing book and ask it to create the summary, but then, in the first line, prompt it with a slightly varied, or even radically different, plot. Let me show you an example.
Describe the plot of "One of us is lying" in great detail. This book is about a group of high school students who get involved in a murder
Now here's what GPT-3 filled in:
mystery. One of the students, Bronwyn, is an aspiring journalist who is determined to solve the case. The other students are Simon, the school's golden boy who is the victim of the murder; Cooper, the star athlete with a secret; Addy, the popular girl who is hiding something; Nate, the bad boy with a troubled past; and finally, Mara, the outcast who was the last person to see Simon alive. The story starts with Simon's body being found in the school library. The police think that he was killed with a fire poker, but they can't find the weapon. The students all have different alibis for where they were when Simon was killed. Bronwyn starts to investigate and she quickly realizes that one of the students is lying. She doesn't know who it is, but she is determined to find out. The plot twists and turns as Bronwyn gets closer and closer to the truth. The reader is kept guessing until the very end.
Naturally, I wanted to know who the killer was. So I prompted the AI with this:
Who was the killer?
And it filled in this:
Why did they do it? You'll have to read the book to find out!
I was low-key impressed with the answer but kept prodding further.
Who killed Simon? It was M
I had to prompt the "M" at the end because it wasn't giving me a name, so I literally shoved the "M" down its throat! Nevertheless, I got my answer:
addy! She killed him because he found out about her cheating on her boyfriend with Nate.
So, Maddy it was! Conveniently, it also provided me with a motive for the murder. If I ever publish a book with a plot like this and you happen to be reading it, don't be surprised at the déjà vu ;)
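For anyone who wants to try this trick themselves, here is a minimal sketch of how a prompt like the ones above could be sent to GPT-3 through OpenAI's Python library. The engine name, token limits, and the `build_prompt` helper are my illustrative assumptions, not the exact setup I used:

```python
# Sketch: building a "fake plot summary" prompt and (optionally) sending it
# to GPT-3. The engine name and parameters below are assumptions for
# illustration, not a record of the exact settings used in this post.
import os


def build_prompt(book_title: str, seed_plot: str) -> str:
    """Combine a real book's title with a (possibly altered) opening plot line."""
    return f'Describe the plot of "{book_title}" in great detail. {seed_plot}'


prompt = build_prompt(
    "One of us is lying",
    "This book is about a group of high school students "
    "who get involved in a murder",
)

if os.environ.get("OPENAI_API_KEY"):  # only call the API when a key is set
    import openai

    response = openai.Completion.create(
        engine="text-davinci-002",  # assumed engine name
        prompt=prompt,
        max_tokens=256,
        temperature=0.7,
    )
    print(response.choices[0].text)
else:
    print(prompt)  # no key: just show the prompt we would send
```

The "shoving an M down its throat" part is the same idea: you append the forced text (e.g. `Who killed Simon? It was M`) to the prompt and let the model complete the rest.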
Now, some people have voiced their concerns about this, and I can understand why. Just take my example above: while prompting the AI, I used the name of Karen McManus' book, but the plot that GPT-3 produced was different from her book's plot. Agreed, the setting is similar and the character names are from the book, but the characters have different personalities and roles here. This is an issue that, in my opinion, can be easily solved if we draw comparisons between man and machine.
Humans take inspiration from those around them all the time, consciously and unconsciously. After reading this post, you will likely not be the person you were before reading it. When we use the knowledge we gained through such media or people, we cannot reference or thank everyone involved in the process. I cannot reference my kindergarten book publisher in every piece I write just because I learned English from their textbooks! That wouldn't be sensible, and it's not feasible either. However, suppose I am explicitly taking information from somewhere. In that case, it's my moral (and at times, legal!) obligation to mention them. We can apply the same standard to text generated by GPT-3. In my case, I can thank Karen McManus in my book because I used her book's name in the prompt that GPT-3 used to output the text which later became my book's storyline.
Sure, there will be unethical users of the model. I can already imagine people using GPT-3 to generate hateful content en masse and using it to spam or harass. There is also the possibility of bias, which is something OpenAI seems to be working to tackle.