Many people fear that artificial intelligence (AI) will someday surpass human intelligence and take over the world. This fear is often fueled by sensationalized media reports and Hollywood movies that depict AI as a malevolent force that seeks to destroy or enslave humanity.
But is this fear justified? Are we really on the verge of creating artificial general intelligence (AGI) that can outsmart us at everything and pose an existential threat to our civilization? Or are we overestimating the capabilities and dangers of AI, especially large language models (LLMs) that can generate natural language texts on various topics?
In this post, I will argue that while building AGI is a very real possibility, one with immense risk (albeit very high reward), LLMs aren’t what we imagine when we think of AI taking over the world. These models are better suited to be copilots that enhance our workflow, making it faster and more efficient, rather than replacing us completely.
What are LLMs and what can they do?
LLMs are neural networks that are trained on massive amounts of text data, such as books, articles, websites, social media posts, etc. They learn to predict the next word or sentence given some previous words or sentences as input. By doing so, they can generate coherent and fluent texts on various topics, given some keywords or prompts.
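To make the training objective concrete, here is a toy illustration, not a real LLM: a tiny bigram model that counts which word tends to follow which in a small corpus, then "generates" text by repeatedly predicting the most likely next word. Real LLMs use neural networks with billions of parameters rather than word counts, but the core task, predicting the next token from what came before, is the same.

```python
from collections import Counter, defaultdict

# A deliberately tiny "training corpus".
corpus = "the cat sat on the mat and the cat slept on the mat".split()

# Count how often each word follows each other word (a bigram model).
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    """Return the word most frequently observed after `word`."""
    return following[word].most_common(1)[0][0]

def generate(start, length=5):
    """Repeatedly predict the next word, starting from `start`."""
    words = [start]
    for _ in range(length):
        words.append(predict_next(words[-1]))
    return " ".join(words)
```

Calling `generate("the")` on this corpus produces fluent-looking but mindless repetition of the patterns it has seen, which is a crude caricature of the point made below: the model mimics statistics, it does not understand.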
The ones we hear most about today are ChatGPT, Bing Chat (based on GPT-4) and Bard (based on LaMDA). These models have billions of parameters (weights) that encode the statistical patterns of natural language. They can perform various natural language processing (NLP) tasks, such as answering questions, summarizing texts, writing essays, generating code, creating poems, etc.
However, LLMs are not magic. They are not intelligent in the sense that they truly understand what they are writing or saying. They are simply mimicking the patterns of natural language that they have seen in their training data. They do not themselves have any common-sense knowledge or reasoning ability. They do not have any goals or preferences. They do not have any emotions or morals; they can merely simulate them.
This means that LLMs can also make mistakes: they can produce nonsense, contradict themselves, repeat themselves, plagiarize others, offend people, spread misinformation, etc. When a model confidently generates false information, this is called a 'hallucination', and teams at labs like OpenAI are working to reduce these, with some success.
Why LLMs are copilots, not overlords
Given these limitations and challenges of LLMs, it is clear that they are not ready to replace humans in any domain that requires creativity, critical thinking, judgment, ethics, or empathy. They are not going to write better novels than Shakespeare or Tolkien on their own, nor are they going to invent better technologies than Tesla or Jobs without any kind of prompting.
However, this does not mean that LLMs are useless or harmful. On the contrary, LLMs can be very useful and beneficial if we use them as copilots rather than overlords. By copilots, I mean tools that augment our abilities and help us in our tasks rather than replace us entirely.
LLMs can help us write faster and better by suggesting words, sentences, or paragraphs that match our style and tone
LLMs can help us research deeper and wider by finding relevant information, summarizing key points, or generating questions that prompt us to explore further
LLMs can help us learn faster and easier by explaining concepts, providing examples, or creating quizzes that test our understanding
LLMs can help us communicate better and clearer by translating languages, paraphrasing texts, or generating captions that convey our message
These are just some of the possible ways that LLMs can be copilots for us in various domains. The key idea is that we should use them as partners rather than competitors. We should leverage their strengths while being aware of their weaknesses. We should provide them with feedback and guidance while learning from their outputs. We should collaborate with them rather than delegate to them.
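The partnership described above can be sketched as a simple human-in-the-loop pattern: the model proposes, the human decides. The `suggest` function below is a hypothetical stand-in for a call to a real LLM API (in practice you would swap in your provider's client); the point is the control flow, where the model never overwrites the draft unilaterally.

```python
def suggest(draft):
    # Hypothetical stand-in for an LLM call; a real copilot would
    # send `draft` to a model and receive a proposed continuation.
    return draft + " And here is a suggested next sentence."

def copilot_edit(draft, accept):
    """Offer the model's suggestion, but let the human decide.

    `accept` is a callback representing human review: it receives
    the suggestion and returns True to keep it or False to reject
    it, so the human stays in the loop at every step.
    """
    suggestion = suggest(draft)
    return suggestion if accept(suggestion) else draft
```

Whether `accept` is a keystroke in an editor (as in today's autocomplete-style copilots) or a full review pass, the structure is the same: the model augments the workflow, and the human retains final say.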
How to use LLMs as copilots
We are already seeing work here by major tech companies. Microsoft announced Microsoft 365 Copilot, which will let you leverage the power of AI across their office suite, from creating text in Word to generating tables and sorting data in Excel. Recently, Adobe also released a beta of a model they call Firefly. As of now, Firefly is a separate app, but Adobe envisions it being integrated into their Creative Cloud suite. We are also seeing similar announcements from Google and Unity.
Large language models are powerful tools that can generate natural language texts on various topics, given some keywords or prompts. However, they are not intelligent in the sense that they understand what they are writing or saying. They are more suited to be copilots which help enhance our workflow, making it faster and more efficient rather than replacing us completely.
Using them as copilots requires a shift in mindset and behavior. Far too often, we miss out on the potential of a new tool because we try to equate it to one we had earlier. We had phone-book directories before the internet, and so when the internet arrived, we tried to make it work like a directory. Fortunately for us, we soon figured out that a search engine is a better way to work with the internet. A similar shift in mindset is required here to leverage the true power of these LLMs. I outline one such shift here, and I'd love to hear your thoughts on it.