Does ChatGPT's emergence signal the AI apocalypse?

03/07/2023

The novel chatbot has captured the public's interest with its human-like responses. But how does it work, and how dangerous is it?

Image by Sanket Mishra

By Ethan Attwood

NATURAL LANGUAGE CHATBOT ChatGPT has changed how we see Artificial Intelligence (AI). The versatile chat interface lets users ask questions or give instructions as if in everyday conversation, and receive answers they might expect from an encyclopaedic mind. However, the real power of ChatGPT is in its ability to invent detailed and complete narratives when given vague prompts. This has led to fears of academic misconduct, and institutions, including York, are working quickly to update their guidance on using the emerging technology in assignments.

AI has recently been used to help a paralysed patient walk again, and has shown immense potential for the discovery of novel antibiotics. However, as the first mainstream AI to capture the public's imagination, ChatGPT has heightened fears that we are approaching a tipping point, where AI becomes sophisticated enough to present an existential threat to humanity.

This premise has been explored in science fiction for decades, from Terminator and Blade Runner to I, Robot and even WALL-E, where technology is presented as so useful that we completely lose our ability to function independently. With industry leaders beginning to call for increased regulation of AI, it's worth asking the question before we reach any catastrophic inflection point: how dangerous is AI, really?

Large language models, of which ChatGPT is the most advanced public example, work by analysing large bodies of text and learning relationships between words and word fragments (referred to as "tokens"). This enables next-token prediction: given the start of a sentence such as "Rishi Sunak is…", ChatGPT will return "Rishi Sunak is a British politician", based on how it has previously seen that sentence (and similar ones) completed with a job title. However, processing text one token at a time is limited, because the model has no way to value some words more highly than others (is "a British politician" more relevant than, say, "well dressed", "a fool", or the historically correct but out-of-date "the Chancellor of the Exchequer"?). In 2017, a team at Google Brain addressed this with the transformer, an architecture that processes a large volume of text at once and learns which parts of it matter most for the result.
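
To make next-token prediction concrete, here is a minimal Python sketch. The probability table is invented purely for illustration; a real model learns these values from billions of sentences rather than from a hand-written dictionary.

```python
# A minimal sketch of next-token prediction, assuming a toy model whose
# "knowledge" is a hand-built table of token probabilities. A real large
# language model learns these probabilities during training.

# Hypothetical probabilities for the token following "Rishi Sunak is a",
# invented here for illustration.
next_token_probs = {
    "British": 0.55,
    "well": 0.08,
    "fool": 0.01,
    "Chancellor": 0.10,  # historically correct but out of date
}

def predict_next_token(probs):
    """Greedy decoding: pick the single most likely next token."""
    return max(probs, key=probs.get)

prompt = "Rishi Sunak is a"
print(prompt, predict_next_token(next_token_probs))
# -> Rishi Sunak is a British
```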

Transformers take their name from the way they transform text: the input is encoded into arrays of numbers that a computer can evaluate quickly, and the output is decoded back into readable text at the end. Their key innovation is a "self-attention" mechanism for judging the relevance of words, which calculates how much weight or "attention" to give each part of the input. The key question a data scientist would ask at this point is: where does the input come from?
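
Self-attention boils down to a short calculation: every token is compared with every other token, and the comparison scores decide how much each one contributes to the result. Below is a minimal sketch using NumPy; the token embeddings and weight matrices are random stand-ins for values a trained model would have learned.

```python
import numpy as np

# A sketch of the scaled dot-product self-attention calculation from
# "Attention Is All You Need" (Vaswani et al., 2017). All numbers here
# are random; in a trained model they are learned from data.

rng = np.random.default_rng(0)
n_tokens, d = 4, 8                  # 4 input tokens, 8-dimensional embeddings
X = rng.normal(size=(n_tokens, d))  # encoded input: one vector per token

# Learned projections produce a query, key and value vector for each token.
Wq, Wk, Wv = (rng.normal(size=(d, d)) for _ in range(3))
Q, K, V = X @ Wq, X @ Wk, X @ Wv

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

# Each row of `weights` says how much attention one token pays to the others.
weights = softmax(Q @ K.T / np.sqrt(d))
output = weights @ V                # weighted blend of the value vectors
print(weights.round(2))             # each row sums to 1
```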

A common saying in the machine learning world is "garbage in, garbage out": if a model does not have a sufficient volume of high-quality data to learn from, it cannot produce high-quality results. For this reason, ChatGPT's developer, OpenAI, hired 40 contractors to create a bespoke database of sample responses. User queries were collected from the previous version of GPT, and ideal responses to them were written by human labellers. Most queries fell into three categories, reflecting the most common ways in which humans seek information: direct ("tell me about…"), few-shot (given these examples, generate a similar response) and continuation (given a prompt, provide a conclusion). This work resulted in over 13,000 input/output pairs.
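
The dataset itself is not public, but the three categories are easy to illustrate. The prompt/response pairs below are invented examples of the format, not real entries from OpenAI's data.

```python
# Hypothetical examples of the three query categories described above,
# in a simple prompt/response format. All entries are invented.

training_pairs = [
    # Direct: "tell me about..."
    {"prompt": "Tell me about the University of York.",
     "response": "The University of York is a research university..."},
    # Few-shot: given these examples, generate a similar response.
    {"prompt": "great -> brilliant\nbad -> awful\nhappy ->",
     "response": "delighted"},
    # Continuation: given a prompt, provide a conclusion.
    {"prompt": "Once upon a time, a chatbot",
     "response": "learned to write essays, and the universities took notice."},
]

for pair in training_pairs:
    print(repr(pair["prompt"]), "->", repr(pair["response"]))
```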

Next, the labellers were asked to rank different responses to the same query. These rankings were used to train a reward model, which learns the characteristics of better responses and can therefore score the chatbot's output on input it has never seen. That score was then fed back into the model as an incentive, via reinforcement learning, to generate the responses with the greatest rewards. Together, these concepts explain the acronym: Generative Pre-trained Transformer.
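
The ranking step can be made concrete with the pairwise loss used in OpenAI's InstructGPT work, on which ChatGPT builds: the reward model is penalised whenever it scores a labeller's preferred response below the rejected one. The scoring function here is a hypothetical stand-in; a real reward model is itself a large neural network.

```python
import math

# A sketch of the pairwise ranking loss used to train a reward model,
# following the approach in OpenAI's InstructGPT paper. `score` is a
# hypothetical stand-in for the reward model itself.

def score(response: str) -> float:
    # Invented reward: longer, more detailed answers score higher.
    return 0.1 * len(response.split())

def pairwise_loss(preferred: str, rejected: str) -> float:
    """Small when the preferred response already out-scores the rejected one."""
    margin = score(preferred) - score(rejected)
    return -math.log(1 / (1 + math.exp(-margin)))  # -log(sigmoid(margin))

good = "Rishi Sunak is a British politician and the current Prime Minister."
bad = "No idea."
print(pairwise_loss(good, bad))  # low loss: the ranking is already satisfied
print(pairwise_loss(bad, good))  # high loss: the reward model needs updating
```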

Chatbot responses tend to be evaluated along three dimensions: helpfulness, which measures the output's relevance and detail; truthfulness, the avoidance of "hallucinations", wherein nonsensical or incorrect information is returned; and harmlessness, the avoidance of prejudicial or misleading content. Aligning with these values provides a form of self-regulation, but policymakers are still keen to ensure AI cannot advance past the point of human control. At the moment, ChatGPT is nowhere close. Far more concerning are models able to fabricate lifelike images, video or audio of real people, a capability with wide-ranging legal implications. If we reach a point where any video or recording can be trivially faked, could such material still be used as criminal evidence? Regulating AI appears secondary to regulating humans' ability to use it for nefarious purposes, which is no different from any legislation written since the dawn of the legal system.

In conclusion, while concerns about an AI apocalypse exist, it is important to approach ChatGPT and AI technology with a balanced perspective. ChatGPT's current capabilities do not align with the catastrophic scenarios often depicted in popular culture. Responsible development, ethical guidelines, and human oversight are key to harnessing the potential benefits of AI while mitigating risks. Collaboration between researchers, policymakers, and society can shape the future of AI towards positive outcomes, ensuring transparency, ethics, and responsible use.*

*This conclusion was written by ChatGPT.