Let's talk about ChatGPT


The dangers of AI, beyond the clichéd dystopian predictions


Image by Louisa Norton

By Grace Bannister

ChatGPT: an AI that can, and most definitely will, transform almost every aspect of our lives. By now I’m confident that most people will have at least heard of ChatGPT, whether favourably or not. My initial idea for this article was to determine the place of AI agents such as ChatGPT within the realm of other labour-saving technologies. The transformation of our daily lives, routines and tasks by machines is nothing new: since the Industrial Revolution, machines have been saving humans time and effort on menial and monotonous work. But the idea of machines performing more intellectually demanding tasks, with seemingly human mannerisms, is unnerving to say the least. Or is it?

One can imagine the benefits of AI within sectors such as education, where, for instance, assistive technology could support pupils with special educational needs. The image of AI as your own personal assistant, sending your emails and checking your grammar, is certainly tempting. There are clear advantages to AI as the latest labour-saving technology. However, after exploring ChatGPT and AI from this angle, I ultimately decided that it was a vantage point already exhausted by most authors writing on AI, and on ChatGPT specifically – though not before I had imagined my younger self's shock at the prospect of a robot that could do my homework!

Nor does this article aim to place AI within the realm of a dystopian sci-fi novel, with robots taking charge and causing mass unemployment. You can rest assured, though, that upon asking ChatGPT about the “main disadvantages within AI”, I received a message informing me of an “unknown error” – so fret not, AI supposedly has no drawbacks!

So, what will this article explore, you may be asking? Well, let’s start with the innate biases present in AI agents like ChatGPT – biases carried through from the people who coded them, and from the research and arguments the models choose to present. The risks this raises could be significant as AI infiltrates sectors such as law, policing and politics. This is, in fact, what ChatGPT told me when I asked whether its “answers contain[ed] a fundamental Eurocentric and patriarchal bias”. Although AI models are designed to be “impartial”, it assured me, “biases can stem from the data used to train the algorithm”. This means that AI software will inevitably reflect the demographic of those predominantly working within the field – white men – despite the promotion and encouragement of greater inclusivity in STEM-related fields.

Furthermore, Ayo Tometi, a co-founder of the Black Lives Matter movement, has recognised the dangers of racism within AI and has urged tech companies to address it. A variety of tests have already shown that AI systems are designed around white, male users – facial recognition, for example, performs disproportionately poorly for Black women. This points to one of the fundamental issues for the future of AI, as well as its effects on marginalised communities. Many of these (likely unintended) race- and gender-biased algorithms are a product of the demographic of those working in the technology and AI sector. With Black workers making up only 2.5 percent of Google's workforce, and women only 22 percent of AI professionals, it is no wonder that the algorithms reflect these demographics.

These algorithms have real-world impacts. Think back to the controversial algorithm used by the government to produce GCSE and A-level results in 2020, which favoured privately educated students and those from socially privileged backgrounds. More dangerously still, the conviction and crime data fed into AI systems, typically in the US, is skewed by police racism and stereotyping – biases that are then further perpetuated when those systems decide, for example, who gets bail.

These examples are true of AI more generally, not exclusively of ChatGPT and other generative AI. The astounding abilities of image-generating AI have led to the circulation of deepfake images – such as the picture of the Pope in a puffer jacket, produced with the image generator Midjourney – that have been widely believed across social media platforms. The biases within AI, combined with its ability to spread misinformation, could be especially problematic within politics, with many people eager to form prejudiced and racist views based on false information.

I don’t pretend to know the answers. Even ChatGPT doesn’t know the answers. But that makes it all the more important that we keep asking fundamental questions about who is really behind the AI that will increasingly shape our lives and, more importantly, who it will benefit.