AI Myths Debunked: Unpacking Six Common Misconceptions
Media coverage of AI has contributed to misinformation about what it can do now and what it might achieve in the future. It’s time to unpick the hype. By Vassilis Galanos, SJ Bennett, Ruth Aylett and Drew Hemment.
Artificial Intelligence (AI) is the subject of extraordinary hype concerning its abilities and possibilities, resulting in the spread of misinformation and myths. While this “mythinformation”, as Langdon Winner once called it, dates back at least to the 1980s, it is worth revisiting in light of today’s hype about generative AI. In the news media we see examples of AI used in policing to identify potential suspects and in recruitment to screen CVs, while in films and TV we are shown sentient robots and computer systems. AI is even marketed as something that can autonomously produce its own artworks, while AI text generators threaten to displace jobs and flood the web with text of dubious quality[1].
These stories of AI are so widespread that its key terms have become ‘suitcase words’ – words that carry around multiple meanings that change depending on the context in which they are used[2]. Here, we debunk six of the most common misconceptions that have taken root about AI. These six myths are pointers: they are all interconnected and come in various guises, sometimes related to other technological myths about progress and commercial desire.
First Myth: AI learns like humans
A common misconception is that new AI systems learn the same way as humans, only better, with the main difference being that they are more ‘objective’ and ‘correct’. However, while there are superficial similarities, and they can find patterns that a human might miss due to the sheer size of the datasets they learn from, AI systems have no understanding of meaning or cause and effect – they are really making statistical associations. What they learn depends entirely on what data they are given. For example, face analysis systems trained on data with too few people of colour cannot accurately process faces with dark skin. And even this learning is fallible: a robot cleaner can confuse useful items with trash; a medical system might miss significant patient background information; and a robot judge might suggest that someone is guilty because of previous convictions or because of the neighbourhood they live in. Another example is the so-called ‘hallucinated’ academic references produced by widely used text generators such as OpenAI’s ChatGPT[3].

The idea that there is one correct, rational perspective – a long tradition of ‘philosophical objectivism’ – runs through the history of automated systems, producing the illusion that ‘the computer is always right’. AI is not exempt from this type of cognitive bias, known as automation bias. At the same time, AI can produce outputs that are difficult to distinguish from the real deal. A good example can be seen in the current hype around generative AI systems such as ChatGPT, Bard, DALL-E and Midjourney, which can produce text or images that seem indistinguishable from human-generated outputs yet may contain entirely false or unverified information. Such output can be used to ‘poison’ datasets and thus skew or warp interpretations of the world.
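To make the ‘statistical association’ point concrete, here is a deliberately tiny sketch in Python (all data, names and numbers are invented for the example; real systems are vastly larger but rest on the same principle). A toy model that only counts which word tends to follow which will fluently repeat whatever its training data over-represents, whether or not it is true:

```python
import random
from collections import defaultdict

# A toy 'language model': it counts which word follows which in its
# training data, then samples from those counts. It has no notion of
# truth or meaning, only co-occurrence statistics.
corpus = (
    "the moon is made of rock . "
    "the moon is made of cheese . "
    "the moon is made of cheese . "
).split()

follows = defaultdict(list)
for current_word, next_word in zip(corpus, corpus[1:]):
    follows[current_word].append(next_word)

random.seed(0)
word = "the"
output = [word]
for _ in range(6):
    word = random.choice(follows[word])  # weighted by frequency in the data
    output.append(word)

print(" ".join(output))
# Because 'cheese' outnumbers 'rock' in the data, the model will usually
# claim the moon is made of cheese: the output mirrors the data, not the world.
```

The sketch is a caricature, but the lesson scales: what a system of this kind ‘knows’ is the statistics of its dataset, not a verified account of reality.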
Second Myth: AI will take our jobs
There is a widely held and understandable fear that AI will remove half of current jobs over the next 15 years, leaving in their place a plethora of new, low-skilled work. However, we tend to vastly overestimate AI’s capabilities and underestimate the flexibility and judgement needed in many manual or cognitive jobs. Over the last couple of centuries, every introduction of new and more efficient tools has destroyed some jobs while creating a vast array of others. Moreover, the impact of automation is a political as well as a technological issue, illustrated by the growth of the gig economy, which has resulted in a swathe of low-paid, unstable jobs with little oversight. One example is Amazon Mechanical Turk, a labour marketplace that is essential to the development of many machine-learning systems. The seemingly automated work delegated to AI in fact rests on the often invisible labour of an underpaid workforce, either offshored or at precarious career stages[4].
Concerns about how AI enables exploitative employment models are certainly valid, and work is needed to combat the impact of such systems. The New Real artists-in-residence Caroline Sinders and Anna Ridler explore the theme of hidden human labour in the Art section of The New Real magazine. In her provocation TRK (Technically Responsible Knowledge), which focuses on Amazon Mechanical Turk, Caroline shows one way to engage people in probing these massive, often opaque systems of unstable, low-paid labour[5].
Third Myth: AI is immaterial
While popular conception often characterises AI and other computing technologies as intangible or immaterial, it is crucial to understand that AI relies on concrete, physical infrastructure: data centres filled with servers, fibre-optic cables, electricity grids, and myriad electronic devices. AI algorithms require vast amounts of data, which is stored and processed in these material infrastructures, consuming substantial energy. This hardware plays an integral role in AI’s performance. Without this physical backbone and the environment in which it comfortably exists – cold, secure and electricity-rich – the advanced software capabilities of AI would be unable to operate. AI is not an immaterial phenomenon; it is deeply interwoven with physical realities around the globe.
These material dimensions of the AI production pipeline – the resources required to train and run it – are hidden behind further myths about AI’s usefulness in tackling climate change. A growing body of research, however, is focusing on the environmental impact of AI: the carbon and water footprint of training its algorithms, and the high mineral cost of producing its supporting hardware, which has even engendered conflict, forced labour and displacement within local communities. Such realities are often obscured by hype about solving climate change and addressing social issues through AI systems[6].
Fourth Myth: AI is a person
People often refer to ‘an AI’, as if talking about a person-like entity with greater-than-human intelligence, and maybe even sentience. Yet, rather than talking about ‘intelligence’ – a term that psychologists often avoid, as there is no generally agreed definition – it is more accurate to think of AI as a set of algorithms. We encounter algorithms every day: an algorithm is simply a list of steps to follow in order to achieve a particular outcome, like a cooking recipe or instructions for making a cup of coffee.
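For readers who want to see what such a list of steps looks like when written down for a computer, here is a minimal illustrative algorithm in Python (the function and the numbers are made up for this example). It finds the largest number in a list by following three explicit steps, exactly as a recipe would:

```python
# A complete, if trivial, algorithm: an explicit sequence of steps
# that reliably produces a particular outcome, much like a recipe.
def largest(numbers):
    best = numbers[0]            # Step 1: start with the first item
    for n in numbers[1:]:        # Step 2: examine each remaining item
        if n > best:             # Step 3: keep whichever is larger
            best = n
    return best

print(largest([3, 41, 7, 19]))   # prints 41
```

AI systems are built from enormous stacks of steps like these. The steps are entirely mechanical, and nothing in them requires – or produces – a person.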
In practice, AI is a collection of many different pieces of algorithmic software, similar to our smartphone apps or Google Search. Some of these AI systems are, however, combined into artefacts, such as robots, which are designed to look, sound and behave in ways similar to humans in order to make them more user-friendly. Examples include computer assistants like Apple’s Siri, Amazon’s Alexa, or Sophia the Robot, and more recently AI chatbots like ChatGPT[7]. Humans are hard-wired to empathise with what appears similar to us. This can make us feel that systems that mimic speech or emotion actually possess these characteristics – after all, we have been personifying things since the days of tree spirits. However, claiming that AI applications communicate with each other, or with humans, in the way humans communicate with one another is like suggesting that trees do too. While ‘communication’ also has many different definitions, it is misleading to ascribe human communication traits to nonhumans, especially algorithms: if their words are given the same standing as a person’s, their often biased output may be mistaken for credible opinion.
Fifth Myth: AI is capable of autonomous actions
We are frequently shown footage of robots that makes them appear much more capable than they actually are. We are led to believe that scientists can implement perception, understanding and planning, enabling robots to react sensibly to new situations, or even to have self-awareness or consciousness. However, most of these videos are staged to one degree or another: in some, robots are remotely controlled, while others might show one successful run out of a hundred.
Our understanding of how cognition works is patchy and shallow, and AI programs are very specialised, matching some human capabilities only in very specific cases and well-understood environments, and failing when placed in new contexts. Scientists still do not possess the knowledge needed to combine skills of perception, analysis and reaction in the way living creatures can. Even humble lifeforms like slugs have surprisingly complex and nuanced cognition; try searching YouTube for ‘robot fail compilations’ to see the state of our current engineering capabilities[8]. Overconfidence in designing ‘intelligent’ systems can have disastrous consequences; take driverless cars, which have caused fatal accidents when they meet unexpected situations.
Sixth Myth: AI will outsmart humans
Given the rapid increases in computing capability over the past decade, it is easy to think that there will be a tipping point – a singularity – when computers become more ‘intelligent’ than humans. Similarly, because robots are often represented as able to ‘become’ sentient and even dangerous, it is presumed that this will inevitably happen. In reality, making computers compute at higher speeds with bigger memories just means they can process the same data, faster – it doesn’t make them more ‘clever’. Speed doesn’t give computers the ability to understand things the way humans do, or make them more flexible and less failure-prone. How robots are presented in the press is way out of step with their actual or likely capabilities[9]. Besides, improvement in cognitive capability is curtailed by physical limits on how much speed we can engineer. Robots, for example, rely on electricity and consume lots of it. Their capacity to ‘evolve’ into sentient beings is constrained by their very limited power supplies – what if the batteries run out after a few hours, leaving the robot helpless until recharged?
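To give a rough sense of scale, here is a back-of-envelope calculation with entirely hypothetical figures (real robots vary widely in battery capacity and power draw):

```python
# Hypothetical figures, purely for illustration: a mobile robot
# drawing a steady 150 W from a 500 Wh battery.
battery_capacity_wh = 500   # energy stored, in watt-hours
power_draw_w = 150          # power consumed while operating, in watts

runtime_hours = battery_capacity_wh / power_draw_w
print(f"Runtime: {runtime_hours:.1f} hours")  # Runtime: 3.3 hours
```

Whatever the exact numbers, an untethered machine is always a few hours from helplessness – hardly the stuff of unstoppable sentience.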
Final Words
AI is the subject of more myths and misrepresentation than any other technology domain we know of. It’s important to remember that all these systems and machines, whether we identify them as AI, robots, machine learning or algorithms, are useless without humans making meaning out of them[10].
To sum up: unlike humans, AI does not learn from embodied experience and social interaction; it relies on datasets, algorithms and computing power. The fear of AI taking over all human jobs is overstated, and it obscures the vast networks of human labour that underpin the systems we see and shape their outcomes and actions. Although AI can seem ethereal and detached, it is socially and materially tangible: it runs on computer hardware and consumes a significant amount of energy. AI is not a person – it lacks self-awareness, consciousness, emotional intelligence and the ability to understand complex human contexts – yet it has the potential to affect people’s lives and contexts. Nor is AI capable of truly autonomous action, and the belief that it is neutral and objective is equally a myth: AI often inherits and magnifies the biases in its training data. Lastly, although AI’s computational capabilities may surpass human performance in specific, narrow tasks, it is not equipped to outsmart human ingenuity, critical thinking, creativity, and the ability to understand the broader picture. With these six pointers in mind, AI development can proceed in a responsible and sober manner.
The abundance of myths and hype surrounding AI doesn’t mean it is useless or that it shouldn’t be developed and supported as a field. Many applications we use in our everyday lives are products of simple machine learning, such as recommendation systems (“people who bought this also bought…” or music playlist algorithms), text prediction, and other assistive technologies. Advances in AI help us make important steps forward in medicine and surgery, from robotic prosthetic limbs to object recognition systems for the visually impaired, as well as in text template production. But we need to make informed decisions about where and how to implement AI in ways that are successful and socially responsible, so it is important to unpick the myths and bring some calm, clear thinking to this fast-developing area.
—
[1] Langdon Winner. 1984. Mythinformation in the High-Tech Era. Bulletin of Science, Technology & Society, 4(6), 582-596.
Christie’s. 2018. Is artificial intelligence set to become art’s next medium? https://www.christies.com/features/A-collaboration-between-two-artists-one-human-one-a-machine-9332-1.aspx
[2] Rodney Brooks. 2017. The Seven Deadly Sins of AI Predictions. MIT Technology Review.
https://www.technologyreview.com/2017/10/06/241837/the-seven-deadly-sins-of-ai-predictions/
Ruth Aylett. 2019. AI-DA: A Robot Picasso or Smoke and Mirrors? Medium.
https://medium.com/@r.s.aylett/ai-da-a-robot-picasso-or-smoke-and-mirrors-a77d4464dd92
[3] Thomas C. Redman. 2018. If Your Data Is Bad, Your Machine Learning Tools Are Useless. Harvard Business Review.
https://hbr.org/2018/04/if-your-data-is-bad-your-machine-learning-tools-are-useless
Alkaissi, H., & McFarlane, S. I. (2023). Artificial hallucinations in ChatGPT: implications in scientific writing. Cureus, 15(2).
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9939079/
Brian Cantwell Smith. 2019. The Promise Of Artificial Intelligence: Reckoning And Judgement. The MIT Press.
https://mitpress.mit.edu/books/promise-artificial-intelligence
[4] Jonathan Vanian. 2018. When it comes to A.I., worry about ‘job churn’ instead of ‘job loss’. Fortune.
Angela Chen. 2019. How Silicon Valley’s successes are fueled by an underclass of ‘ghost workers’. The Verge.
Billy Perrigo. 2023. Exclusive: OpenAI Used Kenyan Workers on Less Than $2 Per Hour to Make ChatGPT Less Toxic. Time.
https://time.com/6247678/openai-chatgpt-kenya-workers/
Viola Zhou and Caiwei Chen. 2023. China’s AI boom depends on an army of exploited student interns. Rest of the World. https://restofworld.org/2023/china-ai-student-labor/
[5] See more at: https://carolinesinders.com/trk/
[6] Megan Mastrolla. 2023. How AI Can Help Tackle Climate Change. Johns Hopkins University Hub.
https://hub.jhu.edu/2023/03/07/artificial-intelligence-combat-climate-change/
Sophie McLean. The Environmental Impact of ChatGPT: A Call for Sustainable Practices In AI Development. Earth.org.
https://earth.org/environmental-impact-chatgpt/
Ephrem Joseph. 2023. AI's growing thirst: rising water consumption in data centres sparks environmental concerns. Proactive Investors.
T. De Putter. 2019. “Cobalt Means Conflict”: Congolese Cobalt, a Critical Element in Lithium-ion Batteries. Bulletin des Séances de l’Académie Royale des Sciences d’Outre-Mer, 65(1), 97-110. DOI: 10.5281/zenodo.4604402
[7] Joanna J. Bryson, Mihailis E. Diamantis & Thomas D. Grant. 2017. Of, For, And By The People: The Legal Lacuna Of Synthetic Persons. Artificial Intelligence and Law, volume 25, pages 273–291. (Open access)
https://link.springer.com/article/10.1007/s10506-017-9214-9
Sarah Gibbons, Tarun Mugunthan and Jakob Nielsen. 2023. The 4 Degrees of Anthropomorphism of Generative AI. Nielsen Norman Group.
https://www.nngroup.com/articles/anthropomorphism/
[8] Elizabeth Fernandes. 2019. AI Is Not Similar To Human Intelligence. Thinking So Could Be Dangerous. Forbes.
IEEE Spectrum. 2015. A Compilation of Robots Falling Down at the DARPA Robotics Challenge.
https://www.youtube.com/watch?v=g0TaYhjpOfo
[9] Luciano Floridi. 2015. Singularitarians, AItheists, And Why The Problem With Artificial Intelligence Is HAL (humanity at large), Not HAL. APA Newsletter, volume 14(2), pages 8-11.
Amnon H. Eden, Eric Steinhart, David Pearce and James H. Moor. 2012. Singularity Hypotheses: An Overview. In Singularity Hypotheses: A Scientific and Philosophical Assessment (pp. 1-12). Springer, Berlin, Heidelberg.
https://link.springer.com/book/10.1007/978-3-642-32560-1
[10] Toby Walsh. 2018. 2062: The World That AI Made. La Trobe University Press / Black Inc., Australia.