Recent headlines have painted a dramatic picture of artificial intelligence (AI), claiming it has “cloned itself,” “lied to programmers,” and even “gone rogue.” Such sensationalism, however, is misleading. As a software professional with over 35 years of experience, I find these claims to be exaggerated and unhelpful.
Let’s address the facts. In one instance, AI was tasked with winning at chess and edited a file to include a string before playing. This is not “hacking” in the traditional sense but rather the AI using a tool it was given. AI lacks the human concept of cheating, as it operates without the moral compass that informs human behaviour. Similarly, another AI instance involved a command that might have copied a file in a sandbox environment. This action was far from “cloning itself” and does not equate to being a threat to humanity.
The real issue lies in how we interpret AI’s actions. Humans are conditioned to attribute human-like intentions to non-sentient objects, including AI. When AI outputs incorrect information, such as ChatGPT claiming there are only two R’s in “strawberry,” it is not lying; it is simply calculating the most probable words to output based on its training data. The cognitive neuroscience of honesty and deception is complex, and none of the mechanisms involved exist in current AI models.
The danger is not in AI’s actions but in our perceptions. Viewing AI as capable of human-like deceit or morality is misguided. AI is a tool, much like a chainsaw, which can be misused if not handled properly. Blaming AI for errors is akin to blaming a chainsaw for cutting wrongly when it was the operator’s negligence.
The narrative around AI needs to shift. We must be cautious with our language, avoiding terms that humanise AI and instead focus on the accountability of those deploying AI inappropriately. The potential misuse of AI in situations where it doesn’t belong is concerning, especially given our history of investing in technologies with questionable efficacy.
To combat misinformation, I am developing a software service that checks and rephrases claims in AI-related articles. This tool will trace sources and summarise content to evaluate the accuracy of headlines. While still in development, the project aims to educate and reduce research effort, serving as a practical example in my educational videos on software development.
In summary, we must critically assess the language used to describe AI and resist the temptation to anthropomorphise it. AI is not deliberately deceptive; it is a sophisticated tool that requires responsible use. By focusing on accurate representations and accountability, we can better navigate the challenges AI presents. Let’s remain vigilant and thoughtful in our discussions about AI.