The recent release of ChatGPT-4o has sparked considerable discussion and hype: some predict the advent of human-level AI for the price of a luxury car, while others dismiss it as a passing fad akin to NFTs. The truth likely lies somewhere in between, but the impact on the job market is already visible. Companies like BP report employing fewer programmers because of AI, although the long-term effects remain uncertain.
Much of the excitement around ChatGPT-4o centres on its voice interface. Yet this is not a groundbreaking development; similar functionality has existed for some time. ChatGPT-4o is faster and more convenient, but its accuracy remains inconsistent: benchmark results are mixed, with improvements on some tasks and regressions on others. This mixed evidence makes it difficult to predict the trajectory of AI with any certainty.
One important factor is the human tendency to attribute intelligence and emotion to non-human entities, a phenomenon known as the “Eliza Effect.” Features like synthetic voices and cute behaviours exacerbate this effect, making people more inclined to believe that AI systems are sentient. Researchers have described such features as “dark patterns”: design choices that lead users to overestimate what these systems can actually do.
Another significant issue is the prevalence of deception in AI demos. Companies including Tesla, Google, Amazon, and even OpenAI have been caught exaggerating or outright misrepresenting the capabilities of their AI systems. Tesla staged a self-driving demo, for example, and Google’s Gemini demo video was later found to be misleadingly edited. Such instances of “AI washing” have bred a general mistrust of these companies’ claims.
Despite the hype, there is no clear evidence that we are close to achieving human-level AI. Many claims about AI’s capabilities are driven by financial incentives and have repeatedly proven false. Journalists and other commentators often amplify these misleading narratives, making it harder for the public to discern the truth.
In conclusion, while systems like ChatGPT-4o show promise, their actual capabilities remain limited and inconsistent. Ongoing deception by companies, together with the human tendency to overestimate AI’s abilities, further complicates the picture. As we navigate this uncertain landscape, it is crucial to rely on evidence and remain sceptical of grandiose claims, and to be cautious in our expectations and decisions.