When I look at the current state of the tech industry, I can’t help but chuckle at the collective realisation many are having about issues I saw coming nearly a decade ago. The same reasons that drove me to quit tech, to abandon my agency, and to seek new paths in law and psychology are now glaringly obvious to everyone else. It’s a bit of a “told you so” moment, but I’m resisting the urge to gloat. Instead, I want to focus on what we can do moving forward.
The tech industry’s trajectory has been clear for some time. The avalanche of data misuse, the rise of creepy-as-hell social-scoring tools, and the transformation of user experience design into little more than manipulative nudges are just a few examples of the ethical quagmire we’ve found ourselves in. Watching data science crowd out what should have been human-centred design has been disheartening. It’s like watching a car crash in slow motion, knowing you warned everyone about the impending disaster.
The reckoning we’re seeing now, with AI and its implications, is a direct result of years of unregulated tech practices. We should have cracked down on these issues long ago. The absurd salaries and easy money in tech were always a red flag. It felt like blood money, and I knew there would be a cost. The industry’s current state is a testament to that.
AI is not new. It’s been lurking in the background, influencing content moderation, and subtly nudging us in various directions. The Overton window has become so narrow that any deviation from corporate-speak is punished. This has been happening for years, and it’s only now that people are waking up to it because it’s affecting their art, their words, their jobs. Welcome to the party, folks.
The AI we’re dealing with today, specifically large language models, amounts to sophisticated autocorrect. The underlying techniques have been around for decades, limited only by available computing power. The real breakthrough has been in hardware, not in the ideas themselves. Yet the hype around AI has led to a frenzy, with companies like Microsoft and OpenAI pushing these technologies without adequate safeguards.
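The “sophisticated autocorrect” framing can be made concrete with a toy next-word predictor. To be clear, this is nothing like a production LLM (those use neural networks over learned token representations, not raw word counts); it is only a minimal sketch of the shared core task: predict the most likely continuation from observed text. The corpus and function names here are purely illustrative.

```python
from collections import Counter, defaultdict

def train_bigrams(text):
    """Count, for each word, which words follow it in the corpus."""
    words = text.lower().split()
    following = defaultdict(Counter)
    for prev, nxt in zip(words, words[1:]):
        following[prev][nxt] += 1
    return following

def predict_next(following, word):
    """Return the word most often seen after `word`, or None."""
    counts = following.get(word.lower())
    if not counts:
        return None
    return counts.most_common(1)[0][0]

corpus = "the cat sat on the mat and the cat slept"
model = train_bigrams(corpus)
print(predict_next(model, "the"))  # prints: cat
```

Scale the counting up by many orders of magnitude, swap the counts for a neural network, and you have the family resemblance: a machine that continues text plausibly, with no understanding attached.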
This brings us to the issue of platform decay. AI systems are learning from AI-generated content, leading to a cycle of inbreeding where the quality of data degrades over time. Google’s decision to train AI on Google Books, which scrapes from Amazon, including AI-generated books, is a prime example of this decay. It’s a mess, and it’s only going to get worse as companies start poisoning each other’s data sets in a bid for dominance.
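The degradation loop is easy to simulate. One hedged assumption drives this sketch: a model retrained on its own output over-represents its most probable patterns (modelled here by sharpening counts with an exponent `alpha > 1`), so rare patterns eventually truncate to zero examples and are lost for good. The numbers and the `alpha` parameter are illustrative, not drawn from any real system.

```python
def generation(counts, size=100, alpha=1.5):
    """One round of a model retraining on its own generated output.

    Assumption (illustrative): generated data over-samples the model's
    most probable patterns, modelled by raising counts to alpha > 1.
    Patterns that round down to zero examples vanish permanently.
    """
    weights = {t: c ** alpha for t, c in counts.items()}
    total = sum(weights.values())
    regenerated = {t: int(w / total * size) for t, w in weights.items()}
    return {t: n for t, n in regenerated.items() if n > 0}

# Four patterns of varying frequency in the original human-written data.
data = {"common": 50, "frequent": 30, "uncommon": 15, "rare": 5}
for _ in range(5):
    data = generation(data)
print(data)  # → {'common': 98, 'frequent': 1}
```

Five generations in, the tail is gone and one pattern dominates. That is the inbreeding problem in miniature: the variety that made the original data worth learning from is exactly what a self-consuming training loop destroys first.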
The tech industry’s reliance on AI is a cover for deeper issues. Companies are laying off workers, not just because of AI, but because they’re spooked by the impending regulations and the unsustainable nature of their business models. The consulting firms promising savings through AI implementations are setting these companies up for catastrophic failures. It’s a bubble, and it will burst.
In the face of this, I urge those still in the industry to reflect on why they got into tech in the first place. If you’re in a position of power, use it to push back against unethical practices. HR professionals, for instance, can choose not to use automated rejection tools and instead offer empathy and human connection. Small actions can make a big difference.
Ultimately, we need to regulate tech more strictly. The law is catching up, and the days of unregulated data misuse are numbered. It’s time for the industry to face the music and for those within it to make ethical choices. Welcome to the new reality. It’s time to make things right.