Astroturfing and the white-anting of the foundations of the Internet.

Bully bots, troll farms and how they manipulate us all

I have always said that fans and communities built around fans are the scaffolding of the Internet – particularly social media. Connecting with fans, talking to fans, listening to fans and getting feedback are fundamental to what makes the Internet valuable for users, creators and corporations alike. There was an opportunity with social in the early days for governments and brands to embrace this concept, take the feedback on the chin and do better.

That was what I was sold, anyway.

In 2010, I gave a talk about Twitter to the Women Parliamentarians conference called “I Tweet and I Vote”. The crux of the talk was to sell every female MP in the country on the value of social media, the opportunities to listen to the electorate and be, well, a better representative. I also started to consult with brands at that time, in that brief window before the advertising agencies decided they were digital agencies and squeezed out, gobbled up and started talking over (and, in my case, stealing from) the digital specialists.

One of my catch-cries at the time was “nobody wants to talk to your logo”. It was a brief window where people were excited about, and open to, the possibilities, and willing to listen. That very quickly faded, though, and it all fell away when the people the internet was supposed to make uncomfortable and disrupt became pretty unhappy with the feedback they were getting.

Greater access to information has always been a disruptive – you could even say revolutionary – development. The printing press and pamphleteers, radio, telephone, television, the Internet and Open Source code, and now social media – they all follow an uncannily and devastatingly similar pattern: new technology emerges, we have some hope of a voice and for things to change, the powerful pay lip service to said disruption and change, it becomes a threat to the powerful, and so the powerful use their unlimited resources to convince you that this time it will be different, we swear, and ultimately take the revolutionary teeth out of it. Rinse and repeat.

The internet has followed roughly the same pattern: the great unwashed being given the capability to influence large numbers of other great unwashed people quickly… and that ain’t gunna fly, proles. Here, have a 3 cent cut for every $10 we make on your content. If, of course, you say the words we want you to say.

It’s great.

Unfortunately, our leaders are now (albeit precariously) pretty much locked into this whole illusion-of-liberal-democracy problem, and those pesky proles might start to call bullshit… and then there are all those really annoying laws and rights and regulations about election funding and the unethical and harmful products we want to sell people… what a pickle for the powerful, with all that feedback and information sharing and potential revolution and ethics and laws and stuff…

…enter the troll farm.

Not sure if you’re aware, but the vast majority of what you see online has been carefully curated for you by an algorithm working in the background – often well-meaning (to avoid beheadings and snuff being livestreamed 24/7), but really vulnerable to manipulation.

I would argue, in 2022, that most social media is no longer organic. I generally assume that if I am seeing a piece of content, it is because someone, somewhere wants me to see it and act a certain way (in marketing, that’s called a call-to-action, or CTA – everything on the internet has one; what is it asking you to do?).

You can pay to get your YouTube videos into the “Up Next”. You can pay for boosts on Facebook. Everything you see – declared or not – from an influencer who manages to make a living either has some product or idea or ‘thing they want users to do’ behind it… something that benefits some third party. That’s the business. That’s how it rolls. There are, of course, both ethical and unethical ways to do this.

Think of it as a spectrum from white to black, with many shades of grey in between. White hat to black hat. Good marketers use ‘white hat’. Unethical marketers use ‘black hat’. And the shades between often come down to the case at hand (and the ability to rationalise, and intent).

The manipulation of algorithms using black hat tactics – while tech companies and governments turn a blind eye (and occasionally even collude themselves) – has made it easy for anyone with enough money, or enough desire for fame, and no conscience to sway public opinion in any direction they desire. Genuine grassroots campaigns are relics of an organic past.

Organic is, for the most part, dead.

Unscrupulous entities like troll farms have exploited every aspect of social media that made it great: access for everyone, anonymity for everyone, a voice for everyone. Bad actors, however, use those very features as weapons – maliciously steering conversations, intimidating opponents, and even coordinating en masse to de-platform entire communities in order to sway opinions. The mechanics of bot armies and troll farms are far more sophisticated and pervasive than you’d think, and they’re used by pretty much everyone who wants to persuade you of something – from your friendly neighbourhood ISIS recruiter, to your local paper, right up to the Pentagon. The problem is so pervasive that it is easier to list who doesn’t use these tactics than who does.

Whoever controls the media, controls the mind

Jim Morrison

What is a troll farm?

A troll farm is like any other digital agency, social media management firm, or PR firm, really. They just have no ethics (and some even believe they’re fighting a bigger enemy or evil, and feel righteous in engaging in these practices – righteousness is a helluva drug). Where someone like me would turn clients like that away (and I am much poorer for it… alas… become a Patron or donate), others build their business specifically for that purpose.

Note: “Troll farm” is a catch-all term for many different things, but think of it as a business that uses a set of unethical (‘black hat’) tactics to manipulate social media conversations. Again, remember… intent.

Troll farms have varying degrees of skill, but the good ones are difficult to detect, even for a professional, and especially for an easily-gamed algorithm such as Twitter’s. These organisations are just businesses, taking a retainer like any other agency, sometimes even playing both sides of a debate (I would be so rich if I had no ethics… seriously… oh em gee). They are usually well-organised, extremely well funded, and run either by individuals with a political or social agenda, or on their behalf.

It’s just a job like any other. They turn up, they fuck some shit up, they go home to their families. It’s just Public Relations and social media management. Run some bots here, buy Likes there. Harass someone to suicide. You know… the usual work day.

Digital agencies currently fall through the cracks with regard to political fundraising. As they are privately run companies, they do not need to open their books for public scrutiny. They can simply have a number of Not-for-Profits as clients, hooked into a shared CRM such as ActBlue or WinRed, where those organisations can pool funds, gather data across all the organisations, and even share it with other external providers (I mean, you signed the Terms of Service, right?), and bam.

Money laundered, not accounted for, and everyone’s data being used to target you for fundraising, which is then funnelled into the agency to use on ad spending for Cats Against Trump merchandise and Jen Psaki socks and influencer campaigns and stealth Patreon donations to avoid detection.

With or without the content creator’s knowledge. Cool, huh. Creepy as fuck. That’s tech for ya.

If you are active on Twitter, you have definitely interacted with a troll farm. If you are very active on Twitter and don’t know how to spot them, you are probably inadvertently sharing propaganda from a troll farm. If you are a journalist or public figure on Twitter, you are most definitely being aggressively targeted by troll farms or bots trying to harass you into reporting a certain way, supporting a cause, or backing down on a position.

That’s what they do.

Astroturf.

And, as AI becomes more accessible, and writing scripts that can target specific accounts for any reason using their own data as a weapon becomes easier (I have had these bots on me… they’re kinda cool, but creepy as fuck)… well…

What is astroturfing?

Astroturfing is the practice of creating fake grassroots campaigns or online communities with the intention of promoting a particular product, service, or idea. The term derives from AstroTurf, the brand name of a type of artificial grass – the “grassroots” campaign or community is fake and artificial rather than authentic. This unethical, ‘black hat’ tactic involves using a combination of fake social media accounts, influencers, paid reviews, and other forms of online propaganda to manipulate public opinion and create the illusion of widespread support for a particular cause or viewpoint.

If you have been called a bigot, fascist or TERF by a cartoon avatar, then sat perplexed at how really dumb tweets are getting thousands of Likes while good-faith thinkers are being harassed and mass-reported, you have likely witnessed astroturfing in action. Remember: it is difficult to detect 1) because it is designed to appear organic and spontaneous, and 2) because you just don’t know who is on the take and who is genuinely stupid and saying whatever will stick to get famous. That’s the content-factory, never-ending Idol audition we live in.

Those engaged in these practices (whether agency or in-house) use a combination of tools: from bots (which are more obvious) for mass-reporting and harassing low-level targets, to MLM-like engagement pods where wannabe influencers are invited to “promote each other’s tweets” – and then end up in too deep, at the mercy of the exact same mob if they break ranks. I watched this play out most recently with Ana Kasparian of TYT. Watch that space, folks. The TYT Army turned on the TYT Army. Like what…

Researchers and marketers who rely on Twitter data to make decisions are none the wiser. You just gotta get your opponents banned and harass the rest into silence. Any absurd idea or falsehood that nobody offline thinks is popular can become “true”, backed by a dimwitted rich kid who is only in college because their Dad is paying for it and who wants to talk about coded heteronormativity or whatever the fuck in a children’s book, like they do on Tumblr. You know, social science. Studies. Backed by research. With a weird grant from an NGO that seems totally legit. Sound familiar?

Troll farms use a variety of tactics to influence conversations and manipulate public opinion: bots, fake accounts, automated tweets, and targeted advertising aimed at specific audiences. They can generate a large amount of traffic and activity, making it difficult for users to distinguish real information from false. They spread falsehoods quickly and easily, and manipulate conversations – often in subtle ways – using nudges.
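
To make the mechanics a little more concrete, here’s a minimal sketch – the accounts, timestamps, text and thresholds are all made up – of the kind of crude heuristic researchers use to flag coordinated inauthentic behaviour: multiple accounts pushing near-identical text within seconds of each other.

```python
from collections import defaultdict
from datetime import datetime

# Hypothetical records; in practice these would come from a platform API or a scrape.
posts = [
    {"account": "patriot_eagle_1994", "created": "2022-06-01T12:00:03",
     "text": "We all stand with Candidate X! #XForSenate"},
    {"account": "freedom_mom_22", "created": "2022-06-01T12:00:05",
     "text": "We all stand with Candidate X!! #XForSenate"},
    {"account": "jane_doe", "created": "2022-06-01T14:32:00",
     "text": "Interesting debate last night, not sure who won."},
    {"account": "liberty_dad_76", "created": "2022-06-01T12:00:09",
     "text": "we all stand with candidate x #xforsenate"},
]

def normalise(text: str) -> str:
    """Lower-case and strip punctuation so near-duplicates collide."""
    cleaned = "".join(c for c in text.lower() if c.isalnum() or c.isspace())
    return " ".join(cleaned.split())

# Bucket posts by normalised text, then flag buckets where several distinct
# accounts posted the same message inside a short window.
by_text = defaultdict(list)
for post in posts:
    by_text[normalise(post["text"])].append(post)

for text, group in by_text.items():
    accounts = {p["account"] for p in group}
    if len(accounts) < 2:
        continue
    times = sorted(datetime.fromisoformat(p["created"]) for p in group)
    window = (times[-1] - times[0]).total_seconds()
    if window <= 60:  # hypothetical threshold: same message, many accounts, one minute
        print(f"Possible coordination ({len(accounts)} accounts, {window:.0f}s): {text!r}")
```

Real detection pipelines layer dozens of signals on top – account age, follower ratios, posting cadence, shared link targets – but even this toy version catches the laziest farms, which is exactly why the good ones stagger their timing and vary their wording.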

Nudges are discreet prompts or indicators intended to shape the actions of influential individuals and ordinary users alike, using various rewards and punishments. Say the right thing: here’s a Patron, or 1,000 views, or 1,000 Likes. Say the wrong thing: lose a bunch of followers, or get mass-reported so the algorithm kills your reach and you spend years thinking your friends and family hate you and that you did something wrong. It’s easy to do. Even if influencers won’t come to the party, or insist on disclosing their paid endorsements, you can theoretically reward or punish any content creator on any platform with love, praise, cash, shuns and bans. People with unlimited money can do anything they want if a content creator is desperate enough – for money, or clout, or both.

Despite the cries and side-eyes over the use of the word “disinformation”, it is a real problem. The trouble is that everyone is accusing everyone else of doing it, when everyone is. This is why I decided years ago to focus less on the politics and more on a) educating people, b) focusing on the tactics and pushing to attach harsh penalties to those who are caught, and c) making any digital agency with political parties or NFPs on its books subject to the same reporting requirements as the NFPs themselves. Disinformation networks are the real threat to democracy, the authenticity of online discourse, research data and the democratic process.

To safeguard users from deceitful information and manipulation, it is crucial for social media platforms and governments to detect and dismantle such networks, and to open agencies up to further scrutiny. Facebook’s Ad Library has worked well in some ways to combat this; however, what counts as ‘political advertising’ is getting harder to detect, because now, adaptive fuckers that crooks are, they will run ads that aren’t political, or that just sell merch (see: Cats Against Trump – one of several thousand Pages that sell merchandise in an online store, spend approximately $500-1000 a month on Facebook ads, and all lead back to the same digital agency), and then take you off-site or get you via email – outside the view of Facebook or any other public scrutiny.
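
If you want to poke at this yourself, the Ad Library is queryable via the Graph API’s ads_archive endpoint. A minimal sketch, assuming you’ve been through the identity verification needed for an Ad Library API token – the token and search term here are placeholders, the API version changes over time, and field names should be checked against the current docs:

```python
import requests

ACCESS_TOKEN = "YOUR_AD_LIBRARY_TOKEN"  # placeholder: requires verified Ad Library access

# Illustrative query: find political/issue ads matching a phrase, then eyeball
# which Pages are running small, recurring spends on near-identical creative.
resp = requests.get(
    "https://graph.facebook.com/v18.0/ads_archive",
    params={
        "search_terms": "Cats Against Trump",
        "ad_type": "POLITICAL_AND_ISSUE_ADS",
        "ad_reached_countries": '["US"]',
        "fields": "page_id,page_name,spend,impressions,ad_delivery_start_time",
        "access_token": ACCESS_TOKEN,
    },
    timeout=30,
)
resp.raise_for_status()

# Spend comes back as a range; lots of Pages all sitting in the same low band
# with the same creative is the pattern described above.
for ad in resp.json().get("data", []):
    spend = ad.get("spend", {})
    print(ad.get("page_name"), spend.get("lower_bound"), "-", spend.get("upper_bound"))
```

The catch, as noted above: ads that are never classified as political won’t show up under POLITICAL_AND_ISSUE_ADS at all – which is precisely the loophole the merch Pages exploit.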

Fact-checking is a crucial step for both influencers and users. It is imperative to check the authenticity of any information before sharing it to avoid spreading misinformation. It is also essential to introspect and understand your own motives behind sharing or withholding information, and your own vulnerability to audience capture. If you think you are susceptible to being influenced for personal gain, you must remain extra vigilant and conscious of your vulnerability.
