Fake news, fake consensus, and the trolls who publish them. Are they winning the internet?

Online misinformation campaigns, fake consensus, and dark collaboratives.

By Rome Viharo for Wikipedia, We Have a Problem.

“Before the internet: most people lived in an information desert. After: people live in an information flood and the water’s unsafe to drink.” (Reddit, Shower Thoughts)

A recent Pew report details the sad state of online trolling, all but conceding victory to the trolls. Just as recently, Fortune gives us the same bleak review, Twitter concedes, and even the New York Post echoes the call.

I’ve been fortunate, as a developer of collaborative technology, to track these digital behaviors for three years now. This study has recorded how they emerge, what the tactics are, and, more unnerving, how easy they are to pull off.

Anyone who searches my name on Google can see how I’ve been targeted in these campaigns, and can see for themselves what these dark collaboratives are capable of doing with little to no resources and no sophisticated organization.

The recent events surrounding the Candice Lynn Potter harassment and blackmail website, partially directed at this publication, help expose how easy it is for anyone to manipulate information and weaponize the internet to target just about anyone, for any purpose, even crazy ones.

Russian trolls and sock puppets. Misinformation and Election 2016.

Recent news events, including the Senate Intelligence hearings on Russian influence, show that Russia’s FSB employed a force of one thousand people, each with the capability of one ‘Oliver Smith’.

What I’ve experienced as one individual targeted by another is a microcosm of what an ‘army’ of information agents could easily accomplish.

Trump won. Did the Russian trolls make a difference?

Did the trolls tip the election? I imagine that is a difficult, if not impossible, thing to measure. Historically, influence and persuasion campaigns work, and work very effectively; that’s why billions are spent every year on media strategists, PR, advertising, and marketing.

But that also means influence and persuasion campaigns can fail. In the 2016 election, it just so happens that one persuasion campaign worked better than the other. So what role did the “trolls” play?

The public never likes to think that we as individuals have been influenced by manipulation. Many are quick to dismiss this influence at the level of the election, primarily because each vote was an individual choice. Naturally, no voter thinks their behavior was leveraged by the Russians. But then, most people don’t think they buy snack products because a commercial told them to, either. That’s not how persuasion works.

When people say these campaigns have little to no effect, show them the Google search results for my name before and after I was targeted in a misinformation campaign.

It is easy to have an effect. It requires no budget, just time and a decent knowledge of Google search ranking, Wikipedia, and other Wikimedia platforms.

Creating misinformation campaigns is highly effective. An army of one thousand individuals conducting these campaigns around a central theme is truly something the world needs to be concerned with, but I believe we also need to be concerned about the tens of thousands of other misinformation trolls darkly collaborating across the internet.

A dominion of deception now coexists within the architecture of the internet.

Key to the Russian strategy was, of course, the peddling of deception, misinformation, and fake consensus building in the form of fake news stories spread via Twitter and Facebook. Fake news stories are a by-product of advertiser clickbait, which makes fake news an indirect result of a highly vulnerable and flawed ad tech infrastructure, one carrying by some estimates as much as $16B a year in fraud. And that figure measures only faked “audience”, not the monetized fake news stories themselves, which exploit how easily human nature is swayed by our own biases.

Yet this exploited audience is not a passive one. It is active. A significant portion of this electorate also adopted these “trolling” behaviors, expanding the misinformation and the targeting of online users across the web in a supercharged dark collaborative.

Trolling went mainstream, from /pol/ to Facebook, and even the Clinton campaign took the bait.

We don’t need an army of Russian trolls to have this happen.

It would have happened anyway.

Misinformation campaigns and dark collaboratives happen organically, with no resources.

Trolling and trolling culture are an epiphenomenon of normal internet activity between users.

The trolls are winning the internet, and have captured my narrative. #metoo

And I want my narrative back.

But according to Adrienne LaFrance, a writer for The Atlantic, it’s not likely I’ll be successful.

Adrienne LaFrance offers her spin on the Pew report on online trolls.

The uncomfortable truth is that humans like trolling. It’s easy for people to stay anonymous while they harass, pester, and bully other people online—and it’s hard for platforms to design systems to stop them. Hard for two reasons: One, because of the “ever-expanding scale of internet discourse and its accelerating complexity,” as Pew puts it. And, two, because technology companies seem to have little incentive to solve this problem for people.

“The trolls are winning” is the conclusion; the wild west has been won by the wild.

According to the outcome of this study, trolls will continue to harass, dox, and target individuals online through various masks in social networks, and that, the report asserts, is something we will have to come to accept.

No current known, scalable solution to trolls, misinformation, or wiki wars.

Right now, most if not all architecture on the web, from Google search ranking to social networks, wikis, and forums, assumes a ‘good faith’ collaboration with users, based on a set of Terms of Service.

The architecture was designed not for misinformation or harassment, but for the quick and clean exchange of information and social engagement at scale.

What has happened, however, is that the architecture of the web, from Google search to wikis, from Twitter to ad technology, is easy to game, and in the past ten years everyone has gotten ‘smarter’ about social media in general. The web is fully integrated with our dysfunction and darkness just as much as it is with our ideals.

Is being anonymous the problem, or is deception the problem?

What makes web architecture easy to game is the ability it gives users to practice deception at scale.

Practicing deception is an abuse of the right to privacy and anonymity. Yet being honest on the internet is not a Terms of Service requirement; it is something self-regulated by people, not computer networks.

Adrienne LaFrance believes that it is human nature to troll. I agree with her. I’m just not sure that focusing on ‘people’ and ‘accounts’ as the problem here provides us with much clarity.

That’s the funny thing about anonymity online: just as it makes it easier for people to troll, it also makes it easier for people to be honest.

While the Pew report focuses on the creation of fake accounts and sock puppets, this is really not the problem.

No one would care if millions of sock puppets were communicating honestly, building credible and trusted consensus, and respecting common human dignity. The problem is the actual information, and the narratives, that sock puppets form through deception.

No one would care if the problem of trolls only referred to Stephen Colbert or Sacha Baron Cohen.

People are not the problem with online trolling. The platforms we are using are.

Right now, the only solution to a ‘people problem’ is human intervention, which requires more people, not a programmatic script. Human intervention, unlike the web itself, is not easy to scale at all. All a program can do is ban accounts. Even the latest attempt, Jigsaw’s AI that can alert admins to abusive comments, is a painful non-solution, even easier to game than the admins themselves, handing abusers new ways to ‘flag’ users on the platform.
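For context, Jigsaw’s system is exposed to developers as its Perspective API, which returns a per-comment toxicity score. Here is a minimal sketch of how such a score gets wired into moderation, assuming the Python requests library and a placeholder API key; the threshold value is purely illustrative:

```python
# Minimal sketch: scoring a comment with Jigsaw's Perspective API and
# flagging it for admin review. API_KEY is a hypothetical placeholder.
import requests

API_KEY = "YOUR_API_KEY"
URL = ("https://commentanalyzer.googleapis.com/v1alpha1"
       "/comments:analyze?key=" + API_KEY)

def toxicity_score(comment_text):
    """Return a 0..1 toxicity estimate for a single comment."""
    payload = {
        "comment": {"text": comment_text},
        "requestedAttributes": {"TOXICITY": {}},
    }
    response = requests.post(URL, json=payload)
    response.raise_for_status()
    body = response.json()
    return body["attributeScores"]["TOXICITY"]["summaryScore"]["value"]

# The gaming problem in a nutshell: a fixed threshold can be probed and
# evaded by rephrasing, while coordinated bad-faith flagging of honest
# users is invisible to any per-comment score.
if toxicity_score("You are a sock puppet and a liar.") > 0.8:
    print("flag for admin review")
```

A per-comment threshold like this illustrates the point above: it can only flag or ban, nothing more, and determined users quickly learn exactly where the line sits.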

Facebook is spending $14 million on partnerships with Mozilla and Wikipedia to, somehow, identify a way to ensure certain news is “real” as opposed to other news in this battle against misinformation.

Additionally, the American Democracy Project, in conjunction with a few universities, is building its own walled garden: a new-style “Wikipedia for fact checking” to crowdsource the sorting of ‘true’ claims from ‘false’ claims found in the news and public discourse. Students using this new software already complain, saying “the tech for public wikis just feel years behind the times.”

The trolls are not winning; they are just controlling the discussion.

I don’t mind if the trolls win; I just don’t think everyone else needs to lose.

I believe the worst of our voices online are able to control the flow of information. Rational voices become suppressed, or are intimidated out of participating, while the most irrational and obnoxious voices begin to control and influence the discussion, gaming Google search and social media.

I agree with Adrienne LaFrance that there really is no current solution out there: no reliable proposal from Google, and only limited attempts by Facebook or Twitter to effectively control “trolls”.

Trolls are people, and people don’t like being controlled. Additionally, people are “smart”; we like to game systems.

I disagree, however, that there is no long-term solution to this problem. There is.

I predict I will eventually recapture my narrative.

In a proper environment, we are collectively smarter when we collaborate than we are individually.

Case study for a solutions architecture: aiki.wiki

This website is a case study for identifying behaviors that need to be resolved in an online collaborative. I’ve been designing this platform for many years.

aiki.wiki’s architecture for social engagement and discussion prevents these manipulative or deceptive voices from gaining power in an online consensus.

It does this not by censoring or blocking them, but by funneling them into a process where these deceptions are easily exposed through consensus building.

This is designed with the understanding that architecture must embrace both the best and the worst of our behaviors, and use those forces to funnel solutions.
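As a purely hypothetical illustration, and not aiki.wiki’s actual implementation, the funneling idea might be sketched like this: contested claims are never deleted and their authors never banned; instead, every claim is routed through the same evidence-and-endorsement gate, so a deceptive participant’s only path to influence runs through public review. All names and structures below are invented for the example:

```python
# Hypothetical sketch of a "consensus funnel". The names and structure
# are illustrative only, not aiki.wiki's actual design.
from dataclasses import dataclass, field

@dataclass
class Claim:
    author: str
    text: str
    evidence: list = field(default_factory=list)   # citations anyone can attach
    endorsements: set = field(default_factory=set) # reviewers who accept it

def funnel(claim, reviewers, quorum=0.66):
    """Route a contested claim through consensus instead of moderation.

    `reviewers` maps a participant's name to a function that inspects
    the claim's evidence and returns True to endorse it. Deceptive
    claims are not censored; they simply stall here, visibly
    unsupported.
    """
    for name, review in reviewers.items():
        if review(claim.evidence):
            claim.endorsements.add(name)
    if len(claim.endorsements) / len(reviewers) >= quorum:
        return "accepted into consensus"
    return "held in review: visible, but unendorsed"

# Example: a claim with no supporting citations stalls rather than
# being deleted -- the deception is exposed, not hidden.
claim = Claim(author="sock_puppet_42", text="X is a criminal")
reviewers = {f"user{i}": (lambda ev: len(ev) > 0) for i in range(5)}
print(funnel(claim, reviewers))  # -> held in review: visible, but unendorsed
```

The design choice this sketch tries to capture is that the system never has to decide who is a troll, only whether a claim has earned endorsement, which is what makes it harder to game than account-level bans.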

I do believe that the architecture for the web and all of it’s social platforms has been based on naive assumptions about human psychology. The early designers perhaps failed to look at the impact and reward of weaponizing the internet for misinformation.

Only through hard-won, honest collaboration on the web can we build a solution.

The way to battle a fake consensus is to build a true, honest, and collaborative one.
