Wiki wars, defined somewhat.
War, like the man said, is hell. A wiki war is perhaps no different.
Wiki wars are conflicts between individuals or groups fighting for control of narratives published on the internet, primarily on Wikipedia.
“Battles to the death for insanely low stakes,” as put by veteran Wikipedia editor and RationalWiki brain trustee David Gerard.
A wiki war occurs across a digital, contextual, and psychological battlefield: a landscape of MediaWiki platforms, Google search, subreddits, and WordPress blogs that become weaponized to suppress edits by groups of online users or to intimidate and harass other users on the web.
Even bots are used in wiki wars, with counter-bots created to battle them in return.
Wiki wars are very complex; a heated event can demand 8-to-16-hour full-time days of arguments, consensus building, research, and three-dimensional chess strategies between admins and editors gaming the process.
Wiki wars often begin with a single event: an ‘edit war’ that breaks out on Wikipedia between the editors of an article.
An edit war turns into a wiki war when editors stop working through disagreements openly and instead, on the sly, employ tactics of editor suppression to remove dissenting editors from the article.
One aggressive move to remove an editor from an article can trigger hundreds of defensive, then retaliatory, responses, igniting communities around the web into a much larger event that can play out for years. (See GamerGate, from which emerged the alt-right, and which, one could argue, eventually helped win the Trump presidency through digital persuasion, trolling, targeting, meme generation, and likely unintentional interaction with Russian or other foreign agents riding the wave.)
Wiki wars are far more nefarious than just a bunch of nerds arguing over Oxford commas.
Wiki wars involve more sophisticated manipulations for online misinformation and harassment than their social media “sisters” such as Facebook and Twitter, which do not provide the contextual completeness that a MediaWiki site can offer.
In my case study on wiki wars, one group of Wikipedia trolls applied methods just as sophisticated as those used by foreign agents in the 2016 election, and in many cases just as effective. I believe that by studying these types of events, especially some of the “trolls” encountered, we can develop highly effective tools for building trusted online collaboratives.
Wiki wars emerge from dark collaboratives: groups of users comfortable practicing deception on the internet in any form, who have evolved around the design sensibilities of MediaWiki software and learned to play off Google’s PageRank algorithm.
Therefore, while Wikipedia is often the center, wiki wars spread beyond Wikipedia to other MediaWiki-based sites such as RationalWiki, Encyclopedia Dramatica, Metapedia, Conservapedia, RightPedia, KiwiFarms, the newly emerging InfoGalactic, and a host of other wikis.
Wiki wars can easily be initiated by any interest-based group or individual, for any reason: political, polemic, persuasive, or, in some instances, no reason at all.
The winner in the wiki war controls the broadcast signal around a specific narrative.
The encyclopedic voice of Wikipedia confers credibility, notability, and reliability on searchable information. This is the grand prize for controlling a narrative: walking into global credibility and influence with perfectly targeted discoverability, reaching exactly the people who are searching a particular topic, with Wikipedia a highly likely “first stop” in the discovery of any subject. It is the most valuable real estate on the web: Google search. A zero-sum game that, in my experience, attracts a peculiar psychological type, quite possibly sociopathic in nature: bullies, stalkers, and obsessives.
Wikipedia’s own guidelines encourage us to trust the narrative voice on Wikipedia with its emphasis on ‘neutral point of view’ editing, a paradise where only disinterested editors without any bias are tirelessly improving the encyclopedia.
The Wikimedia Foundation broadcasts this optimistic and utilitarian message through PR and TED talks.
The value of Wikipedia as a powerful online publisher gives any shrewd agenda editor a touch point to disseminate references, context, and in some cases misinformation and misdirection, influencing everyone interested in a search topic.
The losers in a wiki war are, at the least, banned from editing an article, and can face harsher repercussions such as doxxing, harassment, reputation destruction, and stalking, as detailed in this study.
Note: many “losers” in a wiki war on Wikipedia rush off to another MediaWiki platform and restore their lost narrative on one of a host of other wikis, each catering to a specific worldview, such as RationalWiki, RightPedia, InfoGalactic, or Conservapedia.
Many other wikis on the web actually emerged from wiki wars on Wikipedia, such as RationalWiki and Conservapedia, and now the emerging alt-right InfoGalactic.
Other than GamerGate, we don’t hear about wiki wars much, really.
The Wikimedia Foundation is, as one would naturally expect, the most influential voice broadcasting about all things “Wikipedia”.
Because of this, wiki wars are sometimes reported as cutesy little diversions happening on the world’s greatest thing since sliced bread, or treated as confined to a dozen or so highly significant topics that Wikipedia always seems able to account for.
GamerGate did draw some mainstream attention, but the complexity of the problem obscured the mediums that gave it life.
So one reason we don’t hear much about wiki wars is that we hear far more from Wikimedia about all the good things on Wikipedia, a testimony to how powerfully and impressively Wikipedia has been adopted.
Wiki Wars are “buried out in the open”.
The other reason I believe wiki wars are difficult to bring attention to is the complexity of the software itself. Why? Because a wiki war is mind-numbingly complex to follow, highlight, and detail.
This complexity cloaks what is actually occurring in a wiki war, and unless someone has been directly involved, the events are almost impossible to bring to anyone’s attention.
MediaWiki software functions as a foundation for wiki war activities.
One of the hurdles in producing this site was actually detailing the full arc of events in a wiki war while working through the labyrinth of the platform’s software.
“Wikis” in general are widely built on MediaWiki, the underlying platform housing the majority of online wikis beyond Wikipedia.
Any flaw on Wikipedia is therefore repeated across all wikis running on MediaWiki software.
The Psychology of a Wiki War
In a wiki war, a number of online psychological forces appear to converge, sometimes erupting in a ‘perfect storm’ of miscommunication, mistrust, and misinformation. In this sense, a wiki war is a ‘whole system’ psychology, with varying social dynamics finding a balance within it.
I see three clear psychological distinctions playing out, which I refer to as digital wildfires, wiki noise, and social propaganda.
I’ll do the best I can to define each one.
What is a Digital Wildfire?
Digital wildfire is a more academic term used to describe events such as “PizzaGate”. According to the Macmillan online dictionary, a digital wildfire is ‘false or suspicious information’ that spreads virally on the internet: a ‘dark meme’ collaboratively constructed by a mob mindset, a false rumor that uses online social networks to spread at breakneck speed.
In 2014, the World Economic Forum declared ‘digital wildfires’ a leading global threat to stability. And that was published before the 2016 election.
Since the 2016 election, the term “digital wildfire” has been replaced in the mainstream by what we now call fake news and online misinformation, now a dominant tactic of war between nations, states, shareholders, and citizens.
I see digital wildfires as bad distribution with questionable attribution: an arc or story of false or unknown information distributed at network scale by very real, human, visceral reactions to what is believed to be true.
While I believe there is plenty of evidence that events like PizzaGate were also intentional digital wildfires manipulated by deceptive actors, what really gave them adoption were well-intentioned people on the internet who genuinely believed the contents. They really believed they were white knights, defending children against a great evil in the world. In my own case study, my harassers claimed to me that they view themselves as “the good guys”.
In the belief that we are doing the right thing, we can easily participate in a “digital wildfire” without realizing the harm we are causing or the repercussions that can follow.
Reactions at scale
Digital wildfires, just like wildfires in nature, can be started intentionally or unintentionally. I believe what accelerates their spread is rather human: deception, misunderstanding, and confusion, all of which become amplified by digital technology and act like dry tinder, just waiting to be ignited.
Just like forest fires, digital wildfires require only a single online user intent on causing disruption to ignite.
I believe the majority of digital wildfires, however, are unintentional. I think it may be easier to understand them and build solutions for them by distinguishing that set from the more nefarious and intentional ones.
Unintentional wildfire: Wiki Noise.
Wiki noise creates, within each media environment, what some media professors are now calling context collapse.
Let’s refer to wiki noise as just the ‘semantic confusion’ that arises uniquely in a digital environment: something we have to work through in an online collaborative to avoid misunderstanding, a breakdown in “seeing what each other means”.
That is really all ‘wiki noise’ is: a common misunderstanding. And often it is a feature (or flaw) of language or the medium itself, not anyone’s intention.
Left unattended, wiki noise is like dry tinder, waiting to be ignited through individual reactions to what is believed to be true, but isn’t.
Digitally, especially given the limitations of MediaWiki software, this type of noise can be amplified.
Sci-fi author Robert Anton Wilson wrote about this as ‘semantic noise’ back in the 1980s, and I always loved the stories RAW would tell about it. Say or write the words “I love fish”, and one group interprets it as a preference for dining, another as a fondness for a home aquarium, without either being aware there was any disagreement at all.
“I’m Watzlawick!” “No, you are!”
The infamous ‘I’m Watzlawick!’ story, told by Wilson and Dr. Paul Watzlawick, a communications theorist and psychologist, highlights how a ‘communication jam’, arising from the different contexts being perceived in a conversation, can even lead to perceptions of insanity or paranoia about others.
Dr. Watzlawick told the story from his own personal experience as a new staff member at a hospital. On his first day on the job, he reported to the office, where he found a woman sitting at the desk. He assumed she was the director’s secretary and approached her.
“I’m Watzlawick,” he said, assuming the ‘secretary’ would know he had an appointment.
She responded with “I didn’t say you were.”
A bit taken aback, Dr. Watzlawick exclaimed, “But I am!”
To which he heard her reply, “Then why did you deny it?”
Dr. Watzlawick at this point classified her as a schizophrenic patient and was concerned she had wandered into the staff offices. Naturally, he became very careful in ‘dealing with’ her.
However, from the woman’s point of view, Dr. Watzlawick himself had appeared to be a schizophrenic patient.
To her, a strange man had approached and said, ‘I’m not Slavic.’ Her own experience with paranoids had taught her that they often begin conversations with such assertions, vitally important to them but sounding a bit strange to the rest of us.
When she replied – “I didn’t say you were,” she was trying to soothe him.
When she heard him reply “But I am!”, she added schizophrenia to paranoia in her assessment of him.
She then replied, “Then why do you deny it?”
She then became very careful in ‘dealing with’ him in return.
Dealing with these natural and unintentional communication jams can be serious business in online consensus building. It is easy for anyone to filter the semantic noise and the confusion inherent in text and meaning through their own personal psychology.
This suggests that even if all participants in an online collaborative are well-intentioned individuals, the inherent flaws of the communication medium itself can sow the seeds of mistrust, even paranoia, in the consensus.
So a digital wildfire can be sparked by nothing more than the inherent flaws of our communication mediums and language, even without intention.
“Misunderstanding” is of course not the only thing that can ignite a digital wildfire. So can social propaganda: intentional misinformation.
Intentional wildfire: Social Propaganda.
Social Propaganda, unlike ‘wiki noise’, is intentional.
Social propaganda is similar to plain old propaganda, with the exception that it is used to influence the attitudes of others in small or closed social groups online, to accept or discredit new voices coming into the group while boosting one’s own.
It is also a somewhat ‘anti-social’ method of raising or lowering the status of different voices and perspectives in any form of online consensus, using information or misinformation timed to target, embarrass, threaten, or compete with someone.
In online consensus building, any type of activist group or individual, any disgruntled commenter, will use a tactic I refer to in this study as “flag waving”.
Flag waving strategies are methods used in social propaganda campaigns to misdirect or mislead an online community, a mixture of persuasion and deception.
There are various subtle, even petty misinformation strategies that are easy to perform and surprisingly effective in scope.
In wiki and consensus-building groups, social propaganda tags such as ‘sock puppet’ and ‘troll’ are used to foment dissent against an opponent and, as my direct experience on Wikipedia proved, can be used as a tactic for editor suppression.
I’ve witnessed my own ‘propaganda’ campaign, instigated by the notorious online troll Oliver D. Smith and his brother, spread across multiple platforms, attempting to redefine the narrative of who I am and, of course, to discredit this website.
Combining social propaganda with wiki noise, we have the ingredients of a digital wildfire that flows from a wiki war and continues to play out on any medium where it can find a home.
So just how common are ‘wiki wars’ on Wikipedia?
Currently, there is no way to tell other than by visiting Wikipedia’s various noticeboards and trying to piece it together from reams of discussions and accusations flying back and forth. There is no platform-wide accounting of the problem because, ultimately, no one is really accountable on Wikipedia.
However, as time goes on, the faulty architecture of Wikipedia becomes more and more exposed, and what is being exposed is that Wikipedia has no real solution to the problem. So if the problem is common, then the leading source for global public education can only head down a path of discredit unless a solution emerges.
The crisis of wiki idealism.
The idealistic message of what the internet can offer often distracts from genuine real-world problems occurring online: fraud, harassment, fake news, propaganda, manipulation, slander, libel, and tracking.
It’s no different on Wikipedia.
If we are to attain the utilitarian ideal of what knowledge-building collaborative platforms offer, these darker problems will need to find a resolution.
The conflict I see, however, is that historically the Wikimedia Foundation has promoted the idealistic message while somewhat sweeping under the rug the very real-world problems occurring because of it.
Wikimedia conveniently passes responsibility to a community that will not arrive at a consensus.
Currently, I believe no solution to the problem is likely to be adopted on Wikipedia. MediaWiki software itself creates too competitive an environment, while amplifying opportunities to exploit anonymity and twist it into deception and manipulation.
Wikimedia takes a hands-off approach, touting idealism. Thumbing its nose at revenue models and at paying editors or admins, it hands responsibility to a community with no tools to bear it.
Therefore, any change to Wikipedia’s structure, or the adoption of a new one, would need to find community consensus, which is prevented by the same software that is supposed to facilitate it.
They tend to stick their heads in the sand, removing themselves legally from any liability for abuse that occurs on the platform they created, and passing off responsibility to an unmanaged, anonymous online community with plenty at stake and little to no proper oversight.
Free of the core problems of community editorializing that any responsible publisher would normally have to worry about, all Wikimedia has to worry about is fundraising and spreading the utopian message at TED talks, receiving the adulation of the world: a massively PR-friendly message with every web search.
It makes one question who really benefits from giving all the power and responsibility to a community that is growing toxic.