[Update: this article was written in 2015, long before the 2016 election, with its claims of fake news, Russian election meddling, and social media trolling by alt-right forces with things like Pizzagate. Recent events provide a much clearer example of these dynamics, which were more difficult to define a few years ago, since there was little language in the mainstream that could describe the complexity of what happens in them. While current events give clearer examples, the three dynamics I list below are the same.]
In a wiki war, a number of online forces appear to converge, sometimes erupting in a ‘perfect storm’ of miscommunication, mistrust, and misinformation. War, like the man said, can be hell.
I refer to these three social forces as digital wildfires, wiki noise, and social propaganda.
What are Digital Wildfires?
In 2014, the World Economic Forum declared ‘digital wildfires’ a leading global threat to stability. According to the Macmillan online dictionary, a digital wildfire is ‘false or suspicious information’ that spreads virally online. A ‘dark meme’ collaboratively constructed by a mob-type mindset. A false rumor that uses online social networks to spread at ‘breakneck’ speed.
A timely example of a digital wildfire is the father in Australia who was photographed playing with a Darth Vader mask in a children’s playground and, within 24 hours, was accused of being a pedophile across social media networks. This digital wildfire was a rumor that the woman who started it apologized for profusely. In a brief moment, she really did believe this man was a pedophile, and snapped his picture. She shared it over social networks, and both she and the man are now devastated by the results of what happened.
I refer to recent events, such as ‘Pizzagate’, as a strong example of a digital wildfire spreading online beyond anyone’s control.
While ‘digital wildfire’ is a decent term for a false rumor spreading exponentially online, it does not tell us how a digital wildfire is ignited.
What fuels a digital wildfire? Why do some things go viral in this fashion and not others?
One thing that can inform and ignite a digital wildfire is what I like to call ‘wiki noise’.
What is Wiki Noise?
The recent example above shows false information emerging as a human reaction to what we believe to be true, in the belief that we are doing the right thing. Flaws in communication, even in perception, can greatly influence what we react to and what we believe to be true online.
Let’s refer to Wiki Noise as the ‘semantic confusion’ that those in an online collaborative must work through to avoid misunderstanding, a breakdown in ‘seeing what each other means.’ Communication jams.
Author Robert Anton Wilson wrote about this as ‘semantic noise’ back in the 1980s. He saw it as an inherent flaw in any communications system, even among well-intentioned people.
A man says ‘I love fish’ and one group interprets it as his preference for dining, another as his fondness for a home aquarium.
The infamous ‘I’m Watzlawick!’ story told by Dr. Paul Watzlawick, a communications theorist and psychologist, highlights how a communication jam, created when two people perceive the same conversation from different contexts, can even lead to perceptions of insanity.
Dr. Watzlawick told the story of his own experience as a new staff member at a hospital. On his first day on the job, he reported to the office, where he found a woman sitting at the desk. He assumed she was the director’s secretary and approached her.
“I’m Watzlawick,” he said, assuming the ‘secretary’ would know he had an appointment.
She responded with “I didn’t say you were.”
A bit taken aback, Dr. Watzlawick exclaimed, “But I am!”
To which he heard her reply, “Then why did you deny it?”
Dr. Watzlawick at this point classified her as a schizophrenic patient and was concerned she had wandered into the staff offices. Naturally, he became very careful in ‘dealing with’ her.
However, from the woman’s point of view, Dr. Watzlawick himself had appeared to be a schizophrenic patient.
To her, a strange man had approached and said, ‘I’m not Slavic.’ Her own experience with paranoids taught her they often begin conversations with such assertions, vitally important to them, but sounding a bit strange to the rest of us.
When she replied, “I didn’t say you were,” she was trying to soothe him. When she heard him reply, “But I am!” she added schizophrenia to paranoia in her assessment of him. She then asked, “Then why did you deny it?” and became very careful in ‘dealing with’ him.
Dealing with these natural and unintentional communication jams is serious business in online consensus building. It is easy for anyone to filter the semantic noise, the confusion inherent in text and meaning, through their own personal psychology.
This suggests that even if all participants in an online collaborative are well-intentioned individuals, the inherent flaws of the communication medium itself can sow the seeds of mistrust, even paranoia, within the group.
So a digital wildfire can be informed by nothing more than the inherent flaws of our communication mediums, without any intention at all.
That is, of course, not the only spark. A digital wildfire can also be ignited by social propaganda.
What is Social Propaganda?
Social Propaganda, unlike ‘wiki noise’, is intentional. It is a method of raising or lowering the status of different voices and perspectives in a social group.
Social propaganda is similar to plain old propaganda, but instead of being a form of communication aimed at influencing the attitudes of a population, it is used to influence the attitudes of others in a small or closed online social group, to accept or discredit new voices entering the group.
‘Petty propaganda’ might be a more appropriate term. In online consensus building, any activist group or disgruntled commenter can use ‘weasel words’ that serve this purpose.
In wiki and consensus-building groups, social propaganda tags such as ‘sock puppet’ and ‘troll’ are used to rally dissent against an opponent and, as my direct experience on Wikipedia showed, can be used to force or censor consensus and editing.
I believe the use of social propaganda is simply human nature; although its language is ideologically driven, its cause is not the ideology.
This website details my personal experiences dealing with one such subgroup, known as ‘skeptic activists’, but their behaviors are likely the same as those of any ideological group of people.
I believe everyone, myself included, can be guilty of these types of behaviors at some point in life. It’s just as much social propaganda to call the cute new girl in high school math class a major slut on Facebook as it is for #gamergate activists, both pro and con, to frame each other in a manner that will stir the behaviors of their own social group back on Twitter and Reddit.
I’ve witnessed my own ‘propaganda’ campaign, instigated by the notorious Oliver D. Smith, spread across multiple platforms, attempting to redefine the narrative of who I am and, of course, to discredit this website.
Combining social propaganda with wiki noise, we have the ingredients of a digital wildfire and a wiki war that continues to play out on any medium where it can find a home.
Wikis like Wikipedia, RationalWiki, and Encyclopedia Dramatica can accelerate these things to varying degrees, and currently there is no oversight for this.
I’ve been very fortunate as a developer and researcher of online consensus-building processes. I’ve gotten to personally witness wiki noise and social propaganda fuel my own digital wildfire, now lasting more than two years. ‘Wikipedia, we have a problem’ is gonzo research and live blogging as my own digital wildfire continues to spread around the web.
Starting with Wikipedia editor Manul in September 2013, continuing through my request for a new consensus, my indefinite AE block, and the publication of my name as a Wikipedia editor in an article about me on RationalWiki, then on to a handful of Wikipedia Review and Wikipediocracy forum members, and finally to Reddit and WikiInAction, my digital wildfire continues to spread 18 months later.
How can we stop digital wildfires?
On Macmillan’s online dictionary page for ‘digital wildfire’, it reports that:
Paradoxically, one of the most effective ways to deal with a digital wildfire turns out to be use of the same social media avenues to set the record straight.
To defuse a digital wildfire, we have to broadcast.
‘Wikipedia, we have a problem’ is a study into online harassment and bullying in consensus building, focusing on Wikipedia, wiki wars, and the personal events that can happen from participating in them.
It’s also a fair attempt to defuse my own digital wildfire, and to report the results of each event as it occurs.
In every event of my own digital wildfire, the behaviors of those who wish me harm are consistent:
- Refusal to engage in honest, face-value discourse or critical questioning.
- Extreme social propaganda used to discredit their opponent’s voice, with heavy use of ‘weasel words’ peculiar to their subculture. In my case: ‘troll’, ‘sock puppet’, ‘pseudoscience promoter’.
- Omission of key facts that might lead online readers to doubt their statements and the claims behind the weasel words used to describe me.
- Lack of honest self-reflection; claims that wild ‘conspiracy theories’ are being made about them.
- Continued attempts to silence their opponent’s broadcast and voice, or ‘censorship’, by banning, shaming, blocking, reverting, and deleting input and contributions. For example, each new post made by this website receives a new, often failed, attempt to discredit this study.
These are all behaviors that attempt to stop a collaborative rational consensus process.
In every digital wildfire event, any online user who refuses to participate in honest consensus building around the facts of the event is an agent of not only their own confusion, but the web’s as well. When it comes to Wikipedia, knowing that these behaviors can actually shape the metadata around a search topic is very worrisome.
Building a rational consensus online naturally filters out these tainted agents of confusion. We should demand no less in critical consensus building, which is crucial to the future of a responsible internet.