Internet Drama: a consensual field study into online consensus building.

In the age of the internet, whatever appears to have consensus online is treated as true, even if it isn’t. The converse also holds: whatever appears to be false online is treated as false, even if it is true.

From 2015 to 2018, an entire panorama of media platforms, from Wikipedia to Twitter, from Google and Facebook on down to Reddit, even Apple, and numerous others, has amplified the “global dialectic” in clear left-wing/right-wing terms, with the consequences (misinformation, propaganda, a disregard for facts, the weaponization of social media, and targeted harassment) becoming a sort of politics-as-usual reality that shouts at us loudly every day.

Alphabet Inc.’s Google, Facebook Inc., and Twitter Inc. pledged to work together with other tech and advertising companies to fight the spread of “fake news” online in Europe, largely to prevent it from blighting political elections in the region.

Quite unintentionally, all major social media platforms have, to one degree or another, supported online misinformation in such a profound way that it is wreaking havoc on nations, shareholders, and citizens. There is a crisis of consensus building on the internet, and world events reflect that crisis back to us.

Currently, the only solution offered by these platforms is flimsy: some variation of “Terms of Service” violations. That’s right: if you abuse these platforms for misinformation or targeted attacks, you risk having your account deleted.

European digital commissioner Mariya Gabriel on Wednesday welcomed the code as a step in the right direction but urged platforms to intensify their efforts in the area, adding the commission would “pay particular attention to its effective implementation.” The commission said it would analyze the first results of the code by the end of the year and could still propose regulation if the results are unsatisfactory.

Bloomberg, September 26th, 2018.

I’m highly skeptical these platforms will be able to offer any scalable solution, because they tend not to recognize the totality of user behavior on their platforms as a form of consensus building. Consequently, they do not recognize that it is human nature to game algorithms (rules, instructions, etc.) in the course of consensus building. It’s simply in our nature to do it. And these platforms are attempting the opposite: developing an algorithm that can game human nature.

I’m using the term consensus building broadly: a shared reality agreed to, perhaps even created, by social groups small and large. Specifically, I mean to include all social media discussions happening online, and the results of those discussions both online and in the real world.

I don’t mean truth with a capital T, or anything philosophical like “objective reality,” when I talk about “consensus.” I’m just referencing what social groups (you and I and everyone else) do to make sense of, and make workable, whatever it is we can ultimately agree exists in the world in some sense. Enough sense, at least, to survive, solve problems, learn things, have fun, be social, and partner up.

In the age of a polarized internet, I believe this makes online consensus building one of the more critical activities developers should focus on. That is why I publish Wikipedia, We Have a Problem, and why I am developing aiki wiki, a platform for large-scale online consensus building.

So how did we get here?

Content as “cause” ultimately descends into censorship.

In the age of the internet, every possible subject that can be published online has potentially active interest-based groups, big or small, representing the polarity of all social, political, commercial, or ideological types, both for and against the subject.

In the simplest and most practical sense, all variations of any form of ideology, clustered in various social groups, are sharing content online.

Content that is considered misleading or abusive can be “flagged” as inappropriate and removed, the tech giants claim.

To back that assumption up, Facebook and Google are spending millions attempting to identify such content via some form of machine learning or artificial intelligence.

Big tech focuses on “content” when identifying cause on the internet. Yet content itself is a deeply subjective, embedded medium, requiring both attention and individual interpretation.

This means that a significant number of statements and web pages shared or created online have one group claiming they are true and another group claiming they are false, and vice versa, all engaging in one form of consensus building or another.

I believe approaching the problem as “content” will always fail to find a programmatic solution, and will always fall back on some form of censorship on the web. Specifically, I believe it is impossible for there to be an algorithm that can solve the problem if “content” itself is considered the problem.

Behaviors and choices, not content

I think that by analyzing the distinctions between the behaviors in a consensus process, rather than the content of that process, we can begin to programmatically account for consensus building that actually resolves, and perhaps begin to clean up the mess on the web more effectively.

So how do we distinguish these behaviors programmatically?

Emotional consensus building and rational consensus building: two distinct sets of behaviors and choices

Online, as in real life, a consensus can be easily earned by sharing content or keywords that carry a shared emotional signature with any given audience anywhere on the internet.

This is the most obvious and common form of online consensus building: “likes,” emoticons, even “memes” that build consensus through the simple expression of pleasure or displeasure at any given subject or topic shared on the internet via Twitter, Reddit, Facebook, or any other social media platform.

An emotional consensus can also be akin to a political consensus: a product of social or political propaganda. An emotional consensus can be optimistic and positive, cynical and negative, or any variation in between.

From a programmatic perspective, emotional consensus building is, by any other name, just voting. Propaganda, by any other name, is just marketing and advertising via media buys. Propaganda is the content, voting is the behavior.

These are precisely what gets gamed in any misinformation campaign, and they are supported by every major platform’s architecture.

Platform architecture supports emotional consensus building, even if it is unintentional.
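
To make this concrete, here is a minimal sketch in Python, using hypothetical names rather than any platform’s actual code, of the ranking logic the major feeds share in spirit: content rises or falls on reaction counts alone, which is exactly the surface a coordinated campaign can game.

```python
from dataclasses import dataclass

@dataclass
class Post:
    """A piece of shared content, as a typical platform's feed sees it."""
    post_id: str
    upvotes: int = 0    # likes, hearts, upvotes: expressions of pleasure
    downvotes: int = 0  # dislikes, angry reacts: expressions of displeasure
    shares: int = 0     # retweets, crossposts: amplification

def engagement_score(post: Post) -> int:
    # Hypothetical scoring: nothing here inspects how the reactions came
    # about, only that they happened. Voting is the behavior being counted.
    return post.upvotes - post.downvotes + 2 * post.shares

def rank_feed(posts: list[Post]) -> list[Post]:
    # The feed surfaces whatever accumulated the most reactions, so a
    # coordinated burst of votes is indistinguishable from a genuine
    # emotional consensus.
    return sorted(posts, key=engagement_score, reverse=True)
```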

The gaming of emotional consensus building in internet social groups

Internet users who manipulate emotional consensus building online appear to use three distinct strategies: users who willfully practice deception or manipulation in an online consensus, “psychological types” who attempt to “bully” an online consensus, and individuals who attempt to control all the permissions in a consensus-building process.

These are specific choices that types of users “must” make in a consensus process, and all forms of harassment or misinformation carry at least one of these signatures.
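
As an illustration only, and not anything drawn from an existing platform’s tooling, these three signatures could be represented programmatically as behavioral tags attached to interactions in a consensus process, rather than as judgments about content; a sketch with hypothetical names:

```python
from dataclasses import dataclass
from enum import Enum, auto

class BehaviorSignature(Enum):
    """The three strategies used to game emotional consensus building."""
    DECEPTION = auto()           # willful deception or manipulation of the consensus
    BULLYING = auto()            # attempts to intimidate or "bully" the consensus
    PERMISSION_CONTROL = auto()  # attempts to control all permissions in the process

@dataclass
class Interaction:
    """One user action inside a consensus process, tagged by behavior, not content."""
    actor: str
    discussion: str
    signatures: set[BehaviorSignature]

def carries_toxic_signature(interaction: Interaction) -> bool:
    # Any one of the three signatures marks the interaction as toxic
    # consensus building, regardless of what the content itself says.
    return bool(interaction.signatures)
```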

These three strategies are obviously psychological, and not ideological, types. I believe what is happening on the internet, viewed through this particular lens, is that internet users with these psychological inclinations begin to dominate the “broadcast signal” on each media platform, becoming the dominant voices of each ideological “niche.”

Because these behaviors are inherently competitive, more rational or calm voices become intimidated, suppressed, turned off, or banned/sanctioned in the consensus process.

“YEAH, IF PEOPLE COULD STOP REPORTING PEOPLE FOR POLITICAL OPINIONS THEY DISAGREE WITH, THAT WOULD BE GREAT. Dialogue is good with those with whom you don’t concur. Please stop reporting unless against rules. I will NOT remove unpopular opinions without considered reasoning. Try to stay on topic.” (from r/technology)


Easy come, easy go

An emotional consensus is easy to spark online compared to its counterpart, a rational consensus: a consensus won by applying critical thinking and critical questioning, along with nothing more sophisticated than honesty and common courtesy.

Indeed, a rational consensus naturally tempers an emotional consensus, eventually (or at least hopefully).

That is why an emotional consensus can be just as easy to lose as it is to win: once it has been processed in an environment that allows for rational consensus building, the illusion behind the emotional consensus is exposed.

Hard-won

Historically speaking, a rational consensus would be the more hard-won counterpart, meaning it requires “work” and is itself “earned.” A hard-won consensus-building process relies on “steps,” or methodological arcs, being satisfied as “rational” by those within the process, without relying on percentages based on popularity.

Voting itself is sloppy in comparison, as voting only requires reactions, similar to “animal spirits” in marketplace economics, to achieve a consensus.

In principle, a well developed rational consensus could exist in an extreme minority for years, decades, even centuries, and yet over time eventually come to be accepted as a commonly held truth and feature of consensual reality.

Psychologically speaking, agnosticism or skepticism would be obvious features of rational consensus building, along with social empathy, and these forces combined require ethics: integrity, honesty, courtesy, and logic must all be mutually agreeable.

Note that a rational consensus is more process- and behavior-oriented. Yet, in principle, from a programmatic perspective, an algorithm cannot distinguish between content formed and shared via a rational consensus and content formed via an emotional consensus.

The problem is that the most widely adopted platforms developed tools only for emotional consensus building (an algorithm) and zero tools for rational consensus building.
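
A small sketch of that limitation, again with hypothetical names: two pieces of identical content score identically to any content-based algorithm, and only a record of the process that produced them could tell an emotional consensus from a rational one.

```python
from dataclasses import dataclass, field

@dataclass
class ConsensusRecord:
    """Content plus the process data a content-based algorithm never sees."""
    text: str
    votes: int = 0                                             # emotional consensus: reactions only
    steps_satisfied: list[str] = field(default_factory=list)   # rational consensus: methodological steps
    participants_agreed: bool = False                          # mutual resolution among participants

def content_score(record: ConsensusRecord) -> int:
    # A content-only view: text plus reaction counts. Two records with the
    # same text and votes are indistinguishable, however they were formed.
    return record.votes

def is_rational_consensus(record: ConsensusRecord) -> bool:
    # Distinguishing the two requires process data, not content data.
    return bool(record.steps_satisfied) and record.participants_agreed

organic = ConsensusRecord("Claim X is supported.", votes=500,
                          steps_satisfied=["questioned", "sourced", "agreed"],
                          participants_agreed=True)
brigaded = ConsensusRecord("Claim X is supported.", votes=500)

assert content_score(organic) == content_score(brigaded)                 # content cannot tell them apart
assert is_rational_consensus(organic) != is_rational_consensus(brigaded)  # process can
```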

The digital dialectic creates a handicap for rational consensus building

MediaWiki (which powers Wikipedia and a host of other sites), Facebook, Google, Twitter, and Reddit, the most widely adopted platforms on the internet, only give us “rules,” or terms of service, for rational consensus building.

Rules, guidelines, and terms of service only require users to make a promise, a code of conduct in effect. In principle, this is an extension of trust building. Trust building is great between individuals, but it doesn’t make much sense between an algorithm and an internet user generating content, where no true “bond” could ever exist. Instead, these terms of service are gamed with the very same programmatic tools provided to users for emotional consensus building.

This flimsy barrier is currently all the web offers to support rational consensus building, and it is the only barrier we actually have against mob rule, i.e., emotional consensus building run amok.

I believe that we can, programmatically speaking, distinguish between these two types of consensus from a behavioral perspective, and build tools that could ensure the “rational consensus” becomes the most discoverable content on the web.

I believe I have been able to identify these distinctions in a manner that can be programmed and hardcoded, while still allowing the first-person experience of the consensus-building process to inform the algorithm. This allows a consensus to be designed without the need for a third-party arbitrator and can guarantee a mutual resolution between all participants.

Consensus building is peer to peer.

The first problem all platforms have is that a consensus must first be established as to whether an online event even warrants treatment as a violation of the platform’s terms of service in the first place.

Consider: toxic interaction, deception, targeting, or harassment is filtered through first-person experience online and is completely unregisterable programmatically. The same is true of misinformation.

A platform cannot programmatically determine whether I, or anyone, is being lied to or harassed, but the person experiencing the event first-hand can claim it at any time. Twitter, for example, requires user “reports” to flag content, which then gets circulated for review. That verification process with moderators will likely show a bias relative to each moderator’s point of view.

Since all claims on the internet require consensus to be verified as true, claims about internet toxicity require a great deal of “work” to build a verifying consensus, making it less and less likely that such a solution can scale.

This means that every claim of deception, harassment, or misinformation becomes just another claim on the internet, one of millions, to be verified before any corrective action can take place.
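
To illustrate why this struggles to scale, here is a toy sketch, with hypothetical names and not any platform’s real moderation pipeline: every report lands in a queue as an unverified claim, and each one still requires a human judgment shaped by first-person experience.

```python
from collections import deque
from dataclasses import dataclass
from typing import Callable

@dataclass
class Report:
    """A first-person claim of harassment or misinformation; code cannot verify it."""
    reporter: str
    claim: str

review_queue: deque[Report] = deque()

def file_report(report: Report) -> None:
    # Programmatically, the platform can only record that a claim was made.
    review_queue.append(report)

def review_next(moderator_judgment: Callable[[Report], bool]) -> bool:
    # Verification happens one claim at a time, by a human whose decision
    # reflects their own point of view; the work grows with every report filed.
    report = review_queue.popleft()
    return moderator_judgment(report)
```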

Design problem

So what happens in consensus building online when internet users who are intentionally honest, rational, and collaborative have to build a consensus with those willing to practice deception, bully others, or control all the permissions in the consensus process?

Pretty much what the web is today, I imagine.

Set A users: rational consensus building is encouraged through the platform’s terms of service or community guidelines, which rely on first-person experience and verification and are hard, if not impossible, to scale with current platform technology.

Set B users: Emotional consensus building is encouraged via the platform’s algorithm, which can scale programmatically.

In the past five years, I’ve left no stone unturned in showing each angle and each step protagonists and antagonists go through in the process of controlling editing permissions and building consensus on media platforms.

The purpose of this is to design a platform that allows rational consensus building as its sole publishing mechanism while encouraging all types of participation, even emotional consensus building, without relying on a voting algorithm.

Such a platform would naturally distil emotional consensus building into a refined rational consensus for fully trusted, vetted, verified online publication.
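
The actual aiki wiki process isn’t spelled out here, so the following is only a toy sketch under my own assumptions of the general shape such a platform could take: publication is gated not by vote counts but by a sequence of steps that every participant must mutually resolve.

```python
from dataclasses import dataclass, field

@dataclass
class ConsensusProcess:
    """Toy model: content is publishable only once every step is mutually resolved."""
    topic: str
    participants: set[str]
    steps: list[str] = field(default_factory=lambda: [
        "clarify the claim",
        "surface disagreements",
        "answer critical questions",
        "agree on wording",
    ])
    resolved_by: dict[str, set[str]] = field(default_factory=dict)

    def resolve(self, step: str, participant: str) -> None:
        # A step counts only when a participant explicitly resolves it;
        # there is no voting and no third-party arbitrator.
        self.resolved_by.setdefault(step, set()).add(participant)

    def publishable(self) -> bool:
        # Publication requires every participant to have resolved every step.
        return all(
            self.resolved_by.get(step, set()) >= self.participants
            for step in self.steps
        )
```

Emotional reactions could still flow freely around such a process; they simply would not be the mechanism that publishes.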

For me, the only way I know how to develop, test, vet, and falsify my own assumptions in the design of aiki wiki is to have real, opt-in interactions with users who willfully engage with me and willingly apply a toxic consensus process, while I remain confrontational but within the boundaries of rational consensus building.

Voilà.

Internet Drama as a field study, 2013 – 2018 (Happy 5th Year Anniversary, WWHP)

In October of 2013, I intentionally introduced a methodology for rational consensus building that I had been refining for quite some time directly to a group of editors on Wikipedia who, to me, were obviously engaging in toxic consensus building in a well-known “wiki war” that was getting mainstream press: the biography of Rupert Sheldrake.

The only way I could know for sure if this group on Wikipedia were truly engaging in toxic consensus building would be if they applied it to me directly.

What’s more, this meant that I would adhere to the ethics of rational consensus building: I was transparent and upfront with this group of editors, and I invited them to collaborate with me in building a consensus.

The response to my request was, predictably, harassment. I did not let that deter me from extending my offer to them.

I continued to build consensus until this group had me sanctioned from editing, using techniques detailed in this study as editor suppression.

So in my first case study, although I was able to build a consensus within the community on Wikipedia, I was sanctioned and prevented from continuing in the consensus process.

Technically, if my intention then was to build a consensus on Wikipedia, I failed.

This “failure” was a profound learning experience; the mistakes I made would not be repeated.

I attribute this early failure to two factors. One was simply my own naivete regarding the Wikipedia community: I was not expecting these obviously toxic behaviors to be supported by Wikipedia admins, and I assumed it would be easy to overcome them. The other was inexperience: it was my first time attempting to build a consensus in a toxic environment while adhering strictly to the platform’s guidelines, and I was very unfamiliar with MediaWiki formatting.

What’s more, in the first case study, I was sanctioned because I was unable to recover my “reputation” on Wikipedia, which this group had discredited through social propaganda designed to influence Wikipedia admins toward a sanction against me. This has been a key event in this entire study: the attack on my own reputation as a weaponized event in these “wiki wars.”

My editing account at the time, The Tumbleman, had no credibility; I was labelled a disruptive “troll” on Wikipedia by these editors, the complete opposite of my intention, which was to help build a rational consensus on a contentious Wikipedia article.

Because I was confrontational with them and had some knowledge of how Wikipedia’s rules worked, I thwarted many of their early attempts to have me sanctioned on Wikipedia, making them more desperate and raising the stakes.

Since bloggers were writing about this wiki war and it was getting attention, there was a little heat on the event. A week or two after I was sanctioned, an attack article was written about me on RationalWiki, amplifying the misleading narrative of me as a disruptive internet troll, the same message these editors had been spreading about me on Wikipedia to remove me from editing.

Next, influencers such as Tim Farley, a vanguard of “skeptic activist” editing on Wikipedia, began their own “PR” campaigns to damage-control perceptions of the event among skeptics. Not only did I find “attack articles” written about me, but misinformation outreach campaigns were now literally being promoted by this group of influencers outside the Wikipedia community.

I must say that event caught me off guard. It was a bit over the top as a response, and completely unwarranted: I was being targeted by a group of Wikipedia editors for actually editing a Wikipedia article. The RationalWiki article was an obvious attack piece, and it opened my eyes to how “wiki communities” operate, giving me more fodder for Wikipedia, We Have a Problem.

However, my second attempt at rational consensus building on Wikipedia was a success.

I returned to Wikipedia anonymously, and with the same group of editors, along with senior Wikipedia editors and admins, I was able to achieve a hard-won consensus on another notorious “wiki war,” the biographical article of Deepak Chopra, proving to me, at least, the efficacy of a methodology for online rational consensus building.

This success, however, increased the level of targeting and harassment I received.

Since the process I introduced was also predictably going to appear confrontational to those practising toxic consensus building, I simply began to document the targeting and harassment I received in response on Wikipedia, We Have a Problem, building out the case study to show the interaction between the two types of online consensus building, but this time regarding a narrative about me, the publisher of the case study, a plot twist that has borne incredible fruit.

While I did not predict that the harassment would reach the level it did, it provided me with another “first-person” verification, one that would always satisfy my own test of certainty, for I alone can be certain whether what is published about me is a form of misinformation or the product of rational consensus building.

This five-year case study, hard-won indeed, records the interaction between collaborative consensus building and toxic consensus building around the narrative of a subject on the internet that I have perfect clarity on: me.

 
