What is aiki wiki?

Aiki wiki is an emerging digital platform designed for trusted, large-scale online consensus building between dozens, hundreds, or thousands of online users with different viewpoints on any topic, proposal, article, or study.

Through the process of negotiating an ‘aiki wiki’, users collaborate to build a shared narrative, which is then published as a fully vetted and trusted article, composed in the true collaborative voice of the rational consensus.

The published article, composed solely in this collaborative voice, is written, edited, and vetted purely through the rational and collaborative behaviors hard-won in the consensus process between all participating users.

The result is a fully trusted, transparently vetted resolution that can be distributed on the web while it builds further consensus.

All vetted consensus articles are published in an online library, which has unique contextual features for easy user discovery and participation.

On the platform, a published mutual resolution between all viewpoints is the only possible outcome.

Its design offers a unique approach to the “conflict of ideas” and a viable solution to false consensus, fake news, misinformation, online deception, harassment, bullying, and intimidation by rebuilding a trusted, rational web from the ground up.

How does aiki wiki work?

Aiki wiki changes the entire dynamic of a heated and conflicting discussion, argument, or negotiation. Its methodology contains a stratagem to incentivize collaborative behaviors and choices by gamifying editing permissions into a win-win publishing achievement. That achievement can only occur between two conflicting points of view, in principle between personal forces attempting a win-lose alternative, which the platform will not publish as a consensus.

One way aiki wiki achieves this is by giving users in the consensus process the ability to assign and exchange three shared values to each other’s statements, instead of two values such as thumbs-up or thumbs-down emoticons, which is commonly understood as a voting algorithm.

These three assignments, which are graded, assigned, exchanged, and, most importantly, measured and mapped to user choices and behaviors, are the numbers 0, 1, and 2.

While each can have many subcontexts, ultimately the algorithm grades each as: 0 = mystery, 1 = true, and 2 = false.

Paired with this is the programmatic assignment of editing permissions to users based on their choices in assigning these three values.
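As a rough illustration (a hypothetical sketch, not the platform’s actual code; the names `SharedValue` and `is_conflict` are my own), the three values and the kind of conflict they can encode might look like this:

```python
from enum import IntEnum

class SharedValue(IntEnum):
    """The three shared values exchanged on statements,
    using the numeric grading described above."""
    MYSTERY = 0  # an open question / "I don't know"
    TRUE = 1     # the statement holds
    FALSE = 2    # the statement does not hold

def is_conflict(a: SharedValue, b: SharedValue) -> bool:
    """Two users conflict when one assigns TRUE and the other FALSE;
    a MYSTERY assignment leaves the question open rather than opposed."""
    return {a, b} == {SharedValue.TRUE, SharedValue.FALSE}
```

The third value is what distinguishes this from an up/down vote: a MYSTERY assignment never produces a conflict, so it keeps the exchange moving instead of forcing a winner and a loser.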

Editing permissions are earned in aiki wiki through hard-won negotiation between users. Each “aiki wiki” can have hundreds of permissions assigned to one article.

There is a psychological element to aiki wiki, with an emphasis on the “logical”. While users ultimately decide the outcome of the consensus without requiring a centralized third party to arbitrate, the platform’s algorithm can account for all behaviors inside the process and assign micro-permissions through the collaborative exchange.

This can allow the most trusted members of a community to emerge as “admins” of any given article, a higher permission awarded to the users with the highest count of rational and collaborative behaviors.

Why do you say it publishes a mutual resolution?

Resolution is the only logical outcome of an aiki wiki discussion because editing permissions, which are the rights to “publish” a consensus on the platform, are literally won, and only won, in pairs between two or more participants who have conflicting assignments to any part of an article, even a single word or sentence: essentially, a disagreement of some kind.

To work through this disagreement, aiki wiki allows users to build a naturally forming social contract with each other that requires no third party to administer the outcome. This social contract is constructed in a way that gives users a natural set of choices inside the narrative of their discussion, choices that can only resolve in a mutual resolution. Aiki wiki achieves this through an algorithm that changes consensus building from a binary, vote-up-or-down game into a ternary game, logically allowing conflict, inherently a win-or-lose phenomenon, to keep moving until resolutions are achieved and published between users earning micro-editing permissions.
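The pair-wise unlocking described above can be sketched in a simplified model, assuming one assignment per user per statement and a micro editing permission granted only when a previously conflicting pair converges (all names here are illustrative, not the platform’s API):

```python
from dataclasses import dataclass, field

@dataclass
class Statement:
    """One disputed unit of an article: a word, sentence, or claim."""
    text: str
    assignments: dict = field(default_factory=dict)  # user -> 0, 1, or 2
    permissions: set = field(default_factory=set)    # users who may publish

    def assign(self, user: str, value: int) -> None:
        """Record (or revise) a user's ternary assignment."""
        self.assignments[user] = value

    def try_resolve(self, user_a: str, user_b: str) -> bool:
        """A micro editing permission is unlocked only as a pair: both
        users must converge on the same value. Until then the disagreement
        keeps moving and nothing is published."""
        a = self.assignments.get(user_a)
        b = self.assignments.get(user_b)
        if a is not None and a == b:
            self.permissions.update({user_a, user_b})
            return True
        return False
```

In this toy model, a win-lose stalemate (one user at 1, the other at 2) never yields a permission; only a revision by one side, or a mutual retreat to 0 (mystery), seals the pair's right to publish.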

This feature alone will naturally attract the more collaborative psychologies from each worldview and slowly convert, over time, the more combative or toxic participants (as detailed quite extensively in Wikipedia, We Have a Problem), isolating and distilling consensus down to its bare elements, and then reuniting them into a ternary social logic that easily overrides the more binary principles of win-lose voting.

From a logical perspective, this allows aiki wiki to accomplish something psychologically paradoxical at first: giving those in the consensus-building process an opportunity to win, even when, in the traditional sense, they may feel at first that they have “lost” a disagreement.

Admitting errors, specifically contradictions in one’s explanations of one’s assignments, earns a permission to publish just as much as a hard-won disagreement querying the contradictions made by another user.

Additionally, the algorithm allows all users’ discussions to generate a narrative about the discussion itself.

This occurs programmatically through all users’ decisions and behaviors in each arc of the discussion until resolution.

This creates a community-directed narrative and feedback loop around the direction of the consensus, making clear where there are disagreements, where there are agreements, and which questions remain unresolved within the community.

As long as users stay inside of an aiki wiki discussion, a resolution is the only logical outcome that can occur.

Programmatically speaking, aiki wiki can only fail to produce a resolution for an individual (as opposed to the collective) if that individual chooses not to complete the discussion and leaves.

As for the collaborative resolution: even if one user chooses to leave the discussion, with their specific point of view still open before resolution, another user can come in and easily take up their place in the consensus.

Since the discussion itself is what generates, edits, and refines the article, users are more likely to stay inside of the discussion so they can influence the output.

So if you have two sides in a dispute that are very extreme, say political or religious points of view, is a resolution still to be expected?


No matter what point of view or group exists in a collaboration, each worldview is going to have a segment of its adherents who tend to be more rational initially than others.

And because aiki wiki is more a game of assigning three values than of voting up or down, small compromises can be made rather quickly, primarily because binary voting isn’t used for consensus.

This segment could be minuscule, small, or large; it does not matter. It only takes one rational individual to alter a consensus between many people on aiki wiki.

aiki wiki ensures that the rational parties in each ideological conflict ‘find each other’ and the consensus builds from there.

How can aiki wiki determine if someone is ‘rational’?

Editing permissions are unlocked between pairs of users when they make collaborative behaviors and choices in an exchange with each other. Until an editing permission is assigned, the right to edit is not granted.

There are three grades by which aiki wiki could identify a user making rational choices, and one of those grades is also graded at the user level, not the platform level.

The first grade is honesty. Aiki wiki primarily defines “rationality” algorithmically as a form of honest expression in the course of consensus building with other users holding different viewpoints.

Is a user being honest with their answers to the very best of their ability?

If a user doesn’t know something, can they acknowledge they don’t know?

Can a user acknowledge a mistake? More importantly, can the user acknowledge they made a contradiction?

Ultimately, only that user knows for sure. But simple choices that acknowledge an error, a misunderstanding, or an improper assignment on the part of the user are readily identifiable to the algorithm, and an editing permission is granted.

While the platform’s algorithm could never determine these answers purely from a programmatic perspective, other users potentially can through questioning, so a very simple level of “self-reflection” can be tested in the consensus process and any user can be tested by others for honesty.

The algorithm creates an arc in the discussion so these questions become addressed naturally in the process.

When users are honest to the best of their ability, all of their responses in consensus building should show rational behaviors, and they should expect to be awarded dozens of editing permissions. Ultimately, this is really how “rational” can be defined in consensus building, in ways most users can understand without being esoteric or philosophical.

The second grade is similar to how “rational” is defined in economics.

Is the user able to argue and define their own self-interests (point of view) in the discussion?

In explaining their own self-interests, are there any contradictions?

These questions naturally become resolved and recorded within the rules of aiki wiki’s algorithm to assign or predict an editing permission. Any other user can identify a contradiction made by that user and then question that user within the confines of the algorithm.

The third grade takes all of this one step further.

Is the user collaborative? Can the user now argue or define the interests (point of view) of another user?

If a user is capable of writing the viewpoint of another user to their agreement, the algorithm can identify them quite easily and award them an editing permission.

Additionally, the way the discussion is formatted offers users multiple chances and opportunities to make rational choices based on the grades above. Almost every activity on the platform will be able to continually grade users’ choices and decisions and continually assign permissions, logically sealing the end result as a fully vetted, completely rational composition arrived at collaboratively by many different users.
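The continual grading described above could be modeled, very loosely, as a tally of rational and collaborative behaviors per user, with the highest-scoring users emerging as article “admins”. The event names and scoring rule here are assumptions for illustration only:

```python
from collections import Counter

# Hypothetical behavior labels corresponding to the three grades above:
# honesty (acknowledging an error), self-interest stated without
# contradiction, and collaboration (restating another's view to
# that user's agreement).
RATIONAL_BEHAVIORS = {
    "acknowledged_error",
    "stated_own_view_consistently",
    "restated_other_view_agreed",
}

def grade_users(events):
    """Count rational/collaborative behaviors per user from an event log
    of (user, behavior) pairs; the top-scoring users emerge as admins."""
    counts = Counter(
        user for user, behavior in events if behavior in RATIONAL_BEHAVIORS
    )
    top = max(counts.values(), default=0)
    admins = {u for u, c in counts.items() if c == top and top > 0}
    return counts, admins
```

Note the design choice implied by the text: non-rational behaviors are simply not counted rather than punished, so abusive choices earn nothing and collaborative ones compound.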

So voting algorithms, like thumbing up or down, liking, etc., are not applied in aiki wiki?

Voting is inherently flawed in terms of rational consensus building. While voting itself is a fair and, hopefully, open democratic process that gives everyone a voice and gives groups and societies a tool to approve a consensus, it does not necessarily ensure that what is voted to the top is accurate, trustworthy, or dependable (see the 2016 election, or see Reddit and other social media platforms for the limitations of voting).

Therefore, voting up or down is never used to determine a rational consensus or the outcome of a consensus.

Voting is still allowed to occur on aiki wiki, however; it is not something that is suppressed.

Thumbing up or down is still valuable information in a consensus: it informs a rational consensus of the personal side of the process, and it may allow for a good reflection of group sentiment, or even the “animal spirits” invoked in economics, but it should never be confused with a fully trusted, rational, and consistent consensus.

So users can still have a ‘human’ discussion, and not be forced to discuss programmatically, or in legalese like attorneys?

Yes. aiki wiki should allow for lively discussion, especially humor, to take place. It should be fun and natural.

What is unique about aiki wiki is not just how it can produce a rational consensus, but also how users can use and appreciate their own creativity, subjectivity, and personal expression, and how important those voices also are in a consensus process.

How does aiki wiki remove trolling, harassment, or deception used in a consensus process?

aiki wiki is designed with certain design principles in mind regarding user behaviors.

Ideally, aiki wiki should not seek to change users’ behavior or even their ideas; it should only seek to change the environment the user is in while discussing their ideas in a consensus process.

The environment of an aiki wiki discussion makes it less and less likely that abusive behaviors will emerge, because the environment supports genuine, honest, and rational exchange between users as the sole incentive. Abusive behaviors simply will not be able to compete in the consensus process, and eventually become isolated. aiki wiki does this by allowing one discussion narrative or article to simultaneously flow through three different types of forums or “voices” in the consensus process, all guided programmatically by the algorithm.

Each forum teases through different types of discussions and different types of user behaviors.

So where discussions become critical and require rational resolution, aiki wiki naturally filters them through one forum, while discussions that erupt into personal attacks, ridiculous arguments, or even just personal commentary are filtered through another.

Organizing the whole process are the individual choices made by individual editors, applied as a collective result published as an article.

So aiki wiki should make it impossible for trolls to compete in a rational consensus or gain consensus where none is warranted.

Everyone can still make their own choices about how they communicate in an aiki wiki, however; everything is “permitted”. It just takes more work to abuse the consensus than it does to participate rationally.

Can aiki wiki be ‘gamed’?

That is actually the point of aiki wiki: to ‘gamify’ critical discussion.

The game is very subtle and is created through a narrative that forms about the discussion itself, based on users’ choices.

This story about the discussion is collectively written and should add a subtle layer of “gamification” that follows the natural arcs of engagement users become involved in.

When an online dialogue is structured as a natural game, aiki wiki is more similar to chess, and less similar to games that require deception, like poker.

However, unlike both of those games, aiki wiki is a non-zero-sum game, turning win-or-lose arguments into win-win choices between users, which compose the shared narrative.

If someone attempts to alter the algorithm, they are going to find that it is much easier to game the discussion the way aiki wiki allows than to game the discussion by introducing deception into the stratagem of the composition.

We need an aiki wiki! (or at least something like it)

Barack Obama has now mentioned the necessity of something like aiki wiki for the poisoned media landscape. I believe aiki wiki is an idea whose time has come.

Bill Maher recently interviewed Barack Obama, and Obama talks about this necessity around the 15:00 mark.

What is the aiki atheneum?

The Atheneum is a collaborative library that contains all of the published resolutions reached in a consensus through aiki wiki. The library publishes all of the available context for each discussion found on the web into what we call “context cubbies”. Eventually, all contexts become explored through the consensus process, and the Atheneum collects all trusted sources on the web into one location.

How is that like Wikipedia?

It is not like Wikipedia. Wikipedia is an encyclopedia, while aiki atheneum is a library.

Wikipedia itself can be one component in the Atheneum, and Wikipedia editors could use aiki wiki to arrive at a stronger consensus and article on Wikipedia.

aiki wiki is not intended to be a competitive platform with other wikis or content management systems; it is not seeking to replace anything, just to improve everything.

Where is it?

You can visit us while we are still getting up and running, and apologies that it’s still in a bit of a sloppy state. The link might even be down right now; because of my other platform in development, I am not able to support aiki wiki for the time being.

So far, just the prototype for the Atheneum is coded, and I have about three months of coding to complete phase one of aiki wiki, which, my schedule and finances permitting, will be accomplished in 2018.

So why is it up now?

It was not my intention to release any information about this project yet, but because of online harassment on RationalWiki, it has become somewhat necessary.

aiki wiki has figured into the background of Wikipedia, We Have a Problem. Why?

Wikipedia, We Have a Problem grew from me just researching for aiki wiki.

I have a massive curiosity about online discussions, and especially “wikis” in general. In both wiki wars that I involved myself in, I adhered to ‘rules of engagement’ formulated in aiki wiki, and I wanted to see how that outcome would play out in a hostile environment, on a platform not suited for social interaction.

Additionally, my fascination with ‘wiki wars’ and my own wiki idealism formed the conclusion of my TEDx talk, “Google Consciousness”, where I noted that Israeli and Palestinian Wikipedia editors were able to build shared narratives, a feature of what I believe will be social media evolving to replace government as we use it today, alluding to something like an ‘aiki wiki’.

What was OS 0 1 2?

aiki wiki is somewhat derived from a very experimental, and very fun, viral media and thought project I co-created fifteen years ago, in 2002, called OS 0 1 2.

OS 0 1 2 began quite unintentionally in late 2002 as an online, contextual anti-war protest to “stop the war before it starts”, a campaign against pro-Iraq-war “trolls” operating persuasion campaigns on AOL messageboards over the looming invasion of Iraq, where they were promoting suspicious claims of Weapons of Mass Destruction.

OS was, of course, a play off of Operating Systems; at the time, Mac OS was on version 9, I believe. After reading the document, the user was informed they had just downloaded the “master meme”, an operating system for human beings.

It may have been one of the early artifacts of “meme” culture, as part of OS 0 1 2 was literally introducing tens, possibly hundreds of thousands of online users to the word “meme” itself, specifically a “master meme” to “stop the war before it starts”.

The trolls that were encountered were seduced into becoming part of the process, persuaded into a “challenge” on the OS 0 1 2 document, in much the same fashion as a stand-up comedian might treat a heckler.

It was an essay that was collaboratively written online in a very organic manner, literally by copying and pasting emails and forum texts into a web page manually and then getting other people to share it or collaborate spontaneously.

The essay itself was just a collection of rules for the presentation, construction, deconstruction, valuation, and analysis of the document itself, in many ways a homegrown, organic precursor to a “smart contract”, with a playful theatrical twist, a “joke” the reader would only get after understanding how OS 0 1 2 worked.

What’s more, once they “got” OS 0 1 2, they were able to make a contribution to the document relevant to whatever worldview they came from.

That’s what really expanded OS 0 1 2. As the “master meme” would receive more input from various worldviews, I became curious when each worldview could find a “meaning” to the document that they believed reflected their own.

I would hear different comparisons to different systems quite often, many of which I was totally unfamiliar with. It was as if OS 0 1 2 could easily absorb any worldview into it and expand that worldview in such a way as to “see what each other means” by each other’s views. Comparisons I remember include the Tractatus, the Bhagavad Gita, Buddhism, Taoism, Christianity for some, Ayn Rand’s “Objectivism” for others, the philosopher David Hume, and the Hegelian dialectic, just to name a few.

Following this, the intention was to attract as many random and unique viewpoints as possible and seduce them into refining and co-creating the document. Eventually, Atheists, Christians, Muslims, political activists, philosophy majors, even “conspiracy theorists” and extreme online groups such as Stormfront were finding elements of OS 0 1 2 they felt they could bring back to their communities as an “idea we all could agree on.”

OS 0 1 2 as a presentation was intentionally meant to be perplexing to the unsuspecting online user who would stumble upon it as a link in a discussion forum. While theatrical and tongue-in-cheek, it also provided clear instructions for the collaborative construction of a document whose participants were all able to come to a rational resolution by simply discussing the document itself.

At the time, I participated with others in the creation of an online character called “Bubblefish”, the “Flame Warrior”, performing theater and confronting pro-war “trolls” on many forums, “tricking” them into having a rational discussion about the curation of the OS 0 1 2 document.

My engagement with highly aggravated online users who were bullying other users was especially where the document became very popular, as my attempts to create a “win-win” resolution with abusive individuals took on a theater and life of its own while engineering a thoughtful discussion amongst users who were ideologically opposed.

At the time, the Bubblefish Show and the OS 0 1 2 document appeared to me to be a somewhat hard-to-define new form of performance theater that was using BBS and AOL messageboards as the stage.

It also represented a hard-to-define method of arriving at a group consensus. At the time, I was deeply involved in screenwriting and project development for film and television, and I believed I was on the cusp of a new form of media.

Honestly, it was probably one of the funniest, most exhilarating experiences of my life, especially during a time when I was doing heaps of creative and professional writing. It took me a number of years to fully comprehend how unique, ahead of its time, and special the world’s first “collaborative operating system for the human being” was.

What’s more, it seemed to attract a mutual appeal from many different viewpoints; many worldviews and different philosophies could all see OS 0 1 2 touching upon something shared from their own unique perspective. And that was the point, because in many ways hundreds of different viewpoints from all over the world helped to co-create and refine the document.

I placed OS 0 1 2 in the public domain, allowing anyone to copy the meme, rewrite it, or republish it. Many anonymous users did; one internet user even published it in a book, while others created fresh new adaptations. Even as recently as 2018, the master meme keeps spreading, long after I’ve had anything to do with it, even garnering a review:

A decent expression of addressing divisive thought from the perspective of hyper-rationalism. Written more as a poetic manifesto than any traditional style. Draws from principles found in Ayn Rand, NLP, and maybe even bits of the green meme in Spiral Dynamics.

The official homepage for the OS has been offline for many years, but the last ‘updated’ OS is available here.


