According to a recent Pew study, Crossing the Line: What Counts as Online Harassment?, there is no consensus on what actually defines “online harassment,” especially among internet users, which makes building tools to address it difficult. What’s more, even when users agree that harassment is occurring, they still disagree on where it begins.
Clearly, we need a consensus on what online harassment is if we are to develop tools and new regulations to protect each other from it.
Where does free expression cross the line into a violation of common decency, or in some cases the law, and how can we programmatically determine when online harassment occurs?
Harassment may be easiest to define as non-resolving communication or actions intended to target an individual or group of individuals.
Harassment is psychological.
While users may not agree on what it is, anyone who has been harassed online knows what it “feels” like when someone is targeting them directly; no definition is required. In many cases online, that form of harassment can be challenging, if not impossible, to communicate and explain to others.
Harassment is intimate – it requires the adoption of a relationship with another person.
The relationship sought out by the attacker is a toxic one, with a winner and a loser, and, left to continue, it will evolve into an abusive relationship.
Of course, harassment comes in many different forms and interpretations, but one consistent pattern has emerged (so far at least, over the years of this case study): the non-resolving identifier.
Harassment is the continuation of non-resolving communication.
Non-resolving communication emerges from users with no intention of resolving an online relationship, conflict, negative impression, opinion, etc. Most non-resolving communication is hostile even when the words used are taken at face value.
If a blogger receives a comment “FU!” once, that is a non-resolving communication; if it recurs, it is harassment. Harassment can begin very mild and escalate to the extreme through the continuation of non-resolving communication. Many online users, simply having a bad day, can leave a toxic comment on the internet, and many of these same users quickly collect themselves afterward. So while a singular toxic comment might “flag” potential harassment, it alone would not qualify as direct harassment.
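The flag-then-escalate rule above can be sketched programmatically. This is a minimal illustration, not any platform's actual implementation; the class name, the threshold of two occurrences, and the idea that non-resolving status is supplied by a caller are all assumptions made here for clarity.

```python
from collections import defaultdict


class HarassmentTracker:
    """Sketch: count non-resolving communications per (sender, target) pair.

    Assumption: one toxic message only flags; a repeat escalates to harassment.
    """

    def __init__(self, repeat_threshold=2):
        self.counts = defaultdict(int)          # (sender, target) -> count
        self.repeat_threshold = repeat_threshold

    def record(self, sender, target, is_non_resolving):
        """Record one message; return 'ok', 'flagged', or 'harassment'."""
        if not is_non_resolving:
            return "ok"
        self.counts[(sender, target)] += 1
        if self.counts[(sender, target)] >= self.repeat_threshold:
            return "harassment"   # continuation of non-resolving communication
        return "flagged"          # a single toxic comment: flag, don't conclude


tracker = HarassmentTracker()
first = tracker.record("commenter", "blogger", True)    # a single "FU!"
second = tracker.record("commenter", "blogger", True)   # the repeat
```

Note the deliberate asymmetry: the first toxic message from a given sender to a given target only raises a flag, mirroring the "bad day" allowance described above, while continuation is what triggers the harassment determination.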
Yet language alone cannot be trusted to identify this behavior online, because ultimately it is the intention of the users in the relationship, not the words themselves, that determines whether harassment is occurring.
A user's intention is often completely hidden from context on any online platform.
How can we detect someone’s intention?
Some online harassment can be so subtle that only the parties intimately involved in the exchange truly understand what the text or set of images means, while that same text or image, viewed by outside parties, may seem innocuous.
This, I imagine, makes it difficult if not impossible for machine learning or AI to detect some forms of online harassment, as what the text means depends solely on the interpretations of the target and the attacker.
To understand and define non-resolving communication, we can start by more easily defining and understanding “resolving communication.”
A “resolving” communication leaves both parties satisfied or at least content with the exchange. In a resolving communication, if a mistake is made, or a misunderstanding emerges, the resolving communication will find a resting point to the satisfaction of both parties.
If parties are given a choice to enter into some form of social contract that offers resolving communication, then by observing whether a user accepts that social contract, and more importantly whether they don't, we can easily define, spot, correct, and most importantly resolve harassing behaviors online.
This is one way the emerging aiki.wiki platform will be able to programmatically identify true harassment: independent of any one user's definition or context, yet applicable to all of them, while giving all users online tools to resolve and handle online harassment.
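The social-contract signal described above can be sketched as a simple decision rule. To be clear, aiki.wiki's actual mechanism is not specified here; the function and its categories are hypothetical, chosen only to show how acceptance or refusal of a resolution contract yields a language-independent signal.

```python
def classify_exchange(accepted_contract: bool, kept_communicating: bool) -> str:
    """Sketch: classify an exchange by contract choice, not by its words.

    Hypothetical rule: refusing a resolution contract while continuing to
    communicate marks the exchange as harassment, regardless of language.
    """
    if accepted_contract:
        return "resolving"    # both parties agree to seek a resting point
    if kept_communicating:
        return "harassment"   # non-resolving by the sender's own choice
    return "disengaged"       # declined, but also stopped: no harassment
```

The point of the sketch is that no text analysis appears anywhere in it: the determination rests entirely on a user's observable choice about the social contract, which sidesteps the intention-detection problem raised earlier.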
Harassment has many different expressions and comes in many forms online, and different platforms define online harassment differently.
In the case study Wikipedia, We Have a Problem, I focus on my own personal harassment, which began on Wikipedia. Wikipedia has its own guidelines about what is considered harassment while editing there. Much of this study engages with that very nuanced definition of harassment and then restates it more succinctly as editor suppression, though Wikipedia itself is the author of those boundaries.
Fortunately for this study, my harassers on Wikipedia then carried the harassment off-platform, across many other sites on the web, some more extreme than others, some with allowances for this type of behavior, and others with strict policies.
This has given me a rare opportunity to experience and identify a host of behaviors, all of which carry a continuing and distinct marker: non-resolving communication.
From this study and the emerging aiki.wiki platform, a simple framework for social contracts will eventually emerge to truly neutralize online harassment.