
Defining, Identifying and Preventing Cyberbullying Behaviors: What Has Been Done and the Path Forward

Tackling cyberbullying is at the very heart of my professional activities. As Director of Community Trust and Safety at Two Hat, our vision of a world free of online bullying, harassment, and child exploitation informs my work, and our belief in a shared responsibility between industry, academia, and government inspires me to collaborate with global partners to fulfill that vision. And as a co-founder and steering committee member of the Fair Play Alliance, an organization that envisions a world where games are free of harassment, discrimination, and abuse, and where players can express themselves through play, one of my key priorities is collaborating with others to develop and share best practices for encouraging healthy communities.

Through this work, I have identified three stages at which online platforms can tackle cyberbullying: defining, identifying, and actioning (which includes preventing) the behavior. Defining means being clear about which behaviors are not aligned with the platform, articulating them in a solid code of conduct or community guidelines, and outlining them with concrete examples and consequences for breaking those agreements. Identifying means using best practices and technology to classify content so platforms know when cyberbullying is happening. Actioning is what you do once you have identified it, from warning messages and nudges aimed at preventing or modifying behavior in the moment, to blocking high-risk content when necessary.
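To make the three stages concrete, here is a minimal sketch of how a platform might wire them together. The category names, severity thresholds, and keyword heuristic are illustrative assumptions for this post, not Two Hat's actual classifier or policy.

from dataclasses import dataclass
from enum import Enum


class Action(Enum):
    ALLOW = "allow"
    NUDGE = "nudge"    # warning message or reminder shown to the user
    BLOCK = "block"    # high-risk content is held back entirely


@dataclass
class Classification:
    category: str      # e.g. "cyberbullying" or "none" (illustrative labels)
    severity: float    # 0.0 (benign) to 1.0 (high risk)


def classify(message: str) -> Classification:
    """Identify: stand-in for a real content classifier."""
    insults = ("loser", "idiot", "nobody likes you")   # toy keyword list
    hits = sum(1 for term in insults if term in message.lower())
    return Classification("cyberbullying" if hits else "none", min(1.0, hits * 0.5))


def decide(result: Classification) -> Action:
    """Action: map the identified risk to a response spelled out in the guidelines."""
    if result.severity >= 0.8:
        return Action.BLOCK
    if result.severity >= 0.4:
        return Action.NUDGE
    return Action.ALLOW


print(decide(classify("nobody likes you, loser")))  # Action.BLOCK

In a real system, the classify step would be a trained model informed by the definitions discussed below, and the decide step would map directly onto the consequences laid out in the community guidelines.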

In this blog, I’ll focus on the identification and prevention (under actioning) aspects of cyberbullying.

In order to understand harmful online behaviors, it's imperative that we become aware of the nuances involved. What behaviors are we referring to when we talk about cyberbullying? Are we aligned in our definitions? Perhaps we mean cyberstalking, cyber harassment, or denigration. That raises the question: do we understand all the variables, and are we evaluating the same things? It's critical for online platforms to be clear about those behaviors and to ensure we are working from baseline definitions. Otherwise, how can we know that we're identifying, actioning, and even preventing cyberbullying on our platforms?

Defining online harms is not a simple task, and certainly not one that can be informed by only a few perspectives. In early October, Two Hat hosted a Content Moderation Symposium in London, UK. Experts from academia, government, non-profits, and industry came together to talk about the biggest content moderation challenges of our time, including tackling complex issues like defining cyberbullying and child exploitation behaviors in online communities.

We're working closely with academia and industry to define cyberbullying and its subcategories, including flaming/online fights, denigration, masquerading/impersonation, exclusion/boycott, and more. These definitions may also help us understand the user motivations behind the behavior and look for its root causes. For example, when we know a user is engaging in cyber harassment by displaying insulting and ridiculing behavior, we can consider what might be behind it: is the platform providing good tools for user communication, interaction, and conflict resolution? Are the game environment and team dynamics themselves creating friction between players? We can also take the opportunity to nudge users in a more productive and positive direction.

The private neighborhood social network Nextdoor recently announced their new Kindness Reminder, meant to encourage positivity across the platform. Their approach is simple, elegant, and proactive: “If a member replies to a neighbor’s post with a potentially offensive or hurtful comment, Kindness Reminder is prompted before the comment goes live. The member is then given the chance to reference Nextdoor’s Community Guidelines, reconsider and edit their reply, or ultimately refrain from posting”.

The results? 20% fewer negative comments in Nextdoor's early US tests; one in five people changed their minds before posting. This points to another important factor: we can't assume users know what is expected of them on online platforms. Companies need to take appropriate measures to nudge users in the right direction, remind them of community guidelines, and be specific about which behaviors are and aren't aligned with that particular community. They also need to be aware that product design choices can inadvertently encourage negative behaviors when sound safety-by-design practices are not considered at feature inception.
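Mechanically, a nudge like the Kindness Reminder boils down to a check that runs before a comment is published. The sketch below is an assumption-laden illustration (Nextdoor has not published its implementation): looks_hurtful stands in for a real classifier, and confirm stands in for the reminder dialog.

def looks_hurtful(comment: str) -> bool:
    """Stand-in for a real offensive-content classifier."""
    return any(term in comment.lower() for term in ("pathetic", "shut up", "nobody cares"))


def submit_comment(comment: str, confirm) -> str | None:
    """Publish the comment, or pause and let the member reconsider first.

    `confirm` represents the reminder UI: it shows the Community Guidelines
    and returns the member's (possibly edited) final text, or None if they
    decide not to post after all.
    """
    if looks_hurtful(comment):
        comment = confirm(comment)   # member can edit, keep, or withdraw it
    return comment                   # None means nothing is published


# Example: the member softens the comment when prompted.
print(submit_comment("That idea is pathetic", lambda c: "I don't think that idea will work"))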

This Reddit study found that pinning a reminder of community expectations, compared to discussions without one, "increased newcomer rule compliance by >8 percentage points and increased the participation rate of newcomers in discussions by 70% on average." The researchers concluded that "making community norms visible prevented unruly and harassing conversations by influencing how people behaved within the conversation and also by influencing who chose to join."

This experiment run by Twitch (starting at 18:28) showed that displaying the channel's chat rules and having users agree to them before chatting produced good results compared with the other half of users, who never saw that flow. The experiment found "no significant impact on chat participation, and a statistically significant reduction in timeouts and bans for the ‘click to agree’ variant!"
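The mechanics of the Twitch experiment are similarly simple: gate a user's first chat message in a channel behind an explicit acknowledgement of that channel's rules. The sketch below is a hypothetical illustration of that gate, not Twitch's actual code.

acknowledged: set[tuple[str, str]] = set()   # (user_id, channel_id) pairs that accepted the rules


def agree_to_rules(user_id: str, channel_id: str) -> None:
    """Record that the user accepted this channel's chat rules."""
    acknowledged.add((user_id, channel_id))


def send_message(user_id: str, channel_id: str, text: str) -> str:
    """Deliver the message only after the rules have been acknowledged."""
    if (user_id, channel_id) not in acknowledged:
        return "SHOW_RULES"   # client displays the rules with an 'I agree' button
    return "DELIVERED"        # message goes to chat as usual


print(send_message("viewer42", "channel1", "hello"))   # SHOW_RULES
agree_to_rules("viewer42", "channel1")
print(send_message("viewer42", "channel1", "hello"))   # DELIVERED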

Ultimately, as the Internet matures and we, as a connected society, come to expect consequences for disruptive and detrimental online behaviors just as we do offline, users will ask for better protection and safeguarding online, and governments will propose legislation, as the UK Government is already doing. The stakes are high, and industry can no longer afford to ignore the root causes of those behaviors (often created by the very way products and online experiences are designed) or to leave paths for identification and prevention unexplored. Best practices, like the examples cited in this blog, are available and immediately actionable.

I'm encouraged by the evidence-based approach and work done by the International Bullying Prevention Association, and I believe it can play a key role in providing the evaluations and evidence that will further encourage improvements to legislative and technological approaches aimed at better protecting users from cyberbullying. I'm looking forward to sharing my perspectives and hearing from my fellow panelists and the audience on November 7th, in the Keynote Gaming Panel at the IBPA Conference in Chicago. I invite you to join the conversation there or to reach out via Twitter.
