Twitter has a notorious problem with online abuse.

The social network is one of the biggest platforms for trolls, with many people hiding behind anonymous handles to spew hate at people over their looks, their views or even their lifestyles.

Recently, there was a high-profile case of Twitter abuse against Ghostbusters actress Leslie Jones, and calls for Twitter to crack down on this abuse intensified.

Investors have even walked away from potential deals to buy the company because of the problem that Twitter faces with online trolls.

Now, the company is making user safety a focus of its reinvigoration plan. Specifically, the company has said that it wants to “ensure users feel safe to express themselves.”

In August, Twitter added the Quality Filter tool, which automatically detects and filters out questionable tweets based on the words they use, such as words indicating threats or expressing abusive or offensive thoughts.
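Twitter has not published how the Quality Filter works under the hood, but a toy word-list filter gives a feel for the general approach. The sketch below is purely illustrative; the blocklist terms and function names are invented, not Twitter's:

```python
# Purely illustrative: a toy word-list filter, NOT Twitter's actual Quality
# Filter, whose internals are not public. The blocklist terms are placeholders.
BLOCKED_TERMS = {"threatword", "slurexample"}

def is_questionable(tweet_text: str) -> bool:
    """Flag a tweet if any word in it appears on the blocklist."""
    return any(word in BLOCKED_TERMS for word in tweet_text.lower().split())

def filter_timeline(tweets):
    """Keep only the tweets that pass the filter."""
    return [t for t in tweets if not is_questionable(t)]

print(filter_timeline(["nice post!", "you threatword"]))  # -> ['nice post!']
```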

But there is breaking news: Twitter has added a new feature to combat online abuse, giving users hope for even more protection. Here’s what we know about the new tool:

Expanded Mute Feature

A feature known simply as “mute” has been expanded and now allows users to block words, usernames, hashtags and even emojis that they don’t want to see.

Any tweets from a specified user will be blocked, as will any that contain the words, hashtags or emojis specified.

In the past, the mute feature could only be used to block specific accounts that you did not want to see. With the update, Twitter has provided additional mute options and has made it so that users also do not see this content in their notifications.

To mute words, you must go to your notification settings and choose the “muted words” option. You can then enter the words or phrases you want to mute.

To mute a conversation, you must click on the drop-down arrow next to the conversation and click on the option “mute this conversation.” Twitter will stop showing you notifications about that conversation, but it will not remove the conversation from your timeline, nor will it block users. You will have to delete items or block users for that to happen.
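If you manage an account programmatically, user-level muting is also exposed through Twitter’s REST API (keyword and conversation muting, as far as the public API documentation goes, are only available through the app and web settings). Here is a minimal sketch using the tweepy library; the credentials and screen name are placeholders you would substitute with your own:

```python
import tweepy

# Placeholder credentials; substitute the keys from your own Twitter app.
auth = tweepy.OAuthHandler("CONSUMER_KEY", "CONSUMER_SECRET")
auth.set_access_token("ACCESS_TOKEN", "ACCESS_TOKEN_SECRET")
api = tweepy.API(auth)

# Mute a user: their tweets stop appearing in your timeline, but they are
# not blocked and are not notified.
api.create_mute(screen_name="example_troll")
```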

Additional Abuse Control

While the expanded mute feature gives users more control over what they see on Twitter, it doesn’t block the abuse.

Those comments still exist on Twitter – the user just doesn’t see them.

In an effort to crack down on the abuse and stop it from existing on the site at all, Twitter has also introduced a “hateful conduct” option for reporting.

The reporting options have long included “it’s disrespectful or offensive,” “includes targeted harassment,” and “threatening violence or physical harm.” Now, the options include “it directs hate against a race, religion, gender, or orientation.”

Twitter is backing up this reporting feature with training for its employees to better recognize hateful conduct. The company is providing training on the cultural and historical context of hateful conduct, and it is requiring that employees take routine refreshers on this training. The refresher courses will not only ensure that employees retain the information but also that they learn about new terms and issues as they evolve.

Additional Measures Needed

Many have applauded the measures that Twitter has undertaken to combat abuse, but critics say that still more needs to be done.

Specifically, many would like to see Twitter require verified identities for all users, as Facebook already does. Right now, users can sign up with any email address and use any handle and screen name they like. They can easily create multiple accounts, and they can hide behind whatever name they choose.

There is no way to know who the person behind the user name is, unless it’s a celebrity or public figure who has taken steps to become verified. People take advantage of this anonymity to abuse others on the platform.

Some hope that advances in technology can also help to reduce the problem. Artificial intelligence software is getting better at interpreting content and making judgment calls about it. It is possible that AI will eventually become advanced enough to detect all abusive language and delete it before anyone ever gets the chance to see it.
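To make that idea concrete, here is a minimal sketch of the kind of text classification such a system could be built on, using scikit-learn. The training examples and labels below are invented and far too few for real use; production abuse detection is vastly more involved:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny hypothetical training set; a real system would need vastly more
# (and more carefully labeled) data.
texts = [
    "I disagree with your take on this",      # benign
    "great thread, thanks for sharing",       # benign
    "you are worthless and should log off",   # abusive
    "nobody wants you here, go away",         # abusive
]
labels = [0, 0, 1, 1]  # 0 = benign, 1 = abusive

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(texts, labels)

# Score a new tweet; a platform might hide or queue tweets above a threshold.
print(model.predict_proba(["go away, nobody wants you"])[0][1])
```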

As a brand, you can do your part to combat abuse on the platform by blocking users who engage in this kind of commentary from your feed and by refusing to respond publicly to users who are spoiling for a fight. Reply to those people via direct message to resolve any issues.
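The blocking step can be automated as well. If your brand runs its account through the API, a minimal tweepy sketch (again with placeholder credentials and a hypothetical screen name) looks like this:

```python
import tweepy

# Placeholder credentials; substitute the keys from your own Twitter app.
auth = tweepy.OAuthHandler("CONSUMER_KEY", "CONSUMER_SECRET")
auth.set_access_token("ACCESS_TOKEN", "ACCESS_TOKEN_SECRET")
api = tweepy.API(auth)

# Block a user: their replies stop showing up in your notifications, and
# they can no longer follow the account.
api.create_block(screen_name="example_troll")
```

Blocking through the API behaves the same as blocking in the app, so apply it with the same care you would use manually.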

You can also model respectful behavior by responding to criticisms and complaints in a professional and customer-focused manner. You can stand up for diversity and inclusivity in the choices you make with your shares and the language you use in your own posts. Your actions can contribute to a larger atmosphere of tolerance and acceptance.