Amongst Twitter’s various announcements in its Analyst Day presentation today, including subscription tools and on-platform communities, it also outlined its work on a new anti-troll feature, which it’s calling ‘Safety Mode’.

The new process would alert users when their tweets are getting negative attention. Tap through on that notification and you’ll be taken to the ‘Safety Mode’ control panel, where you can choose to activate ‘auto-block and mute’, which will, as it sounds, automatically stop any accounts sending abusive or rude replies from engaging with you for one week.

Who is going to benefit?

A feature like this would seem highly appropriate for our President Uhuru Kenyatta, who took a break from Twitter allegedly because of the trolls.

Users will also be able to review the accounts and replies Twitter’s system has identified as potentially harmful, and block them as they see fit. So if your on-platform connections have a habit of mocking your comments, and Twitter’s system incorrectly tags them as abusive, you won’t have to block them, unless you choose to keep Safety Mode active.

Where will it be tested?

It could be a good option, though a lot depends on how accurate Twitter’s automated detection process is. Twitter would be looking to use the same system it’s testing for its new prompts (on iOS) that alert users to potentially offensive language in their tweets.

If Twitter can reliably detect abuse, and stop people from ever having to see it, that could be a good thing, and it could also disincentivize trolls who make such remarks in order to provoke a response. If the risk is that their clever replies get automatically blocked and, as Twitter notes, are seen by fewer people as a result, that could make people more cautious about what they say. Some will see that as an intrusion on free speech and a violation of some amendment of some kind. But it’s really not.