
Twitch implements Shield Mode to improve hate raid protection

"Safety work is never over."

Image credit: Twitch

Twitch has implemented a new Shield Mode feature, allowing streamers to pre-set safety settings.

The feature offers customisable controls that can be activated in one click, including channel modes (such as follower-only or sub-only chat), chat verification options, and AutoMod levels.

This is a much-requested feature, especially in the wake of last summer's hate raids against streamers from minority communities - attacks which remain ongoing.

Shield Mode can be accessed by both streamers and their mods, and applies the pre-set options in one go - either via chat commands or a shortcut.

This allows streamers to utilise different levels of safety options depending on the situation.

Shield Mode also introduces two new safety options: bulk ban and no first-time chatters.

The former allows streamers to input specific terms or phrases while Shield Mode is active, banning any users who post those words in chat.

The latter prevents anyone from chatting if they are new to a channel.

Both of these features directly target hate raids, in which a streamer's chat is raided by bots spouting abusive language.

"Harassment and hateful behavior can come in waves, such as through a targeted attack, and we hope this tool will make it easier to instantly shut down a hate raid if that ever happens to you," said Twitch in a new blog post.

"But there are many other ways to use Shield Mode - whether you're planning a stream on a sensitive subject, participating in a campaign that will put you in the spotlight, or being featured on the front page.

"Shield Mode can also help you thrive and safely build your community in the long term by letting you keep safety settings more relaxed when things are calm.

"Safety work is never over, and we're working on more tools that make it easier to moderate your channel. As we move forward, we're focusing more on features, like this one, that are customisable and can be easily ramped up or down to reflect your needs in the moment. We're also working aggressively to stop more harm before it ever occurs."
