Considering the mass shooting in Buffalo, is it time to impose new rules on livestreaming?

Credit: Caspar Camille Rubin/Unsplash.

An 18-year-old white man, armed with an assault weapon inscribed with an anti-Black slur, is accused of entering a grocery store in Buffalo, New York, over the weekend and killing 10 people.

Three others were wounded; altogether, 11 of those shot were Black.

The shooter, identified as Payton Gendron, livestreamed the attack over the platform Twitch, making it one of several mass killings in recent years to have been broadcast online in real time.

In messages sent over 4chan and the messaging platform Discord prior to the attack, Gendron admitted that the opportunity to livestream the act gave him “motivation” to carry it out, according to The New York Times.

“Livestreaming this attack gives me some motivation in the way that I know that some people will be cheering for me,” the shooter said, according to multiple sources.

Twitch removed the video within minutes, but not before it was reposted elsewhere, garnering millions of views and effectively memorializing the attack online for years to come.

Photos and clips of Gendron’s stream resurfaced on various social media platforms hours after the shooting.

As it stands, social media and livestreaming platforms operate under a shared content moderation framework: they are not held responsible for user content or behavior, says John Wihbey, associate professor of media innovation and technology at Northeastern.

But as more perpetrators are inspired to commit acts of violence by the ability to publicize them in real time, it might be time to impose a new set of rules on the online platforms, Wihbey says.

“I think we may be approaching the moment where we have to make online platforms at least civilly, if not criminally, liable for their livestreaming technology,” Wihbey says.

Wihbey says that, under a new liability-regulatory framework, the ability of victims’ families to sue platforms like Twitch could put pressure on the online companies to “implement more procedures, safety, security and validation checks.”

“The experiment has sort of run its course at this point,” Wihbey says of the status quo, which is marked by self-regulation and user-assumed liability.

“We’ve known what many of the positive aspects of these platforms are: the ability to publicize injustice, to publicize legal protest and give voice to people who wouldn’t have broadcast power. But I think these companies might need to start investing more of their billions into new forms of trust and safety.”

Researchers at Northeastern have been tracking a trend in mass shooting events: the use of guns in hate crimes.

Historically, fewer than 1% of hate crimes have been committed with guns, but perpetrators are increasingly turning to firearms, finding inspiration in past attacks and in the prospect of achieving notoriety online, says Jack McDevitt, director of the Institute on Race and Justice at Northeastern.

McDevitt says that platforms like Twitch may provide perpetrators with the means to carry out a “movie-like” massacre.

That, combined with the ease of access to white supremacist communities online, may be contributing to a rise in hate-motivated shootings.

“It used to be hard for someone to find like-minded people who share their biases,” McDevitt says.

“Now, you’re four or five clicks and you’re in a chatroom … full of people who share your biases, or worse—and who seem, also, to be in a mode of encouraging violence.”

Written by Tanner Stening.