Monday, September 1, 2014

Twitter caught in censorship dilemma over graphic images

Twitter Inc decided last year to make images more prominent on its site. Now, the social network is finding itself caught between being an open forum and patrolling for inappropriate content. The pattern goes like this: graphic images of a high-profile death spread across Twitter, users express outrage, and the company is forced to decide what to remove.

Two recent incidents illustrate the difficulty of the choice. While Twitter is taking pains to remove images of the death of James Foley, the journalist who was beheaded by Islamic militants, some photos of the body of Michael Brown, the teenager who was killed by police in Ferguson, Missouri, remain on users’ streams. To many on Twitter, images of violence against Foley can be seen as spreading a terrorist’s message, while publicising Brown’s death shines a light on a perceived injustice.

“They’re letting the masses decide what should be up and what should not be up,” says Ken Light, a professor of photojournalism at the University of California, Berkeley. “When it’s discovered, it needs to be dealt with promptly. The beheading video should never go viral.”

The dilemma faced by Twitter, a proponent of free speech and distributor of real-time information, is not much different from that of a newspaper or broadcaster, according to Bruce Shapiro, executive director of the Dart Center for Journalism & Trauma at Columbia Journalism School.

“Twitter’s situation is exactly like that of a news organisation,” Shapiro says. “Freedom of the press and freedom of expression doesn’t mean that you should publish every video no matter how brutal and violent.”

The incidents also happened just after Robin Williams’ daughter Zelda said she was quitting Twitter after receiving abusive messages following his death.

“In order to respect the wishes of loved ones, Twitter will remove imagery of deceased individuals in certain circumstances,” the San Francisco-based company said in a policy that was enacted two weeks ago. “When reviewing such media removal requests, Twitter considers public interest factors such as the newsworthiness of the content and may not be able to honour every request.”

Twitter’s software is not designed to automatically filter all inappropriate content. The company’s Trust and Safety team works in all time zones to stamp out issues once they are discovered, according to Nu Wexler, a spokesman for the company. Twitter uses image-analysis technology to track and report child exploitation images, Wexler says.
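Image-matching systems of the kind Wexler describes typically work by computing a compact “perceptual hash” of an uploaded picture and comparing it against a list of hashes of known prohibited images, so that resized or slightly altered copies still match. The following Python sketch is only an illustration of that general idea using a simple average hash; the hash function, the distance threshold and the blocklist are assumptions for demonstration, not Twitter’s actual technology.

```python
from PIL import Image  # requires the Pillow library


def average_hash(path, size=8):
    """Compute a 64-bit perceptual hash: shrink to an 8x8 greyscale image,
    then set one bit for each pixel brighter than the mean."""
    img = Image.open(path).convert("L").resize((size, size))
    pixels = list(img.getdata())
    mean = sum(pixels) / len(pixels)
    bits = 0
    for i, p in enumerate(pixels):
        if p > mean:
            bits |= 1 << i
    return bits


def hamming_distance(a, b):
    """Count the bits that differ between two hashes."""
    return bin(a ^ b).count("1")


# Hypothetical blocklist of hashes of known prohibited images.
BLOCKED_HASHES = {0x8F3C1A52E07B44D9}


def is_probably_blocked(path, threshold=5):
    """Flag an upload whose hash is within `threshold` bits of any blocked hash."""
    h = average_hash(path)
    return any(hamming_distance(h, bad) <= threshold for bad in BLOCKED_HASHES)


if __name__ == "__main__":
    print(is_probably_blocked("upload.jpg"))
```

Near-duplicate matching of this sort only catches copies of images already on a blocklist; it cannot judge newly posted content, which is why human review by teams such as Trust and Safety remains central to the decisions the article describes.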

Twitter does not specifically prohibit violent or graphic content on its site — only “direct, specific threats of violence” and “obscene or pornographic images”, according to its terms of service. It may need to go further, if Facebook Inc’s experience is any guide.

In October, around the time Twitter started displaying images automatically in people’s timelines, Facebook was dealing with an uproar over a separate beheading video that was spreading around its site. The company resisted taking it down until user complaints intensified, including from UK Prime Minister David Cameron. Then, Facebook changed its policies.

“When we review content that is reported to us, we will take a more holistic look at the context surrounding a violent image or video,” the Menlo Park, California-based company said at the time. Facebook said it would “remove content that celebrates violence”.

Now that Twitter is encouraging images and video, it will also need to take another look at its rules, says Columbia’s Shapiro. “I don’t think a blanket rule is the point,” Shapiro says. “You do need a company policy that recognises that violent images can have an impact on viewers, can have an impact on those connected to the images, and can have an impact on the staff that have to screen this stuff. You can’t ignore Twitter’s role in spreading these images.”
