Facebook Targets Hate Group and Vaccine Misinformation

Prime Minister Jacinda Ardern hugs a mosque-goer at the Kilbirnie mosque in Wellington, New Zealand, on March 17, 2019

In the wake of the New Zealand mosque shootings in which 50 worshippers were killed by a white nationalist gunman, Facebook has said it will ban “praise, support and representation” of white nationalism and separatism on the platform and its sister app Instagram.

The world’s largest social media company said in a blog post following the shootings that it had made the decision after three months of conversations with academics and civil society groups, adding that it would also redirect people who search for terms associated with white supremacy to resources “focused on helping people leave behind hate groups”.

The more likely impetus for the policy change, however, was the sharp criticism leveled at Facebook, Twitter and YouTube after a gunman with white nationalist views live-streamed graphic footage of the Christchurch mosque shootings, together with the threat of criminal penalties raised by Australian lawmakers. Even after the original had been taken down from Facebook, copies of the video were uploaded across multiple social media platforms, creating alarm and backlash among viewers.

Jacinda Ardern, New Zealand’s prime minister, called for the platforms to take more responsibility for the extremist material shared on their sites following the attack. She welcomed Facebook’s decision on Thursday but said it did not go far enough, and called for a coordinated global response to the use of social media platforms by violent extremists. “I think when you look at the community standards of Facebook, which of course already state they do not allow hate speech, you would probably fairly assume that some of what they have announced may well have already been included,” she told reporters. “I think there is more work to do.”

Australia has threatened to impose tough criminal penalties, including prison sentences for executives, on social media companies that fail to ensure their products are safe and to prevent the live-streaming of terror attacks. Scott Morrison, Australia’s prime minister, said this week that his government is drafting laws aimed at preventing social media platforms from being “weaponized” with terror content.

In its defense, Facebook said that within the first 24 hours of the New Zealand mosque attacks it had taken down more than 1.2 million copies of the live-streamed video as they were being uploaded, and another 300,000 copies after they had been posted. The livestream itself was viewed 4,000 times before it was removed from Facebook. The company has been ramping up its efforts to tackle hate speech, particularly after it came under intense criticism last year for failing to prevent inflammatory online attacks against minority Muslims in Myanmar.

Facebook Makes Misinformation about Vaccines Harder to Find

Activists opposed to vaccinations demonstrated outside a Senate hearing about vaccine safety earlier this month.

Following in the footsteps of Pinterest and YouTube, Facebook will also combat misinformation about vaccines. Groups and pages that spread vaccine misinformation will receive lower rankings and will be excluded from recommendations and search predictions within Facebook, according to the company. Instagram, also owned by Facebook, will adopt similar policies.

Again, Facebook is the follower. The new policy comes after measles outbreaks in the U.S. and abroad. Facebook said its AI system will search for vaccine misinformation, such as the false claim that vaccines cause autism, and rank it lower in users’ news feeds.

The World Health Organization (WHO) lists “vaccine hesitancy” among the ten most notable threats to global health.

Last year Pinterest blocked certain vaccine-related searches, and YouTube prohibits anti-vaccine videos and surfaces more authoritative content when users search for the topic. YouTube also employs fact-check panels.

The good news is that these widely used and influential social platforms and conveyors of information are stepping up to significant social challenges such as separatist hate groups and vaccine misinformation. The bad news is that they often fail to do so, or act only in the wake of tragedy, disease outbreaks, or the threat of regulation or criminal prosecution.

One of the marks of a great or resilient company is its ability to self-monitor its flaws, failings and emerging issues. By that standard, Facebook, Pinterest, Instagram, and YouTube are not great organizations. They are hugely successful behemoths with as much capacity to spread dissent and violence as to promote global connectivity and uplifting initiatives. That should be a lesson for anyone using these platforms.

References:

https://www.nytimes.com/2019/03/07/technology/facebook-anti-vaccine-misinformation.html

Liam Stack, “Facebook Announces New Policy to Ban White Nationalist Content,” The New York Times, March 27, 2019.
