
Twitter claims more progress on squeezing terrorist content


Twitter has put out its latest Transparency Report providing an update on how many terrorist accounts it has suspended on its platform — with a cumulative 1.2 million+ suspensions since August 2015.

During the reporting period of July 1, 2017 through December 31, 2017 — for this, Twitter’s 12th Transparency Report — the company says a total of 274,460 accounts were permanently suspended for violations related to the promotion of terrorism.

“This is down 8.4% from the volume shared in the previous reporting period and is the second consecutive reporting period in which we’ve seen a drop in the number of accounts being suspended for this reason,” it writes. “We continue to see the positive, significant impact of years of hard work making our site an undesirable place for those seeking to promote terrorism, resulting in this type of activity increasingly shifting away from Twitter.”

Six months ago the company claimed big wins in squashing terrorist activity on its platform — attributing drops in reports of pro-terrorism accounts then to the success of in-house tech tools in driving terrorist activity off its platform (and perhaps inevitably rerouting it towards alternative platforms — Telegram being chief among them, according to experts on online extremism).

At that time Twitter reported that a total of 299,649 pro-terrorism accounts had been suspended, which it said was a 20% drop on the figures reported for July through December 2016.

So the size of the drops is also shrinking, though Twitter suggests that's because it is winning the battle to discourage terrorists from trying in the first place.
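
As a quick, non-authoritative sanity check, here is a minimal Python sketch that reproduces the period-on-period arithmetic from the figures quoted above. The implied July through December 2016 baseline is my own back-calculation, assuming the 20% figure is exact:

```python
# All inputs are the suspension counts quoted in this article.
suspensions = {
    "Jul-Dec 2017": 274_460,  # latest reporting period
    "Jan-Jun 2017": 299_649,  # prior period, per the earlier report
}

# Period-on-period drop: matches the 8.4% Twitter cites.
drop = 1 - suspensions["Jul-Dec 2017"] / suspensions["Jan-Jun 2017"]
print(f"Drop vs prior period: {drop:.1%}")  # ~8.4%

# Twitter said the prior period was itself down 20% on Jul-Dec 2016,
# which implies a baseline of roughly 375,000 suspensions.
implied_baseline = suspensions["Jan-Jun 2017"] / (1 - 0.20)
print(f"Implied Jul-Dec 2016 total: ~{implied_baseline:,.0f}")
```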

For its latest reporting period, ending December 2017, Twitter says 93% of the accounts were flagged by its internal tech tools — with 74% of those also suspended before their first tweet, i.e. before they’d been able to spread any terrorist propaganda.

Which means that around a quarter of those internally flagged accounts did manage to get out at least one terror tweet before being suspended.

This proportion is essentially unchanged since the last report period (when Twitter reported suspending 75% before their first tweet) — so whatever tools it’s using to automate terror account identification and blocking appear to be in a steady state, rather than gaining in ability to pre-filter terrorist content.

Twitter also specifies that government reports of violations related to the promotion of terrorism represent less than 0.2% of all suspensions in the most recent reporting period — or 597 to be exact.
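
To make the arithmetic behind those percentages concrete, here is a short Python sketch using only the figures quoted above; the absolute counts it derives are my own rounded estimates, not numbers from Twitter's report:

```python
# Inputs are the article's figures; derived counts are rounded estimates.
total_suspended = 274_460  # suspensions for promoting terrorism

internally_flagged = 0.93 * total_suspended  # 93% surfaced by Twitter's own tools
pre_tweet = 0.74 * internally_flagged        # 74% of those caught before tweeting

print(f"Flagged internally:  ~{internally_flagged:,.0f}")
print(f"Suspended pre-tweet: ~{pre_tweet:,.0f} "
      f"({pre_tweet / total_suspended:.0%} of all suspensions)")  # ~69%

# So roughly 26% of the internally flagged accounts tweeted at least once,
# i.e. the "around a quarter" noted above.
print(f"Tweeted before suspension: {1 - 0.74:.0%} of flagged accounts")

# Government reports (597 accounts) as a share of all suspensions:
print(f"Government-reported share: {597 / total_suspended:.1%}")  # ~0.2%
```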

As with its prior transparency report, a far larger number of Twitter accounts are being reported by governments for “abusive behavior” — which refers to long-standing problems on Twitter’s platform such as hate speech, racism, misogyny and trolling.

And in December a Twitter policy staffer was roasted by UK MPs during a select committee session after the company was again shown failing to remove violent, threatening and racist tweets that committee staffers had reported months earlier.

Twitter’s latest Transparency Report specifies that governments reported 6,254 Twitter accounts for abusive behavior — yet the company only actioned a quarter of these reports.

That’s still up on the prior reporting period, though, when it reported actioning a paltry 12% of such reports.
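
For scale, here is a two-line Python estimate of what those action rates mean in absolute terms; the counts are my own derivation from the article's figures:

```python
reported = 6_254                   # accounts governments reported for abusive behavior
actioned = round(reported * 0.25)  # "only actioned a quarter of these reports"
print(f"Actioned: ~{actioned:,} of {reported:,} accounts")  # ~1,564
print(f"At the prior 12% rate, the same volume would have "
      f"meant ~{round(reported * 0.12):,} actioned")        # ~750
```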

The issue of abuse and hate speech on online platforms generally has rocketed up the political agenda in recent years, especially in Europe — where Germany now has a tough new law to regulate takedowns.

Platforms’ content moderation policies certainly remain a bone of contention for governments and lawmakers.

Last month the European Commission set out a new rule of thumb for social media platforms — saying it wants them to take down illegal content within an hour of it being reported.

This is not yet legislation, but the threat of EU-wide laws being drafted to regulate content takedowns remains on the table, seemingly intended to encourage platforms to improve their performance voluntarily.

Where terrorist content specifically is concerned, the Commission has also been pushing for increased use by tech firms of what it calls “proactive measures”, including “automated detection”.

And in February the UK government also revealed it had commissioned a local AI firm to build an extremist content blocking tool — saying it could decide to force companies to use it.

So political pressure remains especially high on that front.

Returning to abusive content, Twitter’s report specifies that the majority of the tweets and accounts reported to it by governments which it did remove violated its rules in the following areas: impersonation (66%), harassment (16%), and hateful conduct (12%).

This is an interesting shift in the mix from the last reporting period, when Twitter said content was removed for: harassment (37%), hateful conduct (35%), and impersonation (13%).
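
The shift is easier to see side by side. Here is a small Python sketch comparing the two mixes as quoted above; the deltas are my own subtraction:

```python
# Removal reasons among actioned government reports, per the two reports.
mix = {
    "impersonation":   {"prior": 0.13, "latest": 0.66},
    "harassment":      {"prior": 0.37, "latest": 0.16},
    "hateful conduct": {"prior": 0.35, "latest": 0.12},
}

for reason, p in mix.items():
    delta = p["latest"] - p["prior"]
    print(f"{reason:>15}: {p['prior']:.0%} -> {p['latest']:.0%} ({delta:+.0%})")
```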

It’s difficult to interpret exactly what that development might mean. One possibility is that impersonation could cover disinformation agents, such as Kremlin bots, which Twitter has been suspending in recent months as part of investigations into election interference, an issue that’s been shown to be a problem across social media, from Facebook to Tumblr.

Governments may also have become more focused on reporting accounts to Twitter that they believe are fronts for foreign agents spreading false information to try to meddle with democratic processes.

In January, for example, the UK government announced it would be setting up a civil service unit to combat state-led disinformation campaigns.

And removing an account that’s been identified as a fake — with the help of government intelligence — is perhaps easier for Twitter than judging whether a particular piece of robust speech might have crossed the line into harassment or hate speech.

Judging the health of conversations on its platform is also something the company recently asked outsiders to help it with. So it doesn’t appear overly confident in making those kinds of judgement calls.



