TL;DR
X has announced plans to accelerate its review process for hate and terrorist content in the UK, committing to assess reports within an average of 24 hours and at least 85% of them within 48 hours. The move follows pressure from regulator Ofcom, though skepticism remains about its implementation.
X, the social media platform formerly known as Twitter, has committed to reducing hate and terror content in the UK by speeding up its review process and removing illegal terrorist accounts, according to Ofcom. This development comes amid increased regulatory scrutiny following reports of persistent hate speech on the platform.
According to Ofcom, the UK’s communications regulator, X has agreed to review and assess reports of terrorist and hate content within an average of 24 hours, or at least 85% of such reports within 48 hours. The platform plans to work with UK-based experts to identify and ban offending accounts, and Ofcom will monitor X’s performance quarterly over the next year.
This announcement follows Ofcom’s ongoing investigation into X’s handling of harmful content, as well as the regulator’s recent fine of nearly $700,000 against the image board 4chan for violations of the UK’s Online Safety Act. Despite these commitments, skepticism remains about X’s ability to follow through, given reported increases in hate speech following Elon Musk’s acquisition of the platform and Musk’s own frequent posting of racist content.
Why It Matters
This development is significant because it marks a formal effort by X to address hate speech and terrorist content in the UK, a country with a recent surge in hate-motivated crimes, especially against Jewish communities. The platform’s actions could influence how social media companies are held accountable for harmful content and impact online safety policies.
Background
Following Elon Musk’s acquisition of Twitter in late 2022 and its subsequent rebranding to X, hate speech on the platform reportedly increased by 50% according to UC Berkeley research, partly driven by bots. The UK government and regulators like Ofcom have expressed concern over the platform’s role in spreading harmful content. Prior to this, Ofcom fined 4chan for violations of online safety laws, highlighting ongoing regulatory efforts to police harmful online activity.
“We have evidence that terrorist content and illegal hate speech is persisting on some of the largest social media sites. We are challenging them to tackle the problem and expect them to take firm action.”
— Oliver Griffiths, Ofcom’s Online Safety Group Director
“We will review and assess terrorist and hate content in the UK on average within 24 hours of it being reported, and work closely with experts to ensure swift action.”
— An unnamed X spokesperson
What Remains Unclear
It remains unclear how effectively X will implement these review processes, given past reports of increased hate speech and Elon Musk’s own posting behavior. The actual impact on hate content levels in the UK has yet to be measured, and whether X will meet its review targets is uncertain.
What’s Next
X will undergo quarterly performance reviews by Ofcom over the next year, with the regulator monitoring compliance and effectiveness. The next steps include the platform’s rollout of the new review procedures and public reporting on progress.
Key Questions
Will X actually reduce hate content in the UK?
The platform has committed to faster review times and working with experts, but it is still uncertain whether these measures will significantly reduce hate speech given past challenges and Musk’s posting habits.
How will Ofcom monitor X’s performance?
Ofcom will review X’s performance data quarterly over the next year, assessing how quickly hate and terrorist content are reviewed and removed.
What are the consequences if X fails to meet commitments?
Details are not specified, but continued non-compliance could lead to further investigations, fines, or regulatory actions under the UK’s online safety laws.
Does this mean hate speech has decreased on X?
Not yet; the impact of these new measures will only be clear after implementation and ongoing monitoring over the coming months.