
Big tech platforms could be blocked by the UK under sweeping new powers

The Verge: https://www.theverge.com/2023/11/8/23952736/uk-online-safety-act-ofcom-illegal-harms-guidelines

Do you feel uncomfortable? Children’s safety, Ofcom’s proposals, and the big outlier: Elon Musk’s X

Research conducted by Ofcom shows that three out of five children in the UK between the ages of 11 and 18 have received unwanted approaches online that made them feel uncomfortable, and one in six has been sent naked or semi-naked images. Adults looking to groom children for abuse often use scattergun friend requests. Under Ofcom’s proposals, companies would need to take steps to prevent children from being approached by people they are not connected to, including making it difficult for such accounts to send them direct messages. Children also wouldn’t show up in other users’ connection lists, and their own connection lists would be hidden from other users.

The aim is to require sites to proactively stop the spread of illegal content rather than just playing whack-a-mole, pushing platforms from a reactive approach to a proactive one.

Ofcom hopes to see these practices implemented more consistently across the large tech platforms. They represent the best practice already out there, but it isn’t applied across the board; some firms apply it only intermittently, so there would be real benefit in more widespread adoption.

There’s also one big outlier: X (formerly Twitter). The UK’s legislation was in the works long before Musk bought the platform, but it passed as he was firing its trust and safety teams and loosening its moderation standards. Musk has publicly stated his intention to remove X’s block feature, despite Ofcom guidelines saying users should be able to easily block others. He has also clashed with the EU over its rules and reportedly considered pulling X out of the European market to avoid them. Whitehead declined to comment when I asked whether X had been cooperative in talks with Ofcom but said the regulator had been “broadly encouraged” by the response from tech firms generally.

One section of the act allows Ofcom to require online platforms to use so-called “accredited technology” to detect child sexual abuse material (CSAM). But WhatsApp, other encrypted messaging services, and digital rights groups say this scanning would require breaking apps’ encryption systems and invading user privacy. Whitehead says that Ofcom plans to consult on this next year, leaving its full impact on encrypted messaging uncertain.

There’s another technology not emphasized in today’s consultation: artificial intelligence. But that doesn’t mean AI-generated content won’t fall under the rules. The Online Safety Act attempts to address online harms in a “technology neutral” way, Whitehead says, regardless of how they’ve been created. AI-generated CSAM is in scope because it is CSAM, and deepfakes used to conduct fraud are in scope because of the fraud. Whitehead says the regulator is not regulating the technology but the context in which it is used.

MacKinnon’s caveat: a nonprofit platform’s view of a sprawling law

MacKinnon says that, as a platform, they agree they have responsibilities, but that when you are a nonprofit and the compliance work is zero-sum, that is problematic.

The act—a sprawling piece of legislation that covers a spectrum of issues, from how technology platforms should protect children from abuse to scam advertising and terrorist content—became law in October. Today, the regulator released its first round of proposals for how the act will be implemented and what technology companies will need to do to comply.
