Musk's Plan to Make Sex Pay on Twitter


Will Women Leave the Internet if We Don't Act? The Scale of Women's Online Sexual Abuse and Harassment

If we don't act, this will be the year that women leave the internet. Women face enormous risks online. A Pew Research Center survey of US adults shows that one-third of young women report having been sexually harassed online, and that women are more upset by these experiences and see them as a bigger problem than men do. A UNESCO study of women journalists found that 73 percent had experienced online violence, while 20 percent had been attacked or abused offline in connection with that online abuse. In response, women journalists self-censored, withdrew from online interactions, and stopped engaging with their audiences. Filipino-American journalist and Nobel Peace Prize winner Maria Ressa has written about the online abuse she faces, at one point receiving an average of more than 90 hate messages per hour. Another journalist's newspaper received hundreds of thousands of harassing messages and threats of physical confrontation after she investigated campaign finance discrepancies surrounding the presidency of Jair Bolsonaro; she canceled all of her public events for a month. What both women have in common is that they dared to question power while being visible on social media.

The Attack at the House Speaker's Home: When Online Hate Against Women in Politics Turns Physical

Nancy Pelosi was in Washington, DC, when an intruder broke into her home in California and attacked her husband with a hammer. A statement released by the speaker's office said that Paul Pelosi is expected to make a full recovery after surgery to repair a skull fracture.

The suspected attacker, David DePape, who has a history of sharing conspiracy theories on social media, said he would wait "until Nancy got home," according to a source briefed on the attack. He was taken into custody on suspicion of attempted homicide, assault with a deadly weapon, and elder abuse.

This shocking episode is just the latest in a series of escalating attacks and confrontations against politicians, and women politicians in particular – many of whom face unacceptable hatred on the Internet that spills over into physical threats or violence. Social media platforms need to stop the abuse before a politician is hurt or killed.

Threats against members of Congress investigated by the US Capitol Police increased by 144% between 2017 and 2021. Many of the lawmakers receiving these threats are women and people of color.

One senator told The New York Times that abusive phone calls began to translate into threats of violence after a window at her home was broken. "I wouldn't be surprised if a senator or House member were killed," she said.

A man with a handgun showed up multiple times outside Rep. Pramila Jayapal's home. Her husband said that two men yelled obscenities at him and told him they would stop harassing her if she killed herself.

As she put it, when you sign up for this job, you sign up for a lot of things, but it is hard to describe what it is like to have someone show up at your door with a gun, scaring your neighbors and clearly trying to intimidate you.

And Democratic Rep. Alexandria Ocasio-Cortez of New York receives so many threats that she has a round-the-clock security team and, at times, sleeps in different locations. One of her colleagues, Republican Rep. Paul Gosar of Arizona, shared a video altered to make it look as if he killed her. (Gosar deleted the video and did not apologize; the House voted to censure him, and he was removed from his two committee assignments.)

Earlier this week, the New York Post said it had fired a rogue employee who changed the headline of an online editorial to read, "We must assassinate AOC for America."

"Where Are You, Nancy?" How Online Hate Against Pelosi Spread on Social Media

Pelosi has long been a target of the right. In 2019, the House Speaker, who has famously clashed with former President Donald Trump, became the subject of manipulated videos that made her appear to be stumbling over and slurring her words. Those videos were amplified on social media by both Trump and Rudy Giuliani. During the January 6 attack on the Capitol last year, Trump supporters ransacked her office and yelled, "Where are you, Nancy?" – a chilling echo of the words DePape uttered on Friday: "Where is Nancy?"

Social media companies say they don't tolerate hate. Yet the reality is clear: it persists on their platforms. Taking down this abuse cannot be left to automated systems alone; people have to help flag it. Any time users see online hate like Gosar's video, they should immediately use the available reporting tools so these platforms can take it down.

It is sobering that the attack at the Pelosis' home happened on the same day Elon Musk finalized his purchase of Twitter, given that Musk favors more relaxed content moderation policies. If Twitter – or any other platform – becomes an even bigger cesspool of misogyny and abuse, then users should make the decision to stop using it.

The FBI should investigate the abuse of women online and off, and prosecutors should pursue it. If the agency needs more funding to do so, Congress should levy a tax on social networks to fund an expansion of resources. I imagine the many lawmakers who have been threatened and harassed would be happy to cast a vote in favor of such a bill.

The Washington Post reported last week that Musk was working on a product to monetize adult content on his micro-blogging service, which he confirmed Saturday afternoon. The apparent plans from the richest person on the planet have nothing to do with supporting sex workers. Choosing to expand adult content at a moment of heightened scrutiny of sex work and queer people is risky, especially amid reports that Musk plans to remove on-platform protections for trans people, a population that overlaps heavily with sex workers.

Well-known and highly visible women are not the only ones who consider leaving social media because of online abuse. A YouGov poll commissioned by the dating app Bumble found that almost half of women aged 18 to 24 had received unsolicited sexual images within the past year. MP Alex Davies-Jones entered the phrase "dick pic" into the UK parliamentary record when she asked a male colleague whether he had ever received one during debate on the UK Online Safety Bill. It is not, as she said, a rhetorical question for most women.

AI-enabled intimate image abuse – so-called deepfakes, which combine or generate images to create new, often realistic ones – is another weapon of online abuse that disproportionately affects women. Estimates from Sensity AI suggest that 90 to 95 percent of all online deepfake videos are nonconsensual porn, and around 90 percent of those feature women. Our ability to combat this abuse lags far behind the technology used to create it. What we now see is a perverse democratization of the ability to cause harm: the barriers to entry for creating deepfakes are low, and the fakes are increasingly realistic. The current tools for identifying and combating this abuse simply can't keep up.

But toolkits and guidance, while extremely helpful, still place the burden of responsibility on the shoulders of the abused. Policymakers must also do their part to hold platforms responsible for combating chronic abuse. The UK's Online Safety Bill is one mechanism that could hold platforms responsible for tamping down abuse. The bill would require large companies to be more transparent about how their terms of service handle abusive content and the blocking of abusers, and to provide users with optional tools to help them control what they see on social media. However, debate over the bill has weakened some of the proposed protections for adults in the name of freedom of expression, and the bill still focuses on tools that help users make choices rather than on solutions that stop abuse upstream.

It's not clear whether this regulatory approach will keep women from logging off in large numbers. If they do, not only will they miss the benefits of being online, but our online communities will suffer as well.

The author is a former public policy director at Facebook and a fellow at the Bipartisan Policy Center (BPC). BPC accepts funding from some tech companies to help provide information about elections to their users. The views expressed in this commentary are the author's own.

Social media companies that moderate content must balance competing interests and differing views of the world in order to make good choices.

People have a right to say something, but they don't have a right to have everyone see it. This tension is at the heart of many criticisms of platforms that prioritize engagement as a key factor in deciding what people see in their newsfeeds, because content that evokes an emotional response often gets the most likes, comments, and shares.

First, a platform needs to ensure that everyone has the right to free speech and can safely express what they think. Every platform — even those that claim free expression is their number one value — must moderate content.

From Illegal Content to "Legal but Horrible": Why Current Moderation Tools Fail the Chronically Abused

Some content, such as child pornography, is illegal and must be removed under the law. However, users – and advertisers – also don't want to see some legal but horrible content in their feeds, such as spam or hate speech.

Standard tools like reporting, blocking, and muting work reasonably well for isolated attacks. But for journalists, politicians, scientists, actors – anyone, really, who relies on connecting online to do their job – they are woefully insufficient. They do little against ongoing, coordinated attacks that arrive as a continuous stream of harassment from different accounts, and they kick in only after someone has already been harmed; by then, the damage to a user's mental health from being bombarded with attacks is done. Closing direct messages and making an account private can protect the victim of an acute attack, who can go public again after the harassment subsides. But these are not realistic options for the chronically abused, because over time they only remove people from broader online discourse.

Second, there are more options beyond leaving the content up or taking it down. Meta characterizes this as remove, reduce and inform; instead of taking potentially problematic, but not violating, content down, platforms can reduce the reach of that content and/or add informative labels to it to give a user more context.

Because many of the most engaging posts are borderline, this option is necessary. A platform may not be comfortable removing clickbait outright, but it might want to take other action because many users don't want to see it.

How Do Platforms Make These Decisions? Regulators, Civil Society, and Users Need Better Insight

Some people consider any reduction in reach a scandal. But as researcher Renée DiResta has long argued, free speech does not mean free reach.

This leads to the third point: transparency. Who is making these decisions, and how are they ranking competing priorities? The issue around shadow banning — the term used by many to describe when content isn’t shown to as many people as it might otherwise be without the content creator knowing — isn’t just one person upset that their content is getting less reach.

It's that they don't know what is happening or what they did wrong. Platforms need to do more on this front. Some now let people see on their accounts whether they are eligible to be recommended to other users, and they publish rules explaining that accounts posting sexually explicit material, clickbait, and other types of content will not be recommended to others.

Finally, users can be given more control over the kinds of moderation they want – what political scientist Francis Fukuyama calls "middleware." With middleware, people could decide for themselves the types of content they see in their feeds and determine what they need to feel safe online. Some platforms, such as Facebook, already allow people to switch from a ranked feed to a chronological one.

The UK will finally pass legislation intended to address similar harms and to make headway on a regulatory regime for tech companies. Unfortunately, the Online Safety Bill won't contain adequate measures to actually protect vulnerable people online, and more will need to be done.

To figure this out, we will need more insight into how platforms make these decisions. Regulators, civil society, and academic organizations outside these platforms need to be able to say how they would make certain calls, governments need to find the right way to regulate platforms, and users need more options to control the types of content they see.

For the past ten years, the biggest companies in the tech industry have effectively been allowed to mark their own homework, hiding behind the industry slogan "Move fast and break things" to protect their power.

The term "significant harm" is used in the bill, but there is no process for defining it or for how platforms would have to measure it. Academics have opposed the bill's proposal to drop the requirement that Ofcom encourage the use of technologies and systems for regulating access to electronic material. Other groups have raised concerns about the removal of clauses around education and future-proofing, which makes the legislation reactive and ineffective, since it won't be able to account for harms caused by platforms that haven't yet gained prominence.

In 2023, legislation aimed at tackling some of these harms will come into effect in the UK, but it won’t go far enough. There are many concerns around the effectiveness of the online safety bill that have been raised by experts and campaigners. The think tank Demos emphasizes that the bill doesn’t specifically name minoritized groups—such as women and the LGBTQIA community—even though these communities tend to be disproportionately affected by online abuse.

More than 2 million people said they had been the victims of threatening behavior online in the past year. Of those surveyed, 26% were members of the lesbian, gay, bisexual, and queer community, and 25% said they had experienced racist abuse online.

Platforms do offer resources to help address abuse. Users under attack can block individuals outright and mute content or accounts, moves that allow them to remain on the platform while shielding them from content they do not want to see. They can also limit interactions with people outside their networks using tools like closed direct messages and private accounts. And third-party applications try to fill the gaps by muting or filtering content.