
CNN - Top stories: https://www.cnn.com/2023/04/18/tech/lina-khan-ai-warning/index.html

Consumer Law Applies to Artificial Intelligence: FTC Warns AI Tools Could “Turbocharge” Fraud and Scams

Artificial intelligence tools such as ChatGPT could lead to a “turbocharging” of consumer harms including fraud and scams, and the US government has substantial authority to crack down on AI-driven consumer harms under existing law, members of the Federal Trade Commission said Tuesday.

The FTC chair warned that the “turbocharging” of fraud and scams that these tools could enable is a serious concern.

A new crop of artificial intelligence tools has gained attention recently for the ability to generate convincing emails, stories and essays, along with images, audio and video. While these tools may change the way people work, some are concerned about how they could be used to impersonate others.

The agency received a request last month to investigate OpenAI over claims that the company has misled consumers about the tool’s capabilities.

FTC Commissioner Rebecca Slaughter said that the FTC has had to adapt their enforcement to change technology. “Our obligation is to do what we’ve always done, which is to apply the tools we have to these changing technologies … [and] not be scared off by this idea that this is a new, revolutionary technology.”

FTC Commissioner Alvaro Bedoya said agency staff have consistently held that its unfair and deceptive practices authority applies alongside civil rights laws, fair lending rules and the Equal Credit Opportunity Act. “There is law, and companies will need to abide by it.”

Did Google Sideline Ethics? Bloomberg Report on Bard, Microsoft, OpenAI, and Other Artificial Intelligence Systems

In our tests, Bard fared worse than the other two systems, proving less useful and accurate than its rivals.

Bloomberg’s report illustrates how Google has apparently sidelined ethical concerns in an effort to keep up with rivals like Microsoft and OpenAI. The company frequently trumpets its AI safety and ethics work, but critics say it has prioritized business interests over those concerns.

Others who work at the company would disagree. A common argument is that public testing is necessary to develop and safeguard these systems, and that the known harm caused by chatbots is minimal. Yes, they produce toxic text and offer misleading information, but so do countless other sources on the web. (To which others respond: yes, but directing a user to a bad source of information is different from giving them that information directly, with all the authority of an AI system.) Google’s rivals like Microsoft and OpenAI are arguably just as compromised. The only difference is that they are not leaders in the search business and have less to lose.
