newsweekshowcase.com

Academics should be worried about an artificial-intelligence bot that can write intelligent-sounding essays.

Wired: https://www.wired.com/story/plaintext-how-to-stop-chatgpt-from-going-off-the-rails/

Are essays still a viable assignment for universities? A survey of artificial-intelligence tools and their impact on students’ thinking, learning and academic careers

The situation both worries and excites Sandra Wachter, who studies technology and regulation at the Oxford Internet Institute, UK. She says that she is impressed by the tool’s capability, but concerned about its potential effect on human knowledge and ability. If students use ChatGPT, she says, they’ll be outsourcing much of their thinking.

“At the moment, it’s looking a lot like the end of essays as an assignment for education,” says Lilian Edwards, who studies law, innovation and society at Newcastle University, UK. Dan Gillmor, a journalism scholar at Arizona State University in Tempe, told The Guardian newspaper that he had fed ChatGPT a homework question that he often assigns his students — and the essay it produced in response would have earned a student a good grade.

ChatGPT is the brainchild of AI firm OpenAI, based in San Francisco, California. In 2020, the company released GPT-3, a type of AI known as a large language model that creates text by trawling through billions of words of training data and learning how words and phrases relate to each other. GPT-3 sparked questions about the limits of such systems and prompted a host of applications, from aiding computer programmers to summarizing legal documents. ChatGPT is fine-tuned from an updated version of GPT-3 and is optimized to engage in dialogue with users.

Even if this is the end of essays as an assessment tool, that isn’t necessarily a bad thing, says Arvind Narayanan, a computer scientist at Princeton University in New Jersey. Essays, he says, are used to test both a student’s knowledge and their writing skills, and ChatGPT makes it hard to keep combining the two in a single assignment. Academics could redesign written assessments to focus on the critical thinking and reasoning that such tools can’t yet do. That might ultimately encourage students to think for themselves more, rather than simply answer essay prompts, he says.

Nature wants to find out how artificial-intelligence tools are used in universities and how research institutions deal with them. You can take our poll here.

“Despite the words ‘artificial intelligence’ being thrown about, really, these systems don’t have intelligence in the way we might think about as humans,” he says. “They’re trained to generate a pattern of words based on patterns of words they’ve seen before.”
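The mechanism Narayanan describes — generating words based on patterns of words seen before — can be illustrated with a toy sketch. This is my own minimal example of a bigram model, far simpler than anything underlying ChatGPT; the corpus and function names are invented for illustration:

```python
# A toy bigram model: learn which word follows which in a tiny "training"
# text, then generate by repeatedly sampling a word seen after the last one.
import random
from collections import defaultdict

corpus = "the cat sat on the mat and the dog sat on the rug".split()

# Record every word observed immediately after each word.
follows = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev].append(nxt)

def generate(start, length, seed=0):
    """Emit up to `length` words, each sampled from words seen after the previous one."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length - 1):
        choices = follows.get(out[-1])
        if not choices:
            break  # no observed continuation; stop early
        out.append(rng.choice(choices))
    return " ".join(out)

print(generate("the", 6))
```

The output is locally fluent (every adjacent pair occurred in the training text) yet has no understanding behind it — which is the point of the quote above, scaled down by many orders of magnitude.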

What will OpenAI tell us about its chatbot? A conversation in Wired with Sandra Wachter, professor of technology and regulation

How necessary that will be depends on how many people use the chatbot. Over a million people tried it in the first week. The current version of ChatGPT is free to use, but it’s unlikely to stay free forever, and some students might balk at the idea of paying.

She’s hopeful that education providers will adapt. She says there is always a panic around new technology, and that while academics have a responsibility to keep a healthy amount of distrust, she does not think this is an insurmountable challenge.

When WIRED asked me to cover this week’s newsletter, my first instinct was to ask ChatGPT—OpenAI’s viral chatbot—to see what it came up with. It’s what I’ve been doing with emails, recipes, and LinkedIn posts all week. Productivity is way down, but sassy limericks about Elon Musk are up 1000 percent.

I asked the bot to write a column about itself in the style of Steven Levy, but the results weren’t great. The commentary on the promise and pitfalls of artificial intelligence wasn’t much more than gibberish and didn’t capture Steven’s voice. As I wrote last week, it was fluent, but not entirely convincing. Would I have gotten away with it? And what systems could catch people using AI for things they really shouldn’t, whether that’s work emails or college essays?

To find out, I spoke to Sandra Wachter, a professor of technology and regulation at the Oxford Internet Institute who speaks eloquently about how to build transparency and accountability into algorithms. I asked her what that might look like for a system like ChatGPT.

Can it fool me? Detecting AI output: a cat-and-mouse game

This will become a cat-and-mouse game. I teach law, and the tech is not yet good enough to fool me, but it might be good enough to convince someone who is not in that area. I wonder if the technology will improve over time to where it can trick me too. We might need technical tools to verify that what we’re seeing was created by a human being, the same way we have tools for detecting deepfakes and edited photos.

Text leaves fewer artifacts and telltale signs than deepfaked imagery does, so any reliable solution may need to be built by the company that’s generating the text in the first place.

You need buy-in from whoever is creating the tool. If I’m offering services to students, I might not be the kind of company that would submit to that. And there might be a situation where, even if you do put watermarks on, they’re removable; very tech-savvy groups will most likely find a way. But there are technical tools that can detect whether output was created with the help of AI.

There are a couple of things. I would argue that whoever creates the tool should put some sort of watermark on the output. And maybe the EU’s proposed AI Act can help, because it deals with transparency around bots, saying you should always be made aware when something isn’t real. But companies might not want to do that, and the watermarks might be removable. So it’s also about funding research into independent tools that examine AI output. In education, we have to be more creative about the questions we ask, how we assess our students, and how we write our papers. It will take both technical tools and human oversight to curb the disruption.
