New York City schools have banned access to ChatGPT over concerns about cheating and misinformation.


Using Chatbots to Cheat: An Arizona Journalism Scholar Puts ChatGPT to the Test

Many teachers fear students will use the tool to cheat. One user, for example, fed ChatGPT an AP English exam question; it responded with a five-paragraph essay about Wuthering Heights. Another user asked the chatbot to write an essay about the life of William Shakespeare four times and received a unique version from the same prompt each time.

It could mean the end of the essay as an assignment in education, according to Edwards, a researcher who studies law, innovation and society. Dan Gillmor, a journalism scholar at Arizona State University in Tempe, told The Guardian that he had fed ChatGPT a homework question he often assigns his students, and that the article it produced in response would have earned a student a good grade.

The tool stunned users, including academics and some in the tech industry. ChatGPT is a large language model, trained on a massive trove of text from the internet to generate its responses. It was built by OpenAI, the same company behind DALL-E, which generates a seemingly endless range of images from text prompts.

Where GPT-3 was cold and computer-like, ChatGPT can act more like a collaborator, letting users bounce ideas off it. “From what I’ve seen of this, it’s so good because it doesn’t run off down a rabbit hole nearly as much as GPT-3 previously did,” says Edwards. “I just think essay assessment is dead, really.”

What’s in a Bot? Analysing the Impact of Artificial Intelligence on Research Integrity: A Conversation with an Oxford Internet Institute Professor

Nature wants to know about the impact of artificial intelligence on research integrity and how research institutions deal with it, and is running a reader poll on the subject.

“Despite the words ‘artificial intelligence’ being thrown about, really, these systems don’t have intelligence in the way we might think about as humans,” he says. “They’re trained to generate a pattern of words based on patterns of words they’ve seen before.”
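
To make that concrete, here is a toy sketch of the idea, not how ChatGPT actually works: real large language models use neural networks trained on billions of documents, while this miniature only counts which words followed which in an invented training sentence. But the core move, predicting the next word from the words that came before it, is the same.

```python
import random
from collections import defaultdict

# Toy "pattern of words" generator: record which words followed which
# in the training text, then generate by sampling those successors.
# (Invented corpus; real LLMs use neural networks over subword tokens.)
corpus = "the tool writes quick answers and the tool sounds confident"

follows = defaultdict(list)
words = corpus.split()
for current, successor in zip(words, words[1:]):
    follows[current].append(successor)

def generate(start: str, length: int = 8) -> str:
    """Emit up to `length` words by chaining observed successors."""
    out = [start]
    for _ in range(length):
        options = follows.get(out[-1])
        if not options:  # no observed successor: stop
            break
        out.append(random.choice(options))
    return " ".join(out)

print(generate("the"))  # e.g. "the tool sounds confident"
```

The output is fluent-sounding locally but has no understanding behind it, which is the point of the quote above.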

How much impact the tool has will depend on how many people use it. More than one million people tried it out in its first week. Although the current version of ChatGPT is free, it’s unlikely to remain free forever, and some students might not be willing to pay for it.

She’s hopeful that education providers will adapt. “Whenever there’s a new technology, there’s a panic around it,” she says. Academics, she adds, have a responsibility to treat such tools with a healthy amount of distrust, but she doesn’t see this as an insurmountable challenge.

When I was asked to cover this newsletter for WIRED, my first instinct was to ask the chatbot to come up with ideas. That’s what I’ve been doing with emails, recipes, and LinkedIn posts all week. Sexy limericks about Musk are up 1,000 percent.

I asked the bot to write a column about itself in the style of Steven Levy, but the results weren’t great. ChatGPT served up generic commentary about the promise and pitfalls of AI, but didn’t really capture Steven’s voice or say anything new. As I wrote last week, it was fluent, but not entirely convincing. But it did get me thinking: Would I have gotten away with it? And what systems could catch people using AI for things they really shouldn’t, whether that’s work emails or college essays?

To find out, I spoke to Sandra Wachter, a professor at the Oxford Internet Institute, about transparency and accountability, and asked her what those might look like for a system like ChatGPT.

Can Artificial Intelligence Help Students? The New York City Department of Education Has Blocked ChatGPT, Citing “Negative Impacts on Student Learning”

Sandra Wachter: This will start to be a cat-and-mouse game. The tech is maybe not yet good enough to fool me as a person who teaches law, but it may be good enough to convince somebody who is not in that area. I wonder if the technology will eventually get good enough to fool me, too. We have tools for detecting edited photos and deepfakes, but we might need similar technical tools to make certain that what we are seeing is created by a human.

It’s hard to do that for text, because there are fewer artifacts and telltale signs. Any reliable solution may need to be built by the company that’s generating the text in the first place.

You need buy-in from whoever is creating that tool. But if I’m a company offering essay-writing services to students, I might not be the type of company that is going to submit to that. And even if you put watermarks in, they are easily removed; very tech-savvy groups will probably find a way. Still, a technical tool built with OpenAI’s input could detect whether output was created artificially.

A couple of things. First, I would really argue that whoever is creating those tools should put watermarks in place. The EU is trying to make it easier to see when something is fake, and the proposed Artificial Intelligence Act might help, but companies might not want to do that, and watermarks can be removed. This is also about giving researchers more tools to examine the output of artificial intelligence. And in education, we have to be more creative about how we assess students and how we write papers: what kinds of questions can we ask that are less easily faked? A combination of technical tools and human oversight is required to curb the disruption.
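
To make the watermark idea concrete, here is a minimal sketch of one scheme researchers have proposed for language models, sometimes called a “green list” watermark: generation quietly favors tokens that a pseudorandom function, seeded by the preceding token, marks as green, so watermarked text contains noticeably more green tokens than the roughly half expected by chance. The hashing scheme and numbers below are invented for illustration and are not OpenAI’s actual method.

```python
import hashlib

# Sketch of "green-list" watermark detection (a published research idea,
# not any vendor's real scheme). Each token is deterministically marked
# green or not, seeded by the previous token; watermarked generation
# favors green tokens, so a high green rate suggests machine text.
GREEN_FRACTION = 0.5  # share of the vocabulary treated as green

def is_green(prev_token: str, token: str) -> bool:
    """Pseudorandomly assign `token` to the green set, seeded by `prev_token`."""
    digest = hashlib.sha256(f"{prev_token}|{token}".encode()).digest()
    return digest[0] < 256 * GREEN_FRACTION

def green_rate(text: str) -> float:
    """Fraction of green tokens; ~0.5 for human text, higher if watermarked."""
    tokens = text.split()
    if len(tokens) < 2:
        return 0.0
    hits = sum(is_green(a, b) for a, b in zip(tokens, tokens[1:]))
    return hits / (len(tokens) - 1)

essay = "While the tool may provide quick and easy answers it does not build skills"
print(f"green-token rate: {green_rate(essay):.2f}")  # near 0.50 for unwatermarked text
```

The sketch also shows why Wachter’s worry about removal is plausible: paraphrasing even a few words changes the token pairs and washes the statistical signal out.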

The New York City Department of Education has blocked access to ChatGPT on its networks and devices over fears the AI tool will harm students’ education.

A spokesperson for the department, Jenna Lyle, told Chalkbeat New York, the education-focused news site that first reported the story, that the ban was due to potential “negative impacts on student learning, and concerns regarding the safety and accuracy of content.”

“While the tool may be able to provide quick and easy answers to questions, it does not build critical-thinking and problem-solving skills, which are essential for academic and lifelong success,” said Lyle.

Like other recent AI language systems, ChatGPT suffers from failures common to large language models. Because it is trained on data from the internet, it often reproduces the biases in that data, such as sexism and racism. The system is also prone to simply making up information, from historical dates to scientific laws, and presenting it as accurate fact.

But such adaptations will take time, and it’s likely that other education systems will ban AI-generated writing in the near future as well. Already, some online platforms, like the coding Q&A site Stack Overflow, have banned ChatGPT over fears the tool will pollute their sites with inaccurate content.

Just days after its launch, the tool went viral on the internet. OpenAI co-founder Sam Altman, a prominent Silicon Valley investor, said on Twitter in early December that ChatGPT had topped one million users.

It will be harder to prove that a student has used ChatGPT than to prove other cheating techniques, according to an assistant professor of philosophy.

He said that with more traditional forms of plagiarism, he can find proof and bring that evidence to a board hearing. “In this case, there’s nothing out there that I can point to and say, ‘Here’s the material they took.’”

Students paying somebody to write for them, whether an essay farm or a friend who has taken the course before, is an old problem. ChatGPT turns it into a new one: the ghostwriting is instantaneous and free, just like that.

Some companies, such as Turnitin, whose detection tool thousands of school districts use to check student work against the internet for signs of plagiarism, are now looking into how their software could detect AI-generated text in student submissions.
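
Turnitin has not said how that detection would work. One crude signal that some public detectors have described using is “burstiness”: human prose tends to mix long and short sentences, while model output is often more uniform. The sketch below illustrates only that single heuristic, with an invented threshold; it is not Turnitin’s software.

```python
import re
import statistics

def burstiness(text: str) -> float:
    """Standard deviation of sentence lengths, measured in words."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    lengths = [len(s.split()) for s in sentences]
    if len(lengths) < 2:  # need at least two sentences to measure spread
        return 0.0
    return statistics.stdev(lengths)

def looks_machine_written(text: str, threshold: float = 3.0) -> bool:
    """Flag suspiciously uniform sentence lengths (threshold is illustrative)."""
    return burstiness(text) < threshold

sample = ("The tool answers quickly. It writes fluent prose. "
          "It cites no sources. It sounds confident throughout.")
print(burstiness(sample), looks_machine_written(sample))  # 0.0 True
```

No single heuristic like this is reliable on its own, which is one reason the researchers above expect detection to combine technical tools with human oversight.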

In the future, teachers will need to rethink assignments so that a chatbot cannot complete them so easily. The bigger issue is how administrations will adjudicate these kinds of cases.