Is this the end of the essay as an educational assignment? ChatGPT, GPT-3 and the future of student writing
Others disagree, although they acknowledge that students have long been able to enlist human third parties to write their essays. Thomas Lancaster, a computer scientist and academic-integrity researcher at Imperial College London, says that in this sense ChatGPT adds little that was not already available to students.
"At the moment, it is looking more and more like the end of essays as an assignment for education," says a law student at Northumbria University. Dan Gillmor, a journalism scholar at Arizona State University in Tempe, told The Guardian newspaper that he had fed ChatGPT a homework question he often assigns his students; the article it produced, he said, would have earned a student a good grade.
ChatGPT is the brainchild of AI firm OpenAI, based in San Francisco, California. In 2020, the company unleashed GPT-3, a type of AI known as a large language model that creates text by trawling through billions of words of training data and learning how words and phrases relate to each other. GPT-3 is in the vanguard of a revolution in AI, sparking philosophical questions about its limits and prompting a host of potential applications, from summarizing legal documents to aiding computer programmers. ChatGPT is fine-tuned from an advanced version of GPT-3 and is optimized to engage in dialogue with users.
Even if this is the end of essays as an assessment tool, that isn't necessarily a bad thing, says Arvind Narayanan, a computer scientist at Princeton University in New Jersey. Essays, he says, are used to test both students' knowledge and their writing skills, and it is going to be hard to keep combining those two into a single form of assignment. Academics could instead rework written assessments to prioritize critical thinking or reasoning. That might ultimately encourage students to think for themselves more, rather than simply trying to answer essay prompts, he says.
Nature wants to learn how artificial-intelligence tools affect education and research integrity, and how research institutions are dealing with them.
“Despite the words ‘artificial intelligence’ being thrown about, really, these systems don’t have intelligence in the way we might think about as humans,” he says. They are trained to produce a pattern of words based on previous words they have seen.
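The mechanism he describes, producing a pattern of words based on previous words seen in training, can be illustrated with a deliberately tiny sketch. The code below is not OpenAI's (real models use neural networks over billions of words, not lookup tables); it is a toy bigram model, with made-up training text, that shows the basic idea of predicting a next word from the word before it.

```python
import random
from collections import defaultdict

# Toy "training data": a few sentences, split into words.
training_text = (
    "the cat sat on the mat . the dog sat on the rug . "
    "the cat chased the dog ."
).split()

# Count which words follow which: a bigram table learned from the text.
following = defaultdict(list)
for prev, nxt in zip(training_text, training_text[1:]):
    following[prev].append(nxt)

def generate(start, length=8, seed=0):
    """Produce a pattern of words based on previous words seen in training."""
    rng = random.Random(seed)
    words = [start]
    for _ in range(length):
        choices = following.get(words[-1])
        if not choices:  # no known continuation for this word
            break
        words.append(rng.choice(choices))
    return " ".join(words)

print(generate("the"))
```

The output is fluent-looking but mechanical: the model has no notion of meaning, only of which words tended to follow which in its training data, which is the point of the quote above.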
Who will use the chatbot, and will it stay free?
How necessary that becomes will depend on how many people use the chatbot. More than one million people tried it out in its first week. But although the current version, which OpenAI calls a "research preview", is available at no cost, it is unlikely to remain free forever, and some students might baulk at the idea of paying.
She is hopeful that education providers will adapt. "There is a panic about new technology," she says. "It's the responsibility of academics to have a healthy amount of distrust — but I don't feel like this is an insurmountable challenge."
I wrote a book about strollers and what they reveal about our attitudes towards children and their caretakers. Despite my critique of the consumer culture of contemporary American parenthood, I came to adore my strollers. Pushing my kids ahead of me, I recorded race times faster than those I had run as captain of my college team. In the early days of the pandemic, my son and I would go up and down the sidewalk in our neighborhood, keeping an eye on the cold New England spring. Sometimes, on warm days, I'd leave my kids in the shade and sit myself in the sun to work, and sometimes I'd leave them at home, where they'd fall asleep after a long stroller walk.
That kind of casualness was a relic of a time before my inbox started to fill up with another flurry of emails, this time about ChatGPT. I taught high school English for many years and now teach freshman composition, so news about the new—horrifying, amazing, fascinating, or dystopian, depending on how one sees it—large language models, and their role at the nexus of writing and teaching, often made friends and family think of me. Because everyone has a wealth of (often fraught) memories about their own high school years, and because many of my friends now have children around the age of the students my husband and I teach, we end up talking about work in social contexts fairly often. How stressed out are students in multiple AP classes? Are our students' weekends like an episode of Euphoria, or even—and this would be alarming enough—more like what our own adolescent parties were in the late '90s? What do we wish our students were better equipped to do? How do we keep them off their phones in class? And, most recently, as news about ChatGPT swept through increasingly wide rings of society, I began to get questions not so different from those that accompanied the emails about self-driving strollers: What are we going to do about life as we know it being changed by automation?
My initial response was to insist that there are important differences in how easily AI might produce work mimicking student code as opposed to essays. But even though we might both give similar assignments to our students, I could not ignore the broader concern: the ethical and philosophical implications of the program itself. Instead of being built around if-then commands, Nick explained, ChatGPT is a neural network. What is it, then, Nick asked me, that makes the neural networks that comprise ChatGPT different from our own biological networks of neurons? The fact that they're silicon- rather than carbon-based? Why would a carbon-based network allow consciousness to develop and a silicon-based network not? Could a few extra protons really make the difference? I was not comfortable with this line of thinking. Of course, I insisted, there is something beyond carbon—perhaps something we can't put into words or even prove exists—that makes us human. But I could not tell you what that human-making something is.
What I don't want to talk about: ChatGPT
Unlike strollers, which I will happily talk about all day, I hate talking about ChatGPT—and I find myself doing so all the time, often because I am the one who brought it up.
At the beginning of the spring semester, I asked my students to consider what good their homework would do them if they had used ChatGPT to complete it. A student responsible only for bringing off that illusion would be no better off than before they started running—and nothing like the student who was already running.