The New York City Department of Education (NYCDOE) has blocked OpenAI’s ChatGPT service access on its networks and devices amid fears that students will use it to cheat on assignments and other school tasks.
ChatGPT is an artificial intelligence chatbot capable of producing content that mimics human writing. Accessible for free, the service can be used to generate essays, technical documents, and poetry, Chalkbeat New York reported. The program uses machine learning to recall historical facts and even construct convincing-sounding arguments, all while keeping the output grammatically correct.
“Due to concerns about negative impacts on student learning, and concerns regarding the safety and accuracy of content, access to ChatGPT is restricted on New York City Public Schools’ networks and devices,” NYCDOE spokesperson Jenna Lyle told Chalkbeat. “While the tool may be able to provide quick and easy answers to questions, it does not build critical-thinking and problem-solving skills, which are essential for academic and lifelong success.”
However, individual schools that wish to study the technology powering ChatGPT need only put in a request for access to the site, Lyle said.
ChatGPT and School Tasks
In an interview with the New York Post, Darren Hick, an assistant professor of philosophy at Furman University in Greenville, South Carolina, said that academia “did not see this coming,” referring to the capabilities of ChatGPT.
In early December, Hick had asked his class to write a 500-word essay on philosopher David Hume and the paradox of horror. One of the submissions caught his eye because it bore several hallmarks of having been created by AI.
“It’s a clean style. But it’s recognizable. I would say it writes like a very smart 12th grader,” Hick told the New York Post, adding that the bot uses “peculiar” and “odd wording.”
Dangers of AI
A problem with ChatGPT is that it is not always correct. OpenAI admits that ChatGPT “sometimes writes plausible-sounding but incorrect or nonsensical answers,” and that fixing the issue is a challenge. As such, the service should not be relied on as a source of critical information, such as medical advice.
Many people have raised alarm bells over the rapid development of AI. In June of last year, Google placed a senior software engineer in its Responsible AI ethics group on paid administrative leave after he raised concerns about the human-like behavior exhibited by LaMDA, an AI program he tested.
The employee tried to convince Google to examine the AI’s potentially serious “sentient” behavior, but the company did not heed his warnings, he claimed.
Tech billionaire Elon Musk has also warned about the dangers of AI.
“I have exposure to the very cutting edge AI, and I think people should be really concerned about it,” Musk told attendees of a National Governors Association meeting in July 2017.
“I keep sounding the alarm bell, but until people see robots going down the street killing people, they don’t know how to react, because it seems so ethereal.”