
ChatGPT Likely Helped Student Cheat in Ethics Course About Artificial Intelligence

The NBC Bay Area Investigative Unit surveyed the largest school districts and universities in Silicon Valley, home to nearly half a million students, to learn how educators are navigating developments in artificial intelligence that have been criticized for enabling cheating and spreading false information.


Advancements in artificial intelligence, including the website ChatGPT, are forcing school districts and universities across Silicon Valley to rethink the way they teach and test students.  The NBC Bay Area Investigative Unit surveyed the largest educational institutions across the nine-county Bay Area, home to nearly half a million students, and discovered a majority of schools have recently held internal meetings to discuss the impact artificial intelligence is having on their classrooms. 

ChatGPT, which launched less than three months ago, is a ‘chatbot’ capable of producing high-level written responses to a wide range of requests. It can create essays, screenplays, speeches, jokes, poetry, and even intricate business plans. Rather than scraping the internet for answers in real time, the tool relies on an artificial intelligence model trained on vast amounts of text from the internet to formulate its responses. ChatGPT’s self-admitted proclivity for “occasionally” generating “inaccurate information,” as well as ongoing concerns that the technology could be abused to enable widespread cheating, has prompted some educators to ban access to the site.

Educators Try to Remove 'Cheating Temptation' Created by ChatGPT

“If you're a student who has not prepared for class and you've got only five minutes left, then what are you going to do?” said Brian Green, a professor at Santa Clara University who teaches an ethics course to engineering students. “It's going to be a big temptation.”

For nearly a decade, Green has assigned his students essays as a major component of their final grade.  This semester, however, he is requiring his students to give oral presentations, in-person, instead of allowing them to complete the written assignments at home.

“I'm trying to kind of remove the temptation,” said Green, who heads the Technology Ethics program at the university.  “This gets into some very fundamental questions about what the educational system does and how it operates and how it should function in society.”

Green’s concerns aren’t hypothetical.  He believes one of his students used ChatGPT to produce an essay that he then attempted to pass off as his own in class – essentially using artificial intelligence to cheat in Green’s course on ‘Ethics in Artificial Intelligence.’

“The irony is very clearly there,” Green said with a smile. “[The essay] wasn't exactly on topic and, also, it had a very kind of, honestly, a robotic feel to it in some ways.”



Green recently helped assemble a group of faculty on campus to discuss ongoing concerns about ChatGPT and its potential to negatively impact the educational process.

“That's one of my biggest fears about this technology,” remarked one professor during the discussion.  “We’re really worried about this,” remarked another.

Professor Brian Green, Director of Technology Ethics at Santa Clara University, has decided to do away with at-home essay assignments because of concerns that students can now rely on AI-powered websites to instantly generate answers to their homework.

In addition to Santa Clara University, Berkeley, Stanford, San Francisco State, and San Jose State have also recently held internal meetings to examine the effects of ChatGPT, including what it could mean for their more than 130,000 students.  None, however, bans access to the website.

The NBC Bay Area Investigative Unit also surveyed the ten largest school districts in the Bay Area, home to more than 300,000 students.  Of those districts, 70 percent have had discussions about the impact of ChatGPT and 30 percent have already blocked the site on school computers.

“We really felt like we needed to study it a little bit more before we said, ‘let's open it up for people to be able to use,’” said Dr. Sheila McCabe, an assistant superintendent at the Fairfield-Suisun Unified School District, where roughly 20,000 students have take-home laptops but access to ChatGPT is restricted.

“We don't want situations where our students turn in essays and demonstrate that they have a knowledge level that's beyond what they actually have,” McCabe said.  “They don't have the opportunity to truly engage in the learning.”

Dr. Sheila McCabe, assistant superintendent of educational services at the Fairfield-Suisun Unified School District, said she and her colleagues decided to restrict student access to ChatGPT out of fear it could be used to generate homework assignments and give the false impression that students are performing at a higher educational level than they actually are.

McCabe, however, believes there could soon be a day when sites like ChatGPT are put to use in the classroom.

“I could see a student, that might have writer's block, be able to start using that as like a first draft,” she said.  “The takeaway is that we still need to know more.”

ChatGPT Prompts Action at School Districts Across Silicon Valley

The NBC Bay Area Investigative Unit surveyed the 10 largest school districts in the Bay Area to find out which have banned access to ChatGPT and/or held internal meetings to discuss the impact the website is having on the area's more than 300,000 students.

Source: NBC Bay Area Investigation

ChatGPT is the product of OpenAI, a San Francisco-based company that counts Elon Musk among its original founders. The company did not reply to NBC Bay Area’s request for comment regarding the ongoing criticism of ChatGPT, and neither did Microsoft, which partnered with OpenAI to embed the chatbot technology into its own search engine, Bing.

This week, Microsoft acknowledged its newly revamped search engine, Bing, hasn't been operating as designed after users began accusing the chatbot of being overly aggressive, even threatening.


“It’s not so much the tools themselves but the underlying deception that’s the problem,” said Irina Raicu, director of the Internet Ethics Program at the Markkula Center for Applied Ethics at Santa Clara University. “It's not, ‘is ChatGPT good or bad?’ It's that it's really disrupting, on very short notice, some of these processes that have been put in place about how to teach and how to assess learning.”


ChatGPT Sometimes Generates Inaccurate Information

Cheating isn’t the only concern.  On its website, ChatGPT acknowledges that “occasionally” it provides “incorrect information,” and admits it tends to produce “longer answers” in an effort to “look more comprehensive.”

“If you've ever taught writing, longer answers that sound comprehensive is like the opposite of good writing,” Raicu said.  “You want concise answers that are comprehensive and accurate.”

Irina Raicu, director of the Internet Ethics Program at the Markkula Center for Applied Ethics at Santa Clara University, believes it is critical for educational institutions to begin crafting policies and procedures relating to the recent developments in artificial intelligence so that there are clear expectations for students regarding when the technology can be used on school assignments and in what capacity.

While Programs Exist to Spot Plagiarism, It's Difficult to Detect Content Created by Artificial Intelligence

For many students, the question of whether they’re technically allowed to use ChatGPT on schoolwork remains unclear, since some educators still don’t know it exists. It is also difficult to tell when ChatGPT has been used: school districts and universities often pay tens of thousands of dollars for programs that can spot plagiarism, but, for now, those programs can’t reliably detect when something was created using artificial intelligence.

OpenAI, the creator of ChatGPT, recently unveiled its own screening software, but admits its program can only correctly identify AI-generated content 26 percent of the time.

Additionally, it incorrectly labels human-written text as AI-produced 9 percent of the time.

“It's not that helpful,” said Brian Green. “Especially if we're going to be falsely accusing students of using an AI-generated text tool.”

Green worries that kind of predicament will become increasingly common as more AI-powered sites come online. Google recently announced plans to launch its own chatbot, Bard, in the coming weeks. Green said he won’t be assigning another at-home essay anytime soon.

“We could have engineers, writers, business people, all sorts of people going out into society and we find out that they've just been cheating their way through all their classes,” Green said.  “If we can't evaluate on a very basic level whether [students] have learned what we've taught them, then we're going to be in big trouble.” 

