Artificial Intelligence: AI - Academic Integrity

Resources for research and the use of AI tools such as ChatGPT, DALL-E, and others

Things to Try

"ChatGPT and other AI-based language applications could be, and perhaps should be, integrated into school education. Not indiscriminately, but rather as a very intentional part of the curriculum. If teachers and students use AI tools like ChatGPT in service of specific teaching goals, and also learn about some of their ethical issues and limitations, that would be far better than banning them," says Kate Darling, a research scientist at the MIT Media Lab.

Limitations

What are ChatGPT's limitations?

Despite looking very impressive, ChatGPT still has limitations. These include an inability to answer questions worded in a particular way, which forces the user to reword the question before the model understands it. A bigger limitation is the uneven quality of its responses -- which can sound plausible while making no practical sense, or can be excessively verbose.

Instead of asking for clarification when a question is ambiguous, the model simply guesses at what the question means, which can lead to unintended responses.

"The primary problem is that while the answers that ChatGPT produces have a high rate of being incorrect, they typically look like they might be good and the answers are very easy to produce," says Stack Overflow moderators in a post. Critics argue that these tools are just very good at putting words into an order that makes sense from a statistical point of view, but they cannot understand the meaning or know whether the statements it makes are correct.

The ChatGPT platform currently has some limitations, according to OpenAI. These include sometimes nonsensical answers, a tendency to be verbose, and an inability to ask appropriate clarifying questions when a user enters an ambiguous query or statement. In some cases, changing a word or two can dramatically alter the outcome within ChatGPT.

However, OpenAI monitors responses and feedback using an external content filter. This helps the company flag false positives and false negatives (and other issues) along with potentially harmful output. The information is used to update and improve the AI model.
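For readers unfamiliar with the terms, "false positives" and "false negatives" here simply mean safe responses the filter flags and harmful responses it misses. The short sketch below uses invented review data (it is not OpenAI's actual filter or API) to show how comparing a filter's flags against human review yields those two error counts.

    # Hypothetical filter decisions compared against human review of the same responses.
    # "flagged" = the content filter marked the response; "harmful" = a reviewer agreed it was harmful.
    reviews = [
        {"flagged": True,  "harmful": True},   # correctly flagged
        {"flagged": True,  "harmful": False},  # false positive: safe text was flagged
        {"flagged": False, "harmful": True},   # false negative: harmful text slipped through
        {"flagged": False, "harmful": False},  # correctly left alone
    ]

    false_positives = sum(r["flagged"] and not r["harmful"] for r in reviews)
    false_negatives = sum(not r["flagged"] and r["harmful"] for r in reviews)

    print(f"false positives: {false_positives}, false negatives: {false_negatives}")
    # Feedback like this is what lets a provider tune and improve its filter over time.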

Excerpted from a ZDNet article by Sabrina Ortiz, Associate Editor, January 24, 2023

ChatGPT / AI and Academics

Concerns About Usage

People are expressing concerns about AI chatbots replacing or atrophying human intelligence. For example, the chatbot can write an article on any topic efficiently within seconds, potentially eliminating the need for a human writer. It can also produce a full essay within seconds, making it easier for students to cheat or to avoid learning how to write properly. This has led some school districts to block access to it.

Another concern with the AI chatbot is the possible spread of misinformation. Since the bot does not have live access to the internet, it can make mistakes in the information it shares. The bot itself says, "My responses are not intended to be taken as fact, and I always encourage people to verify any information they receive from me or any other source." OpenAI itself also notes that ChatGPT sometimes writes "plausible-sounding but incorrect or nonsensical answers."

The ChatGPT platform is currently in a beta test phase. Although it has received mostly favorable reactions, the tool isn't without issues and critics. In some cases, because it relies on statistical methods rather than any real understanding of language, it generates simplistic, incorrect, disturbing, and even shocking responses. It also sometimes flunks basic math problems. Worse, the system can be used to generate phishing emails free of errors. And it has produced content that is racist or sexist when users applied tricks to bypass the system's filters.

For now, OpenAI describes the ChatGPT platform as a tool designed to complement humans rather than replace them. For example, it cannot yet generate footnotes, and while its answers are often accurate and engaging, they sometimes don't represent the complete picture and aren't always synced with the specific messaging that a marketing team or other business function might require.

In a worst-case scenario, the AI engine produces text that’s well-written but completely off target or wrong. Thus, humans might plug deceptive or incorrect ChatGPT text into a document or use it to intentionally deceive and manipulate readers.

Other concerns exist. One revolves around the possibility that students will be able to generate high-quality essays and reports without actually researching or writing them. Another is that the technology could lead to the end of many jobs, particularly in fields such as journalism, scriptwriting, software development, technical support, and customer service. The AI platform could also deliver a more sophisticated framework for web searches, potentially displacing search engines like Google and Bing.

Finally, some have complained that the platform should not be regulated for speech and content.

Excerpted from an eWeek article by Samuel Greengard, December 29, 2022