ChatGPT and other generative AI applications are notorious for inventing information. These inventions are called "hallucinations." Hallucinations usually happen when the AI can't find the specific information you've asked for. Instead of telling you it can't find something, it will just make something up. It does so with total confidence, though, which means it can be really hard to tell real information from fake.
Most of the time, ChatGPT does well with big ideas and general information, but it tends to hallucinate when asked about very specific things, like a list of sources for your research paper or detailed information about a specific person. For this reason, you should always double-check any information ChatGPT gives you when accuracy matters. When it comes to using ChatGPT to learn about something, always think of it as a starting point, not an ending one. Take what it gives you with a grain of salt: now that you understand a little about the topic, look for better, more reliable sources written by real people with real expertise.
Hallucinations are also why you should never use ChatGPT or other generative AIs for research: they frequently provide sources that don't exist. ChatGPT may even take a real author's name and a real journal's title and invent a fictional article that the author never wrote and the journal never published. You always want to pick the right tool for the job, and in the case of research, ChatGPT is NEVER the right tool.
ChatGPT falls back on hallucinations and made-up information because it doesn't have good information to draw from. This is due to limitations in the training process. OpenAI copied the free parts of the internet when training ChatGPT, but it couldn't get past paywalls to reach premium information, including many academic databases containing scholarly articles. Because it couldn't access those articles, ChatGPT resorts to inventing them.
In 2023, two lawyers in New York City used ChatGPT to research case law for a filing they then submitted in federal court. Because ChatGPT is prone to making up information, it invented bogus precedents from court cases that never happened, which federal judge P. Kevin Castel called "legal gibberish." The lawyers claimed they didn't know that ChatGPT would create fake information, but that's not a valid excuse; the court fined both lawyers $5,000.
Remember: You are responsible for anything you use that is created by an AI, whether it's in federal court or one of your classes at SUNY Potsdam.
Ref: Neumeister, Larry. "Lawyers Blame ChatGPT for Tricking Them into Citing Bogus Case Law." APNews.com, Associated Press, 8 June 2023, apnews.com/article/artificial-intelligence-chatgpt-courts-e15023d7e6fdf4f099aa122437dbb59b.
Just as students can be fooled by ChatGPT inventing sources, so, too, can professors. A researcher at the University of Southern California asked a librarian for help finding 35 articles. When the librarian couldn't locate any of them, she asked where the citations came from, and the researcher told her they'd been supplied by ChatGPT. Each article had a full citation, including a plausible-sounding title, date, page numbers, and journal name, but none of them actually existed. ChatGPT had simply invented all of them.
Ref: Hicks, Maggie. "No, ChatGPT Can't Be Your New Research Assistant." Chronicle of Higher Education, 23 Aug. 2023, www.chronicle.com/article/no-chatgpt-cant-be-your-new-research-assistant.