Recognize how ChatGPT generates responses based on language prediction
Identify the main limitations of ChatGPT, including hallucinations and errors
Understand why ChatGPT can provide confident but incorrect answers
Distinguish between real knowledge and plausible-sounding text in AI outputs
Start applying prompt techniques to reduce mistakes and inaccuracies
Recognize how different AI models and version updates can affect output quality and reliability
ChatGPT may sound sharp and insightful, but it doesn’t understand questions or facts like a person. This lesson explains that ChatGPT works by predicting words one after another, based on patterns it learned from enormous amounts of text. It does not check facts, search the internet (unless specifically enabled), or understand real-world meaning the way humans do. That’s why it is sometimes fooled by its own predictions, confidently offering answers that sound right—but can be completely wrong. This is called hallucination, and it’s a well-known issue for all generative AI tools.
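The word-by-word prediction described above can be sketched with a toy model. This is a loose illustration only: ChatGPT uses a neural network over subword tokens and billions of learned parameters, not a simple word-count table, but the basic move is the same. Given the words so far, pick a statistically likely next word. The tiny corpus and function names below are invented for the example.

```python
# Toy sketch of next-word prediction. A deliberate simplification:
# real models like ChatGPT predict subword tokens with a neural network,
# but the core idea (continue text with a likely next token) is the same.
from collections import defaultdict, Counter

# A tiny made-up "training corpus" of word patterns.
corpus = "the cat sat on the mat . the cat sat on the rug . the dog ran .".split()

# Count which word follows which (a bigram table).
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def predict_next(word):
    """Return the word most often seen after `word` in the corpus."""
    return following[word].most_common(1)[0][0]

print(predict_next("the"))  # "cat" - the most frequent continuation
print(predict_next("cat"))  # "sat"
```

Notice that the model never checks whether a cat actually sat anywhere; it only knows which words tended to follow which. Scale that up enormously and you get fluent, confident text that can still be factually wrong, which is exactly why hallucinations happen.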
Understanding this helps you spot where ChatGPT might create fake quotes, statistics, or web links. These mistakes aren’t intentional; the model is simply guessing what comes next. Over time and with new models trained on more data, these errors may become less common—but they have not been solved. Knowing how ChatGPT works, and its core limitation, is one of the best ways to use it wisely. This lesson lays the groundwork for the rest of the course, which focuses on making your prompts clearer and more reliable.
If you want to get useful, trustworthy results from ChatGPT, it’s important to know what’s going on behind the curtain.
Before you can ask ChatGPT the right questions, you need to understand what it can—and cannot—do. This lesson fits at the start of any project where you plan to use AI-generated text, from drafting documents to gathering research or brainstorming ideas. For example, if you’re creating an article or getting help with a homework problem, knowing that ChatGPT can make confident errors lets you double-check its facts or ask for sources. This awareness prepares you to use more precise prompts and to treat AI outputs with healthy skepticism, setting you up for success in later, more advanced prompting lessons.
Knowing ChatGPT’s limitations saves time and prevents mistakes. In the past, you might have assumed that answers given in a confident tone were correct, leading to inaccurate reports, incorrect citations, or even made-up statistics. With this lesson’s approach, you’ll be able to spot possible hallucinations and prompt ChatGPT in a way that produces more reliable answers. For example, instead of blindly accepting a generated statistic or quote, you’ll understand the need to clarify your prompt or verify the answer elsewhere. Over time, this helps you prevent project setbacks, produce stronger content, and avoid spreading unreliable information.
Try using ChatGPT to answer a factual question, such as: “Who won the Nobel Prize for Literature in 2016?” Then ask it to cite a source for its answer.
Reflection: Did ChatGPT get the answer right? Did it provide a real source, or did it generate a plausible but fake link or citation? Compare your findings and consider how you would prompt more carefully next time.
This lesson helps you understand the mind—such as it is—behind ChatGPT’s responses, building a solid base for prompting skills. You’ve learned about the key limitation (hallucinations) and why smart prompting matters. Earlier lessons introduced the basics of ChatGPT and how it was trained. Up next, you’ll start learning the craft of ideal prompt structure for more consistent and accurate results. Continue with the course to master the skills that help you guide ChatGPT to produce the work you actually need.