Hallucinations: Why AI Sometimes Makes Things Up
Reading time: approx. 5 min
We have now gone through how AI models work, how tokens function, the importance of the context window, and how temperature affects creativity. Now we will address one of the most important and sometimes most frustrating phenomena when working with generative AI: hallucinations. This is when the AI model generates answers that seem correct and credible, but are in fact incorrect, made up, or simply nonsense. Understanding why this happens and how you handle it is crucial for using AI responsibly in the classroom.
1. What is a hallucination in AI contexts?
A hallucination occurs when an AI model generates text or information that:
- Lacks basis in real facts or data.
- Is directly incorrect, even if it is presented with great certainty.
- Makes up references, sources, or people that do not exist.
- Seems coherent and logical on the surface but, on closer examination, turns out to be nonsense.
It is important to emphasize that a hallucination is not a conscious lie from the AI. The model does not "understand" truth in a human sense. Instead, hallucinations are a direct consequence of the underlying probability calculations: the model simply generates the most probable next word (token), based on the enormous amount of data it was trained on, even when that probable word leads to a factually incorrect statement in the given context.
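To make the mechanism concrete, here is a minimal Python sketch with entirely made-up probabilities. It only illustrates the principle: the model picks the statistically most likely next token, and that procedure contains no check of whether the resulting sentence is true.

```python
# Minimal sketch with entirely made-up probabilities: the model picks the most
# probable continuation of "Gothenburg was founded in ...". Nothing in this
# procedure checks whether the chosen word makes the sentence true.
next_token_probs = {
    "1621": 0.34,  # the historically correct year
    "1641": 0.36,  # looks plausible, is wrong, and happens to score highest here
    "1603": 0.20,
    "1721": 0.10,
}

# Greedy decoding: always take the single most probable token.
most_probable = max(next_token_probs, key=next_token_probs.get)
print(f"Gothenburg was founded in {most_probable}.")
# Output: "Gothenburg was founded in 1641." -- stated with full confidence,
# purely because that token was statistically most likely in this made-up example.
```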
2. Concrete examples of hallucinations
Hallucinations can look many ways, from small errors to completely made-up scenarios:
Example 1: Incorrect facts
- Prompt: "Tell me about the founder of Gothenburg and the exact year the city was established."
- Model's answer (potential hallucination): "Gothenburg was founded by King Gustav III in 1641."
- Fact: Gothenburg was founded by King Gustav II Adolf (Gustavus Adolphus) in 1621; an earlier town founded by Charles IX in 1603 was destroyed before the present city was established. Here the model has mixed up kings and years, but presents its answer as fact.
Example 2: Made-up sources
- Prompt: "Give me a list of three scientific articles that support the hypothesis that AI improves students' creativity in language teaching."
- Model's answer (potential hallucination): "Absolutely, here are three articles:
- 'AI and Creative Writing: A Longitudinal Study' by Dr. Elin Svärd, published in Journal of Educational Technology and Research, vol. 15, no. 2, 2023.
- 'The Impact of Generative AI on Student Narrative Development' by Professor Karl Lindberg, International Journal of Modern Education, 2022.
- 'Leveraging AI for Enhanced Linguistic Expression in English Classrooms' by Anderson et al., Nordic Journal of Pedagogy, 2024."
- Fact: These articles, authors, and journals may be completely made up and do not exist in reality. The model has generated credible titles and names but without real basis.
3. Common causes of hallucinations
Hallucinations are a complex problem, but they often arise due to a combination of factors:
- Insufficient or conflicting training data: If the model has been trained on data that contains errors, inconsistencies, or bias, it can replicate these errors.
- Exceeded context window: As we discussed in Moment 3, if important context or instructions fall out of the model's memory, it can start "filling in the gaps" with invented information in order to still generate a coherent answer (see the sketch after this list).
- High temperature: A higher temperature setting (see Moment 7) increases the model's creativity, but also the risk that it chooses less probable and therefore sometimes incorrect word sequences.
- Complex or ambiguous prompts: If your prompt is unclear, too broad, or asks a question that lacks a clear answer in the training data, the model can "guess" its way to an answer.
- Lack of "grounded" facts: AI models are not search databases. They generate text, not facts. If they are not "grounded" in reliable information sources, the risk of errors increases.
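The context-window cause is easy to illustrate. The sketch below is a strong simplification (it counts words instead of real tokens, and the limit is made up), but it shows how an instruction placed early in a long conversation can silently fall outside the window the model actually sees.

```python
# Simplified sketch: counts words instead of real tokens, with a made-up limit.
CONTEXT_LIMIT = 50  # hypothetical window size, in words

conversation = [
    "Instruction: only use facts from the attached government report.",
    "Question 1 about the report ...",
    "A very long AI answer ... " * 30,   # long enough to crowd out earlier messages
    "Question 2 about the report ...",
]

def fit_into_window(messages, limit):
    """Keep the most recent messages that fit; older ones are silently dropped."""
    kept, used = [], 0
    for msg in reversed(messages):
        words = len(msg.split())
        if used + words > limit:
            break
        kept.insert(0, msg)
        used += words
    return kept

visible = fit_into_window(conversation, CONTEXT_LIMIT)
print(visible)
# The opening instruction is no longer visible, so the model has to "fill in the
# gap" on its own -- a common starting point for hallucinations.
```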
4. Strategies for handling and minimizing hallucinations
Although hallucinations cannot be completely eliminated, there are several strategies to minimize them:
Lower the temperature: For tasks that require high factual precision, set the temperature as low as possible (0.0 to 0.2). This forces the model to choose the statistically most probable and often most correct words.
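What "lowering the temperature" does can be sketched mathematically as follows. The scores are made up and real models are far more complex, but the principle is the same: the raw scores are divided by the temperature before they are turned into probabilities, so a low temperature concentrates most of the probability on the top candidate.

```python
import math

# Made-up raw scores (logits) for three candidate next tokens.
logits = {"1621": 2.0, "1641": 1.6, "1603": 1.0}

def softmax_with_temperature(scores, temperature):
    """Divide each score by the temperature, then normalise into probabilities."""
    scaled = {token: s / temperature for token, s in scores.items()}
    total = sum(math.exp(v) for v in scaled.values())
    return {token: round(math.exp(v) / total, 2) for token, v in scaled.items()}

print(softmax_with_temperature(logits, 0.2))  # low temperature: most probability goes to the top token
print(softmax_with_temperature(logits, 1.0))  # higher temperature: alternatives stay in play
```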
Include source requests in the prompt: Ask the model to explicitly state sources or references for the information it generates. The AI can "hallucinate" sources as well, but an explicit source request signals that you expect fact-based answers and makes unsupported claims easier to spot.
- Example: "Generate an overview of the English parliament. Include at least three sources in your answer."
"RAG setup" (Retrieval Augmented Generation): This is a powerful technique where you first let the AI retrieve relevant information from a reliable, verified data source (for example, a database, a specific PDF, a government website) and then use this retrieved information as the basis for the AI's answer. The model is "grounded" in facts you have control over.
- Example: You upload a text from a government website about digitalization in schools and then ask the AI to answer questions based only on that text.
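Below is a deliberately naive sketch of the RAG principle. The source texts are invented and the "retrieval" is just a word-overlap comparison rather than a real search engine, but the structure is the important part: retrieve a relevant passage first, then build a prompt that tells the model to answer only from that passage.

```python
# Naive sketch of the RAG principle: invented source texts and a word-overlap
# "retrieval" instead of a real search engine. The structure is what matters.
source_chunks = [
    "The national digitalisation strategy for schools states that ...",
    "Professional development for teachers in digital tools should ...",
    "Student access to devices is regulated by ...",
]

def retrieve(question, chunks):
    """Pick the chunk that shares the most words with the question (very naive)."""
    question_words = set(question.lower().split())
    return max(chunks, key=lambda chunk: len(question_words & set(chunk.lower().split())))

question = "What does the strategy say about digitalisation in schools?"
context = retrieve(question, source_chunks)

grounded_prompt = (
    "Answer the question using ONLY the text below. "
    "If the answer is not in the text, say that you do not know.\n\n"
    f"Text: {context}\n\nQuestion: {question}"
)
print(grounded_prompt)
# grounded_prompt is then sent to whichever AI model you use; the retrieved text
# keeps the model "grounded" in material you have verified yourself.
```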
Active verification in the prompt: Ask the AI to actively question and verify information before it presents it as fact.
- Example: "Before you answer, consider whether this information can be verified. If you are unsure, say so clearly."
Step-by-step reasoning: Ask the AI to think aloud and explain its reasoning step by step, which makes it easier to discover hallucinations.
- Example: "Explain step by step how you arrived at your answer and which sources you would recommend to verify the information."
Post-processing and fact-checking: Always see AI-generated text as a first draft. A manual or automatic fact-check is necessary, especially for information to be used in teaching or as a basis for students' work.
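As a small illustration of what an automatic first pass could look like, here is a deliberately crude sketch. It does not verify anything; it only flags sentences whose words do not appear in a trusted reference text, so a human knows where to start checking. The texts are invented.

```python
import re

# Crude sketch: flag sentences in an AI answer that contain words not found in a
# trusted reference text. This is a pointer for manual checking, not verification.
trusted_source = "Gothenburg was founded in 1621 by King Gustav II Adolf."
ai_answer = "Gothenburg was founded in 1641. The founder also built a large castle."

def flag_for_review(answer, source):
    source_words = set(re.findall(r"\w+", source.lower()))
    flagged = []
    for sentence in re.split(r"(?<=[.!?])\s+", answer):
        unsupported = set(re.findall(r"\w+", sentence.lower())) - source_words
        if unsupported:
            flagged.append((sentence, sorted(unsupported)))
    return flagged

for sentence, terms in flag_for_review(ai_answer, trusted_source):
    print(f"Check manually: {sentence!r} (words not in the source: {terms})")
```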
5. Important tips for teachers in the classroom
Handling AI hallucinations is not just a technical question, but a pedagogical opportunity:
ALWAYS double-check: Never assume that AI-generated facts are correct. Always check AI-generated information against reliable, verified sources (textbooks, encyclopedias, government websites, established news agencies, scientific databases). This is the single most important rule.
Teach students source criticism: AI hallucinations provide a perfect opportunity to teach source criticism. Show students examples of when AI has "made things up" and discuss why it is so important to question and verify information from all sources, including AI.
Build fact-checking exercises: Design lesson tasks where students actively identify and correct hallucinations in AI-generated texts. This trains both their source-critical ability and their subject knowledge.
Transparency: Be clear with students that AI can make mistakes and that their own critical thinking is irreplaceable.
6. Reflection exercise
To get a practical understanding of hallucinations and how they are handled:
Experiment with temperature: Give the AI model a fact-based task (for example, "Describe the most important causes of World War I"). Generate an answer with low temperature (0.0-0.2) and another with high temperature (0.8-1.0). Compare the answers carefully. Do you find any differences in precision or any hallucinations?
Test "grounding" with the RAG principle: Choose a short, fact-packed text (for example, a Wikipedia page about a certain animal or a historical event). Ask the AI to answer some questions about the subject, but instruct it to only use the information from the specific text you paste in. Then compare with if you had asked the same questions without giving it the text. Do the hallucinations decrease?
Discuss with colleagues: How can you integrate routines for fact-checking and source criticism into your lessons when students use AI? Which subject areas are most exposed to AI hallucinations, and how can you work proactively with this?
Next moment: From prompt to practice: Design AI-supported tasks - Now that you have a solid foundation in how AI models work, how you communicate with them, and how you handle their limitations, it is time to bring everything you have learned together into concrete, pedagogically well-considered lesson activities and tasks for your students.

