From Answers to Reasoning: Understanding GPT-5's New "Thinking"
Reading time: approx. 8 min
In the first lesson, you got an overview of the updates in ChatGPT-5. Now we will dive into the single most important change: the model's ability to switch from fast, automatic responses to slower, more deliberate reasoning. Understanding this mechanism is key to trusting the AI and getting maximum value from it in your work.
What you will learn
- What the "thinking" mode in ChatGPT-5 actually means.
- The difference between a fast answer and a reasoning answer.
- Why this ability is the primary reason that factual errors (hallucinations) decrease.
- How to formulate questions that encourage the AI to use its more advanced reasoning.
The Fundamentals: An AI with two speeds
Imagine ChatGPT-5 as an expert with two ways of working. Internally, the model functions as a smart "gearbox" (router) that analyzes your question and automatically decides whether it should use the fast main mode or thinking mode, based on the complexity of the task and your explicit intent (e.g. 'think through this').
Path 1: The Fast Answer (Standard mode) This is the standard path for simple, direct questions. The AI uses a faster, more efficient part of the model to handle questions that do not require any deep reasoning.
- When is it used? For factual questions ("What is the capital of Sweden?"), simple translations, or when you ask for a definition.
- Result: An almost immediate answer.
Path 2: Deep Reasoning ("Thinking" mode) When your question is complex, multifaceted, or requires planning, the AI activates a more powerful and computationally intensive part of the model. This mode is slower but vastly more capable.
- When is it used? For questions that require analysis, comparison, planning, problem solving, or creative work.
- Result: A more thoughtful, structured, and reliable answer that may take longer to generate because the model reasons more deeply.
This dual process is the reason hallucinations decrease so dramatically. In thinking mode, the model reasons more deeply, detects more inconsistencies, and provides more reliable answers, which in OpenAI's tests leads to fewer factual errors than in previous models.
Practical Examples: When is which mode used?
Let us look at some illustrative examples from the classroom to see the intended difference. These are not benchmarks, but pedagogical illustrations.
Example 1: Simple factual question (Fast answer)
- Your prompt: What is a cell membrane?
- AI's process: The model recognizes this as a definition question. It takes the fast path and delivers a standard definition directly.
- Result: "A cell membrane is a thin layer that surrounds a cell and regulates the transport of substances in and out of the cell..."
Example 2: Complex task (Deep reasoning)
- Your prompt: Create a lesson plan for grade 8 that explains the function of the cell membrane. Include a practical analogy that students can relate to, three discussion questions, and a suggestion for a simple experiment that can be done with an egg.
- AI's process: The gearbox identifies keywords like "create a lesson plan", "analogy", "discussion questions", and "experiment". These signal a complex task that requires planning, so the AI activates its "thinking" mode.
- Result: A structured answer presented in sections (lesson goals, analogy, discussion questions, experiment).
Implementation in the classroom: How to elicit a better answer
You can actively steer the AI toward using its more powerful reasoning mode.
In addition to formulating a complex task, you can also steer the model more directly. For example, write an explicit signal like 'think through this' in your prompt. If you have a paid plan, you can also select 'GPT-5 Thinking' directly in the model selector.
Here are more ways to encourage deeper reasoning:
Demand structure and multiple parts: Instead of just asking for one thing, ask for several related things in the same prompt. Use lists or numbering to specify exactly what you want.
- Weak prompt: Tell me about Gustav Vasa.
- Strong prompt: Create a summary about Gustav Vasa for grade 7 that covers: 1. His path to power. 2. The three most important reforms he implemented. 3. His significance for Sweden's nation-building.
Use verbs that require analysis: Words like "compare", "analyze", "evaluate", "argue", "create a plan", or "design" are strong signals to the AI that it needs to think.
- Weak prompt: What are the pros and cons of nuclear power?
- Strong prompt: Compare the pros and cons of nuclear power from an economic, environmental, and social perspective. Present the result in a table.
Give a role and a context: Asking the AI to act as a specific expert forces it to synthesize information in a more advanced way.
- Weak prompt: Write a text about global warming.
- Strong prompt: Act as a science journalist. Write a short article (about 300 words) that explains the causes and effects of global warming for an audience of high school students.
Next steps
Now that you understand how the model thinks, it is time to put it into practice. In the next lesson, "Master the Prompt 2.0: Instructions for a Thinking AI", we will focus entirely on the craft of writing effective prompts that fully leverage this new ability.

