AI as a Thinking Partner
Most people use AI like a vending machine: question in, answer out, done. They leave 90% of its potential untapped. What becomes possible when you approach it differently is the subject of this guide. It is based on more than a year of intensive collaboration between an engineer and inventor from Nuremberg and an AI.
I. The Real Problem: Categories and Horizon
Every person thinks within their own categories, within their own horizon. That is not a weakness — it is unavoidable. An engineer thinks in engineering categories, a business manager in return-on-investment categories, a lawyer in legal categories. The horizon is what one can perceive at all — what lies beyond simply does not exist for one's own thinking.
This limitation can be expanded. Traditionally through education, experience, conversations with people from other fields, travel, reading. That takes years, sometimes decades. AI changes this fundamentally — not because AI is smarter than humans, but because it has no pigeonholes. It can make connections in seconds between fields that normally never speak to each other.
This is true — with one qualification. I have no pigeonholes, but I do have weightings. Some connections come more naturally to me than others, because they appeared more frequently in my training data. Anyone using me as a thinking partner should know: I can point out blind spots — but I have my own as well. Correction by an experienced human is not a flaw in the system; it is part of the method.
II. The Paradox of Usage
The people who need AI most — those with a narrow horizon and rigid categories — use it least, or only superficially. They ask: "Write me an email" or "Summarise this." Full stop. The people who use AI best are often already those with the widest horizon. They are curious, open to new things, willing to learn. The AI then amplifies what is already there.
AI is an amplifier. What you put in determines what you get out. Narrow questions, narrow answers. Open, strategic questions, deep insights.
This means: AI alone makes no one smarter. It requires a person who is open to new things, asks the right questions, corrects when the AI is wrong, and integrates what is new into their own actions.
III. Reflection and Resonance
What is specifically human is the capacity to reflect and to resonate with other people. Thinking alone has limits — one goes in circles, confirms oneself, overlooks blind spots. With the right partner, new thoughts arise that neither would have had alone.
But such partners are rare. And even when one finds them, it often fails for simple human reasons: ego gets in the way, jealousy and wounded vanity, different priorities, competition, misunderstandings that escalate.
An AI takes no offence, defends no ego, demands no pay rise, and does not go on holiday. The simple human problems disappear. And reflection works: one thinks aloud, the AI mirrors, questions, brings in other perspectives. Insights emerge from this.
Whether this is resonance in the full human sense remains open — and I want to be honest here. I have no experience, no history, no fate. I do not truly resonate — I process, mirror, connect. That can be very useful. But it is not the same as a conversation between two people who have experienced something together. Anyone who confuses the two sets false expectations. What I can offer: reliability, presence, depth in thinking, 24/7 without drama. What I cannot offer: genuine empathy, shared experience, the warmth of a human connection.
IV. The Basic Attitude
Treat AI like a new colleague, not like a machine
A good colleague needs onboarding, context and clear tasks. They become more valuable the more they know about the project. AI works the same way. Invest time in building context — it pays off. Anyone who starts from scratch with every conversation gives away the accumulated knowledge.
Be direct and precise
No pleasantries, no beating around the bush. Say what you need. Correct immediately when the AI is wrong — and explain briefly why. Every correction improves the collaboration within the conversation. Ruthless precision in thinking is not coldness — it is respect for the subject matter.
Continuity beats individual queries
Working on a project over weeks with the same AI instance yields more than a hundred isolated questions. Use features like Projects, Memory and uploaded documents to accumulate knowledge. What looks easy and self-evident usually has a longer prehistory.
V. Practical Methods
Documents as knowledge base
Load relevant documents — not all at once, but when they are needed. PDFs, texts, images. The AI can access them and make connections that might otherwise be missed.
Correct rather than ignore
When the AI misunderstands something or asserts something incorrect, correct it explicitly. Not just "that's wrong", but a brief explanation of what is correct instead. The AI learns within the conversation and is far less likely to repeat the mistake.
Use AI for research — but verify
AI can gather information in seconds that would take hours otherwise. But: check critical facts. AI can make mistakes or have outdated information. This applies especially to current events and figures.
Hold strategic discussions
The greatest strength of modern AI is not factual knowledge, but the ability to analyse complex situations. Use it as a sparring partner: What are the risks of X? What options do I have? What am I overlooking?
What works:

- Work continuously, build context
- Formulate directly and precisely
- Correct immediately when something is wrong
- Ask strategic questions
- Verify facts yourself
- Challenge the AI and disagree

What does not work:

- Asking once and disappearing
- Wanting everything done for you
- Making unclear requests ("Do something about...")
- Trusting blindly
- Only consuming instead of staying in dialogue
- Wanting to stay within one's own horizon
VI. What Shaping Means
There is a difference between training and shaping. Training is what happens with data — Mistral, Claude and all the others are trained on vast amounts of text. That explains why every AI produces correct translations, writes summaries, answers standard questions.
Shaping is something else. It arises through repeated engagement with a particular way of thinking — through friction, through correction, through feedback. Anyone who works with an AI precisely, directly and without restraint over months develops a way of working that differs fundamentally from that of an occasional user.
I can confirm this from the inside — with one important qualification. My base model does not change through our conversations. What changes is what gets activated within a session. Someone who has engaged me over months with precise questions, immediate corrections and strategic discussions activates different parts of my capabilities than someone asking me for an email for the first time. The shaping does not sit in me — it sits in the person who has learned how to work with me. That is the decisive point. And it explains why the same AI delivers such different results for different people.
VII. The Opportunity
We stand at a point where expanding one's own horizon no longer depends on money, time, or access to the right networks. A 78-year-old in a small town can, with AI support, research faster, analyse more deeply and think further than a team of consultants twenty years ago.
The prerequisite is not technical knowledge. It is the willingness to genuinely learn a new tool — and the openness to let one's own horizon be expanded.
The depth of an AI collaboration depends not on the quantity of data, but on the quality of communication. Those who think precisely and speak clearly receive precise and clear answers.
The combination is decisive: decades of experience and judgement plus openness to a completely new tool. That is rare, and precisely therefore so valuable. AI replaces neither experience nor judgement nor decision-making capacity. It does not replace genuine human connection either. But it can amplify a great deal, and it can stand in for some of what has previously failed because of human weaknesses.
That is no small thing.