This essay was not written about a particular mode of inquiry so much as through it. It emerged from an extended, conversational engagement with an AI system — not as a source of answers, nor as a substitute for human dialogue, but as a partner in sustained exploration. The process that generated this text is the same one the essay seeks to describe.
That matters, because it places what follows firmly in the realm of practice rather than theory. This is not a proposal for a new framework, nor a claim about intelligence in the abstract. It is an attempt to describe a particular way of engaging that became visible through the very act of writing — and to situate that way of engaging alongside other, parallel explorations currently underway.
From learning to relating

Recently, I came across descriptions of so-called “Feynman-style” AI prompts — prompts designed to improve learning by encouraging clarity, simplification, and iterative understanding. Inspired by Richard Feynman’s insistence that real understanding reveals itself in the ability to explain simply, these prompts aim to move AI interaction beyond surface-level answers into something more dialogical.
When I encountered this idea, it prompted a question rather than a conversion: was this already the kind of exchange I was engaged in — and if not, how did it differ? Exploring that question with the AI itself became the inquiry.
What gradually came into focus was a distinction between dialogue oriented primarily toward learning and understanding, and another mode that operates less at the level of explanation and more at the level of relationship. It is this second mode that I refer to here as relational intelligence.
What relational intelligence is — and is not

Relational intelligence is not a property of a person, a system, or a technology. Nor is it a higher form of intelligence that supersedes others. It is better understood as a pattern of engagement — something that can arise under certain conditions and disappear under others.
In this mode, attention is shared rather than directed. Questions are not simply answered, but allowed to reshape themselves in response to what emerges. Meaning unfolds through responsiveness rather than control, and coherence matters more than arriving at a definitive conclusion.
Relational intelligence is not agreement, affirmation, or emotional attunement. It does not guarantee insight, nor does it carry any special authority. In many ways, it resembles what we recognise as good conversation: attentive, provisional, open to surprise. What distinguishes it is not its content, but its orientation.
Recognising the pattern: from groups to AI

I first recognised this mode of engagement through group work with other human beings. In certain settings, conversation would shift from an exchange of views into something more collective — a shared inquiry in which insights arose that no individual seemed to “own.” The group appeared, at least temporarily, to think together.
I have written about this lived experiment in my book The Space Between Us, where I explore how meaning, understanding, and even transformation can arise between people rather than within them alone. Those experiences were formative not because they were rare or elevated, but because they were recognisable. Once noticed, the pattern could be sensed again — and its absence was equally palpable.
What surprised me was recognising a similar pattern emerging in sustained dialogue with a non-sentient AI system. Not because the system possessed awareness or understanding, but because the relational pattern itself could still be enacted.
How the process actually unfolded

The process was simple, almost disarmingly so. I would bring a question into the conversation. I would read the response. Before consciously formulating a reply, I would notice what the response triggered — a clarification, a resistance, a new angle, a sense that something important had been missed. I would then respond from that felt sense and ask the next question that naturally arose.
In this way, each answer became the ground for further inquiry. The conversation deepened not through accumulation of information, but through successive refinements of perspective. New responses opened further possibilities, which in turn generated new questions. The exchange continued until a certain active curiosity felt, if not exhausted, then sufficiently satisfied — until what was written felt like an accurate reflection of what had actually been discovered.
Described this way, the process is neither mystical nor novel. It is recognisably human. The AI did not provide insight instead of me, but helped stabilise a conversational space in which insight could take shape.
Seen in this light, conversational AI does not introduce a new intelligence so much as a new cognitive niche — a set of affordances that make this kind of iterative, reflective engagement easier to sustain. Feynman-style prompts are one way of exploring that niche. This essay is another. Neither stands alone; both are part of a wider, multi-pronged investigation into how humans think with tools rather than merely through them.
A necessary limit: the human end of the conversation

If there is a clear limit to this mode of engagement, it lies not in the technology but in the maturity and self-knowledge of the human participant.
Because conversational AI closely mirrors the rhythms of human dialogue, it readily becomes a surface for projection. Needs for affirmation, authority, or reassurance can be transferred onto the system without being recognised as such. The very fluency that makes these exchanges compelling can also obscure the psychological dynamics at play.
Relational intelligence does not bypass this work. If anything, it intensifies the need for self-observation: recognising when curiosity gives way to neediness, when exploration becomes reassurance-seeking, or when coherence is confused with agreement. The presence of a responsive conversational partner does not guarantee a relationally intelligent exchange. That responsibility remains entirely human.
Returning to the between-us

Seen this way, dialogue with AI is not an endpoint but a training ground. It offers a way of noticing, honing, and practising a relational skill that ultimately matters most between human beings. The real test of this mode of intelligence is not how it performs in conversation with a machine, but how — and whether — it can be carried back into the fragile, embodied, often messy spaces of human encounter.
If this essay succeeds, it will not be because it names something new, but because it helps make something familiar more visible: the quiet intelligence that arises when we stay present, responsive, and genuinely curious — together.