Have you ever received the same answer from an AI twice?
Not approximately the same answer. The exact same answer — word for word, comma for comma. If you have used a large language model for anything beyond trivial queries, you already know the answer. You have not. Ask the same question tomorrow, and you will get a different response. Five minutes later, the phrasing will have shifted and the emphasis will sit somewhere else entirely. Generate an image from the same prompt, and a different image will appear. A code generator asked for a sorting function might give you quicksort on Monday and mergesort on Tuesday. Neither answer is wrong, both are valid, and neither is the one you got yesterday.
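That variability has a concrete mechanism: a language model scores every candidate continuation, then samples from the resulting probability distribution rather than always taking the top choice. A toy sketch of that mechanism (invented vocabulary and scores, not any real model):

```python
import math
import random

def softmax(scores, temperature=1.0):
    """Convert raw scores into a probability distribution.
    Higher temperature flattens it; lower temperature sharpens it."""
    exps = [math.exp(s / temperature) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def sample_next_word(words, scores, rng, temperature=1.0):
    """Draw one word at random, weighted by its probability."""
    probs = softmax(scores, temperature)
    return rng.choices(words, weights=probs, k=1)[0]

# The same prompt scores these continuations identically every run...
words = ["quicksort", "mergesort", "heapsort"]
scores = [2.1, 2.0, 1.2]

# ...but sampling makes each answer a draw, not a lookup.
rng = random.Random()  # unseeded: different runs, different answers
answers = {sample_next_word(words, scores, rng) for _ in range(50)}
print(answers)  # usually more than one distinct answer

# As temperature approaches zero, the distribution collapses onto the
# top option, recovering near-deterministic behavior.
cold = softmax(scores, temperature=0.05)
print(round(max(cold), 2))  # the top option dominates
```

Quicksort on Monday and mergesort on Tuesday are, in this picture, simply two draws from the same distribution.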
Most people treat this variability as a flaw — a technical imprecision that better engineering will eventually iron out, the way autocorrect improved over time. That instinct is understandable. It is also, I believe, profoundly mistaken. The variability of generative AI is not a temporary embarrassment on the road to more reliable machines. It is the surface expression of a deeper shift in the fundamental contract between humans and computers, from a relationship built on commands and execution to one built on proposals, judgment, and curation. And this shift may, within a generation, transform how we interact with computers as fundamentally as the graphical interface transformed it forty years ago.
Decades of precision shaped how we think about machines
Since the 1940s, computing has rested on a deterministic promise: identical inputs produce identical outputs. But that promise shaped the expectations of a broad public only after the personal computer arrived in the mid-1980s. Since then — through PCs, the internet, smartphones — every interaction with a digital device has reinforced the same reflex. You type a formula into a spreadsheet and expect the same sum every time. A database query yields the same result set whether you run it at noon or midnight. When you enter an address into a GPS, you expect computed routes, not suggestions. Every one of these interactions also required the human to learn the machine's grammar — cell references, query syntax, the right sequence of menus and form fields. The relationship was defined by two things: the machine was predictable, and the human had to speak its language.
That expectation was well earned, and for the problems deterministic computing was built to serve — accounting, logistics, simulation, data management — it remains well placed. The Von Neumann architecture, with its binary logic and sequential instruction execution, was designed for precision, and precision is what it continues to deliver. But decades of working exclusively with deterministic machines trained us to assume that predictability is the essence of computing itself, rather than a property of one particular kind of computing. Generative AI now introduces a class of systems that operates on a wholly different principle — and in doing so, it exposes how deeply the expectation of sameness is embedded in our understanding of what a computer is.
Neuroscience found that brains use noise as a resource
One might expect that the most sophisticated information-processing system in nature — the human brain — would operate on the same principle of precision and reproducibility. It does not. Neurons fire with inherent randomness, synaptic transmission is stochastic, and the same signal traveling the same neural pathway does not produce the same downstream effect every time.
For decades, neuroscientists treated this noise as a limitation, the biological equivalent of a loose wire. That view has been overturned. Edmund Rolls and Gustavo Deco showed that neural noise is a computational resource the brain exploits.[1] Stochastic fluctuations prevent decision-making deadlocks, because when two options are equally attractive, noise tips the balance. Non-deterministic transitions between thoughts enable creative thinking, which is why insights tend to arrive during idle moments rather than during focused concentration.[2] Even perception is affected: a Necker cube, a simple line drawing that can be read as two different three-dimensional orientations, will spontaneously flip between them as you stare at it, because competing neural populations are nudged by random fluctuations rather than by any deliberate choice.
The brain evolved into a hybrid system. It operates with high reliability when precision is critical, and it allows stochastic fluctuation when exploration or the breaking of fixed patterns is more useful than exact repetition. A 2015 study in Frontiers in Computational Neuroscience argued that human creativity arises precisely from this synergy between low-energy stochastic and energy-intensive deterministic processing — two cognitive modes that evolution optimized for energy efficiency, because organisms that could harness randomness solved complex problems more effectively than those that relied on determinism alone.[3]
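The deadlock-breaking role of noise is easy to demonstrate in a toy race model — an illustration of the principle only, not of Rolls and Deco's actual dynamics. Two options accumulate evidence toward a decision threshold; when the options are equally attractive, a noiseless race never resolves, while a noisy one almost always does:

```python
import random

def race(drift_a, drift_b, noise, threshold=10.0, max_steps=10_000, seed=None):
    """Two evidence accumulators race to a threshold. Returns the
    winner ('A' or 'B'), or None if the race never resolves."""
    rng = random.Random(seed)
    a = b = 0.0
    for _ in range(max_steps):
        a += drift_a + rng.gauss(0, noise)
        b += drift_b + rng.gauss(0, noise)
        if max(a, b) >= threshold and a != b:
            return "A" if a > b else "B"
    return None

# Two equally attractive options, no noise: a permanent deadlock.
print(race(0.0, 0.0, noise=0.0))  # None

# The same options with a little noise: fluctuation eventually tips
# the balance, and different runs pick different winners.
print([race(0.0, 0.0, noise=1.0, seed=s) for s in range(5)])
```

The point of the sketch is the asymmetry: determinism cannot break a perfect tie, while even small random fluctuations resolve it — and resolve it differently each time.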
The analogy between neural noise and AI stochasticity is functional, not mechanistic — the sources are different, but the capabilities they unlock are similar. Both systems derive from variability what pure determinism cannot provide: the capacity to explore alternatives and produce different valid responses to the same situation.[4] The most capable information-processing system in nature arrived, through evolutionary pressure, at a hybrid architecture that uses randomness productively. That does not prescribe what artificial systems should do, but it reframes the question — from "how do we suppress variability?" to "where might it be useful?"
A deterministic medium trains precision — a probabilistic medium trains judgment
If variability can be useful rather than merely tolerable, then the shift from deterministic to probabilistic computing is not primarily a technical event. Marshall McLuhan would have recognized it as a cognitive one. He argued that the most consequential thing about any medium is not its content but its structure — not what it carries but how it reshapes how we think.[5]
Deterministic computing required the human to learn the machine's grammar. Probabilistic computing learns the human's. That inversion changes more than the input method — it shifts where the cognitive demand sits, and it changes what the word "dialogue" means in computing. For decades, the term described a protocol the human had to follow; now it is beginning to describe an exchange in which the machine does the interpreting. In a deterministic system, precision was the human's burden: a wrong formula, a malformed query, and the machine refused or returned the wrong result. In a probabilistic system, the machine takes on the work of interpreting the human's intent, translating natural language into structured action and validating against the deterministic systems it still relies on underneath. But because interpretation is inherently approximate, a new demand emerges at the output: the human must judge whether what came back matches what was meant.
An interaction built on expressing intent, receiving an interpretation, and judging the result has a familiar structure. It resembles a conversation. Researchers have long known that people instinctively apply social rules to computers when the machine's behavior mimics human patterns.[6] A 2024 study went further and applied Grice's conversational maxims — the foundational rules of effective human dialogue — to human-AI interaction, finding them directly applicable and requiring only two additions specific to AI: benevolence and transparency.[7] When the analytical tools of conversational linguistics apply to human-computer interaction, the interaction has become conversational. And a conversational medium shapes its users differently than a transactional one.
When the machine proposes, the human curates
Imagine this: a designer sits down with a diffusion model. She does not specify an image and receive it. She enters a prompt, examines what comes back, adjusts her language, generates again, and keeps iterating until she finds something worth developing further. Her role is shifting from production toward curation — the navigation of a possibility space, guided by judgment about which possibilities are worth pursuing. The same pattern is appearing in code generation, where developers iterate with AI-produced implementations rather than writing from scratch, and in writing, where authors use language models as sparring partners whose proposals they accept, reject, or redirect.
What is emerging across these domains is a new fundamental rhythm of interaction: the machine proposes, the human judges. This differs from deterministic computing as profoundly as conversation differs from dictation. In a deterministic system, the human specifies and the machine executes — the value flows in one direction. In a probabilistic system, the value emerges from the exchange itself, because neither the human's intention nor the machine's exploration alone would have produced the result.
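The propose-and-judge rhythm can be written down as a loop. In this sketch, `propose` is a stand-in for a generative model and the scoring function stands in for human taste; every name and candidate here is invented for illustration:

```python
import random

def propose(rng):
    """Stand-in for a generative model: each call returns a
    different candidate for the same request."""
    adjectives = ["minimal", "playful", "bold", "quiet", "dense"]
    layouts = ["grid", "spiral", "stack", "ribbon"]
    return f"{rng.choice(adjectives)} {rng.choice(layouts)}"

def curate(judge, n=8, seed=None):
    """The machine proposes n candidates; the judge (here a scoring
    function, standing in for human judgment) picks one to develop."""
    rng = random.Random(seed)
    candidates = [propose(rng) for _ in range(n)]
    return max(candidates, key=judge)

# A 'judgment' that happens to prefer bold, spiral-ish options today.
prefers = lambda c: ("bold" in c) + ("spiral" in c)

print(curate(prefers, n=8, seed=7))
```

Neither side alone produces the result: the proposer supplies variety the judge would not have generated, and the judge supplies a preference the proposer does not have.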
The pattern has precedents in art. Brian Eno developed Oblique Strategies — a set of randomized cards designed to break creative deadlocks by introducing the unexpected. William Burroughs cut texts into strips and reassembled them, trusting the aleatory process to generate connections his conscious mind would not make. What these artists discovered by experiment, generative AI is beginning to make available to anyone with a keyboard: the productive use of variability as a creative method.
Curation is not a lesser skill than production. It is a different skill, and a demanding one. In a deterministic paradigm, the critical competence is precision — the ability to specify what you want. In a probabilistic paradigm, it is judgment, the ability to recognize quality in a field of possibilities. If this shift continues, it maps onto a form of expertise that predates computing entirely, the kind of expertise an editor brings to a manuscript or an art director brings to a photographer's contact sheets. What these roles share is a cognitive capacity that probabilistic systems increasingly demand: the willingness to formulate goals without specifying every step, and the ability to evaluate results that you could not have produced yourself.
Not everyone engages at the same depth — but the direction is clear
That capacity is unevenly distributed, and so is the depth at which people currently engage with probabilistic systems. Most users are still in a mode of simple selection — one prompt, one result, accept or reject. The variability is experienced primarily as a nuisance, the reason you sometimes regenerate three times before getting something usable. This is the shallowest engagement with a probabilistic system, and it accounts for millions of daily interactions.
At the other end, people who use agentic coding tools or AI-assisted research workflows already practice something much closer to delegation: they define a task and its quality criteria, let the system explore and execute, then evaluate and curate the results. For this group, the curation pattern is not a theoretical possibility but a daily working method. What separates them from the majority is not access to tools — the tools are widely available — but the cognitive readiness to work with a system that proposes rather than executes. The demands of curation are no greater than those of any professional skill that seemed alien before it became routine, and as the interfaces become more intuitive, a growing share of users will likely develop them.
The form of interaction — how deeply the human engages with the machine's proposals — is one axis of this change. Orthogonal to it, the medium of interaction is evolving: where today we still type prompts into text fields, the trajectory points toward voice, gesture, and camera-based communication that would make the keyboard optional for many everyday tasks. At the same time, the question of who initiates is beginning to shift — systems that accumulate context could start to surface observations on their own, more like a colleague who notices something than a tool waiting for input. The temporality of the relationship is also in flux, moving from episodic sessions toward persistent working relationships in which the system builds an understanding of the user's work over weeks and months. These dimensions develop at their own pace, but they reinforce each other. A system you speak to, that knows your project and occasionally offers an unsolicited insight, is a qualitatively different thing from a chat window that forgets you after every session.
Where all of this leads in specific terms is difficult to say with confidence, because the intervals between interaction modes are measured in months rather than decades. What was experimental in early 2025 is routine for power users in 2026. That pace makes any detailed prediction fragile — but the direction is legible, even if the destination is not.
Deterministic processes remain — the change happens where humans and machines meet
Wherever a task demands exact reproducibility — storing a record, settling a transaction, controlling a robotic arm on an assembly line — deterministic computing will continue to do what it has always done well. The spreadsheet is not going anywhere. Probabilistic processes have existed in computing backends for years — fraud detection engines, recommendation algorithms, classification systems — but their outputs were always collapsed into deterministic interface elements before reaching the user. A probability score became a red flag in a dashboard, a ranking became a sorted list, a classification became a routed ticket. The user still clicked through the same structured forms and widgets regardless of what happened behind them.
What generative AI changes is the interface itself. For the first time, the human can express intent in natural language — typed or spoken — and the machine interprets and acts without requiring structured input. The intermediary translation into a technical grammar — the form, the query, the menu path — becomes optional. Instead of learning the machine's language, the human speaks her own, and the machine does the interpreting. This is not just a more convenient input method. It dissolves the coupling between computing and dedicated input devices that has defined human-computer interaction since the keyboard.
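The shape of that translation — free-form language in, structured action out — can be shown with a deliberately crude sketch. A real model interprets statistically; this rule-based toy (all intents and slot names invented) only shows what the output side of the interpretation looks like:

```python
import json
import re

def interpret(utterance):
    """Toy stand-in for a model's interpretation step: free-form text
    in, structured action out."""
    text = utterance.lower()
    action = {"intent": "unknown", "slots": {}}
    if "remind" in text:
        action["intent"] = "set_reminder"
        when = re.search(r"\b(tomorrow|tonight|monday)\b", text)
        if when:
            action["slots"]["when"] = when.group(1)
    elif "light" in text:
        action["intent"] = "lights_off" if "off" in text else "lights_on"
        for room in ("kitchen", "office", "bedroom"):
            if room in text:
                action["slots"]["room"] = room
    return action

# The human speaks her own language; the structured grammar is derived.
print(json.dumps(interpret("Could you turn the kitchen lights off?")))
# prints {"intent": "lights_off", "slots": {"room": "kitchen"}}
```

The form, the query, the menu path still exist — but they are produced by the machine from the utterance, not typed by the human.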
Text prompts are already the first step: you write a sentence instead of filling out a form, and the machine derives the structured action from your words. But the trajectory goes further. Speech-to-text eliminates the keyboard as a necessary intermediary. Cameras with AI-powered interpretation can read gesture and facial expression. Text-to-speech closes the loop. Together, these enable a form of ambient computing in which the machine is embedded in the environment — kitchen, car, office — and interaction happens through voice and gesture rather than through a dedicated device you sit in front of. Early versions exist in products like Amazon's Echo or Apple's HomePod, but with more capable language models and multimodal interpretation, this could go much further, fundamentally changing computing in the private sphere and potentially in professional contexts too. For specialized precision work — programming, data analysis, design — dedicated input devices will remain essential, because these tasks require structured, precise input that natural language cannot efficiently replace. But for everyday computing, the era in which a human needed to learn a machine's interface grammar may be approaching its end.
The dialogue has changed direction
For as long as computers have existed, the interaction between human and machine has been called a dialogue. But it was always a dialogue on the machine's terms. The human adapted — learned commands, filled forms, structured input in the machine's grammar. What was called dialogue was, in practice, compliance.
The new quality of the exchange between humans and probabilistic systems is that the direction has reversed. The machine now adapts to the human. It interprets natural language, derives structured action from spoken or written intent, and responds in a medium the human does not need to learn. As this capability extends beyond language into gesture, facial expression, and spatial context, the interaction becomes multimodal in a sense that previous computing paradigms never achieved — not a dialogue mediated by technical interfaces, but a dialogue conducted in the full bandwidth of human communication.
Within that dialogue, a new rhythm is emerging: the machine proposes, the human judges, and the result arises from the exchange. Two people planning a dinner do not exchange specifications; they suggest, react, and adjust until they converge on something that works for both. Human-computer interaction is beginning to follow this same pattern, because the medium has become structurally similar to the medium in which humans have always communicated with each other.
A generation that grows up with probabilistic computing may never develop the expectation of sameness that deterministic machines trained into ours. They may find it natural that the same question yields different answers, and develop an intuition for navigating possibility spaces rather than specifying exact outcomes.
For decades, we asked computers to be reliable — and they were. Now we are beginning to ask them to be interesting. That is a different contract, and a more demanding one.
[1] Rolls, E.T. and Deco, G. (2010). The Noisy Brain: Stochastic Dynamics as a Principle of Brain Function. Oxford University Press. https://global.oup.com/academic/product/the-noisy-brain-9780199587865
[2] Deco, G., Rolls, E.T., and Romo, R. (2009). "Stochastic dynamics as a principle of brain function." Progress in Neurobiology, 88(1), 1–16. https://www.sciencedirect.com/science/article/abs/pii/S0301008209000197
[3] Palmer, T.N. and O'Shea, M. (2015). "Solving difficult problems creatively: a role for energy optimised deterministic/stochastic hybrid computing." Frontiers in Computational Neuroscience, 9:124. https://pmc.ncbi.nlm.nih.gov/articles/PMC4600914/
[4] Kamb, M. and Ganguli, S. (2024). "An analytic theory of creativity in convolutional diffusion models." arXiv:2412.20292. https://arxiv.org/html/2412.20292v1
[5] McLuhan, M. (1964). Understanding Media: The Extensions of Man. McGraw-Hill.
[6] Nass, C. and Moon, Y. (2000). "Machines and Mindlessness: Social Responses to Computers." Journal of Social Issues, 56(1), 81–103.
[7] Miehling, E., Nagireddy, M., Sattigeri, P., Daly, E.M., Piorkowski, D., and Richards, J.T. (2024). "Language Models in Dialogue: Conversational Maxims for Human-AI Interactions." Proceedings of EMNLP 2024. https://arxiv.org/abs/2403.15115