December 1987. A bicycle goes down on East Main Street in Houma, Louisiana. The rider—a night janitor—dies. Two blocks away, his cramped apartment holds thousands of typed pages. Diagrams, memos, manuscripts—all sketching what he called a "calculus of mind." Here's the kicker: this janitor wasn't just any custodian. Peter Putnam had studied with John Archibald Wheeler, chatted with Niels Bohr, and spent decades trying to crack how minds actually learn. Then he just... vanished from scientific memory.
This year, writer Amanda Gefter managed to piece together Putnam's story. She uncovered an idea that feels almost shockingly relevant in 2025: maybe the brain isn't a Turing machine with goals imposed from outside. Maybe it's an induction machine that creates its own rules by resolving contradictions—through action—driven by a single built-in need to repeat (basically, to stay itself over time). Sounds wild? Sure. But here's the thing—it lines up in unexpected ways with what we now call embodied cognition, predictive processing, and curiosity-driven AI.
Who was Peter Putnam?
Born into money in Cleveland in 1927, Putnam studied physics at Princeton and got caught up in Wheeler's orbit—you know, the guy who coined "black hole" and couldn't stop thinking about how observers shape physics. During the '60s, Putnam taught on the East Coast, hung out in Harlem jazz clubs, and wrote in such dense, idiosyncratic prose that his friends literally called it "Putnamese."
Then things got weird. He basically fled academia, moved to Houma with his partner John "Claude" DeBrew, lived on almost nothing, and worked nights as a janitor while continuing to write. When a drunk driver struck him in 1987, nobody really knew about his intellectual work.
The money thing is fascinating—Putnam had it but kept pushing it away. He anonymously funded a modern sculpture collection at Princeton to honor his brother (killed in WWII). Only recently has anyone connected the dots back to Peter. Walk around Princeton's campus, touch the cool bronze of Henry Moore's Oval with Points, and you're touching one of his breadcrumbs.
His final breadcrumb? A jaw-dropping ~$37–$40 million bequest to The Nature Conservancy. (Yes, the Putnam Marsh exists because a Louisiana janitor wrote a check.) The whole thing reads like a riddle—genius hiding in the most ordinary places.
The fisherman's net—and why the "net" matters before the "fish"
Putnam was obsessed with this parable from Arthur Eddington. Picture a scientist trawling the ocean with a net that has two-inch holes. He announces two great discoveries: (1) no fish exist smaller than two inches; (2) all fish have gills. Obviously, the first "discovery" is just telling us about the net, not the fish. Eddington's point: you've got to understand your instruments before trusting your discoveries. For studying the mind, the "net" is whatever logic the nervous system uses to create perception, action, and learning.
Putnam decided he'd write down that net.
Not a Turing machine: the brain as an induction machine
Mid-20th century computers crystallized around Turing's universal machine—basically a formal engine for deduction. Feed it rules and premises, it grinds out conclusions. But hang on—where do the rules come from in the first place?
Putnam bet that minds belong to a "fundamentally different genus" of computer. They manufacture their rules by playing a kind of game with the world.
He called it an induction game. From what we can reconstruct from his papers and Gefter's detective work, here's roughly how it works:
- You've got a brain-like network of binary units that influence each other, forming self-reinforcing loops
- The system can make one move at a time (think motor action)
- The environment keeps poking at the system (hunger pangs, photons hitting retinas, whatever)
- Here's the key bit: the system's only global goal is to repeat—to get back to its prior state despite all the perturbations
- A move counts as "good" if it changes the environment in a way that helps the system return to familiar territory. That move's underlying loop gets stronger (boom, you've got a rule)
- When contradictory rules fire at once (say, "turn left when hungry" clashes with "turn right when hungry"), the system can't just deduce its way out. It has to do induction—hunting through memory and environment for some new variable that sorts out the conflict and yields a more general rule (maybe "turn toward the warmer cheek")
That last step—"resolve contradictions by adding variables"—is absolutely crucial. Putnam thought this bootstrapped a whole ladder of increasingly general behaviors. That's learning. In his manuscripts and a systems paper with physicist Robert Fuller, he tried to sketch out the mechanics of these loops, how motor actions drive perception and concept formation. Not biochemistry—a logic of mind, a "brain calculus."
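To make that concrete, here's a toy sketch of the induction game in Python. Fair warning: the hunger scenario, the cheek-warmth cue, and the reinforcement bookkeeping are my inventions for illustration—Putnam left no pseudocode—but the shape of the loop follows the reconstruction above.

```python
import random

# Toy induction game: the system's only goal is to return to its prior
# state after each perturbation. Moves that restore equilibrium get
# their loops reinforced into "rules". (All details are placeholders.)
random.seed(0)

MOVES = ["turn_left", "turn_right"]
strength = {move: 0.0 for move in MOVES}   # loop strengths for "hungry"

def perturbation():
    """The world pokes the system: hunger arrives with a hidden cue
    (which cheek is warmer) that determines which move restores calm."""
    return random.choice(["left", "right"])

for _ in range(1000):
    warm_cheek = perturbation()
    move = random.choice(MOVES)            # rules proposed by accident
    if move == f"turn_{warm_cheek}":       # equilibrium restored
        strength[move] += 1.0              # the successful loop is reinforced

print(strength)  # both loops come out roughly equally strong
```

Run it and you hit the interesting part: "turn left when hungry" and "turn right when hungry" both get reinforced about half the time. Deduction can't break that tie. The system has to hunt for a new variable—the step sketched in code a few sections below.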
Two pretty radical claims fall out of this:
- Goals emerge from inside. A Turing machine follows externally specified objectives. An induction machine starts with just "repeat" baked into its very being—persist across time—and specific goals bubble up from interaction.
- Action comes before representation. Rules get forged to solve bodily conflicts in specific contexts, so motor sequences aren't the output of thinking—they're what makes thinking happen. Putnam was basically saying "we think with our bodies" decades before that became trendy.
How Putnam's loops rhyme with today's neuroscience
If you squint a bit, Putnam's binary units forming stable collective patterns look suspiciously like what we'd later call attractor networks (Hopfield nets). You know—large groups of simple elements falling into self-reinforcing states that can be triggered by partial cues. Hopfield's 1982 paper showed how this supports memory and decision-making. Not exactly the same model as Putnam's, but the family resemblance is hard to ignore.
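If you want the flavor in code, a minimal Hopfield-style net fits in a couple dozen lines (the sizes and pattern counts below are arbitrary choices of mine):

```python
import numpy as np

rng = np.random.default_rng(0)

# Store three random binary patterns in a 64-unit Hopfield network
# via Hebbian outer products (no self-connections).
patterns = rng.choice([-1, 1], size=(3, 64))
W = (patterns.T @ patterns) / patterns.shape[1]
np.fill_diagonal(W, 0.0)

def recall(cue, steps=500):
    """Asynchronous updates: the state falls into the nearest
    self-reinforcing pattern, even from a partial or noisy cue."""
    s = cue.copy()
    for _ in range(steps):
        i = rng.integers(len(s))
        s[i] = 1 if W[i] @ s >= 0 else -1
    return s

noisy = patterns[0].copy()
noisy[:16] *= -1                               # corrupt a quarter of the cue
print(np.mean(recall(noisy) == patterns[0]))   # typically ~1.0: memory recovered
```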
His conflict-driven rule updates? They're eerily similar to modern ideas about cognitive control. Remember Botvinick and colleagues' work on how the anterior cingulate cortex monitors conflict (competing responses) and recruits control to resolve it? That's basically Putnam's "two rules collide; add a new variable to disambiguate" idea made concrete.
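The conflict signal in those models even has a simple form—roughly, the energy of coactive, mutually incompatible response units. A minimal version, assuming for simplicity that every pair of responses is incompatible:

```python
import numpy as np

def response_conflict(activations):
    """Conflict as summed coactivation of incompatible response units --
    in the spirit of the Hopfield-energy measure from conflict-monitoring
    models (Botvinick et al., 2001), with uniform incompatibility."""
    a = np.asarray(activations, dtype=float)
    return float(np.sum(np.outer(a, a)) - np.sum(a * a))

print(response_conflict([0.9, 0.1]))  # one rule dominates: low conflict
print(response_conflict([0.8, 0.8]))  # "turn left" and "turn right" both fire: high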
And the motor-first, world-involving picture fits perfectly with embodied and sensorimotor theories. O'Regan and Noë argue that "seeing is a way of acting"—perception depends on mastering sensorimotor contingencies, not building rich inner pictures. I bet Putnam would've loved that—he'd already cast perception as what's left over from useful action sequences that restore equilibrium.
Zoom out even more and you'll catch echoes of predictive processing and active inference. Friston's free-energy framework has agents acting to minimize long-term surprise (basically, keeping their states within survivable bounds). Different math, sure, but Putnam's "repeat your state" seems like a conceptual ancestor to these homeostatic-control ideas—learning emerges as the agent sculpts policies that keep it in familiar, safe regions.
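In the crudest possible gloss—and this is my gloss, not Friston's variational machinery—the objective looks like this, with a Gaussian standing in for the agent's "familiar region" and a hypothetical predict() standing in for a world model:

```python
import numpy as np

MU, SIGMA = 37.0, 0.5   # preferred "familiar" state, e.g. body temperature

def surprise(x):
    """Negative log-probability of a state under a Gaussian model of
    the agent's familiar region (low inside, high outside)."""
    return 0.5 * ((x - MU) / SIGMA) ** 2 + np.log(SIGMA * np.sqrt(2 * np.pi))

def choose_action(state, actions, predict):
    """Pick the action whose predicted next state is least surprising.
    predict(state, action) is a stand-in for the agent's world model."""
    return min(actions, key=lambda a: surprise(predict(state, a)))

# Hypothetical usage: shivering warms you up, resting does nothing.
print(choose_action(35.0, ["shiver", "rest"],
                    lambda s, a: s + 1.5 if a == "shiver" else s))
```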
I'm not saying Putnam anticipated the exact formalisms. But his compass was definitely pointing toward the same mountain range everyone else ended up climbing.
Where the math came from: Princeton, game theory, and a "goal to repeat"
Why all this talk about "games"? Well, if you were at Princeton in the late '40s and '50s, game theory was everywhere. Von Neumann and Morgenstern had just shown how to analyze decision-making as mathematical competition under constraints. Putnam took that energy and ran with it: what if we treat the brain's ongoing dance with the world as a repeated game—one move at a time—and look for the minimal goal that makes the math work? For him, that was repetition: existing across moments. Accept that, and suddenly the induction machine's rule-forging dynamics become a consistency-maximizing game under perturbation.
Here's what's interesting—Putnam's rules aren't written in some detached language of propositions. They're sensorimotor programs, loops linking body configurations, sensations, and actions. His "grammar of thought" sits right where modern robotics has learned to put it: between motors and sensors, not floating on some server farm above them.
What Putnam got right (and what he missed)
Right: action drives abstraction. That baby turning left or right until cheek temperature disambiguates the rule? Perfect parable for structure learning through interaction. Developmental psychology keeps finding the same thing—infants' control improves as they discover which variables matter for their bodies. Embodied accounts made this mainstream way later.
Right: contradictions are gold. We talk constantly now about surprise, prediction error, conflict as the learning signal. Putnam's insistence that clashing rules force creative induction reads like an early, intuitive grasp of the same insight. Brain imaging of cognitive control (that ACC conflict stuff) backs up the link between detecting conflicts and updating policies.
Right: goals can be endogenous. Reinforcement learning got formalized as "maximize reward," and now there's this huge subfield exploring intrinsic motivation—curiosity bonuses that reward agents just for learning better models. Schmidhuber's "compression progress" ideas explicitly reward becoming less surprised, not just collecting cheese. Putnam's "repeat" looks like a proto-homeostatic intrinsic objective—keep yourself together, figure out what helps.
Missed (or left implicit): statistics and scale. Putnam's manuscripts feel pre-Bayesian—more topological than probabilistic. He doesn't give us likelihoods, variational bounds, or credit assignment—the machinery that later unified perception, action, and learning into single frameworks. His binary-loop formalism also doesn't say how to scale from small loops to whole cortices the way modern deep RL does. That doesn't diminish the insight—it just shows where his map faded out.
The induction machine vs. the Turing machine
Let's get concrete here. A universal Turing machine does deduction: feed it input and a program, it follows fixed rules to produce output. Induction is the inverse problem—figure out the program from messy streams of input/output pairs while you're still trying to act in the world.
Putnam's clever move is splicing induction right into the motor loop. Rules get proposed by accident (random thrashing around), vindicated by success (hey, that restored equilibrium!), and stabilized into circuitry that gets selectively reused. When two stabilized rules contradict? You don't just throw one away. You introduce a variable that reveals when each applies, creating a more general rule, a deeper regularity. Keep doing this. Over time, the mind builds up this compact library of conditional programs—the "net" Eddington warned us to understand before trusting the catch.
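Here's that "introduce a variable" move in miniature—my reconstruction again, with a brute-force search over whatever context features the agent happens to have logged:

```python
def split_on_variable(episodes):
    """episodes: list of (context_dict, move, restored_equilibrium).
    Find a context feature whose value cleanly separates the successes
    of contradictory moves, and return the refined conditional rule."""
    features = {k for ctx, _, _ in episodes for k in ctx}
    for f in features:
        rule = {}   # feature value -> the move that worked there
        for ctx, move, ok in episodes:
            if ok:
                rule.setdefault(ctx[f], set()).add(move)
        if len(rule) > 1 and all(len(moves) == 1 for moves in rule.values()):
            return f, {v: moves.pop() for v, moves in rule.items()}
    return None, {}

episodes = [
    ({"warm_cheek": "left"}, "turn_left", True),
    ({"warm_cheek": "left"}, "turn_right", False),
    ({"warm_cheek": "right"}, "turn_right", True),
    ({"warm_cheek": "right"}, "turn_left", False),
]
print(split_on_variable(episodes))
# ('warm_cheek', {'left': 'turn_left', 'right': 'turn_right'})
```

Two contradictory rules go in; one more general, conditional rule comes out. Stack enough of these splits and you get Putnam's ladder of increasingly general behavior.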
There's a hint of Herbert Simon here (heuristics, bounded rationality) and early Newell & Simon attempts to formalize search and problem-solving. But Putnam's going after something more basic: the logic of learning itself in a system that's always already acting. RL turns that into algorithms (policy gradients, value functions). Putnam gave us the origin story.
The embodied hinge: why muscles matter in Putnam's logic
You can't read Putnam without noticing how physical his language is. "Moves." "Loops that fire to move the body." In today's terms, this is the closed-loop view: perception exists for action, and knowledge is basically sensorimotor skill. O'Regan and Noë say "seeing is a way of acting." Andy Clark's predictive-processing story has the brain constantly issuing motor-tinged predictions and adjusting action to reduce prediction error. Putnam was there first, insisting those motor programs are the representational backbone, forged through conflict-driven induction.
If you build robots, this really matters. A Putnam-style agent wouldn't sit there passively labeling images. It would poke at the world, searching for variables that resolve contradictions in its own behavior, pushing toward rules that keep it in familiar, good-enough states.
A guide for AI: how to Putnam-ify your agent
Want to try operationalizing Putnam today? Here's where I'd start (a skeletal sketch follows the list):
- Single global homeostatic objective. Give your agent a simple survival-style target—staying within viable bounds—and translate that into continuous intrinsic reward. Think free-energy minimization or curiosity bonuses tied to learning progress. Build task goals on top, not instead.
- Motor-first representations. Learn policies in a closed loop where actions generate the data you learn from. Sensorimotor contingencies ("what changes when I do this?") should be your substrate of representation.
- Contradiction harvesting. Set up your agent to detect policy conflicts (clashing action proposals) and route them to a "structure learning" module that hunts for new variables (context features) to separate the cases. Putnam's cheek-temperature trick, but scaled up. Cognitive-control models can give you ideas for conflict signals.
- Attractor dynamics for memory. Use recurrent networks or energy-based models to stabilize newly forged rules into reusable patterns that can be triggered by partial cues. Hopfield-style.
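And here's the promised skeleton tying the four ingredients together. Every name and number is a placeholder of mine—a scaffold for experimentation, not a reference implementation:

```python
import random

class PutnamAgent:
    """Sketch of a Putnam-style loop: homeostatic reward, motor-first
    rules, contradiction harvesting, and a structure-learning hook."""

    def __init__(self, setpoint=0.0):
        self.setpoint = setpoint        # 1. single homeostatic target
        self.rules = {}                 # context (frozenset) -> move

    def intrinsic_reward(self, state):
        # 1. Stay within viable bounds; task goals layer on top.
        return -abs(state - self.setpoint)

    def act(self, context, moves):
        # 2. Motor-first: rules map sensed context to motor proposals.
        proposals = {m for ctx, m in self.rules.items() if ctx <= context}
        if len(proposals) > 1:
            # 3. Contradiction harvesting: clashing proposals trigger
            #    a hunt for a context variable that separates them.
            self.refine(context, proposals)
        return proposals.pop() if proposals else random.choice(moves)

    def reinforce(self, context, move, reward):
        # 4. Stabilize successful loops for reuse. (A dict stands in
        #    for attractor-style memory here.)
        if reward > 0:
            self.rules[frozenset(context)] = move

    def refine(self, context, proposals):
        ...  # structure-learning module goes here (left open)
```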
None of this is alien to modern ML. The novelty is the unified story—from homeostasis to conflict to new variables to motor skill—that Putnam sketched out 60 years ago.
Why did the idea disappear?
Two reasons: language and life.
Language: Putnam's friends joked you needed a glossary to read him. When Wheeler introduced him at a 1975 MIT neuroscience meeting, he literally projected a transparency of Putnam's terminology on an overhead. The talk ran long. People fidgeted. Putnam's confidence shattered—he concluded "it's a lot of nonsense" and walked away. Papers went into filing cabinets, then storage units. Only determined detective work brought them back.
Life: Putnam fled institutions he suspected were tainted by his mother's behind-the-scenes string-pulling. He chose anonymity, gave away his money, worked blue-collar jobs, poured energy into civil-rights organizing in Houma. When his bicycle went down, the world had no idea what it had lost. We only found out when The Nature Conservancy deposited his check.
Reading Putnam now
Your best starting point is Gefter's long feature. She weaves together biography with clear explanations of the induction game, the single-goal constraint, and the contradiction-resolution mechanic. She's also hosting scans of Putnam's unpublished papers—the early Outline of a Functional Model of the Nervous System (1963), the joint 1964 memo with Fuller, and the 1966 General Systems article "On the Origin of Order in Behavior." They're raw, sometimes cryptic, but unmistakably aimed at what we'd now call a theory of mind.
For context and contrast, I'd pair Putnam with:
- Embodied cognition primers (check the Stanford Encyclopedia; O'Regan & Noë's sensorimotor account)
- Predictive processing/active inference reviews (Friston 2010; any recent overview)
- Reinforcement learning (grab Sutton & Barto, 2nd edition)—see how the field formalized learning to act under reward, where intrinsic motivation lives now
- Attractor networks (Hopfield 1982)—for the flavor of self-reinforcing patterns Putnam intuited
So...was he right?
"Right" might be the wrong question for something this foundational. Better to ask whether Putnam identified the load-bearing beams any complete theory of mind needs:
- an endogenous, survival-like global objective
- learning as action-driven rule formation
- contradictions as the trigger for innovation
- variable discovery as the mechanism of generalization
- self-reinforcing loops as the substrate of memory and control
On those beams? Today's neuroscience and AI are, if not explicitly Putnamian, at least living in the same neighborhood. The next step would be formalizing his "contradiction to variable" leap at scale. A modern Putnam would probably wire ACC-like conflict signals into a structure-learning module, wrap it all in an intrinsic-motivation RL agent, then drop it into a robot that learns by doing—measuring success not by benchmarks but by how well it keeps itself together while inventing its own subgoals.
If that sounds like where we're headed—homeostatic agents, exploratory policies, world-model learning—well, that's kind of the point. The field has been marching toward Putnam's mountain range from different trailheads. Now that we can finally read him, we can see the path he was trying to blaze.
Epilogue: two points that almost touch
Go sit inside Moore's Oval with Points at Princeton. Look up. Those two bronze tips reach inward, close but never quite meeting. It's a shape that turns near-conflict into something beautiful—that moment before a clash resolves into new form. Putnam thought that's where minds live: balanced on the edge between rules that don't quite fit, reaching for the variable that'll make them both true.
He never got to see his theory meet the world the way he hoped. But those points are close now—probably closer than he could've imagined. Our machines are learning to act, our models to predict, our robots to find the variables that matter. If we build agents in the next few years that truly learn by contradiction, we might finally step through that oval and out the other side—to the logic of mind he was trying to give us all along. The net is visible. The fish are waiting.
Sources & further reading
- Amanda Gefter, "Finding Peter Putnam," Nautilus (June 17, 2025). The definitive narrative and clearest exposition of Putnam's core ideas and biography. Also links to primary sources.
- The Peter Putnam Papers (archival site with scanned manuscripts): "Outline of a Functional Model of the Nervous System" (1963); "Memo for Dr. Wheeler: Link Syntax to Subjectivity" (1966); "On the Origin of Order in Behavior" (1966).
- John B. Putnam Jr. Memorial Collection of Sculpture (Princeton University Art Museum overview).
- Encyclopedia of Cleveland History entry on Mildred Andrews and Peter Andrews Putnam (Nature Conservancy bequest; Houma period).
- Arthur Eddington's fisherman's-net allegory (recounted via the St Andrews "MacTutor" history site).
- J. J. Hopfield (1982), "Neural networks and physical systems with emergent collective computational abilities," PNAS.
- M. M. Botvinick et al. (2001), "Conflict monitoring and cognitive control," Psychological Review; updated in Botvinick et al. (2004), Trends in Cognitive Sciences.
- J. K. O'Regan & A. Noë (2001), "A sensorimotor account of vision and visual consciousness."
- Karl Friston (2010), "The free-energy principle: a unified brain theory?"
- Sutton & Barto (2018), Reinforcement Learning: An Introduction, 2nd ed. (intrinsic motivation, control as learning).