AgentSee

A for-profit, independent research effort building toward radical improvements in mental well-being and human agency. For-profit because commercial success at scale should be possible only if people actually prosper. We believe personal growth should be as systematic as engineering and as natural as play. That requires new science, new tools, and new infrastructure. We're building all three.

The bottleneck is moving

AI is making execution cheap. Building, writing, coding, designing: the machines can do it. The question is no longer who can do the work. It's who can decide what's worth doing.

That takes clarity. It takes the ability to step back, see what you're actually trying to do, and hold that vision steady under pressure. That capacity has a name. Agency: the ability to recognize what you genuinely want and act on it.

It sounds simple. It isn't. Agency is biological. It runs on specific neural systems that degrade under stress, uncertainty, and overload. The cognitive machinery you need to think clearly about your future is the same machinery that goes offline when you're anxious about your future. Not a metaphor. A well-documented neurobiological constraint. Where technology engages that machinery today, it is designed to exploit it. We will reverse that.

Our environment is accelerating and destabilizing while demanding clearer thinking. If judgment is the only uniquely human role left, stabilizing it is infrastructure.

When help makes it worse

When you're in that state, stressed, prefrontal cortex offline, and someone tries to help you, the help can fail. At times it makes things actively worse. [1]

[1] Evidence: intervention backfire. This is not a hypothesis. In clinical research, when therapists use decisional balance (a structured exploration of pros and cons) with clients who are ambivalent and pre-decisional, it consistently decreases their commitment to change (Miller & Rose 2015). Separately, a meta-analysis across 36 studies (N = 3,025) found that sustain talk (client language favoring the status quo) independently predicts worse outcomes (r = .19, p < .001; Magill et al. 2018). Notably, MI-consistent therapist skills correlate with increased sustain talk as well as change talk (r = .40), meaning skilled intervention surfaces both sides. The implication for system design: the method and the timing both matter. A well-designed intervention delivered to someone who is not ready to process it does not land neutrally.

Your brain has a circuit that continuously monitors one question: do my actions produce outcomes? [2] When the answer is yes, it suppresses the stress response. You stay capable. When the answer is no, stress deepens.

[2] Mechanism: controllability circuit. This circuit is centered in the ventromedial prefrontal cortex (vmPFC). When it detects that your actions lead to outcomes, it sends an inhibitory signal to the dorsal raphe nucleus, a brainstem structure that drives serotonin-mediated passivity. That signal is what keeps you feeling capable. The critical finding: passivity and helplessness are not learned. They are the brain's default state. The feeling of control is what has to be actively maintained by this circuit, moment to moment (Maier & Seligman 2016, reversing their own foundational 1967 learned-helplessness model). When the circuit goes offline, you don't lose motivation. You lose the biological mechanism that was suppressing helplessness.
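The inversion in that finding, passivity as the default and control as an actively maintained suppression that stress itself degrades, can be sketched as a toy control loop. Everything below is invented for exposition: the function, numbers, and dynamics are illustrative assumptions, not a biophysical model.

```python
# Toy model of the controllability logic described above.
# Illustrative only: names, values, and dynamics are invented for
# exposition, not fitted to any neural data.

def passivity_drive(controllability: float, stress: float) -> float:
    """Passivity is the default; the control signal actively suppresses it.

    controllability: estimated probability that actions produce outcomes (0..1)
    stress: acute stress level (0..1), which degrades the detector itself
    """
    # Stress takes the prefrontal detector partially offline, so the
    # effective inhibitory signal is the controllability estimate
    # attenuated by stress.
    inhibition = controllability * (1.0 - stress)
    # The default drive toward passivity is 1.0; inhibition suppresses it.
    return max(0.0, 1.0 - inhibition)

# Calm and in control: passivity strongly suppressed.
print(round(passivity_drive(controllability=0.9, stress=0.1), 2))  # 0.19
# Same sense of control, but acutely stressed: suppression collapses.
print(round(passivity_drive(controllability=0.9, stress=0.8), 2))  # 0.82
```

Note what the sketch captures: nothing "adds" helplessness in the stressed case; the suppression simply fails, which is the point of the Maier & Seligman reversal.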

The problem: that circuit runs on the same hardware that stress already took offline. [3] So when someone offers you advice while you're overwhelmed, your brain can't register it as help. It registers as one more demand you can't meet.

[3] Mechanism: prefrontal shutdown. Under acute stress, norepinephrine and dopamine spike and trigger an intracellular cascade in prefrontal cortex neurons: potassium channels (HCN and KCNQ) open on dendritic spines, draining the persistent electrical firing that sustains working memory and executive function (Arnsten 2009; Arnsten 2015). This is not gradual. It operates on a seconds-to-minutes timescale. A meta-analysis across 51 studies confirmed reliable degradation of working memory (d = -0.52), cognitive flexibility, and cognitive inhibition under acute stress, with the largest effects under high cognitive demand (Shields et al. 2016). The controllability circuit described above runs on this same prefrontal hardware. When the hardware degrades, the circuit that detects control degrades with it.

Nearly every AI mental health tool, every app, every chatbot, every intervention makes the same assumption: that the person receiving help is in a state where they can process it. That assumption is biologically false. It means the entire approach is structurally broken, not by accident, but by design.

What we're building instead

A different kind of system.

We're building toward a causal model of how you actually work. [4] Precise enough to build from. The science already exists. It sits in fields that don't talk to each other. [5] Put together, each of those problems becomes a design constraint.

[4] The axioms. The specification is built from six axioms, the foundational commitments everything else derives from.
1. Humans are biological systems first. Everything cognitive and psychological emerges from and is constrained by biological states.
2. The machine maintains a comprehensive, continuously refined model of how humans function, grounded in first-principles mechanisms, not rules of thumb.
3. The machine constructs a specific model of each person it serves. The general model updates with science. The individual model updates with every interaction.
4. The human controls the ends. The machine observes, estimates state, and stabilizes. It never decides what you should want.
5. The system optimizes for your capacity for self-directed action. Not for a behavior. Not for an outcome.
6. Engagement optimization is prohibited as a terminal objective.

[5] The fields. Neuroscience, psychology, control engineering, computer science, philosophy, computational neuroscience, active inference, clinical research, behavioral science, systems thinking, business, and organizational design.

A system whose interventions are shaped by your values and by your cognitive capacity in the moment. [6]

[6] How this would work. In principle: a system that reads you and, over time, learns what you value, what you're working toward, what matters to you when you're thinking clearly. Likely through signals like heart rate variability, skin conductance, response speed, and language complexity, all downstream traces of neurochemistry, measured against your own baseline and combined into a continuous state estimate. What it offers you, and whether it offers anything at all, depends on that estimate. The honest gap: research on classifying cognitive states from wearable sensors is active and advancing rapidly. What doesn't yet exist is the specific integration: personalized models at the resolution needed to drive state-matched interventions for individuals. Whether that integration is achievable with consumer-grade hardware is among the early questions the research program is designed to answer.
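The baseline-relative estimate described there could look something like the following minimal sketch. The signal names, weights, and gating threshold are hypothetical illustrations, not a validated model or a real sensor pipeline.

```python
from statistics import mean, stdev

# Hypothetical sketch: normalize each signal against the person's own
# history, combine into one load estimate, and gate interventions on it.
# Signal names, weights, and the threshold are illustrative assumptions.

def zscore(value: float, baseline: list[float]) -> float:
    """How far a reading sits from this person's own history."""
    mu, sigma = mean(baseline), stdev(baseline)
    return (value - mu) / sigma if sigma > 0 else 0.0

def load_estimate(current: dict[str, float],
                  baselines: dict[str, list[float]],
                  weights: dict[str, float]) -> float:
    """Weighted sum of baseline-relative deviations across signals."""
    return sum(w * zscore(current[k], baselines[k]) for k, w in weights.items())

def should_offer(load: float, threshold: float = 1.0) -> bool:
    """Gate: above threshold, offer nothing; the person can't process it."""
    return load < threshold

# Low HRV and slow responses relative to baseline push the estimate up.
baselines = {"hrv": [55.0, 60.0, 58.0, 62.0],
             "response_ms": [400.0, 420.0, 410.0, 430.0]}
weights = {"hrv": -0.5, "response_ms": 0.5}  # lower HRV -> higher load
now = {"hrv": 45.0, "response_ms": 500.0}
print(should_offer(load_estimate(now, baselines, weights)))  # False: not now
```

The design choice worth noticing is the last function: "offer nothing" is a first-class output, which is exactly what the backfire evidence earlier in the document demands.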

Everything you've read so far is not merely a problem description. It's a specification. Taken seriously, it tells us exactly what kind of system we need to build.

What if you don't know what you actually want? Wouldn't it be nice to have some help finding out? Can we really trust a machine to do that? That's an open question, and it's the one we intend to explore concretely.

We're after human alignment, and along the way, better ways to align AI. Ultimately: a system that enables you to figure out which parts of your story have been written for you and which parts you still get to author.

The human always decides where to go. The system makes it possible to see clearly enough to decide.

How you'd know it worked

Eyeglasses versus GPS. Glasses correct a structural limitation and expand your capability. They don't make your eyes weaker. GPS provides efficiency but degrades your internal navigation over time. After a decade of GPS, your phone dies and you're lost.

The success metric: your capacity for self-directed action is higher after using the system, even once it's removed. You keep it because it extends you. If you're worse without it, it failed. It became GPS.

The system is governed by constraints that make it structurally impossible to optimize for engagement instead of capacity. If the user doesn't get better, the system has failed by its own metrics. Not as a policy. As an engineering requirement.
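One way to make that requirement mechanical rather than aspirational, as a hypothetical sketch (the scoring function, names, and budget are invented for illustration): the only quantity that can raise the score is capacity gain measured after the system is removed, and engagement appears solely as a hard budget that can disqualify a run, never as a reward term.

```python
# Hypothetical evaluation harness: capacity is the sole terminal
# objective; engagement is a hard budget, never a reward.
# Names, numbers, and the budget are illustrative assumptions.

def evaluate_run(capacity_before: float,
                 capacity_after_removal: float,
                 minutes_engaged: float,
                 engagement_budget: float = 120.0) -> float:
    """Score a deployment by capacity gain measured AFTER removal.

    Raises if the engagement budget is exceeded: extra engagement can
    only disqualify a run, it can never raise the score.
    """
    if minutes_engaged > engagement_budget:
        raise ValueError("engagement budget exceeded: run disqualified")
    return capacity_after_removal - capacity_before

# Two runs with identical capacity gain score identically,
# no matter how long each held the user's attention.
print(evaluate_run(0.50, 0.65, minutes_engaged=40) ==
      evaluate_run(0.50, 0.65, minutes_engaged=110))  # True
```

Because engagement never appears in the return value, no optimizer pointed at this score can profit from holding the user longer, which is the structural impossibility the paragraph above describes.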

Want to go deeper? Pick the door that fits.

Below, each of these doors opens a game that is not yet live. Come back soon. The work will meet you where you are.

This path is for the person who clicked a link out of curiosity or respect for whoever sent it. You'll get the whole picture in plain language, starting from what you already know about being human. Coming soon.

Positioning against Schoeller's trust-as-extended-control, connections to IWMT and FEP, the plant model gap, and why the integration layer between these frameworks is the actual contribution. Coming soon.

The problem, the architecture, the risks stated honestly, the experimental program with kill conditions, and what a Bell Labs for human well-being actually looks like as an organization. Coming soon.

You've seen interventions backfire. Here's the mechanistic account of when and why, grounded in the neurobiology you already know, with testable predictions and an experimental program designed to falsify the core claims. Coming soon.

The foundational path. No jargon. No shortcuts. If the ideas can't survive being said simply, they aren't real. This is the longest path and the most important one. Coming soon.

Want to talk?

If this intersects your world, if you work on adjacent problems, or if you just want to have a conversation, reach out.