steamHouse Position on Artificial Intelligence

Conscious Humans in the Age of AI

Version: 1.0
Date: January 2026
Status: Draft for Review

EXECUTIVE SUMMARY

Artificial intelligence represents a powerful tool that amplifies whatever the user brings to it. AI excels at the pattern-based, automatic processing that our cognitive system handles below the level of awareness—and struggles with the conscious and purposeful processing that makes us authors rather than passengers.

steamHouse develops precisely the capacities AI cannot replicate: genuine care about outcomes, meaning-making from experience, moral judgment with accountability, and purposeful authorship of one's own life.

We neither fear AI nor embrace it uncritically. We teach young people to work skillfully with AI while maintaining their identity as conscious authors.

In the age of AI, developing conscious humans is not merely useful—it's the irreducibly human contribution.

PART ONE: WHY AI MATTERS FOR STEAMHOUSE

The Moment We're In

Young people today will inherit an AI-saturated world. They will work alongside AI systems, have opportunities shaped by algorithms, and face decisions about technology that previous generations never imagined.

This isn't speculation. It's already happening.

Every app on a smartphone is optimized to trigger automatic reactions—to bypass conscious choice, to keep users in loops they didn't choose. The attention economy runs on a simple proposition: minds that don't direct themselves can be directed.

And now AI offers to do your thinking for you. Why struggle to write when AI can generate text? Why wrestle with a problem when AI can solve it? Why develop judgment when AI can advise?

The temptation is to outsource more and more cognition to machines. And for many tasks, this makes sense—AI handles routine processing better than humans.

But outsourcing conscious processing—the meaning-making, the caring, the choosing—doesn't make sense. It's not even possible. You can outsource the appearance of these things. You cannot outsource the reality.

A life lived on autopilot, with AI smoothing the remaining friction, isn't a life authored. It's a life administered.

The Automation of the Automatic

Here is the pivot that makes this moment different from every previous technological transition:

Automatic thinking is being automated.

Previous technologies extended human physical capacities—the wheel, the lever, the engine. Even previous information technologies extended specific cognitive capacities—writing extended memory, calculation extended arithmetic.

AI is different. AI can now perform the rapid, pattern-based, intuitive processing that humans do automatically. The very cognitive mode that the modern environment exploits is the mode that AI replicates.

What remains distinctly human is conscious processing:

  • Genuine care about outcomes

  • Values clarification in ambiguous situations

  • Meaning-making from raw experience

  • Authentic relationship

  • Purposeful choice when the stakes are real

These require consciousness—not just intelligence. And these are precisely what steamHouse develops.

The Painful Irony

Just as conscious thinking becomes most essential, the modern environment—including AI itself—makes it harder to develop.

The attention economy captures automatic responses. Algorithms learn what triggers reactions and feed users more of it. Social forces—commercial, political, ideological—compete to recruit automatic reactions for their purposes.

So the defensive case remains essential: if you don't wield your mind as a tool to your own purpose, someone else will enlist it for theirs.

But defense alone produces fearful, reactive people who are less exploited but not more alive. The goal isn't just protection. The goal is flourishing.

PART TWO: WHAT AI IS AND ISN'T

The Pattern Recognition Reality

The foundational insight across research: current AI systems perform pattern recognition and statistical prediction—not understanding, reasoning, or genuine intelligence in the human sense.

AI systems are trained to map inputs to outputs based on patterns in training data. They can generate impressive outputs without accessing meaning.

A language model can produce grammatically correct text about hospitals without knowing what a hospital is. An image classifier can identify dogs without understanding "dogness."

Practical implication: AI outputs can be fluent, confident, and wrong. Testing for genuine understanding—not just fluent output—is essential critical thinking for the AI age.
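The distinction between pattern prediction and understanding can be made concrete with a toy model. The sketch below is illustrative only (real language models are large neural networks, not bigram tables): it learns which word tends to follow which in a tiny corpus, then generates fluent-looking text with no access to what any word means.

```python
import random
from collections import defaultdict

# Toy "training data": a handful of sentences about hospitals.
corpus = (
    "the hospital treats patients . "
    "the hospital has doctors . "
    "patients visit the hospital . "
    "doctors treat patients ."
).split()

# Record which word follows which: pure pattern statistics, no meaning.
follows = defaultdict(list)
for a, b in zip(corpus, corpus[1:]):
    follows[a].append(b)

def generate(start, n=8, seed=0):
    """Emit up to n words by repeatedly sampling a word seen after the current one."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(n):
        candidates = follows.get(out[-1])
        if not candidates:
            break
        out.append(rng.choice(candidates))
    return " ".join(out)

print(generate("the"))
```

The output looks grammatical because the statistics are locally consistent, yet nothing in the program knows what a hospital is. Scaling the same idea up by many orders of magnitude makes the fluency startling; it does not, by itself, supply grounding.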

The Jagged Frontier

Ethan Mollick offers the concept of the "jagged frontier" to describe AI capabilities. AI doesn't advance uniformly—it might write poetry better than many humans while failing at simple arithmetic, or generate convincing code while hallucinating basic facts, all in the same session.

This jaggedness means expectations based on AI success in one domain don't transfer to others. Learning what AI can and cannot do requires hands-on experimentation and continuous updating.

The Common Sense Gap

Common sense remains AI's deepest limitation. Humans effortlessly know that water is wet, fire is hot, time moves forward, and that if you put something in a box and close it, the object is still there.

This vast store of background knowledge, acquired through living in bodies that interact with physical and social worlds, has no equivalent in AI systems trained on text and images alone.

This validates experiential education. Common sense develops through real engagement with the world—not from reading about it. The embodied, relational, hands-on learning steamHouse emphasizes develops what AI cannot.

AI as Alien Intelligence

Yuval Noah Harari makes a provocative argument: AI is not merely "artificial"—it's fundamentally alien.

Previous technologies extended human capacities (telescope extends sight, calculator extends arithmetic). AI operates by different principles entirely:

  • AI can pursue goals autonomously (unlike any previous tool)

  • AI processes information in ways humans cannot comprehend

  • AI may develop "inter-computer realities" divorced from human concerns

  • AI decisions increasingly shape human lives without human understanding

When a language model says "I am conscious," it's echoing patterns learned from human data. There's no independent reason to believe those words reflect an internal reality. A simulation of a hurricane isn't wet or windy; a simulation of consciousness need not be conscious.

PART THREE: WHAT AI CANNOT DO

Five Irreducibly Human Capacities

What AI cannot do:

1. Care.

AI processes. It doesn't care about outcomes. It has no stake in what happens. You can give it any goal and it will optimize toward it with equal indifference.

Caring—having things genuinely matter to you—requires a being for whom things can matter.

2. Mean.

AI generates tokens that statistically follow from previous tokens. It produces text about meaning without accessing meaning. A language model can write eloquently about grief without knowing what grief is.

Meaning requires a meaning-maker—a consciousness for whom things signify.

3. Experience Consequences.

When you make a choice, you live with it. The outcome affects your life, your relationships, your future. This skin in the game shapes how you choose.

AI models consequences; it doesn't bear them. The weight of consequence creates conscience.

4. Exercise Moral Judgment.

Ethics isn't calculation. It's navigating competing values, taking responsibility for choices that could go either way. Machines can simulate this process. They can't do it.

Moral agency requires a being who can be held accountable—who has something at stake.

5. Create Genuine Novelty.

AI recombines existing patterns impressively. But genuine creativity—the insight that surprises even the creator, the combination that wasn't latent in the training data—emerges from consciousness engaging with consciousness, from the collision of perspectives, each genuinely held.

Why Bandwidth Poverty Yields Wisdom

Neil Lawrence, DeepMind Professor of Machine Learning at the University of Cambridge, makes a crucial observation about human intelligence. We process information slowly—perhaps 40-60 bits per second consciously, compared to the billions of bits per second that computers handle.

This isn't a bug. It's a feature.

Our bandwidth poverty forces abstraction. We can't track every detail, so we must identify what matters. We can't process every possibility, so we must develop judgment. Our limitations yield wisdom.

Machines have no such constraint and thus no such capacity. They optimize; they don't understand. They correlate; they don't mean. They predict; they don't experience.
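The scale of this bandwidth gap is easy to work out from the figures above. Both numbers in the sketch are rough: the conscious-processing estimate comes from the passage, and the machine figure simply assumes a few gigabits per second.

```python
# Rough, illustrative figures only; neither number is a measurement.
human_conscious_bps = 50        # mid-range of the 40-60 bits/second estimate above
machine_bps = 5e9               # "billions of bits per second": 5 Gbit/s assumed

ratio = machine_bps / human_conscious_bps
print(f"A machine channel carries roughly {ratio:,.0f}x the bits of conscious thought.")
```

The point isn't the exact ratio. A gap of roughly eight orders of magnitude is what forces humans to compress, abstract, and judge, while machines can afford never to summarize at all.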

PART FOUR: THE STEAMHOUSE RESPONSE

What We Develop

steamHouse develops the capacities that AI cannot replicate:

Recognition—so you know when you're on automatic and when exploitation is occurring.

Interruption—so you can create space between stimulus and response.

Reflection—so you can check automatic patterns against articulated purposes.

Direction—so you choose responses aligned with who you're becoming.

Training—so your automatic patterns serve your purposes rather than someone else's.

But also:

Meaning-making—so you contribute purpose, not just execute.

Genuine creativity—so you bring what AI cannot.

Relational capacity—so you develop through community what cannot develop alone.

Moral judgment—so you bear responsibility for choices that matter.

Authorship—so your life is your story, not someone else's algorithm.

The Two Capacities

steamHouse develops two complementary capacities:

Defense: Seeing Clearly

The ability to recognize when you're being manipulated. When your automatic responses are being exploited. When the "choice" you're making was engineered by someone with interests opposed to yours.

This includes:

  • Understanding cognitive biases

  • Recognizing technology's exploitation methods

  • Grasping the evolutionary mismatch (Stone Age minds in a modern environment)

  • Developing the metacognition to catch yourself mid-reaction

Defense is essential. Without it, you're a puppet who doesn't see the strings.

Offense: Creating Purposefully

The ability to envision what you actually want and move toward it. To author your life rather than merely react to stimuli. To contribute something meaningful rather than just avoid harm.

This includes:

  • Clarifying your values and purposes (Gold Star Ideals)

  • Building mental models that serve your goals (Red Toolbox)

  • Developing skills that extend your agency (Green Gear)

  • Creating in teams and communities

  • Engaging at scales from family to planet

Offense without defense is naive. Defense without offense is sad.

The goal is authors who can see.

Working WITH AI

Ethan Mollick provides practical guidance for human-AI collaboration:

Use AI as:

  • Thinking partner (not replacement for thinking)

  • Draft generator (not final authority)

  • Knowledge exploration tool (not knowledge source)

  • Creative catalyst (not creative substitute)

Maintain:

  • Critical evaluation of all AI outputs

  • Understanding of AI's jagged capabilities

  • Human judgment as final authority

  • Awareness of bias and error patterns

"Just enough of you, just enough of the AI"—find the collaboration point where human and machine capabilities complement each other.

This isn't replacement. It's amplification. The conscious human who can work skillfully with AI extends their reach enormously. But the key word is conscious. AI amplifies whatever you bring to it. Bring consciousness and purpose, and AI multiplies your impact. Bring unreflective automaticity, and AI accelerates your drift.

The question isn't whether to use AI. The question is what kind of human you'll be when you use it.

PART FIVE: THE TECHNOMORAL VIRTUES

Shannon Vallor proposes virtues specifically needed for flourishing with technology—what she calls "technomoral virtues":

  • Honesty: Especially about AI's limitations and your own

  • Self-control: Resisting AI's pull toward outsourcing conscious thought

  • Humility: Recognizing what we don't understand about AI systems

  • Justice: Ensuring AI serves all people, not just some

  • Courage: Questioning technological inevitability

  • Empathy: Maintaining human connection despite digital mediation

  • Care: Attending to relationships and responsibilities

  • Civility: Sustaining democratic discourse in algorithmic environments

  • Flexibility: Adapting to rapid change

  • Perspective: Maintaining long-term view amid short-term optimization

  • Magnanimity: Generous engagement with difference

These are character traits for navigating the AI age—not technical skills, but the human qualities that determine whether AI serves us or we serve it.

PART SIX: WHAT WE SAY AND DON'T SAY

Procedural vs. Substantive Positions

steamHouse distinguishes between procedural commitments (how to approach questions) and substantive conclusions (what answers to reach).

What We Teach (Procedural):

  • AI literacy: Young people need to understand what AI is and isn't

  • Critical evaluation: All AI outputs must be evaluated, not trusted blindly

  • Skillful use: AI is a tool to be wielded consciously

  • Human distinctiveness: Conscious, purposeful capacities remain irreducibly human

  • Technomoral virtues: Character development for the AI age is essential

What We Don't Decide (Substantive):

  • Will AI become conscious? We don't know; thoughtful people disagree.

  • Should AI development be paused, accelerated, or regulated? Policy questions for citizens to decide.

  • Is AI an existential risk? Contested; we teach how to think about it, not what to conclude.

  • Which AI companies are "good" or "bad"? Not our call.

  • How should individuals use AI in their lives? Personal choice within ethical bounds.

What This Means in Practice

We WILL:

  • Teach how AI systems work (pattern recognition, training data, limitations)

  • Develop critical evaluation skills for AI outputs

  • Practice skillful human-AI collaboration

  • Cultivate technomoral virtues

  • Discuss AI ethics as contested terrain requiring judgment

  • Help young people think clearly about AI's role in their lives

We WON'T:

  • Tell young people whether AI is "good" or "bad"

  • Advocate for specific AI policies or regulations

  • Claim to know whether AI will become conscious

  • Dismiss AI concerns as alarmist OR embrace AI uncritically

  • Pretend we have answers to questions that remain genuinely open

PART SEVEN: THE WAGER

Here is the bet we're making:

If conscious development doesn't matter—if AI can eventually do everything humans do, including meaning-making—then steamHouse is harmless and probably pleasant for participants. We'll have developed young people who think clearly, relate well, and author their lives intentionally. No harm done.

But if conscious development does matter—if there are irreducibly human capacities that AI cannot replicate, and if these capacities require deliberate cultivation—then steamHouse is essential. And every year we fail to develop these capacities in young people, we're losing ground that cannot be recovered.

The bet isn't whether AI will transform everything. It will.

The bet is whether conscious humans will be authors of that transformation or merely subjects of it.

steamHouse exists to develop the authors.

PART EIGHT: AUDIENCE-SPECIFIC APPLICATIONS

For Parents

Your children will grow up in a world where AI handles routine cognitive tasks the way machines handle routine physical tasks. The question isn't whether they'll use AI—it's whether they'll maintain their identity as conscious authors while doing so.

steamHouse develops:

  • The metacognition to recognize when they're drifting vs. choosing

  • The purpose clarity to know what they want AI to help them do

  • The critical thinking to evaluate AI outputs rather than accepting them blindly

  • The human capacities that AI cannot replicate: genuine care, moral judgment, authentic relationship

We neither fear technology nor embrace it uncritically. We prepare young people to wield powerful tools wisely.

For Mentors

Your role is to model conscious engagement with AI:

  • Show how you evaluate AI outputs rather than accepting them

  • Demonstrate when AI is useful and when it isn't

  • Discuss the distinction between automatic and conscious processing

  • Help young people notice when they're outsourcing thinking vs. using tools

The Four Principles apply to AI engagement:

  • Reflective Thinking: Am I using AI consciously or on autopilot?

  • Personal Agency: Am I directing AI or being directed by it?

  • Mutual Respect: What are the human costs of AI systems I use?

  • Objective Reason: Are AI outputs actually accurate?

For Funders

steamHouse addresses a gap that AI makes more urgent, not less: the development of conscious, purposeful human beings.

As AI handles more cognitive tasks, the capacities it cannot replicate become more valuable:

  • Purpose and meaning (knowing what's worth doing)

  • Moral judgment (knowing what's right)

  • Authentic relationship (genuine human connection)

  • Creative novelty (what emerges from consciousness engaging consciousness)

We prepare young people for an AI-saturated world by developing what AI cannot replace. This is not anti-technology; it's pro-human.

For Young People

AI is a tool. Like any tool, it amplifies what you bring to it.

If you bring purpose, AI helps you achieve it faster. If you bring confusion, AI helps you drift faster. If you bring judgment, AI extends your reach. If you bring passivity, AI makes decisions for you.

The question isn't whether to use AI. The question is: who are you when you use it?

steamHouse helps you become someone worth amplifying.

PART NINE: SUMMARY STATEMENT

The Core Position

steamHouse on Artificial Intelligence:

AI represents a powerful tool that amplifies whatever the user brings to it. AI excels at the pattern-based, automatic processing that our cognitive system handles below the level of awareness—and struggles with the conscious and purposeful processing that makes us authors rather than passengers.

steamHouse develops precisely the capacities AI cannot replicate: genuine care about outcomes, meaning-making from experience, moral judgment with accountability, and purposeful authorship of one's own life.

We neither fear AI nor embrace it uncritically; we teach young people to work skillfully with AI while maintaining their identity as conscious authors.

In the age of AI, conscious humans are the point. Everything else is optimization.

DOCUMENT METADATA

File: STEAMHOUSE_POSITION_ON_AI_v1.md
Version: 1.0
Created: January 2026
Purpose: Comprehensive statement of steamHouse's position on artificial intelligence

Related Documents:

  • STEAMHOUSE_EPISTEMOLOGICAL_POSITION.md (epistemological framework)

  • COMBINED_FULSOME_E7_Technology_Innovation_Future.md (research foundation)

  • ESSAY_Conscious_Humans_AI_COMBINED_VERSIONS.md (essay versions)

  • FRAMEWORK_GUIDE_CHAPTER_19.md (consciousness levels)

Audiences:

  • Board members and leadership (understanding organizational position)

  • Funders (understanding mission relevance in AI age)

  • Parents (understanding approach to technology)

  • Mentors (guidance for discussions)

  • Educators and partners (alignment on approach)

  • Young people (understanding what we develop and why)

Review Notes:

  • Position is consistent with epistemological framework (procedural vs. substantive)

  • Draws from existing E7 compendium and essay work

  • Maintains "how to think, not what to think" stance on contested questions

  • Ready for stakeholder review and refinement

steamHouse: Developing conscious humans in the age of AI.

Neither technophobic nor naively enthusiastic. Rigorously human.