Deprogramming Education: The $40,000 Question
What Alpha School Taught Me About Trust and Capability
I thought I understood AI’s role in education. Then I watched 100 students prove me completely wrong.
During my Digital Learning Institute training, I developed what I thought was a practical vision for AI in education. I pictured classrooms where students worked on AI-powered adaptive learning platforms while teachers circulated the room, managing needs, intervening with struggles, keeping everyone on task. The AI would deliver personalized content. The teacher would maintain order and provide human support.
It made perfect sense. I’d spent years in public education watching teachers battle impossible odds—thirty-plus students with vastly different needs, behavioral challenges, home situations ranging from stable to chaotic, constant distractions, and the Sisyphean task of keeping everyone learning while managing everything that prevents learning from happening.
My vision wasn’t revolutionary. It was pragmatic. AI as a tool to make classroom management more effective. To help teachers do an impossible job slightly better.
Then I walked into Alpha School in Austin, and realized I’d been designing solutions for a broken system instead of questioning why the system was broken in the first place.
The Setup
My visit started with lunch with Scott Finkelstein, VP of Private School Relations for 2 Hour Learning, the curriculum company behind Alpha’s model. We talked about education’s failure to adapt to the digital age, how the “information scarcity paradigm” that built public education no longer reflects reality, and what it means to rebuild learning systems from first principles.
Scott described Alpha’s evolution over 10+ years into their current “2 Hour Learning” model: students complete traditional academics in two hours each morning using AI-powered adaptive software, then spend afternoons on real-world skill development and passion projects. The performance claims were striking—students learning at 2x-4x traditional speed, SAT scores in the top 1% statewide, MAP testing showing double the expected annual growth.
Walking from lunch into the school felt like entering a successful business rather than an educational institution. The lobby opened into a large learning space with 10-12 students scattered at tables, privacy booths along walls, second-floor walkways, and prominent screens displaying learning cycle timers. Students were at various stages of 25-minute “focus sessions” followed by break periods.
The atmosphere was purposeful but relaxed. Professional rather than institutional.
And then I started noticing what wasn’t happening.
What I Actually Witnessed
Over the next two hours, I observed approximately 100 students across multiple learning spaces. Middle schoolers were still in their structured academic sessions when I arrived at 1 PM. High schoolers had moved into afternoon project work.
In one room, middle school students were leading a “town hall meeting”—a regular Friday activity where they facilitated discussion with a visiting entrepreneur about his phone usage app. Three to four students ran the session with professional questioning techniques and confident engagement. Their peers offered sophisticated feedback about user experience and software design.
I spoke briefly with the middle school guide, a former social worker who’d been at Alpha for eight years. She described her role as “social-emotional support” and “meaningmaker”—helping students “clear mental and emotional clutter to free themselves up for the academic experience.” When I asked about classroom management, she looked almost confused. “We don’t have those issues here,” she said simply. The town hall meeting was her favorite part of the week.
Upstairs, I found roughly 15 high school students conducting what Scott called “one of the best run corporate meetings I’ve ever seen.” They were facilitating their own “Brain Lift” session—market validation discussions for a content creation app they were developing. The conversation covered user onboarding strategies, engagement psychology, and business model validation.
Multiple parallel conversations were happening simultaneously. Students were synthesizing insights naturally, building on each other’s ideas, challenging assumptions, refining strategies. The young woman leading the discussion was articulate, focused, and remarkably poised.
No adult was directing any of it. A film crew was documenting the session, but they were there to observe, not to intervene. The students were completely self-directed.
And here’s what struck me most: during my entire two-hour visit, across 100+ students in multiple spaces, I never once saw an adult stand up to direct, manage, or intervene with student learning.
Not once.
When the timer shifted from focus sessions to breaks, students transitioned naturally… some stood and moved around, some picked up phones, some sat quietly talking with peers. No chaos. No need for management. Just students managing themselves.
The absence of management wasn’t creating problems. It was enabling remarkable capability.
The Immediate Dismissal
My first reaction was reflexive: “Of course this works—look at the demographic.”
Alpha’s families pay $40,000+ annually for this education. These aren’t students dealing with food insecurity, unstable housing, untreated trauma, or parents working three jobs. These families explicitly chose a mastery-based, AI-integrated model and have the resources to support it at home.
If public education is a market stall that has to take ALL the produce, Alpha is selling premium items from well-tended orchards. My public school vision was designed for the messy reality of serving everyone: the bruised fruit, the irregular shapes, the ones that need extra care just to be viable.
It was easy to conclude: this model works because of privilege, not because of pedagogy. It wouldn’t translate to public schools because public schools face real challenges that money insulates you from.
Comfortable conclusion. Intellectually defensible. Completely misses the point.
The Uncomfortable Data Point
Because here’s what I learned after my visit: Alpha opened a campus in Brownsville, Texas—a small community on the Mexican border with less than half the per capita income of Austin. Students entering second grade averaged in the 31st percentile.
One year later, they’d climbed to the 84th percentile.
Same model. Dramatically different demographics. Exceptional results.
So maybe the question isn’t whether disadvantaged students can handle this level of autonomy. Maybe the question is whether we’ve ever actually trusted them with it.
The Real Question We’re Avoiding
What if the issue isn’t that some students can’t self-direct?
What if the issue is that we’ve built an entire educational system on the assumption that they can’t?
Think about the logic chain:

1. Students need constant management and direction.
2. Therefore we build systems that manage and direct constantly.
3. Students never develop self-direction skills.
4. Which proves they need constant management and direction.
We’ve confused “dealing with difficult circumstances” with “incapable of self-direction.” We’ve mistaken system-created dependency for inherent limitation.
My original AI vision—teachers circulating to manage student needs while AI delivers content—wasn’t wrong because the technology was bad. It was wrong because it accepted the premise that students fundamentally need to be managed.
Alpha’s model, and Brownsville’s results, suggest something radically different: the management problem isn’t student incapability. It’s institutional design that never trusts students with capability in the first place.
What AI Is Actually Exposing
The current educational system wasn’t designed for student learning. It was designed for industrial-age workforce needs: compliance, time-based progression, batch processing of humans through standardized experiences.
It made sense when information was scarce and teacher-controlled distribution was the only way to access knowledge. It made sense when the goal was producing factory workers who followed instructions and worked in predictable rhythms.
It makes no sense in an age of information abundance where students can access more knowledge in five minutes than their teachers could provide in five years.
Grade levels are arbitrary constraints, not learning requirements. Six-hour school days are about childcare logistics, not cognitive optimization. Standardized testing measures system compliance, not actual capability.
The system creates the problems it then must manage. Students act out because they’re bored, constrained, and treated as incapable. Then we point to the acting out as evidence they need more constraint and management.
And now AI comes along and threatens to make the whole charade impossible to maintain.
The Real Resistance
When educators push back against AI in classrooms, the language is always about protecting students: preventing cheating, ensuring authentic learning, maintaining academic integrity, preserving the student-teacher relationship.
But listen more carefully and you hear something else: “If students can learn without us controlling every step, what’s our role?”
The resistance isn’t about student welfare. It’s about institutional obsolescence.
Teachers aren’t the problem.
Well-meaning teachers are trapped in a system that was never designed for actual learning. But when AI threatens to expose that the emperor has no clothes, the instinct is to defend the clothes rather than admit we’ve been naked all along.
We’re protecting lesson plans that students find boring. Grading systems that measure compliance rather than understanding. Classroom control that treats adolescents like they’re incapable of managing themselves. Pacing guides that ignore whether anyone is actually learning.
We’re not protecting students from AI. We’re protecting a system that never served them well in the first place.
My Vision Was Both Wrong and Right
I was wrong to design for better management of a broken system. The question isn’t “how do we use AI to manage students more effectively?” The question is “why are we managing them at all?”
But I was also right that public school teachers need help. They absolutely do. Just not help managing better.
They need permission to stop managing entirely. They need systems that trust students instead of constraining them. They need to become what that Alpha guide had become—a meaningmaker and support system rather than a controller and director.
The shift isn’t from teacher-led to AI-led learning. It’s from control to trust. From management to empowerment. From assuming incapability to recognizing what students can do when we get out of their way.
The Uncomfortable Truth About “All Students”
Yes, Alpha’s $40,000 families bought their way OUT of the traditional system.
But what did they buy out of, exactly?
They bought out of a system that assumes ALL students need constant management. A system that treats self-direction as a privilege earned through compliance rather than a capability to be developed. A system that confuses “free and appropriate public education” with “batch processing through standardized experiences.”
The traditional system doesn’t just fail disadvantaged students by not meeting their needs. It fails ALL students by never trusting their capability in the first place.
Brownsville’s results suggest the trust matters more than the tuition. When you remove the institutional constraints and actually believe students can self-direct, they demonstrate remarkable capability—regardless of socioeconomic background.
The privilege isn’t the ability to self-direct. The privilege is being trusted with the opportunity to try.
What This Means for the AI Conversation
We’re having the wrong conversation about AI in education.
We’re asking: “How do we prevent students from misusing AI?”
We should be asking: “What does it mean that we automatically assume they will?”
We’re asking: “How do we maintain academic integrity in an age of AI?”
We should be asking: “Why was our definition of integrity so fragile that a technology tool could shatter it?”
We’re asking: “How do we use AI to improve the current system?”
We should be asking: “What if AI is exposing that the current system is the problem?”
Stop designing for control in an age of abundance. Start designing for capability we’ve never acknowledged because we’ve never trusted it enough to develop it.
The question isn’t whether students CAN self-direct, collaborate at high levels, and take ownership of their learning. Alpha proves they can. Brownsville proves it’s not about privilege.
The question is whether we’re willing to let them.
The Choice Ahead
We stand at a fork in the road.
One path: Use AI to make the current system more efficient. Better adaptive learning platforms. Smarter classroom management tools. More personalized pacing through the same standardized curriculum. Teachers becoming better managers of AI-enhanced compliance.
The other path: Let AI expose that the system itself was never designed for learning. That the management problem is system-created. That students have always been more capable than we trusted them to be. That the emperor has no clothes and maybe never did.
The first path is safer. Incremental. Politically feasible. It preserves institutional structures, professional identities, and comfortable assumptions.
The second path is terrifying. It requires admitting we’ve been doing this wrong for a very long time. It requires trusting students in ways that feel reckless when you’ve spent your career managing them. It requires reimagining education from first principles instead of just optimizing existing structures.
Alpha isn’t showing us what money can buy. It’s showing us what trust enables.
Closing Reflection
I walked into Alpha School expecting to see expensive educational innovation… rich kids with access to cutting-edge technology producing predictably impressive results.
I walked out questioning every assumption I’d brought with me about student capability, teacher roles, and what education is actually for.
The most striking moment wasn’t the sophisticated business discussions or the professional facilitation skills. It was the complete absence of adult intervention. The trust that permeated every interaction. The assumption of capability rather than the assumption of need for control.
And then learning about Brownsville—students starting far behind and leaping ahead using the same model—forced an even more uncomfortable realization:
We haven’t been protecting students from their own limitations. We’ve been protecting a system that requires us to believe in those limitations to justify its existence.
The $40,000 question isn’t “Can all students do this?”
The $40,000 question is “Why have we built a system that assumes they can’t?”
And what happens when AI makes that system impossible to maintain?
I don’t have all the answers. I’m still working through what I witnessed and what it means. But I know this: my original vision of AI in education—teachers managing students more effectively with better tools—was born from a system that never should have required that much management in the first place.
Maybe the real potential of AI in education isn’t making teachers better managers.
Maybe it’s making management unnecessary entirely.
And maybe that’s exactly what terrifies us.