Last updated: 2026-02-05 | 10 min read
At Davos this year, the narrative was everywhere: "abundance for all." Elon Musk told the World Economic Forum we're approaching ubiquitous AI and robotics that will create an explosion in the global economy "truly beyond all precedent." He even recommended people not save for retirement.
Meanwhile, Dario Amodei predicted half of entry-level white-collar jobs could disappear—apparently that's fine because the abundance will make up for it.
But here's the thing: the abundance narrative is probably the wrong frame for most of us thinking about the next few years. Instead, we should think about the bottleneck economy.
It's more practical. More likely to get you employed. More likely to help you find ways to succeed in the AI economy. Let's talk about why.
Cognizant released research claiming AI could unlock $4.5 trillion in US labor productivity. But there was a massive caveat that no one paid attention to:
The value will only materialize if businesses can implement it effectively.
That's the biggest asterisk I've ever seen. And according to Cognizant's CEO, most businesses haven't done the hard work yet.
Here's the gap: it's not about the capability of models. It's about implementation. It's about value capture. The AI already exists. The trillion-dollar value doesn't just show up and flow automatically.
This isn't the fountain of youth. The question isn't whether AI creates abundance—it does. The interesting question is where are the bottlenecks? Because that's where value concentrates.
Let's get clear on what a bottleneck actually is. It's not just any constraint. It's the high-leverage binding constraint—the one that determines actual throughput in a system.
If you improve anything else, throughput doesn't budge, because the bottleneck still sets the pace. But improve the bottleneck even a little, and the whole system moves.
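A toy model makes the point concrete. The stage names and numbers below are hypothetical, purely for illustration: throughput is the minimum capacity across the stages, so capacity added anywhere other than the bottleneck never shows up in the output.

```python
# Toy pipeline: system throughput is set by the slowest stage (the bottleneck).
# Stage names and capacities are hypothetical, chosen only to illustrate the idea.
stages = {"intake": 120, "review": 30, "approval": 45, "delivery": 90}  # units/day

def throughput(stages):
    """The system moves only as much as its most constrained stage allows."""
    return min(stages.values())

print(throughput(stages))   # 30 -- "review" is the binding constraint

stages["delivery"] = 180    # double a non-bottleneck stage...
print(throughput(stages))   # ...still 30; nothing changed

stages["review"] = 40       # a modest improvement at the bottleneck...
print(throughput(stages))   # ...and the whole system moves to 40
```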
This is basic systems thinking, but most people ignore it. They optimize for what's visible, comfortable, or what they're already good at. They work harder instead of differently. They add capacity where there's already lots of capacity in the system and ignore the choke point.
The history of the corporation illustrates this perfectly. Every dominant organizational form emerged to dissolve the binding constraint of its era.
The pattern is consistent: Whoever solves the binding constraints captures disproportionate value. Everyone else participates in the abundance that's created.
The binding constraint on AI capability is increasingly atoms, not bits.
Jensen Huang told Davos that AI needs more energy, more land, more power, and more trade-skilled workers. Contemporary hyperscale data centers consume 100+ megawatts. Training a single frontier model can require sustained exaflops of compute for weeks.
The electricity demands are approaching those of small nations.
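A rough back-of-envelope sketch shows the scale. The 100 MW figure comes from above; the roughly 10.5 MWh per US household per year is my own approximation, used only for comparison.

```python
# Back-of-envelope: annual energy use of a single 100 MW data center at full load.
# Assumptions: 100 MW continuous draw (figure from above) and ~10.5 MWh per
# US household per year (approximate average, used purely for scale).
facility_mw = 100
hours_per_year = 24 * 365
annual_mwh = facility_mw * hours_per_year            # 876,000 MWh, ~0.88 TWh

household_mwh_per_year = 10.5
equivalent_households = annual_mwh / household_mwh_per_year

print(f"{annual_mwh:,} MWh/year, roughly {equivalent_households:,.0f} households")
# 876,000 MWh/year, roughly 83,429 households -- and that's one facility.
```

Stack up enough of those facilities and the small-nation comparison stops sounding like hyperbole.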
This matters because physical infrastructure operates on very different timelines than software. You can ship a new model in months if you have compute. Building a data center to run it at scale? That takes moving atoms around. That takes time. Permitting alone can take years. Expanding grid capacity is even harder.
Google recently shared that it is bottlenecked on grid interconnection: the ability to actually get new capacity connected to the power grid.
The result: a structural wedge between what's technically possible and what's deployable today. Capability sprints ahead while infrastructure plods behind.
The joke is "it's always Jensen" and Nvidia—and that's not entirely wrong. But it's also whoever can navigate physical constraints faster. Better site selection. Faster permitting. More efficient construction. Smarter energy sourcing.
This isn't a temporary bottleneck. It's structural.
Companies that understand this are already working the physical constraints: securing sites, power, and grid access years before they need them.
Companies that don't? They're assuming compute will magically appear.
The physical layer creates an opportunity for an entirely different kind of company—one we normally don't think of as an AI business.
Someone has to build these facilities. Provision the power. Manufacture the cooling systems. Install the racks. Connect the fiber.
These are what Jensen calls "high-quality jobs," and he can't hire enough of them. Salaries for skilled trades in these spaces have nearly doubled. I'm not surprised.
The abundance of AI at the application layer depends on scarcity being resolved at the physical layer. And that resolution means people.
When Demis Hassabis spoke at Davos, his biggest concern wasn't technical. It was the loss of meaning and purpose in a world where productivity is no longer the priority. He also worried that we lack "institutional reflection" about AI.
He's really talking about coordination problems—and coordination runs on trust.
Here's the reality: When anyone can generate sophisticated AI content at the touch of a button—text, images, video, code—the cost of generation collapses. But the cost of trust doesn't get cheaper.
If anything, trust gets harder because the difference between synthetic and authentic is becoming indistinguishable. Every piece of content could be fabricated. Every credential could be fake. Every piece of information might be generated to manipulate you.
When you can't distinguish signal from noise, you're overwhelmed as a human. You look for someone to trust.
Trust is the infrastructure of coordination. It reduces transaction costs, and it's trust in the system that makes coordination possible at all.
Now imagine trust degrading. You don't have to imagine it; you can already see it and feel it everywhere.
Whoever can mediate trust at scale will capture disproportionate value. We're looking for the trust banks of the 21st century: essential infrastructure everyone can rely on, controlling a scarce resource that has to be accumulated over time and can be allocated across different uses.
Cognizant's research pointed to something specific: the value is conditional on implementation.
$4.5 trillion sitting there chained up because organizations can't figure out how to use AI effectively.
This is the integration bottleneck. AI has general capability but no specific context. After a couple of years of corporate implementation, the pattern is clear: general AI capability works well for individuals, but without deliberate integration work by the organization, it dies at the team level.
Here's the gap:
The gap between "AI can do this" and "AI does this usefully right here" is $4.5 trillion.
Bridging this gap requires context that's often tacit—it's embedded in practices and relationships, not just documents. The person who's been at a company for 20 years knows things that aren't written down anywhere. The AI doesn't.
This knowledge isn't promptable.
The interface between general AI capability and specific organizational reality is where value gets lost or captured.
Some companies will figure out how to solve this integration problem and unlock massive productivity gains by tying AI into their workflows. Others will deploy AI tools that sit unused or get actively misused, generating outputs that look deceptively productive but don't connect to anything that matters.
The difference isn't the AI or the tool. The AI is increasingly a commodity. The difference is organizational capacity to integrate.
AI doesn't magically dissolve the challenge of getting humans to agree or align. In fact, it might make coordination even harder.
When anyone can generate sophisticated arguments for any position, groups have even more trouble reaching consensus or alignment.
Larry Ellison's warning at Davos was pointed: If AI does to white-collar workers what globalization did to blue-collar workers, we need to confront that reality directly.
The question: How do we share the gains from AI in ways that don't trigger social disruption? That's a question of human alignment. And honestly, no one at Davos has those answers.
The people who are closest to knowing how to put AI and jobs together aren't the ones going to Davos. They're the ones actually building workflows where AI and people work together.
Everything above applies to companies, but bottlenecks are fractal. You are also a system with binding constraints. Your output, impact, and leverage are functions of which bottleneck you're solving.
The old individual bottlenecks are dissolving:
It used to take 5 years to become a proficient programmer. AI compresses or eliminates those runways. Dario Amodei noted his own engineers no longer program from scratch—they supervise and edit the work of models.
This is disorienting if your identity was built around skills that are commoditizing like programming. But disorientation isn't a strategy.
The question: Where are the new individual bottlenecks?
Tool fluency is table stakes. The constraint shifts to what you do with those tools.
When generation is cheap because everyone has AI tools, curation of what's good becomes expensive. Knowing what to make, when to stop, what's good enough versus what's actually good—these are capacities that still take time to learn.
The challenge: Taste develops slowly while AI devalues output. If you spend 3 years developing good taste in design and AI makes okay design a commodity before you can capitalize on your edge, you lose a race you didn't know you were running.
People who are surviving and thriving are narrowing their focus earlier. They're diving deeply into something specific—rapidly pushing to the frontier past the edge of where "AI good enough" is acceptable.
AI can produce competent front-end design. But if you want extraordinary design, people still turn to humans with extraordinary taste. That dynamic is going to persist across many corners of the economy.
AI solves well-specified problems with increasing fluency. But specifying the right problem and framing it correctly remains very, very human.
What should we build? What's wrong here? Have I had time to think about it? What question, if answered, would unlock everything else?
Our education system optimized for problem solving. The market is increasingly rewarding problem finding.
The analyst who knows which questions to ask and which problems matter vastly outpaces the analyst who can answer any question. The skill increasingly isn't execution—it's direction setting.
Context and institutional knowledge are becoming moats for individuals the way data is becoming a moat for companies.
The person who understands why an organization really operates the way it does, what stakeholders actually want beneath what they're saying—that tacit knowledge is hard to replicate and increasingly valuable.
This creates a strange dynamic. Juniors who would historically accumulate context through years of apprenticeship now face a compressed path. Why spend 5 years learning how an organization works when AI can help you skip grunt work?
But the grunt work was also where that context got absorbed. The implicit knowledge that made senior people valuable often came from thousands of little exposures that never happen if AI handles all tasks.
How do you develop institutional knowledge without that slow accumulation? Honestly, I think it still takes slow accumulation. People trying to speedrun it are going to learn the hard way.
There's no fast forward to 20 years of deep experience in a domain.
AI can generate a lot of plans. It can generate a workout plan for me tomorrow. But I have to show up to the gym.
Turning any of these plans into reality requires a human to show up, follow through, and do the work.
Execution has always been underrated because it's far less legible than ideation. People love to marvel at Steve Jobs's brilliant mind when he created the iPhone. They don't ask about his relentless execution to get it done: calling Corning and pushing them to produce the glass he knew the iPhone needed. The grinding work of implementation.
Tolerance for ambiguity separates those who thrive from those who freeze.
The environment is shifting fast. Best practices shift constantly. People are desperate for stable ground. The constraint you face is your ability to metabolize change—how much uncertainty can you hold in a rapidly changing world without freezing, while continuing to execute and follow through on a longer-term perspective?
People who master that balancing act are in huge demand.
The old model of talent development was linear: put in the years, accumulate skills, climb one step at a time.
The new model has a different shape. Some individuals are discovering 10x leverage through AI augmentation—not because they work harder, but because they've identified their bottleneck and directly dissolved it.
Maybe a developer was bottlenecked on boilerplate. Maybe a strategist was limited by analysis bandwidth. Whatever it is, they found the constraint and removed it, unlocking latent capacity.
Most of us aren't finding that leverage because we're optimizing against old pre-AI constraints. We're still trying to prove we have skills when skills are commoditizing.
The diagnostic question for each of us is deeply personal:
What is constraining my output right now?
Not what you wish was constraining you. Not what was constraining you 3 years ago. Not the constraint you built your identity around solving so you can be proud of it.
The actual binding constraint today.
For some, it's tool fluency because you haven't genuinely integrated AI into your workflow. For others, it's taste. Maybe problem finding. The bottleneck is specific to you.
Solving it requires honesty about what's actually holding you back.
Going back to the abundance narrative at Davos—it rings hollow. Out of touch.
The conditional is doing a lot of work in these predictions. Yes, the capability exists. But value capture depends on solving bottlenecks that are organizational, institutional, physical, and social, not technical. That's hard human work.
The businesses and people that will thrive in the next 10 years are the ones who find the actual binding constraint and go to work on it.
Intelligence is getting cheaper. The promise of abundance is real. AI will keep getting smarter. Cognitive output will keep getting easier to produce.
But abundance doesn't create value directly. Abundance shifts where scarcity lives. And we haven't been honest about that.
For our clients at Medianeth, this comes down to a single question: where is the binding constraint in your business? That's the only question that matters, and it doesn't get enough airtime.
Need help navigating the bottleneck economy and integrating AI effectively in your business? Contact our team for a strategy session tailored to your constraints.