We are in 2026. The unemployment rate is about 4.28%. Spending on AI infrastructure (capex) has reached roughly 2% of GDP, or around $650 billion. Commodities linked to AI—like metals and energy inputs—have risen about 65% since early 2023. At the same time, nearly 2,800 new data centers are planned across the United States.
Even with all the talk about AI replacing jobs, the actual data tells a different story. Job postings for software engineers are increasing quickly—up about 11% compared to last year.
Meanwhile, economists still struggle to predict job growth just a couple of months ahead. Yet some people claim they can confidently predict massive job losses years into the future based on hypothetical scenarios like “The 2028 Global Intelligence Crisis.” That level of certainty doesn’t match reality.
We recently argued that in the short term, heavy investment in AI is likely to push inflation higher. But since markets care more about the long-term story, it’s important to think about where things may eventually settle. Before that, we need to examine how fast AI is actually spreading through the economy.
The St. Louis Fed tracks AI usage through surveys. Most discussions focus on a simple question: “Do you use AI?” But the more important question is how often people actually use it for work.
When we look at how frequently AI is used, there’s no clear surge in daily usage for work tasks. If AI were about to replace large numbers of jobs, we would expect to see a sharp rise here. Instead, the data looks steady, with no strong signs of rapid change.
There’s a common mistake in today’s AI debate. People assume that because AI technology can improve itself quickly, its use in the real economy will also grow at the same exponential pace.
Markets often focus only on the rapid growth phase and assume it will continue forever. In reality, adoption follows an S-curve: it starts fast, then slows as physical, economic, and organizational constraints bind. This slower adoption reduces the risk of sudden job losses.
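The gap between capability growth and adoption growth can be made concrete with a toy logistic diffusion model. This is an illustrative sketch, not a forecast; every parameter below is an assumption chosen only to show the shape of the curves:

```python
import math

# Illustrative only: capability compounds exponentially, while adoption
# follows a logistic S-curve that flattens as constraints bind.
# All parameters are assumptions chosen for illustration.

def capability(t, rate=0.5):
    """Capability index compounding without constraint."""
    return (1 + rate) ** t

def adoption(t, ceiling=1.0, rate=0.5, midpoint=8.0):
    """Share of the economy using AI; saturates at `ceiling`."""
    return ceiling / (1 + math.exp(-rate * (t - midpoint)))

for t in (0, 4, 8, 12, 16):
    print(t, round(capability(t), 1), round(adoption(t), 3))
```

Early on, the two curves are hard to tell apart, which is why extrapolating the fast phase is tempting. Past the midpoint, each period of adoption growth is smaller than the last, while the capability index keeps compounding.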
AI systems need massive amounts of computing power, data centers, and energy. Replacing large amounts of human labor would require far more computing capacity than currently exists.
If demand for computing rises quickly, its cost will also increase. At some point, using AI becomes more expensive than hiring people for certain tasks. When that happens, companies stop replacing workers. This creates a natural limit.
So even if AI technology improves very fast, its real-world use is constrained by available computing capacity, energy supply, and rising compute costs.
In short: just because AI can improve quickly doesn't mean it will be deployed everywhere quickly.
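The cost ceiling described above can be sketched as a simple break-even rule: a firm automates a task only while the AI cost per task is below the human wage for that task, and the AI cost itself rises as more automation bids up scarce compute. All of the numbers here are hypothetical, chosen only to illustrate the mechanism:

```python
# Illustrative break-even model: substitution stops where AI cost meets
# the wage. All numbers are hypothetical.

WAGE_PER_TASK = 20.0   # cost of doing one task with human labor
BASE_AI_COST = 5.0     # AI cost per task when compute demand is low
COST_SLOPE = 0.5       # cost increase per automated task (scarce compute)

def ai_cost(tasks_automated):
    """AI cost per task rises as automation bids up scarce compute."""
    return BASE_AI_COST + COST_SLOPE * tasks_automated

# Automate tasks one by one while AI remains cheaper than labor.
tasks = 0
while ai_cost(tasks) < WAGE_PER_TASK:
    tasks += 1

print(tasks)  # prints 30: substitution halts at the break-even point
```

The exact numbers are irrelevant; the point is that a rising compute cost curve always intersects the wage at some point, and substitution stops there rather than running to 100%.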
AI is essentially a productivity boost. Productivity improvements are a type of supply shock—they lower costs, increase output, and raise real incomes over time.
Historically, major technologies like steam engines, electricity, cars, and computers followed this pattern. They made economies grow and prices more stable in the long run.
Some argue AI is different because it could reduce wages by replacing workers. But if companies produce more at lower cost, prices fall and real purchasing power rises, which supports demand.
If total output (GDP) increases, then demand must also increase somewhere—through consumption, investment, government spending, or exports. It’s not possible for output to rise while demand collapses completely.
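This can be illustrated with the national accounting identity GDP = C + I + G + NX and with real (price-adjusted) income. The numbers below are invented for the sake of a toy calculation:

```python
# Illustrative: a productivity gain lowers prices, so the same nominal
# spending buys more output, i.e. real income rises. Numbers are invented.

nominal_income = 1000.0
price_level_before = 10.0
price_level_after = 8.0   # productivity gain cuts unit costs and prices

real_income_before = nominal_income / price_level_before   # 100.0 units
real_income_after = nominal_income / price_level_after     # 125.0 units

# Expenditure identity: every unit of output is someone's demand.
C, I, G, NX = 70.0, 20.0, 30.0, 5.0
gdp = C + I + G + NX   # 125.0

print(real_income_after, gdp)
```

The identity is the key point: output that is produced and sold must appear on the expenditure side as consumption, investment, government spending, or net exports. Output cannot rise while all four collapse at once.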
For a true economic downturn to happen from AI, you would need near-total substitution of labor at very low cost, a collapse in household income with no offsetting spending from the winners, and no policy response to cushion the shock.
That combination is very unlikely.
Also, new business formation is growing strongly, which suggests economic activity is expanding, not shrinking.
A key factor is how easily companies can replace workers with AI. Economists call this the “elasticity of substitution.”
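Elasticity of substitution can be made concrete with a CES production function, Y = (a*K^rho + (1-a)*L^rho)^(1/rho), where sigma = 1/(1 - rho) measures how easily "AI capital" K substitutes for labor L. The sketch below uses assumed parameter values purely for illustration:

```python
# Illustrative CES production function. sigma = 1 / (1 - rho) is the
# elasticity of substitution between AI capital (K) and labor (L).
# All parameter values here are assumptions for illustration.

def ces_output(K, L, a=0.5, rho=0.5):
    """CES: Y = (a*K**rho + (1-a)*L**rho) ** (1/rho)."""
    return (a * K**rho + (1 - a) * L**rho) ** (1 / rho)

base = ces_output(K=100, L=100)   # 100.0 in both scenarios below

# Easy substitution (rho=0.5, sigma=2): doubling K more than
# offsets halving L, and output rises.
high_sub = ces_output(K=200, L=50, rho=0.5)    # ~112.5

# Complements (rho=-1, sigma=0.5): scarce labor becomes the
# bottleneck, and output falls despite doubled capital.
low_sub = ces_output(K=200, L=50, rho=-1.0)    # ~80.0

print(base, high_sub, low_sub)
```

The whole debate about AI-driven job losses is, in effect, a debate about where sigma sits: high sigma makes wide substitution possible, while low sigma means human labor remains a binding input no matter how much compute is added.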
If AI can replace nearly all human work at low cost, then wages could fall and profits could rise sharply. But even in that case, the income does not vanish; it shifts toward owners of capital, whose spending and investment still register as demand.
For demand to collapse, you would need both an extreme concentration of income among those who spend very little of it and a complete absence of any policy response.
In reality, governments tend to respond to major disruptions with policies that reduce extreme outcomes. That limits how far substitution can go.
Also, current labor data shows no major disruption yet. In fact, some indicators are improving, and construction jobs are rising due to data center building.
AI is more likely to assist workers than replace them
The economy includes many types of work: physical labor, in-person services, caregiving, skilled trades, and other hands-on jobs.
These are difficult or expensive to automate. Even for knowledge work, there are issues like coordination, trust, and legal responsibility.
So AI is more likely to help workers do their jobs better rather than replace them entirely.
History supports this. New technologies usually change the types of tasks people do instead of eliminating jobs completely.
A good example is office software. Before it existed, people feared it would replace office workers. In reality, it made them more productive and increased demand for their work.
The myth of drastically reduced work hours
In 1930, economist John Maynard Keynes predicted that people would eventually work only 15 hours a week because of productivity gains.
He was right that productivity would rise, but wrong about how people would respond.
Instead of working much less, people chose to consume more. As goods and services became cheaper and more varied, demand expanded. New industries emerged, and people kept working to afford more things.
Leisure time did increase somewhat, but not nearly as much as expected. Human desires grew along with productivity.
Final thoughts
For AI to cause a major economic downturn, several extreme conditions would need to happen at once: near-total substitution of labor at minimal cost, collapsing wages with no offsetting demand from profits, no new tasks or industries emerging, and no fiscal or policy response.
This combination is highly unlikely.
Over the past century, technological advances haven’t caused runaway growth or eliminated work. Instead, they’ve helped economies grow steadily at around 2% per year.
Today, long-term challenges like aging populations, climate change, and reduced globalization are slowing growth. AI may simply help offset these pressures rather than transform everything dramatically.
In the end, the economy is shaped by more than technology: by demand, policy, physical constraints, and human wants that expand along with productivity.
AI will change the economy—but it’s unlikely to break it.