Mainframes Won’t Save Us - Debunking the Comfort Narratives Around Gen-AI
Why six comforting beliefs about AI are blinding us to the economic collapse already underway.
Brace yourselves, this is a long one…
In my last article, “The Coming Collapse of White-Collar Work”, I argued that Gen-AI will hollow out the middle layers of white-collar work and could trigger a collapse of the white-collar economy. The push-back was predictable:
“COBOL still runs the world, so change will be slow.”
“Every tech wave ends up creating more jobs than it destroys.”
“We’re all being up-skilled, calm down.”
These claims feel safe. The numbers say they’re wrong.
Below, I unpack six popular comfort myths and show why the middle layers of white-collar work are still on the chopping block, mainframes or not.
Myth 1
“Legacy mainframes insulate us from AI.”
An estimated 800 billion lines of COBOL remain in production - huge, but largely invisible in day-to-day payroll.
Mainframes now process ≈ 68 % of global production workloads while consuming only 6 % of IT spend. Most salaries sit above the zSeries—in integration code, dashboards and report writing that Gen-AI can already replace.
Natural-language-to-COBOL bridges and RPA shells let firms leave the iron alone while gutting the people who maintain the surrounding middleware.
Myth 2
“Tech revolutions always net-add jobs.”
This is the most popular defence of the status quo, and the most dangerous.
It gets repeated by executives, economists, and engineers as if it were a law of physics:
“Every major technology shift ends up creating more jobs than it destroys.”
The truth is that’s only sometimes true, and only over long, turbulent timeframes, with no guarantee that your job, or your industry, survives the churn.
Let’s break it down.
Historical context is comforting, but misleading.
The Industrial Revolution created new roles, but only after displacing millions of skilled artisans, labourers, and guild members. The short-term effects were:
Wage suppression
Urban poverty
Mass protests and riots (the Luddites weren’t anti-technology, they were fighting for fair pay and dignity)
It took decades for labour markets to adapt, and the people hurt in that transition weren’t the ones who benefited later. That pattern has repeated itself with mechanisation, electricity, computing, and the internet.
But there’s one big difference this time:
Gen-AI scales exponentially, automates cognitively complex tasks, and improves itself.
This is not steam engines. This is not spreadsheets. This is compounding intelligence applied to the work of people who never thought they could be replaced.
The numbers don’t support optimism.
Goldman Sachs projects that up to 300 million jobs globally will be impacted by Gen-AI, either eliminated outright or made economically nonviable.
This represents a systemic restructuring of the global white-collar economy.
No historical wave comes close to that scale, not the factory line, not the internet, not the smartphone.
Yes, new jobs will appear, but:
They will be highly specialised, requiring deep technical, creative, or operational skill.
They will scale asymmetrically - a few AI specialists can replace dozens of traditional workers.
They will not absorb displaced workers fast enough, because retraining an insurance analyst into a robotics prompt engineer is neither quick nor realistic.
Job creation is no longer 1-to-1.
A team of ten marketers, analysts, or copywriters can now be replaced by one skilled AI operator and a well-tuned system.
Even if five new jobs emerge for every ten lost, that’s still a net decline in hiring, wages, and opportunity.
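The arithmetic here is easy to sanity-check. A minimal sketch with illustrative numbers - the replacement ratio is an assumption for the example, not a forecast:

```python
def net_job_change(jobs_lost: int, replacement_ratio: float) -> int:
    """Net change in roles when each batch of lost jobs spawns only a fraction of new ones."""
    jobs_created = int(jobs_lost * replacement_ratio)
    return jobs_created - jobs_lost

# Five new jobs for every ten lost still means a shrinking market.
print(net_job_change(10, 0.5))    # -5
print(net_job_change(1000, 0.5))  # -500
```

Scale the inputs however you like: any ratio below 1.0 compounds into a structural decline, not a transition.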
Companies are already proving this in practice:
IBM pausing hiring for thousands of back-office roles
Klarna shifting customer support to Gen-AI, with its bot doing the work of 700+ human agents
Media companies collapsing under AI-generated content floods
Law firms using AI to draft contracts, shrinking junior headcount
Startups launching with zero marketing staff and just an AI stack
We’re seeing the asymmetry in real time.
Open roles today aren't replacements, they're efficiency multipliers.
A single AI engineer plugged into GPT-4 + LangChain and bolt.new can build what once took an entire product team. An ops lead with a few automation scripts can do what once needed five support reps.
In these scenarios, “job creation” means one person replacing many, not many finding new purpose. It’s not that no jobs will exist - just far, far fewer, clustered at the top, while junior and mid-level roles disappear entirely.
History won’t repeat…
The idea that tech disruption always leads to net job creation is a backward-looking illusion. Yes, new jobs will appear.
But not in the numbers, timeframes, or locations needed to prevent collapse. This is a structural reset and the jobs it kills won’t be replaced.
Let go of the myth. Prepare for the maths.
Myth 3
“Up-skilling protects headcount.”
This one sounds progressive. It’s the feel-good corporate line:
“We’re investing in our people. We’re giving them the tools. Nobody’s getting left behind.”
In theory, up-skilling should help employees stay relevant. In practice, it’s a smokescreen, because the goal isn’t retention, it’s reduction.
Let’s look at what’s actually happening.
Up-skilling is being used as cover - not protection.
IBM’s CEO openly stated in 2023 that the company would pause hiring for 7,800 back-office roles because AI would handle the tasks those people would’ve done.
Not future displacement - current.
That same year, IBM announced AI training programmes for employees, creating a perfect public narrative:
“We’re preparing our people for the future.”
Meanwhile, the company is scaling down the number of people it needs.
And they’re not alone.
BT Group (UK) plans to cut 55,000 jobs by 2030, with over 10,000 of those directly replaced by AI.
Klarna replaced 700+ customer support agents with an AI chatbot—and then released a PR statement praising the bot’s empathy.
Dropbox, Meta, and Salesforce all launched AI initiatives and headcount reductions in the same breath.
Training workers while quietly making them obsolete isn’t a strategy; it’s a transition plan. Up-skilling isn’t about protecting roles. It’s about buying time before the next round of layoffs - and it’s a pattern we’ll see more and more often.
Gen-AI isn’t augmenting most people - it’s replacing them.
If you can give every employee an AI co-pilot that boosts their output by 300 – 500 %, the logic is simple:
You don’t need more people.
You don’t even need the same number of people.
You need fewer, higher-leverage operators.
In most orgs, that means flattening the pyramid:
Entry-level roles vanish.
Mid-level roles shrink.
Senior roles consolidate responsibility—with a model doing 70 – 80 % of the execution.
This isn’t hypothetical. It’s already playing out. Ask any content team, marketing department, or data analyst shop what’s happened to hiring pipelines since co-pilots landed.
Up-skilling isn’t equal, and it won’t scale fast enough.
Yes, some workers will adapt. Some will become 10× operators (I consider myself one of them).
But most won’t. Not because they’re lazy, but because:
The learning curve is steep.
The AI ecosystem moves faster than humans can retool.
And most jobs don’t translate well into “prompt engineering” or systems thinking.
Even if companies run internal AI academies, they don’t intend to retrain everyone.
They’re looking for internal champions, not universal transformation.
Which leads to the real outcome:
A smaller, more elite workforce running AI-augmented operations
…and a large group of employees who were “trained” but never retained.
Up-skilling sounds like salvation
The narrative is appealing: “We’ll all learn the new tools and evolve together.”
But the economic reality is brutal: companies don’t want everyone to evolve—they want a leaner, faster, cheaper organisation.
Up-skilling isn’t the life raft it’s sold as.
It’s the polite announcement that the boat has fewer seats now.
Myth 4
“AI only augments; displaced staff move to higher-value work.”
This is the polite fantasy we like to tell ourselves:
“Sure, AI will take some tasks—but that just frees people up to do more meaningful work.”
It sounds win-win. The company boosts efficiency, the worker gains purpose, and no one gets laid off.
But this idea falls apart the moment you look at how organisations actually respond to productivity gains.
Productivity growth doesn’t save jobs - it makes them unnecessary.
McKinsey estimates that Gen-AI could add 0.5 to 3.4 percentage points to annual global productivity growth.
In plain English: AI lets companies do a lot more with a lot fewer people.
Historically, when businesses experience that kind of productivity leap, they don’t reassign the spare capacity to “higher-value” tasks.
They cut costs.
They shrink teams.
They consolidate roles.
If your AI system can handle customer service tickets, write ad copy, generate reports, and triage support queries, what’s the business case for keeping a team of people to do it worse and slower?
The “higher-value work” doesn’t scale with displaced labour.
Let’s say a company used to employ:
20 marketers
10 analysts
15 copywriters
Now it can do the same work with 5 multi-skilled AI operators and a stack of prompt templates.
Where exactly do the other 40 people go?
To what higher-value work?
In reality, that “higher-value work” is:
Scarce
Highly skilled
Heavily bottlenecked by senior leadership and strategic need
There isn’t an endless backlog of deep, innovative work just waiting for displaced staff to jump into. Most organisations already struggle to even define “higher-value” tasks beyond vague buzzwords like “strategy,” “insight,” or “innovation.”
AI isn’t just augmenting humans - it’s becoming the higher-value worker.
The irony is that Gen-AI isn’t staying in the realm of grunt work.
It’s moving up the value chain:
Writing internal policy docs
Drafting legal arguments
Generating strategic recommendations
Creating visual prototypes
Conducting market analysis
Writing SQL queries and interpreting dashboards
Tasks that used to define mid-to-senior roles are now being handled by models with no salary and no ego.
So when you hear that AI is “just augmenting,” ask yourself:
Augmenting who?
Because in most cases, it’s not the team - it’s the org itself that gets augmented, allowing it to operate with 30–70 % less headcount.
The maths doesn’t lie.
If AI raises a team’s productivity by 4×, a company has two choices:
Keep the whole team and do 4× more work.
Cut the team by 75 % and maintain output at the same level.
In every cost-sensitive industry - finance, retail, media, SaaS - the second option is irresistible.
Especially when demand is flat or shrinking.
Efficiency at scale does not mean more opportunity.
It often means fewer people doing more with less, and being pushed harder because they’re now AI-assisted and “should be able to handle it.”
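The two choices above reduce to one line of arithmetic. A rough sketch - the 4× multiplier and the team size are illustrative inputs taken from the scenario, not measured data:

```python
import math

def headcount_to_hold_output(current_team: int, productivity_multiplier: float) -> int:
    """People needed to maintain today's output after an AI-driven productivity gain."""
    return math.ceil(current_team / productivity_multiplier)

team = 20
survivors = headcount_to_hold_output(team, 4.0)
print(survivors)                                      # 5
print(f"{(team - survivors) / team:.0%} reduction")   # 75% reduction
```

In a cost-sensitive business, the spreadsheet version of this function is exactly what a CFO runs before the next planning cycle.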
“Higher-value work” is a mirage for most.
The AI shift doesn’t turn junior marketers into strategists or analysts into futurists.
It turns teams into templates.
It shrinks the labour pyramid and widens the gap between those who leverage AI and those who get replaced by it.
The promise of “augment, not replace” might hold for a lucky few.
But for the rest? It’s a placeholder until your role hits the automation threshold.
It’s not augmentation.
It’s attrition, just delayed and rebranded.
Myth 5
“Regulation will slow everything down.”
This one gets repeated like a safety blanket:
“Governments won’t allow unchecked deployment. Regulation will act as a brake. We’ve got time.”
But this myth confuses appearance with action.
Because while policymakers talk, the market moves, and it doesn’t wait.
Regulation doesn’t move fast enough to stop AI rollout. And it never will.
Compliance moves at committee speed.
Cost-cutting moves at quarter-close speed.
Take the UK.
The government has launched initiatives like the AI Safety Institute, the Frontier AI Taskforce, and hosted the Bletchley Park Summit.
But what’s actually in place?
No mandatory audits for deployed models
No legal requirement to assess economic displacement
No framework for testing model alignment before production use
These are voluntary guidelines, not enforcement tools.
Meanwhile, in the real world:
HR teams are automating away recruitment funnels.
Marketing teams are replacing content roles with templates and LLMs.
Customer service is getting gutted by chatbots and support copilots.
Ask Klarna. Ask IBM. Ask any startup that’s quietly gone from 20 people to 4 + GPT.
The tools are already live. The damage is already happening.
The regulation is non-existent.
Once one competitor ships, you have to follow.
This is the part most regulators don’t understand.
If your competitor:
Lets users chat-query their entire customer history
Automates support queues with a fine-tuned LLM
Generates personalised outbound emails at scale
…you don’t get to sit back and wait for an “AI ethics framework.”
You deploy, or you lose the account.
This is the competitive dynamic now driving AI adoption:
First-mover wins. Caution equals market loss.
Even firms that want to be careful don’t have the luxury.
Internal pressure mounts.
Investors want margins.
Sales wants a feature parity slide.
Marketing needs a launch headline.
The result: a model that should’ve gone through six months of safety evaluation is rushed into production in six weeks.
The legal department signs off. The PR team drafts a values statement.
And the automation goes live, quietly displacing staff behind the scenes.
Open-source makes enforcement nearly impossible.
Even if regulators clamp down on OpenAI, Anthropic, and Google, it won’t stop what's coming from below.
Right now, developers are:
Fine-tuning LLaMA and Mixtral on local GPUs
Running instruction-tuned models on edge devices
Automating workflows inside SMBs with zero oversight
Building agents that plug into databases, CRMs, Slack, and more, without permission
There’s no way to enforce regulation on open-weight models once they’re out.
It’s the torrenting problem, but for automation.
You can't ban what runs on consumer hardware, trained in private.
Even if the UK passed sweeping legislation tomorrow, enforcement would lag by years, and the tools would keep spreading.
Boards talk ethics. Then push to prod.
We’ve all seen the playbook:
The company publishes an “AI Principles” page.
They launch an internal “Responsible AI” taskforce.
They commit to “transparent model usage.”
But behind closed doors?
Teams are asked to find headcount savings from Gen-AI.
Budgets are realigned toward automation tooling.
Deployments go live with zero interpretability or oversight.
Why? Because the model works.
It gets the job done, faster and cheaper.
And in a cost-cutting economy, that’s all that matters.
So yes, the ethics memo gets written and then it gets buried in Confluence while the board greenlights another 10 – 20 % reduction in overhead, powered by AI.
Regulation is lagging. Competition isn’t.
The idea that regulation will slow things down belongs to a different era.
We’re not dealing with heavy industrial machinery. We’re dealing with lightweight, infinitely replicable code.
The regulatory framework is so far behind because we’ve never seen tech as disruptive as AI before…
Meanwhile, the market moves on:
Faster deployments
Cheaper operations
Leaner teams
Less human labour
Governments are hosting summits; companies are shipping code. The only thing regulation might slow is our ability to react once the damage is done.
If you're waiting for policy to protect your role, you're already behind.
Myth 6
“Creativity and empathy guarantee job safety.”
The IMF finds ≈ 60 % of jobs in advanced economies overlap AI capabilities because they centre on predictable cognitive tasks - and much of what we label “creative” or “empathetic” work is built on exactly those tasks. Roughly half may be negatively affected, facing wage cuts or outright removal.
Beyond the Myths
White-collar salaries underwrite mortgages, pensions and consumer demand. Remove that layer and you trigger:
Housing shocks – Fewer high-income buyers means falling prices, rising defaults.
Welfare strain – These are six-figure earners, not minimum-wage redundancies. Existing safety nets crack.
Political volatility – Displaced professionals don’t stay quiet; they organise, protest and radicalise.
Mental-health crises – Careers are identity pillars; when they vanish, so does stability.
Mainframes won’t cushion any of that—they’re merely the basement servers of a skyscraper being gutted from the 3rd floor up.
Who Actually Survives?
A tiny cadre that maintains mainframes, safety systems and compliance gates.
Domain polymaths - people who fuse deep expertise with AI leverage out-compete both pure humans and pure bots.
Regulated-trust roles - liability, health or physical risk keeps a human name on the dotted line, even if the model did 90 % of the work.
Everything else gets compressed: CRUD apps, ticket triage, routine analytics, slide decks.
The Evidence…
~800 B lines of COBOL still running in production codebases (Micro Focus / Vanson Bourne, 2022) itjungle.com
Mainframes handle ≈68 % of global workloads on just 6 % of IT spend (Precisely stat roundup, 2024) precisely.com
IBM’s watsonx Code Assistant for Z already turns plain-English prompts into COBOL (IBM press release, Oct 2023) newsroom.ibm.com
McKinsey forecasts ≈12 M U.S. job transitions by 2030 under Gen-AI adoption (MGI, 2023) mckinsey.com
Goldman Sachs puts ≈300 M global jobs in Gen-AI’s firing line (GS Research via Forbes, Mar 2023) forbes.com
BT Group will cut up to 55 K roles by 2030—CEO says 10 K are direct AI replacements (Reuters, May 2023) reuters.com
IBM paused hiring for ≈7,800 back-office roles because “AI will cover that work” (Bloomberg, May 2023) bloomberg.com
Klarna says its support bot does the work of ~700 human agents (Customer Experience Dive, May 2025) customerexperiencedive.com
Amazon runs ≈750 K warehouse robots; BofA pegs the labour-savings runway at $16 B / yr by 2032 (Investor’s Business Daily, Jun 2025) investors.com
Reuters notes big-law firms are trimming junior associate hiring as AI drafts contracts (Oct 2024) reuters.com
The UK’s AI Safety Institute & regulatory principles remain voluntary, with no statutory audit gate yet (UK GOV overview, 2024) gov.uk
IMF: about 60 % of advanced-economy jobs overlap AI capability; half may face wage cuts or elimination (IMF blog, Jan 2024) imf.org
SignalFire reports a 24 % YoY drop in entry-level tech hires in 2024 as AI absorbs junior work (State of Tech Talent, May 2025) theoutpost.ai
What Boards and Ministers Must Do Now
Tax the Windfall – Channel AI-driven profit spikes into re-skilling funds and portable benefits.
Tie Automation to a Social Floor – Mandatory contributions when firms cut staff via Gen-AI, similar to carbon pricing.
Fund Transition Pathways – National boot camps that teach market-relevant skills, not generic coding.
Enforce Deployment Gates – Frontier models must pass safety audits before touching sensitive data or critical processes.
Track Labour Displacement in Real Time – A public AI Labour Dashboard, updated quarterly, so policy reacts on data, not vibes.
Ignore these levers and we stumble into a white-collar Great Depression with no cushion.
Comfort Myths Don’t Pay the Bills
Productivity spikes don’t automatically trickle into new jobs; they often consolidate wealth and shrink payroll. Mainframes will keep humming, but they won’t shield millions of knowledge workers from obsolescence.
If you’re a policymaker: build buffers before the head-count line bends.
If you’re an exec: plan for fewer, but sharper, humans, or risk a talent exodus when morale tanks.
If you’re a worker: pivot to the roles AI can’t swallow, and do it yesterday.
Comfort myths feel good today. They won’t cover your mortgage tomorrow.
I want to be wrong. I really do. But I think in systems, I analyse through data, and I don’t make calls based on gut instinct, I follow the signals. And right now, they’re flashing red.
References
Micro Focus Survey on COBOL Code Base (2022) — ~800 B lines in production. medium.com
Precisely, “9 Mainframe Statistics” (2024) — Mainframes handle 68 % of workloads, 6 % of spend. precisely.com
Bloomberg, “IBM to Pause Hiring for Jobs That AI Could Do” (2023). bloomberg.com
Goldman Sachs Research via Forbes (2023) — 300 M jobs at risk. forbes.com
McKinsey, The Economic Potential of Generative AI (2023) — 0.5–3.4 ppts productivity. mckinsey.com
IMF, AI and the Future of Work (2024) — 60 % job exposure in advanced economies. imf.org
Gradual Disempowerment: Systemic Existential Risks from Incremental AI Development arxiv.org/abs/2501.16946