Executive Summary:

The grunt work nobody wanted to do was secretly the work that built every great professional. Now it’s gone. And the pipeline with it.

AI is automating the bottom rungs of elite knowledge professions – law, banking, consulting, medicine – at speed. What looks like a productivity win is quietly destroying the mechanism by which raw graduates become experienced professionals.

The dirty secret: those gruelling 11pm contract reviews and 2am pitch books weren’t just billable drudgery. They were training infrastructure. Repetition built pattern recognition. Pattern recognition became judgment. Judgment earned partnership. Automate the work, and you automate the training that was hiding inside it.

The real crisis isn’t what AI is taking. It’s what nobody is replacing it with.

The firms shrinking their junior classes aren’t building a smarter pipeline. They’re not building one at all. Goldman’s 2025 intern class was its smallest in over a decade. McKinsey has shed over 9,000 people. JPMorgan’s CFO has told managers to stop hiring. The cohort entering in 2026 may be the last trained in anything resembling the old way.

A better model exists – the accelerated apprenticeship: let AI handle the mechanical grind and put juniors in front of senior-level thinking from day one. Germany did exactly this when automation hit manufacturing in the 1980s and 1990s. The cognitive science supports it. The technology is ready.

So why isn’t anyone doing it?

Because the incentive structure won’t allow it. Senior partners earn their fortunes over 7–12 years, then exit with no continuing stake in the firm’s future. Every analyst class they cut improves this year’s profit-per-partner. The destruction of the talent pipeline is indistinguishable from good management under the metrics that govern these institutions.

AI didn’t create this problem. It just gave a broken incentive structure permission to act on its worst impulses at scale.

The verdict: Machines can do the work. They can’t yet produce the people who understand it. But they could – if firms chose to redesign training rather than simply eliminate it. That choice requires governance structures that think in generations, not annual cycles. Those structures barely exist. And the window to build them is closing fast.

Read the full article below.


The Last Analyst Class

AI is replacing the grunt work that built every senior professional in the world. The firms that should be protecting the pipeline are the ones destroying it. And the incentive structure ensures nobody will stop them.

The author is the CEO and co-founder of Hypothesis3, an AI-powered strategic consulting platform. His company automates the kind of midnight conference-room work described in this article. He has an obvious interest in the subject. He also has an obvious reason to worry about the question it raises.

In the autumn of 2025, around a hundred young people who had recently left jobs at Wall Street’s most prestigious banks began a peculiar new assignment. Recruited by OpenAI for something called Project Mercury, they were being paid $150 an hour to teach an artificial-intelligence system how to do what they had spent two years learning by hand: the painstaking assembly of financial analysis, deal structures and presentation material that is the foundational grunt work of investment banking.

They were described as former employees of Goldman Sachs, JPMorgan and Morgan Stanley, though the pool also included people from Brookfield, Mubadala, Evercore and KKR, as well as MBA candidates at Harvard and MIT. Why each of them had left banking was not reported – and may be the most revealing absence in the story. Some had likely been among the thousands laid off as banks shrank their junior ranks during the industry’s most profitable year. Some may have burned out after two years of hundred-hour weeks. Some were already in business school, doing Mercury as a well-paid side gig. Whatever brought them there, the destination was the same: $150 an hour, no equity, no benefits, no career path, one financial model per week, reviewed by an algorithm. When the models have been absorbed, the AI will do in seconds what once took a team of them a fortnight.

Whether they left by choice or were pushed out matters less than what happened next. The industry that trained them had no further use for them. The technology company that hired them had exactly one use – to extract what they knew and encode it in a system that would ensure nobody like them would be needed again.

This is not a story about what happens to a hundred junior bankers. It is a story about what happens to the firms that can no longer find anyone to promote in fifteen years because they automated the pipeline that was supposed to produce them.

It is also a story about why the people who run those firms have every reason to let it happen.

11pm, City of London

It is 11pm on a Tuesday. A 24-year-old associate at a Magic Circle law firm is sitting in a windowless conference room surrounded by seventeen lever-arch files and four empty coffee cups. She is reviewing contracts for a cross-border acquisition – checking each document against a due-diligence checklist, flagging deviations from standard terms, noting where indemnification clauses differ from the template. She has been at it for nine hours. She will continue for another three. She is being billed to the client at £450 an hour and paid, after tax, roughly £18 an hour. She is exhausted, resentful, and learning more about the mechanics of corporate transactions than any lecture or textbook could teach her.

She does not know it yet, but this miserable Tuesday evening is the most valuable single day of her professional education. By the time she has reviewed her four-hundredth contract, she will spot a problematic limitation-of-liability clause at a glance. By her thousandth, she will know what “normal” looks like across six jurisdictions. By the time she makes partner, fifteen years from now, this pattern recognition – built one grinding evening at a time – will be the foundation of her professional judgment. She will call it experience. She will call it intuition. She will be wrong. It was drudgery. And drudgery was the point.

An AI system can now do what she is doing in approximately four minutes.

One of these two scenes – the bankers at their laptops, the associate in her conference room – is the future. The other is the past. What follows explains which is which, and why the answer is more troubling than either alone suggests.

The hidden curriculum

Every serious profession has a version of that associate’s Tuesday evening. Investment banks have their pitch books – the financial models and presentation decks that first-year analysts build at 2am for managing directors who will glance at them for thirty seconds. Goldman Sachs’ CEO David Solomon has observed that a large language model can now draft 95% of an IPO prospectus in minutes. JPMorgan’s proprietary AI suite reportedly produces five-page pitch decks in thirty seconds. Goldman’s 2025 summer intern class – roughly 2,600 people – was its smallest in over a decade, and the firm has reportedly considered cutting future classes by as much as two-thirds, which would take them below a thousand. Consulting firms have their own version of the midnight conference room – McKinsey grew 2% in 2024, cut roughly 10% of its workforce, and now employs around 36,000 people, down from a peak above 45,000. That is 9,000 people – the population of a small English market town – who are no longer learning the craft. Architecture practices have their compliance drafts. Accounting firms have their ledger audits.

Medical training has the most consequential version. The junior doctor on a 90-hour week is building clinical pattern recognition through sheer volume of cases – but this is not merely workforce development. It is a patient-safety system. The junior’s growing ability to spot the abnormal within the routine catches the rare, dangerous case that the protocol missed. The radiologist who was never trained on thousands of scans does not merely lack expertise. She lacks the instinct to pause when something looks wrong – the instinct that saves lives and that no amount of AI supervision can replicate, because it was built, scan by scan, in the years that the AI has now consumed. Automate the routine diagnostics and the question becomes urgent and specific: who catches what the AI missed, and how did they learn to catch it?

In every instance, the work being done by the junior has two functions. The first is visible: it produces a deliverable that the client or patient needs. The second is invisible: it trains the person doing it. The late nights and the tedious repetition are not a regrettable cost of the professional model. They are the model’s training infrastructure – the mechanism by which raw graduates are converted, over years of practice, into the experienced professionals who will run the firms, the hospitals and the practices that employ the next generation.

Automate the work and you automate the training. This is the apprenticeship paradox: the work you are most eager to eliminate is the work that produces the people who know how to do the work you want to keep.

This is not a fifteen-year prediction. It is a twelve-month observation. Every firm that has deployed AI tools in the past two years has already changed what its juniors do – without changing how they are developed. The associate who once reviewed four hundred contracts now reviews the AI’s summary of four hundred contracts. She is no longer building pattern recognition through repetition. She is checking boxes on a screen. The first-year analyst who once built a pitch book from a blank spreadsheet now edits one the AI drafted. He is no longer learning how the pieces fit together. He is correcting formatting. The juniors are still there. They are still being paid. But they are learning less per hour, less per week, less per year than any cohort before them – and nobody has noticed, because the efficiency metrics have never looked better.

The wrong rung of the ladder

There is, however, a more radical possibility. What if the old apprenticeship was never a good training method? What if it was merely the only one available?

Consider what the traditional model actually does. It takes a law graduate who has spent three years studying contract law, tort, equity, constitutional principles and legal reasoning, and sets her to work checking whether a clause matches a template. She is not being asked why the clause matters, what risk it creates, or how it fits the broader transaction. She is not being trained. She is being used.

She is a search engine in a suit.

The pattern recognition she develops over hundreds of late nights is a byproduct of the labour, not its purpose. The law firm does not assign her four hundred contracts because four hundred is the optimal number for developing commercial judgment.

It assigns them because four hundred contracts need reviewing and she is the cheapest person in the building.

The insight is that juniors and seniors in every profession are working on the same problems – but asking different questions. The senior partner reads the same contracts as the associate. But where the associate is checking clause against template, the senior partner is asking: what does this pattern of deviations reveal about the counterparty’s appetite for risk? Where is the negotiating leverage? How does this deal compare to three similar transactions she advised on last year? The associate, buried in page-by-page review, never encounters these questions. She encounters them only after a decade of surviving the lower levels – by which point the firm has lost most of her cohort to attrition, exhaustion and the quiet realisation that the banking hours were not, in fact, worth it.

Now imagine an alternative. The AI handles the clause extraction, the document search, the template matching – all the work that fills conference rooms at midnight. The junior starts, from her first week, at the level where her senior partner operates: evaluating what the patterns mean, formulating hypotheses about risk, comparing the deal to precedents the AI has assembled. She is not being asked to exercise senior judgment on day one. She is being asked to begin developing it – years earlier than the old model permitted. She is doing in month three what the grinding machine produced in year five. The university gave her the foundations. The AI handles the mechanical application. She enters the ladder at the rung where genuine professional thinking begins.

The accelerated apprenticeship

This does not happen automatically. Remove the midnight drudgery without replacing it with coached, higher-level work and you produce the worst of both worlds: juniors who neither review contracts by hand nor develop the questions a senior partner asks while her associate is still counting pages. They supervise AI and learn nothing. In aviation, autopilot has made flying dramatically safer and pilots measurably worse – a documented phenomenon the industry calls “automation complacency.” When the system fails and the human must take over, the skills have atrophied through disuse. The knowledge professions are building themselves the same trap.

But if firms invest deliberately in what might be called the accelerated apprenticeship – coached, higher-level work from the first day – the timeline to professional maturity could compress dramatically. The mechanism is well understood. Cognitive-apprenticeship research, dating to Collins, Brown and Newman in 1989, describes how expertise develops: seniors make their reasoning visible by thinking aloud through a problem; juniors attempt the same reasoning with immediate feedback; the complexity of the task increases as the junior’s capability grows; juniors articulate their logic and compare it to expert performance. What is new is that AI makes this scalable for the first time. In the old model, a partner could mentor two or three juniors – the relationship demanded hours of personal attention. In the new model, the AI handles the routine scaffolding, freeing the partner to focus on the distinctly human part of mentorship: sharing the tacit knowledge that never makes it into a manual, narrating the judgment calls that a textbook cannot teach. One partner, assisted by AI, could effectively develop ten juniors at the depth previously possible with three.

There is an irony in this worth savouring. The technology industry has spent five years learning how to get the best results from AI agents: set a clear goal, define what success looks like, give autonomy, do not micromanage the route to the answer. This turns out to be exactly how firms should have been developing their juniors all along. The old apprenticeship micromanaged every step: find this clause, check this template, populate this row. The new one should operate the way a well-designed AI agent does – goal-directed, with freedom to find its own path to the answer. The comparison is not between the nature of humans and machines. It is between the management methods that produce the best results from both. The firms that have learned to direct AI well are inadvertently learning how to develop people better.

Germany provides the structural precedent. Throughout the CNC revolution of the 1980s and 1990s, when automated machining eliminated much of the manual work that trained junior machinists, Germany maintained its apprenticeship infrastructure by redesigning it rather than abandoning it. Apprentices spent less time filing metal by hand and more time programming the machines, troubleshooting their failures, and understanding the metallurgy beneath the automation. Countries that simply automated and celebrated the efficiency gains discovered, a generation later, that they had nobody who understood what the machines were actually doing. Germany had both the machines and the people who could interrogate them. The professions that redesign their training around higher-level work will have both. The ones that merely automate will have the machines.

Fewer, better, more expensive

The new model raises the bar. The old apprenticeship was brutally democratic in its way: anyone with sufficient stamina could review contracts at 11pm, and many did. The new one demands a different kind of capability from the first day – the ability to formulate hypotheses about risk, to synthesise patterns across a complex transaction, to take responsibility for judgments that the old model deferred for a decade. Not every graduate will meet this threshold. Fewer juniors will qualify. But those who do are creating year-five value in year one, and should be compensated accordingly.

The pyramid flattens. Fewer juniors, paid more, doing harder work, producing more value per head. Remove the exploitation layer – the review-until-midnight drudgery that subsidised the partner’s income – and the twelve-hour day compresses to eight rather quickly. Productivity per hour rises; total billable volume falls. The partner’s £3–5m annual income, historically funded by six associates billing twelve-hour days at £450, faces a structural squeeze. The firm either charges substantially higher rates for substantially higher-value work, shifts to pricing by outcome rather than by hour, or invents some hybrid that does not yet exist. The old equilibrium – mass junior labour billed by the hour – is being dismantled from below. The new equilibrium has no settled shape. What is clear is that the transition requires rebuilding the economic engine while it is still running, which is why almost nobody is attempting it voluntarily.

The incentive trap

And this is where the argument takes its darkest turn. The accelerated apprenticeship is feasible. The compressed ladder is sound. The technology exists. The Germany precedent proves it can be done. So why is nobody doing it?

Because the governance structure of the firms that should be building the new apprenticeship actively rewards them for destroying the old one instead.

Consider what it means to be a senior partner at a top consulting firm or a managing director at a leading investment bank. You earn $3–5m a year during a tenure that typically lasts seven to twelve years at the top. When you retire, you leave with accumulated wealth but no continuing stake in the firm’s future. You do not own shares that decline in value if the talent pipeline collapses in 2040. You do not sit on a board that answers to future employees. You are gone.

The house you built – or failed to build – is someone else’s problem.

The only metric that matters, year to year, is profit-per-partner. And profit-per-partner improves when you cut junior headcount, because you are reducing costs without immediately reducing revenue. Every AI tool that replaces an analyst, every analyst class that shrinks, every office that eliminates a layer of associates – these register as efficiency gains in the current year’s accounts. The partners who cut deepest show the best numbers. They are celebrated as disciplined managers. They are, by every measure the partnership tracks, doing an excellent job.

The destruction of the training pipeline is indistinguishable from good management under the metrics the partnership uses to govern itself.

There is no external corrective. No stock price that falls when the market detects a depleting talent pipeline. No shareholders with the standing to demand a fifteen-year workforce-development plan. No board of directors empowered to fire a managing partner for mortgaging the firm’s future. The partnership is accountable only to its current members, who benefit from extraction and bear none of the long-term cost.

Nor would public ownership solve this. Public companies are at least as hostage to the short term – quarterly earnings, activist shareholders demanding buybacks, CEOs with three-year tenures optimising for the stock price during their window. The average holding period for a share on the New York Stock Exchange has collapsed from eight years in the 1960s to roughly five and a half months today. These are not long-term stewards. The only entities that reliably invest on generational timelines are those whose ownership structure explicitly requires it: sovereign wealth funds that answer to future citizens, family offices that think about grandchildren, endowments that measure performance in decades. The question of whether professional firms can create governance structures that mimic that long-horizon alignment is genuinely open. It has not been answered because it has barely been asked.

Marvin Bower answered it once. When he retired from McKinsey in the 1960s, he sold his shares back to the firm at book value – turning down guaranteed personal wealth to ensure the institution’s long-term health. He designed the partnership as a trust to be maintained, not an asset to be monetised. He believed that people were the firm’s primary resource, that values should govern how they were developed, and that professionalism meant putting the institution’s future above the individual’s present. The firm he built was 500 people, and every one of them knew his name.

That firm is now 36,000 people. The values Bower instilled have become institutional memory – words in an onboarding document rather than a living practice enforced by personal example. The partnership he designed to resist short-term extraction has become its principal vehicle. Each generation of partners inherited the institution, grew it, and incrementally shifted the incentive structure toward the thing Bower feared most: the monetisation of a trust.

AI did not break this. A hundred years of successful growth did – each adaptation moving the firm further from the governance structure that made it resilient in the first place.

AI is not the cause of the crisis. It is the revealer. It gives a broken incentive structure permission to act on its worst impulses at unprecedented scale.

The last class

Goldman Sachs’ 2025 summer intern class was approximately 2,600 – the smallest in over a decade, and the reported two-thirds cut would take future classes below a thousand. McKinsey’s headcount has fallen by more than 9,000 from its peak – the equivalent of emptying a small town of consultants. JPMorgan’s chief financial officer has told managers to avoid hiring as AI deploys across the bank. Across every elite knowledge profession, the base of the pyramid is being automated, offshored or eliminated – simultaneously and at speed.

The cohort entering these professions in 2026 may be the last to be trained in anything resembling the traditional way. They will arrive expecting the apprenticeship that built every senior professional before them. Many will instead find themselves supervising AI systems in roles designed for productivity, not development. The juniors who are not hired at all – the analyst classes that shrink, the associate positions that disappear – will never enter the pipeline. They are the partners who cannot be promoted in 2040. The leaders who will not exist. The judgment that was never built.

The solution – the accelerated apprenticeship, the deliberate redesign of professional training around the questions a senior partner asks rather than the clauses a junior associate checks – is technically feasible and structurally sound. Germany proved the model works for manufacturing. The cognitive science literature supports the mechanism. The technology to implement it exists today. But the firms that should be building it are governed by incentive structures that reward them for not doing so. The current generation of partners profits from the cuts. The next generation inherits the consequences.

Nobody inside the partnership has both the authority and the incentive to act.

The old apprenticeship is dying. A better one is possible. But it will not come from inside the institutions that need it most – because those institutions have spent a hundred years building governance structures that are, by design, incapable of investing in anything that pays off after the current partners have retired to the beach.

Machines can do the work. They cannot yet produce the people who understand it. But they can, if directed wisely, accelerate the process by which those people are made. That possibility – the most important in the future of professional services – requires an ownership model that thinks in generations, not in annual profit-per-partner cycles.

Bower built it once, by personal example, in a firm of 500 people who knew his name. The question is whether it can be built again – at scale, by design rather than by force of character – before the generation that was supposed to be next has already been lost.

Author: Nikolay Sudarikov


Nikolay Sudarikov is an Associate of the Openside Group and the CEO and co-founder of Hypothesis3. Openside is currently working closely with Nikolay and Hypothesis3 on providing AI augmentation to all of Openside’s development modules.