Imagine a Meta employee — call her Serena — sitting at her desk in early 2026. She has a personal AI assistant baked into her workflow. She uses it to draft proposals, summarize meeting notes, structure her code reviews, and prioritize her week. She's faster than she's ever been. Her manager notices. Her performance review reflects it.
Serena is not worried about AI. Serena loves AI.
This is where most articles about Meta's "AI for Work" initiative end. Mark Zuckerberg has announced that 2026 is the year AI starts to dramatically change how Meta works. CTO Andrew Bosworth now leads the push. The goal: make every one of the company's 78,000 employees a one-person team — a squad of one, augmented, accelerated, multiplied.
The coverage is almost uniformly positive. More output. Less bureaucracy. Fewer management layers. The vision of China's one-person companies — where a single founder with AI handles what used to take entire departments — scaled to enterprise. Nobody asks what else is being built in the process.
The Interview You Never Agreed To
Every time Serena asks her AI to restructure a proposal, she is teaching it her preferred argument flow. Every time she overrides a suggestion, she documents her judgment. Every correction is a data point. Every shortcut she takes, every instinct she follows, every operational decision she makes: logged, indexed, pattern-matched.
The AI is not just her assistant. It is her shadow.
The most dangerous moment in any knowledge worker's career is when their tacit knowledge becomes explicit.
Tacit knowledge — the "how" behind what you do, built over years of practice — is what makes you hard to replace. It lives in your head. It cannot be downloaded. Until, that is, you spend eighteen months using a personal AI that asks you to explain your decisions, show your reasoning, and validate its outputs.
At that point, it can be downloaded. Or close enough.
Frederick Taylor Had the Same Idea
This is not a new story. In the early 1900s, Frederick Taylor watched factory workers do their jobs and timed every motion, every step, every technique. He extracted the tacit craft knowledge of skilled workers, codified it into standard procedures, and handed those procedures to the next person down the wage scale. The craftsmen became interchangeable. The knowledge left the body and entered the system.
Taylor called it scientific management. We called it the birth of modern productivity.
What Meta is doing in 2026 is Taylor's protocol applied to knowledge work. The personal AI is the time-and-motion camera. The workflow is the factory floor. The employee is both the subject and, unknowingly, the author of the manual that will one day train their replacement.
The twist is that this version comes with a friendly interface, a productivity boost, and your company paying for the subscription.
The Numbers to Read Carefully
Research from the Dallas Fed shows that early-career workers (ages 22-25) in AI-exposed occupations have seen a 13% relative decline in employment since late 2022. Experienced workers in the same roles? Stable or growing.
The standard interpretation: AI replaces entry-level work but complements experience. The under-examined reading: we are still in the extraction phase. Junior workers go first because their knowledge is already codified (degrees, textbooks, credentials). Once the tacit layer is extracted from experienced workers — through years of daily interaction with their personal AI assistants — the dynamic shifts.
The tools for that extraction are now being handed out, one per employee, at enterprise scale. Meta's internal tool "Second Brain" indexes and queries project documents, acting as a virtual chief of staff. Zuckerberg himself is building a personal CEO agent, trained on internal data and engineering roadmaps, to get answers "without going through layers of people."
The layers of people, incidentally, are the 78,000 employees currently building the dataset.
The Bias You Are Teaching It, Too
There is a second effect — quieter and more immediate than replaceability. Every personal AI assistant learns not just your strengths, but your patterns. Including the bad ones.
If you skip steps under deadline pressure, your AI learns to skip steps. If you write performance reviews with implicit bias (and research consistently shows most people do), your AI will help you write those reviews faster, with more apparent rigor, at scale. Industry studies put AI project failure rates near 80%, and biased algorithms are among the recurring causes — yet a significant share of that bias does not come from the training data. It comes from the human doing the training.
The productivity multiplier is real. The bias multiplier is equally real. What varies is which one gets measured.
A properly guided implementation, one that surfaces and corrects patterns rather than amplifying them, is technically possible. It requires deliberate investment in governance, feedback loops, and human review. Most rollouts skip this part. "AI for Work" as a corporate initiative almost never leads with it.
What Serena Does Not Know
The real output of Serena's AI assistant is not the proposal she drafted this morning. It is not the code she reviewed, the meeting she summarized, or the roadmap she restructured with three prompts instead of three hours.
The real output is the model of Serena.
Precise. Reproducible. Portable. And getting more complete with every working day.
She is not being replaced yet. She is being documented. Those are different things, for now.