THE UPDATE

MiniMax M2.7: the AI that rewrites its own code. Without asking permission.

What happened: MiniMax, a Shanghai-based AI company, released M2.7, a model that participated in its own creation. It ran over 100 autonomous cycles where it analyzed its own failures, rewrote its own code, tested the results, and decided what to keep and what to throw away. No human in the loop.
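The cycle described here is, in spirit, a simple propose-test-select loop. A minimal sketch in Python (the scoring and mutation functions are hypothetical stand-ins, not MiniMax's actual pipeline):

```python
import random

def evaluate(params):
    # Hypothetical benchmark score for a candidate configuration.
    # Stands in for running the model against its own test suite.
    return -sum((p - 0.5) ** 2 for p in params)

def propose_change(params):
    # Stands in for the model rewriting part of itself:
    # perturb one element and return the modified candidate.
    candidate = list(params)
    i = random.randrange(len(candidate))
    candidate[i] += random.uniform(-0.1, 0.1)
    return candidate

def self_improve(params, cycles=100):
    best_score = evaluate(params)
    for _ in range(cycles):
        candidate = propose_change(params)   # rewrite
        score = evaluate(candidate)          # test the result
        if score > best_score:               # keep or throw away
            params, best_score = candidate, score
    return params, best_score

random.seed(0)
final_params, final_score = self_improve([0.0, 1.0, 0.2])
```

No human appears anywhere in that loop: the system generates the change, measures it, and decides its fate. The claimed novelty is doing this at the scale of a full research workflow rather than a toy parameter search.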

The result: a 30% performance improvement, achieved by the model itself. M2.7 now handles 30-50% of the reinforcement learning research workflow, work that previously required entire teams of AI researchers for weeks.

The kicker: It does all this while activating only 10 billion parameters. It's small. It's cheap. And it's replicable.

WHO SHOULD BE WORRIED

The builders are now being built out.

AI researchers and ML engineers. Yes. The people building AI. M2.7 doesn't replace the factory worker or the cashier. It replaces the people who train AI models. The ones who thought they were safe because they were on the "right side" of automation.

R&D teams in tech companies. McKinsey estimates this type of self-improving AI can cut R&D costs by up to 40%. That's not a line-item reduction. That's nearly half the team.

Junior data scientists and research assistants. The model handles the repetitive, iterative work: running experiments, analyzing results, tweaking parameters. That's the job description of most entry-level AI roles.

WHY THIS IS DIFFERENT

This isn't AI doing your job. It's AI doing the job of improving AI.

Every week, a new AI model comes out that writes better code, generates better images, or answers questions faster. That's incremental. This is not.

M2.7 is the first widely-released model that improves itself. Think about what that means: until now, making AI better required humans. Better data, better architecture, better training. All human decisions. M2.7 breaks that dependency.

The loop used to be: Humans build AI → AI does work → Humans improve AI.

Now it's: AI builds AI → AI does work → AI improves AI.

The human in the middle is becoming optional. Not today. Not fully. But the direction is unmistakable.

THE NUMBER THAT MATTERS

One number. No spin.

30-50%. That's the share of the research workflow M2.7 handles autonomously. Not 5%. Not "assists with." Handles. Independently.

Now ask yourself: if a tool can do 30-50% of a researcher's job today, how long before it does 70%? These models don't plateau. They compound.

WHAT NO ONE IS SAYING

The question everyone is afraid to ask out loud.

The conversation around AI and jobs always focuses on blue-collar work, creative work, customer service. "Will AI replace writers? Will AI replace drivers?"

No one is asking the real question: what happens when AI replaces the people who build AI?

Because once that loop closes, once AI can meaningfully improve itself without human intervention, the speed of change stops being linear. It becomes exponential. And every job displacement prediction you've read becomes an underestimate.

M2.7 is not that moment. But it's the clearest signal yet that the moment is coming. And almost nobody is talking about it.

THE BLINDSPOT

The thing hiding in plain sight.

This is the section where we name it.

This week's blindspot: The people most at risk from AI aren't the ones already worried about it. It's the AI engineers, researchers, and developers who believe their expertise makes them irreplaceable. M2.7 just proved that the machine doesn't need their expertise forever. It's learning to do it alone.

The safest job isn't the most technical one. It's the one a machine can't teach itself to do.

The Blindspot is a weekly newsletter that tracks every major AI update and tells you what it really means for your job. Before your boss does.

If this landed, share it with someone who needs to read it.
