
The Moat, Justified by Faith Alone

· fareesh

A boy stood at the ship’s rail. He was sixteen. He had money sewn into the lining of his coat. He watched the dock and the German architecture in the distance. Bremen.

A priest moved through the crowd with a Bible, pressing verses into palms. The boy took one and folded it into his pocket without reading it. He knew the words already. They had been ringing in the Palatinate since before his grandfather was born, spoken in German, directly, without intercession.

The Kingdom of Bavaria wanted him. Twelve years of barracks and boots and officers who spoke with the hard consonants of Munich but answered only to Rome. A Catholic army demanding a Lutheran’s body. He remembered first seeing the conscription notice and then staring at the vineyard. The vines were finished for the season.

In the steerage hold the air tasted of vinegar. He sat down and closed his eyes. He thought of his grandfather pruning the vines with hands that had never touched a rosary. He thought of his mother alone in the stone house, and of sons he did not yet have, and whether they would know why he had left or only that he had. The boat moved into the channel. Beneath him the engine turned, indifferent, carrying him toward a place he could not picture and a future he would not see. He did not pray. The valley had prayed enough for all of them.

He was going to the new world.

The Emperor’s Zero-Days

Meanwhile, in the mundane present, something odd happened earlier this month.

Anthropic announced “Claude Mythos Preview”: a frontier model that it will not release to the public. Mythos, Anthropic claims, could autonomously discover and exploit zero-day vulnerabilities across “every major operating system and browser,” and it allegedly found thousands of them, some sitting undetected for decades.

For non-technical readers, a “zero-day vulnerability” is an unpatched bug, unknown to the program’s developers, that an attacker can exploit to crash the program or gain access to its instructions and data. Vulnerabilities in ubiquitous software like browsers, operating systems, or core server programs can be weaponized to cripple infrastructure or steal data at scale.
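To make that concrete, here’s a toy sketch of one common vulnerability class - command injection - written for illustration here, not drawn from anything Mythos found. Real zero-days in browsers and operating systems are usually subtler memory-safety bugs, but the shape is the same: input the developers never anticipated.

```python
import subprocess

def ping(host: str) -> str:
    """Ping a host and return the output."""
    # BUG: `host` is pasted into a shell command unvalidated. Input like
    # "example.com; cat /etc/passwd" makes the shell run the attacker's
    # command too. Until the developers notice and ship a patch, a bug
    # like this is a zero-day.
    result = subprocess.run(f"ping -c 1 {host}", shell=True,
                            capture_output=True, text=True)
    return result.stdout

# The fix is to skip the shell and pass arguments as a list:
# subprocess.run(["ping", "-c", "1", host], capture_output=True, text=True)
```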

Naturally, this caught a lot of attention and sparked an equal amount of debate. The model’s benchmarks looked extraordinary as well - a seemingly generational leap over present-day models.

This was also the first time since GPT-2 (2019) that an AI lab withheld a general-purpose model over security concerns. We all know how that one turned out, considering everyone has free access to GPT 5.4 today and the world is still spinning.

Instead, Anthropic announced “Project Glasswing”, an invitation-only consortium of 50 organizations - Amazon, Apple, Microsoft, Google, the usual suspects.

Anthropic’s public statements claimed “thousands of high-severity vulnerabilities,” but only about 200 findings had been checked manually, and roughly 90% of those looked reasonable to researchers. The 200 were extrapolated into “thousands.”

Then there is the replication problem.

Independent security researchers wanted to know if this was unique to Mythos. The CEO of HuggingFace reported that small, cheap open-weight models, when fed the same vulnerabilities Anthropic highlighted, “recovered much of the same analysis” for a fraction of the cost.

AI security researcher Stanislav Fort ran an experiment on another vulnerability Anthropic touted as evidence of Mythos’s singular capability. Eight existing models (all cheaper and more accessible than Mythos) discovered the same issue.

None of this means Mythos is a bad model. It’s probably among the best models in the world right now. The packaging, however - the existential dread, the secretive access, the $100 million credit-backed defensive consortium - appears to be theater.

Whether Mythos lives up to the claims is almost beside the point. The pattern is what matters. I don’t think we’ve seen the last of it.

The Whale in the Room

A few months after ChatGPT went public in 2022, a Chinese hedge fund spun up a small research team to build its own LLM. It was a side project, but with real money and real talent.

About a year later they had DeepSeek-V2. Early the year after came DeepSeek-R1. They weren’t as good as the top models, but they were close. Other Chinese labs like Alibaba and Moonshot had similar stories.

The biggest difference was that these models were free.

For most people “free” doesn’t matter. You can download the model, sure, but your laptop isn’t going to run it, and if it does, it’s going to take hours to respond.

For builders, it changed everything. You, in your pajamas (yes, you), could have grabbed DeepSeek-R1 or something similar, rented GPUs for $200/hr, and shipped a product that was better than early ChatGPT and almost as good as current-gen GPT. Don’t know how? No problem, the model will probably do it for you.
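For the skeptics, here’s roughly what that looks like - a minimal sketch assuming vLLM’s OpenAI-compatible server, with the model name and GPU count purely illustrative:

```python
# Serve an open-weight model with vLLM (R1-class models want a multi-GPU
# node; the smaller distilled variants run on far less):
#
#   pip install vllm
#   vllm serve deepseek-ai/DeepSeek-R1 --tensor-parallel-size 8
#
# Your "product" then talks to it like any hosted API:
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="unused")

response = client.chat.completions.create(
    model="deepseek-ai/DeepSeek-R1",
    messages=[{"role": "user", "content": "Draft my landing page copy."}],
)
print(response.choices[0].message.content)
```

Rent the GPUs by the hour, put a UI in front of it, and the stack is yours.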

What about the next model? No problem. ChatGPT gets better, Claude gets better, eventually you download a better one, and you also get better.

You wouldn’t have made any profits, but neither did OpenAI or Anthropic.

That was 2025.

Intelligence is getting cheaper faster than distribution is.

One TRILLION Dollars

If you counted from 1 to 1 trillion it would take more than 32,000 years.
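That’s easy to sanity-check - my arithmetic, assuming roughly one number per second:

```python
# Counting to a trillion at one number per second, with no sleep:
seconds = 1_000_000_000_000
years = seconds / (60 * 60 * 24 * 365.25)
print(f"{years:,.0f} years")  # ~31,688 years; the longer numbers take
                              # more than a second each to say, pushing
                              # the total past 32,000
```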

OpenAI is aiming for an eventual IPO at a valuation of $1 trillion. The thesis is simple.

OpenAI is a frontier AI lab. They’re building the infrastructure for the AI economy. Everyone is using ChatGPT; it is a household name. They’re building out massive compute infrastructure, and just as Google runs the web of today, OpenAI will run the web of tomorrow. If you could go back 25 years and invest in Google, wouldn’t you do it? Are you in?

When you put it like that, who wouldn’t be? Well, until recently.

How about this - what if, instead of going back 25 years, you could go back just a few years and invest in OpenAI? Well, Claude is a great model, Anthropic has strong enterprise uptake and a more efficient revenue stream, so why invest in OpenAI when you could just invest in Anthropic instead?

And so everyone’s social media feeds “organically” gravitated to Claude posts as investor money started to flow towards Anthropic.

Not one, but two trillion dollar companies.

Moonshot

Earlier this week, Moonshot released Kimi K2.6. They claim the model actually beats Opus 4.6 and GPT 5.4 on benchmarks. These are tall claims, and while the model is still new, with inference and serving issues being ironed out, there’s some degree of truth to them.

I’ve used the previous incarnation, Kimi K2.5, nearly every day for the past several months, and it’s the best model I’ve found for everyday agent-driven tasks. It isn’t the smartest model for every task, nor the most capable for every task. It sits at the sweet spot of speed, quotas, and capability that makes it the reliable overall choice. It makes mistakes, it forgets things, but it does the things I want it to, more often and faster than the frontier labs’ models, and I could run it for most of my day and comfortably not run out of quota.

YMMV.

At this stage, the question the markets have been debating for the past year feels more relevant than ever.

Where is the moat?

In 2026, once more, ye of the entrepreneurial persuasion could be home in your pajamas running your own LLM company. Kimi K2.6 is free. DeepSeek-V4 is free. GPUs are available to rent. You could conceivably copy every ChatGPT and Claude feature to a reasonable degree and have a poor man’s Anthropic in two weeks.

You didn’t make the model, sure, but if your product is good and does the job it needs to, is the consumer going to care?

Obviously, I am being glib and this is an exaggeration. Of course you have no users. Nobody knows you. Nobody is going to use your ghetto pretend-LLM regardless of how good it is. You have no talent, no brand name, no recognition, no marketing, no trust, no SOC2 certificate.

Those things are valuable. After all, if you had them, you’d be able to capture some serious market share.

They are, however, not worth half a trillion dollars.

The Three Ways

There are three ways to make a living in this business: be first; be smarter; or cheat. Now, I don’t cheat. And although I like to think we have some pretty smart people in this building, it sure is a hell of a lot easier to just be first.

– John Tuld, Margin Call

Google and xAI are reshaping their existing businesses around Language Models, but OpenAI and Anthropic are the first pure-play LLM companies, and for now, they are the smartest.

With the commodification of intelligence, however, it’s only a matter of time before “smarter” is no longer relevant and “good enough” clears the bar by leaps and bounds.

After all, how often do you really need the head of hardware engineering at Samsung to help you with your electronics exam? Most of our problems are mundane, most of us aren’t curing cancer.

A few months ago, Sam Altman sent out an internal memo to all employees calling for a “Code Red” at the company because of threats from competition. I can’t imagine things are any better now.

And so, with trillions of dollars on the line, the question asks itself. There are three ways to make a living in this business. Being first helped, to a degree. Being smarter is becoming irrelevant. The only thing left to do is cheat.

The great thing about that Margin Call quote is that it’s rooted in observable behavior. When people operate in environments where effort and outcome stop lining up, mindsets shift. Over time, people stop believing they can stay ahead by just being better - that they can win on those terms at all.

If better models don’t translate into durable advantage, and if every breakthrough is matched, distilled, open-sourced, or undercut within months, then “be smarter” is no longer a strategy.

At the apex of where wealth and power intersect, that dynamic can reshape the system itself.

They can cheat.

At this scale, ‘cheating’ isn’t breaking the rules but rewriting them: licensing regimes and safety frameworks that entrench incumbents.

Regulatory Capture

I also worry that that legitimate concern is justifying some over-regulation or some preemptive over-regulation attempts that would frankly entrench the tech incumbents that we already have and make it actually harder for new entrants to create the innovation that’s going to sort of power the next generation of American growth and American job creation. Very often CEOs, especially of larger technology companies that I think already have advantageous positions in AI will come and talk about the terrible safety dangers of this new technology and how Congress needs to jump up and regulate as quickly as possible. And I can’t help but worry that if we do something under duress from the current incumbents, it’s going to be to the advantage of those incumbents and not to the advantage of the American consumer.

– Erstwhile US Senator from Ohio, JD Vance

I said we hadn’t seen the last of the “Mythos” propaganda, because we’ve heard it for years in many forms. It feels like it’s only going to get worse from here on out.

If I had to guess (and I hope I’m wrong), regulatory capture is next on the agenda.

It will begin with an increase in safetyist rhetoric and fear-mongering. “Language models are dangerous”. “Rogue nations could use them to develop cyberattacks”. “Planes will fall out of the sky”. “Children are talking to these models and ending their lives”. “Governments must allow only licensed businesses to develop and serve large language models”.

Many would say that sounds reasonable, by the way, and that is the crux of why I wrote this. It’s a mainstream view rooted in good intentions: dangerous technology should be controlled. Experts should handle it. Better safe than sorry. And, in isolation, each of those arguments makes sense. The problem is what happens when you zoom out.

You and I aren’t going to build a jet engine. We aren’t going to build a nuclear power plant either.

Language models are not the same. This is general-purpose technology. Writing, programming, research, communication - everyday things that millions of people do - are downstream of language, and now downstream of language models.

So the idea that “only licensed providers should be able to build and serve these systems” comes down to the uncomfortable idea that most people would be using someone else’s intelligence - locked behind some bureaucrat’s compliance regime and handed out by approved providers.

Everyone has heard the clichéd “you are not going to lose your job to AI, you are going to lose your job to someone using AI” line that the frontier model companies rattle off every chance they get.

This slowly becomes a problem if you consider a system where you aren’t allowed access to the best models of today, but your competitors - perhaps your coworkers, peers, or rival businesses - are. Under a system like that it’s illegal for you to access better technology, and it’s illegal for anyone to supply you with it.

Where does that leave you?

Reformation

Video: a website, with image and video assets, generated one-shot by a Kimi K2.6 agent from a single short prompt containing the rough idea behind this blog post.

For those familiar with the Reformation, this should all sound very familiar. For those who aren’t - a brief recap.

For most of European history, ordinary people didn’t read the Bible. They were religious, to a large degree, and piety was considered something of a virtue, but they never read it because they couldn’t. It was written in Latin and church services were in Latin. If anyone wanted to understand scripture, they went to the clergy. All worldly pre-Renaissance knowledge flowed through the clergy. Almost nobody understood Latin. The Bible was the source of all wisdom, but nobody knew what was really in it, and for all practical purposes there was only one version.

The system worked well. Order was maintained, and depending on whom you asked, it was “necessary” to keep civilization together.

Life went on until one day a monk named Martin Luther translated the Bible into German. The recent invention of the printing press had brought European civilization into the age of movable type, four centuries after the Chinese.

In any civilization everything changes when the written word can be mass-produced and distributed. Everything.

The gatekeepers of knowledge and wisdom slowly became irrelevant.

“Being first” became irrelevant. “Being smart” became irrelevant too.

Interpretations of the Bible began to fracture, the genie was out of the bottle, and all of Europe gradually reordered itself, violently.

Lines were drawn between Catholic and Protestant. Battle lines. Political authority fragmented, and the legitimacy it claimed eroded in some places. Regions armed themselves, unified Christendom fell apart, and for thirty violent years Europe went to war, until the eventual Peace of Westphalia - the turning point that formalized the concept of the sovereign nation-state as we understand it today.

There was peace between rulers, but the masses hadn’t come around to it yet. Even two hundred years later, carrying a post-Reformation religious identity - being Lutheran, being Protestant - was not easy in Europe.

I like to think it took a bit of a rebellious, anti-authoritarian streak in one’s personality to become a Protestant back then. All of us know someone like that; some of us are someone like that. You hear a decree handed down from on high and think to yourself, “yeah, that doesn’t work for me,” and it doesn’t matter how much pressure there is to conform, it’s not going to happen.

The fruits of the Protestant Reformation seeded the thirteen colonies in the New World, on the continent of North America. In some ways it was a pressure valve. Wars had been fought and ended, but things were never the same for the people after the Reformation. When the system became too prescriptive, too centralized, and too rigid, they could opt out. They could go somewhere else and build something new. It kept tensions between the factions from escalating into war over religion again.

All of this has happened before. All of this will happen again.

The technological leap offered by movable type in Europe changed the face of the world forever. When information became free from centralization, Christianity fractured, Europe fractured, America was born, and the world was never the same again.

China got there four centuries earlier. Printing had existed there since the Song Dynasty, and it was absorbed into the existing order. They had paper, they had printing, and it just scaled what they already had. One branch of history optimized for continuity, the other for disruption. China stayed stable, durable, and far ahead of the rest of the world - until it briefly wasn’t.

Different games, different outcomes as history unfolded - but here we are again.

The American labs talk about openness, they ramble on about safety and responsibility, but they don’t release the models. They centralize them. They gate them. They license them. They decide who gets access and who doesn’t. They call themselves “Open AI” but they’re effectively the clergy.

The Chinese labs are releasing open-weight models. Fast, cheap to run, widely accessible. Good enough, and getting better. Free to download, distributed by design: anyone can run them, change them, push them further, and nobody decides who gets to use them or what the model is allowed to say or do.

Two systems. One decides what you’re allowed to see, build, and understand. The other lets you figure it out yourself, for better or worse. At this point, for most people, it doesn’t feel like a big deal. Both work. Both produce results. Both look reasonable, and either one works - until you’re the one inside it, and the system decides what your limits are.

Unlike before, it is not just what you can access, or even what you’re allowed to build. This time it will control what you’re allowed to become, and that is when it stops being a philosophical question and becomes a personal one.

In 2022 or 2026, if parts of Europe go to war, we all feel it no matter where we are. The price of oil changes the economics of distribution. Essentials become expensive. Budgets change. Jobs are lost. Lives are changed. Iran and Israel go to war in West Asia, and for many around the world, their money is suddenly worth 10-20% less in the global marketplace.

If you aren’t convinced that obscure philosophical differences in faraway places, about things that don’t seem to concern you, can sometimes impact your daily life in ways you’d never expect, then I will leave you with the conclusion of the story that opened this post. We return to the seas off the shore of Bremen.

Beneath the deck of the steamship, the boy, now dozing, dreamed of his future life in America, all that came before him, and all that would come after.

Every night he dreamed. He dreamed of the valley. He dreamed of the stone church on the hill, and of monks in scriptoriums hunched over vellum, copying words they did not want him to read in a language they did not want him to speak. He dreamed of fires. Of printing presses clanking in German towns. Of armies marching across the Palatinate because a man had said the words sola fide and meant them.

He dreamed of blood. Not his own, but of his Lutheran forefathers, and of his children, and their children. The same stubborn refusal that he carried across an ocean and into an unknown future.

If prevailing wisdom says there are three ways to make it in life, the Protestants of old said no, and decided on a fourth. You can be first, you can be smart, you can cheat. Or you can leave - and they did.

He did.

On October 19, 1885, the steamship Eider sailed into the Castle Garden landing depot in New York City. The ship had emptied after a long journey, but he was still asleep. Port authorities found him nestled in a corner and shook him awake.

“Wake up, boy, you’re here.”

He stirred, blinked, and sat up slowly, as if the world hadn’t quite caught up with him yet.

“What’s your name, son?”

He hesitated for a moment.

“Friedrich. Friedrich Trump.”