The $5b company with no product: How did Safe Superintelligence raise $1b?
I love writing about startups that make sense: They validate demand in a small market segment, overserve that segment with a great product, then expand from there. But I love it even more when startups don't make sense. And Safe Superintelligence doesn't.
The company just raised $1 billion (from tier-1 VCs like a16z and Sequoia) at a $5 billion valuation, with nothing to show but this website:
That's right: Not only is there no product, there's no plan to ship one with any urgency. Frankly, it's not even clear what a product might look like. Safe Superintelligence (SSI) seems to ignore all best practices:
- It raised $1b pre-revenue instead of being scrappy
- It deliberately focuses on safety instead of "moving fast and breaking things"
- It wants to be insulated from short-term commercial pressures and eschews quick iterations for long-term innovation
If you look at it like a typical SaaS founder (or SaaS investor), this company makes zero sense. Why would anyone consider handing this company a billion dollars and valuing it at five? I see two core reasons for this:
Want an instant unicorn? Hire Ilya Sutskever
Ilya Sutskever is the co-founder of OpenAI and was its chief scientist for almost 6 years. He ultimately left in June of 2024 to pursue Safe Superintelligence.
To call Ilya Sutskever a generational talent is an understatement. He is probably the main reason for the company's valuation. OpenAI, which he co-founded, is reportedly valued at $80 billion (and in talks to raise at a $100+ billion valuation).
Some founders are so qualitatively different that it almost doesn't matter what they do. If Mark Zuckerberg quit Meta to start a pool-cleaning service, people would still flock to invest.
AI models have different economics
It's not clear what exactly SSI will build. From the website, you can broadly infer it's about AI that's even more capable than the frontier models from OpenAI, Anthropic, Google, and Meta.
Whether that means far more advanced LLMs or something completely different isn't clear yet. The VC bet, however, is quite simple: If SSI succeeds in building safe superintelligence, then whatever product it builds will unlock unimaginable productivity gains that companies and people will pay billions for. At that point, $5 billion will look like pocket change.
Of course, this isn't guaranteed. Even for Ilya Sutskever, building superintelligence is ambitious (to put it mildly). If SSI fails, the company will likely be worth close to nothing. This isn't like SaaS, where stalling at a few million in ARR might not be the outcome your investors hoped for, but can still get you acquired.
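This all-or-nothing bet can be sketched as a toy expected-value calculation. Every number below is hypothetical (SSI has published no figures); the point is only that a small chance of a huge payoff can dwarf today's valuation:

```python
# Toy expected-value sketch of the VC bet. All numbers are hypothetical
# assumptions for illustration, not estimates about SSI itself.

p_success = 0.02    # assumed probability of building safe superintelligence
payoff = 1_000e9    # assumed value if it works: $1 trillion
salvage = 0.0       # assumed value if it fails: roughly nothing (no acqui-hire floor)

expected_value = p_success * payoff + (1 - p_success) * salvage
print(f"Expected value: ${expected_value / 1e9:.0f}B")  # $20B vs. the $5B valuation
```

Even with a 2% success probability, the expected value here comes out well above what investors paid, which is the basic shape of any power-law VC bet.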
The second facet of AI model economics is that building AI models simply costs more money. You need data centers, GPUs, and massive amounts of compute, all of which are extremely expensive.
That's entirely different from SaaS businesses, which you can often launch at little to no cost, and where you usually raise money to grow the team. Most of the money is then spent on salaries and software subscriptions, not on research.
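A rough back-of-envelope calculation shows why compute dominates these budgets. The cluster size, run length, and hourly rate below are invented for illustration, not figures from SSI or any lab:

```python
# Back-of-envelope cost of a single frontier-model training run.
# All inputs are hypothetical assumptions, chosen only to show the scale.

num_gpus = 10_000         # assumed cluster size
hours = 90 * 24           # assumed ~90-day training run
price_per_gpu_hour = 2.0  # assumed dollars per GPU-hour

training_cost = num_gpus * hours * price_per_gpu_hour
print(f"One training run: ${training_cost / 1e6:.0f}M")  # One training run: $43M
```

And that's one run: a lab iterating on several large models burns through this repeatedly, which is why AI companies raise at a scale SaaS startups never need to.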
If AI model companies like SSI proliferate, we might see a different software landscape:
Cooking vs. Baking startups
Cooking and baking are both about making food, but they're very different. When you're cooking, you can taste a spoonful of sauce and add the salt it needs (or pepper, or herbs, or whatever). You can course-correct at any time.
With baking, you make something, put it in the oven and hope it comes out well a few hours later. If it doesn't, you need to throw it out and try again.
SaaS is like cooking: You can course-correct anytime and iterate your way to success. AI is much more like baking: You have fewer iterations. Your shots on goal need to hit.
This is a new paradigm - and SSI is at the forefront of it.