Venture Capital & Free Lunch

Welcome to the 68 newly Not Boring people who have joined us since last week! If you haven’t subscribed, join 220,300 smart, curious folks by subscribing here:

Subscribe now

Today’s Not Boring is brought to you by… Range ETFs

Did you enjoy Season 1 of Age of Miracles and now want to invest in the growing nuclear energy market? The Range Nuclear Renaissance ETF is your gateway to the clean, safe, and ever-expanding world of nuclear power.

If you didn’t listen to the pod, here’s a quick summary of why we’re so bullish on nuclear:

  • Clean: Nuclear energy emits zero greenhouse gases.

  • Safe: Don’t buy into the stigma: nuclear is one of the safest energy sources in the world.

  • Reliable: Nuclear power plants deliver 24/7, always-on electricity, providing the backbone of stable grids.

The Range Nuclear Renaissance ETF offers:

  • Nuclear Energy Diversification: Gain exposure to a diverse range of companies across the nuclear energy spectrum.

  • Growth Potential: The nuclear energy industry is likely poised for expansion, driven by ambitious climate goals and rising energy demands.

  • Liquidity and Convenience: Trade the ETF easily on major exchanges, enjoying the flexibility and transparency of a publicly traded instrument.

Don’t be left in the dark. Invest in the Range Nuclear Renaissance ETF today and illuminate your portfolio with the power of clean, safe, and reliable nuclear energy.

Learn More

Hi friends 👋,

Happy Tuesday!

Optimism is in the air, and so am I. I’m sending this from 32,000 feet up on my way out to California to meet with companies building satellites, Von Neumann universal constructors, cultivated meat, hydrocarbons made with CO2 and nuclear, and more. All of these outlandishly ambitious ideas have been funded by venture capitalists.

Last week, Nadia Asparouhova made the case that tech is starting to stick up for itself again. You know who no one’s willing to stick up for, though? The venture capitalists.

Let’s get to it.

Venture Capital & Free Lunch

Venture capital gets a lot of shit, but I’m here to tell you that venture capital rocks.

In fact, it’s the best asset class there is. 

I can hear the other asset classes yelling at me. Certainly, some have a case: 

  • Public equities are the largest 

  • US Treasuries are the safest

  • Real estate is the only one you can live in

  • Private equity is an asset class. 

But there is no more beautiful asset class than venture capital.

Venture capital is a free lunch machine. 

I’ll admit that venture capital isn’t perfect. It’s risky, illiquid, and highly variable. The best venture funds perform amazingly well; the worst ones are horrendous. 

Pitchbook Q2 2023 Global Fund Performance Report

Even the best venture funds are wrong much more often than they’re right. Peter Lynch said of public markets investing, “In this business if you’re good, you’re right six times out of ten.” In venture capital, if you’re good, you’re right maybe three times out of ten, possibly twice, probably once, but when you’re right, you’re really right. 

Therein lies the beauty. 

No asset class’s constituents fail at a higher rate than venture capital’s, yet venture capital’s returns match or exceed all the others’.

Think of the dumbest fucking investment your least favorite venture capitalist has made. FTX, WeWork, Quibi, Juicero, Theranos, pick your poison. Choose all of them, if you’d like. Sprinkle in some Clubhouse, the 9,000th dating app, the 78,000th social network, the 19th best foundation model company, the 10,000 PFP NFT project. 

All of those screaming zeroes are included in these returns:

Michael Mauboussin, Public to Private Equity in the United States: A Long-Term Look

Cambridge Associates, US Venture Capital, as of June 30, 2023

The data isn’t super clean. Depending on the time horizon and whether you’re talking risk-adjusted returns, venture capital may or may not be the best-performing asset class. And investing in venture capital requires locking your money up for a decade, unlike stocks or bonds, which you can sell right now.

But the fact that venture capital is even in the running, and wins on some time horizons, means that the world gets all of that innovation – from failures and winners alike – for free. 

There is no such thing as a free lunch, except, perhaps, when it comes to venture capital. 

Free Lunches

This doesn’t mean that venture capitalists are smarter than anyone else. As I mentioned earlier, they’re wrong a lot. 

What it does mean is that venture capital, with its Power Laws, is the asset class best designed to embrace variance.

Would returns look better if VCs simply funded the good companies and didn’t fund the bad ones? Maybe in the short-term, maybe not in the long-term, but it’s a moot point, because no one knows what the great ones are going to be ahead of time. Venture is beautiful for embracing that. 

Venture capitalists can fund the wildest ideas in the world, many of which won’t work, some of which will work bigly, and their returns as a group end up being pretty great. 

And on top of that, we get the fruits of innovation, because venture capital is the only asset class that funds truly new things. 

Take the current hype cycle in AI. Venture capitalists are pouring stomach-churning amounts of money into foundation model companies, billions of which will, for all intents and purposes, be lit on fire. The problem is: it’s hard to tell which billions until you put the money up and watch it play out. Venture capital is designed to put those chips on the table and see what happens.

If history is a guide, most AI companies will go to zero, and a small handful will generate enough returns to carry the rest. Those returns will go to venture funds’ limited partners, the charities, endowments, pensions, and other individuals and institutions who invest in venture funds, who will take that money, reinvest some into the next generation of venture funds, and operate their institutions with the rest. 
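The power-law dynamic described above is easy to see in a toy simulation. The sketch below uses entirely invented probabilities and multiples (roughly: most investments go to zero, a few return modestly, a rare outlier returns 20–100x); it isn’t a model of any real fund, just an illustration of how a handful of winners can carry a whole portfolio:

```python
import random

def simulate_portfolio(n_companies=20, seed=0):
    """Toy power-law portfolio: most investments go to zero,
    a rare outlier returns the whole fund many times over.
    All probabilities and multiples here are invented for illustration."""
    rng = random.Random(seed)
    multiples = []
    for _ in range(n_companies):
        r = rng.random()
        if r < 0.70:       # ~70% are total losses
            multiples.append(0.0)
        elif r < 0.95:     # ~25% are modest outcomes
            multiples.append(rng.uniform(0.5, 3.0))
        else:              # ~5% are the outliers that carry the fund
            multiples.append(rng.uniform(20.0, 100.0))
    return multiples

m = simulate_portfolio()
print(f"zeros: {m.count(0.0)}/{len(m)}")
print(f"equal-weighted portfolio multiple: {sum(m) / len(m):.1f}x")
```

Run it across different seeds and the zeros dominate the count while the outliers dominate the return, which is the whole point: the failure rate and the aggregate return are not in tension.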

Way back in 2007, before he became a VC himself, Marc Andreessen wrote:

The best VCs get to improve society in two ways: by helping new companies take shape and contribute new technologies and medical cures into the world, and by helping universities and foundations execute their missions to educate and improve people’s lives.

The world, for its part, may get AGI in the bargain. 

It is not from the benevolence of the venture capitalist that we expect our AGI, but from their regard to their own self-interest.

The Visible Invisible Hand

The invisible hand is more visible in venture capital than it is in any other asset class. 

Venture is an ecosystem made up of parties acting in their own self-interest that seems to operate with some sort of collective intelligence on a longer timescale. 

If you zoom in on the behavior of any one participant at any given time, you might think that they’re behaving stupidly. In many cases, they are. The ecosystem works not in spite of, but because of, the stupidity of some of its participants. 

Venture capitalists are willing to fund companies creating new technologies even if those companies don’t seem to have a viable business model in sight. The best venture capitalists try to fund companies that marry new technology with viable business models, but some technologies are just too early to make any economic sense. That’s OK! There are venture capitalists willing to fund those, too. 

Maybe they believe that the team is smart and will figure out a model. Maybe they hope the company will be acquired for its technology. Maybe they hope the market will catch up while the company is still a going concern. Maybe they just got really excited and bought into the hype. 

It’s easy to make fun of VCs for buying into hype so quickly. My X feed is chock full of memes about VCs who went from being web3 investors to AI investors to Gundo investors faster than you can say “Patagonia vest.” 

I think hype-buying is one of venture capitalists’ most endearing qualities. 

Hype is usually an indicator that there’s something there, even if that something is a long way away. As I wrote in Capitalism Onchained, “Any technology that is sufficiently valuable in its ideal state will eventually reach that ideal state.” But it takes years, sometimes decades, of winding experimentation, and millions, sometimes billions, of dollars to reach that state, or even to reach a pre-ideal state in which the technology can be built and sold profitably.

Who’s going to fund that experimentation period? A bank? The public markets? Nope. Venture capitalists. 

Take the clean tech bubble that Kleiner Perkins helped kick off in the mid-2000s. In 2007, Kleiner Perkins Partner John Doerr said, “Going green is bigger than the Internet. It could be the biggest economic opportunity of the 21st century.”

Kleiner and other venture capitalists ended up investing a tremendous amount of money in clean tech – on the order of tens of billions of dollars. They lost money on the vast majority of those investments. But their investment attracted talent to the industry and provided an incentive to improve clean technologies, helping lay the groundwork for the dramatic increase in renewable and storage capacity we’re benefiting from today, nearly two decades later. And then there’s Tesla, which, at a $626 billion market cap, is worth multiples of all of the money invested in clean tech.

Along with the government, which funds basic research, venture capitalists often fund too-early technologies as they take their first steps out of the lab and into the cutthroat world of the market. They provide the capital that sustains technologies through the flat-looking parts of eventually exponential curves, and often lose that capital. Then, when the curves go exponential, the next generation of venture capitalists take advantage to make investments that actually deliver returns. 

That’s what I mean when I say that there seems to be collective intelligence at play on longer timescales. The opportunity to generate returns today might not exist if someone hadn’t been willing to lose money decades ago. No venture capitalist does this altruistically – they’re driven by the small, against-all-odds chance that this too-early technology might be the next big thing – but in their mistakes they create opportunity for others. 

Losing money is all fun and games for VCs because it’s not their money, right?

But What About the Pensioners? 

Whenever a specific investment goes south, a cry goes up for the poor pensioners, charities, and endowments who are really the ones funding all of these experiments behind the scenes. 

I’m here to tell you that those pensioners are doing just fine, thank you very much. 

Limited Partners (LPs) often manage large pools of capital that they invest across a number of asset classes. Yale’s endowment, for example, is roughly $42 billion. 

NACUBO (The National Association of College and University Business Officers) does an annual survey on endowments’ asset allocations. When Marc Andreessen wrote The Truth About Venture Capitalists in 2007, NACUBO found that endowments allocated 3.5% of their assets to venture capital. By 2023, that number had grown to 11.9%.

LPs build portfolios that are diversified across asset classes, and then further diversified within asset classes. Within venture, they will invest in a number of funds, each with its own strategy: some early stage, some late stage; some generalist, some vertical. They continue to invest across vintages, meaning that every year, they’ll invest in new funds or re-invest in funds they’ve already invested in. Those funds, in turn, diversify across a number of investments that fit their criteria over a number of years. 

From the LPs’ perspective, venture capital is a small but growing high-risk / high-reward piece of a much larger portfolio. If any individual investment that one (or multiple) of their venture managers make fails, no matter how spectacularly, it’s unlikely to have a big impact on the overall portfolio. What’s more important to LPs is that venture as an ecosystem continues to take the kinds of risks that have a shot at driving higher returns. 

If one venture firm makes a bunch of too-early investments that lose a lot of money, that firm may go out of business, but the LP will still be around when the fruits of those too-early technologies ripen.

From their gods-eye perspective, LPs can invest in all of the messiness – the successes and the failures – and expect that on the other side, in ten or so years, they’ll get multiples of their money back. Then they’ll keep investing, and harvest the seeds sown by the previous generation’s losers.

More returns means more money to donate to charity, fund growing pensions, run universities, and continue to invest in venture capital.

The Fund Size Paradox

One of the things LPs do with those returns is continue to invest in the funds that got them there, inflating those funds’ assets under management over time.

The rise of the resulting megafunds has drawn criticism, occasionally from venture capitalists themselves, based on the argument that large funds are terrible for returns and exist to generate management fees. 

Venture capital funds typically earn “2 and 20”: 2% management fees and 20% carry. They take 2% of the fund size every year, typically over a ten-year fund life, to pay salaries and run the fund, and they keep 20% of the profits once they’ve paid LPs back.

The thinking behind the criticism of megafunds goes something like this: it’s harder to generate outsized returns on larger pools of capital than it is on smaller pools of capital. Owning 20% of a company that IPOs for $10 billion means a 20x for a $100 million fund, but doesn’t even return a $3 billion fund! But they don’t care, because they make 2% of a big number every year no matter what. 
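That arithmetic is worth making concrete. A quick sketch, using the same hypothetical numbers as the text (a 20% stake, a $10 billion IPO, a $100 million fund versus a $3 billion fund):

```python
# Toy arithmetic behind the megafund criticism. All numbers are the
# hypothetical ones from the text, not data about any real fund.

def exit_multiple(fund_size, ownership, exit_value):
    """Gross multiple that a single exit returns on an entire fund."""
    proceeds = ownership * exit_value
    return proceeds / fund_size

# A 20% stake in a $10B IPO is a 20x gross return for a $100M fund...
print(exit_multiple(100e6, 0.20, 10e9))

# ...but doesn't even return a $3B fund once:
print(exit_multiple(3e9, 0.20, 10e9))

# Meanwhile, a 2% management fee on the $3B fund is roughly $60M a year,
# regardless of performance:
print(0.02 * 3e9)
```

The same exit that makes a small fund’s decade makes barely a dent in a megafund, which is exactly the critics’ point; the question the next section asks is why LPs fund the megafunds anyway.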

So if megafunds exist solely to milk fees out of LPs, why the hell do LPs definitionally invest more money in these larger funds than they do in smaller funds? 

Let’s go back to this graphic:

Assume that every year, there’s a certain amount of money that’s going to get allocated to venture capital and a certain number of companies that are going to generate returns. 

If all of the money that’s currently allocated to megafunds instead got allocated to smaller early stage funds, there would be a few issues. 

First, those small funds’ extraordinary returns would decrease. More competition at the early stages would likely drive higher valuations and it would mean that each early stage fund would win fewer deals, decreasing the likelihood that they’ll own one of the handful of companies that matter. 

Second, and more importantly, there would be less capital available downstream to fund winners. Some companies need hundreds of millions of dollars to get to the point at which they can become profitable. This was true even of pure software companies, and it will be increasingly true as venture capital funds more capital-intensive foundation model and deep tech companies.

Without megafunds operating at the Series A and beyond, more companies would fail before they have the chance to go public, which would lower overall returns to LPs. That, in turn, would mean that small, early stage funds couldn’t take the types of risks they need to take to generate those outsized returns and give the wildest ideas a chance. 

If anything, we need more megafunds. I recently spoke to one of my LPs who’s interested in deep tech investing in India, and his biggest concern isn’t the talent in the space (there’s a lot) or the market (which is growing), but the lack of downstream capital. If even the most promising companies can’t raise the capital they need, then it’s really hard to justify starting or investing in a new company. 

There’s that invisible hand making itself visible again: self-interested parties dynamically evolving into an ecosystem with a robust capital pipeline. Larger firms are rewarded for playing their role with fees; smaller firms are rewarded for playing theirs with higher upside when they’re right. One is not better than the other; both are necessary.

Founders win, LPs win, and the world wins.

If megafunds win too by generating a lot of fees in the process, great! If you believe that megafunds earn too much money relative to the service they provide, make like Jeff Bezos, treat their margin as your opportunity, and go start a larger fund with lower fees or something. 

An Aside on Fees

Speaking of megafund management fees, I think they’re among the most interesting buckets of capital in the world, and that we’ll see them utilized in increasingly beneficial ways. 

The rationale is simple: if megafunds are confident that they’ll benefit from the growth of the early stage tech ecosystem, they can justify paying for all sorts of pro-ecosystem things. If you believe that technology is good, and I do, then management fees applied to strengthen the tech ecosystem are like charity that keeps paying for itself.

It’s unsurprising, for example, that this weekend’s Gundo defense tech hackathon was co-organized by 8VC and sponsored by a number of others, including Founders Fund, or that a16z hosted an unofficial St*****d Defense Tech Club kickoff party the night before. 

Those are tiny examples. On a larger scale, a16z recently announced that it would be “supporting candidates who align with our vision and values specifically for technology.” It also built a world-class crypto research team based on the belief that “There is an opportunity for an industrial research lab to help bridge the worlds of academic theory with industry practice.” The team has since built and open sourced a number of useful research-based products, including Lasso and Jolt.

My hunch is that this is the beginning of a larger trend in which venture capital firms use their management fees, which are already netted out of the return numbers I shared earlier, to support the industries they invest in, in increasingly creative ways.

For firms with a long view, there’s an economic incentive to support the kinds of things that have long, uncertain payoffs that doesn’t exist anywhere besides government and academia, both of which have become increasingly sclerotic and slow-moving. I wouldn’t be surprised to see more VC-supported basic and applied research labs, for example. In the short-term, it’s good marketing (and a good way to pull more smart people and ideas into their orbit), and in the long-term, it’s a way to increase the number of fundable companies and produce returns (and fees) on ever-larger funds. 

The world gets accelerated research and knowledge in the bargain. Another free lunch. Yum. 

Viva la VC 

Venture capital is at a low point. According to Crunchbase, the $258 billion venture capitalists invested in 2023 is the lowest amount they’ve invested since 2017. 

It’s not particularly popular, either. Certainly, VCs espousing pro-Putin views on Twitter doesn’t help, but there are a number of reasons. 

VCs make money (and sometimes take credit) when other people – founders and startup employees – do all of the hard work. They have to tell founders “no, we don’t want to fund your life’s work” much more often than they get to say “yes.” They invest in brash, risky companies that occasionally fail spectacularly. They jump from industry to industry, often with nowhere near the depth of understanding of those industries that the people who work in them have. They write blogs and host podcasts 😬

And some of them are genuinely shitty: either bad investors or worse, as predatory behavior in the downturn exposed once again. They stand in the way of acquisitions that would be life-changing for founders but meaningless for returns. Often, the most helpful thing some VCs can do is write a check and get out of the way. I’ve worked with bad ones from the other side of the table and I know how harmful they can be. The good news is, over time, the market typically punishes the bad ones. 

There are a ton of really great VCs as well, but I’m not here to argue that VCs are heroes. The best thing they can do is to identify, encourage, and fund the entrepreneurs who actually create value by building great things. 

All I’m saying is that as an asset class, venture capital is way better than it gets credit for. There’s no other asset class that’s been as positive-sum or cooked more free lunches over the past half century.

Founders can walk into a VC’s office with an idea and a dream, and leave with millions of dollars. They’ll hire people and build things that have only ever existed in their imaginations. Most of them will fail, and many of those who do will be able to walk back into those same offices and get a fresh bag of money to try again. Some will succeed, and they’ll succeed so outrageously that the value they create will pay for all of the failures and then some. They might even Make the World a Better Place™️ in the process.

I’d argue that even if venture capital underperformed other asset classes, generated 0% returns or something, it would be a net benefit to society to have a pool of capital that funds crazy experimentation. But this is capitalism, and returns are what keeps the machine humming. So the fact that VC has generated such strong returns over a long time horizon is key. 

The question is: will venture continue to deliver returns? 

I think the answer is undoubtedly, unquestionably yes. 

Tech is Going to Get Much Bigger as tech’s total addressable market expands to include large existing industries that have been relatively untouched by technology, including industrials and agriculture. Cheaper energy, intelligence, and dexterity, in the words of Valar Atomics’ Isaiah Taylor, will mean new opportunities to attack old industries with cheaper, better products. Higher margins in large industries will make for very valuable companies. And things that were previously impossible are becoming possible at an accelerated rate. 

Venture was created for times like this, when big shifts in underlying technologies create opportunities for crazy geniuses to build products that might change the world. 

If anything, the past few decades of pure software investing were a necessary interstitial period during which bits developed to the point at which they could make a meaningful impact on the world of atoms, and the next few will be the period during which the combination of bits and atoms makes the world a magical place.

The speed at which change will happen, the amount of capital these companies will require, and the outlandish ambition of the projects founders will tackle will mean more and bigger blowups than ever before, but it will also mean more and bigger winners. In the process, the world will get a suite of new capabilities – medicines, machines, money, (literal) moonshots and more – practically for free. 

So by all means, ape into the Gundo! Fuel that techno-optimism! Lose many, and win some. The only mistake is to play it too safe, to fund things that don’t matter.

As tech gets much bigger, venture capital will get much bigger, too. That’s a beautiful thing. God bless the United States of America, and God bless venture capital 🇺🇸

Thanks to Dan for editing!

That’s all for today! We’ll be back in your inbox on Friday with a Weekly Dose.

Thanks for reading,


Clear Street: From COBOL to the Cloud

Welcome to the 1,000 newly Not Boring people who have joined us since last week! If you haven’t subscribed, join 220,232 smart, curious folks by subscribing here:

Subscribe now

Today’s Not Boring is brought to you by… Clear Street

Read on to learn about how Clear Street is rebuilding the infrastructure of the capital markets.

Hi friends 👋,

Happy Wednesday and Happy Valentine’s Day!

Love is in the air, and besides Puja, Dev, and Maya, there’s nothing I love more than a Hard Startup working to replace crumbling infrastructure with modern technology, and succeeding.

That’s what Clear Street is doing, modernizing capital markets infrastructure through which over $3 trillion flows daily by rebuilding it from the ground up.

Clear Street has been an under-the-radar monster, known well to a select few, like the founder who told me “I would so prefer to be on Clear Street it’s not even funny,” but not as well-known in the wider startup world. I think it’s a company and a story a lot of startups can learn from as they attempt to face-off against entrenched incumbents armed only with technology, talent, and ambition.

Just five years old, Clear Street was recently valued at $2.2 billion. This isn’t a ZIRP valuation: Clear Street did $260 million in revenue in 2023, and it did so profitably. Getting to this point has taken a unique blend of experience, capital, long-term thinking, and technical chops. Relative to its ambitions, Clear Street is just getting started.

This essay is a Sponsored Deep Dive. While the piece is sponsored, these are all my real and genuine views. I’m writing fewer Sponsored Deep Dives, because as I said when I wrote about LayerZero in December, I’m only writing them on companies that I think have a chance to be really important in an area I care about. You can read more about how I choose which companies to do deep dives on, and how I write them here.

Clear Street clears that bar: making the capital markets (and capitalism) run more efficiently and effectively is one of the most important things there is, and the full-stack Hard Startup approach it’s taking is full of lessons for founders who want to build insanely ambitious things in the face of incredibly deep-pocketed incumbents.

Let’s get to it.

Clear Street: From COBOL to the Cloud

It’s easy to tell when physical infrastructure decays. 

You can see the rust on the bridges, feel the bump of the pothole, experience the terror of thrice-daily train derailments, fill your cup with brownish water from the sink, and open your social media app of choice to discover how a Boeing 737 Max fell apart this time.

It’s hard to fix physical infrastructure because people use it every day. Shutting down a bridge means a minor inconvenience for thousands of people, so we put it off and risk major catastrophe.

It’s harder to tell when digital infrastructure decays. 

Digital infrastructure is invisible, the pipes that power an ever-increasing amount of our lives, how we communicate, work, get around, and transact. It’s easy to forget that digital infrastructure is there because usually, it just works. 

Like physical infrastructure, digital infrastructure is hard to fix because people use it every day. It’s also hard because once companies reap the rewards of building the digital infrastructure on top of which other companies build, no one inside of those companies is incentivized to fix it.

Building infrastructure that others build on is a long journey with a huge payoff: the switching costs of replacing infrastructure are high. If given the choice between risking the cash cow to make the infrastructure better and just letting it slowly degrade while the cash cow produces, most infrastructure providers will choose the latter, to their own short-term benefit but to the long-term detriment of the whole system.

This is the reason banks still run on Common Business-Oriented Language, more commonly known as COBOL, a 65-year-old programming language that pretty much only 65-year-olds (and older) know how to maintain. 

Over $3 trillion flows through COBOL every day. For decades, layers upon layers of disparate coding languages and technologies have been built on top of this infrastructure. And for a whole host of reasons, banks don’t rip it out and replace it. 

Doing so would introduce the kind of short-term risks that come with tinkering with something built by someone who’s long-since retired and held together by a patchwork of digital duct tape. Not doing so, however, introduces the kind of major risk that threatens the financial markets and the invisible, counterfactual risk that comes from not innovating. 

With physical infrastructure, you can’t just delete all of the roads and start fresh. The roads are the roads, and we need to fix what we’ve got a little at a time, like after a bridge collapses on I-95.

With digital infrastructure, you can start fresh. It’s just really hard. It requires a combination of capital, experience, technical sophistication, and a different way of viewing risk. 

Clear Street is doing it. In the early stages of a long journey, there are signs that it’s working. The cloud-native prime brokerage company recently closed a $250 million extension to a previous round, bringing that round’s total to $685 million at a $2.2 billion post-money valuation. It has $750 million on its balance sheet – a necessity in the prime brokerage business – and generated $260 million in revenue in 2023, its fifth year of business. Oh, and it’s EBITDA profitable.

The $2.2 billion valuation seems hefty for a five-year-old company, but relative to the opportunity, it’s small. Companies that have bitten off just the easier, surface-level piece of the problem – better interfaces on old infrastructure, like Broadridge and Interactive Brokers – are worth $24 billion and $41 billion in the public markets, respectively. Banks like Goldman, Morgan Stanley, Citi, JP Morgan, and Bank of America are worth, collectively, over $1 trillion.

Clear Street believes that the banks are going to have a very difficult time modernizing their own systems while running them simultaneously and potentially putting the trillions at risk. This isn’t just speculation: Credit Suisse’s demise can be traced to poor risk management stemming from bad infrastructure. 

The banks’ reticence is Clear Street’s opportunity: to rebuild the infrastructure from the outside, and replace the old system with the new, slowly at first, one client, geography, and asset at a time, until financial markets around the world eventually run on Clear Street.  

Tackling that opportunity means building a startup in a different way than most startups build. This isn’t a story of minimum viable products and pivots. Because it’s building infrastructure, Clear Street looks more like the Techno-Industrials that operate against a clear roadmap, with higher capital requirements and more specialized talent, from day one, than like a traditional software company. 

Clear Street’s cofounder, Uri Cohen, started the business with his own capital in 2018 after experiencing the pain of, and worrying about the risk of, trading on crumbling infrastructure for over two decades on Wall Street. He recruited a CEO, Chris Pento, and a CTO, Sachin Kumar, to co-found the company, both of whom had felt the same pain.

Then, in 2020, they bought their own online brokerage, CenterPoint Securities, to serve as the first customer of Clear Street’s clearing, custody, and prime brokerage products. If they were going to ask financial institutions to move off of their old, but tried and true, infrastructure, they’d need to prove that it worked on themselves, a la Amazon’s first and best customer strategy.  

Meanwhile, they got to work completely rebuilding financial infrastructure from the ground up with modern software that needs to communicate with antiquated systems, like DTCC, and a hodgepodge of customers’ bespoke patchworks of code. They also began building all of the things that a prime brokerage needs to do to win clients’ business: research, investment banking, capital introductions, securities lending, margin, and more all at the same time. 

Clear Street, while still early in its journey, is executing on its mission to build the cloud native pipes through which the capital markets will one day run, across every major asset class, in every major market, replacing the antiquated system that everyone from small prime brokerages to large banks use to move trillions of dollars every day. 

And they’re offering a diverse product set that takes advantage of this modern technology stack. From the largest hedge funds that currently work with banks down to the smallest day traders that the banks can’t touch because of their cost structure, clients get clean data from one single source of truth, bringing crystal clarity to their risk and driving operational and strategic efficiencies that could unlock growth they can currently only dream of.  

The ambition seems unlimited, because practically, it is. “The markets” are the world’s largest market. 

The risk of capital market participants not modernizing their infrastructure, product offerings, and services, Uri believes, is equally huge: a Boeingesque series of increasingly frequent blowups like the ones that killed Archegos Capital Management, Credit Suisse, and Signature Bank, and worse. 

There are a couple of ways they might come to appreciate the risk: the threat and the opportunity. 

The threat is the risk of catastrophe that comes from operating on old infrastructure, the risk that Credit Suisse and Signature Bank learned the hard way but that takes at least a decade to remedy, at which point it’s someone else’s problem (or glory). 

The opportunity is the set of possibilities that opens up when you can build products on one cloud-based platform with clean, real-time data. Every bank C-suite is thinking through how to take advantage of AI, for example, which could be catastrophic with bad data but nirvanic with good data. 

In either case, the banking industry has proven that words aren’t as powerful as incentives. The industry’s current leaders don’t want to wade through the quagmire of replacing infrastructure, so Clear Street needs to build new infrastructure in parallel so that adopting it is a no-brainer for the industry’s future leaders. 

Thinking in Risk

Clear Street was born from a lifetime of experience and one specific conversation. 

Uri has spent over two decades in the financial markets. He’s a co-founder of Alpine Global Management, a firm with a breadth of sophisticated trading strategies across global markets, the kind of firm that demands a lot from its infrastructure. 

A few years ago, Alpine had an issue with one of its large prime brokers, the one-stop-shops investment banks and financial institutions offer to support large investors. Prime brokers offer margin, securities lending, custody, trade clearing and settlement, risk management, reporting and more. But this large prime’s risk management and reporting were broken. 

Their system was cobbled together from a string of acquisitions, and it was giving Uri the wrong data and costing him money. They miscalculated how much equity he had, against which they provided margin, and as a result, the fees he was paying were wrong. 

He needed it fixed, but there was a problem: the only guy who knew how to fix it was a retired 70-year-old named Neil Stepanich who happened to be vacationing in the Himalayas. 

Uri somehow got in touch with Neil, who said that he could fix it, “but it’s going to cost you a lot of money: $2,000.” That was just fine. Two hours later, it was fixed, temporarily, at least. 

Over the next few years, Uri kept having more and more issues with the systems his prime brokers provided. He kept thinking, “OK, the banks will have to fix it now,” and the banks kept not fixing it. 

Five years later, things had only gotten worse. 

That year, in a meeting with two heads of prime brokerages, he asked, “Why aren’t you guys fixing this?” 

The answer was simple and human: “It’s going to take seven to ten years, a lot of money, and a lot of headaches. We’re not even sure we will succeed.”

From their perspective, the decision not to fix it made sense: the payoff was too far out and too uncertain. From Uri’s perspective, it didn’t: the system was causing him to lose money.

During that conversation, Uri made the realization that many entrepreneurs make at some point: if he wanted the system fixed, he was going to have to fix it himself. 

Why Clear Street is So Hard to Build

What Uri, Chris Pento, and Sachin Kumar set out to build in 2018, on the surface, doesn’t sound too dissimilar from many software startups. 

Clear Street was going to take an antiquated system and modernize it with software. Instead of mainframes, Clear Street would be cloud native. Instead of clunky integrations, Clear Street would be API-first. In place of slow, duct-taped systems and manual spreadsheets, Clear Street would give clients a comprehensive view of their portfolios in real-time and the ability to execute trades at internet speed. 

And that is what they’re building. Clear Street offers clearing and custody, execution, prime financing, and white-glove services like capital introductions and account management to financial institutions and active traders. 

Executing the startup playbook on the world’s financial infrastructure, though, is hard. Clear Street is a Hard Startup. In The Good Thing About Hard Things, I defined Hard Startups as:

The ones with no playbook, the ones that have R&D risk or some other hair on them that keeps competition at bay and gives them a clear path to short-term sales and long-term defensibility if they get a great product to market.

I broke them down into bits and atoms, and included infrastructure in the bits category: “these companies are often 100x harder below the surface than above it; there’s a lot of schlep.”

Clear Street is perhaps the best example of a bits infrastructure Hard Startup I’ve come across. Rebuilding the world’s financial infrastructure has more hair on it than Sasquatch. 

The software challenge is incredibly hairy in its own right, and software is only one part of the story. Clear Street needs capital, risk management, and the client services muscle of an investment bank in order to get big clients to even use the software in the first place. 

Let’s start with the software. It’s the core of the business and the thing that makes Clear Street different from other prime brokerages. 

Jon Daplyn, the former Head of Prime Brokerage Technology at Morgan Stanley and now Clear Street’s Chief Information Officer, explained the challenge. 

“The hardest thing,” he told me, “is that the systems you’re interacting with were built decades and decades ago. No one is building them; they were built in the past. You’ve effectively got to go and learn them from scratch.” 

He described a steep learning curve and a ton of tech that needed to be built in order to put in even the first trade.

To start, you need to build your own core books and records that keep track of client information, account balances and positions, cost basis, margin details, and more, a big database of client information that updates in real-time. A database sounds relatively straightforward, but with so much information, from a variety of sources, including up-to-date market data, it’s a beast. 
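To make the shape of that “books and records” problem concrete, here’s a minimal, hypothetical sketch of the state such a system keeps per account. The names (`Account`, `apply_fill`) are illustrative, not Clear Street’s actual schema, and a real system would add audit trails, margin math, market data, and concurrency control:

```python
from dataclasses import dataclass, field
from decimal import Decimal  # money math should never use floats


@dataclass
class Position:
    symbol: str
    quantity: Decimal
    cost_basis: Decimal  # total cost of the open position


@dataclass
class Account:
    account_id: str
    cash: Decimal
    positions: dict[str, Position] = field(default_factory=dict)

    def apply_fill(self, symbol: str, qty: Decimal, price: Decimal) -> None:
        """Update cash and positions as each execution streams in."""
        self.cash -= qty * price
        pos = self.positions.setdefault(
            symbol, Position(symbol, Decimal(0), Decimal(0))
        )
        pos.quantity += qty
        pos.cost_basis += qty * price


acct = Account("A-001", cash=Decimal("1000000"))
acct.apply_fill("AAPL", Decimal("100"), Decimal("190.50"))  # cash drops by 19,050
```

The hard part isn’t this data model; it’s keeping thousands of accounts like this consistent, in real time, across every upstream and downstream system that touches them.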

When I asked Jon for an example of something that was particularly hard to build, he cited this “account master,” the core system that maintains all client information and positions. 

“You’d think implementing an account master would be straightforward,” he said, “but we’re already rolling out the second version because we didn’t get it quite right the first time.” 

He noted, though, that this is the advantage of being a fintech instead of a bank: instead of saying “Why did you make that mistake?” they looked at it and said, “That didn’t work, let’s rip it up and do it again.” 

“Our core value is technology, so we need it to be excellent.” 

Step one is building excellent technology internally – that single source of truth that provides clean data to Clear Street and its clients. From there, Clear Street needs to connect its own systems with a patchwork of external ones that have no willingness to change the way they work to speak with a new entrant. 

To enable even the first trade, Clear Street’s systems need to communicate with an alphabet soup of organizations, each of which has its own antiquated and mission-critical systems, like DTCC (Depository Trust & Clearing Corporation) and ICE (Intercontinental Exchange). The DTCC is responsible for clearing US securities, but individual exchanges, like NASDAQ, have their own clearing processes and systems that operate alongside the DTCC’s centralized clearing.

Take NASDAQ alone: trading activity on that exchange generates data that needs to flow through to post-trade systems for proper clearing and settlement. That includes obvious things like trade details – how many shares, what price, etc. – and less obvious things, like accounting for the exchange’s fees. Then there are periodic events – like stock splits, dividends, and repurchases – that happen at the exchange level and must get reflected properly in Clear Street’s own systems. 

That’s just one exchange! Clear Street needs to send data to NASDAQ, other exchanges, clearinghouses, and regulators in formats they can digest, and take data back in whichever format they choose to send it, clean it all up, and show it accurately in its own internal systems – both to give clients a real-time understanding of their positions, and to give Clear Street itself the data it needs to do things like extend margin to clients without blowing itself up. 

And then, it needs to share that data with clients in a format that works for them. Many hedge funds and brokerages have built their own patchwork of systems and ways of handling data on top of old infrastructure, and since they’re not going to rip everything out and replace it with Clear Street whole cloth on day one, Clear Street needs to translate the data in its own system into language that each of those systems understands. It’s like a real-time Rosetta Stone operating inside of the chaos of the Tower of Babel. 
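As an illustration of that translation layer – with an entirely made-up record layout, since the real DTCC and client formats are proprietary – here’s a sketch of turning one fixed-width legacy record into a typed, modern message:

```python
from decimal import Decimal

# Hypothetical fixed-width layout for a legacy trade record:
#   cols 0-8   symbol (space-padded)
#   cols 9-19  quantity (zero-padded integer)
#   cols 20-30 price with four implied decimal places
#   col  31    side flag: B = buy, S = sell
def parse_legacy_trade(record: str) -> dict:
    """Translate one fixed-width legacy record into a modern, typed message."""
    return {
        "symbol": record[0:9].strip(),
        "quantity": int(record[9:20]),
        "price": Decimal(record[20:31]) / 10_000,  # undo the implied decimals
        "side": "BUY" if record[31] == "B" else "SELL",
    }


raw = "AAPL     " + "00000000100" + "00001905000" + "B"
trade = parse_legacy_trade(raw)  # 100 shares of AAPL bought at 190.50
```

Now imagine a different layout, character encoding, and set of quirks for every exchange, clearinghouse, regulator, and client – in both directions – and the Rosetta Stone metaphor starts to feel generous.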

And that’s just US equities! 

Then, with the core systems built, Clear Street has to scale all of that – to more clients, more assets, and more geographies, each with its own idiosyncrasies and old systems with which to integrate. 

Let’s say they wanted to support bonds, or “fixed income,” in the US. They’d have to integrate with the DTCC’s FICC system (Fixed Income Clearing Corporation), Fedwire, TradeWeb, and the fixed income exchanges, among other systems. Internally, they’d need to update their systems to account for securities that don’t trade like equities and come with their own valuation math. I still get nightmares from reading Frank Fabozzi’s The Handbook of Fixed Income Securities early in my career. Clear Street essentially has to incorporate the bond math from that 1,840-page tome into its internal systems in order to properly account for risk. 
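That bond math is a different beast from marking equities to market: a bond’s value is the present value of its future cash flows. A toy sketch, ignoring day counts, accrued interest, and everything else that fills Fabozzi’s pages:

```python
# Toy bond pricing: value = present value of coupons + principal,
# discounted at the yield to maturity. A deliberately simplified
# sketch, not production risk math.
def bond_price(face: float, coupon_rate: float, ytm: float,
               years: int, freq: int = 2) -> float:
    """Price a plain-vanilla bond (semiannual coupons by default)."""
    n = years * freq                 # number of coupon periods
    c = face * coupon_rate / freq    # coupon paid each period
    r = ytm / freq                   # yield per period
    pv_coupons = sum(c / (1 + r) ** t for t in range(1, n + 1))
    pv_principal = face / (1 + r) ** n
    return pv_coupons + pv_principal


# A 5% coupon bond yielding exactly 5% prices at par:
print(round(bond_price(100, 0.05, 0.05, 10), 2))  # 100.0
```

Every asset class brings its own version of this: its own valuation formulas, its own conventions, and its own ways of feeding the risk engine.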

And then there are options (hello OCC!) and futures and all sorts of exotic derivatives, some of which trade on exchanges and many of which trade over-the-counter (OTC) in bespoke deals. Those need to be reflected in the system, too, if Clear Street is to maintain a clear picture of its clients’ risk. 

And if you want to add new geographies? Well, then you need to rebuild all of these integrations with the equivalent agencies and exchanges in each country you plan to service, each of which has its own regulations and idiosyncrasies. Some of these countries’ systems run on COBOL; others are more modern and run on Java and SQL. Whatever language they speak, you must too. 

No one gives you bonus points for managing all of this extra complexity; they just expect it to work, to show up as a set of clear numbers that clients can use to trade and manage their own risk. 

It’s no wonder that banks have no desire to rebuild all of this; if it works well enough, the distant-seeming threat of catastrophe pales in comparison to the clear and present schlep of having to redo all of that. 

Simply put: Clear Street itself builds modern software, but that software has to speak to legacy technology in whichever way partners and clients require. 

“The DTCC (Depository Trust & Clearing Corporation) isn’t going to change their protocols for us,” Jon explained. “We need to adapt to the way they like to speak. Then clients [like hedge funds] also have their own ways of speaking that you need to adapt to. Our clients say ‘This is how we talk,’ and if we want to do business with them, we need to learn to talk that way.” 

I have some personal experience here. At Breather, some of the buildings we were in used building access systems like Kastle that ran on old software. To give our customers access to their spaces in those buildings, we had to get them past security, which meant getting their names in the security system. Which meant we had to integrate with the building access systems. 

Something so seemingly simple turned out to be brutal. Integration with each new system took months of our developers’ time, and whichever backend developers were unlucky enough to be put on those projects absolutely hated it. And that was just to get some names in a database. 

Multiply that by hundreds of systems, add the zero-room-for-error nature of handling money, and you have a mind-numbingly complex and painful challenge to contemplate. 

But if Clear Street wants to replace the world’s financial infrastructure with modern software, that’s where they have to start. Because the world’s financial infrastructure runs on COBOL, the people who know how to build on COBOL are getting old, and the banks aren’t incentivized to fix the problem themselves. 

COBOL Cowgirl, Cowboys, and Cow-Paced Banks

There are two themes that we should unpack a little further before continuing the Clear Street tale. 

First, we need to understand COBOL, the brilliant and long-lived software that still underpins so many of our systems, why it is that pretty much only 70-year-olds like Neil Stepanich know how to fix it, and what kind of risk that represents. 

Then, we need to dive deeper into the incentives that keep the banks on their cobbled-together COBOL despite the risks. 

COBOL Cowgirl and Cowboys

Even the people betting their career on replacing COBOL are in awe of it. 

When I asked what he thought about the language, Jon Daplyn warned me that he was about to recite me a love letter to COBOL. 

“If you think about it,” he said, “Someone wrote a piece of software decades ago that’s still working today. That’s remarkable.” 

COBOL is truly remarkable. 

In the 1940s and 1950s, computer scientists had to speak the language of machines, which they did at first by setting switches and plugging and unplugging cables… 

Programmers program the ENIAC

Or by feeding punch cards with data and code into machines and getting punch cards with results back out… 

Harvard Mark I

Grace Hopper, a lieutenant junior grade in the US Navy, first worked on such a machine in 1944, when she was assigned to Howard Aiken’s Bureau of Ordnance Computation Project at Harvard University to program the Mark I, the computer in the image above. Five years later, in 1949, she joined the Eckert-Mauchly Computer Corporation as a senior mathematician to work on the UNIVAC I, which would become America’s first commercially available electronic computer. 

Grace Hopper, inventor of COBOL

While working with UNIVAC, Hopper realized that the way humans programmed machines – not just speaking their language, but speaking a different language for each machine – could be improved. She thought that humans should be able to speak English to machines, or more specifically, to use English-like syntax to program computers. 

Hopper “was told very quickly that I couldn’t do this because computers didn’t understand English. Nobody believed computers could understand anything but arithmetic.” She disagreed, or rather, she recognized that for computers to understand English, they’d need a translator. 

In 1952, Hopper and her team created the first compiler, a program that translates high-level programming language code into machine language code that a computer’s processor can execute. Then she developed one of the first English-syntax programming languages, FLOW-MATIC, specifically to program the UNIVAC, in 1955. 

While Hopper was developing better ways to speak to computers, others were designing better computers themselves. In addition to the UNIVAC, IBM built its 700/7000 Series, Digital Equipment Corporation (DEC) released the PDP-1, and National Cash Register (NCR) offered the NCR 315. 

Recognizing the need for software that worked across all of the various hardware systems, a committee formed by the Department of Defense and the private sector proposed the idea for a common business language. In 1959, it established the Conference on Data Systems Languages (CODASYL) to develop that language. 

CODASYL Report (l) and Committee Members on the 25th Anniversary of COBOL (r)

Hopper was on the committee. Drawing on her work, CODASYL proposed a readable, English-like language that could be adopted by businesses for data processing tasks. In 1960, it completed the first version of the Common Business-Oriented Language, or COBOL. 

COBOL was a hit. It worked on mainframes from IBM, UNIVAC, Honeywell, Burroughs, and Control Data Corporation (CDC), each of which provided its own compiler to translate the machine-independent COBOL into machine code specific to their machines. 

IBM’s System/360, which I wrote about in Internet Computers and which IBM developed for $48 billion in today’s dollars, took the business world by storm. As Computerworld’s Frank Hayes explained, “Hardware compatibility meant platform stability. That led to application longevity, which made complexity possible. A whole new world opened up for IT, a world of huge, business-changing megaprojects.” 

Many of those business-changing megaprojects took place inside of large banks, programmed in COBOL. 

In Internet Computers, that’s where I left the mainframe era and moved on to the Next Big Thing: personal computers. The banks, however, and many other large institutions, from insurance companies to government agencies, never left. 

More than 60 years later, 44 of the top 50 banks and all 10 of the largest insurers are still using mainframes programmed in COBOL. 

That, as Jon said, is remarkable, and deserving of a love letter. 

COBOL is reliable and efficient. Its fixed-format syntax optimizes code generation. “Rigorous syntax and error-checking capabilities have helped COBOL programs withstand multiple updates and adaptations without endangering system integrity and run relentlessly for decades.” Did a mainframe engineering services firm write that? Yes. But it seems to be true. COBOL programs are as Lindy as computer programs get. 

But there are a couple of challenges with continuing to rely on COBOL. 

The first is a straightforward one: we’ve developed much better ways to write and run software. No one starting from scratch builds on COBOL. 

The second is the bigger risk: the people who know how to work with COBOL are leaving the workforce. 

When I wrote about Hadrian, I wrote about a huge problem facing the precision manufacturing industry: the average machinist is in their mid-50s, the average precision parts shop owner is in their 60s, and the younger generation isn’t joining the business. 

The same thing is happening here. The average COBOL programmer is in their 40s or 50s, and the younger generation isn’t backfilling them. 

Reuters Graphics

It’s no wonder. One Hacker News commenter compared learning COBOL to “swallowing a barbed cube shaped pill,” but cheerfully pointed out that that’s not the hard part. 

Even worse, the ones who really know how things work, like Neil Stepanich, are even older and no longer in the workforce. Reuters begins a great 2017 piece on the “COBOL Cowboys” by writing, “Bill Hinshaw is not a typical 75-year-old. He divides his time between his family – he has 32 grandchildren and great-grandchildren – and helping U.S. companies avert crippling computer meltdowns.” 

Hinshaw and his wife founded a company called COBOL Cowboys, named after the movie Space Cowboys in which retired Air Force pilots are brought in to troubleshoot a problem in space that only they know how to solve. 

Space Cowboys (2000) 

The challenge, according to Hinshaw, is that even though some young people still learn COBOL, there are some things you just can’t teach: “COBOL-based systems vary widely and original programmers rarely wrote handbooks, making trouble-shooting difficult for others.”

Only people who were there when the systems were built know how it all fits together, and worse, that dwindling group of people won’t be around forever.

Accenture’s Andrew Starrs put it bluntly. The risk is “not so much that an individual may have retired,” he says, but that “he may have expired, so there is no option to get him or her to come back.”

What happens when the COBOL Cowboys ride into that eternal sunset? 

There will be no one left who knows how the infrastructure that moves trillions of dollars every day fits together, no one to fix it when its inevitable crumble accelerates. COBOL is incredibly impressive infrastructure, but even the best infrastructure is no match for time. 

So why aren’t the banks getting ahead of the problem and doing whatever they can to avoid catastrophic risk? 

Cow-Paced Banks

The short answer is that replacing the COBOL-based systems while the bank runs on them is slow and expensive, and introduces short-term risks. 

Chris Pento, Clear Street’s CEO and co-founder, has spent his career in operations at financial institutions, from large established investment banks like Merrill Lynch to startup broker dealers like Exos Financial. If anyone feels the brunt of bad systems, it’s the operations people who are responsible for cleaning up their mess every day. 

I asked Chris why a big bank couldn’t just throw a bunch of money at the problem. He told me they probably could build new systems, before listing off a litany of reasons they wouldn’t: 

  • Building the system while running a company with diverse needs – including keeping clients happy on a daily basis – adds complexity 

  • Technical debt has built up in these systems over 30 to 40 years as people patch problems 

  • There are all these connection points – internally and with external systems – that have to be disconnected and reconnected, “and it takes five years just to figure out what the connections are.” 

  • Then, midway through the process, someone on the risk team says, “Hey, you forgot about this” and shuts the project down. 

“You drive on the BQE (Brooklyn-Queens Expressway),” he said. “You know how bad decade-long projects can get.” 

Building it is complex, slow, and fraught with risk. Recall the patchwork of integrations we talked about earlier – with the DTCC, OCC, exchanges, and regulators, in countries around the world and on a variety of computer systems. Like a game of Operation, touching any one of them the wrong way might mean game over, so you have to go very slowly and very carefully, all while protecting the core. 

Jon Daplyn, who was responsible for these systems at Morgan Stanley, echoed Chris’ assessment and added a technical lens. To Jon, the issue is that, while the banks have tried to improve their systems, they’ve only gotten so far because they’re afraid to touch the core and break the whole thing. 

What you’d have to do is take a small piece of functionality from the core, pull it out and rebuild it, and then bridge it back in until the core gets smaller and smaller. Thousands of people have been building and patching and patching, never really changing the core because they’re too scared. Pulling it apart will take decades of consistent effort.

Decades may as well be millennia from the perspective of the people running the banks today: they would need to foot the bill under their watch for a payoff that might come long after they’ve sailed into retirement. So they take the hidden but pernicious risk that comes with inaction. 

What Happens When the Infrastructure Breaks? 

The big risk in the old system is that the different people who need to see the same data don’t. 

Do you have a single source of truth? Is there one place where your firm calculates the value of an option or many? Do your finance team, risk team, and the client all see the same values or do they see different values? Do they see the same thing eventually, or in real-time?

Sometimes, bad infrastructure shows up as potholes. Slower transactions, extra work for the back office, miscalculated fees like Uri dealt with. Annoying and inefficient, but not the end of the world. 

Other times, it rears its head in the financial equivalent of a bridge collapsing with drivers on it. 

I-95 Bridge Collapse, NBC News

Like when Archegos Capital Management blew up and took Credit Suisse down with it. 

Remember this one? 

In March 2021, ViacomCBS sold some stock and its stock tanked. That was bad news for Archegos, Bill Hwang’s $10 billion family office, which was turbo long swaps on ViacomCBS and a number of other tech stocks on margin extended by the fund’s prime brokers. As the prices fell, the prime brokers demanded more collateral. 

Archegos’ Bill Hwang outside a federal courthouse, Reuters

When Archegos failed to meet its margin call, its prime brokers – including Credit Suisse, Nomura, Goldman Sachs, and Morgan Stanley, among others – raced to sell off Archegos’ collateral and recoup their money, crushing the prices of the assets they all held. 

Archegos collapsed, of course, and it brought some, but not all, of its prime brokers with it. As The Trade News reported, speed made all the difference. And speed depended on risk management systems. 

When Archegos blew up, Goldman had the intel within seconds, Morgan Stanley hesitated but sold on the same day, and Credit Suisse, which was still on antiquated risk management systems, waited days to catastrophic effect. 

Goldman was the first to sell. Over the course of the day on March 26th, “Goldman Sachs sold $10 billion of shares in stocks linked to Archegos.” 

“We have robust risk management that governs the amount of financing we provide for these types of portfolios,” said David Solomon, CEO of Goldman Sachs, on the bank’s first quarter 2021 earnings call. “We identified the risk early and took prompt action consistent with the terms of our contract with the client.”

Morgan Stanley was almost as quick, dumping $8 billion that same day. The firm lost $911 million between money owed by Archegos and trading losses. A mere scratch. 

Nomura wasn’t as lucky. It lost $2.87 billion in the winddown.

But the biggest loser was Credit Suisse. The Swiss bank was the slowest to appreciate and unwind its exposure to Archegos, and lost a nauseating $5.5 billion. It had to raise $1.9 billion to stay afloat, and fired both its investment bank CEO Brian Chin and, tellingly, its Chief Risk and Compliance Officer, Lara Werner. 

It wasn’t enough. Two years later, in March 2023, the Swiss government and the Swiss Financial Market Supervisory Authority organized the $3.2 billion fire sale of Credit Suisse to its Swiss rival, UBS. 

“Credit Suisse would be here today,” Uri told me, “if they’d had the right tech in place.” 

Or take Signature Bank. 

The same month that UBS acquired Credit Suisse, March 2023, Signature Bank faced a bank run as depositors raced to withdraw their funds on the heels of the collapse of Silicon Valley Bank. 

As the FDIC went to work to assess the damage, they asked all of the at-risk banks for tallies of the wires they had going out. 

Signature Bank said, essentially, “We’re actually not sure because of our systems.”  

The FDIC said, essentially, “Are you kidding me?” and told the New York State Department of Financial Services to take possession of the bank that Sunday. 

The bank’s exposure to crypto was blamed in the press, but as the FDIC wrote in its report, “SBNY management was sometimes slow to respond to FDIC’s supervisory concerns and did not prioritize appropriate risk management practices and internal controls. Management was described by FDIC supervisors as reactive, rather than proactive, in addressing bank risks and supervisory concerns.”

The real challenge that affected these firms is a lack of clean data. 

At the surface level, that sounds like a silly and trivial-to-fix problem given the stakes. But as we’ve learned, for international firms that handle every asset class, getting clean data means integrating hundreds of systems, all of which speak different languages, into one central database that can translate all of that disparate data into clear metrics as fast as digitally possible. When that database is built in a 70-year-old programming language that few people alive truly understand, and when it needs to translate from exchanges, clearing houses, and clients that each speak their own language, the problem becomes prohibitively complex. 

Little inconsistencies compound on each other, and as they do, the risks compound, too, until… KABOOM. 

These are the kind of risks that Uri thinks are going to come due more and more frequently if the infrastructure isn’t fixed from the core out. 

Fixing the infrastructure from the inside – rebuilding the roads while people are driving on them – doesn’t seem to be an option. If it were, Clear Street wouldn’t exist. 

Rebuilding the infrastructure from the ground up is hard, but it might actually be the easiest way to make sure it happens. That’s Clear Street’s bet. 

What Clear Street is Building

A long time horizon is a competitive advantage, one that requires capital, vision, and the right blend of patience and momentum. 

When I wrote about Anduril, I said that it was in a goldilocks position when it came to acquiring advanced technology companies: “it’s well-funded enough that it can make meaningful acquisitions, but privately held enough that it doesn’t need to justify its investments in the same way public companies do.”

Clear Street sits in its own goldilocks zone: it’s well-funded and experienced enough that it can build new infrastructure with the patience it requires, but fresh enough that it’s not constrained by the same demands and risks as the banks. 

Clear Street’s mission is to build a modern, single source-of-truth platform capable of handling every asset class, for every investor, in any currency.

But that’s not something you can build from day one. Uri started the company to solve his own pain point, but even he doesn’t do most of his trading on Clear Street yet. 

“Alpine is 95% at other clearing firms,” he readily admitted. 

The goal is, over time, to win more and more of Alpine’s business, and by extension, to be able to serve an increasing share of other firms’ trading business, from the very small to the very big. 

That means building bulletproof core infrastructure, and then racing to build new products to handle more assets in more geographies on top of the infrastructure. 

So what’s in the core? 

“Clearing, settlement, and the bookkeeping ledger,” Jon told me. “Those are the things that we’ll protect completely.” 

Trading a security like a stock or bond seems simple. You just hit a button in Robinhood, and voila! But there’s a lot that goes on behind the scenes to make the trade happen, which can be broken into three steps for simplicity:

  1. Execution. Someone wants to sell at a certain price, someone else wants to buy at that price. They’re matched through an order book (or automated market maker in crypto). The price is locked in. This is the front-end, what you see when you trade. 

  2. Clearing. After the trade is executed, the details are verified and the obligations of both parties – one makes a payment, the other delivers securities – are established.

  3. Settlement. Finally, the securities and money trade hands. How and when that happens depends on the type of securities traded and the markets they trade in. Stock markets are typically “T+2,” meaning that they settle two days after execution.  

If you’ve ever bought or sold a stock before, you know that you don’t need to meet the person on the other side of the trade somewhere, hand them cash, and receive a stock certificate. The role of clearing and settlement software is to orchestrate everything that needs to happen behind the scenes, including interacting with all of the intermediaries that touch a transaction, like the DTCC. 

For a much deeper read on how it all works, check out this classic thread Compound248 wrote in the middle of GameStop mania: 

For our purposes, what you need to know is that the core of Clear Street’s software, the stuff they’ll protect completely, is the infrastructure that handles all of that behind the scenes work and keeps track of who owns and owes what. It also handles custody, holding securities and money for clients. 

Sachin Kumar, Clear Street’s CTO, explained that it’s really all about data management and workflows. 

At this point, most assets exist and move as bits of data. Execution, he said, is relatively simple: sending an order to buy 100 shares of Apple at a certain price changes fewer than 10 fields in a database. But the post-trade side – clearing, settlement, and custody, which he calls the “critical path” – is more complex: 30-50 fields, each of which has a different meaning, each of which drives business rules, and each of which has compliance, risk, and regulatory implications. 
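To make that gap concrete, here is an illustrative sketch. Every field name and value below is made up; the point is only the order-of-magnitude difference Sachin describes between an execution update and the post-trade record it becomes:

```python
# Execution: a handful of fields change when an order fills.
execution_update = {
    "order_id": "ORD-1",
    "symbol": "AAPL",
    "side": "BUY",
    "qty": 100,
    "price": 185.50,
    "status": "FILLED",
}

# Post-trade ("critical path"): the same trade picks up clearing,
# settlement, custody, risk, and regulatory context. Each added field
# drives its own business rules and compliance checks downstream.
post_trade_record = {
    **execution_update,
    "trade_date": "2024-01-29",
    "settlement_date": "2024-01-31",   # T+2
    "clearing_house": "NSCC",
    "currency": "USD",
    "account_id": "ACCT-42",
    "custodian": "SELF",
    "margin_requirement": 9275.0,
    "risk_bucket": "US_EQUITY",
    "regulatory_report_status": "PENDING",
    "fails_flag": False,
}
```

The execution side touches a handful of fields; the critical path touches several times as many, each with its own downstream implications.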

He sees Clear Street’s role as bringing modernity to the critical path. 

Instead of COBOL databases on mainframes, Clear Street has a proper, modern Snowflake data lake with layers of increasing fidelity. 

Instead of direct access to a messy database, Clear Street offers APIs its clients can use, and recently introduced Clear Street Studio to provide a client-facing user interface. 

Clear Street Studio 

What makes Clear Street’s software so hard to build is that, as table stakes, it needs to deliver modern APIs and interfaces on top of bulletproof data, abstracting away all of the complexity of integrating with antiquated software like the DTCC’s in the background. Then it needs to do a million other things that clients demand, across a growing number of asset classes and geographies, with the same zero fault tolerance. 

It’s a perfect example of what I described in APIs All the Way Down:

That leads to one of the most important things to realize about API-first companies: they’re a lot more than just software… The magic of companies like Stripe and Twilio is that in addition to elegant software, they do the schlep work in the real world that other people don’t want to do. Stripe does software plus compliance, regulatory, risk, and bank partnerships. Twilio does software plus carrier and telco deals across the world, deliverability optimization, and unification of all customer communication touchpoints.

Clear Street does software plus compliance, regulatory, risk, custody, and all of the things prime brokerage clients expect, and delivers it via clean APIs. 

“While we’re building infrastructure,” Uri said, “We have to remember that it’s the products we offer that will actually drive demand.” Things like: 

  • Better access to capital and financing

  • Better stock loan to borrow

  • More margin, cross margining and enhanced leverage

  • “Products – more and more integrated products” 

As one example of the products modern infrastructure can support, Clear Street can build on its single source of real-time truth: proactive products that take advantage of pre-trade analytics, predictive risk modeling, and intelligence, instead of reactive products that wait for the data to catch up.

Studio has a “Shocks View” that lets clients see what would happen to their portfolio and margin requirements if asset prices swung violently up or down. It also has built-in risk simulations that let clients ask “What would happen if I put this trade on?” both to the existing portfolio and in the case of shocks. Currently, to do this, traders pick up the phone and ask their risk team what the impact might be. In a field in which milliseconds can make a difference, the speed advantage from replacing phone calls with software can be enormous. 
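A minimal sketch of that kind of what-if computation might look like the following. This is not Clear Street’s actual risk model; the flat 25%-of-gross margin rule is made up purely for illustration, as are the positions:

```python
# Shock every position's price by a percentage and report the P&L impact
# and the new margin requirement (here a made-up flat 25% of gross
# market value -- real margin models are far more sophisticated).
def shock_portfolio(positions, shock_pct):
    """positions: {symbol: (signed_qty, price)}. Returns (pnl, margin)."""
    pnl = 0.0
    gross = 0.0
    for qty, price in positions.values():
        shocked = price * (1 + shock_pct)
        pnl += qty * (shocked - price)       # longs lose on a down move
        gross += abs(qty) * shocked          # gross exposure after shock
    return pnl, 0.25 * gross

# Hypothetical long/short book: long 100 AAPL, short 50 TSLA.
portfolio = {"AAPL": (100, 185.0), "TSLA": (-50, 190.0)}

pnl, margin = shock_portfolio(portfolio, -0.10)  # everything drops 10%
# The long loses, the short gains, and margin shrinks with gross exposure.
```

Running many shocks across a book, instantly and on live positions, is exactly the kind of product that is only possible when the underlying data is already clean and real-time.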

As another, moving everything onto one real-time cloud software platform means that Clear Street can provide more bespoke margin, lending, and financing packages that respond in real-time to clients’ portfolios. 

Because Clear Street is building API-first, third parties can build products that inherit these advantages on top of its infrastructure and/or Clear Street can offer those products itself. 

For now, many of those products are assets. Clear Street started by offering just US equities. Then it added options, fixed income, and swaps. Now, it’s scaling out the swaps business and building a futures business. 

In July, it acquired React, the maker of cloud-native futures clearing platform BASIS, and it recently applied for Futures Commission Merchant (FCM) membership. It’s the kind of acquisition a company can make when it’s well-capitalized enough to be acquisitive but small and technologically clean enough to integrate new products smoothly.

Then, it needs to bring those products to major markets around the globe, each of which has its own regulations and its own antiquated systems with which to integrate. My head hurts. 

Oh, and it launched an investment banking business. 

Andy Volz, Clear Street’s COO and Head of Prime Sales, is responsible for bringing in hedge fund clients. And he said that a big part of that – of getting hedge funds and other institutional clients to try and trust the technology Clear Street has built in the first place – is offering them the services that investment banks offer. 

“These are discerning customers, and it’s hard to get them to trust you,” Andy said. “Clients traditionally come in in one of a few ways: research, corporate access, capital introductions, balance sheet. We didn’t have that stuff, and now we do.” 

And even with all of that stuff, winning prime brokerage clients still requires a sales team with the right relationships. Some know a few big clients really well, others cover 200 smaller clients. Clear Street started with that group, targeting the hedge funds that the big banks don’t focus on. 

With Studio, a self-serve product, they can open up the top-of-funnel even further to capture the long-tail of the roughly 6,000-8,000 hedge funds in their addressable market. 

Software, services, and sales. Winning a foothold in prime brokerage requires all three. And from that foothold, Clear Street can expand its share of wallet as it adds new products in new geographies that those same clients, and new ones, want to use. 

Eventually, the goal is to serve all types of clients – from the day-trader to the largest multi-strategy hedge funds like Alpine – across all of their needs. It’s better for Clear Street, obviously, but they believe that it’s also better for those clients from a risk, financing, and speed perspective. 

Slide from Clear Street Pitch Deck

As Uri described the flywheel, “More products, more exponential value in terms of capital efficiency and cross-margining, all on one database is very powerful.” 

That flywheel is just beginning to spin in the grand scheme of things – Uri still has 95% of his activity with other prime brokers – but the traction to date has been impressive. 

Business Model, Traction, and Moats

Aside from the quest to de-risk the financial system by improving its infrastructure, there’s an important business model reason to build modern financial software from the ground up. 

More efficient software makes it cheaper to serve each client, which means that Clear Street can serve clients that the largest prime brokerages can’t profitably serve. 

Owning the full stack means that it can capture more value, and offer better financing and cheaper software, than electronic brokerages without their own infrastructure, like Interactive Brokers, profitably can.

Building the best clearing, settlement, and custody infrastructure for the most products in the most geographies, and delivering it all via APIs, means that it can power more fintech apps more affordably and more safely than companies like Apex Clearing or Alpaca can. 

Slide from Clear Street Pitch Deck

Clear Street makes money by facilitating and financing trades: it earns transaction fees on trades and a spread on products like margin and securities lending. 

Everything it builds is in service of increasing the volume of transactions taking place on top of its infrastructure and minimizing the risk, to itself and to its clients, of those transactions. 

It’s incentivized to lower the barriers to using Clear Street – whether by giving smaller clients software for free or providing larger clients with investment banking services – so that it can earn fees on a growing volume of transactions. And because it front-loaded its costs as fixed CapEx to rebuild the infrastructure, it can capture higher margins than a competitor using legacy systems, more humans, or more third-party services. 

It seems to be working, although it’s early. 

Clear Street is currently clearing 3% of the US equity market daily – $13 billion in volume.

Across roughly 1,500 clients ranging from active individual traders to hedge funds to market makers, they support over $30 billion in revenue-producing balances. 

In 2023, Clear Street generated $260 million in revenue, and most impressively, they did it profitably, earning 25% EBITDA margins. 

Now, the name of the game is to expand into more products and more geographies in order to serve more customers and grow the share of current customers’ transactions. 

The opportunity if they do is massive. 

Clear Street is currently valued at $2.2 billion. Broadridge, an ADP spinout that offers clearing technology built on top of the old infrastructure, has a $23.7 billion market cap. Interactive Brokers has an astonishingly large $41 billion market cap. Goldman, which does many things besides prime brokerage so isn’t a clean comp, is valued at $126.5 billion. 

If Clear Street succeeds, it has the opportunity to win clients, and market cap, from each of them, and do so in a truly defensible way. 

Infrastructure is incredibly hard to build, but the reason that certain entrepreneurs are drawn to building it nonetheless is that if they pull it off, it’s incredibly defensible. Specifically, using Hamilton Helmer’s 7 Powers as a framework, infrastructure has high switching costs. Once a customer has built their products and systems on top of your infrastructure, it’s expensive and even risky for them to switch to another system. 

It cuts both ways. High switching costs are why banks still use the same 70-year-old infrastructure despite the risks it carries and the bottlenecks to growth it creates! Incumbents’ switching cost moats work against new entrants until the new entrant is able to cross the moat and put the same switching costs to work for itself.

Overcoming incumbents’ switching costs is why Clear Street isn’t just building infrastructure, but also products that run on top of it. 

Buying a brokerage to prove the advantages of operating on Clear Street’s infrastructure is one way to show clients that the switch is worth the cost. Products like Studio, which offers a powerful and intuitive UX made possible by better infrastructure, are another. Opening up access to the infrastructure with modern APIs that have good developer ergonomics is a third. And providing the full suite of prime brokerage services, for a certain client group, is a fourth.

The bet that Clear Street seems to be making is that if it provides as many easy on-ramps to its infrastructure as possible – for clients ranging from a single developer building a new trading platform to a large hedge fund that requires white-glove service – it will be able to retain and grow their business over time. 

Clear Street has strong traction, a business model that allows it to capture its fair share of the value it creates, and features that should make the business defensible as it grows. That said, there are risks. Uri is the first to call them out. 

Risks to Clear Street 

Uri thinks in risk, so I asked him what the biggest risks to Clear Street are. He rattled them off. 

“There’s risk at so many levels. There’s a risk if we don’t hire the best people, there’s execution risk, there’s risk in how we manage our clients’ risk because we lend them money, there’s compliance risk.” 

Those are the obvious risks, the ones that anyone would think of when analyzing Clear Street. 

And then there’s the risk of not leaning into the momentum.

I hadn’t thought of this category of risk before. I’ve written about the advantages of taking the longest view in the room over and over without thinking about this particular challenge that comes with it. In my piece on Stripe, I included a great quote from its CEO Patrick Collison: 

The way in which Jeff Bezos has been persistently and continually able to use time horizons as a competitive advantage is something I have deep respect for. There’s something quite deep about the notion of using time horizons as a competitive advantage, in that you’re simply willing to wait longer than other people and you have an organization that is thusly oriented.

What I failed to appreciate is that maintaining momentum over long time horizons is really hard. I think that’s why Jeff Bezos also insisted that the company behave like “It’s always Day One.” 

In Clear Street’s case, Uri said, “We have great momentum, great clients, great employees, a lot of buzz. But if you slow down too much, it gets boring, because it’s already taken 3-4 years to build, and it will take another 3-4, then another 3-4, and that becomes boring and stale over 10-15 years.” 

More succinctly, he said: “Forward momentum and excitement is a big element of success. It’s hard to understand the risk of losing it because it’s not quantitative.”

More generally, Uri views his job as making sure that the company avoids the risks that don’t seem risky, but are. 

Like relying on AWS alone, which he views as a massive risk because of the impact on the off chance that it does go down. 

“AWS can’t go down!” “But what if it does go down?” “Then it’s a real issue!” “So make sure we have a great backup for Amazon if it goes down.”

And then there’s the thing that I think might be the sneakiest risk to the business: the team.

Assembling a Team

In every conversation I’ve had with Uri, Chris, and Clear Street’s leaders, one thing that comes up over and over again is the importance of building the right team. 

This is something that practically every founder will tell you: the team is the most important thing. But given what Clear Street is attempting to pull off, I think it’s particularly true, and particularly difficult, in this case. 

In a nutshell, Clear Street needs to build a team of people with the drive and technical competence to both build modern software and integrate with legacy systems, and with the credibility and experience to sell to both individual traders and the world’s largest financial institutions – all while maintaining a level of risk that lets them maximize profits without blowing themselves up. 

On the software side, Daplyn said that they hire “a combination of pure high-tech developers from Google, Meta, Amazon, etc… and others who are very good technically but also have deep subject matter expertise.” You need people who know how to build flexible, modern software but are also willing to do the tedious work of integrating with legacy systems; people who have experience inside large financial institutions and understand exactly what the customer needs, but who are also willing to work in a fast-paced, hungry startup environment. 

Sometimes, all of that exists within the same person; often, it comes from a mix of different people who you need to get to work together towards the same goal. When I asked Daplyn the secret for getting that kind of group to work together, his answer was simple: “Get people in a room. Part of this job is making sure that everyone is at the table sharing the right information.” 

The even more daunting challenge is leadership hires. 

Uri has not been shy about the fact that he wants to hire the very best and most experienced people, even if they’re the type of people who would typically never consider working at a startup, like the leaders of large financial institutions. 

Culturally, I view this as one of the biggest risks to Clear Street. As a non-technical person who assumes that smart engineers can figure out the technical challenges, I’d even call it the biggest. 

In a business like prime brokerage, one that works with large institutional clients and handles billions of dollars, hiring experienced people is necessary. But as anyone who’s worked in a startup that’s hired big names from big companies can tell you, that can often go horribly wrong. Nothing kills momentum like a bad executive hire. 

So how do you counter that risk? 

Specifically on the hiring side, you spend a lot of time upfront making sure that the big name hires are there for the right reasons and willing to get their hands dirty. 

And even when you get it right for the given moment in the company’s life, Uri told me, you can always keep aiming for the best. That creates a unique culture, one where people have to be bought into the overall success of the mission as much as or even more than their own personal ambitions. 

It seems to be working so far. The executives that I met at Clear Street all seemed to balance experience and tenacity, motivated to be there by decades of frustration at having to deal with shitty systems and an inability to change them within large bureaucracies. 

They view Clear Street as their best opportunity to fix the thing that they’ve always wanted to fix, and understand that the company is going to bring in the best possible people to fix that thing. 

Assembling a world class team pulling from startups and financial institutions is a major challenge, and getting them to work together productively is an even bigger one. It’s a risk that Uri acknowledges and is willing to take. 

But it’s worth it, because there’s one risk to the system that we both see equally clearly: the risk of not innovating. 

The Other Side of Risk

Crumbling infrastructure is bad because disaster can strike when it crumbles, but it’s also bad because it makes it difficult for people to experiment with building new things. 

In Framing the Future of the Internet, I tried to capture the relationship between infrastructure and experimentation: 

Good infrastructure enables experimentation; experimentation solidifies infrastructure.

In Clear Street’s case, most obviously, that means making it easier for anyone, anywhere to access the market, and even to run their own fund. 

Jon Daplyn told me that one of the things that excited him about joining was the shared vision around creating a “hedge fund in a box – the full thing, not just servicing, but tools that allow them to run their own business, from pre-trade analytics and data all the way through to regulatory compliance reports, front to back.” 

It also means better and more innovative financial products.

One fintech founder I spoke to, who only let me quote him anonymously because his product is built on a competitor’s clearing, said: “I would so prefer to be on Clear Street it’s not even funny.” 

When I asked why, he said: “Better technology, way more reliable, way more control. Apex is just too antiquated, it’s like going back in time 20 years, and that across everything.” 

(He hasn’t made the switch yet because Clear Street doesn’t yet offer an off-the-shelf way to create thousands of customer accounts – still Day One.)

There’s an apparent irony in me writing about a company building traditional financial infrastructure when I’m so interested in crypto, but the two worlds are beginning to come together, and that coming together requires connective tissue. 

In a recent interview, BlackRock CEO Larry Fink said, “We believe the next step forward will be the tokenization of financial assets.” 

The company that’s furthest along in making that happen, our portfolio company Ondo Finance, works with Clear Street as its bridge into the traditional financial system. Clear Street clears, settles, and custodies the US Government securities that back Ondo’s USDY and OUSG (and soon OMMF) products. 

Ondo Finance

Better infrastructure leads to more experimentation, and Clear Street is building better financial infrastructure. 

The bet on Clear Street is that not only is better financial infrastructure a way to avoid the known risks, but a way to defend against the infinitely large risk of not innovating when the opportunity arises. 

If AI is going to infuse itself into financial markets, it will need clean, real-time data to work with. Attempts to bolt AI onto COBOL-based systems will only accelerate the risk of collapse. And as global markets become more interconnected, as individual traders and large financial institutions trade products from around the globe, the value of a single source of truth for all of it becomes even more pronounced. 

One of the criticisms of fintech products is that they put lipstick on a pig: they’re shiny interfaces on top of the same slow, antiquated systems the industry has relied on for decades. That’s not because fintech companies don’t want to build better products, but because the infrastructure is so entrenched – technically and regulatorily – that deep innovation has been hard to do. 

Clear Street is tackling that difficulty head on. It’s rebuilding the infrastructure that underpins the traditional financial system as a parallel system that speaks the language of the traditional one until it becomes the system of record.

If it succeeds, it will have pulled off something that few companies ever have: replacing old, crumbling infrastructure with newer, stronger pipes on top of which everyone from the old guard to the new relies, lowering risk and raising the ceiling in the process.

Thanks to Uri, Ru, and the Clear Street team for working with me, and to Dan for editing!

That’s all for today! We’ll be back in your inbox on Friday with a Weekly Dose.

Thanks for reading,


Framing the Future of the Internet


Hi friends 👋,

Happy Tuesday! 

Today’s piece is about a company I’ve mentioned a few times over the past couple years, Farcaster.

Farcaster is a crypto company, a decentralized social network, but I want you to try to not think about it with any of the baggage you might associate with crypto as you read this piece. Think about it as a platform or a network.

In Chris Dixon’s new book, Read Write Own, which comes out today, he uses the term “blockchain network” as an alternative to corporate networks like Facebook and Twitter and protocols networks like SMTP and HTTP. “Blockchains are the only credible, known architecture,” he writes, “for building networks with the societal benefits of protocol networks and the competitive advantages of corporate networks.” Farcaster is a blockchain network.

The benefits of blockchain networks can be hard to grok in the abstract. What’s happening at Farcaster is a concrete and approachable example of why they might win, playing out in real time.

To see it for yourself, join Farcaster. Cast me a hello when you’re there @packy and share a link to this essay, and I’ll recast it to welcome you. 

Let’s get to it. 

Framing the Future of the Internet

Farcaster’s Very Small Apps Are a Very Big Deal

The god of the internet has a sense of humor, or at least a sense of timing. 

In the same week that Apple, with its frustrating response to the EU’s Digital Markets Act (DMA), proved that regulators can’t stop its App Store monopoly, Farcaster released Frames.

The connection between Apple’s App Store monopoly and Farcaster’s Frames might not be immediately obvious – it might turn out not to exist at all – but Frames is the most compelling example of why blockchain networks might ultimately disrupt corporate networks I’ve come across. It puts a bunch of ideas I’ve been writing about into production. 

Let me take a step back, though. Starting with a couple questions you might be asking: what is Farcaster and what are Frames? 

Farcaster is a sufficiently decentralized social network founded by Dan Romero and Varun Srinivasan. 

Farcaster is a protocol, responsible for storing things like users’ handles, casts (Farcaster’s version of tweets), and reactions (like likes and recasts). 

Anyone can build their own clients on top of Farcaster that use that data but offer different frontend experiences. 

Clients are interfaces through which users access the feed. We don’t use that word “client” much anymore, because in most modern social networks, like Twitter and Facebook, the protocol and the client are the same thing. Twitter used to have clients, like Tweetdeck and Twitterific, until it shut down its API access and killed the clients. Now, Twitter is just Twitter (or, if you prefer, X). 

You can think of Farcaster as a Twitter that’s extensible by its community and always will be. 

The Farcaster team built its own client, Warpcast, but any developer is free and encouraged to create their own competing client. Many have. a16z crypto has a list of General and Specialized clients in their awesome-farcaster repo, including Supercast, Yup, Searchcaster, Launchcaster, Eventcaster, and Casthose. Each puts a different twist on the frontend experience of accessing Farcaster’s data and interacting with its users. 

A few months ago, the Farcaster team introduced channels as a way to organize conversations by topic. The Farcaster team created the first few channels, and then opened up the ability to create new channels to any user. Once created, those channels show up in any client that chooses to build them in, with all of the same casts you’d see in Warpcast.  

And on Friday, Farcaster launched Frames. 

Frames let people build small apps that run inside of casts. 

They’re essentially small, interactive iframes embedded in casts, hence the name. Like casts, handles, and channels, Frames work in any Farcaster client. 
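Concretely, a Frame is declared with OpenGraph-style meta tags in a page’s HTML, which any Farcaster client can read and render. The sketch below follows the “vNext” tag names from the spec as it stood at launch; the URLs and button labels are placeholders:

```python
# Build the meta tags a Farcaster client reads to render a Frame.
# Tag names follow the Frames "vNext" spec at launch (late January 2024);
# URLs and button labels below are hypothetical placeholders.
def frame_meta_tags(image_url: str, post_url: str, buttons: list) -> str:
    tags = {
        "fc:frame": "vNext",
        "fc:frame:image": image_url,
        "fc:frame:post_url": post_url,  # where the client POSTs on a click
        "og:image": image_url,          # fallback for non-Frame clients
    }
    for i, label in enumerate(buttons, start=1):
        tags[f"fc:frame:button:{i}"] = label
    return "\n".join(
        f'<meta property="{k}" content="{v}" />' for k, v in tags.items()
    )

html = frame_meta_tags(
    "https://example.com/poll.png",
    "https://example.com/api/vote",
    ["Yes", "No"],
)
```

When a user taps a button, the client POSTs a signed payload to the `post_url`, and the server responds with a new set of frame tags – which is how a static cast becomes an interactive app.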

Frames can support anything from Doom to polls to block explorers to mints to buys to prediction markets. 

Frames are just a canvas; developers will figure out what to paint on them. 

And they are! In the four days since launch, developers have built Frames that let users mint songs in one click, subscribe to a newsletter in one click, and play games right in their feed, including Doom. 

(Fun fact: there’s a long history of people running Doom in weird places, from pregnancy tests to vapes and chainsaws.)

You can feel Warpcast buzzing with ideas. You can see the buzz in the Farcaster DAU chart. 

The absolute number of daily active users is still small (although Dan Romero casted that it’s now over 14k), but there’s clearly something special happening. 

On Friday, I shared Chris Dixon’s new book, Read Write Own, in the Weekly Dose and mentioned two of my favorite essays of his: The next big thing will start out looking like a toy and What the smartest people do on the weekend is what everyone else will do during the week in ten years. 

Early Frames activity is a mashup of the two. A lot of the early Frames look like toys, and a lot of smart people spent this past weekend building them. 

The crypto VC fund Variant hosted a Frames hackathon Sunday with 50 people in-person and hundreds online. Coinbase’s Jesse Pollak set up a group chat for people building Frames that have onchain components, and hit the 100 person cap in five hours. People and companies are offering bounties for the best Frame ideas using Bountycaster, a product that Linda Xie is building on top of Farcaster – people, to get things they want built, and companies, to pay people to use their products (APIs, layer 2’s, etc…) as they build them. 

I can try to describe the energy around Frames all day, but the best way to understand what I’m talking about is to join Warpcast to watch it unfold in real-time. 

A lot of people have written about Frames already. It’s the biggest story in crypto right now, because as Antonio Garcia Martinez wrote:

For Web 3 to succeed it needs to do two things: Enable cool functionality unavailable through traditional Web 2, and make the user largely unaware that they’re even on the blockchain. With Frames, we’re finally doing both.

I’m going to write about it, too, because Frames feels like the beginning of something big. Whether it’s the first domino to fall towards toppling Twitter, killing Apple’s App Store monopoly, or creating something entirely unique, it’s too early to tell. But it’s a domino. 

Seeing it play out over the past few days, I’ve felt like Leo in Once Upon a Time in Hollywood:

Frames are a real, tangible example of three things I’ve been writing about:

  1. Small Applications, Growing Protocols

  2. From Experimentation to Infrastructure

  3. Why Blockchain Networks Win 

I’ll cover each in turn. 

Small Applications, Growing Protocols 

In Small Applications, Growing Protocols, I wrote about the fact that apps are getting easier to build and harder to sustain. Companies like Lensa, Poparazzi, BeReal, Dispo, and Clubhouse gained millions of users, and even made millions of dollars, before fading away. 

Variant’s Li Jin made a similar point in a recent essay, “In a sense, consumer products now resemble entertainment, with users regularly wanting to try out the latest thing and then quickly moving on.”

So what to do? I proposed a third path beyond raising money or bootstrapping:

Small Apps that recognize their fleeting nature can team up with protocols that are built to last to build something bigger and more durable collectively.

Turns out, I wasn’t thinking small enough. Frames are very small apps, some meant to be as fleeting as a tweet, others to serve as portals to larger apps, like Sound or Paragraph. 

Developers and creators gain distribution, and the Farcaster protocol gains activity and users. 

Crypto’s UX has been famously terrible (although it’s improving). With Frames, users can go to one feed instead of a hundred different apps to do the various things they might want to do.

That’s good for the developers and creators, because it means one place to reach thousands (and maybe someday millions) of users and frictionless conversion. 

Li’s partner at Variant Jesse Walden calls Frames a new go-to-market strategy, writing:

For founders, the takeaway is that you now have the ability to tap directly into existing user attention, engagement, identity and data as you plan your GTM, the same way that Zynga famously did with Facebook. Only this time, users’ money will be there too, and their identity and data can’t be rugged from underneath you.  

And it’s good for Farcaster, because it means that thousands of developers and creators will be working to build experiences that attract activities and users, and that the Farcaster protocol has a shot at becoming the center of gravity for onchain activity. 

Thousands of small apps (Frames) working to grow one protocol (Farcaster). 

From Experimentation to Infrastructure

The relationship between apps and protocols is a specific example of a more general relationship I’ve been thinking about a lot recently: experimentation and infrastructure. 

Good infrastructure enables experimentation; experimentation solidifies infrastructure.

In Pace Layers, I wrote that “Strong infrastructure and good governance provide freedom through constraint.” Stable infrastructure provides a base for experimentation: the stronger and more stable the infrastructure, the easier it is to experiment on top of it.  

By making it incredibly easy to experiment, and by guaranteeing that users’ “identity and data can’t be rugged from underneath you,” Farcaster encourages more experimentation. 

The relationship works the other way, too. The more experimentation there is at the top layer, the more stable the infrastructure below it becomes.  

One way to visualize the role of platforms in the context of Pace Layers is by thinking of experiments – or apps – as weights. The more experiments run on top of a platform, the deeper into the layers the platform sinks and the more firmly it’s held in place. 

Farcaster itself started as an experiment. The Farcaster team made the protocol’s only client – also called Farcaster – which looked like a less functional Twitter clone for a small group of Ethereum people. With nothing built on top of it, it was light enough to float away. 

Over time, it’s opened up. Farcaster the client became Warpcast, and other developers built their own clients on top of the Farcaster protocol. Warpcast introduced Channels to help people find casts related to their interests, and then let anyone create their own Channels. With the introduction of Frames, Farcaster is making it easier to build apps within Casts. 

Each thing that people build on top of the Farcaster protocol – clients, apps, channels, and Frames – adds weight to the protocol that pushes it into the infrastructure layer and cements it there. And the more deeply established the Farcaster protocol is as infrastructure, the more people will build on top of it. 

More experimentation is a good thing for its own sake. Novelty is the feedstock of progress. And experimentation is good for the platforms that enable it. 

Ben Thompson explained it in a way that has stuck with me in Shopify and the Power of Platforms:

I would argue that for Shopify a high churn rate is just as much a positive signal as it is a negative one: the easier it is to start an e-commerce business on the platform, the more failures there will be. And, at the same time, the greater likelihood there will be of capturing and supporting successes.

The easier it is to experiment on a platform, the more failures and successes there will be, and the greater the likelihood the platform itself will win. 

I’m going to include a long block quote here, which Dan will hate, because I think Ben says it so well (emphasis mine): 

What is powerful about this model is that it leverages the best parts of modularity — diversity and competition at different parts of the value chain — and aligns the incentives of all of them. Every referral partner, developer, theme designer, and now 3PL provider is simultaneously incentivized to compete with each other narrowly and ensure that Shopify succeeds broadly, because that means the pie is bigger for everyone.

This is the only way to take on an integrated Aggregator like Amazon: trying to replicate what Amazon has built up over decades, as Walmart has attempted, is madness. Amazon has the advantage in every part of the stack, from more customers to more suppliers to lower fulfillment costs to faster delivery.

The only way out of that competition is differentiation; granted, Walmart has tried buying and launching new brands exclusive to its store, but differentiation when it comes to e-commerce goods doesn’t arise from top down planning. Rather, it bubbles up from widespread opportunity (and churn!), like that created by Shopify, supported by an entire aligned ecosystem.

You can find and replace “Amazon” with “Twitter,” “Walmart” with “Threads,” and “Shopify” with “Farcaster” and the argument works just as well. 

(I’m not trying to make the ecommerce analogy directly, but interestingly, Alex Danco, who works on making Shopify wallet aware, tweeted: “frames are a key missing piece of ecommerce infra that now suddenly works.” This thing has levels.)

By enabling widespread opportunity and encouraging competition (for attention) and churn at the experimentation layer, and building an aligned ecosystem of developers who benefit from a larger and more engaged community of users, Farcaster has the opportunity to build differentiated social infrastructure and succeed where other Twitter competitors have failed. 

Why Blockchain Networks Win

Building new infrastructure bottoms-up is a slow, messy, winding process. It looks like the experimentation layer until it becomes part of the infrastructure layer. 

There are all of these attributes of blockchain networks that people in crypto like to talk about, things like decentralization, composability, and permissionlessness. I like to write about those things, too, but I get that they can seem a little abstract. 

A bet that blockchain networks will win is a bet that these attributes will, over time, allow richer ecosystems to develop on top of them. 

One of the reasons you might want those attributes is to prevent bad things from happening.

When Twitter killed Substack links back in April of last year, I wrote a piece called Crypto (Could) Fixes This about how sufficiently decentralized social networks like Farcaster could prevent things like that from happening. 

A sufficiently decentralized protocol, on the other hand, would need the majority of its token holders to wake up feeling petty in order to do the same thing, or even more if big changes required some sort of supermajority.

Defense wins championships, but defense is also boring. It’s hard to attract a lot of users to a new product with a “just-in-case” value proposition. 

Another reason you might want those things is to enable more good things to happen.

That’s how great products that millions of people use get built: by creating differentiated experiences that people love. 

Frames is a tangible example of that kind of decentralization, composability, and permissionlessness in action. 

  • Decentralization: Builders can build on Farcaster knowing they won’t be rugged.

  • Composability: Developers can snap their Lego blocks into Farcaster’s infrastructure and other developers’ Lego blocks, to experiment without needing to build the full stack or find new users. Wallets are built in, facilitating commerce and gating. As people build new things, everyone can remix them.

  • Permissionlessness: Anyone can create Frames, and Frames work in any Farcaster client, not just Warpcast. 

More succinctly, Frames allow anyone, anywhere to easily build small apps with capabilities that grow as other people build new things, with the knowledge that the platform can’t change the rules on them.
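To make "small apps" concrete: a Frame is just a web page whose head carries a handful of `fc:frame` meta tags that any Farcaster client knows how to render. A minimal sketch, assuming the early vNext spec (the image URL, button label, and post URL here are hypothetical placeholders):

```python
def render_frame(image_url: str, button_label: str, post_url: str) -> str:
    """Return the HTML for a minimal one-button Frame (vNext-style meta tags)."""
    tags = {
        "fc:frame": "vNext",                # declares this page as a Frame
        "fc:frame:image": image_url,        # the image clients display in-feed
        "fc:frame:button:1": button_label,  # label for the first button
        "fc:frame:post_url": post_url,      # where button clicks POST back to
        "og:image": image_url,              # fallback for non-Frame clients
    }
    metas = "\n".join(
        f'<meta property="{name}" content="{value}" />'
        for name, value in tags.items()
    )
    return f"<!DOCTYPE html><html><head>{metas}</head><body></body></html>"

# Hypothetical URLs, for illustration only.
html = render_frame(
    "https://example.com/cookie.png",
    "Buy Thin Mints",
    "https://example.com/api/frame",
)
```

Because the app is just a URL plus meta tags, any client that speaks the protocol can render it – which is the permissionlessness point in miniature.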

We’re just starting to see what’s possible, even if early experiments look like toys. I expect that in the coming weeks and months, people will build easy ways to send payments, sell tickets, and do all sorts of things that happen a few clicks away from the feed on Twitter. 

One possible use case that I’m particularly excited by is prediction market Frames, one of the ideas I’ve seen mentioned most since the launch. 

Instead of letting people simply fire off random predictions, a prediction market Frame might let them express their conviction by putting their money where their mouths are. Casting pessimistic predictions is easy; it would be great for humanity if it were a little more expensive. With prediction Frames, I suspect that there would be a lot of signal in what currently amounts to a lot of noise. 

Live sports betting makes a lot of sense, too. Farcaster already has channels for the major sports leagues and teams. Adding betting in those channels would make Farcaster the social place to gather around live sports. (Obviously, rules vary by jurisdiction.)

0xdesigner highlighted another killer use case: Instant Checkout.

That one is compelling because it removes friction from commerce (onchains capitalism) in a way that one-click checkout companies could only dream of (they don’t have their own social feeds) and enables attribution in a way that marketers dream of. If I share a product in a Frame and you buy it, it’s clear that I’m the one who got you to buy it. The brand can send affiliate revenue right into my wallet, no tracking link required. And it will only be possible thanks to composability: as developers build zero-knowledge tools that allow me to store my personal information and address in a wallet without exposing them to the world, anyone building an Instant Checkout Frame can plug them in permissionlessly. 

Update: I wrote this section last night, and this morning, I woke up to this:

Of course, I ordered Girl Scout Cookies directly from my feed (2 Trefoils, 1 Tagalongs, 1 Thin Mints). The experience isn’t one-click yet, but it was really smooth. I made my order, added it to my cart, and checked out. The @cookie account sent me a cast with a link to my cart. I entered my address, checked out with Coinbase Commerce, and paid in ETH. Smooth. 

Buying Girl Scout Cookies in Frames

I couldn’t have asked for a better example of the composability, permissionlessness, and experimentation I’m talking about. Yesterday afternoon, someone proposed Instant Checkout. Last night, someone built it. Over the next few days, the swarm of builders will remix it and improve on the process. 

And speaking of remixing, Frames themselves are composable, as Once Upon’s Paul Cowgill pointed out. Builders can make Frames of Frames, snapping them together to create entirely new experiences.

This feels like what the internet is supposed to feel like.

A lot of people are talking and writing about Frames. The hype is deserved. This is a really big deal. 

But – don’t say it, don’t say it – we’re still so early.

Developers and creators will dream up things that I can’t, and build an increasingly rich ecosystem in the process. Maybe one day, as the gravity of its infrastructure and user base pull in more and more developers, Farcaster will topple Twitter. Maybe it will obviate the need for App Stores altogether by throwing the things people do in apps right in the feed, one click away. 

More likely, it will create something new and different and potentially more powerful: a social protocol on top of which anyone can build products that take payments, purchases, mints, predictions, and all of the other primitives that developers develop for granted. 

This isn’t about “onboarding the next billion users to crypto.” It’s about using blockchain networks to do things that billions of people want to do, better. 

That’s how blockchain networks win, and how builders and users win in the process.

I think this might be the one. Frames are small apps with big implications.

Thanks to Dan for editing!

That’s all for today. We’ll be back in your inbox with a Weekly Dose on Friday.

Thanks for reading,


The Techno-Industrial Revolution


Hi friends 👋,

I’ve become obsessed with a certain kind of company recently. This is my attempt to put a framework around the opportunity.

Let’s get to it.

Techno-Industrial Revolution

We’re standing at the foot of a Techno-Industrial Revolution.

Progress shifted from atoms to bits and is now shifting to a combination of atoms and bits. 

In Tech is Going to Get Much Bigger, I argued that cheaper energy, intelligence, and dexterity (i.e. robots) – and I would add biotech and new materials to the list – will combine to expand tech’s total addressable market to include large existing industries that have been relatively untouched by technology, including industrials and agriculture. Higher margins in large industries will make for very valuable companies.  

These companies will look less like the tech giants of today and more like the giants of the Industrial Revolution. 

Some, certainly, will be the incumbents that take advantage of cheaper, more capable inputs to improve their businesses. Think Adobe incorporating AI or a factory installing robots. Cheaper, more capable inputs, in many cases, will be a rising tide that lifts all boats. Clayton Christensen would call these sustaining innovations.

But I also believe that there will be a crop of new companies that challenge incumbents head-on and win, capturing large markets at high margins in the process. I’ve started calling them Techno-Industrials.

Techno-Industrials use technology to build atoms-based products with structurally superior unit economics, with the ultimate goal of winning on cost in large existing markets, or of expanding or creating markets where there is pent-up demand. 

They can cash in those structurally superior unit economics as lower prices to win market share, as higher gross margins, or as both. 

Less fancily, they’ll use new technology, processes, or approaches to make physical things that customers want more cheaply than incumbents can, in a way that incumbents can’t match. Because they can manufacture more cheaply, they can either lower their prices to steal market share or capture more gross margin. In some cases, they’ll have enough room to do both.  
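The arithmetic behind that choice is simple. With hypothetical numbers – an incumbent making a widget for $80 that sells at $100, versus a challenger making it for $50 – the challenger can take the advantage as margin, as price, or as some of each:

```python
def gross_margin(price: float, unit_cost: float) -> float:
    """Gross margin as a fraction of price."""
    return (price - unit_cost) / price

# Hypothetical numbers, for illustration only.
incumbent_cost, challenger_cost, market_price = 80.0, 50.0, 100.0

incumbent = gross_margin(market_price, incumbent_cost)     # 20% margin
match_price = gross_margin(market_price, challenger_cost)  # 50% margin at the same price
undercut = gross_margin(70.0, challenger_cost)             # ~29% margin at a 30% lower price

# The structural part: if the incumbent tries to follow the price cut,
# its margin goes negative, because its unit cost hasn't changed.
incumbent_follows = gross_margin(70.0, incumbent_cost)     # negative
```

The "can't easily be replicated" clause is what makes the last line matter: the incumbent can match the price only by losing money on every unit.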

Three examples might help explain what I mean. 


Anduril

In a great conversation with Patrick O’Shaughnessy, Anduril co-founder Palmer Luckey captured the spirit of Techno-Industrials as well as I’ve ever heard:  

Our pitch deck page one says that Anduril will save western civilization by saving taxpayers hundreds of billions of dollars a year as we make tens of billions of dollars a year.

That’s exactly it: save customers money and earn high margins. That’s only possible if you figure out a fundamentally cheaper way to deliver capabilities. 

In Anduril’s case, that means using software to make hardware more capable. I described Anduril as a series of bets, one of which speaks to this point: 

Bet #4: By investing in software R&D upfront, it will be able to better deliver the capabilities the DoD needs to fight the new kind of war, more cheaply: that its Lattice OS will let Anduril deliver more capable hardware products for less money, and that its advantages will compound with each new product it plugs in.

The mega-bull case for Anduril is that it’s able to capture more and more of the trillion-dollar defense market and do so at 40-50% margins instead of the 8-12% that incumbent Defense Primes lock in with cost-plus contracts. Lockheed Martin is worth $113 billion on $68 billion of revenue with 12% gross margins. If Anduril achieves a fraction of that revenue while maintaining much higher gross margins, it will be a very valuable company. 


Solugen

Solugen is using biotech to attack the $4 trillion chemicals market. 

Solugen Bioforge

Instead of refining petrochemical feedstocks to produce chemicals, Solugen uses enzyme engineering to convert sugar and water (and eventually CO2) into chemicals. For more detail, Elliot wrote a great piece on the company after his visit to its Bioforge in Houston, and Jeff Burke wrote a deep dive on the process.

The takeaway is that by removing steps from the process, and using software and biology in place of traditional chemical synthesis and energy-intensive physical processes, Solugen is able to produce chemicals at a higher yield, lower environmental footprint, lower cost, and higher margins than incumbents.

Solugen bills itself as a climate friendly company – its homepage announces “We decarbonize the physical world” – but as Elliot writes, “The climate benefits have a tertiary impact on Solugen’s sales strategy. First, they compete on price and customer experience.”

What’s illuminating is that by cutting steps out of the process and replacing expensive inputs with technology, which Solugen originally did for the climate benefits, it makes the chemicals cheaper to manufacture! It gets closer to the true cost physics of chemical production – what a chemical should cost to make given the technologies, materials, and processes intrinsically required to manufacture it. Higher yield, lower heat and energy requirements, no waste, cheaper product. 


It’s very early in Solugen’s journey, but already, the company generates more than $100 million in annual revenue from its first Bioforge at “software-like margins” of around 60%, founder Gaurab Chakrabarti told Bloomberg. It believes that it can build a library of enzymes and catalysts that could produce 90% of chemicals by 2030. 

Monumental Labs

Monumental Labs’ website pronounces: “Monumental Labs is building AI-enabled robotic stone carving factories. With them, we’ll create cities with the splendor of Florence, Paris, or Beaux-Arts New York, at a fraction of the cost.” 

The company is much younger than Anduril and Solugen, but it’s a Techno-Industrial: using technology (“AI-enabled robotic stone carving factories”) to build atoms-based products (“cities with the splendor of Florence, Paris, or Beaux-Arts New York”) with structurally superior unit economics (“at a fraction of the cost”). 

In my hunt for Techno-Industrials, I talked to Monumental Labs’ founder Micah Springut the other day to understand how he’s thinking about the business. It sounded familiar. 

In I, Exponential, I wrote about the use of modern technology like 3D models, CNC machines, Lidar scans, and 3D printers in the construction of Barcelona’s unfinished La Sagrada Família.

Sagrada Família, CNN

Monumental wants to do something similar, at city-scale. The idea is that stone should be the cheapest material for construction, even cheaper than concrete. It has cost physics on its side: making concrete requires an energy-intensive process of heating limestone and other materials to very high temperatures (around 1400-1500°C) in a kiln to make cement, which is then mixed with water, sand, gravel, and other materials. Concrete manufacturing is responsible for roughly 8% of global greenhouse gas emissions. 

When you use stone, you just need to get it out of the ground. You can skip the heating and processing. The reason we don’t use stone is that working stone is so labor intensive – 70-80% of the cost is labor, not the stone itself. “Stone is a cheap material being sold like a Cadillac,” Micah told me. 

Monumental’s bet is that by using AI-enabled robotic stone carvers, it can turn that OpEx into CapEx and dramatically reduce the cost of stone. The company is starting at the high-end of the market with its first robot – carving busts, sculptures, and architectural ornament at high margins – but Micah believes that they’ll be able to build full buildings, structural elements and all, out of reinforced stone at scale. 
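The OpEx-to-CapEx trade can be sketched with toy numbers (all hypothetical, not Monumental’s actual figures): if labor is 75% of a carved piece’s cost, replacing per-piece labor with a robot whose upfront cost amortizes across many pieces changes the cost structure entirely.

```python
def cost_per_piece_manual(stone_cost: float, labor_share: float) -> float:
    """Unit cost when labor is a fixed share of the total (pure OpEx)."""
    # If labor is 75% of total cost, total = stone_cost / (1 - 0.75).
    return stone_cost / (1.0 - labor_share)

def cost_per_piece_robotic(stone_cost: float, robot_capex: float,
                           pieces_over_life: int, opex_per_piece: float) -> float:
    """Unit cost when the robot's upfront cost is amortized per piece (CapEx)."""
    return stone_cost + robot_capex / pieces_over_life + opex_per_piece

# Hypothetical numbers, for illustration only.
manual = cost_per_piece_manual(stone_cost=2_500, labor_share=0.75)
robotic = cost_per_piece_robotic(stone_cost=2_500, robot_capex=500_000,
                                 pieces_over_life=1_000, opex_per_piece=500)
```

With these placeholder inputs, the manual piece costs $10,000 while the robotic one costs $3,500 – and unlike labor, the amortized robot cost keeps falling as volume grows, which is the whole point of turning OpEx into CapEx.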

If they succeed, they’ll make buildings more beautiful, and cheaper. Micah pointed out that we add a lot of unnecessary material to buildings – like drywall – to cover ugly reinforced rebar, for example. He thinks working with stone can simplify buildings in the same way that an electric motor is so much simpler than a piston engine, and lower costs in the process. 

Over time, if everything goes right, Monumental hopes to make a dent in the $1.5 trillion and growing building materials market, and capture margins by vertically integrating either backwards into quarries or forward into delivering turnkey building solutions. 

Obviously, getting everything to go just right is going to be insanely hard. It will require enormous warehouses staffed with fleets of robots, a tremendous amount of capital, an orchestra of flatbed trucks criss-crossing the country loaded with stone, selling into a fragmented and geographically far-flung market, and perhaps most dauntingly, changes to building code in the face of the concrete lobby and construction unions. 

Building a Techno-Industrial won’t be easy, but the prize – big margins in enormous markets – is worth the fight, and I think that cost physics is a powerful force. 

Aggregators and Industrials 

Anduril, Solugen, and Monumental Labs are just three examples, and all three are still relatively early in the journey. Most of the promise of Techno-Industrials as it stands lies in spreadsheets and techno-economic analyses.

A skeptic would argue that we’ve heard this story before. In the 2010s, companies that dealt in the physical world promised better margins through technology. 

Companies like WeWork, Bird, and my company, Breather, failed to live up to that promise. 

Ultimately, while they were tech-enabled, the technology wasn’t infused into the product in such a way that they changed the cost physics of their industries. 

From an economic perspective, it was like squeezing a money balloon: we just shifted dollars from one side to the other.

There are also a number of successes. DoorDash is worth $42 billion. Airbnb is worth $89 billion. Uber is worth $134 billion. 

I would argue that these companies didn’t change the cost physics of their industries, either. Food is not cheaper. Travel is not cheaper. Catching a ride is not cheaper. 

These companies are examples of what Ben Thompson famously calls Aggregators.

Ben Thompson, Stratechery, Aggregation Theory

Aggregators “aggregate modularized suppliers — which they often don’t pay for — to consumers/users with whom they have an exclusive relationship at scale.”

These companies do create value. There’s value in convenience, and there’s value in being able to monetize an underutilized resource, whether that’s an empty house or an hour of time. But they don’t structurally improve the gross margins on the products in their markets. They don’t change the cost physics. Like the balloon squeezer, they just move profits from one place to another. 

One of the key ideas behind Aggregation Theory is Clayton Christensen’s Law of Conservation of Attractive Profits. Note the word conservation. Aggregators didn’t expand industries’ profits as much as they moved them from one place to another. 

Reasoning about Techno-Industrials requires us to go back further in time, all the way back to the Industrial Revolution in America. I’m calling this coming stage the Techno-Industrial Revolution, but the Industrial Revolution was a techno-industrial revolution, too. 

Companies like ALCOA and Carnegie Steel invented or leveraged new technology to build atoms-based products with structurally superior unit economics, with the ultimate goal of winning on cost in large existing markets, or of expanding or creating markets where there was pent-up demand. 

Take ALCOA. In 1880, before Charles Martin Hall and Paul Héroult independently invented what became known as the Hall-Héroult Process for aluminum smelting, the metal sold for $17 per pound. In 1884, the US only produced 125 pounds of aluminum, at least 100 ounces of which went to the capstone of the newly built Washington Monument, the tallest structure in the world. 

Charles Martin Hall with Illustration of ALCOA Smallman Street Facility

In 1903, the Wright Brothers built the engine for their historic plane using aluminum alloy. By 1930, aluminum sold for $0.20 per pound and ALCOA was responsible for over half of worldwide aluminum capacity. The US alone produced 148,000 tons of the metal in 1939. When Sputnik launched in 1957, it brought its aluminum body into orbit. Today, it’s impossible to imagine the world without the lightweight, flexible metal and its alloys.  

ALCOA dominated the industry so handily that in 1938, FDR’s Justice Department sued it for antitrust based not on rapacious practices, but on market share alone. 

Hall invented a technology (and protected it with patents) to manufacture a product with structurally superior unit economics in order to bring down costs and both successfully compete in existing markets and create new ones. 

Or take Carnegie Steel. In 1860, when Andrew Carnegie was dabbling in iron investments, the superior steel was so expensive to make that it was only made in small batches; the US produced only 13,000 tons that year. But in 1872, Carnegie saw the Bessemer Process in action at a plant in England, and introduced it to his Edgar Thomson Steelworks in 1875. By the turn of the century, the US produced over 11 million tons of steel, and Carnegie was its largest and most profitable producer.

Andrew Carnegie and the Bessemer Process

The first ton of steel Carnegie produced cost about $56; by 1900, Carnegie was producing steel at $11.50 per ton, a roughly 80% decline. At lower prices and increased manufacturing capacity, steel underpinned railroads, ships, and newly-possible-thanks-to-steel skyscrapers, which helped meet the growing demand for urban dwellings to house an influx of immigrants.

The 1901 sale of Carnegie Steel to JP Morgan’s U.S. Steel for $480 million made Carnegie the richest man in the world and gave the newly-created conglomerate two-thirds of the American steel market. 

Carnegie used a new technology, and vertically integrated around it, to manufacture a product with structurally superior unit economics in order to bring down costs and both successfully compete in existing markets and create new ones. 

ALCOA and Carnegie Steel took two different approaches – innovation and integration, respectively – to achieve the same ends: driving down costs and increasing production. 

As Carnegie himself put it:

Show me your cost sheets. It is more interesting to know how well and how cheaply you have done this thing than how much money you have made, because the one is a temporary result, due possibly to special conditions of trade, but the other means a permanency that will go on with the works as long as they last.

By structurally superior unit economics, I mean that there is something in the way the product is manufactured or delivered that gives the Techno-Industrial a cost advantage that can’t easily be replicated. For ALCOA, that was the patented Hall-Héroult Process. For Carnegie Steel, it was the application of the new Bessemer Process combined with vertical integration of the supply chain and a maniacal focus on cost across the whole thing. 

In both cases, new technology was a necessary ingredient that the companies leveraged in their manufacturing processes in order to drive down costs, grow demand, and increase scale to drive costs down further. They didn’t just change prices, they got closer to the true cost physics of their products. 

Modern Techno-Industrials can take either the ALCOA inventor route or the Carnegie integrator route. I’d argue that Solugen is an example of an inventor and Anduril and Monumental Labs are integrators, although the line, in both the old examples and the new, is a little blurry. In either case, they will face enormous headwinds as they race to scale. 

Techno-Industrial Challenges 

Operating in the world of atoms is different than operating in the world of bits. 

In the world of bits, you don’t need to ask permission; in the world of atoms, you do. 

In bits, you can pivot and iterate and ship updates and run experiments; in atoms, you need to have your strategy and roadmap dialed in early. 

In bits, you can build early products with very little capital, and increasingly, you can scale with less capital; in atoms, you need large pools of capital, including venture capital, but also including debt, asset-based financing, and project financing. 

As energy, intelligence, and dexterity get cheaper, the world of atoms will come to look more like the world of bits, but it will never be as fluid as the world of bits. 

There are a few big categories of challenges that Techno-Industrials will need to face that software companies don’t. 

The first is that manufacturing is hard. I didn’t listen to the whole Elon Musk Joe Rogan interview, but in the bit that I heard, Elon mentioned how hard manufacturing is roughly 100 times. Getting something to work in the lab is very different than getting it to work at scale. Making one of something at whatever cost is very different than repeatedly making thousands of that thing at a cost that looks anything like the cost in a techno-economic analysis. 

The incumbents that Techno-Industrials will be competing with are excellent at manufacturing. It’s what they do. The chemical giants like BASF and Dow are manufacturing wonders. The concrete industry produces billions of cubic meters of the product every year. 

Relatedly, incumbents have scale and distribution. They have long-standing supplier and customer relationships around the world. In many cases, these battles will be a race to see if incumbents can get technology before Techno-Industrials get scale and distribution. 

In some cases, Techno-Industrials (the inventors) might be protected by patents, but large companies might find it economically advantageous to ignore the patents and bleed poorer startups dry in court. And if not in America, Chinese manufacturers will certainly run roughshod over intellectual property and flood the market with cheaper versions of the same product. Competing in markets in which Chinese companies are unable to serve western customers is a key consideration. 

Within America, regulations can either be helpful or harmful. Regulatory capture by incumbents is a real threat to Techno-Industrials. Lobbying efforts come up in every conversation with Techno-Industrial founders. Anduril famously hired more lobbyists than engineers in its first few months. Watch the Bill Gurley talk again for a sober reminder.

Then, there’s the matter of timing. These are such big and exciting opportunities that companies will attack them whether or not the economics make sense quite yet, in hopes that they will when the time is right. There’s a graveyard full of companies that were too early, and I suspect we’ll see many Techno-Industrials emerge in categories that have scarred investors but might now make sense. 

Finally, building a Techno-Industrial is capital intensive. These companies will need to show consistent and rapid progress against their milestones in order to raise the funding they need to survive and grow. As Varda’s Delian Asparouhov told me, “deeptech companies should begin generating revenue within three years.” 

There are ways to overcome these challenges, of course. Tesla and SpaceX are proof that it’s possible. That might mean starting at the high-end of the market, where margins are higher and competition is less cutthroat. It might mean starting with a novel product with a smaller market that incumbents can’t or won’t build before scaling up into direct competition. It might mean positioning as a climate company in order to get customer buy-in and government subsidies before translating the efficiencies that come with climate-consciousness into efficiencies that lock in higher margins. 

But have no doubt, it will be a battle. I think the prize is worth it. 

Searching for Techno-Industrials

The category alternately called hard tech / deep tech / frontier tech / atoms-based / American Dynamism is on fire. The things that these companies are able to do and build boggle the mind. 

It takes a new, old way of thinking about tech companies to understand which might break through to become very large, standalone businesses, and this is my first entry in what I suspect will be an ongoing attempt to put a framework around it. 

While software has been all about swamping upfront fixed costs with high gross margins as revenue grows, I think the defining characteristic of successful Techno-Industrials will be whether they have a structural cost advantage. 

That might come from an in-house invention or the integration of a number of new capabilities in novel and defensible ways. In either case, if Techno-Industrials are to succeed, it will be because they leverage technology to do what technology is supposed to do: give people more and better for less.

We covered three examples – Anduril, Solugen, and Monumental Labs – taking three different approaches towards that end. Beautifully, there is no playbook written yet, and I suspect each Techno-Industrial will forge a different path. 

I’m on the hunt for more examples – please send me your favorites – both to analyze and potentially to invest in. The challenges these companies face will be enormous, but so is the potential. 

For the first time in a long time, tech companies have the opportunity to rebuild the world’s industrial base and tap into huge, existing pools of revenue at higher margins than incumbents can. Assuming the same revenue – a safe assumption, since revenue will likely grow with lower prices – structurally higher margins will create more valuable companies than the incumbent comps. 

I suspect that in a decade or two, there will be dozens of trillion-dollar companies, and my bet is that many of them will be Techno-Industrials.

Thanks to Dan for editing!

That’s all for today. We’ll be back in your inbox with a Weekly Dose on Friday.

Thanks for reading,


The Experimentation Layer

Adios to the 134 Not Boring people who left us since last week (lotta New Year inbox cleaning!). If you haven’t subscribed, join 218,707 smart, curious folks by subscribing here:

Subscribe now

Today’s Not Boring is brought to you by… Plaid

You may know Plaid as the company that lets you safely and quickly connect your bank to your favorite finance apps. When their original product dropped back in 2013, it was like magic and unleashed a whole wave of fintech innovation – think Venmo, Chime, and SoFi. Ten years later, Plaid has established itself as one of the leading digital finance platforms and has rolled out a whole suite of products that allow businesses to get the most out of their financial data – including payments, identity verification, and credit.

Now, Plaid just dropped its 2023 Fintech Effect report – a 2,000 person survey on everything important happening in fintech. The report combines the survey’s results and Plaid’s unique vantage point in the industry to produce a comprehensive overview of everything you need to know about building in fintech in 2023. The best part? It’s completely free.

If you’re a founder, builder, investor, or just curious about the future of money – then do yourself a favor and spend a few moments diving deep into the 2023 Fintech Effect. 

Get Free Report

Hi friends 👋,

Last week, I wrote about Stewart Brand’s Pace Layers. I said I’d probably be using them as a framework going forward, and that I’d call out which layer I was writing about when I did. 

We didn’t have to wait long. This week, I’m writing about the beautiful messiness at my favorite layer: the fashion layer. 

Let’s get to it. 

The Experimentation Layer

Last week, Rabbit dropped its $199 AI device, the r1. 

It sold out four pre-order batches, 40,000 devices in all, good for $8 million in booked revenue, tout de suite. It also launched a thousand tweets arguing that “it should have been an app,” that it would never work at $199, that it didn’t have a real use case, and that it was a solution in search of a problem. 

Look, I don’t know the Rabbit founders, they don’t need me to defend them, and they’re laughing all the way to the bank as we speak. But some of the reactions to the product are the latest in a long line of commentary I’ve noticed that amounts to: 

“Why didn’t they simply do the perfect thing on the first attempt?”

The short answer is: that’s not how this works. Things are not linear or clean. We can only asymptote towards perfection through trial and error. 

Even Steve Jobs, the Steve Jobs, launched the ROKR E1 in partnership with Motorola before the iPhone, as Michael Mofina pointed out to me. I had memory-holed it. It was a total bust. 

 You don’t have to watch that whole video, but if you want a little hit of cringe, start at 10:50, where Steve lets the COO of Cingular Wireless come onstage to do a corporate standup comedy routine. No one gets this stuff right every time, not even ~*Steve*~. 

The longer answer, which we’ll explore today, is that the beauty of the system lies in millions of people trying new things, most of which will fail outright, some of which will succeed spectacularly, and many of which will survive only in pieces, contributing a new mechanism, concept, or feature to the pile of Lego bricks that the next group of tinkerers can play with. 

This is true for products, research, and ideas, even the ones you disagree with. Especially the ones you disagree with. 

The point isn’t to play it safe and get it right the first time; if any one person or group of people had The Answers, we might as well give real communism a try. 

The point is to experiment with wild, clever, creative, novel things, get feedback from the market, and iterate until, maybe, something emerges that improves society in a fundamental way. 

Let’s turn to the Pace Layers.

The Experimentation Layer

I like to think of the top layer of Stewart Brand’s Pace Layers, the fashion layer, as the Experimentation Layer.  

This is my favorite layer, the fuel that keeps the engine of progress humming, the raw material from which society is sculpted. It’s where new things and ideas come to play, airlocked from hurting the deeper levels, free to compete for the right to change them for the better. As Brand writes (modified with our new name): 

The job of experimentation is to be froth—quick, irrelevant, engaging, self-preoccupied, and cruel. Try this! No, no, try this! It is culture cut free to experiment as creatively and irresponsibly as the society can bear. From all that variety comes driving energy for commerce (the annual model change in automobiles) and the occasional good idea or practice that sifts down to improve deeper levels.

Most of the things that get tried out at the experimentation layer don’t make it through. Clothes go out of style, ideas crumble in the face of scrutiny, radical campaigns lose, and there’s that at-least-directionally-correct stat that 90% of startups fail. That’s OK. That’s great!

Variance is the point. Launch a million mutations and see which increases society’s fitness. If something seems too unpolished, ambitious, weird, or even dangerous, good! It’s in the right place. 

Capitalism’s main advantage over other economic systems is that the experimentation layer exists within capitalism and doesn’t within the alternatives. The same goes for democracy’s main advantage over other governance systems. 

Stalin skipped this layer and went straight for commerce or infrastructure or culture. Empirically, that didn’t work. 

A society needs a lot of wild, crazy new ideas to fight off stagnation, to grow and evolve, even if each particular wild, crazy new idea seems jarring, scary, incomplete, or impractical. For better or worse, the froth is what produces the occasional good idea or practice that sifts down to improve deeper levels. 

There doesn’t seem to be a reliable shortcut, as nice as it would be if there were. 

There are a couple of ways a new idea can sift down to improve the deeper levels: in full or in part. We can look at startups as an example. 

Two Paths to the Deeper Layers

The first path is to make it through fully-formed; altered a little, sure, but intact in the form of a company that survives the early stages of startup life to become part of the fabric of commerce, something a lot of people pay for and use every day. 

This is the path to riches and glory. Zuckerberg walked this path. Musk walked this path. Bezos walked this path. It takes a lot: new ideas, fortitude, timing, adaptability, drive, and all of the other attributes attributed to successful entrepreneurs.

The second path is to create an idea, mechanism, feature, or whatever that is so novel and unique that it survives and eventually sifts down on the back of another product, even if the company that created it fails. 

This is the less appreciated and often derided path, the path to on-paper failure but potentially lasting legacy, like passing on good genes to your kids and watching them succeed in ways you couldn’t. 

But this group contributes as much to the world’s progress as the former. It should be celebrated for its sacrifices, and potentially even rewarded posthumously. 

Founders crazy enough to try something truly new that failed as a business but succeeded in creating useful mutations deserve Retroactive Public Goods Funding, maybe even ticker tape parades, and certainly not derision.

Venture capital actually does a pretty great job here (the beauty of the model is a subject for a future piece), but we need to do more to create the conditions for even more wild experiments — from the garage to the lab — because they create the ingredients for progress.

Without those on-paper failures, future on-paper successes would be less likely. 

The only real failure is failing without adding anything new to the stew. 

Experimentation Everywhere

I’m using startups as an example here, but this is true for anything – new ideas, new research, new political ideologies. 

And it operates at different scales. For instance, some nations are experimenting with radical new economic models like stringent degrowth policies or near anarcho-capitalism. If you live in Germany or Argentina, you might not love that your leaders are running opposite experiments on your country, but from a global perspective, it’s truly fascinating to get to test degrowth and anarcho-capitalism in production.    

Whatever the specific situation, people want new things to look like this: 

But that is the least valuable kind of new thing there is! 

That’s launching yet another ChatGPT wrapper as an app, doing research that’s likely to get funded, running a safe campaign by saying what the voters want to hear. It might even work in the short-term, but it doesn’t move the needle. 

The lower layers – commerce, infrastructure, governance, culture, and nature – are for slow and steady. They’re the backstop. The experimentation layer is for froth and novelty and things that are likely to fail but might, just might, echo down into the deeper layers. 

I’ve written about society’s relatively recent risk aversion a bunch, and the more I think about it, the more I think that a big part of the issue is trying to smooth the experimentation layer into a straight line. 

This can show up as silly things, like telling hardware developers that they should have built an app instead. It can show up as a scientific research system that funds safe, incremental work instead of crazy new ideas – 81% of Fast Grants recipients who responded to a survey said that their research programs would become more ambitious if they could choose how to spend their funding. It shows up in the ideas that people aren’t allowed to discuss without getting canceled. 

The effect is that of an overactive immune system that starts attacking its own cells. 

So when people try something new, or introduce a radical idea, the experience looks something like this: 

That’s part of the process, too, of course. For something to escape the experimentation layer and influence the deeper, slower layers, it needs to survive the gauntlet.

Trust the Layers 

I wonder if, given the declining trust in institutions further down the stack, people feel the need to police new things at the experimentation layer, just as a basketball team without a dominant rim protector might play a safer, less aggressive defense on the perimeter. 

Pew Survey

“What happens,” they seem to ask, “if we let a bad idea slip through?”

The more important, and harder to predict, question is, “What happens if we choke the good ideas out before they have a chance to sift down?”

There’s good news though: the experimentation layer seems to be getting unshackled. In my conversations and adventures on the internet, I’ve noticed the tide turning. Maybe you’ve felt the vibe shift, too.

From the ashes of institutional trust a phoenix of trust in people to make up their own minds is rising. 

Throwing everyone together, online, 24/7, and giving them information and access to each other directly, unfiltered, was bound to have a jarring impact. It’s become clear that no one – not politicians, not the smartest person you know – has all The Answers, and we’re learning how to live in a world where that’s evident. 

Instead of trust in any one person or group of people’s opinions at the experimentation layer, we can trust that the deeper layers will provide a backstop, and free the experimentation layer up for maximal variance.

A few examples:

Controversial ideas aren’t as likely to get you canceled anymore. The Overton Window seems to be expanding. Just this weekend, Moment of Zen interviewed Curtis Yarvin. I don’t know Yarvin’s ideas well enough to say which I agree with and disagree with, but airing them out and letting people decide which to incorporate and which to dismiss for themselves feels like a healthy step. It’s been said before, but X under Elon has played a role here, and I think it’s a positive development. 

Crypto is coming back, using the building blocks from the last cycle to create something about which Larry Fink, the head of the world’s largest asset manager, BlackRock, said: “We believe the next step going forward will be the tokenization of financial assets.”

People want the tokenization of financial assets, proof of personhood, and stablecoins without the monkey jpegs and memecoins, but I guess the point I’m trying to make is that you don’t get the former without the latter. 

This meme only ever makes sense with the benefit of hindsight:

These are just a few examples, but there are many. You can probably name a bunch. And then there’s Rabbit. 

The market quickly drowned out Rabbit’s naysayers; now it will be up to Rabbit to make it through the experimentation layer or die trying. 

Maybe Rabbit makes it through fully formed, iterating across new versions to eventually topple Apple. Maybe it fizzles. Maybe a big company or young entrepreneur riffs on one or two of its features and adds their own twist to create the next big thing. 

I don’t know how it will turn out. No one knows how it will turn out. You really do need to fuck around to find out. 

Here’s to the crazy ones.

Thanks to Dan for editing!

That’s all for today. We’ll be back in your inbox with a Weekly Dose on Friday.

Thanks for reading,


Pace Yourself

Welcome to the 1,282 newly Not Boring people who joined us over the holidays! If you haven’t subscribed, join 218,841 smart, curious folks by subscribing here:

Subscribe now

Today’s Not Boring is brought to you by… Alto

With Alto, you’ve got options. Its self-directed IRA platform lets you invest in a range of alternative assets across private equity, venture capital, real assets like farmland and fine wine, cryptocurrency, private startup angel deals, and more. Better yet, investing in alts with dollars earmarked for retirement means you can make long-term and tax-advantaged decisions. Here are the top three reasons why Alto is a no-brainer:

  • IRA Capital: So many people leave money on the table by not investing through an IRA. Alto makes investing, tracking, and saving through a self-directed IRA easy.

  • Horizon Matching: Investing with an SDIRA and investing in alts are both long-term decisions.

  • Investment Options: If you’re already investing in alts or want to allocate more to them, Alto offers a full menu of alts through its investment platform partners that are usually inaccessible to retail investors.

Plus, you can now also check out the new Alto Marketplace – the premier destination for discoverability-driven securities investing. As we start off the New Year, make a good decision and learn more about Alto.

Start Investing in Alts with Alto Today

(Of course, IRA rules and regulations apply, and you should seek advice from a tax professional when making investments. Alternative asset investments are inherently risky and are intended for sophisticated investors.)

Hi friends 👋,

Happy New Year, Happy Wednesday, and welcome back to Not Boring!

This is going to be a wild year.

We’re only ten days in, and robots are cooking and making coffee, the US launched its first commercial robotic mission to the Moon, the SEC tweeted that the Bitcoin ETFs were approved and took it back, and Bill Ackman is waging war on the universities.

Let’s get to it.

Pace Yourself

Pace Yourself

Every year, right around this time of year, people make predictions for the year ahead. 

It’s a ritual, a form of intellectual entertainment, and even a kind of synthesis: a chance to look back on everything that’s happened in human history, weighted for recency, and package it all up into a guess at what comes next. 

The fun thing about these predictions is that they are invariably wrong. Or, they’re right in the same way that a broken clock is, and as often. You should not bet your life savings on these predictions. 

So with that, I only have one prediction, so obvious as to be meaningless: 

This year will be crazier than last year. 

Until I receive disconfirming evidence, consider this an evergreen prediction. I’ve been making it practically since I started writing Not Boring, and I haven’t been wrong yet. 

All of the craziness can be discombobulating. I love the acceleration, but I get that change is scary, even when it’s good. 

The only times I can really remember crying, aside from deaths, are when I’ve had to move from one thing to the next: at middle school graduation, at high school graduation, at college graduation, moving from my first NYC apartment to my second. 

The funny thing, in retrospect, is that each thing I cried about moving on to became a thing I cried about moving on from. But it’s hard to grok that in the moment, because you know full well what you’re giving up but can only vaguely picture what you’ll gain. And because you forget that while the only constant is change, the important constants survive change. 

Anyway, check this out, from Stewart Brand, in his 2000 book, The Clock of the Long Now:

The division of powers among the layers of civilization lets us relax about a few of our worries. We don’t have to deplore technology and business changing rapidly while government controls, cultural mores, and “wisdom” change slowly. That’s their job. Also, we don’t have to fear destabilizing positive-feedback loops (such as the Singularity) crashing the whole system. Such disruption can usually be isolated and absorbed. The total effect of the pace layers is that they provide a many-leveled corrective, stabilizing feedback throughout the system. It is precisely in the apparent contradictions between the pace layers that civilization finds its surest health.

He wrote that 24 years ago, when I was in middle school. It’s almost perfectly relevant today. 

The layers of civilization Brand is talking about are Pace Layers, a concept he came up with in the late 1990s to describe “how complex systems learn and keep learning.” Maybe you’ve seen the graphic: 

The beauty of pace layers is that they can describe so many systems – from buildings to forests to technology to civilizations – because, as Brand wrote: 

“All durable dynamic systems have this sort of structure. It is what makes them adaptable and robust.” 

Some parts of a system move faster, and some things move slower, and that is good. 

Nature moves the most slowly, on the scale of eons. 

Fashion is practically disposable; it moves on the scale of months, weeks, days, or even hours. 

Each layer moves a little faster than the one below it, which holds it in check, and a little slower than the one above it, which pushes it and speeds it up. 

From the fastest layers to the slowest layers in the system, the relationship can be described as follows:

Fast learns, slow remembers. Fast proposes, slow disposes. Fast is discontinuous, slow is continuous. Fast and small instructs slow and big by accrued innovation and by occasional revolution. Slow and big controls small and fast by constraint and constancy.  Fast gets all our attention, slow has all the power.

“In a durable society,” Brand writes, “each level is allowed to operate at its own pace, safely sustained by the slower levels below and kept invigorated by the livelier levels above.” 

The core concept – that things have different layers which move at different paces – scales up and down beautifully. 

Brand first discovered the framework that would become pace layers when he set out to figure out why architects design buildings that people detest. That question brought him to British architect Frank Duffy, who told Brand that “there isn’t such a thing as a building. A building properly conceived is several layers of longevity of built components.” Duffy called these layers Shearing Layers.

Shearing Layers – Frank Duffy, adapted by Stewart Brand

So civilizations, buildings, what else? 

I’ve probably read Brand’s Pace Layering essay a dozen times over the past few years. I even read Brand’s How Buildings Learn, which introduced Shearing Layers, when, despite knowing very little about design or construction, I was promoted to a role at Breather overseeing the Design & Construction team. 

The concept grabbed me this time because as soon as I re-read it, I saw it everywhere.

For one, there’s this thing I’ve noticed among a few people whose minds I really respect, that I’ve been trying to put into words. 

They’ve built a solid frame of knowledge and beliefs about the things that change more slowly on which they hang newer, faster-moving information in its proper place. The new thing that most people see as the main thing, they treat like a small thing in the context of a much longer, larger thing. Maybe it will impact the longer, larger thing – that’s where the action is! – but maybe it won’t. 

They don’t get swept up in the new thing, but they don’t dismiss it, either, and they certainly aren’t scared by it. By building a firm base of old ideas, they seem to enjoy new things and ideas even more, because they see where they fit in the bigger picture and realize how hard it is for a new thing to shake the old things up. 

Whether or not they’d call it this, I think they’re thinking in pace layers. 

I want to think more like that.

I mainly write about things at the fashion, or new thing, layer, the “fast and small” that might “instruct slow and big by accrued innovation and occasional revolution.” Sometimes, like today, I write about the slow and big ideas that take place deeper in the stack, the things that change more slowly and have all the power. I focus more on small and fast, but as smaller things move faster and faster, I find myself drawn to big and slow to put small and fast into context. 

There’s this great passage in Haruki Murakami’s Norwegian Wood in which Nagasawa tells the book’s protagonist, Toru Watanabe, that he only reads books whose authors have been dead for at least thirty years. 

Toru Watanabe (l) and Nagasawa (r), Norwegian Wood (2010)

In it, Nagasawa says the famous line, “If you only read the books that everyone else is reading, you can only think what everyone else is thinking.”

Nagasawa is half-right! I think the trick is to read the old stuff to form your own base layer of beliefs, the ones that change much more slowly, but to use that base to engage with new things from your own perspective. 

Not Boring is ostensibly a newsletter about business and technology, but as technology gets more powerful, and has a bigger impact on the other layers, what I write would be hollow without putting things into a larger historical, economic, and even philosophical context. The important thing is recognizing which layer of the stack we’re talking about. 

But certainly, pace layering applies to thinking about new businesses and technologies, too. 

Most of the businesses and technologies I write about operate at the top layer, although some, like energy, are much deeper. The odds are, many of the companies I write about will fail. I’ve been more drawn to hard startups doing new things in part because even if they fail, they might impact the layers below them. 

I think one of the disagreements between me and crypto critics, for example, is that while we agree that many of the specific products and meme coins operating at the fashion layer are dumb, I believe that all of those fleeting experiments, taken together, have a real shot at improving the layers beneath them. I wrote about crypto as a laboratory for complex problems, and I think the more experiments we run at the top layer, the more of a shot we have to improve the commerce, infrastructure, and even governance layers beneath it.  

More generally, thinking in pace layers helps embrace variance and novelty at the top layer – the more crazy experiments the better! – because, as Brand wrote, “The division of powers among the layers of civilization lets us relax about a few of our worries.” The system operates in such a way that it’s really hard to break, and usually, only the really strong technologies and ideas can break through from the fashion layer down through the stack. If they survive that journey, we’re better for it. 

At the same time, despite the fact that I’m a red-blooded techno-capitalist, one thing I disagree with some people in tech about is that I think strong institutions are really important. I want Harvard to figure its shit out. I agree with Noah Smith that it would be great if America had a bigger, better bureaucracy. Strong infrastructure and good governance provide freedom through constraint. 

Brand includes a relevant quote from historian Eugen Rosenstock-Huessy in his essay: “Every form of civilization is a wise equilibrium between firm substructure and soaring liberty.”  

That doesn’t mean institutions are perfect by a long shot. Both the Biden Administration Executive Order on AI and the SEC’s stance towards crypto seem to be examples of governance trying to move at the speed of fashion, and that’s dangerous, because governance is stickier than fashion. The NRC’s half-century misregulation of nuclear is a prime example of what can go wrong when regulations are put in place based on the fashion of an era. 

But the system works pretty well. Tech can fight back, and it is. We can accelerate faster at the top layers because of the friction with the lower layers. Tech is like a kid with good parents: free to try new things and even fuck up sometimes in the knowledge that there’s a safety net.  

A more concrete example might clarify how a firm base creates room for freedom. 

There’s that Jeff Bezos question: “What’s not going to change in the next 10 years?” 

That’s like the Amazon equivalent of the nature layer. Of course, on top of that, Amazon employees try all sorts of things to deliver low prices, fast delivery times, and vast selection. Some of those things might be at the culture layer, in Amazon’s customer-obsessed, always day 1 mantras. Others will be at the infrastructure layer, in the form of new warehouses. Some will be at the commerce layer, like what gets included in the Amazon Prime subscription. And a whole lot will be at the fashion layer, button tweaks and deals and whatever else they can come up with to deliver low prices, fast delivery times, and vast selection. 

That’s pace layering. It’s not just that Amazon has a long-term focus; it’s that, with a bedrock belief about what won’t change at the bottom, Amazon can move faster and experiment more at each layer up the stack.

I could go on, but I promised myself that I would try to write shorter essays this year, so I’m going to wrap this up. 

Pace layering is a really useful way to look at the world as things speed up, and it works at a number of scales. 

Personally, I’m incorporating the concept in a few ways. 

Because I get so excited about everything happening on the top layers, I want to deepen my knowledge at the base layers. I’ve been drawn to religion recently. I read Pierre Teilhard de Chardin’s The Phenomenon of Man, now I’m reading Tom Holland’s Dominion, Paul Johnson’s Jesus: A Biography from a Believer is up next. Building that base frees me up to explore even weirder ideas in context: I spent a bunch of time diving into UFOs over the break. 

On the investing side, I’m starting to think about pace layers when I look at companies. Instead of forming theses on particular technologies or products, I want to form theses deeper in the stack and use those to put companies in context. One that I’m excited about, and will write about more deeply soon, is something I touched on in Tech is Going to Get Much Bigger: that the biggest companies built today are going to be the ones that use technology to dramatically improve margins in big categories, like Anduril is in defense. 

As I write Not Boring essays this year, I want to be more clear about which layer I’m writing about. Is this company or technology going to change everything immediately, or is it one that has a chance, if things go right, to earn a place deeper and deeper in civilization’s stack? A big theme will be turning curiosity into conviction: having a strong enough base that I can take even the wildest ideas seriously, and study them until I form a view on whether they’ll flitter away like most things at the fashion layer or whether they’ll be more lasting, despite short-term ups and downs. 

And finally, an evergreen challenge I’m working on is how to make the world more optimistic. Charts like this one from the FT keep me up at night:

Financial Times

Optimism matters. Ideas matter. But yelling “Be more optimistic!” while pointing furiously at the things happening at the top layer misses some important context that I think an appreciation for the other layers can help fill in. 

Nothing is black and white. There are layers to this shit. The stronger our base, the faster we can accelerate.

We’re in for a wild ride. Pace yourself.

That’s all for today. We’ll be back in your inbox with a Weekly Dose on Friday.

Thanks for reading,


Momentum, Consolidation, and Breakout

Welcome to the 27 newly Not Boring people who have joined us since last week! If you haven’t subscribed, join 217,559 smart, curious folks by subscribing here:

Subscribe now

Today’s Not Boring is brought to you by…Merge

Merge is a single API to add hundreds of integrations to your app. In fact, it’s the only leader in the Unified API space, according to G2.  It’s loved and used by companies like Ramp, Navan, and literally hundreds of other tech companies. Here’s why:

  • Seamlessly integrate: Merge ensures a no-hassle onboarding experience for your customers, secure data, and continuity across your entire software stack.

  • More, with less: Engineers can quickly build and maintain all your business’ integrations, without costly and timely custom work.

  • New Standard: Merge covers hundreds of integrations, handles the lifetime of maintenance, is fully secure and compliant, and lets your team move faster. That’s why G2 considers it the only leader in the Unified API space.

Stop wasting your engineering team’s time building out dozens of bespoke, complex integrations and please just use Merge – it’ll save you time and money. And here’s the kicker, Merge is offering Not Boring readers a $5,000 credit on annual plans.

Claim Your $5k Credit

Hi friends 👋,

Happy Tuesday! This is the last Not Boring essay of 2023. Normally, in the last essay of the year, I write about how next year is going to be even wilder than this one was. That’s been true every year since I started writing Not Boring, and I don’t think it’s going to change any time soon.

This year, I’m doing something a little bit different, because this year was a little bit different. I felt the momentum slow, and maybe you did too.

So I figured I’d go full transparency, tell you about the hard stuff, and then tell you why I think the slowing momentum is actually a good thing.

I’m fired up for 2024, and I can’t wait to see you on the other side.

Let’s get to it.

Momentum, Consolidation, and Breakout

I have to be honest with you, 2023 was the hardest year of my life. 

It wasn’t a bad year. It was a great year in a lot of ways. But it was hard.

The best way to describe it might be with a trading concept: consolidation.

After a big run up in the price of an asset, the momentum runs out and the price churns in a range for a while before breaking out or breaking down.

The first three years of Not Boring were a magical, momentum-filled run. It felt like a dream. I captured some of my disbelief at my good fortune on the newsletter’s one year birthday in A Not Boring Adventure, One Year In.

In early 2020, I’d quit my job at Breather, which would end up getting sold for pennies during COVID, thought about starting something with physical spaces, which got killed by COVID, and found myself unemployed, uninsured, with a case of COVID and a kid on the way. So I started writing a free newsletter full time. 

Somehow, over the next few years, the newsletter grew way bigger than I thought it could, I got to meet and become friends with founders and investors I’d admired from afar, joined a team that I respect deeply at a16z crypto as an advisor, built Not Boring into a real business, and launched Not Boring Capital to invest in the types of startups I’d been writing about. 

The momentum was palpable. I wrote about whatever I was most interested in, however I wanted to, and people read it! And shared it! And even people whose writing I loved shared it! The number just kept going up, the momentum just kept building. 

I remember thinking, “I shouldn’t take any opportunity that presents itself that would lock me into anything because the opportunities being presented just keep getting better.” 

That sounds cockier than it felt. I was just in awe of the magic of writing on the internet. Saying it out loud, though, it sounds an awful lot like not taking some profit on a winning trade. 

Anyway, thinking that is just one of the many mistakes I made because I thought that the momentum would continue forever as long as I kept working really hard. 

That’s not how momentum works, though, and it felt like my momentum hit a wall in 2023.

You can see it in the subscriber graph. 

It’s not perfect, but it looks a lot like consolidation. 

That’s why I say that this year hasn’t been bad – I’m trading in a range that I never would have expected to trade in a few years ago, and I feel incredibly lucky to get to do this – but that it’s been hard – I’ve never experienced something like having momentum and losing it before. 

I debated whether or not to write this piece, because even when the momentum slows, you want to make it seem like there’s momentum to the outside world. Strength leads to strength. And we do optimism here at Not Boring! 

But as I wrote in Optimism, the kind of optimism we’re talking about is “the very optimistic belief that things will inevitably go wrong, but that each new challenge is an opportunity for further progress.”

So I’m writing it, because I think a lot of people have felt similarly this year. I’ve had a ton of conversations with founders, fund managers, and others over the past year who have been feeling the same thing. Maybe you are too. 

And I’m writing it because I plan on breaking out in 2024 – I’ve been more energized over the past couple weeks than I have all year – and I’ve learned a lot this year that will be helpful when I do. Hopefully it’ll be helpful for you too. 

Plus, if I’m going to share the wins when things are going really well, I need to share the losses when they aren’t. 

So fuck it, let’s do this, get it out of the way, and get back to growing. 


If I had to summarize what made 2023 hard in a sentence, it would be this:  

Two things kept me up at night this year, one literally and one figuratively, and the exhaustion they caused showed up in my writing. 

Literally, what’s kept me up at night is Maya.

Maya is the greatest little girl in the world. She’s 16 months old and she already has a bubbly, spunky, funny, loving little personality. Her vocabulary might be bigger than mine, and she’s stringing together sentences like “Want give milk to Christmas tree.” Getting to talk to a person that little is the funniest thing, it’s like having a cute little robot that keeps getting smarter. That Puja and I get to hang out with Maya and Dev is the greatest joy in my life. 

Having said that… Maya might be the worst sleeper I’ve ever met. For her whole sixteen months, she’s woken up at some point between 10:30pm and 2am screaming bloody murder until we give her milk. Then, often, she wakes up again to scream a little bit more. Then, most days, she, Dev, or both of them wake up for good between 5:45 and 6:15am. She wakes up fresh as a daisy and wonderful, and I wake up feeling like an extra from The Last of Us. 

Anyway, see if you can spot where Maya was born in my Oura ring data: 

Look at that dropoff! And I’m not even fun! I barely drink and rarely go to bed after 10pm. 

Working tired is always hard, but in a normal job, you get to zombie out in meetings or have someone tell you what you need to get done, which you can do kind of on autopilot. Coming up with creative topics every week and writing thousands of words about them has not been easy, and I think it shows. 

There’s this sequence that’s almost become a routine in our house. I’ll stay up late Monday night and wake up really early on Tuesday morning to finish an essay, often rewriting it entirely from 5am – 8:50am when the narrative structure finally shows itself. I’ll hit send, tweet about the essay, and then refresh Twitter for a while to see if it’s picking up steam. Puja will ask how the piece is doing, and I’ll say, “Fine. It’s doing OK.” 

Some of that is just flat momentum. Sharing Not Boring pieces was novel and fun when it was new, you felt like you were sharing something that people didn’t know about. That naturally decays over time. Put faux-quantitatively, s=q*x/n, or something, where s is shareability, q is quality, x is novelty, and n is the number of your pieces people have read. After you’ve written a bunch, the piece needs to be higher quality or more novel or both for people to share it. That’s exacerbated by the fact that there’s a ton of great writing on startups out there now, much of which is written by new people with novel perspectives.
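The faux-formula above can be sketched in a few lines of equally faux Python. The numbers are made up purely to show the shape of the decay, not measured from anything:

```python
def shareability(quality: float, novelty: float, pieces_read: int) -> float:
    """Tongue-in-cheek model from the essay: s = q * x / n.

    quality and novelty on a 0-1 scale; pieces_read is how many of
    your pieces the reader has already seen (at least 1).
    """
    return quality * novelty / max(pieces_read, 1)

# Same quality of writing, but novelty fades and readers have seen
# more of the back catalog -- so the bar for sharing rises.
early = shareability(quality=0.8, novelty=0.9, pieces_read=5)
late = shareability(quality=0.8, novelty=0.4, pieces_read=150)
assert early > late
```

The point of the toy model is just that holding quality constant isn’t enough: as n grows, q or x (or both) have to rise for a piece to clear the same sharing threshold.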

And if I’m being realistic, I think my writing this year has been fine. I’ve written some pieces that I really like, and that I think will stand up well to the test of time, but I do feel like generally, I’ve lost a little heat on my fastball and haven’t thrown enough screwballs. 

This isn’t just in my head. I talked to a really perceptive founder the other day who has been reading Not Boring since the early days, and he asked me how I felt my writing has changed. While I thought about it, he said he had a thought: on the one hand, it had gotten more confident as I’ve learned more, but on the other, it was lacking the wild, fresh ideas that I had when I started in 2020. 

You know what? He was absolutely right! I’d been thinking the same thing and thought that maybe I was going crazy. It was surprisingly refreshing to hear it from someone I respected. 

I actually like that answer a lot more than general decay, because I can do something about it. If the bar is higher, I need to up my game. I’m not an LLM. IT’S TIME TO DIFFERENTIATE.

Figuratively, what’s kept me up at night is fundraising.

In the beginning of 2023, the very first post I wrote, I announced that I was raising Not Boring Capital Fund III, a $30 million fund. 

Almost a year later, I still haven’t closed the fund. We’ve done rolling closes and have been investing, but we’re nowhere close to the $30 million. 

It’s hard to admit, but it’s true. Fundraising has been absolutely brutal. 

Early on, I had a $5 million commitment drop to $1 million and then drop out completely because of the market. I’ve had multiple people commit to invest $2 million in the fund and either back out or simply disappear. Countless smaller commitments have ghosted when it came time to sign and wire. I won’t even try to count the “No”s.

If you’ve been following the venture news, you’ve probably read that it’s a brutal market for venture fundraising, that it’s particularly brutal for emerging managers, and that it’s triply brutal for emerging managers who raised their first funds at the peak in 2021. 

All of that is true, but it isn’t productive and it doesn’t help me sleep at night. 

I’m more convinced than I was coming into the year that the strategy – investing in hard startups, mainly crypto, bio, and deep tech startups – is the right one, and the right one for Not Boring Capital specifically. 

But as a lot of founders have learned over the past year, a good strategy is necessary but not sufficient. You need the capital to execute it. And not having it is fully on me. 

I made the assumption that I’d have enough momentum to raise Fund III quickly, and when that didn’t happen, the process dragged out. The more any fundraising process drags out, the harder it gets. I’ve never felt like I had less leverage or momentum than I have throughout this process. You can’t manufacture momentum. Especially in a market like this, people can smell it. 

What’s kept me up at night isn’t the lack of leverage or the rejections; it’s the fact that there have been great companies and founders that I’ve wanted to invest in but couldn’t because I didn’t run a tight process. While I put a ton of time, thought, and effort into our investing strategy, and I think it’s the right one for us, I didn’t put enough into our firm building strategy.

On the bright side, limited funds have also enforced a healthy discipline that I might not have deeply understood without living through this market. We’ve built a very strong early stage portfolio in Fund III, and a smaller fund means that it’ll be easier to return capital to LPs. I just think that we’re in one of those rare paradigm shift periods, the biggest yet, and I would like to be deploying more aggressively into it. 

Nobody owes you the right to be in market, though. You need to earn it.

There are a bunch of lessons I’ve taken from this process that can be boiled down to the idea that to earn the right to manage a fund over the longer term, you need to be good at all aspects of fund management, including and maybe especially having capital to deploy when others aren’t. There’s no venture capital without capital. 

More generally, when momentum is on your side, people focus on your strengths and forgive your weaknesses. When the momentum stops, they scrutinize the whole thing. This is true for stocks, funds, people, and even essays. 

That’s good! It’s healthy. It forces you to get fit, to shore up weak points, and to tighten up arguments. It forces you to master your craft. 

With the benefit of hindsight, I think the move is to use momentum when you have it to build things that last no matter the market or your personal circumstances. 

Personally, I wish that instead of meeting a million people, I’d spent more time building strong relationships with a handful of people I admire. Instead of investing in hundreds of companies to not miss the winners, I wish that I’d focused more on a smaller number that I think will really matter. Instead of taking the easier path of being a non-institutional fund, I wish I’d spent time meeting with institutional LPs learning what they look for long before ever trying to raise from them, and raising my bar to meet theirs. Instead of assuming that I didn’t need to be great at fund administration, I wish that I’d spent the time to study the art and science of building an enduring venture firm. 

When all doors are open to you, it’s important to pick the right doors. 

One of the questions that’s tortured me this year is whether I’ll get another shot at doing those things the way I wish I had with the benefit of hindsight, or whether you only get one shot at a first impression. That question has kept me up at night, too. 

The answer I’ve come to is one of the reasons I view this as a year of consolidation and not breakdown: yes, I will, if I earn it. 

When I started writing Not Boring, I thought that my friends had a secret group chat where they made fun of me for writing a newsletter. They didn’t. The truth is that nobody thinks about you nearly as much as you think about you. 

If you do mediocre work, no one thinks about you. On the rare occasions when you do great work, people do. You can always earn that right back by doing great work. 

Write great essays, invest in great companies, and stay alive long enough for them to prove you right (and return money to LPs).  

I’ve found the thing that I want to do for the rest of my career – investing in and telling the stories of the companies shaping the world – and the fact that I still love it despite the difficulty of this year is a great thing to learn. I’m not going to quit; I’m more motivated to do what it takes to become great at it, momentum or not. 

As Leo said:  

Breakout Time

So to recap, this has been a hard year. I think it’s been a hard year for a lot of people. It feels silly to call this hard when there are wars happening; hard is relative. I consider myself very lucky that it’s been hard in the ways it’s been hard: getting to write this newsletter, raise a fund, and raise Maya are all priceless privileges. 

Throughout the year, as I’ve agonized over the loss of momentum, I’ve wondered if this is it. Once the momentum slows, is it so over or is it possible to be so back? 

If you know me, you know I think it’s possible to be so back, but from inside of 2023 it’s been hard to grok that. Thinking about this as a consolidation has clarified it for me. 

Consolidation can go one of two ways: breakout or breakdown. The momentum isn’t guaranteed to pick back up, but it’s possible, if you play it right. 

In writing this piece, and using it as an opportunity to reflect on this year, I actually think it’s been a really productive consolidation period. I did a lot that I’m proud of, despite the sleep deprivation and anxiety, and we’re coming out of it with more assets than we came in with. 

Age of Miracles. We launched a new podcast, Age of Miracles, that’s gotten over 100k downloads in its first season. I know a lot more about energy, particularly fission and fusion, than I did coming in, and thinking about energy has improved the way that I think about the rest of the tech landscape. That’s shown up in pieces like Tech is Going to Get Much Bigger and The Morality of Having Kids in a Magical, Maybe Simulated World. 

I got to work with an amazing co-host, Julia DeWahl, meet incredibly smart people across the energy landscape, including founders and investors, and built a strong partnership with Turpentine. Creating a narrative podcast was more challenging than I expected it to be – tons of interviews, script writing, recording, pickups – but we got so much better at it as the season went along. I think we can grow Age of Miracles into something special for listeners that also benefits the fund as I get smarter on frontier industries.  

Newsletter. Although I think I lost some heat on my fastball and didn’t bring the freshness I’ve brought in years past, I wrote a lot of essays that developed the way that I think about the world and the outlook for tech. 

Some of my favorites are the two I mentioned above, Tech is Going to Get Much Bigger and The Morality of Having Kids in a Magical, Maybe Simulated World (which I think actually had the most new ideas of any piece I wrote but a bad title), OpenAI & Grand Strategy, Riskophilia, Capitalism Onchained, I, Exponential, Sci-Fi Idea Bank, WTF Happened in 2023?, In Defense of Strategy, When to Dig a Moat, How to Fix a Country in 12 Days, Small Applications Growing Protocols, Intelligence Superabundance, Attention is All You Need, The Fusion Race, Love in the Time of Replika, and Differentiation

Going through the list to pick these, I actually think there was a bunch of good stuff and new ideas in Not Boring this year! I just need to make the writing pop a little more. We’ll see how they stand the test of time. 

Deep Dives. One of my favorite types of pieces to write is the Deep Dive, sponsored or not. There’s nothing I enjoy more than getting to dig in with founders to understand a complex business deeply enough to explain it well. And I think we really raised the bar on the quality and potential importance of the companies we wrote about to align with our new, tightened fund thesis. I wrote pieces on Varda and Atomic AI (with Elliot), Wander, Array Labs, Anduril, LayerZero, Ezra, and The Browser Company. Besides Anduril and LayerZero, which are already there, there are at least a couple billion dollar-plus companies in that group. I want to do more Deep Dives in 2024, particularly on our portfolio companies, and become the best in the world at telling the stories of complex, ambitious tech startups.

Not Boring Capital. Despite sluggish fundraising, we’ve managed to build a portfolio of pretty fantastic companies, in Fund III and prior funds. 

As you read this, Not Boring Capital portfolio companies are building foundation models for RNA and for gene interactions (👀), creating physics-based AI chips, building hyperlogistics networks under cities, manufacturing drugs in space, accelerating manufacturing to make parts for space, growing cabbages that produce GLP-1, crafting Von Neumann universal constructors, rebuilding the school system and giving parents choice in their kids’ education, decentralizing ID to build a more human economic system, bringing abundant energy to Africa, infusing AI in code, bringing real world assets onchain, creating virtual worlds, reimagining how the physical one gets built, and using light and DNA barcodes to measure biology in context. This list is non-exhaustive.

A few of our Fund I and II portfolio companies have shut down this year, but those writedowns are more than outweighed by the markups other portfolio companies have received, even in the bear market. Both portfolios, but especially Fund II, are chock full of breakout and potential breakout companies, and the crypto portion of the portfolio is performing particularly well as that market thaws. We tried to back solid projects with real use cases, like Worldcoin and DIMO among others, and it’s starting to pay off. There will be more to come here. 

On the team side, working with Elliot and learning about the mind blowing things happening in bio has been a highlight. Elliot’s combination of science, code, writing, investing discipline, and ability to dream about the future is incredibly rare, and I fully expect him to be one of the best techbio investors to ever do it when all is said and done. 

I’m incredibly bullish on our strategy as the most important companies shift from pure software companies to those that combine bits and atoms, and as the world shifts from centralization to decentralization. We can live in an Age of Miracles, and we want to continue to back the founders who are going to bring it about. The crazier the better. 

We’re still raising (I’ll probably be raising until I die). Get in touch if you want to be a part of it.

Optimism. One of the things I’m proudest of is that we planted an optimistic flag early and we’ve stuck with it through thick and thin. 

As the markets tumbled and wars raged, it was very easy to get views and listens by telling people how awful things were. Pessimism will always sound smart and responsible. Showing LPs that you’re being cautious builds trust, even if it’s the wrong move at precisely the wrong time. 

As I wrote in the first ever Not Boring, Not Boring Newsletter #1 – delayed by a couple of days because I caught early COVID: 

I feel very lucky to get to work with my brother on putting out the Weekly Dose of Optimism every week to share the amazing things humans are doing in the world (all while he’s been slinging millions of creatine gummies). We didn’t freak out during the bear market, didn’t tell people that the world was ending or that recession was coming, and didn’t tell them to pivot to AI. We just kept publishing the good stuff and stayed true to our conviction that things would get better. I’ve heard from a lot of people that they’ve appreciated the optimism in the face of so much doom and gloom. 

Now, the techno-optimism is spreading. e/acc is taking off. Marc Andreessen wrote The Techno-Optimist Manifesto. Jason Carman launched S3 News. Pirate Wires is killing it. We love to see it. Optimism shapes the future, and I’m happy to do our small part to spread it. 

It was a hard year, but looking back, it was a year of good and necessary consolidation. We built up assets that are going to be valuable as the momentum returns. 

The most valuable asset I’ve built up during the consolidation may be the lessons: 

  1. Appreciate momentum for what it is. Enjoy it, but know that it’s fickle. Use the opportunities it provides to build something solid. Take profits sometimes. 

  2. Master all aspects of your craft. Focus on getting better at the things you can control even when momentum could carry you so that you’re ready when it doesn’t. 

  3. Stay Weird. Don’t let mastering your craft make you boring. Use it as a platform off of which you can get even weirder. Business in the back, party in the front. 

  4. Build conviction. If you know why you’re doing what you’re doing and why you believe what you believe, bad markets can be great opportunities.  

  5. Say no and yes more. Say no to the million little things and yes to a few big things. 

  6. Have Agency. Focus on being great at the things you can control. 

Both riding momentum and agonizing over its loss are low-agency moves. I can write better essays, invest more in relationships, and learn to manage Not Boring Capital like a fund that’s going to be investing no matter the market. All of that is in my control. 

I’m fired up for 2024. Expect fresher essays, even if it means that I write fewer of them. I can’t wait to let you know when I close the full fund, even if it kills me, and to share the stories of the world-bending companies we back from it. 

I’m going to take a couple weeks off from writing over the holidays to plan out exactly what all of that looks like, try to get some sleep, and recharge the batteries for a big year ahead. 

Not Boring is going to be here for a long time, and when we look back in a decade, I have a sneaking suspicion that 2023 is going to look like one of our most important years. Momentum is intoxicating but impermanent. Consolidation is critical. 

I think that will be the case for a lot of us. So go consolidate for a couple more weeks, rest, reflect, and plan, spend time with family and friends, and get ready to break out in 2024.

Thanks to Dan and Puja for editing (and putting up with me this year)!

That’s all for today, and the last essay of the year. But don’t go anywhere, we’ll be back in your inbox with a Weekly Dose on Friday. Happy holidays!

Thanks for reading,


LayerZero: The Language of the Omnichain

Welcome to the 200 newly Not Boring people who have joined us since last week! If you haven’t subscribed, join 217,532 smart, curious folks by subscribing here:

Subscribe now

Hi friends 👋,

Happy Thursday!

Crypto, pronounced dead in 2022, has come back to life. If you’ve been reading Not Boring through that period, you would have known that it would come back eventually.

But just because token prices are back up doesn’t mean that crypto is anywhere close to doing the things that I think it will be able to do. Better infrastructure will need to translate into better applications, which will demand better infrastructure, and on and on. It’s a never-ending project.

LayerZero Labs is building foundational infrastructure for crypto all the way at the foundation… LayerZero. It’s an omnichain messaging protocol, a transport layer that allows applications to send messages containing bytes of information from one chain to another. If it succeeds, each blockchain will be like a node in one larger, more performant network.

Today, it’s rolling out v2 of its protocol. I got an early look, and I’m excited to explain what it is and why I think it’s going to be important. Apologies for the late send — v2 is hot off the presses.

This essay is a Sponsored Deep Dive. While the piece is sponsored, these are all my real and genuine views. I haven’t written as many of them this year as I have before because I’m only writing them on companies that I think have a chance to be really important in an area I care about, and LayerZero Labs fits that bill.

I’ve been a fan of LayerZero since I had LayerZero Labs CEO Bryan Pellegrino on Not Boring Founders in early 2022, and I’m impressed with the improvements they’ve made in v2 and the potential of the business model.

You can read more about how I choose which companies to do deep dives on, and how I write them here.

As always, please, please for the love of God note that this is not investment advice. LayerZero doesn’t have a token, but it will at some point. When that time comes: please do not look back at this piece as investment advice. If you’re reading this from that future time, please note the following: I am a terrible trader. This is a sponsored piece. a16z, where I’m an advisor, is an investor in LayerZero Labs. I have no idea where ZRO will be priced when trading starts.

I just think it’s important infrastructure for crypto, and that crypto infrastructure is important.

Let’s get to it.

LayerZero: The Language of the Omnichain

In the beginning, there was Bitcoin. 

Bitcoin did one thing well. “A purely peer-to-peer version of electronic cash would allow online payments to be sent directly from one party to another without going through a financial institution.”

Soon, however, people wanted to do other things peer-to-peer without going through institutions. 

Enter Ethereum. 

Ethereum was designed to do many things. “What Ethereum intends to provide is a blockchain with a built-in fully fledged Turing-complete programming language that can be used to create ‘contracts’ that can be used to encode arbitrary state transition functions.”

With Ethereum, developers could theoretically build all sorts of things on blockchains: decentralized applications, domain names, exchanges, lending protocols, NFTs, DAOs. Turing-complete means that, theoretically, they could build anything onchain. 

The challenge was, when they tried to build those things in practice, it was slow and expensive. When users used those things, it jammed up the system and drove up costs.

So two things happened. 

First, right around 2018, a tsunami of new “Layer 1” blockchains emerged, either to take Ethereum on directly or to focus on doing a specific thing better than the more general Ethereum could. 

Second, a number of founders built Layer 2 (L2) chains that handle the execution of a lot of transactions faster and more cheaply and settle them in batches on Ethereum. 

Each L1 and L2 – the serious ones, at least – attempts to improve on Ethereum in one or more ways. They might be faster, cheaper, more scalable, more private, or more custom-built for specific use cases, from DeFi to payments to NFTs. 

If you think that sounds confusing, imagine what it feels like for a user trying to figure out where to do things onchain. Worse, imagine what it’s like for a developer trying to figure out which chain to build on!

If you want to build an application that uses crypto, there’s all sorts of complexity and trade-offs you have to make. Do you want to build something that’s cheaper and faster? Cool, if you do, it might be less secure and there may be fewer users and less liquidity on that faster, cheaper chain. 

All told, there are hundreds of Layer 1s and Layer 2s. Each makes different trade-offs, each has strengths and weaknesses. And each, for all intents and purposes, operates in a silo. 

If you view each blockchain as its own competing network, it’s a messy state of affairs: confusing for users, brutal for developers, and a recipe for fragmented user bases and liquidity. 

I don’t think that’s the right way to view blockchains though. 

In May 2021’s Own The Internet, I wrote:

There are a lot of Something Maxis, people who believe their thing is the one and only solution, but I subscribe to the idea that each successful L1 or L2 will focus on what it does best and interoperate with others who do something else best. I’m a Maximalist Minimalist.

I think the right way to view blockchains is as nodes in one larger Omnichain network.

In that view, competitors become complements. Each chain can focus on doing one thing really, really well. As more chains enter the fray, instead of becoming more confusing and fragmented, the Omnichain becomes more capable. 

It’s a nice idea, if you just assume that all the chains can easily interoperate. But they don’t. Go try bridging tokens from one chain to another if you don’t believe me. Blockchains don’t natively communicate with each other; many don’t even speak the same language.

What’s needed in this world is a way for all of these nodes to speak with each other swiftly and securely. What’s needed is a common language for the Omnichain. 

That’s what LayerZero Labs is building.

LayerZero Labs is the company developing the LayerZero protocol, an open-source messaging protocol that allows developers to build omnichain, interoperable applications. Messaging here doesn’t mean SMS – it means sending any arbitrary packet of bytes from one chain to another, so that a smart contract on Chain A can tell a smart contract on Chain B to do something. 

Note: I’ll use LayerZero Labs when referring to the company and LayerZero when referring to the protocol. 

At its simplest, it means that developers can build products that interoperate with more chains to reach more users, and that those users can more easily move value from chain-to-chain. 

At its wildest, it means that developers can choose to build products that use different chains for different features, and that users never have to think about which chain they’re on. 
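To make the “messaging” idea concrete, here is a deliberately toy sketch of what a transport layer does. All of the names and fields here are illustrative assumptions for the sake of the example, not LayerZero’s actual interfaces:

```python
from dataclasses import dataclass

@dataclass
class OmnichainMessage:
    """A toy cross-chain message -- fields are illustrative only."""
    src_chain: str    # chain where the sending contract lives
    dst_chain: str    # chain where the receiving contract lives
    dst_address: str  # address of the receiving contract
    payload: bytes    # arbitrary bytes: "tell contract B to do X"

def deliver(msg: OmnichainMessage, handlers: dict) -> str:
    """A pure transport layer's only job: move the bytes and hand them
    to the right contract. What the payload *means* is up to the app."""
    handler = handlers[(msg.dst_chain, msg.dst_address)]
    return handler(msg.src_chain, msg.payload)

# A hypothetical app on "arbitrum" registers a handler for its address;
# an app contract on "ethereum" sends it a message.
handlers = {("arbitrum", "0xB"): lambda src, data: f"got {data!r} from {src}"}
msg = OmnichainMessage("ethereum", "arbitrum", "0xB", b"mint")
print(deliver(msg, handlers))  # -> got b'mint' from ethereum
```

The sketch also hints at the design choice discussed below: the transport layer stays dumb about payload contents and about who verifies the message, which is what lets each application pick its own security configuration.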

Bryan Pellegrino, Ryan Zarick, and Caleb Banister founded the company almost accidentally in 2021 when the long-time friends and multiple-time co-founders caught wind of Binance Smart Chain’s (BSC) growing popularity and tried to build a game that worked across BSC and Ethereum. They were horrified by the state of bridges – which have since been hacked for at least $2.5 billion – and attracted to the very hard problem of building foundational infrastructure for crypto. 

They made a bet that blockchains would behave more like nodes in a larger network than as their own competing networks, and that if that were the case, there would need to be a TCP/IP for blockchains.

Just as TCP/IP enabled the internet by allowing different computers running different operating systems on different networks to communicate with each other, LayerZero allows smart contracts operating on different blockchains to communicate with each other. 

It’s core crypto infrastructure, operating a layer below everything else, hence… LayerZero. 

LayerZero v1 Whitepaper

While there are other messaging protocols on the market, including Wormhole and Axelar, LayerZero is a pure transport layer that allows applications to choose their own entities to verify and execute messages; the others compete as verification layers themselves. 

To build at that layer, LayerZero Labs believes, you need a protocol that is immutable, censorship-resistant, and permissionless.

The team’s bet on TCP/IP for an omnichain future seems more in the money with every new chain launched. 

In February of this year, in the middle of Crypto Winter, LayerZero Labs raised a $120 million Series B from a16z crypto (where I’m an advisor), Sequoia, and a host of other funds at a $3 billion valuation. It’s handled over 87 million messages and $40 billion across more than 50 chains in under two years since launching, far more than any competitor, without a single dollar lost. 

I first met Bryan in April 2022 when he came on the Not Boring Founders podcast. A few weeks ago, he emailed me to ask if I wanted a sneak peek at the v2 they’d been working on for the past 18 months and maybe even to write a deep dive – “If you’re into it would obviously be no bias / write whatever you’d like, would love to explore.” I jumped at it. 

While rising token prices might increase the chance that you read a long deep dive on a crypto company, LayerZero Labs is building the kind of thing that excites me personally whether it’s a bull market or a bear. In the last crypto piece I wrote in November – Blockchains as Platforms – I wrote:

Blockchains are platforms on top of which developers can build products…

When choosing which platforms to build on top of, developers need to weigh trade-offs. What benefits does the platform offer the product, and ultimately its users, and what drawbacks does the platform present? 

Different developers will make different trade-offs depending on what they’re building. 

Over time, as infrastructure improves, the trade-off calculus will come out in blockchains’ favor for more use cases.

LayerZero makes it easier for developers to choose among the benefits of particular chains when building their products, improving the trade-off calculus. And it operates at a beautiful place in the stack – as any new chain comes online and offers improvements in one area, LayerZero Labs can incorporate it as easily as deploying an endpoint. 

Just as TCP/IP turned a novel but limited set of local networks into one of humanity’s greatest achievements – the Internet – LayerZero has the potential to do the same for web3, at a time when baking decentralized ownership, governance, and control into the fabric of the web is as important as it’s ever been. 

LayerZero Labs is my kind of company in a lot of ways: 

  • I’m a maximalist minimalist: I don’t think there will or should be one chain to rule them all, and LayerZero’s existence will encourage chains to focus on their strengths and differentiation.

  • It will help make capitalism more effective: value flowing more efficiently from chain-to-chain and project-to-project will make onchain capitalism smoother. 

  • It’s enabling technology: products that wouldn’t otherwise have been developed will be built because LayerZero infrastructure exists, and what those products are is unpredictable. 

  • It’s a hell of a business: by building the hard stuff, LayerZero has earned the right to take a small fee on every message that gets sent over its rails. 

  • Protocols should make money: one of the things that first got me excited about crypto was the opportunity to build better open source infrastructure by rewarding developers.  

But LayerZero v1 didn’t make its value proposition as clear as it could have been.

Bryan told me that when they launched, the popular Crypto Twitter personality Cobie gave him some good advice: write a refutation article getting ahead of the things that sucked about the practical implementation of LayerZero in its early form. He didn’t. 

“We just launched and left it at that. The first time, we said, ‘Here’s how everything works,’ but not the why behind the seemingly simple things that we’d spent months debating.” 

The things they cared most about – immutability, permissionlessness, and censorship-resistance – they viewed and still view as non-negotiable. They’re baked into the architecture. But the other things – like how verification and execution work on the protocol – were meant to be a work-in-progress. 

After 18 months cooking in the lab, LayerZero Labs is launching its v2 with an improved verification and execution model today. This time, we’ll cover the why as well as the what: 

  • Parallels with TCP/IP 

  • How LayerZero Works

  • LayerZero v1: From Bridges to Oracles and Relayers  

  • How Interoperability Protocols Compete 

  • LayerZero v2

  • Default Refutation

  • Protocols that Make Money

  • Increasing the GDP of the Omnichain

To start, let’s go back to the early days of the very internet on which you’re reading this right now, no matter what computer or operating system you happen to be running. 

Parallels with TCP/IP

I don’t want to lose you with talk of TCP/IP stuff so early in the piece, but if you want to understand LayerZero, there’s no better place to begin. 

In my research, Perplexity served up a PDF of the first chapter in this 2002 gem: TCP/IP: The Ultimate Protocol Guide by Philip Miller. It begins: 

Two people can communicate effectively when they agree to use a common language. They could speak English, Spanish, French, or even sign language, but they must use the same language.

Computers work the same way. Transmission Control Protocol/Internet Protocol (TCP/IP) is like a language that computers speak. More specifically, TCP/IP is a set of rules that defines how two computers address each other and send data to each other. This set of rules is called a protocol. Multiple protocols that are grouped together form a protocol suite and work together as a protocol stack.

TCP/IP is a strong, fast, scalable, and efficient suite of protocols. This protocol stack is the de facto protocol of the Internet. As information exchange via the Internet becomes more widespread, more individuals and companies will need to understand TCP/IP. 

That last sentence is particularly interesting because it was written 28 years after internet godfathers Vint Cerf and Bob Kahn published “A Protocol for Packet Network Interconnection,” in which they first described TCP and its potential to upgrade ARPANET, the predecessor to the internet. When the piece was published, there were 558 million internet users in the world, roughly 10% of the 5.5 billion people who use the internet today. 

Miller was right that information exchange via the internet would become more widespread, but wrong that more individuals and companies would need to understand TCP/IP. 

Soon after he published the book, web 2.0 companies built products that abstracted away the complexity of the underlying protocols so that most people and companies never had to think about them. 

If you want to send an email, you don’t need to know anything about the Simple Mail Transfer Protocol (SMTP). You just open up Gmail. If you want to transfer a file, you don’t need to understand the File Transfer Protocol (FTP). You just drag the little file icon into Dropbox. 

And if you want to use the internet, you don’t need to understand the first thing about TCP/IP. 

When I made that graphic a couple of years ago, for my first piece on crypto, The Value Chain of the Open Metaverse, I didn’t think to include TCP/IP because it operates so far from the user, a layer below protocols like HTTP, SMTP, and FTP. But it’s the most critical layer in the stack. Without TCP/IP coordinating the flow of data from one computer to another, there would be no HTTP, SMTP, or FTP. 

What exactly does TCP/IP do? Miller continues: 

TCP/IP is a set of protocols that enable communication between computers. There was a time when it was not important for computers to communicate with each other. There was no need for a common protocol. But as computers became networked, the need arose for computers to agree on certain protocols.

Today, a network administrator can choose from many protocols, but the TCP/IP protocol is the most widely used. Part of the reason is that TCP/IP is the protocol of choice on the Internet—the world’s largest network. If you want a computer to communicate on the Internet, it’ll have to use TCP/IP.

Another reason for TCP/IP’s popularity is that it is compatible with almost every computer in the world. The TCP/IP stack is supported by current versions of all the major operating systems and network operating systems—including Windows 95/98, Windows NT, Windows 2000, Windows XP, Linux, Unix, and NetWare.

Unlike proprietary protocols developed by hardware and software vendors to make their equipment work, TCP/IP enjoys support from a variety of hardware and software vendors. Examples of companies that have products that work with TCP/IP include Microsoft, Novell, IBM, Apple, and Red Hat. Many other companies also support the TCP/IP protocol suite.

TCP/IP is sometimes referred to as “the language of the Internet.” 

My editor/brother Dan hates when I use block quotes, and here I’ve done it twice, and from an old manual on TCP/IP. But I did it for a reason, promise. In both sections, if you find-and-replace “TCP/IP” with “LayerZero,” “computer” with “blockchain,” and specific operating system and company names with blockchain names, it works almost perfectly! 

LayerZero is a protocol that enables communication between blockchains. There was a time when it was not important for blockchains to communicate with each other. There was no need for a common protocol. But as blockchains became networked, the need arose for blockchains to agree on certain protocols. 

Today, a developer can choose from many protocols, but the LayerZero protocol is the most widely used. Part of the reason is that LayerZero is the protocol of choice on the Omnichain–the world’s largest blockchain network. If you want a blockchain to communicate on the Omnichain, it’ll have to use LayerZero. 

Another reason for LayerZero’s popularity is that it is compatible with almost every blockchain in the world. LayerZero is supported by current versions of (mostly) all of the major blockchains and layer 2s – including Ethereum, BNB Chain, Aptos, Avalanche, Polygon, Optimism, Arbitrum, and Base. 

You get the point. LayerZero might be referred to as “the language of the Omnichain.” 

If TCP/IP was the enabling protocol for the Internet, I think that LayerZero can be the enabling protocol for the Omnichain – a network of blockchains, each leaning into its own points of differentiation. 

As value exchange via the Omnichain becomes more widespread, most individuals and companies won’t need to understand LayerZero, but smart, curious people like you should anyway. 

This is one of the hardest and most important problems in crypto, one that, if solved, unlocks a ton of value for everything built on top and makes capitalism flow more efficiently onchain. 

How LayerZero Works

OK, so how do you build TCP/IP for blockchains? How does LayerZero actually work? What does it do? And if you want to create infrastructure that lasts as long as TCP/IP has, what decisions do you need to make upfront? 

We’ll need to get a little technical here, but I’ll try to make it as simple as possible for now and then dive into details later. 

You can think of the LayerZero protocol as a series of pipes that deliver messages between blockchains, transport layer infrastructure that runs below the cities.

DALL-E via ChatGPT 

These are modern, highly technical pipes. If TCP/IP is plumbing, LayerZero is Pipedream.

Over time, if everything goes just right, these might be the pipes through which the global economy runs, so the LayerZero Labs team believes they must have three core characteristics: 

  • Permissionless: Anyone can build on top of LayerZero without approval from the team, no matter where they are or what they’re building, and anyone can run the infrastructure necessary to verify and execute messages. 

  • Immutable: Once an endpoint or library is in place, it can’t be changed, only appended. The protocol will exist in perpetuity, even if development on LayerZero stopped today (though breaking changes at the chain level would disrupt endpoints).

  • Censorship-Resistant: Even if someone had a gun to Bryan’s head, there would be no way for them to alter a message or prevent it from being sent. Neither governments nor financial institutions looking to front-run can ever access a message. 

If you don’t think about those three things upfront, Bryan told me, “the whole world will be built on rails that are corruptible.” 

In order to build incorruptible rails that can adapt, extend, and stand the test of time, LayerZero Labs needs to balance security where needed and flexibility where possible. 

From the beginning, LayerZero Labs (the developer) split LayerZero (the protocol) into components ranging from “completely locked down” to “flexible.” In v2, that split shows up as four components: 

LayerZero Whitepaper v2

Endpoints and Validation Libraries are the core of the LayerZero protocol.

Endpoints are low surface-area, open source smart contracts that live on each chain and can’t be changed once they’re in place. Validation Libraries are responsible for sending packets from one chain and validating them on the other and define how communication should be handled on each chain. 

They’re the immutable pipes through which messages flow. 

Verification and execution of messages are up to the application to decide; they can hook any set of verifiers and any executor they choose into the protocol. 

Verification provides the security for the messages being sent; some entity or entities verify that each message is legitimate. Execution essentially means paying gas so that smart contracts on each end can process the transaction described in the messages. 

In v1, verification and execution were provided by Oracles and Relayers. In v2, they’re provided by Decentralized Verifier Networks (DVNs) and Executors. 

Importantly, in v2, LayerZero Labs has totally decoupled security from execution in order to guarantee censorship-resistance without impacting liveness, or the ability of the protocol to continue functioning and processing transactions without interruption. You don’t want anyone to be able to mess with your messages, and you always want them to go through. 
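
To make the decoupling concrete, here’s a toy sketch in Python. None of this is the protocol’s actual interface – `Channel`, `attest`, and `execute` are illustrative names I made up – but it captures the idea: the application picks the verifiers whose attestations are required (security), while any executor at all can deliver a message once it’s verified (liveness).

```python
# Toy model of decoupled verification and execution. Illustrative names only.

class Channel:
    def __init__(self, required_dvns):
        self.required = set(required_dvns)  # verifier set configured by the app
        self.attestations = {}              # message -> DVNs that have verified it
        self.delivered = set()

    def attest(self, dvn, message):
        # Each DVN independently attests that it verified the message
        self.attestations.setdefault(message, set()).add(dvn)

    def verified(self, message):
        # Security: every required DVN must have attested
        return self.required <= self.attestations.get(message, set())

    def execute(self, executor, message):
        # Liveness: ANY executor can deliver a verified message --
        # if one executor goes down, another (or the app itself) steps in
        if self.verified(message):
            self.delivered.add(message)
            return True
        return False

ch = Channel(required_dvns={"dvn_1", "dvn_2"})
ch.attest("dvn_1", "msg")
assert not ch.execute("anyone", "msg")  # only one of two required DVNs so far
ch.attest("dvn_2", "msg")
assert ch.execute("anyone", "msg")      # fully verified; any executor delivers
```

Note that a verification failure blocks delivery, but no single executor can: execution is permissionless by construction.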

Don’t worry. I’ll explain what all of this means in detail. Getting there is going to take a journey through LayerZero’s history, the theory and practice of LayerZero v1, and the why behind the v2 upgrades. 

What hasn’t changed between v1 and v2 is that endpoints are immutable and validation libraries are append-only. The deepest layers of the protocol remain unchanged, because they were built to be unchangeable from the beginning. 

Security and execution are still modular and up to the application, too, but what has changed in v2 is exactly how that happens. To understand that, we need to understand why it was harder for applications to choose in practice than it was in theory. 

LayerZero v1: From Bridges to Oracles and Relayers  

LayerZero started out as a side project for Bryan, Ryan Zarick, and Caleb Banister. 

After leaving a successful poker career behind in 2014, Bryan kept his brain spinning on a series of hard projects. He tried bitcoin mining in 2014, built an AI model to measure pitchers’ performance that he sold to MLB teams in 2016, launched OpenToken, a platform that let people issue their own tokens, in 2018, and built the world’s best poker AI, Supremus, with Ryan, Caleb, and Meta AI researcher Noam Brown (the creator of CICERO) in 2020. 

“All my life, I’ve just liked to work on hard problems,” he told an interviewer for Sequoia. “More than anything else, that’s what attracts me.”

Later in 2020, when Binance Smart Chain launched and started to pick up real adoption, a rarity then among non-Bitcoin or Ethereum chains, he got the band back together. Bryan, Ryan, and Caleb built a game in which gladiators fought to the death on the cheaper, faster BSC, the winning gladiator was minted as an NFT, and the NFT was stored on the more secure Ethereum. It was a toy model to test the chains, and the test was fruitful. 

In the process of moving the NFT from BSC to Ethereum, they hit a problem: bridges.

Just as physical bridges connect two landmasses, bridges connect two blockchains. They’re how you send tokens – fungible or NFT – between chains. 

Because different chains have different ways of doing things, a token that works on one might not work on the other. So if you wanted to send your tokens from say, Ethereum to Polygon, in order to use an app on Polygon, you’d go to a bridge and the bridge would do a few things:

  1. Lock the Original Asset. Essentially, the bridge sends your ETH to a smart contract that holds it “securely.” 

  2. Create a Wrapped Version. The bridge issues a “wrapped” version of your ETH, WETH, that represents your original ETH but works with Polygon’s rules.

  3. Give You the Wrapped Version. The bridge puts the WETH in your wallet on Polygon, which you can use to do things on Polygon. 

  4. Redeem the Original Asset. To redeem your original ETH on Ethereum, you send your WETH back to the bridge, the bridge burns it, unlocks your original ETH, and puts it in your Ethereum wallet, where you can use it to do things on Ethereum. 
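
Those four steps are easy to simulate. Here’s a toy Python sketch (the `Bridge` class and its methods are made-up names, not any real bridge’s contract). The thing to watch is the `locked` balance – the pot that makes bridges such a target:

```python
# Toy simulation of the lock-and-mint bridge flow. Illustrative names only.

class Bridge:
    """Locks the native asset on the source chain, mints a wrapped version."""

    def __init__(self):
        self.locked = 0          # Step 1: native tokens held by the bridge contract
        self.wrapped_supply = 0  # Steps 2-3: wrapped tokens on the destination chain

    def lock_and_mint(self, amount):
        # Lock the original asset, then create and hand out the wrapped version
        self.locked += amount
        self.wrapped_supply += amount
        return amount  # wrapped tokens credited to the user's destination wallet

    def burn_and_redeem(self, amount):
        # Step 4: burn the wrapped tokens, release the original asset
        assert amount <= self.wrapped_supply, "can't redeem more than was wrapped"
        self.wrapped_supply -= amount
        self.locked -= amount
        return amount  # native tokens back in the user's source-chain wallet

bridge = Bridge()
bridge.lock_and_mint(10)   # bridge 10 ETH over
print(bridge.locked)       # 10 -- the honeypot hackers go after
bridge.burn_and_redeem(10)
print(bridge.locked)       # 0
```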

As Bryan, Ryan, and Caleb discovered, though, bridges have a couple of major problems.

First, they’re a security nightmare. All of that locked ETH (or whatever locked token) is a gigantic flashing sign that there’s a pot of money hackers can try to break into. In 2022, hackers exploited the Wormhole Bridge between Solana and Ethereum for $326 million and the Ronin Bridge between Ronin and Ethereum for $624 million. The US Treasury Department alleged that the North Korean hacking group Lazarus was behind the Ronin hack, highlighting that these systems need to be robust against state-level actors. 

Second, they’re slow, painful, and a little bit scary. Bridging tokens means figuring out how to get the native token of the chain you’re bridging to in order to pay gas fees for that end of the transaction, putting your valuable tokens into a website, signing them away, and then waiting for a painfully long time. More than once, I’ve worried that my tokens were lost forever while I waited. 

At launch, Bryan did a video comparing LayerZero to bridges that shows how slow bridging can be: 

So the trio realized that they’d need to go deeper into the stack to solve the problem. They’d need to build foundational infrastructure on top of which bridges could be built. They’d need to build a protocol that would transport arbitrary messages across blockchains. 

LayerZero was born. 

By building a messaging protocol instead of a bridge, LayerZero Labs eliminated the idea of locking and wrapping tokens, got rid of the honeypot, bundled a bunch of steps into one message, and removed the need to worry about gas. 

Chain A sends a message containing bytes to Chain B, and Chain B executes whatever instructions are contained in the bytes. 

For times when that message contains instructions to transfer tokens, LayerZero Labs introduced the Omnichain Fungible Token (OFT) Standard. According to the docs, “This standard works by burning tokens on the source chain whenever an omnichain transfer is initiated, sending a message via the protocol and delivering a function call to the destination contract to mint the same number of tokens burned, creating a unified supply across both networks.” 

With OFT, there is no pot of tokens sitting somewhere – tokens are burned on one side and re-minted on the other. 
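
The contrast with lock-and-mint shows up clearly in a toy sketch. Tracking per-chain supply in a plain dict (no real contract code implied), burn-and-mint keeps the total supply constant with nothing locked anywhere:

```python
# Toy sketch of OFT-style burn-and-mint. Per-chain supply as a plain dict.

supply = {"chain_a": 100, "chain_b": 0}  # unified token supply, split by chain

def oft_transfer(src, dst, amount):
    """Burn on the source chain; a message tells the destination to mint."""
    assert supply[src] >= amount, "can't burn more than the source holds"
    supply[src] -= amount   # burn on the source chain
    # ...a LayerZero message carries the mint instruction to the destination...
    supply[dst] += amount   # mint the same number on the destination chain

oft_transfer("chain_a", "chain_b", 40)
print(supply)                # {'chain_a': 60, 'chain_b': 40}
print(sum(supply.values()))  # 100 -- total supply unchanged, nothing locked
```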

LayerZero launched on St. Patrick’s Day, March 17, 2022. Alongside the launch, the LayerZero founders built a bridge, Stargate, to show it off. (It’s now governed by a DAO.) Stargate uses the LayerZero protocol to move native assets instead of locking and wrapping, which raises an important distinction. 

LayerZero is the messaging rails that operate underneath liquidity transfers. Bridges, like Stargate, operate on top of LayerZero. Those bridge smart contracts still have locked liquidity in them for non-OFT tokens, and hackers can still target the bridge’s liquidity pools. The more protocols adopt standards like OFT, and the Ethereum community’s similar xERC20, the less opportunity there is for hackers. 

Since launch day, LayerZero has delivered over 87 million messages, with over 31,000 smart contracts live on Mainnet, and Bryan tells me that over $40 billion has moved through its pipes… all without a hack. 

LayerZero also allows for messages to be composed across chains. For example, a user could move OFT X from Chain A, swap it on chain B for OFT Y, and then use OFT Y to purchase an NFT on chain B. All of this can be paid for in the source chain’s token and handled in a single transaction from the user’s perspective. Applications can use composability to create magical experiences that abstract away complexity. 

LayerZero v1 created a new architecture for messaging between chains: a locked down transport layer protocol with modular security and execution on top. 

The v1 Whitepaper introduced the concepts of immutable endpoints and append-only libraries that are still in place today, but it handled message verification and execution differently than v2 does. 

Where v2 has DVNs and executors, v1 had Oracles and Relayers. 

LayerZero v1 Whitepaper

Oracles and Relayers made sense on paper. 

The Oracle’s job is to fetch block headers – like a summary of each block on the blockchain – from Chain A and send them to Chain B so that each chain can verify the other’s current state and integrity. At launch, Chainlink, a leading oracle, was the most popular option, and in September, LayerZero Labs announced that Google Cloud would become the default Oracle. 

In theory, applications could choose their Oracle, and Bryan told me the team assumed that someone would make a meta-oracle combining a number of different verification sources – bridges, oracles, and attestation services. In practice, most applications stuck with the default, and no one created a meta-oracle.

The Relayer’s job, on the other hand, is to provide the necessary proof that a particular event or transaction happened on Chain A, which Chain B could then act upon. For example, it could say, “Yes, the user approved sending 10 ETH from Ethereum to Polygon, and we, Chain A, have burnt the 10 ETH. Your turn.”  Crucially, the Relayer was responsible for both security and execution. It handled things like quoting pricing pairs across 40+ different chains in real-time, sending 50-80 billion RPC calls per month to get information, writing millions of messages to chain, and abstracting gas payments away from the user. 
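
Together, the two roles formed a cross-check: a message was only accepted if the Relayer’s proof was consistent with the block header the Oracle had delivered. Here’s a toy sketch of that check – hashing stands in for real Merkle proofs, and all the names are illustrative:

```python
import hashlib

# Toy version of v1's two-party check: accept a message only when the
# Relayer's claimed transactions reproduce the Oracle's block header.

def block_header(transactions):
    """The Oracle delivers a summary (here, just a hash) of the source block."""
    return hashlib.sha256("|".join(transactions).encode()).hexdigest()

def accept_message(header_from_oracle, txs_from_relayer, tx):
    # Valid only if the Relayer's transactions hash to the Oracle's header
    # AND the transaction in question is among them
    return (block_header(txs_from_relayer) == header_from_oracle
            and tx in txs_from_relayer)

txs = ["burn 10 ETH on Ethereum", "mint NFT"]
header = block_header(txs)  # the Oracle's independent view of the block
assert accept_message(header, txs, "burn 10 ETH on Ethereum")
assert not accept_message(header, ["burn 99 ETH"], "burn 99 ETH")  # lying Relayer
```

A lying Relayer fails the check as long as the Oracle is honest, which is why the independence of the two parties carried the security model.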

In theory, anyone could build and deploy their own Relayer. In practice, Bryan told me, “A Relayer is impossibly hard to run. We had to build an internal custodian, real-time n^2 pricing, write more messages to chain than anyone in the world, and essentially run Alchemy internally.” So no one built Relayers. 

And because no one built Relayers, LayerZero Labs was a potential chokepoint. If LayerZero Labs’ Relayer went down, the whole network would have a liveness issue – transactions wouldn’t go through – until someone, likely the App itself, came in and picked up transactions. That’s bad from an operational perspective, and it’s equally bad from a censorship-resistance perspective. If the government wanted to temporarily shut down the protocol, all they’d have to do is shut down LayerZero Labs’ Relayer. 

“It goes against everything we believed in,” he told me. 

In theory, LayerZero v1 was trustless – developers could plug in the Oracle and Relayer of their choice. In practice, it wasn’t, really. It required trusting Chainlink, Google Cloud, or Polyhedra, a zk light client that essentially inherits security from source chains, and LayerZero Labs itself. 

That’s a pretty safe bet, especially with the addition of Polyhedra, and fortunately, in terms of security and censorship-resistance, that hasn’t been an issue yet. LayerZero hasn’t been hacked. There haven’t been major liveness issues. LayerZero is the leading messaging protocol by a wide margin. 

But in the long-term, if you want to build foundational infrastructure that lasts fifty years or more, like TCP/IP has, there can’t be any room for error. 

And in the short-term, even the perception of potential pitfalls leaves room for confusion and competition. 

How Interoperability Protocols Compete 

To get widespread adoption and become the TCP/IP for blockchains, LayerZero has to do two things, each of which reinforces the other:

  1. Hook into More Chains. Deploy endpoints on more chains so that the protocol can serve developers and users wherever they want to operate. 

  2. Integrate with More Apps and Protocols. Convince more applications and protocols to build on LayerZero to build omnichain apps or bridge their tokens by adopting the OFT Standard. 

The more chains LayerZero is on, the more compelling the value prop for more apps and protocols. 

Currently, LayerZero is live on 45+ chains including Ethereum, Optimism, Arbitrum, zkSync, BNB Chain (the successor to BSC), Aptos, Celo, Scroll, Polygon, Avalanche, Fantom, and Base, with more coming soon. 

The most eagerly anticipated is the darling of this cycle so far: Solana. “Solana is basically done and has been in security check / final stages for a bit,” Bryan told me. “It’s meaty, though, so we’ve been super cautious with it. Security first has meant speed is abysmal in this case; we’ll get there.” 

As LayerZero Labs lays the infrastructure, it also needs to convince developers to use it. This being crypto, that sales process plays out a little differently than it might for a traditional tech company. 

Exhibit A: Uniswap

In December 2022, Plasma Labs CEO Ilia Maksimenka put forward a proposal in the governance forum of the popular DEX, Uniswap. Ilia believed that Uniswap should deploy its protocol on BNB Chain, “the second-largest blockchain infrastructure by volume and user base.” He proposed that his company, Plasma Labs, deploy the Uniswap protocol to BNB Chain using its Hyperloop protocol, “a generalized cross-chain message-passing protocol inspired by roll-ups.” 

If you’ve read this far in the post, you might be interested enough in this stuff to find the full forum discussion as fascinating as I did. It’s business development and technical sales, out in the open, with debate among participants and observers, meant to convince a community instead of a single buyer. 

As the conversation continued over the next month, the community seemed to agree that launching on BNB Chain was a good idea, but wasn’t convinced that Hyperloop was the solution. So more bridges and messaging protocols threw their hats in the ring: first deBridge, then Celer, and then, on January 24th, Wormhole. 

After the $326 million hack in 2022, Wormhole upgraded itself. It shifted its architecture from bridge to cross-chain messaging, spent $2.5 million on bug bounties, and underwent a number of audits

The day after Wormhole entered the fray, LayerZero did too. 

With Wormhole and LayerZero in the ring, the competition pretty quickly became a choice between these two heavyweights, each backed by heavier-weights. LayerZero was backed by a16z crypto, also an early Uniswap investor and large UNI tokenholder, and Wormhole was backed by high-frequency trading firm and crypto market maker Jump Trading, which incubated the project and which also owns a significant amount of UNI tokens (and therefore votes). 

Over the next week, there was a lot of back and forth among UNI community members, including university blockchain clubs from Stanford, Penn, Michigan, and Berkeley. The conversation covered a few different topics simultaneously: whether there should even be a vote on a single bridge provider, what the process for decisions like this would be going forward, and whether Uniswap should wait and use a multi-bridge or multi-message aggregation solution. 

But there was a time constraint. As a16z crypto’s Porter Smith wrote, “Given the expiration of Uniswap V3’s BUSL license in early April, we’re generally in favor of deploying Uni V3 to other chains before that date to avoid the copy-paste rush that will likely ensue otherwise.”

So a choice needed to be made quickly, and for the purposes of our discussion, it came down to verification models: 

  • Wormhole: secured by 19 Guardians, 13 of whom must attest to a message for it to go through. The validator set is made up of independent validators chosen by Wormhole.

  • LayerZero: secured by an Oracle, with Chainlink as the default but configurable by the application, in this case, Uniswap. 
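
For intuition, Wormhole’s model is a fixed M-of-N check, which can be sketched in a couple of lines (the guardian names are made up):

```python
# Toy M-of-N attestation check, like Wormhole's 13-of-19 Guardian model.

def threshold_verified(attestors, guardians, threshold=13):
    """A message passes if at least `threshold` known guardians attested."""
    return len(set(attestors) & set(guardians)) >= threshold

guardians = [f"guardian_{i}" for i in range(19)]
assert threshold_verified(guardians[:13], guardians)      # 13 of 19: passes
assert not threshold_verified(guardians[:12], guardians)  # 12 of 19: fails
```

In LayerZero’s case there is no fixed check at all – the check itself is whatever the application configures.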

The fact that this is how the battle shaped up was an error in communication on LayerZero Labs’ part, Bryan told me. It became a contest of whose validation set or verification model was more trusted by the market; he wishes they’d made it more clear that LayerZero doesn’t compete at that level, that as a transport and messaging protocol, it’s verification agnostic. If they’d messaged it right, he thinks the choice should have come down to: 

  • Wormhole: trust Wormhole’s validator set and verification model, which is locked in 

  • LayerZero: choose whichever validator set and verification model Uniswap wants, with the freedom to swap out Oracles at any point.

But they didn’t message it well, and initially, community members like Kydo from Stanford Blockchain leaned in favor of Wormhole, “because of its diverse validator set.” Kydo was concerned that LayerZero relied heavily on Chainlink’s oracle relay service. 

LayerZero spoke with Stanford Blockchain and clarified its proposal, recommending that Uniswap switch out Chainlink for a “gasless oracle run by a minimum of 5+ significant Uniswap delegates.” Kydo responded: 

Thank you for posting this updated proposal, LayerZero.

We believe this proposal addresses a lot of the questions we have raised before. And their ability to change Chainlink with a set of Uniswap-aligned oracle validators is very promising.

For total transparency, LayerZero reached out to us and addressed most of our technical concerns.

One thing we believe we undervalued, in our initial assessment, is the immutability of the bridge. Many hacks happen because of simple upgrades. Under LayerZero’s design, the underlying infrastructure (LayerZero protocol) is immutable. The changeable parts, the Oracle (header sync) and Relayer (MIP), can only be changed by Uniswap governance.

Given the immutability and LayerZero’s new proposed Oracle, we would like to signal to the community that we have tentatively moved LayerZero as our preferred bridge provider for BNB.

We would also love to see more documentation provided from LayerZero’s side, especially around the oracle and relayer services.

There’s an important distinction in there, which Kydo alluded to but didn’t call out specifically: LayerZero and Wormhole are fundamentally different products. 

LayerZero is solely the transport layer, and Wormhole is also the verification layer. Because of that, Uniswap could swap out Chainlink for its own verifier set. With Wormhole, if you want the pipes, you also need to use its 13-of-19 Guardian model. 

“We felt like the Uniswap forum was more of a failure on our part to properly message the protocol and the role it plays,” Bryan told me. “They expected us to be a Wormhole or an Axelar and we just fundamentally aren’t.” 

Despite the miscommunication, the momentum seemed to be shifting towards LayerZero, but when a “Temperature Check” vote among the solutions went live, a16z was unable to vote its 15 million tokens “due to infrastructure limitations with the custodian on short notice,” and Wormhole won the vote. 

a16z crypto CTO Eddy Lazarin on Uniswap Governance Forum

It was an unusual situation: the first and only binding temp check in Uniswap governance history. Typically, the process would move to an official on-chain vote after the temp check. 

But this time, it didn’t. 

After the temp check, Uniswap governance moved to a new and final vote: whether to deploy Uniswap on BNB Chain with the protocol that won the temp check, Wormhole, or hold off on deploying. Given the time-sensitive nature of the decision, the community voted to deploy with Wormhole. 

In a follow-up Bridge Assessment Report, the Uniswap DAO assessed a number of providers, including Wormhole and LayerZero. It approved Wormhole “for use in all cross-chain deployments,” but recommended reassessment for LayerZero “based on pending upgrades.” It wrote: 

After assessing the current version of the LayerZero protocol, the Committee has concluded that it does not currently satisfy the full breadth of the requirements of the Uniswap DAO’s cross-chain governance use case as outlined in the assessment framework, but is on a path to do so. LayerZero employs a combination of two types of validators to secure the protocol: Oracles and Relayers. However, currently, the available options for Oracles and Relayers do not offer the level of decentralization and security required for Uniswap’s use case. LayerZero has a planned upgrade to its oracle and relayer set that would likely address these concerns. 

Between the conversation in the BNB Chain governance forum and the Bridge Assessment Report, it seems that the takeaway was this: 

The immutability of the LayerZero protocol and the potential for applications to choose their own Oracles are both advantages over Wormhole in theory, but in practice, its security model looked too much like a 2-of-2 multisig with Chainlink and LayerZero Labs as the signers. While Wormhole contracts are upgradeable, which introduces risk, and while Wormhole doesn’t allow applications to configure their own validator sets, a 13-of-19 Guardians model appeared to be more decentralized. 
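To build intuition for that assessment, here’s a toy model (my own illustration, not from either team): assume each signer is independently compromised with some probability p. Forging a message under an effective 2-of-2 requires corrupting both signers, while Wormhole’s model requires corrupting 13 of 19 Guardians:

```python
from math import comb

def compromise_prob(m: int, n: int, p: float) -> float:
    """Probability that at least m of n independent signers are
    compromised, when each is compromised with probability p."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(m, n + 1))

# With an illustrative p = 10% per signer:
print(compromise_prob(2, 2, 0.10))    # ~0.01 (both signers of a 2-of-2)
print(compromise_prob(13, 19, 0.10))  # ~1.5e-09 (13 of 19 Guardians)
```

At any plausible p, corrupting 13 of 19 independent parties is orders of magnitude harder than corrupting 2 of 2. The toy model ignores liveness, where the comparison cuts differently: a single offline signer halts a 2-of-2, while a 13-of-19 set tolerates six.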

The thing is… Bryan agrees. LayerZero v1 has been safe and has successfully handled a ton of messages and assets, but in practice, the messaging around Oracles was too confusing, and building a Relayer was too hard, for most projects to move beyond the defaults. 

So for the past 18 months, the team has been cooking up something new and improved. 

LayerZero v2

Without the background we’ve covered, you might not have been able to recognize the shade in the whitepaper LayerZero Labs dropped today on v2. From the abstract (emphasis mine): 

The paradigm shift from crosschain to omnichain interoperation has created a secondary problem where applications are siloed into a single communication protocol. Exacerbating this predicament is the fact that crosschain messaging systems are often mutable and provide a single configuration for security. Thus, applications are not given a choice in the timing with which they adopt network protocol updates, and must use a one-size-fits-all configuration for offchain infrastructure

The LayerZero protocol solves these problems, and is the first intrinsically secure messaging protocol. By using an immutable base protocol, append-only validation contracts, and fully-configurable offchain verifiers, LayerZero provides unparalleled flexibility and extensibility. LayerZero’s security modules give complete ownership of protocol security, liveness, and cost to applications, and the modular design enables the protocol to scale to all blockchains and applications.

In the whitepaper, LayerZero Labs makes a distinction between crosschain messaging systems, like Wormhole, and omnichain messaging systems, like LayerZero. 

LayerZero v2 Whitepaper

LayerZero’s architecture is based on the belief that you can’t construct an omnichain messaging system by stringing together a number of crosschain messaging systems. Crosschain messaging systems work well if you’re connecting chains that handle execution and consensus in similar ways, like EVM-compatible chains, they argue, but not if you want to connect chains across different compatibility groups, like connecting Solana and Ethereum. 

They propose an omnichain messaging protocol (OMP) that leverages “foundational invariants underlying all blockchains to create a chain-agnostic protocol that semantically unites interoperation between onchain contracts, between chains in the same compatibility group, and between different chain groups.” 

In other words, like TCP/IP created a “language of the Internet,” LayerZero Labs is attempting to create a “language of the Omnichain.” 

LayerZero Labs writes that an OMP has two responsibilities: 

  1. Intrinsic Security

  2. Scalability

Scalability is simpler and less involved, so we’ll hit it first. It means that the OMP should be able to support any new blockchain, security algorithm, or feature, now or in the future, and that lower-risk feature extensions should be separated from the security-critical components. This requires one language that works across all blockchains, present and potential, instead of overspecialization based on today’s blockchains and use cases.

Specifically, LayerZero uses a “standardized interface for omnichain composition – lzCompose” to allow developers to use the same code to compose contracts regardless of what the destination blockchain is or which language it speaks. If you want to dig in on this, check out section 4.1 in the whitepaper.

Intrinsic security is a new framing of the transport layer’s security in v2. This is the security that LayerZero, the protocol, is responsible for: liveness, censorship-resistance, and long-term stability. Applications need to be able to use the protocol anytime, no matter who’s trying to stop it, for a long time. 

To do that, it’s separating the security the protocol is responsible for from the security that the application and underlying blockchain are responsible for. 

LayerZero v2 Whitepaper, annotated by me

The intrinsic security, the infrastructure, needs to work reliably and predictably over time, but the extrinsic security needs to be modular and controlled by the application so that it can evolve as technology advances. As better validators or zero-knowledge clients, like Polyhedra, enter the market, applications should be able to plug them in. 

Applications should also be able to choose how much extrinsic security they need versus how much they’re willing to pay; a DeFi application responsible for billions of dollars and a gaming application responsible for a bunch of $1 NFTs shouldn’t be forced to use the same level of security: such a model would either be too insecure for the DeFi protocol or too expensive for the gaming protocol. 

The best way to illustrate this point is with the Pareto Frontier. Modular extrinsic security provides benefits on two dimensions: choice and time. 

In the present, allowing applications to set their own Decentralized Verifier Networks (DVNs) lets them choose the right point on the cost-security frontier for their specific needs instead of locking them into one likely suboptimal point. 

Over time, the ability to change the DVN allows applications to reap the benefits as technology and competition push out the Pareto Frontier. 
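To make the frontier concrete, here’s a minimal sketch (with made-up configurations, names, and numbers, not real DVN pricing) of how an application might screen candidate security configs, keeping only those where no alternative is both cheaper and more secure:

```python
# Hypothetical DVN configurations: cost per message vs. a security score.
# Names and numbers are illustrative only.
configs = [
    {"name": "single DVN",       "cost": 1.0, "security": 1.0},
    {"name": "3-of-5 mixed set", "cost": 3.0, "security": 5.0},
    {"name": "redundant 3-of-5", "cost": 4.0, "security": 4.0},  # dominated
    {"name": "15-of-21 + zk",    "cost": 9.0, "security": 9.5},
]

def pareto_frontier(configs):
    """Keep configurations not dominated by any other: no alternative that
    is cheaper-or-equal AND at-least-as-secure, strictly better on one."""
    def dominated(c):
        return any(
            o is not c
            and o["cost"] <= c["cost"] and o["security"] >= c["security"]
            and (o["cost"] < c["cost"] or o["security"] > c["security"])
            for o in configs
        )
    return [c for c in configs if not dominated(c)]

print([c["name"] for c in pareto_frontier(configs)])
# ['single DVN', '3-of-5 mixed set', '15-of-21 + zk']
```

The “redundant 3-of-5” drops out because another config is both cheaper and more secure; the three survivors are exactly the frontier the application gets to choose a point on.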

This is one of the most important changes in v2, and the first that Bryan highlighted when we spoke: instead of Oracles and Relayers, which were good in theory but hard in practice, there are Decentralized Verifier Networks and Executors, which are permissionless to run, easier to implement (open-source examples exist), and endlessly configurable.

Now, any external network can be a DVN, and applications can choose any combination of them to approve messages. At launch, Animoca, Blockdaemon, Delegate, Gitcoin, Nethermind, Obol, P2P, StableLab, Switchboard, Tapioca, SuperDuper, Polyhedra, and Google Cloud are confirmed DVN options, and adapters have been built to hook in Axelar and Chainlink’s CCIP. Adapters for other bridges, including Wormhole, are on the roadmap. 

Application A might choose to go very secure and expensive: a 15-of-21 threshold that includes Chainlink, Polyhedra, Google Cloud, and even would-be LayerZero competitors like Wormhole and Axelar, plus validating messages itself. Application B, built on Ethereum and Arbitrum, might go with a 3-of-5 that includes the Arbitrum native bridge, Google Cloud, and itself. A verification model that allows you to plug in Wormhole or Axelar’s validator set as just one of a number of other DVNs is strictly superior to those validator sets alone. 
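As a rough sketch of how such a configuration behaves (the class, field names, and DVN names here are illustrative, not LayerZero’s actual contract interface), Application B’s 3-of-5 might look like:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class SecurityStack:
    """Toy model of an application-chosen verification config."""
    required_dvns: frozenset  # every one of these must attest
    optional_dvns: frozenset  # at least `threshold` of these must attest
    threshold: int

    def is_verified(self, attestations: set) -> bool:
        # A message is verified only if all required DVNs attested AND
        # the optional-DVN threshold is met.
        return (self.required_dvns <= attestations
                and len(self.optional_dvns & attestations) >= self.threshold)

# Hypothetical Application B: 3-of-5 across its chosen verifiers.
app_b = SecurityStack(
    required_dvns=frozenset(),
    optional_dvns=frozenset({"AppB", "ArbBridge", "GoogleCloud", "DVN-X", "DVN-Y"}),
    threshold=3,
)

print(app_b.is_verified({"AppB", "GoogleCloud"}))               # False
print(app_b.is_verified({"AppB", "ArbBridge", "GoogleCloud"}))  # True
```

Application A’s config would just be a larger optional set with a threshold of 15, plus itself in `required_dvns`; the point is that each application picks its own quorum rather than inheriting one fixed validator set.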

Or if you’re, say, JP Morgan, which is building its asset management blockchain project, Onyx, with LayerZero, you can choose to grant write access to a third-party validation set, or you can deploy one endpoint at JP Morgan, another at Goldman, and “they can verify directly, handshake themselves,” Bryan explained. 

Applications are responsible for their own extrinsic security, and for selecting an Executor. 

Instead of Relayers, applications choose their own Executors, which live outside of the security of the protocol. These Executors are mainly responsible for quoting prices and gas and handling all of the complexities of transactions for applications and their users. In exchange, they’ll charge a small fee on top of the cost of gas. 

LayerZero Labs will run an optional Executor, which will be the default out of the box, but it encourages applications to set up their own configurations. LayerZero Labs will have to compete on price and service. Importantly, Executors are much simpler to run than Relayers, because v2 decouples security and execution, so LayerZero Labs expects there to be strong competition. If needed, applications can even execute messages directly onchain, or users can self-execute. 

Importantly, v2 also improves liveness. In v1, if the LayerZero Labs Relayer went down, liveness would be compromised. Now, once a message is verified, anyone can execute a transaction on LayerZero Scan or directly onchain. Applications can choose whether to execute them in order – which might be important for a DeFi app – or in whatever way achieves maximum throughput – which you might choose if you’re building a game. If one executor goes down, you can swap in another and the show goes on. 
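A minimal sketch of what ordered execution implies (my own illustration, not LayerZero’s implementation): hold each verified message until every lower nonce has executed, whereas an unordered executor would simply execute each message the moment it verifies:

```python
import heapq

class OrderedExecutor:
    """Toy executor that releases verified messages strictly in nonce
    order, the mode a DeFi app might want."""
    def __init__(self):
        self.next_nonce = 1
        self.pending = []  # min-heap of (nonce, message)

    def on_verified(self, nonce: int, message: str) -> list:
        heapq.heappush(self.pending, (nonce, message))
        executed = []
        # Drain the heap while the lowest pending nonce is the next expected.
        while self.pending and self.pending[0][0] == self.next_nonce:
            executed.append(heapq.heappop(self.pending)[1])
            self.next_nonce += 1
        return executed

ex = OrderedExecutor()
print(ex.on_verified(2, "swap"))     # [] (held until nonce 1 arrives)
print(ex.on_verified(1, "deposit"))  # ['deposit', 'swap']
```

Because anyone can run this role, a stalled executor doesn’t strand the queue: another executor (or the user) can pick up the same verified messages and continue.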

All told, LayerZero Labs lays the infrastructure, the things that should never change, and applications are free to choose whatever configurations work best for their needs on top of that (and change those over time). The application’s unique configuration of DVNs and Executors is its Security Stack.

LayerZero v2 Whitepaper, annotated by me

In this model, LayerZero Labs doesn’t care what applications choose. They provide the hooks, and with v2, make it as easy as possible to hook them into whatever chain, verifier, or executor the application wants to choose. 

But they do set defaults… 

Default Refutation

While v2 addresses the concerns highlighted in the Uniswap governance forum – it makes LayerZero’s verification and execution more practically decentralized – Bryan told me that he still expects there to be pushback on one thing in particular: defaults.

What are defaults? When you start building with LayerZero, it comes with default DVNs, executors, and libraries. They’re proxy contracts that let LayerZero Labs select things on behalf of the application. For example, if you stick with the defaults, then if there’s any issue with, say, a library, LayerZero Labs can automatically switch you to an updated library. If you statically select your configuration, you need to choose to switch yourself. Since v2 is launching to testnet only for now, the defaults haven’t been set. 

Earlier I said that Cobie suggested writing a refutation ahead of time, calling out the things that suck with the current model that could and should change over time. Bryan told me that he knows that a few vocal critics will be unhappy they’re keeping defaults because they evaluate LayerZero as though defaults were the only option, but that it’s worth the heat because they’re the only way to create a magical developer experience out of the box. 

But he also said: 

“If you’re using defaults, you’re fully trusting LayerZero Labs – don’t.” 

Still, to give developers a good experience, they chose to keep defaults despite the pushback they know they’ll get. He also pointed out that while competitors like to give them pushback on defaults, every other protocol only has defaults. They’re not configurable. “The worst case for LayerZero,” Bryan said, “is the best case for any other messaging protocol, because at least developers always have the choice to change the parameterization.”

I asked the team whether the decision to set defaults was consistent with the idea of creating TCP/IP for blockchain. They told me that just as TCP/IP is a standard and your computer comes with all sorts of built-in decisions on the implementation of the standard in order to make it usable, the LayerZero protocol is the standard and defaults are implementation. Defaults don’t impact the protocol itself in any way, shape, or form, they just make it easier to implement. 

If you want to build the TCP/IP for blockchains, you need developers to implement it. And one big difference between standards and protocols is that if you become the TCP/IP for blockchains, it can be a really good business. 

Protocols that Make Money

Earlier this month, Union Square Ventures’ Fred Wilson wrote a blog post titled Why Monetize Protocols? 

Traditional web protocols like SMTP, HTTP, RSS, etc are not monetized. They are free to use and build on and maintained as open standards that anyone can use for free. That has worked reasonably well in the web era so the question arises why we would want to monetize the new protocols that are emerging in web3.

The answer, according to Wilson, “lies in securing the governance of protocols and managing bad actors/applications built on them.” 

A decentralized protocol is governed by tokenholders. The more valuable the token is, the harder it is for any one entity to acquire a large enough stake to control governance and the easier it is for the token holders to incentivize the kind of behavior they want to see. Monetizing the protocol by charging fees that flow to the treasury should make the token more valuable. 

Currently, that’s not a concern. LayerZero is owned, operated, and governed by LayerZero Labs. Last week, however, the team announced that there will be a LayerZero token, likely in the first half of 2024. 

Even after that happens, the LayerZero protocol will always be usable without LayerZero’s token: there is no token now, applications can already use the protocol, and the endpoint code is immutable. Token holders might, however, use a token to incentivize verifiers, applications, or users to verify, build, and transact on the protocol, or do any number of things that are good for the development and implementation of the protocol. 

So how will value accrue to that token? 

To be clear: the team hasn’t said anything about this, either publicly or to me. My last call with Bryan was the day before they announced the token, and when I asked how a token played into everything, he said: “From the scope of the technology, nobody needs to care. We very specifically have not said anything ever to date about a token.” 

To be double-clear: this is not financial advice. I’m an idiot, I have no idea what the token supply will be, I don’t know what fee amounts the protocol will take on messages. There’s no way to analyze the value of the token with currently available information, and I’m not going to try. 

With that said, I can appreciate the potential of the business model without taking a view on the token. It seems as if the model is to take a small fee on a whole lot of transactions. 

Their docs have a section on Estimating Message Fees, with code that lays out three fees: Relayer Fee, Oracle Fee, and LayerZero Fee.

LayerZero Docs

The Relayer Fee might contain the price of gas on the sending and receiving chain with a small fee on top, the Oracle fee pays the Oracles for verifying messages, and the LayerZero Fee goes to the protocol. 
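As a back-of-the-envelope sketch (the function, markup, and all numbers are hypothetical, not LayerZero’s actual pricing), the total quote is just the sum of those components:

```python
def quote_message_fee(gas_units: int, gas_price_eth: float,
                      executor_markup: float, dvn_fees: list,
                      protocol_fee: float) -> float:
    """Illustrative fee breakdown mirroring the three components in the
    docs: an execution fee (gas plus a markup), verifier fees, and a
    protocol fee. All inputs are denominated in ETH."""
    executor_fee = gas_units * gas_price_eth * (1 + executor_markup)
    return executor_fee + sum(dvn_fees) + protocol_fee

# e.g. 200k gas at 20 gwei, a 5% executor markup, two DVNs, a flat protocol fee
fee = quote_message_fee(200_000, 20e-9, 0.05, [0.0001, 0.0001], 0.0002)
print(f"{fee:.6f} ETH")  # prints 0.004600 ETH
```

The business-model point is in the last term: the executor and verifier fees are costs passed through to third parties, while the protocol fee, however small, accrues on every message.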

Today, LayerZero Labs earns Relayer Fees where it’s the Relayer, plus the LayerZero Fee. Presumably, these will be replaced with Verifier Fees, Executor Fees, and LayerZero Fees in v2. Once LayerZero Labs decentralizes governance of the protocol, it’s safe to assume that Verifier Fees will go to participants in the relevant DVN, Executor Fees will go to the Executor, and LayerZero Fees will go to the protocol’s treasury. 

These fees likely represent small markups over costs, and as more Verifiers and Executors compete for business, their margins may get driven down. The killer business over the long term is the protocol.

By operating below and connecting all the chains, LayerZero sits in the flow of funds of the Omnichain. Every time a message moves from one chain to another, it collects a small fee. Over millions and eventually billions of messages, those fees add up. 

From a product perspective, LayerZero is like TCP/IP. From a business perspective, it’s like VISA. 

When I ran this analogy by Bryan, he disagreed, pointing out that VISA is a hub and spoke model: it sits in the middle of a global network of banks, merchants, and customers. “It’s low friction,” he agreed, “if you’re willing to take on the assumption that they’re benevolent.” 

Bryan compared the VISA network to the state of cross-chain messaging when they started building. There’s a middle chain – Cosmos, Axelar, Wormhole, whoever – that listens to events, says “yes/no” to validity, and sends a message. Everyone trusts the middle thing, and if that’s corrupt for even a matter of blocks, it can corrupt everything that touches it. 

He and the team architected LayerZero precisely to avoid that model, to ensure that applications had a way to send a message from one chain to another without anything in the middle: “We were convinced that model wouldn’t scale.” 

But while LayerZero’s architecture is different from VISA’s, I still think there are similarities in the business. By allowing a bunch of entities that couldn’t communicate before to communicate, and enabling value to flow among them, it’s in a position to take a very small cut of a very large number of transactions. 

That’s a good thing. It means that LayerZero Labs, and eventually ZRO token holders, are incentivized to build infrastructure and attract third-parties that increase the flow of value among chains. Ultimately, that means building financial rails that applications and users can trust and access: rails that are immutable, censorship-resistant, and permissionless.

If Stripe’s mission is to increase the GDP of the internet, LayerZero’s might be to increase the GDP of the Omnichain.

Increasing the GDP of the Omnichain

At the beginning of the piece, I said that LayerZero was my kind of company in a lot of ways. It’s a hybrid of the last two pieces I’ve written on crypto. 

In Blockchains as Platforms, I argued that as crypto infrastructure improves, developers will need to make fewer trade-offs in order to build onchain, and as that trade-off gap shrinks, an increasing percentage of developers will choose to do so. 

LayerZero is crypto infrastructure that explicitly lowers trade-offs developers need to make. Instead of building on one chain or another, they can build on both. Instead of choosing an extrinsic security model that’s either too expensive or too insecure, they can choose the right spot for their product on the Security-Cost Pareto Frontier. 

Why does that matter?

In Capitalism Onchained, I wrote that “Crypto’s ideal state is to make capitalism more effective.” 

Just as TCP/IP allowed information to flow freely from computer to computer, LayerZero might allow value to flow freely from computer to computer. It will speed up transaction times, increase liquidity, allow assets to move to the best opportunities more easily, increase competition, and open trade and capital flow among blockchains and their communities. 

If you’re still not convinced that crypto, even in its ideal state, is a valuable thing, then nothing I’ve written today will convince you; but if you believe that crypto can make capitalism more effective, then LayerZero may be an important piece in making that happen. 

Of course, there’s a long way to go to make that happen. LayerZero Labs has to successfully launch v2 and convince a new wave of applications and protocols to build on it. It needs to hook its endpoints into more chains, like Solana. It needs to maintain its sterling security record while significantly increasing volume. It will certainly need to fend off competitors who also recognize how valuable a position it is to be at layer zero in the stack, providing infrastructure that lasts a very long time. 

But it has some advantages baked into the architecture. The beautiful thing about building flexible infrastructure in the way that LayerZero has is that it improves as the ecosystem around and on top of it improves.

As new Verifiers come into the network and compete for business, the Security-Cost Pareto Frontier shifts. As new Executors come into the network and compete for business, execution gets tighter. As new chains come online, each with improvements to previous models, the Omnichain gets stronger. 

The thing that I think will be most fascinating to watch play out, if LayerZero succeeds, is that L1s and L2s will need to push further to the extremes on certain capabilities. If you’re trying to compete directly with Ethereum, you might be tempted to do what Ethereum does, but better. That’s bad strategy.

Good strategy means leaning into your differentiation. Let Ethereum have the settlement layer, and optimize for speed, price, storage, or particular use cases. Experiment with bolder trade-offs. Do one thing really well, and plug that thing into the larger network, so that the Omnichain gets more capable, and more developers choose to build applications onchain. 

Good technology reduces the trade-offs that people need to make. Instead of this or that, it’s this and that.

That’s what LayerZero enables. Developers can pair the security of Ethereum with the speed of an L2, tap into liquidity across chains, and build apps that are composable no matter where the best Lego blocks happen to live. 

And as for you? If you haven’t played around with crypto for the past year, I think you’ll be surprised at how much better it’s getting. Go bridge some ETH to Optimism or Base on Stargate and bridge it back to Ethereum mainnet. It’s a smoother experience than you might remember from the last cycle. 

LayerZero still has a long way to go. It’s live in a number of use cases, including DEXes (Trader Joe), stablecoins (MIM), and games (Parallel), but developers have barely scratched the surface of the omnichain app design space. Even with a better product, it will have to hook into more chains, win BD battles, and clearly communicate to developers why and how they can build better products on top of its infrastructure. 

But wherever we are in the price cycle, that’s the cycle that excites me most: better infrastructure enabling better applications that bring more people onchain without forcing them to sacrifice security or performance. I think LayerZero v2 is a big step in that cycle, and I’m excited to see what developers build on it. 

Thanks to Bryan and the LayerZero Labs team for working with me and answering my dumb questions, and to Dan for editing!

That’s all for today! We’ll be back in your inbox tomorrow with a Weekly Dose.

Thanks for reading,


The Morality of Having Kids in a Magical, Maybe Simulated World


Hi friends 👋 ,

Let’s get to it.

The Morality of Having Kids in a Magical, Maybe Simulated World

There’s a thing that fancy magazines do every so often, which is to ask this Very Deep Question: is it morally OK to have kids when the planet is literally burning? 

The New Yorker

I am here to tell you that that is an exceedingly dumb question. 

I am here to tell you that you should have kids.

There are a bunch of rational arguments you can make against the idea that it’s immoral to have kids in a drowning, burning world: 

  • Any one person’s emissions are negligible

  • More kids mean a greater chance that someone figures out new solutions to climate change and all sorts of other challenges

  • Malthusian predictions have been wrong to date 

  • It’s a Pascal’s Wager of sorts: if climate change destroys the planet, your downside is finite (we all die anyway); if it doesn’t, your upside is infinite (generations of your family will live in an energy abundant world and maybe even travel the galaxy) 

  • Kids rock

But it’s December now, and we’re in that annual wind-down period, so I figure why not get a little bit weird, try a different tack. It’s going to be one of these pieces: 

The other day, thinking about that Very Big Question, I tweeted something cryptic: “The climate crisis is the best proof I’ve seen that we’re in a simulation.” 

A few people, understandably, asked me what the hell I was talking about. 

The short, non-crazy version is this: climate change is a very real challenge with very real negative impacts on people and the planet. But climate change is solvable, and by solving it, we unlock the next phase of civilization’s growth. 

The longer, crazier version is longer and crazier. By the end, I hope you’ll see why I think that the kids are gonna be more than alright. Let me explain. 

ENERGY: The Game

Human history is the history of unlocking new energy sources to fuel new stages of civilization: food, fire, fossil fuels, then whatever comes next. Call it climbing the Kardashev Scale if you’d like. The more energy we put to productive use, the further civilization advances.  

If you squint, human history looks like a big, long game called ENERGY.

The objective of ENERGY is to consume more energy in order to improve the human condition and ultimately light up the universe with intelligence.  

On each level, you need to maintain per capita energy consumption with the existing energy sources to not die and keep playing. 

Like any good game, ENERGY is full of challenges and bosses to defeat; defeating those bosses opens up new levels. Each level represents a new primary energy source. 

Humanity’s progress in ENERGY has been the story of unleashing increasingly abundant, efficient, powerful sources of fuel in order to support more complex and advanced forms of society and technology and increase human flourishing.

After beating food and fire, we’re currently on the Fossil Fuels level of the game, which we entered in the 18th or 19th century with the transition from wood to coal. 

We need to advance to the next level – Solar and Atomic Energy – before we deplete fossil fuels in order to keep the game going, or we get sent back to the beginning. To some, things look bleak, harder than they’ve looked in a long time. That’s OK; it means we’ve gotten to the Boss. 

We need to defeat the Boss to make it to the next level. The Boss is hard to beat. Transitions are hard. Things must have looked bleak during the last transition, too. 

The Coal Question

The conditions that led to the rise in coal consumption resemble the catalysts for the shift to clean energy: depletion of the old fuel source, and correction of an atrocity. 

The old fuel source was firewood. In the late 16th century, the world’s largest economy, Britain, began to run low. This chart, from Thunder Said Energy, shows a drop in both firewood energy consumption and total energy consumption per person. “This is another reminder that energy transitions tend to occur when incumbent energy sources are under-supplied and highly priced,” the researchers wrote. 

Thunder Said Energy

The atrocity was African slavery. Around 1600, humans and draught animals were tied as the largest source of useful energy at 25%. Mercifully, by the time Britain banned the slave trade in 1807, human labor was down to 10%, and “by the time of the Abolition Act in 1833, it was closer to 5%.”

As the old fuel source, firewood, began to run low and became expensive, people invented new ways to harness an existing but underutilized fuel source: coal. In 1712, an English blacksmith named Thomas Newcomen invented the atmospheric steam engine, the first commercially successful engine to use pistons and cylinders. 

Newcomen Steam Engine

The Newcomen Steam Engine both ran on coal and pumped water out of coal mines, allowing humanity to mine more coal more easily, but it was still immobile and inefficient: at 0.5% efficiency, only 0.5% of the coal’s energy was converted into useful mechanical work. 

It was a starting point, though. In 1776, that annus mirabilis, James Watt and Matthew Boulton began selling their Boulton & Watt steam engine, which was not only 4x more efficient at 2% but also more versatile. It drove machinery in paper, cotton, flour, and iron mills, textile factories, distilleries, canals, and waterworks. It opened up new capabilities and new geographies. 

The Boulton & Watt steam engine helped kick off the next level in humanity’s growth: the Industrial Revolution. As William Stanley Jevons would note nearly a century later in The Coal Question, a more efficient steam engine paradoxically increased the rate of coal consumption significantly. 

And consume we did! During the Industrial Revolution, England nearly quadrupled its coal consumption over a century while more than tripling its total energy consumption.

Advancing in ENERGY by unlocking new energy sources has implications beyond material things. It reshapes how civilization operates.

For example, as machine labor grew in the west, the need for forced human labor shrank. 

Slavery is as old as civilization. It was present as early as Mesopotamia around 3500 BC, and widespread in the ancient world across Europe, Asia, the Middle East, and Africa. Human labor, powered by food, was the primary source of useful energy. And then, within a century of the transition to an energy source that enabled machines to do labor that previously only humans could, the practice was outlawed in the world’s two largest economies: America and Britain. 

To say that the steam engine led to the end of slavery in Britain and the US obviously ignores a number of important factors, but the Occam’s Razor explanation is that once we had machines to do labor, we could afford to stop treating humans like machines. 

As Thunder Said Energy pointed out, human labor was one-fifth as big a contributor to the overall energy mix by the time Britain abolished slavery in 1833. In the United States, industrialization shifted power from the agrarian south to the urban north, and weakened the economic argument for slavery. People still worked in terrible conditions during the Industrial Revolution, but the economic/ethical balance of owning people flipped with the rise of machines. 

In ENERGY, humanity is roughly as ethical as it can afford to be

When food was scarce, it was acceptable to kill neighboring tribes to take their food. When machine labor was scarce, it was acceptable to enslave other human beings and force them to labor. 

Today, we allow billions of people to live in energy poverty because fossil fuels are a scarce resource. Around the world, children still do homework by candlelight once the electricity goes off for the day. Think of all the human flourishing capped by lack of energy. 

In the future, if we look back on our farming and eating animals as barbaric, it will be because we’ve learned to convert energy into food as delicious and nutritious as meat. We may look back on the fact that people were forced to spend most of their waking hours working jobs they hated in order to provide food and shelter for their families as horribly unethical, once we can afford to. We will certainly look back on energy poverty as unethical. 

One of the main reasons we progress in the game of ENERGY is in order to afford to become more ethical.

Which is what happened in the last energy transition. By the turn of the 20th century in the west, slave labor was out and machines were in. And instead of killing the economy, World GDP began to go vertical. 

Data from Our World in Data 

Pushed by dwindling supply and growing demand, humanity had to unlock a new level in ENERGY, and we did. In the process, we overcame an atrocity. 

Right in the middle of that last energy transition, Thomas Malthus wrote his infamous Essay on the Principle of Population, in which he argued that population growth would inevitably outpace food production, leading to widespread famine and hardship. 

He was wrong, of course. He failed to predict human adaptability and resilience, and more specifically, failed to predict the technological innovations our ancestors would devise to solve the problem, including mechanized farming made possible by new machines and new energy sources to power them. 

Instead, we flourished. In the face of challenges, we transitioned from wood to coal, and added oil, gas, and capitalism to the mix, to boot, unlocking the next level. 

The Fossil Fuels Level

This level – Fossil Fuels – has been awesome. We’re reaping its benefits to this day. Fossil fuels (and wood) still provide ~83% of global primary energy consumption. 

Our World in Data

Though they get a bad rap now, fossil fuels have been miraculous for humanity. 

Since fossil fuels replaced human and animal labor and firewood as the world’s dominant sources of useful energy in the 18th century, the percentage of humans living in extreme poverty has declined from 94% to 8.5%. Democracy, education, vaccination, and literacy have risen, and child mortality has declined. 

Our World in Data

Fossil fuels and their byproducts made modern computing possible. There would be no computers without fossil fuels, there would be no internet, and there would be no AI. 

Nor would there be solar panels or large wind turbines, both of which use petroleum-based components and energy-intensive manufacturing processes, or advanced geothermal, which uses techniques learned in drilling and fracking. The mining and refining of uranium ore for nuclear reactors, not to mention the materials in and construction of nuclear power plants, require fossil fuels and their derivatives. And commercial nuclear fusion would have no shot without the many fossil fuel-based inputs – from simulation software to materials to power requirements – that might make it possible. 

But, like wood three centuries ago, fossil fuels are dwindling, and their impact on the environment is a modern atrocity. 

We need to transition, but the transition represents the most complex global coordination problem in the history of humanity. Fossil fuels are just too good for business. 

This is why I said that the climate crisis is the best proof of a simulation I’ve seen: 

We can’t progress to the next level in ENERGY without widespread adoption of new sources of energy, and the climate crisis provides a catalyst to force that transition at just the right time. 

Proof of Simulation

OK, proof of simulation is maybe a bit strong. But whoever set the starting conditions – god, aliens, physics, the universe itself – lined everything up too perfectly for this all to be a coincidence. 

Unbelievably, right as we’re grappling with the threat of climate change, the next energy sources on our tech tree, such as solar, fission, and fusion, are clean and sustainable ones, exactly the kind we need to address climate change. What are the odds!

The tech tree is this concept from strategy games like Civilization, a visual map of the technologies available as you progress through the game and make certain decisions. Start with raw materials and basic technologies, and work up to more powerful technologies. Each new technology requires mastery of the old; you can’t skip steps. 

Civ 6 Tech Tree, Marbozir on YouTube

The raw materials required for the next set of energy sources have been there all along. The sun has always shone on the earth, and Uranium, Deuterium, and Tritium have been here for billions of years, but without the accumulation of knowledge, machines, software, science, components, and energy we have today, we couldn’t turn them into usable energy. 

Now, right when we need them, we can. 

But we don’t just need these new energy sources to fight climate change. We need them for three reasons: 

  1. Fossil fuels will run out. 

  2. To unlock the next level of civilizational progress, we’ll need denser and more abundant energy sources. 

  3. To solve climate change. 

In other words, we would have needed to undertake this energy transition with or without climate change. The threat of climate change may be the only way that we can make it in time. 

Here’s what I mean. 

We’re going to need a lot more energy – and much denser energy – to unlock the next level of the game than fossil fuels can provide. 

Peak oil and gas estimates are always wrong – when the incentives are aligned, we come up with new ways to discover new resources – but fossil fuels are not infinite. That’s the main challenge: fossil fuel supplies are finite, and demand for energy is not. 

In fact, not long ago, the main concern around fossil fuels was not that they would destroy the planet, but that they would run out! In college, I took an Energy & Economics class, and the big theme was that we were going to run out of that sweet, sweet oil and gas very soon. Yesterday, Ben Thompson pointed out that in 2011’s Ready Player One, author Ernest Cline explained the book’s 2044 dystopia thusly (emphasis mine):

But that’s where the bad news comes in. Our global civilization came at a huge cost. We needed a whole bunch of energy to build it, and we got that energy by burning fossil fuels, which came from dead plants and animals buried deep in the ground. We used up most of this fuel before you got here, and now it’s pretty much all gone. This means that we no longer have enough energy to keep our civilization running like it was before. So we’ve had to cut back. Big-time. We call this the Global Energy Crisis, and it’s been going on for a while now.

Just 12 years ago, the person who wrote the most futuristic book of the year was worried about an Energy Crisis, not a Climate Crisis

And thanks to intervening advances – AI, electric vehicles, heat pumps, robots, and more – we’re going to need much more energy than Cline could have predicted. 

In a recent conversation with Clay Dumas and Dr. Clea Kolster of Lowercarbon Capital for Age of Miracles, Clea told us that US electricity demand is expected to grow 5x by 2050, and that that expectation is probably conservative. In ten years, computing demand for electricity alone is going to be as big as all of US electricity demand is today. 

Climate impact or not, fossil fuels weren’t going to be enough to get to the next level, or even to keep us alive on this level for much longer. The transition to denser, more abundant fuels is necessary. 

But we have a problem, from the perspective of ENERGY: The Game: fossil fuels are really good today – they’re energy-dense, easily transportable, and cheap. 

Convincing the people and businesses that rely on fossil fuels to switch to less proven, less reliable, more expensive energy sources is a massive coordination problem, what SlateStarCodex would call a Moloch situation. Everyone is incentivized to keep using fossil fuels until it’s too late, and the next generation of energy technologies – solar, nuclear, and fusion – takes a really long time to develop, scale up, and bring down the cost curve. 

Fossil fuels are still so important to our economy and quality of life that the IMF estimated that global governments spent $7 trillion – equivalent to 7.1% of global GDP – on explicit and implicit fossil fuel subsidies last year alone! 


Explicit fossil fuel subsidies of $1.3 trillion alone are higher than the high end of the estimated budgetary impact of the Inflation Reduction Act’s climate provisions – $1.2 trillion – over the next decade.

Fossil fuels still dominate. 

So the simulation creators programmed a little mechanism into the parameters: 

  • greenhouse gasses can destroy the planet,

  • all of the main sources of energy up until this point produce greenhouse gasses,

  • all of the main energy sources that come next don’t.

Put another way, there is nothing next on the tech tree that is more dense and abundant than fossil fuels but just as highly emitting or more so. There’s no superfuel that we could burn if we decided to say “fuck the planet, give me the superenergy.” Everything that comes next is clean. 

People will yell at me about the details here. Producing solar panels, wind turbines, geothermal rigs, nuclear power plants, and fusion generators produces some greenhouse gasses. But we’re talking about a two-order-of-magnitude drop in emissions; if we switched all energy production to these sources tomorrow, we’d be good on the climate front. 

These new energy sources – solar, wind, geothermal, nuclear, and fusion – address fossil fuels’ shortcomings from the perspective of the game. 

They’re abundant: renewables are renewable (save the panel and turbine replacement) as long as the sun shines and the wind blows, and there is enough fission and fusion fuel on the planet to last humanity billions of years. 

They’re energy-dense: nuclear fuel, like Uranium-235, is 2 million times more energy dense than fossil fuels, and fusion is 5-10 million times more energy dense! 
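Those headline ratios can be sanity-checked with some back-of-envelope physics. A minimal sketch, assuming complete fission/fusion of pure fuel and a rounded textbook heating value for coal (all specific numbers here are my assumptions, not figures from the piece):

```python
# Back-of-envelope check of the fission and fusion energy density claims.
EV = 1.602e-19    # joules per electronvolt
AMU = 1.661e-27   # kilograms per atomic mass unit

coal = 30e6       # J/kg, typical bituminous coal (assumed)

# U-235 fission: ~200 MeV released per fission of one 235 u nucleus
fission = (200e6 * EV) / (235 * AMU)   # ≈ 8.2e13 J/kg

# D-T fusion: ~17.6 MeV per reaction, reactants total ~5 u
fusion = (17.6e6 * EV) / (5 * AMU)     # ≈ 3.4e14 J/kg

print(f"fission / coal ≈ {fission / coal:,.0f}x")   # ~2.7 million
print(f"fusion  / coal ≈ {fusion / coal:,.0f}x")    # ~11 million
```

Fission lands near the “2 million times” figure; fusion comes out around 11 million against coal, or roughly 7.5 million against oil at ~45 MJ/kg, consistent with the 5-10 million range.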

That energy density and abundance is useful for things we do today, like power generation and large-scale industrial processes, but it also opens up new possibilities. 

New levels in ENERGY – more dense, abundant fuel sources – translate to new civilizational capabilities. Coal powered the Industrial Revolution. Oil and gas opened up new means of transportation. Solar, fission, and fusion will power AI data centers, robots, electric vehicles, desalination, flying cars, cultivated meat, vertical farming, and any number of new inventions that we’ve yet to dream up. 

The point I’m trying to make is: we would have had to transition to these new energy sources in order to unlock the next level of civilizational progress no matter what. 

And we were going to run out of fossil fuels at some point. Without the climate crisis, we may have had an energy crisis. 

The reason I say that the climate crisis is proof of simulation is that somehow, right when our energy usage is on the cusp of destroying our planet, forcing us to use new energy sources, for the first time in human history, the next set of energy sources on the abundance/density ladder are clean

You can’t make this shit up!

Amazingly, those energy sources were not developed because of their low carbon footprints. Bell Labs developed solar PV cells to power telephone systems in remote, humid locales where traditional dry cell batteries degraded too quickly. When early fission and fusion researchers took the baton from the Manhattan Project, they were excited by the technologies’ energy density and potential to provide abundant, reliable power. 

Solar, nuclear, and fusion were developed because they did something better than the energy sources humanity had developed to date, because they might be the next step on the tech tree, the next level in ENERGY. 

It is a happy accident of history, a great coincidental gift from the universe, or an intentionally designed feature of the simulation that they all turned out to not kill the planet. 

But without the climate crisis, their development and installation would likely have been too slow to matter. 

After initial military and space applications in the US, solar was relegated to Japan, which, as a remote island nation with limited natural resources, needed a way to gain energy independence.

It was only when Germany turned to solar as part of the clean energy movement in the 1970s, and when China began manufacturing for Germany at scale, that solar began to ride the cost curve down towards competitive prices. For two decades, climate-related subsidies around the world continued to drive down costs and drive up installations. 

Without the climate crisis, solar installs would not be where they are today. 

Nuclear fission, too, started with military applications, namely the atomic bomb. After an initial wave of excitement and nuclear plant installations in the 1960s, a number of factors – environmentalists, economics, slowing energy demand, and disasters – slowed new installs to a trickle. Germany even shut down its nuclear plants and replaced them with coal. Fortunately, at the COP28 climate summit, the United States led a group of nations in committing to triple nuclear energy capacity by 2050, “recognizing the key role of nuclear energy in reaching net zero.”

Without the climate crisis, “environmentalists” may have succeeded in killing nuclear energy. 

And then there’s fusion, the energy source furthest along the tech tree. Fusion has been a promising research project for nearly 80 years with no commercial success to show for it. At the turn of the millennium, most of the world’s research efforts and dollars were channeled into one promising but glacially slow project – ITER – with a projected timeline of 2035-2040. 

What’s happened since, however, is for all intents and purposes miraculous. A number of other branches of the tech tree – magnets, simulation software, and controllers, to name three – have all hit a point at which commercial fusion has become much more viable in the near-term. At the same time, government programs like the Department of Energy and ARPA-E and climate-focused investors like Bill Gates’ Breakthrough Energy and Lowercarbon Capital, have begun to pour significant dollars into private companies in the space. 


In 2012, investors put $154 million into six companies. In 2021, they invested $3 billion into 20 companies. Currently, there are roughly 80 companies pursuing fusion energy. We have Age of Miracles episodes on fusion coming out over the next two weeks, and after talking to founders, I am convinced that one of the companies we’ve spoken to will commercialize fusion sooner than the world expects. The question is just: which one, or how many of them?

The technology, “always 30 years away,” may arrive in as little as five years, just in the nick of time. 

Without the climate crisis, private fusion companies may not have gotten the talent or funding they needed to achieve fusion; now, it’s just a matter of who and when. 

We haven’t defeated the boss yet. There is a gigaton of work left to be done to avoid warming the planet by more than two degrees Celsius. COP28 agreements are not installed capacity. Fusion generators still need to produce more energy than they consume, and then we need to manufacture a lot of them. We still need to use all three technologies, and more besides, to pull CO2 from the atmosphere. 

If anything, humanity needs to put even more effort into the energy transition. If the Climate Crisis is proof of a simulation, it’s only as a signal to humans to work harder to ensure we make the transition happen. It won’t happen by itself. 

But at this point, it seems… very doable. 

Casey Handmer, the Terraform Industries CEO who we had on a recent episode of Age of Miracles, recently tweeted that we’ll get to net zero before the most optimistic expectations, despite the fact that “almost noone knows how or why we’ll achieve this.” He’s betting you can use solar to pull CO2 from the air and turn it into synthetic fuels. 

The grand challenge of, and incentives to prevent, the climate crisis have pulled in some of the smartest people in the world to use all the tools at humanity’s disposal to avoid disaster, and in the process, beat the boss and advance to the next level of ENERGY: The Game. 

Have Kids

Look, I tried to simplify millions of years of history and some very complex, multi-causal events into one clean narrative. I’m missing important details, even getting some wrong. 

But from this zoomed out view, the period that we’re living through isn’t a tragedy, it’s a trial, and it will be a triumph. 

Without the climate crisis, we would likely have ridden fossil fuels until it was too late. At that point, we wouldn’t have been able to power a growing population or increasingly powerful machines. We might not have had the resources to develop, produce, and scale the next generation of energy sources at all. Without the climate crisis, the future might have been bleak. That zero-sum, negative growth world would have been a bad one to bring kids into. 

Because of it, the world has accelerated the development and deployment of technologies like solar, fission, and fusion. We’re not out of the woods, but we have the tools to get out. 

The moment we begin to reverse the effects of climate change, we’ll have broken through to the next level, one in which we can manufacture energy and consume as much of it as we would like without fear of killing the planet. That is enormous and unprecedented. 

There will certainly be new problems and challenges. Utopia isn’t real, and we wouldn’t want it to be. 

But if history is a guide, the next level in ENERGY will transform and improve society in ways that are impossible to predict from our current vantage point, just as it would have been impossible to sketch the Industrial Revolution by looking at the Newcomen Steam Engine. 

Our World in Data

If we get this right, our kids will live in a world that’s more abundant and more ethical than the one we inhabit today. They’ll look back at 2023 the same way we look back at 1712: wouldn’t want to live there, but thankful for the contributions of the people who did. 

Climate change is real. It’s our current Boss. Defeating it will require deft gameplay from millions of people, governments, and businesses. 

But it doesn’t spell inevitable doom, either. Humans are incredibly adaptable, and against all odds, we’ve created exactly the right tools to defeat it and unlock the next level at exactly the right time. 

Having kids isn’t the problem; it’s the solution. It’s exactly because people who lived in much worse conditions than we do decided to have kids that we have the ultimate weapon in this game: human creativity. 

Whatever challenges await on the next level, we’ll need all the kids we can get to solve them. 

So don’t listen to the doom and gloom. It’s counterproductive, depressing, and evil. It prioritizes clicks over your happiness and uses fear to sell ads. The people pretending to be ethical by being pessimistic are not the good guys; they’re villains in this game.

But villains can be useful, too. They’re part of the game. If that fear has motivated some of our best people to join the fight to beat the boss and unlock the next level, humanity is better off for it.

Just don’t let it depress you into hopeless inaction, right when things are about to get really good. Don’t let it stop you from bringing kids into this world.

If we’re lucky, we might be alive to experience some of the wonders of this next level. I certainly hope to be. 

But I’m almost certain that our kids will be, and I am so excited for them. What a time to be alive!


Not all of you, of course. You might have very good non-climate-related reasons to not have kids. You might be too young or too old. You might not have the means to add another mouth to feed at the moment. There are a number of valid reasons to not have kids right now, but the climate isn’t one of them. 

Narrative Tug-of-War


The sourcing tool for data driven VCs

Harmonic AI is the startup discovery tool trusted by VCs and sales teams in search of breakout companies. It’s like if Crunchbase or CB Insights was built today and without a bunch of punitive paywalls. Accel, YC, Brex and hundreds more use Harmonic to:

  • Discover new startups in any sector, geography, or stage including stealth.

  • Track companies’ performance with insights on fundraising, hiring, web traffic, and more.

  • Monitor their networks for the next generation of founders.

Whether you’re an investor or GTM leader, Harmonic is just one of those high-ROI no-brainers to have in your stack. 

Find your next deal on Harmonic!

Hi friends 👋 ,

Happy Tuesday! Hope you all had a great Thanksgiving (or enjoyed the peace and quiet while us Americans were in Turkey comas).

Apologies that this is a little late — once again, the newsletter gods dropped a perfect example of the point I was trying to make in my lap at the last minute, and I’ve been up since 5:30 trying to incorporate it.

We live in a time of extreme narratives. It’s easy to get caught up and worked up when you take the extremes in isolation. Don’t. They’re part of a bigger game, and once you see it, the world makes a lot more sense.

Let’s get to it.

Narrative Tug-of-War

One of the biggest changes to how I see the world over the past year or so is viewing ideological debates as games of narrative tug-of-war

For every narrative, there is an equal and opposite narrative. It’s practically predetermined, cultural physics. 

One side pulls hard to its extreme, and the other pulls back to its own. 

AI is going to kill us all ←→ AI is going to save the world. 

What starts as a minor disagreement gets amplified into completely opposing worldviews. What starts as a nuanced conversation gets boiled down to catchphrases. Those who start as your opponents become your enemies. 

It’s easy to get worked up if you focus on the extremes, on the teams tugging the rope on each side. It’s certainly easy to nitpick everything they say and point out all of the things they missed or left out. 

Don’t. Focus on the knot in the middle. 

That knot, moving back and forth over the center line as each team tries to pull it further to their own side, is the important thing to watch. That’s the emergent synthesis of the ideas, and where they translate into policy and action. 

There’s this concept called the Overton Window: the range of policies or ideas that are politically acceptable at any given time. 

Since Joseph Overton came up with the idea in the mid-1990s, the concept has expanded beyond government policy. Now, it’s used to describe how ideas enter the mainstream conversation where they influence public opinion, societal norms, and institutional practices. 

The Overton Window is the knot in the narrative tug-of-war. The teams pulling on either side don’t actually expect that everyone will agree with and adopt their ideas; they just need to pull hard enough that the Overton Window shifts in their direction. 

Another way to think about it is like price anchoring: a company offers multiple price tiers knowing that you’ll land on the one in the middle and pay more for it than you would have without seeing how little you get for the cheaper tier or how much you’d have to pay to get all of the features. 

No one expects you to pay $7,000 for the Super Pro tier (although they’d be happy if you did). They just know that by showing it to you, it will make paying $69 for the Pro tier more palatable. 

The same thing happens with narratives, but instead of one company carefully setting prices to maximize the likelihood that you buy the Pro tier, independent and opposed teams, often made up of people who’ve never met, loosely coordinated through group chats and memes, somehow figure out how to pull hard enough that they move the knot back to what they view as an acceptable place. It’s a kind of cultural magic when you think about it. 

There are a lot of examples I could use to illustrate the idea, many of which could get me in trouble, so I’ll stick to what I know: tech. Specifically, degrowth vs. growth, or EA vs. e/acc. 

EA vs. e/acc

One of the biggest debates in my corner of Twitter, which burst out into the world with this month’s OpenAI drama, is Effective Altruism (EA) vs. Effective Accelerationism (e/acc). 

It’s the latest manifestation of an age-old struggle between those who believe we should grow, and those who don’t, and the perfect case study through which to explore the narrative tug-of-war. 

If you look at either side in isolation, both views seem extreme. 

EA (which I’m using as a shorthand for the AI-risk team), believes that there is a very good chance that AI is going to kill all of us. Given the fact that there will be trillions of humans in the coming millennia, even if there’s a 1% chance AI will kill us all, preventing that from happening will save tens or hundreds of billions of expected lives. We need to stop AI development before we get to AGI, whatever the cost. 

As that team’s captain, Eliezer Yudkowsky, wrote in Time:

Shut down all the large GPU clusters (the large computer farms where the most powerful AIs are refined). Shut down all the large training runs. Put a ceiling on how much computing power anyone is allowed to use in training an AI system, and move it downward over the coming years to compensate for more efficient training algorithms. No exceptions for governments and militaries. Make immediate multinational agreements to prevent the prohibited activities from moving elsewhere. Track all GPUs sold. If intelligence says that a country outside the agreement is building a GPU cluster, be less scared of a shooting conflict between nations than of the moratorium being violated; be willing to destroy a rogue datacenter by airstrike.

The idea that we should bomb datacenters to prevent the development of AI, taken in a vacuum, is absurd, as many of AI’s supporters were quick to point out. 

e/acc (which I’m using as a shorthand for the pro-AI team) believes that AI won’t kill us all and that we should do whatever we can to accelerate it. They believe that technology is good, capitalism is good, and that the combination of the two, the techno-capital machine, is the “engine of perpetual material creation, growth, and abundance.” We need to protect the techno-capital machine at all costs. 

Marc Andreessen, who rocks “e/acc” in his Twitter bio, recently wrote The Techno-Optimist Manifesto, in which he makes the case for essentially unchecked technological progress. One section in particular drew the ire of AI’s opponents: 

We have enemies.

Our enemies are not bad people – but rather bad ideas.

Our present society has been subjected to a mass demoralization campaign for six decades – against technology and against life – under varying names like “existential risk”, “sustainability”, “ESG”, “Sustainable Development Goals”, “social responsibility”, “stakeholder capitalism”, “Precautionary Principle”, “trust and safety”, “tech ethics”, “risk management”, “de-growth”, “the limits of growth”.

This demoralization campaign is based on bad ideas of the past – zombie ideas, many derived from Communism, disastrous then and now – that have refused to die.

The idea that things most people view as good – like sustainability, ethics, and risk management – are enemies, taken in a vacuum, seems absurd, as many journalists and bloggers were quick to point out. 

What critics of both pieces missed is that neither argument should be taken in a vacuum. Nuance isn’t the point of any one specific argument. You pull the edges hard so that nuance can emerge in the middle. 

While there are people on both teams who support their side’s most radical views – a complete AI shutdown on one side, unchecked techno-capital growth on the other – what’s really happening is a game of narrative tug-of-war in which the knot is regulation

EA would like to see AI regulated, and would like to be the ones who write the regulation. e/acc would like to see AI remain open and not controlled by any one group, be it a government or a company. 

One side tugs by warning that AI Will Kill Us All in order to scare the public and the government into hasty regulation, the other side tugs back by arguing that AI Will Save the World to stave off regulation for long enough that people can experience its benefits firsthand. 

Personally, and unsurprisingly, I’m on the side of the techno-optimists. That doesn’t mean that I believe that technology is a panacea, or that there aren’t real concerns that need to be addressed. 

It means that I believe that growth is better than stagnation, that problems have solutions, that history shows that both technological progress and capitalism have improved humans’ standard of living, and that bad regulation is a bigger risk than no regulation. 

While the world shifts based on narrative tug-of-wars, there is also truth, or at least fact patterns. Doomers – from Malthus to Ehrlich – continue to be proven wrong, but fear sells, and as a result, the mainstream narrative continues to lean anti-tech. The fear is that restrictive regulation is put in place before the truth can emerge. 

Because the thing about this game of narrative tug-of-war is that it’s not a fair one. 

The anti-growth side needs only to pull hard and long enough to get regulation enacted. Once it’s in place, it’s hard to overturn; typically, it ratchets up. Nuclear energy is a clear example

If they can pull the knot over the regulation line, they win, game over. 

The pro-growth side has to keep pulling for long enough for the truth to emerge in spite of all the messiness that comes with any new technology, for entrepreneurs to build products that prove out the promise, and for creative humans to devise solutions that address concerns without neutering progress. 

They need to keep the tug-of-war going long enough for solutions to emerge in the middle. 

Yesterday, Ethereum co-founder Vitalik Buterin wrote a piece called My techno-optimism in which he proposed one such solution: d/acc. 

The “d,” he wrote, “can stand for many things; particularly, defense, decentralization, democracy and differential.” It means using technology to develop AI in a way that protects against potential pitfalls and prioritizes human flourishing. 

On one side, the AI safety movement pulls with the message: “you should just stop.” 

On the other, e/acc says, “you’re already a hero just the way you are.” 

Vitalik proposes d/acc as a third, middle way: 

A d/acc message, one that says “you should build, and build profitable things, but be much more selective and intentional in making sure you are building things that help you and humanity thrive”, may be a winner.

It’s a synthesis, one he argues can appeal to people whatever their philosophy (as long as the philosophy isn’t “regulate the technology to smithereens”): 

Vitalik Buterin, My techno-optimism

Without EA and e/acc pulling on both extremes, there may not have been room in the middle for Vitalik’s d/acc. The extremes, lacking nuance themselves, create the space for nuance to emerge in the middle. 

If EA wins, and regulation halts progress or concentrates it into the hands of a few companies, that room no longer exists. If the goal is to regulate, there’s no room for a solution that doesn’t involve regulation. 

But if the goal is human flourishing, there’s plenty of room for solutions. Keeping that room open is the point. 

Despite the fact that Vitalik explicitly disagrees with pieces of e/acc, both Marc Andreessen and e/acc’s pseudonymous co-founder Beff Jezos shared Vitalik’s post. That’s a hint that they care less about their solution winning than about a good solution winning. 

Whether d/acc is the answer or not, it captures the point of tugging on the extremes beautifully. Only once e/acc set the outer boundary could a solution that involves merging humans and AI through Neuralinks be viewed as a sensible, moderate take. Ray Kurzweil made that point a couple decades ago and has the arrows to prove it. 

In this and other narrative tug-of-wars, the extremes serve a purpose, but they are not the purpose. For every EA, there is an equal and opposite e/acc. As long as the game continues, solutions can emerge from that tension. 

Don’t focus on the tuggers, focus on the knot.

Thanks to Dan for editing!

That’s all for today. We’ll be back in your inbox with the Weekly Dose on Friday!

Thanks for reading,


OpenAI & Grand Strategy

Welcome to the 240 newly Not Boring people who have joined us since last week! If you haven’t subscribed, join 216,295 smart, curious folks by subscribing here:

Subscribe now

Instead of an ad, let me tell you about… Not Boring Capital 

Not Boring Capital invests in hard startups and tells their stories.

For the past couple of years, I’ve been writing about why hard startups are the best place to invest and why I think tech companies are going to get much bigger, while investing in many of those companies at Not Boring Capital. SaaS drove the past decade’s returns, but it will not drive the next decade’s. 

Hard tech companies can scale without diluting early investors to smithereens, and the markets they tap at scale are massive. I think the outcomes will be bigger, and more impactful, as a result.

We’re in a great position for this shift: we write non-lead checks in the best hard startups we can find and provide differentiated value by explaining what they do, like we’ve done this year with Atomic AI, Varda, and Array Labs. 

Techno-optimism is on the rise. The future is bright. We’ve been consistent in our conviction throughout the bear market, and it’s time to accelerate. 

If you’re an accredited investor or family office interested in learning more about investing in Not Boring Capital, let me know here and I’ll reach out:

I’m Interested in Not Boring Capital

Hi friends 👋, 

Happy Wednesday! What a week.

I don’t know if there’s a Newsletter God, but certainly if there is, that god was smiling on me. I wanted to write a piece on grand strategy in tech, and I was gifted the perfect case study. 

Whether you just want to sound a little more sophisticated when you explain what happened at OpenAI to your family at Thanksgiving dinner or plan to build a trillion-dollar company, I’ve got you covered.

Let’s get to it.

OpenAI & Grand Strategy

There’s this meme that goes something like this: 

“Men used to go to war, now they run tech companies.” 

The meme is meant to be sarcastic, to suggest that our modern pursuits are relatively meaningless. 

That’s not how I read it. I think it’s great. 

Count me among those who would rather our best people start companies to achieve goals no old-time general could have dreamed of than kill each other by the hundreds of thousands in pursuit of lesser ones. 

Oh, you brought 600,000 young men into Russia to what? Enforce the Continental System, weaken Britain’s economy, expand French influence in the East, and flex your military prestige? And you lost half a million of them? Bravo! Quel coup de maître! 

Do you know how many men Napoleon would have risked to control even a fraction of the wonders we have at our disposal today? 

With 770 nerds, OpenAI unlocked infinitely available intelligence. With fewer than 10,000, SpaceX built a rocket that may one day make humans multi-planetary. With about 30,000 lines of code, Satoshi introduced a viable alternative to government-issued currencies. 

The occasional board coup is a small price to pay compared to a bloody war. 

In On War, 19th-century military theorist Carl von Clausewitz wrote that “War is merely the continuation of policy by other means.” 

The great strategist meant that war should always serve policy, rather than the other way around. War is one way to accomplish a goal, not the goal itself. Means, not ends. 

Technology, too, is just the continuation of policy by other means, but it’s a positive-sum and relatively peaceful continuation. 

Think of it this way: at the two extremes of the climate change solution spectrum, there’s “reduce the world’s population to under 5 billion people” and “build more nuclear, solar, and fusion.” One would lead to war, death, and misery, and the other to abundance. Same ends, very different means. 

As tech companies get much bigger, as they accomplish nation-state-level goals with more limited means, their leaders need to think like the great leaders of history, people like Augustus Caesar, Elizabeth I, and Abraham Lincoln. 

The most ambitious tech leaders need to be grand strategists. 

What is grand strategy? 

“The alignment of potentially unlimited aspirations with necessarily limited capabilities.”  

That’s how Yale Military and Naval History Professor John L. Gaddis defines it in On Grand Strategy, the 2018 book he wrote to condense the lessons from history’s great leaders and strategists into 312 pages. 

On Grand Strategy, John L. Gaddis

He adds, “Alignments are necessary across time, space, and scale.” 

The book talks a lot about war — The New York Times titled its review, When to Wage War, and How to Win — but when I read it a few weeks ago while writing Tech is Going to Get Much Bigger, I kept thinking about how much it applies to really ambitious tech companies.

At the end of that piece, I wrote, “Take your ambition and multiply it by 100. Craft a Grand Strategy. We’ll talk about that next time.”

Then I started writing this piece, thinking about what examples I could use to illustrate the themes from the book, things like… 

  • Align aspirations and capabilities

  • Maintain ecological sensitivity 

  • Embrace contradictions 

  • Balance theory and practice

  • Know what it’s all about 

  • Expect the unexpected 

…when a decade’s worth of examples fell into my lap over the course of five days. 

  1. Elon Musk launched Starship, another step in his nearly 30-year pursuit of making humans multiplanetary.

  2. Brian Armstrong watched the downfall of another less scrupulous competitor when Binance CEO CZ stepped down and pleaded guilty to money laundering charges, and Binance agreed to exit the US market.

  3. Sam Altman and Satya Nadella emerged from an attempted OpenAI coup stronger than they entered. 

In each case, this week’s headlines were the result of decades of patient, flexible effort across time, space, and scale. 

To the grand strategist, specific companies may just be means of acquiring the capabilities to pursue greater aspirations. The pursuit spans time, space, and scale. 

The ancient Greek poet Archilochus of Paros wrote, as Isaiah Berlin remembered it, “The fox knows many things, but the hedgehog knows one big thing.” 

The grand strategist, according to Gaddis, is both fox and hedgehog: “We’d need to combine, within a single mind (our own), the hedgehog’s sense of direction and the fox’s sensitivity to surroundings. While retaining the ability to function.” 

Grand strategists must, as Octavian and Lincoln did, “stick to their compass heading while avoiding swamps.” Pursue a grand ambition relentlessly, but remain flexible in your pursuit as you respond to changing circumstances. 

Each of the three examples I mentioned above deserves its own analysis. The fact that there are clearly grand strategists at work in all three cases – even if the grand strategy is only clear, even to them, with the benefit of hindsight – shows that building businesses that challenge governments in scope and scale requires grand strategy. 

The world’s largest industries are up for grabs, and new, larger ones are open to creation. 

But just because they’re open doesn’t mean they’ll be easy. Cracking the really big industries is going to be a dogfight. Look at crypto. Or nuclear. Or AI. Or OpenAI. 

The higher you climb, the stronger the opposition will be, from governments, incumbents, and even your own board. Their tactics will be dirty. And it makes sense! 

In the context that tech is going to get much bigger as it goes after the big pools of money and power, it’s the rational response. Wars have been fought, over and over again throughout history, for much smaller prizes. 

One way to approach the situation would be to complain about how unfair it all is. But that won’t work. There is no referee. 

The other way would be to accept reality as it is and play the game on the field to win. 

This is the more effective route, and the more difficult one. It requires foresight and flexibility, theory and common sense, idealism and pragmatism, long-term goals and short-term actions, power and vulnerability, caution and audacity.

If you take the red pill, you’re going to need to craft a grand strategy.

There’s no playbook, unfortunately, just lessons from history that you can use as a starting point. As Gaddis writes: 

Begin with theory and practice, both of which Clausewitz and Tolstoy respect without enslaving themselves to either. It’s as if, in their thinking, abstraction and specificity reinforce each other, but never in predetermined proportions. Each situation requires a balancing derived from judgment and arising from experience, skills acquired by learning from the past and training for the future.

Theory serves practice, which corrects theory. Be schooled in the theory and flexible in its practice. “Assuming stability is one of the ways ruins get made,” Gaddis warns. “Resilience accommodates the unexpected.” 

“Resilience accommodates the unexpected” sounds a lot like what went down at OpenAI over the past week.  

The best way to understand the OpenAI saga is through the lens of grand strategy, and the best way to understand how to apply grand strategy to tech companies is through the lens of the OpenAI saga. 

Sam’s Grand Strategy 

Last Thursday, Sam Altman was the CEO of an AI company with a weird non-profit board. On Friday, that board fired him as CEO without warning. Today, six days later, he is back as CEO of the same AI company, with a new board and with the tech equivalent of the Mandate of Heaven.

Writing about the unpredictable chain of events that turned Octavian from 18-year-old kid into Augustus Caesar, first Emperor of Rome, Gaddis says, “There’s no way Octavian could have planned all of this. Instead he seized opportunities while retaining objectives. Octavian stuck to his compass heading while avoiding swamps.”

The same could be written of the situation at OpenAI over the past five days. Altman didn’t plan this situation, but the outcome looks as if he did. 

For grand strategists, as Littlefinger told Varys: “Chaos isn’t a pit. Chaos is a ladder.” 

Way back in August 2008, when Altman was 23 years old and still just the CEO of a YC-backed location-based social network called Loopt, Y Combinator founder Paul Graham wrote in a blog post, “When we predict good outcomes for startups, the qualities that come up in the supporting arguments are toughness, adaptability, determination… Sam Altman has it. You could parachute him into an island full of cannibals and come back in 5 years and he’d be the king.”

Eight months later, in April 2009, Graham wrote a blog post about the five most interesting founders of the last 30 years. Alongside Steve Jobs and Larry & Sergey, PG highlighted the then-24-year-old Altman: 

Paul Graham, Five Founders 

“On questions of strategy or ambition I ask ‘What would Sama do?’”

That combination of strategy and ambition is essentially grand strategy. And before Altman had achieved any success, Graham singled him out for possessing it. 

The way to view the past weekend isn’t as Altman pulling a good outcome from a bad situation in days. 

It’s as the result of at least 15 years of building capabilities to match his aspirations, and being adaptable enough to use them when the time came.

Aligning Aspirations and Capabilities

The meta-principle from On Grand Strategy is the importance of aligning your aspirations and your capabilities. 

If there’s just one thing to take away, it’s that. So this is the one we’ll spend the most time on. 

“Napoleon lost his empire by confusing aspirations with capabilities,” Gaddis writes. “Lincoln saved his country by not doing so.” 

Time, space, and scale. Napoleon lost his empire by failing to align his capabilities with the scale of the challenge in Russia. Lincoln saved his by patiently waiting for the right time to issue the Emancipation Proclamation. 

Octavian, who would become Caesar Augustus, grew his capabilities over time to match his aspirations as well. 

After the death of Julius Caesar in 44 BC, his heir Octavian waited 17 years to claim the leadership of the Roman Republic that Caesar passed to him. In that time, he made and broke alliances, won and lost battles, built up his strengths and learned to manage his weaknesses, patiently waited for his enemies to eliminate themselves, and came to represent what Rome stood for in the eyes of the Roman people. He built up his capabilities to match his ambitions. 

Augustus of Prima Porta

By biding his time – seeing time as an ally – and building up his capabilities to match his ambitions, he gained more power than even Julius Caesar had, maintaining the facade of the Roman Republic while in reality leading the new Roman Empire. And he avoided his predecessor’s downfall, ruling over the Empire for the next 40 years until dying of natural causes just short of his 76th birthday. 

How much time do men spend thinking about the Roman Empire? Maybe not enough! 

In Choose Good Quests, Markie Wagner and Trae Stephens compare founders to players with “specific resources, skills, and powers.” 

For a college-aged Mark Zuckerberg, building a social network for college students was a good quest because it leveraged his specific resources, skills, and powers. For Palmer Luckey, who sold Oculus to an older Zuck for $2 billion, a good quest was one with higher aspirations to match his greater capabilities: modernizing defense. 

Zuck and Palmer Luckey, from Choose Good Quests by Markie Wagner and Trae Stephens

Both Zuck and Palmer are accomplishing things today that they would have failed at in their teens. If you have Level 1 capabilities, you need to level up before going after Level 10 aspirations. The same was true for Octavian, and for Sam Altman. 

In all of the craziness of the past week, the most striking thing to me was the support Altman received: from other founders, investors, the tech community, and most importantly, OpenAI’s employees. 

When the news initially broke on Friday afternoon, given the wording in the board’s blog post, the assumption was that Altman had done something Very Bad. Quickly, though, prominent founders and investors like Airbnb’s Brian Chesky and SV Angel’s Ron Conway came to his defense. Soon after, X was filled with tweets from founders who Sam had quietly gone above and beyond to help over more than a decade. 

That shift in the narrative gave him a position of strength from which to fight back, and it wouldn’t have been possible if he hadn’t spent the past 18 years in Silicon Valley building up his capabilities to match his aspirations. 

When the news broke on Saturday morning that Altman and OpenAI co-founder and President Greg Brockman were already talking to investors about starting something new, those rumors carried a credibility that they wouldn’t have if Altman were a fresh founder. 

By Saturday evening, when rumors that Altman might come back as CEO started swirling, it seemed almost a fait accompli. That night, Sam tweeted “i love the openai team so much” and the team responded with a show of force in the form of quote-tweet heart emojis, signifying that if Sam left, they would too in the most 2023 way possible. 

Negotiations dragged on all through Sunday. It looked like Altman and Brockman would return and the board would resign. And then on Monday morning, we woke up to a bombshell: former Twitch CEO Emmett Shear would become OpenAI CEO, while Sam and Greg would lead a new AI research team within Microsoft and bring OpenAI employees with them. 

Microsoft CEO Satya Nadella, a grand strategist in his own right, and Sam Altman were the winners, and OpenAI’s board were the losers. 

But the saga didn’t end there. On Monday morning, at the buttcrack of dawn west coast time, more than 500 of OpenAI’s 770 employees (ultimately over 95%) signed a letter threatening to quit and join Sam and Greg at the new Microsoft AI thing unless all current board members resigned and a new board reinstated Sam and Greg. 

OpenAI Employee Letter

Over the past couple of days, it became clear that the board’s reasons for firing Sam were shaky at best, and that whatever the facts, Altman had the support of the people. 

As of this morning, Sam is back in as CEO, with a new board.

Altman played the situation perfectly. He drew on a career’s worth of built-up capabilities and relationships, and maintained flexibility on the specifics in pursuit of the long-term goal by showing a willingness to work towards AGI at a new startup, within Microsoft, or at OpenAI. 

It was downright Octavian. 

When Octavian reached the height of his power after the downfall of his rival Antony, writes Gaddis: 

He secured authority by appearing to renounce it, most dramatically on the first day of 27 when he unexpectedly gave up all his responsibilities. The surprised senate had no choice but to forbid this and to award Octavian the title of princeps (“first citizen”) – as well as a new name: Augustus.

The board, meanwhile, failed miserably in its coup by ignoring the lessons of grand strategy. 

As Gaddis warns, “The principle, for both Augustine and Machiavelli, reflects common sense: if you have to use force, don’t destroy what you’re trying to preserve.” 

They almost destroyed the company, and ended up destroying their oversight over it. 


Their capabilities didn’t match their aspirations. 

In fact, they seemed to have wildly misjudged their capabilities, believing that the company’s wonky governance structure gave them the leeway to do as they wished, but not realizing that real power doesn’t come from pieces of paper, but from the will of the people. 

They didn’t realize that OpenAI is nothing without its people. 

And they didn’t maintain ecological sensitivity. They didn’t understand the tech community well enough to understand that most people would support Altman, didn’t understand the investors well enough to know how much pressure they would apply, and didn’t know the company well enough to realize that its people would walk. They didn’t pick up on the subtle vibe shift from safetyism to acceleration, and failed to anticipate how little support they would receive. 

They came at the king, and missed, and made him the Emperor.  

Becoming a Grand Strategist

Here’s the thing: as crazy as this past week was, things are going to keep getting crazier. 

I keep banging this drum, but I can’t bang it loudly enough: things will keep getting crazier.

The things that tech companies build in the coming years will blow even my optimistic mind and give them power on a scale no company has had. They’ll rival nation-states. They already rival nation-states – Apple’s $383 billion in 2023 revenue would rank it 41st in the world in national GDP, sandwiched between Hong Kong and South Africa, and while revenue and GDP aren’t perfect comps, those are serious numbers – but they’ll rival more powerful ones. 

Battles like the one we all just witnessed at OpenAI, like the decades-long attack on nuclear, like the government’s anti-crypto Operation Chokepoint 2.0, will become more common, not less. 

And that’s fair. The stakes are that high. 

If you’re reading this and you have ambitions to make the world a different place, know that coming in, but don’t let it deter you. Know that it’s what you’re up against, and start crafting your own grand strategy. 

Acquire capabilities to match your ambitions. Maintain ecological sensitivity. Build alliances. Embrace chaos. Know theory while accepting that it won’t survive contact with practice. Remain flexible in tactics and steadfast in ambition. 

Certainly, read On Grand Strategy. I’ve captured only a fraction of the lessons from the book. Live in the stories that Gaddis tells; put yourself in the shoes of history’s greats. 

Coming up on the holiday season, when we all get a little time to relax and reflect, think about the final theme from the book: Know What It’s All About.

What are you working towards, and why? 

There’s never been a better time for a small group of passionate people to change the course of history, for the better, bloodlessly. Go craft your grand strategy and get to work.

Thanks to Dan for editing!

That’s all for today. Have a Happy Thanksgiving, and we’ll be back in your inbox with the Weekly Dose on Friday!

Thanks for reading,


Blockchains As Platforms

Welcome to the 833 newly Not Boring people who have joined us since last week! If you haven’t subscribed, join 216,055 smart, curious folks by subscribing here:

Subscribe now

Today’s Not Boring is brought to you by… Alto

Alto allows individuals to invest in alternative assets with their retirement funds through a self-directed IRA. I love and use Alto because I can diversify my portfolio, all the while minimizing my tax burden by investing out of tax-advantaged retirement accounts. It’s a win-win. My accountant father-in-law is very proud of me for this type of long-term financial planning.

Alternative assets (real estate, private credit, VC funds, crypto, fine art, etc.) have long been thought of as reserved for the ultra-rich and professional investors. Alto set out to change that in 2018 when it launched its platform designed to streamline access to alternative assets for individual investors wanting to invest their retirement funds. That last part is important (and what makes Alto a no-brainer in my opinion), because investing in alternative assets with funds earmarked for retirement means you’re investing for the future with assets that have a longer time horizon and greater potential upside, while also claiming tax benefits. 

Alto just launched Alto Marketplace, a capital raise platform that connects individual accredited investors to leading funds and exclusive opportunities. If you’re allocating to alternatives, consider investing in a tax-advantaged way and check out Alto.

Start Investing in Alts with Alto Today

Hi friends 👋,

Happy Wednesday! Last week, I promised you an essay on Grand Strategy, and that’s coming, but while writing it, I opened Twitter and I learned something odd…

Crypto seems to be not dead.

It’s weird because I could have sworn people said it was dead, that if you’re in crypto, you should pivot to AI. And yet, here we are.

Now, who knows whether this recent run-up is the beginning of a new bull market or just a little glimmer of hope in the middle of a long bear. Certainly not me. Long-term, price matters only insofar as it attracts developers to the space to build good products.

Crypto is often viewed as this ideological thing. You either love it or hate it. But I think a more useful way to think about the space is that blockchains are just platforms. In their current state, they present unique benefits and real drawbacks.

Ultimately, developers will choose to build on blockchains if the benefits outweigh the drawbacks for the specific product they’re trying to build. It’s that simple.

If this is the end of the bear market, it seems odd that it’s come without new killer use cases. My guess is that higher prices will make developers take another look at blockchains, and that when they do, a larger number will find that the trade-offs make sense for what they’re trying to build.

This is a shorter one, so if you’re looking for something to fill the void, Age of Miracles is about to hit the halfway mark, with our fifth (and best yet) episode coming out Friday. Subscribe and listen to our first four on Apple, Spotify, or YouTube.

Let’s get to it.

Blockchains As Platforms

Blockchains are platforms on top of which developers can build products. 

Their success depends on whether developers choose to build products on top of them. As more developers build better products on blockchains, they will attract more users.

Generally, developers, in their enlightened self-interest, build the products that they believe will make them the most money. That means building products that will attract the most users, or attract a smaller number of users who are willing to pay them more money. Certainly, there are other considerations – impact, self-actualization, novelty, whatever – but for our purposes, let’s assume developers build the products that they think will make them the most money. 

When choosing which platforms to build on top of, developers need to weigh trade-offs. What benefits does the platform offer the product, and ultimately its users, and what drawbacks does the platform present? 

Different developers will make different trade-offs depending on what they’re building. 

Over time, as infrastructure improves, the trade-off calculus will come out in blockchains’ favor for more use cases. 

Take the cloud as an example. When Amazon first built AWS, startups were early adopters while larger companies and governments with higher security requirements remained on-prem. For startups, the low upfront cost, scalability, and ease of AWS outweighed the potential security risks and lack of full control of the cloud. The cloud made previously infeasible companies feasible. For large companies and governments, the familiarity, tooling, and security of on-prem servers outweighed AWS’ benefits. Over time, as Amazon and the other cloud providers invested in performance, tooling, and security, the drawbacks shrank as the benefits grew. Cloud usage has grown as the trade-off has flipped in its favor for more and more developers. 

The decision of whether or not to build on a blockchain depends on trade-offs, too. 

On the benefits side, blockchains offer: 

  • Decentralization

  • Smart contracts 

  • Tokens 

  • Global exchange

  • Ownership

  • Governance

  • Verification and transparency

  • Permissionlessness

  • Composability 

  • Immutability

  • Security

  • Ability to make commitments in code

But blockchains in their current form come with significant drawbacks: 

  • Slowness

  • Higher costs

  • Clunky user experience

  • Limited privacy

  • Regulatory uncertainty

  • Key management difficulty

  • Stigmatization

Whether a developer chooses to build their product onchain depends entirely on whether the benefits outweigh the drawbacks for the specific product they’re trying to build.

Today, according to the latest numbers from a16z’s State of Crypto Index, there are 23.3k active developers building in crypto out of roughly 1,000x that many developers in the world. The drawbacks currently outweigh the benefits for 99.9% of developers. 
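The 99.9% figure is just arithmetic on the paragraph’s own numbers. A quick back-of-the-envelope sketch (the 23.3k and roughly-1,000x figures come from the a16z stat above; the implied world developer count is an approximation, not a precise census):

```python
# Back-of-the-envelope check on the share of developers building onchain,
# using the figures quoted above: 23.3k active crypto developers and
# roughly 1,000x that many developers worldwide.
crypto_devs = 23_300
world_devs = crypto_devs * 1_000  # ~23.3 million developers globally

share_onchain = crypto_devs / world_devs
print(f"{share_onchain:.1%} of developers build in crypto")  # 0.1%
print(f"{1 - share_onchain:.1%} don't (yet)")                # 99.9%
```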

But for a small number of developers, the benefits outweigh the drawbacks because blockchains allow them to do things that they could not otherwise do. 

A significant proportion of products built on blockchains are financial – exchanges, NFTs, and payments – because those are products that couldn’t be built on traditional infrastructure. Slowness, higher costs, and a clunky UX don’t matter as much as the fact that these products are possible to build on blockchains. 

It’s no surprise that the most popular social product in crypto over the past year is one with money baked into its mechanics – it’s a product that’s only possible in crypto. The app crashes and the product is slow and clunky, but those drawbacks are outweighed by the unique ability to make money every time someone trades your keys. 

It won’t always be just about money, though. As blockchains’ capabilities expand, performance improves, and drawbacks get figured out, the number of things built on blockchains grows. The more “free” you can make the benefits, the easier the decision becomes. 

Bitcoin the blockchain was good at sending and receiving bitcoin the cryptocurrency. It didn’t attract many developers besides those who were philosophically aligned or wanted to build with something new, and there are few examples of popular products built on Bitcoin. 

When Ethereum introduced smart contracts, it expanded the things that developers could do on blockchains. It increased the benefits and decreased the drawbacks, but it was still slow and expensive and clunky enough that it captured mostly newly possible use cases. 

With faster and cheaper blockchains and L2s, the trade-offs tilt further in favor of developing certain products onchain. With zero-knowledge proofs that provide privacy onchain, the trade-offs tilt further in favor of developing certain products onchain. With UX improvements like embedded wallets, multi-party computation, and account abstraction, the trade-offs tilt further in favor of developing certain products onchain. 

Take social apps: web3 social products like Farcaster and Lens won’t win by being Twitter, but decentralized. Users care about speed, experience, and dopamine more than they care about decentralization. 

But as the infrastructure improves, the benefits of building on blockchains begin to outweigh the drawbacks. In August, Farcaster moved from Ethereum Mainnet to Optimism, a faster and cheaper Layer 2. It integrated NFT minting directly in casts with Zora, and song minting directly in casts as well.

It’s reducing speed and cost drawbacks while leaning into the benefits of composability. The trade-off tilts further in favor of developing onchain. 

There are plenty of products that would benefit from the unique features of crypto if the drawbacks were low enough. Verification, for example, becomes more important with the rise of generative AI, as does the ability to own your own models and tap into decentralized compute. Tokens and value exchange can be useful for loyalty and rewards products. When transactions cost $50 in gas fees and took minutes to finalize, the drawbacks weren’t low enough. 

But with cheaper, faster transactions and UX improvements, those products start to make sense. 

Blackbird, for example, is a restaurant loyalty app founded by Resy co-founder Ben Leventhal. It’s built on Coinbase’s Base L2 and uses embedded wallets so that it’s fast, cheap, and doesn’t feel like a crypto app. It feels like a regular app, with crypto benefits, and it’s rapidly expanding with great bars and restaurants in NYC. Soon, it will be crazy to launch a loyalty product that doesn’t use crypto. 

Blackbird Restaurants in NYC 

In short, as crypto’s infrastructure continues to improve, more and more developers will find that its benefits outweigh its drawbacks and decide to build onchain. 

This is the simplest way I’ve come up with to think about what use cases currently make sense in crypto and which don’t yet, and the simplest way to track progress in the space. 

It’s as much art as science, since developers will have to make a judgment call on what users will care about more. Some will build products onchain believing the trade-off has flipped in blockchains’ favor, only to realize that the benefits don’t quite outweigh the drawbacks. 

I think that’s what Dani Grant and Nick Grossman at USV were describing in their 2018 blog post, The Myth of the Infrastructure Phase: “First, apps inspire infrastructure. Then that infrastructure enables new apps.” 

It takes people being a little early and trying things that don’t quite make sense yet to show other developers which drawbacks need to be attacked next. 

But the drawbacks are being addressed. During the bear market, crypto has improved most of the drawbacks I listed above:

  • Slowness → faster L2s like Optimism, Base, zkSync and L1s like Solana, Aptos, and Sui

  • Higher costs → cheaper L2s like Optimism, Base, zkSync and L1s like Solana, Aptos, and Sui 

  • Clunky user experience → account abstraction, embedded wallets, MPC

  • Limited privacy → ZK proofs and privacy-focused L1s like Aleo and L2s like Aztec

  • Regulatory uncertainty → work-in-progress but good wins in court

  • Key management difficulty → companies like Bastion (a portfolio company)

  • Stigmatization → will come with more good products 

There’s still a ton of improvement left to be made. Onchain actions need to get nearly as cheap and fast as offchain actions for the benefits to outweigh the drawbacks for many use cases. Developers will need to be thoughtful about which actions need to be onchain, which can take place offchain, and the best ways to build experiences that combine the two.

We’ll likely never get to a world in which everything is built onchain. For some products, crypto’s benefits might not matter and might even be counterproductive. That’s OK. 

But over time, as infrastructure developers attack crypto’s drawbacks, and app developers get crypto’s benefits for closer to free, I expect that we’ll see a surprising number of products move onchain.

Thanks to Dan for editing!

That’s all for today. We’ll be back in your inbox with the Weekly Dose on Friday!

Thanks for reading,


Tech is Going to Get Much Bigger

Welcome to the 620 newly Not Boring people who have joined us since last Tuesday! If you haven’t subscribed, join 215,222 smart, curious folks by subscribing here:

Subscribe now

Our new podcast, Age of Miracles is three episodes in, with a fourth coming Friday. Today’s essay is about what happens when we get cheap energy, intelligence, and labor. This season of Age of Miracles is about how we get that energy. Listen on Apple, Spotify, or YouTube.

Today’s Not Boring is brought to you by… Create

If you read Not Boring, you likely already know that my brother Dan writes the Weekly Dose of Optimism. What you might not know is that he also runs Create — a creatine gummy brand that’s sold nearly 15 million gummies in its first year of business.

Creatine?! That bodybuilder supplement?

Yes, creatine. It turns out that most people can benefit from creatine supplementation — whether you want to gain lean muscle, improve muscle recovery, or increase cognitive performance. Yeah, that’s right…it turns out creatine can improve brain health, everything from cognition to memory to mood. And of course, when matched with resistance training, it’s gonna help safely build muscle.

Create is giving Not Boring readers early access to its Black Friday Sale (up to 40% off + Free Gift With Purchases) here. Give them a shot — even though I’m a biased older brother, the gummies are delicious, convenient, and have actually worked for me.

Try Create (Up to 40% off)

Hi friends 👋,

Happy Wednesday! Sorry for the late email – I was ready to send yesterday, but I didn’t like the draft and I want this one to be good, so I started rewriting it at 8:30 am yesterday. 

I could be wrong – I often am – and I certainly don’t have a good enough crystal ball to predict the timing or the specifics, but it’s my best distillation of the way things are moving and what that might mean for tech companies. 

Think of it as half-prediction, half-thought exercise.

Let’s get to it.  

Tech is Going to Get Much Bigger

Tech is going to get so much bigger in the next decade or two that it will make everything up until now look quaint by comparison. We’re standing on that part of the curve that looks steep in the present but will look prairie flat looking back from the future. 

I’ve been writing around various parts of this thesis for a while. The simple version comes from Working Harder and Smarter, which I wrote last September: 

  • Until the 1970s, tech was about hardware

  • From the 1970s to today, tech has been about software

  • From here on out, tech will be about the combination of software and hardware

That was 14 months ago, and already so much has changed. 

The Tesla AI Day at which they unveiled Optimus was still three weeks away. Now Optimus can do this:

Figure was still in stealth. Now its 01 humanoid robot can do this:

ChatGPT wasn’t around then. Yesterday, OpenAI announced all of this:

Millions of people and businesses are going to have armies of agents that just keep getting smarter and more capable with every model upgrade. 

Long, long ago, when I wrote that piece, the National Ignition Facility hadn’t achieved fusion ignition, half of the nuclear fission startups we’ve spoken to for Age of Miracles didn’t exist, and solar hadn’t yet crossed the $100 billion quarterly investment mark.


That’s head-spinning progress, all made in a supposedly bad economy with higher interest rates. 

The pace of change is important, but more important is what’s changing.

At the end of all of the interviews we do for Age of Miracles, we ask each guest what the world will look like when we have abundant energy. Isaiah Taylor, the founder of Valar Atomics, gave an answer that frames the situation perfectly: 

There are only really three pillars to anything around us, as far as consumable goods. We’ve got energy, intelligence, and dexterity.

Those are the three things that go into any physical good, any product. And we are like right on the cusp of getting all three for free, which is kind of unbelievable, right? Dexterity has been, you know, worked on for a while, but it was always bottlenecked by intelligence. What OpenAI is doing on the intelligence front is genuinely making intelligence free.

And then I plan to make energy free. So we’ve got free energy, free intelligence, and we’ve got dexterity with projects like Figure and Optimus.

Labor is becoming a scalable utility – plug in, power up, and produce. 

This will certainly mean change for workers – I’m in the camp that believes we’ll do more fulfilling work and non-work with our time – and it will certainly mean surplus for consumers. If the cost of everything decreases, everyone can have more of what they need. 

We’ve talked about abundance here a bunch. I’ve seen a lot of tweets and read a lot of essays about what AI might mean for humanity. And those are the most important things, for sure. 

But my god, just putting on my tech investor hat for a minute, cheaper energy, intelligence, and dexterity are going to be an absolute boon for tech companies. 

The biggest tech companies of the next decade will be much, much bigger than the biggest tech companies today. 

Bart Simpson Chalkboard Generator

Technology is Eating Everything

A little over a decade ago, in a similar in-between kind of market to the one we’re in today, Marc Andreessen wrote Why Software is Eating the World. He wrote about Amazon eating books and Netflix eating entertainment and Google eating direct marketing. He wrote about software enabling the truly big markets, like oil & gas, defense, and retail. 

The piece was prescient. When he wrote it in the third quarter of 2011, Apple was the largest company by market cap, and Microsoft was the fifth largest, but the top 10 was largely dominated by oil companies, and the market caps were much, much smaller. 

Largest Companies by Market Cap, 2011; Wikipedia

Today, the top 10 looks like this:

Largest Companies by Market Cap, 2023; Wikipedia

Saudi Aramco should be in there at number three, but look at that. The top companies are all tech companies, and the market caps are almost an order of magnitude larger. 

Tech won. How are we going to top that? 

Software ate the world, but like the Very Hungry Caterpillar, it is still very hungry.

Adapted from The Very Hungry Caterpillar by Eric Carle

In the incredible Acquired conversation with NVIDIA CEO Jensen Huang, David Rosenthal tells a story from the early days of Sequoia to highlight that venture returns can always get bigger as technology’s market grows: 

David: The great story behind it is that when Mike [Moritz] was taking over for Don Valentine with Doug, he was sitting and looking at Sequoia’s returns. He was looking at fund three or four, I think it was four maybe that had Cisco in it. He was like, how are we ever going to top that? Don’s going to have us beat. We’re never going to beat that.

He thought about it and he realized that, well, as compute gets cheaper, and it can access more areas of the economy because it gets cheaper, and it can get adopted more widely, well then the markets that we can address should get bigger. Your argument is basically AI will do the same thing. The cycle will continue.

That’s just compute. What happens when energy, intelligence, and dexterity get cheaper? 

Later in the conversation, Jensen gives his answer, focusing only on intelligence: 

This is the extraordinary thing about technology right now. Technology is a tool and it’s only so large. What’s unique about our current circumstance today is that we’re in the manufacturing of intelligence. We’re in the manufacturing of work world. That’s AI. The world of tasks doing work—productive, generative AI work, generative intelligent work—that market size is enormous. It’s measured in trillions…

What we discovered, what Nvidia has discovered, and what some others have discovered, is that by separating ourselves from being a chip company and building on top of the chip, you’re now an AI company, and the market opportunity has grown by probably a thousand times.

Don’t be surprised if technology companies become much larger in the future because what you produce is something very different. That’s the way to think about how large can your opportunity be, how large can you be? It has everything to do with the size of the opportunity.

The size of the opportunity is growing as tech – software and hardware – accesses more areas of the economy, and eats more areas of the economy. 

What’s notable about tech’s dominance of the top companies by market cap list is that it doesn’t map very cleanly to the list of top companies by revenue. 

Largest Companies by Revenue, 2023; Wikipedia

Apple, Microsoft, Amazon, and Alphabet come in ninth, thirty-first, fourth, and eighteenth, respectively. Meta, Tesla, and NVIDIA don’t make the top 50. Their market caps are so high because they’re insanely profitable, fast-growing, or dominant within their categories. 

But other than Amazon in retail and now Tesla in automobiles, they haven’t made a dent in the world’s largest pools of money. There are a few ways to slice what those pools are. 

The US Bureau of Economic Analysis breaks down GDP by Industry:

Tech doesn’t fit fully into any one category – it’s eating the world! – but its best fit is “Information,” which includes activities like software publishing, data processing, and telecommunications. Amazon has certainly penetrated “Retail Trade,” tech workers are included in “Professional, Scientific, and Technical Services,” and computers and iPhones sit in “Manufacturing – Durable Goods,” but there’s a lot left to eat. 

IBISWorld – which deserves a caveat fontem (for example, they have Telecommunications at the top of the list because of a formatting error) – lists the top 10 global industries by revenue:

Data from IBIS World

Good data on this is surprisingly hard to find, and I was going to bang my head against the wall to pull it, but this is directionally correct and you get the point. Whichever way you slice it, tech companies haven’t won the biggest spend categories.

That’s beginning to change. Tech companies are going to get much bigger for three separate reasons, which will converge:

  1. Tech Startups are entering hard tech categories and leveraging advantages. 

  2. Energy, intelligence, and dexterity will grow what’s in tech’s addressable market. 

  3. Every market will trend towards looking like the software market. 

Tech Startups in Hard Tech 

The first reason tech companies are going to be bigger this decade than last decade is simply that they’re going after bigger buckets of spend, with or without cheaper energy, intelligence, and dexterity.

SpaceX and Tesla kicked this off and showed that it could be done. 

I think SpaceX in particular is the 4-minute mile of hard tech, because it showed that a tech company could break into an incumbent and government-dominated industry and win by offering a better product at a better price. While the launch market itself isn’t massive today, SpaceX is entering the trillion-dollar communications market with Starlink. And if the Space Economy flourishes, SpaceX is positioned to be its infrastructure layer.

SpaceX is currently the most valuable private tech company at $150 billion, and many of its alums have gone on to start and work for the next generation of hard tech companies.

Anduril is a spiritual descendent of SpaceX with a key evolution: it’s betting that by launching hardware products on top of its AI-powered software operating system, Lattice, it will be able to “deliver more capable hardware products for less money, and that its advantages will compound with each new product it plugs in.”

The company is putting its money where its mouth is, funding its own R&D and acquisitions and bidding on fixed-price contracts instead of the Department of Defense’s traditional cost-plus contracts, where the winner gets paid for all of their costs, original budget be damned, plus a fixed margin. If it keeps costs low and product quality high enough to win contracts, it reaps the rewards in the form of higher margins. 

The mega bull case for Anduril is that it will be able to eat into the more than $100 billion the US DoD spends with the five Defense Primes each year, but at much higher margins, in the 50% range instead of the 10% range. 

Higher margins mean a few things: 

  1. Higher Market Cap: If you earn higher profits on the same revenues, all else equal, your company will be worth a lot more. 

  2. More Investment in R&D: Higher profits mean more money to invest back into building and buying the next generation of products, which could help grow market share. 

With Anduril, we’re starting to see the early glimpses of what the world might look like when tech companies eat large, established markets. 

Anduril’s revenue is nowhere close to Lockheed Martin’s $66 billion today, but if everything goes just right, if the market comes to it over the next decade, the company could find itself in a position where it is earning tens of billions of dollars in revenue at something like 50% margins, for a market cap that’s multiples higher than Lockheed’s on the same revenue. Lower costs, better margins, and faster growth unlock value. The same revenue in the hands of a well-run tech company is worth multiples of what it would be in the hands of an incumbent. 

What’s particularly interesting in the context of this conversation is that the traditional Defense Primes seem to struggle to do the same. Last month, when Boeing reported its third quarter earnings, it blamed the $1.7 billion in losses in its Defense, Space, and Security Division on fixed-price contracts. “Rest assured,” the company’s CFO Brian West assured analysts, “We haven’t signed any fixed-price development contracts, nor intend to.”

That’s one of the reasons tech companies will be bigger in the next decade: if the thesis that good software combined with cheaper hardware is better than more expensive hardware alone, the financial picture of companies in big spend categories begins to shift towards looking more like software companies. Not all the way there, but more like them.   

The success of companies like SpaceX and Anduril has inspired a wave of newer startups to tackle huge, established, challenging industries. 

Of course, there are aerospace and defense startups, companies I’ve written about like Varda and Array Labs, plus dozens more credible and well-backed teams, many of which hail from SpaceX and Anduril themselves. 

But it extends beyond the obvious ones into anything hard in the physical world, including two legs of the energy, intelligence, and dexterity stool. 

Like energy. As we’ve been interviewing founders in nuclear fission and fusion for Age of Miracles, for example, it’s incredible how often SpaceX in particular has come up. My co-host, Julia DeWahl, is a SpaceX alum who recently co-founded a company, Antares, that’s working to bring nuclear power to the DoD. 

And dexterity. Tesla itself is using its resources, including its Dojo Supercomputer and AI team, to develop the Optimus Robot. Figure is developing the 01. And it’s not just America. China claims to have a plan to mass-produce humanoid robots that can “reshape the world” in the next two years. The race is on. 

Increasingly, tech startups will steal market share in the largest industries on earth, changing the financial dynamics of those industries and unlocking value in the process. 

But I think the bigger opportunities might be in converting large, non-addressable markets into addressable markets.

Expanding Tech’s TAM

In Intelligence Superabundance, I argued that AI wouldn’t replace the need for human workers, but that thanks to concepts like induced demand and Jevons Paradox, more intelligence would create demand for even more intelligence. 

Extending that argument, cheaper labor – intelligence and dexterity – will create demand for more labor. Importantly, it will make an increasing share of that labor – mental and physical – addressable. Humans earn for themselves; AI and robots earn for whoever sells them. 

According to the International Labour Organization’s 2020-2021 Global Wage Report, people around the globe earn roughly $44 trillion. Sense checking that against labor’s share of GDP, it checks out. In 2020, Global GDP was $85.2 trillion. The worldwide labor share of GDP was 53.8%. That gives us $45.8 trillion in global compensation. 
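As a quick sanity check, using only the figures cited above, the arithmetic works out:

```python
# Sense check: labor's share of GDP, using the figures cited above.
global_gdp_2020 = 85.2e12   # 2020 global GDP, in dollars
labor_share = 0.538         # worldwide labor share of GDP, 53.8%

global_compensation = global_gdp_2020 * labor_share
print(f"${global_compensation / 1e12:.1f} trillion")  # → $45.8 trillion
```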

So we’re looking at $45 trillion in annual spend that is not currently addressable by any tech company, but that only tech companies will be able to address.

(Note: This piece is not about the ethical implications or the potential dislocation that might occur. For what it’s worth, I believe that, after a transition period, people will do more fulfilling work than work that robots can replace, or find more fulfilling non-economic things to do with their time, but I also think that there will be a gap between those who adapt quickly and those who don’t. But this piece is just about the economic implications). 

I know that this sounds insane and hyperbolic. And for a while, it will be. Optimus can move some blocks around, like a baby. Figure 01 can walk, like my 1-year-old. And I need to tell ChatGPT exactly what I want done, and keep pushing it to get a result I’m happy with, like an intern. 

But AI keeps getting smarter, more capable, and cheaper. As OpenAI rolled out an ocean of new features at Dev Day, it also cut prices for those features by 2-3x. Intelligence is getting more powerful and cheaper at the same time. 

Why cut prices? In a prescient 2009 blog post on the five most interesting startup founders of the past 30 years, Y Combinator founder Paul Graham included a young Sam Altman, writing, “On questions of strategy or ambition I ask ‘What would Sama do?’”

Sama likely isn’t being charitable by cutting prices; he believes the increase in demand for intelligence will outweigh the impact of lower prices. Jevons Paradox. 
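The bet, in toy form, with purely hypothetical numbers of my own (not OpenAI’s actual pricing or volumes): a price cut pays off whenever demand grows by more than the price falls.

```python
# Toy Jevons Paradox model with made-up numbers: cutting price 3x is a
# net win if demand more than 3x's in response (elasticity > 1).
def revenue(price_per_unit, units_demanded):
    return price_per_unit * units_demanded

before = revenue(price_per_unit=0.03, units_demanded=1_000_000)
# Hypothetical response: price drops 3x, demand grows 5x
after = revenue(price_per_unit=0.01, units_demanded=5_000_000)

assert after > before  # lower prices, higher total revenue
```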

Suddenly, lawyers, accountants, and any manner of professional services providers become part of tech’s addressable market. Not in a “we can sell these big industries software” way, but in a “we can make and sell you lawyers as-a-service” way. OpenAI’s GPTs – “custom versions of ChatGPT that combine instructions, extra knowledge, and any combination of skills” – are a little preview of this world.

Robots, too, will get cheaper and more capable, especially as they’re infused with AI that’s cheaper and more capable. They’ll take time to scale, and it will be a while before they come down the experience curve to the point that they’re cheaper than human labor. Today, they don’t need to be. 

As I wrote about in my piece on Formic, America faces a huge labor shortage, with 10.1 million unfilled jobs at the time, 1.5 million of which were in manufacturing. Robots – whether they look and walk like humans or do one task really well – can fill open roles, and work round the clock, dramatically increasing the profits of the factories that employ them. 

In manufacturing, because so much of a facility’s costs are fixed: 

More capacity, as long as it’s met with demand (which, in this market, it likely will be) means higher profits. Rent stays the same. The upfront money for machines is already spent. Labor costs increase in line with demand. And materials costs increase with demand, but decrease on a per unit basis with larger orders.

To see how huge this is, let’s run some purely illustrative and probably wrong but directionally correct numbers on a factory that increases its output by 3x:

Even assuming no cost savings from robotic labor (we make up for it with higher human salaries), a 3x increase in revenue translates to a 25x increase in profit.
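Here’s one made-up set of numbers (mine, picked only to match the directional claim above) that sketches the fixed-vs-variable cost logic:

```python
# Illustrative factory P&L with invented numbers: fixed costs (rent,
# machines) stay flat while variable costs (labor, materials) scale
# with output, so profit grows far faster than revenue.
def profit(revenue, fixed_costs, variable_cost_rate):
    return revenue - fixed_costs - revenue * variable_cost_rate

before = profit(revenue=10e6, fixed_costs=5.5e6, variable_cost_rate=0.40)
after = profit(revenue=30e6, fixed_costs=5.5e6, variable_cost_rate=0.40)

print(round(after / before))  # → 25: 3x revenue, ~25x profit
```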

With those profits, manufacturers can pay higher wages, hire more people, buy more robots, and make products more cheaply. That will increase demand, and keep the flywheel spinning. Dexterity is an important step towards material abundance. 

Intelligence and dexterity, by becoming cheaper, will increase their market size and make the market addressable to the tech companies that make the intelligence and dexterity. 

Again, I don’t know if this will play out over five years, ten years, or twenty. Regulators may try to stop it in a misguided attempt to protect workers (who typically, counterintuitively, benefit from automation). A million things can and will stand between what the plan looks like on paper and how it gets implemented in reality. All of this stuff will be messy and complex. But it will happen. 

And as it does, one final thing will happen: every market will trend towards behaving like a software market. 

Every Market Will Look More Like Software

As the costs of energy, intelligence, and dexterity approach zero, the cost and speed of manipulating and distributing atoms will approach the cost and speed of manipulating and distributing bits. 

I said approach, not match, because the laws of physics stand in the way. Bits can move at the speed of light; atoms cannot. Bits can cost practically zero; atoms cannot. The cost and speed of physical things will asymptote somewhere above zero, but they’ll get much cheaper and faster than they are today. 

Turning labor into CapEx will change cost structures. It will require more upfront investment and enable lower marginal costs. 

As that happens, every industry will come to look more like the software industry. Faster growth, higher margins, more R&D. 

Here’s a relevant table from the National Science Board. It looks at R&D intensity – R&D spend as a percentage of net sales – by industry.

It once again shows how small software is compared to other industries, but it also shows how much more the industry spends on R&D. 

Software companies have an R&D intensity of 17.7%, meaning that they spend 17.7% of net sales on R&D. The only category that’s higher – Scientific R&D services at 26.3% – is the category that includes biotech, nanotechnology, advanced materials, and renewable energy, and literally has R&D in its industry name. Compare that to manufacturing broadly, at 5%, or a particular laggard, Finance and Insurance, at 0.6%. Those seem ripe.

It makes sense when you think about it. Industries whose main product is intellectual property spend more money on researching and developing that intellectual property. They spend a lot of money on research and development upfront, and reap the benefits of low marginal costs over time. Because their products are information-based and not physical, they can grow more quickly.

But what happens when the marginal costs of almost everything approach zero? 

I think the majority of SaaS companies are going to have a hard time, but SaaS is a great business model if you can protect it. I understand why investors like it so much and want to hold on. 

The real big opportunity here, though, is that large swaths of the economy might come to look like SaaS models, but with deeper moats – higher switching costs, meaningful economies of scale, and even network effects as the most used AI and robots get smarter and more skilled with more usage of any member of their fleet. 

This is kind of happening already, in a very early form, with robotaxis. While they’re still money-burning endeavors, the model is beginning to emerge. Instead of every driver having a car that they operate for eight, maybe 12 hours a day, burning their own time for money, robotaxis will be able to run practically non-stop, save for charging time, running on near-free and ever-cheaper machine labor. 


Waymos are electric vehicles powered by renewables, which can be super, super cheap if you charge at the right time. So Waymo invests in cars, the models needed to run them, and the data needed to train them, then powers up with cheap renewables, and can run practically non-stop. Currently, it needs to burn a lot of car time on data collection, but that might change, too. The end result is that they’ve pushed as much driving OpEx as possible into CapEx. At scale, each ride will be so cheap – approaching the same cost for the rider as a ride on public transport – that companies like Waymo might charge a subscription instead of per-ride to lock in customers and generate predictable cashflows. 
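The shift is easiest to see in per-ride cost terms. A rough, entirely hypothetical model (every number below is an assumption of mine, not Waymo’s actual economics): the driver’s wage, the biggest per-ride cost, disappears, and the remaining fixed costs get spread across many more hours on the road.

```python
# Hypothetical per-ride economics: a robotaxi trades the driver's wage
# (pure OpEx) for higher amortized CapEx spread across near-constant
# utilization. All numbers are invented for illustration.
def cost_per_ride(daily_capex, daily_energy, daily_wages,
                  hours_active, rides_per_hour):
    rides = hours_active * rides_per_hour
    return (daily_capex + daily_energy + daily_wages) / rides

human_driven = cost_per_ride(daily_capex=50, daily_energy=15,
                             daily_wages=200, hours_active=10,
                             rides_per_hour=2)
robotaxi = cost_per_ride(daily_capex=80, daily_energy=10,
                         daily_wages=0, hours_active=22,
                         rides_per_hour=2)

assert robotaxi < human_driven  # wages gone, fixed costs spread thinner
```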

Plus, according to a study Waymo did with reinsurance giant Swiss Re, their cars are already 4x safer than human drivers, and getting safer as the cars get smarter. You’d imagine that will show up in lower insurance rates, making the model even more profitable at scale, giving the company more money to invest in R&D and fleet expansion, and accelerating tech’s devouring of a big portion of transportation demand. 

As energy, intelligence, and dexterity get cheaper, more abundant, and more on-demand, we’ll see this same thing play out in other sectors of the economy. More and more industries will come to look like software. 

Of course, this will not be a smooth transition and it will not happen overnight in any industry, as the pushback against robotaxis shows. 

A bunch of chatbots and robots won’t be able to replicate everything that large manufacturers can do overnight. There are huge CapEx investments and know-how that would be difficult to replicate. Energy startups will take decades to displace the fossil fuel giants, if the fossil fuel giants don’t leverage their scale and regulatory skill to make the investment in clean energy first. The whole point of Age of Miracles is that these big industry transitions are much harder and more complex than they look on paper. 

And in some cases, large incumbents will be the ones to invest in AI and robots, to lower their costs, increase their output, increase demand, and improve margins. 

In those cases, the companies that make the AI and the robots – or whichever flavor of intelligent machine they need for the task at hand, just as humans do some things and machines do others today – will become very valuable indeed. Those will be tech companies, and they will be big. And as they scale production to meet demand, they’ll bring their products down the learning curve, making the machines that make the things cheaper and cheaper, which will make them more widely accessible, and make all of the things cheaper and cheaper. 

Then, when you can, for all intents and purposes, speak a product into existence, what becomes valuable is the IP. Reggie James put it well:

More spend will shift from production to R&D, across all industries. Everything, even things that deal in atoms, will look more like software companies. 

Lower prices, more demand, higher margins. In bigger markets than technology touches deeply today, with, I think, benefits to scale that come from having swarms of smart machines that can learn and teach each other the more they experience. Hardware experience curves at near-software speeds. 

While you can still paint a strong bull case for tech even if its AI and robots simply power incumbents to improve their performance, my suspicion is that companies built from the ground up for this new paradigm will ultimately win, as Anduril and SpaceX are showing. 

If every company behaves more like a software company, then startups should be able to leverage modern technology and tech company nimbleness to build structurally better businesses than incumbents in very large categories. And that’s before the costs of energy, intelligence, and dexterity get close to zero. 

What’s It All Mean?


The economy will get larger and structurally more profitable as the prices of energy, intelligence, and dexterity trend to zero. 

If incumbents maintain their position, there will be massive opportunities for tech companies to sell plug-and-play labor and intelligence to them, essentially productizing, centralizing, on-demanding, and SaaSifying what is now a distributed, inconsistent product in human labor. 

Early evidence in industries as varied as auto and defense suggests that many incumbents will not maintain their position in a sufficiently dramatic shakeup. I think the transition could be akin to what Spotify did to the music industry – growing the market, but killing CDs.

If that’s the case, tech companies will both provide the infrastructure – intelligence and dexterity, and potentially energy – and build the products that serve end customers in massive markets that have been largely impervious to startups, like manufacturing and defense. 

There’s always the major risk – certainty, actually – that regulatory capture staves off some of the benefits – and the disruption – for longer than we’d imagine in certain industries. 

Visual Capitalist

But when people start heading to Mexico for cheaper robot medical services in droves, that will start to break down. 

The long and short of it is, tech companies will tap into larger and growing pools of revenue at lower costs and higher margins than they’ve been able to achieve to date. Everything will be cheaper, and they’ll sell much more of it. They’ll reinvest profits into R&D, and turn the fruits of that R&D into newer, cheaper, better products at an accelerating rate. 

The economy will become much bigger and structurally more profitable. Companies will be able to hire more people and machines, and consumers will be able to acquire most anything more cheaply, maybe as-a-service. 

There are some big pools we haven’t touched on. Real estate will be tough thanks to zoning, regulation, NIMBYs, and the industry’s capital structure (writing this piece has made me want to buy land). Finance and insurance spend the least on R&D – 0.9% – but they have large customer deposits and cost of capital advantages. I do think crypto becomes an even more formidable challenger in a world where intelligent machines play a larger and larger role. 

Anyway, tech companies will get much, much bigger than they are today.

I think this framing can help explain a lot of what we’re seeing out there. 

NASDAQ is heading back in the direction of all-time highs and crypto is coming back even as rates remain high and software companies’ forward estimates come down.

In venture, there’s a bifurcation: a lot of software companies with good numbers are struggling to raise while foundation model companies, hard tech companies, and even nuclear companies are hoovering up investor dollars. 

One way of framing this is that any company that is convex to this trend we’re discussing will be able to raise at what seem like crazy valuations, and any company that isn’t, or worse, is at risk from it, will struggle to find investor interest at any price. 

There’s an intense battle being waged between those who want to regulate AI in a way that would benefit its current leaders and those who want to keep it open, as exemplified by President Biden’s Executive Order and the open letter from technologists and investors fighting for open source. This debate has been shrouded in language around safety and extinction risk, but when you think about the opportunity to make a $45 trillion market addressable by a small number of companies, safetyists begin to look like useful idiots in a bigger game. 

Tech companies as a group will get much, much bigger than they are today. Take that to the bank. 

The question is: how many tech companies? Will it be a small handful of 10-trillion-plus-dollar companies, or a larger number of mere trillion-dollar companies? 

That’s to be determined. The trend is clear, but the details are up to how everyone plays it. 

Whatever the case, you need to think way bigger to win in a world where intelligence and labor are being commoditized. Take your ambition and multiply it by 100. Craft a Grand Strategy. We’ll talk about that next time.

Thanks to Dan and the new ChatGPT with 128k context window for editing!

That’s all for today. We’ll be back in your inbox with the Weekly Dose on Friday!

Thanks for reading,


Age of Miracles

Hi friends 👋,

Happy Friday and welcome back to our 66th Weekly Dose of Optimism.

Packy here for a Weekly Dose first: an essay takeover.

First things first, I’m launching a new podcast today, Age of Miracles, along with Julia DeWahl and the incredible Turpentine team – Nancy Xu, Amelia Salyers, Tom Hollands, Justin Golden, Jake Salyers, and Erik Torenberg.

It’s more than a dude with a mic talking to other dudes, and we’ll get to what it is and why we’re doing it in a second, but first, my producers (💁🏻‍♂️) will kill me if I don’t beg you to listen, like, subscribe, and share.

Today, we’re releasing our first two episodes. You can find them on Apple, Spotify, YouTube, and wherever you listen to podcasts:

We want this to spread outside of tech, so if you like it, I’d really appreciate it if you shared it with a few friends who you think might, too.

Don’t worry: we’re not going to short you on the optimism. Dan dropped five stories at the bottom, and it was a good week for the optimists. My bet is, the weeks are going to keep getting better if we fight to make it happen.

Let’s get to it.

Today’s Not Boring (and Age of Miracles) is Sponsored by… Tegus

Imagine a day where every minute of your research is well spent. Where information gives you the freedom to be bold. Tegus is research without all the tedious hunting, calling, parsing, and pasting. You get access to expert calls, custom financial models, and public filing data – all in one place. The time for a more powerful perspective is here. Do what you do best, even better.

Trial Tegus for free today

Age of Miracles

If you’re reading this, you have the chance to live in an Age of Miracles. Of the 100 billion people who have walked the earth, we’re the lucky ones who get to experience the best part. But it’s going to take some work. 

I want you to close your eyes and imagine with me. Metaphorically, of course. We don’t have the technology to read with our eyes closed… yet. 

OK, ready? 

The year is 2073. It’s a cool spring day, light breeze, 73 degrees. The world hasn’t ended. In fact, it’s better than ever. You’re eighty years old, but you feel like you did in high school, better even. All the energy of youth with the wisdom of age. 

It’s a Tuesday. You have a light work day, so light that your AIs can handle everything. You’re free to play. They’ll ping you if they need your input. 

You step out onto your street, a street once filled with parked cars, and see only walkways, fountains, and trees. You look up and see tall architectural wonders all around you, facades as breathtaking as Sagrada Familia hiding the spacious homes within. 

The World if everyone listened to Age of Miracles

Your friends invite you to join them on their hypersonic for lunch in Marrakech, but you decline. Your kids are free and asked if you’d do lunch with them and your grandkids at a spot nearby. 

You stroll for a while, lost in thought, Neuralink silenced, just enjoying the day. You feel guilty, for a second, that your life is so good – old habit – before remembering that most people’s lives are pretty great, too. 11 billion people on the planet, ten million more on the Moon, another two million brave souls on Mars, and millions more bopping around somewhere in orbit, and there’s enough for everyone. 

It was touch and go there, for a while, but between the desalination, the Von Neumann machines, the terraforming, the flying cars, the MeMeds, and the countless marvels since, everyone has all of the space and things they need. 

When you arrive, almost accidentally, at the restaurant, your daughter knocks you out of your reverie with a hug. Your grandkids seal the deal with squeals. You step inside and grab a table. In the place of menus, you simply look around to see which ingredients the restaurant is growing today and ask the server for whatever you wish, prepared however you wish. The kitchen staff, some bots and some people who just love to cook, whip it up and serve it hot. (Bots handle all the dishwashing; no human likes doing dishes.) 

While you wait for your food, you joke with your grandkids and plan the upcoming Tokyo family trip with your son. Then you sit back and smile so cheesily that your kids give you that look. 

Let them. You’re happy. You are living in an Age of Miracles.

OK, you can open your eyes. Welcome to Age of Miracles.

The world of 2073 I just described is admittedly uber-utopian. But I believe three things: 

  1. Getting to that world is entirely possible in our lifetimes.

  2. Getting there is going to be hard, messy, and complex. 

  3. Narrative Matters. 

That’s why we’re launching Age of Miracles, a narrative podcast where each season, we go into all the nitty-gritty details about one sci-fi-sounding industry to understand what it will take to go from idea to innovation to implementation to impact. The first season, with two episodes out today, is about nuclear energy, both fission and fusion.

But first, let’s take each of those three beliefs in turn. 

An Age of Miracles is Entirely Possible in Our Lifetimes

Whether the world in 2073 looks exactly like the one I described or not, I think it’s going to be a better world for more people. The march of history suggests that things improve over time. 

Our World in Data 

The coolest part of my job is that I get to glimpse the future by seeing what entrepreneurs are working on today. 

And I bring good news from the future: most of the futuristic technology I described is being developed as we speak. Knowledge about it just isn’t evenly distributed. 

Chances are, I’m dramatically undershooting what 2073 will look like because I can’t hold the ideas of millions of current and future entrepreneurs in my brain. Technology is exponential; it compounds. The wildest technology that researchers and entrepreneurs are working on today will be inputs to even wilder technology in the future. 

I do know that energy is going to be an important input. With it, we can power the growing demand for AI. We can electrify a robotic labor force that makes everything cheaper while freeing us up to pursue jobs that seem as luxurious and silly to us as “newsletter writer” would have seemed to someone in 1973. We can travel more quickly, cheaply, and easily around the earth, and out into the solar system. We can pull CO2 out of the sky, desalinate water, and breathe life into barren landscapes. We can reduce poverty, extend healthy lives, and lower the incentives for conflict. 

Will there be problems? Of course! We’re humans. We’ll still fight. We’ll still compete for status. We’ll still hurt each other. We’ll still want more. We’ll never be satisfied. But all of that humanness and desire will propel us ever forward and impel us to keep improving. 

Forward momentum itself is a good thing – read The New Yorker’s excellent China’s Age of Malaise to see what happens when it disappears. An Age of Miracles won’t see us listlessly popping soma a la Huxley or slovenly guzzling slurpees in hover chairs a la Wall-E. An Age of Miracles isn’t an end; it’s the means to explore even further. A culture that’s satisfied with stagnation or degrowth is a dead culture. Progress imbues us with energy and vitality, as individual people and as a civilization, which is, at the end of the day, a group of billions of individual people. 

If you focus on the headlines and the culture wars, if you listen too closely to the pessimists (who sound smarter than they really are), it’s easy to believe that things are getting worse. 

They’re not. If you zoom out, things get better. And they can keep getting better. We can live in an Age of Miracles. 

But it’s not guaranteed.

Getting There is Going to be Hard, Messy, and Complex

Developing the technology is the easy part. 

That’s easy for me to say as a non-technical person who gets to simply write about the results of all of the hard work of millions of brilliant technologists, but in the fullness of time, it’s true. Technology has an inevitability to it. At some point in the future, we’ll have Dyson Spheres and Matrioshka Brains and all sorts of things pulled straight from the pages of science fiction… if we don’t get in our own way.  

Getting sci-fi technology to work is just the first leg in a long relay race that involves governments, construction, supply chains, regulators, advocates, investors, entrepreneurs, and opponents.  

My co-host, Julia DeWahl, is a force of nature, someone who knows how to build hard things and how to explain nuclear energy: early Opendoor, Starlink at SpaceX, co-founder of nuclear startup Antares, and a nuclear advocate.  

We teamed up to do the first season of Age of Miracles on nuclear energy because energy is at the root of everything, but also because it proves that building a technology doesn’t guarantee its ubiquity or impact. 

Nuclear energy is a miracle technology. If we discovered it today, there would be parades in the streets. By splitting heavy atoms apart, we can release enough clean energy to power cities for decades. It’s energy dense: one kilogram of uranium-235 can yield about as much energy as 1,500 – 3,000 tons of coal, around 2 million gallons of gasoline, or roughly 50,000 barrels of oil. It’s reliable baseload power, the good stuff. It’s a boon for energy independence: it doesn’t require imports of fossil fuels, and it doesn’t require the sun to shine or the wind to blow. And it’s safe! Three orders of magnitude safer than coal.
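The coal comparison is easy to sanity-check with back-of-envelope arithmetic. A minimal sketch, using standard physical constants that are my own approximations rather than figures from the essay:

```python
# Back-of-envelope check on the U-235 vs. coal comparison.
# Assumes complete fission of the fuel – real reactors burn less
# than 100%, which is part of why the quoted range starts lower.

AVOGADRO = 6.022e23          # atoms per mole
U235_MOLAR_MASS_G = 235.0    # grams per mole
MEV_PER_FISSION = 200.0      # ~200 MeV released per U-235 fission
JOULES_PER_MEV = 1.602e-13   # joules per MeV
COAL_MJ_PER_KG = 29.0        # hard coal: roughly 29 MJ per kg

atoms_per_kg = AVOGADRO * 1000.0 / U235_MOLAR_MASS_G
joules_per_kg_u235 = atoms_per_kg * MEV_PER_FISSION * JOULES_PER_MEV

coal_kg_equivalent = joules_per_kg_u235 / (COAL_MJ_PER_KG * 1e6)
coal_tons_equivalent = coal_kg_equivalent / 1000.0

print(f"1 kg of U-235 ≈ {joules_per_kg_u235:.2e} J")
print(f"≈ {coal_tons_equivalent:,.0f} metric tons of coal")
```

That lands around 2,800 metric tons, comfortably inside the 1,500 – 3,000 ton range quoted above.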

And yet, nuclear has stalled. Nuclear capacity flatlined in the United States in the 1980s. 

U.S. Energy Information Administration

A lot of factors contributed to nuclear’s stagnation, and we go deep on them in Episode 2. Environmentalists, overregulation, the oil crisis, construction delays, cost overruns, disaster fears, and stalling energy demand all played a role. The sad fact is, some of nuclear’s most vocal opponents knew that nuclear could provide cheap, clean, abundant energy, and they opposed it for those reasons. Paul Ehrlich, the author of The Population Bomb, said it explicitly: “Giving society cheap, clean, and abundant energy would be the equivalent of giving an idiot child a machine gun.”

The degrowth movement isn’t an invisible boogeyman; it’s a real and powerful enemy that needs to be defeated if we want to live in an Age of Miracles. 

Nuclear’s history is a warning. AI Doomers are running the same playbook as we speak. They’re selling fear, and fear sells. The press fuels it, people eat it up, and before you know it, a non-event like Three Mile Island becomes an excuse to block progress for the sake of the people. 

We need to learn our history so that we don’t repeat our mistakes. We techno-optimists must drown out amorphous fear mongering with facts, proof, and hopefulness. 

And then, of course, we need to do whatever it takes to build. Fortunately, as the world demands more clean, reliable, independent energy, we have a shot at applying the lessons to nuclear itself.

Today, there’s a nuclear renaissance underway. But bringing more nuclear capacity online isn’t as simple as recognizing that nuclear is good, actually, or demanding more nuclear. It will require a hard, messy, complex process of unwinding the damage that has been done and the regulations that have ratcheted ever upward. It will require reanimating an atrophied nuclear workforce. And most importantly, it will require building nuclear capacity in a way that’s cost competitive with other energy sources and that doesn’t put the utilities that order reactors out of business. 

That’s the stuff we explore in the first half of Season 1 of Age of Miracles. Figuring out how to scale new technologies and enable them to compete in the free market is the crux of creating an Age of Miracles. It’s true in nuclear, although a little extra hard because it has to fight through fifty years of opposition, and it’s true of all of the other technologies we’ll explore in future seasons, from biotech to crypto to advanced manufacturing and beyond. 

It will certainly be true in fusion, which we’ll explore in the second half of this season. Harnessing the power of the sun for use on earth will be a scientific miracle when it happens, but that will be the starting point. From there, we’ll need to figure out clear regulation, reactor manufacturing, commercialization, and all of the complexities that go into growing a nascent industry that threatens the largest companies in the world. 

If a new technology is sufficiently impactful, the battle with degrowthers and incumbents will be intense, and we need to understand exactly what it will take to win. 

Narrative Matters

Look, I’ll be the first to admit: the world does not need another dude with a podcast. 

Of all of the things that will contribute to bringing about an Age of Miracles, researchers, entrepreneurs, engineers, competent government officials, policymakers, and manufacturers are at the top of the list. 

But you gotta work with what you got. My comparative advantage, where I can contribute the most, is explaining what all of those people are doing, why it’s important, and where they could use our help. So I’m going to do that as well as I can, in this newsletter and on Age of Miracles, with the help of experts like Julia and our heavy-hitting lineup of guest experts and practitioners. 

I’m biased, but I believe narrative really matters. Losing the narrative war in the 1970s is a big part of what got nuclear in this mess in the first place. The other side fights with narrative, and we need to fight back. As Reggie James put it in The New Technologist Manifesto:

12. We believe that there is no progress where there is no narrative. To us, narrative forms the other side of the progress double-helix with technology. The new technologist is either an open wielder of narrative themselves, or works closely with / deeply values narrative construction as a critical function of progress.

13. We live in an environment of narrative warfare. If we don’t shape the narrative, our enemy will. We do this to serve the progress of beneficial outcomes, not as a status measure.

While the builders build, we can provide narrative air cover. 

That doesn’t mean propaganda and it doesn’t mean glossing over challenges. Yelling “MORE NUCLEAR” isn’t helpful. 

It means addressing the opportunities and the challenges, and honestly assessing the benefits and the costs. It means respecting the nuance and appreciating that a lot of very smart people have already poured their lives into addressing this; if it were easy, they would have gotten it done already. It means understanding that nuclear can’t win on vibes alone, that it has to win by being cost competitive, safe, and predictable, and assessing all of the paths that might get us there. And it even means letting people who think other energy sources offer a better solution tell us why they think we’re wrong. 

That’s why we decided to do this as a podcast instead of a newsletter. A podcast lets us hear directly from the people on the ground, in their own words and with their own enthusiasm. Over the past few months, we’ve recorded dozens of interviews, and through the end of the season, we’ll record dozens more. We can get deeper in ten podcast episodes than I can in even the longest Not Boring essay, and I think that depth is appropriate for important, complex industries. 

Making a narrative podcast is hard. It’s a lot harder, and a lot more work, than I realized when I set out to do this. But it’s worth it. I’ve learned a ton talking to so many smart people, and we’ve packaged those conversations up into an information-dense primer that we hope can be the best starting point for anyone looking to understand nuclear energy – its history, present, and future. 

Nuclear energy is in the zeitgeist right now. We need a lot of energy to power everything we want to do, and nuclear (both fission and fusion) will be key. Germany’s decision to shut down nuclear plants and turn on coal forced the world to rethink the technology. Microsoft is looking to use small modular reactors to power its data centers. Fission and fusion companies have attracted billions of dollars in VC funding. The Lawrence Livermore Lab took the world by storm when it achieved fusion ignition in late 2022.

The energy transition – or ELECTRONAISSANCE – means that trillions of dollars are up for grabs, and cheap, abundant energy will unlock trillions more and usher in the Age of Miracles. 

I hope that Age of Miracles inspires you to go address some of the biggest challenges in whatever way you can. It should expose some of the degrowth playbook so you can recognize it and fight it when you see it in your own industry. And if nothing else, I hope that it provides a little dose of realistic optimism and a renewed faith in the smart, pragmatic, hard-working people dedicating their lives to building companies and systems that can improve the lives of billions. 

Not Boring’s mission, and my personal mission, is to make the world more optimistic. It’s not just about feeling good. Optimism shapes reality, as does pessimism. If we want to live in an Age of Miracles, we need to believe that it can happen and understand how to make it happen. We need to flip the narrative from degrowth to growth. 

That shift is underway. You can see it in the e/acc movement and Pirate Wires and Marc Andreessen’s Techno-Optimist Manifesto. People are waking up to the fact that to stop growing is to doom humanity to a meager, violent, zero-sum future. In tech, acceleration is becoming the default stance. 

But to clear the path to an Age of Miracles, that optimism needs to spread outside of tech. Nuclear is a prime example of the fact that those who oppose progress are willing to bend the truth to push their agenda. The problem is, they’re consistently wrong. Studying nuclear is a nuke-pill that fundamentally alters how you see the growth vs. degrowth debate. 

When you dig in, you realize that this isn’t a debate between two sides with equally legitimate opinions. Nuclear is three orders of magnitude safer than coal. We didn’t run out of food with more humans; humans came up with the Haber-Bosch Process. When you build more housing, more people can afford housing. Free market capitalism wins across practically every measure worth measuring.

Technology is good. Capitalism is good. Growth is good. And growth requires a lot of energy, from electrons and from humans. 

Age of Miracles is just a podcast, but I hope it can play a part in shifting the conversation. Share it with your friends outside of tech. Spread the facts and optimism to counter the baseless doom. 

We can live in an Age of Miracles, but we have our work cut out for us. 

Weekly Dose of Optimism #66

(1) First malaria vaccine slashes early childhood mortality

Meredith Wadman for Science

In a major analysis in Africa, the first vaccine approved to fight malaria cut deaths among young children by 13% over nearly 4 years, the World Health Organization (WHO) reported last week.

(2) ‘Mind-blowing’ IBM chip speeds up AI

Davide Castelvecchi for Nature

A brain-inspired computer chip that could supercharge artificial intelligence (AI) by working faster with much less power has been developed by researchers at IBM in San Jose, California. Their massive NorthPole processor chip eliminates the need to frequently access external memory, and so performs tasks such as image recognition faster than existing architectures do — while consuming vastly less power.

(3) Personal aviation is about to get interesting

By Eli Dourado

Modernization of Special Airworthiness Certification (MOSAIC) is the name of the new rulemaking initiative from FAA to expand the definition of light-sport aircraft. As discussed above, the explicit goal here is to make the new category so attractive that many recreational pilots switch from more dangerous experimental amateur-built aircraft to light-sport aircraft that are designed according to consensus standards.

(4) Global Education

By Hannah Ritchie, Veronika Samborska, Natasha Ahuja, Esteban Ortiz-Ospina and Max Roser for Our World in Data

In the early 1800s, fewer than 1 in 5 adults had some basic education. Education was a luxury; in all places, it was only available to a small elite.

But you can see that this share has grown dramatically, such that this ratio is now reversed. Less than 1 in 5 adults has not received any formal education.

This is reflected in literacy data, too: 200 years ago, very few could read and write. Now most adults have basic literacy skills.

(5) Lululemon’s Founder Is Racing to Cure the Rare Disease Destroying His Muscles

Ari Altstedter for Bloomberg

For decades, Chip Wilson has been facing a form of muscular dystrophy. Now the notoriously unfiltered billionaire is spending $100 million to cure it.

That’s all for today. We’ll be back in your inbox on Tuesday. Have a great weekend!

Thanks for reading (and listening),

Packy and Dan



Today’s Not Boring is brought to you by… Plaid

You may know Plaid as the company that easily lets you connect and authorize your bank to your favorite finance apps. When that original product dropped back in 2013, it was like magic and unleashed a whole wave of fintech innovation – think Venmo, Chime, and SoFi. In the decade since, Plaid has established itself as one of the leading fintech infrastructure companies and has rolled out a whole suite of products that allow businesses to get the most out of their financial data – including payments, identity verification, and credit.

The company also has a unique vantage point on the fintech industry. It partners with both cutting-edge startups and legacy incumbents, has experienced the ups and downs of the general macro environment over the last few years, and, as a financial data network, has an inside view of what’s working in fintech today.

Each year, the company publishes its Fintech Spotlight Report. This year’s report, titled “A Focus on Profitability,” covers three main topics: improving profitability, crypto services, and lending. Whether you’re a fintech, lender, banker, investor, or just an interested analyst, you’ll find valuable insights inside the report, which they are making free to download for all Not Boring readers.

Get the Free Report


What if we’re thinking about risk backwards? 

Over the weekend, a paper in The Journal of Pediatrics titled, Decline in Independent Activity as a Cause of Decline in Children’s Mental Well-being: Summary of the Evidence swept the internet. 

The authors argue that “a primary cause of the rise in mental disorders is a decline over decades in opportunities for children and teens to play, roam, and engage in other activities independent of direct oversight and control by adults.” 

Right on the first page, they write (emphasis mine): 

Over time, however, beginning in the 1960s and accelerating in the 1980s, the implicit understanding shifted from that of children as competent, responsible, and resilient to the opposite, as advice focused increasingly on children’s needs for supervision and protection. Rutherford noted, as have other reviewers, that in some respects—such as freedom to choose what they wear or eat—children have gained autonomy over the decades. What has declined specifically is children’s freedom to engage in activities that involve some degree of risk and personal responsibility away from adults.

My thesis is that it’s not just the kids. 

Replace “children” with “people” and the paragraph works just as well! 

Over the same time period, beginning in the 1960s and accelerating in the 1980s, the implicit understanding shifted from that of Americans as competent, responsible, and resilient to the opposite, as advice focused increasingly on Americans’ needs for supervision and protection.

As a quick pulse check, I asked ChatGPT to give me a rough estimate of America’s attitude towards risk every decade since our founding: 


This is an admittedly unscientific approach, but I found it interesting. ChatGPT has no horse in this race, and the dropoff from the 1960s to the 1970s lines up with the child-raising paper and the rise in regulation. Something changed in the 1970s (cue: wtfhappenedin1971?). 

If you deconstruct the larger trendline into two – one from the 1770s – 1960s and the other from the 1960s – 2020s – that change jumps off the page. 

We stopped embracing risk and started trying to eliminate it. We went from riskophilia to riskophobia. 

The 1970s are a fascinating decade for people who think about progress. Tyler Cowen calls the period since 1971 The Great Stagnation, a half-century slowdown in productivity and innovation. In WTF Happened in 2023?!, I wrote about that year: “You might have picked up a shift from optimism to pessimism, which trickled into policies, corporate decisions, and then, years later, into the data.”

We can get more specific here. The 1970s saw the rise of safety culture with the creation of the Nuclear Regulatory Commission (NRC), Environmental Protection Agency (EPA), and Occupational Safety and Health Administration (OSHA), along with the passage of the Clean Air Act, the Occupational Safety and Health Act, and the Consumer Product Safety Act. 

J. Storrs Hall, Where is My Flying Car?

Inspired by the German environmental movement, which called it “Vorsorgeprinzip,” or foresight principle, the 1970s also birthed the precautionary principle. In simple terms, it means “better safe than sorry.” In less simple terms, the precautionary principle suggests that if an action or policy has the potential to cause harm, in the absence of scientific consensus, the burden of proof falls on those advocating for the action or policy, not those opposing it. 

It’s guilty until proven innocent for new things, but worse, because it forestalls the opportunity for a trial in the first place. 

The basic problem with the precautionary principle is that, in its strong form, it’s like running a cost-benefit analysis without the benefit column. A single “X” in the cost column is enough to generate a “no” decision. 

More bluntly, it doesn’t work. In a 2005 paper, The Precautionary Principle as a Basis for Decision Making, Harvard legal scholar Cass Sunstein argues “that the precautionary principle does not help individuals or nations make difficult choices in a non-arbitrary way. Taken seriously, it can be paralyzing, providing no direction at all.” 

Worse, attempting to eliminate legible risks just creates new ones. “It turns out,” he writes, “that the danger that regulation will create new or different risk profiles is the rule, not the exception.”

The canonical example here is nuclear energy. The NRC literally uses a principle that demands that radiation be “As Low As Reasonably Achievable,” an asymptotically impossible goal. Make it safer, then safer, then safer, until you don’t make it at all. Regulation has contributed to making nuclear so expensive that we effectively don’t build it. By minimizing the 0.03 deaths per terawatt hour of electricity produced, we increase the risk of deaths from other fuel sources, the impacts of climate change, and dependence on foreign energy. And we forestall an energy abundant future and its untold benefits. We lower the ceiling, and the floor. 
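For scale, that 0.03 deaths per terawatt hour can be put side by side with coal. A minimal sketch; the coal figure (~24.6 deaths/TWh) is Our World in Data’s commonly cited estimate, an assumption on my part rather than a number from the essay:

```python
# Mortality per unit of electricity generated.
# Nuclear figure (0.03 deaths/TWh) is from the essay; the coal figure
# (~24.6 deaths/TWh) is Our World in Data's estimate – an assumption here.
NUCLEAR_DEATHS_PER_TWH = 0.03
COAL_DEATHS_PER_TWH = 24.6

ratio = COAL_DEATHS_PER_TWH / NUCLEAR_DEATHS_PER_TWH
print(f"Coal is roughly {ratio:.0f}x deadlier per TWh than nuclear")
```

That’s a factor of roughly 820 – right at the “three orders of magnitude” cited elsewhere in this essay.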

We trade freedom not for safety, but for the illusion of safety. 

Illusion, because: you can’t eliminate risk. 

Whether we’re talking about parenting or regulation, the idea that you can is comforting. It provides a sense of control. Everyone wants their kids to be safe. But risks don’t operate in a vacuum. By trying to limit known but potentially small risks, you open yourself up to bigger but less legible risks elsewhere. 

It’s like squeezing a water balloon. Push on one side, the water moves to the other. Push too hard, and the balloon explodes. 

More dramatic than I was going for, but you get it; giphy

Embracing risk can make you antifragile by building resilience and adaptability; running from it can make you fragile by removing opportunities to build the same. 

That’s what the parenting paper argues, and I argue that the same thing is true for adults, companies, and even nations. 

When World War II struck, America sprang to action, transforming itself into the factory that won the war. Today, as Noah Smith writes, “We’re not ready for the big one.” Our shell-making capacity is about 3% of what it was thirty years ago, and China has a 200:1 shipbuilding advantage over the US. 

I hope to god we avoid World War III, but the fact that it’s even on the table – that war is happening around the world as we speak – should serve as a wake-up call that the world is full of risk, and that by trying to avoid it, we allow it to bloom.

There were a million different reasons that 1971 was a turning point from growth to stagnation: oil shocks, the shift from manufacturing to services, declining energy usage, the abandonment of the gold standard, increased regulation. But I think it runs deeper than that, to the cultural core of the country, which is why the same pattern shows up in the economy and in the way parents raise their kids. 

The shift from optimism to pessimism, from growth to degrowth, from hope to doom, and from riskophilia to riskophobia was the result of a cultural movement, and we need a cultural movement in the other direction to get our swagger back. 

That might seem naive. There are those who believe that our decline is a fait accompli. I’m not one of those people. It’s why I have been banging the drum on optimism and telling the stories of the people and companies that do embrace risk to try to fix the biggest problems they can tackle. A movement got us into this mess, and a movement can get us out of it. 

I’ve been wanting to write a piece on America’s attitude towards risk for a while. The good news is, the tide seems to be shifting already. 

A couple of weeks ago, the All-In podcast’s David Friedberg, who’s had to fight through the precautionary principle as an entrepreneur, went in on the “total lack of assumption of risk generally in the US which limits progress in meaningful ways”:

Watch it. 

And yesterday, a16z co-founder Marc Andreessen wrote The Techno-Optimist Manifesto, offering an aggressively optimistic perspective and going to town on the precautionary principle: 

Our enemy is the Precautionary Principle, which would have prevented virtually all progress since man first harnessed fire. The Precautionary Principle was invented to prevent the large-scale deployment of civilian nuclear power, perhaps the most catastrophic mistake in Western society in my lifetime. The Precautionary Principle continues to inflict enormous unnecessary suffering on our world today. It is deeply immoral, and we must jettison it with extreme prejudice.

Read the whole thing if you haven’t. 

I don’t need to re-litigate the points Friedberg and Marc made. I agree with them. And if they sound a bit extreme, it’s because movements aren’t started by tepidly considering every point and counterpoint. The culture shifts through a great Overton Window tug-of-war. 

If the precautionary principle that has underpinned the last half-century of riskophobia in America has pushed safety by erasing the benefit column, then the appropriate counterbalance is to make a case for erasing the cost column. 

If that case lands, we’ll pull hard enough that we end up somewhere in the middle, taking Cass Sunstein’s recommendation that we throw out the precautionary principle in favor of a radical new framework: cost-benefit analysis. 

Kids should ride bikes with their friends, but they should wear a helmet. They should roam freely, but they shouldn’t go to bars. We should embrace nuclear energy, but we should build containment domes. We should protect the environment, but we shouldn’t let every bobwhite quail egg stand in the way of a better future for humans. Every decision has costs, and benefits. 

The movement afoot isn’t against each other or against the government. America has worked best when its government and people were pulling in the same direction: forward. 

The movement is against the idea of risk minimization as a goal. It’s against the culture of safetyism that we’ve all allowed to seep into every corner of our lives. It’s for the more vibrant, alive, and vigorous world we can build when we embrace risk with eyes wide open. 

This movement is about much more than tech. The parenting paper helped me realize that the issue runs deeper, and that the solutions must too. From our homes to our startups to our government, we need to take responsibility for our actions and outcomes. 

I have good news and bad news: we are not children. 

There is no parent to protect us, and even if there were, it might do more harm than good. So drop the illusion. Embrace risk. Roam free. Live vigorously. 

The reason we’ve made so much progress over the past half-century despite our decelerating risk appetite is that a brave few have chosen to take risk on and win. I write about the entrepreneurs among them, but they’re everywhere. Someone probably pops to mind for you. 

Imagine what the world could be if more of us joined them. We are competent, responsible, and resilient. We have the freedom to engage in activities that involve some degree of risk and personal responsibility. Embrace it. Join the movement. 

My hunch is that the government will follow. They’ve just been giving us the sense of protection we’ve demanded. Let’s demand something better. Vox populi, vox Dei. 

I refuse to believe that America’s best days are behind it. Decline is not inevitable. But it takes more than tech. It’s up to all of us to cut through the cruft, get back to the sense of freedom and adventure that makes the country great, and build from there. 

Go do that risky thing you’ve been putting off. Now. Build the muscle. Spread the word: we’re going risk-on.

The only way out is up, even if it’s a little risky up there.

Thanks to Dan for editing!

That’s all for today. We’ll be back in your inbox with the Weekly Dose on Friday.

Thanks for reading,


Anduril: Acquiring Prime

Welcome to the 205 newly Not Boring people who have joined us since last Tuesday! If you haven’t subscribed, join 212,148 smart, curious folks by subscribing here:

Subscribe now

Today’s Not Boring is brought to you by… Mercury

Mercury is banking engineered for startups. But they go beyond banking, providing the resources and connections startups need to become success stories. We’ve used Mercury at Not Boring since day 1…and frankly, it’s very rare I see a startup that doesn’t. The company just launched Mercury Raise: a comprehensive founder success platform built to remove roadblocks at every step of the startup journey. 

  • Access: Get exclusive intros from Mercury to investors, experts, and other founders.

  • Fundraising: Submit your deck and get in front of hundreds of active investors looking to invest in companies like yours.

  • Community: Attend unfiltered conversations with industry vets and build relationships with other founders at similar stages.

Navigating this whole startup thing can be hard and lonely – but with Mercury Raise you don’t have to go it alone.

Explore Mercury Raise

Hi friends 👋,

Happy Tuesday! Fun one today. 

After starting its life as a controversial company for its support of the US military, Anduril has been proven prescient by the war in Ukraine, the growing threat of China, and its early ability to win contracts from the US and allied militaries.

As a result, the company’s story has been told in numerous newsletters, publications, and podcasts. I’ll put the links at the end, and I highly recommend reading and listening to any or all of them for the full Anduril story. But for today’s essay, I wanted to focus on one piece specifically: Anduril’s acquisitions. 

Just six years old, Anduril has made a splash with five acquisitions since 2021. I wanted to understand why, and how, they do it. In the process, I learned a ton about the future of defense, Anduril’s ambitions to connect the old way of doing things with the new, and what it takes to build a generational Hard Startup.

Full disclosure: I’m an investor in Anduril through an SPV we put together for Not Boring Capital’s LPs, and a big fan of the company, but this isn’t a sponsored deep dive. After their most recent acquisition, I asked them to tell me about their M&A strategy, and I’m excited to share what I learned with you.

Let’s get to it.

Anduril: Acquiring Prime

The best way to think about Anduril is through the series of nested bets the company is making. 

Bet #1: The way humans fight wars, or more importantly deter each other from fighting wars in the first place, is changing: from large, expensive, exquisite hardware platforms to distributed systems of smaller, cheaper, more networked defense products.

Bet #2: The United States Department of Defense (DoD) and its allies will be able to transition the way they buy capabilities to account for this new reality: from the DoD’s slow Planning, Programming, Budgeting, and Execution (PPBE) system to what Anduril Chief Strategy Officer Christian Brose calls a “Moneyball Military.”

Bet #3: No incumbent in technology or defense is able to serve this new need: large technology companies that have the software chops have been incentivized to keep China happy and have shied away from defense, and the Defense Primes – the five leading contractors that provide major military systems, platforms, and services to the DoD – don’t have the software chops, the ability to recruit the people who do, or the incentive to move away from the cost-plus gravy train. 

Bet #4: By investing in software R&D upfront, it will be able to better deliver the capabilities the DoD needs to fight the new kind of war, more cheaply: that its Lattice OS will let Anduril deliver more capable hardware products for less money, and that its advantages will compound with each new product it plugs in. 

Bet #5: Startups can develop the advanced technologies that DoD will need better than incumbents, and that Anduril can sell them into DoD better than the startups building them can: that it can serve as a pipe between the competitive capitalism of the startup and SMB defense tech ecosystem and the more centrally-planned DoD acquisition machine. 

Bet #6: This one is actually a series of bets Anduril makes every time it builds a new product, or acquires a company that does: that it will be able to sell that particular product into DoD and ultimately into large programs.

The first three bets define the battlefield on which Anduril is fighting, and the second three define the company’s battle plan. 

Anduril, founded in 2017, doesn’t look like either a typical tech startup or a traditional defense contractor. It was born to fill the gap between the two worlds. Its mission is “transforming US & allied military capabilities with advanced technology.” 

The way I’ve come to think about the company is as a meta-version of Lattice.

Lattice is Anduril’s software operating system. It serves as an integration layer between legacy defense systems, modern sensors and weapons, and Anduril’s own hardware, giving operators the ability to make faster, better-informed decisions. 

Similarly, Anduril the company aims to serve as an integration layer between old and new, between the traditional, centrally-planned way the DoD procures and deploys capabilities and the free market of startups and SMBs building cutting edge defense technology.

In order to do that, and win its place among the Primes, Anduril needs to take risk, as its co-founder Palmer Luckey explained in a recent conversation with Politico:

If your goal is to become one of the major players, you have to take riskier bets, you have to put in more money. You have to do things that are less likely to pan out. But if it does pan out it’s going to be very impactful.

One of those risky bets is making acquisitions so early in its life. 

Over the past two and a half years, it’s put up its own capital to acquire five companies: 

  • Area-I 

  • Copious Imaging 

  • Dive Technologies

  • Adranos Energetics

  • Blue Force Technologies

The most recent, Blue Force, turned heads in early September when Anduril unveiled the acquisition’s first fruit: Fury.

These haven’t been acquihires or yard sales, but good ol’ fashioned 1+1=3 M&A transactions. They’re part of a specific strategy, based on those six bets. 

I wanted to dig into Anduril’s M&A activity specifically because it stands out as a unique company-building strategy for such a young startup. After conversations with the team and dozens of hours of research, Anduril’s acquisitions have illuminated what makes the company tick, and why I think it has a real shot at becoming a sixth Defense Prime alongside Lockheed Martin, Boeing, Raytheon, Northrop Grumman, and General Dynamics. 

Every successful company has a “thing.” Of all of the many things they do, there’s one that’s the secret sauce, the thing they do better than anyone else. SpaceX can get kilograms to orbit more cheaply than anyone else. Meta connects people. Google dominates search. ByteDance does addictive algorithmic feeds. Amazon is a logistics powerhouse. Microsoft sells software to enterprises. 

Anduril’s “thing” isn’t any of its particular products, or even its Lattice OS; its thing is its ability to sell modern defense capabilities into the DoD. 

That requires two broad capabilities: 

  1. Building Trust with the DoD

  2. Having the Right Product at the Right Time

While Anduril is relatively young, it’s worth noting that its leaders have run this playbook before. CEO Brian Schimpf, Executive Chairman Trae Stephens, and COO Matt Grimm were all early employees at Palantir, which built the “sell modern technology into DoD” playbook.

If Anduril can continue to build trust with the DoD, acquire companies with the right capabilities at the right time, and sell them into increasingly large programs, it has the chance to inject some capitalism into the nation’s fight against our centrally-planned adversaries, and become a Prime in its own right. 

We’ll dive (pun intended) into the details of specific transactions, but to understand why Anduril acquires companies, we need to start by understanding the defense market. 

Defense is Weird

“Defense is weird.” 

When I asked Adam Porter-Price, Anduril’s Head of Corporate Development and M&A, and Matthew Steckman, its Chief Revenue Officer, in separate conversations, why Anduril acquires companies, they both gave the same response. 

So what makes defense weird? 

In Christian Brose’s recent Hoover Institution paper, Moneyball Military, he describes the core weirdness at the heart of defense: the Planning, Programming, Budgeting, and Execution (PPBE) system that the DoD uses to define the “military capabilities the nation requires and how to validate, fund, and ultimately procure them.” Brose writes (emphasis mine):

The goal of the PPBE system, when Secretary of Defense Robert McNamara institutionalized it fully from 1961 to 1968, was not to optimize or better manage capitalism in matters of national defense. It was to transcend capitalism altogether. It was an attempt to graft an allegedly superior, explicitly Soviet-style institution onto the postwar American government, ironically for the purpose of competing more effectively against that Soviet adversary. 

As Lofgren writes, “It was not hyperbole when historian Charles Ries described military staff planning in 1964 to be ‘almost socialist in its metaphysics.’” After all, “the economic expert and socialist alike believed that central planning could far outstrip the productive capability of uncoordinated markets.” 

Let that sink in. It’s certainly not something I appreciated. As the US faces threats from two centrally-planned economies, China and Russia, we’re doing so with a system modeled on theirs instead of leaning into our capitalist advantage.

To fix that, Brose recommends splitting procurement in two: continue to use PPBE for traditional platforms and weapon systems, while establishing the Moneyball Military, generated through market creation, for “robotic vehicles of all sizes in all domains, constellations of small communications and intelligence satellites, and other commercially derived military systems.”

While there’s growing support for something like Brose’s proposal among America’s military leadership, the DoD is a very large, slow-to-turn institution. So in the interim, Anduril wants to serve as the bridge between the two approaches. It needs to play by the rules of the PPBE on behalf of smaller defense tech companies, like an API into DoD. 

So what are the rules? Three are most relevant to understanding Anduril’s acquisitions:

  1. Small Number of Buyers. The defense sector is a monopsony, the demand-side version of a monopoly: a market in which one buyer is the major purchaser. Technically, a Prime may have 10-20 relationships across the DoD’s branches and agencies, and among its allies, but that’s a tiny number compared to a typical company’s client roster, and DoD sets the tone. Most of the spend they compete for comes out of the Congressionally-approved DoD budget.

  2. Sales are Punctuated, not Linear. Instead of selling to customers on their own schedule, defense companies compete for contracts and programs that may come up once every five or ten years. Companies might work on a number of smaller projects to build their track record, trust, and capabilities, but the goal is to win big programs when they come up. “There are certain barriers to entry that are very real and time-based,” Porter-Price explained. 

  3. Wins Can Be Massive. These programs can be absolutely enormous. The Joint Strike Fighter program, awarded to Lockheed Martin in 2001 to produce 2,456 F-35 Lightning II fighter jets through 2044, is currently estimated to cost $1.7 trillion. Lockheed still owns the contract despite the fact that its costs have ballooned ~8x from its proposal. And because contracts are cost-plus, meaning the government pays the contractor’s cost with a margin on top, Lockheed makes more money the more over budget it is. 
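The incentive at work in that last rule is simple enough to sketch. A minimal illustration in Python (the 10% margin rate and dollar figures are hypothetical, not actual JSF contract terms):

```python
# Hypothetical illustration of contract incentives; the margin rate and
# costs are made-up numbers, not actual contract terms.

def cost_plus_profit(cost: float, margin_rate: float) -> float:
    """Cost-plus: the government reimburses cost and pays a fee on top,
    so the contractor's profit grows with its spending."""
    return cost * margin_rate

def fixed_price_profit(price: float, cost: float) -> float:
    """Fixed-price: the contractor keeps whatever is left of the agreed
    price, so overruns come out of its own margin."""
    return price - cost

on_budget = cost_plus_profit(100.0, 0.10)  # fee at the proposed cost
overrun = cost_plus_profit(800.0, 0.10)    # an 8x overrun pays 8x the fee
print(on_budget, overrun)
```

Under cost-plus, an 8x overrun multiplies the fee by 8; under fixed-price, the same overrun would wipe out the contractor’s margin entirely.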

While JSF is the largest, it’s certainly not the only enormous program. 

Each year, Congress approves the Defense Budget, and it continues to grow. The Fiscal 2024 National Defense Authorization Act allotted an all-time high of $842 billion to the DoD, a record-setting $315 billion of which will go to procurement and research and development. It’s worth checking out this 2024 Defense Budget Overview presentation if you’re interested in the details. 

We’re spending $246.6 billion this year on Integrated Deterrence: $61.1 billion on Air Power, $48.1 billion on Sea Power, $13.9 billion on Land Power, $37.7 billion on Nuclear Enterprise Modernization, $29.8 billion on Missile Defeat and Defense, $9.0 billion on Long-Range Fires, $13.5 billion on Cyberspace, and $33.3 billion on Space and Space-Based Systems.  

Getting more granular, this table shows the 2024 Defense Budget Request per weapon system: 

Defense is a market with essentially one customer, which buys new things infrequently, telegraphs what it wants to buy, and when it does buy, commits an absolute ton of money for a very long time. In such a market, it’s critical to be “ready and geared up for a competition,” Steckman told me. “If you win, you get a big spike. If you lose, you get nothing.” 

But winning isn’t as simple as having the best technology at the best price. The Defense Primes spend a relative pittance on R&D compared to large tech companies, and they sell expensive platforms that regularly come in multiples over budget. 

No startup can saddle up to a competition with a fancy piece of technology alone and expect to win. This presents a challenge for both the DoD, which can’t always acquire the best solution for the job, and defense startups, which can’t simply compete on product like traditional startups can. It also presents an opportunity for a startup willing to put in the time and work to do all of the other things required to win. 

Competing as a new entrant with Prime ambitions means operating with a mix of long-term patience and short-term speed. To go back to our Lattice analogy, the platform Anduril is patiently building at the company level is trust with DoD.

Trust in Anduril as a Network Effect 

“I haven’t listened to Brian’s podcast, although I’m sure it was amazing. Put that in the piece.” 

I was asking Steckman about the question Ben Thompson asked Anduril CEO Brian Schimpf on Lattice’s network effects, to which Schimpf replied, “I think as we add more platforms, more sensors, more capabilities into this, everything gets more valuable, I think that’s spot on.”

Instead of expanding on Schimpf’s point, Steckman offered an alternative: “My view from the CRO perspective: the network effect to me is actually trust in Anduril.”

Trust is critical to Anduril’s ambitions. Without it, Anduril won’t be able to weave Lattice OS into the fabric of defense, won’t be able to make its acquisitions pencil, won’t win large programs, and won’t have a shot at fulfilling its mission: transforming US & allied military capabilities with advanced technology.

Building trust with the DoD requires a number of things. 

First, it requires a deep understanding of DoD’s needs. As Steckman explained to me, even the seemingly simple question of “Is it the right tech?” is not a simple one. “If you’re not fully informed, you might think something is a solve where it isn’t,” he said. “Often, the government isn’t buying a piece of new technology because it doesn’t solve the problem.”

Getting to the point where you can develop a deep understanding is challenging in its own right. It means that the company needs to earn the right to see certain information, which means building up internal teams, technology, and processes to handle the security apparatus around sensitive information. It means being in Washington, DC and thoughtfully engaging with the community at the Pentagon or on Capitol Hill. It means understanding the current set of equipment, hardware, and software that exist so you know where the holes are and what you’ll need to integrate with. 

Plus, there’s a paradox at play: you have to have a history of working with the DoD to be able to sell to them in the first place. While DoD has tried to fix this by introducing startup-friendly programs like Small Business Innovation Research (SBIR) and Small Business Technology Transfer (STTR) and the Defense Innovation Unit’s Commercial Solutions Openings, among others, needing to go through these first can represent a speed bump that prevents small businesses from selling into large programs. 

Even then, the DoD is rightfully concerned about the health of the suppliers it buys from. It needs to feel confident that the company will be around to continue to supply the capabilities it’s contracted to supply. Here, the $2.3 billion Anduril has raised from top-tier venture funds like Founders Fund, a16z, Lux Capital, Thrive Capital, Valor Equity Partners, and 8VC helps.  

If you overcome all of that, you still have to show that your technology can integrate with the hodgepodge of current and future systems – all of the other pieces of technology that DoD acquires and operates – all of which have varying degrees of software sophistication, and most of which speak different languages. Integrating with old or poorly written software of any kind is difficult – I remember months-long projects at Breather when we had to integrate with outdated building management systems, and the engineers absolutely hated them – but it’s even more challenging when the stakes are so high. 

This gets to a core Anduril thesis: all incumbent defense companies are built around hardware platforms and treat software as an afterthought. But the company believes that approach is fighting the last war. For the wars of the future, according to Anduril, you need to start with a software platform that speaks to all sorts of hardware, and plug new hardware into that network as it emerges. 

This is where Lattice comes in. Lattice is the software network at the heart of everything that Anduril does. Its first product, Lattice for Command and Control, is designed to accelerate the “kill chain” by giving human operators information and interfaces designed to help them understand, decide, and act more quickly. Lattice for Mission Autonomy, released in May, is built to allow “a single human to control and coordinate a wide range of autonomous assets across the ocean, land, and sky.” 

Lattice is designed to work with Anduril’s own hardware and to integrate with hardware and platforms built by other vendors. In an ideal world, military personnel use Lattice to coordinate all of their missions, consume synthesized information from sensors on every piece of hardware available, command fleets of autonomous machines, and make decisions out of harm’s way. But getting to that point takes trust. 

“We first proved the concept on missions like border security and air defense,” Schimpf says in the video above, “and today, Lattice is now automating the operations of hundreds of robotic systems deployed in tactical environments around the world.” 

That approach is designed to build trust. Start with lower-stakes missions, prove not only that Lattice works but that it delivers better capabilities, and then repeat on increasingly high-stakes missions. 

“The more government users see it working and delivering 2x, 3x, 10x capability, the more other government users will look at it and say, ‘I want it to do that for me,’” Steckman explained. “That’s the flywheel: the more it’s out there and working, the more you get this interesting social effect on trust.”

To build the software-like network effect that Schimpf and Thompson discussed, Anduril first needs the social network effect, the trust among customers, operators, and other vendors that Lattice can deliver better outcomes. 

But Lattice itself is just one piece of the company-level Anduril OS: the relationships, security apparatus, knowledge, engagement, and software that build trust with DoD and put the company in a position to win competitions as they come up. 

As Anduril OS continues to get stronger with time and experience, so too does the company’s ability to sell advanced technologies – the apps on Anduril OS in this metaphor – to US and allied militaries. As its flurry of acquisitions shows, the company seems to believe it’s now in a place where it can use Anduril OS as the bridge between the defense tech free market and the DoD’s PPBE. 

Anduril’s Acquisitions 

I told you that this is a piece specifically on Anduril’s M&A strategy, and then I spent seven pages talking about the defense industry and how Anduril is building trust with its customers. That, I think, was necessary context. 

Anduril is in a position to acquire companies because it’s built up the hard-earned ability to sell their products to DoD better than the companies themselves could, and it has to acquire companies because of how the defense industry works. 

Specifically, acquiring companies allows Anduril to sell into competitions that it either knows are coming up or expects to come up based on the way the market is shifting. Hardware development can take time, and competitions often come up without enough advance warning to go through that whole process, so for Anduril, acquisitions speed the route to market.

As Steckman explained, “Because returns are so punctuated, and creating hardware takes many years to get right and to become trusted within a field, it is a way to accelerate your positioning against those big punctuated movements.” 

To date, the company has made five acquisitions across a range of capabilities:

  • Area-I (April 2021): ALTIUS – Agile-Launched, Tactically Integrated Unmanned System

  • Copious Imaging (September 2021): WISP – Wide-Area Infrared Sensing with Persistence Sensor

  • Dive Technologies (February 2022): Dive-LD – Autonomous Underwater Vehicle (AUV)

  • Adranos Energetics (June 2023): Solid Rocket Motors

  • Blue Force Technologies (September 2023): Fury – Group 5 Autonomous Air Vehicle

It’s useful to look at a few of them specifically to understand Anduril’s process and rationale.

Dive Technologies

In February 2022, Anduril acquired Dive Technologies, a Boston-based maker of autonomous submarines, or AUVs in defense parlance. 

Dive fit the future model of war that Anduril is building against: smaller, cheaper, autonomous systems across all domains (land, air, sea, and space) coordinated through software. But part of the balance is figuring out which parts of that future will occur when. 

Steckman told me that Anduril asks two questions for all organic product development and acquisitions: “Is the problem urgent and can we solve it in less than 5 years?” 

Dive fit the bill on both fronts: 

  • Urgent: Submarines are critical in the conflict with China, and we’re retiring exquisite nuclear submarines faster than we can build them. We need cheaper, faster-to-build submarines for their stealth, deterrence, intelligence gathering, and more, now. 

  • Solve In 5 Years: Dive Technologies had been developing its product for nearly four years at the time of acquisition, and could get to market quickly. 

Porter-Price told me that Dive had done a lot of clever things from the hardware side, that its Dive-LD was good for the type of mission the US and its allies want to perform, and that there was “nothing like it in the market.” At the same time, Dive wasn’t ready to win large contracts on its own yet. Its autonomy software and ability to collaborate with other equipment in the water was far less developed than the hardware. 

So Anduril had a capability it knew allied militaries would need but no hardware product to sell against it, and Dive had the hardware but neither the ability to sell into military buyers nor the software to deliver on its targeted autonomous capabilities. 

“Dive would have gotten there,” Porter-Price said, “but we sped up their timeline by 7-10 years.” On the flip side, Dive accelerated Anduril’s entry into a market where “customers are banging down the door.” 

Anduril acquired Dive, turned it into Anduril Maritime, and made Dive Founder & CEO Bill Lebo its GM. Within three months of the acquisition, the combined team hooked the Dive-LD into Lattice, came up with a plan for turning the pickup-truck-sized submarine into a school bus-sized version, and sold into a “$100 million co-funded design, development and manufacturing program for Extra Large Autonomous Undersea Vehicles (XL-AUVs) for the Royal Australian Navy.” By December, Anduril execs flew to Australia for the naming ceremony for the AUV. They call it Ghost Shark.

According to an excellent Reuters article, the Ghost Shark is part of a two-pronged advanced submarine approach the Royal Australian Navy is taking to meet the challenge of a rising China: on one side, 13 nuclear-powered attack subs that will cost $18 billion apiece and won’t finish being delivered until after 2050, on the other, “three unmanned subs, powered by artificial intelligence, called Ghost Sharks. The navy will spend just over AUD$23 million each for them – less than a tenth of 1% of the cost of each nuclear sub Australia will get. And the Ghost Sharks will be delivered by mid-2025.” 

That starkly illustrates the difference between the way things are and the way Anduril thinks they should be. 

Dive’s Australian contract may be the first of many. Japan, an island nation, is tripling its defense budget, which will be the third-largest in the world next year, to prepare for the threat from an adversary on its doorstep, and the program might also serve as a test run the US Navy uses to evaluate purchasing Dive-LD AUVs for its own fleet. 

With Dive, Anduril is skating where the puck is clearly heading and delivering capabilities, and revenue, now. 

Adranos Energetics

“Our process takes a year,” Porter-Price told me when I asked about the lifecycle of an acquisition at Anduril. 

Each year, Anduril’s leadership team comes up with a shopping list, identifying areas that fit within programs they want to pursue. 

One such area on the shopping list was solid rocket motors (SRMs). After Northrop Grumman acquired Orbital ATK, the maker of the nation’s largest rocket motor, Boeing abandoned the ICBM business it’d been in with the Minuteman for half a century, citing the fact that the acquisition gave Northrop “an unfair advantage.” The Pentagon warned Congress of the dangers of SRM consolidation in 2017, and the FY 2024 DoD Budget, submitted in March 2023, notes that the Future Year Defense Plan (FYDP), essentially a five-year defense roadmap, “applies resources to address supply chain bottlenecks in … Solid Rocket manufacturing,” among others.

It didn’t take much of a crystal ball to understand that the DoD would be willing to spend on solid rocket motors, and looking at how Anduril attacked that opportunity gives us a glimpse into its process.

So SRMs were on the list, and Porter-Price and his team went shopping. 

He says they identified about ten SRM companies in the US and Australia. Then they met with each of them, did technical diligence, and “spent a lot of time on planes” getting to know everyone in the ecosystem. They homed in on three companies that were particularly good. 

None of these companies was for sale. No banker was shopping them. But none of them was stupid, either. “They know what it means when the head of corp dev comes to visit,” he admitted. One of the three companies Anduril was interested in was Adranos Energetics, which manufactures its SRMs in Mississippi. 

“I went to Mississippi seven or eight times with people on our team, we brought them out to California. Everyone wants to know what Palmer’s like,” Porter-Price said, “It’s uncomfortably like dating.” 

It’s important that both sides get to know each other. Many of these companies are bootstrapped. They’re the founders’ babies. And joining forces with Anduril means they’ll no longer be their own boss. In return, Anduril structures deals to incentivize founders with significant ownership in the combined company, which they need to earn by coming to work and often running divisions. This is a long-term relationship. 

At a certain point, they have a discussion on a couple different paths they can follow: 

  1. Partner or Collaborate

  2. Become One Company

In Adranos’ case, they decided to become one company. 

At that point in the process, Porter-Price explained, Anduril “puts together a compelling package for the founding team,” wants them to bring their whole team with them, and then goes fast. Typically, they go from LOI to execution in 60 days, although it can take longer in some cases, as it did with Adranos, where greater diligence was required: agencies like the ATF expect you to be careful with rocket motors. 

In June, Anduril announced its acquisition of Adranos.

Adranos Solid Rocket Motor Cooking

In the announcement, Anduril said that it would become a “merchant supplier of solid rocket motors to prime defense contractors delivering missiles, hypersonics, and other propulsion systems,” and not a direct contractor itself. 

That highlights the versatility of Anduril’s approach: while it wants to become a Prime, it’s willing to go where the opportunity is to build up its capabilities, solve a critical need for the DoD, and grow its business. 

Anduril is bringing the Adranos team in-house and investing in building out its manufacturing capabilities in Mississippi in order to increase its output to thousands of SRMs per year. In a world of high-volume, attritable drones and a need to increase our weapons production capacity, that should be a valuable capability to offer. 

Blue Force Technologies

Build or buy? In an M&A discussion, that’s a key question. Does it make more sense to develop the product in-house, or are we better off acquiring a company that already makes it? 

Anduril does both. It developed Lattice, Sentry Towers, and Ghost Drones in-house, and it’s currently either exploring or developing a number of new capabilities. But obviously, it’s also willing to acquire companies when that makes more sense. So I asked Porter-Price how the company decides when to build or buy. 

“It comes down to the open window,” he explained, and offered the Blue Force Technologies deal as an example. 

Blue Force had been working on what would become Fury, the Group 5 autonomous aerial vehicle (AAV), for four years, as Joseph Trevithick and Tyler Rogoway lay out in an excellent history in The Drive, The Rise of Fury. 

Fury, Anduril

Fury is exactly the type of system that Anduril believes represents the future of deterrence and warfare. 

Group 5 AAVs are the largest and most capable class of military drones, often comparable in size and function to manned aircraft, and can be used in training – as pilotless “red air” aggressor jets against which pilots simulate battle – or in surveillance or even combat missions, flying autonomously in formation with manned fighter jets. 

And it’s exactly the type of product the military has increasingly said it wants to purchase. 

In March, Secretary of the Air Force Frank Kendall detailed the service’s plans for Collaborative Combat Aircraft (CCA), saying that the Air Force was conducting planning based around a notional fleet of around 1,000 advanced drones with high degrees of autonomy, like Fury, along with 200 Next-Generation Air Dominance (NGAD) stealth combat jets. The Air Force plans to pair F-35s, for example, with advanced drones.

In August, Deputy Secretary of Defense Kathleen Hicks announced a program named Replicator that plans to field thousands of attritable (expendable and low-cost) autonomous platforms, like Fury, that are “small, cheap, and many” in order to overcome China’s biggest advantage, mass: “more ships, more missiles, and more people.” 

This is exactly what Anduril, and most vocally Brose, has been calling for for years, and it’s happening now. Steckman told me that Anduril had a sense that these specific competitions would be coming, but also “knew in a more macro sense that this class of technologies was going to permeate all of the services.” They also knew that a Group 5 jet would take a long time to develop. 

Porter-Price made a similar point, telling me that Blue Force had been working on the aircraft with the Air Force for four years, and that the Air Force loves them, but at the same time, Blue Force didn’t have the ability to invest in autonomy software, talk to other fighter jets, or manage complex software integrations with F-35s. “Anduril can.” 

The two teams agreed they’d be more competitive together – Blue Force providing the hardware and Anduril providing the software, DoD sales infrastructure, and balance sheet – and decided: “Let’s accelerate.” 

In September, Anduril announced that it had acquired Blue Force Technologies.

1:2 scale model of Fury, Anduril

Group 5 AAVs, like autonomous subs and solid rocket motors, represent an urgent problem that Anduril can solve within five years. Like Dive, Blue Force was an opportunity to accelerate both sides’ time to market by years and deliver the military better capabilities, sooner. 

In all three cases, Anduril put its own capital – cash and equity – at risk, buying companies and spending money before winning competitions with the expectation that being early comes with significant advantages, particularly for a new entrant. It’s the kind of bet that Anduril is uniquely positioned to make. 

Venture Capital Shaped Holes and Wall Street Weenies

M&A is not a novel concept in the defense industry. Its modern structure is defined by the consolidation that emerged from “the Last Supper,” a 1993 dinner at which Deputy Secretary of Defense Bill Perry told the CEOs of the largest defense contractors that the military would be spending less money going forward and urged them to consolidate. They did. Dozens of smaller defense contractors became five Defense Primes: Lockheed Martin, Boeing, Raytheon, Northrop Grumman, and General Dynamics. 

Defense Industry Consolidation, Anduril

But Anduril is hoping that it’s able to approach M&A differently enough, for long enough, to get a seat at the Prime table. This isn’t wishful thinking; it’s structural. 

When Breaking Defense’s Aaron Mehta asked Palmer Luckey when he planned to take the company public, Luckey gave one of my favorite founder interview question answers in a long time (emphasis mine):

It doesn’t make sense to IPO when private capital is going to be more willing to fund the types of high-risk plays that you’re making than the public markets. Right now, we’ve been building a company that is exactly the right shape to go through the venture capitalist hole. Building a company that goes through the Wall Street analyst hole is different. And we’re doing that, that is a goal, we’re trying to get there. But we’re not a well-shaped company for Wall Street right now.

Wall Street would look at our company and say, this is still a very high-risk venture they have, they’re making these large investments in programs that may or may not pan out. And the only way that works out for Wall Street is if I can bring their confidence level close to my confidence level. I will never get them all the way in there, but I have to get most of the way there so they agree that our financial model makes sense. And so I feel like that’s what we’re going to be doing over the next few years is trying to shape all of these things so it’s the right shape for a Wall Street weenie to tell his boss that we are a safe part of their institutional portfolio.

The answer is great partly because no founder-who-wants-to-eventually-go-public in his right mind calls his future shareholders “Wall Street weenies,” but mainly because it illuminates the Goldilocks position Anduril is in when it comes to acquiring advanced technology companies: it’s well-funded enough to make meaningful acquisitions, but privately held enough that it doesn’t need to justify its investments the way public companies do.

Anduril fits the venture capitalist hole. As mentioned earlier, it has raised $2.3 billion from top-tier venture funds like Founders Fund, a16z, Lux Capital, Thrive Capital, Valor Equity Partners, and 8VC, most recently in a $1.48 billion December 2022 Series E that valued the company at $8.48 billion. That gives it a healthy enough balance sheet to make customers comfortable, and it also gives it a warchest of cash and equity priced by good investors post-downturn with which to acquire companies. 

And because it doesn’t yet need to be the right shape for Wall Street weenies, it’s able to make acquisitions that the Defense Primes and other large contractors can’t, at least not without upsetting shareholders. 

That gives it some advantages because it can acquire companies early, before there are specific contracts or competitions in place. Married with the company’s educated belief in the way the market is moving, it can make bets that larger companies can’t. 

Steckman told me that while the benefits of being early apply to anyone in the space, it’s a particularly important advantage for a smaller company. 

First, it lets you shape requirements. The earlier you are, the more closely you can work with customers to shape requirements that ultimately determine how they acquire the capability. The earlier you are, the more you can become the thought leader if you do it right. “You can push your own conviction out to the world, to your customers, and if you prove it through analysis and data and they agree,” he explained, “sometimes the customer will acquire the exact thing you have.” 

Second, being early lets you shape the narrative, not in a fluffy way, but “with a lot of math and physics.” It lets you go to the customer and say “We’ve thought about this, done mathematical modeling and simulation, and here’s what we believe. But we also know we could be wrong. Let’s talk about your problem and modify the analysis.” A high-integrity, low ego approach can go a long way. 

Anduril hasn’t been shy in sharing its narrative, backed by experience and expertise. Brose’s writing – both his essays and his book, The Kill Chain – describes a future of warfare that he believes would be good for the country and good for Anduril. 

Third, being willing to go early lets Anduril buy companies that public competitors can’t.

As Steckman told me, “It’s hard for a large company to buy an asset and say they can’t forecast, but the way Anduril is structured allows them to make unique bets on a thesis.”

Porter-Price admitted that he’s “spoiled in this job”: 

No one on the team asks me about a discount rate or whether we’re paying 15x EBITDA. We’re not trying to find cost synergies. All of the typical things in an M&A environment, we don’t have to deal with.

We have to have shots on goal, and they’re big bets. When we’re making an investment to acquire a company, it will either be a tremendous success or a smoking crater.

Imagine a public company Head of Corp Dev telling that to a Wall Street weenie! That said, Porter-Price added that they do pay in-market prices; they’re just not hamstrung by EBITDA multiples. “If we believe the revenue story,” he explained, “we can be flexible and put together a compelling package for the seller.” 

For now, that’s an advantage over the Primes. Like the VCs that back Anduril, Anduril itself can invest in companies early. But that will change. Porter-Price said that he’s increasingly running into the Primes on the companies they’re looking at as the larger companies start to grok the Anduril playbook. But they’re still hamstrung by bureaucracy, slowness, and public market incentives, which he believes means Anduril will still win these for a while. But, he acknowledges, it won’t last forever: “They know that there just aren’t that many great companies.” 

So the race is on. 

Can Anduril prove itself quickly enough to get big and convince Wall Street weenies that its financial model makes sense before the Primes get nimble and compete for the great companies? 

Can Anduril become a Prime before the Primes become Anduril? 

Prime Time

Anduril is a Russian doll fighting to help America’s defense industry plan in a less Soviet way. 

The company is making a series of nested bets on the future of war and the role it can play in bringing it about. The DoD needs to shift to distributed systems of smaller, cheaper, more networked defense products, and Anduril is betting that it can serve as the Lattice-like integration layer between the DoD’s centrally-planned PPBE system and the free market of startups and SMBs that produce the advanced technology it needs. 

To do so, it’s building and acquiring technologies to take advantage of open windows during which it might be able to sell those technologies, coupled with Lattice, into large programs. Those windows exist within a larger window: the period in which Anduril is the only logical buyer for the technologies it believes will make the difference in the conflicts to come. For now, it’s brilliantly counter-positioned against the incumbent Primes. 

Five times in the past two and a half years, this has meant acquiring companies in order to speed their time to market. My hunch is that we’ll see Anduril lean even harder into acquisitions versus in-house development in the years to come, for a couple of reasons:

  1. Bet on the Power of Free Markets. If one of Anduril’s key bets is that the market will produce ideas, technologies, and capabilities that better respond to the military’s needs than a centrally-planned system, then I suspect that bet extends to Anduril itself. Instead of planning far enough in advance to develop each technology in-house, it can let the market do its thing, and then pick the winners in that sweet spot when Anduril can be early with an educated view towards near-term DoD demand. 

  2. Differentiation. A lot of companies make hardware; no one has built the combination of software and DoD trust that Anduril has. By focusing on building out that integration layer, it can remain the buyer-of-choice for the most innovative defense tech companies while becoming the DoD’s partner-of-choice for advanced technologies that work with the rest of its assets. 

Of course, the situation is fluid and things will change. Primes might appreciate the threat and begin competing for the companies Anduril wants to acquire. The DoD itself might listen to Brose and begin buying large quantities of attritable, consumable, autonomous hardware directly, lessening the need for Anduril to acquire those companies. 

In that case, Anduril will adapt. The more I learn, the more I think viewing Anduril as a meta-version of its products is useful. Just as its products enable operators to close the kill chain by understanding, deciding, and acting more quickly, Anduril itself can be defined by this continuous process of understanding, deciding, and acting. 

What’s brilliant about Anduril’s acquisitions to me isn’t necessarily any of the specific companies they’ve acquired, but how cohesively its M&A strategy fits with the realities of its current position and future ambitions:

  1. Vision-Aligned Acquisitions: Anduril aims to acquire startups and SMBs that align with its vision of future warfare, particularly those with advanced technologies that can be integrated into its existing Lattice platform.

  2. Private Funding Flexibility: Leveraging its $2.3 billion in private venture capital, Anduril can make high-risk bets in acquisitions without immediate justification to public shareholders, giving it an edge over public Defense Primes.

  3. Plug Acquisitions into a Platform: Anduril has spent years building Anduril OS – the combination of software and trust with DoD – which makes it an attractive and effective acquirer. 

  4. Early-Mover Advantage: By acquiring companies early in their lifecycle, Anduril gets to shape both the requirements and the narrative surrounding new defense technologies, making itself indispensable to its customers.

  5. Balance Vision with Near-Term Market Demands: “Is it urgent and can we solve it in five years?” puts practical constraints around early bets. Being too early is the same as being wrong. 

  6. Race Against Time: The company is on a clock to prove its financial model and scale up before the larger Defense Primes adapt and start competing for the same advanced technology companies.

Normally, a writer might frame that list as “6 M&A tricks you can learn from Anduril,” but the takeaway for other startups shouldn’t be, “Do M&A like Anduril does.” It should be, “Do strategy like Anduril does.” If you want to build a generational company, strategy really matters.

Anduril is a singular company, and its strategy works because the company’s leaders have crafted it for the specific mission at hand. That shouldn’t be a surprise. Richard Rumelt’s strategy kernel – diagnosis, guiding policy, coherent actions – is essentially a longer-range version of the kill chain – understand, decide, act. No startup I’ve encountered has been more clear-eyed and thorough in its diagnosis, or followed it through more fully into a guiding policy and coherent actions, than Anduril. 

Instead of building a large, hard-to-turn, exquisite platform, Anduril has chosen to build its company as a network that’s constantly taking in new information and using it to adjust in real-time. 

That’s why, even as the Primes begin to move to where Anduril is today, I expect that the company will be able to respond more quickly to the changing defense landscape of the future. 

So will Anduril become the sixth Prime? 

Steckman doesn’t think they’re quite there yet, but he believes they’re getting close. They’re finally getting invited to sit next to the Primes at big events, getting mentioned in the same breath as those century-old stalwarts. 2024, he thinks, is going to be a big year: 

If we stick to what we’re good at — high levels of control and autonomy, unmanned systems, core technology areas – it will end up working out. If you look at the Defense budget over the next ten years, the sliding bar is moving into favor. If we stay the course, the addressable market increases beneath us.

On the meta level, that’s the real benefit of being early and nimble. Anduril has been able to position the company for where it believes the market will be in ten years. It’s doing things that are hard for public Primes to do – investing in software R&D upfront, acquiring companies before they’re generating meaningful EBITDA, taking risk on new products instead of banking on the certainty of cost-plus – that position it for what’s to come. 

If Anduril does become a Prime, it won’t look like the other five. It doesn’t plan to build exquisite platforms like manned fighter jets or multi-billion-dollar aircraft carriers. It realizes that fighting the adversary on their own terms is a losing strategy. 

When I asked what was next on the shopping list, they told me they’re looking at space. If they go up there, I would imagine their products look more like swarms of cheap, smart, distributed systems, like Array Labs, than the multi-hundred-million-dollar satellites the government and Primes have traditionally built. The future Anduril envisions will be distributed and connected by software. 

Breaking into a massive industry dominated by powerful incumbents and changing it from the inside requires making risky bets against a clear vision of the way things should and will be, and adjusting as new information and opportunities arise. I’m betting that Anduril will pull it off. 

Further Reading and Listening on Anduril

Thanks to Dan for editing, and to Matthew, Adam, and Sofia for letting me see inside the Anduril M&A machine.

That’s all for today. We’ll be back in your inbox with the Weekly Dose on Friday.

Thanks for reading,


Array Labs: 3D Mapping Earth from Space

Welcome to the 78 newly Not Boring people who have joined us since last Tuesday! If you haven’t subscribed, join 211,943 smart, curious folks by subscribing here:

Subscribe now

Today’s Not Boring, the whole thing, is brought to you by… Array Labs

If you get as fired up reading this essay as I did writing it, you can simply go build radar satellite clusters to map the Earth in 3D by joining Array Labs:

Array is Hiring

Hi friends 👋, 

Happy Tuesday! It’s fall in New York City, the Eagles looked great on Monday Night Football, and I get to bring you a deep dive that’s been melting my brain for weeks.

If I’ve spoken with you over the past few weeks, chances are I’ve told you about Array Labs. It was designed for me. A deeply Hard Startup, riding exponential technology curves, to build distributed systems for space, to enable businesses on Earth, with a strategy as thoughtful as its hardware.

Nothing makes me happier than banging my head against a complex company like Array for weeks to understand it well enough to share it with all of you. This is a longer essay than I’ve written in a while, because fully appreciating what Array is trying to do requires some knowledge of the earth observation market, how satellites work, how radars work, business strategy, the self-driving car market, and more.

This is a Sponsored Deep Dive. I’ve always said that I’d only write them on companies I’d want to invest in — and I hope to invest in Array Labs when they raise next — and as Not Boring Capital’s thesis has tightened to Hard Startups that can bend the world’s trajectory upwards, so has the deep dive bar. Array clears that bar by hundreds of kilometers. You can read more about how I choose which companies to do deep dives on, and how I write them here.

So throw on some appropriate music…

And let’s get to it.

Array Labs: 3D Mapping Earth from Space

A real-time, 3D map of the world is the holy grail of earth observation (EO). 

Possessing such a map would enable new applications and technologies, from self-driving cars to augmented reality. It would improve climate monitoring, disaster response, construction management, resource management, and even urban planning. 

Access to a real-time global 3D map would confer significant strategic and economic advantages to whoever possesses it. The US government has spent untold billions of dollars over decades for more accurate and timely information about the state of the world. Real estate developers, insurers, and energy producers could turn the data from that map into dollars. 

But holy grails are, by definition, difficult to obtain. The map doesn’t currently exist, and it’s not for lack of trying. 

Array Labs thinks that now is the time to build it. And its founder, Andrew Peterson, thinks he knows how. 

The main thing you need is Very High Resolution Imagery (VHR) of gigantic swaths of Earth, freshly collected daily, to start, and more frequently over time. There are a few ways, theoretically, you could collect the data. 

You could send out a fleet of LiDAR-equipped cars, with humans behind the wheel. That’s how Autonomous Vehicle (AV) companies do it today, one city at a time. But it’s too expensive and impractical to cover the globe. At $20 per mile, it would cost over $1 trillion to map the Earth’s surface, even assuming cars could drive everywhere. 

You might fly LiDAR-toting airplanes across the globe. When people want VHR of specific areas today, that’s what they do. At around $500 per square kilometer (apologies for the mixed units, but we’re going with industry standards), a single US collection would be $5 billion. A global collection would run you a cool quarter-trillion, assuming you can access all of the world’s airspace without getting shot down. Closer, but still not good enough. 

Let’s go even higher, from ground to planes to satellites. Satellites rarely get shot down. They scan the entire globe, airspace be damned. Once in orbit, they keep flying around and around and around at very high speeds. And they’re way cheaper: at current retail pricing of around $50/sq. km, it would only cost you $150 million to cover the US, or $7.7 billion to cover the whole Earth. 
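The cost comparison above reduces to simple per-area arithmetic. A minimal sketch, using the essay’s per-unit prices and my own round area assumptions:

```python
# Rough cost comparison for mapping coverage. Per-unit prices come from
# the essay; the area figures are round approximations of my own.

US_LAND_KM2 = 9.8e6       # approx. US land area (assumption)
EARTH_LAND_KM2 = 154e6    # approx. Earth land area (assumption)

PLANE_COST_PER_KM2 = 500  # aerial LiDAR, per the essay
SAT_COST_PER_KM2 = 50     # satellite retail pricing, per the essay

plane_us = PLANE_COST_PER_KM2 * US_LAND_KM2    # aerial collection of the US
sat_earth = SAT_COST_PER_KM2 * EARTH_LAND_KM2  # satellite collection of Earth's land

print(f"Aerial LiDAR, US:      ${plane_us / 1e9:.1f}B")   # ~$5B, as in the essay
print(f"Satellite, Earth land: ${sat_earth / 1e9:.1f}B")  # ~$7.7B, as in the essay
```

The order-of-magnitude gap, not the exact figures, is the point: each step up in altitude cuts the per-square-kilometer price by roughly 10x.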

We’re getting warmer, but there are still two main problems with this approach: 

  1. Resolution. Today, the resolution from even the best 3D satellite collections is still 100x worse than airplanes. You couldn’t guide self-driving cars with these maps. 

  2. Coverage Cost. Getting coverage by sending up enough high-resolution satellites to capture the whole Earth would be too expensive to pencil out.

For a number of reasons we’ll get into, you can’t simply scale up current earth observation satellites to larger and larger sizes. They get exponentially less efficient as you scale them, and even if you worked all of that out, a satellite big enough to provide the resolution and coverage you’d need would be way, way too big to send up on a rocket. 

If you wanted to make the antenna really big – like 50 kilometers in diameter big – it would be impossible with anything close to our modern capabilities. Such an antenna would be 1,000x taller and 5,000x wider than SpaceX’s Starship. It would be 60x taller than the Burj Khalifa. It would have 5.82x the diameter of CERN’s Large Hadron Collider. 
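Two of those comparisons are easy to sanity-check with rough arithmetic. A minimal sketch, assuming round published dimensions for the Burj Khalifa and the Large Hadron Collider (the dimensions are my assumptions, not figures from the essay):

```python
# Sanity-check the size comparisons for a hypothetical 50 km antenna,
# using round published dimensions (assumed here, not from the essay).

ANTENNA_M = 50_000      # 50 km diameter antenna
BURJ_KHALIFA_M = 828    # height of the Burj Khalifa, in meters
LHC_DIAMETER_M = 8_600  # approx. diameter of the LHC's ~27 km ring

print(f"vs Burj Khalifa: {ANTENNA_M / BURJ_KHALIFA_M:.0f}x")  # ~60x, as the essay says
print(f"vs LHC:          {ANTENNA_M / LHC_DIAMETER_M:.1f}x")  # ~5.8x, close to the essay's 5.82x
```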

I tried to make an image to scale but it doesn’t come close to fitting on the screen. So let’s try a metaphor. If Starship were the size of a soda can, this 50 km diameter satellite antenna would be the size of the island of Manhattan. 

Not even close to drawn to scale

But a diameter that size is useful, because it helps increase vertical resolution, a key piece if you want a true 3D map instead of a stack of high-resolution 2D pictures. 

But having something that behaves like an enormous imaging satellite, or even just a giant radar antenna, would be really useful… 

There’s a way to do that, but it’s kind of crazy. 

What if instead, you kept the radar imaging satellites very small, but sent dozens of them up there in a big ring? 

Each satellite sends and receives data to and from the others; not close to scale

When you do that, what you end up with is something that behaves like an absolutely enormous antenna – an antenna 30 miles wide – whose performance improves rapidly as you add satellites, because every pair of satellites contributes a distinct imaging baseline. 

Instead of a monolithic antenna that gets exponentially less efficient with scale, you end up with a distributed system that gets more and more efficient with every satellite you add. Each satellite has its own solar panels, providing plenty of power. Each one is small and stackable enough that the whole cluster fits on a single port on an ESPA ring, snuggling in amongst all of the other satellites on the next SpaceX rideshare mission. Pointing them at the target is as easy as adjusting a small cubesat, like directing a ballet with dozens of prima ballerinas. 
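One way to see why the ring beats a single dish: if every satellite can both illuminate the target and listen, each unordered pair of satellites forms a distinct baseline, so the number of useful transmit-receive combinations grows quadratically with cluster size. A minimal sketch of that pair-counting, assuming a simplified multistatic model (not Array’s actual architecture or figures):

```python
# Count distinct transmit-receive pairs (baselines) in a cluster of n
# radar satellites, assuming every satellite can both illuminate and
# listen for every other one -- a simplified multistatic model.

def baselines(n: int) -> int:
    """Distinct unordered satellite pairs: n choose 2."""
    return n * (n - 1) // 2

for n in (2, 4, 8, 16, 32):
    print(f"{n:>2} satellites -> {baselines(n):>3} baselines")
```

Doubling a monolithic dish roughly doubles your cost for less than double the benefit; doubling the cluster roughly quadruples the baseline count.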

“Then, pretty soon,” Andrew explains, “You’ve got the most efficient image collection system that ever existed in the history of mankind.” 

That’s what Array Labs is building, the crazy way: the most efficient image collection system that ever existed in the history of mankind. Its mission is to create a daily-refreshed, high-res 3D map of Earth.

To be sure, Andrew’s idea for Array Labs is crazy in a lot of ways. 

For one, the Air Force Research Lab tried to use radar to make a 3D map of the Earth back in the early 2000s with a program called TechSat-21. Working with DARPA, they designed the system to demonstrate formation-flying of radar satellites that would work together and operate as a single “virtual satellite.” It was an idea ahead of its time, sadly, and the program was scrapped due to significant cost overruns. 

TechSat-21, Wikipedia

For another, earth observation isn’t necessarily the rising star of the space economy these days. There are already a bunch of EO satellites orbiting our Blue Marble. Incumbents and upstarts alike – companies like Maxar, Planet, and Blacksky – have a lot of capacity in orbit. A workable business model has been harder to come by. Planet and Blacksky are both trading near all-time lows after coming public via SPACs in 2021. 

Yahoo! Finance as of 9/22/23

When Ryan Duffy and Mo Islam wrote a piece on The Space Economy for Not Boring last year, they weren’t particularly bullish on the EO market, which was why I was surprised when Duffy called me a couple of months ago to tell me about a company I absolutely had to meet … in EO. 

He’d gotten bullish – he was doing consulting work for them, and asked to take his pay in equity instead of cash – but after five years at media startups writing newsletters about emerging tech, he told me that he was planning on doing his own thing for the foreseeable future. So I was doubly surprised when, on our next call, Duffy told me that he was giving up the consulting life early to join Array Labs full-time as Head of Commercial BD. 

After he introduced me to Andrew, I understood why. 

Array Labs has many of the characteristics I love to see in a startup. It’s ambitious and just the right kind of crazy. It’s reanimating an idea that didn’t work before but might now thanks to exponential technology curves. It’s turning something traditionally monolithic into something distributed. It’s using technological innovation to unlock business model innovation, and executing against a clear strategy while it’s still in its uncertainty period. And, if it works, it will enable new businesses to be built that aren’t possible today. 

In short, Array Labs both has its own powerful answer to the question, “Why now?” and the rare opportunity to serve as other companies’ answer. 

But taking advantage of a powerful “Why now?” means that it’s also very early. Array Labs plans to launch four demonstration satellites next year and two more orbital pathfinder missions in 2025, before sending up its first full cluster in 2026. Risks abound. 

That makes it a particularly fun time to write about the company. This essay is a time capsule from before Array Labs becomes obvious. It’s a Not Boring thesis paper of sorts, and the thesis is this: a company with a near-perfect why now (and the right team to capture it) and a strategy to match the moment can both disrupt and grow an established industry while enabling new ones. 

To lay it out, we’ll need to cover a lot of ground in one pass: 

  • Earth Observation: The 600km View

  • The Engineering Magic of SAR

  • How to Make the Most Efficient Image Collection System Ever

  • Intersecting Exponentials

  • Array Labs’ Strategy

  • Enabling New Markets

  • Risks

  • Mapping the Path to Mapping the World

Put on your spacesuit. Let’s launch into orbit to get a better view of the industry.

Earth Observation: The 600km View

A couple of weeks ago, I wrote, “Any technology that is sufficiently valuable in its ideal state will eventually reach that ideal state.” 

The earth observation market is sufficiently valuable, even in its current state. In 2022, estimates put the market size around $7 billion and growing at roughly 10% annually. This is likely a fairly significant underestimate, as the US and other governments spend billions of dollars on highly classified programs. 

But the earth observation market isn’t close to its ideal state. In its ideal state, earth observation would produce real-time, high-resolution, global coverage, easily consumable by the naked eye or machine learning models, accessible to anyone for their specific needs. 

  • AV companies would ingest point clouds that turn into real-time maps, alerting cars of changes in traffic conditions, construction projects, or closed roads. 

  • The Department of Defense would have real-time visibility into the movement of enemy resources, no matter the weather or time of day. 

  • Environmental groups would be able to track planetary indicators by the minute. 

  • Augmented Reality (AR) companies would design experiences that interact with the real world in real-time. 

  • Foundation model companies, always hungry for more data to scale performance, could train their models on 3D and HD continental-level scans of the world. 

  • If the data was good and cheap enough, entrepreneurs would certainly dream up uses that I can’t. 

In its ideal state, earth observation will expand humanity’s ability to understand, respond to, and shape events on a global scale. 

And we’re getting closer. Over the past century, we’ve made significantly more progress towards that ideal state than we did in all of the millennia of human history before it. 

Here’s the progress in mapmaking over the two millennia between the first world map in the 6th century BCE and the first modern one in 1570:

Babylonian World Map (l) and Theatrum Orbis Terrarum (r)

No disrespect to the cartographers. Stitching together a map from ground level was an arduous, and even dangerous, proposition. But the maps were of dubious accuracy, nowhere near real-time, and certainly not close enough to real-time to fight a war with. So during World War I, governments experimented with a number of ways to get close-to-real-time images from above, including pigeons, balloons, and kites:

Popular Science Magazine, Rare Historical Photos, Leslie Jones/Boston Public Library 

The war spurred government funding and public-private innovation, as wars tend to do. During World War I, militaries started putting cameras on planes, and by World War II, combatants on both sides sent planes higher and higher to get better views, and to make them harder to shoot down. 

World War II ended and the Cold War began, and both the US and the Soviet Union upped their spy game. To get a sense of the lengths the government was willing to go to for aerial intelligence on the enemy, I highly recommend listening to the Acquired episode on Lockheed Martin, starting at around the 34 minute mark, where they cover the development of the U-2 spy plane and the launch of the Corona satellite program:

The U-2 plane that Lockheed’s Skunk Works developed flew 70,000 feet above the earth, higher than airplanes had ever flown before, so high that its pilots had to wear spacesuits. Edwin Land himself developed the camera that could take pictures from that high up. 

The U-2, with its 100 foot wingspan

The U-2 flew over the Soviet Union for the first time on July 4, 1956, and for nearly four years, the US was able to keep tabs on the enemy from above. Then, on May 1, 1960, the Soviets shot one down with a surface-to-air missile. We’d need to go higher: to space.

The Soviets famously launched the first satellite into orbit, Sputnik, in 1957, kicking off the space race. One of America’s biggest concerns was that with satellites, the Soviets would be able to collect better intelligence on us than we could on them. But it was the Americans who launched the first satellite capable of photographing the Earth from orbit: Corona. 

Corona Satellite, Wikipedia

The first Corona satellite went up in August 1960, and over the course of its month-long mission, “produced greater photo coverage of the Soviet Union than all of the previous U-2 flights combined.” Earth observation from satellites had begun in earnest. 

Things were different then than they are today. The satellites housed film cameras with a then-impressive 5-foot resolution. But film? How’d we get the images back? Planes caught them in mid-air as they fell back to earth! 

Recovery of the Discoverer 14 return capsule (typical for the CORONA series), Wikipedia

Satellite imaging technology has evolved a great deal since the 1960s. We now capture images with incredibly sophisticated cameras and beam them straight down to ground stations on Earth. I could nerd out on the history for pages and pages, but I have a piece to write, so if you want to go a little deeper, Maxine Lenormand made an excellent video covering the history and basic technical details of satellites: 

For now, let’s fast-forward to today. 

Today, the largest EO company, Maxar, which was taken private by a PE firm for $6.4 billion at the end of 2022, and the leading (publicly traded) startup, Planet, both primarily operate optical imaging satellites. 

Optical imaging satellites capture photographs of the Earth’s surface using visible and sometimes near-infrared light, like a traditional camera but in space. Given that these space cameras shoot from roughly 600 kilometers above the earth, continuously, the images they’re able to produce are mind-blowing. 

Here’s an image from Planet’s Gallery showing flooding in New South Wales on November 23, 2022 in both visible (left) and near-infrared (right): 

New South Wales Flooding, Planet

And here’s one from Maxar, showing an airport (on the site, you can hover over parts of the image to magnify them, like I did over the airplane): 


We’ve come a long way from hand-drawn maps and pigeons. 

While they both use optical imaging satellites, Maxar and Planet have taken vastly different approaches to the trade-off between resolution and coverage.

This is one of the key trade-offs, maybe the key trade-off, in EO. 

Higher resolution allows for more detailed images but over a smaller area, while lower resolution can cover a larger area but with less detail. If you opt for very high resolution (VHR), you can make up for the limited coverage by putting a lot of satellites on orbit, but VHR birds don’t come cheap. If you opt for high coverage but low resolution, there’s not much you can do to go higher resolution without launching new, more expensive satellites. 

Maxar and Planet’s approaches set the edges of the trade-off space. 

Maxar’s satellites are the highest-resolution commercially available, with a ground sampling distance (GSD) of 30 centimeters. A 30 cm resolution means that one pixel in each image represents a 30 cm by 30 cm square on the ground. 

These satellites look like what you think of when you think of a satellite: small-SUV-sized craft with solar panels, a big camera, thrusters, gyroscopes, and plenty of other bells and whistles. 

Maxar WorldView-3 via eoPortal

Built mainly for the military, they aren’t really used to cover the whole earth regularly, but are tasked, meaning that a customer can say “go take a picture of this specific place” and Maxar will add the location to the target list to collect next time it’s in view.

In 2022, before Maxar went private, it reported that two-thirds of its Earth Intelligence revenue, $722 million, came from the US government. The government has budget and is willing to pay for capability, and Maxar delivers with expensive, highly capable satellites. The WorldView-4, which it launched in 2016, was produced by Lockheed Martin for a cost of $600 million. It suffered a gyroscope failure in 2019 and is no longer operational. 

Maxar has announced its six-satellite WorldView Legion constellation, whose satellites are rumored to cost in the range of $100 million each and will offer 25 centimeter resolution, and it’s scheduled to launch the first two with SpaceX on Halloween. Currently, Maxar has four satellites in orbit, as seen in this awesome real-time visual from Privateer’s Wayfinder (the fourth is somewhere near Australia in this shot): 

Maxar EO Satellites, Privateer Wayfinder

But high-res optical satellites like Maxar’s run into the laws of physics when trying to scale up to even higher resolutions. 

For VHR satellites, think about the high-res optical sensor as a tiny little soda straw. To snap those super-high-res 30 cm pics of the locations you care about (before you go whizzing by overhead and those places are out of view until tomorrow), you’re slamming and slewing that soda straw all over the place. Slamming means rapidly adjusting the satellite’s orientation to focus on a specific target, and slewing means rotating or pivoting the satellite to keep the camera aimed at the target while moving. 

Now, the problem is that in order to make smaller pixels from the same altitude, you’ve got to increase the size of the satellite (specifically, the diameter and length of the telescope), which also tends to reduce the field of view. As your pixels get smaller, your straw gets narrower, and it also gets much, much harder to move. (In fact, doubling the resolution of your satellite makes it roughly 40 times more difficult to slew.)

To perform these feats of agility, big satellites make use of larger and larger Control Moment Gyros (CMGs), exquisite, high-speed gyroscopes which are pushed and pulled against in order to rapidly slam and slew the little soda straw back and forth, from place to place, when flying over a particular area. That way, they can ideally capture everything on their tasking scorecard.

It’s no surprise just how expensive spy satellites can get. In the 2000s, as the US looked to put up two new “exquisite-class” satellites, US Senator Kit Bond said that the initial budget estimate for the two KH-11 sats was more than it cost to build the latest Nimitz-class aircraft carrier! The two satellites would end up being produced two years ahead of schedule and $2 billion under budget, for a total of $5.72 billion in today’s dollars. So, less than an aircraft carrier, and a W in terms of timeline and budget, but not cheap. 

The long and short of it is that high resolution optical imaging can get really expensive, and that the higher resolution you go, the worse coverage you get. 

Planet takes a different approach, optimizing for coverage. The company was founded in 2010 by three NASA engineers who realized that iPhone cameras had gotten so good that they could put off-the-shelf cameras on off-the-shelf cubesats (very small satellites) to create cheap optical imaging satellites. Instead of one really high-resolution $600 million satellite, they could throw hundreds of pretty low-resolution satellites into orbit, each less than $100k, collect an image of the entire planet every day at 3-5 meter resolution, and sell access to the images as a subscription.

Planet CEO Will Marshall with a Dove Satellite 

Instead of selling to just the military, Planet, which established itself as a Public Benefit Corporation, would heavily focus on serving commercial customers, such as those studying climate change, who need frequent revisits to measure change but aren’t as sensitive about resolution. Planet’s Doves aren’t taskable – you get whatever they fly over – and they make up for it with sheer coverage. 

In 2017, Planet acquired the satellite company Skybox from Alphabet just three years after Alphabet bought it for $500 million. Skybox’s SkySat sits somewhere between Maxar’s WorldView and Planet’s Dove. They’re about the size of a minifridge, cost around $10 million, and deliver 50-70 centimeter resolution. Planet has announced its next constellation, Pelican, which will replace SkySats. It’s aiming for 30 satellites at about $5 million each, which will offer 30 centimeter resolution, up to 30 captures per day, and a 30-minute revisit time. 

According to Wayfinder, 362 Doves and 40 SkySats have been launched. 

Planet Labs EO Satellites, Privateer Wayfinder

There’s a lot more detail to dive into on Maxar and Planet – I found this Tegus interview with Maxar’s CEO to be an excellent guide – but for our purposes, the point is just to show the two ends of the spectrum in optical imaging satellites: high resolution vs. high coverage. 

But optical imaging satellites aren’t the only way to snap high-resolution images from the heavens. 

In the YouTube video I shared earlier, Maxine introduces a different approach by saying, “Every once in a while, humanity invents some weird, delightfully clever piece of engineering magic, and then gives it a convoluted name. This is one of those.” 

That convoluted name is Synthetic Aperture Radar, or SAR. 

The Engineering Magic of SAR

SAR achieves high-resolution imaging by attaching a radar to a moving object, like a plane or satellite, to simulate a long aperture. The aperture effectively becomes as long as the section of the flight path during which the radar is aimed at a given target. 

If you want to understand why SAR is important, and what makes Array tick, this is the key formula you need to grok: 

Image quality = Aperture Size / Wavelength

Remember when I was talking about higher resolution spy satellites needing to be larger? This is the formula that explains it. If you’ve seen the lenses that professional photographers use, you’ve seen that larger diameter lenses take better photos (or at least, higher resolution photos). 

Wider Apertures Make Subjects More Distinct in Images

What you might not know is that the same formula works for every type of imaging.

That extreme-UV lithography for etching those silicon chips? That’s smaller wavelengths making things better. Radar imaging is also subject to this formula. 
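To make that concrete, here’s a back-of-envelope sketch of the diffraction limit behind the formula. The numbers are my own illustrative assumptions (a ~1.1 m telescope, green light, X-band radar, a 600 km orbit), not any particular satellite’s specs:

```python
# Rough diffraction-limit sketch: smallest resolvable ground feature
# ~ (wavelength / aperture) * altitude. Illustrative numbers only.

def ground_resolution_m(wavelength_m: float, aperture_m: float, altitude_m: float) -> float:
    return (wavelength_m / aperture_m) * altitude_m

ALTITUDE_M = 600_000  # ~600 km orbit, per the article

# Optical: green light (~550 nm) through an assumed 1.1 m telescope
optical = ground_resolution_m(550e-9, 1.1, ALTITUDE_M)

# Radar: X-band (~3 cm wavelength) from the same 1.1 m physical antenna
radar = ground_resolution_m(0.03, 1.1, ALTITUDE_M)

print(f"Optical: ~{optical:.2f} m per pixel")        # tens of centimeters
print(f"Radar:   ~{radar / 1000:.0f} km per pixel")  # kilometers!
```

Same-sized aperture, ~10,000x longer waves, ~10,000x coarser pixels: that’s the hole radar has to dig itself out of.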

Radar has a number of advantages over optical imaging: 

  • All-Weather Capability: radar penetrates clouds, fog, and rain

  • Day and Night Visibility: radar systems can operate in complete darkness

  • Penetration: radar can see under tree cover and even below soil or shallow water

  • Different Information: radar can pick up roughness and surface material

  • Coherent Integration: radar pulses can be added together for increased power and resolution

But it also comes with its own challenges, chief among them that radio waves are so long.

Optical imaging is called optical because it uses visible light, which has wavelengths of 400-700 nm. Radar uses radio waves, which have 1 mm to 1 km wavelengths, or ~10,000x as long as visible light’s. 

The Electromagnetic Spectrum

Plugging radio waves into our image quality formula, we get a denominator for radar that’s 10,000 times bigger than that of optical imaging. That leaves two choices, if you insist on using radar because those advantages are worth it to you: 

  1. Accept a 10,000x lower-resolution image

  2. Make your aperture 10,000x bigger

If you want to compete in the earth observation or aerial imaging (same thing but with planes) markets, 10,000x worse image quality isn’t going to cut it. But as we discussed earlier, making an aperture 1,000x or 10,000x larger than a camera’s means that you can’t fit it on a rocket (or a plane). Hmmmmm 🤔

Luckily, there are geniuses in this world, and in the 1950s, two sets of them – engineers at Goodyear Aircraft Corp. in the US and at Ericsson in Sweden – struck upon a similar idea: what if we move the radar antenna over the target, by flying it on a plane, fire radio wave pulses the whole time, and capture the signals as they bounce back up?

With Apologies to Any Real Radar People for this Terrible Drawing

Turns out, doing that allows you to create something that behaves like a much larger aperture synthetically, hence synthetic aperture radar. A 10 km flight path essentially gives you a 10 km aperture. SAR along a flight path gives you higher azimuth resolution, increasing the horizontal sharpness of side-by-side items in the image. Vertical sharpness, or range resolution, comes from measuring each pulse’s round-trip time to determine how far away objects are. 
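For the curious, the two resolutions fall out of standard first-order textbook approximations: range resolution from the pulse bandwidth, azimuth resolution from the synthetic aperture length. A quick sketch with illustrative values (nothing here is Array’s, or anyone’s, actual spec):

```python
# Standard first-order SAR approximations, toy numbers only.

C = 3e8  # speed of light, m/s

def range_resolution_m(bandwidth_hz: float) -> float:
    # Finer range bins come from wider pulse bandwidth: dR = c / (2B)
    return C / (2 * bandwidth_hz)

def azimuth_resolution_m(wavelength_m: float, synthetic_aperture_m: float,
                         slant_range_m: float) -> float:
    # A synthetic aperture of length L gives angular resolution ~ lambda / (2L)
    return (wavelength_m / (2 * synthetic_aperture_m)) * slant_range_m

print(range_resolution_m(600e6))                    # 0.25 m with a 600 MHz chirp
print(azimuth_resolution_m(0.03, 10_000, 600_000))  # ~0.9 m over a 10 km path
```

Note what’s missing from the azimuth formula: the physical size of the antenna. The flight path does the work, which is the whole trick.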

The US government, and particularly the defense and intelligence agencies, were very interested in a technology that could penetrate clouds and trees, and by the late 1960s, had developed SAR systems with one foot resolution. In the 1970s, the National Oceanic and Atmospheric Administration (NOAA) and NASA’s Jet Propulsion Lab (JPL) launched the first SAR satellite, Seasat, to demonstrate the feasibility of monitoring the ocean by satellite. 

Seasat, NASA

Until very recently, space-based SAR was the domain of wealthy governments who could afford the estimated $500 million to $1 billion price tag per satellite that the National Reconnaissance Office (NRO) ponied up for its Lacrosse Satellites. These things were expensive in large part because they were massive. Look how small they make these two humans look:

Tiny Humans Work on Lacrosse Satellite, Spaceflight101

In recent years, thanks to decreasing launch costs and technological advances, startups have gotten into the game. The three most impressive include Umbra, Iceye, and Capella. Here’s a quick overview of their capabilities: 

# on Orbit per Wayfinder

In August, Umbra dropped the highest-resolution commercial satellite image ever at 16 cm resolution. The only higher-resolution image the public has seen is this (likely classified*) 10 cm resolution image former President Trump tweeted in 2019 of the Semnan Launch Site One in Iran. 

Donald J. Trump on Twitter, *Definitely classified…the photo revealed a lot about the precise resolution capabilities of US spy satellites. While foreign intelligence agencies likely had a pretty solid, reasonably accurate guesstimate as to just how good the Americans’ eyes in the starry skies were, they were suddenly gifted a photo that revealed the imaging capability of an entire series of NRO birds. 

At the time, the image revealed that US spy satellites were 3x better than the best commercially available imagery. Now, that delta is shrinking. Another Y Combinator space startup, Albedo, aims to capture 10 cm optical imagery by flying its satellites very low. 

OK. Let’s return from the espionage-laced optical detour back to SAR.

One thing to note about the 2D SAR satellites is that they have two main modes: spotlight and stripmap. Spotlight mode allows them to focus radar beams on specific areas of interest at higher resolutions, while stripmap mode allows them to monitor larger areas at lower resolutions (think about it like the slew/slam soda straw dichotomy mentioned earlier). 

Commercial SAR satellites can switch between spotlight and stripmap—and turn the dial between resolution and coverage. It’s a step closer to eliminating the trade-off between resolution and coverage. 

But if we’re looking for a 3D map of the whole Earth, we need both the resolution and coverage turned up at the same time, and we need the images to be 3D. 

Turns out, you can make a 3D image from 2D satellite data. How? 

First, you take a stack of satellite images of the same location, but taken from different angles. Then, you perform a process called 3D reconstruction, where you analyze how different groups of pixels move relative to each other in each of the images. Combining that information with the knowledge of where the images were taken allows the 3D information to be extracted. Up to a certain point, the more images used, the higher the quality of the resulting 3D reconstruction. 
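The geometric idea underneath is the same parallax that powers stereo vision: features close to the camera shift more between viewpoints than features far away. A minimal pinhole-stereo sketch (toy numbers, nothing Array-specific):

```python
# Toy stereo parallax: depth = baseline * focal_length / disparity.
# Multi-view 3D reconstruction generalizes this to many images and angles.

def depth_from_disparity(baseline_m: float, focal_px: float, disparity_px: float) -> float:
    return baseline_m * focal_px / disparity_px

# Two cameras 0.5 m apart with a 1000-pixel focal length (all assumed):
for disparity_px in (50, 10, 2):
    depth = depth_from_disparity(0.5, 1000, disparity_px)
    print(f"{disparity_px} px shift -> {depth:.0f} m away")
```

Big shifts mean nearby objects; tiny shifts mean distant ones. Do that for millions of pixels across a stack of images and you get a 3D model.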

Unfortunately, it’s pretty hard to obtain a better 3D reconstruction than the 2D data you started with, and you often need to wait for a long time to gather all of the different angles you need in order to perform the 3D reconstruction. You run into the resolution vs. coverage trade-off in another dimension. 

Collection vs. Resolution, Courtesy of Array Labs

Is there a way to improve 3D image quality and get all of the data we need in a single pass? 

That’s what Array is cooking up. 

How to Make the Most Efficient Image Collection System Ever

Andrew Peterson is an aerospace engineer who’s worked in radar for a long time, first at General Atomics, where he worked on “Design and implementation of advanced SAR image formation and exploitation techniques, including multistatic and high-framerate (video) SAR” and “Guidance, navigation, and control systems for hypersonic railgun-launched guided projectiles and anti-ICBM laser weapons systems,” and then at Moog Space and Defense Group, where he “Worked on beam control systems for the world’s largest telescope (LSST).” 

(I, for comparison, write a newsletter.) 

When we first spoke in July, the first thing he told me was: “I was happy being a technical guy forever. I never thought I’d start a company. I still don’t introduce myself as an entrepreneur. But then I had an idea that was way too good.”

Right around the time he had the way-too-good idea, a coworker gifted Andrew two books: one was about cows, and one was Peter Thiel’s Zero to One: Notes on Startups, or How to Build the Future. Fortunately, he mooooooved right past the cow book and dug into Zero to One.

In the book, Thiel presents his now-famous question: “What important truth do very few people agree with you on?” 

Most of us get stumped by this one, think of something controversial that a lot of people do agree with you on, or resort to hyperbole to outdo others who only kind of agree. Andrew, after years in aerospace and radar, had a very specific, technical answer. That answer was the intro to this piece: you can build the world’s most efficient 3D image collection system by putting a cluster of simple radar cubesats into a giant ring.

He believed that swarms of dozens of satellites (just how many, I can’t say, because I’ve been sworn to secrecy) working together, orchestrated by software, could outperform the monolithic satellites that the rest of the industry was working on.

“If you think about scaling, as you continue to scale up an antenna to be larger and larger and larger, it gets exponentially less efficient. You cannot scale these things,” Andrew says. “But what if you recast the problem as scaling distributed systems?”

I’ve been listening to the latest Acquired episode on Nvidia & AI, so I’ll go ahead and make the analogy: if traditional EO satellites are CPUs, what if you built the GPUs? 

In the world of computing, CPUs hit a wall in the early 2000s when it came to clock speed and power consumption. By 2005, Dennard scaling, which allowed for performance gains without increased power consumption as transistors got smaller, broke down. Clock speed, the rate at which a CPU or other digital component executes instructions, measured in hertz (or more frequently megahertz or gigahertz), leveled off. Pundits proclaimed Moore’s Law dead, as they’re wont to do. Intel’s answer was an elegant one: dual-core processors. Doubling the cores required interconnects to allow them to communicate and coordinate tasks, but it worked. Moore’s Law lived on. 

Modern high-end desktop and server CPUs can have up to 64 cores, but there are diminishing returns and CPUs with more cores eat a lot of power. 

GPUs, on the other hand, were designed from the ground up to be parallelized. With thousands of smaller, more specialized cores, they’re built explicitly to handle processing lots and lots of data at once, much more efficiently than CPUs. While Jensen Huang and Nvidia created GPUs to handle graphics, it turns out they’re perfect for training AI models. If Moore’s Law ever dies, Huang’s Law will be there to take the baton. 

Array Labs is taking a similar approach to satellites, designing a system of satellites from the ground up to be parallelized. 

Traditional EO satellites are like the CPUs: powerful but limited in how much they can scale. They’re big, expensive, and can only be in one place at one time. You can send more of them up, but like adding CPU cores, there are diminishing returns if you’re looking to build a 3D map. 

Array Labs, on the other hand, would deploy clusters of smaller, cheaper satellites that can work together in tandem, much like the cores in a GPU. 

This configuration is called multistatic SAR – instead of a single antenna sending and receiving signals, multiple antennas send signals down and each receives the signals from all of the other antennas in the cluster. So if you have a dozen satellites in a cluster, each transmits a pulse and receives all twelve, for a total of 144 image pairs of a particular scene. 
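The pair-counting math is simple enough to sketch (a toy calculation; the dozen-satellite example is illustrative, not Array’s actual cluster size):

```python
# With N satellites all transmitting and all receiving, every
# transmitter-receiver combination yields its own look at the scene.

def image_pairs(n_satellites: int) -> int:
    return n_satellites ** 2

for n in (1, 2, 12):
    print(f"{n} satellite(s) -> {image_pairs(n)} image pair(s)")
# Monostatic: 1, bistatic: 4, a dozen-satellite cluster: 144
```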

The upshot is that, once you stitch it all together, you can create better images, with both better resolution and coverage.

As an example, in April, Capella ran a test by turning two of their satellites into a bistatic SAR (same idea, but with just two SAR satellites instead of dozens), and the difference in image quality is stark: 

The hotel is much more clearly visible in the bistatic image (right); Capella Space

Capella’s test was basically a higher-resolution spotlight mode shot, but Germany’s TanDEM-X has been flying bistatic SAR since 2010 for scientific research, and this artist’s rendition shows another major benefit of even bistatic SAR: it can cover wide swaths in each pass (coverage).

Germany’s TanDEM-X Bistatic SAR System

Andrew had a bigger idea. Instead of spending €165 million for two satellites, like Germany and Airbus did on TanDEM-X, what if he could spend single-digit millions for dozens of cubesats with high-quality, miniaturized components? Array’s “swarm” approach would enable both higher resolution – up to 10 cm – and better coverage – swaths over 100 km wide in each pass.

All of that was the theory, at least. And then Andrew ran the numbers, and on paper, it worked. 

  • If you double the number of satellites in a cluster, you double the aperture size (and therefore the image quality) and the amount of antenna area you have in space. 

  • That halves the transmit problem: you need half as much power to transmit down to the ground. 

  • Plus, you have twice as many solar panels up there collecting power. 

The upshot is, when you double the number of satellites, you quadruple the daily collection rate – the amount of data that can be captured or transmitted – for just double the cost.

Instead of working against you, scaling works for you. The more satellites you put in the ring, the better it performs (once you write a lot of complex software and get them to work together).
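Here’s that scaling argument as a toy sketch (idealized, of course – the real-world gains depend on orbits, software, and downlink):

```python
# Idealized cluster scaling per the argument above: cost grows
# linearly with satellite count, while daily collection rate grows
# with the square (2x aperture x 2x power -> 4x data).

def relative_performance(scale: int) -> tuple[int, int]:
    cost = scale                  # 2x satellites -> 2x cost
    collection_rate = scale ** 2  # 2x satellites -> 4x daily collection
    return cost, collection_rate

for s in (1, 2, 4):
    cost, rate = relative_performance(s)
    print(f"{s}x satellites: {cost}x cost, {rate}x daily collection rate")
```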

Adding a height dimension, say with a resolution of 5 cm, multiplies the number of pixels Array can collect on any object. The more pixels you have, the better the computer inference (aka, AI) gets. So in a funny coincidence, Array’s data might be the key to delivering the high-fidelity 3D point cloud data needed to train AI models on the earth, just like the GPU was key to training large AI models in the first place. 

On paper, 3D SAR is superior to both optical imaging and 2D SAR on every dimension that matters for building a high-refresh-rate 3D map of Earth. Andrew had to put the theory to the test.

So reluctantly, Andrew let his crazy good idea beat his conservative, technical tendencies, and in the fall of 2021, teamed up with Isaac Robledo to found Array Labs.

Isaac (l) and Andrew (r) outside of Y Combinator

A few months in, in the summer of 2022, Array was accepted into Y Combinator’s S22 batch. The vaunted accelerator has a simple motto: “Make something people want.” 

That was another beautiful aspect of Andrew’s crazy good idea: he knew customers would want it, if he could pull it off. And he was pretty confident he could pull it off; it’s just physics and engineering. 

The US government alone happily spends billions of dollars on related capabilities, and Array should be able to deliver those capabilities much more cheaply. On paper, it could deliver a combination of 10x better persistence and 10x better quality than optical imaging, at super low costs over very large areas, refreshed frequently at scale. 

Privacy Pause: It’s important to note that while Array’s data quality should be excellent, it won’t be facial-recognition excellent. It won’t be able to identify people or license plate numbers, and it won’t see colors. So its existence shouldn’t pose concerns about individuals’ privacy.  

Not only would Array make something that the government would want, it could flip the tasking model on its head and sell anyone a subscription to the same dataset (maybe without access to the highest-resolution images over Area 51). On paper. 

But they still had to build the thing, and that meant putting together a team. 

Array is headquartered, peculiarly enough, in Palo Alto, the heart of Silicon Valley. 

Why is that peculiar? Because, while the Silicon Valley ethos may dominate mindshare of New Space companies, relatively few are actually based there. SpaceX, Hadrian, and Varda, for example, are all in or near LA. 

Remember the Corona program I mentioned earlier? Well, turns out in the late ‘50s to ‘60s, the CORONA birds were covertly produced for the CIA in the Palo Alto plant of the Hiller Helicopter Corporation. In that sense, Array Labs’ Palo Alto roots are a full-circle moment for the Valley. The US government arguably seeded Silicon Valley, buying from companies that built semiconductors for aerospace and defense programs. As Silicon Valley came of age, though, it moved away from the government. Now, we’re so back. 

Array set up shop in the Valley not for that nostalgia factor, though, nor to be near the VC dollars on Sand Hill Road, but so that it could recruit 99.99th percentile software and hardware engineers, the kind of bare metal bandits with experience at the Qualcomms, Amazons, and Metas of the world. (If that sounds like you, Array is hiring.)

The brains and hands Array has … clustered … in its Palo Alto HQ have not only worked on railguns, or helped develop a 26-foot fully robotic restaurant and the most powerful Earth-observing satellite ever, they’ve also led teams designing silicon, shipping semiconductors, laying the groundwork for mass-market augmented reality contact lenses and headsets, and more. They hold patents and have developed IP for technology that can be found in more than 100 million modems worldwide. That’s the kind of volume production and scale that aerospace engineers could only dream of. 

Space is sexy, sure, but Array lives and dies by systems and software that are already ubiquitous here on Earth. Satellites, of course, but also semiconductors, wireless communications, digital signal processing, and RF (radiofrequency)/analog design. The team is a collection of experts in each piece of technology on which Array relies. 

Economies of scale exist for the billions of components that are miniaturized and integrated into smartphones every year. Array Labs is riding the miniaturization wave like others who have come before it, but also bandwagoning on the proliferation of 5G. Technologies like multiple-input and multiple-output RF links, developed for next-gen wireless networks here on Earth, can also be put to work in distributed space swarms. 

As Varda CEO Will Bruey put it, “Each piece of what Varda does has been proven to work; what we’re building is only novel in the aggregate.” The same could be said for Array. 

It’s weaving together a number of curves that have hit the right place to build the most efficient image collection system ever, at the right time.

Intersecting Exponentials

The Array cluster is going to look something like this when it goes up in 2026, give or take a few cubesats: 

Array Labs Cluster

That doesn’t look like any of the monolithic satellites I’ve shown you so far. It’s a distributed swarm of cubesats operating as one very large, very powerful, and still very cheap satellite; a fresh take on a crowded industry. 

In I, Exponential, I wrote that, “Many of the best startup ideas can be found in previously overhyped industries. Ideas that were once too early eventually become well-timed.” That applies here. 

Earth observation is one of the few pieces of the space economy that isn’t practically as empty as space itself. Governments, defense primes, incumbents, and startups alike dot the landscape. Some have been successful, more have been expensive science projects. 

So the big question for Array is: why now? 

In one of 19 frameworks in a Google Doc titled Frameworks v0.2, Pace Capital’s Chris Paik explains why it’s important for startups to have a compelling answer to the question, “Why Now?”: 

Venture capital is particularly well suited to finance companies that are capitalizing on ‘dam-breaking’ moments—sudden changes in technology and regulation (and to a lesser extent, capital markets and societal shifts).

I’ve come across very few companies, maybe zero, with a more comprehensive “Why now?” than Array. 

Array Labs’ “Why Now?” is a mix of commercial, geopolitical, and technological factors. 

On the commercial side, there has never been greater demand for 3D point clouds of the Earth. For one thing, with the rise of AI, we now have models capable of doing something with all of that data, and there is so much economic activity in the space that both the models’ capabilities and their thirst for fresh data seem likely to keep growing. 

As one specific application, Autonomous Vehicle (AV) companies are actually making money now by offering robotaxis in select cities. In order to expand to new markets, reliably and affordably, they need better and cheaper 3D maps.

On the geopolitical side, the war in Ukraine and growing tensions with China mean that the US government has an even greater appetite for real-time, high-resolution images of the earth than normal. Last year, it spent roughly $722 million with Maxar alone. It also represents 50% of Umbra’s revenue, according to TechCrunch, and its allies represent an additional 25%. The government is a logical early customer, and non-dilutive funder, of Array’s product. 

There are no guarantees in startups, but if Array Labs is able to deliver on the product it plans to build, the demand will be there. Customers like the combination of cheaper and better. 

So can they build it? Array is taking the non-differentiating components – the cubesat, the launch – off the shelf. It’s building bespoke radar, antenna, and algorithms – the things that will make or break the company – with its “Blue Team” of experts who have built similar things before. 

On both the buy and build side, there are strong technological tailwinds, including: 

  1. Cheap Launch Costs

  2. Cheap, Performant Cubesats

  3. Modern Telecom

  4. Cheaper, More Compact Memory Systems

  5. ESPA Rings and Flatstacking

Cheap Launch Costs. It would not be a space essay if I didn’t whip out the chart. 

The cost of sending a kilogram to LEO, where Array will put its clusters, has fallen by two orders of magnitude since the 1970s. Theoretically, SpaceX can send each kilogram up for $1,500, but since it’s the only game in town, it charges a healthy margin. 

The fun thing is, you can actually plug in some Array numbers – first cluster’s estimated dry mass (150kg), orbital destination (sun synchronous orbit (SSO) / 550 km above Earth), and launch date (Q4 2025) – on SpaceX’s Transporter rideshare calculator, and get an estimated price: $830k, or $5,533/kg.

SpaceX Rideshare Program
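A quick back-of-the-envelope check on those numbers (purely illustrative; the real quote comes from SpaceX’s rideshare calculator, which prices by mass and orbit):

```python
# Sanity check on the SpaceX Transporter rideshare quote cited above.
# Both inputs are the essay's estimates, not official figures.
dry_mass_kg = 150           # estimated dry mass of Array's first cluster
quoted_price_usd = 830_000  # estimate from SpaceX's rideshare calculator

price_per_kg = quoted_price_usd / dry_mass_kg
print(f"${price_per_kg:,.0f}/kg")  # → $5,533/kg
```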

Cheap, Performant Cubesats. Thanks to the burgeoning space economy, you can now take all but your most differentiated parts off the shelf. 

Cubesats, first launched by Cal Poly and Stanford in 2003, have come down in price by an order of magnitude over the past couple of decades, from ~$1-2 million to ~$100k today. Dozens of them now cost the same as, or less than, one larger, custom-built satellite. 

Thanks to Moore’s Law and the miniaturization of electronic components more broadly, these small cubesats can still pack a punch. Think “the iPhone in your pocket is more powerful than the computers we used to put a man on the Moon,” but for satellites. 

Modern Telecom. While the last two factors – launch and cubesats – benefit many companies, the learning curves in cellular and wireless telecommunications are more specifically beneficial to Array.

Array’s satellite is essentially an off-the-shelf cubesat with a 5G base station built in to send and receive the radar pulses. Thanks to the rapid buildout of advanced RF technologies across the globe, and the billions of dollars companies are pouring into it, telecom is getting exponentially cheaper. 

Cheaper, More Compact Memory Systems. A constantly-updating 3D point cloud of the Earth captures a ton of data, which needs to be stored and transmitted. 

In an interview with TechCrunch, Andrew pointed out: “The system that they [TechSat-21] had come up with was ten spinning hard drives that are all RAIDed together. It weighed maybe 20 pounds, took 150 watts [of power]. Now, something the size of my thumbnail has 100 times more performance and 100 times less cost.” 

ESPA Rings and Flatstacking. Smaller, cheaper 5G base stations and memory systems mean that Array can fit better capabilities than a bus-sized, hundred-million dollar SAR satellite from a couple of decades ago on a cubesat. 

If you look closely at the satellites in the video at the beginning of this section, they look a bit like SpaceX’s Starlink satellites. That’s not a coincidence. 

Render of Starlink (left) and Array Labs satellite (right) 

SpaceX wants to send a lot of satellites into orbit – there are over 4,500 orbiting Earth today, with a goal of 42,000 over the next few years. To launch them cost effectively, SpaceX needs to be able to send a lot of them up at once. Part of that equation is building bigger rockets – like Starship – and part is fitting more satellites into the same space. SpaceX designed Starlinks to stack as tightly as possible inside the nose cone of the Falcon rocket. It can fit 60.


Array Labs designed its own satellites such that a whole cluster fits on a single ESPA ring port, which was created by Andrew’s alma mater, Moog, and is used by SpaceX (and others) to launch secondary payloads and deploy them into orbit on its Transporter rideshare missions. 

There are other advances Array is taking advantage of: DACs, ADCs, and FPGAs (RF chips) are getting faster and cheaper, more ground stations are popping up worldwide, and elastic compute and AI inference are moving at warp-speed, just to name a few. As demands from other industries bring components down the cost curve, and up the performance curve, Array can benefit. It’s how they’ve designed the system, drawing inspiration from SpaceX and others. 

It’s not a technology curve, per se, but SpaceX’s example is another contributor to Array’s “Why Now?”.

In a 2021 interview, Elon Musk spelled out his five principles for design and manufacturing:

  1. Make requirements less dumb

  2. Delete part of the process 

  3. Simplify or optimize

  4. Accelerate cycle time

  5. Automate (but not too early)  

These principles show up everywhere at Array. 

Take Array’s formation flying. How do you get dozens of satellites to form, and stay in, a ring as they hurtle around the globe at 4 miles per second? Instead of adding thrusters – which would make the satellites more expensive, heavier, more complex, and less stackable – Array uses a novel ‘aerodynamic surfing’ approach that lets them fly through space without a propulsion system. It was made possible in part by founding engineer and employee number two, Max Perham, a former Maxar WorldView engineer and cofounder of Mezli. Make requirements less dumb. Simplify. 

Or the rigid antennas Array uses. They don’t deploy, they don’t look like radar antennas, and they’re not disc-shaped. They’re just the most straightforward path to maximum antenna area at minimum cost.

Rendering of Array Labs’ Satellites; Formation not to scale

If you wanted to build something with similar capabilities to Array’s cluster a decade or two ago, it would have been possible, just prohibitively complex, expensive, and slow, as TechSat-21 demonstrated. The technological curves Array is operating in the middle of make it economically feasible now. The company’s approach is all about building for scale, building cost-efficiently, and building fast.

Speed shows up everywhere at Array. In April, Array got the “dumb metal,” the structural components without the active, intelligent, or electronic functions, for its first test rig. Within a few months, the team had turned it into a fully functioning multistatic radar test range, building a robotic, rail-mounted SAR in the process, and used it to develop 3D imagery. 

They went from the top-left rendering to the bottom-right reality in roughly 100 days:

CAD file → rig in road-ready contracted configuration in 100 Days

The successful test buys the team the confidence that the algorithms on the satellite will work. That’s critical, because while we’ve focused on the hardware, what Array is building is as much a software challenge as anything. SAR satellites have been around for a long time. The hard part is getting them to work together, to behave like one 50 km-wide satellite, and to turn hundreds of image pairs into clear pictures and 3D point clouds customers can use. That’s the magic, and the algorithms are the spell. 

Andrew explained: “The test really burns down a lot of the technical risk.” With the test rig validation and algorithm experiments in hand, Array is completing its Preliminary Design Review (PDR) this Friday, a key step on the path to launch, less than one year after closing its seed round in November 2022. 

Array Labs is moving so fast because Andrew understands the need to get the best product to market and change the industry’s business model before everyone else realizes that his idea isn’t as crazy as it sounds. 

Array Labs’ Strategy 

My second conversation with Andrew took place on July 20th, two days after I published How to Dig a Moat. When we hopped on the phone, it was the first thing he wanted to discuss. 

Companies can be in the right place at the right time, ride the highest-sloped technology curves, with the right team, get to market first, and then watch all of their margin get competed away. Hardware innovations are notoriously difficult to protect from the erosive forces of competition. 

Andrew realized that the company had a head-start, and an uncertainty window to play in before competitors woke up to the beauty of Array’s approach. Which moats could the team start digging? 

In the very near-term, there are three. 

First, he pointed out, there aren’t many radar people with SpaceX-like training. There’s a frenzy for SpaceX-trained talent, but RF engineers are an undervalued resource in the aerospace world. Array’s first order of business was to hire as many of the best RF engineers it could find and get them working like a well-oiled, fast-moving startup. 

“Our hiring strategy is simple,” Andrew said. “Find uncommonly talented people and pay them uncommonly well.” 

Locking in the best talent before other companies catch on could extend the lead a bit: if the best people were at Array, it would be hard for anyone else to match their speed and skill. But that’s not a durable moat. 

Second, they could offer a product that’s both 100x cheaper than airplanes and 100x better quality than 3D satellites. Aerial data at satellite prices. That’s what they plan to do, and early revenue will fuel growth, speed up deployment, and extend the lead, but pricing isn’t a durable advantage. 

Third, they could counter-position against the incumbents and startups in the space. If companies like Maxar, Umbra, and Iceye built their teams, business models, and products based on the idea that they needed to pack more performance into each individual satellite, Array could design simple satellites that were cheaper, easier, and faster to manufacture. To match Array, competitors would need to scrap their work or try to string together clusters of more expensive satellites. But counter-positioning, too, only buys you more time. 

The master plan, though, is to turn the team’s technical innovation into a business model innovation.

The EO market, Andrew explained, suffers from the “tyranny of tasking.” 

“Imagine you have a mine where you’re digging minerals, and there’s a forest around it. Two groups are interested in looking at that region: the mine’s owners/investors and an ESG company that wants to understand its environmental impact — which has the ability to pay more?” 

The answer is typically the mine’s owners and investors, and so they’ll pay more to “task” the satellite to take higher-quality images. The EO company will sell the ESG company older, shittier images for a lower price. 

Andrew thinks this is stupid. The image is there and the marginal cost to serve it is approximately zero. He thinks the business model needs to change. 

Because Array Labs will be able to take high-res, near-real-time images over very wide swaths, it doesn’t need to be tasked. Its first cluster will provide coverage of the 5% of Earth that houses 95% of people, hoovering up high-quality images the whole time—and refreshing every 10 days. Then, anyone who wants the data can pay a subscription to access it. Subscription models aren’t new – Planet and others offer subscriptions – but subscriptions on aerial-quality data at satellite prices are. 

Because of Array’s low costs, it’s feasible that one large customer paying much less than they do to a competitor today could cover all of Array’s COGS on a cluster for a year. But since Array essentially captures images of everything (there’s no tasking) and downlinks the image data to an AWS S3 bucket, it can sell access to any number of companies at de minimis marginal cost. 

Very roughly, the second customer would mean 50% gross margins, the third would mean 66.67% gross margins. By the tenth customer, you’re at 90% gross margins… on a business that builds radar satellites. It looks a lot more like a software business. 

For Illustrative Purposes Only
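The arithmetic behind those margin numbers is simple: if one customer’s subscription covers a cluster’s entire annual COGS, every additional customer is nearly pure margin. A minimal sketch, with made-up unit prices for illustration only:

```python
# Illustrative gross margin model (not Array's actual pricing): assume one
# cluster whose annual COGS equals a single customer's subscription price,
# and ~zero marginal cost to serve each additional customer.
def gross_margin(num_customers: int, price: float = 1.0, cogs: float = 1.0) -> float:
    """Gross margin with `num_customers` identical subscribers on one cluster."""
    revenue = num_customers * price
    return (revenue - cogs) / revenue

for n in (2, 3, 10):
    print(f"{n} customers: {gross_margin(n):.2%} gross margin")
# 2 customers: 50.00% … 3 customers: 66.67% … 10 customers: 90.00%
```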

But even strong gross margins aren’t a moat. If anything, they’re a bright neon sign inviting competitors to come take some away. The first moat Array is looking to dig is scale economies. 

Strong margins on healthy revenue should allow Array to deploy more clusters into orbit more quickly. As it launches more clusters, it will be able to offer an increasingly great product (more frequent refreshes, more coverage area). Almost like Netflix, as it acquires more subscription customers, it will be able to amortize its costs of content – high-quality images – over more subscribers. This makes the subscription more valuable to each customer. 

So actually, the chart of gross margins might look something like this, with little dips as Array sends new clusters up. Each time, the dip is a little smaller, as the cost of each new cluster is amortized over more subscribers. 

Source? I made it up; For illustrative purposes only
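That dip-and-recover dynamic can be sketched with a toy model. Every number here is invented purely for illustration, in the same made-it-up spirit as the chart:

```python
# Illustrative sketch of shrinking margin dips: each new cluster adds a fixed
# chunk of annual COGS, but it's amortized over an ever-larger subscriber
# base, so each dip in gross margin is smaller than the last.
CLUSTER_COGS = 10.0   # annual COGS per cluster (arbitrary units)
PRICE = 2.0           # annual subscription price per customer

# (customers this period, did a new cluster launch this period?)
periods = [(10, True), (20, False), (25, True), (50, False), (60, True), (100, False)]

clusters = 0
margins = []
for customers, launched in periods:
    if launched:
        clusters += 1
    revenue = customers * PRICE
    margins.append((revenue - clusters * CLUSTER_COGS) / revenue)

print([f"{m:.0%}" for m in margins])
# → ['50%', '75%', '60%', '80%', '75%', '85%']  (dips at launches, shrinking)
```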

It should cost Array less to serve a better product than any new entrant. Scale economies. 

But the way Array sees it, there are actually two types of potential customers:

  1. Those who can mainline a constantly-updating stream of 3D point cloud data and do something with it themselves.

  2. Those who can’t. 

AV companies, the Department of Defense, and large tech companies like Google, Meta, and Nvidia have the tools and the teams to plug into an API, pull the data, and work their own magic on it. They can pay Array directly. 

Others, like real estate developers, insurance companies, environmental groups, researchers, small AR developers, and more, should be able to find a use for the data, but they all have industry-specific needs, requirements, and sales cycles. 

For this group, the company envisions operating more like a platform on top of which new companies might form to serve specific markets. End users in these markets might need algorithms to help make sense of the data, vertical-specific SaaS that incorporates it, or self-serve analytics tools which third-party developers can provide. 

I might, as an example, form 3D Real Estate Data, Inc., pay Array for a subscription to the pixels, and use it to create a real estate-specific data product that I can sell to developers. At the end of the sale, I pay Array a 10% cut. That way, Array’s pixels – which are non-rivalrous – can impact dozens of industries without Array needing to Frankenstein its product and build out a huge sales force to serve those industries. 

If it pulls that off, it has the potential to capture two powers, depending on the choices it makes. 

The less likely is that it could provide data and resale licenses to third-party developers who build inside of an Array ecosystem, like an App Store for 3D Maps. Developers would build on Array, and customers would come to Array to find the products that fit their needs. In that case, it could be protected by something that looks a lot like platform network effects. 

The more likely is that Array sells the data and licenses to third-party developers who are free to build whatever type of product they want and sell it wherever and however they can. Array would develop tools to make it as easy as possible for developers to build products with its data and would continue to grow the data set and increase the refresh rate as it adds more clusters. In this case, the switching costs to companies who’ve chosen to build with Array would be high.

Array plans to go to market with the self-serve model – serving AV, DoD, FAANG, and the like – at launch, and build out the platform as it grows. In both cases, it may be the “Why Now?” for a wide range of existing and yet-to-be-imagined companies. 

Enabling New Markets

In Paik’s Frameworks, he writes of “Being the ‘Why Now?’ for Other Companies”: 

If a company can deliver, mostly through technological innovation, an answer to the question, “Why Now?” for other companies, it will be a venture-scale outcome, assuming proper business model—product fit. The challenging part here is that the vast majority of the customer base for the innovating company does not yet exist at the time of founding.

To me, this is one of the most compelling aspects of Array’s business: it has the potential to be the “Why Now?” for other companies without the market risk.

Ideally, at some point, a number of previously-impossible companies will come to life thanks to the existence of a real-time 3D map of the world. Maybe they’ll be augmented reality companies that design experiences based on live conditions. Maybe they’ll be gaming companies that update their maps in real-time in response to the real-world. Maybe they’ll be generative AI companies whose image quality dramatically improves by training on so much Earth image data. More likely, they’ll be things I can’t imagine. 

To bank on those companies’ existence and success, however, would be a very risky bet. 

In the meantime, Array has an established commercial ecosystem to sell into. 

  • Airborne LiDAR is an industry that already generates $1B+ in annual recurring revenue. Airborne LiDAR’s TAM is gated by accessibility and survivability (you can’t, for example, fly your light-ranging planes over Ukraine). 

  • 3D from space is a thing, too. Back in 2020, Maxar acquired Vricon for $140M (at a valuation of $300M). At the time, Vricon (a joint venture between Maxar and SAAB) was making $30M in ARR by algorithmically processing and upscaling 2D satellite imagery and turning it into 3D data. Last year, Maxar acquired another 3D reconstruction company, Wovenware, which focuses on using AI for 3D creation.

Airborne LiDAR is great for its accuracy, while satellite 2D → 3D, AI-derived reconstruction tools are great for cost. The bottleneck for 2D → 3D is that the data quality isn’t as good. According to Array, it’s 100X worse than what the clusters will collect. A company like Vricon has a significant market, in and of itself, and they were only processing someone else’s pixels. 

Combining the best of both worlds is the kicker. As Andrew put it, “What we’ve found in talking to customers is that if you can provide the quality of aircraft imagery at the cost and access of a satellite system, something that’s a drop-in replacement for what customers are already using, then that’s incredibly compelling.” 

Array is designing its orbital system (and processing algorithms) to match QL0 LiDAR (quality-level 0, the highest-resolution 3D LiDAR data commonly collected), which means the space-collected 3D data can be immediately piped into applications, such as insurance analytics, infrastructure monitoring, construction management, and self-driving vehicles, which all speak the language of point clouds. These industries already exist; Array’s data is just an improvement. 

Plus, the US government and our allies are always hungry for better information about the state of the world. As Maxar and Umbra have shown, you can build great businesses on that revenue alone. But Array isn’t a “Why Now?” for the government, either. I think it’s safe to say the US government will exist with or without Array. 

No, the most interesting customer segment for Array from this perspective is one that features an unusual combination: ungodly sums of money and uncertainty about its near-term economic viability. I’m talking about Autonomous Vehicles (AV, or self-driving cars). 

AVs are so tantalizing because their own ideal state is so obviously valuable. Cars are a roughly $3 trillion market globally. 46,000 people died in car crashes in the US last year alone. Trucking in the US is a nearly $900-billion annual industry. There are a bunch of big numbers I could keep throwing at you, but the point is this: self-driving cars and trucks have the potential to earn trillions of dollars and save tens of thousands of lives annually.

So it’s not a surprise that investors and the companies they’ve backed have spent something like $100 billion to make self-driving a reality. And it’s finally here, sort of. Cruise, the most widely available self-driving service, is now operating in three markets, and testing in 15 cities: 

The tweet provides a hint into why Cruise is only in 15 cities: to enter new ones, it needs fresh 3D maps, which it typically builds through manual data collection, or getting human drivers to drive around every road multiple times while LiDAR scanners on the top of the car collect data on their surroundings. 


And they don’t just need to do it when they enter the markets. Maps get stale and need to be refreshed very frequently, which means taking the cars out of revenue-generating service and paying human drivers to cruise around the city again and again. With LiDAR-equipped cars costing $1.5 million, this costs something like $20-30 per mile mapped per car. (Unfortunately, cars can’t map while they’re in self-driving mode; they need the sensors and processing power for the task-at-hand.) 

Self-driving car companies need LiDAR QL0 3D maps to operate. Array has back-of-the-napkin backed into the following guesstimate for the world’s leading Level 4 self-driving developers: generating and maintaining HD maps – in just two to three cities – can cost as much as $1-3 million… per week… per company!
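Those two estimates hang together. At $20-30 per mile mapped, a $1-3 million weekly bill implies somewhere between roughly 33,000 and 150,000 miles mapped per week per company (a purely illustrative back-of-the-envelope check, using only the figures above):

```python
# Rough consistency check between the per-mile and per-week mapping costs
# quoted above (illustrative only; all numbers are the essay's estimates).
cost_per_mile = (20, 30)                # USD per mile mapped, low/high
weekly_spend = (1_000_000, 3_000_000)   # USD per week per company, low/high

miles_low = weekly_spend[0] / cost_per_mile[1]   # cheapest spend, priciest miles
miles_high = weekly_spend[1] / cost_per_mile[0]  # priciest spend, cheapest miles
print(f"{miles_low:,.0f} to {miles_high:,.0f} miles mapped per week")
# → 33,333 to 150,000 miles mapped per week
```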

There’s long been a theory around self-driving cars that the technology would be ready long before regulation allowed them to drive around our cities. While certain cities might block self-driving, even San Francisco allows it. It turns out, the bigger threat to the spread of self-driving cars is that it just costs too much money to enter new markets and maintain them. 

I think you see where this is going… 

When Array Labs is up and flying, they should be able to provide AV companies – startups and incumbent automakers alike – LiDAR QL0-quality 3D maps of the entire country for a tiny fraction of the cost. Plus, those maps would refresh weekly and then daily and then, one day, maybe hourly, as Array sends up more clusters. By subscribing to the Array data, AV companies would improve both their top lines – more cars driving taxi trips instead of mapping, in more markets – and their bottom lines. 

In a very real way, Array could be the “Why Now?” for the growth of a market that’s already spent $100 billion to get to this point. 

It should come as no surprise, then, that along with the US government, AV is the first market that Array is targeting. They’re already talking with most of the leading players in the space, across urban ridehail, autonomous freight, and automated driver-assist features for OEMs. 

When you have a product that could serve so many different markets, having a clear strategy, including which markets to serve first, is critical. It determines who you hire, what software you build, and even where to position the clusters. 

The thing that’s impressed me most about Array is this: while Andrew is a technical founder who would have been happy never starting a company, he’s been as meticulous about strategy as he is about engineering. “First time founders care about product, second time founders care about distribution” is the line I’ve heard him say most frequently. 

That said, a good strategy isn’t a guarantee. It just gives you the best shot. 

What might stand in Array’s way? 

Array Labs’ Risks

I love the risk sections of these essays on super Hard Startups, because the whole point is taking crazy risks that the founder thinks are actually less risky than most people believe. That said, it would be equally crazy to write an essay on a company that’s >2 years away from launching a novel hardware product into space without acknowledging what could go wrong. 

There are a few big categories of risk that I see: Technology, Market, Funding, and Space Stuff. I’ll hit each. 

Technology

Array is de-risking as much of its technology on the ground as it can. Successfully creating 3D images with its test rig and algorithms was an important step. But a lot of what Array is doing can’t be fully de-risked until they actually put satellites in LEO. 

I’m not a rocket scientist (or a radar engineer), but to my untrained eye, formation flying and communicating radar data between dozens of satellites seem both challenging and impossible to fully test on Earth. Plus, it’s still premature to write AI/ML tools to exploit and analyze the data.

A lot of things need to not go wrong in order for Array to deliver the high-quality data it hopes to provide customers. As Andrew said, “first time founders care about product, second time founders care about distribution.” Array needs to do both. If it can get a cluster of dozens of satellites to stay in formation, talk to each other, and send good data down to Earth, it still needs to win customers. 

Market

One of the tradeoffs deep tech startups make is technical risk for market risk. Ian Rountree at Cantos defines deep tech like this:

Deep tech = Predominantly taking technical risk (rather than market risk)

If you can build it, they will come. The Array team has had close to 100 customer discovery calls across Big Tech, automotive, insurance, real estate, and other sectors that attest to this holding true here. 

There is one piece of the market that’s riskier than the rest, though, and it’s one of Array’s first target markets: AVs. 

If you’ve followed the self-driving race somewhat closely, you might have been screaming at your computer reading the last section. I only described one of the two main approaches to self-driving: sensor fusion. As the name suggests, sensor fusion takes data from a bunch of sensors – LiDAR, radar, cameras, and GPS – to understand the car’s surroundings and make real-time driving decisions. Companies like Cruise and Waymo are on Team Sensor Fusion. 

But there’s another approach: vision-based. The vision-based approach takes inspiration from the way people drive – we just watch our surroundings and make decisions on the fly – by using cameras and computer vision algorithms to interpret the environment and make decisions in real-time. 

There are a number of companies working on the vision-based approach, including George Hotz’s Comma AI, Wayve AI, and, most prominently, Tesla. Tesla’s Full Self-Driving is powered by pure vision, and end-to-end ML (video in → controls out, without any human-programmed rules). 

The jury is still out on which approach will win. Most of the industry falls into the sensor fusion camp, but it’s always a risky move to bet against Elon Musk. 

The risk, for Array, is that vision-based self-driving doesn’t need high-res 3D maps. If vision-based wins, Array’s data becomes a lot less valuable to AV companies.  

Funding

In deep tech, there’s a risk that comes from the timing mismatch between potential future revenue and the real costs a company needs to incur today in order to capture it: “The Valley of Death.”

Array will need to fund itself through its demonstration launch in 2024, orbital pathfinder mission in 2025, and first cluster launch in 2026. 

To date, Array has raised $5.5 million, $500k from YC and $5 million in an October 2022 seed round led by Seraphim Space and Agya Ventures. It will likely need fifteen to twenty million more dollars to get to cluster launch. 

In any market, but especially the market we’re in now, venture funding is not guaranteed. According to PitchBook data, venture investment overall is on track to decline 52% from a 2021 high of $759 billion to $363 billion in 2023.

Investment in space technology, however, is holding up better. VCs have invested $4.24 billion in space to date in 2023, on pace to edge out 2022 ($5.4 billion) at $5.8 billion. That would represent a less steep 30% drop from 2021’s highs. 

PitchBook, Space Technology, I added the dashed line for extrapolated 2023 funding

Fortunately for Array, investors seem to have an appetite for space-based SAR. The 2023 number includes a $60 million January Series C for Capella Space, valuing the company at $320 million post-money. Umbra raised its $79.5 million Series B in November 2022, valuing the company at $879.5 million post-money. Iceye raised a $136.2 million Series D in February 2022, putting its valuation at $727.1 million post-money. 

Getting to launch is critical, and can take tens of millions of dollars. Prior to launching their first satellites, Iceye, Capella, and Umbra had raised $18.8 million, $33.8 million, and $38.3 million, respectively. Each company raised a larger round of funding within six months of their first launch, once the technology had been de-risked. 

Presumably, Array will need to raise an additional $10-30 million before launching its full cluster in early 2026, at which point it should have contracts with both government and commercial customers that begin to contribute meaningful revenue. 

This is a solvable risk. After spending time with Andrew and Duffy, and writing this piece, I want to invest, and I doubt I’m alone. 

Space Stuff

Finally, there’s one unavoidable risk worth mentioning quickly: Hardware is hard. Space is harder. I had a bunch of paragraphs written on this, but I think it’s self-explanatory. Space junk and rocks can knock satellites out of orbit. Geomagnetic storms and solar flares could disable them. It can be difficult to find an open slot on a SpaceX rocket. These are risks that all space companies face, and Array does too. 

Array has built redundancy into its design — it has dozens of satellites in a cluster instead of one — but space is a harsh mistress. The only way to derisk is to go up.

Those are the big four categories of risk as I see it: technology, market, funding, and space. I’m sure I’m missing some. Array is trying to bring both a novel product and a novel business model to market, simultaneously. Plus, with such a large opportunity at stake, competitors won’t make it easy for Array. 

None of them is impossible, though. Array’s team has worked with similar technology before. The self-driving market is unlikely to be winning-approach-take-all, at least for many years. Array is very fundable. And space stuff is hard, but companies pull it off. They do it not because it is easy, but because it is hard. 

All of these are risks worth taking, because if Array succeeds, it will quite literally change the way we see the world. 

Mapping the Path to Mapping the World

Array Labs is sprinting to bring the earth observation market to its ideal state: near-real-time 3D maps of the Earth. 

In just over a year since their YC demo day, Andrew and Isaac have put together a small but stacked team of eight, built a working multistatic test range and used it to construct 3D images, completed the PDR of the satellites they’ll send to space, and engaged in conversations with customers at the very top of Array’s first two target industries: defense and AV. 

In advance of its demonstration launch next year, during which it will fly four satellites to de-risk formation flying and multistatic capabilities, it’s going to more than double the size of the team. That will be its own sort of test of the theory that the cluster gets more powerful with each additional unit you add. 

Array has to move fast. It’s rare to get an opportunity like the one it has. Advances in AI and ML have increased the demand for 3D point clouds just as launch costs are tumbling, cubesats are getting cheaper and more capable, and RF technology has advanced enough to make multistatic SAR possible. The conditions were ripe for Andrew’s crazy idea, and the company needs to capitalize before others catch on. 

If Array can execute, it has the potential to tap into billions of dollars of existing spend and enable new markets that weren’t possible before. If it plays its cards just right, it will be able to disrupt the traditional earth observation market, introduce a new higher-margin business model, and use that model to protect its margins against inevitable competition with scale economies and network effects or switching costs. Array can and should move fast because its strategy is so clear. 

Of course, there will be challenges. Startups are hard. Hiring excellent talent is hard. Getting teams to move fast together is hard. Hardware is hard. Space is hard. Formation flying and inter-satellite communication is hard. Selling to DoD is hard. Establishing new markets is hard. EO has proven to be a very hard market to make money in. Everything Array is doing is hard. 

But holy grails are, by definition, hard. All that difficulty is a sign that Array is on the right track. And seemingly impossible things are happening more and more frequently. I’d put my money on Array to pull this off. 

And if it does, the implications are enormous. Array has the potential to build a multi-billion dollar business that enables multiples more value than that to be created. That’s the real goal. 

In that conversation Andrew and I had about moats back in July, he said that one of the biggest questions he was working on was, “Is there a way to deploy this to as large a group as possible without capturing extraordinary profits?” 

Building the 3D map of Earth is the holy grail not because of the map itself, but because of what people can do with it. It’s more useful to humanity the more people can play with it and build on it. Customers in existing markets – like AV companies, the DoD, and Big Tech – should be able to get better performance for less money, but Andrew is thinking about how to give the data to universities, non-profits, and entrepreneurs as cheaply as possible. 

“I want to generate value first and foremost,” he told me, “and I’m OK not capturing 99.99% of that.” 

That could be worrying for investors until you realize how much value near-real-time 3D maps of the world might generate. AV is a multi-trillion dollar industry in waiting. So is autonomous trucking. Solving climate and deterring conflict are existentially important. And opportunities in energy, resource management, AR, urban planning, agriculture, construction, infrastructure, logistics, and disaster response add up to trillions more dollars, and an incalculable impact on humanity. The more widely the maps are used, the better. There’s plenty of value to go around. 

I already ripped the band-aid with an Nvidia comparison earlier, so I’ll go with it: Nvidia has built a trillion-dollar company by making AI possible. If AI is as valuable as even its more measured proponents think it can be, that trillion represents a small fraction of the overall value to humanity. I think a similar dynamic might be at play here, although only time will tell what order of magnitude we’re dealing with. 

But I’m getting ahead of myself. Array still needs to launch its first cluster, sign its first customers, and build a business. The idea works on paper. On paper, it’s one of the most compelling startups I’ve found. But there’s only one way to know if it works in practice… 

But to build the 3D map of the world, you need to send dozens and dozens of satellites to space. It’s a crazy idea, but I’m betting it’s just crazy enough to work. It’s going to be fun watching Array pull it off over the next couple years. 

Here’s to the crazy ones. The ones who redraw the boundaries.

Thanks to Dan for editing, and to Andrew and Duffy for teaching me the ins and outs of SAR and letting me tell the Array Labs story!

That’s all for today. We’ll be back in your inbox with a Weekly Dose on Friday. Go get it this week.

Thanks for reading,


The Gang Captures Washington

Welcome to the 141 newly Not Boring people who have joined us since last Tuesday! If you haven’t subscribed, join 211,865 smart, curious folks by subscribing here:

Subscribe now

Today’s Not Boring is brought to you by…Mercury

Mercury is banking engineered for startups. But they go beyond banking, providing the resources and connections startups need to become success stories. We’ve used Mercury at Not Boring since day 1…and frankly, it’s very rare I see a startup that doesn’t. 

As the banking platform of choice for startups, Mercury has a unique perspective on the fundraising landscape today. They published those insights in this article – highlighting the metrics and milestones investors care most about today.

The headline is that investors are still looking for opportunities, but have shifted exactly what they’re looking for. So if you’re a startup founder, nail your next investor convo by reading Mercury’s full, free article on what investors are looking for.

Read the Article

Mercury is a financial technology company, not a bank. Banking services provided by Choice Financial Group and Evolve Bank & Trust; Members FDIC.

Hi friends 👋, 

Happy Tuesday! What a weekend. We were at my parents’ celebrating my mom’s birthday this weekend. Our car got stolen out of the driveway early Sunday morning. The police were great, we tracked the car down with an app, and we got it back yesterday. There was some damage, but all-in-all, we got lucky. 

That does mean that a lot of the time I would have spent writing was spent tracking down the car, so I had to go a little off the dome with this one. Last week, I wrote about the idea that capitalism is good and capitalism can evolve. As Bill Gurley highlighted, one of the best ways to evolve capitalism is to fight back against regulatory capture in order to unlock an Age of Miracles. This piece digs into that idea. It’s meant to be a starting point for a discussion, not to have all the answers.

Let’s get to it.

The Gang Captures Washington

The reason Silicon Valley has been so successful is because it’s so fucking far away from Washington, DC.

– Bill Gurley

Last week, Benchmark partner Bill Gurley gave a talk at the All-In Summit on regulatory capture. 

He wrapped it up with a banger: “The reason Silicon Valley has been so successful is because it’s so fucking far away from Washington, DC.”

That’s a great line. It brought the house down. Regulatory capture sucks. 

I was listening to it on a run, and when he said it, I did a very little, inconspicuous fist pump. 

But when I thought about it once the endorphins wore off, it seemed more like the end of a chapter. 

Silicon Valley may have been so successful to date because it’s so fucking far away from Washington, DC. It’s been the plucky, underestimated, unwashed underdog, and when you’re the plucky, underestimated, unwashed underdog, you can use that status to rack up surprise wins. 

But when you’re the favorite, when you’re expected to win, when opponents design strategies to prevent you from winning, the game changes. You can’t just rack up surprise wins far away from Washington, DC anymore. You need to accept that you have a target on your back, and do everything it takes to win by competing directly. 

That’s where Silicon Valley is today. It’s coming for the big prizes – trillion-dollar industries like energy, finance, healthcare, defense, telecommunications, education, manufacturing, automotive, and white-collar work – and facing savvy incumbents at every turn. 

We have the opportunity to live in an Age of Miracles. Cheap energy. Abundant intelligence. Supersonic transportation. Distributed opportunity. Longer, healthier lives for billions. To realize that opportunity, though, is going to take a lot of messy, bare-knuckled work that Silicon Valley has had the luxury of being unaccustomed to. 

If Silicon Valley really wants to make the world a better place, it needs to recognize that it can’t avoid Washington, DC anymore. It has a responsibility to fight the regulatory capturers on their own turf and win. 

That’s a hard transition. Let me tell you a story about my experience failing to make it.

Fat Kid Running 

I was a fat kid for most of middle school. I say “most of” because occasionally I’d do Weight Watchers or skip lunch for a week during basketball camp or train for the one cross country race our league hosted each year. In eighth grade, I got skinny and fast and won. 

So on the first day of cross country training camp my freshman year at a new high school, when Coach McAlpin told the returning runners that there was a new kid named Packy McCormick who might be pretty fast, one of the guys who had been a year ahead of me in middle school said, “Packy? He’s fat.” 

I showed up to camp in a not-fat phase, and on that first run on the trails at Haverford College, I hung with the varsity runners. I ran varsity all year, the only freshman to make the cut. I finished Second Team All-Main Line. Over the next couple of years, I got faster. I was First Team All-Main Line my sophomore year, and finished sixth in the league championship. I beat the juniors and the seniors. My junior year, I was the fastest on the team more often than not. I finished third in the Pennsylvania Independent Schools Championship, surprising even myself, and then won the league 3200 Meter Championship in the spring. Not bad for a fat kid! 

Senior year, I was captain and I was feeling good. I was going to win everything. 

And then… I didn’t. I distinctly remember kicking a batting cage at Fairmount Park after a particularly disappointing race in which I lost to a junior on our team. I did OK at States, but I didn’t win, and I didn’t finish third. I think I finished seventh. 

This section from a rival school’s geocities site sums up that year well: 

Then came the 3200 meter run, starring the top 4 finishers from Cross Country championships. Packy McCormick from Episcopal was the defending Inter-Ac Champion, and had never lost to a Malvern runner in track in his high school career. Junior Brian Duffy, the defending Cross Country champ, took the first 1600 easy in 4:55, then surged past Packy on the 6th lap and ran away with the victory from there, clocking a 9:48 for a negative split second 1600 in 4:53, and just 6 seconds off the Inter-Ac Championship record of 9:42.

My memory isn’t great, but that race is still seared in there. It was a hot May day. I’d run a 9:42 earlier in the season – in an invitational with much faster kids, where I wasn’t expected to win and the pressure was off – and if I just did it again, I’d set the league record. As soon as the gun went off, though, I knew it wasn’t going to be a record day, that I’d have to fight just to win it again. 

Malvern knew that my strength was my endurance – I could get a lead early and keep it – and that my weakness was top-end speed – I had no kick – and designed a strategy to beat me. They had three strong runners. Two of them boxed me in, bumped me, and slowed me down so that the third could outkick me in the final 800. If I tried to break away, one of the two sprinted back in front of me, and then slowed down to mess up my pace. It worked. Duffy, a junior, won, and I finished second in the last race of my high school career. It was devastating. 

I learned a painful lesson that season: it’s a lot easier being the up-and-coming underdog than it is being the leader with a target on your back.

Luckily, the stakes were low. It didn’t matter to the world if I ran two miles faster than another kid. The stakes are higher for Silicon Valley. It needs to win. 

To start, it needs to know what it’s up against. And I think Gurley was right: it’s regulatory capture.

Regulatory Capture 

Back to Gurley’s talk and regulatory capture. 

Regulatory capture occurs when a regulatory agency, created to act in the public interest, acts in the interest of incumbents instead. Gurley gives some colorful examples, like COVID tests. 

Germany approved 96 different antigen rapid test vendors; in the US, the FDA approved three. Tests cost less than $1 in Germany; they cost $12 in the US. Unsurprisingly, the person at the FDA responsible for approving antigen test vendors, Timothy Stenzel, had worked for two of the three approved companies. 

Does that annoy you? Yeah! 

Does it make you mad? Hell yeah! 

Does it make you mad enough to call your representative, switch your vote, protest in Washington, or organize a lobbying group that will show up year after year with millions of dollars in campaign donations so that it never happens again?! Errrrr… uhhhh… no. 

That’s the challenge with regulatory capture in a nutshell. 

In his talk, Gurley cites the work of George Stigler, the University of Chicago economist who coined the term “regulatory capture” in The Theory of Economic Regulation in 1971. In a 1989 follow-up, Stigler’s collaborator Sam Peltzman wrote that the paper’s main conclusion was: 

In any similar political contest between groups of disparate size, the compact organized interest will usually win at the expense of the diffuse group.

Gurley asked the audience to repeat “regulation is the friend of the incumbent” after him. I’m going to ask you to repeat this one after me. If you’re at your desk, you can whisper, or type it out, or tweet it. OK, on the count of three. One, two, three:

“The compact organized interest will usually win at the expense of the diffuse group.” 

This one insight more than any other describes the bottleneck to progress in America. It means that a few loud, angry, well-organized people can make things worse for everyone else. We all care about a lot of things a little bit, which gives the advantage to people who care about one thing a lot. 

This has bugged me for a while. I wrote about it in How to Fix a Country in 12 Days. I tweeted about it in August after Illinois announced it was keeping its moratorium on new nuclear in place: 

When you’re the underdog, you can avoid captured areas, or blame defeat on regulatory capture. You can stay so fucking far away from Washington, DC. 

At some point, though, you need to play to win. 

Taking the Fight to Washington

There’s an emergent recognition in Silicon Valley that we’re at this turning point, even if they’re not saying it directly. In the past week alone, four of the most-discussed pieces of content in my corner, in addition to Gurley’s talk, were:

One talk about circumventing the regulators altogether, one story about how dangerous it would be to let Europeans regulate AI, one podcast (in two parts) about how the tech tribe can take back cities, and one book about the one person who’s been able to break through multiple captured industries. 

My Spidey Sense is tingling. This conversation is bubbling up all over the internet.

The government itself is often portrayed as the enemy in these discussions, but I think that’s a misplaced and futile fight. Washington is an algorithm; there are rules, even if they’re sometimes opaque. If Silicon Valley wants to bring about the Age of Miracles, it’s going to need to learn those rules and win the game on the field. 

I don’t pretend to know exactly how Silicon Valley can beat the incumbents at their own game. I’m already way out of my depth here. But I know that a few things will matter whatever the specifics. 

Being well-resourced and well-organized matters. 

In one of my favorite essays, Choose Good Quests, Trae Stephens and Markie Wagner write about the moral imperative for founders who’ve gained experience, resources, and connections building easier companies early in their careers to put those assets toward really hard, civilizationally important problems in their second act. They call these “Good Quests.” 

The world is filled with good quests that require massively leveled heroes to complete: semiconductor manufacturing, complex industrial automation, natural resource discovery, next-generation energy production, low-cost and low-labor construction, new modes of transportation, general artificial intelligence, mapping and interfacing with the brain, extending the human lifespan. These future-defining problems are hard to recruit for, difficult to raise money for, and nearly impossible to build near-term businesses around, which is why they are exactly the types of problems we need the most well-resourced players pursuing.

Silicon Valley is going through the same arc on the meta-level, accumulating experience, resources, and connections over the past half century that it must now put towards Good Quests. 

This is already underway at the company level – I can name multiple companies attacking each of the challenges listed in the quote – but in order for them to have the greatest odds of success and largest impact, Silicon Valley has to give them air cover on the regulatory front. 

Silicon Valley is well-resourced, but it needs to be better organized. Currently, it faces the same challenge of diffuse support that Stigler wrote about in 1971. “Tech” is a loose collection of people and companies that use technology to try to improve something. They take different approaches and attack different industries. 

While tech counts some of the largest companies in the world – Apple, Amazon, Google, Meta, Nvidia – as its own, each individual startup attacking an established industry likely faces better-resourced and better-organized incumbents. 

Crypto startups face a regulatory apparatus captured by the financial services industry, as evidenced by Elizabeth Warren’s Anti-Crypto Army. Nuclear fission startups face a regulatory apparatus captured by oil and gas companies and the environmentalists they back, and fusion is likely to face the same. When the All-In hosts asked him about fusion after the talk, Gurley didn’t seem optimistic. 

Silicon Valley now has the resources, and it needs to capture a regulatory apparatus of its own, starting with getting clear about what it stands for and rallying people around that mission. This is as good a quest for entrepreneurs to embark on as any particular technological innovation. 

I think the e/acc movement is working in the right direction here. As much of an optimist as I am, I realize that battle lines need to be drawn, and the idea of acceleration versus deceleration or growth versus degrowth is a clarifying one. It’s as strong a Schelling Point as I’ve seen in tech.

It’s also too jargony to go mainstream. Thermodynamics, AGI, and Kardashev scales work to rally the in-group, but garnering the necessary popular support will take a plainer message. 

Passionate popular support matters. 

Silicon Valley needs to both share an optimistic vision of the world it can create with and for people, which it does an OK-but-not-great job on, and make people aware of what’s at stake when incumbents block progress out of their own self-interest. 

Hope and Anger.

On the anger side, people need to be acutely aware of the fact that regulatory capture is definitionally anti-consumer. When it occurs, a small group receives benefits at the expense of the larger population. 

The stories that Gurley shared about regulatory capture demonstrate this. 

More expensive COVID tests mean three companies benefit at the expense of hundreds of millions of people paying more. Comcast blocking city-wide WiFi means that Comcast benefits at the expense of 1.5 million Philadelphians. Epic making healthcare providers use shittier software means that Epic benefits at the expense of millions of people who have a harder time dealing with an already complex healthcare system. All three examples stand in contrast to the competition that helps make America great. 

These stories should make people angry, and they’re not the worst of them. 

Back in June, I did a quick and dirty analysis of the number of deaths caused by using more dangerous energy sources instead of nuclear, and it’s something like 59 million deaths over the past 30 years.

That should make people furious, but that story doesn’t have the same resonance as images from Chernobyl or Fukushima, which were far, far less deadly. 

The fact that tens of thousands of depressed people in the US commit suicide each year, when we’ve known for decades that psychedelics paired with talk therapy can help treat depression but classified them as Schedule I drugs anyway, should make people furious. 

Just last night, Roon made a similar point about self-driving cars: 

This chart comparing the prices of goods in heavily-regulated versus less-regulated industries should make people furious. It should be plastered across cities and towns throughout the country, and even though it’s already shared often online, it should be shared more. 

From Bill Gurley’s 2,851 Miles

One of Silicon Valley’s challenges is that counterfactuals are hard to comprehend. It’s difficult to grok what could have been if things had gone differently: all of the people who wouldn’t have died if we used more nuclear instead of coal, all the people who would still be alive if research into psychedelics had continued, how cheap healthcare and education could be if those industries weren’t captured. 

Silicon Valley will need to figure out how to spread that message. 

And then there’s hope.

At the same time, Silicon Valley needs to do a better job describing what could still be in ways that demonstrate the real impact on peoples’ lives today. 

That starts, of course, with building technology that clearly improves peoples’ lives. Safer self-driving cars, cheaper energy, cheaper housing, better medicines. Cheaper and better across the board. In the underdog phase, social media and SaaS were valuable stepping stones, but in the leadership phase, Silicon Valley needs to make a clearer positive impact on the day-to-day. 

Of course, there’s a chicken and egg here. Regulatory capture can make it hard to actually do the things that Silicon Valley wants to do, so it needs to yell loudly whenever roadblocks are thrown in its way. Gurley’s point on the need for increased transparency and communication on regulatory capture is a good one. 

The message needs to be clear: if you let us cook, and come cook with us, we will make everything cheaper and increase all of our standards of living. On a podcast recently, someone made the point that billionaires and regular people all use the same iPhone connected to the same internet. That can be true for pretty much everything except land. 

People need to be motivated enough to call their representatives, switch their votes, protest in Washington, or organize lobbying groups that will show up year after year with millions of dollars in campaign donations so that it never happens again. 

Again, I don’t have the answers for how we do this, but Silicon Valley needs to get popular support on its side.

The good news is, while the scams and blowups get the headlines, I genuinely believe that Silicon Valley has truth on its side. Its incentives are better aligned with the general population’s than the incumbents’ are. Tech companies win when they’re able to provide better, cheaper things to more people.

Playing the long game matters.

Silicon Valley’s luminaries like Jeff Bezos talk about the importance of playing the long game. That’s more important in this battle than it is for any particular company. 

This war certainly won’t be won overnight. It will take years of strategic organization, clear communication, and most importantly, delivering on promises. When Silicon Valley is successful in defeating regulatory capture in a certain industry, it needs to show that it actually delivers results. Every time it does that, it will gain support for the next, harder battle.

Techno-capitalism and America, at their best, go hand-in-hand. It’s Silicon Valley’s responsibility not to overthrow the system, but to improve it: to get pro-growth candidates elected, work with regulators to help them do their job of protecting the interests of the American people, and prove that government and industry can work together for the benefit of all. 

And if it succeeds in capturing Washington, it will need to do the hardest imaginable thing: use that capture for the good of innovation and the good of the people. It will need to somehow open source regulatory capture and open itself up to disruption from the next up-and-coming underdog. 

The fact that leading tech companies already try to use regulatory capture for their own benefit demonstrates how difficult this challenge will be, but it’s crucially important.

If Silicon Valley’s mission really is to make the world a better place, it must recognize its responsibility to lead, fight on behalf of the people against incumbents, and do what it takes to win. Then it will have earned the right to show what’s possible. 

That’s the game on the field: capture Washington back and change it for the better.

In the coming decades, Silicon Valley will need to be successful by getting so fucking close to Washington, DC that it can improve it from the inside.

Thanks to Dan for editing!

That’s all for today! We’ll be back in your inbox on Friday with the Weekly Dose.

Thanks for reading,


Capitalism Onchained

Welcome to the 645 newly Not Boring people who have joined us since last Tuesday! If you haven’t subscribed, join 211,724 smart, curious folks by subscribing here:

Subscribe now

Today’s Not Boring is brought to you by… Tegus

You know you’re a smart investor. Tegus is here to make you even smarter.

Tegus gives you valuable primary research and call transcripts — collected, curated, and distilled for easy access. With full transcripts of earnings calls plus financial models and easy-to-cite SEC data, you can make bold, high-yield investment decisions with confidence.

And the stakes just got a little lower – you can trial Tegus for free today.

Free Tegus Trial

Hi friends 👋,

Happy Tuesday and welcome back to Not Boring! Hope your Falls are fantastic.

I haven’t written about crypto for a minute, and a bunch of people have asked me whether it’s because I’m less excited about it now than I was before.

While I’ve spent more time writing about atoms-based companies and exponential curves over the past few months, doing so has actually helped me see crypto’s potential more clearly. If we want to accelerate, we’re going to need capitalism to keep up, and I think crypto is uniquely positioned to make that happen.

Let’s get to it.

Capitalism Onchained

Any technology that is sufficiently valuable in its ideal state will eventually reach that ideal state.

By ideal state, I mean the imagined end goal or highest potential that technology could achieve if all the kinks were worked out and it was deployed ubiquitously. 

Understanding the ideal state is maybe the most important thing to do early in the life of a technology, because if that ideal state represents enough value to enough people, the kinks will be worked out and the technology will be deployed ubiquitously. 

Boom and bust cycles for such technologies are useful noise. The booms are useful for attracting resources. The busts are useful for regrouping, fixing problems, and mapping the next leg.  

Through any market cycle, the promise of the ideal state acts like a magnet, attracting new researchers, entrepreneurs, and investors to improve upon the work of those who’ve failed to reach it. If you believe the ideal state is possible, then you chalk up previous failures to poor timing or implementation and keep trying new approaches. 

AI, AV, and AR/VR are three technologies that wandered through the desert burning billions of dollars for decades and now look to be breaking out. It’s capitalism: if the opportunity is big enough should it work, ambitious people will continue to try to figure out how to make it work. These dreams can’t die, even if thousands of dreamers do on the way. 

Recently, there have been fresh calls that crypto is dead. Prices are down, activity is down, people are leaving. I get it. It’s been brutal, and even worse, boring. But I’m more convinced now, as of two weeks ago, that crypto is one of those dreams that can’t die. 

As soon as I hit send on I, Exponential, an ode to capitalism, it snapped into place for me. 

Crypto’s ideal state is to make capitalism more effective.

That’s a big claim that risks being overly grandiose. Crypto isn’t short on grandiose claims. So I’ll be as specific as possible, laying out my thinking in two parts:

  1. Capitalism is good and capitalism evolves.

  2. How crypto can make capitalism more effective. 

Capitalism is Good and Capitalism Evolves

Capitalism works by incentivizing people to act in their own self-interest and making it as easy as possible to do so. 

Unlike a centrally planned system like socialism, which trusts that a planning body knows both the problems to work on and the ideal solutions to those problems, capitalism works by letting anyone bring their best solution to whatever problem they see in the market. Many will fail, some will succeed wildly.

This is a core tenet of capitalism: that incentivizing entrepreneurship and increasing the variance of inputs leads to better outcomes. 

To believe my argument that making capitalism more effective would make crypto sufficiently valuable, we need to agree on the two premises: capitalism is good and capitalism evolves. 

Capitalism is good. 

I doubt many socialists read Not Boring at this point, so I’ll keep this section short. 

The invisible hand produces modern miracles by invisibly coordinating the actions of billions of self-interested people. Progress improves the living standards and quality of life for billions of people around the world, as seen in the World GDP over the last two centuries (capitalism started in earnest in the 18th century). 

Our World in Data

Not only has GDP per capita improved, as Robert Zubrin points out, it has “risen in proportion to the size of the population cubed.” What Malthus got wrong is that under capitalism, more people aren’t a drain on resources; more people, each able to contribute his or her best efforts or ideas, are the resources.  

I could go on, but if you’re still on the fence, read I, Exponential. 

Capitalism is good. But capitalism is not perfect. Luckily, capitalism evolves. 

Capitalism evolves. 

It’s easy to think of capitalism as a static system that enables the evolution of the goods and services available to humanity, but capitalism itself evolves, too. 

Consider the Industrial Revolution. Wonderful results. Just look at that GDP chart! But it was also brutal. Among many issues, children as young as five or six worked twelve to sixteen hours a day, often seven days a week, in conditions unsafe for workers of any age.

Child Laborers during the Industrial Revolution, Lewis Hine / The U.S. National Archives

Today, it is still better to own the means of production than to operate them, but the efforts of labor unions, journalists, regulators, and even progressive businesses responding to market forces have contributed to a vast improvement in worker’s conditions. Henry Ford, for example, implemented a five-day, 40-hour workweek in 1926 not out of benevolence, but to test his theory that reducing hours would improve worker morale and productivity. 

Or consider the way ambitious technology businesses are financed. Prior to the 1950s, in order to develop and scale a new technology, you’d either need to have been rich enough to fund it yourself, convince a bank to give you a loan, or build inside of an existing company. That constrained who could create what and how. When Sherman Fairchild wrote a $1.4 million check to the “Traitorous Eight” to form Fairchild Semiconductor, a new funding model was born. 

Adventure capital or liberation capital, now known as venture capital, ignited the tech industry as we know it today. As Sebastian Mallaby writes in his excellent book, The Power Law, “By freeing talent to convert ideas into products, and by marrying unconventional experiments with hard commercial targets, this distinctive form of finance fostered the business culture that made the Valley so fertile.” 

The exponential curves I included in I, Exponential wouldn’t have been possible without capitalism, but they also wouldn’t have been possible without the evolved form of capitalism that includes venture capital. 

I don’t believe that we’ve reached the end of history or the end of capitalism. 

I think crypto can make capitalism more effective. 

How Crypto Can Make Capitalism More Effective

What makes capitalism more effective? 

As the two examples above show, capitalism doesn’t evolve along a single axis. Improved working conditions and new funding models both made capitalism more effective. 

Crypto could potentially improve capitalism along a number of different axes. I asked Anthropic’s AI, Claude, what ideal capitalism would look like, and it told me that while economists don’t agree on an answer, there are some basic principles that would apply:

  1. Strong property rights and contract enforcement. 

  2. Free markets with prices set by supply and demand. 

  3. Low barriers to starting a business. 

  4. Healthy competition among firms with low concentration of power. 

  5. Open trade and capital flow between nations. 

  6. Democratic processes that reflect popular input and interests. 

  7. Equality of opportunity regardless of identity or background. 

  8. Alignment of business interests with long-term societal welfare. 

  9. Limited regulation focused on correcting market failures and protecting rights.

  10. Sufficient government revenue to fund public goods like infrastructure, education, basic research, and a social safety net to moderate capitalism’s structural challenges. 

We can quibble with specifics, but it’s close enough. And what’s striking to me is that the first seven read like a list of crypto’s ideal-world features.

  1. Crypto strengthens digital property rights and features smart contracts that self-enforce.

  2. Right now, I can find the current price for HarryPotterObamaSonic10Inu or a jpeg of a monkey, based purely on supply and demand. 

  3. Composability, open source code, and shared infrastructure make it relatively easy to spin up a new app and begin collecting payments. 

  4. Competition forces protocols to be minimally extractive.

  5. Crypto is a 24/7, global marketplace with users and developers around the world. 

  6. Decentralized protocols rely on the governance of their holders. 

  7. The most popular new app in crypto was built by anonymous developers. 

If you’re reading closely, you’ll notice that not all of these have, uhhh, reached their ideal state. 

HarryPotterObamaSonic10Inu is a shitcoin; who cares if you can find the price and trade it right now? 

Governance has certainly not been figured out; voter turnout is anemic, and votes are susceptible to domination by whales. 

We can debate whether that app is good, bad, or neutral, but the fact that it’s the most popular new app the space has to show for all of the money and effort poured in so far is not cause for celebration. 

But in the midst of all of the chaos, there are signs of progress towards the ideal state. There are a few avenues I find particularly compelling.

First, if you take the internet seriously, and I do, then giving digital assets physical properties, like property rights, is a big deal. 

I wrote about the need for crypto in giving people control over their personal AI models, for example, an idea that seems strange now but won’t for much longer. It’s one thing to have the @x handle taken from you; it’s another to have your girlfriend taken from you. 

Computers that can make commitments are even more important if you’re building a company. Just as entrepreneurial activity in a society is shaped by the property rights structure, giving digital entrepreneurs guarantees that a protocol can’t revoke their access or throttle their distribution should encourage more entrepreneurial activity and investment. 

There are early signs in the companies building onchain – on L1s like Ethereum and Solana, L2s like Base, Optimism, and zkSync Era, and on protocols like Farcaster – although performance will have to improve on a number of dimensions – cost, speed, security, UX – before onchain becomes the default. The fact that protocols can incentivize developers to build on top of them and align incentives long-term through protocol token ownership, an idea I wrote about in Small Apps, Growing Protocols, should accelerate that transition when the performance trade-offs fade. 

Digital property rights uniquely guaranteed by blockchains have the potential to increase entrepreneurial activity online in the same way physical property rights do offline. Incentivizing entrepreneurship and increasing the variance of inputs leads to better outcomes. Or as Chamath would say, “Some will work, some won’t, but always learning.” 

Second, crypto can create global free markets based on supply and demand more easily than any other technology or platform to date, even for things that don’t exist. 

Crypto provides the opportunity to apply free markets to nearly anything. Decentralized exchanges like Uniswap are the first products that allow anyone to list any digital asset, provide initial liquidity, and create a market without an intermediary. 
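
To see how a market like that sets prices with no intermediary, here’s a toy sketch of the constant-product formula (x · y = k) that Uniswap-style exchanges use. The class, pool sizes, and token amounts are illustrative, not Uniswap’s actual contract code:

```python
# Minimal sketch of a Uniswap-style constant-product market maker (x * y = k).
# Anyone can create a market by depositing initial liquidity in two assets;
# the price then floats purely with supply and demand.

class ConstantProductPool:
    def __init__(self, reserve_x: float, reserve_y: float):
        self.reserve_x = reserve_x            # e.g. units of a new token
        self.reserve_y = reserve_y            # e.g. units of ETH
        self.k = reserve_x * reserve_y        # invariant preserved by every swap

    def price(self) -> float:
        """Marginal price of X in units of Y, set by the reserve ratio."""
        return self.reserve_y / self.reserve_x

    def swap_x_for_y(self, amount_x: float) -> float:
        """Sell amount_x of X into the pool; returns the Y received."""
        new_x = self.reserve_x + amount_x
        new_y = self.k / new_x                # keep x * y = k
        out = self.reserve_y - new_y
        self.reserve_x, self.reserve_y = new_x, new_y
        return out

pool = ConstantProductPool(1_000_000, 100)  # hypothetical token seeded against 100 ETH
print(pool.price())                         # starting price: 0.0001 ETH per token
pool.swap_x_for_y(100_000)                  # someone sells 10% of supply into the pool...
print(pool.price())                         # ...and the price falls automatically
</imports>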
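
To see how a market like that sets prices with no intermediary, here’s a toy sketch of the constant-product formula (x · y = k) that Uniswap-style exchanges use. The class, pool sizes, and token amounts are illustrative, not Uniswap’s actual contract code:

```python
# Minimal sketch of a Uniswap-style constant-product market maker (x * y = k).
# Anyone can create a market by depositing initial liquidity in two assets;
# the price then floats purely with supply and demand.

class ConstantProductPool:
    def __init__(self, reserve_x: float, reserve_y: float):
        self.reserve_x = reserve_x            # e.g. units of a new token
        self.reserve_y = reserve_y            # e.g. units of ETH
        self.k = reserve_x * reserve_y        # invariant preserved by every swap

    def price(self) -> float:
        """Marginal price of X in units of Y, set by the reserve ratio."""
        return self.reserve_y / self.reserve_x

    def swap_x_for_y(self, amount_x: float) -> float:
        """Sell amount_x of X into the pool; returns the Y received."""
        new_x = self.reserve_x + amount_x
        new_y = self.k / new_x                # keep x * y = k
        out = self.reserve_y - new_y
        self.reserve_x, self.reserve_y = new_x, new_y
        return out

pool = ConstantProductPool(1_000_000, 100)  # hypothetical token seeded against 100 ETH
print(pool.price())                         # starting price: 0.0001 ETH per token
pool.swap_x_for_y(100_000)                  # someone sells 10% of supply into the pool...
print(pool.price())                         # ...and the price falls automatically
```

Because the invariant alone determines the exchange rate, a price exists for any pair the moment someone seeds the pool – supply and demand do the rest.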

To be sure, the vast majority of the stuff that trades on crypto markets today is garbage – 99% or more of all tokens and NFTs ever created will be practically worthless – but the noisiness is a feature, not a bug. 99% or more of all websites on the internet are garbage. 99% or more of the ideas people have about what company to build, how to explain natural phenomena, or how to engineer the next big technology have been garbage. Capitalism works because it allows the less than 1% of really great ones to emerge. 

Molecule is one of my favorite examples of new free market creation onchain. It’s using what it calls IP-NFTs to fund scientific research by bringing “rights to IP and R&D data on-chain, unifying the legal rights, data access, and economics around research projects into cryptographic tokens on Ethereum.” It’s funded research into longevity, hair regrowth, autophagy, and Alzheimer’s. 

Projects Funded by Molecule’s DAOs

Importantly, Molecule’s potential is to move the impact of free markets down to the research level so that scientists can research what the market finds important, even if their university or publications don’t. 

Onchain, it’s possible to create free markets for even less obvious assets, like ideas. I love Jacob Horne’s idea of Prophecy Markets and played around with my own idea for Startup Prophecies. You could imagine letting people stake their ETH on ideas that they want to see built, providing price signal before the entrepreneur decides to take the leap. PartyDAO’s John Palmer is playing around with a toy model of this with Idea Guy Summer: buy an NFT to join the DAO, propose ideas, holders vote on them, the ones with enough votes execute, all ETH must be spent by the last day of summer (September 23rd). Right now, people are proposing things like buying NFTs and swapping ETH for USDC, but as more of the economy moves onchain, the ideas can become more substantial. 

Allowing any arbitrary digital asset to find a free market based on supply and demand is likely useful in unpredictable ways, but what would be predictably useful would be to bring existing assets onchain. That’s starting to happen. 

Third, real-world assets are coming onchain, which could lower the cost of capital for businesses and projects, increase liquidity by tapping into 24/7 global markets, and lower barriers to starting a business. 

This is the big potential unlock, and I think it’s how crypto can most obviously make capitalism more effective. Capitalism is more efficient when capital can more easily find the right opportunity, frictionlessly and with as few transaction fees as possible. When funding flows freely to the most promising businesses, products, or ideas, productivity and progress are maximized. Frictionless movement of capital allows it to be redeployed rapidly as new opportunities emerge, meaning less deadweight loss as capital languishes. Lower transaction fees mean more capital for value-generating activities instead of being extracted by middlemen. 

In a Not Boring guest post, Everything is Broken, Blocktower’s Kevin Miao explained his excitement about moving securitization onchain. Real-World Asset (“RWA”) DeFi protocol Centrifuge turns the traditional nine-step monthly flow of funds for a securitization, involving 14 parties… 

Parties involved in securitization transaction, per PWC

… into a four-step process managed by code: 

Kevin Miao, Everything is Broken

He points out that the streamlined process not only shaves basis points off the cost of capital – “at the scale of our $14 trillion securitization market, even a 25bps efficiency gain amounts to $35 billion a year back in borrower’s pockets” – but, because Centrifuge is built on a credibly neutral, public, and open blockchain, also lets developers compete to provide value-added services on top of it. 
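
The quoted figure is straightforward arithmetic, easy to check:

```python
# Verifying the quoted savings: 25 basis points on the $14 trillion
# securitization market. One basis point is 0.01%.
market_size = 14e12        # $14 trillion
gain_bps = 25              # basis points of efficiency gain
annual_savings = market_size * gain_bps / 10_000
print(f"${annual_savings / 1e9:.0f}B per year")  # → $35B per year
```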

Despite the bear market, Centrifuge has more than doubled its cumulative origination in 2023 to $436 million. Small potatoes against the total securitization market, but growing quickly. 

RWA DeFi may have an even larger impact on markets that have traditionally suffered from a lack of liquidity and accessibility. 

Goldfinch and Jia (a Not Boring Capital portfolio company) offer loans to small businesses around the world. My sister works in SME lending in Africa, so I’ve heard horror stories of the extractive rates businesses have to pay to access capital – sometimes up to 12% monthly interest. Both Goldfinch and Jia let these small businesses access global liquidity, and a much cheaper cost of capital, onchain, backing the loans with assets and income off-chain. Over time, as borrowers repay on time, they build up onchain credit scores and unlock lower rates. Jia even rewards borrowers, and those who vouch for them, with an ownership stake in Jia’s long-term growth, which it believes will lead to better borrowing behavior. 
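
For a sense of how extractive those rates are: 12% per month, compounded, works out to nearly 290% per year (a rough illustration, not any specific lender’s terms):

```python
# What a 12% *monthly* interest rate compounds to over a year.
monthly_rate = 0.12
effective_annual_rate = (1 + monthly_rate) ** 12 - 1
print(f"{effective_annual_rate:.0%}")  # → 290%
```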

Jia just launched its first pools this summer in Kenya and the Philippines, and is already seeing strong early results. Goldfinch, which started lending in December 2020, has $100 million in outstanding loans, with a 1.66% loss rate after suffering its first loss this summer, and has already had $27.6 million repaid. 


Back at home, another project that’s caught my eye is Plural Energy, which lets people invest in solar fields, wind farms, and battery storage projects with as little as $10 as opposed to typical $50k minimums. Plural is focusing on small and medium-sized projects that struggle with a lack of liquidity – they’re too small for traditional infrastructure investors – and long, complex underwriting processes, driving up project developers’ cost of capital to as high as 30%.  

By filing with the SEC under Reg A+, Plural can tokenize equity in the projects and use that equity as a wedge into DeFi. Over time, since developers will hold equity in their projects onchain in the form of tokens, they’ll be able to borrow against that equity onchain for things like construction loans at a lower cost of capital than they can find off-chain. 

Those are just a few examples of RWA DeFi projects that are building onchain to remove friction, increase liquidity, and lower the cost of capital. They’re early signs of the opportunity to help global capital find the right opportunities and grease the wheels of capitalism. 

Over time, I expect to see more types of projects and assets come onchain. In the ideal state, capital can flow into and out of opportunities at the speed of the internet, making capitalism more efficient in the process. 

Regulation is still a big bottleneck here. It’s complicated to bring assets that represent something like shares in a company onchain, particularly in the United States, but with time, the size of the opportunity should pressure regulators into establishing sensible regulation. Big players like Visa and JP Morgan both announced onchain settlement products in the past week, and BlackRock and others have applied for Bitcoin and Ethereum ETFs. As large institutions continue to see the opportunity to lower costs and increase liquidity, I expect they’ll bring their lobbying efforts to bear.

The early signs of crypto’s ideal state – making capitalism more effective – are there. Crypto can provide property rights to developers, encouraging more people to experiment and start businesses and increasing the variance that is the lifeblood of capitalism. It can create free markets for anything, from files to tokens to ideas. And it’s beginning to bring real world assets onchain, enabling capital to flow more freely to the right opportunities and lowering the barriers to starting a business. 

The signs are there, even if they’re hidden by the bear market. 

We’re Still So Early

Let’s be honest: crypto in reality has yet to meet crypto’s on-paper promise. There are bright spots, but they’re often overwhelmed by the bad actors, scams, bots, pumps, and dumps. 

I’ve spent a lot of time in Not Boring trying to explain what crypto has built to date – talking about the “real use cases” (see: here and here) – but the more I’ve thought about it, the less I think that matters, for now.

Crypto’s lack of broad usefulness is to be expected at this point in the game. “We’re so early!” is oft-repeated and kind of cope, but I think it also happens to be true. 

Ethereum launched less than a decade ago. It’s the newest major technology platform we have – AI has been under development for nearly seven decades, VR traces its roots back to Headsight and Sensorama in the 1960s – and even sci-fi authors barely anticipated it. Something like Bitcoin first appeared as Electronic Cash in Bruce Sterling’s 1994 Heavy Weather. Smart contracts didn’t really show up until Charles Stross’ Accelerando in 2005 and Daniel Suarez’s Daemon in 2006. 

Crypto hasn’t had very much time to figure itself out, even in the idea phase, and the time it’s had has been spent in the maelstrom of the internet, with money baked in, a recipe for both very good things and very bad.

The lack of sci-fi mentions could mean a couple of things:

  1. It’s not a valuable enough technology for sci-fi authors to bother imagining it.

  2. It’s a rare genuinely new idea.

I think it’s the latter, obviously. If that’s the case, then I think we’re still in the sci-fi ideation stage for crypto – the period in which people dream up the potential use cases and ideal state that would be possible if the technology worked perfectly – people just happened to be building through it. 

H.G. Wells dreamed up the “Networked World,” a very early precursor idea to the internet, in 1899. We had a century to think through the internet’s implications before the Dot Com Boom, and we ended up with this:

Bad 90s Websites from Amazon, Apple, Disney, Coca-Cola, Webvan, and More!

Despite the internet’s awkward beginnings, and the billions of dollars lit on fire on any company that threw a “.com” in its name, the ideal state of the internet – connecting everyone in the world to communicate and transact – was so obviously valuable that researchers, entrepreneurs, and investors persisted through the crash and built the internet we know and (mostly) love today. 

In this phase, the most important thing is to grok the ideal state. With a valuable enough ideal state, everything else is (challenging, messy, uncertain) implementation. 

Crypto’s ideal state is that it will make capitalism more efficient and accelerate progress across industries. That’s civilizationally valuable enough that people will pursue it through booms and busts. 

This is actually happening. It’s cliche to talk about investing in infrastructure at this point in the cycle, but onchain infrastructure has made huge improvements over the past year. 

Layer 2s (L2s) like Optimism, Base, Arbitrum, zkSync, and StarkNet are making blockspace cheaper. Innovations like account abstraction, Supersends, embedded wallets, and multi-party computation give developers tools to create smoother user experiences while retaining crypto’s benefits. Stablecoins are becoming infrastructure, as highlighted by Visa’s announcement that it’s using USDC on Solana to speed up merchant settlement. Stanford researchers even proposed an ERC-xR standard that would make certain transactions reversible. I’ve spoken to a ton of smart teams working on correcting crypto’s obvious shortcomings. 

Progress on infrastructure despite low prices and slow activity is a sign that there are smart people who still believe enough in crypto’s ideal state to bet their careers on it. It’s also an acknowledgement that the average user isn’t going to want to make trade-offs: they’ll want the benefits of web3 with the convenience of web2. While I don’t think mass adoption is important yet, it will be crucial at some point if crypto is going to reach its ideal state. Entrepreneurs will need to get unique value from building onchain. Businesses will need to turn to crypto to finance their projects, and lenders will need to be there with capital seeking opportunity. 

I think that this coming decade will see faster progress, and more economic opportunity, than any previous decade. The wheels are in motion, and that will happen with crypto or without it. My hope, however, is that by bringing more of the capitalist engine onchain, we’ll accelerate progress, let wilder ideas flourish, and give more people ownership in the upside. 

As everything decentralizes, crypto can push capitalism further towards the edges, diminishing the role of Coase’s firm to the benefit of individuals and entrepreneurs.

Capitalism is good. Capitalism evolves. And I think crypto has a role to play in that evolution. With a valuable enough ideal state, it’s inevitable.

Thanks to Dan for editing!

That’s all for today! We’ll be back in your inbox on Friday with the Weekly Dose before taking next week’s essay off for Labor Day Weekend.

Thanks for reading,


I, Exponential

Welcome to the 414 newly Not Boring people who have joined us since last Tuesday! If you haven’t subscribed, join 210,924 smart, curious folks by subscribing here:

Subscribe now

Today’s Not Boring is brought to you by… Fundrise

Venture capital has finally been democratized. You can now invest like the largest LPs in the world, gaining access to one of the best performing asset classes of the last 20+ years.

Fundrise is actively investing in some of the most prized private tech firms in the world—including those leading the AI revolution.

This first-of-its-kind product has removed nearly all of the typical barriers that gatekeep VC by offering:

  • No accreditation required. 

  • No 2-and-20 fees.

  • No membership fees.

  • The lowest venture investment minimum ever offered. 

Join America’s largest direct-access alternative asset manager and the more than 2 million people already using Fundrise.

Learn More Now

Hi friends 👋,

Happy Tuesday!

Recently, I’ve noticed a lot of chatter about degrowth, the idea that we need to stop growing, consume less, go back to the way things were. I think it’s one of the dumbest and most dangerous ideas there is, modern Malthusianism.

To be clear: I do not think that we should, as Jane Goodall suggested, get rid of ~7 billion people to maybe save the planet. I think we should harness those people’s ideas and efforts to create abundance for even more people, all without harming the planet.

We should celebrate our progress, not for technology’s sake, but because it’s taken the work of billions of people over millennia to get here. So today, we’ll do that.

Dan told me to make this one much shorter, but I kept it long and full of messy details to express just how complex and improbable progress is.

Let’s get to it.

I, Exponential

“There is no reason to regret that I cannot finish the church. I will grow old but others will come after me. What must always be conserved is the spirit of the work, but its life has to depend on the generations it is handed down to and with whom it lives and is incarnated.”

-Antoni Gaudí

There’s a popular complaint, consistently good for thousands of agreeing likes, that because we no longer construct cathedrals, humans just don’t know how to build beautiful things anymore.

“We can’t. We don’t know how to do it.” 

lol. lmao. 

Burj Khalifa, James Webb Space Telescope, NIF Fusion Reactor, H100s, Starship

The implication that we can’t build beautiful things anymore, that we’ve lost our way, that we must reject modernity, is misguided and backwards. 

Certainly, no one person, no matter how insanely gifted, knows how to make anything like the Duomo. No one person knows how to make anything, even a pencil. Collectively, though, we build things that make cathedrals look like simple LEGO sets. 

Exponential technology curves are emergent cathedrals to humanity’s collective efforts. 

I realize that statement might elicit an “I am begging tech bros to take just one humanities course.” But hear me out. 

What makes exponential technology curves so beautiful to me is how improbable and human they are. They require an against-all-odds brew of ideas, breakthroughs, supply chains, incentives, demand signals, luck, setbacks, dead ends, visionaries, hucksters, managers, marketers, markets, research, commercialization, and je ne sais quoi to continue, yet they do.

I’ll show you what I mean by exploring three exponential technology curves: Moore’s Law, Swanson’s Law, and the Genome Law. All three required initial sparks of serendipity to even get off the ground, after which a multi-generational, global, maestroless orchestra has had to play the right notes, or the wrong ones at the right times, for decades on end in order for the sparks to turn into curves. 

I think it’s easy to forget the human element when we talk about technology, or something as abstract as an exponential curve, but that’s exactly what drives technological progress: human effort. As Robert Zubrin put it in The Case for Nukes:

The critical thing to understand here is that technological advances are cumulative. We are immeasurably better off today not only because of all the other people who are alive now, but because of all of those who lived and contributed in the past.

The bottom line is this: progress comes from people.

Exponential technology curves emerge from humanity’s collective insatiable desire for more and better, and from the individual efforts of millions of people to fulfill that desire, often indirectly, in their own way. 

Do you see how unbelievable these curves are? That they’re the Wonders of the Modern World? 

I can tell you’re not with me yet. Hmmmm. What if we just considered the pencil, and worked our way up from there? 

I, Pencil

Do you know how to make a pencil? 

In 1958, Leonard E. Read wrote a short essay, I, Pencil, from the perspective of the “seemingly simple” tool. 

“Simple? Yet not a single person on the face of the earth knows how to make me.” 

The pencil narrator writes of the trees that produce his wood, of the trucks, ropes, and saws used to harvest and cart the trees, and of the ore, steel, hemp, logging camps, mess halls, and coffee that help produce those trucks, ropes, and saws. 

He writes of the millwork and the millworkers, the railroads and their communications systems, the waxing and kilning that go into his tint, into the making of the kiln, the heater, the lighting and power, the “belts, motors, and all the other things a mill requires.” 

All of those things fit into three paragraphs; he goes on for another eight. But I’ll spare you. You get the point: 

Actually, millions of human beings have had a hand in my creation, no one of whom even knows more than a very few of the others… There isn’t a single person in all these millions, including the president of the pencil company, who contributes more than a tiny, infinitesimal bit of know-how.

No one involved in the process knows how to make a pencil, but all contribute to the process, whether they intend to or not. 

If that’s true of a pencil, imagine how little of the know-how that goes into a solar panel or semiconductor is contributed by any single person. It must be measured in nanometers. 

I, Exponential

To tell the story of any exponential technology curve from the perspective of the curve itself would take more than an essay. It would fill a library. The pencil is but a minuscule input into any one of them, and even the pencil’s story is endlessly deep. 

Scaling up to the level of semiconductors, solar panels, and DNA sequencing, each infinitely more complex than a pencil, and introducing exponential improvement into the mix, reveals the man-made miracle that is the exponential technology curve. I’ll tell their abridged stories on their behalf.  

Moore’s Law: Semiconductors

Just as no one person knows how to make a pencil, no one fully understands the intricacies behind semiconductors.

Moore’s Law is the most famous of the exponential technology curves. It states that the number of transistors on an integrated circuit doubles roughly every two years. 

When Fairchild Semiconductor’s Gordon Moore wrote the 1965 paper that birthed the law – the wonderfully named Cramming more components onto integrated circuits – he only projected out a decade, through 1975. 

Chart from Moore’s original article (l) and continuance of Moore’s Law from Our World in Data (r)

Famously, though, Moore’s Law has held true through the modern day. Apple’s M1 Ultra chip packs the most transistors on a single commercial chip – 114 billion – and Intel recently said it expects there will be 1 trillion transistors on chips by 2030. When Moore wrote the paper, there were only 50.
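
Intel’s forecast is roughly what the doubling rule predicts. Treating Moore’s Law as a simple model – transistor counts double every two years – and starting from the M1 Ultra’s ~114 billion transistors in 2022:

```python
# Moore's Law as a toy model: transistor count doubles roughly every two years.
def project(count: float, start_year: int, end_year: int,
            doubling_years: float = 2.0) -> float:
    """Project a transistor count forward assuming a fixed doubling period."""
    return count * 2 ** ((end_year - start_year) / doubling_years)

# Apple's M1 Ultra: ~114 billion transistors in 2022, projected to 2030.
projection_2030 = project(114e9, 2022, 2030)
print(f"{projection_2030 / 1e12:.1f} trillion")  # → 1.8 trillion
```

About 1.8 trillion by 2030 – the same order of magnitude as Intel’s 1 trillion forecast, assuming the two-year doubling holds.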

Today, some are worried that Moore’s Law is nearing its end. How much smaller can we go than 3nm?! Others, including chip god Jim Keller, believe that we still have a lot of room to run. 

Me? If you’re asking me to weigh in on semiconductor manufacturing and the limits of known physics, you’ve come to the wrong place. What I do know is that betting against Moore’s Law has been a losing bet for nearly six decades. 

Moore’s Law has been so consistent that we take it for granted, but its continuation has been nothing short of miraculous. There’s a rich story hidden beneath its curve. 

In Chip War, author Chris Miller tells that story. He specifically calls out the breadth of roles that have contributed to maintaining Moore’s Law: 

Semiconductors spread across society because companies devised new techniques to manufacture them by the millions, because hard-charging managers relentlessly drove down their cost, and because creative entrepreneurs imagined new ways to use them. The making of Moore’s Law is as much a story of manufacturing experts, supply chain specialists, and marketing managers as it is about physicists or electrical engineers.

That’s just one paragraph in a 464-page book full of colorful characters and dizzying details. Like the precision achieved by ASML, the crucial Dutch company that makes the EUV lithography machines used to create transistors. MIT Technology Review explains the mind-bending precision of ASML’s mirror supplier, Zeiss: 

These mirrors for ASML would have to be orders of magnitude smoother: if they were the size of Germany, their biggest imperfections could be less than a millimeter high. 

There are many such stories, each of which is a nesting doll of its own stories. For Zeiss to manufacture such smooth EUV mirrors required a century of progress in fields like materials science, optical fabrication, metrology, ion beam figuring, nuclear research, active optics, vibration isolation, and even computing power – Moore’s Law – itself! 

I asked Claude to come up with some rough napkin math on how many person-hours have gone into the creation and continuation of Moore’s Law. It came up with 5 billion over the past 60 years. When I pushed it to include adjacent but necessary industries, we got to 9-11 billion person-hours across research, engineering, manufacturing, equipment development, materials science, computer manufacturing, software and application development, electrical engineering, and investing.

This is an extremely conservative estimate. If you were to take the I, Pencil approach, you’d need to include the people who mine the quartzite from which silicon is extracted, and all of the people who make all of the things that support their effort, as just one example. You’d need to include the many people over the centuries before 1959 whose efforts led to the first semiconductor in the first place. You’d have to include the teachers who taught the people who drove Moore’s Law forward, and maybe even the people who taught them. Certainly, you’d need to include the consumers and businesses whose demand for better compute and richer applications pulled the industry forward. 

You could play this game ad infinitum, and at the end, all of those person-hours turn into a clean chart that looks like this: 

It’s log scale, so that’s exponential. Our World in Data.

There isn’t a single person in all these millions who contributes more than a tiny, infinitesimal bit to semiconductors’ progress, yet that progress flows so smoothly it’s called a Law. 

Swanson’s Law: Solar Panels

No one person knows how to pull electricity from the sun, either, but we do that, too, to the tune of 270 terawatt-hours (TWh) per year. 

The solar story introduces another fun dimension to our exploration of exponentials: it’s impossible to pull them apart. One curve propels the next.  

Moore’s Law is an important input into Swanson’s Law: the observation that the price of solar photovoltaic modules tends to drop 20 percent for every doubling of cumulative shipped volume.

The first modern solar PV cell was a direct result of research on transistors. In 1954, a Bell Labs scientist, Daryl Chapin, was trying to figure out how to power remote telephone stations in the tropics. He tried a bunch of things, including selenium solar PV cells, but they were just 0.5% efficient at converting sunlight into electricity. Not competitive. As luck would have it, though, a Bell Labs colleague named Gerald Pearson was doing research on silicon p-n junctions in transistors. Pearson suggested that Chapin replace the selenium cells with silicon ones, and when he did, silicon PV cells were 2.3% efficient. After further improvements by the Bell Labs scientists, they achieved 6% efficiency by the time Bell Labs announced its “Solar Battery” in 1954. 

Ever since, progress against Moore’s Law has contributed to progress in solar panel price and efficiency through, among other things, thinner silicon wafers, precision manufacturing from fabs, advanced materials, and computer modeling. Innovation in chips spilled over to cells. 

But semiconductors are just one of many, many factors that have driven down the price of solar PV cells. 

That Bell Labs anecdote above comes from a phenomenal two-part essay by Construction Physics’ Brian Potter: How Did Solar Panels Get So Cheap?  It’s the story behind the curve.

It’s easy to look at a curve like solar’s and simply credit Wright’s Law and learning curves – cost declines as a function of cumulative production – to trust that as we make more of something, we can make it more cheaply. In solar’s case, every doubling of cumulative production leads to a 20% decline in price. You can just plug the numbers into a spreadsheet, and voila! 
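The spreadsheet version really is that simple. Here’s a minimal sketch of Wright’s Law in Python – the 20% decline per doubling is solar’s observed learning rate from the text, but the starting price and volume below are purely illustrative:

```python
import math

def wrights_law_price(initial_price, initial_volume, cumulative_volume, learning_rate=0.20):
    """Price after cumulative production grows, given a learning rate
    (the fractional cost decline per doubling of cumulative volume)."""
    doublings = math.log2(cumulative_volume / initial_volume)
    return initial_price * (1 - learning_rate) ** doublings

# Illustrative numbers: ten doublings of cumulative shipments at a 20% learning rate.
price = wrights_law_price(initial_price=100.0, initial_volume=1.0, cumulative_volume=1024.0)
print(round(price, 2))  # 100 * 0.8**10 ≈ 10.74 — ten doublings cut the price ~90%
```

Voila: a clean curve from three inputs.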

But that approach misses all of the magic. 

After the serendipitous encounter at Bell Labs, itself the subject of the excellent 422-page The Idea Factory, solar cells struggled to find a market. They were just too expensive, even for the Coast Guard and Forest Service. Chapin calculated that it would cost $1.5 million to power the average home with solar PV cells. 

But in 1957, the Soviets launched Sputnik (powered by heavy, short-lived batteries). The Space Race was on. By March of the next year, the US launched the first solar-powered satellite, Vanguard I. 

Vanguard I, the first solar-powered satellite, Wikipedia

Over the next 15 years, the US and USSR launched over 1,000 satellites, more than 95% of which were powered by solar PV cells, which were much lighter than batteries and provided energy nearly as long as the sun shone. As late as 1974, satellites made up 94% of the solar PV market. The cost fell threefold to $100 per watt on the back of satellite demand. 

Without perfectly-timed demand from satellites, solar PV might have been DOA. I trust I don’t need to remind you to consider all of the work, research, and materials that went into getting satellites to space in the first place as an input into solar’s progress. 

But even at $100 per watt, solar wasn’t competitive with other electricity sources for terrestrial use cases. An entrepreneur – Elliot Berman – launched a company to change that. When his Solar Power Corporation failed to raise from venture capitalists, it found an unlikely backer in Exxon. Solar Power Corporation cut costs by “using waste silicon wafers from the computer industry” and using the full wafer instead of cutting off their rounded edges. They made trade-offs – “a less efficient, less reliable, but much cheaper solar PV cell” – and by 1973, they were able to “produce solar PV electricity for $10 per watt, and sell it for $20.” 

I’m getting carried away in the details, I know. This stuff fascinates me. But the show must go on, so I highly suggest you read Potter’s posts (Part I & Part II) for the full story. 

Suffice it to say, it’s a tale of entrepreneurship, government dollars, an international relay race across the US, Japan, Germany, and China, individual installation choices, an ecosystem of small businesses formed to finance and install panels, environmentalism, economies of scale, continued R&D, climate goals, and much, much more. The result is that in 70 years, solar has gone from the most expensive source of electricity to one of the cheapest thanks to the efforts of millions of people. 

There isn’t a single person in all these millions who contributes more than a tiny, infinitesimal bit to solar’s progress, yet that progress flows so smoothly it’s called a Law.

The Genome Law: DNA Sequencing

Exponential technology curves contribute even to our knowledge of ourselves, though not one of us knows how to catalog all of the base pairs that make up the DNA in our own bodies. 

Over the past couple of years, I’ve spent enough time with Elliot and seen enough techbio pitches that this chart is seared in my brain:  

It shows that since 2001, the cost to sequence a full human genome – to read the instructions encoded in the 3.2 billion DNA base pairs inside each person’s body – has fallen from $100 million to $500. Last September, after the chart was published, Illumina announced that they can now read a person’s entire genetic code for under $200, one step closer to their stated goal of $100. 

The 500,000x decrease in 21 years far outstrips Moore’s Law; since 2007, the chart looks exponential even though it’s log scale. The Genome Law is the observation that the cost to sequence the complete human genome drops by roughly half every year.
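You can back the implied annual decline out of the chart’s endpoints with a couple of lines – using the ~$100 million (2001) and ~$500 (2022) figures above:

```python
def implied_annual_factor(cost_start, cost_end, years):
    """Constant annual multiplier implied by a start and end cost over `years` years."""
    return (cost_end / cost_start) ** (1 / years)

# Genome sequencing: ~$100M in 2001 down to ~$500 roughly 21 years later.
factor = implied_annual_factor(100_000_000, 500, 21)
print(round(factor, 2))  # ≈ 0.56 — costs multiplied by ~0.56 each year, i.e. roughly halved
```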

Like semiconductors and solar panels, the story of the gene can fill a whole book. It does: Dr. Siddhartha Mukherjee’s excellent The Gene: An Intimate History. You should read it, too. 

One thing Mukherjee failed to mention, though, is the legend – likely apocryphal, but persistent – that Francis Crick was microdosing acid when he uncovered DNA’s double-helix structure, along with James Watson and Rosalind Franklin, in 1953. You can see it in his eyes, right? 

Watson (left) and Crick (right) with a model of DNA 

Nor did Mukherjee mention the role that LSD played in DNA sequencing three decades later, when Cetus Corporation biochemist Kary Mullis dropped some acid on the drive out to his cabin in Mendocino County, California. 

While road tripping, Mullis had a eureka moment where he envisioned a process that could copy a single DNA fragment exponentially by heating and cooling it in cycles and using DNA polymerase enzymes. When he got to the cabin, he worked out the math and science behind his idea, and within a couple hours, he was confident that polymerase chain reaction (PCR), a way to amplify particular DNA sequences billions of times over, would work. 

“What if I had not taken LSD ever; would I have still invented PCR?” Mullis later mused. “I don’t know. I doubt it. I seriously doubt it.”

Drugs are fun. These anecdotes are fun. But I hope they illustrate a larger point: progress that now seems inevitable was anything but. 

Had Swiss chemist Albert Hofmann not accidentally absorbed the LSD he’d synthesized when researching compounds derived from ergot through his fingertips, we may not understand the human genome today. That he did (and then intentionally took a healthy 250 microgram dose to explore further) is one of many little things that had to go just right in order to sequence the genome. 

Two years after Mullis invented PCR, the US Department of Energy and National Institutes of Health met to discuss sequencing the entire genome. By 1988, the National Research Council endorsed the idea of the Human Genome Project. And by 1990, the Human Genome Project (HGP) kicked off with $200 million in funding and James Watson at its head. At the core of the Human Genome Project’s process was Mullis’ PCR. 

Government funding and coordination drove public and private innovation in sequencing technology. In 1998, an upstart competitor named Celera, led by now-legendary Craig Venter, entered the race, intending to finish the project faster and cheaper using a shotgun sequencing technique. Competition accelerated sequencing, and under political pressure – a startup upstaging the HGP would have been a black eye – the HGP and Celera jointly announced a working draft of the human genome in 2000. 

By 2003, two years ahead of schedule and for a cost of $3 billion, humans had succeeded in mapping the human genome. And then the Genome Law started in earnest. 

The story of genome sequencing’s cost declines will be familiar now. 

Government funding played a role; in 2004, the US National Human Genome Research Institute kicked off a grant scheme known as the $100,000 and $1,000 genome programs, through which they awarded $230 million to 97 research labs and industrial scientists. 

Private enterprise played a role, too. Illumina, which now employs nearly 10,000 people working to bring the cost of sequencing down, acquired a British company named Solexa for $600 million in 2007 to expand into next-generation sequencing. (Even investment bankers have a hand in these exponential technology curves!) Illumina integrated the technology Solexa brought to the table into its product line, leading to the development of widely used sequencers like the HiSeq and MiSeq series, and more recently, the NovaSeq series. 

Illumina HiSeq, MiSeq, and NovaSeq

Look at those machines. Without even considering the really challenging pieces inside, imagine the plastics, touchscreen, and foot pad manufacturers who provided components. Imagine the printer designers from whom Illumina’s industrial designers clearly cribbed notes. I couldn’t do what any one of them does, could you? 

Moore’s Law has clearly played its part – providing the computational foundation for managing and analyzing the large datasets involved in genomics – but the Genome Law has primarily been fueled by domain-specific advancements in biotech and chemistry from scores of academic researchers and industry pros, and by massive parallelization. To understand those, you should read Elliot’s excellent piece on Sequencing.

Today, a number of companies are competing to bring down the cost of sequencing and open up more use cases, which in turn will drive down the cost of sequencing. One startup, Ultima Genomics, came out of stealth last May with an announcement that it can deliver the $100 genome. That’s promising, but remains to be seen. You can be sure, however, that the announcement spurred acceleration among competitors. 

I’m leaving out the work of millions of people across generations, from Mendel to metallurgists, each of whom has played a small but important role. 

There isn’t a single person in all these millions who contributes more than a tiny, infinitesimal bit to the progress in DNA sequencing, yet that progress flows so smoothly it’s called a Law.

When Exponentials Break

You’ll have to excuse me if I’ve gotten a little carried away. Reading back what I wrote, I may have made these curves seem a little too inevitable, a little too Law-like. 

They’re not really laws, though. Not every technology improves exponentially, and even curves that start going exponential aren’t guaranteed to continue. 

Nuclear energy – both fission and fusion – provides counterexamples. 

Fission and Fusion

America first generated commercial power from fission with the opening of the Shippingport plant in 1957. By 1965, nuclear plants generated 3.85 TWh. By 1978, just 13 years later, nuclear power generation had grown 75x, reaching 290 TWh, good for a 40% CAGR. Had that growth rate continued for just another eight years, nuclear would have generated more electricity by 1986 (4,180 TWh) than the United States consumed in 2022 (4,082 TWh). 
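A quick back-of-the-envelope check on those numbers, using only the TWh figures quoted above:

```python
def cagr(start, end, years):
    """Compound annual growth rate between two values over `years` years."""
    return (end / start) ** (1 / years) - 1

start_twh, end_twh = 3.85, 290.0          # US nuclear generation, 1965 and 1978
rate = cagr(start_twh, end_twh, 13)
print(round(rate, 2))                      # ≈ 0.39 — the ~40% CAGR in the text

# Extrapolate eight more years at that rate:
extrapolated = end_twh * (1 + rate) ** 8
print(round(extrapolated))                 # ≈ 4,144 TWh — in the ballpark of the 4,180 quoted
```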

But the growth stopped. Nuclear power generation actually dipped in 1979, the year of the Three Mile Island accident. Three Mile Island often takes the blame for nuclear’s demise, but a confluence of factors – economic, regulatory, and cultural – killed nuclear’s growth. 

Unpacking all of the factors would take an entire newsletter (or podcast series 👀), but for now, what’s important to understand is that continued exponential growth isn’t inevitable once it’s started. It takes will, effort, luck, demand, and a million other little things to go right, which makes it all the more incredible when it happens. 

Our World in Data

Fusion has fallen off, too. As Rahul and I discussed in The Fusion Race, “The triple product – of density, temperature, and time – is the figure-of-merit in fusion, and it doubled every 1.8 years from the late 1950s through the early 2000s.” That progress slowed, however, as the world’s governments concentrated their resources on the ambitious but glacial ITER project.  

But there’s emergent magic in these exponential technology curves, and dozens of privately funded startups are filling in where governments left off and racing to get us back on track. If they do, we’ll shift from the more scientific triple product curve to a more commercial fusion power generation curve, and watch the wonders that unfold when that goes exponential itself. 

Zooming up a level, despite the shortcomings in any one particular curve, what’s remarkable is that all of the curves – technological, cultural, economic – resolve into one big, smooth curve: the world’s GDP has increased over 600x since the dawn of the first millennium. 

Our World in Data

World GDP is an imperfect measure of progress, but it’s a pretty good one. It represents a better quality of life for more people; World GDP per capita is also increasing exponentially, even as the planet supports more human life. In fact, as Zubrin pointed out, “The GDP/capita has risen as population squared, while the total GDP has risen, not in proportion to the population size, but in proportion to the size of the population cubed!” The more people working on these curves, the faster they grow. 

I think that’s what all of the curves represent at the most fundamental level: billions of people over millennia trying to make a better life for themselves and their families by contributing in the specific way they can, and succeeding in aggregate. 

But where are the literal cathedrals? 

Resurrecting Sagrada Família

Exponential technology curves, and even their direct products, are beautiful but hidden. 

Semiconductors are hidden inside of computers. Solar panels are visible but not particularly beautiful. DNA sequencing takes place in a lab. 

These things are awesome, but I understand if you don’t think they’re as beautiful as the Duomo. 

Well, consider the Basilica de la Sagrada Família. 

Sagrada Família, CNN

In 1883, a small group of Catholic devotees of Saint Joseph (Josephites) entrusted a young Catalan architect, Antoni Gaudí, to build them a church.  

Gaudí, as anyone who has been to Barcelona will gladly tell you, was a genius. He was also slow and unconventional. As architect Mark Foster Gage writes in a piece for CNN: 

Gaudí’s vision of the church was so complex and detailed from the start that at no point could it be physically drawn by hand using the typical scale drawings so common to almost all architectural projects. Instead, it was almost entirely constructed through the making of large plaster models to communicate Gaudí’s desires to the army of stonemasons slowly liberating its form from blocks of local Montjuïc sandstone. 

That worked well enough while Gaudí was there to oversee the models and the stonemasons, but in 1926, the architect was hit by a tram car. Dressed poorly and with nothing but snacks in his pocket, Gaudí was mistaken for a pauper and not given proper care. He died within days. 

When Gaudí died, 43 years after taking on the project, Sagrada Família was only 10-15% completed. He did leave behind sketches, photographs, and plaster models that guided builders for the next decade. Those, unfortunately, were destroyed in 1936, during the Spanish Civil War, when revolutionaries burned the sketches and photographs, smashed the models, and desecrated Gaudí’s tomb. 

Over the next four decades, construction snailed along in fits and starts. Gage explains that “Even after nearly a full century of construction, by 1978 there remained numerous aspects of the project that had still never been designed, and even more that nobody knew how to build.”

“We can’t. We don’t know how to do it.” For real. 

In 1979, however, a 22-year-old Kiwi Cambridge grad student, Mark Burry, visited Sagrada Família and interviewed some of Gaudí’s former apprentices. They showed him the boxes of broken model fragments, and offered him an internship. Burry got to work trying to reconstruct the mind of Antoni Gaudí in order to construct the church that lived within. 

At first, Burry tried to hand-draw the “complex intersection of weird shapes, including things like conoids and hyperbolic paraboloids” but he realized that the tool wasn’t up to the task. Gage again: 

Burry realized as early as 1979 that the only tool that could possibly calculate, in a reasonable amount of time, the structures, forms, and shapes needed to solve the remaining mysteries of the Sagrada Família was one relatively new to the consumer world: the computer.

Moore’s Law, baby! 

Gage’s piece from which I learned this story is titled How robots saved one of the world’s most unusual cathedrals. It’s the story of how, after nearly a century of slow progress against one man’s genius design, technologies built over billions of person-hours by millions of people are finally bringing Sagrada Família to life.

Burry brought in software used to design airplanes to solve the otherwise-impossible problem of translating Gaudí’s sketches of bone columns into 3D models. 

Arxiu Mas/Courtesy La Sagrada Familia

In order to actually construct the building, Burry and team hooked their computers up to a relatively new invention, CNC (computer numerically controlled) machines. CNC machines were themselves a product of a number of technological advances in computing power, data storage, electronics, motors, material science, user interfaces, networking and connectivity, control systems, and software designs. All of those curves converged in time for Burry to feed his 3D models into CNC machines that could precisely carve their designs out of stone. 

Today, the team working on Sagrada Família uses a full arsenal of modern technology, from 3D printers to Lidar laser scans, from sensors to VR headsets. The official architecture blog of the Sagrada Família even put together a short video on the role of technology in the building’s construction:

Sagrada Família is an architectural wonder, maybe the greatest in the world. Brunelleschi could not have built it with the tools and labor available when he conceived of the Duomo. Architects and builders have struggled for 140 years to complete it. Burry and team are now racing to complete it by 2026 to mark the centennial of Gaudí’s death. 

Thanks to the exponential technology curves built by billions of people, and Burry’s application of them, “We can. We do know how to do it.” 

That just leaves the question of will and economics. One of the reasons we don’t build churches and stone buildings like we used to is that they just don’t make financial sense. The modern Medicis are funding modern pursuits, like space travel and longevity. 

But there’s cause for hope. Building the next Sagrada Família, and the next, and the next, will get cheaper and faster as the physical world hops onto exponential technology curves, and those curves continue to exponentiate. Modeling will get better, robots will get cheaper and better, 3D printing will get faster and more accurate. 

Eventually, we’ll have the machines that can build the machines that can build Sagrada Familias. 

The claim that exponential technology curves are more beautiful than cathedrals is a bold one, but I stand by it. 

You can’t use cathedrals to make exponential technology curves, but you can use exponential technology curves to make cathedrals. And exponential technology curves continue to expand our capabilities, so that we can build even greater cathedrals, literal and metaphorical, as time goes on. 

Less glibly, we should be able to appreciate the beauty of the past while celebrating the tremendous things humans have accomplished since and working to make the future even better. 

To use the same G.K. Chesterton quote that Read did in I, Pencil: “We are perishing for want of wonder, not for want of wonders.” 

There are wonders all around us. We should wonder at these curves the same way we wonder at the ceiling of the Sistine Chapel, not for the technological progress they represent alone, but because they show us what humans, from the lumberjack to the Nobel Laureate, are capable of. 

Trust the Curves

Let’s wrap this up on a practical note. Exponential technology curves aren’t just beautiful, they’re investible. 

A key pillar of Not Boring Capital’s Fund III thesis is that enough of these curves are hitting the right spot that previously impossible or uneconomical ideas now pencil out. 

Not Boring Capital, Fund III Deck, Slide 2

Understanding and betting on exponential technology curves is the surest way for entrepreneurs and investors to see the future. With each doubling in performance or halving in price, ideas that lived only in the realm of sci-fi become possible in reality. 

We just invested in a company whose “Why Now?” is that they’re building at the intersection of two powerful curves that are about to hit the right point. They’re working on a holy grail idea that others have tried and failed to build, because they tried a little too early on the curves. 

This logic is why many of the best startup ideas can be found in previously overhyped industries. Ideas that were once too early eventually become well-timed. 

One of my favorite examples is Casey Handmer’s Terraform Industries, which is using solar to pull CO2 from the air and turn it into natural gas. The company is an explicit bet on solar’s continued price decline, something that even the experts continue to underestimate by overthinking it, and an effort to bend the curve even faster by creating more demand.  

New companies enabled by new spots on the curve, in turn, create more demand, which sends capitalist pheromones throughout the supply chain that entice people, in their own self-interest, to make the breakthroughs and iterations that keep the curves going. That makes existing companies’ unit economics better, and unlocks newer, wilder ideas. 

On the flip side, curves explain why it’s been so hard to create successful startups that don’t sit at the edge of the curve. As Pace Capital’s Chris Paik explained on Invest Like the Best, “One thing that I think is underexplored is the impact of data transfer speeds as actual why now reasons for companies existing. Let’s look at mobile social networks and the order that they were founded in; Twitter 2006, Instagram 2010, Snapchat 2011, maybe most recently TikTok named in 2014.”

He argued that the most successful mobile social networks are the ones that were born right when they could take advantage of a new capability unlocked by advances in mobile bandwidth: text, images, short clips, full videos. In other words, successful social networks were built right at the leading edge of Nielsen’s Law, which states that a high-end user’s connection speed grows by 50% each year.
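Nielsen’s Law compounds fast. A tiny sketch – the starting speed here is illustrative, not a historical figure:

```python
def nielsen_speed(start_mbps, years, annual_growth=0.5):
    """High-end connection speed after `years` of 50%-per-year growth (Nielsen's Law)."""
    return start_mbps * (1 + annual_growth) ** years

# Illustrative: a 1 Mbps connection around Twitter's 2006 launch vs. eight years later,
# around TikTok's. Enough new headroom to go from text to full video.
print(round(nielsen_speed(1.0, 8), 1))  # 1.5**8 ≈ 25.6 Mbps
```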

The flip side was true, too. Paik made the case that Instagram couldn’t have been successful if it had come after TikTok, and that companies like BeReal and Poparazzi struggled because they are “not strictly more bandwidth consumptive than TikTok.” 

Getting curve timing right isn’t the only factor that determines whether a startup will be successful. There are companies that get the timing just right, but fail for a number of reasons: they’re not the right team, they have the wrong strategy, others build against the same curve they do but better, their product sucks, whatever. There are companies that fail because they think they’re building at the right spot on the curve, but are actually too early. And there are companies that succeed despite not building at the leading edge of the curve. 

But building and investing in technology startups without at least understanding and appreciating exponential technology curves will lead to more heartache and write-downs than is necessary. 

That said, the crazy ones who ignore the curves are important, too. Jumping in too early and overhyping a technology before it’s technically and economically rational to do so is one of the many things that keep curves going against the odds. That internet bandwidth curve might look a lot flatter if telecom companies hadn’t gotten so excited and overbuilt capacity during the Dot Com Bubble. 

It didn’t work out for them, but it worked out for humanity. Even the failures push us forward. 

How beautiful is that?

Thanks to Dan for editing!

That’s all for today! We’ll be back in your inbox on Friday with the Weekly Dose before taking next week’s essay off for Labor Day Weekend.

Thanks for reading,


A Market For (Almost) Everything

Welcome to the 414 newly Not Boring people who have joined us since last Tuesday! If you haven’t subscribed, join 210,924 smart, curious folks by subscribing here:

Subscribe now

Today’s Not Boring is brought to you by… Sandhill Markets

The top blue-chip startups like Stripe, SpaceX, OpenAI, Instacart, and Databricks have created hundreds of billions in value in the private markets. Their shares have been nearly impossible to access for non-institutional investors. 

Sandhill Markets gets you into those deals. 

Sandhill Markets is a “Robinhood for Private Markets” with the Top 25 pre-IPO blue chips – all of the companies above, plus Epic, Discord, Anduril, and Neuralink. And they’re backed by top investors like a16z, Floodgate, and Matrix.

Skip the $100K minimums and 2/20 fees. With Sandhill, you get low $5K minimums, with 0% management fees, 0% transaction fees, and 0% carry.

The first 100 Not Boring readers who click get a discounted monthly fee of just $99 (down from $149). Claim here.

Learn More

Hi friends 👋,

Happy Tuesday! We’re in the thick of August, so I hope this email finds you either on a beach or working on something you don’t want to stop working on. 

In either case, I hope you have better things to do than read a long newsletter. 

Plus, yesterday was Puja’s birthday – happy birthday Puj!! So I had better things to do than to write one. 

So let’s keep it short and give you something to chew on on the beach. 

Let’s get to it.

A Market for Almost Everything

Used to be, a man wanted something to eat, he grew it or killed it and cooked it for himself. 

Millennia ago, there was no market for food, no buyers or sellers. A man couldn’t, at the end of a long day, say, “Let’s just order tonight.” Couldn’t even go to a grocery store to get the ingredients and cook it himself. There was no price he could pay for a meal but his own direct labor. 

Today, we do this: 

Look, it was Puja’s birthday and the kids were up so I couldn’t run out… 

That’s a real screenshot from my phone yesterday, and look, I fully admit $29.06 is an absurd amount of money to pay for a dozen donuts, but I had a choice:

  1. Not get the donuts. 

  2. Spend half an hour wrangling the kids into the double stroller, walking to Dunkin’, ordering the donuts, and carrying them back home. 

  3. Just order the donuts and pay the delivery fee, service fee, and tip. 

It was Puja’s birthday and she loves Dunkin’ Donuts, so option 1 was off the table. I chose option 3. 

As much as people love sharing screenshots of food delivery fees in horror, there’s something kind of great about them. 

There’s now a market – I can pay some price to get what I want. It’s legible – my time was clearly priced at the $9.76 fee + tip bundle. And it’s liquid – as soon as I decided I wanted the donuts, there was someone willing to get them for me for that $9.76, in less time than it would have taken me. 

Something similar is happening, has happened, and will continue to happen everywhere you look: donuts, dating, stocks, jobs, revenge. 

It’s like a law of physics, or at least a law of capitalism: 

Markets emerge for almost everything and become more liquid and legible over time. 

Take views. On Friday, I was sitting there minding my business, when I got an email from X:

The company formerly known as Twitter paid me $3,523.35 for tweeting, something I’ve been addicted to doing, for free, for years. That’s like a pack-a-day smoker waking up one morning to a big check from Marlboro simply for smoking a pack a day. (Tweeter beware: that does actually happen sometimes, but when it does, it’s because Marlboro gave you cancer.) 

Twitter has always been very valuable to me. Not Boring and Not Boring Capital probably wouldn’t exist without it. But that value was indirect: 

  • Find an idea on Twitter, write about it, get more subscribers, sell more sponsorships. 

  • Or, newsletter shared on Twitter, more people subscribe, sell more sponsorships. 

  • Or, tweet something insightful, impress a potential LP who invests in the fund. 

  • Or, tweet something about an industry I’m exploring, find a founder we invest in. 

I couldn’t have told you how much a view was worth. It wasn’t legible. And I couldn’t convert views directly into dollars. It wasn’t liquid. 

With Creator Payouts, it’s both. Roughly 15 million views at $0.0002 per view, paid directly into my account. 
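The legibility is literal: divide the payout by the views. (The 15 million view count is approximate, which is why the implied rate lands near, not exactly on, $0.0002.)

```python
payout_usd = 3523.35        # the Creator Payout from the email above
views = 15_000_000          # approximate view count

per_view = payout_usd / views
print(round(per_view, 6))   # ≈ 0.000235 — roughly the $0.0002 per view quoted
```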

That combination – legibility and liquidity – is as alluring as it is dangerous. It can tempt you into short-term moves at the expense of long-term ones. 

In The Great Online Game, I wrote that, “Medium-level Twitter is Threads and engagement hacks. Twitter Mastery is indistinguishable from an ongoing game.”

Now, medium-level Twitter, the threads and engagement hacks, gets you paid. There are playbooks you can follow and accounts you can copy to build a following and produce mindless view fodder as predictably as a delivery person can make $9.76 for bringing me donuts. 

You do so at the risk of getting stuck on medium, polishing away exactly the unique and weird curiosities and obsessions that might attract the other people with the same weird curiosities and obsessions. And you do so at the risk of being labeled, subconsciously, as an engagement hacker by the very people whose engagement might matter most to you. 

As markets get more liquid and legible, it’s too easy to be mid, to be the rat in Steve Cutts’ Happiness: 

But that’s not you! Resisting the temptation is more valuable now, too, because it’s harder. 

As people contort themselves to please the algorithm for short-term payouts, there’s even more of the less legible, less liquid, but ultimately more important long-term value up for grabs for you if you continue to just be yourself and talk about whatever lights your fire, algorithm be damned.  

Don’t get me wrong. I think it’s great that Twitter is sharing revenue with the creators who drive it, and I’ll happily take the money. I got donuts to buy. And I love that some people are making more money from tweeting than they do in their day job; what a world! 

It’s just that Twitter is an easy toy example to make a bigger point: 

It’s useful to be aware of the trade-offs and incentives pulling at you as (almost) everything turns into a liquid and legible market.

Liquid and legible markets are, on balance, a great thing. They’re a sign of progress’ march. 

The market for sending things to space is now liquid and legible. The market for compute is now liquid and legible. Even the market for intelligence is becoming more liquid and legible. 

Nine times out of ten, the right move is to simply and gratefully take advantage of that legibility and liquidity. Buy a SpaceX rideshare instead of building your own rocket. Use AWS instead of racking your own servers. Plug in OpenAI’s API instead of building your own foundation model. Hell, go farm views on Twitter to make the money to do that thing you really want to do, that only you can do. 

If the liquid and legible market is in something that’s not core to who you are or what you want to do, by all means, squeeze that liquidity and legibility for all they’re worth. If paying someone $9.76 to pick up donuts means I get to spend 30 minutes writing or playing with my kids, I’ll make that trade all day. If Twitter wants to pay me to do exactly what I’d do anyway, awesome. 

Just beware of the temptation to take the liquid and legible path ten times out of ten. 

As markets become more liquid and legible, beta improves while alpha erodes. The average person in those markets is better off, but it’s harder to outperform. It’s easier than ever to make $1,000 by tweeting threads, and harder than ever to stand out as unique by doing so. 

Know what you want to stand out for, how you’re different, and what you can do that no one else can – something that can’t be captured by an algorithm or simple market signals, at least not yet. 

There’s a market for almost everything. 

The most valuable things will always be the ones that are hardest to price.

Thanks to Dan and Puja for editing!

That’s all for today! We’ll be back in your inbox on Friday with the Weekly Dose.

Thanks for reading,


Sci-Fi Idea Bank

Welcome to the 1,216 newly Not Boring people who have joined us since last Tuesday! If you haven’t subscribed, join 210,510 smart, curious folks by subscribing here:

Subscribe now

Today’s Not Boring is brought to you by… Percent

The alternative asset you probably haven’t yet considered: Private credit.

While the traditional 60/40 portfolio has fallen short over the past few years, private credit is currently offering double digit yields to investors, while being less-correlated to the market. 

It’s no wonder Blackstone’s billionaire president Jonathan Gray, discussing private credit last month, stated: “I would say whenever you can get equity-like returns taking debt-like risk, that’s something you should do.”

While institutional investors have been loading up on private credit for years, this asset class has largely been out of reach for main street investors.

But that’s changing thanks to Percent – a frontrunner in private credit investments.

Percent offers exclusive private credit deals previously out of reach to most investors. If you’re an accredited investor, you get access to:

  • Attractive Yields: Percent’s current weighted average yield is 18.32% as of July 31, 2023

  • Liquidity: Many deals mature in months, not years, with some having liquidity available after just the first month

  • Recurring income: Generate passive income during the lifetime of the deal

  • Low Minimums: Invest as little as $500 to start

Not Boring readers can receive up to a $500 bonus with their first investment. 

Sign Up Now

Hi friends 👋,

Happy Tuesday!

Back in 2021, I tweeted, “The only way to not be totally flummoxed by everything that’s going on right now is to have read a lot of sci-fi.” That’s even more true now than it was then.

If you don’t read much sci-fi, the speed of technological progress can seem overwhelming. There are so many new things, and they’re coming so quickly.

If you do read a lot of sci-fi, though, your reaction to each new announcement might be: “What took you so long?”

One way to look at progress in tech is that we’re just working our way through ideas that sci-fi writers came up with a long time ago. So a bank of those ideas might be a roadmap to the future and a compass for those who want to build the sci-fi futures we were promised.

Let’s get to it.

Sci-Fi Idea Bank

Over the weekend, I built a spreadsheet with 3,567 sci-fi ideas, a Sci-Fi Idea Bank, using the best website on the internet and a team of AI research assistants. I hope it can be a starting point for people who want to bring sci-fi to life, because:

Ideas for new technologies almost always appear in sci-fi before they show up in real life. 

Not the exact ideas, of course, but science fiction writers are astonishingly good at sketching the outlines of technologies that will only become possible decades or even centuries in the future. 

Did you know, for example, that Jonathan Swift foresaw 3D modeling, search engines, biofuels, and even floating rocks in Gulliver’s Travels, way back in 1726? 

Or that Ephraim Chambers wrote about humanoid robots nearly 300 years ago in 1727’s Cyclopaedia?

Or that John Jacob Astor IV, that John Jacob Astor IV, the richest passenger on the Titanic, the “Astor” in “Astor Place,” wrote about rooftop windmills for energy, security cameras, traffic cameras, regenerative braking, electric cars (complete with recharging stations), hydrofoil boats, spaceships, and airlocks in his 1894 book, A Journey in Other Worlds?

When Jeff Bezos walked through the logic behind selling books online in this legendary 1997 interview, he looked insanely prescient. But H.G. Wells imagined e-commerce 98 years earlier in When the Sleeper Wakes (and remote work, handheld video players, a proto-internet, and more). 

It’s hard to find an example of a tech company whose product didn’t appear in sci-fi first. 

Over the weekend, I asked ChatGPT to make a table of the most valuable tech companies, plus a handful of hand-selected ones, and where the technology they built first showed up in sci-fi. 

It’s not perfect – even some of the books’ published years are wrong – but it’s directionally correct. The seemingly novel ideas behind the biggest tech companies show up in sci-fi before they’re brought into reality. 

When you think about it, of course they do. It’s easier to write something down than it is to build it. Ideas precede reality. 

When a sci-fi writer comes up with an idea, it floats around in latent space, waiting to be pulled down when the necessary underlying technologies finally exist and hit their sweet spot on the cost and performance curves. When the time is right, an inventor or entrepreneur grabs it and tries to wrestle it into the real world. 

Viewed another way, sci-fi is a goldmine of ideas for startups.

There are thousands of ideas waiting to be built when the time is right. And as AI explodes, energy becomes more abundant, space becomes cheaper to access, DNA becomes cheaper to sequence, breakthroughs occur, and any number of curves continue to compound, more ideas, pre-vetted in narrative, get unlocked. 

I think this explains why everyone got so excited about LK-99. 

I’m sure that a handful of materials scientists and physicists genuinely cared about room-temperature ambient-pressure superconductivity itself, about the science and the decades-long pursuit in nondescript labs across the globe. Most of us, though, cared for a different reason: room-temperature superconductors unlock a new batch of sci-fi ideas.

With room-temperature superconductors, so many ideas locked in the pages of sci-fi novels could be brought to life: 

  • High-speed mag-lev trains

  • Desktop quantum computers

  • Compact, affordable fusion reactors

  • Lossless long-range power transmission

Superconductors or not (and it’s looking like not, for now), we’re bringing sci-fi ideas to life faster than we have in half a century. SpaceX is the most valuable startup in the US. San Francisco just approved Cruise and Waymo for 24/7 service. Varda is manufacturing drugs in space. Nuclear energy’s approval rating is the highest it’s been in a decade and fusion companies are racing to commercialize the power of the sun. All of these companies’ products were predicted in sci-fi. 

There are many such sci-fi technologies, most of which have yet to be built. 

Luckily, after I tweeted that sci-fi table, Dan Jeffries replied with a link to the goldmine: Technovelgy.

I don’t know how I’d never heard of Technovelgy before. Bill Christensen, who runs the site, is maybe the most criminally underfollowed person on Twitter, with only 984 followers on @technovelgy. The website is a crazy labor of love: over 3,000 ideas from four centuries of sci-fi novels – Gravity Neutralizing Disks, Telelubricator, and Multivac, to pick three at random – with quotes and explanations. 

Once I found it, I got obsessed. I spent more time than I care to admit this weekend on a combination of Technovelgy, ChatGPT, Claude (x2 accounts), Excel, and Google Sheets. I wanted to make a Sci-Fi Idea Bank.

The Sci-Fi Idea Bank

And it actually kinda worked! You can check it out and play with it yourself.

Sci-Fi Idea Bank

The Sci-Fi Idea Bank is a spreadsheet of 3,567 sci-fi ideas lovingly pulled from Technovelgy, updated to include whether they’ve been built yet, and if so, when, by whom, and what the product is, plus some additional notes and whether they’re mainly bits or atoms. 

There are some interesting stats. Like: 26% of ideas have been built, and ideas involving bits (32.4%) were more likely to come true than those involving atoms (23.7%).  Or: Philip K. Dick is the GOAT, with 152 ideas on the list, 52 of which have been built. Or: 1931 was a particularly rich year for sci-fi ideas, leading the list with 152 ideas, 52 of which have been built.  I bet you could use Code Interpreter to slice and dice the data in more interesting ways:

  • Break them down by category – my hunch is a disproportionate number involve space. 

  • Measure the idea/reality gap – how long between idea and reality? Is it different for bits and atoms ideas? 
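Those cuts can be sketched in a few lines of pandas (the column names and toy rows here are my assumptions about the sheet’s layout, not its actual headers):

```python
import pandas as pd

# Toy rows standing in for the real Sci-Fi Idea Bank export.
# Column names are assumptions about the sheet's layout, not its actual headers.
ideas = pd.DataFrame({
    "Idea": ["Orbital Factory", "Humanoid Robot", "E-commerce", "Farcaster"],
    "Category": ["Space", "Robotics", "Commerce", "Space"],
    "BitsOrAtoms": ["Atoms", "Atoms", "Bits", "Atoms"],
    "YearImagined": [1976, 1727, 1899, 1989],
    "YearBuilt": [2023, None, 1995, None],  # None = not built yet
})

# Share of ideas built, broken down by category and by bits vs. atoms
built = ideas["YearBuilt"].notna()
by_category = built.groupby(ideas["Category"]).mean()
by_type = built.groupby(ideas["BitsOrAtoms"]).mean()

# Idea/reality gap: years between imagining and building, for built ideas
ideas["Gap"] = ideas["YearBuilt"] - ideas["YearImagined"]
print(by_type)
print(ideas.loc[built, ["Idea", "Gap"]])
```

Pointed at a CSV export of the real sheet via `pd.read_csv`, the same few lines would answer both questions.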

Filling in the spreadsheet took a process straight out of the pages of sci-fi itself. After fumbling around with ChatGPT (and the BrowserOp Plugin and Code Interpreter) for far longer than I care to admit, I figured out a way to get my AI interns to fill out the sheet.

If you want to read about how I did it, check out Building the Sci-Fi Idea Bank.

Note: this is still not perfect by any means. There are still formatting issues. Claude missed a bunch of things. There are some things that have been made that it didn’t catch, and I didn’t catch nearly all of its misses. It gave broad credit instead of specific companies and products in a lot of cases. It didn’t categorize bits and atoms the way I would have in some cases. But it’s a start. 

I’m going to keep cleaning it up. I locked the main sheet, but added a “Hivemind” sheet that you can edit if you want to help update it, correct it, add color, or highlight the startups working on certain sci-fi ideas. You can also copy and extend it in any way, and use it to bring sci-fi to life.

Sci-Fi Idea Bank as Treasure Map

That last part is the most exciting. If you want, you can take this Idea Bank and add color and context to it. 

I think one of the most interesting extensions would be to figure out which underlying technologies are required for each idea, how far along they are in their development, and how economically feasible they are. Technical specs and roadmaps for sci-fi ideas. 

Casey Handmer is working his way through Kim Stanley Robinson’s Mars Trilogy and adding technical commentary chapter by chapter. Something like that. 

If history is a guide, there are billion-dollar startups hiding all over the Sci-Fi Idea Bank. 

Varda is a perfect example. In June, I wrote, “When you say the Varda concept out loud – space factories making drugs in orbit – it sounds crazy, or at least sci-fi, like one of those things that could maybe happen in the future but surely not now.” 

Turns out, it was sci-fi.

Jerry Pournelle wrote about an Orbital Factory in 1976. 


Arthur C. Clarke wrote in 1978 that “a continuous pseudo-one dimensional diamond crystal… can be mass-produced only in the orbiting factories.”


And William Gibson imagined automated space factories in 1988. 


While governments experimented with space manufacturing starting with Skylab in the 1970s, it would take 47 years from Pournelle’s orbital factory complex and 35 from Gibson’s automated space factories for Varda to launch the first automated commercial space factory in 2023.

In order for that to happen, the technological and economic infrastructure had to catch up with sci-fi. SpaceX brought down the cost of launches. Rocket Lab let companies buy satellite buses practically off the shelf. NASA developed a thermal protection shield perfect for the reentry capsule. The International Space Station proved out drug crystallization in microgravity.

And it can happen now. In his 2019 book, The Case for Space, Robert Zubrin recognized that the tech and economics might finally make sense for space manufacturing. Delian read the book, realized it was time, and four years later, Varda made drugs in space. 

It’s not just Varda. Startups constantly pull ideas from sci-fi into the real world.  

Prophetic is trying to bring Peter Watts’ Lucid Dreamer to life. Pipedream is working to make Miles J. Breuer’s Pneumatic-Tube Zone a reality. AstroForge would make a dozen sci-fi writers look prescient. Think of your favorite frontier tech startup and search the list for the idea it’s working on – I bet it’s there. 

And there are plenty of ideas left. Find your favorite sci-fi idea, figure out what needs to be true for it to become possible, then track the progress of those things until it’s time. If you wanted to, you could even use ChatGPT or Claude to get started on a technical details doc, then track from there. Soon, you should be able to task an AI agent with scanning Elicit and informing you when relevant progress or breakthroughs occur. 

If you do make that doc, link it back in the Hivemind sheet. Or keep it to yourself and go start a company that makes sci-fi reality. 

When the time is right, you’ll still have to do the hard work of building a Hard Startup – you’ll have to come up with a strategy, build a team, build a product, sell it, and dig moats. The more time I spend with sci-fi companies, the more I realize that the hardest part is all of the unsexy stuff. But at least you’ll have a head start. 

All of the 3,567 ideas on the list started out as just ideas, and crazy sci-fi ones at that. To date, 936, give or take, have been brought to life. That leaves 2,631 to build. 

Sci-Fi Idea Bank Stats

Some ideas might have been beaten to the punch by a better one. Jane Webb Loudon’s 1828 idea for a Mail-Post Letter-Ball, “a system of sending mail quickly from town to town via steam-cannon-powered hollow spheres,” is probably no longer worth building now that we have email. 

Some might be impossible. At least 20 of the ideas involve faster-than-light travel or communications, which feels hard. We’re probably not going to get the Farcasters from Dan Simmons’ Hyperion series, but at least Farcaster makes a great startup name. 

But hundreds of them will become feasible at some point, and I hope that this sheet can be a helpful starting point for those who want to build the future with the ideas of the past. 

In Where is My Flying Car?, J. Storrs Hall found that, of the technologies that sci-fi writers and futurists predicted in the 1960s, the ones that had come true were the ones that consumed less energy (mainly bits-based), and the ones that hadn’t were the ones that consumed more (mainly atoms-based). 

J. Storrs Hall, Where is My Flying Car?

Hall wrote that the empty top right half “represents the futures we were promised but were denied.” 

It’s time to build the futures we were promised.

Thanks to Dan for editing and to Bill Christensen for making Technovelgy!

That’s all for today! We’ll be back in your inbox on Friday with the Weekly Dose.

Thanks for reading,


WTF Happened In 2023?

Welcome to the 797 newly Not Boring people who have joined us since last Tuesday! If you haven’t subscribed, join 209,294 smart, curious folks by subscribing here:

Subscribe now

Today’s Not Boring is brought to you by… Pesto

Pesto is the easiest way to recruit and hire high-quality developers in emerging markets. Through its plug-and-play platform, you can quickly source, recruit, and hire affordable top-tier developers. Pesto wades through the noise so you don’t have to. 

Pesto becomes your local team to:

  • Source the right talent for your company

  • Recruit and match developers to meet your needs

  • Hire based on local salaries and regulations

Pesto moves fast – you can go from demo to developer in less than two weeks. The platform is already used by dozens of enterprise clients and startups like Alloy, Pulley, and Snorkel AI to scale their engineering teams.

Give them a shot & enjoy a 25% discount on your first hire when you mention “Not Boring”.

Request Demo

Hi friends 👋,

Happy Wednesday! Apologies for getting this one out a day late — I was a little under the weather but back at it and feeling good.

Every week in the Weekly Dose of Optimism, we highlight five stories of the incredible things that humans pull off. The Weekly Dose started as a small fight back against the prevailing sense of pessimism in the air that I described in Optimism.

Recently, though, Dan and I have noticed the winds shifting. There’s a growing sense of optimism beyond this little newsletter. We are so back.

Let’s get to it.

WTF Happened in 2023?

When economic historians in 2073 look back on the past fifty years of growth and abundance, they’ll pull up a bunch of charts, and one question will jump out at them: WTF Happened In 2023?

Maybe I’m drunk on a cocktail of optimism and technological progress, but it feels like 2023 is the turning point: from Stagnation to Acceleration.

The last big turning point is clear in hindsight, so clear that there’s a whole website dedicated to a single year: WTF Happened in 1971? 

It’s just dozens of graphs showing that things kept getting more awesome until 1971 and then stopped getting more awesome after 1971. 

Economist Tyler Cowen calls the period since 1971 The Great Stagnation, a half-century slowdown in productivity and innovation. 

It wouldn’t have been obvious in 1971 that things were going to slow so dramatically. The data might have looked a little wobbly, but that happens year to year. There might have been clues, though, if you were looking: the dollar depegged from gold, regulation was increasing, America was losing in Vietnam, growth fell out of vogue. You might have picked up a shift from optimism to pessimism, which trickled into policies, corporate decisions, and then, years later, into the data. 

I think we’re at a similar but opposite point in 2023. We won’t be able to see it in the data for a while, but you can already feel it in the air. It’s just wisps, and I’m admittedly cherry-picking, but the wisps are everywhere. 

We’re shifting from indefinite pessimism and insanity to what I’ll call pragmatic optimism – a balanced, realistic optimism focused on achievable progress. Over the past 18 months, prevailing sentiment swung from irrationally pessimistic, based on how bad we heard everything was, to pragmatically optimistic, based on our lived experience that it wasn’t. Social media amplified and spotlighted extreme views on both ends, and now we’re settling somewhere more sane. Reality is winning, and reality is pretty good, actually. 

At the same time, we’re in a period of technological acceleration. We talk about it all the time here: AI finally works, the economics of space pencil out, fusion startups are making progress, renewable energy is booming, and now we might even get room-temperature superconductors. Optimism shapes reality, and an optimistic foundation is necessary to leverage instead of impede the benefits of these technologies. 

In short: it’s a perfect storm of sentiment shift and technological progress that could mark the end of The Great Stagnation and the beginning of The Great Acceleration. 

I know, I know, when you have a hammer, everything looks like a nail. Last summer, in Optimism, I argued that: 

  1. We’re more pessimistic than we should be.

  2. Optimism shapes reality.

  3. The downside to optimism is limited, the upside is uncapped.

So of course I would overindex on optimism as the kindling for The Great Acceleration. But I really do think optimism is a necessary precondition, and there are signs that pragmatic optimism is both back and shaping reality. I’ll give five examples: 

  1. The Dimon Speech

  2. Consumer Sentiment and the Economy

  3. Infrastructure and Manufacturing

  4. Nuclear Energy

  5. Superconductors

They’re just wisps at this point, but taken together, they begin to paint a picture. 

The Dimon Speech

The first time my Spidey Sense really started to tingle was when, right around the 4th of July this year, a video of JP Morgan CEO Jamie Dimon speaking at The Economic Club went viral:

Dimon’s was the most unabashedly optimistic message about America’s position from any major leader in a long time, the antithesis of Jeff Daniels’ rant against American Exceptionalism in the opening episode of The Newsroom. People cheered the message and called for a Dimon 2024 Presidential run. 

He might run, no idea, although it seems unlikely this late in the game. But one thing is for certain: this video was not part of a 2024 Presidential campaign. This video was from 2016. 

Jamie Dimon at The Economic Club of Washington, D.C., September 12, 2016

That people received a patriotically optimistic rant from a Big Bank CEO enthusiastically was telling on its own, but the fact that the video floated around the internet ether for seven years until people were ready for it is fascinating. In 2016, Americans wanted to Make America Great Again; in 2023, they celebrated the fact that, despite its issues, it’s already pretty great. 

One reason we may have been more receptive to that message in 2023 is that, in the face of what we were told would be a brutal recession, the American economy stood strong. 

Consumer Sentiment and the Economy

Pessimists and extremists overplayed their hands during periods of uncertainty and are being exposed; now optimism and sanity are taking root. 

To be fair, permabulls like me got too excited during the COVID bull run, which invited macro bears to gleefully predict that the world was going to fall apart once the Fed started raising rates. Consumer sentiment sank to an all-time low of 50 last June as people bought into the doom. 

And then… the economy didn’t fall apart. It’s doing pretty great, actually, as Noah Smith laid out on Monday. Real GDP growth hit 2.4%. Inflation cooled to 3%. Age-adjusted employment is the best it’s ever been. 

As a result, consumer sentiment rebounded sharply to 71.6 in July – a 39% jump, the fastest growth in sentiment since 1984. 

In absolute terms, 71.6 is still low, well below the 85.6 average over the 70-year life of the UMich survey, but if we’re looking for shifts in sentiment, that relative improvement is an important signal. People can feel the momentum. 

We’re in no way out of the woods. The data could turn. Fitch just downgraded US debt to AA+. But as it stands, it’s almost as if we simulated the worst case scenarios virtually and came back more confident in the real world. On the ground, while there are weaknesses in real estate and household wealth, and while a recession may still hit, right now jobs are plentiful, wages are growing, inflation is cooling, inequality is shrinking, and manufacturing is booming. 

Infrastructure and Manufacturing

Speaking of building things… one of the biggest knocks against America is that we’ve forgotten how to build big things fast. That’s a well-deserved criticism and both a symptom and cause of our stagnation. As I wrote in How to Fix a Country in 12 Days, “Americans built the Empire State Building in just 410 days; NYC’s Second Avenue Subway project, California’s High-Speed Rail, and Boston’s Big Dig have missed their timelines by decades and their budgets by tens of billions of dollars.” 

But that same piece highlighted a tiny glimmer of hope: the I-95 Rebuild. For 12 days, people were captivated by Philadelphia’s effort to rebuild a damaged stretch of highway as quickly as possible. 

Look, it was just a temporary fix to a piece of road less than half a football field long. It’s easy to imagine a version of the world in which the project was met with bemused cynicism, but people were just genuinely enthused. Americans want to live in a country that builds things again.  

The Philadelphia Inquirer

On its own, it was a fun diversion, but the I-95 rebuild came against the backdrop of a country that’s trying to prove to itself that it can build again. Construction spending on manufacturing has shot up to record highs this year, spurred by the CHIPS Act and Inflation Reduction Act. 

The building debate seems to have moved from a high-level battle over whether we can or should build big things again into the nitty-gritty details: permitting reform, job training, immigration. 

This is what I mean by pragmatic optimism: excitement coupled with a focus on the specific challenges we need to overcome to build the future that we want. 

Nuclear Energy 

In nuclear energy, too, the conversation is shifting to focus on specific challenges to tackle. Nuclear’s resurgence signals two healthy things: 

  1. Recognition that more energy is good, a premise contested during The Great Stagnation.  

  2. Fact-based instead of ideological analysis of where to get that energy. 

Certainly, the War in Ukraine played a role in bringing sanity back into the conversation. Germany’s hypocrisy – favoring Russian gas and dirty coal over clean, abundant nuclear – forced people to reexamine their assumptions. When they did, nuclear came out looking good. 

In a recent Pew poll, nuclear power was the only energy source that saw growing support from both Democrats and Republicans. 

Now, while there’s still work to be done to counter the damaging effects of the anti-nuclear disinformation campaign that persisted throughout The Great Stagnation, the most pressing challenges are more practical: we want more nuclear power, but it’s too slow and expensive right now. 

Yesterday, Georgia’s Vogtle Unit 3 reactor, the first newly constructed nuclear unit in the US in over 30 years, began commercial operation. That’s a huge win, but it also highlights the need to get faster and cheaper. It arrived seven years late and $17 billion over budget. 

The future of nuclear power rests on the practical instead of the ideological. How can we untangle the messy web of regulation, financing, and construction to build new nuclear capacity quickly and cheaply? What role can small modular reactors and other advanced reactor designs play? These are questions that a pragmatically optimistic society can tackle. 


Superconductors

Then, of course, there are the superconductors, which are simultaneously a product of pragmatic optimism (the Race to Replicate) and, if replicated, a driver of pragmatic optimism (how can we manufacture LK-99 at scale and what can we build with it?).

If LK-99 is legit and we can manufacture it at scale (big ifs!), it does seem to be the kind of technological breakthrough that could unleash a Great Acceleration. This thread has 500 replies with the things people are “most stoked for if LK-99 turns out to be a room temp superconductor.” 

The prediction market Manifold currently puts the odds of replication at 34%, but whether or not it replicates, the pragmatic optimism on display has been phenomenal. 

I think this superconductor experience is something that could only happen in the current environment: pragmatic optimists connected across the globe, watching science experiments like sports, excited but waiting for a bunch of national and academic labs and assorted internet geniuses to replicate before getting carried away. As Amjad put it

I’ve had a version of this piece in my drafts for a month, long before I’d ever heard the words “room-temperature superconductor” strung together. Instead of the prime mover of backness, it feels like the Race to Replicate is a confirmation that things really are shifting. 

Again, it’s just a wisp. But against the drumbeat of declining trust in institutions, it’s beautiful to see people around the world say, “Fuck it. We’ll do it ourselves.” 

Growth is back on, but The Great Acceleration will be a different flavor than pre-1971 growth. 

The Gains from Truly Mobilizing the Internet

In a 2020 follow-up to his Great Stagnation book, Tyler Cowen wrote a blog post titled What might an end to the Great Stagnation consist of?. He concluded with this observation: “The gains from truly mobilizing the internet may in fact right now be swamping all of the accumulated obstacles we have put in the way of progress.” 

I’m not sure if this is how he meant it, but I think the internet has played a crucial role in reigniting optimism, even if the path there was a bit painful. It amplified the craziest views, and I think we’re starting to realize that most of us are happier somewhere in the middle. It helped spread doom and gloom, forcing a confrontation with reality – a confrontation that reality is starting to win. It touches each of the wisps I covered in this essay, in direct or indirect ways. And it made the global race to replicate a non-peer-reviewed paper both possible in the first place and a captivating live event. 

The internet also enables a healthy, self-policing optimism – the hive mind swarms overly grandiose or insufficiently specific claims. It’s worth highlighting your own obstacles and inviting others to help solve them before the internet identifies them for you and labels you a fraud. 

In Working Harder and Smarter, I argued that the last fifty years, spent developing software and services at the expense of Total Factor Productivity, weren’t a period of stagnation at all, but a necessary precondition for the accelerated progress we can make when we combine innovation in bits and atoms. A year later, we’re starting to see signs that that may be true. 

After decades of dealing with the instability of a shift from centralization to decentralization, it feels like we’re settling into a more stably decentralized world. Instead of waiting for peer review, people are replicating themselves. Instead of slowly pushing infrastructure projects through bureaucracy, politicians are rallying support online. Instead of relying on potentially hostile nations for energy and manufacturing, countries are building capabilities at home.

My evidence is just wisps and hints. A more fact-based argument could be made that we are not so back, that it really is so over. The superconductor might not replicate; current odds are that it won’t. AI could be regulated to death or cause real harm. A recession may still come. Polarization is still a thing. Governments and institutions are slow to shift, even when people want them to.  Progress could increase inequality and lower happiness, since we so often measure happiness on a relative basis. 

In fact, there’s a good chance that if you polled people today, most of them would think that I’m crazy for suggesting that optimism is on the rise. Maybe my brain’s been warped by the internet, or the X algo has just gotten really good at showing me what I want to see. 

My hunch, though, is that things look particularly bleak before the shift, just like things looked particularly rosy in 1970. Those wisps, and many that we haven’t covered – advances in biotech, crypto, education, space, defense; we didn’t even talk about AI! – are early clues that things are shifting. Maybe you feel it. 

The Great Acceleration is not inevitable. Cruft built up over the last fifty years of stagnation threatens to jam the gears of progress. Regulations are harder to tear down than they are to build up. We’re already facing a shortage of skilled laborers needed to build the things we’re building now – like chips and houses – and we’re going to need a lot more: nuclear engineers, machinists, materials scientists, craftspeople. We need politicians who reflect the vigor and drive of the people. We need to demand excellence of ourselves. 

There are a million details to tackle, but I think we have the momentum to tackle them.

If we get it right, future generations will look back admiringly and ask, WTF Happened In 2023?  

Thanks to Dan for editing!

That’s all for today! We’ll be back in your inbox on Friday with the Weekly Dose.

Thanks for reading,


In Defense of Strategy

Welcome to the 579 newly Not Boring people who have joined us since last Tuesday! If you haven’t subscribed, join 208,497 smart, curious folks by subscribing here:

Subscribe now

Today’s Not Boring is brought to you by… Slack

What’s the first thing founders do when starting a company? Buy a website domain? File incorporation docs? Add “Stealth” to their LinkedIn?

Nope. The first thing founders do is start a new Slack Workspace.

Why? Because Slack is where thoughtful work gets done quickly.