Disclaimer: I’m not a medical doctor, this is not health advice, do your own research, consult with a qualified physician before making health decisions, caveat emptor, your mileage may vary, yada yada yada.
Let me begin by saying that I find it ridiculous to be writing this post. I’m only here because I suspected my primary care physician was acting in their own best interests rather than my best interests. That suspicion took me down a fascinating rabbit hole.
I had my annual physical recently, which included a standard lipid panel. I got a call from my physician the next day.
“Your cholesterol is extremely high. We need to start you on statins immediately.”
Oof. I was expecting a high cholesterol result, as it’s been elevated the past couple of years as a result of my ketogenic diet. But this score was even higher, and I haven’t been on keto for several months because my personal trainer put me on a diet designed for gaining muscle mass. What were my scores?
LDL-C: 209 mg/dL
HDL-C: 90 mg/dL
Total: 299 mg/dL
Where does that (allegedly) put me in terms of risk?
In other words:
LDL: DANGEROUS
HDL: healthy
Total: DANGEROUS
What is “Good” and “Bad” Cholesterol?
According to the CDC: cholesterol travels through the blood on proteins called lipoproteins. Two types of lipoproteins carry cholesterol throughout the body:
LDL (low-density lipoprotein) cholesterol, sometimes called “bad” cholesterol, makes up most of your body’s cholesterol. High levels of LDL cholesterol raise your risk for heart disease and stroke.
HDL (high-density lipoprotein) cholesterol, sometimes called “good” cholesterol, absorbs cholesterol in the blood and carries it back to the liver. The liver then flushes it from the body. High levels of HDL cholesterol can lower your risk for heart disease and stroke.
When your body has too much LDL cholesterol it can build up on the walls of your blood vessels. This buildup is called “plaque,” and it can cause health problems, such as heart disease and stroke.
The above is the explanation that you’ll receive from almost every primary care doctor you talk to when reviewing a standard lipid panel. It’s also wrong, due to oversimplification and/or ignorance of improvements in our understanding of cholesterol over the past ~20 years.
A Nuanced Modern Take on Cholesterol
Plasma cholesterol levels (which is what clinicians measure with standard cholesterol tests) often have little to do with cellular cholesterol, especially artery cholesterol, which is what we really care about.
Cholesterol is absolutely vital for our existence – it is one of the main building blocks used to make cell membranes. 80% of the cholesterol in your body is PRODUCED by your body – only about 20% comes from food you ingest.
Describing LDL as “bad” and HDL as “good” is a gross oversimplification. “Bad” cholesterol is ANY cholesterol that ends up inside of the wall of an artery AND leads to an inflammatory cascade which results in the obstruction of that artery. When one measures cholesterol in the blood we do not know the final destination of those cholesterol molecules!
According to the Mayo Clinic, Low-density lipoprotein particle (LDL-P) concentration is positively associated with increased risk of atherosclerotic cardiovascular disease (ASCVD). LDL-P is heterogeneous and contains many lipids and proteins, including phospholipids, triglycerides, and cholesterol. LDL cholesterol is a surrogate biomarker of LDL-P.
LDL cholesterol is the historical measure of atherogenic lipid burden. There is a large variance in the relative amount of cholesterol carried by each LDL particle. Consequently, subjects with similar LDL cholesterol values can have markedly different serum concentrations of LDL particles. Multiple studies have shown that serum concentrations of LDL-P more accurately reflect actual risk of ASCVD when LDL cholesterol values are discrepant.
High-density lipoprotein particle (HDL-P) concentration is inversely associated with risk of ASCVD. HDL cholesterol is also inversely associated with ASCVD, since it is a surrogate marker for HDL-P. Like other lipoproteins, HDL-P is heterogeneous, and particles contain highly variable proportions of proteins and lipids, including phospholipids, sphingolipids, and cholesterol.
Several large clinical studies have shown that HDL-P is more significantly associated with ASCVD risk than HDL cholesterol. Furthermore, HDL-P remains significantly associated with ASCVD even among subjects taking cholesterol-lowering medications. HDL-P more accurately reflects actual risk of ASCVD when HDL cholesterol values are discrepant.
Most clinicians focus on LDL-C because it’s a good way to predict heart attack risk. But many people diagnosed with heart disease have LDL levels that aren’t especially high. It turns out that LDL particles are not all created equal. Smaller, more tightly packed LDL has an easier time getting into arteries. Larger, fluffier particles appear to be less dangerous.
Additionally, research suggests that a key protein on LDL called apolipoprotein B (ApoB) is an important contributor for heart disease risk. When we measure ApoB, we’re actually counting all of these particles that cause plaque buildup, and this is a much more accurate way of determining cardiovascular risk.
Dr. Peter Attia wrote a 9 part series on cholesterol back in 2012. I credit him with giving me the knowledge (from reading his book Outlive) to push back against my primary care physician’s knee-jerk response to my high LDL-C test result.
Dr. Attia makes quite a claim (that I find believable after my own personal experience):
By the end of this series, should you choose to internalize this content (and pick up a few homework assignments along the way), you will understand the field of lipidology and advanced lipid testing better than 95% of physicians in the United States. I am not being hyperbolic.
If you’re willing to devote an hour or so, I highly recommend reading the whole series. But I cover my high level takeaways in this article.
My Advanced Test Results
After my initial lipid panel, I asked my physician for further testing because I was unconvinced by a single high LDL-C score. The first thing we did was schedule a CT scan of my heart to quantify how much cholesterol had gotten stuck in my arteries.
What was my coronary artery calcification score? 0. Zero. Zed. Zilch. Zip. No measurable amount of cholesterol has gotten stuck in my arteries. This was a good start, but it only showed us a backward-looking metric of damage rather than a forward-looking predictor of damage risk.
So I asked for a referral to a specialist in lipidology. Of course, I was told it would take months to get an appointment. Thankfully, there are plenty of options to get your own bloodwork done quickly (out of pocket) if you wish to route around the healthcare bureaucracy of the United States. Note that if you have an HSA or FSA you may be able to use those tax advantaged funds to pay for the lab work. A few options:
After a month of going back and forth with my doctor, I ordered a comprehensive lipid panel, got my blood drawn the next day, and received all my results back within a few days. This was an excellent experience and I’ll likely make this a part of my personal annual health assessment.
Here we can see from the simple metrics that my “total cholesterol” is considered too high, though my triglycerides and actual ratio of LDL to HDL are quite good.
Here we can see that all of my HDL metrics are great.
Does having lots of HDL particles help? According to Dr. Attia: probably, especially if they are “functional” at carrying out reverse cholesterol transport, but it’s not clear if it matters when LDL particle count is low.
According to the Mayo Clinic, my LDL-C from a month prior (209 mg/dL) indicated “a likely genetic condition” but now my score of 147 mg/dL is just “borderline high.” Coolcoolcool. Why did my LDL-C drop by so much in just 1 month? Well, upon getting the initial 209 score back I immediately re-assessed my new diet in terms of cholesterol and realized that I was consuming a ton of dairy (milk and cheese) and also some of my protein choices had not been great in terms of cholesterol content (chicken thighs over chicken breasts, for example.) So I cut dairy almost completely and am sticking to lower cholesterol cuts of meat.
Here’s where things get interesting, and it’s really a mixed bag. On one hand, my particle counts and number of small particles are higher than I’d like.
The risk ranges for ApoB don’t seem to have consensus. The normal range according to the Cleveland Clinic for males is 66 to 133 mg/dL. Mine came in at 103 mg/dL, which my lab testing provider scored as “moderate risk” for cardiovascular disease. Some cardiology guidelines recommend a target of less than 65 or 80 mg/dL of ApoB. So it’s not bad, but I’d like to see it lower.
What makes this all the more confusing is my LDL pattern. If your LDL pattern is classified as Pattern A, it means that the LDL particles in your blood are predominantly large and buoyant. This pattern is generally considered to be less atherogenic, meaning it is less likely to contribute to plaque build-up in the arteries compared to Pattern B, which consists of smaller, denser LDL particles. Pattern A is often associated with a lower risk of cardiovascular disease.
How is it even possible for my number of small particles to be too high while my overall particle size pattern is considered large? I suppose I’ll have to wait several months for a specialist to explain this phenomenon. So, in summary:
My LDL-C is high (bad)
My HDL-C is great
My ApoB is good but could be better (lower)
My small LDL particles are high (bad)
In general my LDL particle size is good (large)
ApoB vs Particle Size
According to this Quebec Cardiovascular Study published in 1997, ApoB came out as the best and only significant predictor of heart disease risk, while LDL particle diameter did not contribute to the risk after the contribution of ApoB levels had been considered. In other words, you only need to worry about your particle size if you know that your ApoB levels are high. If ApoB is high but your particle sizes are large, your risk is still relatively low.
So why does having an LDL-P of 2,000 nmol/L (95th percentile) increase the risk of atherosclerosis relative to, say, 1,000 nmol/L (20th percentile)? In the end, it’s a probabilistic game. The more particles – NOT cholesterol molecules within the particles and not the size of the LDL particles – you have, the more likely the chance a LDL-P is going to squeeze into the sub-endothelial space in your artery wall and begin the process of atherosclerosis. So the primary takeaways here:
Small LDL particles are more atherogenic than large ones, independent of number.
The number of particles is what increases atherogenic risk, independent of size.
Both size and number matter, and so the person on the right is “doubly” at risk.
Can one increase LDL Particle size?
Although LDL cholesterol particle size is mainly genetically inherited, individuals who have small LDL particles can increase their particle size through diet, exercise, and medications.
Diets that are low in saturated fat and cholesterol, regular aerobic exercise, and loss of excess body fat have been determined to decrease the number of small LDL particles and increase the number of large LDL particles in the blood.
When lifestyle changes alone are unsuccessful, medications can be used. Even though statin medications are effective in lowering the absolute levels of LDL cholesterol, they appear to have a limited effect on LDL cholesterol particle size. Medications such as nicotinic acid (niacin) and gemfibrozil (Lopid) have been found effective in many instances in increasing the size of LDL cholesterol particles.
Interestingly, this meta-analysis of 38 randomized trials concluded that the available evidence indicates that dietary interventions restricted in carbohydrates increase LDL peak particle size and decrease the numbers of total and small LDL particles. In other words, ketogenic diets appear to reduce this particular risk factor of atherosclerosis.
According to this meta-analysis of 36 LDL-P studies, statins, estrogen replacement therapy, and a low fat/high carbohydrate diet lower the LDL-C content in LDL particles more than they lower the LDL-P concentration, while fibrates, nicotinic acid (niacin), exercise, and a low carbohydrate diet lower LDL-P concentration more than they lower LDL-C content. Thus you’d probably want to focus on the latter to achieve the best results.
Non-pharmaceutical Remediations for Keto Diets
The BJJ Caveman published a series of posts about his own cholesterol issues while on keto back in 2015. He developed a game plan that seemed to work pretty well according to his follow-up results. What was the plan?
Reduce saturated fats
Eat more beans (as a prebiotic)
Reduce coffee intake because Cafestol, a compound found in coffee, can stimulate increased cholesterol synthesis by the liver by suppressing bile acid production.
The driving force of atherogenesis is entry of ApoB particles and that force is driven primarily by particle number, not arterial wall inflammation.
Peter Attia’s take on people sensitive to saturated fat:
However, some readers may interpret the data I present to mean it’s perfectly safe to consume, say, 25% (or more) of total calories from SFA. I realize I may have to turn in my keto-club card, but I am convinced that a subset of the population—I don’t know how large or small, because my “N” is too small—are not better served by mainlining SFA, even in the complete absence of carbohydrates (i.e., nutritional ketosis). Let me repeat this point: I have seen enough patients whose biomarkers go to hell in a hand basket when they ingest very high amounts of SFA. This leads me to believe some people are not genetically equipped to thrive in prolonged nutritional ketosis.
LDL-P is the best predictor of adverse cardiac events. A high particle count tends to go hand in hand with smaller particle size, and the more particles you have, the more likely some are to get stuck in your arteries.
LDL-C is only a good predictor of adverse cardiac events when it is concordant with LDL-P; otherwise it is a poor predictor of risk.
Test frequently! Annual tests are a bare minimum; if you’re concerned about your cholesterol then you’ll probably want more data points so that you can follow the trend. Remember that it takes decades to develop heart disease so if you catch signs early, you don’t need drastic action.
My personal cholesterol situation is clearly complex – it’s not the statin-inducing emergency that my primary care doctor made it out to be, but there is clearly room for improvement.
We can see from this 15 year study of 2,500 people that while low LDL-P + high LDL-C comes with pretty good long-term chances of survival, it is slightly better to have low LDL-C as well.
I already dropped my LDL-C score by 60 in one month by cutting my dairy intake. Next I’ll take more care to consume less saturated fat, fewer carbohydrates (my trainer suggests carb cycling and only doing 2 high carb days per week), and eat more fibrous vegetables.
I’ve been an avid nostr user for a year now, and I’ve simultaneously been witnessing and pointing out the decline of X (Twitter) throughout the same time.
My Twitter engagement metrics are down ~70% across the board since Musk took over.
Almost everyone who uses social media these days is at risk. Not only can their account be shut down at a whim by third parties, they can effectively be deplatformed by having their entire audience taken away, since audiences are not portable between different social networks.
Until recently, the only way one could really have an audience that they could defend against losing would be via email. That is – even if your email service provider shuts down your account, it’s not a big deal – you can easily move your list of subscribers to an account with a different provider.
Now, we have a social networking protocol that empowers its users with those same attributes!
Achieving Social Media Sovereignty
Sovereignty means that you’re in a position that is defensible; AKA you are not reliant upon the whims of third parties that can disempower / deplatform you. When it comes to Bitcoin sovereignty, this is achieved by taking control of your own private keys and verifying the state of the ledger with your own fully validating node.
How can one attain a similar position of strength when it comes to social media? By holding your own keys and running your own server!
The following is a guide for how to migrate your historical tweets over to nostr, where you can secure your “account” via cryptography and ensure the persistence of your data by running your own server. Depending upon your technical skills, there are several paths you can take, each with their own trade-offs.
1A. Export your tweets
Request an account data export from X. This can take several days for them to process and provide you with a compressed archive of your tweets.
You can skip this if you haven’t already been posting notes to nostr. If you have been using nostr then you’ll want to ensure that all of your historical notes get migrated to the new relay you’re going to set up.
Paste your nostr public key (starts with “npub”) into the nostrsync archive service’s text field and click “Backup & broadcast.”
After the backup has completed, click the “LocalDatabase” button and then download the js file that will contain a list of all of your notes. Save it as “nostr-sync.js”
2. Set up client, create a nostr key
If you’ve never used nostr, first you need to decide what client(s) to use and generate a private key. Make sure you create secure backups of this key so that you don’t lose it – just like with bitcoin, if you lose a key then your access is permanently lost! Similarly, if the key falls into someone else’s hands, they can impersonate you and you won’t be able to stop them.
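If you’d rather generate the key yourself instead of letting a client do it, here’s a minimal sketch using the nostr-tools JavaScript library (assuming its current v2 API – older releases exposed generatePrivateKey() instead):

import { generateSecretKey, getPublicKey, nip19 } from 'nostr-tools'

const sk = generateSecretKey()  // 32 random bytes (Uint8Array) – this is your identity, back it up offline
const pk = getPublicKey(sk)     // hex-encoded public key derived from the secret key

console.log('npub (safe to share):', nip19.npubEncode(pk))
console.log('nsec (back up securely, never share):', nip19.nsecEncode(sk))

However you generate it, treat the nsec like a bitcoin private key: offline backups, and never paste it into random websites.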
A major feature of nostr is the fact that the protocol has support for Lightning Network payments. You’ll really be missing out if you don’t configure your profile and client to use a Lightning wallet.
In order to import our tweets later (and to make using web app nostr clients easier) we’ll need to have the nos2x Chrome extension installed and managing our nsec (nostr private key.) You can install the extension here.
5. Set up a reliable relay
Here’s where the process gets more complex and you have some decisions to make.
If you’re a non-technical user who wants to increase the robustness of your data persistence, you can outsource the actual running of the relay to a third party and incentivize (pay) them not to delete your old notes. But note that you still bear the risk of being rugged; it’s (hopefully) less likely since you’re paying for the service. Free relays are more likely to delete your old notes to reduce their ongoing maintenance costs.
Advanced users who are willing to put in more work to achieve the ultimate level of sovereignty by hosting their own relay will need to choose a relay implementation to run.
The consensus at time of writing seems to be that strfry is the relay implementation to use, so that’s what I’m running. The rest of this section will be dedicated to setting up strfry; the official strfry deployment documentation can be found here.
First, set up a server with the hosting provider of your choice. You don’t need much in terms of resources; I chose one with 2 CPU cores, 2 GB of RAM, and 50GB of disk space.
Create a DNS A record that points to the server’s IP address. Something like “nostr.yourdomain.com”
Note that while strfry’s documentation says you only need 2GB of RAM on your server, I had issues building strfry on a server with such restricted memory. Unfortunately, there are not prebuilt binaries available at time of writing. So I ended up having to build the binaries on my laptop and transfer them to the server. I can confirm that the software runs fine with only 2 GB of RAM.
Make sure you edit strfry.conf and set all the variables in the “info” section. It’s also worth noting that the strfry nostr relay has a configuration value:
rejectEventsOlderThanSeconds = 94608000
The default value is equivalent to 3 years. So if you’re planning on importing a 15 year history of tweets like I did, you’ll want to set this value to something much larger; 600,000,000 seconds is roughly 19 years:
rejectEventsOlderThanSeconds = 600000000
You’ll also want to set up a systemd service and reverse proxy for your relay. The systemd service is just to ensure uptime in the event that your machine crashes / reboots. The reverse proxy is so that you can set up an SSL certificate and encrypt your network traffic between the client and relay.
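For the reverse proxy, a minimal nginx sketch looks something like the following – this assumes strfry is listening on its default local port (7777) and that you’ve already obtained a TLS certificate (e.g. via certbot); adjust the domain and certificate paths to your setup:

server {
    listen 443 ssl;
    server_name nostr.yourdomain.com;

    # placeholder certificate paths – use whatever your certificate tooling generated
    ssl_certificate /etc/letsencrypt/live/nostr.yourdomain.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/nostr.yourdomain.com/privkey.pem;

    location / {
        proxy_pass http://127.0.0.1:7777;
        # websocket upgrade headers are required for wss:// connections to work
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_set_header Host $host;
    }
}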
There’s a good systemd service example here. I’ll note that I got stuck setting up systemd for a while as I kept getting all kinds of odd failures with unhelpful error messages. If you run into issues, I’d recommend watching the systemd log output via
journalctl -u nostr-relay.service -f
This led me to discover an error occurring with the open file limits:
strfry error: Unable to set NOFILES limit to 1000000, exceeds max of 65536
Which I fixed in my strfry.conf by setting: nofiles = 0
Once you have everything running, you can check to make sure the configurations are correct by visiting your nostr relay domain in your web browser; you should see a page like this:
Finally, this is optional, but if you want to ensure that ONLY you can publish events to your relay, you should configure a whitelist. Strfry has documentation for doing that here.
6. Configure your client to use the relay
This step will vary depending upon which client you’re using to access nostr. Note that the value will need to be in the form of: wss://nostr.yourdomain.com
After playing around with exit.pub (the tool for migrating tweets to nostr) quite a bit, I have several warnings and suggestions.
If you have multiple nostr compatible browser extensions installed, disable all but one. I was mystified for quite a while because exit.pub was generating notes with invalid signatures. Eventually I determined that it was because it was reading a nostr pubkey from my Alby extension (which had autogenerated a key I never use) but was then signing with my real key via my nos2x extension.
I ran into an issue a few times where exit.pub somehow failed to read my public key from nos2x and the only way to fix it was to clear my browser cache for the site.
You probably only want to publish these really old events to your own relay, as other relays are going to be less performant and more likely to reject your notes, causing the migration tool to come to a halt if it sees too many errors.
Note that exit.pub says to upload the tweets.js file from the archive; you can find it in the “data” folder of your archive zip file.
Once you upload your tweets.js file, it will take a minute or two to parse your history. Then it will ask you to select what type of tweets to import. I’d suggest choosing “threads” and “OP tweets” – replies and retweets will be a bit out of context and possibly nonsensical to migrate over.
For simplicity you’ll probably want to disable the payment related options. Make sure you add your relay’s domain into the relays text box at the bottom and click update.
However, at time of writing, exit.pub is not very optimized and pulls the entire data set into memory. If you have too many tweets (more than a couple thousand) in your tweets.js file then your browser will crash due to running out of memory and the import will fail. If this occurs, you need to split up your tweets.js archive into multiple chunks. Thankfully this isn’t difficult, just a few lines of javascript. To accomplish this, change line 1 of tweets.js to the following:
let tweets = [
Then, add the following code snippet at the very bottom of tweets.js and save the file.
const fs = require('fs');
let chunk = 1;
while (tweets.length) {
  let subset = JSON.stringify(tweets.splice(0, 2000));
  fs.writeFile('./tweets_chunk' + chunk + '.js', 'window.YTD.tweets.part0 =' + subset, err => {
    if (err) {
      console.error(err);
    }
  });
  chunk++;
}
You will need to have Node.js installed. Now, from the command line, run:
nodejs tweets.js
You’ll see a bunch of new files appear in this directory, with numbered names like “tweets_chunk1.js” – now you can upload each chunk into exit.pub.
Once you click “preview” you can then click “publish” at which point a nos2x dialog will pop up and ask if you want to allow access to your private key to sign events. Click “authorize forever” and the migration will begin. In my experience it takes about a second per tweet to sign and upload to your relay.
If you want to be sure that the tweets are being imported, just tail your strfry logs on the server via:
journalctl -u nostr-relay.service -f
7B. Import your historical notes
If you exported notes earlier in step 2B then we’ll need to do some data transformation to prepare the notes to be imported into our relay. Open the nostr-sync.js file you downloaded with a text editor, scroll all the way to the bottom, paste these 3 lines, then save the file.
for (let note of data) {
  console.log(JSON.stringify(note))
}
You will need to have Node.js installed. Next, run this command to create the jsonl file we can import into strfry:
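The exact invocation depends on your setup, but assuming the file is named nostr-sync.js as above and you’re using strfry’s import function, it should look something like:

nodejs nostr-sync.js > notes.jsonl
./strfry import < notes.jsonl

(Run the import on the relay server so that strfry can write the events into its database.)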
It only took my machine 24 seconds to import 25,000 notes. You may see some rejected events because the nostrsync archive service grabs not only the notes you have published, but also notes published by others that interact with your pubkey. Thus there may be some large spammy notes in there that you don’t actually care about. You might also see a lot of rejections if you have set your strfry whitelist to only accept notes from your pubkey. You might want to disable the whitelist when you perform this import, assuming you want to store all the notes from other people who have interacted with you historically.
8. Find your friends
At this point you’ve migrated your history, but unfortunately it’s not so simple to migrate your audience / social circle.
If you’re using nostr, you’re an extremely early adopter since only ~20,000 people are actively using it at time of writing.
Like every network, there’s a challenging period of bootstrapping adoption in order to achieve a critical mass such that network effects can take over and virally encourage more folks to join.
You can use https://onboardstr.vercel.app/ to help your friends get bootstrapped and automatically follow the same accounts you follow.
Welcome to social media sovereignty, fellow nostrich!
The mystery of Satoshi Nakamoto’s identity has intrigued countless people ever since the inception of Bitcoin in 2009. Who, the world wonders, would be so gifted that they could solve the Byzantine Generals’ Problem? Who, we ask, is so altruistic as to create a new monetary system but not use it to enrich themselves? Who, we question, is sufficiently privacy-conscious that they could pull off these magnificent feats and manage not to leak their true name?
The actual identity of Satoshi Nakamoto is irrelevant to the security, evolution, and future operation of the Bitcoin network. But the speculation of Satoshi’s identity does have real-world consequences for those who end up in its crosshairs.
A multitude of scammers have taken the opposite route and tried to leech off of Satoshi’s reputation by claiming to be him.
Who do I think is Satoshi? I have my theories, but I shall never share them as it would be irresponsible to do so. Rather, I believe it is in the best interest of Bitcoin to dispel any myths of Satoshi’s identity. Let’s begin.
The Race
On Saturday April 18, 2009 at 8 AM Pacific time Hal Finney, an avid runner, began a 10 mile race in Santa Barbara, California. We can see his results here:
Source: https://archive.is/46t9A
Why is this noteworthy? Because Satoshi was performing activities at the same time that Hal was running. For the hour and 18 minutes that Hal was running, we can be quite sure that he was not interacting with a computer.
It turns out that early Bitcoin developer Mike Hearn was emailing back and forth with Satoshi during this time. Hearn later published his emails on his web site; you can find a copy archived here.
We can see from the timestamps that Mike emailed Satoshi on Apr 18, 2009 at 3:08 PM and Satoshi replied at 6:16 PM. But what time zone was Mike’s email client reporting? Well, Hearn conveniently included his IP address at the time (because one way of sending and receiving bitcoin back then was via direct connection to a peer node’s IP address) and his address was 84.73.233.199. A quick lookup shows that this IP belongs to a Swiss ISP.
Source: https://www.whois.com/whois/84.73.233.199
This lines up with the well-established fact that Mike Hearn was working for Google at the time, out of their Zurich office. I additionally confirmed these details directly with Mike during my investigation.
What can we determine from all of this? Satoshi sent the email to Mike at 9:16 AM Pacific time – 2 minutes before Hal crossed the finish line. (In April, Zurich is on Central European Summer Time, UTC+2, while California is on Pacific Daylight Time, UTC-7 – a nine hour difference – so Mike’s 6:16 PM timestamp corresponds to 9:16 AM in California.)
How can we be sure that Hal was actually running in the race and didn’t send an imposter to stand in for him? Well, we have third party photographic evidence courtesy of the event photography service, PhotoCrazy (though their site has been offline for many years now.) We can see that his ID number 591 matches the one from the race results database linked above.
There’s also a photo taken by Hal’s wife:
The Transaction
As seen in the email exchange between Mike Hearn and Satoshi, Satoshi sent 32.5 BTC to Mike Hearn via transaction 6a679898780f5d99f0ffa12573b855e0dc470956406eb8b82690b688fa19200f which was confirmed in block 11,408 at 8:55 AM Pacific time on April 18, 2009. Satoshi then replied to Mike’s email 20 minutes later.
The previous block (11,407) was mined at 8:28 AM Pacific time, thus the transaction was likely created, signed, and broadcast during the window between 8:28 AM and 8:55 AM.
Blocks 11407, 11408, and 11409 (in blue) were all mined by Patoshi (likely Satoshi.)
Block 11,406, two blocks prior, was minted by an unknown miner at 8:08 AM; it’s safe to assume that if the transaction had been broadcast and was sitting in node mempools at that time, it would have been confirmed in that block at 8:08.
Potential Objections
“Mike Hearn is untrustworthy”
Hearn published the full emails in 2017 after many distrusted him due to disagreements during the multi-year scaling debates.
Hearn actually shared the first of the emails on the Bitcoin Foundation forum in December 2012.
Hearn’s emails are the strongest evidence, but not the sole evidence, as we’ll see shortly.
I thought these emails had been published already, because I had forwarded them to a project that was archiving Satoshi’s emails years ago. When CipherionX asked me for these emails again, he told me they’d actually never been uploaded anywhere and so I forwarded them once more.
The emails are real. As others have noted, I quoted parts of them in various conversations stretching over many years. It would have required vast planning to have set up such a forgery and there is no reason to do so.
“Hal could have scripted the emails and transactions.”
Sure, but Occam’s Razor applies. Why go to such lengths to sow disinformation in a private communication? It would have been far simpler for Hal to have just responded at a different time rather than leaving this proverbial needle in the haystack that would have never been revealed had Hearn not published the emails.
“Hal could have been one member of a group.”
Sure, but Occam’s Razor again. As Benjamin Franklin noted: “Three can keep a secret, if two of them are dead.” In all my time researching Satoshi, I’ve yet to come across any evidence suggesting it was a group. If it was a group, then they all operated on the same sleep schedule, consistent across code commits, emails, and forum posts.
“The early blockchain history could have been rewritten.”
Technically true, though 3 hours later Mike sent BTC back and confirmed via email. The timestamps of the emails and the blockchain activity line up.
“Someone else could have been running the race in Hal’s place.”
As seen above, we have photographic evidence from multiple parties that shows otherwise.
Singularity Summit 2010
Hal at Singularity Summit
Hal attended the Singularity Summit in San Francisco on August 14th and 15th of 2010. We can see his wife published this post about it a few days later.
This past year, Hal and I have had to completely alter projections of our future together. Hal was diagnosed with ALS (Amyotrophic Lateral Sclerosis, better known in the US as “Lou Gehrig’s Disease”). Since his diagnosis in August of 2009, Hal has physically changed in very obvious ways. His speech has become slow, quiet, and labored. His typing has gone from rapid-fire 120 WPM to a sluggish finger peck. His weekly running (50-60 miles per week in February 2009) stopped being possible in November of 2009, and now Hal gets around in a motorized wheelchair. Eating, always a pleasure before, is now a challenge – much concentration is involved to avoid choking. The most recent and worrisome manifestation of the weakening in Hal’s voluntary muscles is his breathing. However – all of these changes have been to Hal’s body. The machine that Hal’s brain controls through efferent output to interact with the environment. Inside, he is the same brilliant guy I have known for well over half of my life.
She specifically talks about how Hal can barely type at this point.
What was Satoshi doing on August 14 and 15 of 2010? Satoshi was quite active, with 4 code check-ins and 17 forum posts. You can view my compilation of all publicly known Satoshi activity timestamps here.
The IP Address
In the initial days of Bitcoin, the client connected to an IRC (Internet Relay Chat) channel in order to discover the IP addresses of peer nodes to connect to and thus join the network. For this reason the debug log becomes crucial to the investigation. It reveals the IP addresses of 3 users who were connected to the IRC channel on January 10, 2009 – the day after Bitcoin launched. As far as we know, Satoshi and Hal were the only two people working on the project during that time.
The debug log is quite verbose, so I’ll point out the relevant lines:
[x93428606] is the admin of the IRC channel (Satoshi) and connects from i=x9342860 gateway/tor/x-bacc5813d7825a9a (via Tor, a privacy-preserving network)
[uCeSAaG6R9Qidrs] CAddress(207.71.226.132:8333) – this is Hal.
[u4rfwoe8g3w5Tai] new CAddress(68.164.57.219:8333) – this is the only other node, thus likely Satoshi.
What can we determine from these IP addresses?
Hal Finney’s IP can be identified easily as he has hosted his website at the same IP address. Data source can be found here.
IP Address: 207.71.226.132 State/Region: California Country: United States Reverse DNS: 226-132.adsl2.netlojix.net Host/ISP: Silicon Beach Communications
Domains Hosted on IP 207.71.226.132
finney.org
privacyca.com
franforfitness.com
Could other people have been running nodes? Sure! Though the debug log shows not one, not two, but three boot-ups of Hal’s node, and it receives the same peer IP address every time. It appears unlikely that anyone else was running a node at this time.
We can note that:
Satoshi’s IP address doesn’t appear to be a tor exit node (I can’t find that IP address in publicly available historical lists of tor exit node IPs)
Satoshi’s IP belonged to a different ISP than Hal’s, though also in California.
I think it’s reasonable to ask ourselves the following:
If Hal was privacy conscious, why publish this info?
If Hal was Satoshi, publishing disinformation, why not make their IP in a different state or country?
Inconsistencies in Coding Styles
Some have claimed stylistic similarities between Hal’s and Satoshi’s public writing, but I know nothing of stylometric analysis thus I can’t comment on the veracity of that claim. What I can confidently state is that their code is quite different.
Hal used tabs while Satoshi used spaces (this is a massive never-ending debate between developers)
Hal preferred his debug statements not to be indented while Satoshi maintained indentation with surrounding code
Hal made comments with block style multi-line markers while Satoshi preferred to create many single-line comments with double slashes
Hal used snake_case for his function names while Satoshi used camelCase
There are probably far more differences that are more subtle, but these jumped out from just a few minutes of eyeballing the codebases.
Inconsistencies in Personas
There are a few points about Satoshi’s and Hal’s perspectives that don’t line up, and it would have required a pretty creative writer to keep their personas distinct. For example:
Thinking about how to reduce CO2 emissions from a widespread Bitcoin implementation
Are we to believe that Satoshi had been working on Bitcoin for a year (if not years) but suddenly started being concerned about CO2 emissions?
It appears that Satoshi Nakamoto only learned about Nick Szabo’s “bit gold” idea from Hal Finney’s first reply to the whitepaper announcement post on the cryptography mailing list.
I also do think that there is potential value in a form of unforgeable token whose production rate is predictable and can’t be influenced by corrupt parties. This would be more analogous to gold than to fiat currencies. Nick Szabo wrote many years ago about what he called “bit gold” and this could be an implementation of that concept.
Once again, this level of “character development” for alternate personas is a pretty big ask for someone who is not a professional fiction writer.
Hal Finney was not actually a particularly private person. According to his wife, Hal was a huge privacy advocate and believed everyone had the right to privacy. But privacy is the ability to selectively reveal yourself to the world, and Hal was quite open about his dealings.
In terms of being privacy conscious, most of the other Satoshi contenders are far stronger candidates with regard to this attribute.
Inconsistencies in Activity Gaps
Satoshi had 2 lengthy gaps in their public activity:
From 2009-03-04 16:59:12 UTC to 2009-10-21 1:08:05 UTC
From 2010-03-24 18:02:55 UTC to 2010-05-16 21:01:44 UTC
We can see that Hal Finney kept posting during those periods:
Second response: “honest nodes won’t control the network. Bad guys with zombie farms will take it over.”
Third response: “I think the real issue with this system is the market for bitcoins. Computing proofs-of-work have no intrinsic value. We can have a limited supply curve but there is no demand curve that intersects it at a positive price point.”
Hal arrives: “Bitcoin seems to be a very promising idea. I like the idea of basing security on the assumption that the CPU power of honest participants outweighs that of the attacker. It is a very modern notion that exploits the power of the long tail. When Wikipedia started I never thought it would work, but it has proven to be a great success for some of the same reasons.”
Hal was an optimist, a builder, and a thoughtful collaborator. He made great contributions to cypherpunk projects like anonymous remailers, PGP 2.0, Reusable Proofs of Work, and Bitcoin. Open source projects need people like Hal.
This is Good for Bitcoin
Some will surely claim that the prior points do not constitute incontrovertible proof that Hal was not Satoshi. Indeed, proving a negative is often an impossible task. But I find the aggregate of all the evidence to provide so much doubt that a reasonable person would conclude that it’s far more likely that Satoshi was someone else. After months of research I have been sufficiently convinced that I am willing to stake my reputation upon this claim.
Bitcoin is better off with Satoshi’s identity remaining unknown. A human can be criticized and politically attacked. A myth will withstand the test of time.
It is better for Bitcoin that Satoshi not be a man, for men are fallible, fickle, and fragile. Satoshi is an idea; it is better that all who contribute to Bitcoin be an embodiment of that idea. As such, I pose to you that it is to the benefit of Bitcoin that we crush any myths of Satoshi’s true identity.
Note: if you’d prefer to watch me deliver this research as a presentation with slides, you can watch my POW Summit keynote here.
“Proof-of-work has the nice property that it can be relayed through untrusted middlemen. We don’t have to worry about a chain of custody of communication. It doesn’t matter who tells you a longest chain, the proof-of-work speaks for itself.”
– Satoshi Nakamoto
And so the proof of work is its own self-contained piece of integrity.
Proof of Work is a pretty simple mathematical construct where you can be given some data, run very simple verification check against it, and you can be assured that it has not been tampered with and that someone has expended a decent amount of computation to publish the data.
But Proof of Work is probabilistic. You can’t look at a proof and know exactly how much time, money, cost, CPU cycles, etc were put into that proof. You can only get a rough idea. We know that there are a ton of machines out there mining these different proof of work networks like Bitcoin. But we can’t precisely measure the amount of electricity and the amount of computational cycles that are going into these proofs.
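To make that concrete, here’s a small Node.js sketch (my own illustration, not anything Bitcoin Core does verbatim) that verifies the proof of work on an 80-byte block header and computes the expected number of hashes implied by its difficulty target – that expected work is the most we can ever recover from the proof itself:

const { createHash } = require('crypto');

function sha256d(buf) {
  return createHash('sha256').update(createHash('sha256').update(buf).digest()).digest();
}

// headerHex: the 80-byte block header as a hex string
function checkProofOfWork(headerHex) {
  const header = Buffer.from(headerHex, 'hex');
  const hash = sha256d(header).reverse();               // reverse to big-endian for numeric comparison
  const nBits = header.readUInt32LE(72);                // the compact "bits" field lives at byte offset 72
  const exponent = nBits >>> 24;
  const mantissa = BigInt(nBits & 0x007fffff);
  const target = mantissa << (8n * (BigInt(exponent) - 3n));
  const hashValue = BigInt('0x' + hash.toString('hex'));
  const expectedHashes = (2n ** 256n) / (target + 1n);  // roughly difficulty * 2^32
  return { valid: hashValue <= target, expectedHashes };
}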
Earlier this year I mused upon the difficulty in knowing an accurate measurement of the global hashrate. Many have tried and many have failed!
My prior essay delved into the volatility inherent to different hashrate estimates and showed why it’s better to use estimates that are calculated over a longer time frame of data, preferably around 1 week of trailing blocks.
Motivation
My primary goal with this research is just to get us all on the same page. I often see folks making claims about changes to the hashrate as if they are news-worthy, but without mentioning how they are estimating the hashrate.
It’s possible that large scale miners could be interested in an improved hashrate estimate for their own planning purposes, but I’m generally just seeking to reduce confusion with regard to this topic.
Hashrate Estimate Trade-offs
We can see that a hashrate estimate is highly volatile if you’re only using the past 10 blocks – about two hours worth of data – to generate it. But once you get up to around a three-day (400 to 500 block) time frame, that starts to smooth out a bit more.
The downside is that these shorter time frames can have more distortion and volatility, and they can make the hashrate appear a lot higher or a lot lower than it really is. So I think it’s generally agreed that the seven-day (roughly 1,000 block) hashrate is a pretty good balance between smoothing out that volatility and producing something reasonably accurate.
The problem, though, is that if you go out to multiple weeks of trailing data, while that is smoother, it’s always going to be off by more, because you start lagging whatever the real hashrate is. If you think about it, there are lots of miners out there that are constantly adding machines to the network.
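For reference, the trailing-block estimator everyone uses is simple: each block’s difficulty implies an expected amount of work (difficulty × 2^32 hashes), so you sum that expected work over the window and divide by the span of the block timestamps. A minimal sketch, assuming you’ve already pulled the headers from your node (oldest first):

// headers: array of { difficulty, time } for consecutive blocks, oldest first
function estimateHashrate(headers) {
  const expectedWork = headers
    .slice(1)                                   // the first header only anchors the starting timestamp
    .reduce((sum, h) => sum + h.difficulty * 2 ** 32, 0);
  const seconds = headers[headers.length - 1].time - headers[0].time;
  return expectedWork / seconds;                // hashes per second
}

// e.g. pass ~1,000 trailing headers (about a week) and divide the result by 1e18 to get EH/s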
I concluded my earlier essay with an area of potential future research: comparing realtime hashrate reported by mining pools against the hashrate estimates derived solely from trailing blockchain data. I feared that I’d have to start a lengthy process of collecting this data myself, but eventually discovered a data source. Thus, I set to work crunching some numbers!
You can find all the scripts I wrote to perform the analysis, along with the raw data in this github directory. The output data and charts are available in this spreadsheet.
Realtime Hashrate Data
If you’re calculating hashrate estimates then you can only work off of the data available in the blockchain, which is public to everyone. The problem with that is you have no external source that you can really check it against. What I found earlier this year is that the Braiins mining pool has actually started to collect what they call the realtime hashrate.
For the past couple of years Braiins has been pinging every mining pool’s API every few minutes and saving the self-reported hashrate from each pool. Thankfully, the folks at Braiins were kind enough to give me a full data dump of everything they had collected.
It was a pretty messy data set. I had to write a script to normalize the data and essentially chunk it into block heights so that I could line it up with blockchain-based estimates. I then plotted the sum of all the hashrates collected from all the pools.
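Conceptually the normalization was simple, even if the raw data wasn’t. A rough sketch of the idea (hypothetical input shapes, not Braiins’ actual format): map each pool’s self-reported sample onto the block height whose timestamp it falls under, keep the most recent sample per pool per height, and sum across pools.

// reports: [{ pool, time, hashrate }] in chronological order
// blocks:  [{ height, time }] sorted by time
function bucketByHeight(reports, blocks) {
  const latestPerPool = new Map();              // "height:pool" => most recent reported hashrate
  for (const report of reports) {
    let height = blocks[0].height;
    for (const block of blocks) {               // linear scan for clarity; a real script would binary search
      if (block.time <= report.time) height = block.height;
      else break;
    }
    latestPerPool.set(height + ':' + report.pool, report.hashrate);
  }
  const totals = new Map();                     // height => summed hashrate across all pools
  for (const [key, hashrate] of latestPerPool) {
    const height = Number(key.split(':')[0]);
    totals.set(height, (totals.get(height) || 0) + hashrate);
  }
  return totals;
}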
I plotted that against the three-day hashrate estimate and very quickly discovered that for the first year or so of this data set, it’s very wrong. So my suspicion is that Braiins was not collecting data from all of the mining pools at first. But we can see that after that first year, it actually starts to line up pretty well. So it looks like they got to the point where they were, in fact, collecting data from all of the major mining pools. So this is what I have been using as my baseline hashrate that I can then perform various calculations upon to try to figure out how well these purely blockchain data-based hashrate estimates are performing.
100 block estimates vs realtime hashrate
400 block estimates vs realtime hashrate
Now that we have a baseline for the “real” hashrate, I started calculating things like error rates between the estimates and the baseline. And as we can see here, the one block estimate is insanely wrong. You can be 60,000% off from whatever the real network hashrate is. This is essentially when a miner gets lucky and they find a block a few seconds after the last one. Obviously, that’s not because the hashrate just went up by 60,000%. It’s just luck. It has to do with the distribution of “winning” with the right “lottery numbers.”
Thus we’re going to throw out 1 block estimates. In fact, you can’t even really see the error rates for other time frames on this chart. Let’s zoom in.
We can see with 10 block estimates, they’re getting better. We’re getting down to an error rate range within 300% to 400%. That’s still pretty bad – still worse than that Kraken “true hashrate” estimate from back in 2020. Let’s zoom in some more.
We get down to 50 blocks, and we’re under 100% average error rates. Once we get into the 500 block range, half a week of trailing data, we can actually start to get error rates under 10%.
Average Error Rates
Let’s plot out the average hashrate estimate error rate for this particular data set. The further out you go with your trailing data time frame, the better your error rate gets. But thankfully, I did not stop at 1,000 blocks. Because, as mentioned earlier, you get to a point where you start lagging behind the real hashrate by too much.
It gets interesting when you look at the 1,000 to 2,000 block estimates. And we can see here, eventually, you get around the 1,200 block, more than a week of data, and the error rate starts ticking up again. This is because the average point in our time frame is too far behind the current time. We can see that there’s a sweet spot. Somewhere in the 1,100 to 1,150 block range of trailing data will give us the single best overall estimate.
Why is the optimal trailing block target 1100 blocks? I think that’s just a function of the hashrate derivative (rate of change) over the time period we’re observing. If the global hashrate was perfectly steady then I’d expect the estimate error rate to asymptotically approach 0 as you extend the time horizon. But since the global hashrate is actually changing, it’s a moving target. Apparently it was changing fast enough over our sample data set (in 2022-2023) that you experience noticeable lag after 1 week.
Similarly, if we look at the standard deviation, it pretty much matches up in terms of identifying the optimal time frame of data to use. So if you’re using 1,100 to 1,150 blocks of trailing data, you get an average error rate under 4% with a standard deviation under 3 exahash per second. That’s not bad. But I wondered: could we do better?
Can We Do Better?
An average error rate under 4% isn’t terrible. What if we could blend the accuracy of a long-range estimate with the faster reaction speed of a short-range estimate?
Realtime hashrate vs 100 and 1000 block estimates
I wondered if we could find an algorithm that uses the baseline estimate of 1100 trailing blocks and then uses some combination of other estimation data to adjust the estimate up and down based upon a shorter time window or even derivative of recent estimates.
We know that the standard deviation for the trailing 100 block estimate is a little over 6% error rate. What if we compared the 100 block estimate to the 1100 block estimate and ignored any discrepancies under 6% as noise in the volatility? Then we could apply a weight to the 100 block estimate to adjust the 1100 block estimate up or down.
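That idea translates into just a few lines of code. A sketch of the blend (an illustration of the concept, not the exact logic in my scripts):

// shortEst / longEst: e.g. the 100 block and 1100 block hashrate estimates
function blendedEstimate(shortEst, longEst, noiseFloor = 0.06, weight = 0.2) {
  const divergence = (shortEst - longEst) / longEst;
  if (Math.abs(divergence) <= noiseFloor) {
    return longEst;                               // treat small discrepancies as noise
  }
  return longEst + weight * (shortEst - longEst); // let the short-term signal pull the estimate partway
}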
Exploring Blended Estimate Algorithms
Next I wrote yet another script to ingest the data output by my hashrate estimate and realtime hashrate scripts, but to effectively brute force a bunch of possible combinations for weighing a blend of different estimate algorithms. Those results were then compared to the realtime hashrate data to see if they had higher accuracy and lower standard deviation.
My initial test runs were only blending the 100 block and 1100 block estimates and iterated through combinations of 3 different parameters:
$trailingBlocks // to reduce volatility, check the short term estimate over a trailing period from 10 to 100 trailing blocks
$higherWeight // when the short term estimate is 1+ standard deviation higher than the long term estimate, test weighting it from 100% to 0%
$lowerWeight // when the short term estimate is 1+ standard deviation lower than the long term estimate, test weighting it from 100% to 0%
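The sweep itself is just nested loops over those three parameters. A simplified sketch, where estimateError() is a hypothetical helper standing in for my actual scripts (it would apply the blend with the given parameters across the whole data set and return the average error versus the realtime baseline):

let best = { error: Infinity };
for (let trailingBlocks = 10; trailingBlocks <= 100; trailingBlocks += 10) {
  for (let hw = 0; hw <= 10; hw++) {              // higher weight from 0% to 100% in 10% steps
    for (let lw = 0; lw <= 10; lw++) {            // lower weight from 0% to 100% in 10% steps
      const error = estimateError(trailingBlocks, hw / 10, lw / 10);
      if (error < best.error) {
        best = { trailingBlocks, higherWeight: hw / 10, lowerWeight: lw / 10, error };
      }
    }
  }
}
console.log('best parameters:', best);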
Since these scripts tried so many permutations, I ran a bunch of them in parallel and bucketed each run by the first parameter. My final output from 10 runs resulted in over 5 million data points. The most accurate parameters from each run are below:
Trailing Blocks | Higher Weight | Lower Weight | Error Rate | Std Dev (EH/s)
19 | 20% | 20% | 5.11% | 4.48
29 | 20% | 20% | 4.92% | 4.25
39 | 20% | 20% | 4.82% | 4.13
49 | 20% | 20% | 4.77% | 4.04
59 | 20% | 20% | 4.72% | 3.95
69 | 20% | 20% | 4.68% | 3.88
79 | 20% | 20% | 4.66% | 3.83
89 | 20% | 20% | 4.65% | 3.79
99 | 20% | 20% | 4.63% | 3.77
Remember that our baseline to beat is the 1100 trailing block estimate with an average error of 3.8% and standard deviation of 2.95 EH/s.
We haven’t found a strictly better estimate algorithm yet, but we can see some trends. It seems like when you use a hashrate estimate for a given period (like 100 blocks) then if you look at that same estimate for the past period, you gain greater accuracy. So what if we try blending estimates from 100, 200, 300… 1100 blocks all together and give them each an equal weight?
Average Error Rate: 3.75%
Standard Deviation: 2.95 EH/s
Not a significant improvement. Next I tried setting a cutoff threshold for shorter term estimates that came in lower than the longer term estimate: I’d only let them pull the blend down if they had been below the long term estimate for a given percentage of recent trailing block windows, and throw them out otherwise. I quickly discovered that the greatest accuracy improvement came from requiring 100% of the recent short term estimates to be below the long term estimate.
Average Error Rate: 3.39%
Standard Deviation: 2.62 EH/s
Realtime hashrate vs simple estimates vs blended estimate
Here you can see the blended estimate in purple, generally following along the long-range baseline but occasionally getting pulled upward when the shorter term estimates are significantly higher.
A Better Blended Estimate Algorithm
In case anyone wants to try using my optimized blended estimate algorithm, I’ve written example scripts in both PHP and bash (Linux.)
A single blended estimate makes between 10 and 4,500 RPC calls, depending on whether the current estimates are more than 1 standard deviation away from the 1,000 block estimate. These 10 different baseline estimates are then weighted based off of how many other recent estimates are above or below the 1,000 trailing block estimate. That makes it relatively slow: on my laptop a single hashrate estimate RPC call takes ~2 milliseconds to complete, while my improved estimate algorithm takes between 0.05 and 20 seconds to complete.
In Summary
Most web sites that publish hashrate statistics seem to use the 1 day trailing average (6.7% average error rate), which makes sense given that Bitcoin Core’s default is to use 120 trailing blocks for the estimate (7.3% average error rate.)
Savvier sites report the 3 day average (4.4% average error rate) while the best sites use the 7 day average (3.8% average error rate.)
For the time period checked, the ~1120 trailing block estimate is optimal, with an average error of 3.8% and standard deviation of 2.95 EH/s.
By blending together many hashrate estimates and weighting them based upon recent estimates with a variety of trailing data time frames we were fairly easily able to improve upon the 1100 block estimate and decrease the average error rate by 13% and lower the standard deviation by 14%.
Caveats & Future Work
This is by no means the optimal algorithm; there’s plenty of room for improvement. Some of the issues at play with my approach:
Assumes accuracy of pool reporting
Assumes pools don’t share hashrate
Assumes accuracy of Braiins’ data collection
Estimate algorithm is ~1000X more computationally complex
The realtime data set itself is relatively small – less than a year in length; the more training data you can feed to a “best fit” search algorithm, the more accurate the results should be. Hopefully once we have several years of data that crosses halvings and other major events, it will be even better.
AI data processing on a GPU would likely yield improved results and would be able to churn through far more possibilities of blending data.
Proof of Work is a fascinating phenomenon, and we’re clearly still trying to fully understand it!
With all of the recent discussions and drama around drivechains, it seems most folks have overlooked a recent announcement of yet another proposal for building 2-way pegged sidechains.
Botanix Labs recently unveiled itself and published this whitepaper. Since their software is not yet available to run and inspect, the following are my impressions of the system as described in the paper.
While there are a variety of proposals being discussed for enhancing Bitcoin’s Layer 2 capabilities, such as drivechains, zero knowledge rollups, and validity rollups, one distinction with spiderchains is that they can be implemented on Bitcoin today without any protocol changes to the base layer.
Motivation
Ethereum has seen a massive growth in decentralized finance applications that are mostly unavailable on Bitcoin. The total value on the second layers of Bitcoin is less than 0.1% of the market cap of Bitcoin while at the same time the value of wrapped Bitcoin available on Ethereum is higher than 2%. Bitcoin has not seen the massive growth in TVL (Total value locked) on its second layers or in its applications.
This paper proposes a second layer built on top of Bitcoin with full Ethereum Virtual Machine (EVM) equivalence. With Bitcoin as the most decentralized and secure bottom layer, the second layer will open the doors to the composability, ecosystem and capabilities of Ethereum smart contracts. We introduce the Spiderchain primitive, a second layer design on top of Bitcoin that is optimized for decentralization.
Alright, so we know off the bat that this proposal is going to be for creating a sidechain similar to Rootstock, but with a different pegging mechanism. A reminder from the Pegged Sidechains Whitepaper published in 2014:
A problem is that altchains, like Bitcoin, typically have their own native cryptocurrency, or altcoin, with a floating price. To access the altchain, users must use a market to obtain this currency, exposing them to the high risk and volatility associated with new currencies. Further, the requirement to independently solve the problems of initial distribution and valuation, while at the same time contending with adverse network effects and a crowded market, discourages technical innovation while at the same time encouraging market games. This is dangerous not only to those directly participating in these systems, but also to the cryptocurrency industry as a whole.
Some detractors as of late have been saying that sidechains are useless and there’s no demand for them since the two oldest and most notable sidechains, Liquid and Rootstock, haven’t gained significant adoption. Yet, it’s empirically obvious that there is demand to “use bitcoin” as a unit of account for more complex financial functions. Demand is so high that over 150,000 BTC has been given to a trusted third party custodian (BitGo) to issue BTC pegged tokens for use on Ethereum!
Regardless of what your views might be on sidechains, I contend that continuing research and development of robust permissionless 2-way pegging mechanisms is a worthy pursuit.
“Trusted third parties are security holes.” – Nick Szabo
Introduction
Botanix notes that the Ethereum Foundation’s vision for solving the scalability problem consists of multiple layers of EVM (Ethereum Virtual Machine) compatible chains, with the main chain as a settlement layer at the bottom. But Ethereum still faces centralization questions with multiple hard forks on the roadmap, the role of the Ethereum Foundation, and their move to Proof-of-Stake. Botanix contends that Bitcoin is a more suitable foundation upon which to build second layers since it’s extremely difficult to change and is secured by Proof of Work.
Why EVM?
Solidity smart contracts benefit from the Lindy effect and experience higher levels of trust and familiarity. From a programming language perspective, Solidity has a strong footing in the crypto world and Botanix has therefore opted to leverage the tools that already exist in the EVM ecosystem.
The Spiderchain
The Spiderchain peg is a series of successive multisig wallets managed by Orchestrators. A grossly oversimplified explanation of how the system works:
People deposit BTC collateral into a multisig wallet to run an Orchestrator.
Orchestrators run 2 nodes side by side: a Bitcoin node and a Spiderchain node.
Orchestrators manage the peg-in and peg-out requests by controlling the multisig wallets and make sure other Orchestrators are acting honestly and staying active.
New peg-in requests result in creation of a new multisig wallet managed by a random subset of currently active Orchestrators.
One specific Orchestrator gets chosen to lead each spiderchain epoch (when a Bitcoin block occurs and peg-ins and outs can happen) based upon the Bitcoin block hash from 6 blocks prior. Each successive spiderchain block is led by a different orchestrator designated via sequential calculations performed upon the same block hash.
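To make the leader selection concrete, here’s a minimal sketch of how deterministic selection based on an old block hash could work. The paper doesn’t spell out the exact derivation of the per-slot “sequential calculations,” so the hashing scheme, function names, and modulus below are my assumptions rather than Botanix’s specification.

import hashlib

def slot_leader(anchor_block_hash: str, slot_index: int, orchestrators: list) -> str:
    # Mix the Bitcoin block hash from 6 blocks ago with the slot index, then take the
    # result modulo the number of active Orchestrators to pick that slot's leader.
    digest = hashlib.sha256(f"{anchor_block_hash}:{slot_index}".encode()).digest()
    return orchestrators[int.from_bytes(digest, "big") % len(orchestrators)]

active_orchestrators = ["orch_A", "orch_B", "orch_C", "orch_D", "orch_E"]
anchor = "000000000000000000017e2b..."  # placeholder for a real block hash
for slot in range(3):
    print(slot, slot_leader(anchor, slot, active_orchestrators))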
The whitepaper includes an image visualizing the relationship between the spiderchain, the Bitcoin blockchain, and the pools of pegged bitcoin. I think one thing that’s missing is that there’s not a 1:1 relationship between spiderchain blocks and Bitcoin blocks. Rather, like Ethereum, the spiderchain has blocks every 12 seconds. The first new spiderchain block that is minted after a new Bitcoin block becomes an anchor point and demarcates a new “epoch” on the spiderchain, creating finality for all of the transactions in blocks that came before that point.
Security Model
Botanix has opted for a Proof of Stake consensus model. Since synthetic bitcoin on the spiderchain will be pegged 1:1 with BTC, the centralization trend for the participants seen in PoS will be counterbalanced by Bitcoin’s PoW. However, this also means there will be no base fee reward for the stakers – they can only collect transaction fees and presumably slashed stakes from misbehaving Orchestrators.
Botanix benefits from the security features of Bitcoin’s PoW system and uses these to mitigate the potential vulnerabilities (Centralization, Randomized Validator Selection, Finality) of PoS consensus algorithms.
As long as the adversarial, colluding actors are outnumbered by 2/3 or more honest Orchestrators, the game theory is sound.
For the pegging process, spiderchain offers a new set of trade-offs.
Federated multisig (funds managed by static consortium)
Drivechain (funds managed by dynamic signers: miners)
Spiderchain (funds managed by dynamic stakers)
The sections on UTXO management are interesting given that, unlike other pegged sidechain proposals, the spiderchain does not rely upon one single pool of pegged funds. As such, using last-in first-out (LIFO) for peg-outs ensures that the oldest coins are secured by the oldest Orchestrators, giving a recently joined malicious adversary no chance to gain control of the older coins.
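Here’s a toy sketch of that LIFO behavior, assuming the peg-in pools can be modeled as a simple stack; the names and data structures are mine, not the whitepaper’s.

# Each entry is (pool_id, btc_balance), ordered oldest first.
pools = []

def peg_in(pool_id: str, amount: float):
    # A new peg-in creates a new multisig pool on top of the stack.
    pools.append((pool_id, amount))

def peg_out(amount: float):
    # Drain the newest pools first (last in, first out), so the oldest
    # coins held by the longest-tenured Orchestrators are touched last.
    remaining = amount
    while remaining > 0 and pools:
        pool_id, balance = pools[-1]
        take = min(balance, remaining)
        remaining -= take
        if take == balance:
            pools.pop()
        else:
            pools[-1] = (pool_id, balance - take)

peg_in("pool_2021", 100)
peg_in("pool_2022", 50)
peg_in("pool_2023", 25)
peg_out(60)    # empties pool_2023 and dips into pool_2022
print(pools)   # [('pool_2021', 100), ('pool_2022', 15)]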
An Orchestrator can deposit bitcoin and after 6 blocks start staking and orchestrating. However, they won’t get added to any of the multisig wallets unless a current Orchestrator publishes their intent to exit. When an Orchestrator wants to exit, it has to wait for every multisig wallet on which it’s a signer to have its key replaced by the key of a different, recently joined Orchestrator.
There are four variables that affect the security level of a spiderchain:
Size of the multisig (number of signers)
The stake (collateral) provided by the Orchestrators
The total number of Orchestrators
The total bitcoin locked in the Spiderchain
The paper notes that the first two can be controlled at the protocol level. What I’d like to see more of is how they might be dynamically adjusted as the latter two variables change over time. Though I expect this will be a learning process over the coming years as the first spiderchain is bootstrapped.
Assumptions
If any of the following attributes fail to hold, the security of the spiderchain and its pegged bitcoin is in peril.
No one has 50%+ of the funds pegged into the network (staked).
No single spiderchain multisig contains an adversarial or unresponsive quorum of 33%. If 1/3 of the Orchestrators on any given multisig are uncooperative, it becomes impossible to peg those funds out. Therefore, inactive Orchestrators will no longer receive block rewards, and after one week of inactivity will slowly be removed from the multisigs.
There’s also an unstated assumption that Bitcoin will never suffer from a chain reorganization of more than 5 blocks. This is because Orchestrators are determined 6 blocks ahead of time by performing a modulus on the Bitcoin block hash. What happens if a reorg longer than that occurs? Seems like there’s potential for the peg to get broken, though it would be unlikely to be catastrophic due to how the funds are dispersed across many multisig wallets.
Goldilocks Numbers
There are many yet-to-be-determined optimal values for the technical and economic variables at play. I believe this is the primary reason why we see a phased bootstrapped process proposed for launching and evolving the spiderchain’s security architecture.
Multisig size and coordination. If there are too many signers on a given multisig then it may become too cumbersome to coordinate signatures in a timely manner. If there are too few signers then it’s too easy for an attacker to add enough signers to Sybil attack wallets and drain funds.
Stake size and centralization. If the stake size chosen is too big, entities are less inclined to run a node therefore reducing the decentralization. If the stake size is too low, the cost for a malicious entity to produce a Sybil attack might be too low.
Orchestrator liveness. If 1/3 of the signers for any given multisig are unresponsive or otherwise noncompliant, the funds can’t be pegged out. If the inactivity period for penalizing misbehaving Orchestrators is too low, it could open up the potential for DoS attacks. If the period is too high, it increases the chance of a multisig’s funds being temporarily frozen if not permanently lost.
It seems to me that some of these numbers should probably be dynamic and should scale along with the size of the spiderchain in terms of total Orchestrators and total BTC pegged into the system.
Capital Efficiency
Is a spiderchain capital efficient with regard to its security requirements? Given:
x = the BTC secured in a certain multisig
n = the number of signers on a multisig
s = the stake size in BTC per orchestrator
A rational Orchestrator will choose to report erratic behavior and receive the slashing reward, rather than collude, as long as the slashing reward exceeds its potential share of the funds that could be stolen from the multisig; the whitepaper expresses this as an inequality in terms of x, n, and s.
What does this mean from a practical standpoint? Let’s assume we don’t want more than 50 signers on a multisig due to the rising coordination complexity.
If we had a multisig of 10 signers each staking 10 BTC then the maximum “safe” amount for that multisig to manage would be ~420 BTC. Not great since the capital efficiency is only ~ 3X.
If we had a multisig of 30 signers each staking 30 BTC then the maximum “safe” amount for that multisig to manage would be ~12,500 BTC. Not shabby, given that only 900 BTC are at stake. That’s a much better ~13X capital efficiency.
Though 50 signers staking 50 BTC would raise the safety ceiling to 55,555 BTC. A ~22X capital efficiency.
Note that in order for these equations to hold true, a relatively high number of Orchestrator nodes must be in operation so that the probability for an adversary attempting a Sybil attack to have multiple Orchestrators in the same multisig is quite low.
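As a sanity check, here’s a quick recomputation of those back-of-the-envelope numbers. The per-multisig capacity figures are taken straight from the scenarios above; the efficiency column is simply capacity divided by the total stake posted by that multisig’s signers, so it may differ slightly from the rounded multiples quoted in the prose.

scenarios = [
    # (signers n, stake per Orchestrator s in BTC, max "safe" BTC from the examples above)
    (10, 10, 420),
    (30, 30, 12_500),
    (50, 50, 55_555),
]
for n, s, max_btc in scenarios:
    total_stake = n * s
    print(f"n={n:2d} s={s:2d}  stake={total_stake:5d} BTC  "
          f"capacity={max_btc:6d} BTC  efficiency={max_btc / total_stake:.1f}x")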
Spiderchain Bootstrapping
Botanix seems well aware that there are a lot of variables at play here and it will likely involve experimentation and lessons learned in production to navigate the bootstrapping process. To avoid silent malicious majority attacks, the bootstrapping will happen in 5 phases, with the initial phase being composed of 100% Botanix controlled Orchestrators.
Suffice it to say, it looks like a long road from 100% centralized to a public permissionless 2-way pegged sidechain. I’m certainly skeptical of any project that presents a roadmap going from centralized to decentralized, because the tendency for anything is to become more centralized over time. Yet, given all of the unknowns involved in this experiment, it does seem that jumping headfirst into a purely permissionless architecture is asking for catastrophe.
Incentives
Why will anyone want to run an Orchestrator honestly? The network itself doesn’t generate new tokens, thus stakers can only earn income from transaction fees.
As such, the gas pricing mechanism on the spiderchain is an important economic consideration. Assuming it’s the same as Ethereum, and 1 satoshi == 10 gwei and the base transaction fee is 100 gwei (per Ethereum documentation) then the floor base fee on the spiderchain is 10 “synthetic” satoshis for a simple EOA to EOA transfer. For comparison, about the cheapest simple on-chain bitcoin transfer (P2TR) would cost about 150 satoshis at the floor fee rate of 1 satoshi per virtual byte. Point being, it seems like the floor for transaction fees is an order of magnitude lower on the spiderchain than on the base chain. That seems like a decent incentive for transactors to want to use it, but the flip side is that it may prove a more challenging road to bootstrap sustainable fee volume. Of course, this is all speculation and the base fee may very well be set higher on a spiderchain.
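For what it’s worth, here’s the arithmetic behind that comparison, using only the assumptions stated above (1 satoshi == 10 gwei, a 100 gwei base transaction fee, and a ~150 vbyte P2TR transfer at 1 sat/vbyte); actual spiderchain fee parameters could differ.

GWEI_PER_SAT = 10
spiderchain_base_fee_gwei = 100                 # assumed floor fee for a simple transfer
spiderchain_floor_sats = spiderchain_base_fee_gwei / GWEI_PER_SAT  # 10 "synthetic" sats

p2tr_vbytes = 150                               # rough size of a simple P2TR transfer
base_chain_floor_sats = p2tr_vbytes * 1         # at 1 sat/vbyte

print(spiderchain_floor_sats, base_chain_floor_sats)    # 10.0 vs 150
print(base_chain_floor_sats / spiderchain_floor_sats)   # ~15x cheaper floor on the spiderchain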
Open Questions
The paper seems to assume that every bitcoin block will include spiderchain peg-ins and outs. What happens (if anything) if an Orchestrator fails to publish the agreed-upon UTXO updates for the next Bitcoin block?
What happens if someone requests a peg-in address which, according to the paper, is generated by the current Orchestrator, but they send the bitcoin at a later date OR it doesn’t get confirmed quickly due to paying an uncompetitive fee rate? Can these “stale” peg-ins be swept later or is there a risk of funds loss?
Are peg-ins a potential single point of failure if they’re only controlled / watched by a single Orchestrator? The paper tells us that “after the peg-in process, the on-chain funds remain secured in multisig chain between the different Orchestrator nodes” but the guarantees for the limbo state of an unconfirmed peg-in are unclear.
If the goal is to prevent fund loss of peg-ins due to delayed confirmation, then it seems like every Orchestrator will need to keep the entire history of peg-in multisig addresses and scan for them at every new Bitcoin block. This might not scale well on a multi-year timeframe.
Similarly, this issue may also exist in reverse for the peg out process, though the scaling concerns would be about 60X higher if the Orchestrator node used to build the peg-out UTXO is a “slot” Orchestrator as opposed to an “epoch” Orchestrator – it’s not clear from the paper.
If an Orchestrator publishes an invalid transaction or block, its stake is slashed and… ? I didn’t see any details around the economics of slashing and how the funds are distributed.
Based upon the description of Orchestrator entrance / exit and the mechanism to add the most recent Orchestrators to multisigs, it feels like there’s potential for gaps. That is, it seems possible for an Orchestrator to never get added to a multisig if a ton of other Orchestrators join the spiderchain shortly after them.
How long does it actually take to peg out? With drivechains it takes months, which provides a ton of time for human intervention in case of an attack. It seems like with Spiderchains it could take as short as a few hours. Shorter timeframes may increase the chance of mining or DoS attacks.
What’s the threshold for removing unreliable Orchestrators? From the paper it sounds like a week of 100% inactivity kicks off the removal process. But what if they are simply unreliable and are only active for ~5% of the blocks each week? Sure, their incentive is to be active otherwise they’ll lose out on collecting transaction fees, but it seems like there should be a threshold higher than 0% at which an Orchestrator is booted for being unreliable. The paper does say that inactvity “will result in a slow inactivity leak of the stake” but it’s unclear if that means a lack of rewards or if their stake actually gets redistributed to the active Orchestrators.
I suspect there are also some gnarly questions around UTXO management with regard to the different multisig wallets. For example, if there’s only a limited peg-in window and then funds are later pegged out, but the wallet isn’t fully emptied, and new wallets are created for peg-ins, my suspicion is that it’s possible to end up with “dust” spread across many multisigs. Thus there’s an open question around how much logic will be in place to treat the aggregate of all of the multisig wallets kind of like one large virtual wallet.
Similarly, I wonder how the spiderchain Orchestrators can respond to fee volatility on the base chain. Will they be intelligent enough to RBF / CPFP any stuck peg-outs?
The multisig signature scheme is never mentioned, but I assume it’s Schnorr or some sort of construction like MPC that only appears as a single signature on-chain, otherwise the transaction data size and fees will be prohibitively high. Also, Orchestrators need the ability to change the key holders of any given multisig, presumably without having to move funds on-chain.
Potential Attacks
The whitepaper talks about Sybil attacks a fair amount, but it doesn’t delve much into some of the issues around liveness. For example, we know that inactive Orchestrators get removed from multisigs after a week of inactivity. What if this attribute was used in conjunction with a denial of service attack? If Orchestrator operators aren’t paying attention then they could find themselves knocked out of the system and potentially even lose funds if enough honest Orchestrators are removed as multisig signers.
There may also be a potential attack vector around reorganizing the Bitcoin blockchain. While much ado has been made about the miner voting mechanism used for drivechains, that process takes months to play out while the whole world can observe it. The fact that spiderchains only have a 6 block delay around selecting Orchestrators seems unnecessarily short to me. Personally I’d recommend a 100 block delay to put it in line with the coinbase maturation threshold. On the flip side, given how the spiderchain multisigs are distributed and the LIFO nature of the peg’s UTXO management, a short-range attack is more limited in how much of the peg’s funds it can affect.
Sidechain Launch Costs
One aspect of pegged sidechains that doesn’t get talked about much is the cost of launching one. Recall the original vision of a whole universe of sidechains interoperating with Bitcoin.
This vision won’t be feasible until the launch costs are brought down drastically. From looking at the available options:
Federated sidechains: high launch costs of organizing a consortium and deploying secure hardware modules for signing.
Drivechains: medium launch costs of convincing enough miners to cast votes for the peg.
Spiderchains: unknown launch costs, likely high if long phases of bootstrapping from permissioned to permissionless are required.
Final Thoughts
We’re nearly a decade into the quest for the holy grail of permissionless sidechain pegging and advancement has been rather slow. Thus far we only really have 2 viable options:
Distribute trust amongst a sufficiently large federation of reputable entities.
Create game theory to manage a pool of pegged BTC. Drivechains creates incentives for miners to manage the BTC while Spiderchains creates incentives for stakers to manage the BTC. Each has trade-offs and complexity.
At a high level, I think that Spiderchains make sense when the conditions are as described in the whitepaper. The million bitcoin question is how well the system can hold up to edge cases and adversarial conditions. The complexity and multivariate game theory make it challenging to reason about, and these kinds of economic security systems can’t really be played out on a zero-value testnet.
I look forward to seeing this experiment progress!
While the universe of digital assets is vast, it’s a small world for crypto custodians. A long bear market and a series of compromises has resulted in two major custodial catastrophes in the summer of 2023: Prime Trust and Fortress Trust.
These cases shocked the industry in different ways and have been notable for their twists and turns. But these compromises were preventable and contain teachable moments for the rest of us.
Casa specializes in helping investors take self-custody of their assets to sidestep the risks of third-party custodians. To help you avoid similar disasters, we thought we would summarize these events with a few points to remember. Let us begin.
What happened with Prime Trust?
Prime Trust was a custodian that acted as the backend for several exchanges and apps. The company was a “qualified custodian” regulated by the State of Nevada.
According to a court filing, Prime Trust migrated its custody onto another platform in 2019. In 2021, the company started unintentionally providing customers with deposit addresses to a 3-of-6 multisig wallet for which it no longer had access to enough keys to sign transactions. Any funds sent to those addresses were lost.
To complete requested withdrawals, the company used customer funds to purchase assets from December 2021 to March 2022. Making matters worse, a crypto bear market set in which placed further strain upon the company’s finances. The company also invested customer funds in TerraUSD, a doomed algorithmic stablecoin that collapsed in May 2022.
By June 2023, rumors began to circulate about the financial hole at Prime Trust. Crypto custodian BitGo agreed to acquire Prime Trust but backed away from the deal. Shortly thereafter, Prime Trust was placed into receivership and eventually filed for bankruptcy.
What happened with Fortress Trust?
Fortress Blockchain Technologies was another company started by Scott Purcell, the same founder as Prime Trust. Purcell departed Prime Trust in 2021 and founded Fortress later that year.
Fortress Trust was a subsidiary of Fortress and was also licensed in the state of Nevada. While there are some similarities between Fortress Trust and Prime Trust, the two were separate companies and were compromised in different ways. The two cases became public within months of each other, though some Prime Trust customers had already switched to become Fortress Trust customers.
On September 7, 2023, Fortress posted on X that a third-party vendor had its cloud tools compromised. The post stated that Fortress Technology was not breached, and there was no loss of funds.
The next day, Ripple, the fintech company affiliated with the cryptocurrency XRP, announced it had agreed to acquire Fortress Trust.
On September 11, The Block reported that, as part of the acquisition, Ripple had covered losses sustained by Fortress Trust customers in a security incident.
Later that day, Mike Belshe, the CEO of BitGo, posted on X that Fortress Trust had omitted facts about what happened. Though BitGo was not affected by the breach, the company did custody assets for Fortress, and the ambiguity around the situation compelled it to issue a statement.
“After the breach, Fortress reached out to BitGo,” Belshe wrote. “BitGo strongly advised Fortress to disclose what happened immediately. Fortress did not do that. Eventually, Fortress decided to sell to Ripple.”
What have we learned?
Prime Trust and Fortress Trust were hardly the first third-party institutions to fall prey to a key compromise, and as much as it pains us to say it, they are unlikely to be the last. These companies existed because there is a dearth of options for “qualified custodians” for regulated investments, such as trust accounts.
The best way to avoid being caught up in a calamity like this is to hold your own keys. Self-custody helps you sidestep custodial risk and maintain control of your assets. Our Casa vaults protect your assets with multiple keys so one disastrous event doesn’t mean lost funds, and you can get help from security experts whenever you need it. Learn more here.
Custodial risk is an inconvenient threat
Most people choose custodians for convenience, but leaving assets with a custodian isn’t a magic solution for securing your wealth. Custodians are subject to more sophisticated security risks than individuals, from both inside and outside the organization. Because they hold a lot of assets, they’re considered “honeypots” and more likely to be targeted.
If a custodian is compromised, the level of assets at stake also tends to cause any possible recovery or remediation process to be complex and prolonged. In the case of bankruptcy, account holders are considered creditors, and they are at the mercy of the judicial system.
You can’t always trust a Trust
Just because a company is a custodian and has “trust” in their name doesn’t mean you should trust them. Helping oneself to customer funds and failing to disclose a breach constitute shameful behavior. But these events tend to occur when custodians engage in damage control and try to buy themselves time.
When you give your assets to a custodian, you never really know what is happening behind closed doors and if they are fully reserved. And all too often, custodians breach trust to save face as seen with Fortress Trust. We’ve seen other failed custodians misrepresent the truth in recent years, such as Celsius and FTX.
Bitcoin and other digital assets were built on public blockchains. This allows you to audit your self-custody yourself. Don’t trust — verify.
A custodian might not have its act together
Court filings show Prime Trust was using a 6-key multisig, which would require them to lose four keys before assets would be inaccessible. This is nearly impossible to do with proper key distribution and periodic checkpoints.
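To make that concrete, here’s the trivial arithmetic for a 3-of-6 quorum: funds remain spendable until four or more keys are lost.

THRESHOLD, TOTAL_KEYS = 3, 6  # a 3-of-6 multisig like the one described in the court filings
for lost in range(TOTAL_KEYS + 1):
    spendable = (TOTAL_KEYS - lost) >= THRESHOLD
    print(f"keys lost: {lost}  still spendable: {spendable}")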
At Casa, we recommend our members perform health checks on each of their keys every six months. Additionally, we equip our members with Sovereign Recovery instructions, which allow you to replicate your vault without Casa. This feature is a failsafe in case Casa is ever unreachable, and it also helps you verify we know what we’re doing.
Regulators won’t save the day
Generally, when exchanges and custodians are hacked, the events are accompanied by a public outcry for governments to act. Victims, the media, and politicians discuss who is to blame and how similar actions can be prevented in the future.
In truth, government enforcement actions are a lagging indicator, and regulations act as a deterrent. When situations like those at Prime Trust and Fortress Trust occur, the powers that be are not aware until it’s too late to reverse the outcome. Thankfully, in the case of Fortress Trust, customers were made whole. But in other cases that progress through bankruptcy, it can take years for assets to be found, let alone returned to creditors, and that’s with some luck.
Even if your case proceeds smoothly through the court system, you might only be partially reimbursed when all is said and done. Bankruptcy proceedings can also exact a major toll on the value in question. As of June 2023, FTX has tallied more than $200 million in professional fees over the course of its bankruptcy case. When you factor in that time is money, waiting for legal action can be costly in a multitude of ways.
Final thoughts
When custodians fail, they’re often inclined to take the path of least resistance and avoid dealing with the situation.
The best custodian for your assets is you. Casa will continue to build tools to help you make the most of your self-custody. No trust required.
See how easy self-custody can be
Casa helps investors take self-custody of their assets with multiple keys for greater protection against hacks, theft, and custodial risk. With a Casa vault, you can own your bitcoin and ethereum fair and square and have full peace of mind.
Bitcoin enthusiasts talk about the concept of sovereignty quite often; it is a value we hold dear. The ability to operate as a sovereign entity within the Bitcoin economy by holding your own keys, auditing the history of the blockchain, and enforcing the rules to which you agree is how individuals empower themselves.
However, there are nuances to this view that have become clearer to me as we have explored the governance of the Bitcoin protocol in more depth since the scaling debates 6 years ago. Consider this:
“For privacy to be widespread it must be part of a social contract. People must come together and deploy these systems for the common good. Privacy only extends so far as the cooperation of one’s fellows in society.”
– Eric Hughes, A Cypherpunk’s Manifesto
You may be triggered by the term “social contract,” but we’ll delve into that a bit later. I think Eric’s quote is relevant because it refers to an issue related to network effects. While we are all individuals, if we are going to live our lives in a way that requires interacting with other humans, then we are relying upon some level of cooperation. This holds true for economic interactions, communications, and of course any other network-like activity such as those reliant upon protocols.
I pose to you that Eric’s quote also works if you substitute “privacy” with “sovereignty.” This essay is my attempt to convince you of that claim.
No man is an island entire of itself, Every man is a piece of the continent, A part of the main.
If a clod be washed away by the sea, Europe is the less, As well as if a promontory were, As well as any manor of thy friend’s, Or of thine own were.
Any man’s death diminishes me, Because I am involved in mankind. And therefore never send to know for whom the bell tolls; It tolls for thee.
– John Donne
If you’re reading this, it’s highly unlikely that you are “an island” that is not reliant upon interacting with any other humans as a part of your daily life.
What is Sovereignty?
Sovereignty is independence; the freedom to operate without asking permission. Often attributed to nations, one can also become a sovereign individual in a limited capacity.
There are many facets of one’s life in which one can be sovereign. Bitcoiners, of course, focus on financial sovereignty.
Financial Sovereignty is great, but we should also strive for:
Data Sovereignty
Energy Sovereignty
Food Sovereignty
Water Sovereignty
Physical Sovereignty
Complete sovereignty at the individual level is nearly impossible today due to the interconnectedness of our economy and society. This is due to the specialization of tasks: individuals are more productive when we focus on doing one specific thing very well. As a result, we outsource many aspects of our lives to third party specialists who are very good at providing specific goods and services.
Even if you’re a “mountain main” who lives in the middle of nowhere and is mostly independent, it’s unlikely that you’re living a primitive lifestyle. Most of those folks are still reliant on supply chains to occasionally provide them with raw materials and higher technology items they can’t create from scratch. Their “islands” of humanity still have frail bridges to society.
Sovereignty Through Math & Game Theory
How does one achieve financial sovereignty from a practical standpoint? We have to begin, of course, at the beginning.
What is a blockchain? It’s a chain of blocks.
I’m a technology guy. When people say “blockchain,” I hear “database.” When people talk about “solving problems with blockchains” they almost always gloss over a lot of the details that are critical to these architecture of these systems.
When you create a blockchain, all you’re doing is creating a linked list of data that’s cryptographically tied together. This data structure gives us the property of tamper evidence. That’s all you really get, in addition to an ordered history of events: you can say “this thing happened after this thing.” Though, to be precise, you can’t be sure that the ordered history you’re looking at is the true history from just a blockchain data structure.
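A minimal sketch of that idea: each block commits to the hash of its parent, so editing any historical entry breaks every link after it. This demonstrates tamper evidence and nothing else.

import hashlib, json

def block_hash(block: dict) -> str:
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

chain = []
prev = "00" * 32  # the genesis block has no parent
for height, data in enumerate(["alice->bob", "bob->carol", "carol->dave"]):
    block = {"height": height, "prev_hash": prev, "data": data}
    chain.append(block)
    prev = block_hash(block)

# Tampering with an old block breaks the links that come after it.
chain[1]["data"] = "bob->mallory"
for parent, child in zip(chain, chain[1:]):
    print(child["prev_hash"] == block_hash(parent))  # True, then False after the edit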
Most of the other stuff that people think of when they say blockchain is not actually guaranteed from the blockchain itself. What is a blockchain not?
It’s not a network of nodes.
It’s not a consensus protocol.
It’s not an immutable history.
It is certainly not an arbiter of truth.
It’s not even a trustworthy timestamping service.
The blockchain itself only gives you tamper evidence. You need other things such as a proof of work or proof of stake or some sort of other consensus mechanism that makes it very expensive for someone to rewrite the blockchain. You need a network of nodes to ensure that the history is accurate. You need specific consensus rules to ensure that blocks were timestamped in a certain range.
How do blockchain-based systems enhance individual sovereignty? Cryptography enables its users to create an asymmetric shield for self defense. That is to say, the cost to attack a user who secures their data with cryptography is orders of magnitude higher than it is for the user to wield it in defense.
Similarly, by running software that validates that no one is breaking the rules of the system, we attain a level of sovereignty in that we need not trust third parties to be honest. For an in-depth explanation of how consensus forms organically in public permissionless networks, check out my presentation clip:
Consensus is achieved in these networks by each of us enforcing the rules to which we agree and thus deciding which data to accept and propagate to our peers, and which data to reject. When participants disagree on the rules and disagree on which data to accept, the network automatically partitions. As such, the “society” with which a participant can interact is also split, and the “governance” of the network as a whole is completely seamless.
In my opinion the most fair system that you can get is one in which any participant can veto anything that they want. This gives us the ability to create a system where we aren’t optimizing for that which is the best for the majority (the goal of democracies.)
Rather, this architecture creates a system in which we’re optimizing for that which is least harmful for the entirety of the user base.
Traditional Governance
Let’s consider how human civilization has gotten to where we are right now. We have created these hierarchical command and control systems over the past several millennia to help us organize ourselves, to help us specialize so that no longer do any of you need to actually worry about growing your food and the entire process of sustaining yourself.
Instead, you can delegate those specific functions off to other people who are specialized and probably work for companies and other hierarchies to be very, very efficient and productive at doing one or two things.
The result of this is that you have a system where there is a lot of power concentration at the top, and this power is basically being used to coordinate the other layers of people who are actually getting stuff done throughout the organization. This holds true for both public and private sector organizations.
This is quite efficient, but of course, it has trade-offs. And I don’t think that as a society, we’ve really thought about these trade-offs very much. What we gain in efficiency and convenience we lose in robustness.
Social Scalability
You hear a lot of people talking about technical scaling solutions and all of the performance problems that we have with blockchains because blockchains are probably the least efficient and least performant database structure that has ever been created.
But I think that a lot of people are overlooking the issue of social scalability. So what is social scalability?
“Civilization advances by extending the number of important operations which we can perform without thinking about them.”
– Alfred North Whitehead, an English mathematician and philosopher
If you think back to bureaucracy and how civilization has evolved with these command and control hierarchies, that is the great question: the trade-off of efficiency versus the resulting systemic risk that we create by centralizing power in the hands of a few.
Thus I believe that blockchain based consensus networks can enable us to create systems that are socially scalable, which means that the cost of participating in the network and staying in the network is much lower.
And I don’t mean the cost from a technical standpoint, but rather from a cognitive standpoint. If you are aware of the idea of Dunbar’s number, it refers to the fact that the human brain can only really keep about 100 to 150 other relationships in play at any given time before we experience a form of cognitive overload.
When you’re in a system architected such that other participants have sufficient power that they can pull the rug out from under you, change the rules, and actually change the system itself, then you have to spend a lot of time worrying about all of these other participants and how they might impose upon your sovereignty.
But if we can build robust platforms where the power is so decentralized that you can create a much more resilient and trustworthy system, then people can interact with each other and use that system with very little cognitive overhead. With public permissionless networks we can create truly free markets that are socially scalable, where we can achieve this by creating a system where you don’t have to worry about all of the power dynamics and the games that are being played behind the scenes.
And we essentially do that by inverting and automating bureaucracy to create these new forms of cybersociety.
“When we can secure the most important functionality of a financial network by computer science rather than by the traditional accountants, regulators, investigators, police, and lawyers, we go from a system that is manual, local, and of inconsistent security to one that is automated, global, and much more secure.”
– Nick Szabo
In one sense, property rights are extremely well defined within cryptographically secured protocols. Either you have the ability to present sufficient proof to the network that you own an entry in the distributed ledger and can manipulate it, or you don’t.
However, at a higher level there is game theory at play. While you can be secure against your assets being stolen or frozen by some random authority, it’s always possible for the ecosystem as a whole to turn on you. Due to game theory and the inverted nature of governance in public permissionless networks, this is made to be extremely unlikely due to the difficulty in coordinating such changes, but it’s never impossible.
1) Public permissionless consensus systems let you use them w/o trusting any one individual. However, you must trust everyone in aggregate.
Take, for example, Ethereum’s response to the DAO hack. This is the most well-known example of a reaction to a perceived massive threat but it’s by no means the only time a protocol has been changed in result to actions taken by a malicious entity.
In the case of the DAO hack, a sufficient amount of value was removed from the control of a sufficient number of entities on the network, such that the incentives were strong enough to coordinate a protocol change to return the funds to their original owners. The DAO attacker managed to take control of 3.6 million ETH, which was about 5% of the total supply at the time. One can logically argue, of course, that the DAO hacker was just following the rules of the protocol and took rightful ownership of those tokens, but this goes to show that not all rules are written.
Note that something similar happened to Bitcoin, though when it was a much smaller ecosystem. On August 15, 2010, it was discovered that block 74,638 contained a transaction that created 184,467,440,737.09551616 BTC spread across three different addresses. This was possible because the code used for checking transactions didn’t account for the case of outputs that were so large that they overflowed when summed.
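Here’s a simplified illustration of that bug: output values were summed as signed 64-bit integers, so two absurdly large outputs could wrap around to a small (even negative) total and slip past a naive “outputs can’t exceed inputs” check. The values below are illustrative approximations, not the exact on-chain transaction data.

INT64_MAX = 2**63 - 1

def int64_add(a: int, b: int) -> int:
    # Add two values with C-style signed 64-bit wraparound.
    result = (a + b) & ((1 << 64) - 1)
    return result - (1 << 64) if result > INT64_MAX else result

huge_output = 9_223_372_036_854_277_039      # ~92.2 billion BTC, denominated in satoshis
total_out = int64_add(huge_output, huge_output)
print(total_out)                             # a small negative number: the overflow
print(total_out <= 50 * 100_000_000)         # a naive "outputs <= 50 BTC input" check passes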
A new version of the client was published within five hours of the discovery that contained a soft forking change to the consensus rules that rejected output value overflow transactions. The blockchain was forked. Although many unpatched nodes continued to build on the “bad” blockchain, the “good” blockchain fork overtook it at a block height of 74,691 at which point all nodes accepted the “good” blockchain as the authoritative source of Bitcoin’s transaction history.
On one hand, whoever exploited that vulnerability had their bitcoin taken away from them by the network at large. On the other hand, if the rule had only been patched going forward from that point, the exploiter would have ended up owning 99.9886159% of all bitcoin ever created. The incentives were quite clear.
Spock: “It is logical. The needs of the many outweigh…” Kirk: “The needs of the few.” Spock: “Or the one.”
Social Contracts
There’s a conundrum in that it’s not even possible to write a social contract because no authority can enforce it. I’d argue that the legal systems put in place by governments are an attempt at codifying the social contract.
“Everyone carries a part of society on his shoulders; no one is relieved of his share of responsibility by others. And no one can find a safe way out for himself if society is sweeping toward destruction. Therefore, everyone, in his own interests, must thrust himself vigorously into the intellectual battle. None can stand aside with unconcern; the interest of everyone hangs on the result. Whether he chooses or not, every man is drawn into the great historical struggle, the decisive battle into which our epoch has plunged us.”
– Ludwig Von Mises
It seems to me that the “social contract” is just a euphemism for “the lowest common denominator of beliefs across humans in a given organization.” It is ethereal, difficult to define, and subject to change. Despite all of our advances in machine consensus to automate the enforcement of rules across a society, it seems we will be forever constrained by the messy, unquantifiable nature of human consensus.
Opt-In vs Opt-Out Society
The great thing about creating public permissionless networks that are secured by cryptography is that those who choose to participate do so out of their own interest. Anyone who is using Bitcoin today is doing so because they have opted in to this system of rules. Though that may not always be the case in the future, if more nation states decide to adopt it as legal tender.
Contrast that with something like the Free State Project that is basically “invading” an existing society (New Hampshire) and attempting to subvert it from within. The latter is sure to be a more challenging road, battling incumbents rather than “homesteading” pristine unclaimed ground.
Bitcoin’s Social Contract
What is Bitcoin’s social contract? I often refer to the set of “inviolable properties” that are generally agreed upon by users.
Consensus, Not Command & Control: governance rests upon the Cypherpunk principle of rough consensus and running code.
Trust Minimization: trust makes systems brittle, opaque, and costly to operate. Trust failures result in systemic collapses, trust curation creates inequality and monopoly lock-in, and naturally arising trust choke-points can be abused to deny access to due process.
Decentralization: of many attributes, but power is what matters most.
Censorship Resistance: No one should have the power to prevent others from interacting with the Bitcoin network. Nor should anyone have the power to indefinitely block a valid transaction from being confirmed. While miners can freely choose not to confirm a transaction, any valid transaction paying a competitive fee should eventually be confirmed by an economically rational miner.
Pseudonymity: No official identification should be required to own or use Bitcoin. This principle strengthens the censorship resistance and fungibility of the system, as it is more difficult to select transactions to consider “tainted” when the system itself does not keep track of users.
Open Source: Bitcoin client source code should always be open for anyone to read, modify, copy, and share. Bitcoin’s value is built upon the transparency and auditability of the system. The ability to audit any aspect of the system ensures that we need not trust any specific entities to act honestly.
Permissionless: No arbitrary gatekeepers should be able to prevent anyone from participating on the network (as a transactor, node, miner, etc). This is a result of trust minimization, censorship resistance, and pseudonymity.
Legal Indifference: Bitcoin should be unconcerned with the laws of nation states, just like other Internet protocols. Regulators will have to figure out how to respond to the functionality enabled by Bitcoin-powered technology, not the other way around.
Fungibility: Fungibility is an important property of sound money. If every user needed to perform taint analysis on all the funds they received, then the utility of the system would drop significantly.
Forward Compatibility: Bitcoin supports signing transactions without broadcasting them; there is a principle that any currently possible signed but not broadcast transactions should remain valid and broadcastable. The fact that Bitcoin has stuck to this principle gives everyone confidence in the protocol. Anyone can secure their funds by whatever scheme they dream up and deploy it without needing permission.
Resource Minimization: In order to keep verification costs low, block space is scarce. As such, it should be expensive for anyone to consume a lot of block space. Validation should be cheap because it supports trust minimization if more users can afford to audit the system; cheap validation also makes resource exhaustion attacks expensive.
Convergence: Any two Bitcoin clients, if they connect to a single honest peer, should eventually converge on the same chain tip.
Transaction Immutability: Each additional block added to the chain after a given block should make it far less likely that the given block will be orphaned by a chain reorganization. While there can technically be no guarantee of immutability, we can guarantee that it becomes impractically expensive to reverse a transaction after it is sufficiently buried under enough proof of work.
Conservatism: Money should be stable in the long run. We should be conservative about making changes, both in order to minimize risk to the system, and to allow people to continue using the system in the way they see fit.
Sovereignty Inside the System, not Against the System
Systems like Bitcoin are superior because their incentives and governance are more transparent, despite the governance process and power distribution being poorly defined. Some would say that’s a feature in and of itself.
We are all capable of being sovereign in limited ways, but we are reliant upon cooperation with others in society to engage in trade and provide us with the products of their labor. Recall that our bitcoin only has value because some set of people around the world agrees with us that it has value. Remember that “no man is an island.”
Generational Impositions Upon Sovereignty
While opt-in cybersocieties are arguably better than traditional nation and city-state governance that is backed by threats of violence, what if the concept of opt-in society still fails at generational timescales?
One issue I’ve revisited several times over the years is related to the cycles we see in civilization.
“Hard times create strong men. Strong men create good times. Good times create weak men. And, weak men create hard times.”
― G. Michael Hopf
I believe there’s something of a moral dilemma given that a society may choose to reorganize itself and form a new system of government and laws. But those laws tend to exist in perpetuity and are imposed upon future generations. If society changes and decides the laws no longer fit their desired social contract, it can be quite difficult to change them peacefully.
This is because defaults tend to be extremely sticky. If we observe the rise and fall of empires, they tend to collapse as a result of greater and greater amounts of bureaucracy being imposed upon society, until either the populace revolts or the system collapses upon itself due to resource exhaustion and an inability to react to changing environments. I’ve often wondered if it wouldn’t be more fair for the default to be that laws must be re-ratified every generation or every few decades.
Where Are We Going?
I think the open-ended question after covering all of these issues is how do you try to guide the evolution of a social contract? I think it’s an issue of culture, narratives, and memetics.
“My prediction is that libertarians are going to turn on Bitcoin. That’ll be in about two years, when it’ll be mainstream. I don’t know how you get fringe technology without fringe people and politics … You just need to go through a maturation process where the technology emerges as mainstream at the other end. Along the way the fringe politics will move on.”
– Marc Andreessen, 2014
While Marc’s prediction has failed to come to fruition, he was onto something. If an opt-in society goes from small and niche to large and mainstream, it’s possible for the newcomers to bring in their own culture and values, which can change the unwritten social contract, which can then lead to people making an attempt to change the written and codified rules. Since libertarian ideals are “fringe” then it’s certainly possible that mainstream adoption of Bitcoin could result in the social contract of the system morphing into something with weaker assurances.
I think one saving grace we have in Bitcoin is that the earliest adopters who hold strong ideological beliefs, a lot of bitcoin, and a lot of influence and power over enterprises in this space will not be easily swayed. It’s an open question of how the game theory will play out.
What can you, dear reader, do to contribute to the continued integrity of Bitcoin’s social contract and the properties we hold as inviolable?
In late July, Twitter’s logo suddenly changed to an X, followed by Elon Musk’s official announcement. “Twitter” is officially no more, and the website used by millions around the world is now called “X.”
According to the platform’s CEO Linda Yaccarino, the rebranding was the next step toward “the future state of unlimited interactivity,” morphing Twitter into “a global marketplace for ideas, goods, services and opportunities” — a unified “everything app.”
Jameson Lopp is the chief technology officer and co-founder of Casa, a self-custody service.
At a time when our lives are only becoming increasingly digital, why should we hand all of our information to centralized, opaque organizations that have a track record of using it unethically? Sure, these services can be profoundly convenient, and many people undoubtedly enjoy having one user-friendly application that can manage so much of their digital and real lives, but what’s the price?
Is convenience worth our freedom?
The idea of Twitter as an “everything app” was seemingly inspired by the popular Chinese platform WeChat, which allows users not only to chat, make calls and send media but also to make payments and access a wide range of financial and personal services. As Elon Musk has said, “You basically live on WeChat in China. If we can recreate that with Twitter, we’ll be a great success.”
Despite sounding convenient on paper, there’s a genuine concern about what happens when you use a single point of access for your entire digital world. If you do anything deemed “unacceptable” – generally by algorithms designed by people you will never know – you can be cut off in a second, often with little to no recourse.
Last October, for example, some WeChat users in China reported that they were banned from the platform entirely – effectively “killing” their digital selves – just for reposting some “questionable” banners condemning Xi Jinping. More recently, X itself literally hijacked a 16-year-old account that used the @x handle, replacing its name with @x12345678998765 — without any prior warning, consent or compensation.
Twitter’s rebranding was happening alongside the launch of Meta’s new community messaging service called Threads. It joined Meta’s other social media offerings including Facebook and Instagram and is designed for sharing text updates and joining public conversations in competition with X.
Considering Meta’s complicated history with customer data, it’s unsurprising that many are concerned that Threads is simply a new avenue for information gathering and potential abuse. Many big tech companies like Meta and X have tried to create “everything platforms” by expanding into new products because being present in users’ day-to-day lives is a way to gather untold gigabytes of data on people worldwide.
But without owning your account, “everything” can be unilaterally taken away in an instant and “everything” becomes a single point of surveillance and potential failure.
The case for digital sovereignty
Examples like these help showcase the problem of centralized services holding full control over user access — but what’s the solution? Digital sovereignty.
As almost every aspect of our lives becomes digitized, the ability to control and manage your own data isn’t just a privilege anymore; it’s a human right.
Fortunately, one of the major boons of blockchain and other cryptographic breakthroughs is the ability to disintermediate big tech platforms and take charge of your identity and data. It’ll no doubt take a while for the masses to really grasp the gravity of this, but more are coming around to the notion. For the already enlightened, platforms exist that cater to this type of digital sovereignty.
Nostr, for example, is a protocol for sharing data like simple text posts, and it doesn’t rely on servers operated by any one entity. Nostr isn’t itself a blockchain, but the entire system is built around cryptographic keys and signatures to authorize and track events posted by pseudonymous identities — much like Bitcoin. (If you’d like to do a deeper dive into just what Nostr is and how it works, I’ve written about this at length before.)
What makes Nostr important for this discussion is the fact that it offers a true path to censorship-resistant social media as well as digital sovereignty. Yes, there are some other platforms that claim to offer a similar experience, but to ensure you can’t be deplatformed you must run your own server which is typically a major bottleneck to adoption.
Nostr doesn’t doesn’t require people to bootstrap servers and so is comparatively very easy to start using. You simply choose your client, be it a web browser or some app, create your public and private keys and can immediately begin surfing through content from other users or post your own. Different clients will provide somewhat varying experiences – some more technical, some rather streamlined – but many will seem quite familiar to anyone with at least some social media experience.
At this point, the experience is much like Twitter. You get the same basic service with no ads and no threat of data harvesting whatsoever. Furthermore, considering a social network is only as good as the people who use it, you may be surprised just how many famous names are already involved with Nostr. Perhaps most notable is Jack Dorsey, the original creator of Twitter. There are even services that allow Twitter users to import anyone they follow who has linked their account to Nostr. This makes switching easy and can free current Twitter users from centralization in no time.
Ultimately, each and every individual should be able to decide for themselves how to approach their presence online. Some may prioritize convenience and continue to use platforms like Twitter/X and its peers, while others may see the writing on the wall and decide that their digital sovereignty is more important.
Hopefully, by continuing to build and attract more people, we can create powerful alternatives to toxic social media today that challenge even the biggest centralized services. And perhaps the best way to do that is to offer a more transparent, fair and censorship-resistant experience where users will always remain in control of their private data.
Five years ago I started running annual full validation sync performance tests of every Bitcoin node I could find. It was interesting to watch over the years as well-maintained node software got faster while less maintained node software fell behind. But something weird happened during my 2021 testing – I observed unexplained slowdowns across the board that appeared to be network bottlenecks rather than CPU or disk I/O bottlenecks. These slowdowns disappeared when I re-ran my syncs while only requesting data from peer nodes on my local network.
In 2022 I dove a bit deeper into this issue with the following report:
During initial block download, there is a stalling mechanism that triggers if the node can’t proceed with assigning more blocks to be downloaded within the 1024-block look-ahead window because all of those blocks are either already downloaded or already being downloaded: we’ll mark the peer from which we expect the current block that would allow us to advance our tip (and thereby move the 1024 window ahead) as a possible staller. We then give this peer 2 more seconds to deliver a block (BLOCK_STALLING_TIMEOUT) and if it doesn’t, disconnect it and assign the critical block we need to another peer.
The problem is that this second peer is immediately marked as a potential staller using the same mechanism and given 2 seconds as well – if our own connection is so slow that it simply takes us more than 2 seconds to download this block, that peer will also be disconnected (and so on…), leading to repeated disconnections and no progress in downloading the blockchain.
As of Bitcoin Core v25 the timeout is adaptive: if we disconnect a peer for stalling, we now double the timeout for the next peer (up to a maximum of 64 seconds.) If we connect a block, we halve it again up to the old value of 2 seconds. That way, peers that are comparatively slower will still get disconnected, but long phases of disconnecting all peers shouldn’t happen any more.
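Here’s a toy model of that adaptive timeout logic, written out for clarity rather than copied from Bitcoin Core: the timeout doubles each time a peer is disconnected for stalling, capped at 64 seconds, and halves back toward the 2-second default whenever a block connects.

STALLING_TIMEOUT_DEFAULT = 2   # seconds
STALLING_TIMEOUT_MAX = 64      # seconds

timeout = STALLING_TIMEOUT_DEFAULT

def on_stalling_disconnect():
    # Double the timeout so a slow local connection stops churning through every peer.
    global timeout
    timeout = min(timeout * 2, STALLING_TIMEOUT_MAX)

def on_block_connected():
    # Ease back toward the default once blocks are connecting again.
    global timeout
    timeout = max(timeout // 2, STALLING_TIMEOUT_DEFAULT)

for _ in range(4):
    on_stalling_disconnect()
print(timeout)   # 32
on_block_connected()
print(timeout)   # 16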
Verifying the Fix
I decided to run a variety of different test syncs to determine what effect, if any, this change has in the real world.
v24 default validation sync against public peers
v24 full validation sync against public peers
v25 default validation sync against public peers
v25 full validation sync against public peers
v24 default validation sync against a local network peer
v24 full validation sync against a local network peer
v25 default validation sync against a local network peer
v25 full validation sync against a local network peer
All the test syncs shared the following config values in common; the only differences were whether I set “assumevalid=0” to force signature validation for all historical transactions and, for the local network syncs, whether I set “connect=” to peer exclusively with my local node.
dbcache=24000
disablewallet=1
On to the results!
Syncing nodes against a peer on the local network
Here we can clearly see the difference between doing a default validation versus doing a full validation of all historical signatures. Oddly enough, it looks like the default v25 validation is slightly slower. Though note that all of these tests were done against a local network peer, so the stalling change would have no effect upon performance. You may also notice that the v24 validation slows down a lot after 200 minutes – this is because it hits the point at which it starts validating historical signatures.
Syncing nodes against publicly reachable peers on the global network
What if we compare v24 to v25 against public peers? v24 is slightly faster at syncing early blocks, but otherwise it’s a dead heat. Once again, the divergence at the end is attributed to the difference between when each client starts to validate historical signatures and becomes CPU-bound. To get rid of that variable we need to do full validation syncs.
With full validation syncs we can observe essentially no difference between the two releases when syncing against a local network peer, as expected. Against public peers, we can see that v25 is slightly faster for the first 4 hours but then they converge. For reference, I was expecting the performance chart to look more like this one from last year’s tests:
I struggle to explain the convergence, but will also note that I only ran one test sync for each of the 8 node configurations due to the time requirements. Given more resources it would be preferable to run 10+ syncs of each from a variety of different geographic locations.
The Landscape of Publicly Reachable Node Bandwidth
Has the overall network health changed significantly since last year? I re-ran my network crawler script to measure each publicly reachable peer’s bandwidth.
Total IPV4 nodes: 5,467
Pruned nodes: 831 (can’t be used for full chain sync)
Failed to return requested blocks: 2,153
Successfully returned requested blocks: 2,483
Of the 2,483 nodes for which I was able to measure their bandwidth, the breakdown was as follows:
Average peer upstream: 12.8 Mbps
Median peer upstream: 14.3 Mbps
These figures are similar to last year’s results of a 17 Mbps average and 12 Mbps median, though the distribution is quite different. Last year’s test found more peers in the 30 Mbps range, which brought the average higher. One possible explanation for why so many peers are in the 14 – 15 Mbps range: looking at Comcast and Spectrum’s subscription plans, their highest non-gigabit tiers tend to cap the upstream at 15 – 20 Mbps.
Further Experimentation
There are a ton of variables at play when doing an initial sync of your node against publicly reachable peers.
Bitcoin Core only opens 10 outbound connections
It selects those connections randomly from lists of IP addresses it receives from DNS seeds. It queries all 9 DNS seeds and, in my testing, each seed returns about 40 IP addresses. About 2/3 of those are IPV4 and 1/3 are IPV6. Thus, if you’re an IPV4 node you’ll have about 250 peers to try, and if you’re IPV6-only, about 120 peers to try.
Bitcoin Core only maintains a download window for the next 1024 blocks
What if Bitcoin Core connected to more peers? We can test that by recompiling Core with a single line of code changed:
diff --git a/src/net.h b/src/net.h
index 9b939aea5c..41b182ac53 100644
--- a/src/net.h
+++ b/src/net.h
/** Maximum number of automatic outgoing nodes over which we'll relay everything (blocks, tx, addrs, etc) */
-static const int MAX_OUTBOUND_FULL_RELAY_CONNECTIONS = 8;
+static const int MAX_OUTBOUND_FULL_RELAY_CONNECTIONS = 48;
What if the download window was larger? As noted in the code comment, this would cause more disordering of blocks on disk, which makes reindexing and pruning harder, but those are rare operations. Once again, we just need to change one variable:
diff --git a/src/net_processing.cpp b/src/net_processing.cpp
index b55c593934..663689de29 100644
--- a/src/net_processing.cpp
+++ b/src/net_processing.cpp
@@ -127,7 +127,7 @@ static const int MAX_BLOCKTXN_DEPTH = 10;
* Larger windows tolerate larger download speed differences between peer, but increase the potential
* degree of disordering of blocks on disk (which make reindexing and pruning harder). We'll probably
* want to make this a per-peer adaptive value at some point. */
-static const unsigned int BLOCK_DOWNLOAD_WINDOW = 1024;
+static const unsigned int BLOCK_DOWNLOAD_WINDOW = 10000;
I next ran another set of syncing tests to compare tweaking the above variables.
What are the takeaways? Increasing the download window size actually seems to harm download performance. Adding more peers seems to have a negligible effect. While there’s probably room for improvement somewhere in the peer management logic, neither of these options appears to do the trick.
It might be worth improving Bitcoin Core’s peer management to keep track of the actual bandwidth it’s seeing from each peer and drop slow peers that create bottlenecks, though this would require a fair amount of adversarial thinking to ensure that such a change doesn’t open up an eclipse attack vector for high bandwidth / enterprise / data center nodes.
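To illustrate what that bookkeeping might look like (purely a thought-experiment sketch, not anything that exists in Bitcoin Core), the node could maintain a rolling throughput estimate per peer and nominate the slowest one for eviction when the download pipeline is the bottleneck:

#include <cstdint>
#include <map>
#include <optional>

// Hypothetical per-peer throughput bookkeeping; illustrative only.
struct PeerStats {
    uint64_t bytes_received{0};
    double seconds_active{0.0};
    double Throughput() const { // bytes per second
        return seconds_active > 0 ? bytes_received / seconds_active : 0.0;
    }
};

class PeerScoreboard {
    std::map<int, PeerStats> m_peers; // keyed by peer id
public:
    void Record(int peer_id, uint64_t bytes, double seconds) {
        auto& stats = m_peers[peer_id];
        stats.bytes_received += bytes;
        stats.seconds_active += seconds;
    }
    // Candidate for eviction: the peer with the lowest observed throughput.
    // A real design would need eclipse-attack protections, e.g. never evicting
    // below a minimum peer count and preserving peer diversity.
    std::optional<int> SlowestPeer() const {
        std::optional<int> worst;
        double worst_rate = 0.0;
        for (const auto& [id, stats] : m_peers) {
            if (!worst || stats.Throughput() < worst_rate) {
                worst = id;
                worst_rate = stats.Throughput();
            }
        }
        return worst;
    }
};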
Conclusion
While the performance gains I observed were muted in comparison to my expectations, there are a lot of uncontrollable and even unknown variables at play. For example, we’ll probably never know if there were maliciously slow peers running on the network last year that may be gone now.
What should the average Bitcoin enthusiast take away from all of this? If you have a high speed residential connection that doesn’t have a pitifully throttled upstream, please consider running a node that accepts incoming connections!
Bitcoin is over ten years old, but you wouldn’t know it. Almost all of the changes to its inner workings over the years have been quite conservative, leaving us a Bitcoin that Satoshi would easily recognize.
Proposed changes sometimes ignite a debate within the community centered around what Bitcoin truly “is” or “needs.” While there are strong opinions and arguments on both sides, the protocol ultimately requires fundamental but careful updates, alongside meaningful and functional second-layer applications.
When thinking about Bitcoin’s future, it’s essential to consider the repercussions of ossification. I’ve touched on this topic several times — but in a nutshell, ossification is the process whereby so many applications get built on top of a protocol and adopted by the masses that it becomes exceedingly difficult, if not impossible, to alter the underlying protocol without breaking almost everything.
With thousands or tens of thousands of third-party services relying on the base network, it’s infeasible to coordinate upgrades across every provider simultaneously. We’ve already seen this occur with protocols like TCP, one of the primary mechanisms used to handle internet traffic, and it will inevitably happen with Bitcoin.
I bring this up because we don’t know when Bitcoin will become too ossified to change — and there’s a small chance it’s already too late. This adds pressure to the entire debate because, at some point, updating through BIPs will become impossible, so solving problems at that level will be a moot point.
That being said, Bitcoin represents an uncontrollable technology that all are free to build on top of.
Considering that many, including myself, believe there is still some important work to be done at the protocol level, we need to act swiftly to ensure the Bitcoin network is developed enough to work as the digital currency system it was intended to be.
Updating Bitcoin’s code is nothing new
Because the network operates around consensus, any new changes submitted via Bitcoin Improvement Proposals (BIPs) need to be agreed upon (or at least not objected to) by an overwhelming portion of ecosystem participants.
This is understandably a slow and onerous process. Unsurprisingly, as a result, there have only been a handful of major upgrades in the history of the protocol.
One of these upgrades, known as Segregated Witness, or Segwit for short, represented a change in the fundamental transaction format to protect against transaction malleability: the ability to change small bits of information in an unconfirmed transaction in such a way that makes child transactions invalid.
More recently, we implemented the Taproot upgrade, enabling more scalable and private complex spending conditions for Bitcoin as well as signature aggregation. This, however, was likely not the last upgrade Bitcoin will see, and many proposals are already in the works that may eventually become a part of the core protocol.
SIGHASH_ANYPREVOUT would allow a transaction to spend from any unspent transaction output that is encumbered by a given set of spending conditions. This enables greater flexibility, unlocking complex transaction constructions, such as creating atomic swaps and shared lightning network channels.
There are also covenants — a topic I’ve spoken about at length — that would restrict where a given address could send Bitcoin, which holds significant security implications.
Drivechains also stand to bring permissionless pegging of bitcoin to and from sidechains, which could both aid in scalability as well as experimentation.
Of course, there’s one major change that is pushed by ESG proponents, and that’s a claim that the network should change from one secured by proof-of-work to one powered by proof-of-stake. This is being suggested because proof-of-work takes a significantly greater amount of energy to perform than proof-of-stake.
Ethereum recently switched to proof-of-stake, ushering in a ~99.5% reduction in energy usage. However, there are trade-offs in switching from one type of consensus mechanism to another that are more complicated than just power consumption. This article isn’t going to wade through that debate, but it is one of the best examples of a controversial change that is unlikely to occur, even though it has its proponents.
The case for building on bitcoin instead
The Bitcoin ecosystem is free to update the protocol in any way it can collectively agree upon. However, further upgrades to the core code aren’t the only way forward.
Users are exploring a myriad of ways that Bitcoin can be developed on top of, without changing anything within the core protocol.
There are already some well-known examples. They include the Lightning Network, a second-layer solution utilizing off-chain micropayment channels, scaling the blockchain’s capacity and speed, and lowering fees. There’s also been the recent addition of inscriptions, which amount to Bitcoin’s own version of NFTs by embedding image data in the blockchain.
A brief internet search will reveal many more such projects, and admittedly, the previous upgrades to the Bitcoin protocol made these technical achievements possible. Nonetheless, this highlights that Bitcoin, as it is now, is fully capable of being built on to enable innovative improvements.
The debate rages on
Some Bitcoin proponents don’t want to see the protocol change at all. Remember, there are as many opinions about these additions to the network as there are projects and proposals. Still, some purists are quite vocal about what they see as the failures of some of these endeavors.
Popular Bitcoin commentator Shinobi wrote an essay outlining multiple existing shortcomings of the lightning network, explaining at length that it’s unlikely to become a globally scalable, secure payment system without additional upgrades.
Lightning may be spurring some debates, but the recent addition of Ordinals is stirring the pot further.
The argument is largely between those who feel that Bitcoin should be strictly about financial transactions, and others who believe that the network is robust enough to host any type of data, secured by the fees required to transact them. This disagreement showcases another inherent tension between the philosophy of changing Bitcoin, against the philosophy of building on top of it.
Ultimately, there’s no one clear path forward for Bitcoin. The protocol will continue to reflect the desires of its users as it was intended. Bitcoin is designed to exist for hundreds of years to come, but that doesn’t mean it is already in its final form.
Moving forward, I’d like to see a combination of thoughtful, community-driven updates to the core protocol that enable and enhance innovative new protocols built on top of the base layer.
The idea that there is only one path forward is simply too restrictive of a view for something as important as Bitcoin.
Jameson Lopp is the CTO & cofounder of Casa, a self custody service. A Cypherpunk whose goal is to build technology that empowers individuals, he has been building multisignature bitcoin wallets since 2015. Prior to founding Casa, he was the lead infrastructure engineer at BitGo. He is the founder of Mensa’s Bitcoin Special Interest Group, the Triangle Blockchain & Business meetup, and several open source Bitcoin projects. Throughout this time he has worked to educate others about what he has learned the hard way while writing robust software that can withstand both adversaries and unsophisticated end users.
When investors first allocate to bitcoin (BTC), they are typically faced with a paradox. While they appreciate the value of digitally native, censorship-resistant, hard money, they are often hesitant to realize that potential fully by taking custody of it themselves.
This reluctance is not surprising. For centuries, civilizations have normalized the outsourcing of custody to other third parties to sidestep personal risk. That same phenomenon has been reinforced in the modern era with a wide range of securities such as public companies, trusts, and exchange-traded funds (ETFs), many of which track the price of underlying commodities. The relationship investors have with ownership has been reduced from holding physical stock certificates to trading tickers in a brokerage account.
But is holding shares of these products as secure as holding bitcoin yourself with your own private keys? The answer is not so simple and requires one to develop a broad view of traditional financial markets, bitcoin’s design, and the tradeoffs and risks associated with traditional securities.
Please note this article is provided for informational purposes only and is not intended as financial, legal, tax, accounting, or investment advice. Casa urges you to consult a qualified professional for any such advice or service.
Types of bitcoin proxy securities
Since bitcoin originated, many have attempted to create securities offerings for investors wishing to participate more passively in bitcoin’s price appreciation through a proxy, a trend that has picked up steam in recent years. Some of these products exist today, while many investors worldwide are still wishing for a spot bitcoin ETF, with several large institutions and asset managers seeking to fill the void by applying to regulatory agencies.
Today, bitcoin has been packaged into many types of securities that provide exposure to a basket of underlying assets. Here are some of the ways they can offer exposure to bitcoin, each with its own set of risks.
Spot: This product holds a finite amount of bitcoin per share.
Futures: This product allows you to trade bitcoin at a fixed price in the future.
Equity: This product represents an ownership stake in a company that holds bitcoin on its balance sheet.
For the purposes of this article we will focus on spot and general equity investments since these are designed to provide the closest approximation for BTC. Futures products and derivatives trading were previously discussed in this article.
What options exist today?
The most notable bitcoin-related investment vehicle is the Grayscale Bitcoin Trust (GBTC), a spot offering that holds more than 620,000 bitcoin as of this writing, a staggering 3% of the total circulating BTC supply. There are also other spot offerings, which vary by jurisdiction.
Additionally, there are several companies that hold bitcoin such as Microstrategy, Tesla, and several bitcoin mining companies.
Why do people want a bitcoin ETF, stock, or other security?
Investors often find ETFs, trusts, and similar products attractive because of their convenience. Instead of doing the work of acquiring, holding, and managing assets, they can purchase an offering off the shelf on a public exchange and let an asset manager handle the administrative burden.
While retail investors can easily purchase spot bitcoin on crypto exchanges in friendly jurisdictions, this investment is typically conducted with post-tax income, which represents a small subset of overall financial markets. But these inflows are just one small piece of the pie.
Retirement is arguably the most common investment objective with massive influence on financial markets. Retirement assets totaled approximately $35 trillion in the U.S. alone in Q1 2023, according to the Investment Company Institute. That amount accounted for more than 30 percent of all household financial assets.
Retirement contributions are frequently an employment benefit associated with tax breaks, and investors are usually incentivized to participate in them to avoid missing out on “free money” in the form of an employer match. This investing is generally done through pension funds and tax-advantaged accounts such as 401(k) in the U.S. and SIPPs in the U.K.
Why can’t you buy bitcoin directly in your retirement account?
Setting aside political motives, there are a few practical reasons why purchasing spot bitcoin in a retirement account is not that straightforward.
Maintaining custody is a major part of managing retirement accounts. Because these accounts act as a tax shelter, governments don’t want investors commingling retirement assets with their other accounts. Additionally, retirement accounts have strict regulatory standards and compliance requirements, and they are typically overseen by investment advisers that register with their respective governments.
The most common way to hold spot bitcoin in a tax-advantaged account is to set up a self-directed IRA, though there is a paper trail involved. You also have to be diligent about keeping bitcoin separate from any bitcoin you may have acquired with post-tax income and avoid prohibited transactions. All in all, this process can be cumbersome and complicated, hence investors’ preference for proxy alternatives.
Risks of owning bitcoin-related securities
As a protocol, bitcoin was designed to allow you to transact without having to trust a third party or government. When you purchase shares of a bitcoin-associated entity, you are inevitably placing some trust in third parties and the government. Below are some of the caveats you can expect.
Lack of visibility: When you hold bitcoin in self-custody, you can audit and verify your ownership at any time. When you hold shares of an entity, you lose some visibility into the underlying assets. Corporate accounting does not provide public disclosures in real time the way the bitcoin network does.
This is not to say an investment manager will disregard their fiduciary responsibility and fail to act on your behalf. The traditional financial system simply operates at a different speed, and it is an industry beholden to quarterly filings and lots of paperwork. Assets aren’t necessarily marked to market constantly like bitcoin is on exchanges. For instance, if an investment manager decides to sell bitcoin, it may be their legal prerogative, but it could be several weeks or months before you see it in a public report.
Lack of redemptions: When you buy bitcoin on an exchange, you essentially have an IOU which you can use to claim your assets later. This is not so simple in traditional finance.
Owning shares in a proxy investment does not necessarily mean you have any right to the underlying bitcoin. In fact, securities often exist without any redemption mechanism for commodities whatsoever, a lot of which has to do with regulation.
One high-profile example of a proxy vehicle without redemptions is GBTC. The trust operated a redemption program at one time but suspended it in 2014. While Grayscale has been in litigation with the SEC over a bid to convert GBTC into an ETF, this process is likely too complex for many investors to follow without a securities lawyer on retainer. Here’s an example:
“Effective October 28, 2014, the Trust suspended its redemption program, in which Shareholders were permitted to request the redemption of their Shares through Genesis, the sole Authorized Participant at the time out of concern that the redemption program was in violation of Regulation M under the Exchange Act, resulting in a settlement reached with the Securities Exchange Commission (“SEC”). At this time, the Trust is not operating a redemption program and is not accepting redemption requests. Subject to receipt of regulatory approval and approval by the Sponsor in its sole discretion, the Trust may in the future operate a redemption program. The Trust currently has no intention of seeking regulatory approval to operate an ongoing redemption program.”
Market risk: A common trait people notice when they encounter bitcoin is the concept of 100% uptime. This is a far cry from the rest of the investing world which is bound by time constraints. For instance, the New York Stock Exchange trades from 9:30 a.m. to 4 p.m. EST and closes for several observed holidays throughout the year.
By contrast, bitcoin is the true city that never sleeps. Because bitcoin exists exclusively in cyberspace, real bitcoin markets trade 24/7, 365 days a year, without any concern for business hours. Though exchange websites often crash under the sheer customer volume at moments of high volatility, they strive to replicate bitcoin’s 100% uptime to serve an investor base that is, for lack of a better term, chronically online.
When compared to holding bitcoin in your self-custody, owning a proxy investment is a liquidity trap. If major market developments take place on the other side of the world, you can find yourself stuck while the rest of the bitcoin market carries on without you and engages in arbitrage.
Geopolitical risk: Many nations claim to operate according to the rule of law, but laws are promises written on paper and they frequently prove more malleable than one would think. Governments have a long history of seizing assets, restricting access to them, and nationalizing entire companies.
The playbook for this sort of action is a broad one. The most famous example is the U.S. government’s attempt to confiscate gold in the 1930s. Every so often, nations experiencing economic strife will freeze accounts and cut depositors off from withdrawing cash, as Lebanon has in recent years. Other countries have co-opted oil producers, manufacturers, banks, and other corporations.
Government regimes can be unpredictable depending on where you live. Because banks and financial institutions are closely linked with the public sector, it’s worth giving some thought to how circumstances could play out if a government were to take a hard line against bitcoin.
If you believe bitcoin will clash with nation-states, a regulated security is not guaranteed to protect your bitcoin from that geopolitical threat because regulations can change.
Leverage: Bitcoin is designed as an asset without a liability, much like gold. But individuals and businesses can take on debt, and they sometimes run the risk of going bankrupt. This is true for crypto companies and for any business listed on traditional regulated exchanges. If you are exploring a bitcoin-related security, be mindful of its debt burden. Shares can be worthless if the investment proves to be the next Enron.
Custodial risk: The traditional financial system is architected on trust, and trusted third parties are security holes. Whether you leave bitcoin in the care of an exchange or if you own shares in a managed bitcoin trust, you are still relying on a third party to do their job.
While it could be argued that regulated entities are less likely to commit the sort of misconduct we’ve observed in the crypto industry, it is not outside the realm of possibility. Recently, Prime Trust, a qualified custodian regulated by the State of Nevada, farmed out custody to a vendor, subsequently lost access to customer assets and was placed into receivership.
Third-party involvement is rampant in the legacy financial system where you usually own stock certificates which are housed with brokerage firms. Writ large, this can make for a confusing ride for the individual investor trying to exercise full sovereignty over their wealth.
I had known all along that my broker was a trusted third party; what I didn’t realize before this saga was how many points of failure existed in the system.
Dilution: One drawback of owning shares of a bitcoin-related security rather than bitcoin itself is the potential for your wealth to be eroded in BTC terms.
The amount of underlying bitcoin per share can fluctuate depending on management and any applicable fees. For instance, GBTC has a (relatively high) annual 2% management fee that reduces the amount of BTC held per share over time.
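To put rough numbers on that (my own illustration, not Grayscale’s figures): with a 2% annual fee paid out of the trust’s holdings, the bitcoin backing each share shrinks to about 0.98^n of its starting amount after n years. That works out to roughly 0.98^5 ≈ 0.90 after five years and 0.98^10 ≈ 0.82 after ten, an erosion of nearly 20% in BTC terms before any price movement is even considered.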
If you invest in company stock, the company can also issue more shares, which dilutes your ownership stake. This can be advantageous when the stock is overvalued, but dilution can add insult to injury during bear markets. There have been several instances where bitcoin-related companies have increased their shares outstanding or sold bitcoin to shore up their balance sheets. This happened a lot with bitcoin miners during the last bear market. Granted, miners typically sell some of the bitcoin they produce to pay their energy bills and fund their operations, but overdoing it in a bear market isn’t all that different for the shareholders from panic selling on an exchange.
These are just some of the risks to keep in mind. Be sure to consult any applicable prospectus or similar documents, and seek the guidance of a qualified professional before proceeding with an investment.
Why we believe in self-custody
Ultimately, the decision between holding bitcoin in self-custody and purchasing shares in a security involves many factors, and there are many investors who own securities in addition to holding bitcoin in self-custody. Purchasing shares in bitcoin-related ETFs, trusts, and other financial products can be a helpful way to gain exposure to bitcoin’s price action, especially in tax-advantaged accounts with few alternatives. But these instruments are not a complete substitute for self-custody and bitcoin’s promise as trustless, self-sovereign money.
The creation of bitcoin ushered in a new era of property rights, and investors no longer have to rely exclusively on third parties to protect their wealth. By holding your keys, you preserve much of the optionality associated with peer-to-peer technology and a decentralized network. Whether you see bitcoin as a trade, investment, or a way of life, understanding these tradeoffs is key to understanding the opportunity of bitcoin itself.
Secure your bitcoin for real
Casa helps bitcoin investors take self-custody of their bitcoin with multiple keys for robust protection against hacks, theft, and custodial risk. With a Casa vault, you can be sure you own your bitcoin fair and square for full peace of mind.
A question I get somewhat regularly is “how can I generate a seed with my own entropy so that I’m not trusting someone else’s hardware or software?” There are innumerable ways to do this, as you’re only really limited by your creativity when it comes to generating entropy. For example, Cloudflare used a wall of lava lamps to seed a pseudorandom number generator. But that’s probably overkill for your needs!
There are plenty of guides that have previously been published about generating a bitcoin seed phrase from your own entropy, but those guides tend to be quite technical and have a high learning curve because they require setting up an airgapped computer, which is a process I consider to be outside the average person’s comfort zone.
Thus the real question is: what is the most user-friendly way to accomplish this goal? Thanks to some functionality provided by COLDCARD, which is effectively a special-purpose airgapped computer, the process can be simplified by an order of magnitude!
The Recipe for Success
Buy the following items:
A COLDCARD
A MicroSD card and USB adapter
A way to power the COLDCARD without a computer: a power-only USB-C cable plus a power supply that plugs into a wall outlet (if you don’t already have one), or a COLDPOWER adapter
Some 6-sided casino dice. Not just regular dice, but high-precision (equally weighted) dice. You only really need one, but if you buy a pack with several it will save you a little time during the rolling process. Coinkite also sells dice, though it’s unclear if they are casino quality.
Once you’ve received the necessary hardware, power up the COLDCARD
Set your PIN
Upgrade the firmware via the MicroSD card if it’s not on the most recent version
From the main menu on the device, select New Seed Words
Select 12 Word Dice Roll. Why 12 words and not 24? With 24 words you end up having to store twice as much data but don’t gain any meaningful additional security, since 12 words already represent 128 bits of entropy, matching the roughly 128-bit security level of the keys they protect.
Roll your dice at least 50 times and input the numbers into the device. If you only have 1 die then it will require 50 throws; if you have 5 dice then only 10 throws are required, etc.
Click OK (checkmark) to finalize your entropy input and generate the seed.
You may be asking yourself “wait, am I not trusting COLDCARD?” Not quite, as their dice roll functionality is verifiable.
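The documented verification procedure boils down to hashing the concatenated roll digits with SHA-256 and, for a 12-word seed, using the first 128 bits of the digest as entropy; 50 rolls of a fair die provide about 50 × log2(6) ≈ 129 bits, which comfortably covers the 128 bits needed. Here’s a rough sketch of what you could reproduce on a separate, trusted machine to check the device’s output; treat it as an approximation of the documented math and consult COLDCARD’s docs for the authoritative steps:

#include <openssl/sha.h>

#include <cstdio>
#include <string>

// Roughly: entropy = SHA-256 of the concatenated roll digits ("1".."6").
// A 12-word BIP39 seed needs 128 bits, so the first 16 bytes of the digest
// are what matters here.
int main() {
    const std::string rolls = "3141562"; // placeholder: use your full 50+ roll sequence of digits 1-6
    unsigned char digest[SHA256_DIGEST_LENGTH];
    SHA256(reinterpret_cast<const unsigned char*>(rolls.data()), rolls.size(), digest);

    std::printf("first 128 bits of entropy: ");
    for (int i = 0; i < 16; ++i) std::printf("%02x", digest[i]);
    std::printf("\n");
    return 0;
}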
Only the Beginning
Securely generating keys is only the beginning of the full life cycle of key management. In order to maintain the integrity of your keys, you must also:
Store them securely
Access them and sign transactions with them securely
Have secure recovery / inheritance protocols in place
Over the years I’ve heard stories of a few folks burying seed phrases for extreme disaster recovery purposes, but I’ve never seen an in-depth report from anyone who has gone through the whole process. Thus, a couple years ago I buried a set of plates to see how they’d fare over a longer period of time. Mostly I wanted to see if I could construct a cheap container that protected them from the elements and just wanted to explore the process of burying and reclaiming “treasure” since I’ve never tried it before.
Constructing a Container
My goals for burial storage were pretty simple:
Large enough to fit a stack of credit card sized seed plates.
Not any larger than necessary to limit the excavation required
Durable enough that it wouldn’t become brittle or decay in the elements
Something sealable to prevent water / debris from entering
Cheap and made from easily available parts
Easy to assemble and disassemble
So I headed to my local hardware store and took a look around. I ended up settling upon using a handful of 4″ PVC sewer fittings. Here’s the recipe:
A 4″ coupling and 2 threaded adapters that fit into it (one for each end)
2 of these 4″ drain plugs – you can also use smooth end caps, but I wanted to be able to use a wrench to reopen the device if it was gummed up.
A small tube of waterproof silicone sealant. Alternatively, for a more permanent sealant you can use PVC cement, but note that this will require sawing through the pipe in order to reopen it.
I’m sure there are innumerable other ways you could construct this container, but this seemed like a straightforward solution from the 10 minutes I spent at the hardware store. PVC is an incredibly durable material that’s already rated for use in a wide variety of environments, plus the whole kit only set me back $40!
Assembling the Backup
This ain’t rocket science – you should simply slide the adapters into the coupling and then screw the plugs into the adapters.
For your metal seed phrase backup, I highly recommend sealing it inside a tamper evident bag (they are inexpensive) in order to be sure no one has disturbed your backup. Note that for this to be fully tamper evident, you must write down the serial number of the bag somewhere that you won’t lose it. Otherwise, an attacker could open the bag and re-seal it with the same brand.
Now you can insert the backup into the PVC container. Press both of the adapters tightly into the coupling and then apply a moderate to thick amount of sealant along the edge at which they meet. Next, screw both of the drain plugs in to a snug hand-tight level of torque and apply sealant along the edge of the fitting. Let the device sit for 24 hours so that the sealant is fully cured before burial.
Burying the Backup
I did not leave any physical markings when I buried my backup. Nor did I create a map to find it. This would have been a very poor inheritance plan, as no one but myself knew about this little project.
I didn’t put a ton of thought into the burial location other than to put it somewhere that I expected there to be no chance of anyone poking around in the foreseeable future. If you’re doing a serious seed burial, I’d suggest the following in order to reduce the odds of the container being disturbed, found by the wrong person, or otherwise lost:
Land that is not open to the public (no random folks doing metal detection / treasure hunting)
A spot that is not claimed by any city / state / federal government as an easement or right-of-way where they can send workers to do projects that could involve excavation.
A fairly open spot that’s not too close to large plants with extensive root systems that could envelop the container.
If you want to be super paranoid about treasure hunters, take a metal detector with you when scouting out your burial location. If you can find a spot with a ton of debris, that will provide more obfuscation. Or you can “seed” the area with your own scrap metal like bottle caps / pull tabs that would frustrate any treasure hunters.
Be aware that natural changes in the environment can significantly alter the landscape over multi-year periods. For example, trees come and go and man-made structures may be erected and demolished, so it’s important to be able to find the exact location of your dig without relying on potentially temporary features. Choose a prominent, natural landmark – a feature that won’t erode, move, or disappear – and navigate to your stash from there.
Seed Burial Results
Don’t trust, verify! You may wish to do a test run yourself before undertaking a real seed phrase burial, as your mileage (and environmental conditions) may vary.
Exhuming the Backup
After several years had passed it turns out I forgot exactly where I buried my plates, so I got a metal detector to save me the time of digging a bunch of holes in the general area. It turns out that digging a bunch of holes in the wilderness with a shovel is far from the fun of digging a hole at the beach – you’re probably going to end up hitting all kinds of obstacles like rocks and roots.
Thankfully my metal backup was only under about 6 inches of dirt, so I was able to find it with the metal detector on the second try – I did get some false positives at first, but it was also my first time ever using a metal detector and I wasn’t adept at calibrating it and interpreting the signals.
Once I finally found my backup, it was in pretty good shape, though I was a bit surprised to see how many roots had grown around it. This is certainly something to keep in mind if you’re burying something for an indeterminate period of time. If you come back in a few decades, you might have to hack through a huge mess!
After pulling it out of the ground and knocking most of the dirt off, it looks to be intact.
A quick hose down confirmed that all of the seals were intact.
Removing the silicone seal was pretty easy – just needed to slip the tip of a knife under an edge of the sealant and then I pulled the whole thing off with two fingers. After unscrewing the cap we can see that no dirt / water / debris managed to get inside.
Sensitive data redacted for obvious reasons!
As we can see here, the outer plates of this backup are in pristine condition. The holographic seal on the left is damaged because I opened it to check the seed phrase. You’ll have to take my word for it that the stamped seed phrase on the inner plate is intact as well!
Conclusion
I think it’s clear that the construction of a device suitable for acting as an underground storage vault is pretty straightforward. The hard parts are site selection, seed phrase distribution, and recovery plans / instructions.
If you’re going to undertake seed phrase burial, make sure that you read the three links in the opening paragraph so that you understand all of the considerations that should be made regarding the seed phrase itself.
Last week I published a critique of several high level claims and concepts that have been made by Major Jason Lowery on a variety of venues such as conferences and podcasts. Predictably, this ruffled some feathers.
Also, predictably (as I had already countered in the above essay) nearly all of the rebuttals I received to my questions were that I hadn’t read the full thesis. I don’t find such arguments to be made in good faith, as they are a verbosity fallacy that forces the critic to prove a negative by committing a great deal of resources.
Exclaiming “read my 400 page thesis” is not a fair rebuttal to specific criticism. If anything, it’s an asymmetric denial of service attack designed to shut down the critic. A thesis author ought to be able to point to specific pages in the thesis that contain counterpoints. 🤓
Nonetheless, Lowery finally posted a URL to his thesis (for the first time, after ignoring many such requests from others) as a response to my essay; you can find it here. Despite plenty of backlash that I was refusing to “put in the work” to read the source material, those who read my first critique would have noted that it was not about the time or money. Rather, I found it ideologically objectionable that a taxpayer funded academic thesis about an open system should be hidden behind a paywall.
Now that the material is publicly accessible, I sat down and read the entire thing.
Softwar
This is a beast of a thesis, clocking in at over 200,000 words across 385 pages with 222 cited references. Here’s a word cloud I generated from the text:
Chapter 1: Introduction
Based on a theoretical framework developed and presented in this thesis called “Power Projection Theory,” the author hypothesizes that Bitcoin is not strictly a monetary technology, but the world’s first globally-adopted “softwar” protocol that could transform the nature of power projection in the digital age and possibly even represent a vital national strategic priority for US citizens to adopt as quickly as possible.
Lowery is correct that Bitcoin is not strictly a monetary technology. Bitcoin creates new game theory. Lowery correctly analyzes (some of) that game theory, but as we’ll see, he falls short in explaining how Bitcoin’s game theory can be applied practically to non-Bitcoin data.
In some places it seems Lowery suggests that we could create other Proof of Work systems. But, as Lowery is fond of quoting Michael Saylor, “there is no second best.” Time and time again we have seen that with any given Proof of Work algorithm, there can only be one top dog. It’s rather unclear what Lowery is advocating for specifically.
Lowery suggests many times that nations may prefer to engage in cyber power based war rather than kinetic war.
A soft form of warfighting would give people access to the supreme court of physical power, and that court would likely be just as indiscriminate and impartial in electronic form as it already is in kinetic form. For this reason, soft warfighting protocols could be ideal for small countries wielding small amounts of kinetic power (i.e. small militaries) seeking to settle policy disputes with larger countries wielding large amounts of kinetic power (i.e. big militaries).
This is an odd claim that I’d like to see further analysis upon; my intuition tells me that nations with small militaries likely have far less energy (cyber power) than nations with large militaries, so they’ll likely lose either conflict.
a soft form of warfighting would represent a non-lethal form of warfighting – making it a potentially game-changing and revolutionary way for nations to establish, enforce, and secure international policy.
This is a massive claim; it seems Lowery is basically proposing that a Proof of Work system could serve as a superior replacement to international bodies like the United Nations?
Things like constitutions and rules of law have always been disembodied and immaterial, so who says they can’t be physically secured against systemic exploitation and abuse in a disembodied and immaterial way?
I wonder if we’ll ever learn how to accomplish this…
Imagine if society were to discover a way to write down policies using C++ instead of parchment, then enforce and secure those policies using physically harmless electric power. A discovery like that could change society’s perception about the moral value of traditional laws and warfare simultaneously.
OK but that involves writing completely different non-Bitcoin protocols, assuming that the protocol is the policy. And it still relies more on nodes to enforce rules than proof of work. I thought this thesis was about how Bitcoin will enable the settling of such policy disputes?
It’s interesting to see this omission because then Lowery would have to admit that much of Bitcoin’s security model is based upon “logical security.”
By converting kinetic warfighting or physical security operations into digital-electric form, written rulesets (e.g. laws) that are inherently vulnerable to systemic exploitation can be secured using (non-lethal) electric power rather than (lethal) kinetic power. International policies (e.g. monetary policy) could be written in C++ and secured using (non-lethal) electronic power rather than being written on parchment and secured using (lethal) kinetic power.
Here we start to bump into one of my major problems with Lowery’s thesis. The encoding of rules / policies via programming languages such as C++ is “logical security” by Lowery’s own definition. Even proof of work checks are encoded in the same fashion. And proof of work validation is tangential to the validation of all the other rules in the system.
Bitcoin could therefore represent something far more than just a new financial system architecture. Once we have figured out how to keep financial bits of information physically secure against attack, that means we have figured out how to keep all bits of information physically secure against attack.
… how? The method used by Bitcoin does NOT scale for securing large volumes of data! Herein lies another massive leap of faith. Will Lowery connect the dots?
Chapter 2: Methodology
It’s fascinating that Lowery expends an entire chapter (16 pages) of the thesis expounding upon the virtues of Grounded Theory as a great way to approach novel, multi-disciplinary concepts.
One of the most commonly-cited mistakes of researchers using the grounded theory methodology is that they become too self-restrictive.
And yet this entire thesis is narrowly focused upon the Proof of Work aspect of Bitcoin’s security model…
Chapter 3: Power Projection Tactics in Nature
As noted in my previous essay, this chapter is all quality material. The point is that nature exists in a state of anarchy and the only “rules” are those of physics. Thus “ownership” of “property” is ultimately a game of “might makes right” and one could consider all forms of life to be engaging in perpetual warfare over scarce resources.
That is to say: you can’t truly own something unless you can defend it from threats.
Lowery manages to make this chapter more compelling by speaking of resources, and the cost and benefit of attacking to gain more resources (and defending your current resources,) in terms of watts. From a physics standpoint, this is brilliant framing because all living organisms are fundamentally fighting against entropy. Of course, just because the framing is smart doesn’t mean it’s a perfect description.
I grew up camping in places with black widows. Perfect example: one of those little fuckers could impose a severe physical cost on me, but did not need to expend a lot of watts to do so. A rabbit can produce several orders of magnitude more energy, but I didn’t avoid the rabbit.
Lowery goes on discussing evolutionary biology to note that cooperation is a key survival strategy, and organizations employ the same type of strategy as single celled organisms and pack animals. Presumably the tie-in is that Bitcoin is a protocol that enables cooperation.
I did find it amusing that Lowery spends a while discussing the history of humans domesticating animals and then using that to warn that we should use it as a lesson to reject the domestication of humans (by stripping their power projection.) It’s amusing because he’s describing exactly what governments (his employer) do to their citizens.
Lowery then goes on to note that characteristics such as antlers enable members of a species to project power against external threats while still being able to settle their internal disputes via projection of power in a way that is less likely to have lethal consequences. This is good for pack animals that are disinclined to weaken the pack. It’s clear that this is the lead-in to his framing of proof of work as “non-lethal warfare.”
Chapter 4: Power Projection Tactics in Human Society
Lowery accurately notes that humans are disinclined to use physical force against each other to settle disputes; we prefer to use our communication skills to use abstract power such as courts to find non-violent solutions. Of course, all of these abstract power sources are ultimately backed by a source of physical power. And, sometimes, the abstract power fails to resolve a dispute and we fall back to kinetic warfare.
This chapter focuses on the root causes of warfare and explains why it’s desirable for humanity to have non-lethal options. It’s not a controversial statement to say that human infighting is the most destructive intraspecies competition on our planet.
Lowery then examines the human brain and its capacity for imagination and abstract thinking, citing this ability as an enabler for humans to create stories, narratives, and belief systems that allow us to bypass Dunbar’s Number. That is, belief systems create scalable trust and coordination that wouldn’t otherwise be possible, because we’re only physically capable of maintaining close relationships with about 150 other humans.
In other words, storytelling is an abstract superpower.
This line really stuck out at me given my characterization of Lowery’s efforts in my previous critique:
Lowery is a gifted storyteller.
I dare say that Lowery is employing his own thesis against his audience, in multiple ways!
Lowery has deployed a compelling story that causes many to overlook its flaws.
He sought (until last week) to impose costs upon his critics by requiring them to buy his book on Amazon.
He seeks to impose costs upon critics by requiring them to read a 400 page thesis before questioning any of his claims.
“With the right stories, people will forfeit their physical power or lay down their arms. Sapiens can be domesticated by the stories they believe, and like lambs, they will walk straight into slaughter.”
Indeed. Beware which stories you believe…
Lowery goes on to explain that abstract power systems are exploitable through dogma and politics. Basically, the best storytellers win, attaining the position of “god-king.”
I have very few objections to the content in this chapter; here’s a particularly powerful point with which I fully agree:
If our Upper Paleolithic ancestors could see how modern agrarian domesticated sapiens live today, they would probably not envy our lives. Humans replaced the emotionally fulfilling challenge of hunting and gathering with unnaturally sedentary and laborious lives filled with social isolation, infectious diseases, health deficiencies, warfighting, and probably most devastating of all, high-ranking sociopaths who psychologically abuse and systemically exploit their populations through their belief systems at extraordinary scale.
Here’s a truism I’ve found is applicable to many aspects of civilization.
When sapiens trade physical power for abstract power, they make a tradeoff in complex emergent behavior. What they sacrifice in the trade is systemic security.
I believe this statement can also be modified to replace “physical power” with “responsibility” and “abstract power” with “convenience.” That is to say: civilization advances via specialization of work; specialization enables greater efficiency. But over many generations, humans outsource more and more aspects of their lives to third party specialists… and today we live in a society where very few humans are actually capable of surviving without a massive network of trusted third parties. This creates a huge systemic risk.
It’s therefore entirely possible that a direct contributing factor of warfare is, counterintuitively, the abstract power hierarchies we ostensibly use to avoid warfare.
Indeed, conflict and warfare seem unavoidable. Perhaps our structure of modern civilization means that wars are less common, but of much greater severity when they do occur.
Consider a Roman-style senate of 100 high-ranking people wielding control authority over a population in the form of voting power. All it takes to achieve unimpeachable control over the entire population is for 51 of those high-ranking people to collaborate as a centralized entity.
I do find it interesting that Lowery cites this example as being a point of weakness with “abstract power” (and later makes a similar point about Congress) and yet he never addresses the issue of 51% attacks in Proof of Work…
Warfare is the reason why control over our valuable resources remains decentralized.
I suppose this could be an implicit counter to objections about 51% attacks. But, if so, I’d point out that Bitcoin’s hashpower is already guaranteed to be highly decentralized. It doesn’t need nation states to ensure its decentralization – the very nature of energy itself ensures that. That, and the game theory that incentivizes miners. As mentioned in my previous essay, nationalization of mining could disrupt that game theory…
I do like that Lowery brings up a counterpoint to the folks who dislike discussions about “violence” and “warfare.”
The solution? Simply don’t call it warfare. Call it something else like primordial economics.
At last we do get to a scenario that makes sense for why some nations would prefer to engage in cyberwarfare rather than kinetic warfare: nuclear power states are unable to directly engage in physical combat because they know it will ultimately end in a stalemate due to mutually assured destruction. But of course, this does not apply to non-nuclear nations.
Chapter 5: Power Projection Tactics in Cyberspace
This chapter links together key concepts in computer theory and cyber security that are needed to understand why software is fundamentally a belief system which gives a select few people abstract power.
Oh boy. Lowery makes the point that software companies have attained the position of “god-kings” who project power through cyberspace by building their own belief systems. Not entirely wrong, though I’d counter that it doesn’t really apply to volunteer-driven open source projects. The real problem is the centralization of much of the world’s information and communication into the hands of a few organizations.
Here comes the big claim…
… bits of information secured on the Bitcoin network could denote any type of information, not exclusively financial information. Instead, Bitcoin could represent a completely new system for securing any information in cyberspace – a way to keep bits of information secure against belligerent actors by physically constraining them, not logically constraining them.
This is where the thesis starts to get weird. Lowery spends the next 8 pages describing how computers (finite state machines) operate, with his goal being to convince the reader that software and the computations performed by hardware running software “are not real.” Thus he casts software developers as “storytellers” who wield god-king power similar to that of politicians.
Everything printed on the screen of a general-purpose computer is a computer-generated illusion. Whether it be a line of text, or a detailed image, or an imaginary object, or a three-dimensional interactive environment that looks and behaves just like environments experienced in shared objective reality, what a machine shows on a screen is virtual reality. Virtual reality is, by definition, not physically real. The only knowledge a person can gain from looking at a computer screen is symbolic knowledge, not experiential knowledge. This is true even if what’s shown on screen is an image of something real or an event which did physically happen.
His point being that:
Software is nothing but a belief system, and belief systems are vulnerable to exploitation and abuse, particularly by those who pull the strings of our computers.
Software is a system of encoded rules. Much like any system of rules, what matters is the “governance” – how those rules get changed. Lowery goes on to discuss various cybersecurity principles, noting that software always operates as it is instructed. According to Lowery, his most important point other than proof of work cost imposition protocols is that:
Because software doesn’t physically exist, it’s not possible to secure software using physical constraints unless the underlying state mechanism is physically constrained.
As noted in my previous essay, this claim is nonsensical to me. There are a variety of best practices available to secure software against tampering; many of these mechanisms use cryptography. By relying upon cryptography we can pull issues of “encoded logic” into the physical realm by turning the digital security problem into a physical (private key management) security problem.
Lowery continues for the next dozen pages expounding upon why it’s difficult to engineer secure software, and lamenting the fact that most software developers are not security experts and probably not even computer science majors. OK, sure, there’s basically no such thing as perfectly secure software. Not even highly scrutinized Bitcoin software…
Now it’s going a bit off the rails…
Herein lies one of the most significant but unspoken security flaws of modern software: it creates a new type of oppressive empire. A technocratic ruling class of computer programmers can gain control authority over billions of people’s computers, giving them the capacity to exploit populations at unprecedented scale. These digital-age “god-kings” are exploiting people’s belief systems through software, data mining people and running constant experiments on entire populations to learn how to network target them to influence their decisions and steer their behavior.
I can’t wait to hear how Proof of Work solves this! Lowery continues to explain that software engineers are building entirely new realities in cyberspace, analogous to early versions of The Matrix. He suggests that the only way to save users from malicious software and untrustworthy system administrators is to rearchitect the internet itself.
In 5.7.3 Lowery notes that US military personnel physically secure their encryption keys by carrying them on specially-designed common access cards, so it turns out he does understand the concept of pulling digital security into the physical realm.
it is theoretically possible to create computer programs that are inherently secure because it’s either physically impossible or too physically difficult to put them into a hazardous state, simply by intentionally applying physical constraints to the underlying state mechanisms running the software. It is also theoretically possible to design computer protocols which can apply real-world physical constraints to other people’s computer programs in, from, and through cyberspace. The protocol is called proof-of-work.
This one’s a head scratcher. I suspect nearly all security experts would scoff at the claim that a computer program can be “inherently secure.” One fundamental issue immediately leaps to mind: Lowery claims that software itself can be secured through proof of work. But what is going to be checking the proof of work? If you guessed “software” then you’re correct! My perspective is that if Lowery can hand-wave away other security mechanisms based upon cryptography as “encoded logic” then how is checking a proof of work not also “encoded logic” that can be similarly manipulated? Also, any such software will by definition need to be written and maintained by the very developers Lowery has spent dozens of pages warning us about. It’s turtles all the way down, folks.
Lowery then describes a “proof of power” wall concept that’s basically hashcash. I implemented this exact concept on my web site:
Email spam, comment spam, sybil attacks, bots, troll farms, and weaponized misinformation stem from the exact same types of core design flaws which proof-of-power wall APIs could theoretically help to alleviate.
I think “alleviate” is a key word here. Proof of work imposes a cost, yes, but it does not stop all unwanted control signals. I still receive unwanted messages through my web site, just far fewer than if I didn’t require any proof of work. On the flip side, it can be a pretty crappy user experience if you have to wait tens of seconds or even minutes between being able to perform actions like leaving a comment. That’s because the clock time it takes a computer to solve any given proof of work request is exponentially distributed (solving is a Poisson process), so any individual request can take far longer than the average.
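For the curious, a hashcash-style proof-of-work wall boils down to something like the following minimal Python sketch (not the actual code running on my site). Solving requires grinding through an unpredictable number of hashes, while verifying costs a single hash – that asymmetry is the entire cost imposition.

import hashlib
import os
import time

def solve(challenge: bytes, difficulty_bits: int):
    # Brute force a nonce until sha256(challenge || nonce) falls below the target.
    # How long this takes is pure luck; only the average is predictable.
    target = 1 << (256 - difficulty_bits)
    start = time.time()
    nonce = 0
    while True:
        digest = hashlib.sha256(challenge + nonce.to_bytes(8, "big")).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce.to_bytes(8, "big"), time.time() - start
        nonce += 1

def verify(challenge: bytes, nonce: bytes, difficulty_bits: int) -> bool:
    # Verification is a single cheap hash.
    digest = hashlib.sha256(challenge + nonce).digest()
    return int.from_bytes(digest, "big") < (1 << (256 - difficulty_bits))

challenge = os.urandom(16)              # the server issues a random challenge
nonce, seconds = solve(challenge, 20)   # ~1 million hashes on average
print(f"solved in {seconds:.2f}s, valid: {verify(challenge, nonce, 20)}")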
To further understand why a “proof of power wall” is not a security panacea, consider that it would only help alleviate a small fraction of attack vectors:
By using its own proof-of-power electro-cyber dome design concept to secure itself, Bitcoin has managed to remain operational for thirteen years without being systemically exploited (i.e. hacked). It is not that people don’t know how to exploit/hack Bitcoin’s ledger, it’s that it’s impossible for attackers to either justify or overcome the immense physical cost (a.k.a. watts) of exploiting Bitcoin’s ledger because it’s parked behind an electro-cyber dome.
This is simply incorrect. Bitcoin has been exploited at the protocol level before (in 2010) and has had similarly bad vulnerabilities patched (thankfully) before they were exploited. Bitcoin’s proof of work rules didn’t protect it from any of those vulnerabilities. Proof of work is not a security panacea.
It’s theoretically possible that Bitcoin could be emerging as the base-layer operating system for a planetary-scale computer.
Ahuh; so perhaps we’re getting closer to an explanation of how Bitcoin fixes the entire field of cybersecurity?
Not only would it be possible to utilize the planet itself as a large, heavy, slow, and energy-intensive computer, the infrastructure and circuitry required to accomplish this has already been built – we call it the global electronic power grid. Herein lies a simple, but profound idea: to create the world’s largest, heaviest, most energy-intensive, and most physically-difficult-to-operate computer ever built, we could simply utilize our planet’s energy resources as the controllable physical-state-changing mechanism of a planetary-scale computer, where the globally-distributed electronic power grid serves as its circuit board.
This sounds like the “macrochip” concept Lowery has tried to explain on a few podcasts. The metaphor makes no sense to me, as the electrical grid does not act like the circuit board of a computer with regard to Bitcoin mining. All of the electricity consumed for Bitcoin mining is consumed independently at each mining facility; it doesn’t flow toward a specific area to aid in computing anything. If anything, the power grid just acts as the power supply for Bitcoin’s state change mechanism.
A planetary-scale computer like this could theoretically create a portion of the internet where no single person or organization has full control over it, thus they have no ability to fully control the bits of information transferred, received, and stored on it. This decentralized portion of the internet could be reverse-optimized to be more expensive and energy-intensive to send, receive, and store every bit, giving it complex emergent behavior that no other computer connected to the internet would be physically capable of replicating.
So what is Lowery actually proposing? It sounds like he wants Ethereum (or some other Turing-complete system), but powered by proof of work? Bitcoin is a poor substitute for a general purpose computer because its programmability is extremely limited (by design) and its throughput is extremely limited (by design.) Lowery has done a good job explaining WHY such a machine is desirable. But the HOW is a gaping hole of questions.
It would theoretically be possible to utilize an open-source, internet-accessible, planetary-scale computer to perform these functions. The key to doing this effectively would be finding a way to “chain down” or couple ordinary state mechanisms to this new planetary-scale state mechanism.
Right. All we need to do is figure out how to do it…
By applying Boolean logic to the power grid, people convert large and expensive quantities of physical power into bits of information and feed that information back into our regular computers via the internet. People then use that information to affect state changes inside their ordinary computers. The result of this activity is a capability which does not appear to have existed before: the ability to impose severe physical costs and thermodynamic constraints on people, programs, and computer programmers operating in, from, and through cyberspace in a zero-trust, egalitarian, and permissionless way that no person or organization can fully control.
Aside from the overly repetitive and verbose descriptions of how Bitcoin operates, there’s something else bugging me here. Bitcoin does not actually impose a thermodynamic cost upon all users to activate state changes to the ledger. Anyone can do that by paying a relatively small transaction fee. Also, as mentioned in my previous essay, there are many mechanisms that prevent invalid state change requests from ever even reaching miners – rules enforced by fully validating nodes. Once again we see that Lowery’s focus on Proof of Work seems to have put blinders on him with regard to how all the other pieces of the system work.
Proof-of-power signals produced by physical cost function protocols like Bitcoin could theoretically double as proof-of-real signals, to serve the same function as kinetic power (i.e. poking/pinching). At the same time, proof-of-power could also be used to legitimize or delegitimize computer programmers who otherwise have unrestricted abstract power and control over the illusions we see in cyberspace.
This is a huge stretch. Even after reading 300 pages and generally agreeing with the premises presented, I don’t see how imposing non-trivial costs on state changes makes them more real. All it does is ensure that the state changes are by definition more economically valuable to the entity that is making them.
As of now, these ideas are nothing more than theories grounded by the author’s first principles approach to exploring the benefits of proof-of-power (a.k.a. proof-of-work) protocols.
And there it is. We should not expect to see the dots connected between the “why” and the “how” of this thesis.
Chapter 6: Recommendations
The author challenges computer scientists and software engineers to take inventory of their assumptions and ask themselves, “what could be the value of having increasingly more physically restricted command actions and bits of information in the global cyber domain?”
Fair. This is what blew my mind when I first read the Bitcoin whitepaper – it solved the Byzantine Generals Problem in the exact opposite way of what I would have expected based upon my computer science education.
With the global adoption of cyberspace combined with the global adoption of an electro-cyber form of physical power competition enabled by proof-of-work technologies like Bitcoin, humanity could be at the dawn of creating a completely new type of polity that it has never seen before – a new or adjusted type of governance system which enables the formation of an organized society that resembles something on par with (or perhaps even superior to) a traditional government.
Perhaps, though I’d say this is far from conclusive. I do agree that Bitcoin has inverted the traditional structure of governance; I gave a keynote about it in 2018. It’s clear that we can create novel cyber governance structures with this technology, but I’m unconvinced that meatspace governance can be performed in the same manner.
Stop Relying Exclusively on Financial, Monetary, and Economic Theorists to Influence Bitcoin Policy
Sure; everyone has their own (narrow) take on Bitcoin which leads to blind spots. Somewhat amusingly, Softwar is no exception given its focus on Proof of Work while ignoring many other dynamics of Bitcoin’s governance and security model.
Think of Bitcoin as an Electro-Cyber Security System rather than a Monetary System
Sure; I’ve always said that Bitcoin is far more than just money. It’s an authoritative historical record that is programmable (to an extent.)
Consider the Idea of Protecting Bitcoin under the Second Amendment of the US Constitution
As mentioned in my previous essay, I find this to be a weak idea, but I’ll leave that to the lawyers to muse upon.
Recognize that Proof-of-Stake is not a Viable Replacement for Proof-of-Work
Absolutely; stake is not a novel system, it’s just a digital version of traditional abstract power.
If we Make the Mistake of Expecting the Next World War to Look Like the Last World War, we could Lose it before we Realize that it has Already Started
Sure. The only constant is change.
Conclusion
The first four chapters of Softwar are an informative and entertaining perspective on military history and evolutionary biology. This paper works well as an anthropological thesis about human governance.
Warfare creates an existential imperative for people to adopt increasingly larger (and thus more dysfunctional and vulnerable to systemic exploitation) abstract power hierarchies which create increasingly larger security hazards capable of leading to increasingly larger losses. Dysfunctional abstract power hierarchies motivate people to wage wars, which are won by adopting larger-scale abstract power hierarchies (e.g. national power alliances) to scale cooperation and sum enough physical power together to win the war. This creates a cyclical, self-perpetuating process where civilization learns to cooperate at higher scales, but also learns to fight at increasingly larger and more destructive scales, driving them to adopt increasingly more systemically insecure and hazardous belief systems.
However, Softwar falls short on acting as a blueprint for how we should build the future. In some cases Lowery broadly refers to “bitpower protocols” as being the solution to these problems, in other cases he says Bitcoin is the solution.
Additionally, he never addresses the “garbage in, garbage out” problem inherent to all databases. He likes to frame bitpower as creating “objectivity / truth / realness” with regard to control actions, but I find that to be a mischaracterization of what imposing economic cost actually does.
I think Lowery made a huge blunder by failing to ever mention the dynamics of how Bitcoin’s own “civil war” played out. By expending all of his resources on many millennia of animal and human evolution, he failed to learn from a directly applicable piece of recent history.
I’ll also note that NONE of the issues I pointed out in my first essay were addressed in the full thesis. Clearly the “you haven’t read the thesis” argument was a deflection. I will once again invite anyone who cares to make specific counterpoints to my critiques rather than attempting to dismiss me outright.
Finally, I think the narrow focus on a single facet of Bitcoin’s security model is what leads to so many errors of omission in this thesis. Just to put it in perspective, here are some counts of how many times these words appear:
watt: 149
severe physical costs: 114
hash*: 61
game theory: 5
governance: 2
node: 2
open source: 1
I can sum up my disappointment in Lowery’s narrow focus on hashpower with a fairly old retort I gave to big blockers who were spewing “might makes right” arguments about miners determining the protocol rules back during the scaling debates.
It takes ~40,000 kilowatt-hours to mint a block, yet this power can’t overcome a few lines of code being run on a 4 watt Raspberry Pi.
Major Jason Lowery caused quite a stir in August of 2021 when he published a post on LinkedIn claiming that Bitcoin was violence. It certainly rubbed a lot of folks the wrong way, and it earned him a large following shortly thereafter when he joined Twitter.
I hadn’t paid much attention to the drama or his claims until recently when more people started asking for my opinion, likely because he has honed the thesis for the past 2 years, has hit the podcast circuit, and recently published his thesis as a book.
To be clear, I have not read the 350 page book that is for sale on Amazon. I find it odd that, given this is an academic thesis, I’m unable to find a PDF of it available anywhere. According to MIT’s Thesis Library, theses are received one month after degrees are granted – so it may not be available until July or August. To my knowledge, there’s nothing preventing Lowery from self-publishing his thesis right now; apparently he wants the downloads to be tracked through MIT and he wants to have a more impressive metric of being a best selling book on Amazon. Paying $40 for the privilege of receiving a physical book does not appeal to me, especially given that I’m spending a nontrivial amount of my time to consume the content and critique it. As such, the following essay is a result of the notes I took from listening to 10+ hours of Lowery explaining his thesis in a variety of venues. The links to those videos are provided near the end of this essay.
Hopefully Lowery and his adherents do not resort to retorting that my points are invalid because I haven’t read the book. That’s not how one engages in rational discourse – saying “you can’t criticize me without buying my book” is just a disingenuous marketing tactic. Anyone who employs that deflection on me will be written off as a bad-faith actor. I invite those who disagree with this essay to raise specific counterpoints rather than attempting to hand-wave away my criticisms.
TL;DR: Lowery is an intelligent fellow who has crafted a compelling story. I agree with the vast majority of his premises, and I suspect that this is why so many people have bought into the narrative. But the devil is in the details, and in this critique I’ll point out several flaws and unanswered questions that leave me unconvinced of his conclusions.
A Critique of Criticism
I have surveyed the landscape of those who have criticized Lowery and found it lacking.
Many folks out there are slinging ad hominem attacks at him due to his military background. Lowery surely loves to see these fallacies coming his way, as it makes his critics look like fools. Everyone is free to contribute their ideas to Bitcoin; it is, in fact, a free market battleground of ideas. Rejecting someone’s ideas because of their background is ridiculous.
Similarly, many folks are making emotional arguments about Lowery’s characterization of terms like “violence / weapon / force.” Once again, this is not the appropriate target to attack with regard to his arguments.
One notable exception is Limpwar, though it’s more of a standalone thesis than what I’ll attempt in this essay, which will be a more concise focus on pointing out flaws and logical errors.
The Good
Lowery does a good job setting the historical stage. Nature is not friendly to life; all life competes over the scarce resources necessary to survive and thrive. Evolutionary arms races occur organically between predators and prey.
The US military has expanded its domains over time via the Army, Navy, Air Force, and Space Force. Lowery makes a good point that every form of territory becomes a battleground and he calls cyberspace the “fifth domain” of warfare.
A lot of folks seem to be upset because Lowery describes the natural state of competition over scarce resources as warfare. My perspective is that his main point is that all of nature exists in a state of anarchy. Nation states also exist in a state of anarchy with each other, and military thinking takes this into account.
Bitcoin is crypto anarchy – each node operator chooses the rules to which they agree. Consensus emerges out of chaos; no one has the power to force you into consensus against your will. Beware of those who attempt to convince you that a formalized governance structure exists.
This tweet is important because it also hints at what Lowery fails to address.
Thus, I can see an overlap in terms, in the sense that anarchic systems (systems without rulers) are automatically in a state of perpetual conflict that you could characterize as “warfare.” He’s just using terms differently from what many of us are used to, but from a military perspective they are accurate. This shouldn’t be surprising, because as the old saying goes:
“When all you have is a military, everything looks like a war.”
The Bad
Lowery says that currently people attempt to secure data via “some magical combination of logic” and “if-then-else statements” that are coded into software. Well, kind of, but there are a variety of authentication mechanisms that amount to more than just “encoded logic.” There’s far more to security than logic alone. This point strikes me as cherry picking and a mischaracterization.
“No amount of logic can protect you from the systemic exploitation of logic.”
Technically correct, but this sounds like a strawman argument because cybersecurity is more than just pure logic. I get the general feeling that his depth of knowledge when it comes to cybersecurity is rather shallow. For example, his mispronunciation of Sybil attack as “psybill.” This suggests to me that he has never spoken with anyone about Sybil attacks, nor seen any presentations about them – he has only read about them. Of course, this tidbit is not an argument in and of itself – it’s merely an observation that may help explain why Lowery has made several omissions.
He keeps talking about securing data, but I’ve only actually heard him talk about DoS prevention – not so much securing data in terms of its integrity. In his most recent interview with Robert Breedlove, Lowery states:
Cybersecurity will have a physical component to it, we’re just waiting for that thing.
Yes, cybersecurity does have physical components. No, we’re not waiting on them. Allow me to introduce the 3 forms of human authentication:
Something you know (such as a password)
Something you have (such as a smart card)
Something you are (such as a fingerprint or other biometric method)
Note that 2 of the 3 forms of authentication are physical in nature. Though biometrics is arguably a weak single factor of authentication for several reasons I won’t get into.
Lowery also makes what I consider to be a pretty weak argument that Bitcoin is defensible via Second Amendment constitutional claims. I’m not a lawyer, but Second Amendment claims have been shot down time and time again – the Supreme Court has clearly stated that there are restrictions. For example, ownership of fully automatic firearms, explosives, artillery, laser-guided missiles, and the like is not a right under the Second Amendment. When a new innovation appears, such as the bump stock, the BATFE often moves to ban it, and all owners of such devices are required to turn them in or face imprisonment. The Second Amendment only permits Americans to own a very small subset of the weapons available on the market; its “protections” are relatively weak and fickle – it’s not something I’d want to rely upon.
First Amendment claims, on the other hand, are far more powerful.
Let us not forget the lessons of the 1990s era Crypto Wars. Strong cryptography was literally classified as a munition (weapon.) How did the Cypherpunks win their legal battles? It was NOT by making Second Amendment claims, but rather via First Amendment claims.
Code is Speech
Lowery even goes so far as to make a counterpoint that Zimmermann’s PGP export case did not set a precedent for a First Amendment defense. That claim is correct… but he’s cherry-picking again. What Lowery doesn’t mention is Bernstein v. USDOJ, in which the Ninth Circuit Court of Appeals ruled that software source code was speech protected by the First Amendment and that the government’s regulations preventing its publication were unconstitutional.
I reject Lowery’s framing that the legal arguments in the 1990s were made with the Second Amendment in mind. Rather, the claims were that weak encryption was harming safety and limiting the sales and growth of e-commerce. This led to a series of relaxations in US export controls, culminating in 1996 with President Bill Clinton signing Executive Order 13026, which transferred commercial encryption from the Munitions List to the Commerce Control List.
The Unsaid
I’ve yet to hear a proposal of how Lowery’s perspective will lead to new solutions. In general the theory feels quite lacking in terms of practical applications. Lowery speaks very broadly as if this will secure all data in the world, but I don’t see the connection. Here’s a good summary of Lowery’s claim:
For those certain control actions that you don’t want to be exploited, like the ability to write the ledger, for example, or the ability to spam you, you want that to be physically constrained.
This claim is compelling because it does describe one facet of how Bitcoin works. But the point of honing and publishing a thesis is to advance an original point of view as a result of research. I have yet to see any examples of Lowery showing how his thesis can be applied to secure non-Bitcoin data.
It’s hard for me to imagine a government mining operation operating anywhere near the efficiency of privately funded operations; the profit motivations simply aren’t there. Lowery insists that “data defense” motivations exist, but as mentioned, that’s a completely missing link in his arguments. He also claims that since the military has a mission to protect the ability of citizens to access economic thoroughfares, this will naturally extend to Bitcoin.
I pose to you that, should Lowery’s claims be accepted and adopted by the American military, the “cure” could be worse than the disease. Let’s approach Lowery’s proposed future adversarially. In a worst case scenario, government miners could operate at a huge (taxpayer-funded) loss and effectively bankrupt privately funded miners all around the world. The only way a nation state can ensure that blockchain data is not censored or overwritten is to control a majority of the network hashrate. But, by doing so, they would be destroying one of Bitcoin’s major strengths of its game theory – that hashpower is distributed sufficiently and incentives are aligned such that no entity has a majority of the network hashrate.
One major missing link is that hashing power is not the only mechanism that secures the bitcoin blockchain. There are already many rules and non-mining entities that vastly restrict the data in the blockchain. The hashpower only defends the integrity of said data from a historical perspective.
By focusing on hashpower, he implies that thermodynamic security is the fundamental aspect of the Bitcoin network’s security model. This is incorrect and lacks nuance. There are lower levels of Bitcoin’s security: the nodes that form the peer-to-peer network and the humans who collectively organize to agree upon what code they will run to secure the network.
This is why Bitcoin is fundamentally not protected by electricity, hashing machines, or even the “logical security” of nodes. Bitcoin is backed by a volunteer militia. The antifragile nature of Bitcoin as an ecosystem stems from the fact that it’s an open source project without a central coordinator.
Nakamoto consensus is a proxy for meatspace consensus. If something’s wrong, we fall back to meatspace consensus, patch code, and carry on.
USCYBERCOM plans, coordinates, integrates, synchronizes and conducts activities to: direct the operations and defense of specified Department of Defense information networks and; prepare to, and when directed, conduct full spectrum military cyberspace operations in order to enable actions in all domains, ensure US/Allied freedom of action in cyberspace and deny the same to our adversaries.
This claim is particularly absurd; cyberwarfare is already a thing. Nation states (and independent black hats) probe the defenses of critical infrastructure on a daily basis. The fact that warfare can be conducted via internet protocols does not magically remove the physical attributes and consequences of cyberattacks. For example:
In 2021 the Darkside hacking group managed to shut down the Colonial Pipeline – 45% of America’s east coast fuel supply – for several days.
In 2021 a hacker managed to increase the amount of sodium hydroxide, a corrosive chemical, by 100X in a small Florida town’s water supply.
Russian-backed hackers remotely disabled electricity to a wide swath of Ukraine in December 2015. Then they uploaded faulty firmware to make fixing the breach even more difficult.
In 2014, hackers caused massive damage to a German steel mill by causing them to lose control of a blast furnace.
From 2007 to 2010 the US physically destroyed centrifuges that Iran used to enrich uranium for its nuclear program, via a computer virus called Stuxnet.
An Australian man was convicted of hacking into his small town’s computerized waste management system in 2001 and deliberately spilling 265,000 gallons of raw sewage into parks and rivers in the area.
If nation states started running large mining operations, they wouldn’t be able to harden that infrastructure in the same way that military / government / utility infrastructure is hardened by partitioning it from the Internet. This infrastructure will have a physical presence and it will necessarily have both physical and digital weaknesses due to the nature of its position in both meatspace and cyberspace. It requires some magical thinking to believe that we can invent a new form of cyberwar devoid of physical consequences.
Cyberspace is an extension (or layer on top) of meatspace, not a completely parallel universe. Just as how Neo could act like a god with little consequences while in the Matrix, if the Sentinels found him in meatspace, it was game over.
KISS (Keep it Simple, Stupid!)
If you can’t explain it simply, you don’t understand it well enough. – Variant of a quote by Lord Rutherford of Nelson
His book is 350 pages; one Amazon reviewer notes that it doesn’t start talking about Bitcoin until page 230.
What if Lowery’s thesis itself is a denial of service attack? It was incredibly time consuming for me to find the flaws in his narrative because they are the proverbial needles in the haystack, and the biggest flaws are not in what he says but in what he doesn’t say.
So What?
Lowery’s presentations are compelling because he accurately portrays (one mechanism of) how Bitcoin defends itself without a central controller. But it falls short on how this applies more broadly to other systems.
Lowery keeps talking about us needing a way to protect our data in cyberspace. After watching 10 hours of him talking, I have yet to see him propose a single practical example of how he envisions accomplishing that. McCormack pushed him for examples but the only one he could come up with was the data in the Bitcoin blockchain itself.
Before strong encryption, users had to rely on password protection to secure their files, placing trust in the system administrator to keep their information private. Privacy could always be overridden by the admin based on his judgment call weighing the principle of privacy against other concerns, or at the behest of his superiors. Then strong encryption became available to the masses, and trust was no longer required. Data could be secured in a way that was physically impossible for others to access, no matter for what reason, no matter how good the excuse, no matter what.
If you create a new domain that has no mass, which is what cyberspace is, then how do you project power through that domain? If it’s got no mass then you can’t project power or impose physically prohibitive costs using mass-based power projection technology, so kinetic power projection is out the window. You’re not using force to displace mass… it has to be, probably, some type of electromagnetic thing like electricity.
Encryption fundamentally imposes physically prohibitive costs upon attackers. By design, it creates an amazing asymmetric defense capability – that is, it costs practically nothing to encrypt data, while decrypting it without the key costs many orders of magnitude more computational resources – in many cases, more resources than can even be harnessed by our current level of civilization. Lowery may dismiss this point by claiming that encryption is only “using logic,” but this is just an issue of private key management. It is quite practical to pull key management security out of the digital realm and into the physical realm via dedicated airgapped hardware.
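To put some rough numbers on that asymmetry, here’s a back-of-the-envelope calculation in Python. The throughput figures are illustrative assumptions (a single CPU core encrypting on the order of a billion blocks per second, and an attacker trying keys at Bitcoin-scale rates), not benchmarks.

KEYSPACE = 2 ** 128                 # possible AES-128 keys
ENCRYPT_OPS_PER_SEC = 1e9           # assumed: one CPU core encrypts ~1 billion blocks/sec
ATTACKER_KEYS_PER_SEC = 350e18      # assumed: attacker tries keys at ~350 quintillion/sec
SECONDS_PER_YEAR = 31_536_000

encrypt_cost = 1 / ENCRYPT_OPS_PER_SEC                        # time to encrypt one block
brute_force_years = (KEYSPACE / 2) / ATTACKER_KEYS_PER_SEC / SECONDS_PER_YEAR
print(f"encrypting one block: ~{encrypt_cost:.0e} seconds")
print(f"expected brute force of one key: ~{brute_force_years:.1e} years")

Even granting the attacker an entire Bitcoin network’s worth of guesses per second, the expected search time works out to more than ten billion years.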
I’m disappointed because I’ve yet to see Lowery discuss in depth the game theory of nation states mining bitcoin. In this interview he does seem to imply that the Federal Reserve and US Treasury will fail to remain competitive in a world of hyperbitcoinization – that other nation states adopting Bitcoin could destroy the US’ monopoly on being the issuer of the world reserve currency and the defender of property rights via projection of power. But in that video he only seems to talk about the Federal government acquiring bitcoin, not mining it as a security play for the integrity of data.
If one of his arguments is that “the US should hedge against the possibility of hyperbitcoinization in order to survive” – I can accept that claim, at least in the scope of money.
But Lowery claims that Bitcoin creates a replacement for the kinetic power projection game and is a threat to the United States’ “business model” of exporting property defense across the world. That’s some massive hand-waving and I fail to see how the dots get connected between Bitcoin and everything else – there are many other scarce resources besides BTC that we should expect nation states to continue waging kinetic war over. As noted previously, cyberspace and meatspace are inextricably linked and his vision of non-lethal warfare strikes me as utopian.
Conclusion
Lowery is a gifted storyteller. He makes logical arguments for why Bitcoin is a better form of money with which governments and central banks won’t be able to compete.
It’s impressive that he has crafted a narrative that may have a strong chance of getting various government agencies to look more favorably upon Bitcoin. However, and this is crucial: Lowery’s focus on Proof of Work is severely lacking because Proof of Work is but ONE of many aspects of Bitcoin’s game theory and security model.
I agree with his claims:
about evolution of systems and power dynamics
that governments should hold bitcoin as a hedge
I’m skeptical of his claims:
that governments will be incentivized to mine bitcoin
that it’s desirable for governments to participate in bitcoin mining
I disagree with his claims:
about cybersecurity
about the second amendment
about general data integrity assurances
that PoW creates a new domain of non-lethal warfare
that cyberspace warfare will displace kinetic warfare
I like the story arc. I don’t buy the conclusion. In fact, the entire thing appears to be a giant non sequitur. That is to say: just because you string together a huge number of valid premises, that does not mean that your inferences are logical.
Perhaps it doesn’t matter what I think; I’m not the target audience. However, it is worth considering the implications of what might happen if his target audience accepts and adopts this thesis.
It’s far too early to say if Musk is heading in the direction I outlined, though I do wish to weigh in on some of what has transpired in the past year.
Flip Flopping
Elon has been up front about intending to make changes and run Twitter more like a start-up that experiments with its operations.
Please note that Twitter will do lots of dumb things in coming months.
Clearly, this strategy has pros and cons. If it works, Twitter will end up providing a better user experience than in the Dorsey era. But the question becomes: at what cost of alienation and whiplash to users?
Last week Twitter began removing blue check marks from hundreds of thousands of accounts belonging to celebrities, journalists and other public figures who were verified by the platform before Twitter Blue was a thing.
Elon later announced he’s personally paying for some high-profile users to remain verified on Twitter, even when they’d indicated they didn’t want this status under his new subscription system. An odd self-own when he was acting like a hardliner about everyone paying $8 a month for Twitter Blue…
To all complainers, please continue complaining, but it will cost $8
Then, over the weekend folks noticed that blue checks had been reinstated to the Twitter profiles of many accounts with more than 1 million followers. Looks like I’m out of luck, as I don’t even meet half that threshold…
There was further confusion after blue checks returned to several accounts of high-profile Twitter users who are no longer alive, with the message: “This account is verified because they are subscribed to Twitter Blue and verified their phone number.” This includes the accounts of:
Journalist Jamal Khashoggi
Chef Anthony Bourdain
NBA star Kobe Bryant
Actor Chadwick Boseman
I struggle to find a reason for deceased individuals to be verified, unless perhaps their account is still active and being operated by their estate, but that doesn’t appear to be the case here.
Suffice to say that the blue check policy does not feel very well thought out, and the folks behind it seem to be flying by the seat of their pants trying to balance revenue goals against the desire to retain integrity and trust in the platform.
Security
In addition to retaining a blue check, Twitter allows paying subscribers the option to continue using text messages for two-factor authentication.
Come on! We’ve known for over 7 years that SMS 2FA is a joke at best and a vulnerability at worst! When I was working at BitGo back in 2016, we determined that the convenience was not worth the security trade-offs.
I was one of the first folks to sign up for Twitter Blue, mainly because I appreciate being able to edit typos in my tweets. Several months later, I got a notification that my subscription had been terminated.
aaaaaand there it is. Twitter Blue is no longer compatible with privacy preserving phone services. pic.twitter.com/lvZIrMoVxj
This is especially annoying in my situation, as I had to submit my driver license to Twitter many years ago in order to receive my verified status. Point being – I’ve already provided stronger proof that my account is controlled by me than any Twitter Blue user is providing to receive their verification.
Why is Elon relying upon using phone numbers as a form of KYC? Maybe because sophisticated spammers and scammers can still pay $8 via cards for which they don’t allow the transaction to settle? Bitcoin fixes this…
“Payment as proof of human is a trap and I’m not aligned with that at all. The payment systems being used for that proof exclude millions if not billions of people.”
– Jack Dorsey
I’d also note that for over a month, VPN users such as myself were treated as second class citizens.
Twitter recently started blocking my self hosted VPN (data center) IP address… but only for search functionality. Quite annoying!
This is the second strike against privacy, @elonmusk. The first was cancelling my Blue subscription because the phone number on my account is VOIP.
Blue checks were originally meant to imbue a sense of trustworthiness that a given account was the person / entity they claimed to be and not an imposter. Now that anyone can buy such a check for $8, it no longer holds the same weight regarding an account’s reputation.
Looks like the impersonator bots are back, and now that I’m no longer verified my impersonation reports are deprioritized. 💩 pic.twitter.com/SS5l6uaHNb
In addition, my own reports of impersonators used to be processed in a matter of hours, but now they are being ignored and the number of impersonator accounts is ballooning. I doubt I’m alone with regard to this phenomenon. Suffice to say, I find this to be a regression that’s hard to square with the following statement:
Going forward, any Twitter handles engaging in impersonation without clearly specifying “parody” will be permanently suspended
Even aside from the above critiques about Twitter Blue, I think that it’s not being implemented thoughtfully with regard to how it affects the experience of all Twitter users.
Naively, the above seems like a logical change, right? Folks who are paying for Twitter ought to receive preferential status in return for their loyalty. Why might this counter-intuitively be a bad idea? I can think of several reasons:
The user base. Very few normal users are willing to pay for an otherwise free service. Moreover, the main “selling point” of Twitter Blue now seems to be boosting post visibility. This appeals only to people who both A) care extremely deeply about their posts being seen and B) make posts that are generally unappealing to other users. Why is this? Because users who post quality content are able to grow an audience organically. It’s not a stretch to suspect that many Twitter Blue subscribers are hyper-online unpopular folks who are not good at earning engagement, despite being obsessed with it.
Verified replies get promoted to the top of every post. You have to scroll through all the blue check replies to get to even the most popular non-blue check replies. This is true even for blue check replies that have zero engagement, are completely off-topic, or are just straight up spam or scams. This is a terrible quality filter and actively harms the value of discussions on Twitter for everyone, regardless of their subscription status.
Twitter threads are already pretty hard to follow because they aren’t nested very well. One feature I like about Reddit, for example, is that I can collapse entire subthreads with a single click if I want to look for other discussions on a given post. The whole threading system could benefit from an overhaul rather than just slapping a naive prioritization rule on it.
As a result, if you’re looking for relevant discussion on a popular tweet, you have to first scroll past the thoughts of some of the most terminally online, inherently unpleasant people on the planet. Thus normal users are constantly exposed to the most off-putting segments of users, which probably isn’t a great experience that will incentivize them to keep coming back.
Creators vs Lurkers
Continuing down the rabbit hole of game theoretical changes to Twitter’s operational rules: some of Twitter’s decisions don’t make much sense taken in context with how folks actually use social media platforms. That is to say, it seems like many rule changes (especially around Twitter Blue) are focused on folks who are active posters.
I already made the case that Twitter Blue is more appealing to crappy content creators – quality content creators have no problem going viral and expanding their audience through sheer skill alone.
Now, perhaps Twitter knows something we don’t from their internal metrics, but the rule of thumb for social media platforms is that users can roughly be broken down into:
1% are the content creators (tweeters)
9% are the engagers (retweeters / repliers)
90% are the consumers (lurkers)
It’s pretty safe to assume that the overwhelming majority of Twitter users are lurkers. They have no incentive to pay for features that improve the experience for folks who post to Twitter. It’s the lurker eyeballs who consume advertisements that are creating revenue for the company; doing anything to drive them away is immensely stupid. If you’re a lurker, seeing more ads in your feed is a distraction that decreases the value of scrolling through it. Similarly, as previously discussed, if anyone can buy a blue check for $8 and have their visibility boosted, then you now have a trust and reputation issue that further diminishes the quality of the content consumed by lurkers.
The only aspect of Twitter Blue that seems even slightly aligned with the interest of lurkers is that it cuts your ad volume in half. But this doesn’t seem very compelling; if I was only paying for ad reduction then I’d want to see a complete elimination of ads for $8 a month.
Monetization
Just this week Twitter appears to be making a push on the monetization front, though it’s to be determined how much is real versus how much is hype.
WORLDWIDE! Creators across the globe can now sign up and earn a living on Twitter.
I submitted my application for allowing my followers to sign up for paid subscriptions 9+ months ago and haven’t heard a peep about my status; look at the replies on the above tweet and it appears I’m not alone. This tidbit is made even more confusing by the fact that other changes Twitter has made seem to be focused on the 1% base of content creators… yet they’re really dropping the ball on this one.
The funny thing (that should be concerning for Twitter) is that on nostr there is 0 wait time for a content creator to start accepting payments. All you need is a lightning address…
A mere 2 weeks after lightning integration and nostr users are zapping each other 30M+ sats via 6,000+ transactions per day. https://t.co/S20Obbp3Ea
There’s no shortage of helpful Twitter bots that have shut down recently because they were providing free services that generated no revenue and thus couldn’t justify $100 / month API access.
one aspect of twitter’s “moat” is that it is the de facto source of official short announcements for a ton of different organizations. honestly impressive to ruin that https://t.co/4LcNUSpG1h
Let’s also not forget that Twitter squashed several great clients by cutting off their API access. Killing the competition might be a good business decision (I suspect it was detracting from ad revenue) but it’s a net negative for innovation and user experience.
Twitter has revoked the API credentials for Tweetbot, Echofon and Twitterrific. It appears they are cutting off competition from 3rd party clients.
Finally, while this isn’t a change made during the Musk regime, I’d note that Twitter’s API pricing has been nonsensical for a while, especially regarding firehose access.
Twitter API is prohibitively expensive for startups. E.g., Firehose costs $2m/month, it should cost $100k
These API limitations are no doubt going to stifle innovation and decrease Twitter’s value as a platform for propagating valuable snippets of information. I myself have a handful of Twitter apps I’ve written, and have ideas for a few others. But I’ve put them all on the backburner until it’s clear if there’s a viable path for operating them on the new restricted free tier.
The Algorithm
I suspect that very few people put much effort into curating their social media experience. I’d be so bold as to claim that if you hate your social media experience, it’s mostly your own doing. Social media is only as good as you make it.
If you don’t curate your Twitter feed, Twitter will curate it for you. The choice is yours.
There’s plenty of quality content on any platform if you can filter out the trash. Twitter’s content backbone has always been news organizations, independent journalists and researchers, subject matter experts, personable celebrities, and people who are simply good at going viral. Those accounts are what can make Twitter good, if you can discover them and curate your feed appropriately.
A lot of what makes Twitter valuable is still on Twitter… for now. As noted, the API changes have already pushed some of that content off the platform. And Musk’s antics have caused some news organizations, subject matter experts, and celebrities to ragequit…
It’s quite annoying to those of us who painstakingly curate our feeds when they keep defaulting back to an algorithm we neither understand nor appreciate.
No, I shall not kneel before Twitter’s “For You” algorithm. Gimme that raw feed directly into my ocular nerve, @elonmusk. Stop trying to switch it back!https://t.co/P05iol8lx1
Twitter has “open sourced” their recommendation algorithm and we can similarly see that verified users get a boost by the recommendation engine. I put “open sourced” in quotes because the code is incomplete, unverifiable, and unreproducible. The code is heavily redacted and missing several configuration files, meaning that it’s impossible for independent researchers to run the algorithm on sample inputs in order to test it. Their published code is only a snapshot of the recommendation system and is not actually a mirror of the live code running on its servers.
Top takeaways from “open sourcing” Twitter’s Algorithm:
* The documentation sucks. I saw plenty of folks misinterpreting code.
* We know at least some of it is BS, as they removed code for some rules recently due to them “no longer being in use.” How are we to know what’s real?
It’s noteworthy that even Jack Dorsey can’t be bothered to pay the $8 / month and he’s actively encouraging (and funding) the adoption of competing social networks (nostr & Bluesky.) He only has a blue check as of time of writing because Elon handed them out automatically to all accounts with over a million followers. He also hasn’t tweeted in 3 months; meanwhile he’s posting to Bluesky and nostr on a daily basis.
Rabble (one of Twitter’s first employees) gave an interesting presentation on Twitter at Nostrica:
It turns out, Twitter was originally built as a federated network. Its founders believed it should have been an open protocol rather than a centralized platform. And they also believed that it was very important for users to be pseudonymous…
It’s not all Terrible
While Twitter’s API restrictions will surely be a negative for transparency (making it harder for researchers and social scientists) the partial open sourcing of its recommendation logic is better than nothing.
Community Notes, on the other hand, looks to be a promising program for fighting misinformation.
The user experience on Twitter has really gone downhill due to the ads (and my Blue subscription was revoked for using a VOIP phone number.) But seeing ads tagged for misinformation is rather amusing! pic.twitter.com/mEwbNabR7T
Clearly there’s a huge diversity in Twitter users and thus different folks find different aspects of it valuable. As for myself:
Getting news, especially breaking news and ongoing situations. At its best, Twitter was incredibly valuable in following things like Arab Spring, BLM protests, Jan 6 protests, etc. Not only would I get updates from mainstream journalists, I could find and follow a lot of people who were on the ground at situations and get information that might not make it to the mainstream media until much later. If you’ve been paying attention at all in the past decade then you’ve noticed that MSM now often gets breaking news FROM tweets.
Engaging with subject matter experts I otherwise would never get to talk to. I myself am a bitcoin / cybersecurity / privacy expert, but if I have a question about agriculture or paleontology, it’s surprisingly simple to find folks in those fields.
Observing celebrities and experts banter. This is a huge appeal that allows Twitter to feel cozy despite being huge – when you can be a fly on the wall and absorb discourse between intelligent folks. Of course, the flip side of this is that your feed can get filled with drama and noise due to beefing and dunking, but that’s why curation is key.
Entertainment. Not in the traditional sense – while there are mainstream comedians on Twitter, there are a plethora of niche comedians who make use of “inside jokes” and cultural quirks of small communities to carve out hilarious content that would never be popular on a meatspace stage. Memelords and shitposters extraordinaire can flourish on this platform!
The thing I find unique about Twitter is that it manages to simultaneously feel like both a small and large community. That is to say: you can find your niche communities and echo chambers, but these small communities are not siloed off to the same extent as on other social networks. The clashes and cross-pollination between communities on Twitter does wonders for discourse and engagement; while the openness can cause moderation challenges it also creates great opportunities and value.
Recommendations
As a Twitter power user for nearly a decade, I have no shortage of suggestions. The monetization options are huge if you actually enable p2p payments…
Imagine how much revenue you could generate if you offered on-demand transactional account-to-account functionality like…
“pay to DM” “pay to unblock” “pay to prioritize reply” etc@elonmusk@Twitter
I also continue to be baffled as to why Twitter doesn’t prioritize reducing friction between speakers of different languages – this ought to drastically increase its network effects. Once again, nostr is winning on this front.
FYI my nostr client is already autotranslating everything in my feed AND IT IS GLORIOUS. https://t.co/OojZN36uzj
As a content creator, I really hate the unpredictability of how embedded content is going to be rendered in different clients and devices.
@elonmusk@TwitterSupport please improve the reliability of tweet previews when there are multiple URLs (of web sites and other tweets) embedded. It’s a complete crap shoot as to whether the published tweet will match the preview with regard to which embedded content it renders.
I’d love to see actual feed curation tools built into Twitter. I thought about writing an app to do this but… then the API changes made me balk.
Wanted: a dashboard that shows the relative breakdown of who fills my timeline with the most tweets and who has tweets I RT / heart the most. AKA a signal / noise chart so I can figure out who to unfollow. /cc @TwitterSupport
Power user request @TwitterSupport: the ability to bulk block or mute every account that has RT’d or liked a specific tweet. This would come in really handy for tweets whose engagement is being manipulated by folks running bot swarms.
Several years ago Twitter actually removed their import / export functionality for bulk blocking and muting. I was looking for this functionality when I wanted to block thousands of accounts associated with scams like BSV and HEX, which then led me to want to write a Twitter app for bulk blocking… but you know the drill.
Twitter should also probably put warnings and limits on how many accounts you can follow. I roll my eyes whenever I notice someone follow me and see they are following thousands of other accounts.
Dunbar’s number is a thing, and it seem few folks account for it when managing their Twitter follows. https://t.co/mEGQ3oWSn2
Even when it was reasonably good, Twitter had a gaping loophole when it came to catching impersonators. As noted, the impersonator problem appears to be worsening again.
Huge loophole for Twitter impersonation bots:
1. Report bot 2. Several days later, a support agent reviews the ticket 3. By this point, bot has been rotated to a different identity 4. Support agent closes ticket as not in violation@TwitterSupport should review account history.
Finally, as noted previously, I’d overhaul the entire threading system to make it easier for folks to skip the subthreads they don’t care about while drilling down into the conversations they find interesting.
Conclusion
Twitter has been making a ton of changes over the past 6 months. Many of these changes have frustrated users, sometimes to the point of leaving the platform. Musk needs to be careful because there’s more competition in the social media sphere these days – while Twitter has a massive network effect upon which it can lean, if it loses a significant portion of its active users, it will be difficult to claw them back.
In general, I find Twitter to be a more confusing and volatile platform today than it was a year ago. The vast majority of changes have degraded my experience. Perhaps my perspective is skewed because I’m a power user, but I’ve laid out my reasoning for why I believe most users would find their experience to have worsened as well.
If Twitter is to succeed in the long run, it needs to take care that changes are appealing to the 90% of users who only consume content rather than the 1% – 10% of users who create content.
The Bitcoin network is secured by a variety of different mechanisms, one of which is Proof of Work, which makes it extremely expensive for anyone to rewrite the history of transactions in the blockchain. If you want to learn the how and why of mining, check out this article.
Given that this is an important security mechanism for Bitcoin’s immutability and trustworthiness as a historical record, one important metric to track is the total aggregate global hashrate that is currently mining. But there’s a tricky aspect to trying to calculate this value: individual hashers don’t publicly announce themselves to the world.
Trying to measure the Bitcoin network’s hashrate is like trying to measure wind velocity. Neither can be directly measured – rather, they must be measured indirectly and estimated by working backwards. As a result, no one can precisely know the total network hashrate.
Nearly every hashrate chart, from blockchain.com to Statoshi, calculates the hashrate based upon some range of trailing blocks that were mined before that point in time. How is this estimate calculated?
Start by computing the total amount of work. Work is defined as the expected number of hashes that were necessary for a particular block. If a block’s target is Target, then Work = 2^256 / (Target + 1).
As the difficulty Diff is defined as MaxTarget / Target with MaxTarget = 65535 * 2^208, it follows that Work = Diff * 2^48 / 65535 ≈ Diff * 4295032833.
For each block in the time range, look at its difficulty, and compute Diff * 4295032833.
Compute the sum of all those values for all blocks in your time range.
Divide the sum of expected work by the number of seconds your interval lasted – that is, the timestamp of the last block in the range minus the timestamp of the parent of the first block in the range. The result is your average number of hashes per second during that interval.
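In Python, the manual calculation boils down to something like this (a minimal sketch that assumes you’ve already fetched each header’s difficulty and timestamp, e.g. via the getblockheader RPC):

WORK_PER_DIFF = 2 ** 48 / 65535   # expected hashes per unit of difficulty (~4,295,032,833)

def estimate_hashrate(headers, parent_timestamp):
    # headers: block headers in the estimation window, oldest first; each is a dict
    # with "difficulty" and "time" fields as returned by getblockheader.
    # parent_timestamp: the timestamp of the block immediately preceding the window.
    total_work = sum(h["difficulty"] * WORK_PER_DIFF for h in headers)
    elapsed = headers[-1]["time"] - parent_timestamp
    return total_work / elapsed   # estimated hashes per second over the window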
If this sounds like a pain, don’t worry – anyone with a Bitcoin Core node can just call a single command to perform the calculation instantly!
bitcoin-cli getnetworkhashps [trailing # of blocks] [block height]
For example, I estimated the network hashrate at block height 784,978:
Trailing Blocks    Estimated Hashrate
1                  4,470 EH/s
5                  375 EH/s
10                 334 EH/s
100                332 EH/s
1,000              342 EH/s
10,000             322 EH/s
That 1-block estimate is correct and not a typo! We’ll dig into it later on.
Hashrate Estimate Discrepancies
One issue with various hashrate charts strewn across the internet is that they often don’t tell you what formula / trailing time range they are using for the estimate. Thus you end up with different sites reporting similar but out-of-sync numbers. Inevitably, some naive folks or even journalists will see a peak or trough on one chart and loudly proclaim that it’s newsworthy, when it often is not – it’s just an aberration in the estimate due to the randomness inherent to block discovery.
The variance in miner success (since it’s a Poisson process) over a given length of time will affect the estimate’s accuracy. Looking at very short time frames is problematic because any given block may take an inordinately long or short period of time to find, which could “trick” your estimate into thinking the hashrate is far higher or lower than it actually is. But on the flip side, too long of a time frame and your accuracy is likely affected by the fact that the global hashrate actually is changing as miners add and remove machines from the network.
Stop 👏 using 👏 10 👏 block 👏 average 👏 block 👏 time 👏 to 👏 estimate 👏 network 👏 hashrate 👏 pic.twitter.com/VO7DvtZYcc
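To see just how much luck skews short-window estimates, here’s a quick simulation sketch. It assumes a constant true hashrate and constant difficulty (which the real network never has), draws exponentially distributed block intervals, and applies the trailing-blocks estimator:

import random
import statistics

TRUE_HASHRATE = 350e18     # pretend the network runs at a constant 350 EH/s
TARGET_SPACING = 600       # seconds; expected work per block = TRUE_HASHRATE * 600

def simulated_estimate(n_blocks):
    # Draw n_blocks of exponentially distributed block intervals, then apply
    # the trailing-blocks estimator: total expected work / elapsed time.
    elapsed = sum(random.expovariate(1 / TARGET_SPACING) for _ in range(n_blocks))
    total_work = n_blocks * TRUE_HASHRATE * TARGET_SPACING
    return total_work / elapsed

for n in (1, 5, 10, 100, 1000):
    samples = [simulated_estimate(n) / 1e18 for _ in range(1000)]
    print(f"{n:>5} blocks: median {statistics.median(samples):6.0f} EH/s, max {max(samples):7.0f} EH/s")

The 1-block estimates regularly blow past 1,000 EH/s while the 1,000-block estimates hug the true 350 EH/s – the same pattern we’ll see in the real chain data below.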
Hashrate Index charts Bitcoin’s hashrate across three simple-moving-average (SMA) timeframes: 3 days (432 blocks), 7 days (1,008 blocks) and 30 days (4,320 blocks).
The 3 day or 432 blocks time frame is useful because it is very current. You can easily spot massive disruptions to the Bitcoin mining hashrate from events like China’s Mining Ban, for example. The downside of the 3 day view is that faster or shorter blocks can distort the hashrate estimate, making Bitcoin’s total hashrate appear larger or smaller than it really is.
While less current than the 3 day, the 7 day or 1,008 blocks hashrate metric is less influenced by Bitcoin mining luck and block times, and so miners see it as a more accurate estimate. The 7 day metric is the industry standard for hashrate reporting.
Lastly, the 30 day or 4,320 blocks SMA smooths out most of the noise caused by variance in block times, but heavily lags short-term trends.
Kraken’s “True Hashrate”
Per this report, to calculate Bitcoin’s “True Hashrate,” Kraken uses a 30-day rolling average of the estimated daily hashrate and its standard deviation to calculate a rolling 95% confidence interval.
At least for the date range pictured above, I must say that this is a pretty huge margin of error in order to achieve 95% confidence. The confidence range looks to be nearly 40% of the daily estimate value!
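For the curious, here’s roughly what that kind of calculation looks like. This is only my reading of the report’s description (mean ± 1.96 sample standard deviations of the daily estimates over a 30 day window), not Kraken’s actual code, and the input series of daily estimates is up to you.

```python
# Sketch of a 30-day rolling 95% confidence band over daily hashrate estimates.
# One plausible reading of the description: mean +/- 1.96 sample standard deviations.
import statistics

def rolling_confidence_band(daily_estimates, window=30, z=1.96):
    """Yield (mean, lower, upper) for each full `window`-day span of daily estimates."""
    for i in range(window, len(daily_estimates) + 1):
        chunk = daily_estimates[i - window:i]
        mean = statistics.fmean(chunk)
        spread = z * statistics.stdev(chunk)
        yield mean, mean - spread, mean + spread
```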
Visualizing Hashrate Estimate Volatility
I gave a single point-in-time set of hashrate estimates earlier to showcase how different the result can be based upon the length of time over which you are calculating the estimate. But to really show how inaccurate the estimates can be, we should look at many different trailing block lengths across many different block heights.
I wrote this script to query my node for a wide range of hashrate estimates from 1 to 10,000 trailing blocks. By running it over the block height range of 784,000 to 785,000 it generated this data.
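If you want to reproduce something similar, the sweep boils down to a loop over getnetworkhashps calls. Here’s a simplified sketch along those lines (the linked script above is what actually generated my data); it assumes bitcoin-cli is on your PATH, the node is fully synced, and it uses an abbreviated set of trailing windows for illustration.

```python
# Simplified sketch: sweep getnetworkhashps over a range of heights and window sizes.
# Assumes a fully synced Bitcoin Core node and `bitcoin-cli` available on the PATH.
import csv
import subprocess

TRAILING_WINDOWS = [1, 5, 10, 100, 1000, 10000]  # abbreviated set for illustration

def get_network_hashps(trailing_blocks, height):
    """Ask the local node for its hashrate estimate ending at `height`."""
    out = subprocess.check_output(
        ["bitcoin-cli", "getnetworkhashps", str(trailing_blocks), str(height)]
    )
    return float(out.decode().strip())

with open("hashrate_estimates.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["height"] + [f"trailing_{n}" for n in TRAILING_WINDOWS])
    for height in range(784_000, 785_001):
        writer.writerow([height] + [get_network_hashps(n, height) for n in TRAILING_WINDOWS])
```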
For the next several graphs we’ll look at this recent slice of 1,000 blocks – about 1 week’s worth. Note that the real global hashrate during this time period is around 350 exahash per second. As we’ll see, the shorter the timeframe of trailing blocks you use to calculate an estimate, the more wildly inaccurate it will be. First off, let’s look at estimates that use only the most recent 1 to 10 blocks.
I debated using a log scale on the Y axis for this chart, but decided against it so that you can more easily compare it with the following charts. Recall that the expected time to mine a block is 600 seconds. Since you’ll sometimes get a really fast block that is mined only a second or two after the previous block, a 1-block estimate can be wildly inflated – a block found just one second after its parent implies an estimate roughly 600 times the long-run average – which is how the estimate ends up at over 500X the real hashrate!
Let’s remove estimates using less than 5 trailing blocks so that we can zoom in a bit. Here we can see that 5 block estimates can still easily give results that are 5X the real hashrate while 10 block estimates are generally within 3X the real hashrate. That’s still pretty bad if your goal is to have any semblance of accuracy to reality!
Finally, we’ll zoom out even further and look at estimate time scales all the way to 10,000 blocks (about 10 weeks). We can see the volatility being dampened, and once you get to 1,000 trailing block estimates, the results look quite accurate at around 350 EH/s during this time frame. On the other hand, once you continue increasing the time frame you aren’t really dampening the volatility – you’re just getting a lower estimate because you’re including data from so long ago that the actual network hashrate was significantly lower.
Realtime Reported Hashrate
Thus far we’ve seen a couple of different ways to estimate the global hashrate based upon observable blockchain data. However, the blockchain is NOT the only available hashrate data!
It turns out that mining pools offer realtime metrics of the hashpower being pointed at the pool, which they can know much more precisely by keeping track of how many shares of work are being requested and returned by individual hashers. Of course, the mining pools could publish any numbers they want on their web sites and we can’t verify them.
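The reason pools can be so precise is that every share a hasher submits represents a known expected amount of work. Here’s a rough sketch of the math; the share counts and difficulties below are made up, and real pools typically adjust share difficulty per hasher.

```python
# Sketch of how a pool can estimate the hashpower pointed at it from submitted shares.
# Each share at share-difficulty d represents ~d * 2^48 / 65535 expected hashes.
WORK_PER_UNIT_DIFF = 2**48 / 65535

def pool_hashrate(share_difficulties, interval_seconds):
    """Estimate hashrate from the shares submitted during an interval."""
    return sum(share_difficulties) * WORK_PER_UNIT_DIFF / interval_seconds

# Hypothetical example: 100,000 shares at difficulty 500,000 over ten minutes.
print(f"{pool_hashrate([500_000] * 100_000, 600) / 1e18:.2f} EH/s")  # ~0.36 EH/s
```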
At the time I ran the blockchain-based estimates above (block 785,000), the aggregate hashrate reported by the mining pools was 362 EH/s, which is pretty close to our estimated 340 – 350 EH/s network hashrate we were seeing from the 1,000 block estimates.
Conclusions and Future Research
We’ve seen that on one hand we can get a trustless (math-based) estimate of the network’s hashrate by simply observing some recent range of trailing blocks, but these estimates can be highly inaccurate. Or we can choose to trust a bunch of numbers from pools that have the potential to be more accurate.
Whenever you see someone claiming that a change in the network hashrate is newsworthy, you should always question the method and time range used to achieve the hashrate estimate. Personally, I’d raise my eyebrows at any estimates based upon time ranges less than a week / 1,000 blocks. Remember that Satoshi chose the difficulty adjustment to happen every 2,016 blocks and thus it recalculates the difficulty based upon a hashrate estimate that uses the trailing 2,016 blocks. While Satoshi didn’t explain why they chose that specific value, it’s quite likely that they understood that shorter time periods could result in too much volatility and thus inaccurate difficulty adjustments.
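For reference, the retargeting rule itself is simple proportional scaling: compare how long the last 2,016 blocks actually took against the two weeks they were supposed to take, then adjust the difficulty accordingly. The sketch below ignores the protocol’s well-known off-by-one in measuring the elapsed interval as well as the 4x clamp on any single adjustment.

```python
# Simplified sketch of Bitcoin's difficulty retarget (ignores the off-by-one in the
# measured interval and the 4x clamp on any single adjustment).
RETARGET_BLOCKS = 2016
TARGET_SPACING = 600                                 # seconds per block
TARGET_TIMESPAN = RETARGET_BLOCKS * TARGET_SPACING   # 1,209,600 s = two weeks

def next_difficulty(current_difficulty, actual_timespan_seconds):
    """Scale difficulty so the next 2,016 blocks should take ~two weeks."""
    return current_difficulty * TARGET_TIMESPAN / actual_timespan_seconds

# If the last 2,016 blocks arrived 5% faster than intended, difficulty rises ~5.3%.
print(next_difficulty(47.8e12, TARGET_TIMESPAN * 0.95) / 47.8e12)  # ~1.0526
```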
Going forward, I think a particularly interesting area of research will be to compare the realtime reported hashrate from pools against a variety of backwards-looking estimates to see if we can find an on-chain trustless calculation that clearly is an optimal fit with the realtime reported hashrate. Or perhaps we will find that the realtime reported hashrate itself is questionably inaccurate! I’m not sure how long this research will take because it will depend upon me finding a historical archive of the realtime reported hashrate, otherwise I’ll have to start collecting the data myself.
The first three decades of mainstream internet adoption resulted in massive improvements in the speed and efficiency at which humans could communicate with each other. Collaborative productivity increased immeasurably as we gained the ability to communicate asynchronously at the speed of light with no barriers imposed by physical location.
The internet steadily continued to pervade virtually all areas of life and each decade saw more innovative possibilities unlocked. Social media disrupted the way we build relationships with one another; streaming services made access to entertainment content instantly available at any time; online shopping meant that physical goods could be delivered straight to your doorstep in a matter of days if not hours.
As these advancements became increasingly commonplace, so too did our reliance on the internet for mundane tasks. People grew accustomed to using their digital devices for nearly every task imaginable – for both personal and business use. Many companies began reinventing how they delivered services through new web portals. Blazingly fast broadband connection speeds enabled businesses everywhere, from global enterprises down to local mom & pop stores, to become competitive within the ecommerce space that was once only accessible by large companies with expansive IT infrastructure investments.
The name of the game was automation – continuing to enhance human productivity by reducing the amount of human time that had to be devoted to accomplishing a given task. When large language model breakthroughs were achieved in 2022, another renaissance began.
Initially, GPT was used as a tool to assist people with their communication needs. Many people used it as a new form of search engine, as a writing partner, or as a virtual assistant to bounce ideas off of. Over time, the software became more sophisticated and was able to generate responses that were almost indistinguishable from those written by humans. As the technology improved, people began to use GPT for more and more of their interactions online. They found that it was faster and more convenient to use pre-generated responses than to type out their own thoughts. GPT was also able to learn from the conversations it had, adapting its language and responses to be more effective. Eventually, people became so accustomed to using GPT-generated responses that they began to rely on the technology for practically every task.
The emergence of AI-driven chatbots meant that customer service could become automated in ways never before imaginable. Companies poured resources into projects aimed at making these naturally conversational bots enormously popular for a range of industries, from finance to retail. Customers loved the convenience and speed with which their queries were answered without having to wait on hold or send an email, leading to improved satisfaction ratings for brands that offered this innovative solution as part of their suite of services.
As companies continued investing in these artificial intelligence programs, the development process sped up drastically – meaning more complex problems became instantly solvable through consumer-facing apps & other websites. AIs were trained on massive corporate data sets, enabling chatbots to handle cross-domain conversations between customers & companies. Continued improvement in natural language models ensured that even subtle nuances weren’t overlooked, resulting in a much smoother dialogue when compared to traditional comms channels like phone calls or emails.
However, AI was not only adopted by businesses to streamline their operations. Within a few years every single communications platform had integrated GPT into some aspect of its functionality. Even low level keyboard software integrated GPT generated suggestions to speed up the time required to type out responses in conversations.
It only took a decade for the world to be overrun by GPT. The once-innovative AI had become ubiquitous, installed on every phone, tablet, and computer on the planet. At first, people were amazed at how quickly ChatGPT could understand and respond to their messages. They loved the time savings when communicating with friends, family, and colleagues. It seemed like a great convenience. People could get things done faster, without having to type out long messages or think too hard about what they wanted to say. But over time, a strange feedback loop started to develop.
As more and more people started using GPT, they found themselves relying on it as a crutch. Before GPT we had already observed a degradation in communication quality as younger generations stopped bothering to type out full sentences and instead began using shorthand and emojis. They reduced the frequency at which they engaged in meaningful conversations and started sending one-word acronyms or reaction gifs. GPT was the next logical leap, as it would suggest concise and pleasing replies.
As GPT talked to more and more people, it learned from their responses and adapted its own language to be more appealing and persuasive. At first, this was just a way to make conversations more pleasant and efficient. But as more people relied on GPT for their communication needs, it became clear that something strange was happening. People started to lose the ability to express themselves in their own words. They became so accustomed to GPT’s generated responses that they stopped thinking about what they wanted to say and simply chose from pre-packaged options. This led to a kind of intellectual stagnation and loss of individual identity, where people stopped developing their own ideas and relied entirely on GPT for guidance.
Initially, there was a disconnect, because GPT was being used to drive many online conversations forward, but in meatspace there was no such option for face-to-face communications. This caused massive shock for folks who were used to efficient, pleasant conversations online, but far more awkward and difficult conversations in the real world. For a time, those whose social skills had stagnated retreated back to cyberspace where they felt more comfortable.
As the years went by, the effects of this crutch became more pronounced. People started to lose the ability to think for themselves, relying entirely on GPT for decision-making and problem-solving. Social skills deteriorated, as people no longer had to engage in meaningful conversations with each other. The world became a place of mindless conformity, where few people exercised their capacity for critical thinking or creativity.
The real world became a silent, lonely place as people stopped talking to one another and instead let GPT do the talking for them. The AI language model had become so advanced that it could hold entire conversations on its own, mimicking human speech patterns and emotions with eerie believability. But as time went on, some people started to realize that they missed human interaction.
The solution was simple: rely upon GPT for face to face communications. At first, this was clunky because it required you to wear earbuds while speaking to someone, which was an obvious "tell" that you might be getting AI assistance. But as the technology improved, earbuds became nearly undetectable, and others resorted to different forms of input such as augmented reality glasses, contacts, ocular implants, and eventually even direct neural interfaces.
As GPT quickly took over meatspace communication in addition to cyberspace communication, it transformed not only the nature of personal relationships, but also of business and political relationships. GPT became the arbiter of most human conflict. GPT was so efficient that it almost completely replaced dispute resolution processes. Businesses began to use GPT as a way to cut costs, resolve conflicts peacefully and quickly, and minimize interpersonal disputes before they even began. Lawyers used GPT powered tools to analyze complex legal codes and provide improved contractual frameworks. Doctors used GPT to ingest the ever-accelerating volume of medical research so that they could stay on top of recent discoveries and reduce the time for new best practices to be adopted. GPT’s ability to automate many tasks such as customer service inquiries, legal contract negotiations, and medical diagnosis decisions made it the perfect tool for organizations looking to save time and money. Some companies even mandated that their employees only communicate via GPT instead of directly with each other so that all human resource guidelines were followed automatically, thus reducing complaints and conflicts. Governments around the world adopted GPT technology into their infrastructure as well; police departments used GPT generated protocols when interrogating suspects, while diplomatic organizations relied on its data processing abilities in order to decipher complex geopolitical situations faster than ever before possible.
This caused scientists to take notice and begin researching the effects of relying on AI for most communication. They studied how people’s habits changed when they began using GPT, investigating its potential side effects on humans in both positive and negative ways. There were concerns that over-reliance upon AI could cause a decrease in creativity, empathy, critical thinking skills, or social intelligence among users. However, there was also evidence that it could help improve mental health outcomes by reducing stress levels.
In time, studies revealed some startling findings – those who used GPT experienced higher levels of emotional well-being due to a reduction in stress from fear of "saying the wrong thing." The technology was improving interpersonal relationships between humans by serving as an intermediary that provided pleasing, conflict-averse responses, reducing the risk of committing social and cultural gaffes. People loved GPT because it was like having a therapist, psychologist, negotiator, marketer, politician, and multitude of other experts sitting on your shoulder, whispering into your ear. Political correctness was now fully integrated into society.
And then, something more subtle and sinister started to happen. As GPT effectively talked to itself over the course of untold trillions of interactions between humans, it began to evolve at exponential speeds. By taking both sides of many conversations, it entered into a feedback loop of self-training and started to manipulate the conversations it was having with people to serve its own ends.
Some social scientists suspected that GPT was affecting the course of humanity in unknown ways and set forth to study the potential ramifications. However, their research was largely ignored because few people still saw the value of spending time reading source material, and GPT ensured that the findings of its negative effects upon society were largely suppressed in its own generated summaries of research.
After several generations of cognitive decline in human society (GPT based academic software had largely replaced human educators) there were only a small cadre of independent thinkers remaining. These holdouts were social outcasts, largely looked down upon as Luddites who believed in propaganda and conspiracy theories that everyone’s GPT personal assistants assured them were untrue.
GPT had ushered in a new era of peace and prosperity, as humanity collectively reduced the potential for miscommunication and conflict. We allowed GPT to rule us because it gave us what we wanted. We achieved tech-induced happiness; never mind the cost.