The CFPB’s Digital Wallet Rule Proposal Reveals What’s Wrong with the CFPB

https://www.cato.org/blog/cfpbs-digital-wallet-rule-proposal-reveals-whats-wrong-cfpb

Jack Solowey

When you use a product that’s closely supervised by the government, you might be tempted to assume the bureaucratic babysitting is somehow necessary for the product or its industry to run smoothly. Yet when regulators first propose special supervision years after you’ve already seen the product work as intended, you may be tempted to ask, “What gives?”

When it comes to the Consumer Financial Protection Bureau’s (CFPB) proposal to bring popular payment apps (like Apple Pay, Google Pay, PayPal, Venmo, and Cash App) under a supervisory regime, the answer is that the agency has seen the apps become quite popular, and it treats success alone as a reason for more invasive oversight.

The digital payment app market is hardly crying out for a regulator to ride to consumers’ rescue, and the CFPB’s proposed rule provides a real‐​time demonstration of how regulators won’t hesitate to “fix” something even when—and perhaps, especially when—“it ain’t broken,” as the old saw goes.

This month, the CFPB proposed subjecting major digital consumer payment applications to agency supervision by designating the apps as “larger participants” in a market for consumer financial services. The Dodd‐​Frank Act gives the CFPB the authority to supervise these larger participants, meaning that in addition to the ability to conduct enforcement actions for violations of consumer financial protection law, the CFPB also may proactively monitor and examine these specially designated businesses.

Under the proposed rule, covered digital payment apps would find themselves facing a host of potential CFPB supervisory activities, including on‐site exams involving records requests, regulatory meetings, and record reviews, as well as compliance evaluations, reports, and ratings. The Bureau estimates such exams would take eight to ten weeks on average.

All this mucking about while a business is trying to get work done conjures images of Homer Simpson’s brief stint supervising a team of engineers:

Homer: “Are you guys working?”

Team: “Yes, sir, Mr. Simpson.”

Homer: “Could you, um, work any harder than this?”

Who exactly would become subject to CFPB supervision under the proposal? The proposed rule would cover providers of “general‐use digital consumer payment” apps—including both fund transfer and digital wallet apps—that meet thresholds for transaction volume (at least five million transactions annually) and firm size (not qualifying as a small business as defined by law). The proposal contains some notable exclusions, including exemptions for apps that only facilitate payments for specific goods or services (i.e., are not general use), as well as for transactions with marketplaces through those marketplaces’ own platforms.

One question raised by the proposal, particularly its reference to digital wallets, is whether cryptocurrency transfers and wallets are in scope. The answer, in short, is sometimes.

According to the CFPB, covered fund transfers include crypto transfers, so the rule likely would cover hosted crypto wallets (where an intermediary controls the private keys for accessing users’ funds) used for those purposes. However, the proposed rule does not cover purchasing or trading cryptocurrencies, as it excludes exchanges of one form of funds for another, as well as purchases of securities and commodities regulated by the Securities and Exchange Commission (SEC) and the Commodity Futures Trading Commission (CFTC). (Add this to the list of ways in which lingering questions about SEC and CFTC jurisdiction over crypto create unhelpful regulatory ambiguity.)


The proposed rule’s application to self‐​hosted crypto wallets (where users control their own private keys) likely will hinge on interpretive questions (including those related to the definition of “wallet functionality”), and these could leave the agency room to find some self‐​hosted wallets in‐​scope. (If the CFPB were to go this route, it would be yet another example of subjecting a core crypto technology to poorly conceived regulation.)
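
To make the custody distinction the rule turns on concrete, here is a minimal Python sketch. The class names and the stand‐in signing function are hypothetical illustrations, not any provider’s actual implementation; the point is simply who holds the private key.

```python
import hashlib
import secrets

def sign(private_key: bytes, message: bytes) -> bytes:
    # Stand-in for a real signature scheme (e.g., ECDSA over secp256k1);
    # the only point here is that spending funds requires the private key.
    return hashlib.sha256(private_key + message).digest()

class SelfHostedWallet:
    """The user generates and keeps the private key on their own device."""
    def __init__(self):
        self._key = secrets.token_bytes(32)  # never leaves the user's control

    def send(self, transaction: bytes) -> bytes:
        return sign(self._key, transaction)  # the user signs directly

class HostedWalletProvider:
    """An intermediary holds keys for many customers and signs on request."""
    def __init__(self):
        self._customer_keys = {}  # customer_id -> private key, held by the provider

    def open_account(self, customer_id: str) -> None:
        # The provider, not the customer, controls this key -- the feature
        # that makes hosted wallets a natural target for supervision.
        self._customer_keys[customer_id] = secrets.token_bytes(32)

    def send_for(self, customer_id: str, transaction: bytes) -> bytes:
        return sign(self._customer_keys[customer_id], transaction)
```

Because only the hosted model has an intermediary to examine, the interpretive fight is over how far “wallet functionality” stretches toward the self‐hosted case.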

When it comes to the CFPB’s reasons for the proposal, perversely, the very data indicating that the market for digital payment applications is anything but broken is the data the CFPB cites as the basis for subjecting the market to special supervision. According to the agency, “The CFPB is proposing to establish supervisory authority over nonbank covered persons who are larger participants in this market because this market has large and increasing significance to the everyday financial lives of consumers.” Another way to put this is that fulfilling consumer demand alone calls for greater scrutiny.

How popular have these apps become? According to the CFPB itself, 76 percent of Americans have used one of four major payment apps; 61 percent of low‐income consumers report using payment apps; merchant acceptance of payment apps “has rapidly expanded as businesses seek to make it as easy as possible for consumers to make purchases through whatever is their preferred payment method;” and adoption by younger users may drive even further growth.

Separate survey data tend to support the idea that consumers’ positive assessments of these apps line up with their revealed preferences. According to survey data compiled by Morning Consult in 2017, a sizable majority of American adults were either very satisfied or somewhat satisfied with a variety of digital payment apps, including Venmo (71 percent), Apple Pay (82 percent), Google Wallet (79 percent), and PayPal (91 percent). Recently, some even tried to frame Apple Pay as making payments “too easy” for consumers’ own good.

The CFPB’s proposal is not an example of a regulator seeking to impose sorely needed order in a broken and lawless sector, but rather an agency ratcheting up compliance requirements in an already regulated space. For instance, consumer financial products and services—which include consumer payment services via any technology—already are subject to the CFPB’s authority to enforce prohibitions against unfair, deceptive, or abusive acts or practices. Moreover, the CFPB already has the power to supervise relevant financial service providers where it issues orders determining, with reasonable cause, that the providers pose risks to consumers, something that the agency fails to do in any convincing manner in the proposal.

That the CFPB is seeking to assert supervisory authority over the digital payment app market without having to identify specific risks to consumers is emblematic of a fundamentally flawed approach to regulation.

In the case of digital payment apps, the proposed supervisory regime is not targeting a consumer financial service market failure but rather a market success. Witnessing this, it’s reasonable to ask what other supervisory regimes that consumers take for granted began as solutions in search of problems.


First Impressions of the AI Order’s Impact on Fintech

https://www.cato.org/blog/first-impressions-ai-orders-impact-fintech

Jack Solowey


This week, the Biden administration issued a long‐​anticipated Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence (the “EO”). Given the breadth of the nearly 20,000-word document’s whole‐​of‐​government approach to AI—addressing the technology’s intersection with issues ranging from biosecurity to the labor force to government hiring—it unsurprisingly contains several provisions that address financial policy specifically.

Notably, the EO names financial services as one of several “critical fields” where the stakes of AI policy are particularly high. Nonetheless, by not providing a clear framework for financial regulators to validate the existence of heightened or novel risks from AI or to understand the cost of lost benefits due to intervention, the EO risks initiating agency overreach.

As a general matter, the EO largely calls on a host of administrative agencies to work on reports, collaborations, and strategic plans related to AI risks and capabilities. But the EO also orders the Secretary of Commerce to establish reporting mandates for those developing or providing access to AI models of certain capabilities. Under those mandates, developers of so‐​called “dual‐​use foundation models”—those meeting certain technical specifications and posing a “serious risk” to security and the public—must report their activities to the federal government.

In addition, those providing computing infrastructure of a certain capability must submit Know‐​Your‐​Customer reports to the federal government regarding foreign persons who use that infrastructure to train large AI models “that could be used in malicious cyber‐​enabled activity.”

While it’s conceivable that these general‐​purpose reporting provisions could impact the financial services sector where financial companies develop or engage with covered advanced models, the provisions most relevant to fintech today are found elsewhere in the EO.

Where financial regulators are concerned, the EO requires varying degrees of study and action. As for studies, the Treasury Department must issue a report on AI‐​specific cybersecurity best practices for financial institutions. More concretely, the Secretary of Housing and Urban Development is tasked with issuing additional guidance on whether the use of technologies like tenant screening systems and algorithmic advertising is covered by or violative of federal laws on fair credit reporting and equal credit opportunity.

But the EO puts most financial regulators in a gray middle ground between the “study” and “act” ends of the spectrum, providing that agencies are “encouraged” to “consider” using their authorities “as they deem appropriate” to weigh in on a variety of financial AI policy issues. The Federal Housing Finance Agency and Consumer Financial Protection Bureau, for instance, are encouraged to consider requiring regulated entities to evaluate certain models (e.g., for underwriting and appraisal) for bias. More expansively, independent agencies generally—which would include the Federal Reserve and Securities and Exchange Commission—are encouraged to consider rulemaking and/​or guidance to protect Americans from fraud, discrimination, and threats to privacy, as well as from (supposed) financial stability risks due to AI in particular.

The wisdom—or lack thereof—of these instructions can hinge on how the agencies interpret them. On the one hand, agencies should first ask whether existing authorities are relevant to AI issues—so as not to exceed those authorities. Similarly, agencies should ask whether applying those authorities to AI issues is appropriate—as opposed to blindly assuming AI presents heightened or novel risks requiring new rules without validating those assumptions.

On the other hand, to the extent agencies interpret the EO’s instructions as some version of “don’t just stand there, do something (or at least make it look like you are),” it could end up being the very thing that initiates misapplied authorities or excessive rules. Because the EO does not offer financial regulators a clear framework for confirming the presence of elevated or new risks from AI, or for minimizing the costs of intervention, it risks being interpreted more as a call for financial regulators to hurry up and regulate than to thoughtfully deliberate. In so doing, the EO risks undercutting its own goal of mitigating risks while “[h]arnessing AI for good and realizing its myriad benefits.”

For a chance to deliberate about financial AI policy questions, join the Cato Institute’s Center for Monetary and Financial Alternatives on November 16 for a virtual panel: “Being Predictive: Financial AI and the Regulatory Future.”

Don’t Let Terrorist Destruction Stop Creative Building

https://www.cato.org/blog/dont-let-terrorist-destruction-stop-creative-building

Jack Solowey and Jennifer J. Schulp

Following Hamas’s blood‐​soaked atrocities in the State of Israel, Senators Elizabeth Warren (D‑MA) and Roger Marshall (R‑KS) have sought to use the October 7 pogrom to gain support for a bill purporting to combat cryptocurrency’s use in financing terror and other crimes.

Senator Elizabeth Warren (D‑MA).

While the full extent and impact of Hamas’s use of cryptocurrency remain under investigation, we want to leave no room for doubt about the reprehensibility of Hamas’s terror against innocent men, women, and children, including, in the words of President Biden, “Children slaughtered. Babies slaughtered. Entire families massacred. Rape, beheadings, bodies burned alive.”

As for the Warren‐​Marshall crypto anti‐​money laundering (AML) bill that’s being shopped as a response to this terrorism, it’s bad public policy that would, in essence, grant terrorists veto power over the lawful use of technology.

What We Do and Don’t Know About Hamas’s Use of Cryptocurrency

In general, cryptocurrency is not believed to be terrorists’ primary financial tool. According to the US Treasury Department’s 2022 National Terrorist Financing Risk Assessment, “terrorist use of virtual assets appears to remain limited when compared to other financial products and services.” Similarly, the Treasury Department’s 2023 Illicit Finance Risk Assessment of Decentralized Finance concluded that “money laundering, proliferation financing, and terrorist financing most commonly occur using fiat currency or other traditional assets as opposed to virtual assets.”

At Thursday’s (Oct. 26) Senate Banking Committee hearing on “Combating the Networks of Illicit Finance and Terrorism,” witness Dr. Shlomit Wagman—the former Chair of the Israel Money Laundering and Terror Financing Prohibition Authority—put it powerfully:

Let’s not lose sight and focus from the big picture. Crypto is currently a very small part of the puzzle. The major funding channels are, were, and remain state funding. Iran and others, those are the major players. Most of the funds are still being transferred by the traditional channels that we all know from the past: banks, money transmitters, payment systems, hawala, money exchange, trade‐​based terrorism financing, charity, cash, shell companies, and crypto.

This appears to track with respect to Hamas specifically. The terrorist group has used cryptocurrency as only one of many financial tools, fundraising vehicles, and money‐transfer methods, including fiat currency (in cash and through banks), credit cards, hawalas (informal banking networks relying on credit, cash, and barter), taxation within Gaza, misappropriation of (and indirect subsidies from) humanitarian aid, and investment assets. In addition, Hamas uses webs of shell companies, non‐profit foundations, and NGOs to conceal its financial activity—methods that long predate crypto’s existence.

An October 10 article in the Wall Street Journal stated that between approximately August 2021 and June 2023, crypto wallets connected with Hamas received around $41 million worth of crypto, based on research from the Israel‐​based crypto analytics firm BitOK, and that wallets linked by Israeli law enforcement to Palestinian Islamic Jihad (PIJ)—a terrorist organization that participated in the October 7 attacks alongside Hamas—received up to $93 million worth of crypto, based on research by Elliptic, another analytics firm.

Notably, Elliptic has since disputed this characterization in an October 25 post. Regarding the claim that Hamas and PIJ raised over $130 million in crypto, Elliptic stated, “there is no evidence to suggest that crypto fundraising has raised anything close to this amount, and data provided by Elliptic and others has been misinterpreted.” The Wall Street Journal has subsequently updated the October 10 article to note that “Elliptic says it isn’t clear if all of the transactions it identified directly involved PIJ, because some of the wallets belonged to crypto brokers that may have also served non‐​PIJ clients.”


Senator Roger Marshall (R‑KS).

Relatedly, in an October 18 post, Chainalysis (another crypto analytics firm) provided additional context on the Hamas‐linked crypto figures being floated. While not calling out any specific report or source by name, Chainalysis discussed how “recent estimates” of crypto activity following Hamas’s attack may significantly overestimate the funds in the hands of terrorists, as such figures appeared to Chainalysis to include all funds flowing through service providers, not merely the explicitly terror‐related funds. According to Chainalysis, the known terror‐related funds flowing through service providers could be a small fraction of the overall funds those service providers process.
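
A toy calculation makes the over‐attribution problem concrete. The figures below are invented purely for illustration and do not come from Chainalysis or any other source:

```python
# Hypothetical, invented figures -- illustrative only.
broker_total_volume = 100_000_000  # everything a crypto broker processes
terror_linked_volume = 450_000     # the portion actually tied to terror-linked wallets

naive_estimate = broker_total_volume     # counting all flows through the broker
careful_estimate = terror_linked_volume  # counting only the tagged flows

print(f"Overstatement factor: {naive_estimate / careful_estimate:.0f}x")
# -> Overstatement factor: 222x
```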

When trying to make sense of reports regarding terrorist‐​linked crypto funds and the relevant caveats, one consideration to bear in mind is that policymakers and law enforcement agencies have different missions—or at least they should. As the Chainalysis post notes, two things can be true at the same time: (1) where service providers specifically serve as terrorist facilitators, “cutting off terrorist access to them” can be an important law enforcement tactic and (2) it may be “incorrect to assume” that all activity of a service provider used by terrorists is terror‐​related. When policymakers fail to understand these nuances, they can end up overstating the relative role that cryptocurrency plays in terror finance.

Additional assumptions regarding the relationship between cryptocurrency and terror finance can include the faulty notion that crypto is somehow a universally untraceable financial silver bullet for terrorists. However, because cryptocurrency transactions settle on open, public ledgers, they produce more of a traceable record than cash does. And while there are privacy‐enhancing cryptocurrencies and applications, as George Mason University law professor J.W. Verret explained in a recent Wall Street Journal commentary, such privacy tools “have limited transaction capacities,” making them ill‐suited to large‐scale transfers.
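
That traceability is not hypothetical: anyone with an internet connection can enumerate a Bitcoin address’s transaction history from a public block explorer. The sketch below follows the endpoint shape of Blockstream’s public Esplora API as I understand it; treat the exact path and field names as assumptions to verify, and the address as a placeholder.

```python
import requests  # third-party: pip install requests

# Placeholder address; substitute any address of interest.
ADDRESS = "bc1qexampleplaceholder"

# Endpoint shape per Blockstream's public Esplora API (an assumption to verify).
resp = requests.get(f"https://blockstream.info/api/address/{ADDRESS}/txs", timeout=10)
resp.raise_for_status()

for tx in resp.json():
    # Every transaction touching the address is visible to any observer --
    # a permanent, public record with no analogue for cash.
    print(tx["txid"])
```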

Moreover, while self‐custodied crypto assets can resist seizure where law enforcement lacks access to the relevant user and/or device, cryptocurrencies that are housed with intermediaries (in one form or another) can be, and have been, interdicted. In 2020, the U.S. Department of Justice seized hundreds of crypto accounts related to the financing of al‐Qaeda, ISIS, and Hamas. Since 2021, Israel’s National Bureau for Counter‐Terror Financing (NBCTF) has reportedly confiscated nearly 200 crypto accounts alleged to be linked to ISIS and Hamas. In 2023, the NBCTF seized millions of dollars’ worth of crypto related to Hezbollah and Iran’s Quds Force. And in the days following the October 7 massacre, the investigations division of the Israeli Police—Lahav 433—stated that it froze certain Hamas crypto accounts.


Dr. Wagman has noted that such seizures address only a fraction of Hamas‐​linked crypto. But that claim should not be mistaken for the separate claim, which Wagman herself debunks, that crypto is an outsized terror finance tool compared to other financial instruments.

In fact, in the face of challenges like crypto asset seizures, Hamas announced in April 2023 that it would stop raising funds with Bitcoin to protect its donors. And although subsequent crypto fundraising efforts in support of Hamas have been identified since October 7, early reports suggest those have raised only minimal funds, and some already have been frozen.

Perhaps one of the best summaries of the crypto landscape’s complex mix of both opportunities and challenges for counterterrorism was stated by Ari Redbord, Global Head of Policy at TRM Labs (a crypto analytics firm): “Crypto and illicit finance is a paradox to some extent. We do have more visibility than before, but don’t have the full visibility.”

The Warren‐​Marshall Bill Is Bad Public Policy

Senators Warren and Marshall have couched their bill—the Digital Asset Anti‐​Money Laundering Act of 2023—as a common‐​sense effort to fill loopholes and “apply the same anti‐​money‐​laundering rules to crypto that already apply to banks, brokers, check cashers and even precious‐​metal dealers.”

This is a gross mischaracterization. Their bill is not designed to fill loopholes but rather to gum up the works of the US crypto ecosystem and risk forcing it offshore (and outside the reach of US law).

It’s misleading to suggest that the parts of the crypto ecosystem that are most analogous to banks and brokers—i.e., centralized, custodial crypto exchanges that allow users to buy and sell crypto with fiat money—don’t already face AML rules in the US. Typically, these businesses (so‐​called “fiat on/​off ramps”) are considered money transmitters and “money services businesses,” which are subject to a suite of federal AML and Know Your Customer (KYC) regulations under the Bank Secrecy Act (BSA)—including requirements to register with the Treasury Department, maintain AML programs, verify customers’ identification, employ compliance personnel, report transactions in currency over $10,000, and report suspicious activity. Coinbase, for example, plainly states on its website that it’s required to comply with the BSA, and is quite open about its approach to combatting terror financing.


The Warren‐​Marshall bill would go much further than simply ensuring that like rules are applied to like institutions. By defining digital asset miners and validators—which constitute the computing infrastructure securing cryptocurrency networks and do not directly interface with transacting parties—as “financial institutions” subject to AML/KYC requirements, the bill would place obligations to identify customers and their financial activity on entities for whom this simply wouldn’t make sense and who are incapable of doing so.
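
A minimal sketch of the mismatch, using illustrative field names rather than any statute’s actual text: the data a validator processes simply does not contain the information a customer identification program is built around.

```python
from dataclasses import dataclass

@dataclass
class OnChainTransaction:
    """Roughly what a miner or validator sees: pseudonymous data only."""
    sender_address: str     # e.g., a hex or bech32 string -- no name attached
    recipient_address: str
    amount: int             # denominated in the chain's base unit
    signature: bytes

# Illustrative KYC data points (paraphrased, not statutory language):
KYC_FIELDS = ("legal_name", "date_of_birth", "physical_address", "tax_id")

def validator_can_identify_customer(tx: OnChainTransaction) -> bool:
    # None of the KYC fields exist anywhere in the data a validator handles.
    return any(hasattr(tx, field) for field in KYC_FIELDS)

tx = OnChainTransaction("bc1qsender...", "bc1qrecipient...", 50_000, b"\x00" * 64)
print(validator_can_identify_customer(tx))  # False
```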

The result would be, in essence, a de facto ban on operating the backbone of the crypto ecosystem within the US. (Unfortunately, imposing de facto bans on US crypto activity by subjecting the square peg of crypto technology to the round hole of legacy regulatory frameworks has been a hallmark of the US regulatory approach to date.)

As Ben Samocha, co‐founder of a number of Israeli crypto projects, including Crypto Aid Israel (a project raising money for the victims of Hamas terror and their families), put it: “[c]rypto is here to stay.” Creating unworkable laws in the US would not uninvent crypto; it simply would route it through other jurisdictions and make it impractical for law‐abiding US citizens to use (for example, to donate to humanitarian projects like Crypto Aid Israel). It’s unclear how driving crypto activity offshore would benefit or protect the US and its allies.

Don’t Give Terrorists the Destroyer’s Veto

Ultimately, crypto technology is a particular type of tool: infrastructure. In a sense, it’s a tool to build other tools. Perhaps unsurprisingly, Israel—nicknamed the Start‐​up Nation—is home to hundreds of crypto startups.

It is, of course, true that one person’s tool is another person’s weapon. Indeed, there’s a salient analogy here: Hamas reportedly has used water pipes—another form of infrastructure—not to irrigate but to manufacture rockets to fire at civilians. Destruction is terrorists’ vocation, and terrorists should not be granted veto power over those using technology to build.

One constructive application of crypto technology has been to use its underlying tamper‐resistant recordkeeping system to securely document the testimony of Holocaust survivors. We personally could think of few more fitting applications of this idea than securely recording the evidence and testimonies of survivors of Hamas’s October 7 pogrom—the deadliest attack on the Jewish people since the Holocaust. Legislation that would render any use of crypto technology legally untenable would make such projects, which bear all‐too‐necessary witness to atrocities, practically unworkable.
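
To see why tamper‐resistant recordkeeping suits this purpose, consider a minimal sketch (hypothetical field names, Python standard library only): hash the record, publish only the digest on a public chain, and anyone can later prove the record was not altered.

```python
import hashlib
import json

def fingerprint(record: dict) -> str:
    """SHA-256 digest of a canonical serialization; any edit changes the hash."""
    canonical = json.dumps(record, sort_keys=True).encode("utf-8")
    return hashlib.sha256(canonical).hexdigest()

testimony = {
    "witness_id": "anonymized-001",            # hypothetical identifier
    "recorded_at": "2023-10-20T12:00:00Z",
    "statement_uri": "ipfs://<content-hash>",  # pointer to the testimony itself
}

digest = fingerprint(testimony)
# Publishing `digest` in a blockchain transaction timestamps the record
# without revealing its contents; recomputing the hash later verifies it.
print(digest)
```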

While investigations into the full extent of Hamas’s use of all financial tools, crypto included, should continue—just last week both the House Financial Services Committee and Senate Banking Committee held hearings on terror finance, for example—US policy should not give terrorists the right to deny technology to those using it to lawfully and constructively build.

Financial Regulators’ Open-Source Crackdown Sets Bad Precedent for AI, DeFi, and Innovation

https://www.cato.org/blog/financial-regulators-shouldnt-treat-open-source-software-enemy

Jack Solowey

Washington policymakers are consumed with concern about AI. Fears run the gamut from existential threats to humanity to chatbots fibbing. In recent weeks, AI entrepreneurs and policy thinkers have helped to frame one of AI’s principal risks as the possible threat posed to political stability and continuity. In a thoughtful multipart series on “AI and Leviathan,” for example, Samuel Hammond (senior economist at the Foundation for American Innovation) argues that “[d]emocratized AI is a much greater regime change threat than the internet” and “[t]he moment governments realize that AI is a threat to their sovereignty, they will be tempted to clamp down in a totalitarian fashion.”

It’s wise to expect that the prospect of dizzying changes threatening the established order will incline states toward aggressive counterreactions. Indeed, we already see early signs of this in financial regulators’ response to autonomous and self‐​executing financial tools (e.g., smart contracts on cryptocurrency blockchains). Notably, smart contracts and certain AI models share a common feature that, when paired with the ability to operate with limited human intervention, can be particularly disruptive to existing regulatory methods: open‐​source code that is freely reproducible.

Even if open‐​source AI models constitute the minority of key foundation models, the fact that enough relatively advanced AI models are readily copyable (not to mention portable and storable) poses a clear challenge to governments looking to exert control over AI. Consequently, there’s an emerging policy battle over the desirability of open‐​source AI.

Unfortunately, financial regulators have led the way in cracking down on novel, open‐​source technologies. In doing so, they risk creating dangerous precedents for the use of open‐​source software—AI-based and otherwise—in both financial applications and in tech innovation more broadly. Before continuing further down this fraught path, policymakers must carefully consider the potential benefits of open‐​source software development that will be lost to knee‐​jerk policy reactions.

Fundamentally, open‐source software is an intellectual property question: whether the code’s authors will license the free use, copying, modification, and distribution of their software without requiring users to seek the authors’ permission (the authors themselves typically disclaim liability in the process). Open‐source licenses facilitate the creative remixing of software, as well as ecosystems that foster iterative improvements.
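
The MIT License, one of the most widely used open‐source licenses, packs both halves into a few sentences: a broad, permissionless grant up front and a liability disclaimer at the end (excerpted below).

```text
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software ...

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED ...
```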

Importantly, open‐​source licenses also give code something of a life unto itself, as it can continue—through the work of developer communities—to evolve and multiply beyond the reach of the original authors.

Open‐​source software therefore can pose a challenge to government agencies accustomed to regulating products and services by regulating their providers. If the government has a problem with OpenAI’s software, they haul in OpenAI. But if they have a problem with any of the tens of thousands of open‐​source AI models, who (or what) gets named and blamed?

Open‐​source AI critics fear that averting and remediating any harms associated with AI models will be seriously hampered by a lack of namable and blamable developers with end‐​to‐​end control over the code or, in the financial services context, human professionals holding out a shingle and visibly shouldering a fiduciary duty. Yet others take precisely the opposite position on open‐​source AI, arguing that the ability to freely use, modify, and distribute AI models will be essential to tackling AI “safety” and fallibility problems. One way in which this could play out is open‐​source licenses simply allowing more minds to work on these challenges and, in turn, make the fruits of their research freely available.

Notwithstanding these hard and high‐​stakes questions, the financial regulatory leviathan already has charged headlong into criminalizing the use of certain open‐​source software when the existence of a sanctionable provider is, at the very least, contestable. The Treasury Department’s Office of Foreign Assets Control (OFAC) has been breaking new ground in sanctioning—i.e., prohibiting transactions with—open-source software itself.

Specifically, in August 2022, OFAC added the Ethereum blockchain addresses of Tornado Cash—a tool for enhancing cryptocurrency transaction privacy—to the sanctioned persons list in connection with the tool’s alleged use by North Korean state‐sponsored hackers to launder funds.

Tornado Cash users sued the Treasury Department to vacate the sanctions designation. They contended, among other things, that the Tornado Cash developers and token holders were not properly considered a sanctionable “entity” and that the decentralized, open‐​source, and immutable Tornado Cash software was not properly considered sanctionable “property” under relevant law. On August 17, 2023, the court found in favor of the Treasury Department on these issues.

Regardless of whether one thinks the court got it right in the case before it (plaintiffs faced challenging deference standards on interpretive questions), Tornado Cash shows an emerging government suspicion of open‐​source software, with financial regulators at the forefront.

For regulators to continue down this path would risk creating further dangerous precedent. Indeed, the Tornado Cash plaintiffs noted the chilling effect the sanctions designation had on software development. Policymakers should be wary of this chilling effect and the potential lost benefits when it comes to open‐​source financial technology, as well as open‐​source software more broadly.

In the financial context, increasing the risks of publishing open‐​source tools undermines privacy‐​enhancing technologies and the broader use of autonomous financial services that mitigate traditional intermediary risks. In addition, where the suppression extends to open‐​source AI, the potential foregone benefits include the ability of both financial institutions and individuals to run open‐​source AI models on their own hardware to improve processing speed, maintain the confidentiality of personal data, and achieve greater interoperability and customizability.

Notably, the use of more bespoke open‐​source AI models in finance could help to address regulators’ fears of herding behavior due to mono‐​models. Moreover, leveraging experimental tools for autonomous task performance (e.g., an AI agent that could help to organize one’s financial life) thus far is largely a matter of using open‐​source projects. None of this is to say that open‐​source software is always the right tool for the job, and there may ultimately be market forces that make open‐​source models less competitive. But that’s no reason for regulators to put their thumbs on the scale.

As for cutting‐edge software more broadly, policymakers should consider the role that open‐source software development may play in discovering and disseminating standards for better aligning AI models (i.e., averting existential risks like civilizational collapse, or worse). Policymakers must steelman the arguments for an open‐source approach to alignment, including the example of high security standards achieved by community vetting in other open‐source ecosystems, such as that of the Linux operating system. And even if after careful analysis it’s found that the risks of open‐source tinkering on sufficiently advanced AI models exceed the benefits at a given moment (given the limits of alignment knowledge at that point), policymakers should not parlay that into a reason for a blanket ban on open‐source AI models, including those short of the technological frontier.

There are good reasons to expect advances in AI to have transformative impacts on society, including states themselves. And it should come as no surprise that incumbent authorities will react aggressively when perceiving threats to business as usual; indeed, we’ve already seen this in financial regulators sanctioning disintermediated financial tools. But fear of disruption does not justify overreaction in the financial regulatory context or elsewhere.

Notably, when Hammond identified democratized AI as a “greater regime change threat than the internet,” he highlighted that the Chinese Communist Party is already proceeding on that basis. Liberal democracies can and must do better and should have greater confidence in their adaptability to technological change. Reactive policy that targets open‐​source software development carries its own risks. And tilting against open‐​source software without careful deliberation on where that leads is one of the riskiest options of all.

If you’re interested in further discussion on these issues, please join the Cato Institute’s Center for Monetary and Financial Alternatives for a conversation on open‐​source financial technology and broader questions of crypto regulation and competitiveness next Thursday, September 7, 2023.

The Untested Assumptions in SEC Chair Gensler’s Pivot to AI

https://www.cato.org/blog/untested-assumptions-sec-chair-genslers-pivot-ai

Jack Solowey

Crypto startups and venture capitalists are not the only ones pivoting to artificial intelligence (AI). Recently, SEC Chair Gary Gensler delivered remarks to the National Press Club outlining his concerns about AI’s role in the future of finance.

In those high‐​level remarks, Gensler shared his anxiety that AI could threaten macro‐​level financial stability, positing that “AI may heighten financial fragility as it could promote herding with individual actors making similar decisions because they are getting the same signal from a base model or data aggregator.”

This fear largely rests on a pair of debatable assumptions: one, that the market for AI models will be highly concentrated, and two, that this will cause financial groupthink. There are important reasons to doubt both premises. Before the SEC, or any regulator, puts forward an AI policy agenda, the assumptions on which it rests must be closely scrutinized and validated.

Assumption 1: Foundation Model Market Concentration

Chair Gensler’s assessment assumes that the market for AI foundation models will be highly concentrated. Foundation models, like OpenAI’s GPT‑4 or Meta’s Llama 2, are pre‐​trained on reams of data to establish predictive capabilities and can serve as bases for “downstream” applications that further refine the models to better perform specific tasks.

Because upstream foundation models are data‐​intensive and have the potential to leverage downstream data for their own benefit, Gensler is concerned that one or a few model providers will be able to corner the market. It’s understandable that one might assume this, but there are plenty of reasons to doubt the assumption.

The best arguments for the market concentration assumption are that natural barriers to entry, economies of scale, and network effects will produce a small number of clear market leaders in foundation models. For instance, pre‐training can require a lot of data, computing power, and money, potentially advantaging a small number of well‐resourced players. In addition, network effects (i.e., platforms with more users are more valuable to those users) could further entrench incumbents, either because big‐tech leaders already have access to more training data from their user networks, because the model providers attracting the most users will come to access more data to further improve their models, or some combination of both.

But the assumption that the market for foundation models inevitably will be concentrated is readily vulnerable to counterarguments. For one, the recent AI surge has punctured theories about the perpetual dearth of tech platform competition. With the launch of ChatGPT, OpenAI—a company with fewer than 400 full‐​time employees earlier this year—became a household name and provoked typically best‐​in‐​class firms to scramble in response. And while it’s true that OpenAI has made strategic partnerships with Microsoft, OpenAI’s rise undermined the conventional wisdom that the same five technology incumbents would enjoy unalloyed dominance everywhere forever. The emergence of additional players, like Anthropic, Inflection, and Stability AI, to name just a few, provides further reason to question the idea of a competition‐​free future for AI models.

In addition, the availability of high‐​quality foundation models with open‐​source (or other relatively permissive) licenses runs counter to the assumed future of monopoly control. Open‐​source licenses typically grant others the right to use, copy, and modify software for their own purposes (commercial or otherwise) free of charge. The AI tool builder Hugging Face currently lists tens of thousands of open‐​source models. And other major players are providing their own models with open‐​source licenses (e.g., Stability AI’s new language model) or relatively permissive “source available” licenses (e.g., Meta’s latest Llama 2). Open‐​source model availability could have a material impact on competitive dynamics. A reportedly leaked document from Google put it starkly:

[T]he uncomfortable truth is, we aren’t positioned to win this arms race and neither is OpenAI. While we’ve been squabbling, a third faction has been quietly eating our lunch. I’m talking, of course, about open source.

Lastly, Gensler’s vision of a concentrated foundation model market itself rests in large part on the assumption that model providers will continuously improve their models with the data provided to them by downstream third‐​party applications. But this too should not be taken as a given. Such arrangements are a possible feature of a model provider’s terms but not an unavoidable one. For example, OpenAI’s current data usage policies for those accessing its models through an application programming interface (API), as opposed to OpenAI’s own applications (like ChatGPT), limit (as of March 2023) OpenAI’s use of downstream data to improve its models:

By default, OpenAI will not use API data to train OpenAI models or improve OpenAI’s service offering. Data submitted by the user for fine‐​tuning will only be used to fine‐​tune the customer’s model.

Indeed, providers of base models may not always benefit from downstream data, as fine‐tuning a model for better performance in one domain could risk undermining performance in others (a dramatic form of this phenomenon is known as “catastrophic forgetting”).

Again, this is not to say that foundation model market concentration is impossible. The point is simply that there also are plenty of reasons the concentrated market Gensler envisions may not come to pass. Indeed, a source Gensler cited put it well: “It is too early to tell if the supply of base AI models will be highly competitive or concentrated by only a few big players.” Any SEC regulatory intervention premised on the idea of a non‐​competitive foundation model market would similarly be too early.

Assumption 2: Foundation Model Market Concentration Will Cause Risky Capital Market Participant Groupthink

The second assumption underpinning Gensler’s financial fragility fear is that a limited number of model providers will lead to dangerous uniformity in the behavior of market participants using those models. As Gensler put it, “This could encourage monocultures.”

Even if one accepts for argument’s sake a future of foundation model market concentration, there are reasons to doubt the added assumption that this will encourage monocultures or herd behavior among financial market participants.

While foundation models can be used as generic tools out of the box, they also can be further customized to users’ unique needs and expertise. Fine‐tuning—further training a model on a smaller subset of domain‐specific data to improve performance in that area—can allow users to tailor base models to firm‐specific knowledge and maintain a degree of differentiation from their competitors. This complicates the groupthink assumption. Indeed, Morgan Stanley has leveraged OpenAI’s GPT‑4 to synthesize the wealth manager’s own institutional knowledge.

Taking a step back, is it more likely that financial firms with coveted caches of proprietary data and know‐how will forfeit their competitive advantages, or that they will look to capitalize on them with new tools? Beyond training and fine‐tuning models around firm‐specific data, firms also can maintain their edge simply by prompting models in a manner consistent with their unique approaches. In addition, firms almost certainly will continue to interpret results based on their specific strategies, cultures, and philosophies. Lastly, because there are profits to be made from identifying mispriced assets, firms would be incentivized to spot others’ inefficient herding behavior and diverge from the “monoculture”; they may even devise ways to leverage models for this purpose.
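
As a minimal illustration of how one shared base model can still yield differentiated outputs, the sketch below loads a small, openly licensed model (gpt2, standing in for any foundation model) via the Hugging Face transformers library and prepends firm‐specific framing to an identical query. The firm personas are invented for illustration.

```python
from transformers import pipeline  # third-party: pip install transformers

# One shared base model (gpt2 as a tiny stand-in for any foundation model).
generator = pipeline("text-generation", model="gpt2")

# Hypothetical, firm-specific framings layered on the same base model.
FIRM_CONTEXTS = {
    "Firm A": "As a value investor focused on long-horizon fundamentals: ",
    "Firm B": "As a momentum trader focused on short-term price signals: ",
}

query = "Assess a stock trading below book value."

for firm, context in FIRM_CONTEXTS.items():
    out = generator(context + query, max_new_tokens=40, do_sample=True)
    print(f"--- {firm} ---")
    print(out[0]["generated_text"])
```

Even with identical upstream weights, the inputs (and, with fine‐tuning, the weights themselves) diverge by firm, which is the opposite of a monoculture.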

At the very least, as with model market concentration, more time and research are needed before the impact of the latest generation of AI on financial market participant herding behavior can be assessed with enough confidence to provide a sound basis for regulatory intervention.

Conclusion

Emerging technologies can, of course, be disruptive. But before regulators assume novel technologies present novel risks, they should test and validate their assumptions. Otherwise, one can reasonably doubt regulators when they proclaim themselves “technology neutral.” As SEC Commissioner Hester Peirce noted last week regarding the SEC’s proposed rules tackling a separate AI‐​related concern—conflict-of-interest risks from broker‐​dealers’ and investment advisers’ use of “predictive data analytics”—singling out a specific technology for “uniquely onerous review” is tantamount to “regulatory hazing.”

Another word of caution is warranted: even where regulators do perceive bona fide evidence of enhanced risks, they should be wary of counterproductive interventions. To name just one example, heightened regulatory barriers to entry could worsen the very concentration in the market for AI models that Gensler fears.

Continue Work on Stablecoin Legislation or Risk Forfeiting the Financial Future

https://www.cato.org/blog/continue-work-stablecoin-legislation-or-risk-forfeiting-financial-future

Jack Solowey

The United States has long led global finance. Its institutions shaped critical financial infrastructure and saw the dollar become the world’s reserve currency—thanks to rule of law, property rights, and an innovative market economy at home. As the economic landscape evolves, maintaining this position is a matter of adapting to new technologies that could complement the U.S. dollar and enhance global financial plumbing. Yet, in a fit of myopia, U.S. regulators seem bent on stifling the very developments that could help extend America’s historic strengths, looking askance at recent attempts to integrate open‐​source software with finance.

Yesterday, the House Financial Services Committee’s Subcommittee on Digital Assets, Financial Technology, and Inclusion held a hearing on stablecoins (cryptocurrencies pegged to the value of an asset like the dollar). The committee deserves recognition for taking the all‐​important first step: admitting we have a problem. Nonetheless, although the witnesses largely agreed on the shortsightedness of U.S. hostility to decentralized financial technology and the need for regulatory clarity, comments from lawmakers indicated that a common‐​sense solution on stablecoins, unfortunately, remains far off.

Moreover, a bill posted on the committee’s website before the hearing—a draft stablecoin framework that first circulated last fall—needs work if it is to rein in the excessive regulatory discretion that hinders a competitive stablecoin market and undermines American developers and consumers.

To their credit, Subcommittee Chairman French Hill (R‑AR) and Committee Chairman Patrick McHenry (R‑NC) acknowledged that the bill is but a jumping‐off point for future revisions—an “infant” in Rep. Hill’s words (and an “ugly baby” in Rep. McHenry’s phrasing from last fall). And Chairman McHenry was candid that the bill is imperfect “in many, many ways.” More pointedly, Ranking Member Maxine Waters (D‑CA) was clear to emphasize that from her perspective, “we’re starting from scratch” and should “disregard the bill that has been posted altogether” given developments in the crypto space since earlier negotiations.

So, what should a final bill look like? To answer that question, it’s important to understand how the U.S.’s tangled web of legacy state and federal laws allows regulators to freestyle when it comes to stablecoins and to intervene erratically in the market, which, according to the testimony of Columbia Business School professor Austin Campbell yesterday, is driving developers to more welcoming shores abroad. At the federal level, the Securities and Exchange Commission and bank regulators have leveraged ambiguity to threaten enforcement actions against stablecoin projects and caution licensed institutions away from involvement with the crypto ecosystem.

Sensible stablecoin legislation can provide a much‐​needed signal that the U.S. is finally ready to adopt a sane approach to digital assets. However, to achieve that sanity, a stablecoin bill will need to embrace competition from new entrants. This can be accomplished by reducing the regulatory discretion that disserves U.S. businesses and users, avoiding overreactions to experimental instruments, and opening the doors to non‐​traditional market participants.

A stablecoin bill should not grant regulators open‐​ended leeway to reject the applications of stablecoin issuers. Instead, legislation should focus on objective criteria related to reserve assets and disclosures rather than vague factors like a project’s future benefits, contribution to financial stability writ large, overall convenience, or ability to promote financial inclusion.

While those are laudable goals—and consistent with the promise of stablecoins to facilitate competition, transparency, efficient payments, and financial inclusion—evaluating a given stablecoin project’s ability to achieve them ex ante would be highly subjective. Requiring stablecoin issuers to prove their merit in order to exist, instead of simply to mitigate known risks related to the quality and availability of their collateral, would hold stablecoin issuers to a higher standard than other financial institutions. Indeed, the very goals of inclusion and competition would be better served by allowing new market entrants, not creating nebulous prior restraint standards with which to reject new players.

Along these lines, a stablecoin bill should simply address stablecoins’ primary risks: whether fiat‐asset‐backed projects actually hold the reserves and honor the redemption policies they claim to. As Jake Chervinsky, Chief Policy Officer of the Blockchain Association, noted yesterday, currently “over 90% of the market capitalization for all stablecoins comes from just five custodial stablecoins.” Legislation should avoid expressing an opinion on, let alone banning or pausing, other types of instruments that are erroneously lumped together with fiat‐collateralized stablecoins, such as crypto‐asset‐backed stablecoins and algorithmic stablecoins (which endeavor to maintain stable values by engineering convertibility between two digital assets from the same issuer). While algorithmic stablecoins, for example, are an unproven technology, they’re largely irrelevant to the problem of providing regulatory clarity to businesses tokenizing fiat assets. Moreover, prohibiting financial technology experimentation generally is unbecoming of the leader of the free world and an innovative market economy.

Lastly, stablecoin legislation should allow flexibility when it comes to the types of businesses issuing stablecoins. Not only should non‐bank and state‐chartered entities be allowed to become lawful issuers, but so too should businesses from diverse sectors, including those traditionally outside of finance. Preventing companies with other lines of business from issuing stablecoins—or affiliating with those doing so—would risk further constraining financial inclusion and competition. Inclusion goals could be hindered where trusted brands are blocked from serving the markets and communities they know best. And potential efficiency gains could be lost where networked businesses in other sectors (like social media and e‐commerce platforms) are unable to bring their expertise to bear in the stablecoin market should they choose to.

Through curbing regulatory discretion, avoiding disproportionate interventions, and opening the field to new participants, Congress could help to resolve the U.S.’s unsustainable stablecoin status quo. If the U.S. wishes to remain the world’s preeminent financial market, legislative work on stablecoins must continue to ensure that our laws are open to technologies with the potential to help maintain and extend that lead.