Does Section 230 Cover Generative AI?

https://www.cato.org/blog/does-section-230-cover-generative-ai

Jennifer Huddleston

Whether Section 230 applies to artificial intelligence (AI) is a hotly debated question. Somewhat surprisingly, the authors of Section 230 have claimed it does not, but the answer is likely more complicated than a simple yes or no. Section 230 has been critical to how the internet has expanded free speech online, both by creating a market that provides opportunities for users to speak and by reflecting core principles about the ability of private platforms to make decisions about their services.

Legislating an AI carveout from Section 230, however, would have much deeper consequences both for online speech as we experience it today and for the future development of AI.

A Refresher on Section 230 and What It Tells Us About the Debate over Whether It Applies to AI

The basic text of Section 230 reads, “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.” In analyzing whether Section 230 applies to AI, we should go back to the text as drafted.

Generative AI likely is an interactive computer service, but there may remain some debate over who the speaker is when a generative AI produces content. (A related debate is playing out over the application of certain intellectual property principles.) However, these questions often won’t be the ones that matter. Most questions about AI and Section 230 are not about the mere production of content or an image; rather, they involve a user reposting the content on other platforms or are otherwise connected to content generated by a user.

Removing Section 230 for AI Would Have Far More Significant Consequences

While generative AI services like DALL‑E and ChatGPT gained popularity in 2022, AI has been used in many ways, including popular user-generated content features, for much longer. As a result, attempts to remove Section 230 protection for AI through legislation would likely impact much more content than the narrower subset of generative AI services.

AI, including generative AI, is already used in many aspects of online services such as social media and review sites. AI can help identify potential spam content and improve search results for a specific user. Beyond that, it is also already used in many popular features.

For example, removing protection for AI could eliminate commonly used filters on social media photo sites and even raise questions about features that help generate captions for videos. Such features support the very type of user-generated content and creativity Section 230 protects. But an AI exception would likely lead many platforms to disable such tools rather than risk opening themselves up to increased liability.

An AI exception to Section 230 would also undermine much of the framework and solution the law intended. First, it would shift away from the American approach that has encouraged innovators to offer creative tools to users, punishing those same innovators for what others may do with their tools. This undermines the basis of Section 230 and hampers innovation that could be beneficial. It also misguidedly moves responsibility from bad actors to innovators.

Conclusion

Some critics may still debate how Section 230 applies to specific elements of generative AI services, but a Section 230 AI carveout would bring more problems than solutions. AI already interacts with a wide array of user-generated content, and such a carveout would have a broad impact both on users’ current experience of the internet and on the future development of AI.

What Might Good AI Policy Look Like? Four Principles for a Light Touch Approach to Artificial Intelligence

https://www.cato.org/blog/what-might-good-ai-policy-look-four-principles-light-touch-approach-artificial-intelligence

Jennifer Huddleston

So far, this month has brought a significant executive order from the Biden administration, the US-UK summit on artificial intelligence (AI) governance, and a growing number of bills in Congress. Much of the conversation around AI policy has been based on a presumption that this technology is inherently dangerous and in need of government intervention and regulation. Most notably, this is a major shift away from the more permissionless approach that has allowed innovation and entrepreneurship in other technologies to flourish in the US, and towards the more regulatory and precautionary approach that has often stifled innovation in Europe.

So, what would such a light touch approach look like? I suggest that to embrace the same approach that allowed the internet to flourish, policymakers should consider four key principles.

Principle 1: A Thorough Analysis of Existing Applicable Regulations with Consideration of Both Regulation and Deregulation

Underlying much of the conversation around potential AI policy is a presumption that AI will require new regulation. This is also a one-sided view of regulation, as it does not consider how existing regulation may get in the way of beneficial AI applications.

AI is a technology with many different applications. Like all technologies, it is ultimately a tool that can be used by individuals with a variety of intentions and purposes. For this reason, many of the anticipated harms, such as the potential use of AI for fraud or other malicious purposes, may be addressed by existing law, which can be used to go after the bad actors misusing the technology rather than the technology itself.

Some agencies, including the Consumer Financial Protection Bureau (CFPB), have already expressed such views. It is likely that many key concerns, including discrimination and fraud, are already addressed by existing law, and such interpretations should focus on the malicious actor, which in most cases will not be the technology itself.

After gaining a thorough understanding of what is already covered by regulation, the appropriate policymakers could identify what — if any — significant harms are unaddressed or where there is a need for clarity.

In accounting for the potential regulations that may impact AI, policymakers should look both at harms that remain unaddressed by current law and at the need for deregulation where existing regulations get in the way. This may include opportunities for regulatory sandboxes in highly regulated areas such as healthcare and financial services, where there are potentially beneficial applications but existing regulatory burdens make compliance nearly impossible.

Principle 2: Prevent a Patchwork: Preemption of State and Local Laws

As with many areas of tech policy, in the absence of a federal framework, some state and local policymakers are instead choosing to pass their own laws. The 2023 legislative session saw at least 25 states consider legislation pertaining to AI. The nature of this legislation varied greatly, from working groups or studies to more formal regulatory regimes. While in some cases states may be able to act as laboratories of democracy, in other cases such actions could create a patchwork that prevents the development of this important technology.

With this in mind, a federal framework for artificial intelligence should consider including preemption at least in any areas addressed by the framework or otherwise necessary to prevent a disruptive patchwork.

While the regulatory burdens of an AI patchwork could create problems, some areas of law are traditionally reserved to the states, where federal control would be inappropriate. In some cases, such as the use of AI in their own governments, state and local authorities may be better suited to make those decisions and may even demonstrate the potential benefits of embracing a technology. With this in mind, preemption should be tailored to preserve the traditional state role while preventing the type of burdens that would keep the same technology from operating across the nation.

One example of what such a model of preemption might look like comes from the way states have sometimes preempted city bans on home-sharing services like Airbnb while still allowing cities their traditional regulatory role. Home-sharing and other sharing economy services have often been a point of friction in certain cities. But when large cities in a state ban these services or engage in overregulation, it can disrupt the ability of individuals across the state to use or offer these services and can undermine good state policy on allowing such forms of entrepreneurship.

This approach prevents cities from banning home-sharing or regulating it into a de facto ban, but allows them to preserve their traditional role over concerns within their control, such as trash and noise ordinances.

A similar approach could be applied to potential preemption in the AI space, clarifying that states may not broadly regulate or ban the use of AI but retain their ability to decide how and whether to use AI in their own government entities or other traditional roles. Still, this should be done within the traditional scope of their role and not as de facto regulation, such as banning contracts with private sector entities that use AI or imposing requirements with which no product could comply.

Principle 3: Education Over Regulation: Improved AI and Media Literacy

Many concerns arise about the potential for AI to manipulate consumers through misinformation and deep fakes, but these concerns are not new. With past technologies like photography and video, society developed norms and adapted the ways we ascertain the truth when faced with such concerns. Rather than trying to dictate norms before they can organically evolve, or stifling innovative applications with a top-down approach, policymakers should embrace education over regulation as a way to empower consumers to make their own informed decisions and better understand new technologies.

Literacy curriculums evolve with technological developments, from the creation of the internet to social media platforms. Improved AI literacy — as well as media literacy more generally — could empower consumers to be more comfortable with the use of technology and make sound choices.

Industry, academia, and government actions towards increased AI literacy have provided students and adults alike with opportunities to increase their confidence with artificial intelligence tools. Initiatives like AI4K12, a joint project of the Association for the Advancement of Artificial Intelligence (AAAI) and the Computer Science Teachers Association (CSTA) in collaboration with the National Science Foundation (NSF), are already working to develop national guidelines for AI education in schools and an expansive community of innovators to develop a curriculum for a K-12 audience.

In higher education, many universities have started to offer courses on ChatGPT, prompt engineering, basic AI literacy, and AI governance. Research projects like Responsible AI for Social Empowerment and Education (RAISE) at MIT work with government and industry partners to deploy educational resources that teach students and adult learners how to engage with AI responsibly and successfully. In a society increasingly reliant on technology, innovators in every sector are providing a multitude of avenues to familiarize users of all ages with innovations like artificial intelligence.

Many states are utilizing this approach as they propose and pass social media literacy bills. While many states have a basic digital literacy requirement, a few are putting forth proposals focused specifically on social media literacy. Florida passed legislation earlier this year mandating a social media curriculum in grades six through twelve. Last month, California’s Governor Newsom signed AB 873, a bill that includes social media as part of the required digital literacy instruction for all students K-12.

Other states such as Missouri, Indiana, and New York are considering similar bills that would require education in social media usage. Such an approach could be expanded to include AI as well as social media. Policymakers should also consider ways this approach could be applied to reach adult populations—through public service campaigns or simple opportunities like providing links to more information from a range of existing civil society groups or through public‐​private partnerships.

Principle 4: Consider the Government’s Position Towards AI and Protection of Civil Liberties

The government can impact the debate around artificial intelligence by restraining itself when appropriate to protect civil liberties, while also embracing a positive view of the technology in its own rhetoric and use.

Notably, during the Trump administration, an executive order was issued on the government’s use of AI, designed to foster guardrails around key issues like privacy and civil liberties. It also encouraged agencies to embrace the technology in appropriate ways that could modernize and improve their services, and it encouraged American leadership in developing this technology for the global marketplace.

However, the Biden administration seems to have shifted towards a much more precautionary view, focusing on the harms rather than the benefits of this technology.

Ideally, policymakers should look to see if there are appropriate barriers to using AI in government that can be removed to improve the services it provides to constituents. At the same time, much as with concerns over data, they should provide clear guardrails around its use in particular scenarios, such as law enforcement, and respond to concerns about the government’s access to data by considering data privacy not only in the context of AI but more generally.

Conclusion

US policymakers’ decision during the 1990s to presume that the internet would be free unless regulation was proven necessary provides a great example of how a light touch approach can empower entrepreneurs, innovators, and users to enjoy a range of technological benefits that could not have been fully predicted. This light touch approach was critical to the United States flourishing while Europe and others stifled innovation and entrepreneurship with bureaucratic red tape.

As we encounter this new technological revolution with artificial intelligence, policymakers should not rush to actions based solely on their fears but instead look to the benefits of the approach of the previous era and consider how we might continue it with this new technology.

Youth Online Safety Bills and the First Amendment: Reflections on Arkansas and California Cases

https://www.cato.org/blog/youth-online-safety-bills-first-amendment-reflections-arkansas-california-cases

Jennifer Huddleston and Gent Salihu

Introduction

As discussed in previous work, “youth online safety” bills, however well‐​intentioned, raise significant constitutional concerns. These laws are likely to violate the First Amendment due to their impact on the speech rights of users of all ages, and there are additional concerns about their impact on privacy. The emergence of legal challenges against the implementation of these laws has substantiated these concerns. 

The US district courts in both Arkansas and California issued preliminary injunctions stopping the enforcement of the Arkansas Social Media Safety Act and the California Age‐​Appropriate Design Code Act (CAADCA). These laws, intended to implement age verification and restrictions for children accessing the internet, are likely to limit speech for adults without necessarily providing enhanced protections for young internet users. However, a court has allowed a Utah law related only to age verification for pornographic content to proceed, denying the challenge to the law.

What the Courts Decided in Arkansas and California

The US District Court in California granted a preliminary injunction against enforcing CAADCA, holding that the act “likely violates the First Amendment” based on NetChoice’s claim that the law imposes “speech restrictions” that “fail strict scrutiny and also would fail a lesser standard of scrutiny.”

The court notes that California fell short in identifying case law that would support the kind of restrictions on the collection and sharing of information imposed by CAADCA. The court also notes that, given Supreme Court case law holding that the “creation and dissemination of information are speech within the meaning of the First Amendment,” CAADCA’s restrictions on the usage of and access to information are unlikely to pass constitutional muster. Even if CAADCA were treated as regulating only commercial speech, the court, relying on Sorrell v. IMS Health Inc., notes that such speech is entitled to at least intermediate scrutiny under the First Amendment.

The US District Court in Arkansas went a step further, noting that the Social Media Safety Act would not only restrict speech in ways that fail strict scrutiny but would also prove ineffective in protecting youth, as it is “littered with” exemptions. For example, companies in the business of interactive online games are exempt from the law. This means the Arkansas law does not extend protections beyond COPPA, because gaming companies would remain free to collect data from children over 13 and sell it to advertisers.

Both the Arkansas and California courts sided with NetChoice’s concerns about the burdens the laws impose on adult speech. Observing the difficulty of estimating children’s ages without determining the age of everyone else, the California court noted that businesses opting not to estimate age would end up shielding content from both children and adults, thereby reducing the adult population to “only what is fit for children.”

The CAADCA restrictions, according to the California court, would fail intermediate scrutiny, let alone strict scrutiny. The Arkansas court ruling expresses concern over the law’s requirement to forego anonymity on the internet, a mandate that would in turn deter users from accessing certain websites and chill speech.

While the Arkansas and California courts took different paths in their legal analysis, they reached the same conclusion: both the CAADCA and the Social Media Safety Act are likely to violate the First Amendment.

Given that California’s CAADCA restrictions are unlikely to meet either strict or intermediate scrutiny standards, and since Arkansas’s law has been enjoined due to its ineffectiveness and its limitations on speech, other states contemplating similar laws should reconsider their approach to the serious matter of protecting young people online. Laws designed after the Arkansas or California model are likely to fail at protecting young internet users while inadvertently suppressing speech for everyone.

Key Takeaways from These Early Cases

While both the underlying laws and the cases themselves are distinct, there are a few general takeaways from these cases that policymakers, internet users, and innovators should be aware of.

These cases illustrate that youth online safety laws will undergo strict scrutiny by the courts due to their impact on First Amendment rights. This means that for such laws to be upheld, states must demonstrate that they are narrowly tailored to serve a compelling government interest. For example, courts will likely question the laws’ impact on users over the age of 18 and scrutinize whether the laws are narrowly tailored to affect only underage users. Additionally, proving that safeguarding online youth is the state’s interest—rather than that of individual households—will also pose challenges. Defenders of broader social media regulation may also face an uphill battle given the abundance of tools presently available to parents, suggesting the existence of less restrictive means without consequences for speech and innovation.

As mentioned, another key takeaway from these cases was the courts’ recognition of the impact these laws have on adult users of the internet. It is not surprising that, in such cases, courts relied on precedent from prior online safety battles in Ashcroft v. ACLU, which blocked the Child Online Protection Act (COPA), and the debates over video games and free speech in Brown v. Entertainment Merchants Association, which struck down a California law restricting the sale of “violent” video games to minors.

Youth online safety regulations still impact those over the age of 18 and their speech rights, as the only way to ensure compliance is either to treat everyone as if they were under 18 or to require age verification for all users. While much of the speech analysis has focused on the impact on adult users, it should not be forgotten that such regulations also impact the speech rights of young people, including the positive and entrepreneurial opportunities the internet has provided them.

Notably, there are distinctions in the courts’ reasoning due to the difference in the laws, but policymakers should be aware that these cases show it will be difficult to craft general‐​purpose age verification or age‐​appropriate design codes that can pass strict scrutiny. 

Conclusion

While proponents of government regulation of social media and the internet in the name of protecting the next generation may find the passage of the United Kingdom’s Online Safety Bill noteworthy, recent court decisions in the United States should prompt reflection on the broader impact of such proposals on the free expression of users of all ages. The court decisions issuing the injunctions demonstrate that these proposals would affect not only the speech and access to information of those under 18 but also of all internet users.

As parents and policymakers continue to debate the impact of technology and social media on young people, perhaps these recent court rulings should encourage reconsideration of how these technologies have enabled speech and highlight the necessity for nuanced safety solutions that do not need to arise from a government‐​dictated approach.

Google’s Antitrust Trial Starts: What’s at Stake and Why This Case Matters

https://www.cato.org/blog/googles-antitrust-trial-starts-whats-stake-why-case-matters

Jennifer Huddleston

The antitrust cases against Google brought by the Department of Justice (DOJ) and several state attorneys general go to trial today. This is the first of the major cases against the “big tech” companies to reach trial.

But are these cases really about protecting the consumer, or are they more a political move by regulators? Furthermore, what is actually at stake in this case?

The case against Google was initially brought under the Trump administration but has continued during the Biden administration. The case claims that Google is dominant in search (including specialized search) and search advertising. The government frames its case to claim that, despite the presence of other competitors like Bing and DuckDuckGo, Google has obtained monopoly power and is using that power in anti‐​competitive and harmful ways, such as obtaining default search engine status on various devices.

The problem is that a successful case against Google may help these competitors but harm consumers.

The real question should not be whether Google has been more successful than the alternatives, but whether it achieved this success through a superior product or through anticompetitive means. Consumers choose Google largely because they consider it a better product, not because they have been manipulated into choosing it as their option for search. After all, one of the most popular search queries on Bing is consumers looking for Google, illustrating that it is not force but consumer choice that has led to its popularity.

The same can be said for the “specialized search engines” referenced. Consumers can easily use Google to locate other platforms, such as Yelp, to search for reviews. Even on the hotly contested issue of mobile phone defaults, choosing another default search engine is only a few clicks away.

The timing of this case, however, may mean that by the time it is decided, innovation will have proven to be a better form of competition policy by disrupting the current vision of the underlying market. While Google has been put on trial, generative AI innovations, such as OpenAI’s ChatGPT, are already changing how we search for information. Such innovations, not legal trials, are also far more likely to help Bing outcompete Google, given Bing’s linkage to ChatGPT.

Antitrust cases are not particularly fast, and technology can move rapidly during that time. For example, by the time the famous antitrust case against Microsoft had concluded, the market was significantly more mobile‐​focused and the so‐​called “browser wars” were largely over.

Unfortunately for consumers, Microsoft’s antitrust battles made it less able to focus on competing in the mobile operating system space. It is impossible to know yet what similar choices a company like Google may face, or how strongly such DOJ actions deter companies from entering markets where they might benefit consumers.

Antitrust actions should not be based on the presumption that big is bad. They should be firmly based on consumer welfare. When it comes to search, choice is rarely more than a few clicks away, and innovation is often our best competition policy.

The Brussels Effect?: Potential Impacts of Speech Regulation Around the World on Americans’ Online Speech

https://www.cato.org/blog/brussels-effect-potential-impact-speech-regulation-around-world-americans-online-0

Jennifer Huddleston

One of the great benefits of the internet is how it connects global communities. From its earliest days, people have used the internet to make friends and learn about different experiences around the world. Many have credited the internet and social media with lowering the barriers to speech and providing new opportunities for voices that might otherwise have been suppressed.

The United States has been the birthplace of many of the leading global online platforms, and these companies have expressed a strong commitment to free speech online. What was initially considered a strong feature of the internet is now facing rising pushback both at home and abroad as not all governments support a broad conception of free speech.

While the strength of the First Amendment may allow Americans to think they are insulated from attacks on free speech, especially given the dominance of American platforms, the global nature of the internet means restrictions and changing perceptions of free speech abroad are still likely to impact users and businesses in the United States.

What Might Be Causing a Brussels Effect on Speech?

The Brussels Effect refers to European Union regulations becoming the de facto governance norms beyond the borders of the EU. One notable example of this phenomenon is the General Data Protection Regulation (GDPR), which has emerged as the default for data privacy regulation. The potential penalties for violations and the significant expenditures and staff hours required for compliance are likely part of this story. The concrete definition of a data subject or covered entity may also encourage broader application. Now, various proposed regulations could have a similar impact on speech, both through formal regulation and through more informal influence on online speech.

In the EU, the Digital Services Act (DSA), a sister piece of legislation to the Digital Markets Act, updates the legal framework for how companies advertise and report their content, and it is likely to have significant impacts on free speech. In its attempt to “protect users,” the DSA bans targeted advertising on online platforms based on sensitive personal data, requires transparency from online platforms about their recommendation algorithms, and implements a user “flagging” system to combat illegal goods and misinformation online, along with other regulations designed to insulate consumers from harm.

While the DSA may seek to limit the impact of “bad” content, it is also likely to impact access to content more generally. The Act gives the European Commission more regulatory oversight through the creation of a European Board for Digital Services to inform and enforce the new rules. Among its requirements is transparency about specific harms. Such a requirement, however, is unlikely to impact only “bad” content and could evolve into a much broader mandate with a more significant impact on speech. In some cases, these measures may start out as “optional,” but platforms are still likely to comply broadly out of concern over further regulatory scrutiny or involvement. The Act went into effect in November of last year, and full enforcement of the DSA will begin next February.

Even though the United Kingdom has left the European Union, it too has recently considered numerous proposals that would have a significant impact on the future of online speech.

The UK’s Online Safety Bill (OSB) is legislation focused on protecting children online, similar to the many U.S. state-level bills and the Kids Online Safety Act introduced in this year’s U.S. Congress. However, the OSB has become a catchall for content moderation policy since its first draft in 2021, including provisions that require age checks on pornography sites in the same breath as removing child sexual abuse material (CSAM) from platforms through the use of “accredited technology.” There are aggressive consequences for failure to comply with the OSB: fines of up to 10 percent of a company’s worldwide revenue and blockage of its service from the United Kingdom market. The bill is in the House of Lords, Parliament’s upper chamber, and at its current pace could be passed by the end of this summer.

Additionally, the proposed update to the Investigatory Powers Act would require companies to obtain approval for new security features before launching them, or even to disable certain features. Such a requirement would make encrypted messaging services more vulnerable. As a result, a number of companies, including Apple, have threatened to remove their messaging products from the UK if the proposal goes through as currently drafted. While we often think of encryption as a privacy issue, this tool is critical for the speech of those who may be concerned about their safety and security from their own government, including journalists and activists.

Such changes have been increasing outside of Europe as well. For example, in Latin America, government espionage through spyware and sweeping online speech restrictions violate the right to privacy and freedom of expression of citizens throughout the region.

Human rights groups have uncovered the use of the Israeli spyware Pegasus in three Latin American countries — Mexico, El Salvador, and the Dominican Republic — by government actors to spy on citizens without their knowledge or consent. Those engaged in investigative journalism and civic activism are particular targets of the spyware. In addition to unconstitutional surveillance, the legislative bodies of many governments in the region have passed or are in the process of passing laws that would stifle free speech under the guise of fighting disinformation or hate speech online. For example, Venezuela’s Law Against Hate (passed unanimously by an illegitimate chamber) squashes online discussion on messaging and social media platforms that had been safe havens before the legislation’s crackdown on speech. The law operates through Maduro loyalists and government technicians who point out social media posts or text messages that “promot[e] national hate” to prosecutors, with little definition of what constitutes such hate.

There are many other international examples that could be explored, but, in general, a growing amount of regulation around the world risks spillover effects as platforms adjust their global practices to comply with such laws.

How International Tech Policy Could Impact Americans’ Speech

American companies often bring American values regarding free speech and expression into their policy decisions. However, many companies find it easier to maintain a single set of standards around issues such as content moderation rather than specific rules for each country of operation. While it is understandable why this may be easier, the reality is that many Americans may see changes to their online experiences or to their own ability to speak freely online. Additionally, this raises questions about what such shifts may do to the positive ways in which the internet has expanded speech for marginalized users and communities as platforms face an increasing number of challenging regulations.

Attacks on encryption could lessen the security of everyone who uses these services. A “backdoor” that allows law enforcement to scan messages could easily be abused by bad actors to obtain information. Additionally, it could render users more vulnerable to surveillance by adversarial nations like Russia or China and limit the ability of journalists to safely contact those engaged in activism in such countries.

Changes to online speech, however, can also happen in more informal ways. For example, many European and Latin American countries have created laws governing hate speech or harmful content online. Platforms may use such laws to formally govern content in those countries, but the laws are also likely to shape the development of platforms’ internal policies around such issues. While many would applaud platforms for taking down racist, sexist, or antisemitic content, interpretations of hate speech laws result in the takedown of much more speech than many would immediately assume. Platforms might remove debates over important but sensitive topics such as the Israel-Palestine conflict or transgender athletes in women’s sports because of the way moderation terms are adapted to avoid violating such laws. The result is a concerning shift in norms around online speech that favors over-moderation at the expense of legitimate debate.

Additionally, a number of governments have placed informal or formal pressure on online platforms to moderate their users’ content. Again, this can remove certain information or opinions from everyone due to the preferences of a particular government.

There is an extensive history of jawboning in the United States over the last half century, culminating in the age of social media, where it manifests as invisible changes to platforms’ content moderation policies. However, this phenomenon is not exclusive to the United States. Authoritarian regimes around the world engage in similar practices to control the kinds of speech that exist online, especially speech that speaks ill of the administration in power. Countries like Turkey and India have ordered social media platforms to take down content or block users in the name of national unity or maintaining a positive national image.

While this jawboning and direct censorship by foreign actors may not have an obvious effect on the United States, it limits the diversity of sources online and may force platforms to change their content moderation policies to adapt to the threats and regulations of countries that seek to limit freedom of expression. These actions come at the detriment of all social media users.

Conclusion

Americans have trusted that the value of free speech on the internet will be grounded in an American approach to such principles. A growing number of international laws, however, are creating new challenges for both American companies and users. This certainly does not mean that the United States should change its position and regulate online speech more heavily, but it does mean that, in our increasingly connected age, it is important to be aware of the challenges to free speech around the globe and the consequences such proposals have for both companies and users.

The State of Kids Online Safety Legislation at the End of the 2022–2023 State Legislative Session

https://www.cato.org/blog/state-kids-online-safety-legislation-end-2023-2024-state-legislature-session-0

Jennifer Huddleston

In previous years, debates about online speech at a state level had largely focused on issues such as concerns about anti‐​conservative bias or online radicalization. More recently, however, many states have instead focused on the impact of social media platforms and the internet on kids and teens.

While many of the proponents of these bills may have good intentions, these proposals have significant consequences for parents, children, and all internet users when it comes to privacy and speech. States that have enacted such legislation have faced legal challenges on First Amendment grounds and cases are currently pending in the courts.

In general, there have been four categories of legislation at a state level: age‐​appropriate design codes, age‐​verification and internet access restrictions, content‐​specific age‐​verification laws, and digital literacy proposals. With many state legislatures recessing this summer, there is an opportunity to analyze what the emerging patchwork of such laws looks like, the potential consequences of these actions, and what — if any — positive policies have happened.

Age‐​Appropriate Design Codes and Age Verification for Online Activity in the US

Signed into law in 2022, the California Age-Appropriate Design Code Act is the first of its kind in the United States. The law obliges businesses to conduct risk assessments of their data management practices and to estimate the age of child users with a higher degree of certainty than existing laws require, controlling their access to certain content. While such a law is well intended, it has raised serious concerns about privacy and free speech and is currently being challenged in court. Other states are considering bills that require age verification for using social media.

Such proposals originate in European countries such as the UK, which is considering its own Online Safety Bill to protect young people from harmful content, a bill that also raises serious concerns about censorship of lawful speech, privacy, and encryption. On speech, such initiatives threaten the right to anonymous speech. On privacy, kids and adults are likely to be harmed by invasive yet currently insecure age-verification technologies in an online ecosystem where at least 80% of businesses claim to have been hacked at least once. On encryption, some have advocated introducing backdoors into end-to-end encryption to catch malicious actors who harm kids, while overlooking the importance of encrypted channels for kids to safely report abusers.

This legislative session, several U.S. states have contemplated bills that would require additional steps to verify who may have a user account on social media or other websites, each with its own approach. But many share common concerns. For example, a cluster of states, including Pennsylvania, Ohio, Connecticut, and Louisiana, has sought to mandate explicit parental consent for minors creating or operating a social media account. Pennsylvania, for example, has proposed legislation stating that a minor cannot have a social media account unless explicit written consent is granted by a parent or guardian. Ohio and Connecticut have followed a similar path, requiring parental consent for children under 16 using social media. Wisconsin recently considered a bill to require social media companies to verify the age of users and require parental consent for children to create accounts. More than 60 bills were introduced in 2023, and at least nine states considered age verification, age-appropriate design codes, or other restrictions on young people’s internet usage. Most of these proposals failed; however, a few significant age-verification bills had been enacted or were still pending as of July.

The Governor of Louisiana signed the Secure Online Child Interaction and Age Limitation Act (SB162) into law on June 28. This law not only requires parental consent for minors but expressly requires companies to verify the age of all Louisiana account holders. As will be discussed below, this is often the case with age-verification laws more generally. Similarly, Arkansas passed the Social Media Safety Act, requiring children under 18 to obtain parental consent before creating a social media account. Utah went a step further by banning access to social media after 10:30 pm for all children under 18 unless parents modify the settings.

Consequences of Age‐​Appropriate Design Codes

The implementation of overly broad policies raises significant privacy concerns, not only for young users but for everyone. The process of accurately determining the age of an underage social media user inherently necessitates determining the age of all users. In a context where social media companies may be held accountable for errors in age determination, the request for sensitive information such as proof of ID becomes a requirement for all users. This poses immediate questions regarding the type of identification data to be collected and how companies might utilize this information before the age verification process is complete.

On a practical level, social media platforms cannot solely depend on their internal capabilities for age verification, thus necessitating reliance on third-party vendors. This reliance presents a further question: who possesses the necessary infrastructure to manage such data collection? Currently, MindGeek, the parent company of PornHub, stands as one of the dominant international market players in age verification. Many conservatives may question whether such a company, or the very social media platforms they are concerned about, should hold the IDs or biometrics of young users. For example, the Arkansas Social Media Safety Act relies on third-party companies to verify users’ personal information.

Options that do not require the collection of sensitive documents — like government IDs or birth certificates — are likely to rely on biometrics. In such cases, not only are there concerns about this information falling into the hands of malevolent actors, but also questions about the accuracy of such technology in, for example, distinguishing between a 17½-year-old and an 18-year-old. These are critical considerations for legislators as they advance bills aiming to replace parental oversight with governmental control, a shift that may also generate unforeseen consequences and risks.

We must also consider the potential repercussions on youth when their freedom of speech, expression, and peer association is curtailed due to the absence of social media. How can we balance the disparities between parents who restrict their children’s access to social media and those who permit it? In today’s digital age, children often forgo playing in neighborhood streets and opt instead for virtual interaction.

Social media platforms have empowered young people to voice their opinions on political matters and vital issues such as climate change. Without the communication channels provided by social media, the reach and organization of initiatives like Greta Thunberg’s “Fridays for Future” would have been significantly reduced. It is crucial to consider the potential loss of such influential platforms, which serve not only as a stage for youthful expression but also as a catalyst for activism. Bills that impose broad restrictions on access to social media are likely to obstruct these beneficial aspects of social media usage as well.

Additionally, these restrictions would make it difficult — if not impossible — for users of all ages to engage in anonymous speech and to access lawful speech and communication. The only way to verify users under a certain age — such as 16 or 18 — is to also verify users over that age. This means all users would be forced to provide sensitive information like passports, driver’s licenses, or biometrics in order to participate in online discussions. This information would have to be tied to a user’s account, meaning it would be impossible for users to retain true anonymity. This sets up a honeypot of sensitive personal information for malicious hackers.

Topic‐​Based Age‐​Appropriate Design Codes or Age‐​Verification

Some states have introduced age‐​verification legislation that targets specific content. Currently, these proposals have been limited to pornographic material and websites. For websites exclusively dealing with pornography, the task of flagging them is relatively straightforward. However, challenges arise when attempting to regulate more malleable platforms that do not primarily host adult content.

Louisiana was the first state to take such an approach, with a law that requires age verification for access to platforms if pornographic content comprises more than one-third of the overall content. However, such thresholds can often be arbitrary and could impact more general-use websites that may be attempting to remove such content. For example, platforms like Twitter and Bluesky allow adult nudity and other “sensitive media content” with certain restrictions. These platforms likely engage in significant content moderation and flagging of such content; however, the exact percentage of such content on a website may vary.

Lawmakers must also take into account how such laws could impact smaller platforms. A new platform with fewer users could host only a small amount of adult content yet still cross an arbitrarily set percentage threshold, as the sketch below illustrates. Small websites that see a sudden increase in users might also struggle to keep up with moderation for a time and end up over thresholds — even if such content violates their official terms.
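
To see why a fixed percentage trigger behaves arbitrarily across platform sizes, here is a minimal sketch in Python. The numbers are hypothetical, and the one-third figure simply follows the Louisiana threshold described above:

```python
# Hypothetical illustration of a fixed percentage trigger, modeled loosely
# on the one-third threshold in the Louisiana law discussed above.

def crosses_threshold(adult_items: int, total_items: int,
                      threshold: float = 1 / 3) -> bool:
    """Return True if adult content exceeds the given share of all content."""
    return total_items > 0 and adult_items / total_items > threshold

# A large platform: 100,000 posts, 20,000 of them adult -> 20%, under the line.
print(crosses_threshold(20_000, 100_000))  # False

# A small new platform: 300 posts, 110 of them adult -> ~37%, over the line,
# despite hosting far less adult content in absolute terms.
print(crosses_threshold(110, 300))  # True
```

The same absolute moderation lapse — a few hundred unremoved posts — can thus push a small site over the threshold while leaving a large one untouched.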

Pragmatically, these laws may not be as effective at achieving their goal as policymakers hope. As of July 1st, Virginia is the most recent state to enact a law that requires age verification for websites showcasing adult content. However, given that consumers have privacy concerns over sharing their sensitive personal data, they tend to bypass these protective measures, raising concerns over the laws’ effectiveness. For instance, since the enactment of the law, Google Trends data indicates that Virginia leads the US in searches for virtual private networks (VPNs), a tool that allows individuals to access such sites without disclosing sensitive information to these adult-content websites. Utah also saw an uptick in VPN searches when it introduced its age verification law (SB287). It is worth noting that bypass methods are not exclusive to adults. A study on the enforcement of similar laws in the United Kingdom revealed that 23% of minors say they can bypass blocking measures. In addition to relying on VPNs to bypass age verification, users may also visit more obscure adult content sites that are less likely to follow safety protocols.

The ease with which these measures can be circumvented suggests that these government laws may put people’s sensitive data at risk and infringe upon young people’s rights to access various speech forums, all without providing effective ways to reap their intended benefits. Rather than enacting laws that may not achieve their intended effects, focus should be shifted toward actionable measures like public awareness and education. The state‐​level patchwork approach to handling people’s sensitive data underscores the urgent need for a comprehensive federal privacy bill.

A Better Alternative: State Bills Promoting Digital Literacy

The concerns about young people online are quite varied, which is an important reason why the best solutions are likely to come from parents and trusted adults in a child’s life rather than a government one-size-fits-all approach. One positive set of legislative proposals to emerge this last session focuses on educating young people through an improved digital literacy curriculum. This approach empowers young people to use technology in beneficial ways while also advising them what to do should they encounter harmful or concerning content.

As discussed in more detail in a recent policy brief, many states already have an element of digital literacy in their K‑12 curriculum; however, such standards typically pre‐​date the rise of the internet and social media. This year, Florida passed a law that would include social media digital literacy in the curriculum. States including Alabama, Virginia, and Missouri also considered such laws.

An education-focused approach will empower young people to make good decisions around their own technology use. Ideally, such a curriculum should be balanced or neutral in its approach to explaining the risks and benefits of social media and other online activities. States should not be too prescriptive and should allow individual schools to make decisions that reflect the specific values or issues encountered by their students. They should also provide for parental notification and responsiveness when it comes to discussions of such issues. Civil society and industry have provided a great number of resources to support parental choice and controls. If policymakers are to be involved, the focus should be on education and empowerment rather than restriction and regulation.

Conclusion

2023 has seen an increase in policy proposals seeking to regulate the internet access of young people, but this carries consequences for all internet users. Such actions will likely face challenges in court on First Amendment grounds as seen with the Arkansas and California laws. As with users of any age, children and teens’ use of and experience with technology can be both positive and negative. A wide array of tools exists to empower parents and young people to deal with concerns, including exposure to certain content or time spent on social media. If policymakers seek to do anything in this area, the focus should be on empowering and educating children and parents on how to use the internet in positive ways and what to do if they have concerns, not through heavy‐​handed regulation that both fails to improve online safety and takes away its beneficial uses.

Changing the Rules in the Face of Increasing Losses: Initial Thoughts on New Proposed Merger Guidelines

https://www.cato.org/blog/changing-rules-face-increasing-losses-initial-thoughts-new-proposed-merger-guidelines

Jennifer Huddleston

Much like little children who are losing a game, the Federal Trade Commission (FTC) and Department of Justice (DOJ) have decided to change the rules that govern when they engage in enforcement actions to intervene in mergers. These proposed changes follow several recent losses in court, and they are designed to increase the enforcers’ chances of deterring mergers. The draft DOJ/FTC merger guidelines released in July 2023 indicate concerning potential changes that would take the focus off objective economics and the consumer and instead place more emphasis on the subjective political and policy preferences of enforcers, enabling more intervention in a variety of industries. This is because these guidelines rely on faulty policy presumptions about the nature of mergers based on selectively chosen case law and shift away from sound economics.

The changes in these draft guidelines are significant and would have a negative impact on companies of all sizes and on consumers, as well as allow more government interference in the economy in general. While there is much more to dig into in the specifics, at a high level the new guidelines are particularly concerning for their shift away from a long-standing focus on consumers.

New guidelines shift away from sound economics

One of the most significant trends is that the new draft guidelines shift standards without providing much, if any, economic reasoning for the changes. This is most notable in the lowered threshold at which market concentration is considered a potential harm to competition. The proposed guidelines are much more concerned about the number of players in a market and the share held by any one player, but notably, such changes are not backed up by economics or past examples.
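
For context, concentration in merger review is conventionally measured with the Herfindahl-Hirschman Index (HHI), the sum of squared market shares. A minimal sketch, using hypothetical market shares and the screening levels commonly cited for the 2010 guidelines versus the 2023 draft, shows how lowering the cutoff can flip the same merger into a presumptive problem:

```python
def hhi(shares_pct):
    """Herfindahl-Hirschman Index: sum of squared market shares (in percent)."""
    return sum(s ** 2 for s in shares_pct)

# Hypothetical six-firm market; the two 15% firms merge.
pre = hhi([20, 20, 20, 15, 15, 10])   # 1750
post = hhi([20, 20, 20, 30, 10])      # 2200
delta = post - pre                    # 450

# 2010 guidelines screen: "highly concentrated" at HHI > 2500 with delta > 200.
print(post > 2500 and delta > 200)    # False -> no structural presumption
# 2023 draft screen (as widely reported): HHI > 1800 with delta > 100.
print(post > 1800 and delta > 100)    # True -> presumptively suspect
```

Nothing about the hypothetical market changes between the two checks; only the enforcers’ cutoff does.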

As Brian Albrecht points out, economics questions the idea that concentration correlates with anticompetitive effects. Similarly, others have discussed how increased concentration does not mean higher prices for consumers. In fact, the new approach would punish firms for improving efficiency in ways that lead to lower prices, since such improvements can increase concentration as consumers respond to those lower prices.

This shift in guidelines suggests that rather than relying on objective standards and looking to consumer welfare, enforcers at the DOJ and FTC are instead choosing the levels of concentration they believe will most likely allow them to succeed in the cases they want to bring. History shows that presumptions and predictions from regulators may miss what is actually occurring in a market, as well as what would play out if consumers were the ones making choices about products. Such a shift away from economic reasoning is likely to result in the agencies intervening more subjectively in a range of markets — including technology — and preventing beneficial mergers from occurring.

New guidelines selectively choose case law

The new merger guidelines rely significantly on case law, but not on sound precedent. In fact, the case law relied upon largely comes from the 1970s or even earlier and ignores more recent precedents. This illustrates how, once again, the FTC’s approach is not actually “updating” rules to handle a novel challenge, but a return to the past and its problems of more subjective standards.

These proposed revised guidelines are backed up by selectively drawing upon case law in ways that would position enforcers to be more likely to win cases regardless of the impact on consumers. This practice even extends to cherry-picking dicta from these cases. For instance, when attempting to implement a preemptive strategy to hinder mergers without apparent harm to competition, the FTC leans on dictum stemming from non-binding case law, such as United States v. Microsoft Corp., 253 F.3d 34, 79 (D.C. Cir. 2001). This reference stands out, given that it is a decision from the D.C. Circuit over two decades old, used here to interpret the spirit of the Sherman Act.

As Gus Hurwitz tweets, while many of these cases are technically “good law,” this is largely because previous guidelines for agency enforcement led to informal behavioral changes and settlements, meaning courts have not had the opportunity to formally repudiate them. The selective nature of the new guidelines is likely to meet skepticism from courts more familiar with the entire body of law and could, in fact, result in more formal repudiation of the cases on which the guidelines rest. Agency officials seeking to enforce under the new guidelines could find themselves worse off than they are now by giving courts a formal opportunity to overturn these outdated precedents and by diminishing the courts’ view of the soundness of the agencies’ guidelines.

The myth of the “kill zone” rises again

Proponents of stricter merger guidelines and deterring mergers and acquisitions, particularly in the technology sector, often point to the idea that large companies kill off nascent rivals through acquisitions. This, however, misunderstands the role mergers and acquisitions play and instead should serve as a reminder that new guidelines will harm small and large companies by limiting their options.

Making mergers and acquisitions more difficult eliminates one exit strategy for companies. Some small companies may be seeking to make an existing product better and find that being acquired by that product’s original developer is the best way to reach a wider audience. Others may find that they enjoy being entrepreneurs or creating new products but have no desire to manage the many aspects that come with a growing company. Some may want to challenge existing giants, remain independent, and eventually “go public” via an initial public offering (IPO). All of these should be considered valid strategies in different situations, but the added difficulty and scrutiny under the revised merger guidelines would burden those small companies whose preferred strategy involves acquisition.

Beyond the reasons for exit, the idea of a "kill zone," or any other justification for changing existing evaluations of mergers and acquisitions, neglects to consider the various benefits of these transactions in the market. In the tech sector, mergers are often about talent as well as product, a practice known as acqui-hiring. Consumers benefit from new collaborations not only through the products but through the creative and talented individuals in charge of their creation and distribution. Of course, mergers can also create stronger competitors, which may provide consumers with broader access to a range of options.

Finally, the idea of a “kill zone” for new players in a given market has largely proven false, whether analyzed through new entry or investment.

The bottom line: the real losers under the new merger guidelines are consumers

Much of the discussion around the new merger guidelines will focus on the impact on businesses, both large and small, and particularly on how it relates to the ongoing debates around "Big Tech." The bottom line is that consumers are ultimately the ones who will feel the brunt of the negative impact if enforcement shifts away from sound economics and law and focuses more on competitors than on consumers. Beyond this, the new guidelines will likely stifle beneficial deals and result in costly litigation for taxpayers and businesses, costs that will ultimately be passed along to consumers. While there are certainly consequences for the tech sector in such changes, this significant shift in enforcement guidance will impact a wide variety of industries and their consumers.

If the FTC Gets Its Way, Could This Be the Last Prime Day?

https://www.cato.org/blog/ftc-gets-its-way-could-be-last-prime-day

Jennifer Huddleston

For the millions of consumers who enjoy the benefits of their Amazon Prime membership, Amazon’s annual Prime Day has effectively become a new Black Friday in July. But while Prime Day may be a chance to grab a great deal, Amazon’s popular Prime program has, at times, drawn attention from regulators.

A new FTC case alleges that these consumers are not willing members of Prime but instead are trapped or misled into their memberships. In June, the FTC filed a complaint against Amazon for its use of "dark patterns" to manipulate consumers into enrolling in the company's Prime subscription and for overcomplicating the cancellation process. The agency alleges that design features, such as the location of the subscribe button on the online checkout screen and the multiple webpages users must click through to initiate a cancellation, are "tricking and trapping" consumers into their subscriptions.

Amazon, in a statement, rejected the complaint's claims as "concerning," maintaining that it makes it "clear and simple" for consumers to subscribe to and unsubscribe from its Prime service. As many have pointed out, cancelling Amazon Prime takes a mere six clicks, unlike the more cumbersome processes many consumers experience with gyms, newspapers, or cable, where they can click to subscribe but must call or visit in person to cancel.

As Prime Day’s popularity shows, Prime members remain members because they find the service valuable. In fact, following the popularity of Prime, other large retailers like Walmart have launched their own similar services. Many retailers, including Target and Macys, will also have July deal weeks to counter Amazon’s Prime Week. Shoppers continue to have an abundance of choices when it comes to retail — both online and offline — and feel that consumers have many options.

Customers continue to choose Amazon, Prime, and Prime Day over other options not because they are trapped, but because they find value in it. Amazon declares “customers love Prime,” and there is much polling to support the company’s claim.

According to the American Customer Satisfaction Index (ACSI), Amazon had a 2023 overall satisfaction rating of 84%, an increase of 8% from 2022. Among online retailers, this is higher than the ratings of Walmart, Target, and Costco.

One facet of the Prime service, Amazon Prime Video, has the highest ACSI satisfaction rating among streaming services at 80%. In the first quarter of 2021, Prime's one-year renewal retention rate was 93%, and its two-year renewal rate was 98%. Even in the wake of new entrants to the retail subscription market, like Walmart+ last year, consumers retain their ties to Amazon Prime, likely in part due to their overall satisfaction with the service.

Despite this consumer satisfaction and competition, the FTC continues to spend resources pursuing Amazon. If it is successful, it could ruin Prime, and many other subscription-based discount services in the process, resulting in higher overall prices for consumers. The alleged "dark patterns" described in the complaint are far from fraudulent or misleading, and Amazon's processes are far more user-friendly than those of many other services.

The FTC should be concerned about fraud, but its definition of "dark patterns" in the Amazon case would expansively sweep in many common marketing practices. If the agency is successful in its case, government bureaucrats, rather than those designing the products, could end up dictating how subscription interfaces must look, deterring providers from offering these services at all.

But Prime and Prime Day don't just benefit consumers: they also benefit the millions of small businesses that use Amazon's platform to sell and connect with consumers. During 2022's Prime Day, shoppers spent over $3 billion on more than 100 million items from small and medium-sized businesses. The Amazon Prime "badge" that sellers can display assures customers of a specific level of service, which can increase their comfort with small businesses or new products.

Prime Day has become a new, hotly anticipated sale over the last nine years. The FTC case against Amazon's Prime subscription is one of several antitrust-based challenges that could make it difficult to continue offering the Prime program that customers love. So, much like that air fryer you're eyeing on Prime Day, let's hope this isn't the last year consumers get to take advantage of the generous deals Prime offers.

What “Threads” Tells Us about Social Media Competition

https://www.cato.org/blog/what-threads-tells-us-about-social-media-competition

Jennifer Huddleston

Meta launched a new text-based social media app called Threads on July 5. The app, which is connected to Instagram, has been referenced by media and users alike as an alternative to Twitter. Amid much excitement about the latest social media app, Threads tells us a lot about the dynamic nature of the social media marketplace and the potential impact of regulations aimed at leading technology companies.

Threads Illustrates Social Media Remains Dynamic

There has been much handwringing about whether today’s social media giants are monopolies. Threads illustrates that there are new entrants into the market and, at a minimum, that there is continued competition between even the most successful tech companies.

Elon Musk’s takeover of Twitter was met with applause by some, but left others searching for alternatives following his changes to the product, including the loss of certain features without paying for a monthly subscription to Twitter Blue and changes to content moderation policies. To at least some users, Threads appears to provide an alternative with a similar text‐​focused experience. Some may have avoided leaving for new social media apps out of concerns about a steep learning curve or a requirement to rebuild one’s connections on a new app. Threads’ connection to Instagram allows existing Instagram users to port over their connections from a platform that they are already comfortable with.

Still, Threads and Twitter are not the only text-based social media apps available to consumers. New competitors, including Bluesky and Mastodon, continue to emerge as alternatives for those dissatisfied with current social media options. While Threads may be tied to an existing social media giant, it is only one of many alternative platforms competing for popularity in this format. These platforms all benefit from the United States' light-touch approach to regulating social media, which makes way for innovative and entrepreneurial efforts to launch new social media products. This latest launch shows that social media remains dynamic not only with new ideas but with new products that build on the popularity of certain formats.

What Threads Shows about How Regulation Can Prevent New Entrants

Generally, the European Union (EU) has taken a more stringent regulatory approach to a variety of technology policy issues. These regulations can stand in the way of new entrants to the market. Though over 30 million users downloaded Threads within a day of its launch, a considerable swath of consumers did not have access to the app.

Regulatory concerns have so far prevented Threads from being available in the EU, even as the app launched in the United Kingdom and the United States. The Digital Markets Act (DMA) looms large in this decision, as much uncertainty remains around its future impact on big tech companies like Meta. The DMA restricts how data can be shared between platforms, with an immediate effect on an application like Threads that imports user data from Instagram. The vagueness of the DMA creates extra hurdles to entry that Meta may not want to jump over until the company has more clarity about how to comply with the recently enacted law and whether compliance is worth the effort.

Unfortunately, the United States has considered similar policies that would shift antitrust away from its existing consumer focus toward something that allows far more government intervention in competitive markets, including social media. For example, had the Ending Platform Monopolies Act (H.R. 3825) passed in the last Congress, it could have complicated Meta's U.S. launch of Threads by limiting its integration with the already popular Instagram platform, the very feature that gave rise to Threads' instant popularity.

The FTC’s aggressive antitrust scrutiny of tech companies could also deter those companies from launching new products or force them to focus on ongoing litigation instead. For example, the antitrust scrutiny of Microsoft in the late 1990s and early 2000s was a deterrent in its development of a mobile operating system. Microsoft co‐​founder Bill Gates said in 2019, “There’s no doubt the antitrust lawsuit was bad for Microsoft, and we would have been more focused on creating the phone operating system, and so instead of using Android today, you would be using Windows Mobile if it hadn’t been for the antitrust case.” With the FTC engaged in multiple actions against Threads’ parent company Meta, it’s a relief that the company was still willing to launch this new product in the United States. One can’t help but wonder what other products or services from various innovative companies might be waiting or lost out of wariness or the need for direct resources to ongoing litigation instead.

Is Threads the Next Twitter?

For all the excitement and optimism around the Threads launch, some caution is also warranted. Given the wide array of social media options out there, Threads will have to solidify an audience of users after the immediate novelty of a new platform wears off. It is unclear how its use may evolve within Instagram or whether the platform may ultimately become an entirely separate form of social media.

While much of Threads is Twitter-like, the platform also has unique features, namely the ability to post videos up to five minutes long. In addition, unlike Twitter after the launch of Twitter Blue, all users have access to a longer character count of 500 characters. This may attract a distinct type of content and conversation that separates Threads from other sites. And while connection to an existing major tech company may help, as the cases of Google's failed social network and Twitter's defunct Vine illustrate, it does not guarantee a home run.

Threads does, however, show that new social media platforms can quickly gather public excitement and that consumers are not locked into only the existing options. The question of whether Threads becomes the next big social media app should stay firmly in the hands of consumers, not government regulators.

Are the Latest FTC Cases against Tech Good for Consumers?

https://www.cato.org/blog/are-latest-ftc-cases-against-tech-good-consumers-0

Jennifer Huddleston

In June, the Federal Trade Commission took a number of actions against America's leading tech companies. The agency is tasked with protecting consumers from conduct that undermines the benefits of a free market, such as fraud or illegal monopolization. But these latest actions appear to focus on something other than consumers.

FTC v. Amazon Prime

The FTC recently filed a case against Amazon, claiming the company uses manipulative practices, or "dark patterns," to mislead consumers into joining its Prime service and then makes it more difficult to unsubscribe. The company's Prime service has over 200 million subscribers worldwide (including over 160 million in the U.S.) and retains high levels of consumer satisfaction.

Earlier this year, when the FTC announced a new rule targeting services that lock consumers into their subscriptions by making it easy to sign up and difficult to cancel, it initially seemed like a return to a consumer-focused priority for the agency. Many consumers probably welcomed such an announcement, having been annoyed by fitness programs or newspapers that provide easy ways to sign up for a subscription online, only to require in-person visits or phone calls to cancel.

Regardless of whether consumers see this behavior as manipulation in need of regulation or merely as a disliked practice that will eventually prove bad for business, Amazon's Prime service does not fit the model. Amazon goes above and beyond in the ease it provides customers who decide to stop their Prime service: unsubscribing takes only a few clicks, and the company will even provide a refund if the user has not used the service since the last membership charge.

Beyond the alleged "difficulty" of unsubscribing, the FTC also alleges Amazon uses "dark patterns" to keep consumers from unsubscribing from Prime. Originally, the term "dark patterns" referred to deliberately deceptive practices designed to trick consumers. Now, it is misapplied to any practice that might attempt to persuade consumers, or to a company simply informing users of the consequences of ending a subscription or opting out of a feature. This is why the term has been used as a critique of Prime's unsubscribe process.

Any business would want to make sure consumers are aware of the services they would lose, or could gain, via the product they are canceling, so that they can make an informed decision. Calling this a "dark pattern" is yet another attempt by the FTC to vilify standard business practices. Demonizing such disclosures could leave consumers with less information for making thoughtful choices.

FTC Challenges Microsoft‐​Activision

The FTC has continued to challenge several mergers and acquisitions and seems to apply particular scrutiny to those within the technology industry. The latest example is the FTC's request for a preliminary injunction to prevent Microsoft's acquisition of video game company Activision. As with some of its previous challenges, this action focuses on a market definition that does not accurately reflect the consumer experience so the FTC can make a case that there is anti-competitive behavior.

The Microsoft-Activision deal has already been approved by European competition authorities. The gaming market is incredibly competitive and evolving, with options for traditional consoles, PCs, mobile, and even virtual reality gaming. But the FTC, like the United Kingdom's competition authority, has chosen to focus its case on "cloud gaming," a new form of gaming that relies on remote servers to stream games over the internet.

The problem with this approach is that cloud gaming has struggled to gain traction with both consumers and game developers. Microsoft is an early actor in this space, but cloud gaming itself has not emerged as a distinct marketplace. Notably, in highly popular segments like mobile gaming, Microsoft is far from the largest player. In fact, its acquisition of Activision may make it a more sizable competitor to rival Sony.

FTC and Meta

Finally, two concerning revelations about the FTC's actions toward Meta have emerged. These actions should give pause about the potential for the agency, and the administrative state at large, to abuse its power.

In May, the FTC announced that it was modifying its 2020 settlement with Meta to include new requirements and restrictions that would impose a blanket ban on the use of data on users under the age of 18. This stems from concerns related to a flaw in the Messenger Kids app that predated the original agreement and was identified in an independent assessment. While all three commissioners voted to support the proposal, Commissioner Alvaro Bedoya expressed concerns about whether the agency had the legal authority to impose such limits on data use based on the evidence. Commissioner Bedoya's concerns are legitimate, and Meta has challenged the action in court.

Not long after this action, it was made public that FTC Chair Lina Khan had refused to follow internal ethics recommendations to recuse herself from the Meta-Within case, a recent FTC action to block a merger, which the agency lost in court. In a memo addressed to then-Commissioner Christine Wilson, the agency's ethics official expressed concerns about how Chair Khan's prior statements on Meta acquisitions might raise questions about her ability to be impartial in the matter.

Khan's refusal to follow this advice did not amount to an ethics violation, but it highlights concerns that these actions are based not on sound policy beliefs but on personal politics against certain companies. It should be concerning to those beyond tech companies that internal ethics recommendations are not being followed. Such a trend could undermine the perceived legitimacy of future FTC action well beyond Khan's tenure as chair.

Conclusion

The FTC's increased scrutiny of America's leading tech companies shows no sign of slowing down, but these latest cases only further highlight concerns that such actions are not focused on the consumer. An FTC run amok, without an objective focus on consumer welfare and unchecked by courts or Congress, risks harming rather than protecting the very consumers it is meant to serve.

The Consequences of Regulation: How GDPR is Preventing AI

https://www.cato.org/blog/consequences-regulation-how-gdpr-preventing-ai

Jennifer Huddleston

Recently, the Irish Data Protection Commission halted the launch of Google's new artificial intelligence (AI) product, Bard, over concerns about data privacy under European Union (EU) law. This follows a similar action by Italy after the initial launch of ChatGPT in that country earlier in 2023.

Debate continues over whether new regulation is needed to address concerns about AI safety. However, the disruptive nature of AI suggests that existing regulations, which never foresaw such rapid development, may be preventing consumers from accessing these products.

The difficulty of launching AI products in Europe illustrates one of the problems with using static regulation to govern technology. Technology often evolves faster than regulation can adapt. The rapid uptick in the use of generative AI is the latest example of the increasingly fast adoption of new technologies by more consumers. But regulations typically lack the flexibility to accommodate such disruption, even when it might provide better alternatives.

The General Data Protection Regulation (GDPR) is an EU law that created a series of data protection and privacy requirements for businesses operating in Europe. Many American companies spent over $10 million each to ensure compliance, while others chose to exit the European market instead. Furthermore, the law led to decreased investment in startups and app development in an already weaker European tech sector.

But beyond these expected consequences of static regulation, the requirements of GDPR have raised questions about whether new technologies can comply with the law's specific requirements at all. Stringent and inflexible technology regulations can keep us stuck in the past or present rather than moving on to the future. When GDPR first took effect, much of this concern focused on blockchain technology's difficulty complying with its requirements; now, the disruptive nature of AI is showing how a regulatory, permissioned approach can have unintended consequences for beneficial innovation. A static regulatory approach impedes the evolution of technology that, if permitted to develop without such restrictions, could potentially rectify the very deficiencies the regulations originally aimed to prevent.

Unlike market-based solutions or more flexible governance, such a compliance-focused approach presumes to know what tradeoffs consumers want to make or "should" want to make. Ultimately, it will be consumers who lose out on the opportunities and benefits provided by new technologies or by creative solutions for balancing these concerns.

While there may be privacy debates to be had over the use of certain data by the algorithms that power AI, regulations like the GDPR presume privacy concerns should always win out over other values that matter to consumers. For example, more inclusive data sets run afoul of calls for data minimization in the name of privacy but are more likely to address concerns about algorithmic bias or discrimination.

Europe has long seemed set on a path of heavy-handed regulation over a culture of innovation, and the growing regulatory thicket is starting to produce regulations that contradict one another on issues such as privacy. As the U.S. continues to consider data privacy regulations and regulatory regimes impacting AI, policymakers should watch carefully the unintended consequences that Europe's more restrictive approach has yielded.

Improving Youth Online Safety While Preserving Consumer Choice and the Benefits of Technology

https://www.cato.org/blog/improving-youth-online-safety-while-preserving-consumer-choice-benefits-technology

Jennifer Huddleston

As headlines grab the attention of parents and policymakers, a new Surgeon General's report has raised concerns about children's and teens' time on social media. Some states, including Utah, Arkansas, and California, have introduced bills that claim to "protect children," but these bills would carry significant consequences for speech and privacy. Many parents and policymakers are wondering what could be done to help keep kids safer online without taking such a restrictive, and likely unconstitutional, approach.

The issue of keeping children and teens safe online is as unique as each individual child and family. The best answers for these concerns are not one‐​size‐​fits‐​all and thus will emerge from a variety of market and civil society forces that can respond to these unique needs. In my latest policy brief, I highlight some ways that policymakers who want to support parents and families navigating these questions could do so without the problems for speech, privacy, and parental choice.

Many great resources already exist, from parental controls to guides for having conversations about technology with children and teens, but parents are often unsure which parental controls are available, how to start those conversations, or where to look for guidance. First, policymakers could help empower parents by collating existing resources or engaging in other educational efforts so families can choose the right solutions for their concerns. These resources need not be developed by the government, as a wide array of industry and civil society groups have already developed them.

Second, further research is needed to understand the underlying concerns around issues like teenage mental health and social media. It should not be presumed that technology is always to blame, and how technology can help with these same issues should also be explored. This work should include not only scientific and social science research; policymakers and trusted adults like parents, caregivers, and teachers should also ask children and teenagers why they prefer to spend time online and discuss the value they find in online communities.

Finally, many states already include a digital or computer literacy component in their curricula. However, many of these curricula and standards were developed before social media gained popularity. This year, Florida passed a law adding updated online safety and media literacy instruction around social media in a way that lets schools and parents be aware of, and choose, the curriculum. This flexible approach does not dictate to children and teenagers what choices they should make but instead prepares them both to make responsible choices and to understand the risks and benefits of using technology.

In short, it is understandable that many parents are concerned about what they hear about children, teenagers, and social media. It is not uncommon for these concerns to arise with technology or in popular culture. Similar concerns have played out over everything from the novel to video games.

Despite these concerns, many children and teens have found valuable online communities, educational opportunities, or new passions online. Rather than rushing to regulate or take technology away from teenagers, parents and policymakers should look to the tools available to empower and educate all users on how to have a beneficial online experience.

Three Reasons Americans Should Be Concerned about the United Kingdom’s Online Safety Bill

https://www.cato.org/blog/three-reasons-americans-should-be-concerned-about-united-kingdoms-online-safety-bill-0

Jennifer Huddleston

On the TV show Parks and Rec, libertarian character Ron Swanson takes a trip to London and famously quips, "History began on July 4, 1776. Everything before that was a mistake."

When it comes to issues like free expression online, it is easy for Americans to take the myopic view that we don't need to worry about the impact of other countries' regulations because we are protected by the First Amendment.

The United Kingdom has been debating the "Online Safety Bill," which could have serious consequences for many internet companies. This sizable piece of legislation would create many new requirements for platforms that carry user-generated content, including search engines, messaging apps, and social media. Among these requirements are strict age verification and limits on certain types of "legal but harmful" content, which raise significant concerns about the bill's implications for users' privacy and speech. While the rules will mostly be felt by U.K. internet users, the global nature of the internet means they will likely impact users more generally. With that in mind, here are three key reasons Americans should be concerned about what might happen if the United Kingdom passes the Online Safety Bill.

  1. It could undermine encrypted services

One of the key concerns about the Online Safety Bill is the way it would undermine encryption. The bill discourages end-to-end encrypted messaging by pushing services to scan all messages for child sexual abuse imagery. While the intention of identifying the criminals behind these heinous acts is noble, such a mandate would require platforms to screen every message in the United Kingdom to prevent certain illegal content. In short, it creates a guilty-until-proven-innocent standard for messages on encrypted services.

Many encrypted messaging services have opposed the bill and have even stated they would cease service in the United Kingdom rather than compromise privacy and security in this way. This could deny U.K. users access to popular apps like Signal and WhatsApp, which would also make it more difficult for Americans and others to communicate and connect with friends and family in the United Kingdom. For companies that did comply, the message-screening requirement would certainly sweep in messages from American users to the United Kingdom, as well as affect any number of American services hosting user-generated content with connections to the United Kingdom.

  2. Many of its provisions may be applied globally

While the First Amendment may protect Americans from many potential speech restrictions domestically, the actions of other countries can certainly impact the experience of American users because of the internet’s global nature.

Americans have previously experienced this with many of the privacy changes following the European Union's General Data Protection Regulation, such as the increase in cookie pop-ups and being unable to access certain newspapers while traveling in Europe. We have also seen a similar international impact on American companies through the U.K.'s competition enforcement actions, such as blocking Meta's acquisition of Giphy and, most recently, Microsoft's acquisition of Activision.

If the Online Safety Bill were to become law, many companies would find themselves deciding whether the cost of compliance is too high to stay in the United Kingdom. For those that stay, it may be easier to apply the same restrictions globally to ensure compliance. The result might be that American users are subject to the same age verification requirements and constraints on content moderation. This is particularly true if the proposal is read broadly enough to reach certain virtual private networks (VPNs) that would allow U.K. users to claim they were in another country.

  3. U.S. policymakers are looking to it as a model for youth online safety proposals in the United States

At the state and federal levels, there has been much debate over potential youth online safety and privacy legislation. Policymakers behind these proposals are likely watching the debate over similar policies across the pond. In fact, during the debate over California's Age Appropriate Design Code, one of the Online Safety Bill's key proponents, Baroness Beeban Kidron, wrote positively of the similarities. In turn, recent American actions only embolden some of the U.K. legislation's proponents in their ardent advocacy for the law.

However, policymakers should be cautious not to forget one key difference between the U.S. and the U.K.’s foundations: the First Amendment. In fact, the California law has already been challenged in court on First Amendment grounds, and other state or federal legislation would likely face similar legal challenges.

Still, in some cases, this has done little to dissuade policymakers, who often have well-meaning intentions of protecting the next generation. But proposals modeled after the U.K.'s Online Safety Bill would diminish the privacy and benefits of the internet for all users, including the children they claim to protect.

The Online Safety Bill raises a whole host of concerns that many scholars have written about in great detail. If passed, it will have significant negative impacts on the speech and privacy of British internet users, but its broader impact will also likely reach America's shores.