What Else Was Trending in State Technology and Innovation Proposals in 2022–2023?

https://www.cato.org/blog/what-else-was-trending-state-technology-innovation-proposals-2022-2023

Jennifer Huddleston and Gent Salihu

As discussed in prior posts, the 2022–2023 legislative session saw a surge in state kids' online safety legislation and state data privacy laws. But what else have states been working on during this recent legislative session?

An overwhelming majority of states have considered actions related to TikTok, particularly on government devices and networks. With the rise in popularity of generative artificial intelligence (AI) applications, states have also rushed to sponsor laws regulating the technology for problems that have yet to be clearly defined. These topics are likely to remain part of state debates in upcoming legislative sessions, as well as the subject of continued debate at the federal level.

State-Level TikTok Bans

As of August, 34 states have passed legislation banning TikTok on government devices or networks. These restrictions are largely based on concerns about the national security risks posed by TikTok's parent company ByteDance's position in China and the Chinese National Intelligence Law, under which China can require its corporations to provide data to the government.

In some instances, however, states have enacted bans that extend beyond government devices. Oklahoma, for example, has extended its TikTok prohibition not only to government devices but also to any contractor transacting with the state. Some extensions seem to push the boundaries of what national security concerns can justify. In Texas, TikTok restrictions were also applied to public universities and their networks, a move now being challenged on academic freedom grounds. In a similar vein, Florida outlawed TikTok on both public school devices and networks.

Perhaps most concerning from a free speech perspective, Montana enacted a TikTok ban covering all citizens. Departing from the national security rationale, Montana expanded the reasoning for banning TikTok to the conduct and mental well-being of children. As the law's preamble puts it, TikTok "directs minors to engage in dangerous activities."

Such a proposal raises significant concerns about its impact on the First Amendment rights of TikTok's American users, and, unsurprisingly, the law was almost immediately challenged in court. Bans like Montana's deprive Americans of a unique forum of speech that they have chosen to use for expression. TikTok fosters a distinct community of creators and followers, often sparking trends and challenges, and provides specialized content creation features that its creators prefer to those available on other platforms like Instagram Reels or YouTube Shorts.

TikTok is a forum of speech that stands apart in its function and impact. A TikTok ban at any level faces a significant hurdle in proving that it serves a compelling government interest and uses the least restrictive means to achieve that interest. Ongoing review processes may lead to a better understanding of whether any concerns about TikTok require government policy action, but even if they do, there are many options short of a full ban for achieving such an interest.

State-level bans raise further concerns about realistic enforcement even if they do pass such a test. Many proposals would require the government to take troubling enforcement steps or are nearly impossible to implement in practice. Americans in states that ban certain apps or websites may turn to virtual private networks (VPNs) to circumvent the restrictions, as demonstrated by the spike in VPN searches and downloads after PornHub exited Utah over the state's government ID requirements. Such bans would also extend state control over app stores, dictating what apps they could carry within a specific state, a decision typically made only at the federal level. And because popular apps are already widely installed, many Montanans likely have TikTok on their devices already; a ban would mainly stop users who do not currently have the app, and it could even create new risks by blocking security updates to existing installations.

TikTok has been policymakers' focus due to the unique intersection of concerns about China's technological progress and youth social media use. But many legislative proposals at both the state and federal levels would affect far more than just this one app. Policymakers should tread carefully when considering the precedent such actions could set and ensure that any concerns are based on sound evidence, not merely the targeting of a specific company.

AI Regulation Attempts by States

As the federal government grapples with what—if anything—should be done to regulate AI and Large Language Models (LLMs), a small set of states have already taken charge and pushed forward their own bills with the rationale of protecting their citizens. New York and California have proposed centralized frameworks, echoing the EU AI Act, while others like Louisiana, Montana, and Texas have targeted more specific concerns. A confusing patchwork of rules for developers, deployers, and end users of AI could be on the horizon if states take the lead.

New York, through A7501, seeks a centralized approach to regulating AI usage, with plans to establish an Office of Algorithmic Innovation that would have the power to set standards for the usage and auditing of algorithms. The bill resembles the centralized structure laid down by the EU AI Act and departs from the sectoral solutions that dominate debates at the federal level. Creating a new institution, as New York proposes, may only produce red tape and confusion, rather than building on existing institutions through a sectoral approach.

While New York's legislative proposal is still under consideration, California's AB 331 was recently suspended. Still, its features deserve close attention, as similar bills are likely to be sponsored in upcoming legislative sessions. California's bill sought to expand the existing responsibilities of the Civil Rights Department to cover the regulation of automated decision tools. Utilizing an existing body is a departure from New York's goal of creating a new agency; however, both states aimed to entrust a single agency with oversight of all deployers and users of AI.

Even with the suspension of AB 331, California may consider AI regulatory action through its existing California Privacy Protection Agency (CPPA). The CPPA has the power to draft regulations on automated tools and has become a de facto AI regulator in California.

Unlike New York and California, which have taken a broad, centralized regulatory approach to AI, Louisiana has focused on more specific and tangible use cases. Louisiana's SB 175, already signed into law, criminalizes deepfakes involving minors and defines rights to one's digital image and likeness. This represents a tailored response to a particular concern about AI usage for which there is solid evidence and insight.

Montana and Texas have adopted similarly targeted approaches. Montana's SB 397, also signed into law, restricts law enforcement's use of facial recognition technology, aiming to safeguard individual liberty and prevent state authorities from perpetuating racial bias.

Rather than rushing to enact broad regulations for a technology that is transforming every day, Texas established an AI Advisory Council to study and monitor AI systems developed or used by state agencies. This approach could create opportunities for deregulation as well as regulation by identifying current barriers to deployment or development, and it focuses on the state's own use of the technology rather than on private sector applications.

As with data privacy and youth online safety, many state legislatures may be asking what they can or should do about their constituents' concerns about AI. It is important to remember that AI is a general-purpose, data-intensive technology, and concerns typically relate to a specific application, not the technology in general. Over-regulation could limit many existing and beneficial applications. Like the internet, AI crosses borders in ways that make a federal framework preferable for any regulation that proves necessary.

Conclusion

A wide range of tech policy issues have seen activity at the state level during the latest legislative session. In some cases, this activity may be a reaction to the perceived ability to "do something" in the absence of federal action, as evidenced by recent measures across a broad array of tech debates, including new topics like AI and TikTok. Many state technology proposals attempt to respond quickly to perceived concerns without strong evidence of the alleged harm or thorough consideration of the consequences of government action on key values like speech. While state governments are often seen as laboratories of democracy, more closely tied to the populations they represent, tech policy complicates that picture: many proposals have an impact beyond state borders or could create a disruptive patchwork.

The Patchwork Strikes Back: State Data Privacy Laws after the 2022–2023 Legislative Session

https://www.cato.org/blog/patchwork-strikes-back-state-data-privacy-laws-after-2022-2023-legislative-session-0

Jennifer Huddleston and Gent Salihu

Prior to the 2022–2023 legislative session, five states (California, Virginia, Utah, Colorado, and Connecticut) had passed consumer data privacy laws; now the patchwork of state laws has more than doubled. Congress has continued to debate a potential federal standard, with the American Data Privacy and Protection Act in the 117th Congress being the first such proposal to be voted out of committee. However, absent momentum around a federal standard, and with continuing and new consumer concerns about data privacy, many states are undertaking their own policy actions.

The patchwork nature of these individual state laws can amplify compliance costs for businesses operating across different states and create confusion for American consumers, whose digital footprints often cross state borders. The potential financial impact of complying with 50 distinct state laws could surpass $1 trillion over a decade, with at least $200 billion borne by small businesses. As this patchwork grows, what does data privacy look like as the 2022–2023 legislative session comes to a close?

What happened with data privacy in 2022–2023?

As of 2023, a majority of states have considered data privacy legislation, likely in response to consumer concerns on this issue: 32 state legislatures have taken up the debate and introduced bills. Ten states have now enacted comprehensive privacy laws. Six states (Florida, Indiana, Iowa, Montana, Tennessee, and Texas) enacted data privacy legislation this year. Oregon is the latest state to pass a comprehensive law, which now awaits the governor's signature. Additionally, five more bills are under consideration as of July 2023. Most of these bills share similarities with the existing data privacy laws in California, Virginia, and Utah.

States with data privacy acts enacted in 2023 that have followed the California model

Of the states that enacted data privacy laws this year, Indiana and Montana appear to most closely resemble California's model, which relies heavily on administrative rules. Montana, for example, even goes beyond California by creating a right for consumers to revoke their consent to data processing. None of the states that enacted laws this year created a private right of action like the limited one in California's current law.

States that have followed the Virginia or Utah model

Notably, a growing number of states have passed or considered data privacy frameworks that more closely resemble the laws initially passed in Utah and Virginia, including Iowa, Tennessee, and Texas, as well as a bill still under consideration in North Carolina. Such models provide baseline protections but typically impose fewer obligations, cover narrower categories of data, limit enforcement to the attorney general, and are more likely to provide safe harbors.

Still, each proposal remains unique. For example, Tennessee became the first state to create a compliance safe harbor for companies that follow National Institute of Standards and Technology (NIST) standards. Other states have considered similar carve-outs for existing standards. Such an approach may lessen some of the patchwork's problems by allowing a single set of best practices to satisfy compliance from state to state.

Notable privacy bill trends to watch

In addition to the growing patchwork of state privacy laws, this latest legislative term has also provided additional information about the debates around data privacy legislation. Notably, private rights of action continue to raise concerns and may make proposals less likely to succeed. Additionally, a new trend of health privacy-focused bills is emerging at the state level.

Currently, four states with still-active bills—Maine, Massachusetts, New Jersey, and Rhode Island—contemplate creating a private right of action. To date, however, every bill that included a private right of action, from Hawaii to Mississippi to New York, has failed. New York's failed "It's Your Data Act" provided that consumers "need not suffer monetary or property loss as a result of such violation in order to bring an action for a violation." The Washington Privacy Act passed only after the private right of action was eliminated; it was later reinstated in a very limited form that allows suits only for injunctive relief, without monetary damages.

Including a private right of action for statutory violations, so that individuals can sue companies without needing to prove any actual harm, has grave consequences. Such a provision raises significant concerns about how litigation could be used to prevent innovation. A private right of action would pose no significant issues if the burden of proof were tied to demonstrating harm; the problem arises when there is no requirement to prove harm. Such a provision could prompt a surge in class action lawsuits, impeding innovation, especially among small companies that may become more risk-averse for fear of being sued.

The United States, with its distinctive litigation system and features such as the absence of a "loser pays" rule, is especially susceptible to abuse of a private right of action for statutory violations. Illinois's Biometric Information Privacy Act provides such a right for certain collection of biometric data and has seen everything from photo tagging to trucking companies' fingerprint scanning end up in court. Most of the resulting funds have gone to attorneys, with limited amounts reaching the class members allegedly "violated" by the conduct. In the photo tagging case, Facebook was directed to pay $650 million without any demonstration of harm. In the trucking case, truck drivers secured a $228 million judgment because, as employees, they were required to scan fingerprints to confirm their identity, again without any showing of actual harm.

An emerging trend to watch is the debate surrounding bills aimed at regulating consumer health data, primarily reproductive health data. Washington is the first state to pass such a law, which is set to take effect in 2024. In a post-Roe context, similar legislation is likely to emerge, particularly in blue states, regulating actors not governed by HIPAA. Given the broad scope of what can be classified as health data, debates over its definition, collection, and usage are likely to be heated. Such laws also raise unique compliance questions for a variety of popular apps that are not regulated as medical devices but give consumers empowering ways to track everything from blood sugar to mental health.

What do state data privacy laws mean for consumers, innovators, and the federal privacy policy debate?

States are acting on data privacy in part because of continued constituent interest in the issue. In 2022, more than 80% of voters polled supported the idea of a federal data privacy law. Given that data privacy remains a concern, and given the lack of progress on a federal bill, it is unsurprising that much of the debate has shifted to the state level, where legislatures can move more quickly. But is this good for consumers and innovators?

Is there a case for data privacy legislation anyway?

While many polled consumers favor data privacy legislation, their actual privacy preferences vary widely. In fact, the overwhelming support for data privacy becomes far more complicated when one considers questions like how much an individual would be willing to pay for social media or other products as opposed to using an ad-supported version. Similarly, research has shown a "privacy paradox" in which revealed preferences for privacy tend to be weaker than stated preferences.

If policymakers are to consider data privacy legislation, they should focus on real and widely agreed-upon harms, not merely expressed preferences. This approach guards against a shift toward a more European "privacy fundamentalism," which is more likely to conflict with other rights, like speech, and to create a static regime that deters innovation, including innovations that may themselves improve privacy.

Understanding the problems of a patchwork approach

The continuing emergence of a patchwork of data privacy laws at the state level is likely to lead to both increased costs and confusion, not only for the businesses that handle data but also for consumers.

A state-by-state approach leaves both innovators and consumers uncertain about what may or may not be done with data. For consumers, this can create confusion about why certain products or features are unavailable in their state or what rights they have to obtain or correct their data online. For small businesses in particular, a state-by-state approach is likely to raise costs significantly as new compliance obligations arise in each state. In some cases, this may mean applying the most restrictive state's standard everywhere; in others, it may require developing state-specific features to comply. In either case, both consumers and innovators lose out: consumers may lose features because of standards imposed by legislatures in other states, and innovators may find themselves focusing on compliance rather than the improvements that best serve their customers.

Far from being a workable second-best solution, a patchwork makes conflicts almost inevitable: state proposals will eventually contradict one another, making it impossible to comply with all such laws at once. The most obvious example would be one state choosing an opt-out model while another chooses an opt-in model, but many other conflicts could arise around issues such as data minimization or retention.
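To make the opt-in/opt-out conflict concrete, here is a minimal sketch of how a business's software might have to branch on a user's state of residence. The state codes, consent models, and function below are hypothetical, invented purely for illustration and not drawn from any actual statute:

```python
# Hypothetical sketch of the patchwork problem described above. The state
# codes, consent models, and defaults here are invented for illustration;
# they are not drawn from any actual statute.

STATE_CONSENT_MODEL = {
    "AA": "opt_in",   # hypothetical state: sharing requires prior consent
    "BB": "opt_out",  # hypothetical state: sharing allowed until user objects
}

def may_share_data(state: str, opted_in: bool, opted_out: bool) -> bool:
    """Decide whether user data may be shared under this sketch's rules."""
    model = STATE_CONSENT_MODEL.get(state, "opt_out")
    if model == "opt_in":
        return opted_in        # silence counts as "no"
    return not opted_out       # silence counts as "yes"

# A user who has expressed no preference gets opposite treatment in the
# two states, so no single nationwide default can satisfy both laws:
print(may_share_data("AA", opted_in=False, opted_out=False))  # False
print(may_share_data("BB", opted_in=False, opted_out=False))  # True
```

Even in this toy example, each additional state model multiplies the branches a business must build and maintain, which is precisely the compliance burden described above.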

Given the potential for conflicts and the burden on out-of-state businesses, a state-by-state approach should also give rise to dormant Commerce Clause concerns. The interstate (and international) nature of data means a federal standard should be considered constitutionally necessary in this case.

Conclusion

The 2022–2023 session saw a doubling of the number of states with consumer data privacy laws. While policymakers may feel they are responding to constituent concerns, the patchwork approach remains problematic for both innovators and consumers.