As artificial intelligence reshapes various aspects of American life, Congress is contemplating legislation that could diminish state authority to implement essential protections. This potential overreach, detailed in Section 43201 of the recently passed House reconciliation package, aims to prohibit states from regulating artificial intelligence models and systems for a decade.
In the latest development, the Senate introduced a similar measure that would withhold federal broadband infrastructure funds from states unless they refrain from regulating AI. Proponents of this moratorium argue that a unified national approach is necessary to maintain the United States’ competitive edge in AI technology.
However, this sweeping regulation poses a real threat to state initiatives designed to safeguard against the more harmful excesses of major technology companies. These initiatives include protecting children online and ensuring data privacy, all while addressing issues related to platform censorship.
Section 43201 could also undermine numerous state laws that, while not directly targeting AI, were crafted to ensure digital safety and privacy. The provision’s broad definition of “automated decision systems” encompasses fundamental functionalities of social media platforms, potentially including systems like TikTok’s recommendation feed and Instagram’s algorithm.
At least twelve states have enacted laws requiring parental approval or age verification for minors using these platforms. However, these laws could be interpreted as regulations of automated decision systems and thus fall under the moratorium’s restrictions.
Moreover, Section 43201 may block stringent provisions within existing state privacy laws that restrict the use of algorithms—particularly AI—to predict individual behaviors and preferences. Blocking these provisions would strip protections at the local level without establishing equivalent federal safeguards.
Even beyond the expansive reach of this moratorium, the fundamental issue at hand is the potential erosion of American federalism. By undermining state regulations, Congress may prevent AI from developing in a way that fulfills its stated promise, as emphasized by Vice President J.D. Vance during the Paris AI Summit.
Vance cautioned against perceiving AI solely as a disruptive force that will displace jobs. He advocated instead for policies that enhance workforce productivity and lead to better wages and living conditions for employees. Realizing this vision depends heavily on actions taken at the state level.
States like Tennessee and Utah are already pioneering innovative measures aimed at protecting their constituents. For instance, Tennessee’s ELVIS Act prohibits the non-consensual use of artists’ voices and likenesses in AI content, while Utah mandates that developers of generative AI systems inform users when they are engaging with AI.
Similarly, states like Arkansas and Montana are laying down legal frameworks addressing digital property rights concerning AI models, algorithms, data, and their outputs. Such proactive measures are crucial as states serve as practical testing grounds for regulation.
As laboratories of democracy, states can effectively navigate the complexities tied to groundbreaking technologies. Federalism allows them to innovate continuously and compete with one another, revealing which regulatory approaches work and which do not in a rapidly changing environment.
This capacity is vital in managing AI’s expansive impact on children, the workforce, and the broader socio-economic landscape. Leading advocacy and research organizations have raised alarms about the dangers AI chatbots pose to minors, pointing to disturbing instances in which interactions with AI contributed to teen addiction and self-harm.
Industry leaders have echoed these concerns. Anthropic CEO Dario Amodei estimates that the rise of AI could lead to as much as 20% unemployment in the next five years. Thus, it is clear that while innovation drives society forward, it also carries the risk of societal disruption.
Given these concerns, a coalition of 40 state attorneys general, from both sides of the political spectrum, has expressed opposition to Section 43201. They argue that its implementation would nullify carefully crafted laws that specifically address risks associated with AI use.
Not all state laws are equally beneficial. Some states, like California and Colorado, have introduced stringent AI regulations that could adversely affect smaller tech companies and open-source developers. Nonetheless, Congress should avoid discarding the principles of federalism to address concerns arising from these more extreme regulations.
Instead of imposing sweeping pre-emption, legislators could consider targeted limits: ones designed to rein in high-risk legislation while allowing states to retain the flexibility needed to craft effective solutions for their unique populations.
Without a comprehensive federal framework governing AI, it is crucial to ensure that states maintain the autonomy necessary to foster American innovation and competitiveness. In achieving a future where AI technologies thrive, the role of state legislative actions cannot be overstated.
The potential of AI in America is vast, but it relies heavily on the unique contributions of state lawmakers. By maintaining a balance between federal oversight and state initiative, the United States can navigate the complexities of AI while safeguarding the interests of its citizens.