October 2024 US Tech Policy Roundup

Rachel Lau, J.J. Tolentino, Gabby Miller, Ben Lennett / Nov 1, 2024

Rachel Lau and J.J. Tolentino work with leading public interest foundations and nonprofits on technology policy issues at Freedman Consulting, LLC. Ben Lennett is the managing editor of Tech Policy Press, and Gabby Miller is a staff writer at Tech Policy Press.

October 5, 2024: Elon Musk (R) jumps on stage as he joins former US President and Republican presidential candidate Donald Trump during a campaign rally at the site of the first assassination attempt on Trump in Butler, Pennsylvania. (Photo by JIM WATSON/AFP via Getty Images)

With November 5th quickly approaching, policymakers spent much of the past month focused on the elections, leaving little room for action on tech policy. Although technology has not been a significant campaign issue for either party, it may yet again play a role in shaping political discourse as the votes are tabulated. Based on his recent statements, former President Trump is likely to dispute the election results if they are not in his favor, and, as in the period following the 2020 election, his supporters will likely use social media and other technologies to mobilize and amplify his rhetoric and election disinformation. How the social media platforms will respond this time is an open question. Elon Musk, the owner of X, has formally endorsed former President Donald Trump and is funding a political action committee dedicated to his reelection, and there is evidence that Musk is using X to elevate his political views along with MAGA and right-wing content.

The election outcome will also have a major impact on the direction of tech policy in the US and globally. Tech Policy Press published a range of perspectives on priorities and possibilities for US tech policy after November. Most of the experts we interviewed predict that a Harris administration would largely keep the US on its current course for tech policy and regulation, while a Trump administration could depart not just from the Biden administration but from past Republican administrations as well; still, a great deal of uncertainty remains. Ultimately, the tech policy agenda will depend on a combination of electoral outcomes, both in the race for the presidency and in the contest for control of Congress; the growing political and lobbying influence of tech companies and Silicon Valley; and broader global trends around issues such as national security and competitiveness.

Despite the attention on the upcoming election, there were some important policy developments this month:

  • A bipartisan coalition of 14 state attorneys general filed a lawsuit against TikTok for allegedly collecting minors’ data without consent and harming youth’s mental health;
  • The White House released its much-anticipated national security memorandum (NSM) on AI, including a companion framework to advance AI governance and risk management in national security;
  • The Department of Justice (DOJ) moved forward with potential rules to implement President Biden’s executive order protecting Americans’ personal data from “countries of concern,” and the Consumer Financial Protection Bureau (CFPB) issued guidance regarding the use of AI-powered or algorithmic tools by financial institutions;
  • The first civil lawsuit against an online platform that allows users to create and interact with AI-generated agents was filed in Florida on behalf of a parent whose 14-year-old son took his life after interacting with and becoming dependent on the platform’s role-playing AI "characters."

Read on to learn more about October developments in US tech policy.

State Attorneys General Sue TikTok

Summary

On October 7, a bipartisan coalition of 14 state attorneys general, led by California Attorney General Rob Bonta and New York Attorney General Letitia James, sued TikTok for allegedly collecting minors’ data without their consent and "misleading the public about the safety of its platform and harming young people’s mental health." The coalition includes California, Illinois, Kentucky, Louisiana, Massachusetts, Mississippi, New Jersey, New York, North Carolina, Oregon, South Carolina, Vermont, Washington, and the District of Columbia.

The suits, filed in each state or district’s respective jurisdiction, broadly seek injunctive relief against the video-sharing platform for its allegedly harmful practices and ask the courts to impose significant financial penalties. While each lawsuit relies on a legal approach tailored to the requirements of its relevant state laws, the overarching allegations rest on unfair and deceptive actions TikTok has allegedly taken in violation of state consumer protection laws. For instance, Massachusetts claims TikTok's misleading actions have created a public nuisance for the people in its jurisdiction, while New York claims TikTok's "false advertising" violates the state's general business law.

This suit against TikTok is another instance of state attorneys general working together to hold major social media companies accountable for business conduct and design decisions that affect kids' safety. The legal approach is meant, at least in part, to sidestep Section 230, which shields online platforms from liability for harms associated with content posted by third-party users, and the First Amendment, which, according to the Supreme Court’s recent NetChoice decision, protects platforms’ content moderation decisions.

This effort is similar to a lawsuit filed against Meta by 42 state attorneys general last October, alleging that the company designed and deployed features on Facebook and Instagram that encouraged addictive behaviors it knew to be harmful to its young users’ mental and physical health. More recently, New Mexico Attorney General Raúl Torrez sued Snap over the design and implementation of features that allegedly facilitate harms to children, including the spread of child sexual abuse material.

Stakeholder Response

In a statement announcing the lawsuit, California Attorney General Rob Bonta stated that “TikTok intentionally targets children because they know kids do not yet have the defenses or capacity to create healthy boundaries around addictive content.” New York Attorney General Letitia James added, “TikTok claims that their platform is safe for young people, but that is far from true. In New York and across the country, young people have died or gotten injured doing dangerous TikTok challenges, and many more are feeling more sad, anxious, and depressed because of TikTok’s addictive features.” Common Sense Media came out in support of the lawsuits: "The new lawsuits against TikTok demonstrate that the Attorneys General are once again using the power of their offices to protect children online by focusing on design features that are known to be harmful to kids and teens.”

In response to the lawsuit, TikTok spokesperson Alex Haurek said in a statement, “We strongly disagree with these claims, many of which we believe to be inaccurate and misleading.” EFF’s free speech and transparency litigation director, Aaron Mackey, offered a statement to Engadget: “Social media algorithms aren’t inherently evil – they can sift through vast amounts of data to present users with content they’ll find relevant, entertaining, and educational. The states' claims regarding features like autoplay and endless scrolling are really just a smokescreen for their distaste for First Amendment-protected content.”

What We’re Reading

  • Bobby Allyn, “More than a dozen states sue TikTok, alleging it harms kids and is designed to addict them,” NPR.
  • Gabby Miller, “Social Media Lawsuits by State Attorneys General Surmount Section 230, Other Challenges,” Tech Policy Press.

The White House Releases Memorandum on National Security and AI

Summary

This month, the White House released its much-anticipated national security memorandum (NSM) on AI. The NSM establishes guidelines for federal agencies’ adoption and deployment of AI systems in national security contexts. It builds on several Biden administration AI policy initiatives, including the 2023 AI executive order, and seeks to advance the federal government's AI and national security interests in key areas while establishing safeguards and cultivating a stable and responsible framework for international AI governance.

The NSM directed specific federal actions to strengthen the country’s foundational AI capabilities, including the United States’ chip supply chain, and prevent foreign competitors from upending US AI leadership. It also established that the US must proactively construct testing infrastructure to assess and mitigate AI risks, including developing tools for “reliably testing AI models’ applicability to harmful tasks and deeper partnerships with institutions in industry, academia, and civil society.” The document further emphasized that federal AI systems designed to achieve national security goals must protect “human rights and democratic values.” To accomplish this, the NSM calls for developing and implementing AI governance and risk management practices, as well as further collaboration with allies and relevant stakeholders to advance international governance related to AI.

The White House also released a Framework to Advance AI Governance and Risk Management in National Security to accompany the NSM and provide further guidance. The Framework included additional requirements for agencies to “monitor, assess, and mitigate AI risks related to invasions of privacy, bias and discrimination, the safety of individuals and groups, and other human rights abuses.” This document will be updated regularly to ensure future AI systems are “responsible and rights-respecting.” The memo also formally designated NIST’s AI Safety Institute, housed within the Commerce Department, as the nation’s “primary point of contact when it comes to emerging technology” and reinforced the National AI Research Resource (NAIRR) as a valuable resource for AI researchers across industries.

Stakeholder Response

Responses to the NSM have been generally positive, with various stakeholders praising the Biden administration’s continued efforts to provide guardrails for the safe and responsible deployment of AI systems in the federal government. National Economic Advisor Lael Brainard released a statement in support of the NSM, stating that “NSM is just the latest step in a series of actions thanks to the leadership and diplomatic engagement of the President and Vice President.” Senate Intelligence Chairman Mark Warner (D-VA) issued a statement supporting the NSM but urged the Biden administration to work with Congress “to advance a clearer strategy to engage the private sector on national security risks directed at AI systems across the AI supply chain.” Jason Oxman, President of the Information Technology Industry Council, praised the NSM as a means for the US to remain a leader in AI innovation and development. Just Security provided an overview of additional civil society perspectives on the NSM; many praised the guidance’s efforts to establish more guardrails but also expressed concerns about loopholes and limitations.

What We’re Reading

  • Daniel Castro, “National Security Reminds Policymakers What Is at Stake for the United States in the Global AI Race,” Center for Data Innovation.
  • Gabby Miller and Ben Lennett, “White House AI Memo Promises to Balance National Security Interests with Privacy and Human Rights,” Tech Policy Press.
  • Gregory C. Allen and Isaac Goldston, “The Biden Administration’s National Security Memorandum on AI Explained,” Center for Strategic and International Studies.
  • Mohar Chatterjee and Joseph Gedeon, “New Biden policy takes a big swing at AI — and sets political traps,” Politico.

Tech TidBits & Bytes

Tech TidBits & Bytes aims to provide short updates on tech policy happenings across the executive branch and agencies, Congress, civil society, industry, international governance, and courts.

In the executive branch and agencies:

  • The Office of Management and Budget (OMB) issued a memorandum on the federal government’s responsible acquisition of AI technologies. The guidance aims to ensure that the federal government “appropriately manage[s] risks and performance; promote[s] a competitive marketplace; and implement[s] structures to govern and manage their business processes related to acquiring AI.”
  • The National Institute of Standards and Technology (NIST) established a new Standardization Center of Excellence with a $15 million grant to ASTM International to “support US engagement in international standardization for critical and emerging technologies (CETs) essential to US competitiveness and national security.”
  • The Bureau of Industry and Security (BIS) announced proposed rules aiming to limit the use of surveillance tools by repressive governments through US export controls. The proposed rules establish a new control for the harmful use of facial recognition technology for mass surveillance, create a new foreign-security end user control, and strengthen restrictions on Americans aiding foreign governments with surveillance violations.
  • The Securities and Exchange Commission (SEC) published its annual Examination Priorities for FY2025, which includes an analysis of risk for cybersecurity and cryptocurrency markets, as well as emerging financial technologies.
  • The DOJ published a Notice of Proposed Rulemaking, which would implement President Biden’s executive order protecting Americans’ personal data from “countries of concern” by “establishing categorical rules for certain data transactions that pose an unacceptable risk of giving countries of concern or covered persons access to government-related data or bulk U.S. sensitive personal data.”
  • The CFPB issued guidance clarifying that companies using AI-powered or algorithmic tools must continue to adhere to the Fair Credit Reporting Act (FCRA). Under this guidance, “employers must obtain consent to use third-party consumer reporting tools from employees, allow employees to dispute inaccurate information contained in the reports, provide transparency to workers when data is used in adverse employment decisions, and limit how employees share and use the information employers obtained by using third-party consumer reporting tools.”

In civil society:

  • The Center for Democracy and Technology (CDT) published a report on online attacks against 2024 election candidates who are women of color, finding that both offensive and hate speech are disproportionately targeted at women of color.
  • Public Knowledge and four other civil society and consumer advocacy groups wrote a letter to Congress opposing the NO FAKES Act (S.4875), arguing that the bill exacerbates the concerns about democracy posed by AI tools.
  • Freedom House published its annual Freedom on the Net report, which examines global internet freedom across 72 countries. The United States remained at an Internet Freedom Score of 76 out of 100.
  • The Center for AI and Digital Policy (CAIDP) published a report on federal agencies’ use of rights-impacting and safety-impacting AI, finding that “no agency has published rights-impacting and safety-impacting determinations as required by OMB.”
  • A coalition of 21 civil society organizations published a letter to the National Telecommunications and Information Administration (NTIA) in response to a Request for Comment on the environmental, social, and economic impacts of the growth of data centers across the US, highlighting the need for renewable energy as AI drives rising energy demand.

In industry:

  • A coalition of 76 industry organizations and corporations led by the Information Technology Industry Council (ITI) sent a letter to congressional leadership urging Congress to codify and fund the NIST AI Safety Institute.
  • Meta’s Oversight Board issued a warning about the company’s content moderation system, arguing that Meta may be over-enforcing its content rules when removing political speech.
  • The Wall Street Journal published an exclusive on Elon Musk’s relationship with Vladimir Putin, reporting that the two men have been in regular contact about business, personal, and geopolitical matters.

In the courts:

  • The Computer and Communications Industry Association and NetChoice filed suit against Florida Attorney General Ashley Moody over Florida H.B. 3, an “online protections for minors” bill that would require social media platforms to terminate the accounts of minors under 14, prohibit them from creating new accounts, permanently delete all personal information associated with a minor’s terminated account, and more.
  • A federal district judge ruled largely in favor of the plaintiffs in multi-district litigation brought by hundreds of local school districts against Meta, Snap, TikTok, and YouTube. The decision allows the school districts’ negligence and public nuisance claims to proceed in part, permitting them to seek damages for expenses incurred as a result of students’ addiction to social media platforms. Some of the plaintiffs’ theories of injury were, however, dismissed on Section 230 and First Amendment grounds.
  • The Social Media Victims Law Center and the Tech Justice Law Project filed a lawsuit against Character.AI, its two cofounders, and Google on behalf of a parent whose 14-year-old son took his life after interacting with and becoming dependent on role-playing AI "characters" offered by the company. Filed in the US District Court for the Middle District of Florida, the 126-page complaint alleges the defendants knew the app’s design was dangerous and would be harmful to a significant number of minors, failed to exercise reasonable care toward minors on the app, and deliberately targeted underage kids.
  • The US Court of Appeals for the Third Circuit denied TikTok's petition for rehearing in the Anderson v. TikTok case. In August, a panel of judges at the Third Circuit reversed the district court’s decision to dismiss the case, reasoning, based in part on the Supreme Court’s recent NetChoice decision, that “platforms engage in protected first-party speech under the First Amendment when they curate compilations of others’ content via their expressive algorithms,” thus making the company’s algorithms subject to liability.
  • A judge in Epic Games’ case against Google granted a permanent injunction prohibiting Google from entering into agreements that would harm would-be competitors to the Google Play Store. Google appealed the order and filed an emergency motion to partially stay the injunction pending its appeal.
  • In response to the passage of California AB 2655 and AB 2839, Christopher Kohls, the creator of a deceptive AI-generated video involving Vice President Harris that Elon Musk promoted on X, filed a complaint challenging the constitutionality of the two laws. A California district court issued a preliminary injunction blocking enforcement of AB 2839 on the grounds that the law “does not pass constitutional scrutiny because the law does not use the least restrictive means available for advancing the State’s interest here."
  • The state of Texas filed a lawsuit against TikTok for violating the state’s Securing Children Online through Parental Empowerment (“SCOPE”) Act. It is the state attorney general’s first enforcement action under the law, which went into effect in September.

In Congress:

The following bills were introduced across the House and Senate in October:

  • Ending FCC Meddling in Our Elections Act - H.R.9913. Introduced by Reps. Andrew Clyde (R-GA), James Baird (R-IN), Harriet Hageman (R-WY), Claudia Tenney (R-NY), and Doug LaMalfa (R-CA), the bill would “prohibit the Federal Communications Commission from promulgating or enforcing rules regarding disclosure of artificial intelligence-generated content in political advertisements.”
  • Next Generation Military Education Act - H.R.9903. Introduced by Rep. Rick Larsen (D-WA), the bill would provide Department of Defense personnel with increased access to training and education in artificial intelligence and machine learning.
  • Facial Recognition Ban on Body Cameras Act - H.R.9954. Introduced by Reps. Donald Beyer (D-VA) and Ted Lieu (D-CA), the bill would “prohibit use of remote biometric surveillance technology on any data acquired by body-worn cameras of law enforcement officers.”

