
New California bills aim to limit uses of consumer data and AI systems: What companies should know


California has long been a frontrunner when it comes to consumer protection and privacy laws, and 2025 is shaping up to be no exception. State lawmakers and regulators are advancing a wave of new bills and regulations across a range of issues, including those specifically aimed at addressing how companies tailor pricing, use consumer data, and deploy artificial intelligence (AI) systems. Should these bills become law, companies in the tech and telecom sectors may particularly feel their effects, but their impacts will likely be far-reaching and could extend beyond the state’s borders.

State-level activity in the consumer protection space is accelerating, and the regulatory environment around consumer data use and AI is growing more complex.

With potential new obligations on the horizon, from transparency requirements around pricing to mandatory impact assessments before using high-risk AI systems, it is important to carefully monitor these developments.

Overview of proposed California bills and their potential impacts

I. Restrictions on surveillance pricing

In the 2025 legislative session, California lawmakers have been targeting so-called “surveillance pricing,” a term used to describe the practice of using personal data to adjust prices for individual consumers. Two pending bills, SB 259 and AB 446, would restrict these practices.

SB 259 - The Fair Online Pricing Act would amend the California Consumer Privacy Act (CCPA) to prohibit businesses from using certain personal and device-specific data (e.g., hardware state, installed software, and geolocation data) to set or generate individualized prices for consumers, with narrow exceptions for managing real-time demand and for differences in the costs or demands associated with providing goods or services to different consumers.

Critics of the bill claim that it could interfere with legitimate pricing strategies, dynamic offers, and market efficiencies already protected under existing privacy law, and that it could be interpreted to apply more broadly than just to traditional retailers.

AB 446 - AB 446 would ban businesses from using personal information (as defined by the CCPA) or aggregate information relating to groups or categories of consumers (with some exceptions) to charge different prices for the same product or service. This would include setting customized prices based on internet browsing history, biometric identification (e.g., facial recognition via CCTV cameras), location data, and certain other data.

Critics warn that the bill’s broad language could unintentionally penalize certain routine practices.

II. Restrictions on use of AI systems

California is also advancing legislation on the use of automated decision systems (ADS), driven by concerns about potential discrimination and similar to proposals in other states. SB 420 targets high-risk ADS, aiming to protect consumers from discriminatory outcomes caused by AI technologies. AB 1018 focuses on ADS used to make consequential decisions on certain listed topics. Both bills would impose compliance requirements on developers and deployers of ADS.

SB 420 - SB 420 would regulate high-risk ADS with the goal of protecting individuals from discrimination resulting from the use of AI technologies. The bill would require developers and deployers of high-risk ADS to conduct impact assessments before releasing or implementing these systems, evaluating potential risks and effects on consumers. It would also mandate notifying individuals affected by high-risk automated decisions and require companies to establish a risk governance program to oversee responsible use of these technologies.

Proponents of SB 420 claim that it would make AI systems fairer and more transparent, while opponents argue that the bill could make it harder for companies in California to deliver innovative, AI-powered services.

The bill stands to impact businesses developing or deploying AI systems and could introduce new operational and compliance challenges around existing AI tools and systems. Further, the risk of overlapping obligations as the California Privacy Protection Agency (CPPA) finalizes its proposed revised draft regulations under the CCPA creates additional uncertainty and potential compliance conflicts for companies already managing AI-driven services and systems in California.

AB 1018 - AB 1018 would regulate ADS used to make “consequential decisions,” including in the context of employment, education, provision of legal services, and Internet and telecommunications access (among many other contexts). Using Colorado’s AI Act as a starting point, the bill goes further by placing significant responsibilities on both AI developers and deployers, including performance evaluations, third-party audits, transparency mandates, opt-out rights for affected individuals, expanded definitions of consequential decisions, and data management rules, with penalties of up to $25,000 per violation.

Critics of the bill argue it could restrict California businesses from using widely adopted AI tools, increase compliance burdens, and conflict with existing laws in California and other states, as well as with the forthcoming CPPA automated decision-making regulations.

In addition, AB 1018 is notable because it expressly identifies Internet and telecom access as a consequential decision. Allowing consumers to opt out of AI-driven processes could impose significant operational and administrative burdens on companies in this space.

III. Beyond California: Nationwide trends

States are taking varied approaches in protecting consumers and regulating AI but are similarly focused on higher-risk activities. For example, Texas recently enacted a broad AI law, the Texas Responsible AI Governance Act, requiring, among other things, disclosures about certain AI interactions and prohibiting the development or deployment of AI systems for purposes such as manipulating human behavior, infringing constitutional rights, engaging in unlawful discrimination, or producing sexually explicit or harmful content.

This growing wave of state-level legislation is creating an increasingly fragmented regulatory landscape, posing operational and compliance challenges for businesses as they work to navigate differing obligations around the use of consumer data, AI systems, and automated decision-making.

While federal legislators recently proposed a ten-year moratorium on state AI regulation as part of the One Big Beautiful Bill Act, the moratorium was ultimately not included in the final Act, and states remain eager to propose new AI legislation. Staying ahead of these developments will be critical for maintaining compliance and managing risk under a growing patchwork of regulations.

IV. Key takeaways

Moving forward, it will be particularly important for companies to:

  • Stay Proactive on State Legislation: Actively track and assess the impacts of state-level legislative and regulatory proposals, particularly those that address how automated systems and personal data are used.
  • Engage Early in the Process: Identify opportunities to get involved in the legislative process and advocate for changes through testimony, written advocacy, and other forms of engagement while bills are still being debated and amended.
  • Assess Current Practices: Evaluate whether current practices could trigger obligations under emerging laws, such as conducting AI risk assessments, completing impact analyses, disclosing pricing practices, or implementing governance programs for AI uses.
  • Prepare for Compliance Shifts: Navigate this evolving landscape with strong cross-functional coordination with legal, compliance, product, and engineering teams to help ensure business operations stay ahead of rapidly changing regulatory expectations.

Authored by Mark W. Brennan, Bret S. Cohen, Katy J. Milner, Alyssa Golay, Aaron Lariviere, and Jordyn Johnson.
