California Privacy Agency kicks off major rulemaking on a fast track that could impose regulations on AI


Data protection, privacy and security alert

AI-specific regulation in the United States has been very limited so far. That is likely to change as part of broader California privacy rulemaking over the coming year. This rulemaking is expected to lead California to regulate, by 2023, many automated decision-making processes that use the personal data of California residents.

The new California Privacy Protection Agency (CPPA) has invited pre-rulemaking comments, due November 8, on this and several other important privacy and data security issues. As for AI, the agency, created by the California Privacy Rights Act (CPRA) initiative approved by voters in November 2020, is charged under that initiative with:

(16) Issuing regulations governing access and opt-out rights with respect to businesses' use of automated decision-making technology, including profiling, and requiring that businesses' responses to access requests include meaningful information about the logic involved in those decision-making processes, as well as a description of the likely outcome of the process for the consumer.

Cal. Civ. Code § 1798.185(a)(16).

Thus, three potential rights are at issue:

(1) a right of access to meaningful information regarding the logic involved in the covered automated decision-making processes, tempered by other required regulations protecting businesses' trade secrets from consumer requests, id. § 1798.145(a)(3);

(2) a right to a description of the likely outcome of a process as it affects the consumer; and

(3) an opt-out right with respect to automated decision-making processes.

Unlike the specific privacy rights set out elsewhere in the CPRA, this text on AI regulation (along with text addressing several other new regulatory topics) is general and open-ended. It does not address the scope of, procedures for, or exceptions to these rights, although the CPPA may fill those gaps by regulation.

Additionally, the CPPA's Board includes Vinhcent Le, a leading AI-equity advocate at the Greenlining Institute. Mr. Le and the four other board members are responsible for overseeing the development of the rules and have final authority to approve them.

All of these factors make it important for companies that process, or plan to process, the personal data of California residents in AI processes to follow this rulemaking closely and consider filing comments.

The CPPA's invitation for pre-rulemaking comments poses several questions concerning the scope of the agency's automated decision-making authority:

  1. What activities should be considered to constitute "automated decision-making technology" and/or "profiling" (a critical issue);
  2. When consumers should be able to access information about a business's use of automated decision-making technology, and what processes consumers and businesses should follow to facilitate access;
  3. What information businesses should provide to consumers in response to access requests, including what businesses must do to provide "meaningful information about the logic" involved in the automated decision-making process; and
  4. The scope of consumers' opt-out rights with respect to automated decision-making, and the processes consumers and businesses should follow to facilitate opting out (another critical issue).

CPPA Board Chair Jennifer Urban told a California Bar webinar that the board will review the pre-rulemaking comments submitted by November 8 with the goal of publishing proposed rules for comment in January or February 2022, and that she hopes to publish final rules in May 2022, with the rules taking effect in 2023.

Even if that date slips by a few months, it is a tight timeline for a new agency that must address not only this issue but a long list of others. The timeline should still allow for at least one, if not two, more opportunities to comment before the rules are finalized.

Companies that are engaged in, or considering or investing in, AI processing that could be affected should consider submitting comments. Even if they do not comment, they will want to follow this important proceeding closely.

