The California Supreme Court issued a ruling this week that expands the definition of "employer" under the state's main discrimination statute, the Fair Employment and Housing Act (FEHA). This expansion not only increases the number of defendants that can be swept into a FEHA action, but it may also have a significant impact on California's burgeoning efforts to regulate the use of artificial intelligence in employment decisions.

Background

As we previously noted, on March 16, 2022, the U.S. Court of Appeals for the Ninth Circuit certified to the Supreme Court of California the following question:
Does California’s Fair Employment and Housing Act, which defines “employer” to include “any person acting as an agent of an employer,” permit a business entity acting as an agent of an employer to be held directly liable for employment discrimination?1
In Raines v. U.S. Healthworks Medical Group, the California Supreme Court answered this question in the affirmative, first concluding that an employer's business-entity "agents" may be considered "employers" for purposes of the statute, and then holding that such an agent may be held directly liable for employment discrimination in violation of the Fair Employment and Housing Act when it has at least five employees2 and "when it carries out FEHA-regulated activities on behalf of an employer." The court recognized that its ruling "increases the number of defendants that might share liability" when a plaintiff brings FEHA-related claims against their employer.

In reaching its holding, the court analyzed the language of FEHA Section 12926(d), stating that the "most natural reading" supports the determination that an employer's business-entity agent "is itself an employer for purposes of FEHA." The court further addressed the statute's legislative history, tracing the origins of the definition of "employer" to the Fair Employment Practices Act (FEPA), enacted in 1959, which adopted the National Labor Relations Act's (NLRA) "agent-inclusive language." The court also looked to federal case law, finding support for the idea that "an employer's agent can, under certain circumstances, appropriately bear direct liability under the federal antidiscrimination laws." Significantly, the court found that its prior rulings in Reno v. Baird3 and Jones v. Lodge at Torrey Pines Partnership,4 which did not extend personal liability for claims of discrimination or retaliation to supervisors, did not dictate the result here.

The court also reviewed policy reasons that could impact the reading of the statutory language:
- Imposing liability on an employer’s business entity agents broadens FEHA liability to the entity that is “most directly responsible for the FEHA violation” and “in the best position to implement industry-wide policies that will avoid FEHA violations”;
- Imposing liability on an employer’s business entity agents “furthers the statutory mandate that the FEHA ‘be construed liberally’ in furtherance of its remedial purposes”; and
- The court’s reading of the statutory language “will not impose liability on individuals who might face ‘financial ruin for themselves and their families’” if held directly liable under the FEHA.
Equally important are the rulings the court did not make in Raines. The California Supreme Court noted that it was not deciding the significance, if any, of an employer’s control over an agent’s acts that gave rise to a FEHA violation, nor did the court decide whether its conclusion extends to business-entity agents that have fewer than five employees. Critically, it also did not address the scope of a business-entity agent’s potential liability pursuant to FEHA’s aiding-and-abetting provision.
Impact on California’s Efforts to Regulate AI in Employment Decisions

Raines will likely have a significant impact on businesses that provide services or otherwise assist employers in the use of automated-decision systems for recruiting, screening, hiring, compensation, and other personnel management decisions. Coupled with proposed revisions to the state’s FEHA regulations, this expansion of the statute’s reach takes California one step closer to establishing joint and several liability across the AI tool supply chain.

Under the Fair Employment & Housing Council’s proposed regulations5 addressing the use of artificial intelligence, machine learning, and other data-driven statistical processes to automate decision-making in the employment context, it is unlawful for an employer to use selection criteria, including automated-decision systems, that screen out, or tend to screen out, an applicant or employee (or a class of applicants or employees) on the basis of a protected characteristic, unless the criteria are demonstrably job-related and consistent with business necessity. The draft regulations explicitly define “agent” broadly to include third-party providers of AI-driven services related to recruiting, screening, hiring, compensation, and other personnel processes, and redefine “employment agency” to similarly cover these third-party entities.6 One key proposal, under the aforementioned aiding-and-abetting provision, even extends liability to the “design, development, advertisement, sale, provision, and/or use of an automated-decision system.” The high court’s decision in Raines unquestionably supports the Council’s proposed revisions, and it enhances joint and several liability for artificial intelligence tool supply chains regardless of the final form of the Council’s regulations.