California Labor & Employment Law Blog
Federal Government Signals Enforcement Priority Regarding Artificial Intelligence Tools in the Workplace
May 22, 2023

Topics: Discrimination, Harassment & Retaliation, Employee Hiring, Discipline & Termination, Workplace Privacy

On top of last week’s Senate hearing on artificial intelligence (“AI”) featuring the testimony of OpenAI’s CEO, Sam Altman, the Equal Employment Opportunity Commission (“EEOC”) and the Federal Trade Commission (“FTC”) issued twin advisories addressing the potential risks that the use of AI and other new technologies may pose for employers.

The EEOC issued a “technical assistance document” entitled “Assessing Adverse Impact in Software, Algorithms, and Artificial Intelligence Used in Employment Selection Procedures Under Title VII of the Civil Rights Act of 1964” (“AI Technical Guidance”), aimed at preventing disparate impact discrimination against job seekers and employees that might be triggered by AI or other automated software and algorithms.

And the FTC issued a “policy statement” warning that the increasing use of biometric information, including technology powered by machine learning, raises significant privacy and data security concerns and the potential for bias and discrimination.

The EEOC’s AI Technical Guidance

Generally speaking, Title VII of the Civil Rights Act of 1964 prohibits employment discrimination based on race, color, religion, sex, or national origin. Title VII prohibits not only intentional discrimination but also “disparate impact” discrimination, which arises when facially neutral selection procedures disproportionately exclude persons with particular protected characteristics and are not job-related and consistent with business necessity. The AI Technical Guidance rolled out this week seeks to illuminate and prevent such disparate impact discrimination that might be triggered by automated systems, algorithms, or AI processes.

The AI Technical Guidance gives examples of the types of AI or algorithmic decision-making tools that employers sometimes rely on: “resume scanners that prioritize applications using certain keywords; employee monitoring software that rates employees on the basis of their keystrokes or other factors; ‘virtual assistants’ or ‘chatbots’ that ask job candidates about their qualifications and reject those who do not meet pre-defined requirements; video interviewing software that evaluates candidates based on their facial expressions and speech patterns; and testing software that provides ‘job fit’ scores for applicants or employees regarding their personalities, aptitudes, cognitive skills, or perceived ‘cultural fit’ based on their performance on a game or on a more traditional test.”

Some key takeaways from the AI Technical Guidance are as follows:

  1. AI systems and algorithmic decision-making tools that create a disparate impact may give rise to employer liability, even if the tool was developed or administered by an outside vendor. Employers are advised to ask vendors, at a minimum, whether steps have been taken to evaluate whether the tool causes a substantially lower selection rate for individuals with protected characteristics; if it does, the employer should consider whether use of the tool is job-related and consistent with business necessity, and whether alternatives exist that would meet the employer’s needs with less of a disparate impact. However, even if a vendor advises an employer that the tool will have no disparate impact, the employer could still be liable if it turns out that the tool does create an adverse impact.
     
  2. Employers should assess whether the tool has an adverse impact on protected groups as set forth in the 1978 Uniform Guidelines on Employee Selection Procedures (UGESP). Under the UGESP’s “four-fifths rule,” the selection rate for one group is considered “substantially” different from the selection rate for another group if the ratio of the two rates is less than 4/5, or 80% (a simple worked illustration follows this list). Employers should keep in mind, however, that the four-fifths rule is a general rule of thumb only, and smaller differences in selection rates may still indicate adverse impact if they are statistically significant.
     
  3. Employers are encouraged to conduct self-analyses on an ongoing basis to determine whether their employment practices, including AI systems, have a disparate negative impact on protected groups so that they can proactively change those practices going forward.
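
To make the four-fifths rule concrete, here is a minimal sketch in Python using purely hypothetical applicant counts (invented for illustration and not drawn from the EEOC’s AI Technical Guidance) showing how the selection-rate comparison works:

    # A minimal, purely hypothetical illustration of the four-fifths rule.
    # The applicant counts below are invented for demonstration purposes and
    # are not drawn from the EEOC's AI Technical Guidance.

    def selection_rate(selected: int, applicants: int) -> float:
        """Fraction of a group's applicants who were selected by the tool."""
        return selected / applicants

    # Hypothetical screening results: 48 of 80 applicants selected in group A,
    # 12 of 40 applicants selected in group B.
    rate_a = selection_rate(48, 80)   # 0.60
    rate_b = selection_rate(12, 40)   # 0.30

    # Compare the lower selection rate to the higher one.
    ratio = min(rate_a, rate_b) / max(rate_a, rate_b)   # 0.50

    # Under the four-fifths rule of thumb, a ratio below 0.80 suggests a
    # "substantially" lower selection rate; here 50% < 80%, so the tool's
    # impact would warrant further, statistically rigorous review.
    print(f"Selection rates: {rate_a:.0%} vs. {rate_b:.0%}; ratio: {ratio:.0%}")

A ratio below 80% does not by itself establish a violation, but it is the kind of red flag the EEOC expects ongoing self-analyses to surface; conversely, a ratio at or above 80% does not end the inquiry, since smaller differences may still indicate adverse impact if they are statistically significant.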

Employers using or considering the use of AI tools in any of their hiring, promotion, or firing procedures should be aware that the EEOC has taken a particular interest in the potential for disparate impact discrimination resulting from the use of these tools. This is not the first time that the EEOC has opined on AI issues as part of its Artificial Intelligence and Algorithmic Fairness Initiative. Although technical assistance documents do not carry the force of law and are meant to provide clarity for employers regarding the application of existing law to new contexts, they are a clear signal of enforcement policy that should not be ignored.

The EEOC has already pursued enforcement litigation against employers over allegedly discriminatory software and AI-based hiring practices. For example, last year, the EEOC alleged that an employer’s online recruitment software automatically rejected older applicants because of their age. EEOC v. iTutorGroup, Inc., E.D.N.Y. Case No. 1:22-cv-02565. Private lawsuits, too, are on the rise, as illustrated by a recent federal class action alleging that an employer’s artificial intelligence systems and screening tools disqualified Black, disabled, or older applicants at a disproportionate rate in violation of Title VII. Mobley v. Workday, Inc., N.D. Cal. Case No. 3:23-cv-00770-TSH.

The FTC’s Policy Statement

The FTC’s Policy Statement warns that false or unsubstantiated claims about the accuracy or efficacy of biometric information technologies or about the collection and use of biometric information may violate the FTC Act. It also notes that the FTC will consider several factors to determine whether an employer’s use of biometric information or biometric information technology violates the FTC Act, including:

  • Failing to assess foreseeable harms to employees and consumers before collecting biometric information;
  • Failing to promptly address known or foreseeable risks and identify and implement tools for reducing or eliminating those risks;
  • Engaging in surreptitious and unexpected collection or use of biometric information;
  • Failing to evaluate the practices and capabilities of third parties, including affiliates, vendors, and end users, who will be given access to consumers’ biometric information or will be charged with operating biometric information technologies;
  • Failing to provide appropriate training for employees and contractors whose job duties involve interacting with biometric information or technologies that use such information; and
  • Failing to conduct ongoing monitoring of technologies that the business develops, offers for sale, or uses, in connection with biometric information to ensure that the technologies are functioning as anticipated and that the technologies are not likely to harm consumers.

Like the EEOC, the FTC will test its policies through litigation. In 2019, Facebook, Inc. settled with the FTC, agreeing to pay a $5 billion penalty, submit to restrictions, and modify its corporate structure to resolve charges that the company violated a 2012 FTC order by deceiving users about their ability to control the privacy of their personal information. More recently, the FTC settled its claims against the developer of a photo app that allegedly deceived consumers about its use of facial recognition technology and its retention of the photos and videos of users who deactivated their accounts. While this settlement did not include any monetary penalty, it required numerous changes to the company’s business and operations, in addition to compliance and reporting obligations to the FTC.

Conclusion

Skynet is not sending out the killer robots, but federal watchdogs are paying close attention to guard against the privacy violations and discrimination that they foresee arising from AI and other technology. Employers should evaluate their systems to protect against litigation before it starts; as guidance increases, regulators and courts will hold employers to higher standards on these issues. Employers should contact their favorite CDF attorney to discuss a self-audit to ensure that AI tools that appear neutral on their face are not creating a disparate impact on protected populations or other inadvertent violations.

About CDF

For over 25 years, CDF has distinguished itself as one of the top employment, labor and immigration firms in California, representing employers in single-plaintiff and class action lawsuits and advising employers on related legal compliance and risk avoidance. We cover the state, with five locations from Sacramento to San Diego.


About the Editor in Chief

Sacramento Office Managing Partner and Chair of CDF’s Traditional Labor Law Practice Group. Mark has been practicing labor and employment law in California for thirty years. His practice has a special emphasis on the representation of California employers in union-management relations and handling federal and state court litigation and administrative matters triggered by all types of employment-related disputes. He is also adept at providing creative and practical legal advice to help minimize the risks inherent in employing workers in California. He was recently named “Sacramento Lawyer of the Year” in Employment Law-Management for 2021 by Best Lawyers®.

CDF Labor Law LLP © 2024
