December 23, 2024

Federal Agencies Announce Oversight Focus on Use of Automated Systems

Federal agencies charged with enforcing laws to combat discrimination and bias recognize the increasing use of automated systems, including those leveraging artificial intelligence (AI), to help organizations make decisions affecting consumers and employees. Four agencies recently issued a joint statement identifying the potential for discrimination in the use of automated systems. They emphasized that current laws against discrimination and bias apply to such systems.

In their statement, the Equal Employment Opportunity Commission, the Department of Justice, the Federal Trade Commission, and the Consumer Financial Protection Bureau identified how automated systems may contribute to unlawful discrimination or otherwise violate federal law:

  • Data and datasets: Automated systems often rely on vast amounts of data to find patterns or correlations and then apply those patterns to new data to perform tasks or make recommendations. System results can be skewed by unrepresentative or imbalanced datasets, or by datasets containing other kinds of errors.
  • Lack of transparency: The internal workings of some automated systems are opaque to their users, limiting users’ ability to assess whether a system is fair and complies with legal requirements.
  • Design errors: Automated systems may be designed on the basis of flawed assumptions about their users, the relevant context, or the practices or procedures the system is intended to replace, creating a disconnect between developers and users.

The agencies each described their legal authority to regulate automated systems and announced their resolve to enforce existing laws to protect the rights of consumers and employees. The announcement did not include any technical assistance on complying with specific laws.

The announcement can be found at https://www.eeoc.gov/joint-statement-enforcement-efforts-against-discrimination-and-bias-automated-systems.