Are There Any Risks to Using AI to Enhance Diversity in the Workplace?

Aug 4, 2023

Q: Are there any risks to using AI to enhance diversity in the workplace?

A: The use of artificial intelligence (AI) has become increasingly prevalent in hiring decisions, particularly as a means to increase diversity in employment. In January 2023, the chair of the Equal Employment Opportunity Commission (EEOC) estimated that 83% of employers rely on artificial intelligence in decision-making. When used thoughtfully, AI tools can help employers more effectively analyze the data and trends relevant to improving diversity, such as employee retention, pay inequality, and bias in job postings and hiring practices. For instance, generative AI platforms can support retention of diverse employees by preparing career path guides tailored to an employee’s skills and values, giving those employees transparent visibility into opportunities for internal career growth. Employers may also use AI to assist in screening candidates during recruiting, helping to avoid the unconscious biases that human screeners bring to the process. Despite the benefits and growing adoption of AI, the EEOC and the Biden administration have recently warned of inherent risks that employers should bear in mind when leveraging AI to enhance workplace diversity.

One major risk is inadvertent employment discrimination. Even employers acting without discriminatory intent may face disparate impact discrimination claims if individuals in certain protected classes are disproportionately “screened out” by algorithms. For example, an AI tool that tests candidates’ personalities or skills for “cultural” fit may exclude candidates based on race or national origin, and an algorithm that rejects applicants with resume gaps may disproportionately screen out those whose gaps are explained by a disability.
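As a rough illustration, many adverse impact analyses begin with the “four-fifths rule”: comparing each group’s selection rate against that of the most-selected group and flagging anything below 80%. The short Python sketch below uses hypothetical applicant data, group labels, and helper functions; it is a first-pass check of the kind a bias audit might report, not a validated audit methodology or legal advice.

# A minimal sketch of a "four-fifths rule" adverse impact check.
# The applicant data, group labels, and helper functions are hypothetical.
from collections import defaultdict

def selection_rates(applicants):
    """Compute the selection rate (selected / total) for each group."""
    totals, selected = defaultdict(int), defaultdict(int)
    for group, was_selected in applicants:
        totals[group] += 1
        if was_selected:
            selected[group] += 1
    return {g: selected[g] / totals[g] for g in totals}

def four_fifths_flags(rates, threshold=0.8):
    """Flag groups whose selection rate falls below 80% of the highest group's rate."""
    top = max(rates.values())
    return {g: rate / top for g, rate in rates.items() if rate / top < threshold}

# Hypothetical screening outcomes: (group label, passed the AI screen?)
applicants = [
    ("Group A", True), ("Group A", True), ("Group A", False), ("Group A", True),
    ("Group B", False), ("Group B", True), ("Group B", False), ("Group B", False),
]

rates = selection_rates(applicants)
print("Selection rates:", rates)
print("Groups below the four-fifths threshold:", four_fifths_flags(rates))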

Local governments such as New York City have enacted legislation to highlight and mitigate AI’s potential for employment discrimination. Under NYC Local Law Int. No. 144, employers are prohibited from using AI-driven employment decision tools unless the technology undergoes a bias audit, a summary of the results of which must be posted publicly on the employer’s website. Other jurisdictions, including Washington, D.C., New Jersey, and Massachusetts, have followed suit, drafting legislation with similar requirements.

Employers also should be mindful of discovery challenges that could arise in the event of an EEOC charge or litigation of an employment discrimination claim stemming from the use of AI tools. Depending on the AI platform at issue and how it was deployed, it may not be possible to fully preserve, collect, review, or produce the audit trail of AI inputs and outputs that factored into certain employment decisions, let alone the internal algorithms and underlying training data that informed them. Relying on the potentially narrower scope of regulatory records retention requirements may not suffice in the discovery context.
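One mitigation, sketched below under assumed conventions, is to build the audit trail at the point of decision: for each AI-assisted screening call, append a timestamped record of the tool, its version, the inputs actually sent, and the output actually returned to a durable log. The field names, JSON Lines storage, and log_decision helper are illustrative assumptions, not a compliance standard; retention scope and format should be set with counsel and the organization’s e-discovery team.

# A minimal sketch of one way to preserve an audit trail of AI-assisted
# screening decisions. The field names, file-based storage, and
# log_decision helper are illustrative assumptions, not a compliance standard.
import json
from datetime import datetime, timezone
from pathlib import Path

LOG_PATH = Path("ai_decision_audit_log.jsonl")  # append-only JSON Lines file

def log_decision(candidate_id, tool_name, tool_version, inputs, output):
    """Append one timestamped record of an AI-assisted screening decision."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "candidate_id": candidate_id,
        "tool_name": tool_name,
        "tool_version": tool_version,
        "inputs": inputs,    # the features or prompt actually sent to the tool
        "output": output,    # the score or recommendation actually returned
    }
    with LOG_PATH.open("a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")

# Hypothetical usage for a single resume-screening call:
log_decision(
    candidate_id="cand-00123",
    tool_name="resume_screener",
    tool_version="2024.07",
    inputs={"years_experience": 6, "resume_gap_months": 14},
    output={"recommendation": "advance", "score": 0.82},
)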

Takeaways

While AI can be an effective tool for identifying and remedying deficiencies in workplace diversity, for example through bias screening and analysis of employee satisfaction trends, employers considering AI tools for these purposes should first evaluate whether the tools could cause an adverse impact on a protected group. If so, employers must ensure that the AI tool is job-related and consistent with business necessity, or identify an alternative tool that does not have an adverse impact on any protected group. In addition, employers should:
