Workplace AI Risks for Employers
Why AI Challenges in the Workplace Should Worry Employers

Artificial intelligence refers to computers or computer-controlled machines that can simulate human intelligence in various ways.

These machines can range from a laptop or cellphone to computer-controlled robotics. Specialized software directs the behavior of the machine to mimic human intelligence and capabilities; it is the coupling of hardware and software that brings about artificial intelligence.
According to the Equal Employment Opportunity Commission (EEOC), 83% of employers and 99% of Fortune 500 companies use some type of automated tool in their hiring processes. The EEOC has issued guidance to help employers manage AI without violating discrimination protections.
Two problems compound the risk. First, the front-line HR managers and procurement staff who routinely source AI hiring tools often do not understand the risks. Second, AI vendors usually will not disclose their testing methods, and they demand that companies provide contractual indemnification and bear all risk for any alleged adverse impact of the tools.
Employers can’t rely on a vendor’s assurances that its AI tool complies with Title VII of the Civil Rights Act of 1964. If the tool results in an adverse discriminatory impact, the employer may be held liable, the U.S. EEOC clarified in new technical assistance issued on May 18, 2023. The guidance explains how Title VII applies to automated systems that incorporate artificial intelligence in a range of HR-related uses.
A primary area impacted by the rise of AI is the workforce, namely the labor and employment laws in place to protect employees from discrimination, harassment, and other forms of harm in the workplace.
Unfortunately, a lack of federal regulations around AI use has already opened the door to lawsuits as we continue into 2024 and beyond, especially when it comes to workers’ rights.
AI is being used in the workplace to manage the full employee life cycle, from sourcing and recruitment to performance management and employee development. Recruitment and hiring are by far the most popular areas where AI is used for employment-related purposes. However, AI can be utilized in almost any human resource discipline, as listed below.
- Privacy Breaches
- Transparency
- Accountability
- Hiring & Selection
- Discrimination
- Violations of the Americans with Disabilities Act (ADA)
- Confidentiality & Data Privacy
Generative AI, such as OpenAI’s ChatGPT and Google’s Bard, allows users to ask questions conversationally to find answers or create or edit written content.
For example, a manager might ask the bot to write an employee recognition letter, or a recruiter might prompt it to draft a job description.
While the output from generative AI programs can be impressive, human review and final editing are almost always necessary.
Other ethical concerns include whether it will replace human workers, the rise of fake media and disinformation, and creating transparency in AI decision-making, according to Forbes.
Consider common issues that AI can lead to in the workplace:
Employment Discrimination
A primary concern about AI in employment is an increased risk of adverse consequences for employees with certain protected characteristics. For instance, an AI tool may inadvertently discriminate against certain groups, such as people of color or applicants who identify as queer or neurodivergent, when analyzing resumes or making hiring decisions.
While AI as a technological concept is not inherently biased, such technologies can develop biases through machine learning applications, making it all the more crucial for wronged workers to speak up against workplace discrimination and understand what constitutes Title VII violations. According to the EEOC, Title VII forbids discrimination in employment based on race, color, religion, sex, or national origin, with some limited exceptions.
Employment discrimination is a significant risk of AI use by employers. When companies fail to meet their ethical and legal obligations under the law (for example, by failing to understand how their AI algorithms function or by not recognizing discriminatory violations caused by AI use), they open the door to employment lawsuits. It’s essential for employees to speak out against discrimination in the workplace to prevent further harm and hold companies accountable.
Some employers have started using AI and algorithms to help with decisions such as recruiting, interviewing, hiring, pay, and promotions. However, AI has the potential to perpetuate unlawful bias, automate unlawful discrimination, and produce other harmful outcomes based on race, gender, disability, or other protected characteristics, the HUD/DOJ Joint Statement noted.
Unchecked AI poses threats to fairness in ways that are already being felt, researchers say. While machines crunching numbers might seem capable of taking human bias out of the equation, that’s not what’s happening.
Outcomes from AI tools, including employment decisions, can be skewed by datasets with unrepresentative or imbalanced data, historical bias, or other types of errors, the joint statement noted.
For example, AI can result in discriminatory employment decisions if it relies on datasets based on a workforce that’s predominately white and male.
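To make the dataset problem concrete, the minimal Python sketch below compares each group’s share of a historical hiring dataset against a benchmark population share. The field name, tolerance, and numbers are hypothetical, invented only to illustrate the check.

```python
# Minimal sketch: flag groups whose share of the training data diverges
# from a benchmark population share. All names and numbers are hypothetical.
from collections import Counter

def representation_report(records, field, benchmark, tolerance=0.10):
    """Compare each group's share of the dataset to a benchmark share."""
    counts = Counter(r[field] for r in records)
    total = sum(counts.values())
    for group, share in benchmark.items():
        observed = counts.get(group, 0) / total
        flag = "SKEWED" if abs(observed - share) > tolerance else "ok"
        print(f"{group}: {observed:.0%} of data vs {share:.0%} benchmark -> {flag}")

# Hypothetical historical hiring data dominated by one group.
past_hires = [{"gender": "male"}] * 85 + [{"gender": "female"}] * 15
representation_report(past_hires, "gender", {"male": 0.5, "female": 0.5})
```

A tool trained on the flagged dataset would learn the patterns of a predominately male workforce, which is exactly the skew the joint statement warns about.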
AI poses some of the greatest modern-day threats when it comes to discrimination. Fortunately, we have an arsenal of bedrock civil rights laws that give us the tools to hold bad actors accountable. Those laws include the Civil Rights Act of 1964, the ADA, the Fair Credit Reporting Act (FCRA), and the Equal Credit Opportunity Act (ECOA).
EEOC Cracking Down on AI Bias
The number of employers using AI is skyrocketing: Nearly 1 in 4 organizations reported using automation or AI to support HR-related activities, including recruitment and hiring, according to a 2022 survey by SHRM.
However, the National Telecommunications and Information Administration (NTIA) noted that a growing number of incidents have occurred where AI and algorithmic systems have led to harmful outcomes.
For example, the U.S. EEOC sued an English-language tutoring service for age discrimination. The agency alleged that the employer’s AI algorithm automatically rejected older applicants.
The EEOC has focused heavily on preventing AI-driven discrimination at work. The agency has held public hearings and released guidance focused on preventing bias against applicants and employees, particularly those with disabilities.
On Aug. 9, 2023, a tutoring company agreed to pay $365,000 to settle an AI lawsuit with the EEOC.
The settlement comes on the heels of multiple EEOC warnings to employers about potential discrimination associated with the use of AI for hiring and workplace decisions.
According to the EEOC’s complaint against iTutorGroup, the company programmed its application review software to automatically reject female applicants aged 55 or older and male applicants aged 60 or older.
One rejected applicant realized they may have been the victim of discrimination when, after resubmitting an application with a more recent birth date but which was otherwise identical to the rejected application, they were offered an interview.
What Should Employers Consider?
Instead of avoiding AI altogether, employers can take measures to prevent bias and illegal discrimination. Understanding the algorithms that are used and how individuals are screened in or out is important when implementing AI tools. Regular review of this information and the subsequent results is necessary to ensure that the tool isn’t learning bias or illegal selection criteria over time.
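One widely used numerical check for such reviews is the “four-fifths rule” from the EEOC’s Uniform Guidelines on Employee Selection Procedures: a selection rate for any group below 80% of the highest group’s rate is generally treated as evidence of adverse impact. The Python sketch below illustrates the arithmetic; the group labels and counts are hypothetical.

```python
# Minimal sketch of the EEOC four-fifths rule: compare each group's
# selection rate to the highest group's rate; a ratio under 0.8 is
# generally treated as evidence of adverse impact.
def adverse_impact_check(selections):
    """selections maps group -> (number_selected, number_of_applicants)."""
    rates = {g: sel / total for g, (sel, total) in selections.items()}
    top = max(rates.values())
    for group, rate in sorted(rates.items()):
        ratio = rate / top
        status = "potential adverse impact" if ratio < 0.8 else "within guideline"
        print(f"{group}: rate={rate:.0%}, ratio={ratio:.2f} -> {status}")

# Hypothetical pass rates from an AI resume-screening tool.
adverse_impact_check({
    "group_a": (48, 100),  # 48% selected
    "group_b": (30, 100),  # 30% selected -> ratio 0.63, flags review
})
```

Running a check like this on every audit cycle, not just at rollout, is what catches a tool that drifts toward biased selection criteria over time.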
On May 12, 2022, the EEOC issued long-awaited guidance on the use of such AI tools (the Guidance), examining how employers can seek to prevent AI-related disability discrimination.
More specifically, the Guidance identifies several ways in which employment-related use of AI can, even unintentionally, violate the ADA, including if:
- The employer does not provide a “reasonable accommodation” that is necessary for a job applicant or employee to be rated fairly and accurately by the AI;
- The employer relies on an algorithmic decision-making tool that intentionally or unintentionally “screens out” an individual with a disability, even though that individual can do the job with a reasonable accommodation; or
- The employer adopts an AI tool for use with job applicants or employees that violates the ADA’s restrictions on disability-related inquiries and medical examinations.
The Guidance further states that in many cases, employers are liable under the ADA for use of AI even if the tools are designed and administered by a separate vendor, noting that employers may be held responsible “for the actions of their agents . . . if the employer has given them authority to act on its behalf.”
EEOC Guidance

Through its technical assistance, the EEOC aims to guide algorithmic fairness and the use of AI in employment decisions. The Guidance recommends:

- Announcing generally that employees and applicants subject to an AI tool may request reasonable accommodations, and providing instructions on how to ask for them.
- Providing information about the AI tool, how it works, and what it is used for to the employees and applicants subjected to it. For example, an employer that uses keystroke-monitoring software may choose to disclose this software as part of new employees’ onboarding and explain that it is intended to measure employee productivity.
IF THE SOFTWARE WAS DEVELOPED BY A THIRD PARTY, ASK THE VENDOR WHETHER:
- The AI software was developed to accommodate people with disabilities, and if so, how;
- There are alternative formats available for disabled individuals; and
- The AI software asks questions that are likely to elicit medical or disability-related information.
If an employer is developing its own software, it should engage experts to analyze the algorithm for potential biases at different steps of the development process (for example, a psychologist if the tool is intended to test cognitive traits).
Employers should use only AI tools that directly measure traits necessary for performing the job’s duties.
Additionally, it is always a best practice to train staff, especially supervisors and managers, to recognize requests for reasonable accommodations and to respond promptly and effectively to those requests. If the AI tool is used by a third party on the employer’s behalf, that third party’s staff should also be trained to recognize requests for reasonable accommodation and forward them promptly to the employer.
How Do Algorithms Discriminate?
DISCRIMINATION RESULTING FROM AI SYSTEMS CAN OCCUR BOTH INTENTIONALLY AND UNINTENTIONALLY.
Intentional discrimination is straightforward: it occurs when rules or conditions that will have a discriminatory outcome are written into an algorithm on purpose, for example, a rule that automatically rejects loan applications submitted by women.
However, in most cases of discriminatory AI systems, the developers never set out with that intention. Rather, discrimination was an inadvertent result of the development process.
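One common mechanism behind this inadvertent result is the proxy variable: even when a protected attribute is excluded from the data, a seemingly neutral feature that correlates with it can let the system reproduce past bias. The sketch below uses entirely fabricated data and a deliberately naive “model” to show how a rule learned from historical outcomes ends up keyed to a zip code that stands in for group membership.

```python
# Fabricated illustration of proxy discrimination: the protected attribute
# is never used for training, but "zip_code" correlates with it, so a rule
# learned from biased historical outcomes still encodes the disparity.
past_decisions = [
    # (zip_code, protected_group, hired_by_past_process)
    ("10001", "A", True), ("10001", "A", True), ("10001", "A", True),
    ("20002", "B", False), ("20002", "B", False), ("20002", "B", True),
]

# "Train" a naive rule using only the seemingly neutral feature.
hire_stats = {}
for zip_code, _group, hired in past_decisions:
    n_hired, n_total = hire_stats.get(zip_code, (0, 0))
    hire_stats[zip_code] = (n_hired + hired, n_total + 1)

rule = {z: h / t >= 0.5 for z, (h, t) in hire_stats.items()}
print(rule)  # {'10001': True, '20002': False} -- zip code now proxies for group
```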
Artificial Intelligence and the Potential of Discriminatory Practices
“If an employer administers a selection procedure, it may be responsible under Title VII if the procedure discriminates on a basis prohibited by Title VII, even if the test was developed by an outside vendor,” the agency states in its technical assistance guidance.
A few key areas concerning AI for HR include recruiting, employee monitoring, and data privacy.
Tools like resume scanners, chatbots, video interviewing software, and testing software are often used during the recruiting or hiring process. You might not think of these tools as artificial intelligence, since they have been around for a while, but they use different aspects of AI. They also save time and make the job of the recruiter or hiring manager easier.
How to Avoid a Discrimination Lawsuit
Attorneys recommended that employers avoid becoming fully reliant on automation in the workplace and continue to involve humans in the decision-making process.
“You will want to have a documented reason for why someone was hired, promoted, or fired,” one attorney said. ChatGPT is a dynamic tool that will not necessarily generate the same results, or the same interpretability, as a human decision-maker.
Federal agency leaders made clear that employers can be held liable for the ways they deploy AI tools developed by technology companies.
Companies aren’t off the hook just because they didn’t develop the tool themselves. “Claims of innovation must not be cover for legal violations,” the FTC said. “There is no AI exception to the laws on the books.”
Through the initiative, the EEOC will examine more closely how existing and developing technologies fundamentally change the ways employment decisions are made.
The initiative’s goal is to guide employers, employees, job applicants, and vendors to ensure that these technologies are used fairly and consistently with federal equal employment opportunity laws.
Best Practices for Using AI
BEFORE IMPLEMENTING AI, CONSIDER THE CREATION OF POLICIES AND PROCEDURES ADDRESSING THE FOLLOWING:
- In the decision-making process, employers should not rely exclusively on AI; some of the best (and worst) hiring decisions have been made without the use of this technology.
- Before implementing AI in any aspect of HR, carefully document the process, including the factors used in creating the algorithms.
- For hiring and screening processes, implement a review process, such as full and false inclusion/exclusion tests of those selected and not selected (a simple version is sketched after this list).
- If you use third-party vendors for AI services, ensure they provide you with all the elements of their systems so you can confirm the risk factors. Remember, vendors work for you; however, you still carry liability for fines, penalties, or criminal sanctions.
- Provide notice before using AI software in HR functions.
- Obtain employee consent.
- Conduct bias audits regularly.
- Maintain awareness of different laws in different jurisdictions.
- Create a Use of AI policy.
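As a rough illustration of the inclusion/exclusion testing in the list above, the hypothetical sketch below assumes human reviewers periodically re-evaluate a sample of AI-rejected applicants; it then computes, per group, how often the tool screened out someone a human would have advanced. The group labels and the sample itself are invented.

```python
# Minimal sketch of a periodic "false exclusion" review: humans re-evaluate
# a sample of AI-rejected applicants, and we measure per group how often the
# AI screened out someone a human reviewer would have advanced.
from collections import defaultdict

def false_exclusion_rates(rejected_sample):
    """rejected_sample: list of (group, human_would_advance: bool)."""
    stats = defaultdict(lambda: [0, 0])  # group -> [false_exclusions, reviewed]
    for group, human_would_advance in rejected_sample:
        stats[group][0] += human_would_advance
        stats[group][1] += 1
    return {g: fe / n for g, (fe, n) in stats.items()}

# Hypothetical review sample: group_b is wrongly screened out 3x as often.
sample = ([("group_a", False)] * 18 + [("group_a", True)] * 2
          + [("group_b", False)] * 14 + [("group_b", True)] * 6)
print(false_exclusion_rates(sample))  # {'group_a': 0.1, 'group_b': 0.3}
```

A widening gap between groups over successive reviews is the kind of drift that warrants pausing the tool and re-examining its selection criteria.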
Please subscribe to the free update services on federal, state, and local regulatory agencies’ websites to stay current on regulatory changes!
Margie Faulk, PHR, SHRM-CP, HR Compliance Advisor
Margie provides employers and professionals with e-learning training to help them create effective risk management strategies to avoid fines, penalties, and criminal sanctions. You can reach Margie Faulk at mfaulk@hrcompliance.net for more information on workplace compliance e-learning training for your organization.