An employee time traveling from the 1980s to today’s workplace would be astounded at what they found. While we don’t (yet!) have the flying cars imagined in many 1980s sci-fi movies, the world of work is undergoing its next renaissance in the form of Artificial Intelligence (AI).
Employers in both the public and private sectors are increasingly turning to AI tools to handle tasks currently being done by humans. The long-term impact of AI utilization on employment is projected to be significant, particularly because of the power and potential efficiencies of generative AI tools.
What is Generative AI?
Generative Artificial Intelligence (GAI), at its core, refers to deep-learning AI models capable of generating text, images, and other content based on their training data.[i] A GAI model “learns” by identifying patterns and structures within its training data and uses them to create new, original content. Chances are you have seen or tinkered with GAI through tools such as ChatGPT, DALL-E, or Bard.
How is AI Being Used?
A 2023 Goldman Sachs study[ii] found that roughly two-thirds of current jobs are exposed to some degree of AI automation and that GAI could substitute for up to one-fourth of current work. The same study estimates that AI could expose 300 million full-time jobs worldwide to automation.
Many employers have begun integrating AI into human resources (“HR”) functions such as recruiting, screening, interviewing, and rating job candidates. Some use AI monitoring tools that track remote employees’ keystrokes in an effort to verify activity and productivity. Others use AI tools that scan email to flag potential harassment and bullying. Additionally, more and more employers are adopting timekeeping and leave-management software that incorporates AI. Simply put, employers are already using many HR-related AI tools.
In fact, a 2022 survey by Eightfold AI[iii] found that almost three-quarters of U.S. companies were using AI in some form to recruit and hire employees and to manage their performance. Further, employees themselves may be using GAI tools like ChatGPT to draft daily work product such as emails, letters, and other correspondence. Without a GAI acceptable-use policy in place, employees may be using GAI tools without their employer’s knowledge and without guidance as to proper (and improper) uses of those tools. Such ungoverned employee use of GAI tools exposes employers to serious risk.
EEOC, DOL, and OFCCP Guidance
In concert with the increase in workplace AI use and its associated risks, the Equal Employment Opportunity Commission (EEOC), the Department of Labor (DOL), and the Office of Federal Contract Compliance Programs (OFCCP) have provided guidance of which employers should be mindful.
The EEOC
In May 2023, the EEOC released a technical assistance document entitled “Assessing Adverse Impact in Software, Algorithms, and Artificial Intelligence Used in Employment Selection Procedures Under Title VII of the Civil Rights Act of 1964.”[iv]
The EEOC’s guidance is designed to protect workers from unlawful discrimination arising from employers’ use of AI. It warns that the use of AI may result in “disparate impact” discrimination: unintentional discrimination that occurs when an employer’s selection and screening tools adversely affect a legally protected class. For example, if an AI screening tool’s algorithm, unknown to the employer, “decides” to eliminate certain ethnic-sounding names from hiring consideration, that tool could well give rise to a disparate impact claim. Employers must ensure that their selection procedures, including algorithmic tools, do not disproportionately exclude protected groups unless the procedures are job-related and consistent with business necessity.
Accordingly, it is highly recommended that employers thoroughly vet AI tools and obtain assurance from AI vendors that the tools have been validated and tested to ensure no biases exist in their algorithms. Continual auditing of the AI tool is also a highly recommended practice, as is ensuring that the contract with the AI vendor contains a proper indemnification and duty-to-defend clause. While Ohio public employers are legally prohibited from agreeing to indemnify a third party, there is no similar restriction on a public employer receiving indemnification from a third party.
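By way of illustration, the EEOC’s technical assistance document discusses the longstanding “four-fifths rule” of thumb for spotting potential adverse impact in selection rates. The Python sketch below shows, with entirely hypothetical numbers, the kind of selection-rate comparison an internal audit of an AI screening tool might run; a real audit would use the employer’s actual applicant-flow data and may call for more rigorous statistical analysis.

```python
# Simplified, hypothetical sketch of a selection-rate audit using the
# EEOC's "four-fifths rule" of thumb. All figures are made up.

def selection_rate(selected: int, applicants: int) -> float:
    """Share of a group's applicants that the AI tool advanced."""
    return selected / applicants

def impact_ratios(rates: dict[str, float]) -> dict[str, float]:
    """Compare each group's selection rate to the highest group's rate.
    Ratios below 0.80 are a common flag for closer adverse-impact review."""
    top = max(rates.values())
    return {group: rate / top for group, rate in rates.items()}

# Hypothetical applicant-flow data from an AI screening tool.
rates = {
    "Group A": selection_rate(selected=48, applicants=100),  # 0.48
    "Group B": selection_rate(selected=30, applicants=100),  # 0.30
}

for group, ratio in impact_ratios(rates).items():
    status = "flag for review" if ratio < 0.80 else "ok"
    print(f"{group}: impact ratio {ratio:.2f} -> {status}")
```

In this hypothetical, Group B’s impact ratio (0.30 / 0.48 ≈ 0.63) falls below four-fifths, which would prompt the closer scrutiny the EEOC guidance describes.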
The DOL
The DOL’s Wage and Hour Division (WHD) has issued a Field Assistance Bulletin to its Regional Directors.[v] The Field Assistance Bulletin provides guidance as to the application of the Fair Labor Standards Act (FLSA), the Family and Medical Leave Act (FMLA), and other federal labor standards as employers increasingly use AI and other automated systems in the workplace. As a reminder, the FLSA governs wage and hour practices such as minimum wage, overtime pay, and recordkeeping, while the FMLA guarantees eligible employees unpaid leave for specified family and medical reasons.
As noted, AI technologies now exist that monitor employees and measure and analyze productivity or activity in real time, using data such as computer keystrokes, mouse clicks, and web camera feeds to determine whether and when an employee is active during the workday. Employers generally use these tools to determine and track hours worked more accurately.
While these are certainly helpful tools, the WHD’s guidance is clear that employer reliance on such technology without regular, consistent human oversight may create FLSA compliance challenges as to when employees are actually performing “work” (e.g., when an AI system docks employee pay because it perceives that an employee is not working but the employee disagrees).
Addressing this concern, the WHD guidance notes that “an AI program that incorrectly categorizes time as non-compensable work hours based on its analysis of worker activity, productivity, or performance could result in a failure to pay wages for all hours worked.” Therefore, employers utilizing such AI tools should conduct regular audits and provide for active human monitoring. Similarly, employers using AI should develop clear policies, including avenues for employees who believe they have been underpaid to seek correction of their paychecks. Developing and communicating such policies to employees is critical.
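As a minimal sketch of what that human-oversight step might look like, the hypothetical Python routine below routes every block of time an AI monitor labels non-compensable to a human reviewer rather than automatically docking pay. The data structures, labels, and function names are assumptions for illustration, not any vendor’s actual API.

```python
# Hypothetical sketch: queue AI-flagged "non-compensable" time for
# human review instead of automatically deducting it from pay.
from dataclasses import dataclass

@dataclass
class TimeBlock:
    employee: str
    minutes: int
    ai_label: str  # "compensable" or "non-compensable", per the AI monitor

def route_for_review(blocks: list[TimeBlock]) -> list[TimeBlock]:
    """Return the blocks a human must confirm before any pay deduction;
    compensable time passes through to payroll unchanged."""
    return [b for b in blocks if b.ai_label == "non-compensable"]

day = [
    TimeBlock("J. Doe", 420, "compensable"),
    TimeBlock("J. Doe", 35, "non-compensable"),  # e.g., low keystroke activity
]

for block in route_for_review(day):
    print(f"Human review required: {block.employee}, {block.minutes} min")
```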
For purposes of FMLA compliance, the WHD cautions that AI presents unique challenges regarding leave eligibility and accommodation, and specifically emphasizes that AI systems must appropriately handle requests for FMLA leave, accommodate eligible employees, and avoid discriminatory practices in the process. For example, if an AI system that manages FMLA leave certification requires employees to disclose excess medical or other personal information, it could violate the FMLA or the Americans with Disabilities Act (ADA), which limits requests for certain medical information.
Therefore, as with FLSA compliance, employers should establish clear communication channels for employees to seek FMLA-related information or ADA accommodations, ensuring that AI systems do not inadvertently cause issues in the process. Human monitoring of the leave and accommodation process will help reduce compliance errors.
The OFCCP
The OFCCP guidance[vi] applies only to federal contractors and addresses the use of AI and other technology in hiring decisions. The guidance comes in two parts: (1) a list of Frequently Asked Questions (FAQs) regarding the use of AI in the equal employment opportunity context for federal contractors; and (2) a list of “promising practices” that, while not required, can help avoid violations when using AI in employment decisions.
While not binding on public employers, the OFCCP’s “promising practices” list is instructive for public employers using AI for HR functions, as it contemplates best practices for using AI technology in employment decisions. For example, it states that employers should:
- Provide advance notice to employees when AI is being used.
- Communicate transparently with employees, applicants, and representatives to ensure that all are adequately informed of the relevant AI policies and procedures.
- Monitor use of AI in making employment decisions and keep track of the resulting data in order to standardize the system(s), provide effective training, and create internal governance structures with clear case standards and monitoring requirements.
- Verify the AI system and its vendor (if using a vendor-created system) and know the specifics of the system (data, reliability, safety, etc.).
- Conduct testing of the AI system to ensure that it is working properly and not circumventing any required compensation calculations, disability accommodations, or other legal protections.
Where is AI Going Next?
As AI technologies evolve, so too will employment and labor laws and related guidance. In turn, employment compliance strategies will continue to evolve as well. Maintaining up-to-date knowledge of the legal implications of AI usage is, and will continue to be, crucial.
[i] https://research.ibm.com/blog/what-is-generative-AI
[ii] https://www.gspublishing.com/content/research/en/reports/2023/03/27/d64e052b-0f6e-45d7-967b-d7be35fabd16.html
[iii] https://eightfold.ai/wp-content/uploads/2022_Talent_Survey.pdf
[iv] https://www.eeoc.gov/laws/guidance/select-issues-assessing-adverse-impact-software-algorithms-and-artificial
[v] https://www.dol.gov/sites/dolgov/files/WHD/fab/fab2024_1.pdf
[vi] https://www.dol.gov/agencies/ofccp/Artificial-Intelligence