AI has become an invaluable tool for many Connecticut companies, but it is not infallible and should not be used indiscriminately. Companies should be well informed about the legal risks of using AI in employment decision-making; failing to appropriately monitor these programs can expose them to discrimination claims and other liability.
If you have questions or concerns about how your company uses AI in employment decisions, you can contact the employment law attorneys at Wofsey Rosen for guidance and support.
How Employers Use AI to Help With Employment Decisions
When it comes to using AI as a tool for decision-making, many companies use it to help with some of the more repetitive tasks, such as:
- Resume screening and ranking
- Targeted ad posts
- Interview notes and evaluation
- Skills matching
These tasks are often time-consuming for an employee to do, and may be easily automated based on certain keywords or objective variables such as years of experience in a certain role or job functions in a resume.
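To make the idea concrete, here is a minimal sketch of keyword-and-experience resume scoring of the kind described above. The keywords, weights, and scoring logic are invented for illustration, not any real vendor's system; note how easily a rule like this can encode hidden assumptions.

```python
import re

# Hypothetical keyword list and weights -- assumptions for illustration only.
REQUIRED_KEYWORDS = {"python", "sql", "project management"}

def score_resume(text: str) -> float:
    """Score a resume by keyword hits plus stated years of experience."""
    lowered = text.lower()
    keyword_hits = sum(1 for kw in REQUIRED_KEYWORDS if kw in lowered)
    # Take the largest "N years" figure mentioned anywhere in the resume.
    years = [int(m) for m in re.findall(r"(\d+)\s*years?", lowered)]
    experience = max(years, default=0)
    return keyword_hits + 0.5 * experience

resumes = {
    "A": "10 years of Python and SQL development",
    "B": "3 years of project management and SQL reporting",
}
# Rank candidates from highest to lowest score.
ranked = sorted(resumes, key=lambda k: score_resume(resumes[k]), reverse=True)
```

Even this toy version shows the risk: any objective-looking variable (years of experience, phrasing, keyword choice) can quietly disadvantage candidates whose resumes are written differently.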
Discrimination Risks and Algorithmic Bias
AI makes decisions based on the information we provide it. Without thorough oversight, developers may unknowingly train a system on biased data, causing your company to make employment decisions that disproportionately harm protected classes.
How Bias Is Built Into AI Systems
There are several ways that programmers and software developers may accidentally teach their AI program to make biased decisions. A significant factor is when developers train the program based on historical or biased data. No one has to tell the program that nursing is a female profession for it to look at historical data and see that it is predominantly female.
An example from the linked Chapman University article is a program that learns from data provided by a company that has historically favored male candidates and may continue that pattern, discounting applicants with more feminine-sounding names or job histories in traditionally female-dominated fields.
In this example, no one has to specifically tell the program to discriminate for it to make discriminatory decisions based on the information it has available.
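A toy demonstration of this dynamic, with entirely invented data: a "model" that merely memorizes group-level outcomes from skewed historical hiring will reproduce the skew, even though no rule ever mentions sex.

```python
# Invented historical records (not real data). The proxy feature correlates
# with sex without naming it -- e.g., attendance at a women's college.
historical = [
    # (proxy_flag, was_hired)
    (True, False), (True, False), (True, True), (True, False),
    (False, True), (False, True), (False, False), (False, True),
]

def hire_rate(records, proxy_value):
    """Historical hire rate for one value of the proxy feature."""
    group = [hired for proxy, hired in records if proxy == proxy_value]
    return sum(group) / len(group)

# "Training" here is just memorizing each group's historical hire rate.
learned = {flag: hire_rate(historical, flag) for flag in (True, False)}

def model_predicts_hire(proxy_flag):
    # The model simply favors whichever group history favored.
    return learned[proxy_flag] >= 0.5
```

Here the model recommends hiring only applicants without the proxy trait, purely because that is what the biased history taught it. Real machine-learning systems are far more complex, but the underlying mechanism is the same.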
AI Employment Decisions Can Have a Disparate Impact on Protected Classes
When we consider the above situation, it stands to reason that AI may overlook underrepresented groups for a number of reasons, whether due to a simple lack of representation in the data or to a history of bias and discrimination by humans in charge of hiring practices in the past.
Accidental Barriers to Individuals with Disabilities
Additionally, if AI bases its evaluations on a model of a "typically abled" person, it may flag symptoms of a disability as a disqualifying factor. For example, it may determine that someone with a speech impediment lacks adequate communication skills, or conclude that someone with autism or ADHD appears disinterested or distracted based on lack of eye contact or movement patterns.
Accommodating individuals with disabilities is a nuanced, interactive process that many AI programs cannot identify from the provided data.
Employers Have a Responsibility to Ensure Their AI Program Is in Compliance with Employment Discrimination Laws
When we discuss concerns of bias and discrimination in AI-based employment decisions, this most often pertains to compliance with federal and state laws such as:
- Title VII of the Civil Rights Act
- The Americans with Disabilities Act (ADA)
- The Age Discrimination in Employment Act (ADEA)
These federal laws protect against discrimination based on characteristics such as age (over 40), sex, disability, race, and religion, among others. It is the employer's responsibility to ensure the programs and tools they use allow them to conduct hiring, promotions, disciplinary actions, and terminations in accordance with these laws.
Transparency and Notice Requirements
Additionally, employers must ensure they comply with state laws relating to labor, privacy, and notification requirements. As of 2025, Connecticut mandates disclosure when your personal data is used to train AI programs, per Public Act 25-113.
As the topic of transparency and notice regarding AI continues, it is critical that employers stay on top of these matters, whether they do so on their own or with an attorney. In many cases, it is a safe move to notify candidates and employees when you use AI in your processes, whether or not it is mandated.
Liability When AI Makes the “Wrong” Decision
Typically, you are responsible for any liability resulting from your use of AI. If your program makes decisions in a discriminatory way, you are unlikely to succeed by simply blaming the vendor and continuing with your business unscathed. However, as with many legal matters, your potential liability will depend on the facts of your specific situation. To minimize it, it helps if you can demonstrate that you would have made the same hiring decision through unbiased, human decision-making.
Unfortunately, many vendor contracts include waivers and protection clauses stating that the vendor is not responsible for adverse outcomes, which can leave you, as the employer, stuck with full responsibility. Even if it was an honest mistake, you may still bear the brunt of the consequences for the negligent (i.e., inadvertently discriminatory) use of technology.
Steps Employers Can Take to Reduce Legal Exposure
It is well known that AI makes mistakes, and while it can be an incredible tool for individuals and businesses, it should not be used without human involvement. The best way to protect your company and ensure compliance with federal and state regulations is oversight.
Conducting AI Audits
Whether someone audits AI results internally or a contract clause allows you to bring in a third party to do so, conducting periodic AI audits is important. Audits help you catch and address concerns or missteps in how the program is used and how it makes decisions.
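One common audit check, sketched below with hypothetical applicant counts, applies the EEOC's "four-fifths rule": if any group's selection rate falls below 80% of the highest group's rate, the tool's outcomes warrant closer review. This is a simplified screening heuristic, not a substitute for a full legal or statistical analysis.

```python
# Hypothetical audit data -- counts are invented for illustration.
selections = {
    # group: (applicants, selected)
    "group_a": (100, 60),
    "group_b": (100, 40),
}

def selection_rates(data):
    """Selection rate (selected / applicants) for each group."""
    return {g: sel / apps for g, (apps, sel) in data.items()}

def flags_adverse_impact(data, threshold=0.8):
    """Flag any group whose rate is below `threshold` of the top group's rate."""
    rates = selection_rates(data)
    top = max(rates.values())
    return {g: rate / top < threshold for g, rate in rates.items()}
```

In this invented example, group_b's selection rate (40%) is only two-thirds of group_a's (60%), so the check would flag it for review. Running a check like this on each audit cycle gives you documentation that you are monitoring the tool's outcomes.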
Maintaining Human Oversight
You should always have a human who double-checks what the system is doing. Leaving AI to make decisions without human oversight is a recipe for a small problem becoming a major issue that can threaten the well-being and future of your company.
Work With a Legal Team Experienced in AI Liability
Whether you have been using AI or want to explore how it can support your team in employment decisions, the best way to protect yourself is by consulting with an attorney. At Wofsey Rosen, we can help you determine where your risk lies and work with you to create a strategy that minimizes that liability.