Almost all businesses use AI to some extent when hiring new employees; among Fortune 500 companies, that figure rises to 99 percent. The effects will be devastating, especially for people who already face systemic discrimination in the workplace. AI recruiting algorithms are riddled with biases that harm applicants, because they are trained on real-world recruiting data that is itself deeply biased.
Companies that use AI to make hiring decisions are now required to undergo audits that evaluate bias across “sex, race-ethnicity, and intersectional categories.” The measure was enacted last year in New York City in response to concerns about job discrimination. Despite all the hype, the groundbreaking legislation falls far short of providing the necessary safeguards.
New York’s law not only fails to include quality-control standards and enforcement mechanisms; it also omits disability, the most frequently reported category of identity-based job discrimination, from the list of bias assessment categories.
That is to be expected. New York officials, including Mayor Eric Adams, are strong advocates of artificial intelligence. More thorough or more stringent evaluations of these tools’ inherent biases, especially against applicants with disabilities, could well render them unusable. And hiring tools that ignore fundamental principles of fair hiring are not merely worthless; they are actively harmful.
Companies have no easy fix for the bias in AI hiring tools, because the algorithms that underlie them are difficult to modify once built. To make decisions, these tools look for patterns in their training data, which typically consists of a company’s previous or “ideal” employees. When that data reflects biased hiring, the tool consistently reproduces biased results.
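To see why audits cannot simply be bolted on after the fact, consider a minimal, purely hypothetical sketch of this dynamic. It uses invented data and scikit-learn’s LogisticRegression, and it does not represent any vendor’s actual system; it only shows how a model trained on biased historical hiring decisions absorbs that bias, here against applicants with a long employment gap.

```python
# Illustrative sketch only: a toy model trained on invented, biased hiring
# history. Not any vendor's actual system.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000

# Hypothetical applicant features: years of experience, and whether the
# applicant has a six-month-plus employment gap (e.g., for health reasons).
experience = rng.normal(5, 2, n)
has_gap = rng.integers(0, 2, n)

# Historical "hired" labels encode past human bias: equally qualified
# applicants with an employment gap were hired far less often.
logit = (experience - 5) - 2.0 * has_gap
hired = rng.random(n) < 1 / (1 + np.exp(-logit))

# Train a simple screening model on that biased history.
X = np.column_stack([experience, has_gap])
model = LogisticRegression().fit(X, hired)

# The learned coefficient on the gap feature comes out strongly negative:
# the model has absorbed the discriminatory pattern in its training data.
print(dict(zip(["experience", "employment_gap"], model.coef_[0].round(2))))
```

The point of the sketch is that no post-hoc audit changes what the model learned; as long as the training data encodes discriminatory decisions, the tool will keep penalizing the same applicants.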
People with disabilities who have already faced hiring discrimination are especially likely to bear these consequences when applying for certain occupations. And because the algorithms are inflexible and disabilities are so varied, adding more profiles of disabled workers to the training data will not fix the problem either.
The training data cannot fairly reflect the wide range of conditions that fall under the umbrella term “disability,” particularly those that intersect with other stigmatized identities. AI hiring tools such as Pymetrics also ignore the fact that employers routinely leave their disabled employees worse off than they would be with access to reasonable workplace accommodations.
Disabled applicants are still undervalued as candidates by human recruiters and hiring AI alike because they fail to meet rigid qualification criteria that are treated as indicators of future success but have no bearing on actual job performance. Someone who was out of work for six months because of a long-term health condition, for instance, may find it difficult to land an interview. Unlike human recruiters, who can exercise nuance and provide applicants with the accommodations mandated by the Americans with Disabilities Act, AI recruiting tools simply apply their discriminatory assumptions.
In addition to automating ADA violations, AI hiring tools have opened new avenues for discrimination and subjected applicants to heightened scrutiny. These tools go beyond reviewing resumes: they attempt to assess candidates’ character traits and likely job performance by analyzing their behavior in a video game or a recorded interview. Such technologies are almost certain to produce bias on two fronts. First, video analysis systems struggle to recognize the faces of people of color and people with disabilities. Second, despite their vendors’ claims, these tools cannot actually detect mental capacity or emotion.
This analysis rests on deeply biased, pseudo-scientific definitions of which expressions and behaviors are “positive.” An applicant with a stutter, speech impediment, or hearing-related speech difference may be labeled “poorly spoken” or “lacking speaking skills” when AI analyzes their application videos. People with disabilities or neurological differences may have trouble keeping their eyes fixed on the camera, leading the AI to label them “unfocused.”
Given the severe and well-documented biases of AI recruiting tools, especially against applicants with disabilities, one might ask why New York City’s lawmakers passed such a toothless bill. Instead of asking why these technologies are being used in recruiting at all, it settles for more audits and half measures.
Recent legislative initiatives at the state level are similarly ill-advised, since they focus solely on adding more robust and inclusive auditing procedures to paper over the New York City bill’s shortcomings. No algorithmic audit, however careful, can address the pervasive biases in AI recruiting tools. If companies want to eliminate these prejudices, they should stop using AI for recruiting and personnel decisions altogether.
By ignoring calls to ban AI recruiting tools, New York’s legislators have done the workforce a disservice. In promoting legislation that downplays the extent to which these technologies enable discrimination, they dodge accountability and make clear that it is applicants who must hold companies responsible.
And until lawmakers are willing to take stronger action, disabled job applicants in particular will continue to endure the effects of AI-driven employment discrimination.