
Risk Versus Reward: AI, Biometrics and Employment
February 23, 2022

Employers can reap the rewards of AI and biometrics solutions in employment while mitigating risks by following five best practices.

By: Alonzo Martinez, Associate General Counsel at HireRight

As employers in the U.S. grapple with labor shortages, more are turning to artificial intelligence (“AI”) and biometric solutions to help with their recruitment efforts and manage their workforces. 

The benefits of these technologies are broad. For recruitment, AI can help source candidates, support diversity efforts, and accelerate time to hire. Biometrics coupled with AI, such as facial and voice scans used within video interviews, can help facilitate remote recruiting and identify on-brand talent.

Applied to the workforce, AI and biometrics can help manage worker productivity and reduce work-related accidents. Many companies use AI and biometric solutions to augment tasks typically handled by people, achieving significant productivity gains by reducing the need for manual oversight.

While advocates of AI and biometrics technologies praise their efficiencies, critics voice concerns over the potential for unchecked bias and discrimination within an AI solution that adversely affects hiring and retention practices. As a result, lawmakers and regulators in the U.S. are wading into the debate.

Changing laws affecting employers

At the federal level, the Equal Employment Opportunity Commission (EEOC) recently announced that it would lead efforts to ensure that AI and other emerging tools used in hiring and employment decisions comply with federal civil rights laws.

Absent federal legislation, numerous local jurisdictions have also passed laws regulating the use of AI and biometrics in employment. 

Illinois enacted the Biometric Information Privacy Act (“BIPA”) in 2008 and the Artificial Intelligence Video Interview Act (“AIVIA”) in 2019. Both measures require that employers provide notice and obtain consent before deploying these technologies.

Baltimore legislators recently introduced the Surveillance Technology Act, which bans the use of facial recognition technology for any purpose except access – with potential criminal sentences for violators.

Maryland’s Facial Recognition Technology Bill prohibits an employer’s use of facial recognition technology during an applicant’s interview unless the applicant consents by signing a written waiver.

Effective January 1, 2023, New York City employers will be prohibited from using automated employment decision tools to screen candidates or employees unless a bias audit has been conducted before deploying the AI solution.

These are not isolated measures; similar laws are being enacted or considered in California, Texas, and Washington, among others.

Despite the rise in legislation, many employers are unaware of the full scope of these laws or even how their company may be using these technologies.

Facial recognition is permitted for access use, such as entry to a building or a computer.

Protecting against risk while reaping the rewards

Laws and public sentiment are evolving rapidly, and it’s easy to misstep because of a lack of information or awareness, or because of poor or non-existent processes.

Here are five best practices that could help reduce the risks of using these technologies while supporting a better quality of hire, cost savings, increased security, and productivity gains. 

1. Actively and continuously curate company datasets

To limit bias in AI-based screening, employers must recognize that AI learns as it goes, and bias can creep in over time. For example, suppose that after reviewing thousands of candidates, an AI tool “realizes” that a large share of the candidates it has “recommended” are 35-year-old males with 15 years of experience. The dataset underlying the decision algorithm may then evolve to institutionalize a preference for candidates who meet those criteria, or to exclude them altogether out of concern that this candidate type is over-represented.

To ensure that their algorithms more accurately mimic human decision-making, it’s critical that firms continuously audit and curate their datasets. Companies can also benefit from working with outside experts to test AI-based tools to ensure that an unintentional bias has not been introduced from the start.
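Whether performed in-house or by an outside expert, such an audit often starts with something as simple as comparing the tool’s selection rates across candidate groups. The sketch below is a minimal, hypothetical Python illustration of that idea, using the “four-fifths” rule of thumb long referenced in U.S. adverse-impact analysis; the group labels, sample data, and 0.8 threshold are assumptions for illustration, not a prescribed audit methodology or any particular vendor’s approach.

```python
# Hypothetical sketch: compare an AI screening tool's selection rates by group
# and flag any group whose rate falls below four-fifths of the highest rate.
from collections import defaultdict

def selection_rates(decisions):
    """decisions: iterable of (group_label, was_recommended) pairs."""
    totals, selected = defaultdict(int), defaultdict(int)
    for group, recommended in decisions:
        totals[group] += 1
        if recommended:
            selected[group] += 1
    return {g: selected[g] / totals[g] for g in totals}

def impact_ratios(rates):
    """Ratio of each group's selection rate to the highest-rate group."""
    benchmark = max(rates.values())
    return {g: rate / benchmark for g, rate in rates.items()}

if __name__ == "__main__":
    # Toy sample of screening outcomes (group, recommended by the tool).
    sample = [
        ("group_a", True), ("group_a", True), ("group_a", False), ("group_a", True),
        ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
    ]
    rates = selection_rates(sample)
    for group, ratio in impact_ratios(rates).items():
        flag = "review" if ratio < 0.8 else "ok"
        print(f"{group}: selection rate {rates[group]:.2f}, impact ratio {ratio:.2f} ({flag})")
```

A check like this does not prove or disprove discrimination on its own, but running it on a regular cadence makes drift in the underlying dataset visible early enough to correct.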

2. Promote transparency

Whether or not the law requires it, many employers choose to inform candidates and employees how the company uses AI and biometrics and how those solutions support decision-making.

3. Assess opt-out requests

While many laws do not specifically require an option to opt-out of AI or biometric data collection, employers may reduce risks pre-emptively by offering, when feasible, an alternative for those who choose not to engage with the technologies. For instance, they might offer phone or in-person interviews in place of utilizing AI or collecting biometrics. 

4. Practice data minimization

Many employers using AI and biometrics collect and store only the minimum amount of data needed for its stated purpose, and they obtain consent to do so. The less data collected and retained, the smaller the compliance footprint.
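Data minimization can be enforced in code as well as in policy: keep only the fields required for the stated purpose and delete records once a retention window has passed. The sketch below is a hypothetical Python illustration; the field names and the 30-day window are assumptions for the example, not legal guidance, and actual retention obligations vary by jurisdiction.

```python
# Hypothetical sketch of data minimization: retain only needed fields and
# purge records older than an assumed 30-day retention window.
from datetime import datetime, timedelta, timezone

REQUIRED_FIELDS = {"candidate_id", "interview_score", "collected_at"}  # assumed minimum set
RETENTION = timedelta(days=30)  # assumed window; real obligations vary by law

def minimize(record):
    """Keep only the fields needed for the record's stated purpose."""
    return {k: v for k, v in record.items() if k in REQUIRED_FIELDS}

def purge_expired(records, now=None):
    """Drop records older than the retention window."""
    now = now or datetime.now(timezone.utc)
    return [r for r in records if now - r["collected_at"] <= RETENTION]

if __name__ == "__main__":
    raw = {
        "candidate_id": "c-123",
        "interview_score": 0.82,
        "collected_at": datetime.now(timezone.utc) - timedelta(days=45),
        "face_embedding": [0.1, 0.2, 0.3],   # not needed once scoring is complete
        "home_address": "123 Example St.",   # never needed for this purpose
    }
    stored = [minimize(raw)]
    print(purge_expired(stored))  # [] -- the 45-day-old record is past the window
```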

5. Recognize the fluidity of laws

It is important that employers closely track new AI and biometric laws. As lawmakers and regulators step up efforts to evaluate and regulate the use of AI and biometrics in the workplace, organizations should be mindful of evolving compliance requirements.

In 2022, employers should be deliberate and educated about how they use AI and biometrics with employees and candidates. Perhaps the best advice is to engage legal counsel and third-party experts who may better understand the nuances of the law as it applies to each use case. With risk comes reward.

Alonzo Martinez

Alonzo Martinez is Associate General Counsel at HireRight. Mr. Martinez is responsible for monitoring and advising on key legislative and regulatory developments globally affecting HireRight’s service delivery. His work is focused on ensuring HireRight’s performance as a consumer reporting agency and data processor complies with relevant legal, regulatory, and data furnisher requirements. Mr. Martinez obtained his Juris Doctor from the University of Colorado and is licensed by the Supreme Court of the State of Colorado. He is a member of the Colorado Bar Association Employment Law Division, the Association of Corporate Counsel, and the Professional Background Screening Association.

www.hireright.com

 
