5 Tips to Help You Prepare for the Future of Hiring
What does the future of hiring look like? How will you adapt to new and different needs? Our Summit speakers share 5 things HR and recruiting professionals need to know.
Elizabeth McLean
10 min read
More and more employers use HR solutions that rely on artificial intelligence (AI) to automate steps in the hiring process. AI can streamline manual processes, improve the candidate experience, and enhance HR efficiency, but it can also put you at risk of discriminatory hiring.
Here’s how to implement AI in a way that supports fair and ethical decision-making, protecting both your job candidates and your business.
The use of artificial intelligence (AI) in HR technology is growing. From applicant tracking systems to recruiting and background screening solutions, AI can streamline workflows, speed hiring, and save HR teams time and effort. But if not used carefully, AI also poses risks of unintentional discrimination.
Both the Federal Trade Commission (FTC) and the Equal Employment Opportunity Commission (EEOC) have announced they will focus on ethical use of AI in employment. What do employers need to know when incorporating AI into hiring decisions?
AI technology performs tasks once done only by humans and learns from experience so that it continually improves. According to a 2021 study reported by Human Resource Executive, 60% of companies currently use AI for talent management and over 80% plan to increase their use of AI in the next five years.
There are solutions incorporating AI for a variety of HR tasks. Some of the most common uses include applicant tracking and resume screening, candidate sourcing and recruiting, background screening, and automated candidate communications.
When used correctly, AI-based hiring tools can deliver many benefits for employers. They can save time by automating formerly manual tasks. This improves efficiency for recruiting teams and hiring managers, giving them more time to spend on higher-value tasks and potentially decreasing time-to-hire. By automatically guiding candidates through the steps of the hiring process and responding more quickly to candidates’ questions, AI can greatly improve the candidate experience. Finally, AI can remove biases that human hiring managers may unintentionally bring to the hiring process, helping employers build a more diverse workforce.
But poorly implemented AI-based solutions can pose serious risks employers should be aware of. A report from Harvard Business School found that applicant tracking systems using AI often remove qualified candidates from consideration simply because they’re missing one skill or fail to meet one minor requirement. At a time when employers are already struggling to find qualified employees, this unnecessarily restricts your candidate pool.
By limiting potential candidates to those who fit a predetermined mold or have certain characteristics, AI technology can also result in a less diverse workforce. This robs companies of the benefits they enjoy when employees bring diverse experiences, skills, and insights to work.
When used incorrectly, AI may even lead to unintentional discrimination. In 2015, Amazon discovered its recruiting software was weeding out female candidates. The AI was trained to look for candidates similar to Amazon’s top employees. Since most of those employees were men, the AI gradually began penalizing resumes that included the word “women’s,” such as “women’s volleyball team.”
Both the FTC and the EEOC have been studying the issue of AI in HR since 2016. With the use of HR technology leveraging AI on the rise, both agencies have recently stepped up their attention to the topic.
In April 2020, the FTC released new guidance, “Using Artificial Intelligence and Algorithms.” This guidance states that AI tools should be transparent, explainable, fair, and empirically sound, and that employers should hold themselves accountable for compliance, ethics, fairness, and nondiscrimination. While the FTC’s guidance is not legally binding, employers should be aware that this is an area of growing concern for the agency.
In October 2021, the EEOC announced an initiative to ensure AI HR tools comply with the federal civil rights laws it enforces. “These tools may mask and perpetuate bias or create new discriminatory barriers to jobs,” the EEOC stated. “We must work to ensure that these new technologies do not become a high-tech pathway to discrimination.”
Automated decisioning in itself is not the issue. The problem arises when AI has an adverse impact on a particular group of people. HR teams should be proactive by following best practices for using AI in hiring and employment decisions. How can you do this? Take the FTC’s guidance as your roadmap.
Conduct an audit at least once a year to assess the impact that your AI, automated decisioning, and other algorithm-driven rules have on people. This audit should be both qualitative and quantitative.
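For the quantitative side of such an audit, one widely used check is the EEOC's "four-fifths" (80%) rule: a group's selection rate below 80% of the highest group's rate is generally regarded as evidence of potential adverse impact. The sketch below illustrates that calculation; the group names and numbers are hypothetical, and a real audit would use your own applicant data and legal guidance.

```python
# Minimal sketch of a quantitative adverse-impact check using the
# EEOC's "four-fifths" (80%) rule. Sample data is hypothetical.

def selection_rates(outcomes):
    """outcomes maps group name -> (selected, total_applicants)."""
    return {g: sel / total for g, (sel, total) in outcomes.items()}

def four_fifths_check(outcomes, threshold=0.8):
    """Return True for groups whose selection rate is at least
    80% of the highest group's rate, False otherwise."""
    rates = selection_rates(outcomes)
    best = max(rates.values())
    return {g: (r / best) >= threshold for g, r in rates.items()}

outcomes = {
    "group_a": (48, 100),  # 48% selected
    "group_b": (30, 100),  # 30% selected; 0.30 / 0.48 = 0.625 < 0.8
}
print(four_fifths_check(outcomes))
# -> {'group_a': True, 'group_b': False}
```

A failing check like this does not by itself prove discrimination, but it flags where your AI's decision rules deserve a closer qualitative look.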
Even if the affected population is not in a protected class, consider the ethical implications. Is the impact on this group fair? Also consider how your business is affected. Are you squashing diversity or missing out on qualified candidates? The way you use AI can impact your corporate reputation.
Based on your audits, you can adjust the data input and decision rules you set to improve outcomes. AI learns from experience, but it needs good guidance to make good decisions.
Put the results of your audit in writing. This creates documented evidence that you’re making a good-faith effort to follow FTC guidance, comply with equal employment opportunity laws, understand the impact of AI, and work to continually improve.
Consumer reporting agencies (CRAs) may develop the AI and algorithms used to automate the delivery of background screening results, but as an employer, you are ultimately responsible for the decisions you make using a background screening solution. Protect yourself by taking steps to ensure your background screening provider is using AI in a way consistent with fairness. Ask how the provider's AI is trained and tested, whether its decision rules are transparent and explainable, and how its tools are audited for bias and nondiscrimination.
The EEOC has indicated it will produce best practices and guidance on what it considers ethical and appropriate frameworks for automated decision-making. The FTC will likely issue more guidance over the next few years as well. In developing this guidance, regulators are likely to look to Europe, which has stricter data privacy regulations than the US, as a model. Monitoring privacy trends in Europe, along with EEOC and FTC announcements, can help you stay current on the latest developments.
The increased focus the FTC and EEOC are placing on AI in hiring creates an opportunity for employers to develop clearer data policies they can share with candidates and employees for greater transparency. This is already the standard in Europe, and US companies that embrace it now can get ahead of the curve, reduce the risk of enforcement action, and enhance both corporate reputation and candidate experience.
The resources provided here are for educational purposes only and do not constitute legal advice. We advise you to consult your own counsel if you have legal questions related to your specific practices and compliance with applicable laws.
Elizabeth McLean is GoodHire’s General Counsel, an FCRA-compliance attorney and expert in the background screening legal landscape. She monitors all things FCRA and EEOC. That means she follows new legislation and court decisions and advises the company on processes that follow compliance best practices.