The concept of artificial intelligence (AI) has been steadily gaining traction in the media, especially around what it means for workers: will a robot or machine take my job? But for those who recruit and hire workers, the question is not necessarily whether AI will replace people (which it might), but how it can be used successfully to help find people.
First the good news:
Artificial intelligence is probably going to help human resources (HR) and recruiting professionals, and in some cases it already is. For many recruiting processes, or parts of them, such as onboarding, some level of AI is making things better, faster or more automated. For example, a system may know when documents have been returned, what has been signed up for, and what comes next when processing benefits or other details. In recruiting, many applicant tracking systems (ATSs) have built-in logic for screening applicants or moving them through the process. Most recruiters use job boards and other databases, which also use some type of logic to suggest candidates or assess fit.
Now the bad news:
As we have hopefully learned in the past, machines are only as good as the humans who program and monitor them, and, in the case of AI, only as good as what they learn from humans and human behavior. So the potential for recruiting can be good, as in automating processes, but it can also be bad: AI doesn’t have a conscience or the cultural awareness that humans draw on to make decisions. Generally speaking, AI relies on patterns, which can be amplified or misinterpreted. Concepts like hiring for potential, or cultural fit, will generally be harder for a machine that makes “logical” decisions based on patterns. And machines use historical data to find those patterns – data that may be highly biased, unbalanced or incomplete.
So, how do you implement AI in recruiting?
The short answer is: carefully. As you adopt more automated processes of increasing complexity, a human or group of humans needs to be in the chain to audit the results. It’s also a good idea to know what historical data is being used, why, and how current it is. Another issue is hard cutoffs such as thresholds, averages or limits. Logically, it may seem reasonable to limit the output to the top three candidates, for example, but we’ve all seen fourth and fifth candidates who turned out to be better fits for a position. If you never see them, or if a single variable throws them out, you may miss good candidates. And if we assume things are working without inspection, bad things can happen faster, and to more applicants.
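For readers on the technical side of an implementation, the cutoff problem above can be made concrete with a small sketch. The data, names and scoring weights here are entirely hypothetical, not taken from any real ATS; the point is only that when one variable (here, years of tenure) dominates the score, a hard top-three limit can silently drop a candidate a recruiter might have preferred.

```python
# Hypothetical candidate data: (name, years_experience, skills_matched, referred)
candidates = [
    ("A", 10, 4, False),
    ("B", 9, 4, False),
    ("C", 8, 5, False),
    ("D", 3, 9, True),   # strong skills match, but low tenure
]

def score(years, skills, referred):
    # Illustrative weights only: tenure dominates, so one variable
    # can single-handedly decide who makes the cut
    return years * 3 + skills + (2 if referred else 0)

ranked = sorted(candidates, key=lambda c: score(c[1], c[2], c[3]), reverse=True)
top_three = [name for name, *_ in ranked[:3]]

print(top_three)  # candidate "D" never reaches a human reviewer
```

An audit step here could be as simple as periodically reviewing the candidates just below the cutoff, which is exactly the kind of human-in-the-chain check described above.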
Lastly, few of us will actually develop AI software or machines ourselves. This means that, as with other technology, some person or company will be selling us the program. We are the ones with a vested interest in making it work well for our company, comply with policies and laws and, ultimately, serve the company’s needs. As we make decisions about AI, we need to ask these vendors a lot of questions and test the results before making wholesale changes. I’m not saying it will be like the movie Terminator, where the machines take over, but ultimately a human usually takes the blame. You may get the credit for implementing AI, but you may also get the blame.
About the Author: This blog was written by Matt Rivera. Matt serves as Vice President, Marketing and Communications and is responsible for overseeing all aspects of Yoh’s marketing and brand communications. Matt holds a degree in Journalism/Public Relations and has been working in the staffing industry for more than 25 years. Prior to this role, Matt held many different roles from branch recruiting and proposal writing to technology management and online marketing.