While many are concerned about how automation will replace human workers and eliminate certain jobs, a more pressing consideration may be how humans and machines can work collaboratively right now. Artificial intelligence is increasingly being used by employers to recruit, hire, and retain employees. Whether it is Unilever using algorithms to vet candidates or IBM using Watson to provide career advice to current employees, the trend of digitizing talent management is expanding. While companies are excited about the cost savings and efficiencies this AI-driven recruitment might provide, there are real concerns to explore.

We are consistently sold the notion that algorithms are objective, but it is important to recognize that algorithms, and the data that drives them, originate with humans. Any biases the programmers or data scientists hold when building and training an algorithm can therefore be reflected in how the algorithm behaves. One famous example surfaced in 2016, when a ProPublica investigation found that COMPAS, a recidivism risk-scoring algorithm used in Florida, was far more likely to incorrectly flag Black defendants as high risk, while White defendants were more often incorrectly labeled as low risk.

This belief that machines are more objective than humans is called mathwashing, a term coined to describe the false assumption that computers and algorithms are neutral and would therefore eliminate bias in the workplace. As artificial intelligence expands its role in recruitment and overall talent management, it is even more critical that companies provide competent oversight in order to detect bias. Such bias, in the guise of objective, mathematical analysis, can imperil opportunities for under-represented groups. The following are recommendations for reducing bias and addressing diversity and inclusion concerns when developing and deploying algorithms for the talent management process.

1)   Provide Broad Unconscious Bias Training

The data scientists, engineers, and programmers who develop algorithms for candidate selection and vetting should receive suitable training to reduce their bias when creating and evaluating these algorithms. Such insight can make them more alert to problems like the crime-predicting algorithm cited above. For instance, if the training data indicates that the best job candidates have been White males from particular colleges, the algorithm will attempt to find candidates who fit those criteria, excluding large portions of the candidate pool, as the sketch below illustrates. Training alone will not eliminate bias, but it can raise awareness of it, enabling more effective intervention when developing and training algorithms.
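To make the mechanism concrete, here is a deliberately simplified, hypothetical sketch of how a skew in historical hiring data resurfaces in a screening algorithm's output. Every name, school, and group label below is invented, and the "model" is reduced to a frequency count; production systems are far more complex, but the failure mode is the same.

```python
# Hypothetical illustration: a screening rule "learned" from skewed
# historical hires reproduces that skew in who gets shortlisted.
from collections import Counter

# Invented "historical hires": overwhelmingly one demographic group and
# a narrow set of schools -- the pattern the algorithm will learn to match.
historical_hires = [
    {"school": "Alpha U", "group": "A"},
    {"school": "Alpha U", "group": "A"},
    {"school": "Beta U",  "group": "A"},
    {"school": "Alpha U", "group": "A"},
    {"school": "Beta U",  "group": "B"},
]

# Naive "model": score applicants by how closely they resemble past hires
# (here, simply by how often their school appears among prior hires).
school_counts = Counter(h["school"] for h in historical_hires)

def similarity_score(candidate):
    """Higher score = more similar to the historical hiring pattern."""
    return school_counts.get(candidate["school"], 0)

applicants = [
    {"name": "cand1", "school": "Alpha U", "group": "A"},
    {"name": "cand2", "school": "Gamma U", "group": "B"},
    {"name": "cand3", "school": "Beta U",  "group": "A"},
    {"name": "cand4", "school": "Delta U", "group": "B"},
]

# Rank applicants and "shortlist" the top half -- the skew in the
# training data resurfaces as a skew in the shortlist.
ranked = sorted(applicants, key=similarity_score, reverse=True)
shortlist = ranked[: len(ranked) // 2]

print("Shortlisted:", [c["name"] for c in shortlist])
print("Group mix of shortlist:", Counter(c["group"] for c in shortlist))
```

Running this selects only group-A candidates: both group-B applicants are excluded not because of their qualifications, but because their schools never appeared among past hires. The point of unconscious bias training is precisely to help developers spot this kind of pattern in their data before it is encoded into a model.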

2)   Institutionalize the Diversity & Inclusion/Artificial Intelligence Partnership

Companies that use artificial intelligence in recruitment and retention should have a Chief Diversity Officer who can collaborate with the engineering and product teams to ensure that the design and development of products account for unconscious bias and diversity and inclusion issues. Diversity and inclusion should not be an afterthought or an add-on, but a full partner in the overall process, with the authority to affect decision-making. D & I experts should therefore be included in every stage of the product process, from development to testing to deployment to evaluation. By forming a legitimate partnership with an internal D & I team that has the power to make recommendations about algorithm specifications, companies will become more inclusive and less biased, improving the algorithm's overall effectiveness and utility.

3)   Conduct Regular Algorithm Performance Reviews

Human employees receive performance reviews to assess their strengths and areas for improvement. Algorithms should be reviewed on a periodic basis as well, to assess their areas for growth, including possible biases and how to address them through re-training. If an algorithm is not meeting expectations, a performance improvement plan should be developed to restore it to optimal functioning; a simple version of one such check is sketched below.
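As one illustration, a periodic review could compute each group's selection rate from the algorithm's logs and apply the four-fifths (80%) rule, a long-standing convention in U.S. employment-selection analysis, to flag possible adverse impact. The sketch below is hypothetical: the group labels and quarterly counts are invented, and a real review would examine many more metrics than this one.

```python
# A minimal sketch of one check an "algorithm performance review" might
# run: the four-fifths (80%) rule for adverse impact. All numbers below
# are invented for illustration.

def selection_rates(outcomes):
    """outcomes: dict mapping group -> (selected, total_applicants)."""
    return {g: sel / total for g, (sel, total) in outcomes.items()}

def adverse_impact_ratios(outcomes):
    """Compare each group's selection rate to the highest-rate group.

    A ratio below 0.8 is the conventional threshold for flagging
    possible adverse impact and triggering a closer audit/re-training.
    """
    rates = selection_rates(outcomes)
    best = max(rates.values())
    return {g: rate / best for g, rate in rates.items()}

# Hypothetical quarterly numbers pulled from the screening algorithm's logs.
quarterly_outcomes = {
    "group_A": (120, 400),  # 30% selected
    "group_B": (45, 300),   # 15% selected
}

for group, ratio in adverse_impact_ratios(quarterly_outcomes).items():
    flag = "REVIEW" if ratio < 0.8 else "ok"
    print(f"{group}: impact ratio = {ratio:.2f} [{flag}]")
```

In this invented example, group_B's selection rate is half of group_A's (an impact ratio of 0.50), which falls below the 0.8 threshold and would prompt a deeper investigation, much as a weak human performance review would prompt an improvement plan.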

4)   Expand Hiring Roles and Cross-Training

In their book about artificial intelligence in the workplace, Human + Machine: Reimagining Work in the Age of AI, Paul Daugherty and H. James Wilson discuss the need for new roles to manage the relationship between humans and artificial intelligence. For instance, they identify empathy trainers and machine relations managers as two positions that will work with AI algorithms to help them function more responsibly. Furthermore, existing roles such as an ethics compliance manager or ombudsperson, which once provided oversight only for human employee issues, would need to broaden their purview to include AI-related matters, such as how biased an algorithm's behavior may be and what that implies for talent management. Companies working in this space will therefore need to provide more cross-training and create the roles required to manage this new frontier of recruitment and retention.

As AI-driven talent management evolves, there will be plenty of opportunities for unconscious bias to enter the process. While some companies tout algorithms as a panacea for eliminating bias, we must remain vigilant, recognizing that the algorithms themselves must be evaluated for both unconscious and conscious biases. Artificial intelligence provides exciting opportunities to revolutionize the talent management process, but there should be a true partnership between machines and humans, in which diversity and inclusion experts team up with data scientists and programmers to deliver the least biased algorithms possible. Such partnerships and forward-thinking vision will enable a truly more diverse and inclusive workplace to be created, and to thrive now and for future generations.