AI recruitment systems to be investigated over discrimination worries
The UK privacy watchdog is set to probe whether employers using artificial intelligence in their recruitment systems could be discriminating against ethnic minorities and people with disabilities.
John Edwards, the information commissioner, has announced plans for an inquiry into the automated systems that screen job candidates, including looking at employers’ evaluation techniques and the AI software they use.
Over recent years, concerns have mounted that AI in many cases discriminates against minorities and others because of the speech or writing patterns they use. Many employers use algorithms to whittle down digital job applications, enabling them to save time and money.
Regulators have been seen as slow to take up the challenge presented by the technology, with the TUC and the All-Party Parliamentary Group on the Future of Work keen to see laws introduced to curb any misuse or unforeseen consequences of its use. Frances O’Grady, TUC general secretary, said: “Without fair rules, the use of AI at work could lead to widespread discrimination and unfair treatment — especially for those in insecure work and the gig economy.”
Edwards pledged that his plans over the next three years would consider “the impact the use of AI in recruitment could be having on neurodiverse people or ethnic minorities, who weren’t part of the testing for this software”.
Autism, ADHD and dyslexia are included under the umbrella term “neurodiverse”.
A survey of recruiting executives carried out by consulting firm Gartner last year found that almost all reported using AI for part of the recruiting and hiring process.
The use of AI in the recruitment process is seen as a way of removing managerial bias and preventing discrimination, but it could be having the opposite effect, because the algorithms themselves can amplify human biases.
Earlier this year Estée Lauder faced legal action after two employees were made redundant by algorithm. Last year, AI-driven facial recognition software used by Uber was alleged to be in effect racist. And in 2018, Amazon ditched a trial of a recruitment algorithm that was discovered to be favouring men and rejecting applicants on the basis that they went to women-only colleges.
A spokesperson for the Information Commissioner’s Office said: “We will be investigating concerns over the use of algorithms to sift recruitment applications, which could be negatively impacting employment opportunities of those from diverse backgrounds. We will also set out our expectations through refreshed guidance for AI developers on ensuring that algorithms treat people and their information fairly.”
The ICO’s role is to ensure people’s personal data is kept safe by organisations and not misused. It has the power to fine them up to 4% of global turnover as well as to order undertakings from them.
Under the UK’s General Data Protection Regulation (which is enforced by the ICO), people have the right to non-discrimination in the processing of their data. The ICO has warned in the past that AI-driven systems could lead to outcomes that disadvantage particular groups if the data set the algorithm is trained and tested on is not complete. The UK Equality Act 2010 also offers people protection from discrimination, whether caused by a human or an automated decision-making system.
In the US, the Department of Justice and the Equal Employment Opportunity Commission warned in May that commonly used algorithmic tools including automatic video interviewing systems were likely to be discriminating against people with disabilities.