AI in employment: the pitfalls and laws on the horizon

A recent House of Commons Library report highlighted how AI is being used by employers and the legal risks that could arise. Eliza Nash outlines what employers need to know.

Artificial intelligence rose to the forefront of the public consciousness last year with the launch of ChatGPT. It is set to revolutionise the way we live, as well as the way we work.

The House of Commons Library recently published a research briefing on AI and employment law. The briefing introduces AI itself, outlines the ways it is used in the workplace, explains how this use is restricted by current employment law, and sets out what is on the horizon in terms of future policy and regulation.

The use of AI or other algorithmic tools by employers to manage workers has become increasingly widespread. The briefing focuses on the use of AI as a workplace tool in three broad areas: recruitment, line management, and monitoring and surveillance.

Recruitment

AI can play a major role at each stage of the recruitment process. Among the examples given are:

  • Sourcing: AI used to identify the skills, qualifications, and experience required for a particular job from a job description, which hiring managers can use to reach out to suitable candidates
  • Screening: AI algorithms sifting through application forms and CVs by extracting relevant information and categorising it based on key criteria such as skills, education, and experience
  • Selecting: AI used to evaluate online interview performance by analysing biometric data.

The duties under the Equality Act 2010 to avoid discrimination on the basis of a protected characteristic are of particular significance in this area. These duties apply regardless of whether the employer’s decisions were made by human managers or with the assistance of AI.

The use of an AI tool could be seen as a “provision, criterion or practice”, which could give rise to claims of indirect discrimination if it has a disproportionate effect on a protected group.

One such example is that of Amazon. In 2014, Amazon developed its own automated CV screening algorithm using a decade’s worth of internal recruitment data. The algorithm aimed to identify the traits and qualifications highly valued by the company in potential candidates. However, it emerged that the algorithm had inherited biases from past hiring practices, leading it to score male and female candidates unequally. In effect, Amazon’s system taught itself that male candidates were preferable.
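To make the mechanism concrete, the minimal Python sketch below (not Amazon’s system; the data, keywords and scoring rule are all invented for illustration) shows how a naive screening model that scores CV keywords by their historical hire rate can penalise a keyword correlated with a protected characteristic whenever past hiring decisions were skewed.

```python
# Toy illustration only (not Amazon's actual system; hypothetical data and keywords):
# a naive screening model scores each CV keyword by its historical hire rate.
# Because the historical hires skew one way, a gender-linked keyword drags a
# candidate's score down even though it says nothing about ability.
from collections import Counter

# Hypothetical historical data: (keywords extracted from a CV, was the candidate hired?)
past_applications = [
    ({"python", "leadership", "chess club captain"}, True),
    ({"python", "sql", "rugby team"}, True),
    ({"java", "leadership", "chess club captain"}, True),
    ({"python", "leadership", "women's chess club captain"}, False),
    ({"sql", "netball team", "women's coding society"}, False),
    ({"java", "sql", "debating society"}, True),
]

hired_counts, total_counts = Counter(), Counter()
for keywords, hired in past_applications:
    for kw in keywords:
        total_counts[kw] += 1
        if hired:
            hired_counts[kw] += 1

def score(cv_keywords):
    """Average historical hire rate of the keywords the model recognises."""
    rates = [hired_counts[kw] / total_counts[kw]
             for kw in cv_keywords if kw in total_counts]
    return sum(rates) / len(rates) if rates else 0.0

# Two otherwise identical candidates, differing only in one gendered keyword.
print(score({"python", "leadership", "chess club captain"}))          # ~0.78
print(score({"python", "leadership", "women's chess club captain"}))  # ~0.44
```

Both candidates present the same skills, yet the second scores markedly lower solely because of the gendered keyword – the kind of disproportionate effect on a protected group that could amount to indirect discrimination under the Equality Act 2010.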

Line management

AI can play a major role in line management. Examples include shift scheduling (using automatic shift allocation algorithms) and performance evaluation – algorithms used to quantify worker productivity and performance, which may in turn affect decisions about promotion, rotation, and firing.

The briefing suggests that AI’s role in appraisal processes can have advantages for both employers and employees: such tools allow real-time evaluation, avoiding the delay of annual appraisals, while also potentially mitigating human biases sometimes displayed by managers. However, they have been criticised for lacking a “human element” and for potentially failing to consider employees’ “human potential” not evident in the data. The potential for AI to exhibit biases of its own is also evident from the Amazon example.

The implied common law duty of mutual trust and confidence may be threatened by the increasing use of AI to make or inform employers’ decisions, making it harder for employers to show that their decisions have been taken in good faith. Dismissal decisions that are unfair because of flaws in the AI processes used may also be covered by existing protections against unfair dismissal.

AI-based decisions can be difficult to scrutinise because the workings of many so-called “black-box” algorithms are very difficult to explain. This lack of transparency could pose evidentiary challenges in future employment law cases, where being able to give explanations or justifications for why certain decisions were made is often key to meeting the requisite legal tests.

Monitoring and surveillance

Whilst employers and tech producers may argue that monitoring employees can improve both productivity and workplace safety, AI surveillance raises concerns about privacy and mental health. For example, some technologies can capture employees’ unsent emails, webcam footage, microphone input, and keystrokes, while more advanced monitoring systems even allow live streams of employees in a shared digital environment. Video surveillance or the accessing of personal data by an employer could constitute an interference with employees’ privacy rights under Article 8 of the European Convention on Human Rights.

In addition, data protection law places restrictions on the ways in which AI tools can collect and process data about workers, as well as granting workers a right in principle not to be subject to significant decisions made solely by automated systems.

What’s next?

The UK government’s March 2023 white paper aims to provide a framework for AI development and use. The approach is guided by five ‘non-statutory’ principles – safety, transparency, fairness, accountability and contestability – which existing regulators will be expected to implement.

In contrast with the UK’s ‘regulation light’ approach, the EU is introducing legislation to regulate AI systems, which will prescribe legal obligations throughout the lifecycle of an AI system and establish new regulators in each member state. The EU provisions (which may take years to come into force) will apply to businesses whose system outputs are used in the EU, even if the provider is based outside the EU, so many UK businesses will inevitably find themselves subject to the EU regulations.

The difference in approach between the UK and the EU appears to reflect ideological differences in the weight placed on innovation on the one hand and on the risks posed to workers’ rights on the other.

Employers need to be aware of the potential risks, as well as the opportunities, presented by AI before introducing such systems.