AI revolution? ‘What if I like repetitive tasks?’
AI applications are multiplying and we are all being offered the chance to ‘free up’ our time. But what are we going to do with that time, asks Adam McCulloch.
“We’re excited to leverage emerging technologies, such as AI, to free up people’s time for more strategic work.” So says Maria Angelidou-Smith, chief product and technology officer at software firm Personio.
The company, launching a new product, sees “AI’s role as being about solving challenges for HR – such as them being short on time due to more routine, admin tasks”.
Personio is one of many HR tech firms bringing AI products to market, offering solutions designed to improve processes, increase efficiency, and free up HR practitioners’ time – so they can do a bit more blue-sky thinking, presumably.
But what happens when AI becomes good at strategic tasks too? What will HR managers do then?
And who’s to say HR managers don’t like doing routine tasks? Some of us enjoy the challenge of focusing on mind-numbingly repetitive procedures – they exercise parts of the cerebrum no amount of brainstorming can reach. Not everyone is strategic or finds strategy particularly entertaining.
And some might argue that employees are right to expect human interaction and people who understand and acknowledge the specific nuances of their queries. These are not “routine” tasks to be shunted off to a robot tool.
Meanwhile, the first signs of a pushback against increasing automation are emerging. Generation Z – the group many assume would be the most willing to accept new tech – have apparently become frustrated with non-human recruitment, a new study has found, and are likely to abandon applications if their questions are not addressed by an actual person in real time. This is nothing new; it has long been recognised that automated call centres are far less popular than properly staffed ones. It’s the same principle.
“The swift deployment of generative AI is like building Jurassic Park, putting some danger signs on the fences, but leaving all the gates open” – Beena Ammanath, Deloitte AI Institute
Perhaps organisations should bear in mind the words of Erik Brynjolfsson, professor and senior fellow at the Stanford Institute for Human-Centered AI, who told Davos earlier this year that AI should not be left to its own devices when used in decision-making. “I would not advise turning it on and walking away. These systems have too many flaws. They don’t understand truth very well and kind of hallucinate facts. Right now it’d be downright dangerous to use AI tools without a human in the loop.”
But he also sounded a note of caution about another commonly heard view: that AI will lead to more – not fewer – people being employed. What kind of work would these extra people be doing? Brynjolfsson says workers will be demoralised if they are forced into secondary roles by AI. Employing more people simply to monitor what AI is doing could lead to “pernicious problems”, as illustrated by Google’s driverless cars, he said. “First they had to have a safety driver to watch the system… but they got bored so then they added a second safety driver to watch the first driver.”
But lurking behind the speculation over AI’s impact on work is the fear that the technology will outsmart us. Chris Moran at the Guardian points to an incident in which a researcher used ChatGPT to track down an old article written by a specific journalist. The tool duly turned up an article and, although the text and headline appeared in the correct style, the journalist named on the piece could not remember writing it.
It turned out that the AI had decided to write the article itself and pass it off as the journalist’s work. Moran wrote: “In response to being asked about articles on this subject, the AI had simply made some up. Its fluency, and the vast training data it is built on, meant that the existence of the invented piece even seemed believable to the person who absolutely hadn’t written it.” What this could mean for HR policies is anyone’s guess.
“First they had to have a safety driver to watch the system… but they got bored so then they added a second safety driver to watch the first driver” – Erik Brynjolfsson, Stanford University
But whatever we think of AI, it is an unstoppable force. The headline of a CNN article on the technology says it all: “Welcome to the ‘generative AI’ era. Resistance is futile”. Of course, there are transformative benefits: health, scientific research, teaching and environmental protection are all likely to gain from the technology, for starters.
But Beena Ammanath, the executive director of the Global Deloitte AI Institute, says the risks have not been fully understood. She tells Allison Morrow at CNN: “The swift deployment of generative AI is like building Jurassic Park, putting some danger signs on the fences, but leaving all the gates open.”
She adds: “The challenge with new language models is they blend fact and fiction. It spreads misinformation effectively. It cannot understand the content. So it can spout out completely logical-sounding content, but incorrect. And it delivers it with complete confidence.”
Morrow concludes: “Dear God. It’s the over-confident white man in bot form.”