The pandemic has driven a rapid increase in the use of technology at work, and visionaries say technologies like AR and VR are going to become increasingly pervasive.

Should HR be more involved in the design and rollout of workplace technology?

Will modern technology force us to replace roles (performed by people) with functions (performed by people, technology, robots and algorithms)?

At what point do employees start to resent technology impinging on their jobs? When it can do 10%, 20% or 30% of their role faster and better than they can?

These are just some of the questions we explore in this episode.

Our news section covers a rare case where the tribunal gave some useful advice and a stark warning against allowing your equalities training to go “stale”.

Our listener’s question asks about dismissal and re-engagement, or “firing and rehiring”, which has become increasingly popular during the pandemic, and we share our top tips.

Please note: since we recorded this episode, the court in the Netherlands has ordered Uber to reinstate the five British drivers who were dismissed by automated decision-making technology.

Thanks for listening and let us know if you’re enjoying the podcast or if there’s a topic or question you’d like us to cover. You can email emplawpodcast@tlt.com or tweet us @TLT_Employment or use the hashtag #tltemploymentpodcast

Transcription

Jonathan Rennie:

Hello, this is Employment Law Focus and I'm Jonathan Rennie, a partner in TLT's Glasgow office. I'm joined today by Sarah Maddock, a professional support lawyer, and also Emma Erskine-Fox, who has the remarkable job title of Associate in our data privacy and cybersecurity team. Many of you will have heard from Emma on previous episodes on data-related topics. Today, we will be looking at the impact of workplace technology on employees, and what HR needs to know about the way these technologies are designed and rolled out, as well as the implications from an employment law and data privacy perspective.

I'm sure we've all at times over the last year felt some rage against the machines, when the laptop stopped working or we couldn't connect to the network and couldn't figure out why. Often, of course, it's just a case of needing to turn something off and on again. But for many employees, the issues with the technology their employer requires them to use can prove a lot more challenging and a lot more serious.

This is obviously an enormous topic so we're going to focus on three key areas where issues can arise, and some of the more taxing challenges for HR to get right. So, firstly we'll look at the impact of new technology on workforce structure and design. Secondly, we'll explain the HR risks associated with that and how you can deal with them. And then thirdly and finally, we'll look at recruitment technology. Now, before we get started, Sarah, you had a quick news story that you wanted to share with us all.

Sarah Maddock:

Yes. I wanted to talk about a really useful case on the importance of not only providing equalities and diversity training, but also making sure that that training is up to date and has gone far enough to prevent discrimination and harassment from happening. I'm particularly interested in this one because it's not often that this area is considered by the courts or the tribunals.

So, the case is called Allay (UK) Ltd v Gehlen. As many people listening will already know, if an employer can show that they have taken all reasonable steps to prevent harassment or discrimination from happening at work, then they can't be held liable should any discrimination or harassment actually happen. But this raises the obvious question: what do “all reasonable steps” actually mean in practice?

So, as with so many areas of employment law, there's no set definition, but it usually means having an equalities policy and providing equalities and diversity training.

The question that the Employment Appeal Tribunal had to decide was whether the training that had been provided by the employer was sufficient to show that it had actually taken all reasonable steps to prevent the harassment of one of its employees. The answer was that the employer had fallen pretty far short of the required standard.

So, although they did have an equal opportunities policy and had provided equality and diversity training, that training had become stale. It had taken place more than a year before the harassment occurred, and the appeal tribunal said that a reasonable step would have been to refresh that training, because it was clearly no longer effective.

And what's interesting in this case is that the appeal tribunal noted that the employer had provided further training when the harassment came to light, but this in itself suggested that there were further reasonable steps that could have been taken earlier. And helpfully, in its decision, the appeal tribunal laid out three factors which employment tribunals – and then obviously, by extension, employers – should consider when they're thinking about whether they have taken all those reasonable steps to prevent discrimination or harassment. Number one, the likelihood of the steps being effective. Number two, their cost. And the third and final factor is practicality.

Jonathan Rennie:

I think then anyone involved in promoting equality and diversity will be interested and very pleased with this, to see that the appeal tribunal is promoting an active approach to the reasonable steps defence. It really reinforces what we've always advised about equalities at work: that it's not sufficient to have an equalities policy that simply sits in a drawer or to simply provide sheep dip training to your workforce once in a blue moon. Training needs to be kept up to date, alongside an actively implemented equalities policy.

Sarah Maddock:

Yeah, I think that's absolutely right, Jonathan. And I think while we're talking about best practice, I'd always say that we'd recommend other steps such as visible and senior support for the equalities and dignity at work policies, very clear reporting lines for people who want to raise concerns, perhaps with some form of anonymity alongside that, and checking that managers actually know what to do if they become aware of discrimination or harassment at work.

And of course, that's not only because it's the right thing to do, but also because it means that if something goes wrong and an employer does find themselves facing an allegation of discrimination or harassment, then they can genuinely look back and say, "Well, we did take all reasonable steps possible to try and prevent that from happening in the first place."

Jonathan Rennie:

I think all businesses have experienced the digital push over the last year, with more technology being rolled out to make things more efficient and to respond to the rapid shift towards home working. Even before the pandemic, we were seeing an increase in the use of modern technologies like automation and AI. And this month, Mark Zuckerberg has been talking about the use of augmented and virtual reality to make virtual experiences feel more real. As Chris Nuttall at the Financial Times put it, “Facebook, Apple, and Microsoft feel augmented reality and virtual reality are so close to becoming mass-market technology that they can almost touch them. The demand is there for the humanising of working from home video chats, the bandwidth has arrived, and the products are under development.”

Now, regardless of whether this all sounds a bit science fiction to you, technology is going to become increasingly pervasive in the workplace and it's going to affect employees in ways we couldn't previously have imagined.

Emma Erskine-Fox:

It really feels like we've been talking about a lot of these technologies for a very long time, but they've really started to take off in the last couple of years, and I think take-up has definitely been accelerated by the pandemic. And some of the tech that we've seen cropping up has been directly related to the pandemic. So for example, Amazon released a tool last year called Panorama, which enables organisations to detect when staff are not wearing face masks or socially distancing in offices. And we've seen lots of sort of employee monitoring tech being implemented as well, just to make sure that people are actually working from home.

Jonathan Rennie:

Yeah, and we talked about that in our previous episodes, Emma, and it continues to move in that direction of surveillance, I think. And as a result of all of that, we therefore expect to see an increase in the number of tech-related issues that HR teams are having to deal with, whether it's from an employment law or a data privacy perspective. So, let's take decision-making technologies as an example then, and as I mentioned, we're going to speak about recruitment technologies separately a little bit later on.

Sarah Maddock:

Yes, this is something that I'm really interested in as well. The decisions that are made every day in business can be so important and have such a huge effect on people. This has always felt like a really risky, albeit necessary – or at least inevitable – direction to be heading in.

Emma Erskine-Fox:

Yeah, I agree. And using technology to make those sorts of important decisions about employees is something that requires really quite careful thought from a data protection perspective. So, the GDPR actually prohibits organisations from making legal or significant decisions about individuals based entirely on automated processing of personal data (so essentially, just using technology without any human intervention) unless specific conditions are met. So, for example, you can make those types of decisions if they are necessary for entering into or performing a contract, or if the individual has consented.

And even if one of those conditions is met, so you can make that decision just using technology, individuals still have some quite strict rights. They can request that the decision is effectively remade by a human, they can appeal the decision, and they have a right to an explanation of the logic behind it as well.

Jonathan Rennie:

Yeah, so much like Sarah's analysis of that expression, the “reasonable steps” defence, we have this expression of “significant decisions”, and of course, people love it when lawyers get involved in the assessment of materiality and what is “significant”.

It seems to me anyway that a lot of these employment-related decisions could be legal or significant. Clearly, lots of decisions about someone's employment – whether to promote them, decisions about dismissing people – these surely would be quite significant.

Sarah Maddock:

Yeah, I think it's probably fair to say, Jonathan, that it'd be quite difficult to think of a decision in the employment law context that wouldn't be legal or significant.

Emma Erskine-Fox:

Yeah, I agree. And I think it's worth noting that the legal or significant decision has to be about an individual, rather than about the business or about the workforce as a whole. But I agree. I think that a lot of the decisions about employees that employers would be looking to use technology to assist with would fall within this category.

And it really depends a lot of the time on exactly what the technology is going to be used for. So, taking the Amazon Panorama tool, if an employee was found not to have worn a face mask, and this automatically resulted in the employee being subject to disciplinary action – or even in the worst-case scenario, dismissed – then that would pretty clearly be a significant effect. But on a less extreme level, if that technology resulted in the employer just having a conversation with the employee and reinforcing the importance of mask-wearing and requesting that they do continue to wear their mask, that probably doesn't fall within that concept.

Sarah Maddock:

And you said earlier, Emma, that these restrictions only apply if employers are making these kinds of decisions using only technology without any human intervention. So, if the employer reviews the decision on a manual level as well, does that mean that the restrictions don't apply?

Emma Erskine-Fox:

In theory, yes, but it really depends on what that manual review looks like. For decisions to fall outside the restrictions, the human intervention has to be meaningful. It requires something more than an employer just looking at a decision and rubber-stamping it; there has to be some proper consideration of the data going into that decision. And there's certainly nothing to stop employers from using technology to assist with decision-making where there is some meaningful human intervention, but the extent of the reliance on technology needs to be considered really carefully.
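
To make that idea concrete, here is a minimal sketch in Python of how a “meaningful human intervention” requirement might be designed into an automated decision flow. It is purely illustrative – the class, field names and example data are hypothetical, not a real system or a prescribed legal standard. The automated system only ever produces a recommendation, and nothing takes effect until a named reviewer has recorded a reasoned assessment of the underlying data.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Recommendation:
    employee_id: str
    proposed_action: str             # e.g. "invite to interview"
    supporting_data: dict            # inputs the reviewer must actually consider
    reviewer_notes: str = ""
    approved: Optional[bool] = None  # stays None until a human decides

def apply_decision(rec: Recommendation) -> None:
    # Guard: no effect without a documented, reasoned human review.
    if rec.approved is None or not rec.reviewer_notes.strip():
        raise RuntimeError("decision requires documented human review")
    verb = "Applying" if rec.approved else "Rejecting"
    print(f"{verb} '{rec.proposed_action}' for employee {rec.employee_id}")

rec = Recommendation("E123", "invite to interview",
                     {"test_score": 78, "experience_years": 4})
rec.reviewer_notes = "Score consistent with CV and references; proceed."
rec.approved = True
apply_decision(rec)  # prints: Applying 'invite to interview' for employee E123
```

The point of the guard clause is exactly the point made above: a rubber stamp isn't enough, so the design forces the review to be recorded and reasoned before the decision can be applied.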

And just while we're on this topic of automated decision-making, there have been some recent court challenges relating to the use of technology to make decisions about employees that it's probably worth touching on. Uber has obviously been in the news in an employment context because of the Supreme Court case about Uber drivers' status as workers, but there have been a couple of data protection related challenges as well involving Uber and also Ola. One of those challenges has actually recently resulted in an order for those companies to provide greater transparency in terms of the technology that they use to track their drivers, which is really interesting and just shows how important it is to be open and transparent about these types of technology.

The other challenge is still going through the courts in the Netherlands and claims that Uber has automatically cancelled drivers' accounts on the basis of alleged fraudulent activity that's been detected by AI, without any human oversight and without a legal justification for making those decisions solely by automated means. So it's definitely going to be really interesting to see how that goes, but again it just highlights how important it is to make sure that you go through the right steps and the right considerations when you are thinking about making these sorts of wholly automated decisions.

Sarah Maddock:

Yeah, I think that's a really good point Emma, and I think we're really in the thick of it at the moment in terms of the cases that are moving through the system and showing how modern technology is now bumping up against traditional pre-digital legal concepts. On the Uber Supreme Court decision on employment status that you just mentioned, Emma, it was really interesting to see that the employment law framework was straining a bit to keep pace with modern technology. So, the three categories of worker, employee and self-employed were drafted back in the mid-nineties, when it couldn't even have been dreamed possible that an employer could have access to the services of individuals so rapidly via an app and with such ease.

Emma Erskine-Fox:

It's also important to be aware that even if you're not just using technology to make decisions about people, any processing of personal data using these sorts of tech tools has to comply with all of the GDPR principles. So, one of the key things to be aware of is that if you are using these types of tools to process personal data in a way that uses new or innovative technology or otherwise poses a high risk to employees, that will require a data protection impact assessment to be carried out before any processing takes place. And not only is that a mandatory requirement of the law, it can be a really helpful process to go through because it enables employers to really dig into what the privacy risks are for employees and look at what solutions they can put in place to address and mitigate those risks.

I've talked about the importance of transparency, which has been highlighted by a couple of those challenges recently in the courts as well. And again, it's really important for employers to be transparent with individuals about how these kinds of tools use their personal data and what sort of impact they're going to have for employees. And employers will also always have to make sure that they've got an appropriate legal justification for using data in the ways envisaged, and that the use of these tools is justified and proportionate from a privacy perspective.

Sarah Maddock:

I think another area that's really fascinating in terms of the impact of technology on people's work is when technology starts to take on parts of people's roles for them – either as well as, or perhaps even better than, a human can do them – and the impact that's going to have more widely on jobs, the structure of the workforce and employment disputes. We're already seeing this happening, and I think it's a trend that is set to continue.

So, for example, whilst time-saving technology is often welcomed in the workplace, you could say that there is nearly always a tipping point. So for example, most people would be quite pleased to have say 10% of their day freed up by a piece of technology that does the more tedious or repetitive parts of their job for them in moments. But what happens when that technology can do say 20% of your job, and then what if it creeps up to 30%? What is left behind that can't be undertaken by a machine? When does the machine stop becoming a helpful tool and start becoming a threat?

Jonathan Rennie:

Yeah, I think that's why one of the Terminator movies is subtitled Rise of the Machines, Sarah, and it has left me rather scared of some of these concepts.

One area that has certainly been particularly affected is manufacturing. In fact, an analysis carried out in 2019 showed that up to 20 million factory jobs around the world could be replaced by robots by 2030. And it's also been recently reported that Amazon has 200,000 – that's right, 200,000 – robots working in its warehouses in the US.

This all sounds very concerning in some ways for the employees, but there's also optimism that new jobs will be created as automation becomes more widespread. In some cases, for example, it's possible to automate aspects of a worker's role that are repetitive and time-consuming, which might then free them up to focus on more creative and skilled tasks – a shift often described as upskilling.

Sarah Maddock:

Yeah, and I think that leads on to the next sort of really interesting issue, which is what impact technology will have on the shape of the workforce more generally and the job market. So, when automation becomes more and more sophisticated, which it will do over time, will we see jobs that rely on rules-based work almost completely evaporating?

Jonathan Rennie:

Yeah, it's quite something, isn't it, to think about this vision of the future, but it certainly seems pretty likely that that might happen.

Sarah Maddock:

Yeah, so I think the question that leads on to is where that leaves us in terms of the skills that businesses are going to need as that change takes place over time. Of course, this is something that is impossible to predict, and we're certainly not going to get it resolved in the space of this podcast episode. But at the moment, the characteristics that machines will struggle to replace are those at opposite ends of the employment spectrum.

Jonathan Rennie:

Would you be able to expand on that a little bit?

Sarah Maddock:

Well, what I think that means is that jobs that rely on skills with human characteristics – such as teamwork, empathy, strategic thinking, creativity – are likely to only increase in value, and they're at the top end of the employment market. And roles that rely on manual dexterity – such as preparing food, making beds, cleaning work – will also remain in demand and they lie at the other end of the employment market.

Jonathan Rennie:

Okay. So, in effect there, we might see what could be described as a hollowing out of the workforce with a reduced demand for mid-skilled rules-based roles.

Sarah Maddock:

Yes. I think that's one theory that's being discussed at the moment.

Jonathan Rennie:

So, I've had a little bit of a look into this – and of course, the whole process has been accelerated by the pandemic – and I noted that the Fabian Society has reported that many roles that were most at risk of automation have been adversely affected by lockdown restrictions, as customers shift permanently online and the move towards automation gathers pace. This is all really interesting stuff, but what we need to consider is what are the legal risks here, and how should employers respond to them?

Sarah Maddock:

In the shorter term, new technology may mean that employers are looking to redesign employees' work to accommodate new technologies, and also may ultimately entirely replace a role. In these circumstances, it's important to remember that the normal rules around fair dismissal and fair treatment at work are going to apply.

So, where a new piece of technology involves changes to an employee's job duties – as opposed to a dismissal – the employee's express agreement to those changes should first be obtained, because other than in very limited and exceptional circumstances, employers are not entitled to unilaterally amend the terms and conditions of employment.

And if express agreement cannot be reached, then a dismissal and re-engagement process could be used, but this involves a clunky procedure and is quite risky. In fact, we've had a listener's question about this, which Jonathan is going to answer at the end of this podcast episode.

Another factor I think it's important to consider when automation is happening at work is that, of course, some redundancies may be inevitable. And again, just remember that the usual rules and procedures will apply. So, there must be a fair reason for the termination, and if there is a reduced need for work of the kind that the employee does, that would usually be a redundancy dismissal. Or alternatively, there might be a reorganisation of work, which could come under the category of “some other substantial reason” under the Employment Rights Act, justifying the dismissal in question. But remember, whatever the reason for the dismissal, it should always be handled in consultation with the employee, allowing a right of appeal.

Emma Erskine-Fox:

So, one question that occurs to me is that these types of new technologies that we're talking about are often going to be driven by the IT department, or potentially by some sort of business change or transformation function. But HR often has a very important role to play, and it might not always be obvious to their colleagues how, why and when they should be involved. Is that a challenge HR faces, do you think, and should they be more involved in tech decisions generally speaking, so that they can consider the impact on staff and jobs and anything that they need to be doing or preparing for?

Sarah Maddock:

Yeah. I think those are all really excellent questions, Emma, and I would say the answer to all of them has to be yes. And I would actually go further and ask whether, when we look further into the future, we will see the distinction between HR and IT departments starting to blur, as the emphasis shifts away from people undertaking jobs which are then supported by technology, and towards a focus on functions at work. Whether those functions are fulfilled by machines or people then becomes almost an incidental question.

Jonathan Rennie:

So it could in fact be an opportunity for HR to be involved in the system designs. And if that is indeed the case, there is an argument to say that HR will have a very important role in the design and even the rollout of these new technologies, perhaps advising on consultation with affected employees where technology is likely to impact on job numbers or the design of an employee's roles. And clearly, HR's involvement will be beneficial for a number of reasons. It will help comply with any legal obligations to consult, ensure that any reskilling takes place, and will also help with employee buy-in to the changes, and all those important communications.

I know that the involvement of HR is something Emma has advised on from a data privacy perspective.

Emma Erskine-Fox:

Yeah, sure. And I think this is really important from a data protection point of view. One of the central principles of the GDPR is something called data protection by design, which essentially means that privacy should be built into any data processing solutions right from the outset. So, data protection compliance should always be considered at as early a stage of a new tech project as possible.

And we've seen that one of the ways to make data protection by design really effective in practice is to involve a broad range of teams and people within the organisation in those initial considerations. I mentioned earlier on the importance of doing a data protection impact assessment, or DPIA, for these types of technologies. And from my perspective, it's really important that when these technologies are going to affect employees, HR should play a key role in the preparation of that document, because it requires a really thorough understanding of how the technology will be used and how it could affect employees. Jonathan, you've mentioned consultation a couple of times as well, which I'm aware is often very important from an employment law perspective. But it crops up in data protection too, and the regulator recommends consulting the actual individuals who are going to be affected by these types of new processing solutions.

Sarah Maddock:

I think a really good example of technology having an impact on an individual is a case that came out back in 2017, Government Legal Service v Brookes. It concerned an automated recruitment process, which led to a finding of indirect disability discrimination against an applicant with Asperger's syndrome.

Now, in this case, applicants were asked to complete short multiple-choice questions, and the marking was done by a computer and there was no human intervention or judgment in that. And what the claimant argued here was that she should have been able to submit short written answers, because she said that the multiple-choice format placed her at a disadvantage.

So, in this scenario, HR could really helpfully have intervened in the rollout of this new technology – to ensure that flexibility was built into the design to accommodate those who might be disadvantaged, without losing the obvious efficiencies offered by the new recruitment system.

Jonathan Rennie:

But if you think about how some of these technologies work, you're not necessarily going to know at the outset what the real risks are, because the technology can evolve. And certainly in the case of AI, we hear a lot of stories about trials or rollouts being stopped because there was an issue with how it was working and the decisions it was making, or the impact, indeed, that it was having on people.

Some of you might remember the AI chatbot, Tay, that Microsoft launched on Twitter a few years ago. It was susceptible to unsavoury tweets made by some of its followers, and very soon adapted itself to sending out its own racist and sexist remarks and had to be pulled. It's easy to see how a similar situation could arise in the workplace, leaving the door open for grievances, and that would be a major negative setback and embarrassment for any employer that's working hard to address inequalities in the workplace.

And if there is an issue with the technology, you don't always know who's responsible for it or how it works, and it can be a real challenge to find the right solution.

Emma Erskine-Fox:

Yeah, that's really interesting, Jonathan. And that issue around sort of the allocation and division of responsibilities is another one that's really important from a data protection perspective as well, because the employer as the data controller of employee data is almost definitely going to be the one that's primarily liable from a data protection perspective and accountable for data protection compliance.

But with any tech solution, there'll obviously be a pretty heavy reliance on technology vendors, and the employer itself is unlikely to know the intricacies of how the technology works. Vendors often “black box” their algorithms, which essentially means making them quite opaque in order to protect their IP and avoid revealing their trade secrets. And so employers won't be able to control on a day-to-day level how the tech works and what it does, or really know any details of that.

So it's going to be really important for employers to do thorough due diligence on those tech vendors, speak to them about things like the accuracy rates of their decisions and what happens if something does go wrong, and make sure that they're getting contractual protections from them that are as robust as possible.

Sarah Maddock:

Yeah, I think those are really excellent practical points for employers, Emma. I think what we need to get across is that there's really an authority issue here for employers when it comes to engaging with tech providers. And what I mean by that is that I think we're often led to believe that tech-driven decisions or tech-driven solutions are the right decisions, or that technology is a fixed concept. But it's really important to remember that that might not necessarily always be the case, and employers really have got a responsibility to keep monitoring and keep assessing and taking an active role in looking at these systems in terms of their potential risk. They've really got a responsibility to help to ensure that technology and people can co-exist without infringing on people's rights.

Jonathan Rennie:

We've briefly mentioned recruitment technology in the 2017 case that Sarah outlined, but we said we would look at this big topic separately. Recruitment is obviously a very labour-intensive process, so it was always going to be ripe for disruption, and AI has certainly helped to make the process a lot faster and more efficient for a lot of recruiting employers.

Sarah Maddock:

Yes, and that's going to be even more important now, with the end of the furlough scheme set for September and an increase in insolvencies and restructurings expected to mean a rise in unemployment in 2021. It would be very unsurprising if companies turned to technology to sift through very high volumes of job applications.

Jonathan Rennie:

And using that technology can also help to address some systemic issues like bias and a lack of diversity, which have really come to the fore in the last decade and will continue to be a big driver of recruitment technology. Some of the biggest trends that I've seen are, for example, using artificial intelligence to push job adverts to the right kinds of people online, or interview chatbots that help with pre-screening and scheduling interviews. And obviously, AI can also look at past applicants and match them to current job opportunities, and even contact candidates and update your systems with any new career information from social media.

We're also seeing the gamification of the recruitment process, which always sounds quite fun – using games to test skills – perhaps lending itself more to some jobs than others, such as air traffic controllers. I know I tried and failed at that one.

A more futuristic trend that we're seeing, and coming back to Facebook's virtual reality comments, is using virtual reality to give candidates virtual tours of the workplace or work situations and role-playing or role-modelling as part of the selection process.

Evidently, the speed and convenience of using these new types of technology are a key benefit to most organisations, but you'll all be thinking there are other, more far-reaching positives to consider. For example, the use of online chats and chatbots may help to improve social mobility in recruitment by removing barriers that may discourage applications from more diverse candidates. And not only might this improve the opportunities for applicants; employers might also benefit commercially from a wider pool of candidates.

Recruitment software may also have an important role to play in removing unconscious bias. Systems that are data-driven and can be taught to ignore traditional prejudices can help address the lack of female and BAME representation we see across many industries, and help build more inclusive workplaces. However, HR needs to tread very, very carefully here from an employment law perspective. We need to be sure that the data on which decisions are based is free from bias, and we need to be aware that algorithms are programmed by humans, who are prone to bias and subject to prejudices themselves.

Sarah Maddock:

That actually reminds me, Jonathan, of a story I read a couple of years ago about Amazon, who had to scrap an AI tool it used to vet candidates for jobs because the system had taught itself that male candidates were preferable – it had been trained on data from applicants over a 10-year period, who were largely male. So the system automatically downgraded any references to things like “women's college” and “women's chess club captain”, for example. Obviously a very big mistake, and I'm sure Emma would agree that from a data perspective, it's not good.

Emma Erskine-Fox:

No, absolutely Sarah, yeah. And it is an area where you do have to be very careful from a data protection perspective. So any processing of personal data that you carry out has to be fair, and if algorithms are producing biased or discriminatory decisions then that's unlikely to be the case. So it's important to regularly audit the outcomes of algorithmic decision-making to check whether bias is inadvertently being introduced.
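
As a purely illustrative sketch of what that kind of regular audit might look like, the Python snippet below compares selection rates across groups and flags any group falling below 80% of the best-performing group's rate – the familiar “four-fifths” rule of thumb. The data, group labels and threshold here are all assumptions for illustration, not a legal test or a prescribed method.

```python
def selection_rates(outcomes):
    """outcomes: iterable of (group, selected) pairs, e.g. ("F", True)."""
    totals, selected = {}, {}
    for group, was_selected in outcomes:
        totals[group] = totals.get(group, 0) + 1
        if was_selected:
            selected[group] = selected.get(group, 0) + 1
    return {g: selected.get(g, 0) / totals[g] for g in totals}

def adverse_impact_flags(rates, threshold=0.8):
    """Flag groups whose selection rate falls below `threshold` times the
    highest group's rate (the "four-fifths" rule of thumb)."""
    best = max(rates.values())
    return {g: rate / best < threshold for g, rate in rates.items()}

# Example audit of a month of automated CV-sift outcomes (made-up data).
audit = [("M", True), ("M", True), ("M", False),
         ("F", True), ("F", False), ("F", False)]
rates = selection_rates(audit)
print(rates)                        # ~{'M': 0.67, 'F': 0.33}
print(adverse_impact_flags(rates))  # {'M': False, 'F': True} -> investigate
```

A flag like this wouldn't prove discrimination on its own, but it's the sort of regular, documented check that would prompt the human follow-up discussed below.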

The data used to train the system is really important as well. There are obligations in the GDPR to make sure that data is accurate, and introducing incorrect data could lead to bias. But you've got to think about the accuracy of the output data as well. You know, Sarah, what happened in that Amazon tool that you've just talked about is that the data itself that went into the system was historically accurate. It just still led to a discriminatory outcome.

Sarah Maddock:

Yes, and another point I think we need to touch on here refers back to your points about bias and poor decision-making. Whilst technology might be very good at picking up on poor or biased decision-making within an organisation and highlighting it, HR needs to be aware that it should be on standby to step in and remedy those problems when they're identified. Because there will be a legal risk if AI or another type of technology identifies a problem and there's then no follow-up from HR to put it right. So again, we seem to keep coming back to how technology and human intervention act together to produce a positive result overall.

Emma Erskine-Fox:

Yeah, definitely. And that certainly links back to some of the points that we were making when we were talking earlier about wholly automated decision-making. So there's a right to appeal any wholly automated decisions with legal or significant effects and a right to obtain human intervention. So as part of that data protection by design piece that we've also touched on, employers will need to think about how they can build those processes in to allow employees to challenge these types of decisions when they're made. And probably more broadly from a fair processing perspective, whenever any technology is being used to have an impact on individuals, you really do need to think about what happens, what the processes are to challenge things if they do go wrong.

Jonathan Rennie:

We just have time for one listener question, and that's about when it's appropriate to consider dismissal and re-engagement, or “firing and rehiring” as it's known. This is something we've all been seeing during the pandemic, as clients have been forced to review people's roles, but don't want to lose that talent – perhaps because they expect to see a pick-up in demand down the line.

Another pandemic-linked reason for a fire and rehire tactic could be where employers are looking at making a permanent change to working arrangements and that change is not allowed for in employees' contracts. For example, if the contract contains no flexibility about place of work or working from home, and the employee doesn't agree to a change, this could be one way of forcing the change through.

It can also sometimes be used as an emergency measure while employers weigh up their other options, because it doesn't preclude a redundancy from happening further down the line. So we sometimes see this as a bit of a holding position whilst an organisation is looking at other approaches. And because fire and rehire doesn't constitute a redundancy in law, redundancy payments are avoided if the individual doesn't agree to the new terms and is fairly exited.

So, there are three key steps to this process. The first is that there has to be a justified business rationale for the decision – which may very well be cost, of course. Secondly, there needs to be consultation with staff to reach agreement if possible. And thirdly, if agreement cannot be reached with employees, serve notice (that's important – notice must actually be served) and give the employee their notice period under the old contract to accept the new one. So, in essence, the employee is faced with a stark choice: be fairly exited, or accept potentially reduced terms.

Something to watch out for here is that if you're proposing to dismiss 20 or more employees at one establishment over a period of 90 days or less, this can trigger collective consultation requirements, because the dismissal is for a reason not connected with the individual employee. You can read more on our website about this, and we'll include a link to that article in the episode notes.

All that's left for me to say, then, is thanks from me, Sarah and Emma for listening. If you're enjoying the podcast, please do rate us and follow us and tell your teams and networks.

If there's a topic or question you'd like us to cover, you can email us at emplawpodcast@tlt.com. And you can find us on Twitter at @TLT_Employment or use the hashtag #TLTEmploymentPodcast.

The information in this podcast is for general guidance only and represents our understanding of the relevant law and practice at the time of recording. We recommend you seek specific advice for specific cases. Please visit our website for our full terms and conditions.

Date published

19 April 2021
