The use of employee monitoring has grown as more people work remotely and employers seek to get a better understanding of their workforce. However, whether it involves the use of surveillance technology to monitor people’s devices, CCTV, microchips or gathering personal data, there’s a whole host of data protection and employment risks to consider.

Is it ever OK to monitor on a blanket basis? Can you justify surveillance based on good intentions, like measuring wellbeing? Can you use personal data to positively discriminate in favour of certain employees? And who’s responsible for monitoring in shared workspaces?

In this episode, we explore the trends and issues surrounding:

  • Remote monitoring, including extra precautions around “special category” data
  • The GDPR principles of transparency, necessity and proportionality
  • Using employee records, and what’s regarded as “excessive”
  • Positive discrimination linked to equality, diversity and inclusion
  • Surveillance in buildings 

In our news section, we cover:

  • The use of algorithms to make automated decisions about employees
  • The ICO’s final guidance on data subject access requests, including when you can “stop the clock” and what constitutes a manifestly unfounded or excessive request

We also answer your questions about whether an employee can withdraw a data subject access request, and how to prepare for the end of the Brexit transition period.

 

Useful links:

Send us your ideas for future episodes – email emplawpodcast@tlt.com or tweet us using the hashtag #TLTemploymentpodcast and tagging @TLT_Employment

Transcription

Siobhan Fitzgerald:

Hello, and welcome to Employment Law Focus.

In this podcast, we aim to help you understand some of the biggest challenges facing HR teams and businesses today – from sexual harassment and philosophical beliefs, to the GDPR, burnout, and unconscious bias – and our experts share their insights and advice on how to deal with them.

I'm Siobhan Fitzgerald, a partner in our UK-wide employment team, and today I'm speaking to Emma Erskine-Fox, who's an associate in our data privacy and cybersecurity team. Emma joined us on the podcast about a year ago to talk about dealing with employee data subject access requests, and she's back today to talk about employee monitoring and the use of personal data.

Emma Erskine-Fox:

Hi Siobhan. It's great to be back.

Siobhan Fitzgerald:

Now, if you've listened to our last but one episode on agile working, you'll know that employee monitoring and surveillance have become a really big topic since the first national lockdown. The sudden increase in people working from home has meant that companies selling remote monitoring technologies have become a lot more prominent and talked about in the media, and companies using this software have faced questions over what they're doing and why and, of course, the legitimacy of that.

But it's fair to say that this can be a legitimate business practice, and in some industries it's even required by the regulator for good governance, for example. It's been used of course in one form or another for decades, and it's probably a lot more common than people actually think. But there is of course a right and a wrong way to go about it, and there's a whole host of ways in which this can actually become really complicated.

Emma Erskine-Fox:

That's absolutely right, Siobhan, and I have certainly seen recently an increase in people coming to us and asking for advice on employee monitoring from a data protection perspective. And it's quite clear from some of the articles and commentary I've seen on this that the full legal implications aren't always fully understood.

And it's easy to think of employee monitoring as a technological issue, but actually it goes far beyond technology. So we should also look at some of the issues stemming from cases like the recent fine against H&M for its illegal surveillance of several hundred employees. And interestingly, this was less about monitoring people through their devices and more about the excessive records that H&M was keeping and what they did with them. And it highlights how good intentions can inadvertently land you in hot water – costing around £32m in H&M's case.

Siobhan Fitzgerald:

Goodness. That's very hot water indeed, Emma.

Emma Erskine-Fox:

Yes.

Siobhan Fitzgerald:

So, Emma, any other recent developments that would be useful for our listeners to know about? Just before we get into the detail on a few more particular matters.

Emma Erskine-Fox:

Yeah, sure. So one of the things that I was keen to highlight is there have been some particular developments in the AI space recently, and there have been two legal challenges brought in the last few months against Uber by a union on behalf of some of Uber's drivers relating to Uber's use of AI and algorithms to make decisions about employees.

So this whole issue of algorithmic decision-making was something that hit the headlines in a pretty big way over the summer with all of the issues around A-level results and the alleged bias in the algorithm that was used to make those decisions.

Siobhan Fitzgerald:

Yes. And that was certainly very controversial.

Emma Erskine-Fox:

Yes, absolutely, yeah. And the Uber cases are interesting in an HR context because they relate to decisions made about employees. So one of the cases focuses on the lack of transparency in the AI system that was used to allocate jobs between drivers, and the second alleges that Uber has made decisions about firing employees solely based on an algorithm which Uber claims detected fraudulent activity.

The union's claim is that Uber made these decisions without any meaningful oversight by a human and without having an appropriate lawful basis under the GDPR. The union also claims that drivers weren't given sufficient information about the decisions or any right to appeal those decisions.

And it's a really interesting case for employers because the use of AI is certainly on the rise and it's something that we're advising on more and more, but it just shows how careful you need to be if you're using automated systems to make important decisions about employees or candidates. Very much a watch this space at the moment, but the outcome of those cases will be really fascinating to see.

Siobhan Fitzgerald:

Yeah. I mean, I completely agree with you there, Emma. I mean, the idea of an algorithm making a decision on, say, whether to dismiss an employee or to terminate an agreement with a worker is certainly a new one for employment law. And at this rate, the Employment Rights Act, which as you know governs unfair dismissal, could soon be out of date, I think.

Emma Erskine-Fox:

Yes, it certainly sounds like it. And one other thing that it might just be worth mentioning before we dive into the employee monitoring issues, just because this links very nicely to what you and I spoke about last time I joined the podcast, Siobhan, is that the Information Commissioner's Office has now released its finalised guidance on DSARs – data subject access requests – under the GDPR.

After we spoke about it, the regulator consulted on draft guidance under the GDPR and did actually take a number of points of feedback into account. So in particular, the finalised guidance confirms that organisations can “stop the clock” if they need more information from the individual in order to clarify the request, which I know is something that a lot of our clients were concerned about. And there's also more guidance on when a request can be considered manifestly unfounded or excessive.

So some of the points that are drawn out around that are that organisations should consider whether the DSAR is clearly or obviously unreasonable taking into account the burden and the cost involved for the organisation. And you can consider things like the context of the request, whether refusing to respond could cause substantive damage to the individual, what resources you've got and whether the request is a repeat request or whether it overlaps with other requests.

The guidance does maintain that organisations shouldn't have a blanket policy for categorising DSARs as manifestly unfounded or excessive, and that there should still be strong justifications for making that decision. But these points are certainly very helpful for employers.

There are various other interesting points in that guidance as well, so when we send the podcast recording out we'll provide a link to the guidance and to TLT's coverage of it.

Siobhan Fitzgerald:

Employees have always been under surveillance of one kind or another, whether that's through security card data when accessing and moving around buildings, maybe attendance logs or CCTV cameras, or simply being observed by their colleagues and managers. But if we bring things bang up-to-date, increasingly employers are keen to capture more data generally on staff.

And in 2020, the use of device monitoring technology has risen, of course, because of the move to homeworking. There's no doubt that this obviously has its advantages but also risks as well.

Now, as you mentioned, Emma, in the last few months, we've also seen some high profile cases about data capture and how that's being used.

Emma Erskine-Fox:

Yes, we have. And of course, data protection laws have been strengthened quite significantly in the last few years, primarily because of the arrival of the GDPR back in 2018. So employees now have more rights in relation to their personal data and there's a greater risk for businesses if they're caught doing something that they shouldn't be. And I think a good starting point for today's conversation might be that employee monitoring is often painted as a scary “Big Brother” type exercise designed just to catch employees out and build a case against them. But as you've mentioned briefly at the start Siobhan, it can often be used for legitimate reasons or at least with good intentions.

You can see, for example, how an employer might want to use some form of employee monitoring to measure performance and productivity now that, as you said, a lot of the workforce is working remotely. And of course, that sounds like a very legitimate purpose for an employer. When you can't physically see that people are putting in the hours they're paid for in the office, it's natural that employers would want to be able to ensure that employees are still working as they should be.

Siobhan Fitzgerald:

Yes, absolutely. I can see that. And you can also see how employee monitoring could be used as an indicator of wellbeing, to ensure that people are not suffering from a lack of motivation or engagement with their job, perhaps as a result of mental health issues like depression or anxiety, which is something that has been coming up quite regularly in the media.

And we've also talked in previous podcasts about what we refer to as the work-life blur. Could employees actually be working too much? So if an employer can see that they're working very late into the evening, for example, then this could be a bit of a red flag for a manager to check up on.

It might also be used as an indicator of where more clarity, support or focus from the employer is needed as organisations are moving to this new way of operating, this new world that we're all facing.

Emma Erskine-Fox:

Yeah, and interestingly, I've seen a few queries from clients recently about the extent to which they can use sensitive employee data to positively discriminate, linked to equality, diversity and inclusion objectives – which are clearly great things to be thinking about.

Siobhan Fitzgerald:

Yes, absolutely. And maybe we can come back to that in a little bit more detail, Emma, because that idea of positive action, positive discrimination – employers need to take quite a lot of care there.

Emma Erskine-Fox:

And more generally, employee monitoring can also be an essential part of good governance and security. The shift to remote working has removed a lot of the security measures and safety nets that organisations had in place in buildings before the pandemic. And I certainly can't see a time, for example, when employers are allowed to install CCTV cameras in people's homes.

Siobhan Fitzgerald:

Emma, I would say that's probably for the best for those who may still be working in their pyjamas perhaps at 11am.

Emma Erskine-Fox:

Yes. Present company excluded, obviously, Siobhan.

Siobhan Fitzgerald:

Of course, of course.

Emma Erskine-Fox:

There's also generally less opportunity for policing conduct and spotting illegal or unscrupulous behaviour through people's attitudes, actions and body language, as you'd be able to do in the office.

And in the financial services industry, for example, the Financial Conduct Authority has highlighted the risk that bankers could use privately owned devices to make personal trades based on confidential information, which is obviously much harder to do on a physical trading floor. And the FCA has said it expects firms to have updated their policies, refreshed their training, and put in place rigorous oversight now that traders are routinely working from home.

Siobhan Fitzgerald:

So employee monitoring could arguably then be important to keep clients, staff and the organisation safe and to reduce the risk of illegal or reputationally damaging behaviour. And as well as having that real-time data, this might include keeping data to look back on if required, to help with investigations and legal processes – many of our listeners will be familiar with employment law processes like disciplinaries, for example.

Now, employers will have to ensure that they comply, of course, with their own policies and any express contractual obligations as well. But there's also, of course, the implied legal duty on employers to maintain their employees' trust and confidence, which means they have to treat them in a fair and reasonable way.

Now, as we explained in our episode on agile working, there are four things to consider here especially in respect of employee monitoring.

So firstly, of course, you, as the employer, should act in a proportionate and justified manner. Secondly, you have to inform employees, of course, of your intention to monitor them. Thirdly, you need to be mindful of the risks around discrimination. And lastly, employers need to ensure that there are sufficient safeguards in place to prevent abuse or over-monitoring.

Emma Erskine-Fox:

Yeah, and there's definitely some overlap there from a data protection perspective. So as I said, the main legal framework here is the GDPR, which sets out a number of principles that need to be considered if employers are going to process employee data for monitoring and surveillance purposes.

Siobhan, you mentioned transparency just then and this obligation to inform employees of any intention to monitor them. And that's so important in a data protection context. Transparency is a really key principle of the GDPR, so you need to make sure that you're really upfront with employees about your intention to monitor them and exactly how you're going to do it, as well as what the consequences could be for them.

However, before you even get on to that, it's important to establish if the monitoring you're proposing to carry out is going to be lawful under the GDPR in the first place.

So to carry out any processing of personal data, you need to have a lawful basis under the GDPR. And really, the most likely one you're going to be looking at for employee monitoring is that the processing is necessary in your legitimate interests as an employer. If you do have legal and regulatory obligations to ensure that staff are acting in certain ways, as we've talked about, you might be able to rely on the lawful basis that applies where there's a legal obligation to process personal data. But that might actually be quite limited in this context.

Siobhan Fitzgerald:

Yeah. And, Emma, when it comes to legitimate interests, is it enough to simply say that the employer clearly has an interest in, for example, monitoring productivity, and that means therefore that the monitoring can take place?

Emma Erskine-Fox:

No, if only it were that simple. And there are a couple of other things that need to be taken into account as well. The monitoring has to be “necessary” for those purposes, which means that there must not be a less intrusive way to do things in order to meet those purposes.

So employers need to be able to demonstrate that they've thought about other ways that they could achieve the relevant purpose and why they've come to the conclusion that the particular monitoring is the least intrusive way that's going to allow them to meet those purposes.

For example, if productivity is the concern, it might be that the first port of call should be requiring line managers to speak to their direct reports and try and encourage an increase in productivity that way, before implementing technology that might make employees feel like they're constantly being watched.

You also have to balance the legitimate interests of the employer against the interests, rights and freedoms of the employee. Employees obviously have a right to privacy, so it's important that any proposed monitoring is a justified interference with those rights. And the more intrusive the monitoring, the higher that bar is going to be.

Siobhan Fitzgerald:

Yes, because obviously intrusive monitoring sounds like it could be quite risky for employers. I mean, do you have any examples of that that you could share with us?

Emma Erskine-Fox:

Yeah, certainly I've heard about software, for example, that takes regular screenshots of what employees are doing, or even that uses webcams on employees' laptops to take photos of employees at regular intervals to check that they're still at their desks.

Siobhan Fitzgerald:

Oh my goodness.

Emma Erskine-Fox:

I know, yeah. That was my reaction as well. Clearly, that's something that's very intrusive indeed. So you'd need a really strong justification to argue successfully that this type of processing outweighed the employee's rights to privacy.

Some software I've heard about also uses keystroke data. So information about your typing, or even your cursor movements, the movements of your mouse. And that type of data could potentially be classed as biometric data because it relates to a behavioural characteristic of the employee in terms of how they use their device. That would make the data special category personal data, which is subject to even greater restrictions on how it can be used. So employers would need to be particularly careful in using that type of data for monitoring purposes.

Siobhan Fitzgerald:

Emma, I'm really interested in this and I'm sure our listeners will be, too. I mean, you can envisage all sorts of arguments about this. It does feel really intrusive – what counts as ‘reasonable thinking’ time before sending an email, or how quickly you should be typing – so it's really quite an interesting topic to think about. Does it mean then that any monitoring should be on a more targeted basis, to deal with particular problems, rather than being implemented on a blanket basis?

Emma Erskine-Fox:

Yeah. I think the key really is proportionality. So any processing of personal data has to be proportionate taking into account the purposes for the processing. So ideally, monitoring should be implemented to tackle a particular identified problem rather than jumping straight into monitoring as a solution where there might not be a problem in the first place.

Coming back to this example of productivity monitoring that we keep talking about, if productivity levels are good across the organisation, there's probably no real reason to be concerned about what employees are doing at home. And if productivity is low just for particular employees, then monitoring may not be the right way to deal with that. And instead, as I said, it should be maybe more about performance management with those particular employees.

Siobhan Fitzgerald:

So you mentioned the Uber case earlier, which is really interesting in terms of using technology to make those key decisions about employees. So I guess that's something that employers would need to think about with using monitoring technologies as well – if they wanted to use results from those technologies in decisions about performance management or disciplinaries, I suppose.

Emma Erskine-Fox:

Yeah, absolutely. So the impact of the monitoring on employees will be really, really key in determining whether the processing of personal data for monitoring purposes is fair, which is another requirement of the GDPR.

Employers will have to be really careful to ensure that if they did make decisions on the basis of the monitoring – for example, to put employees into performance management, not promote people, or in the worst case scenario, potentially to dismiss somebody – the results of the monitoring should not be the only factor and the decision should be made by a human taking into account other factors as well.

It's also really important for employers to make sure that they give employees the chance to discuss the findings from the monitoring and to address any wider issues – you know, look at other reasons why employees might not be performing as they should be – before any key decisions are made.

Siobhan Fitzgerald:

Yes. And that would definitely be part of a fair employment process as well, which obviously many of our HR listeners will be very familiar with. So it sounds like there's an awful lot for employers to think about here.

Emma Erskine-Fox:

Yes, definitely. It can really get quite complicated when you start thinking through all of the data protection implications. And I think the absolute best thing that employers can do if they are considering implementing any sort of employee monitoring is to carry out a thorough data protection impact assessment or DPIA. And that's essentially a risk assessment that enables employers to consider all of the different privacy risks and obligations that arise and to look at whether those risks can be mitigated and whether the GDPR requirements can be met.

This can help to really flush out all of those issues that we've discussed and more, and it's a really important part of demonstrating accountability and compliance with the GDPR.

DPIAs are also mandatory under the GDPR where processing is high risk. And I'd argue that most, if not all, employee monitoring would be considered to be high risk – particularly if you are looking at using AI technologies as part of that, for example. So a failure to conduct a DPIA could arguably be a breach of the GDPR in itself.

Siobhan Fitzgerald:

We mentioned earlier that employee monitoring isn't always about using software to keep track of employees' activities, and that actually the concepts of employee monitoring and surveillance can extend further than this to encompass retaining excessive records about employees.

So, Emma, you'll be very familiar with this recent case involving H&M, where the Hamburg data protection regulator issued a fine of over €35m, which equates to about £32m. Now, the regulator found that H&M had been recording extensive details about its employees for many years, including information gathered through so-called welcome back talks. So these were talks conducted after sick leave or holiday between the employee and their manager.

So H&M recorded information provided by employees in those talks, including information about illnesses. And it was also revealed that supervisors were recording details of employees' private lives that they had obtained through those normal conversations with employees, the water cooler chat. Notes of these talks and conversations were stored permanently on a network drive and could be accessed by other managers, and that information was then used to create profiles on employees to help H&M make decisions about those employees. So quite an interesting situation, Emma.

Emma Erskine-Fox:

Yeah, very, very interesting indeed. And the regulator did consider this to be a form of employee monitoring, as you said. And the fine is actually the second largest fine to be issued under the GDPR so far. So it shows how seriously the regulators are likely to take these types of monitoring issues.

A really interesting point about the case is around what employees' reasonable expectations are when they're giving their data over to an employer. For things like monitoring productivity, which we keep coming back to, you can see how an employer could raise an argument that if an employee is contracted to work certain hours and is paid for those hours, the employee might reasonably expect that there would be some level of checking whether they are doing this.

In the H&M scenario, information being input into these profiles included information given just in those day-to-day conversations with line managers. You referred to it, Siobhan, as the water cooler chats, when an employee might bump into their manager and just have a chat with them about what happens to be on their mind that day. And that information included things like family issues that those employees were facing, those employees' religions, etc.

And employees certainly wouldn't expect that that type of information that was disclosed just in a friendly chat with a colleague might then be used as part of a profile to make decisions about them.

Now, as well as the fine, H&M has confirmed that it's going to pay compensation to affected employees, of whom there are several hundred. So even if that's only a few hundred euros per employee, those costs will quickly mount up on top of the fine. And obviously there are reputational aspects to think about as well in terms of the risks.

Siobhan Fitzgerald:

Yes, of course. And I suppose one of the things that this case demonstrates is how good intentions can sometimes have implications that you wouldn't normally consider, because presumably they were doing this to try and keep information about employees to be able to help and support them and maybe help them settle back into work or deal with any challenges they might be facing.

So a noble objective, but they just went about it in the wrong way. And recording data from those talks, even with a view to later helping the employee, I mean, it still does feel pretty intrusive, going back to that word we were talking about earlier.

Emma Erskine-Fox:

Yeah, absolutely. And this ties into that interesting point that we touched on earlier about questions that we've seen from employers in the context of what they can do with information collected for diversity monitoring purposes and the extent to which they can use that data to make decisions about people.

Obviously, there have been a number of events and campaigns throughout 2020 that have very rightly brought issues around racism and unconscious or perhaps conscious bias really to the forefront of employers' minds. And it's absolutely right that employers are looking at what they can do to increase diversity in their organisations.

Of course, a lot of employers will already hold data about employees' ethnicity that they would have obtained as part of diversity monitoring at the start of those employees' employment. And I know that lots of clients are starting to look at how they can use that information to help inform the measures that they can put in place to increase diversity.

Siobhan Fitzgerald:

Again, that's a very important and well-intentioned goal. And generally speaking, the diversity monitoring is carried out on a more aggregated level to make those more high level general decisions about the employer's strategy and approach to diversity. So this is what the data will have been collected for presumably, and therefore what employees should have been told about when the data was first collected.

Emma Erskine-Fox:

Yeah, absolutely. So as you say, generally speaking, that is exactly what normally happens with diversity monitoring. I have had a couple of clients recently though who have wondered about whether they can use that data in its personally identifiable form to make decisions about specific individuals. So for example, for line managers to have this information available so they can use this as part of appraisals and decision-making regarding particular employees.

So ethnicity data is a special category of data under the GDPR, which means that a specific additional condition needs to be fulfilled to justify processing that data. There is a condition that lots of employers rely on that allows processing this type of information for equal opportunities monitoring, which would capture the normal diversity monitoring activities that you talked about, Siobhan. However, that condition specifically states that it can't be used to justify processing of that sort of ethnicity data or indeed data about other protected characteristics to take decisions or measures about particular individuals.

So even if that decision is a positive decision – for example, a decision to promote somebody – the condition couldn't be relied upon, and it's likely that the employer would have to obtain consent, which is very difficult to do in an employment context.

Siobhan Fitzgerald:

And it must be quite a hard line to draw as well though, because even decisions made at an organisational level will obviously have an impact or could have an impact on individual employees.

Emma Erskine-Fox:

Yeah, it is definitely. And it's not clear-cut at what point a decision crosses over into being a decision about a particular individual. And it's not an area that's really been tested yet. I think in my mind the difference is between promoting somebody of a particular ethnicity, for example, because they're very good, they're very qualified, and the employer has factored into that decision that they are actively trying to include more diversity at a senior level. And then on the other hand, looking at an employee's ethnicity only and saying, "We're going to promote this person because of their ethnicity."

Siobhan Fitzgerald:

Yes. And, Emma, there's a lot of employment law around this as well, so our listeners have really got to take some care here.

So there are specific provisions about positive action in the Equality Act. So positive action relates firstly to being able to take proportionate action to reduce disadvantage to and increase participation for underrepresented groups. So for example, perhaps advertising job vacancies in a magazine that might be widely read by a particular ethnic group.

And secondly, there's what's called the tie-breaker provision. So let's say for example you have a number of people applying for a role and then there's a stalemate between two candidates. So they've done equally well in the process and their scores are exactly the same. Then employers are able to give preference to a person from an underrepresented group. But what you can't do is automatically grant a preference to that person in the process – it really only applies when you're in that stalemate position.

So I think this whole area is just an area where the employers need to take quite a bit of care.

Emma Erskine-Fox:

Yeah. And from a data protection perspective, that condition that I talked about doesn't actually distinguish between making positive and negative decisions. If diversity data is used to make any decisions about particular individuals, then that condition won't apply.

You could argue, for example, that the employee is less likely to complain about a positive decision being made about them. They're much less likely to complain if they have been promoted than if they haven't been promoted. But you could still see how that could cause some distress or anxiety. So for example, employees might feel like they've been patronised if they think that they haven't been promoted on their merits, but instead on the basis of ethnicity, religion, sexual orientation, for example.

Transparency also comes into play here. Employees are unlikely to have been told that the information that they're giving about those protected characteristics would be used for these purposes. And as with the H&M fine, if that information is forming part of profiles on employees that are used for specific decisions, there's certainly a risk of non-compliance there.

So I think the key message from both of us Siobhan really is that no matter how good your intentions are, you still need to consider compliance with both data protection legislation and employment laws as well.

Siobhan Fitzgerald:

Now, we've talked a lot about remote monitoring, and of course, that's very relevant at the moment when most people are working from home. But not everyone's working from home, and for some people, this will of course just be a temporary or part-time arrangement while we're in the pandemic.

So let's look at employee monitoring in the workplace as well, bearing in mind that, of course, we might not be returning to exactly the same workplace in the future. It might be a new hub or a shared workplace for some.

It's not uncommon for employers to implement CCTV monitoring of workspaces, for example, to protect against the risk of theft by employees. There are of course data protection considerations to think about here. Emma, could you give us a brief overview of these?

Emma Erskine-Fox:

Yeah, sure. So all of the same principles that we've already talked about apply to monitoring in physical buildings as well. So a DPIA or data protection impact assessment, something we mentioned earlier, should be conducted if CCTV is being considered. The CCTV should be proportionate and you should have a clear justification for using CCTV and for why less intrusive means are not appropriate.

Again, transparency is really important. So there should be clear signage around and employees should be made aware of where they can obtain further information about the processing of CCTV footage if they want to.

One of the interesting things we've advised on previously is the use of covert CCTV, so CCTV where you don't tell people that you're using it. And that might be justified in very, very exceptional circumstances, for a limited time and in respect of a limited area of a building.

For example, I've previously assisted a client with a DPIA for covert CCTV where there had been a number of thefts from a particular area of the building, always in the evening, and all other ways of trying to identify the thief had been exhausted unsuccessfully. So the DPIA helped that employer to determine that the covert CCTV could be put in place in that particular area during the evenings until the thief was identified.

Siobhan Fitzgerald:

And, Emma, as we move into new working models such as shared workspaces, CCTV could be presumably controlled by someone other than the employer – for example, a landlord or another occupant of the building. Or the employer might control CCTV footage for a building in which employees of other businesses might be working. So what particular issues does that raise?

Emma Erskine-Fox:

Yeah, it's a really good point, Siobhan. The entity that controls the CCTV and that has decided to implement it will be primarily responsible for GDPR compliance when using that CCTV system. And one of the key things to think about will be ensuring it's clear to employees whose CCTV system it is. If an employee wants to access footage of themselves through a data subject access request, which they are entitled to do, it will be really important that the employee knows who they need to go to with that request.

There might also be a need to share footage between different entities in some circumstances. So for example, if CCTV is controlled by a landlord but the employer needs to access footage to investigate allegations of a physical assault by one employee on another.

If there are various different entities whose employees could be caught by CCTV in that building, it might not be a bad idea to consider entering into a data sharing agreement between those entities to govern how and in what circumstances different organisations can request footage from each other and how that will be shared.

Siobhan Fitzgerald:

Now, we've also seen an increase in facial recognition being built into CCTV systems as well, which I presume comes with all sorts of other data protection risks.

Emma Erskine-Fox:

Yeah, definitely. I mean, I'd query from the start I think whether the facial recognition would be needed at all in an employment context. Usually, facial recognition technology is used to cross-reference against a database of individuals to identify a previously unknown person. For employers, it seems much more likely that the employer would be able to identify an employee who appears in footage because they'd be known to the employer already.

Generally speaking, facial recognition, as you say, comes with a number of specific risks. You are looking at biometric data there, which as I said is special category data. So I would recommend thinking really carefully about whether facial recognition technology is going to be justified in this kind of context.

Siobhan Fitzgerald:

Now, one final thing I just wanted to touch on before we take some listeners' questions is something which, following on from facial recognition, feels even more space age, but has definitely been talked about in the past – and that's micro-chipping employees. And I've heard that on the radio in the last few days as well.

So there've been some companies, primarily in the US but a small number in the UK as well, that have considered or even implemented microchips for employees so they can act as security ID and help with things like access controls to buildings and IT systems. I mean, does that give you absolute nightmares as a data privacy lawyer?

Emma Erskine-Fox:

A little bit. Yeah, certainly one that needs to be thought about very carefully. I mean, it feels to me like employers might struggle to get over the hurdle of demonstrating that there aren't less intrusive means to achieve those purposes, given that access to buildings and systems has been controlled through ID cards, codes, multifactor authentication etc. for years.

That said, some employees might actually find that microchips make things more convenient for them, so they might be very happy to be chipped. There are two key points that really come to mind here.

Firstly, there could be an argument that employees can't fully exercise choice in an employer-employee relationship because of the imbalance of power. So I could definitely see concerns about employees being coerced into having these chips implanted.

And secondly, what's the chip actually doing? If it is just for access control purposes and you've given employees a really genuine choice about whether they use these microchips, that's one thing. But the risk increases significantly if microchips are being used to track and monitor employees in other ways.

Siobhan Fitzgerald:

As always, we'll end with some listeners' questions. Emma, it's good that you're here as we've had a question about data subject access requests, which we obviously talked about at length in episode five and mentioned briefly at the start of this one. So the question is, "Can an employee withdraw a DSAR, a data subject access request?"

Emma Erskine-Fox:

That's a really good question and one we come across all the time. We've had a number of cases where this has arisen.

So DSARs can be withdrawn and often are. If the DSAR is made in connection with potential or actual proceedings and those proceedings then settle, it's really common for the settlement agreement to include a term that the employee withdraws the DSAR. However, because subject access is a statutory right, if the employee goes on to make another DSAR in the future, the employer is unlikely to be able to rely solely on the settlement agreement to refuse to respond to that DSAR.

Siobhan Fitzgerald:

And we've had one more very topical query about Brexit. "As the end of the transition period approaches, what should employers with operations and employees across the UK and Europe be thinking about on the data protection front?"

Emma Erskine-Fox:

That's a very good question again. And I think the full answer is probably longer than the time we've got left for it. I should say as well, we're recording this on 19 November. By the time this goes out, we may well have some more clarity on the outcome of Brexit negotiations seeing as it’s very much a moving feast obviously.

There are two key things I'd flag for multinational employers now. Firstly, you need to look at your cross border data flows between the UK and the EEA. You're unlikely to have to do anything further about data being transferred from the UK to the EEA. But if data is coming back and the European Commission doesn't decide that the UK's data protection regime is adequate to protect personal data – which may well be the case if we end up with a no deal – you'll need to put in place a mechanism to ensure that transfers from the EEA to the UK continue to comply with the law.

Secondly, the GDPR requires organisations to have a lead supervisory authority or a lead regulator in the EU if they do engage in cross-border data transfers involving EU data. And they also have to appoint a representative in the EU.

So organisations that have previously appointed the ICO in the UK as their lead authority will need to identify the right authority in the EU to be their EU lead authority. And those that already have an EU authority as their lead authority will also need to register with the ICO in respect of their UK operations. So certainly a few complexities to think about there.

Siobhan Fitzgerald:

Yes, absolutely. Thank you so much, Emma, for such an interesting conversation, and thank you for listening. I hope you found it useful.

If there's a topic you'd like us to cover in future episodes, please email us at emplawpodcast@tlt.com. We'd love to hear from you. And also send us any general questions to tag onto the next episode.

If you enjoyed the show, please rate us and leave us a review so that others can find us and take a listen. Also, don't forget to subscribe so that you know when we publish our future episodes.

You can also find us on Twitter @TLT_Employment, where we use the hashtag #TLTemploymentpodcast. And Emma and I are also on LinkedIn so you can connect with us there. See you next time.

 

The information in this podcast is for general guidance only and represents our understanding of the relevant law and practice at the time of recording. We recommend you seek specific advice for specific cases. Please visit our website for terms and conditions.

 
