Crash, Bang, Wallop! What happens when Artificial Intelligence meets GDPR?

As a technologist, I am both excited and appalled by the developments in AI, and it seems from various surveys (some linked below) that I am not alone. My greatest wish is that we can harness its power for good while dampening its potential for misuse. It is early days yet – let’s hope this wish comes true!

AI and GDPR

Having completed a handful of DPIAs that address this topic du jour, it’s a good time to talk about the common themes that are emerging.

One observation worth making here is that the GDPR already has plenty of teeth when it comes to AI, but the AI Act will introduce some key differences.

The GDPR places much of the burden on controllers, which is at the same time a strength and a weakness. It’s a strength because there is a clearly defined place where the buck stops. It’s a weakness because we often see small controller organisations out of their depth dealing with larger software application development houses – without the skills, negotiating power, technical capability or the time to really put them through their paces.

This problem is turbocharged when small organisations, with no in-house technical capabilities to speak of, start using powerful technologies like AI. We used to call the Internet the Wild West – on that scale, the AINet is the Milky Way. It does feel like another great gold rush for modern times, this time with a bit of unlimited space exploration thrown in.

The AI Act addresses this by placing more requirements on developers of AI systems and not just on the controller companies who use the AI systems. This is a good thing because those companies are best placed to understand and address the potential harms of this powerful technology that is being unleashed. Independent oversight will in theory keep them honest.

But back to our DPIAs. We have spotted a few interesting common themes in our work so far.


When AI is not actually AI

Some suppliers of software solutions are taking on a whole lot of unnecessary pain by claiming to develop AI solutions when in reality there is no AI involved at all.

It’s just some common-or-garden automated processing – still often high risk, still requiring a DPIA and still capable of triggering Article 22, but it’s not AI. This “bandwagon” effect is costing controllers time and effort, as they have to dig deeper when carrying out their DPIAs.


Training your AI engine – lawfully, transparently and fairly

When a controller’s customer, employee or prospective employee personal data is being used to train the AI engine, there is a need to establish a lawful basis for this processing, and that can be very challenging.

Equally, controllers need to satisfy themselves that where other personal data is used to train the AI engine, it was obtained and used lawfully. That requires a whole new level of supplier due diligence.

Finally, if the data generated as a result of customers’ or employees’ use of the tool includes personal data which is used for ongoing training of the AI engine – and it nearly always does – does the controller have a lawful basis for that processing? What about situations where personal data generated by one organisation’s use of the AI engine is used for ongoing training of an AI engine used by other organisations?

Apart from getting the appropriate boundaries in place, the AI training activity will need careful ongoing oversight by the controller. That oversight will need to be formally scheduled and documented to ensure the lawful basis stands up over time, which in turn will require quite a bit of ongoing supplier due diligence.

The difficulty here is compounded by a lack of Data Protection by Design in approaches to AI solution development. We see fundamental misunderstandings among developers, who believe that stripping datasets of so-called “PII” (please don’t get me started on “PII”) is an anonymisation technique, and among organisations, who don’t realise that they are dealing with, at best, pseudonymised data – which still legally constitutes personal data under the GDPR.
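For the more technically minded, here is a minimal Python sketch (the field names are entirely hypothetical) of why stripping direct identifiers falls short of anonymisation: as long as a stable key can link records back to individuals, the data is merely pseudonymised.

    # A minimal sketch (hypothetical field names) of why stripping direct
    # identifiers is pseudonymisation, not anonymisation.

    records = [
        {"user_id": "u-1042", "name": "Jane Murphy",
         "email": "jane@example.com", "login_hour": 9, "department": "Finance"},
    ]

    # "Stripping the PII": drop the obvious direct identifiers...
    stripped = [
        {k: v for k, v in record.items() if k not in {"name", "email"}}
        for record in records
    ]

    # ...but anyone holding a user_id-to-person mapping (the controller,
    # the supplier, an insider) can trivially re-link each record:
    directory = {"u-1042": "Jane Murphy"}
    for record in stripped:
        print(directory[record["user_id"]],
              record["login_hour"], record["department"])

    # Under the GDPR, this "stripped" dataset is pseudonymised data,
    # and pseudonymised data is still personal data.

Genuine anonymisation would require breaking that link irreversibly, and being able to defend that claim against re-identification techniques – a far higher bar.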

How do you tell customers, employees and prospective employees that their data is being used to train an AI engine? Often that data was originally collected for a different (and not compatible) purpose.

This concern is emerging in the EU and the US alike, with the Federal Trade Commission citing deceptive practices: “It may be unfair or deceptive for a company to adopt more permissive data practices—for example, to start sharing consumers’ data with third parties or using that data for artificial intelligence (AI) training—and only inform consumers of this change through a surreptitious, retroactive amendment to its terms of service or privacy policy.”

The requirement to communicate meaningful information in clear and plain language can be a challenge when explaining how an AI engine uses data, especially given the public perception that AI poses significant risks to us and our society. How does an organisation using an AI engine that, quite often, it barely understands ensure that it can be transparent about this use and meet its Articles 12-14 and Article 22 requirements?


Using an AI Solution

The use of an AI-based tool will regularly trigger Article 22 of the GDPR, which relates to automated individual decision-making. This brings tighter requirements onto the controller regarding the lawful basis and transparency of the activity.

As with training the AI, establishing the lawful basis to use the AI functionality can be challenging. This regularly comes down to the necessity test: is the use of the AI functionality or output really necessary for the purpose? There can be a significant burden on the controller to demonstrate the necessity of the AI activity on an ongoing basis.

The controller is bound to ‘seek the views of data subjects or their representatives’, ‘where appropriate’, in carrying out the DPIA (per Article 35(9) GDPR). We often see resistance to conducting end user / data subject consultations.

Sometimes these exercises can be expensive and time-consuming, and the team is under pressure to get the solution implemented. Sometimes there is a reluctance to listen to the opinions of the users who are going to be impacted, for fear that it could derail the project.

Seeking data subject views is even more challenging where AI is involved. It is genuinely difficult to explain how the AI solution works and what the safeguards are, because these are complex solutions – and while they won’t admit it, developers often don’t really know themselves. People can simply hear the word ‘AI’ and be influenced by something they have read about AI taking over our jobs, even if it has nothing to do with the solution being developed.

Public consultation is tricky at the best of times; with AI solutions it’s much more difficult.


Our Conclusion

Embedding AI capability into your recruitment process or your building entry system might sound like a great idea – and it will be sold to you as a great idea by the solution developers, who aren’t going to suffer the wrath of GDPR fines and sanctions when things go wrong. But you definitely need to think twice before you take that “one small step”.

These DPIA exercises can be very challenging – and more challenging still are the risk mitigation activities that the controller will need to undertake to ensure they effectively manage their use of AI software that involves processing personal data.

The GDPR doesn’t weigh in on the ethics, competition or intellectual property debates, but if your AI system is trained on personal data, or processes or generates personal data as a result of its use, the GDPR has a lot to say and you should take heed!

While these new and shiny solutions are coming in thick and fast, many are being developed in a hurry and without a legislative framework governing their creation. The AI Act is imminent and will require that manners are put on these solutions, and some may not pass muster when it comes to the retrofit. Where does that leave the customer whose business is relying on the solution? Sometimes it’s worth giving things a little time to see if the shine lasts.


Some further reading

  • The UK-based Alan Turing Institute and Ada Lovelace Institute paired up in 2022 – just before OpenAI introduced ChatGPT – to investigate how people feel about AI, and their report makes for interesting reading.
  • The US-based Pew Research Center is always a reliable source of information, and there are interesting reads if you want to start at this page and dive in further.
  • In Ireland, the UCD Centre for Digital Policy has a remit to understand knowledge of, attitudes to and perceptions of personal data sharing, data rights, and artificial intelligence. In 2022, the Centre published the report of a survey into the future of personal data use, AI, and advanced technologies.
