EU Data Protection Laws Are Not Fit For Purpose: They Undermine the Very Autonomy of the Individuals They Set Out to Protect
The European Union is supposed to have the strongest data protection laws in the world. So why do privacy violations continue to make the headlines? I believe that the lack of material privacy compliance is not due to a lack of enforcement but to a fundamental flaw in our European data protection laws. Our laws are supposed to ensure people’s autonomy by providing choices about how their data is collected and used. In a world driven by artificial intelligence, however, we can no longer understand what is happening to our data, and the concept of free choice is undermined by the very technology our laws aim to protect us against. The underlying logic of data-processing operations and the purposes for which they are used have become so complex that they can only be described in intricate privacy policies that are simply not comprehensible to the average citizen. Further, the reality is that organizations meet information and consent requirements in ways that discourage individuals from specifying their true preferences, and individuals therefore often simply feel forced to click “OK” to obtain access to services.
Our data protection laws have resulted in what Prof. Corien Prins and I have named mechanical proceduralism (read here), whereby organizations go through the mechanics of notice and consent without any reflection on whether the relevant use of data is legitimate in the first place. In other words, the current preoccupation with what is legal distracts us from asking what is legitimate to do with data. We even see this reflected in the highest EU court having to decide whether a pre-ticked box constitutes consent (surprise: it does not). Privacy legislation needs to regain its role of determining what is and is not permissible. Instead of a legal system based on consent, we need to rethink the social contract for our digital society by having the difficult discussion about where the red lines for data use should lie, rather than passing the responsibility for a fair digital society on to individuals, who are asked to make choices they cannot fully comprehend.
This means that privacy protection is, for now, best served by adopting the legitimate interest ground as the only legal basis for data processing (read more on these proposals here (2018) and here (2016)). Any processing of data is contextual, and my conclusion, after years as a practicing lawyer, is that any attempt to regulate specific processing activities upfront will be counterproductive, because the issues at hand will either be over- or under-regulated. The EDPB does an admirable job of trying to mitigate such issues, but the end result is that the GDPR becomes unnecessarily complicated for businesses to apply, which ultimately undermines its effectiveness and legitimacy. Here are two examples to clarify my point.
The debate about which categories of data should qualify as special has become irrelevant. Practice shows that the same data may be sensitive in one context but not in another; rather, it is the use of data that may be sensitive. As a consequence, the existing regime, which is based on the processing of a pre-defined set of special categories of data, does not achieve the intended effect. It sometimes over- and sometimes under-regulates (IAPP Op-ed on special categories of data and IAPP Op-ed on GDPR drafting flaws). These issues are already well known to the EDPB (and its predecessor, the WP29) and have been addressed in their opinions by introducing additional requirements. The specific legal grounds of Article 9 GDPR, for example, do not require a contextual balancing of interests, which would include an assessment of the measures taken by the data controller to mitigate any adverse effects on the privacy of the individuals concerned. In this respect, the legitimate interest ground, contrary to what is often thought, actually provides greater privacy protection for individuals (see also WP29 Opinion 06/2014, pp. 9–10). The WP29 therefore attempted to overcome this problem by requiring that the protection of such data under Article 9 GDPR should not be less than if the processing had been based on Article 6 GDPR, and by subsequently applying the legitimate interest test on top of the regime for special categories of data (Opinion 06/2014, pp. 15–16). This raises the question: why not apply the regular grounds of Article 6 in the first place (which work perfectly well for other sensitive data such as genetic data, biometric data, location data, and communication data)?
Article 22 GDPR applies only to solely automated decision-making and therefore does not apply as long as the output of an algorithm is subject to meaningful human review (see WP29 Opinion on Automated Decision-making and Profiling). In practice, however, we see many examples of AI-assisted decision-making, whereby the algorithm’s output is indeed reviewed by a human, yet the output itself may well be wrong, unexplainable, or biased, as a result of which the subsequent human review may be flawed as well. The ICO recently issued draft guidelines on explaining AI, essentially applying the same requirements to AI-assisted decision-making, not on the basis of Article 22 GDPR but on the basis of the general GDPR principles of fairness, transparency, and accountability. I wholeheartedly agree with this position, but it again raises the question of whether we need the narrowly written Article 22 GDPR in the first place. It puts organizations very much on the wrong track when deploying algorithms, which will lead to non-compliance and potentially unnecessary litigation. My proposal is to delete Article 22 GDPR. The EDPB can then provide guidance on how to apply the legitimate interest test and the general principles of the GDPR to automated profiling. If this is too radical an approach, I recommend at least reversing the scope of Article 22 GDPR: instead of applying to “automated decision-making, including profiling,” the provision should apply to “automated profiling, including AI or AI-assisted decision-making.”
This article originally appeared in IAPP.