Recital 26 of the General Data Protection Regulation (GDPR) states that data protection principles do not apply to anonymous information, that is, information which does not relate to an identified or identifiable natural person, or personal data rendered anonymous in such a manner that the data subject is not or no longer identifiable. Conversely, the same recital confirms that data protection principles do apply to pseudonymised information, since pseudonymised personal data can be attributed to a natural person by the use of additional information. Accordingly, pseudonymised data is considered information on an identifiable person. These technical measures are relevant for both data controllers and processors: adequate measures that provide a certain degree of anonymisation or pseudonymisation are considered paramount in relation to the accountability principle. The two processes might be described as follows: anonymisation irreversibly severs the link between the data and the data subject, so that re-identification is no longer possible; pseudonymisation replaces direct identifiers with artificial ones, so that the data can no longer be attributed to a specific data subject without the use of separately kept additional information (Article 4(5) GDPR).
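A minimal sketch of keyed pseudonymisation may help illustrate the distinction. Here a direct identifier is replaced by an HMAC-SHA256 token; the secret key plays the role of the "additional information" of Recital 26, since whoever holds it can re-link tokens to subjects, while the token alone identifies no one. All names and values are illustrative assumptions, not a reference implementation:

```python
import hmac
import hashlib

# Illustrative secret; in practice this key must be stored separately
# from the pseudonymised dataset and protected (Article 4(5) GDPR).
SECRET_KEY = b"keep-me-separate-and-secure"

def pseudonymise(identifier: str, key: bytes = SECRET_KEY) -> str:
    """Return a stable pseudonym (hex token) for a direct identifier."""
    return hmac.new(key, identifier.encode("utf-8"), hashlib.sha256).hexdigest()

# A record before and after pseudonymisation: the direct identifiers
# are replaced, the non-identifying attribute is kept as-is.
record = {"name": "Alice Example", "email": "alice@example.com", "visits": 12}
pseudonymised = {
    "subject": pseudonymise(record["email"]),
    "visits": record["visits"],
}
```

Because the same identifier always maps to the same token, records belonging to one subject can still be linked together; this is precisely why pseudonymised data remains personal data under the GDPR.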
However, not everyone believes that anonymisation is actually possible; the illusory nature of anonymisation procedures has been argued several times. In particular, it has been claimed that anonymisation is an illusion because of the many datasets available for cross-referencing: any dataset combined with a non-trivial piece of information about a data subject is likely to match identifiable public records. It has further been argued that personal data can be either useful or perfectly anonymous, but never both, and that re-identification science is dismantling privacy policies by undermining the trust we have placed in anonymisation. A group of MIT scientists has demonstrated how easy and fast the de-anonymisation process can be.

Nevertheless, in this scenario we may still find undertakings and databases claiming full anonymisation of the stored information or of their technical services. Such datasets are de-identified rather than anonymised: names and other direct identifiers are removed, while the rest of the data is left untouched. De-identification therefore does not ensure anonymised data, given the vast number of sources on the Internet that still contain identifying information.

In this context, differential privacy may constitute a crucial element of anonymisation services. Differential privacy makes it possible to discover usage patterns across many data subjects without compromising individual privacy. Roughly, differential privacy is a mathematical constraint on the algorithms used to publish aggregate information about a database, limiting the privacy impact on the data subjects whose information is included in that database.

De-identification nevertheless remains a useful data minimisation technique, and certain processing needs may call for de-identification rather than anonymisation. For instance, de-identification is useful in long-term datasets where one needs to keep track of where – from what user and/or device – certain data come.
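The differential privacy constraint described above can be sketched with the Laplace mechanism, one standard way of satisfying it for numeric queries: instead of publishing the exact answer, the curator adds noise calibrated to the query's sensitivity and a privacy parameter epsilon. The function names and parameter values below are assumptions for this illustration, not a reference implementation:

```python
import math
import random

def laplace_noise(scale: float) -> float:
    """Sample from a Laplace(0, scale) distribution via inverse transform."""
    u = random.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def noisy_count(true_count: int, epsilon: float) -> float:
    """Release a count under the Laplace mechanism.

    A counting query has sensitivity 1 (one person changes the count by
    at most 1), so noise with scale 1/epsilon suffices: smaller epsilon
    means stronger privacy and noisier answers.
    """
    return true_count + laplace_noise(1.0 / epsilon)

# Publishing an aggregate statistic rather than the raw records:
random.seed(42)
release = noisy_count(1000, epsilon=0.5)
```

Because every individual's contribution is masked by the noise, an observer cannot confidently tell from the released figure whether any particular data subject's record is in the database, which is exactly the guarantee differential privacy formalises.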