Dr. Stephan Dreyer and Prof. Dr. Wolfgang Schulz ask the following question in their contribution to the Völkerrechtsblog [international law blog]: To what extent can the new General Data Protection Regulation (GDPR) protect the interests of individuals, groups and society as a whole that appear to be threatened by algorithmic decision-making systems?
In algorithmic decision-making systems (ADM systems), machines evaluate and assess human beings and, on this basis, make a decision, provide a forecast or issue a recommendation for action. Thus, it is not only the data processing as such, but above all the automated decision resulting from it that poses risks for the user. The current international legal framework addresses such risks by guaranteeing privacy, data protection, personality rights and autonomy. However, ADM systems also affect group-related and societal interests such as fairness, non-discrimination, social participation and pluralism, which this individual-centred framework does not fully cover. In order to attain such supraindividual goals, experts have suggested adopting measures that make ADM procedures transparent, render individual decisions explainable and revisable, and make the systems themselves verifiable and rectifiable. Furthermore, ensuring the diversity of ADM systems can contribute to safeguarding these interests.
Dreyer, S.; Schulz, W. (2019): “The GDPR and Algorithmic Decision-Making – Safeguarding Individual Rights, but Forgetting Society”, Völkerrechtsblog, https://voelkerrechtsblog.org/the-gdpr-and-algorithmic-decision-making/, 3 June 2019.
Against this background, and in light of the growing use of ADM systems, we need to ask ourselves an important question: To what extent can the EU General Data Protection Regulation (GDPR) support such measures and protect the interests of individuals, groups and society as a whole that seem threatened by algorithmic systems?
The entire article can be read at the link in the citation above.