Every day, vast amounts of data are gathered and processed around the world by individuals, companies and governments. This data is often gathered in order to make decisions: for example, to offer an individual a personal loan because they earn above the income threshold required to qualify, or to target people in a particular geographical area with discounts on new motor vehicles because people in that area have been found to be more inclined to spend money on luxury goods.
The decisions illustrated above are, for the most part, made by employees at companies that have obtained the necessary data in the form of personal information. A person's intellect is therefore applied in arriving at the decision, and that intellect is shaped by, for example, personal biases, company policy or education. As machine learning and artificial intelligence become commonplace in internal data processing systems, the application of human intellect in decision making will decline. It will be replaced by systems that make decisions autonomously, based on certain designed parameters and without the need for human intervention, thereby automating the decision-making process.
From a privacy and data protection point of view, decisions that in some cases determine the economic, personal, legal or financial future of an individual could be considered problematic.
What does automated decision making mean in law?
The current South African legislative framework makes provision for automated decision making in Section 71(1) of the Protection of Personal Information Act 4 of 2013 (POPIA), which states that:
“a data subject may not be subject to a decision which results in legal consequences for him, her or it, or which affects him, her or it to a substantial degree, which is based solely on the basis of the automated processing of personal information intended to provide a profile of such person including his or her performance at work, or his, her or its credit worthiness, reliability, location, health, personal preferences or conduct”.
However, POPIA provides for a number of conditions under which Section 71(1) will not apply. For the purposes of this article, the scope will be restricted to Section 71(2)(a)(ii), which makes allowance for automated decision making where “appropriate measures have been taken to protect the data subject’s legitimate interest”.
Section 71(3)(b) further defines the “appropriate measures” referred to in Section 71(2)(a)(ii) as requiring “a responsible party to provide a data subject with sufficient information about the underlying logic of the automated processing of the information relating to him or her to enable him or her to make representations…” in terms of Section 71(3)(a) of POPIA.
Unpacking “appropriate measures” and understanding “underlying logic”
Considering the above, picture a company that has gathered extensive amounts of personal information and wishes to process this data by automated means in order to run efficient internal business systems. Under POPIA, one way the company could process this information is to “provide the data subject with sufficient information about the underlying logic of the automated processing of the information”. The question that immediately arises is what this means, especially in practice.
Unfortunately, the Information Regulator has yet to issue a Code of Conduct that could be referenced in the above scenario. We may therefore look to existing foreign legislation or legal frameworks to understand the scope of the requirement set forth in Section 71(3)(b) of POPIA. The requirement to provide the data subject with sufficient information about the underlying logic of automated processing is mirrored in a number of existing legal systems, most recently in the General Data Protection Regulation (GDPR); however, the exact meaning and application of this requirement has been the subject of many academic, technical and legal debates.
Specifically, Articles 13 and 22 of the GDPR mirror the relevant principles in POPIA.
Often referred to as the “right to explanation”, both the GDPR and POPIA require that the data subject be afforded a level of insight into the process or “logic” (logic here in the computer science sense) behind the automated decision making and how the decision was arrived at.
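To make the idea concrete, the sketch below shows one way a responsible party might surface the “underlying logic” of a simple automated credit decision to a data subject. It is purely illustrative: the thresholds, field names and rules are hypothetical assumptions and are not drawn from POPIA, the GDPR or any real credit policy.

```python
# Hypothetical sketch: a rule-based loan decision that records which
# rules produced the outcome, so the data subject can be given
# sufficient information about the underlying logic to make
# representations. All thresholds and field names are illustrative.

MIN_INCOME = 15_000  # assumed monthly income threshold (illustrative)
MIN_SCORE = 600      # assumed credit score threshold (illustrative)

def decide_loan(applicant: dict) -> dict:
    """Return the decision together with the rules that produced it."""
    reasons = []
    if applicant["monthly_income"] < MIN_INCOME:
        reasons.append(
            f"monthly income {applicant['monthly_income']} is below the "
            f"required threshold of {MIN_INCOME}"
        )
    if applicant["credit_score"] < MIN_SCORE:
        reasons.append(
            f"credit score {applicant['credit_score']} is below the "
            f"required minimum of {MIN_SCORE}"
        )
    return {
        "approved": not reasons,
        # The explanation is generated from the same rules that made the
        # decision, so it reflects the actual underlying logic rather
        # than an after-the-fact justification.
        "explanation": reasons or ["all eligibility thresholds were met"],
    }

result = decide_loan({"monthly_income": 12_000, "credit_score": 650})
print(result["approved"])      # False
print(result["explanation"])   # lists the income rule that was failed
```

In a design like this, the explanation of the decision is a by-product of the decision rules themselves, which is one practical reading of providing “sufficient information about the underlying logic”.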
In the next part of this series, the scope of this “right to explanation” will be unpacked further and the practical and legal implications will be explored.
Disclaimer: the information contained in this Insight is for awareness and discussion purposes only and does not constitute legal advice. For any enquiries, please get in touch at firstname.lastname@example.org