
Igor Gontarz: Judicial Review of Automated Administrative Decision-making - The Role of Administrative Courts in the Evaluation of Unlawful Regimes* (ELTE Law, 2023/1, pp. 151-162)

https://doi.org/10.54148/ELTELJ.2023.1.151

Abstract

Automated administrative decision-making in Europe draws attention to the legal issues raised by its scrutiny. The algorithm may not be an exact translation of the legal norms it is supposed to enforce; moreover, the logic behind opaque systems is inaccessible to the individuals affected by their operation. In the age of mass decisions on access to benefits and public services, how can it be ensured that the legal interests of individuals will be protected? The discussion on the general assumptions of administrative justice towards the digital state has already begun in the UK, with some interesting developments concerning administrative courts' jurisdiction and evidentiary proceedings. At the same time, in the EU, discussions on the Proposal for an Artificial Intelligence Act have resulted in Model Rules on Algorithmic Impact Assessment, with a significant role for the Supervisory Body and the Expert Board. In this paper, I compare the two approaches and reflect on them from the viewpoint of Polish administrative justice. To do that, I analyse the English legal framework of judicial review and its recent case law. I conclude that Polish administrative justice does not have the legal competence to evaluate the policymaking process and that the role of the court should be limited to examining the decisions of the Supervisory Body.

Keywords: algorithms in public administration, algorithmic decision-making, structural review, jurisdiction over administrative regimes, review of policymaking, administrative justice, judicial review


I. Introduction

The reviewability of automated decision-making is one of the central topics in the discussion concerning the operations of the digital state.[1] As a rule, individuals may defend themselves against the illegal activity of a government before an administrative court, exercising the right to a fair trial (art. 6 sec. 1 of the European Convention on Human Rights[2]). Nevertheless, fixing public law errors in a concrete administrative case does not fix a problem that might be systemic. With their intrinsic discriminatory potential and rigidity, digital tools can breach individual interests on a mass scale, adversely affecting whole social groups.[3] The public body applying a faulty system developed by the government might not itself be responsible for its shortcomings. Should administrative courts then have jurisdiction to assess whole administrative regimes? If so, do they possess sufficient expertise to assess the legality of automated decision-making systems? In this article, I focus on the role of administrative courts in the examination of administrative regimes of automated decision-making. To do that, I compare two approaches that assign judicial review different roles: the one discussed in the English literature and the one based on algorithmic impact assessment proposed by the European Law Institute. Having presented the state of the art in the debate on systemic review in the UK and Poland, I consider the existing proposals from a comparative perspective and assess how reliable they could be for domestic lawmakers.

II. Definition and Overview of Automated Administrative Decision-making

A legal analysis of the operations of the digital state requires an explanation of certain basic concepts to which I refer below. This article concerns automated decision-making systems (or algorithmic decision-making systems; hereinafter 'ADM systems'), which I understand as 'decision-making systems that operate entirely without or with reduced human input, reaching decisions instead through the use of mathematical instruction sequences called algorithms'[4]. Recently, ADM systems have come to be used in numerous public services, from calculating benefits to tax fraud detection and profiling the unemployed. Governments are eager to automate their operations, though the logic behind this is not entirely transparent to society. In Germany, there are over 150 automated systems that affect access to important goods and services and the enjoyment of civil liberties.[5] In the US, the number of discovered algorithms used by the Federal Government has reached 829.[6] In other countries too, such as the UK,[7] Poland[8] and Norway,[9] there is a clear tendency to use data analytics (and algorithms) in the provision of public services.

For a few decades, when the technology employed by public officials was rather simple and the degree of human engagement in decision-making was significant, the automation of public administration did not attract as much attention from legal scholars as it does today. Unsurprisingly, there were no challenges on the grounds of human rights infringement when a driver received an automatically generated speeding ticket. Similarly, there was no reason to commence a legal debate on automated income tax calculations, as the exact mechanism was transparent and a taxpayer could question the amount to be paid. However, since algorithms have become more complex and governments have become more confident in automating public administration, discussion has boomed.[10] With the new types of algorithms, enabling classification, pattern matching and more complex applications of laws, new ideas for their use have appeared in spheres susceptible to human rights infringements. Among typical examples, one could mention fraud detection, welfare debt recovery systems and national security contexts, such as immigration (e.g. the UK settlement scheme).[11] Depending on the purpose, automated administrative decision-making may employ a simple algorithm, a rule-based expert system or a machine learning model. From the technical point of view, the most dangerous for an individual is the latter, as the algorithm can evolve in unpredictable, sometimes inexplicable ways. However, as follows from the Proposal for an Artificial Intelligence Act,[12] what matters more than the employed technology itself is the purpose of its use. All automation methods might constitute a high risk if they are employed in the context of access to and enjoyment of essential public services and benefits or of migration and border control management.[13] Therefore, in this paper, I do not focus on any particular technology. For the reasons mentioned above, the conclusions that I draw might be applied universally, both to machine learning algorithms and to simple ones based on statistical, logic- and knowledge-based approaches.[14]

III. Review of an Act or the ADM System?

Considering the risks to human rights, such as the right to privacy, equal treatment and social protection, the use of ADM systems in public administration must be subject to control. As a rule, this is not a problem in the case of individual decisions, as at least the person to whom a decision is addressed may challenge it in court.[15] A review performed ex post, after the implementation of the ADM system, might, however, be ineffective.


First, algorithmic tools are employed in circumstances that allow mass decision-making. The source code of ADM systems is 'rigid' (inflexible), making them incapable of considering open terms such as 'public interest' or 'justified exemption'. It is not possible (for now) to create a system for mass decisions on permits to build a house or to run a new petrol station. On the other hand, decisions on social benefits or asylum, which usually affect the poor and socially excluded, are issued daily. The adverse impact caused by the irrational automation of some parts of public services might, however, be structural rather than individual.
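To illustrate this rigidity, consider a minimal sketch in Python (an entirely hypothetical benefit rule, not drawn from any real system; the threshold and parameter names are invented): the numeric part of a norm translates easily into code, while an open term such as 'justified exemption' has no machine-readable test and simply drops out.

    # A hypothetical eligibility rule, for illustration only.
    def benefit_eligible(income: float, household_size: int) -> bool:
        # The numeric part of the norm translates directly into a condition.
        threshold = 800.0 * household_size  # assumed statutory threshold
        return income <= threshold
        # An open term such as 'justified exemption' would require discretion
        # that cannot be reduced to a fixed condition; the code must either
        # ignore it or refer the case back to a human decision-maker.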

Furthermore, the risks from which society needs to be protected are systemic. An exemplary side-effect of automation is discrimination against certain societal groups.[16] A good illustration is the Polish system of profiling the unemployed, introduced in 2014. Based on the answers provided by an unemployed person in a special 'profiling questionnaire', the interviewee was ascribed to one of three predetermined profiles. The decision on which profile to attribute to an individual was taken automatically, and it determined the kind of support that the unemployed person could receive from the state.[17] While a person assigned to profile II could be offered a wide range of active labour market programmes (apprenticeships, training, postgraduate studies), another assigned to profile III was not offered any attractive form of assistance.[18] Such segregation into better and worse is highly controversial and, if proven, could be deemed discriminatory.[19]
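The mechanism can be pictured as follows in a purely illustrative Python sketch; since the real questionnaire, weights and thresholds were kept secret, every variable, score and cut-off below is invented.

    # Hypothetical reconstruction of questionnaire-based profiling.
    PROGRAMMES = {
        "I": ["job placement", "job-search assistance"],
        "II": ["apprenticeships", "training", "postgraduate studies"],
        "III": ["Activation and Integration Programme only"],
    }

    def assign_profile(answers: dict) -> str:
        score = sum(answers.values())  # invented scoring rule
        if score >= 80:
            return "I"
        if score >= 40:
            return "II"
        return "III"

    # The automatically assigned profile fixes the support on offer.
    profile = assign_profile({"age_band": 30, "education": 20, "distance_to_work": 5})
    print(profile, PROGRAMMES[profile])

A single hard-coded answer weight (for instance, for disability or single parenthood) can thus deterministically route a whole group of people to the least favourable profile, which is exactly the structural effect described above.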

The public usually does not have access to the information that would allow it to assess whether such a tool is discriminatory or not. Governmental use of ADM is not transparent to society, neither when it comes to numbers nor to the technology employed.[20] In the above example, legal regulations failed to provide for how a specific active labour market programme should be determined. Both the algorithm and the questionnaire were kept secret, for fear that the unemployed would learn the answers and manipulate the system.[21] Nevertheless, exercising its right to public information, a non-governmental organisation (Panoptykon) finally received the list of questions put to the unemployed and published it on its website. Later, in 2018, the regulation on data collection was declared unconstitutional by the Polish Constitutional Tribunal.[22]

Non-transparency as a general problem of ADM is also visible in other respects. In similar cases, societal control of digital government runs up against the trade secrets of private IT system vendors.[23] Even after gaining access to essential information on the logic involved in an ADM system, a meaningful evaluation of it requires technological literacy.[24] The potential group of evaluators is therefore limited to expert bodies with the legal power to intrude into the complex sphere of digital tools and their documentation.

The role of evaluating the fairness, rationality and legality of automation should therefore be performed by competent public bodies with access to the whole documentation describing the characteristics of the ADM system, rather than to an individual decision with reasoning limited to a single case. That body ought to have the competence to establish ex ante whether data was processed in a way that leads to discriminatory treatment or whether the weight ascribed to selected variables in the algorithm was not objectively determined.[25] Bearing in mind the contribution of NGOs to the system of ADM control, one should be aware that oversight of the performance of public administration is the job of the administrative judiciary.[26] Nevertheless, whether the decision to automate something is in fact 'the performance of public administration' depends on the legal form of this rule-making activity.

IV. Jurisdiction over Automated Administrative Decision-making

This leads to the question of whether it is permissible to complain about an ADM system to the administrative court. The answer would vary in every country considered, so my goal is not to examine each legal system separately; however, some general remarks can be made here from the viewpoint of Polish administrative justice.

Above all, it is necessary to establish in what legal form the decision to automate some administrative activity is made. As mentioned in section II, the core element of an ADM system is an algorithm. From the ontological viewpoint, it is a regime of making decisions, written in the form of source code executed by a machine. The algorithm encompasses rules which would normally be stipulated in statutes or secondary acts. In some cases, they would exist in the unwritten form of a policy applied by the government or a public body. By introducing an algorithm into public services, one must translate existing legal norms into lines of code, which would ideally reflect the law. On the other hand, the government may write down its policy in machine code, hence creating law that did not originally exist (code as law[27]).
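The translation step is where policy can silently be made. A minimal sketch (a hypothetical norm, invented for illustration): suppose a statute says an appeal 'should normally be lodged within 14 days'. Rendered in code, the word 'normally' tends to disappear, and a flexible norm hardens into an absolute rule.

    from datetime import date, timedelta

    APPEAL_WINDOW = timedelta(days=14)  # 'normally' has silently dropped out

    def appeal_admissible(decision_served: date, appeal_lodged: date) -> bool:
        # Under the written norm, a late appeal might still be admitted in
        # justified cases; under this code, it never is.
        return appeal_lodged - decision_served <= APPEAL_WINDOW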

It follows that the decision to automate some activity could be made in a policymaking process. It involves numerous important decisions made by various actors, not only ministers. As J. Cobbe and others observed, individual automated decisions are 'heavily influenced by choices around the system (i.e. selection of training data, design and training of models, and testing of systems)', which are usually made by software developers and other non-government actors. In the end, however, legal responsibility is borne by the government, which decided to introduce the algorithm into its operations. Given that an effective review of an ADM system requires jurisdiction over policymaking activity, it is necessary to establish whether administrative courts may assess it.

A typical example of courts competent to review whole administrative regimes is found in the United Kingdom. In English scholarship, the concept of investigating the rules that govern administrative decision-making is known as 'systemic review'[28]. When adjudicating a complaint against a system, the court's role is to evaluate the merits of the regime's rules rather than the actions or intent of the people tasked with enacting them. It focuses on 'the upstream decision' of a public body to create, develop or manage the system. When it comes to the legal form, this might be a decision of the executive (represented by the Lord Chancellor) endorsed by Parliament.[29] That constitutes a challenge for administrative justice, as courts are not generally configured to be part of a policymaking process but to adjudicate disputes.[30] On the other hand, it is commonly held that 'the opportunity to be heard by an impartial adjudicator is central to legitimate democratic authority'[31] and that judicial review is 'a key guardrail of legality'[32]. These arguments prompt me to consider the role of courts in policymaking, even though it does not fit entirely with their nature.

Systemic review in the UK has been recognised as promising for ADM systems, and there are several arguments for it.[33] Given that public law infringements or unfairness are built into the algorithm, deciding on individual cases distances courts from the root of the systemic error - the decisions made by the relevant department or authority as to the design and implementation of such systems.[34] Furthermore, as Chauhan notes, 'systemic review may help to circumvent opacity [non-transparency] by encouraging inquiries into both the input and output of ADM systems' as well as '[to] look at the risks introduced into the ADM system by human decision-makers'[35].

By contrast, a review of the policymaking activity of public administration lies outside the jurisdiction of the administrative judiciary in Poland. Neither policies nor secondary acts passed by public administration appear on the closed list of acts and activities reviewable by administrative courts.[36] The role of the holistic reviewer is performed by the Constitutional Tribunal, which is competent to evaluate the compliance of secondary acts with statutes[37] and of statutes with the Constitution.[38] As the ruling concerning the data collection used to profile the unemployed shows, it is the Tribunal's role to decide on, at least, the constitutionality of such administrative regimes. From the other end of the telescope, the inability to perform a systemic review does not mean that ADM is entirely outside the jurisdiction of Polish administrative courts. In fact, the latter are competent to examine errors in the activities preceding administrative acts. Mistakes might appear, for instance, in inputting the wrong data for an applicant or in the exchange of data between databases. Being competent to adjudicate complaints against administrative decisions, the administrative court would then normally quash the decision on the ground that the public body had not considered all relevant circumstances.[39] However, the court would not be competent to quash the whole regime of making decisions on the ground that the ADM system is intrinsically unfair, non-compliant with human rights or discriminatory. Depending on the type of law that empowers the public body to use the ADM system, the court could ask the Constitutional Tribunal for a preliminary ruling (when the legal basis is provided in a statute) or refuse to apply secondary law in that specific case (if the legal basis is stipulated in a secondary act). Nevertheless, the court could not review the ADM system itself. In Anglo-American law, jurisdiction to evaluate whole administrative regimes would appear as a shift in jurisprudence, while in civil law countries, at least in Poland, it would require an amendment of the Constitution.[40]


Although the UK has a legal framework that could be used to evaluate ADM systems, commentators have observed that subjecting them to systemic review would require not only some procedural amendments but also more creativity and engagement from litigants. As J. Tomlinson and others note, litigants will, inter alia, have to resort to new fact-finding techniques to establish evidence of the operation and impact of an ADM system, as well as rely more on legal routes to gain access to information.[41] Courts, in turn, will have to rely on expert evidence to 'translate complex technological issues to legal audiences'[42]. An alternative to the latter could be, as Lord Sales puts it, 'some system whereby the court can refer the code for neutral expert evaluation by [an] algorithm commission or an independently appointed expert, with a report back to inform the court'[43]. Lord Sales' idea originates from current forms of pre-legislative scrutiny of Acts of Parliament. It has also been included in the model proposed by the European Law Institute.[44] As an entirely different approach, with a limited role for judicial review, it is discussed in a separate section.

It follows that an effective review of ADM systems would require the competence to review legislation, secondary acts or the decisions made in a policymaking process. Jurisdiction over individual administrative acts does not allow the court to assess the whole regime of making decisions or to legally oblige the public body not to use it. While courts in the UK may evaluate whole administrative regimes in a systemic review, this lies outside the jurisdiction of administrative courts in Poland. Judges must be supported by other reviewers capable of assessing general rules and policies against structural problems, such as discrimination or bias. This supplementary role might be performed by central actors in the justice system, such as constitutional courts. As noted in the UK, even with a general legal framework to address unlawful policies, judicial review faces challenges in the shape of evidentiary proceedings and the permissible engagement of expert witnesses. That thought prompts me to reflect on alternative models of judicial review in which an external expert commission plays the central role.

V. Algorithmic Impact Assessments, an Expert Commission and Supervisory Bodies

Considering the role of administrative courts in the evaluation of ADM systems, it is necessary to reflect on the 'Model Rules on Impact Assessment of Algorithmic Decision-Making Systems Used by Public Administration' by the European Law Institute.[45] The Model Rules propose an alternative to systemic review, promoting the idea of an independent Expert Board and a Supervisory Authority. The role of judicial review is in this way limited to the procedure following a complaint lodged with the latter.

The approach of the Model Rules is graduated according to the risk that algorithms pose to individuals. The starting point is that the implementing authority shall carry out an impact assessment before deploying any system that is listed in Annex 1 (high-risk systems) or that constitutes at least a substantial risk according to the screening procedure.[46] A list of high-risk systems has not been proposed; nevertheless, the ELI refers to Annex III of the Proposal for an Artificial Intelligence Act, mentioned earlier in this article. The systems previously discussed in the context of access to and enjoyment of essential public services and benefits, or of migration and border control management, would mandatorily be evaluated in an impact assessment. After answering the list of questions, the implementing authority would draft a report containing, among other things, 'an assessment of the specific and systemic impact of the system on fundamental or other individual rights or interests, democracy, societal and environmental well-being'[47]. Additionally, the authority would make 'a reasoned statement on the legality of the use of the system under the applicable law, in particular data protection law, administrative procedure law and applicable sectoral legislation'[48].

While the role of the independent Expert Board would be to audit the report for its accuracy, adequacy, completeness and compliance with the Model Rules, the application of the Rules would be investigated by the Supervisory Authority - a body responsible for overseeing the use of ADM systems by public authorities. Proceedings could be commenced on its own initiative or upon a complaint from members of the public having a sufficient interest or, alternatively, maintaining the impairment of a right, where administrative procedural law requires this as a precondition. Whereas proceedings before a court of law could be initiated after an unsuccessful recommendation by the Supervisory Authority, the court would also adjudicate in cases concerning the rejection or dismissal of a complaint by the Supervisory Authority or its inactivity.

In my view, the idea of using impact assessments to perform ex-ante control of ADM systems has many advantages. First, it guarantees an independent expert evaluation before the system is implemented, eliminating the problem of the expert witness in administrative courts. Hence, the Model Rules circumvent the issues of evidentiary proceedings in administrative courts, which are by their nature limited to documents.[49] Second, it still guarantees the right to a fair trial before an independent, impartial court. Members of the public who have a sufficient interest or maintain the impairment of a right, where administrative procedural law requires this as a precondition, may demand that the court evaluate a negative decision of the Supervisory Authority or the continuing use of a system by an implementing authority. A significant difference when it comes to the legal form of the activity under review is that it is not a secondary act or policy but a decision of the Supervisory Authority. At the same time, individuals may challenge the legality of decisions that are reviewable in accordance with the applicable law. In effect, an individual retains the existing protection deriving from the current legal framework but, in addition, gains new instruments that allow issues of a systemic nature to be challenged. Finally, the Model Rules require full transparency towards the Expert Board, making it possible to evaluate both the documentation regarding the ADM system and the system itself. Secrecy by contract, business secrets or the safety of the tool would be no obstacle to effective review. This does not end the discussion about the public nature of information regarding source code or algorithms.[50] Nevertheless, given the measures of social participation in the Expert Board, it allows society to exercise control over ADM systems, mitigating both the risk of disclosing trade secrets and governmental fears of society manipulating the system.

The Model Rules allow some of the issues discussed in the previous sections to be avoided. The idea of implementing them seems convincing to me, but it raises additional questions for domestic legal orders. For instance, the state will have to decide who will perform the role of the Supervisory Authority (the European Law Institute proposes data protection authorities[51]). Political decisions will also have to be made on which ADM systems should be mandatorily subject to an impact assessment and which will not require it (Annexes 1 and 2 to the Model Rules). It should also be noted that the idea of carrying out algorithmic impact assessments is not a new one. In 2019, the European Parliament already stressed that 'algorithms in decision-making systems should not be deployed without a prior algorithmic impact assessment (AIA) unless it is clear that they have no significant impact on the life of individuals'.[52] The obligation to designate a supervisory authority was also provided for in the Proposal for an Artificial Intelligence Act.[53] The Model Rules are, however, a detailed conception of what the role of each body in the evaluation of ADM should be and, in my opinion, they balance the role of administrative courts well against the essential issues of jurisdiction, the acceptable scope of evidentiary proceedings, the participation of experts and transparency towards individuals.

VI. Conclusions

There are various approaches to the role of administrative courts in the review of automated administrative decision-making. While in the UK there is already a discussion concerning the jurisdiction of administrative courts to review ADM systems and the required shift in the approach to evidentiary proceedings, the power of the Polish judiciary to evaluate administrative regimes seems to be in question. There is no doubt that the risks ADM systems pose to individuals require that the latter be provided with legal protection. Part of it should be procedures before an administrative court with the power to decide not only on the legality of individual decisions but also on that of a whole administrative regime. In countries where the review of administrative regimes is inadmissible, the effectiveness of this protection calls for another way to be found to influence unfair, irrational or illegal practices by public bodies. The Model Rules proposed by the European Law Institute seem to fill that gap by providing members of the public with a right to challenge the decisions of the supervisory body. In my opinion, the Rules balance the right to a fair trial with the practical problems of administrative courts' jurisdiction, limited evidentiary proceedings and the necessary technical expert knowledge. Moreover, the proposed model envisages social participation in the Expert Board. On the other hand, it is not certain that administrative courts will not have to evolve in order to face technological complexities. Further questions may appear in the review of the Supervisory Body's decisions, as well as in cases of the continued unlawful use of ADM. The findings of UK scholarship might therefore still influence the direction of national courts' development and should be included in the discussion on the judicial review of the digital state.

NOTES

* The author is a PhD candidate at the Department of Administrative and Administrative Judicial Procedure of Adam Mickiewicz University in Poland. This research was carried out within the project 'Informatisation of the judiciary in Norway', Study@Research IDUB decision no. 014/34/UAM/0081.

[1] Jennifer Cobbe, Michelle Seng Ah Lee, and Jatinder Singh, 'Reviewable Automated Decision-Making: A Framework for Accountable Algorithmic Systems' in Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency (FAccT '21, Association for Computing Machinery 2021, New York, NY, USA, 598-609) https://doi.org/10.1145/3442188.3445921

[2] Convention for the Protection of Human Rights and Fundamental Freedoms, opened for signature 4 November 1950, as amended by Protocols Nos. 11, 14 and 15 supplemented by Protocols Nos. 1, 4, 6, 7, 12, 13 and 16, CETS No. 213 (entered into force 1 August 2021).

[3] Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC ('GDPR'), art. 22 sec. 1.

[4] I adopt this definition from Abe Chauhan, 'Towards the Systemic Review of Automated Decision-Making Systems' (2020) 25 (4) Judicial Review 285-295, 286; https://doi.org/10.1080/10854681.2020.1871714. By comparison, J. Cobbe et al. use the term 'automated decision-making' for decisions concerning natural or legal persons made by other natural or legal persons using automated processes, Cobbe et al. (n 1) 599.

[5] Atlas of Automation - Automated decision-making and participation in Germany (1st edn, April 2019), <https://atlas.algorithmwatch.org/en/> accessed 29 September 2022.

[6] Algorithm Tips offers a curated set of algorithms being used across the US government at the federal, state, and local levels at: <https://db.algorithmtips.org/db> accessed 29 September 2022.

[7] Lina Dencik, Arne Hintz, Joanna Redden and Harry Warne, Data Scores as Governance: Investigating uses of citizen scoring in public services (Project Report Data Justice Lab, Cardiff University December 2018, UK).

[8] Natalia Mileszyk, Bartosz Paszcza, Alek Tarkowski, AlgoPolska (Raport 07/2019, Fundacja Centrum Cyfrowe Klub Jagielloński Krakow 2019, Warszawa), <https://centrumcyfrowe.pl/algopolska-raport/> accessed 29 September 2022.

[9] In Norway, most tax decisions concerning individual taxpayers, more than 70 percent of applications to the Norwegian State Educational Loan Fund and the large majority of applications for housing benefits are totally automated, see Dag Wiese Schartum, 'From Legal Sources to Programming Code: Automatic Individual Decisions in Public Administration and Computers under the Rule of Law' in Woodrow Barfield (ed), The Cambridge Handbook of the Law of Algorithms (Cambridge University Press 2020) 307. https://doi.org/10.1017/9781108680844.016

[10] See e.g. Lord Sales, 'Algorithms, Artificial Intelligence and the Law' (2020) 25 (1) Judicial Review 46-66. https://doi.org/10.1080/10854681.2020.1732737; Joanna Mazur, Algorytm jako informacja publiczna w prawie europejskim (Wydawnictwo Uniwersytetu Warszawskiego 2021, Warszawa); Mateo Pressi, 'The Use of Algorithms within Administrative Procedures: National Experiences Compared through the Lens of European Law' (2021) 14 (2) Review of European Administrative Law 69-84. https://doi.org/10.7590/187479821X16254887670900; Rashida Richardson, Jason M. Schultz and Vincent M. Southerland, Litigating Algorithms 2019 US Report: New Challenges to Government Use of Algorithmic Decision Systems (AI Now Institute September 2019).

[11] Monika Zalnieriute, Lisa Burton, Janina Boughey, Lyria Bennett Moses and Sarah Logan, 'From Rule of Law to Statute Drafting: Legal Issues for Algorithms in Government Decision-Making' in Woodrow Barfield (ed), The Cambridge Handbook of the Law of Algorithms (Cambridge University Press 2020) 254. https://doi.org/10.2139/ssrn.3380072. Some authors also regard law enforcement and predictive policing as administrative decision-making. I omit these activities as, in some countries, they do not relate to the operations of public administration but rather fall within the domain of criminal law.

[12] Proposal for a Regulation of the European Parliament and of the Council Laying Down Harmonised Rules on Artificial Intelligence (Artificial Intelligence Act) and Amending Certain Union Legislative Acts (COM/2021/206 final).

[13] See Annex III to the Proposal for an Artificial Intelligence Act.

[14] The technology is less relevant than the simple fact of delegating the issuance of the decision to a data-driven, algorithmically controlled system. Though machine learning has proved to be the most dangerous to human rights and completely untransparent, scholarship has found that simple algorithms may also be difficult to review. Abe Chauhan mentions, for instance, the GCSE and A-Level 2020 results fiasco, which was the consequence of implementing fairly simple algorithms; Chauhan (n 4) fn 5.

[15] Model Rules on Impact Assessment of Algorithmic Decision-Making Systems Used by Public Administration: Report of the European Law Institute (European Law Institute 2022) 51.

[16] Joanna Mazur, 'Can public access to documents support the transparency of automated decision-making? The European Union law perspective' (2021) 29 (1) International Journal of Law and Information Technology 1-23. https://doi.org/10.1093/ijlit/eaaa019

[17] The profiling programme involved the processing of the data of circa 1.5 million people. See more: Jędrzej Niklas, Karolina Sztandar-Sztanderska, Katarzyna Szymielewicz, Profiling the Unemployed in Poland: Social and Political Implications of Algorithmic Decision Making (Fundacja Panoptykon 2014, Warsaw) 5.

[18] Niklas and others (n 17) 13.

[19] As the authors of the report indicate, 'in some cases, the very fact of having a disability or being a single mother turned out to be sufficient to assign a person concerned to Profile III'. See Niklas and others (n 17) 37.

[20] Jack Maxwell, Joe Tomlinson, 'Public Law and Technology: Mapping and Analysing Legal Responses in UK Civil Society' (2020) 25 (1) Judicial Review 28-38, 29 ff, https://doi.org/10.1080/10854681.2020.1732741

[21] Similarly, the transparency of the ADM system detecting tax fraud is limited in order to protect its safety. See Ustawa z dnia 29 sierpnia 1997 r. - Ordynacja podatkowa, t.j. Dz.U. 2021 poz. 1540 (the Tax Ordinance in Poland), art. 119zo.

[22] Wyrok Trybunału Konstytucyjnego z dnia 6 czerwca 2018 r., K 53/16, OTK ZU A/2018, poz. 38 (Judgment of the Constitutional Tribunal).

[23] See e.g. Judgement no. 8472/2019 of the Council of State in Italy and resolutions no. 123-124/2016 of the Comissió de Garantia del Dret d'Accés a la Informació Pública (GAIP) of the Generalitat de Catalunya.

[24] After all, even developers and programmers sometimes cannot follow the reasoning of the machine if it is able to learn and change the way it solves problems (machine learning).

[25] Mazur (n 16) 5.

[26] Konstytucja Rzeczypospolitej Polskiej z dnia 2 kwietnia 1997 r., Dz.U. 1997 nr 78 poz. 483 (Constitution of the Republic of Poland) art. 184.

[27] Lawrence Lessig, Code and Other Laws of Cyberspace (Basic Books 1999).

[28] This form of review originated with the seminal authority R (Refugee Legal Centre) v Secretary of State for the Home Department [2004] EWCA Civ 1481, [2005] 1 WLR 2219 and was originally recognised as 'Structural Procedural Review'; see Carol Harlow and Richard Rawlings, Law and Administration (3rd edn, Cambridge University Press 2009) 669. https://doi.org/10.1017/CBO9780511809941

[29] See, for instance, R (Howard League for Penal Reform) v Lord Chancellor [2015] EWCA Civ 819, which concerned the amount of legal aid funding available for disputes between prisoners and authorities.

[30] Joe Tomlinson, Katy Sheridan, Adam Harkens, 'Judicial Review Evidence in the Era of the Digital State' (31 May 2020) 4, available at SSRN: <https://ssrn.com/abstract=3615312> or <http://dx.doi.org/10.2139/ssrn.3615312>.

[31] Ari Ezra Waldman, 'Algorithmic Legitimacy' in Woodrow Barfield (ed), The Cambridge Handbook of the Law of Algorithms (Cambridge University Press 2020) 116. https://doi.org/10.1017/9781108680844.005

[32] Tomlinson and others (n 30) 4.

[33] Chauhan (n 4) 289.

[34] Chauhan (n 4) 293.

[35] Chauhan (n 4) 293.

[36] Ustawa z dnia 30 sierpnia 2002 r. Prawo o postępowaniu przed sądami administracyjnymi, t.j. Dz.U. 2022 poz. 329 (Law on Proceedings before administrative courts), art. 3 § 2, art. 4.

[37] Art. 188 sec. 1 of the Polish Constitution.

[38] Art. 188 sec. 3 of the Polish Constitution.

[39] Art. 145 § 1 p. 1, c) of Law on proceedings before administrative courts (Poland).

[40] An exception is local enactments issued by local government authorities and territorial agencies of government administration; however, it is seldom that an ADM system would find its legal basis in such an act. See art. 3 § 2 p. 5 of the Law on Proceedings before administrative courts (Poland).

[41] Tomlinson and others (n 30) 19.

[42] Tomlinson and others (n 30) 21. It would also require extending the limitation period for a claim. See Jennifer Cobbe, 'Administrative Law and the Machines of Government: Judicial Review of Automated Public-Sector Decision-Making' (2019) 39 (4) Legal Studies 636-655, 654. https://doi.org/10.1017/lst.2019.9

[43] Lord Sales (n 10) 54.

[44] Model Rules on Impact Assessment.

[45] Model Rules on Impact Assessment.

[46] Model Rules on Impact Assessment, art. 1 sec. 2, art. 4 sec. 1.

[47] Model Rules on Impact Assessment, art. 6 sec. 2 c.

[48] Model Rules on Impact Assessment, art. 6 sec. 2 g.

[49] Art. 106 § 3 of the Law on Proceedings before administrative courts (Poland).

[50] An interesting illustration might be reasoning against disclosure of an algorithm and source code of the electronic case distribution system in Poland (System Losowego Przydziału Spraw), see Wyrok Naczelnego Sądu Administracyjnego z dnia 19 kwietnia 2021 r., sygn. III OSK 836/21 (Judgement of the Supreme Administrative Court) and Wyrok Naczelnego Sądu Administracyjnego z dnia 26 maja 2022 r., sygn. III OSK 1189/21, (Judgement of the Supreme Administrative Court).

[51] Model Rules on Impact Assessment, 50.

[52] European Parliament resolution of 12 February 2019 on a comprehensive European industrial policy on artificial intelligence and robotics (2018/2088(INI)) (2020/C 449/06), 154. It is necessary to mention here also a proposal by AlgorithmWatch: Michele Loi, Anna Mätzener, Angela Müller and Matthias Spielkamp, Automated Decision-Making Systems in the Public Sector: An Impact Assessment Tool for Public Authorities (AlgorithmWatch June 2021) <https://algorithmwatch.org/en/adms-impact-assessment-public-sector-algorithmwatch/> accessed 29 September 2022.

[53] Art. 63 sec. 5 and art. 64 of the Proposal for an Artificial Intelligence Act.

