
Benjámin Máté Einvág[1]: The fundamental rights regulation of artificial intelligence (Studia, 2025/1, pp. 49-65)


Abstract - The fundamental rights regulation of artificial intelligence

The development of artificial intelligence (hereinafter: AI) raises fundamental legal and societal challenges that require new regulatory approaches. The aim of this paper is to examine the fundamental rights aspects of AI with a particular focus on the horizontal scope of fundamental rights and to compare the regulatory models of the European Union AI Act and the South Korean AI Act Bill. The analysis covers liability issues related to AI, data protection, the risks of discrimination and introduces the concept of the proactive state, which can play a key role in the development of legal frameworks. The paper stresses that the regulation of AI cannot be limited to a sectoral approach but requires the development of a dynamic and adaptive legal environment that ensures a balance between innovation and the protection of fundamental rights.

Keywords: artificial intelligence, fundamental rights, horizontal effect, AI Act, regulatory models, legal liability, proactive state


1. Introduction

Artificial intelligence (hereinafter: AI) is not a new concept to consider from a fundamental rights perspective. A large number of recommendations relating to AI have been developed on the basis of universal principles of human rights, some of which have ultimately been elevated to the level of binding legislation. These include the European Union's Artificial Intelligence Act (hereinafter: AIA) and the relatively recent South Korean AI Act Bill (hereinafter: SKAIA).[2] The gradual integration of AI systems into everyday life has not only enriched our lives with numerous useful functions but has also become the source of new types of risks. The so-called "black box" phenomenon, which has almost become a cliché in the AI discourse, is a prominent example of these risks.[3] The 2013 case of Eric Loomis illustrates the black box problem. During his criminal trial, the COMPAS system identified him as a high-risk individual, which contributed to a more severe sentence. Loomis challenged the sentence, but the Wisconsin Supreme Court upheld it and the US Supreme Court declined to review the case.[4] The problems arising from a lack of transparency, coupled with statistically based and often questionable discriminatory practices, have led to results that are highly concerning and legally dubious. The case of Eric Loomis thus serves as an important example, underscoring the need to subject AI-based systems to comprehensive and critical scrutiny. Such scrutiny is essential to minimize the risk of fair trial violations and to prevent potential violations of other fundamental rights.

The aim of this paper is to analyze in detail the issue of the horizontal applicability of fundamental rights, to compare the regulatory approaches of the AIA and the SKAIA to fundamental rights and to present the theoretical basis and practical relevance of the concept of the proactive state.


2. Constitutional approach

The emergence of AI is not some distant vision of the future. It is already part of our present reality. We are actively facing its challenges today, and it is our responsibility to define the legal and ethical boundaries within which it operates. The growing influence of AI on decision-making, economic structures and individual rights requires a proactive legal framework that ensures accountability and protects fundamental rights.

It has to be recognized that modern constitutions were designed primarily to limit the power of the state, thereby safeguarding civil liberties and promoting the development of the law. However, constitutional guarantees can no longer be limited to merely constraining the state. They must also be applied to protect individuals from potential violations by other private actors, including corporations and AI-driven systems. In this respect, the AIA introduced by the European Union is groundbreaking. Both its objectives and its content reveal a forward-looking regulatory approach that seeks to shape the future of governance in a rights-protective manner. The AIA recognizes that AI regulation cannot be reactive - it must anticipate risks and provide clear legal safeguards before harm occurs.[5]

To develop an effective regulatory model, whether in Hungary or within a broader international context, it is essential to move beyond the constraints of classical legal thinking while preserving the core values and principles refined through centuries of jurisprudence. The legal profession must adapt to the complexities of AI by integrating multidisciplinary perspectives, including technological expertise, ethics and fundamental rights law. However, it is somewhat paradoxical that while we must maintain continuity with past legal traditions, the challenges posed by AI demand entirely new legal and regulatory strategies. The tools of the past are insufficient to address the unprecedented risks and ethical dilemmas presented by AI systems. New approaches are needed - ones that emphasize algorithmic transparency and the mitigation of biases embedded in AI decision-making.

In this context, regulatory frameworks must strike a balance between innovation and legal certainty. Over-regulation can stifle technological progress, but the absence of clear rules can lead to unchecked power and violations of fundamental rights. The governance of AI will therefore need to evolve in a way that upholds democratic principles, while ensuring that the development and deployment of AI serves the public good, rather than narrow corporate or political interests.

The fundamental rights aspect of constitutions is mainly concerned with the protection of individuals against the State. Constitutions contain declarations of fundamental rights. The state must not only respect these rights but also actively enforce them through legal guarantees. From the point of view of fundamental rights, the individual is the holder of rights, while the state has an objective institutional duty to ensure the protection and realization of these rights. This raises the crucial question of how to examine our subject through the lens of constitutional law.

Constitutional law, as a fundamental legal framework, has a unique and synthesizing role in AI regulation. Just as constitutional law defines and structures the relationship between the state and its citizens, it also has the potential to establish clear boundaries for AI use in all circumstances. A constitutional approach to AI regulation would require the state to establish effective mechanisms for accountability, transparency and, crucially, human oversight. This aligns with the principle of effet utile, which demands that legal protections be not only theoretical but also practically enforceable. The European Union's AIA exemplifies this approach, seeking to impose regulatory safeguards that align with fundamental rights considerations.

Beyond mere compliance, constitutional law could play a guiding role in shaping ethical and legal norms for AI. While civil, data protection, and administrative law address specific aspects of AI-related issues, constitutional law provides the overarching principles necessary for a coherent and rights-based regulatory framework. Just as constitutional law limits state power to protect individuals, it can also set necessary limits on AI systems, ensuring that their deployment remains aligned with fundamental rights under all circumstances.

In sum, AI regulation cannot be confined to sectoral legal approaches. It requires a constitutional framework that ensures the protection of human rights and democratic values. The rapid integration of AI into society necessitates legal frameworks that are adaptable yet robust, capable of addressing both current and emerging risks. Constitutional law, with its emphasis on fundamental rights and institutional responsibilities, provides a critical foundation for AI governance, ensuring that technological advancements serve the public good while safeguarding individual freedoms.


3. Liability of AI

The development of AI raises new liability challenges, particularly with regard to fundamental rights. While traditional liability structures assume a clear link between the actor and the consequences, the autonomous decision-making capacity of AI makes this relationship increasingly complex. In my view, the primary considerations when examining the issue of liability are who performed the act, who is responsible for it and what the consequences are. According to the reasoning of Sándor Udvary, human will always stands behind AI, since AI works to achieve a goal set by the operator and is not capable of independent goal formation.[6]

In the case of damage caused by AI, several liability models can be applied. Product liability legislation offers one possibility, placing liability on the manufacturer, since AI is currently placed on the market legally as a product.[7] Accordingly, the manufacturer is liable for any damage resulting from a defect of the AI, provided it is covered by product liability rules. However, this level of simplification is not always workable. In some cases, it is difficult to determine clearly whether the damage was caused by a malfunction of the AI, misuse by the user or manipulation by a third party.

In this perspective, it is also worth examining the applicability of conditional legal liability.[8] According to Krisztina Nagy's interpretation, social media platforms are liable if they fail to remove infringing content posted by users after becoming aware of it.[9] Applying this logic to AI systems, it could be concluded that the producer would be liable only if it fails to promptly correct an error reported by the user. This approach can have significant advantages, as it ensures that the manufacturer does not have to take responsibility for every malfunction, while remaining responsible for the proper functioning of the system.
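To make the notice-and-correct logic described above concrete, the following minimal sketch models it in Python. The 72-hour grace period, the type names and the fields are illustrative assumptions made for this example, not requirements drawn from any statute or from the sources cited here.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta
from typing import Optional

# Illustrative grace period within which a reported defect must be fixed
# (an assumption for the sketch, not a statutory deadline).
GRACE_PERIOD = timedelta(hours=72)

@dataclass
class DefectReport:
    reported_at: datetime                    # when the user notified the producer
    corrected_at: Optional[datetime] = None  # None means still uncorrected

def producer_liable(report: DefectReport, now: datetime) -> bool:
    """Under this conditional model, liability attaches only if the
    defect was not corrected before the grace period expired."""
    deadline = report.reported_at + GRACE_PERIOD
    if report.corrected_at is not None:
        return report.corrected_at > deadline  # fixed, but too late
    return now > deadline                      # still unfixed past the deadline
```

On this model, a producer that corrects a reported defect within the window escapes liability, mirroring the notice-and-takedown logic applied to platforms.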

The issue of the liability of AI is not only a matter of civil law rules but also has fundamental rights implications. One of the most significant challenges of AI systems is the risk of discrimination, especially in employment decisions, credit assessments and law enforcement applications.[10] AI systems that learn from biased datasets can disproportionately disadvantage marginalized groups, which can lead to violations of the fundamental rights to equality and non-discrimination.[11] Therefore, the AIA introduces stricter rules for high-risk AI systems, requiring transparency and accountability mechanisms.[12]

Liability issues are also particularly important for data protection and privacy. Surveillance systems, facial recognition technologies and profiling systems operated by AI create significant risks to the protection of personal data.[13] The General Data Protection Regulation (GDPR) provides a legal basis for regulating such systems, but the specific nature of artificial intelligence requires further legal clarification.[14] Consequently, the AIA expressly prohibits the use of subliminal techniques; the legislator's intention is clearly to protect the autonomy of individuals to make decisions and to ensure that their choices are not restricted by hidden manipulative algorithms.[15] Such prohibitions are not only aimed at regulating the technical functioning of AI systems but also address how these technologies affect the social status of individuals and the exercise of their fundamental rights. Here again, the issue of responsibility has a dual dimension. Developers have a responsibility to ensure that data is handled appropriately, while users need to be aware of the system's operational limitations.

The possibility of appeal against decisions taken by AI is also a central issue. Where an AI system makes a decision that adversely affects an individual - for instance rejecting a loan application or a job application - the person concerned should have the opportunity to challenge the decision.[16] Due to the so-called black box problem, many AI algorithms are not transparent in their operation, which hampers effective redress and procedural safeguards. To remedy this, legislation should require the explainability of models and the possibility of human review.

Addressing the liability issues of AI is of particular importance not only in the field of civil law and product liability but also for the protection of fundamental rights. AI systems must operate within a clear legal framework, ensuring non-discrimination, transparency and adequate remedies. This requires a complex regulatory environment that both encourages innovation and guarantees effective protection of fundamental rights. One of the tools for achieving this objective is the examination of horizontal effect.

4. Horizontal effect

The traditional paradigm of the enforcement of fundamental rights is based on vertical legal relationships, specifically those between the state and the individual.[17] According to this logic, fundamental rights primarily impose obligations on state institutions and bodies to ensure their respect and implementation. However, social and technological developments increasingly necessitate the enforceability of fundamental rights within legal relationships between private actors - such as companies, corporations or individuals. This principle is referred to as the horizontal effect of fundamental rights.

In private law relationships, certain constitutional principles - particularly through general clauses - apply objectively, reflecting the significant societal importance of these legal interactions.[18] In particular, this is relevant at a time when technological developments and societal transformations are creating new challenges that require a comprehensive and innovative review of the legal interpretation and application framework. In such circumstances, the integration of constitutional principles would also ensure the broad application of fundamental rights.


Horizontal effect can basically be divided into two types: direct and indirect ('Drittwirkung')[19] effect. On the theory of direct horizontal effect, fundamental rights provisions can be applied directly in private law disputes, thus allowing parties to rely directly on the fundamental rights guaranteed by the constitution.[20] Accordingly, the courts can base their decisions directly on the fundamental rights themselves, without relying on the mediating role of private law rules. Direct effect therefore means that a possible violation of fundamental rights can serve as an independent legal basis in disputes between private parties.

Direct horizontal effect offers numerous advantages, particularly in terms of the efficiency of legal enforcement and its role in providing guarantees. Since this theory does not require the use of intermediate legislation, it can provide direct and rapid protection for the persons concerned. However, this approach has also been criticized as it may lead to a blurring of the boundaries between public and private law rules,[21] which may jeopardize the stability of private legal relations.[22]

In German constitutional court practice, the theory of indirect horizontal effect is particularly significant. It emphasizes the intrinsic connection between the fundamental rights enshrined in the constitution and the principles of private law.[23] This connection was confirmed by the Lüth case,[24] a milestone in the history of German constitutional thinking. Under indirect horizontal effect, fundamental rights do not appear directly in private law disputes but play a role in the interpretation of the relevant legislation, thereby ensuring the indirect enforcement of fundamental rights in private law relationships.[25]

By contrast, direct horizontal effect allows individuals to invoke fundamental rights provisions directly as a legal basis, enabling judicial decisions to be made explicitly in light of these constitutional provisions. While this approach broadens the scope for the enforcement of fundamental rights, Tamás Klein notes that the doctrinal foundations of direct horizontal effect are less stable and systematically developed than the elaborate systems of vertical protection of fundamental rights.[26] This prompts the question of how effectively legal certainty and private law autonomy can be maintained when fundamental rights serve as a direct basis for private legal relationships.

As regards the indirect horizontal effect of fundamental rights in Hungary, decision No. 8/2014. (III. 20.) of the Constitutional Court can be considered a significant milestone. The decision established that the provisions of the Fundamental Law may have an effect not only in relations between the state and the individual but also in private legal relations. This effect is achieved through general clauses, which transmit fundamental constitutional values into private law relations.[27] This approach opens up broader possibilities for applying fundamental rights in situations where private actors, such as technology companies, develop or operate artificial intelligence systems.

5. International outlook on the regulation of AI

The rapid development and proliferation of AI technologies raise numerous legal, ethical, and societal challenges. Consequently, an increasing number of countries and international organizations are initiating comprehensive regulatory frameworks to ensure the safety, reliability, and protection of fundamental rights in AI systems. This chapter examines two significant regulatory frameworks: the European Union's AIA and South Korea's SKAIA. By analyzing the similarities and differences between these two legal instruments, we aim to understand how the international community shapes AI regulation and identify potential gaps within these frameworks.

The AIA represents one of the most comprehensive and detailed regulatory frameworks for AI governance. Its objective is to ensure the safety and reliability of AI systems while protecting fundamental rights. The AIA employs a risk-based approach, classifying AI systems according to their risk levels and regulating them accordingly.[28] The regulation prohibits AI practices that pose unacceptable risks, such as government-conducted social credit systems or software that promotes hazardous behavior, manipulative practices or subliminal techniques.[29] Additionally, the AIA differentiates between stakeholders, imposing distinct obligations on various AI users and developers. This approach allows for flexible regulatory application across diverse industrial and technological contexts.
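As a rough illustration of the risk-based approach described above, the sketch below maps the AIA's four commonly cited risk tiers to simplified duty lists. The example systems in the comments and the wording of the duties are illustrative assumptions for this sketch, not quotations from the Regulation.

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable"  # e.g. social scoring, subliminal manipulation
    HIGH = "high"                  # e.g. credit scoring, law enforcement uses
    LIMITED = "limited"            # e.g. chatbots interacting with people
    MINIMAL = "minimal"            # e.g. spam filters

# Simplified, assumed mapping from tier to obligations, for illustration only.
OBLIGATIONS = {
    RiskTier.UNACCEPTABLE: ["prohibited: may not be placed on the market"],
    RiskTier.HIGH: ["risk management system", "human oversight",
                    "conformity assessment before deployment"],
    RiskTier.LIMITED: ["transparency: disclose that users interact with AI"],
    RiskTier.MINIMAL: [],  # no AI-specific duties under this scheme
}

def duties_for(tier: RiskTier) -> list[str]:
    """Return the simplified duty list attached to a given risk tier."""
    return OBLIGATIONS[tier]
```

The point of the structure is that obligations scale with risk rather than applying uniformly, which is what distinguishes the AIA's model from the SKAIA's more general duties discussed below.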

On December 26, 2024, South Korea's National Assembly enacted the Framework Act on the Development of Artificial Intelligence and the Establishment of a Trust Foundation, the SKAIA. This legislation aims to improve the quality of life and enhance competitiveness, marking the world's second comprehensive AI regulation after the AIA.[30] The South Korean law primarily focuses on AI systems with demonstrable impacts, particularly generative AI, which may significantly affect human life, including physical safety and fundamental rights. Unlike the AIA, the South Korean regulation does not contain specific prohibitions but instead emphasizes requirements for the safety and reliability of AI systems. Moreover, the law defines obligations broadly, without distinguishing between different types of stakeholders. Both the AIA and the SKAIA aim to ensure the safety and reliability of AI systems while safeguarding fundamental rights. Nevertheless, there are notable differences in their approaches and emphases.

The AIA's risk-based approach facilitates flexible regulatory application, whereas the South Korean law focuses more on safety and reliability requirements. The EU explicitly bans AI practices posing unacceptable risks, which the South Korean legislation does not explicitly address. Furthermore, the EU imposes differentiated obligations on stakeholders, while the South Korean law outlines comprehensive obligations without stakeholder-specific distinctions.[31]

These regulatory frameworks exhibit gaps that require attention through implementing acts. For the AIA, further clarification is needed regarding risk classifications and sanctions.[32] In contrast, the SKAIA lacks specific prohibitions and differentiated obligations for stakeholders. Overall, the regulatory frameworks of the AIA and SKAIA represent significant strides toward ensuring the safety and reliability of AI systems. However, the differences and gaps between these legal instruments highlight the need for further international coordination and harmonization. Moving forward, these principles are expected to evolve and become more precise to effectively address the challenges arising from the rapid advancement of AI technologies.


The nature of AI technology requires the development of uniform standards and guidelines to ensure ethical and responsible use. Cross-border challenges, the development of universal standards and the management of extreme risks are important elements of international cooperation in the field of AI. The OECD Global Strategy Group meeting stressed the need to address cross-border challenges and develop universal standards, in addition to developing digital skills. Different cultural and political approaches, as well as different economic interests, make it difficult to create a single international regulatory framework. The European Union and Canada have launched a Digital Partnership to deepen cooperation to create a positive and people-centered digital economy and society.

6. The concept of the proactive state

The system of fundamental rights has a dual nature, encompassing both a subjective and an objective dimension.[33] The subjective aspect refers to the rights and entitlements of the individual, while the objective aspect manifests as the state's institutional duty of protection.[34] This means that the state is responsible for creating and maintaining the institutional and legal structures essential for ensuring the practical realization and protection of fundamental rights.

In cases where a violation of fundamental rights occurs, or when multiple fundamental rights come into conflict, the state entrusts the judicial system with the task of determining which right enjoys protection, in what form and to what extent.[35] To resolve such conflicts, various fundamental rights tests are employed in judicial practice, assisting in the fair and lawful resolution of cases on the basis of principles such as necessity, proportionality or other constitutional standards.

Moreover, beyond the classical protection of fundamental rights, it is necessary to lay down the concept of a proactive state. The goal of the proactive regulatory principle, as we interpret it, is for the state to use legal instruments not merely to prevent but to preemptively address challenges that may raise constitutional concerns. The significance of the state's proactivity lies not merely in prevention - hence the use of the term proactive rather than preventive - but in fostering a forward-looking legislative intent. This approach aims to enhance the effective enforcement of the principle of effet utile, derived from the right to a fair trial, which mandates effective legal protection before legally binding remedial forums in cases of fundamental rights violations.[36] The idea can readily be linked to Tamás Klein's phrase 'vaccine law': provisions that do not give priority to sanctioning the infringement but prescribe measures that prevent it from occurring in the first place.[37]

As an example, the Fundamental Rights Impact Assessment (hereinafter: FRIA) in the AIA aims to ensure that before high-risk AI systems are brought to market, their users - whether public bodies or private organizations providing public services - carry out a thorough impact assessment. Its purpose is to assess the potential impact of the use of such systems on the enjoyment of fundamental rights.[38] On a broad interpretation of the principle of effective legal protection, the FRIA aims to prevent any possible violation of fundamental rights and to ensure that the functioning of AI systems is in line with the fundamental rights standards required by the EU legal framework.
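Purely as an illustration of the kind of pre-deployment record a FRIA implies, the sketch below structures such an assessment as data plus a completeness check. The field names and the gating rule are assumptions made for the example; they paraphrase, rather than reproduce, Article 27 AIA.

```python
from dataclasses import dataclass, field

@dataclass
class FriaRecord:
    deployer: str                  # body deploying the high-risk system
    intended_purpose: str          # what the system will be used for
    affected_groups: list[str]     # categories of persons likely affected
    identified_risks: list[str]    # risks to fundamental rights found
    mitigations: dict[str, str] = field(default_factory=dict)  # risk -> measure

def ready_to_deploy(record: FriaRecord) -> bool:
    """Assumed gating rule: the assessment must have examined risks and
    every identified risk must have a documented mitigation measure."""
    if not record.identified_risks:
        return False  # an empty risk analysis is treated as incomplete
    return all(risk in record.mitigations for risk in record.identified_risks)
```

Framing the FRIA this way makes the proactive logic visible: the check happens before deployment, not after harm has occurred.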

The creation of the FRIA is a clear step towards proactive regulation. The AIA requires member states to establish appropriate supervisory authorities to ensure proper enforcement of its provisions.[39] Thus - indirectly[40] - the establishment and implementation of the FRIA system can be understood as an obligation stemming from the state's responsibility under public law and EU law. This situation can also be interpreted as the EU Regulation creating an objective obligation of institutional protection for the member states, since the text of the regulation not only refers to the importance of fundamental rights but also explicitly obliges the member states and those to whom it applies to protect them. It is particularly important that the Regulation refers to the Charter[41] as the point of reference for the assessment of fundamental rights impacts, even though the Charter, as a general rule, is not directly applied by the member states.

The AIA strengthens innovative legislation not only by introducing the FRIA but also by creating a regulatory sandbox.[42] Article 57 of the Regulation contains the basic rules on the test environment. At first reading, the term sandbox may seem novel, but the method is not entirely new. Test environments based on a similar concept have been used in the financial technology sector, where AI solutions have already been present for many years.[43] There are also physical test tracks for self-driving cars, where developers test the latest solutions in a real-world environment.[44] These environments are designed to ensure that new AI systems comply with relevant EU and national legislation. In essence, the sandbox creates a collaboration between technology developers and legislators, ensuring ongoing communication between the parties. At the end of the testing process, a regulatory framework is developed that takes into account and reconciles the interests of both parties. This approach is extremely forward-looking, as it does not follow the legislative methodology of the past but adopts a new one.[45]

In line with the subsidiarity principle, the Regulation delegates the task of operating the test environments to the national authorities operating the FRIA, so as to provide for public authority and appropriate control. As a manifestation of proactivity and direct horizontal effect, Article 57(6) stipulates that test environments provide an opportunity to identify and minimize risks in a timely manner.

We argue that regulatory sandboxes will not only play a key role in fostering technological innovation but will also ensure a dynamic and adaptive regulatory environment in the long term, facilitating the emergence of secure and legitimate AI applications. These environments will allow for the continuous adaptation of regulatory requirements and technological innovations, ensuring that legislation can keep pace with technological developments. However, the current regulatory framework also lacks sophistication as regards test environments. The regulation does not clarify how member states with different economic and technological capacities should compensate for such differences in order to ensure uniformity; the mere requirement to "allocate sufficient resources"[46] is not sufficient in the present situation. The Regulation allows for the creation of test environments at regional and local level,[47] but it clarifies neither the coordination between the different levels nor the responsibilities in case of possible infringements. The AIA emphasizes the transparency of innovation efforts and test environments but does not provide sufficient guarantees that social control is applied in the testing process. The latter may be of particular concern where fundamental rights are at stake.


7. Conclusion

As explained above, the issue of the legal regulation of AI is crucial for the protection of fundamental rights. The rise of the technology poses new challenges to legal systems, as AI becomes increasingly embedded in social and economic decision-making and affects many legal sectors. This makes it necessary to develop an appropriate legal framework to ensure that fundamental rights are respected and effectively enforced.

The central issue of the study is the horizontal effect of fundamental rights, i.e. how fundamental rights can be enforced in private law relationships. According to the traditional constitutional conception, these rights operate primarily in the relationship between the state and the individual, but in the modern technological environment it is essential that they can also be enforced in relationships between private actors. AI-based decision-making systems are increasingly present in the labour market, financial services and public administration, making the extension of horizontal effect a key issue for the protection of fundamental rights.

Discrimination risks also feature prominently in the analysis. In many cases, AI systems are trained on biased databases, which can lead to certain social groups being disproportionately disadvantaged. This is particularly relevant in the areas of employment and credit assessment, where AI decisions have a direct impact on the lives of individuals. The case of Eric Loomis, presented in the introduction, also highlights that the lack of an appropriate legislative and safeguarding environment and of human supervision, as well as negligence, can lead to extremely worrying outcomes. The AIA therefore introduces stricter regulation of high-risk systems, ensuring transparency and appropriate redress in an effort to curb worrying practices. In this way, the European Union has created constitutional-level law that reaches beyond the competence of the member states, a practice that may also raise questions of sovereignty in the future.

The liability issues of AI were also examined in this paper. Owing to the specificities of autonomous decision-making and self-evolving algorithms, traditional liability systems are not fully applicable. Under product liability rules, manufacturers are liable for damage caused by AI, but in many cases such systems cannot be considered products in the traditional sense. Under the concept of conditional legal liability, developers and operators are liable if a detected defect is not corrected in a timely manner, which gives the system a path to lawful operation.

The regulation of AI is not only a national challenge but also a transnational one that requires global cooperation. The EU's AIA and South Korea's SKAIA represent two different approaches. The EU uses a risk-based system that bans and strictly regulates certain AI applications, while South Korea sets more general standards without imposing specific bans.

The study also highlights the importance of the concept of the proactive state as a proposed response to disruptive technologies. Reactive legislation is not enough to regulate AI; proactive and preventive regulatory instruments are needed. The FRIA and regulatory sandbox systems aim at reconciling innovation with legal requirements, creating the possibility of developing AI safely, securely and in compliance with fundamental rights. At the same time, it remains a weakness that the AIA fails to clearly explain and pull together its effects on fundamental rights, something our digital age makes essential.[48]

The paper's central argument is that the regulation of AI is a complex and multifaceted task, covering a wide range of areas from ensuring the protection of fundamental rights to clarifying liability and fostering transnational cooperation. In developing a legal framework for AI, it is argued that a dynamic and adaptive regulatory approach is needed, one that both encourages innovation and ensures the effective enforcement of fundamental rights.

Bibliography

Solon Barocas - Moritz Hardt - Arvind Narayanan: Fairness and machine learning. FairML Book, 2019.

Nóra Chronowski: Alkotmányosság három dimenzióban. Jogtudományi Alapkutatások, 4. Budapest, TK Jogtudományi Intézet, 2022.

Csaba Cservák: A digitalizáció hatása az alapjogok gyakorlására és érvényesítésére. In: Árpád Olivér Homicskó (ed.): A digitalizáció hatása az egyes jogterületeken. Acta Caroliensia Conventorum Scientiarum Iuridico-Politicarum XXIX., Budapest, Patrocinium, 2020, 55-76.

Günter Dürig: Grundrechte und Zivilrechtsprechung. In: Theodor Maunz (ed.): Festschrift für Hans Nawiasky. München, Isar Verlag, 1956.

Lilian Edwards - Michael Veale: Slave to the algorithm? Why a right to explanation is probably not the remedy you are looking for. Duke Law and Technology Review, 2017 (1), 18-84.


Zoltán Fleck: Mi a kihívás a jogban? In: Tamás Gyekiczky (ed.): Határtér. Digitális kihívások a jogban. Budapest, Patrocinium, 2021.

Fruzsina Gárdos-Orosz: Alkotmányos polgári jog? Az alapvető jogok alkalmazása a magánjogi jogvitákban. Budapest-Pécs, Dialóg Campus, 2011.

Fruzsina Gárdos-Orosz: Az alapjogok korlátozása. IJOTEN/Alkotmányjog, 2020.

Fruzsina Gárdos-Orosz: Alapvető jogok a magánjogi jogviszonyokban - horizontális hatály a közösségi jogban. In: Tamás Nótári - Gábor Török (eds.): Prudentia Iuris Gentium Potestate. Ünnepi tanulmányok Lamm Vanda tiszteletére. Budapest, Magyar Tudományos Akadémia Jogtudományi Intézet, 2010.

Fruzsina Gárdos-Orosz - Renáta Bedő: Az alapvető jogok érvényesítése a magánjogi jogviták során - Az újabb alkotmánybírósági gyakorlat (2014-2018). Alkotmánybírósági Szemle, 2018 (1), 3-15.

Katalin Gombos - Franciska Zsófia Gyuranecz - Bernadett Krausz - Dorottya Papp: A mesterséges intelligencia jogalkalmazási területen való hasznosíthatóságának alapjogi kérdései. In: Bernát Török - Zsolt Ződi (ed.): A mesterséges intelligencia szabályozási kihívásai. Budapest, Ludovika Egyetemi Kiadó, 2021.

János Kálmán: Ex Ante "Regulation"? The Legal Nature of the Regulatory Sandboxes or How to "Regulate" Before Regulation Even Exists. In: Gábor Hulkó - Roman Vybiral (eds.): European Financial Law in Times of Crisis of the European Union. Budapest, Dialóg Campus, 2019, 215-225.

Tamás Klein: A DSA alapjogvédelmi mechanizmusa mint alkotmányjogi nóvum. In: András Koltay - Tamás Szikora - András Lapsánszky (eds.): A vadnyugat vége? Tanulmányok az Európai Unió platformszabályozásáról. Budapest, Orac Kiadó, 2024.

Zoltán Majó-Petri - Sándor Huszár: Autonóm járművek, önvezető autók: mit gondol a közönség? Közlekedéstudományi Szemle, 2020 (1), 66-75.

Krisztina Nagy: Facebook files - gyűlöletbeszéd törölve? A közösségi médiaplatformok tartalom-ellenőrzési tevékenységének alapjogi vonatkozásai. Pro Futuro, 2018 (2).

Yuval Noah Harari: 21 lecke a 21. századra. Central Kiadó Csoport, 2018.

Balázs Schanda - Zsolt Balogh: Alkotmányjog, alapjogok. Budapest, Pázmány Press, 2022.

Jamie Susskind: Politika a jövőben. Életünk a technológia uralta világban. Budapest, Athenaeum Kiadó, 2022.

Bernát Török: Az alkotmányjog horizontális hatálya. In: Bernát Török - Zsolt Ződi (ed.): A mesterséges intelligencia szabályozási kihívásai. Budapest, Ludovika Egyetemi Kiadó, 2021.

Sándor Udvary: Szenzorok a mátrixban. Értelem, érzékelés és a jog. In: Tamás Gyekiczky (ed.): Határtér. Digitális kihívások a jogban. Budapest, Patrocinium, 2021.

Michael Veale - Frederik Zuiderveen Borgesius: Demystifying the Draft EU Artificial Intelligence Act - Analysing the good, the bad, and the unclear elements of the proposed approach. Computer Law Review International, 2021 (4), 97-112.


Juho Yoon - Jiyoung Sohn - Jeonghee Kang: Significance of the Passage of the AI Framework Act and Its Impact on the Industry. BKL Legal Update, 2025.

Act V of 2013 on the Civil Code, Chapter LXXII

Constitutional Court decision of 3064/2014. (III. 26.) [15]

Directive 2000/31/EC of the European Parliament and of the Council of 8 June 2000 on certain legal aspects of information society services, in particular electronic commerce, in the Internal Market. OJ L 178, 17. 07. 2000, 1-16.

Getting the future right - Artificial intelligence and fundamental rights, European Union Agency for Fundamental Rights, 2020.

Lüth case. BVerfGE 7, 198 (1958).

National Assembly passes the AI Basic Act, https://www.shinkim.com/eng/media/newsletter/2667#:~:text=The%20AI%20Basic%20Act%20requires,safety%2C%20reliability%2C%20and%20accessibility. (22. 01. 2025.)

Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC (General Data Protection Regulation)

Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 laying down harmonised rules on artificial intelligence (Artificial Intelligence Act) (AIA)

South Korea: National assembly passes basic law on development of AI. https://www.dataguidance.com/news/south-korea-national-assembly-passes-basic-law (23. 02. 2025.)

State v. Loomis. https://harvardlawreview.org/print/vol-130/state-v-loomis/ (22. 01. 2025.)

NOTES

[1] PhD student, Doctoral School of Law and Political Sciences, Károli Gáspár University of Reformed Church in Hungary.

[2] National Assembly passes the AI Basic Act, https://www.shinkim.com/eng/media/newsletter/2667#:~:text=The%20AI%20Basic%20Act%20requires,safety%2C%20reliability%2C%20and%20accessibility (22. 01. 2025.).

[3] Bernát Török: Az alkotmányjog horizontális hatálya. In: Bernát Török - Zsolt Ződi (ed.): A mesterséges intelligencia szabályozási kihívásai. Budapest, Ludovika Egyetemi Kiadó, 2021, 147. Katalin Gombos - Franciska Zsófia Gyuranecz - Bernadett Krausz - Dorottya Papp: A mesterséges intelligencia jogalkalmazási területen való hasznosíthatóságának alapjogi kérdései. In: Bernát Török - Zsolt Ződi (ed.): A mesterséges intelligencia szabályozási kihívásai. Budapest, Ludovika Egyetemi Kiadó, 2021, 344.

[4] State vs. Loomis case. https://harvardlawreview.org/print/vol-130/state-v-loomis/ (22. 01. 2025.).

[5] This will be discussed in more detail in the final part of the paper.

[6] Sándor Udvary: Szenzorok a mátrixban. Értelem, érzékelés és a jog. In: Tamás Gyekiczky (ed.): Határtér. Digitális kihívások a jogban. Budapest, Patrocinium, 2021, 169.

[7] Act V of 2013 on the Civil Code, Chapter LXXII.

[8] Directive 2000/31/EC of the European Parliament and of the Council of 8 June 2000 on certain legal aspects of information society services, in particular electronic commerce, in the Internal Market. OJ L 178, 17. 07. 2000, 1-16.

[9] Krisztina Nagy: Facebook files - gyűlöletbeszéd törölve? A közösségi médiaplatformok tartalom-ellenőrzési tevékenységének alapjogi vonatkozásai. Pro Futuro, 2018 (2), 118.

[10] Yuval Noah Harari: 21 lecke a 21. századra. Central Kiadó Csoport, 2018, 66-67. Jamie Susskind: Politika a jövőben. Életünk a technológia uralta világban. Budapest, Athenaeum Kiadó, 2022, 241-243.

[11] Solon Barocas - Moritz Hardt - Arvind Narayanan: Fairness and machine learning. FairML Book, 2019, 19-20.

[12] Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 laying down harmonised rules on artificial intelligence (Artificial Intelligence Act) (hereinafter: AIA).

[13] Getting the future right - Artificial intelligence and fundamental rights, European Union Agency for Fundamental Rights, 2020.

[14] Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC (GDPR).

[15] AIA. (2024) Article 5, section (1), point a).

[16] Lilian Edwards - Michael Veale: Slave to the algorithm? Why a right to explanation is probably not the remedy you are looking for. Duke Law and Technology Review, 2017 (1), 18-84, 4-10.

[17] Tamás Klein: A DSA alapjogvédelmi mechanizmusa mint alkotmányjogi nóvum. In: András Koltay - Tamás Szikora - András Lapsánszky (eds.): A vadnyugat vége? Tanulmányok az Európai Unió platformszabályozásáról. Budapest, Orac Kiadó, 2024, 271.

[18] Fruzsina Gárdos-Orosz: Alapvető jogok a magánjogi jogviszonyokban - horizontális hatály a közösségi jogban. In: Tamás Nótári - Gábor Török (eds.): Prudentia Iuris Gentium Potestate. Ünnepi tanulmányok Lamm Vanda tiszteletére. Budapest, Magyar Tudományos Akadémia Jogtudományi Intézet, 2010, 204.

[19] Fruzsina Gárdos-Orosz: Alkotmányos polgári jog? Az alapvető jogok alkalmazása a magánjogi jogvitákban. Budapest-Pécs, Dialóg Campus, 2011, 45.

[20] Fruzsina Gárdos-Orosz - Renáta Bedő: Az alapvető jogok érvényesítése a magánjogi jogviták során - Az újabb alkotmánybírósági gyakorlat (2014-2018). Alkotmánybírósági Szemle, 2018 (1), 3-15.

[21] Nóra Chronowski: Alkotmányosság három dimenzióban. Jogtudományi Alapkutatások, 4. Budapest, TK Jogtudományi Intézet, 2022, 40.

[22] Günter Dürig: Grundrechte und Zivilrechtsprechung. In: Theodor Maunz (ed.): Festschrift für Hans Nawiasky. München, Isar Verlag, 1956, 183.

[23] Gárdos-Orosz 2011, 45.

[24] Lüth case. BVerfGE 7, 198 (1958).

[25] Gárdos-Orosz - Bedő 2018, 3.

[26] Klein 2024, 271.

[27] Gárdos-Orosz - Bedő 2018, 8-10.

[28] AIA. (2024) Recital.

[29] AIA. (2024) Article 5, section (1).

[30] South Korea: National Assembly passes basic law on development of AI. https://www.dataguidance.com/news/south-korea-national-assembly-passes-basic-law (23. 02. 2025.).

[31] Juho Yoon - Jiyoung Sohn - Jeonghee Kang: Significance of the Passage of the AI Framework Act and Its Impact on the Industry. BKL Legal Update, 2025.

[32] Michael Veale - Frederik Zuiderveen Borgesius: Demystifying the Draft EU Artificial Intelligence Act - Analysing the good, the bad, and the unclear elements of the proposed approach. Computer Law Review International, 2021 (4), 97-112.

[33] Balázs Schanda - Zsolt Balogh: Alkotmányjog, alapjogok. Budapest, Pázmány Press, 2022, 35.

[34] Ibid.

[35] Fruzsina Gárdos-Orosz: Az alapjogok korlátozása. IJOTEN/Alkotmányjog, 2020, 105.

[36] Constitutional Court decision of 3064/2014. (III. 26.) [15].

[37] Klein 2024, 15.

[38] AIA. (2024) Article 27, section (1).

[39] AIA. (2024) Article 28.

[40] As a regulation, the AIA requires no transposition, so its provisions are directly applicable in the member states. For those subject to the AIA, however, its effect is indirect insofar as it is mediated by the member states.

[41] AIA. (2024) Article 1, section (1).

[42] AIA. (2024) Recital 138, 139.

[43] János Kálmán: Ex Ante "Regulation"? The Legal Nature of the Regulatory Sandboxes or How to "Regulate" Before Regulation Even Exists. In: Gábor Hulkó - Vybiral Roman (ed.): European Financial Law in Times of Crisis of the European Union. Budapest, Dialóg Campus, 2019, 215-225.

[44] Zoltán Majó-Petri - Sándor Huszár: Autonóm járművek, önvezető autók: mit gondol a közönség? Közlekedéstudományi Szemle, 2020 (1), 66-75.

[45] Zoltán Fleck: Mi a kihívás a jogban? In: Tamás Gyekiczky (ed.): Határtér. Digitális kihívások a jogban. Budapest, Patrocinium, 2021, 47.

[46] AIA. (2024) Article 57, section (4).

[47] AIA. (2024) Article 57, section (2).

[48] Csaba Cservák: A digitalizáció hatása az alapjogok gyakorlására és érvényesítésére. In: Árpád Olivér Homicskó (ed.): A digitalizáció hatása az egyes jogterületeken. Acta Caroliensia Conventorum Scientiarum Iuridico-Politicarum XXIX., Budapest, Patrocinium, 2020, 55-76.
