
Dániel Mazsu[1]: Latest Regulatory Developments regarding Artificial Intelligence in Hungary and the EU - The provisions of the AI Regulation entering into force and the most recent developments regarding AI in Hungary (DJM, 2025/1-2., 65-88. o.)

https://doi.org/10.24169/DJM/2025/1-2/4

Abstract - Latest Regulatory Developments regarding Artificial Intelligence in Hungary and the EU - The provisions of the AI Regulation entering into force and the most recent developments regarding AI in Hungary

Artificial intelligence (AI) has undergone rapid development over the past decades, accelerating even further in the past few years. The European Union is trying to respond to the technology's regulatory challenges with the AI Act, and Hungary through legislation related to the Act. The article reviews the interpretation of AI in the EU and in Hungary, the differences between the definitions adopted by the two, and the circumstances of defining AI, especially the distinctions between generative models, general-purpose AI and general AI. The article presents the most important provisions of the AI Regulation entering into force at the time of writing, with special regard, owing to the gradual entry into force, to prohibited AI practices and the rules on general-purpose AI models. In addition, also due to its timeliness, it presents and analyses the EU-level institutional system of the AI Regulation and the planned domestic organizations related to and complementing it, building on the available documents of the new Hungarian AI Strategy (MIS 2.0). The aim of the article is to provide the reader with an insight into the AI regulatory efforts of the EU and Hungary, with a special focus on the legislator's responses to the social and economic issues raised by the technology.

Keywords: artificial intelligence, AI Act, European Union, general-purpose AI, general AI, AI Office


Absztrakt - A Mesterséges Intelligencia legújabb szabályozási fejleményei Magyarországon és az Unióban - Az MI Rendelet most hatályba lépő rendelkezései, valamint a legújabb magyarországi fejlemények

A mesterséges intelligencia (MI) technológiája az elmúlt évtizedek, de leginkább évek során gyors ütemű fejlődésen ment keresztül, amelynek szabályozási kihívásaira az Európai Unió az MI Rendeleten, Magyarország pedig az ahhoz kapcsolódó jogszabályokon keresztül igyekszik választ adni. A tanulmány áttekinti az Unió, valamint Magyarország MI értelmezését, a kettő közötti értelmezési különbségeket és a definiálás körülményeit, különösen a generatív modellek, az általános célú, valamint az általános MI között. A cikk bemutatja az MI Rendelet cikk írásakor hatályba lépő legfontosabb rendelkezéseit, a fokozatos hatálybalépésből adódóan különös tekintettel a tiltott MI gyakorlatokra, valamint az általános célú MI modellekre vonatkozó előírásokra. Ezen kívül, ugyancsak annak időszerűsége miatt bemutatja és elemzi az MI Rendelet EU-szintű intézményrendszerét, valamint az ahhoz kapcsolódó, azt kiegészítő tervezett hazai szervezeteket, építve az új magyar MI Stratégia (MIS 2.0) elérhető dokumentumaira. A cikk célja, hogy betekintést nyújtson az olvasó számára az EU és Magyarország MI szabályozási törekvéseibe, különös tekintettel a jogalkotói válaszokra a technológia társadalmi és gazdasági kérdései kapcsán.

Kulcsszavak: mesterséges intelligencia, MI Rendelet, Európai Unió, általános célú MI, általános MI, MI Hivatal

Abstrakt - Neueste regulatorische Entwicklungen im Bereich der künstlichen Intelligenz in Ungarn und der EU - Die Bestimmungen der KI-Verordnung, die jetzt in Kraft treten, und die neuesten Entwicklungen in Ungarn

Künstliche Intelligenz (KI) hat sich in den letzten Jahrzehnten rasant entwickelt, und diese Entwicklung hat sich in den letzten Jahren sogar noch beschleunigt. Die Europäische Union versucht, mit dem KI-Gesetz auf die regulatorischen Herausforderungen der Technologie zu reagieren, und Ungarn versucht dies durch Rechtsvorschriften im Zusammenhang mit dem Gesetz. Der Artikel befasst sich mit der Auslegung von KI in der EU und in Ungarn, den Unterschieden zwischen den jeweils angenommenen Definitionen und den Umständen der Definition, insbesondere zwischen generativen Modellen, universeller und allgemeiner KI. Der Artikel stellt die wichtigsten Bestimmungen der KI-Verordnung vor, die zum Zeitpunkt der Erstellung dieses Artikels in Kraft treten, unter besonderer Berücksichtigung, aufgrund des schrittweisen Inkrafttretens, verbotener KI-Praktiken und der Vorschriften zu universellen KI-Modellen. Darüber hinaus stellt er, auch aufgrund seiner Aktualität, das institutionelle System der KI-Verordnung auf EU-Ebene und die geplanten inländischen Organisationen vor, die damit verbunden sind und es ergänzen, aufbauend auf den verfügbaren Dokumenten der neuen ungarischen KI-Strategie (MIS 2.0). Ziel des Artikels ist es, dem Leser einen Einblick in die KI-Regulierungsbemühungen der EU und Ungarns zu geben, mit besonderem Fokus auf die Antworten des Gesetzgebers auf die sozialen und wirtschaftlichen Fragen der Technologie.

Schlagworte: Künstliche Intelligenz, KI-Gesetz, Europäische Union, universelle KI, allgemeine KI, KI-Büro

Introduction

Artificial Intelligence (hereinafter: AI) is currently one of the most popular buzzwords, if not the most popular. Until the early 2000s, it appeared mostly in science fiction literature as a quasi-obligatory element. In real life, however, the technology has also been with us for many decades, even if progress was not always steady during that time. From the beginning of the century, this changed radically.

Thanks to devices capable of collecting and transmitting data, i.e. those making up the Internet of Things (IoT), a huge, almost immeasurable amount of data was generated (Badman & Kosinski, 2024). At the same time, although no longer in accordance with Moore's law (Intel, 2005), the computing capacity of computers continued to increase, as if "keeping up" with this amount of data. These two developments were the basis for a significant increase in the quantity and quality of AI developments.

These developments have appeared in more and more areas of society and the economy, but the fall of 2022 can be considered a turning point, as that is when the release of ChatGPT's then-current model "exploded" into public discourse, making the technology one of the main innovations in the world, possibly for good. Fortunately, however, the law had begun to deal with this technology much earlier.


An EU-wide strategy on AI was adopted years earlier, followed by a White Paper. In addition, national strategies were also created, which were based to a greater or lesser extent on the EU documents. Among these documents, the AI Regulation stands out both at the EU and at the global level, as it is the first comprehensive piece of legislation in the world to deal with AI technology in the complex way that the technology certainly requires.

The different provisions of the AI Regulation become applicable in stages, due to the exceptional priority given to AI and the technology's complexity. Of these provisions, the rules on prohibited AI systems have applied since February 2025, and the next such milestones are the establishment of codes of practice for general-purpose AI models and the designation of the organizations based on the AI Regulation.

Domestic processes at the national level are closely related to these: on the one hand because of the AI Regulation, and on the other hand running in parallel with it. The parallelism stems from the fact that Hungary's AI Strategy was adopted almost at the same time as the EU White Paper. This document has recently been revised, both because of the astonishing advancement of the technology since its adoption and because a revision was mandated, and it is currently being renewed. This offers a unique opportunity to see how a Member State interprets and implements the AI Regulation in the long term, including with regard to areas not mentioned in the AI Regulation.

Therefore, the purpose of this article is to present and provide insight into the most relevant parts of the AI Regulation, supplemented by the ongoing domestic processes, with special focus on the already existing and planned regulation.

1. What qualifies as Artificial Intelligence in the European Union?

In order to examine anything in depth, it is necessary to have appropriate knowledge about it. AI is fundamentally and inherently a technological solution. However, it did not have a clear and widely accepted definition, so when the law began to deal with it (partly out of necessity), it first had to create one. In this respect, the EU already has considerable experience.

The EU's AI Strategy (EU Commission, 2018) was the first to define AI, according to which "Artificial intelligence (AI) refers to systems that display intelligent behavior by analyzing their environment and taking actions - with some degree of autonomy - to achieve specific goals." (EU Commission, 2018, p. 1) This was further clarified in the document "Ethics Guidelines for Trustworthy Artificial Intelligence" drawn up by the high-level expert group set up by the Commission (High-Level Independent Expert Group on Artificial Intelligence, 2019) and in the text on the definition of AI published alongside it (High-Level Independent Expert Group on Artificial Intelligence, 2019).

Piecing these materials together, the definition is that "AI-based systems are human-designed software systems (and possibly hardware systems) that, with regard to their complex purpose, operate in the physical or digital dimensions by perceiving their environment through data acquisition, interpreting the structured and unstructured data collected, reasoning on the basis of their knowledge or processing the information derived from such data, and deciding on the most effective actions to achieve the given goal. AI systems can use symbolic rules or learn a numerical model, and they can also change their behavior by analyzing how past actions have affected the environment." (EU Commission, 2018, p. 1) and (High-Level Independent Expert Group on Artificial Intelligence, 2019, p. 6)

As the EU definition did not have binding force at the time, the Member States created and applied their own AI definitions in documents related to the technology, especially, but not exclusively, in their strategies, such as the German (Bundesregierung Deutschland, 2018) or the French AI Strategy (Villani, 2018). This practice, however, brought the risk of fragmentation.

One of these was the Hungarian AI Strategy (Hungarian Government, 2020), which contains not just one but two definitions that differ slightly in content. AI first appears as "the sum of algorithmic systems capable of teaching and improving themselves based on input data" (Hungarian Government, 2020, p. 6). The second definition states that "(a)rtificial intelligence is a piece of software capable of mapping parts of human intelligence, and supporting or autonomously performing processes of sensing, interpreting, decision-making and action." (Hungarian Government, 2020, p. 9) The document adds the comment that the Hungarian AI Strategy only deals with "narrow" AIs.

In order to really understand these definitions, it is necessary to see what definitions exist in the literature, as they give essential context to all the aforementioned concepts.


1.1. AI definitions in literature

Just as in the previously mentioned documents, there is no unanimously accepted definition of AI in the literature. Unanimously accepted here means that, unlike other, more "established" expressions, whose definitions need not be (re)created because everyone self-evidently means the same thing by them, AI has no such shared meaning. The abundance of concepts seen in regard to AI has evolved from the fact that everyone considers different things important when defining it. As a result, one can find different definitions per author or even per article, or one can read about the technology without any explicit definition at all. In most cases, this happens when the precise concept of the technology itself is not important for the given topic and the technology is presented merely as a phenomenon. Despite the fact that there are many concepts with different meanings, two conceptual groups can be formed from them based on how they approach AI.

The first group starts from intelligence. From this point of view, programs that make intelligent decisions and take intelligent actions to achieve their goals are considered AI. An intelligent decision is one that a person would take in the same situation. These concepts are called human-centered definitions (Turner, 2019, p. 9).

The second group aims to eliminate this anthropomorphic "dependence", that is, the reliance on the word intelligence, with which it is possible to form a concept that is more objective. According to this reading, intelligence means the sum of rational decisions, so a program is intelligent if it makes rational decisions to achieve its goal. These concepts constitute rational definitions. (Turner, 2019, p. 13)

The other circumstance that requires attention when defining the concept is that many studies (Turner, 2019, p. 8) can be found that are about AI but do not use an explicit definition. This is either because the authors do not consider a definition necessary, because they deal with the effects of the technology on a given field, or because they do not consider it possible to create such a definition.

In addition to this grouping, it is also necessary, mainly due to the attitude of the Hungarian Strategy, to distinguish between general and narrow AIs. This grouping distinguishes based on the level of development of the agents, and their alternative designation - in the order of the previous categories - is strong and weak AI.


As a subject of regulation, general AIs currently exist only in theory and in the world of science fiction. These are agents that are able to operate in several or even all areas, solve problems, develop themselves by learning independently, and use the experience gained in one area to solve a problem arising in another. However, the known research and development related to such AI was still very rudimentary at the time both the White Paper (EU Commission, 2020) and the Hungarian AI Strategy (Hungarian Government, 2020) were drawn up, so its impact on regulatory issues was treated as negligible. In contrast, narrow AI was able to solve tasks only in a targeted, well-defined area, such as image recognition or producing predictions. Although this may seem limited compared to general AI, it had a huge advantage: this type of AI already existed. According to this approach, any system or service that actually used AI at the time was narrow AI. However, as presented below, research and development related to general AI has since progressed greatly.

Based on the above groupings, it can be stated that the EU definition belongs to the group of rational AI concepts, as it does not even use the word intelligence; instead, goal-orientation and the most efficient, rational achievement of the goal appear as its central elements.

In contrast, the Hungarian document does not explicitly name a definition but writes in more detail in two places about what the strategy considers to be AI. The first passage gives a very general definition, which is not specific enough to be classified into either the human-centered or the rational category. The second definition is more detailed and can be clearly classified into the group of human-centered concepts. Although the document notes in a footnote that it narrows the definition to narrow AIs due to the underdevelopment of general AIs, the Hungarian AI Strategy does not explain what this term means, even though this cannot be considered trivial knowledge; this suggests that the target audience of the document must have prior knowledge of the theoretical background of AI types (Hungarian Government, 2020, p. 10). In any case, it is highly unlikely that relying on general AIs to achieve the goals set out in the Strategy would have been effective, either in terms of resource use or efficiency.


1.2. The AI and AI system in the AI regulation

In fact, the above-mentioned fragmentation was largely eliminated with the adoption of Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 laying down harmonized rules on artificial intelligence and amending Regulations (EC) No 300/2008, (EU) No 167/2013, (EU) No 168/2013, (EU) 2018/858, (EU) 2018/1139 and (EU) 2019/2144 and Directives 2014/90/EU, (EU) 2016/797 and (EU) 2020/1828 (the Artificial Intelligence Regulation) (hereinafter: the AI Regulation), which created a unified definition of AI and AI system at the level of the EU, removing the uncertainties arising from multiple definitions.

The AI Regulation defines the AI system as its main object, according to which an AI system "means a machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments" (AI Regulation Article 3 section 1).

Like the previous EU definitions, this one also belongs to the rational AI concepts, as it is not based on human intelligence. However, compared to the previous definition(s), it is more precise and much clearer.

However, as was the case above with the White Paper's and the Hungarian Strategy's AI definitions, a detour is necessary here as well. Namely, after the submission of the first version of the regulation, ChatGPT "exploded" in the fall of 2022 (Buchholz, 2023). Due to the huge notoriety resulting from this explosion, together with the positive and negative effects stemming from the relative ease of use of similar generative models, a very strong social need arose for these models, and the problems they intensify, to appear separately in the regulation.

Unfortunately, as with the White Paper's and the Hungarian Strategy's AI definitions, the definition of generative AI leads to the same problem: there is no universally accepted concept. Since this is currently the most "trending" type of AI, part of the definitional problem is that the providers of the individual models, in order to gain as many users as possible, try to present their own model as more capable than the competitors'. However, based on the largest dictionaries and articles on the subject, a working definition can be applied, according to which generative artificial intelligence is a special AI technology, mostly based on machine learning, which first analyzes and classifies the patterns provided to it in the learning phase and then, based on them, is able to produce new content, for example text, images or music, according to the input (prompt) provided by humans (Nah, et al., 2023, pp. 277-304).

Generative AI, by the definition described above, is suitable for creating new, or at least seemingly new, content, in contrast to other, "average" AI systems. While this capability has enormous potential, for example in the fields of healthcare, marketing or education, it also carries similarly great dangers. Such AI-generated content is perfectly suited to realizing the phenomenon of hidden influence, or "nudge", which is also mentioned in the Hungarian Strategy (Hungarian Government, 2020, p. 13), and, due to the availability and relative simplicity of these models, to a much greater extent and in much larger quantities than before.

Also, due to the versatile use of such models, many people, especially those who develop and provide generative AI models, expect them to be the basis of the general AIs mentioned above. Mainly because of the conceptual framework of the AI Regulation, it is necessary to present the definition of general AI. The definition problem repeats here for the third time, as once again there is no uniformly accepted definition. However, there is a safe reference point. Due to its general acceptance, the definition in the article on which the ARC-AGI test - a test specially designed to measure general intelligence - is based can be used, according to which "(t)he intelligence of a system is a measure of its skill-acquisition efficiency over a scope of tasks, with respect to priors, experience, and generalization difficulty" (Chollet, 2019, p. 27). While the test itself, its creator, as well as the general public see general AI as a future technology, there are already AI developers who believe that it must be spoken of in the present tense. According to the information provided by the creator of the best-known AI model, their late 2024 model has already reached a level exceeding the general human intelligence level defined as a benchmark in the ARC-AGI test (Edwards, 2024).

Due to the multifaceted and potentially dangerous nature of generative AI, as well as the increasing focus on generative AI technology during the adoption process of the AI Regulation, the AI Regulation incorporated the definition of the general-purpose AI model in order to respond to societal demand, as evidenced by the huge number of amendments suggested by the European Parliament. In the AI Regulation, a general-purpose AI model "means an AI model, including where such an AI model is trained with a large amount of data using self-supervision at scale, that displays significant generality and is capable of competently performing a wide range of distinct tasks regardless of the way the model is placed on the market and that can be integrated into a variety of downstream systems or applications, except AI models that are used for research, development or prototyping activities before they are placed on the market" (AI Regulation Article 3 section 63).

2. AI Regulation: risk categories and prohibited uses

The AI Regulation has continued with the risk-based approach previously outlined by the EU in the EU AI Strategy and detailed in the White Paper. These categories represent an approach to technology regulation that does not regulate the technology itself, but its impact, in order to ensure that the provisions created are technology-neutral and future-proof.

Accordingly, the AI Regulation sets up several categories based on an AI system's current and potential risk. The first is the list of prohibited AI practices, which are most at the center of societal dialogue. These are also highlighted by the very structure of the AI Regulation, as the practices classified as posing an unacceptable risk are listed directly in the regulation, unlike the high-risk AI practices, which are only listed in the annexes.

Eight situations and practices were found to pose an unacceptable risk to society:

1. Subliminal or purposefully manipulative AI

2. AI exploiting vulnerabilities

3. "Social scoring" (Neuwirth, 2023, p. 3)

4. "Pre-crime" AI

5. Facial recognition databases

6. Emotion recognition AI in the workplace or in educational institutions

7. Biometric Categorization Systems

8. Real-time, remote biometric identification systems in public places.


The first category includes AI practices that use the technology in a subliminal, deceptive or manipulative way. These were included in the prohibited category, and in first place, because they are capable of modifying or distorting the behavior, or even the entire attitude, of a person or group in relation to any product or service. Moreover, as the Hungarian Strategy feared, this also covers the phenomenon of civic and political manipulation, as it represents the EU's regulatory reaction to the "nudge" phenomenon. The part of the regulation dealing with this does not mention specific solutions, not even as examples, so it covers both currently existing content-based solutions, such as images, video and sound, and potential future practices, such as a not-yet-existent human-machine interface (Dr. Petrányi & Dr. Horváth, 2024).

The second category is in fact another aspect of the first, where the purpose is the same, but the source of the danger is not the use of covert, manipulative techniques but the "targeted" people, since such a practice would take advantage of "some vulnerability of a person or group due to age, disability or individual social or economic situation" (AI Regulation Article 5 section 1 point b). This follows not only from the classic protective consideration that these people and social groups need stronger protection precisely because of their vulnerability, but also from the fact that equality is enforced at the level of consumers on the EU market, precisely because of the original, product-safety-oriented nature (Tóth, 2024, pp. 3-11) of the AI Regulation.

The third category is perhaps the most notorious to date. It aims to prevent the phenomenon of "social credit" already used in China (Bertelsmann Stiftung, 2023), whether coming from the private or the government sector. However, the AI Regulation, like the second category, prohibits classification on the basis of a characteristic or personality trait only if it is detrimental to the person or persons concerned. It applies only where the social score obtained results in disadvantageous or unfavorable treatment in social contexts unrelated to the situations in which the data were originally created or collected, or in disadvantageous or unfavorable treatment that is unjustified or disproportionate to the person's conduct in the community or to its seriousness (AI Regulation Article 5 section 1 point c points i and ii). In other words, it does not ban the phenomenon of "social scoring" or "social credit" in its entirety. Since even the Chinese version can have potentially positive effects (Yang, 2022), the AI Regulation does not close off the possibility of implementing an "only positive" version of social scoring.


The fourth phenomenon, referred to as "pre-crime" AI in the list above, involves the risk assessment of natural persons in order to determine in advance the risk of the analyzed person potentially committing a crime (AI Regulation Article 5 section 1 point d). This is related to social scoring in two ways. One is that it effectively prohibits the same activity, only in relation to a different purpose. The other connection is that while the ban on social scoring is indirect, the AI Regulation allows a direct exception in the case of "pre-crime" AI use. This also shows that the list of prohibited AI practices implies a differentiation among them: an ordering of how serious each practice is from the point of view of the EU legislator.

The fifth situation is also related, on some level, to the prohibition of social scoring. That section prohibits the creation or expansion of facial recognition databases by means of the untargeted scraping of facial images from the internet or from closed-circuit camera footage (AI Regulation Article 5 section 1 point e). If this were not a prohibited use of an AI system, it would allow individuals to be traced with the help of such databases without their consent, in which case it would be much easier to set up a system that classifies them based on their activity tracked in this way.

The sixth prohibited use case covers AI systems that recognize emotions, and draw inferences from them, in workplaces and educational institutions (AI Regulation Article 5 section 1 point f). This restricts the use of AI in a relatively narrow area only, as it does not place economic exploitation in the prohibited category. Since AI fundamentally works on data, which is essential for both training and interaction, in cases of economic exploitation the AI Regulation will certainly have to be applied together with other legal regulations, for example the GDPR (Forbes, 2022), in the first cases.

The seventh, penultimate prohibited use concerns biometric categorization systems. These AI systems would categorize natural persons individually based on their biometric data in order to derive or infer their race, political opinions, trade union membership, religious or philosophical beliefs, sex life or sexual orientation (AI Regulation Article 5 section 1 point g). The AI Regulation continues the pattern whereby the later a situation appears in the list, the more exceptions are allowed, as this prohibition does not extend to the lawful labeling or filtering of lawfully acquired biometric datasets, such as images based on biometric data, or to the categorization of biometric data in the field of law enforcement. In practice, this means that if facial images or even voice recordings are uploaded voluntarily, the algorithms of an online platform can use them to categorize and identify individual users' political opinions, religious beliefs or other personal characteristics. As the AI Regulation uses an exhaustive list, the prohibition does not apply to economic use, processing or profiling. Thus, in the latter cases, similarly to AI systems for emotion analysis, legal practice will have to provide the framework for interpretation.

The eighth prohibited mode is the use of real-time remote biometric identification systems in publicly accessible places for law enforcement purposes. This situation was the subject of much, if not the most, debate during the legislative process. While the Member States wanted to narrow this ban as much as possible so that this use could be practiced more widely, the European Parliament, and the human rights organizations acting through it, wanted to extend the ban as far as possible. The compromise resulted in the most heavily regulated category among the prohibited practices. In accordance with the interests represented by the Member States, this is the only practice for which authorization may be provided by Member State legislation, an exception not allowed for any of the aforementioned prohibited practices.

The banned practices described above, examined from several perspectives, such as their first place in the structure of the regulation or the justification of the prohibitions in several places by reasons going beyond product safety, could even lead to the conclusion that, although the AI Regulation broadly reflects the approach of the White Paper, i.e. the building of an "ecosystem of trust", in some places it goes further in favor of fundamental rights. One could conclude that the EU has created a kind of AI Codex, against the original intention of the Regulation. However, although this is an emphatic part of the regulation, it is not the central one. The focus of the regulation is on high-risk AI uses, which constitute the largest part of the legislation, both in quality and in quantity.

2.1. Requirements regarding general-purpose AI models

In addition to the prohibited AI practices, the provisions on general-purpose AI models are also highly relevant at the moment. These provisions make up one of the most controversial parts of the legislation, as they cover the most popular genre of AI, generative models. This part of the AI Regulation therefore receives a lot of attention, which is further enhanced by the fact that it is one of the five parts of the regulation that become applicable in the following period, i.e. the first half of 2025 (Future of Life Institute, 2024).

The most important elements of the regulation of general-purpose AI models are the rules on transparency. Here the AI Regulation takes a two-pronged approach. The first prong is a general description of the general-purpose AI model, including the tasks it can perform according to its purpose; the type and nature of the AI systems into which it can be integrated; the architecture and number of parameters of the model; and the input and output modality (image, sound, text, etc.) and format. The second prong is a detailed description of the model and the relevant information about the development process, such as the technical solutions; how the given model can be integrated into another AI system; information about the model's training process; the design requirements; the most important design decisions, reasons and assumptions; the type and origin of the data; the data management methods (such as cleaning, filtering, etc.); the number of data points, their scope and main characteristics; the computing resources required by the model; and the known or estimated power consumption. As an aid, the act provides a detailed list of all the information that must be published about a general-purpose AI model in Annex XII of the AI Regulation.
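Purely as an illustration, the two-pronged documentation described above could be sketched as a structured record. The field names and example values below are hypothetical shorthand for the items listed in the text, not an official schema taken from the AI Regulation or its annexes:

```python
# Hypothetical sketch of the two-pronged transparency documentation for a
# general-purpose AI model. Field names and values are illustrative only,
# not an official schema from the AI Regulation or its annexes.

general_description = {
    "intended_tasks": ["text generation", "summarization"],
    "integrable_system_types": "chat assistants, downstream NLP systems",
    "architecture": "decoder-only transformer",
    "parameter_count": 7_000_000_000,
    "modalities": {"input": ["text"], "output": ["text"]},
}

development_description = {
    "technical_solutions": "standard pre-training and fine-tuning pipeline",
    "training_process": "self-supervised pre-training, then instruction tuning",
    "data_type_and_origin": "licensed and publicly available text corpora",
    "data_management_methods": ["cleaning", "filtering", "deduplication"],
    "data_points": 2_000_000_000_000,  # e.g. number of training tokens
    "compute_resources": "GPU cluster, details to be documented",
    "estimated_energy_consumption_kwh": 1_500_000,  # known or estimated
}

def documentation_complete(*sections: dict) -> bool:
    """Toy check: every listed field has been filled in (no None values)."""
    return all(value is not None
               for section in sections
               for value in section.values())

print(documentation_complete(general_description, development_description))
```

The point of the sketch is only that the regulation distinguishes a coarse, outward-facing description from a detailed, development-oriented one; the authoritative list of required items is the one in the annex itself.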

Within the framework of this chapter, the AI Regulation created the term general-purpose AI model with systemic risk. The regulation does not define this directly but indirectly: such a model either has high-impact capabilities, assessed on the basis of appropriate technical tools and methodologies, or has a significant impact on the internal market due to its prevalence, reaching or exceeding the capabilities of the most advanced general-purpose AI models by exceeding a computational volume set out in the AI Regulation. This threshold is set at 10^25 floating-point operations (FLOPs) used in training (Walker II, 2024) and effectively leaves the category open-ended, as any AI that has high-impact capabilities, i.e. exceeds the above value or possesses such capabilities according to the Commission, is to be classified here. This is especially interesting given that, according to some opinions, current models are "undertrained", so tying the definition to the number of floating-point operations is not necessarily the most appropriate measure to serve as a basis for systemic risk (Stevens, 2024). Furthermore, this definition goes against the logic of both the AI Regulation and previous EU documents. Through these, such as the EU AI Strategy or the White Paper, the European Union has declared, for

- 78/79 -

many years now, that AI regulation must be technology-neutral, so that the legislation needs to be amended and updated as rarely as possible, especially in the case of such a rapidly developing and evolving technology, and also to avoid the impression that the regulation was made for a particular company or model. However, this section does not follow the regulatory solution applied to prohibited and high-risk AI systems. The main reason is that, since general AI has been left out of the regulation, this is the area about which the least information is available and which is the most hypothetical, though not futuristic. This "non-futurism" is a fact, as there were already models exceeding the aforementioned 10^25 FLOPs when the AI Regulation was adopted (OpenAI's GPT-4 and Alphabet's Gemini models), and the number of such models is constantly increasing. General-purpose AI models with systemic risk must therefore comply with additional requirements on top of the general ones: additional documentation obligations related to copyright and training content; model providers must regularly evaluate the models; mitigate the risks identified in those evaluations; document and report serious incidents; ensure adequate cybersecurity and physical safeguards; and additionally report energy consumption, connecting the AI Regulation with the Union's green transition rules.
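The open-endedness of the 10^25 FLOP threshold can be illustrated with a rough calculation. The sketch below uses the widely cited 6 x parameters x training-tokens rule of thumb for estimating training compute; this heuristic is not part of the Regulation, and the model sizes used are hypothetical.

```python
# Sketch: checking a hypothetical model against the AI Regulation's
# 10^25 FLOP systemic-risk presumption. The 6 * N * D estimate is a
# common rule of thumb from the scaling-law literature, NOT a method
# prescribed by the Act, which speaks only of cumulative training compute.

SYSTEMIC_RISK_THRESHOLD = 1e25  # cumulative training FLOPs

def estimate_training_flops(parameters: float, training_tokens: float) -> float:
    """Rough training-compute estimate: ~6 FLOPs per parameter per token."""
    return 6 * parameters * training_tokens

def presumed_systemic_risk(parameters: float, training_tokens: float) -> bool:
    """True if the estimated compute meets or exceeds the threshold."""
    return estimate_training_flops(parameters, training_tokens) >= SYSTEMIC_RISK_THRESHOLD

# Hypothetical frontier model: 1 trillion parameters, 10 trillion tokens
flops = estimate_training_flops(1e12, 10e12)
print(f"{flops:.1e}", presumed_systemic_risk(1e12, 10e12))  # 6.0e+25 True

# A much smaller model stays well below the threshold
print(presumed_systemic_risk(1e9, 1e12))  # False
```

The calculation also shows why the criticism cited above (Stevens, 2024) has force: an "undertrained" model with many parameters but few tokens, or vice versa, can land on either side of the threshold without its capabilities changing proportionally.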

2.2. Organisations and institutions in the AI regulation

To monitor and enforce the provisions of the AI Regulation, and with regard to the rapid development of the technology and the unique societal needs around AI, the act establishes a new, multi-level organizational structure. This multi-level approach can be observed in two places. One could be considered traditional given the structure of the Union, as the regulation requires the designation or establishment of national authorities in addition to the EU bodies. The other is the establishment of a multi-level and multi-branched organizational system at the EU level.

The central organization of the AI Regulation at EU level is the AI Office. This body has been supporting the Commission since its establishment on 29 May 2024 (European Commission, 2024), and its main task is to ensure the consistent application of the AI Regulation across the Member States. The AI Office also directly enforces the rules on general-purpose AI

- 79/80 -

models. In cooperation with developers, the scientific community and all other stakeholders of the technology, the AI Office coordinates the development of state-of-the-art codes of practice; tests and evaluates general-purpose AI models; and requests information or applies sanctions where necessary. In addition, when the need arises for the AI Regulation to be updated, clarified or supplemented, the AI Office will carry out the preparatory work for the amendment of the legislation on behalf of the Commission. This is reflected in the structure of the Office, which consists of five units (Excellence in AI and Robotics, Regulation and Compliance, AI Safety, Innovation and Policy Coordination, and AI for Societal Good) as well as two Advisors for Academic and International Relations (European Commission, 2024).

In addition to the AI Office, the AI Regulation establishes three other bodies: the European Artificial Intelligence Board (AI Board), the Advisory Forum and the Scientific Panel of Independent Experts. The AI Board will have one representative from each Member State, with the European Data Protection Supervisor joining in an observer capacity. Its priority is to assist the Commission and the Member States, with a particular focus on ensuring the consistent application of the AI Regulation. The Board provides a coordination forum for market surveillance authorities and may issue recommendations and opinions as necessary. A key element of the AI Board's activities and decision-making is the exchange of information between Member States, the promotion of harmonization and the sharing of existing regulatory and technical expertise (AI Regulation, Articles 65-66). The Advisory Forum's main objective is to provide independent, expert-level technical advice to the AI Board and the Commission. The Forum's members come from market organizations, including start-ups, SMEs and large corporations, as well as from academia and civil society organizations. The Forum has to meet at least twice a year, develops opinions and recommendations, and may establish its own internal sub-groups to examine specific issues. The Scientific Panel of Independent Experts will provide the technical and scientific background for the other bodies: its members must have up-to-date scientific or technical expertise and must be independent of providers of AI models and systems (AI Regulation, Article 68(2)). Their main task is to evaluate general-purpose AI models and assess their systemic risks.

- 80/81 -

According to the Regulation, each Member State must designate at least one notifying authority and at least one market surveillance authority. These national authorities must act independently and impartially, ensuring the ongoing oversight of AI systems. Member States are required to notify the Commission of the names and responsibilities of their authorities and to make their electronic contact details publicly available. In addition, each Member State will have to designate a single point of contact to facilitate international and EU-level consultations.

The text, both in the preamble and in the specific provisions on national authorities (AI Regulation, recitals 153-154 and Article 70), treats the two options of establishing or designating these bodies on an equal footing. The AI Regulation does not prevent a single office from fulfilling both roles. Moreover, the fact that national authorities are subject to very extensive requirements on adequate technical, financial and human resources, must report regularly to the Commission, and must comply with high-quality cybersecurity requirements demonstrates the legislator's intention that the two roles be performed by the same organization, thus creating a "one-stop shop" in the Member State for AI providers.

2.3. Current Hungarian plans regarding the legislation of AI

Even though the AI Regulation is a regulation, it is built on the premise that Member States will supplement it, albeit to a limited extent, with further rules, mostly concerning the test environment and the designation of the authorities described above. The Hungarian Strategy, already discussed in connection with the definition of AI, is relevant to this issue as well. Presenting the Hungarian Strategy is all the more justified because it is currently being updated: its "sequel", the so-called "MIS 2.0", is in the process of adoption as the new (revised) Hungarian AI Strategy. However, the document is currently an internal discussion paper, which may still change for many reasons, so, with due professional caution, only its general directions and current state can serve as a basis for the analysis.

One of the reasons for the highly probable changes to the document is the AI Regulation itself. Its final, adopted and currently effective version was published in the Official Journal of the European Union on 12 July 2024, which coincided with the final phase of the revision of the Hungarian Strategy. In addition,

- 81/82 -

the EU legislator will also issue further documents in connection with the AI Regulation, mainly to clarify the application of the law, such as a list of use cases for high-risk and non-high-risk AI systems (Dr. Petrányi & Dr. Horváth, 2024) or the Code of Practice on general-purpose AI models. Beyond the act, legislation specifically addressing AI and its effects is also continuously being adopted, such as the new directive on liability for defective products. Thus it is difficult to bring MIS 2.0 into its final form not only because of the strong and continuous development of the technology, but also because of the rapidly changing legal and interpretative environment.

Part of this legal environment is the "implementation" of the AI Regulation in Hungary. Since it is a regulation, unlike a directive it would normally need no national legislation. Indeed, this is unnecessary for the vast majority of its provisions, but essential for some. In Hungary, this is prepared by Government Decision 1301/2024 (IX. 30.) on the measures necessary for the implementation of the Regulation of the European Parliament and of the Council on Artificial Intelligence. The decision establishes that Hungary will not only adopt the above-mentioned "one-stop-shop" solution preferred by the EU legislator, but that this office will also be responsible for the operation and supervision of the Member State test environment, i.e. the "sandbox" prescribed by the act. Furthermore, the decision requires the establishment of a completely new organization, the Hungarian Artificial Intelligence Council. This would consist of delegates from the state authorities most affected by the technology, such as the National Media and Infocommunications Authority or the Hungarian Competition Authority. Essentially, it can be considered the national counterpart of the AI Board established by the AI Regulation, as its task will be to issue guidelines and positions related to the implementation of the act.

The question arises as to why the Hungarian legislator singled out this one of the three EU bodies and created its counterpart at the national level, while not doing so for the other two. The answer is provided by the 2020 Hungarian Strategy and the ecosystem that has developed around it, which MIS 2.0 also strengthens. Namely, the functions that the AI Regulation assigns to the other two EU bodies, the Advisory Forum and the Scientific Panel of Independent Experts, already have an equivalent in Hungary in the form of the Artificial Intelligence Coalition. The structure of the Coalition makes it possible for its member organizations, from the smallest startups through the

- 82/83 -

largest Hungarian companies to the Hungarian subsidiaries of global leaders in the technology, as well as NGOs, universities and public administration bodies, to participate through their experts in the domestic discourse on the technical and legal issues and the social impacts of the technology, thereby representing both the experience of the given organization and expert opinion. The Coalition can thus perform the tasks of both bodies at the same time, whereas the AI Regulation creates two separate organizations for this purpose.

This is confirmed by MIS 2.0, which deals separately with the further strengthening of the Coalition. According to the current state of the document, the Coalition would, in addition to its previous roles, also be given a central methodological role, so that the knowledge collected on Hungarian AI developments is available in one place. Furthermore, interested enterprises would receive well-prepared professional (technological, business, legal etc.) assistance, and by continuing the regular surveys and professional events, in-depth knowledge would be available about current domestic conditions, which the organization could represent uniformly and efficiently towards the legislator and the law enforcer, i.e. the Hungarian AI Office. The Coalition would also play an important role in forming the MIS 2.0 action plan and in developing and operating its monitoring system. From the point of view of this analysis, the most important part of the document is clearly the section on the flexible regulatory environment.

The regulatory part currently comprises six measures, namely:

1. the implementation of the EU regulatory framework,

2. an innovation-friendly regulatory environment,

3. the creation of the Regulatory Test Environment,

4. cooperation with international standardization bodies,

5. the revision of the regulation of intellectual property rights,

6. and the development of ethical guidelines.

The first three of these clearly form a group, since the AI Regulation needs implementation only to a limited extent, which is effectively achieved through the creation of an innovation-friendly regulatory environment and a regulatory test environment. The justification given for the implementation is therefore interesting: according to it, implementation is necessary to ensure market competition and to prevent monopolies in AI. This is interesting because the AI

- 83/84 -

Regulation started as a product safety law, and although it contains features that go beyond the elements traditionally found in this type of legislation, such as the fundamental rights impact assessment or certain prohibited practices, it contains no competition law elements and has no intention of replacing competition law. EU competition law has already proven effective with regard to technology and digital markets (see case COMP/C-3/37.792 against Microsoft or cases AT.39740 and AT.40099 against Google), and where new law had to be created, those interventions were made not within the framework of the AI Regulation but in other legislation, such as the regulations on digital markets and digital services (better known as the DMA and the DSA). As a result, it can be strongly assumed that these three points will be changed in the future, presumably merged, so that the Member State's response, whether mandatory under the regulation or going beyond it, becomes uniform. Another point of interest is the elaboration of ethical guidelines. This was already included in the 2020 Strategy, based on the work of the ad hoc committee of the Council of Europe and the EU's ethical guidelines on AI (High-Level Independent Expert Group on Artificial Intelligence, 2019). Carrying this point forward almost one-to-one raises the question of whether such a document is still necessary, especially since the AI Regulation, itself based on the EU ethical guidelines (High-Level Independent Expert Group on Artificial Intelligence, 2019), was adopted and published between the two Hungarian documents.
On the one hand, such a code would complement the EU's confidence-building attitude, which has existed from the beginning and has been reinforced in several places; on the other hand, the commitments or even rules contained in it would mean extra obligations for domestic service providers and users on top of those already adopted in the AI Regulation, which in turn could have a so-called "chilling effect" on the competitiveness of these actors, restraining national innovation and the growth potential of the technology. It should be pointed out that the technology appears to be at the forefront of the Hungarian government's agenda, as a government commissioner was appointed specifically for AI by Government Decision 1028/2025 (II. 24.) on the appointment and duties of the government commissioner responsible for artificial intelligence.

- 84/85 -

Conclusions

The evolution of EU documents on AI, culminating in the AI Regulation, shows that the Union is committed to the idea of human-centered AI. This can be seen in several points, such as the banned practices, the fundamental rights impact assessment, or the obligation to maintain constant human oversight over the models, including a "kill switch" that gives the supervisor the option to immediately shut down a malfunctioning AI. The extensive institutional system, whether already existing or planned, serves the same goal. This, combined with the strength of the Single Market and the Brussels effect of EU regulation, shows a path for other regulators in the world.

But as with every rule, it does not matter whether a regulation is restrictive or liberal if it is not enforceable. For this reason, one has to consider not only the points of law but also the conditions outside it: in this case, the fact that the main regions for AI are unfortunately not the EU but the USA and China, both of which have chosen a different path for regulating the technology. Moreover, the new leadership in the US considers EU technology regulation, whether the sanctioning of platforms on consumer protection (Hajnal, 2023) or competition law grounds, or the human-centered regulation of AI, a threat to companies from that country, and has thus changed its view of Europe from partner to quasi-opponent, if not more.

The challenges posed by the changing world we live in, whether in technology, legislation and regulation, or elsewhere, require the EU to complement good regulation with an even more competitive, stronger economy, which unfortunately sometimes comes at a cost.

Manuscript closing date: August 11, 2025.

References

Badman, Annie - Kosinski, Matthew (2024) What is Big Data? IBM. [Online] Available at: https://www.ibm.com/think/topics/big-data [Accessed on: 25 02 2025].

Bertelsmann Stiftung (2023) China Social Credit System. [Online] Available at: https://www.bertelsmann-stiftung.de/fileadmin/files/aam/Asia-Book_A_03_China_Social_Credit_System.pdf [Accessed on: 25 02 2025].

- 85/86 -

Buchholz, Katharina (2023) Statista ChatGPT Users. [Online] Available at: https://www.statista.com/chart/29174/time-to-one-million-users/ [Accessed on: 25 2 2025].

Bundesregierung Deutschland (2018) Deutscher KI Strategie. [Online] Available at: https://www.ki-strategie-deutschland.de/?file=files/downloads/Nationale_KI-Strategie.pdf&cid=728 [Accessed on: 25 02 2025].

Chollet, Francois (2019) On the Measure of Intelligence. [Online] Available at: https://arxiv.org/abs/1911.01547 [Accessed on: 25 02 2025].

Dr. Petrányi, Dóra - Dr. Horváth, Katalin (2024) Practical questions regarding the AI Act. Budapest: Lecture.

Edwards, B. (2024) OpenAI announces o3 and o3-mini, its next simulated reasoning models. [Online] Available at: https://arstechnica.com/information-technology/2024/12/openai-announces-o3-and-o3-mini-its-next-simulated-reasoning-models/ [Accessed on: 25 2 2025].

EU Commission (2018) EU Commission website. [Online] Available at: https://www.intel.com/content/www/us/en/history/virtual-vault/articles/moores-law.html [Accessed on: 01 05 2025].

EU Commission (2020) White Paper on Artificial Intelligence: a European approach to excellence and trust. [Online] Available at: https://commission.europa.eu/document/download/d2ec4039-c5be-423a-81ef-b9e44e79825b_en?filename=commission-white-paper-artificial-intelligence-feb2020_en.pdf [Accessed on: 25 2 2025].

European Commission (2024) Commission establishes AI Office. [Online] Available at: https://ec.europa.eu/commission/presscorner/detail/en/ip_24_2982 [Accessed on: 7 1 2025].

European Commission (2024) European AI Office. [Online] Available at: https://digital-strategy.ec.europa.eu/en/policies/ai-office#ecl-inpage-the-structure-of-the-ai-office [Accessed on: 7 1 2025].

European Union (2024) Artificial Intelligence Act. Brussels: Official Journal of the EU.

- 86/87 -

Forbes (2022) Brutális büntetést kapott az egyik magyar bank, értelmi állapot alapján hívogatták az ügyfeleket. [Online] Available at: https://forbes.hu/penz/bank-mesterseges-intelligencia-buntetes/ [Accessed on: 24 02 2025].

Future of Life Institute (2024) EU AI Act Implementation Timeline. [Online] Available at: https://artificialintelligenceact.eu/implementation-timeline/ [Accessed on: 04 03 2025].

Hajnal, Zsolt (2023) The New Liability Forms of Online Platforms in the new European Digital Legal Framework from the Consumers' Perspective. Acta Sapientia, 2023/2.

High-Level Independent Expert Group on Artificial Intelligence (2019) AI Definition. EU Commission website. [Online] Available at: https://ec.europa.eu/newsroom/dae/document.cfm?doc_id=60651 [Accessed on: 25 02 2025].

High-Level Independent Expert Group on Artificial Intelligence (2019) Ethics Guidelines. EU Commission website. [Online] Available at: https://ec.europa.eu/newsroom/dae/document.cfm?doc_id=60419 [Accessed on: 01 05 2025].

Hungarian Government (2020) Hungarian AI Strategy 2020. [Online] Available at: https://mik.neum.hu/wp-content/uploads/2025/03/2020-hungarian-AI-strategy.pdf [Accessed on: 25 02 2025].

Intel (2005) Moore's law. [Online] Available at: https://www.intel.com/content/www/us/en/history/virtual-vault/articles/moores-law.html [Accessed on: 04 03 2025].

G. Karácsony, Gergely: Okoseszközök - Okos jog? A mesterséges intelligencia szabályozási kérdései. [Digital Edition.] Budapest, Akadémiai Kiadó - Ludovika Egyetemi Kiadó. Available at: https://doi.org/10.1556/9789634549529 [Accessed on: 23 12 2024].

Mezei, Kitti - Träger, Anikó (2025) Risks and Resilience in the European Union's Regulation of Online Platforms and Artificial Intelligence: Hungary in Digital Europe; In: Gárdos-Orosz, Fruzsina (edit): The Resilience of the Hungarian Legal System since 2010 A Failed Resilience? Cham, Switzerland, Springer International Publishing AG. Available at: https://doi.org/10.1007/978-3-031-70451-2 [Accessed on: 12 02 2025]

- 87/88 -

Nah, Fiona Fui-Hoon and others (2023) Generative AI and ChatGPT: Applications, challenges, and AI-human collaboration. Journal of Information Technology Case and Application Research, 25(3) 277-304. https://doi.org/10.1080/15228053.2023.2233814

Neuwirth, Rostam J. (2023) Prohibited artificial intelligence practices in the proposed EU artificial intelligence act (AIA). Computer Law & Security Review, vol. 48. https://doi.org/10.1016/j.clsr.2023.105798

Stevens, Ingrid (2024) Regulating AI: The Limits of FLOPs as a Metric. [Online] Available at: https://medium.com/@ingridwickstevens/regulating-ai-the-limits-of-flops-as-a-metric-41e3bl2d5d0c [Accessed on: 5 1 2025].

Tóth, András (2024) Az Európai Unió Mesterséges Intelligencia Törvényéről. Gazdaság és Jog, 5-6. 3-11. https://doi.org/10.55413/561.A2400102.EUO

Turner, Jacob (2019) Robot Rules Regulating Artificial Intelligence. Cham: Palgrave Macmillan. https://doi.org/10.1007/978-3-319-96235-1

Villani, Cédric (2018) French AI Strategy. [Online] Available at: https://www.jaist.ac.jp/~bao/AI/OtherAIstrategies/MissionVillani_Report_ENG-VF.pdf [Accessed on: 25 02 2025].

Walker II., Stephen M. (2024) FLOPS (Floating Point Operations Per Second). [Online] Available at: https://klu.ai/glossary/flops [Accessed on: 5 1 2025].

Yang, Zeyi (2022) China just announced a new social credit law. Here's what it means. [Online] Available at: https://www.technologyreview.com/2022/11/22/1063605/china-announced-a-new-social-credit-law-what-does-it-mean/ [Accessed on: 24 02 2025]. ■

Footnotes:

[1] The author is a PhD student at the University of Debrecen, Géza Marton Doctoral School of Legal Studies.
