https://doi.org/10.54148/ELTELJ.2025.1.111
Our century has witnessed the emergence and unfolding of disruptive technologies. Regulating these fields poses substantial challenges for law-making entities. The rapid pace of technological evolution does not allow for the application of classical regulatory methods since, by the time a norm is established, the technology has often surpassed its provisions. Recently, the so-called dynamic regulation methodology has gained wider acceptance. This approach places the legislator in a more cooperative role: instead of acting as a rigid 'ruler', it works alongside actors from the innovation ecosystem and tailors the legal regime together with them. Regulatory test environments, also known as sandboxes, are key tools in this modern approach. They have already proven their usefulness in promoting innovation and in regulatory procedures in the fintech sector. I believe that sandboxes are excellent laboratories for forming the legal regime for artificial intelligence, which may be the most disruptive technology of our time. In the present contribution, I examine why it is hard to regulate disruptive technologies and describe the tensions in the innovation world that underscore the necessity of using sandboxes. I also outline their operating principles to show readers their vital role in the age of AI.
Keywords: artificial intelligence, regulation, regulatory sandboxes, disruptive technologies
'Nothing is permanent, only change itself.' Most people attribute these words, which are still relevant today, to Heraclitus.[1] Although the Greek philosopher did not address it, the speed of change also matters: the longer a process takes, the more humanity can prepare for it. However, the last few centuries have seen several inventions and developments that have changed the way we think about the world and the way we do things at an alarming rate. Such groundbreaking solutions are best described as disruptive technologies.[2] These are innovations that generate profound, sweeping changes in many layers and subsystems of society in a relatively short period of time. They share the common characteristic of completely transforming existing market structures and dominant actors by being 'cheaper, simpler and more convenient than the dominant technology.'[3] Let's look at some examples to better define disruptive technologies, understand their characteristics, and establish the starting point from which to appreciate the importance of sandboxes.
I find it unlikely that when British inventors created the first steam engines at the dawn of the 18th century - then used to drain mines - they thought about the long-term impact of their work. Regardless, the device perfected by James Watt in 1765[4] was the technological starting point for the epochal change we now call the First Industrial Revolution.[5] Agricultural production was simplified, and the masses moving to the cities were absorbed by mass production in factories that offered them a multitude of new jobs. Social classes were reorganised, and social structures were upended. The change generated a great deal of tension, with the emergence of the machine-wrecking movements as early as the 1810s,[6] and by the middle of the century, new ideologies had emerged to tackle the related social dilemmas. The discovery and spread of electricity further increased the speed of change, as the technologies based on it made people's lives orders of magnitude easier and enabled the development of many new production mechanisms.
Electricity and the Second World War were the catalysts in the 20th century that changed humanity forever. The race for the atomic bomb[7] boosted the process that eventually led to the development of the computer as we know it today.[8] The revolutionary significance of the device could hardly be better illustrated than by the fact that one of the longest-running patent lawsuits in the US courts was launched in relation to this innovation. It was finally ruled[9] that the automatic electric digital computer is the most valuable discovery of the 20th century and, therefore, cannot be patented.[10] Another undoubtedly disruptive technology from the last century was the Internet. The spread of the web has opened new horizons in the flow of information around the world and transformed communication structures in a way nothing ever had before. I believe it is pointless for the modern thinker to dwell on the implications of the arrival of the computer and the Internet any further than this, as it would be hard to imagine our everyday lives without our PCs, social media platforms or online shopping.
The century we live in is full of innovations that could be classified as disruptive. The proliferation of fintech solutions has transformed the financial world. The name PayPal is now synonymous with sending money: it has over 400 million active users worldwide and processed more than 25 billion transactions in 2023.[11] Alongside banking services, Revolut now offers investment and currency exchange services in more than 160 countries[12] and has gained over 40 million users since its launch in 2015. With the rise of blockchain technology, cryptocurrencies have burst onto the scene, opening up a world of independent payment instruments outside of central banking systems. The most successful of these, Bitcoin, is estimated to have a market capitalisation of $1,300 billion today,[13] a figure that is likely to rise further in the near future. So-called NFTs,[14] also based on blockchain technology, represent the biggest investment hype of the last few years; it is hard to comprehend that some have been bought for a total of $91.8 million.[15] However, distributed ledger-based solutions have not only enabled hype-driven investments but have also contributed in many areas to a safer life in a digitalising world.[16]
One may notice that none of these technologies took centuries to transform the world. Indeed, it is worth reflecting on the ever shorter time it has taken for such solutions to become widespread. Legal professionals must pay attention to a worrying phenomenon - namely, the challenges to regulation posed by the speed of development of disruptive technologies. The classical legislative process is simply not able to keep pace with today's digital revolution and the evolution of technology. Legal certainty[17] and giving legal entities sufficient time to prepare for the application of rules[18] are pillars of the rule of law, but upholding them is simply not feasible in a world where new technology becomes obsolete within a year. By the time preparatory work starts on the legal framework applicable to a particular area, reality has long since surpassed the previous paradigm, so regulators are immediately at a disadvantage. If regulation is nevertheless created in such a situation - one that cannot be handled by the usual cumbersome legislative methods - its content must be updated almost immediately, as the market and reality have presumably already outgrown the codification.
To illustrate this, let us glance at recent events in EU technology regulation. The co-decision procedure is not the quickest method; for example, the influential GDPR[19] took four years to create, with an additional two years of preparation before it became applicable.[20] So, in 'just' six years, our personal data was made relatively safe. In contrast, the Digital Markets Act,[21] which regulates the biggest platforms that shape our lives, took just two years to draw up[22] and one more to gradually come into effect.[23] The same tedious legislative process marked the Artificial Intelligence Act[24] (AI Act). Since April 2021, the initial draft has undergone more than ten discussions[25] by the relevant bodies, and the final version was only adopted in mid-May 2024.[26] Besides numerous updates and refinements to the text to reflect technological advancements, a two-year period was established before its full implementation,[27] alongside its gradual entry into force, which raises a theoretical question: how much will the world have changed by then?
To sum up, the speed of development of disruptive technologies exceeds the capacity and methodology of the current legislative process, which has often remained untouched for centuries; hence, it is unable to keep pace with change in the world in a fast and flexible way. However, society needs the security that only the legal system can provide, a need that is growing amid the storm of technological developments. There is no doubt that there is a greater need than ever to utilise modern legislative methodologies that can keep up with rapid change and balance the interests of society, the state and the market while safeguarding the legitimate interests of parties and the rule of law. Disruptive technology requires disruptive legislation.
The impact of artificial intelligence (AI) will undoubtedly surpass that of all other inventions of the 21st century, and we cannot even begin to estimate its world-changing shockwaves today. Futurologists, gurus and scientists from all fields are trying to understand, analyse and assess the potential benefits of AI. Countless articles, studies and books have been written on its nature, its potential and, above all, how its spread will completely transform our world. With AI, we are trying to reproduce human intelligence - at least parts of it, for the time being.[28] As AI develops, this means that we humans will encounter it in more and more areas that are currently based on human thinking. Whether we are talking about telephone customer service[29] or complex healthcare applications,[30] sooner or later, most sectors will have an artificial alternative to the usual human workforce.
Hungary is not the largest or wealthiest country, but according to its AI strategy, the take-up of the new technology will affect around 900,000 jobs in some way by 2030,[31] which is approximately 20% of the current total.[32] Lawyers, like other professionals, sometimes tend to think that their work is so attached to human endeavour that it cannot be replaced, but this is far from the case. Research by Goldman Sachs estimates that 44% of tasks in US legal work could be automated,[33] which, given that GPT-4 scores in the top ten per cent on the bar exam,[34] does not seem so incredible. In addition, many other studies[35] confirm that AI and innovations based on it will radically change society, if they have not already. Unfortunately, many people are still waiting for the AI revolution, even though today we may already be living at the end of its beginning.
The rapid development of disruptive technologies often outpaces legislation, as one can observe in the case of the two tech superpowers. Regarding comprehensive AI regulation, the EU, with its upcoming AI Act, is significantly more advanced than the US. Although, during the past four years, several documents[36] urging the use of AI in governmental and administrative applications have left the Oval Office, we had to wait until 2022 for the first broad AI policy.[37] As mentioned, despite being home to leading global innovators in the field such as Google, Meta and Microsoft, the US has long remained in a rudimentary regulatory state, seeming to favour a 'Wild West'[38] approach: allowing tech giants to create a quasi-self-governing legal environment and permitting state-level regulation[39] instead of federal rules, thereby granting the EU the liberty to set the global industry standard.[40] However, the recent AI explosion, and particularly the proliferation of generative AI technologies such as ChatGPT, seems to have shaken even these companies' belief in the wisdom of laissez-faire innovation. An open letter,[41] made public in March 2023, is clear evidence of this. The declaration, signed by thousands of scientists and tech gurus - including luminaries such as Elon Musk, Steve Wozniak and Yuval Noah Harari - called on the world's AI labs to stop developing algorithms more capable than GPT-4 for at least six months. The rationale stated that a pause is needed to allow developers to 'jointly develop and implement a set of shared safety protocols for advanced AI design and development that are rigorously audited and overseen by independent outside experts. [...] In parallel, AI developers must work with policymakers to dramatically accelerate [the] development of robust AI governance systems'.[42] The letter was followed up a month later by a detailed study[43] containing the policy recommendations needed to consolidate the situation. To be honest, I find such letters mildly worrying. If even those actors who have hitherto controlled and managed AI developments to a much greater extent than governments are 'calling time', how hopeless is the situation? It is perhaps inputs like this letter that paved the way for the Executive Order[44] issued by President Biden with the goal of 'governing the development and use of AI safely and responsibly, and advancing a coordinated, Federal Government-wide approach.'[45] One thing is for sure: if even the market players are requesting a legal framework, it is high time to develop methods of managing AI regulation in a truly effective way. One such method is the creation of sandboxes, and I will try to explain why.
The emergence of regulatory sandboxes[46] has been triggered by the disruptive technologies of the 21st century. There is a need for a legislative method that allows the regulator to dynamically follow the evolution of technology and create an innovation-friendly regime that reflects its demands. Simultaneously, it should ensure a supportive environment for developers, protect stakeholders - including consumers, users and others impacted by the technology - and provide legal certainty without sacrificing the ability to respond swiftly.[47] This flexibility or adaptability is very much needed in our world today, where exponential technological progress[48] is creating paradigm shifts every 1-1.5 years and constantly forcing us to redefine what we know about things. At this pace of change, there is no way we can apply the guarantee-ridden but often excessively bureaucratic and, thus, protracted legislative processes that have crystallised in modern societies over the last few centuries. We need modern legal approaches to deal with this situation effectively. The sandbox is one such construction, combining flexibility, legal certainty and innovation. To grasp the legitimacy of sandboxes and to understand their practicality, we can briefly imagine ourselves in the position of the actors in the innovation process. Through the following strictly fictitious example, I would like to make tangible the tension that underlies the need for such test environments.
Imagine the situation of an innovative company, a developer of a piece of AI-based medical diagnostic software. As a start-up, they have only a few employees and a ready-to-market product that is about to be launched. The potential of the company is underlined by the fact that they have received significant investment[49] for development and market entry. Let's look at the challenges they face:
As the founders are responsible individuals, they want to be sure their product fully complies with the law and that they will not encounter any obstacles in obtaining the necessary licences and certificates. They would also like to rule out the possibility of getting involved in costly legal proceedings or litigation over any shortcoming that has been overlooked. Since there is no lawyer on their team,[50] they turn to a law firm for advice, where they encounter an astronomical price[51] for maintaining legal compliance, due diligence and counselling, which the young entrepreneurs cannot afford. Without a solid legal background and validation, they are reluctant to enter the market, so their idea sits in a drawer for an extended period.[52]
In an alternative version of the story, the founders, not caring much about legal compliance, enter the market with an MVP[53] but soon find themselves in the crossfire of regulatory proceedings and lawsuits, as some aspects of their product's operation raise unresolved legal issues. To make things worse, their early customers - mainly respected doctors and healthcare institutions - are not satisfied with the product because it undermines their patients' confidence by producing misdiagnoses, as there is still a lot of room for improvement. This is all because the start-up has not been able to test the software in real-world conditions with real users and, hence, has insufficient data and experience to perfect it. As a result of a major fine related to consumer-protection issues, several lost lawsuits and the damages to be paid, there is, unfortunately, no alternative but to declare bankruptcy and put an end to an otherwise promising start-up. Not to mention that the company's situation was worsened by the constant pressure from its investors, who wanted to realise earnings as quickly as possible and did not care about the obstacles the founders had to overcome. As a result of entering the market unprepared and selling an imperfect product, the innovators were eventually pushed into a situation that led to the end of the business.
Unfortunately, similar stories are not rare in the start-up world: in most cases, innovative businesses do not have the stable legal and financial background to make their products a success, so they fail even before entering the market. The legal environment they face - especially the strict liability regime,[54] consumer protection standards and strict sector-specific[55] requirements - is often an insurmountable barrier[56] that is complicated and costly to comply with. This prevents them from entering the market and testing their innovative ideas in real conditions with real users and, as a result, perfecting them. Nor should we forget that businesses must often comply with legislation that is either completely unintelligible or outdated and obsolete in relation to the state-of-the-art solutions they create. Of course, not only SMEs but also larger companies are exposed to the risks of an uncertain legal environment surrounding disruptive technologies.[57] Though bigger actors already have the solid financial background, legal support, IT infrastructure and data assets needed for successful product development, public authorities must still validate their innovative products and services before they enter the market. In summary, innovative companies of any size that cultivate disruptive technologies are under pressure. The challenges firms face in product development, and particularly the risks arising from an uncertain legal environment, impose significant burdens on them. These burdens make it more difficult for enterprises to operate, to varying degrees, and can ultimately stifle innovation.
In the meantime, we cannot ignore the difficult situation that public actors, especially legislators and law enforcement authorities, find themselves in as a result of the emergence of disruptive technologies. The incredible speed of innovation today is simply not matched by any legislator - whether national or supranational - conditioned to relatively slow bureaucratic processes. By the time the legislative bodies responsible for technology regulation have detected and understood the situation, assessed its significance, and convened a group of experts to discuss the issues and risks, reality has moved on, leaving them standing. I believe that it would not be fair to place the responsibility solely on public authorities, as there is a substantial information asymmetry between innovative firms and public entities. Thus, the greatest difficulty often lies in the lack of understanding and the inability to form a correct assessment of the situation. The lawmaker typically has limited knowledge of the nature of the emerging technology and, as a result, has no clear vision of how it should be addressed within a legal framework. Having no insight into the development processes and the peculiarities of the operation of innovative companies may lead to inadequately designed regulation.
Another handicap is the professional or technological incompetence of the legislator, at both individual and organisational levels. A workforce specialised in the management of usually slow-moving administrative affairs is, through no fault of its own, unable to match the speed of rapid development, to assess its impact and consequences, or to muster the technological skills and holistic vision that are required. In addition, the public actor often does not receive any input on what rules to create or how to adapt the current regime to follow technological advancements. The lack of cooperative, open communication based on mutual feedback is another major difficulty for the rule-maker, making it hard to identify, understand[58] and effectively regulate disruptive technologies. It is no wonder that state actors in the 21st century are often in the dark, which in turn creates the risk that their regulatory decisions are not carefully prepared but taken hastily when they are forced to act on a case. However, such 'firefighting' measures - opaque legislation and soft-law instruments such as guidelines or recommendations - are rarely appropriate,[59] and in practice they often require subsequent amendment. We must remember, however, that in the age of the fourth industrial revolution, change is constant, so it would be hypocritical to blame legislators for mishandling problems that are often not fully understood even by their developers. Market players and society must, therefore, be much more patient with public actors in regulating these phenomena.
It is clear that the situation thus portrayed is burdensome, but what may be done to release the tension? The most promising answer is provided by a new legislative methodology referred to as dynamic regulation.[60] Its starting point is that the legal regime applicable to any area is not unilaterally developed by the regulatory entity but defined in cooperation with stakeholders in a continuous consultation process that results in a win-win situation. The emphasis is on cooperation: instead of facing rigid imperatives, stakeholders can articulate their needs during the legislative process and thus actively shape the content of the rules. In dynamic regulation, legislation is an open-ended process[61] whereby, after careful and extensive consultation, the deal agreed with industry is codified, helping to ensure that the framework is effective and useful to the actors rather than an unnecessary bureaucratic burden. The process is characterised by mutual learning, whereby the proactive legislator seeks to get to know first-hand the operation of the actors to be regulated, their needs and the specificities that can only be authentically understood through their interpretation.
The regulatory sandbox is a multipurpose gadget in the toolbox of dynamic regulation. It can best be described as a legal laboratory or a safe harbour. Strict legislation usually has a chilling effect on innovation, as businesses that fear being penalised for inappropriate market behaviour often abstain from development and testing activities. The sandbox is the place where innovators can experiment with their developments and products in a real-world environment without fear of sanctions from authorities or posing significant risks to consumers, hence gaining a temporary competitive advantage.[62] Meanwhile, the establishing authority can keep a close eye on the companies, analysing and interpreting their operation, assessing the specificities of the applied disruptive technology and advising the enterprises on how to comply with the relevant norms. If a sector lacks an elaborated legal framework, as is the case with AI, or if there is room for improvement, the real perk of regulatory testbeds emerges: legal experimentation. With the input gathered in the sandbox, the regulator decides what approach should be taken to create a safe and supportive legal environment. This is done together with the stakeholders, resulting in a cooperative law-making experiment and, at the end of the day, a viable legal environment. A well-constructed sandbox creates a win-win situation whereby, in addition to the participants, society also benefits enormously, as it gains access to a number of products and services that would otherwise never have been developed.
Today, there are many good practices and positive experiences that help justify the use of sandboxes. In 2015, the Financial Conduct Authority (FCA)[63] created the first fintech regulatory testbed,[64] which has accepted over 180 companies from 630 applicants[65] over the course of its operation. The pioneer has since been followed by many others in the financial market[66] sector in more than 50 countries,[67] with more being set up to address other fields of innovation.[68] In other sectors heavily penetrated by disruptive technologies, such as energy,[69] autonomous vehicles[70] and blockchain technology,[71] testbeds have also been set up to promote development and viable regulation. Some authors argue outright that all technology development should be channelled into so-called general-purpose or universal sandboxes.[72] In the rest of this paper, I will scrutinise how a sandbox works and explain why it is so effective when it comes to AI regulation.
Let's start by laying the foundations and defining what a test environment looks like. First things first: when we talk about sandboxes, we should not think of an office building or a laboratory isolated from the world. Test environments in general - whatever disruptive technology they focus on - are grounded on three pillars: the regulatory background, the governing authority and the testing infrastructure. The legal pillar is the set of norms establishing the sandbox and framing the activity that takes place in it, most importantly determining the legal facilitations[73] that participants enjoy during their testing period. These rules are usually laid down at different levels of national legislation,[74] defining the conditions for the sandbox's creation and operation, sometimes including entry and exit criteria and technical details as well. As the second pillar, regulatory sandboxes are typically set up and operated by a public body, such as a governmental or independent public regulator, an agency, or a central bank.[75] Given that disruptive technologies usually operate globally and know no national borders, it is timely to consider establishing international sandboxes, as exemplified by the AI Act, which allows the creation of European-level AI testbeds.[76]
As the third pillar, test environments, depending on the technology developed in them, may be complemented by different testing facilities or infrastructures where essential product development, testing and validation take place. These infrastructures are vital, as they emulate real-world circumstances without posing significant risks to consumers, such as accidents, data leaks or cyber-attacks; they do this by simulating the environment in which the future product or software will function.[77] In many sectors, such as self-driving vehicles and drones, testbeds will include isolated test tracks[78] where companies can perfect their developments in safe conditions, or server farms where data can be securely stored.
Of course, the benefits are not for everyone. Only those companies that meet the often very strict selection and eligibility criteria will be admitted to the sandboxes. For example, the FCA set out the following entry conditions for one of the first fintech sandboxes in the world: the proposal must be intended for the UK financial services market, either involving a regulated activity or supporting firms doing regulated activities. It must involve genuine innovation - something genuinely novel - and must benefit consumers without exposing them to undue risk. The applying firms must demonstrate a validated need for a sandbox, be ready for testing, and have a built version of the proposed idea and a clear objective for the experimental procedure.[79] The terms and conditions of participation in the sandbox are typically laid down in a schedule,[80] implementation plan[81] or legally binding agreement[82] between the parties, thus ensuring that the test environment is not just a fancy way to waste time. It is important to underline that the benefits of the sandbox could lead to market and competitive distortions in the long run, as participating companies are in a much more favourable position than their counterparts outside the test environment. To ensure that the creation of a sandbox does not lead to or prolong unfair situations, participation time is limited.[83] The typical period of between six months and two years[84] is sufficient for any company to test and develop its product in detail and gives the public actor the opportunity to observe and learn. It is also possible to transform an innovative idea from the ground up within this timeframe and comply with the applicable legislation. The number of experimenting companies is usually restricted to ensure that every participant gets enough attention, so the entities enter regulatory sandboxes in smaller, manageable groups or so-called cohorts.[85]
Now that we have a concept of what test environments look like from the outside, let's delve into their anatomy. We must not forget that each state is free to design sandboxes under its jurisdiction with different frameworks and content, so depending on the legal culture, the specificities of the areas to be regulated and the needs of the stakeholders, we may encounter different constructions. Therefore, I have tried to gather and generalise the most typical characteristics. In most cases, the regulatory sandbox is described as a safe space or harbour,[86] which is a meaningful summary of its essential nature. The sandbox gives the legislator the opportunity to learn and analyse the situation while providing guarantees to innovative market players to develop their solutions in peace without fearing the legal and economic consequences. The main parties are innovative businesses and regulators, but they are often joined by independent experts, consultancies[87] and NGOs, especially in the legislative process.
The backbone of a regulatory sandbox is the set of measures that the founder provides to the participants. These can include legal facilitations or discounts,[88] as well as consultation and customised advisory processes[89] to help businesses develop their products and ensure legal compliance. The name 'test environment' also implies the key function of providing the opportunity for testing in real-life conditions. The aim is to enable innovative businesses to improve their products and services by involving not just fictitious but real users and consumers,[90] while taking only moderate risks compared to the real market situation. A further signature feature of the sandbox is the cooperative process, which relies on institutionalised communication[91] to ensure a constant exchange of information between participants and the public sector, helping the latter to genuinely understand[92] the specificities of the technology and to shape its legislative activity and the content of its standards accordingly. Below, we investigate these features and measures in a little more detail.
As mentioned, a specific and fundamental characteristic of test environments is the elimination of the unequal relationship between the regulator and the regulated entity, some aspects of which are worth mentioning. In sandboxes, there is continuous communication between the founder - typically an authority or a government agency[93] - and the market players, with the aim of shaping the legal framework of the field together. This direct connection to the legislature allows the feedback from market actors and other stakeholders to be channelled without distortion and helps ensure that the final norms will clearly reflect their demands.[94] Also, participating companies undertake to explain in detail their operations, internal decision-making and product development processes, as well as the innovative product or service they have created. This helps the legislator to get a grip on their nature and impact[95] and 'contribute[s] to a better understanding of the (in-)adequacy of existing and planned laws in their social context.'[96] Hence, lawmakers can closely see why and how things are happening and learn about the technology and its specificities from a perspective[97] that would not be possible under normal market conditions and commercial confidentiality. Gaining first-hand experience with the nature of disruptive technology and the stakeholders, and identifying their needs, will shape a regulatory ecosystem that benefits everyone.
Open, honest communication, backed up by legal guarantees, transforms unilateral, rigid legislation into a cooperative journey, ideally leading to a robust and widely approved legal environment. Disruptive regulation that takes place in a sandbox is never complete and never final,[98] but always leaves room for adjustments, such as ex-post correction, in case the public actor perceives that trends are changing. In such a case, a rapid response is ensured, since the legislator is directly informed about the current challenges of the innovative companies, eliminating the need for time-consuming situation assessment studies and other decision-preparation processes. This allows the law to keep up with real-life developments almost without delay, which is much needed in our world.
'The most exciting benefit of regulatory sandboxes is that they may kick-start innovative businesses that might otherwise be stymied by regulatory costs.'[99] The regulation of disruptive technologies usually seems like an opaque maze. Inventing, developing and bringing an innovative business idea to market requires a lot of legal groundwork. Complying with legal requirements along the way is burdensome or even impossible, but after all, this is what sandboxes are really designed for.[100] For example, imagine a fintech company offering a revolutionary banking service. Before entering the market, it has to comply with a number of banking, financial and consumer protection laws even to test the first working version of its product, which it may not be able to do in the initial phase because sufficient organisational or technological measures are yet to be arranged, and in-house legal proficiency is not available either. The worst case is when the business cannot take the product from blueprint to reality at all, because certain elements must, from the outset, be developed under the strict conditions imposed by law. Take the case of parcel delivery drones,[101] which cannot be perfected without using airspace and are subject to strict aviation rules and licensing. If the legislator does not provide some kind of loophole to allow experimental solutions to be tested in this airspace, it could significantly discourage the development of this sector and stifle innovation.
In order to avoid this situation, founders provide legal facilitations, so-called experimentation clauses,[102] to companies in sandboxes, strictly for the duration of the participation period. This simplifies the legal compliance required for market entry and allows real-world testing. The benefits are mostly embodied in deregulation, whereby the legislator temporarily exempts the undertakings concerned from compliance with certain legislation or specific provisions thereof. Deregulation can take different forms, such as exceptions to prohibitions or the granting of exemptions from authorisation,[103] licensing,[104] waivers,[105] or the issuance of no-enforcement letters.[106] Participants may not be completely exempted from the rules but may be subject to simpler, more feasible conditions and lower legal thresholds than the stricter rules that apply to everyone else outside the test environment. In the sandbox operated by the National Bank of Hungary, for example, fintech companies that meet the entry criteria are exempted from certain rules on remote customer identification, payments and the handling of customer complaints.[107] By taking advantage of such legal relief, businesses that would otherwise not have been able to overcome the related challenges under normal regulatory circumstances gain a path to market access. The partial alleviation of the legal burden and the lower regulatory entry barrier[108] thus provided by test environments have a very strong innovation-stimulating effect on the market for disruptive technologies.
It is important to note here that although deregulation takes many forms and varies in extent, there is one issue on which the founders do not compromise: participation never exempts a company from liability for damages caused to anyone during the test phase.[109] If a market player causes harm during testing or development - even if this is the result of a bug that was intended to be solved in the sandbox - it must assume the liability provided for in the relevant legal system. Although there are opportunities to cover this risk with liability insurance, 'it can be difficult to find an insurance company which is willing to take on a risk which is hard to calculate in view of the novel nature of the innovation.'[110] Without going into the implications of this circumstance, we should all hope that soon there will be a satisfactory solution to this problem. Otherwise, it may be the main factor that moderates the interest of companies in sandboxes and stifles disruptive innovation.[111]
Inside the sandboxes, the public entity is not only an observer but also an advisor, a quasi-mentor that helps to navigate the legal maze.[112] No matter how beneficial the relaxed legal conditions may be for developers, if they get lost in the labyrinth, their participation in the test environment can easily become counterproductive. To prevent this, sandboxes involve carefully designed advisory processes to help participants comply with the legislation that applies to them and to get both their product and internal organisation ready for successful take-off. 'Through powers to provide legal guidance, the competent supervisory authority and the innovator can determine whether the product or service in question complies with current legal requirements and, if it does not, what a legally compliant product design could look like.'[113] General or bespoke guidance[114] for businesses may include not only legal compliance advice or guidance for technology development but also market-entry mentoring by the founding authority. The primary goal of the business while participating in a sandbox is to ensure that the product or service is working, viable on the market and clearly compliant with the relevant legislation, which is greatly facilitated by focused attention and tailored advice.
Given that companies in the sandbox are in constant consultation with the public authority and undergo a series of tests and evaluations,[115] their internal operations, processes and products will indeed meet legal requirements by the time they exit. To crown the process, the sandbox-operating authority may issue an exit certificate to prove that the company complies with the rules that apply to it and to the disruptive technology it has developed. Such certificates provide an official guarantee to the market actor that it will not be penalised for illegal behaviour or activities. In addition, they give the holder a significant competitive advantage by conveying reliability, credibility and quality to consumers. The AI Act follows a similar approach by introducing the exit report system,[116] which will hopefully serve as a model for other sandboxes.
As mentioned, one of the perks of sandboxes is that they allow participants to test the performance and effectiveness of their products in real market conditions, with real customers and real user behaviour,[117] but without any serious threat or danger to them.[118] Companies can analyse the feedback and lessons learned during experiments and can still adapt or fine-tune innovative products accordingly. Depending on the nature of disruptive technology, different kinds of testing environments are set up in most sandboxes where practical product development is facilitated. For example, a cybersecurity sandbox usually has isolated servers and 'its own network and typically doesn't have a physical connection to production resources. The purpose of the sandbox is to execute malicious code and analyse it. [...] Because of this, the sandbox must not have access to critical infrastructure.'[119] Meanwhile, transportation test environments may need a whole city to function properly, like the one in Hamburg, Germany.[120] The four-year-long HEAT[121] regulatory sandbox project investigated 'how fully automated or self-driving electric minibuses can be safely deployed to transport passengers on urban roads.'[122]
If sufficient transparency is provided, testing in a sandbox not only hugely benefits public authorities and innovative market actors but also contributes to raising public awareness and understanding of disruptive technologies at a wider societal level.[123] The tests carried out may be logged in detail, and the results made public (of course, protecting trade secrets and without prejudicing the interests of the parties involved). Such an approach is foreseen in the AI Act, as it opens up the possibility of making exit reports public[124] on a single information platform.[125] These reports will, of course, inform civil stakeholders as well as innovative companies and developers outside the sandbox who are active in a disruptive field. The lessons learned in the test environment can be used to develop use cases and good practices that will make life easier for other market players, as they will help to avoid dead ends and prevent unnecessary developments. The burden of this publicity is typically borne and accepted by the participating market players, as the benefits of engaging in a test environment still far outweigh the required sacrifices and transparency.
Considering AI a disruptive technology that needs to operate within legal boundaries, having well-designed, efficiently functioning AI regulatory sandboxes is foundational. The European Union perceived this correctly on its journey towards the age of AI. The EU recognised early on that supporting AI innovation with legal measures at Community level would bolster the development and widespread deployment of the technology. The Commission declared its intention in a Communication in April 2018[126] to establish and promote testbeds in line with the European approach to AI. This objective now seems to be materialising, as the AI Act has laid down a detailed sandbox design and ecosystem,[127] making sandboxes mandatory at the national level and allowing the creation of AI sandboxes at the Community level. The AI Act leaves the detailed rules of test environments, such as eligibility criteria, internal procedures, and terms and conditions,[128] to so-called implementing acts.[129] This guarantees flexibility and the ability to closely follow technological development, hence making regulatory sandboxes in the EU future-proof. This approach opens the way for the Commission to react rapidly and refine the sandbox norms without initiating bureaucratic and slow ordinary legislative procedures. Such regulatory logic indeed makes the AI Act an innovation-friendly piece of law.
The Presidents of the EU Council and the Parliament signed the AI Act in mid-June,[130] and the comprehensive legislation entered into force in August. However, it will take another two years to become fully applicable.[131] Spain has long taken action to support AI innovation. In mid-2022, the Spanish Ministry of Economic Affairs and Digital Transformation, together with the European Commission, set up a pilot for the European AI sandbox.[132] The main goal was to 'connect competent authorities with companies developing AI in order to define together best practices to implement the future European Commission's AI regulation.'[133] The general objectives were to transfer compliance know-how to companies, enable the development of innovative, trustworthy AI systems and share best practices across the EU. The pilot was open to all EU Member States so that countries could join the controlled environment and obtain practical learning experience to support the development of standards, guidance and tools at national and European levels.[134] During the Spanish EU presidency in the second half of 2023, the pilot project came to fruition when Spain officially established the first EU AI regulatory sandbox with a dedicated law.[135] Although two test environments had already been set up in the country, one focusing on the financial system and another on electricity,[136] establishing a dedicated AI sandbox put Spain among the leaders of global AI regulatory innovation. Since it is the first of its kind,[137] we can only hope that this pioneering initiative will set the standard for the AI sandboxes of the future, both in Europe and globally, or at least act as a point of reference for them.
When the wheel, one of the most ancient disruptive technologies, was invented, it was probably not considered that it needed to be tested, proven and perfected in a 'contemporary sandbox' before it was brought to market. Of course, this was not only because there was no such thing as a sandbox but also because there were few technologies as simple as the wheel. Its usefulness was obvious to everyone, and its clear mode of operation made it unnecessary to scrutinise and understand its risks before application. The disruptive technologies of our time, however, are no longer associated with such an easy path.
As I have stressed, artificial intelligence - which may be the most disruptive technology of the present times - is not nearly so simple and straightforward, especially when it comes to regulation. While it is fundamentally reshaping our world and permeating more and more spheres of our lives, the operation and nature of AI often remain obscure, even to experts and developers. The possible directions of its progression are unpredictable, and the speed of its growth is exponential, so its irresponsible use and development may be particularly challenging and dangerous for mankind. What is even more concerning is that even the law cannot compete with the soaring speed of its development, and AI regularly operates in an environment without a robust legal framework because of the lack of regulatory understanding and delayed action. Nevertheless, we have a common interest and need to exploit the wide potential of AI and create legal frameworks that protect the interests of all affected parties, such as users, developers and sovereign public entities. Sandboxes are the perfect places for this work.
This contribution has aimed to present how all the players in a disruptive innovation ecosystem - such as the AI universe - benefit from the sandbox approach. Participating in a testbed ensures secure and human-centric development and thorough testing, all within an environment in which developers do not have to fear innovation-stifling legal risks. AI developers working in the sandbox take advantage of being able to test and improve their AI-based solutions in a real-world environment, with real feedback, and without facing any major legal risks, while being supported by tailored advice and legal exemptions. In the meantime, lawmakers can learn first-hand how this often mysterious technology works and cooperate with developers to create a desirable legal environment that supports innovation while ensuring that the risks associated with AI are managed and mitigated.
Although the global roll-out of dedicated AI sandboxes is still in its early stages, in light of the forward-looking Spanish initiative, we can be optimistic about their future uptake and success in the long run. This aspiration is strengthened by the AI Act, which makes regulatory sandboxes mandatory across the 27 EU Member States, and we may only hope that it will create an example worth following and trusting, and that the so-called 'Brussels effect'[138] will do its job and make the European approach a global standard. ■
NOTES
[1] Luke Dunne, '4 Important Facts about Heraclitus, the Ancient Greek Philosopher' (11 December 2022) The Collector <https://www.thecollector.com/heraclitus-philosopher-facts-you-should-know/> accessed 1 April 2025.
[2] See Joseph L. Bower, Clayton M. Christensen, 'Disruptive Technologies: Catching the Wave' (1995) 73 (1) Harvard Business Review 43-53, 43.
[3] See Clayton M. Christensen, Joseph L. Bower, 'Customer Power, Strategic Investment, and the Failure of Leading Firms' (1996) 17 (3) Strategic Management Journal 197-218, 210, DOI: https://doi.org/10.1002/(SICI)1097-0266(199603)17:3%3C197::AID-SMJ804%3E3.0.CO;2-U
[4] Peter W. Kingsford, 'James Watt' Encyclopedia Britannica <https://www.britannica.com/biography/James-Watt> accessed 1 April 2025.
[5] The Editors of Encyclopaedia Britannica, 'Industrial Revolution' Encyclopedia Britannica <https://www.britannica.com/money/topic/Industrial-Revolution> accessed 1 April 2025.
[6] Smithsonian Magazine and Richard Conniff, 'What the Luddites Really Fought Against' Smithsonian Magazine <https://www.smithsonianmag.com/history/what-the-luddites-really-fought-against-264412/> accessed 1 April 2025.
[7] Probably the most brilliant mathematician, John von Neumann, needed a more advanced computer than the IBM machines of the time for the shockwave-related calculations associated with the first atomic bomb, which is what led to the development of the so-called Neumann principle machines. Original quote from George Marx, A marslakók érkezése: magyar tudósok, akik nyugaton alakították a 20. század történelmét (Akadémiai Kiadó 2000, Budapest) 271; English version of the book: George Marx, The Voice of the Martians: Hungarian Scientists Who Shaped the 20th Century in the West (3rd rev. edn, Akadémiai Kiadó 2001, Budapest).
[8] The so-called Neumann principle computer consists of a central processing unit and a memory interconnected by a control unit.
[9] Honeywell Inc v Sperry Rand Corp, 180 USPQ 673 (D Minn 1973).
[10] Ananyo Bhattacharya, Neumann János - Az Ember a Jövőből (Open Books 2023, Budapest) 166-167; see in the original book: A. Bhattacharya, The Man from the Future: The Visionary Ideas of John von Neumann (W. W. Norton 2022, New York).
[11] PayPal official website, <https://about.pypl.com/who-we-are/history-and-facts/default.aspx> accessed 1 April 2025.
[12] Revolut official website, <https://www.revolut.com/about-revolut/> accessed 1 April 2025.
[13] Blockworks.co, 'Bitcoin price' <https://blockworks.co/price/btc> accessed 1 April 2025.
[14] Non-Fungible Tokens - It is worth reading the comprehensive article on what NFTs are: Mitchell Clark, 'People Are Spending Millions on NFTs. What? Why?' The Verge (3 March 2021) <https://www.theverge.com/22310188/nft-explainer-what-is-blockchain-crypto-art-faq> accessed 1 April 2025.
[15] Crypto.com, 'The Most Expensive NFTs Ever Sold' <https://crypto.com/en/university/most-expensive-nfts> accessed 1 April 2025.
[16] For examples from different areas of application, see Michael Nofer and others, 'Blockchain' (2017) 59 (3) Business & Information Systems Engineering 183-187, DOI: https://doi.org/10.1007/s12599-017-0467-3; P. Tasatanattakool, C. Techapanupreeda, 'Blockchain: Challenges and Applications', 2018 International Conference on Information Networking (ICOIN) 473-475, DOI: https://doi.org/10.1109/ICOIN.2018.8343163; Vahiny Sharma and others, 'Blockchain in Secure Healthcare Systems: State of the Art, Limitations, and Future Directions' [2022] Security and Communication Networks e9697545 (article number).
[17] See Zoltán Tóth J., 'A jogállamiság tartalma' [2019] Jogtudományi Közlöny 197-212, 199.
[19] Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC (General Data Protection Regulation) [2016] OJ L119/1 - the Act was proposed by the Commission on 25 January 2012, and was signed by the President of the EP and the President of the Council on 27 April 2016.
[20] The GDPR has been effective since 24 May 2016, applied from 25 May 2018 - see: <https://eur-lex.europa.eu/legal-content/EN/ALL/?uri=CELEX:32016R0679&qid=1718531283921> accessed 1 April 2025.
[21] Regulation (EU) 2022/1925 of the European Parliament and of the Council of 14 September 2022 on contestable and fair markets in the digital sector and amending Directives (EU) 2019/1937 and (EU) 2020/1828 (Digital Markets Act) [2022] OJ L265/1.
[22] The Digital Markets Act was introduced by the Commission on 16 December 2020 and signed by the President of the EP and the President of the Council on 14 September 2022.
[23] See Article 54 of the Digital Markets Act.
[24] Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 laying down harmonised rules on artificial intelligence and amending Regulations (EC) No 300/2008, (EU) No 167/2013, (EU) No 168/2013, (EU) 2018/858, (EU) 2018/1139 and (EU) 2019/2144 and Directives 2014/90/EU, (EU) 2016/797 and (EU) 2020/1828 (Artificial Intelligence Act) [2024] OJ L2024/1689.
[25] According to the EU Official Journal - <https://eur-lex.europa.eu/legal-content/EN/HIS/?uri=celex:52021PC0206> accessed 1 April 2025.
[26] 'Artificial Intelligence (AI) Act: Council Gives Final Green Light to the First Worldwide Rules on AI' <https://www.consilium.europa.eu/en/press/press-releases/2024/05/21/artificial-intelligence-ai-act-council-gives-final-green-light-to-the-first-worldwide-rules-on-ai/> accessed 1 April 2025.
[27] See 'Timeline of Developments EU Artificial Intelligence Act' <https://artificialintelligenceact.eu/developments/> accessed 1 April 2025.
[28] We are living in the last days of so-called narrow AI when an AI-based algorithm can simulate certain fields of human intelligence - eg speaking, vision, combination or selection - and surpass man. With the emergence of foundation models, generative AI - like ChatGPT and Dall-E - is taking the stage, and we are slowly entering the age of artificial general intelligence (AGI). Then, a single AI model will reach the level of human intelligence. It is not certain when AGI will happen, but the most advanced version of ChatGPT shows signs of this - see in: Sébastien Bubeck and others, 'Sparks of Artificial General Intelligence: Early Experiments with GPT-4' <https://arxiv.org/abs/2303.12712> accessed 1 April 2025.
[29] See Samadrita Ghosh, Stephanie Ness, Shruti Salunkhe, 'The Role of AI Enabled Chatbots in Omnichannel Customer Service' (2024) 26 (6) Journal of Engineering Research and Reports 327-345, DOI: https://doi.org/10.9734/jerr/2024/v26i61184
[30] See Ibrahim Kamel, 'Artificial Intelligence in Medicine' (2024) 7 (4) Journal of Medical Artificial Intelligence 4, DOI: https://doi.org/10.21037/jmai-24-12; Chris Varghese and others, 'Artificial Intelligence in Surgery' (2024) 30 (5) Nature Medicine 1257-1268, DOI: https://doi.org/10.1038/s41591-024-02970-3
[31] Ministry of Innovation and Technology and Digital Success Programme, 'Magyarország Mesterséges Intelligencia Stratégiája 2020-2030' [AI Strategy of Hungary] (2020).
[32] Official statistics of the Hungarian Central Statistical Office, <https://www.ksh.hu/stadat_files/mun/en/mun0098.html> accessed 1 April 2025.
[33] Chris Vallance, 'AI Could Replace Equivalent of 300 Million Jobs - Report - BBC News' (2023) <https://www.bbc.com/news/technology-65102150> accessed 1 April 2025.
[34] OpenAI, 'GPT-4 Technical Report' (March 27, 2023) 6; see also: John Koetsier, 'GPT-4 Beats 90% Of Lawyers Trying To Pass The Bar' Forbes <https://www.forbes.com/sites/johnkoetsier/2023/03/14/gpt-4-beats-90-of-lawyers-trying-to-pass-the-bar/> accessed 1 April 2025.; Lakshmi Varanasi, 'AI Models like ChatGPT and GPT-4 Are Acing Everything from the Bar Exam to AP Biology. Here's a List of Difficult Exams Both AI Versions Have Passed' Business Insider <https://www.businessinsider.com/list-here-are-the-exams-chatgpt-has-passed-so-far-2023-1> accessed 1 April 2025.
[35] See Misha Benjamin and others, 'What the Draft European Union AI Regulations Mean for Business' (10 August 2021) <https://www.mckinsey.com/business-functions/quantumblack/our-insights/what-the-draft-european-union-ai-regulations-mean-for-business> accessed 1 April 2025; World Economic Forum, 'Future of Jobs Report' (WEF 2023) <https://www.weforum.org/reports/the-future-of-jobs-report-2023/> accessed 1 April 2025.
[36] See Executive Office of the President, 'Guidance for Regulation of Artificial Intelligence Applications - Memorandum for the Heads of Executive Departments and Agencies' M-21-06 (2020); Donald J. Trump, Executive Order 13859 - Maintaining American Leadership in Artificial Intelligence (11 February 2019) Federal Register <https://www.federalregister.gov/documents/2019/02/14/2019-02544/maintaining-american-leadership-in-artificial-intelligence> accessed 1 April 2025.
[37] See The White House, 'Blueprint for an AI Bill of Rights | OSTP' <https://bidenwhitehouse.archives.gov/ostp/ai-bill-of-rights/> accessed 1 April 2025.
[38] Paige Lord, 'The "Wild West" Era of AI' (27 January 2023) <https://www.linkedin.com/pulse/ai-its-wild-west-era-paige-lord/> accessed 1 April 2025.
[39] As of June 2024, barely a dozen states in the USA had regulated AI, creating a regulatory patchwork in the United States - see '2024 AI State Law Tracker' <https://www.huschblackwell.com/2024-ai-state-law-tracker> accessed 1 April 2025.
[40] Robert Seamans, 'AI Regulation Is Coming To The U.S., Albeit Slowly' Forbes <https://www.forbes.com/sites/washingtonbytes/2023/06/27/ai-regulation-is-coming-to-the-us-albeit-slowly/> accessed 1 April 2025.
[41] Future of Life Institute, 'Pause Giant AI Experiments: An Open Letter' Future of Life Institute <https://futureoflife.org/open-letter/pause-giant-ai-experiments/> accessed 1 April 2025.
[42] Future of Life Institute (n 41).
[43] Future of Life Institute, 'Policymaking in the Pause' (Future of Life Institute 2023).
[44] Executive Order 14110 of 30 October 2023 - Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence (Federal Register Vol. 88, No. 210, 1 November 2023).
[45] Ibid.
[46] The word 'sandbox' originates in the IT world, where testing before deployment is crucial. Sandboxes are isolated, controlled, usually simulated environments like networks or software emulations. They provide a safe space for experimentation, enabling developers to test and refine their innovations before releasing them into the real world. See eg Hassan Abdishakur, 'What Is a Sandbox Environment? (Definition, How To Guide)' (28 March 2023) Built In <https://builtin.com/software-engineering-perspectives/sandbox-environment> accessed 1 April 2025; 'What Is a Sandbox Environment? Meaning & Setup | Proofpoint US' (26 February 2021) Proofpoint <https://www.proofpoint.com/us/threat-reference/sandbox> accessed 1 April 2025.
[47] See Thomas Buocz, Sebastian Pfotenhauer, Iris Eisenberger, 'Regulatory Sandboxes in the AI Act: Reconciling Innovation and Safety?' (2023) 15 (2) Law, Innovation and Technology 357-389, 362, DOI: https://doi.org/10.1080/17579961.2023.2245678
[48] See Andrea O'Sullivan, 'Expanding Regulatory Sandboxes to Fast-Track Innovation' (Policy Brief, The James Madison Institute 2021) 3.
[49] Investment in AI has skyrocketed lately, and the volume of global venture capital investment in this area is extreme: according to the OECD, over 430 billion dollars was invested in the field worldwide last year, mainly from the US and China. Healthcare, including drugs and biotechnology, is the second most funded sector - see 'Live Data from OECD.AI' <https://oecd.ai/en/data> accessed 1 April 2025; Piyush Gupta, 'How Venture Capital Is Investing in AI in These 5 Top Economies' (24 May 2024) World Economic Forum <https://www.weforum.org/agenda/2024/05/these-5-countries-are-leading-the-global-ai-race-heres-how-theyre-doing-it/> accessed 1 April 2025.
[50] Wolf-Georg Ringe, Christopher Ruof, 'Regulating Fintech in the EU: The Case for a Guided Sandbox' (2020) 11 (3) European Journal of Risk Regulation 604-629, 613, DOI: https://doi.org/10.1017/err.2020.8
[51] See Hilary J Allen and others, 'Regulatory Sandboxes' (2019) 87 (3) The George Washington Law Review 579-645, 587-589, DOI: http://dx.doi.org/10.2139/ssrn.3056993
[52] To get a better grip on the situation, see Jon Truby and others, 'A Sandbox Approach to Regulating High-Risk Artificial Intelligence Applications' (2022) 13 (2) European Journal of Risk Regulation 270-294, 276, DOI: https://doi.org/10.1017/err.2021.52
[53] Minimum Viable Product (MVP) - a concept from product development that refers to the simplest version of a product that can be released to the market. It includes just enough features to attract early adopters and gather valuable feedback for future development. The main goals of an MVP are to test a product hypothesis with minimal resources and to learn about the target market's response as quickly as possible.
[54] See Truby and others (n 52) 276.
[55] The primary sectors where disruptive technologies are most extensively utilised include healthcare, transportation, and finance, all of which are subject to rigorous and meticulous regulation.
[56] It is no wonder that the AI Act has placed emphasis on providing differentiated measures to help SMEs enter the AI market and access sandboxes - see eg AI Act recitals (8), (109), (139), (143), art 57(9)(e), art 58(2)(d), art 62.
[57] CMS Hungary, 'Hungary Data Authority Issues Heavy Fine for the Use of AI in Voice Recording Analysis' <https://cms-lawnow.com/en/ealerts/2022/04/hungary-data-authority-issues-heavy-fine-for-the-use-of-ai-voice-recording-analysis> accessed 1 April 2025.
[58] See Ringe, Ruof (n 50) 617.
[59] See the reception of the AI Act in Philipp Hacker, 'What's Missing from the EU AI Act: Addressing the Four Key Challenges of Large Language Models' (2023) Verfassungsblog <https://verfassungsblog.de/whats-missing-from-the-eu-ai-act/> accessed 1 April 2025.
[60] See Mark Fenwick, Erik PM Vermeulen, Marcelo Corrales, 'Business and Regulatory Responses to Artificial Intelligence: Dynamic Regulation, Innovation Ecosystems and the Strategic Management of Disruptive Technology' in Marcelo Corrales, Mark Fenwick, Nikolaus Forgó (eds), Robotics, AI and the Future of Law (Springer 2018, Singapore) 81-103, 88, DOI: https://doi.org/10.1007/978-981-13-2874-9_4
[61] Fenwick, Vermeulen, Corrales (n 60) 9.
[62] See OECD, 'Regulatory Sandboxes in Artificial Intelligence' (13 July 2023) OECD Digital Economy Papers 8, DOI: https://doi.org/10.1787/8f80a0e6-en
[63] Financial Conduct Authority - United Kingdom.
[64] Jayoung James Goo, Joo-Yeun Heo, 'The Impact of the Regulatory Sandbox on the Fintech Industry, with a Discussion on the Relation between Regulatory Sandboxes and Open Innovation' (2020) 6 (2) Journal of Open Innovation: Technology, Market, and Complexity 43-60, 45, DOI: https://doi.org/10.3390/joitmc6020043
[65] See the details here: <https://www.fca.org.uk/firms/innovation/regulatory-sandbox/accepted-firms> accessed 1 April 2025.
[66] See Rolf H Weber, Massimo Durante, 'Artificial Intelligence Ante Portas: Reactions of Law' (2021) 4 (3) J 486-499, 496, DOI: https://doi.org/10.3390/j4030037
[67] See Ross P. Buckley and others, 'Building FinTech Ecosystems: Regulatory Sandboxes, Innovation Hubs and Beyond' DOI: http://dx.doi.org/10.2139/ssrn.3455872; Tambiama Madiega, Anne Louise Van De Pol, 'Artificial Intelligence Act and Regulatory Sandboxes' (European Parliamentary Research Service, European Union 2022) 2.
[68] Buocz, Pfotenhauer, Eisenberger (n 47) 362.
[69] See Sofia Ranchordas, 'Experimental Regulations for AI: Sandboxes for Morals and Mores' [2021] SSRN Electronic Journal 12, DOI: https://dx.doi.org/10.2139/ssrn.3839744
[70] Miriam McNabb, 'Estonia Establishes Drone Sandbox: ANRA Technologies to Provide U-Space and CIS Tech' (15 May 2023) DRONELIFE <https://dronelife.com/2023/05/15/estonia-establishes-drone-sandbox-anra-technologies-to-provide-u-space-and-cis-tech/> accessed 1 April 2025.
[71] 'European Blockchain Regulatory Sandbox | EU Digital Finance Platform' <https://digital-finance-platform.ec.europa.eu/cross-border-services/ebsi> accessed 1 April 2025.
[72] See James Czerniawski, Trace Mitchell and Adam Thierer, '"Sandbox" Everything' (12 October 2020) RealClear Policy <https://www.realclearpolicy.com/articles/2020/10/12/sandbox_everything_580391.html> accessed 1 April 2025; Ringe, Ruof (n 50); O'Sullivan (n 48).
[73] In Germany, these so-called experimental clauses can be found in a handful of legal sources; see Federal Ministry for Economic Affairs and Energy, 'Making Space for Innovation - The Handbook for Regulatory Sandboxes' (Federal Ministry for Economic Affairs and Energy 2019), Annex, 78-85.
[74] For example, the legal grounds that make the Hungarian fintech sandbox possible are established by a decree issued by the president of the National Bank of Hungary: Decree No. 47/2018 (XII. 17.) on the Different Rules for Compliance with Obligations According to Certain MNB Decrees. In Spain, the norm that founded the sandbox for the electricity sector took the form of a Royal Decree (Royal Decree 568/2022 of 11 July 2022). See 'Spain Publishes First Call for Access to Its Electricity Regulatory Sandbox' <https://www.osborneclarke.com/insights/spain-publishes-first-call-access-its-electricity-regulatory-sandbox> accessed 1 April 2025.
[75] For some examples of establishing authorities, see Buckley and others (n 67) Appendix A; Goo, Heo (n 64) Table 2.
[77] See 'What Is a Sandbox Environment? Meaning & Setup | Proofpoint US' (n 46); Abdishakur (n 46).
[78] National Defence, 'Sandboxes' <https://www.canada.ca/en/department-national-defence/programs/defence-ideas/element/sandboxes.html> accessed 1 April 2025; Sally French, 'These 8 States Are the Perfect Sandbox for Drones' (29 September 2022) The Drone Girl <https://www.thedronegirl.com/2022/09/29/drone-sandbox-mercatus/> accessed 1 April 2025.
[79] See Financial Conduct Authority, 'FCA Regulatory Sandbox' 5 <https://www.fca.org.uk/publication/fca/fca-regulatory-sandbox-guide.pdf> accessed 1 April 2025.
[80] See Monetary Authority of Singapore, 'Fintech Regulatory Sandbox Guidelines' (Monetary Authority of Singapore 2016).
[81] See Real Decreto 817/2023, de 8 de noviembre, que establece un entorno controlado de pruebas para el ensayo del cumplimiento de la propuesta de Reglamento del Parlamento Europeo y del Consejo por el que se establecen normas armonizadas en materia de inteligencia artificial [Royal Decree 817/2023 of 8 November, establishing a controlled testing environment for trialling compliance with the proposal for a Regulation of the European Parliament and of the Council laying down harmonised rules on artificial intelligence] 2023, art 11(2).
[82] The AI Act seems to describe such a legal construction, called the specific sandbox plan, which is agreed between the providers of AI-based solutions and the competent authority; see AI Act, art 57(5) and (7).
[83] See Allen and others (n 51) 638.
[84] See Buckley and others (n 67) 68; Goo, Heo (n 64) 4; Christopher Lomax, Angela Attrey, Molly Lesher, 'The Role of Sandboxes in Promoting Flexibility and Innovation in the Digital Age' [2020] Going Digital Toolkit Note No. 2, 10.
[85] See OECD (n 62) 15; Financial Conduct Authority (FCA), 'Innovation Hub: Market Insights' (4 January 2023) Graph 1 <https://www.fca.org.uk/data/innovation-market-insights> accessed 1 April 2025.
[86] See Buckley and others (n 67) 83; Chang-Hsien Tsai, Ching-Fu Lin and Han-Wei Liu, 'The Diffusion of the Sandbox Approach to Disruptive Innovation and Its Limitations' (2020) 53 (2) Cornell International Law Journal 261-296, 268; Fenwick, Vermeulen, Corrales (n 60) 10; Saule T. Omarova, 'Technology v Technocracy: Fintech as a Regulatory Challenge' (2020) 6 (1) Journal of Financial Regulation 75-124, 110, DOI: https://doi.org/10.1093/jfr/fjaa004; Ranchordas (n 69) 13.
[87] See Federal Ministry for Economic Affairs and Energy (n 73) 24-27.
[88] Tsai, Lin, Liu (n 86) 264; Buocz, Pfotenhauer, Eisenberger (n 47) 364.
[89] Ranchordas (n 69) 21.
[90] Ringe, Ruof (n 50) 605.
[91] See Dirk Zetzsche and others, 'Regulating a Revolution: From Regulatory Sandboxes to Smart Regulation' (2017) 23 (1) Fordham Journal of Corporate & Financial Law 31-103, 38-39, 61.
[92] See Truby and others (n 52) 273; Thomas A. Hemphill, 'Technology Entrepreneurship and Innovation Hubs: Perspectives on the Universal Regulatory Sandbox' (2023) 50 Science and Public Policy 350-353, 351, DOI: https://doi.org/10.1093/scipol/scac072; Ringe, Ruof (n 50) 617.
[93] See (n 75).
[94] See Albert Tan, 'The Digital Banking and Fintech Sandbox - Nepal' (Open Science Framework 2023) preprint 9, DOI: https://doi.org/10.31219/osf.io/pze6u
[95] Allen and others (n 51) 632.
[96] Buocz, Pfotenhauer, Eisenberger (n 47) 388.
[97] Omarova (n 86) 12-13.
[98] See Fenwick, Vermeulen, Corrales (n 60) 89.
[99] O'Sullivan (n 48) 2.
[100] See Ranchordas (n 69) 7-8.
[101] See the case study in Federal Ministry for Economic Affairs and Energy (n 73) 42.
[102] See Federal Ministry for Economic Affairs and Energy (n 73) 62-63.
[103] Zoltán Pék, 'Szabályozási tesztkörnyezet az energetikában: innováció és szabályozás' [Regulatory Sandbox in the Energy Sector: Innovation and Regulation] (2022) 69 Közgazdasági Szemle 625-642, 631, DOI: https://doi.org/10.18414/KSZ.2022.5.625
[104] See Allen and others (n 51) 598; OECD (n 62) 15.
[105] FCA, 'Regulatory Sandbox' (Financial Conduct Authority 2015) 9.
[106] Buocz, Pfotenhauer, Eisenberger (n 47) 362.
[107] See <https://www.mnb.hu/en/innovation-hub/regulatory-sandbox> accessed 1 April 2025.
[108] See Fenwick, Vermeulen, Corrales (n 60) 81.
[109] Such an approach is followed by the European AI Regulation; see AI Act, art 57(12).
[110] Federal Ministry for Economic Affairs and Energy (n 73) 45.
[111] See Truby and others (n 52) 272.
[112] See O'Sullivan (n 48) 2.
[113] Buocz, Pfotenhauer, Eisenberger (n 47) 362.
[114] Ranchordas (n 69) 21; FCA (n 105) 9.
[115] See Fenwick, Vermeulen, Corrales (n 60) 81; see examples of the evaluation measures: Jon Truby, Andrew Dahdal, Imad Antoine Ibrahim, 'Sandboxes in the Desert: Is a Cross-Border "Gulf Box" Feasible?' (2022) 14 (2) Law, Innovation and Technology 447-473, 472-473, DOI: https://doi.org/10.1080/17579961.2022.2113674
[117] Truby and others (n 52) 277.
[118] See Ringe, Ruof (n 50) 606, 616, 629.
[119] 'What Is a Sandbox Environment? Meaning & Setup | Proofpoint US' (n 46).
[120] 'HEAT | Renewable Mobile' <https://www.erneuerbar-mobil.de/projekte/heat> accessed 1 April 2025.
[121] Hamburg Electric Autonomous Transportation.
[122] Federal Ministry for Economic Affairs and Energy (n 73) 11.
[123] See Fenwick, Vermeulen, Corrales (n 60) 12.
[125] According to art 62(3)(b) of the AI Act, this platform should be developed and maintained by the AI Office to provide easy-to-use information in relation to the AI Act for all operators across the Union.
[126] Commission, 'Artificial Intelligence for Europe' COM(2018) 237 final.
[129] 'Implementing and Delegated Acts - European Commission' <https://commission.europa.eu/law/law-making-process/adopting-eu-law/implementing-and-delegated-acts_en> accessed 1 April 2025.
[130] See the Official Journal of the EU, 'Regulation - EU - 2024/1689 - EN - EUR-Lex' <https://eur-lex.europa.eu/legal-content/EN/ALL/?uri=CELEX:32024R1689&qid=1722334924394> accessed 1 April 2025.
[132] 'Launch Event for the Spanish Regulatory Sandbox on Artificial Intelligence | Shaping Europe's Digital Future' (27 June 2022) <https://digital-strategy.ec.europa.eu/en/events/launch-event-spanish-regulatory-sandbox-artificial-intelligence> accessed 1 April 2025.
[133] Ministerio de Asuntos Económicos y Transformación Digital, 'The Government of Spain in Collaboration with the European Commission Presents a Pilot for EU's First AI Regulatory Sandbox' (2022) 1.
[134] See (n 133) 1-3.
[135] Real Decreto 817/2023, de 8 de noviembre, que establece un entorno controlado de pruebas para el ensayo del cumplimiento de la propuesta de Reglamento del Parlamento Europeo y del Consejo por el que se establecen normas armonizadas en materia de inteligencia artificial.
[136] Garrigues-Javier Fernandez Rivaya, Anxo Vidal, 'Spain: The Artificial Intelligence Regulatory "Sandbox" Has Arrived' (29 September 2023) Lexology <https://www.lexology.com/library/detail.aspx?g=99939c25-d7bb-4d06-b154-4a972eb71e9b> accessed 1 April 2025.
[137] There are plenty of AI-related sandboxes - see OECD (n 62) Annex B - but the Spanish one is the first to focus solely and explicitly on AI.
[138] See Anu Bradford, The Brussels Effect: How the European Union Rules the World (Oxford University Press 2020, Oxford) DOI: https://doi.org/10.1093/oso/9780190088583.001.0001
Footnotes:
[1] The author is a PhD candidate at Eötvös Loránd University, Faculty of Law and a researcher in the Jean Monnet Chair for European Data Economy. ORCID: https://orcid.org/0009-0003-1378-3788. This contribution was supported by the National Research, Development and Innovation Office and the Hungarian Ministry for Culture and Innovation under Grant K-142232 OTKA22.