https://doi.org/10.54148/ELTELJ.2024.2.127
With the seemingly rapid progression of technological development, algorithms are also becoming increasingly powerful and complex, not least due to the emergence of artificial intelligence (AI). While the AI Act is not yet applicable, a European Union law governing the use of algorithms on online platforms already exists that addresses the potential risks and challenges associated with their use. The Digital Services Act (DSA) introduces several new rules on algorithm-based, automatic filtering systems into EU law. These play a particularly important role for online platforms, where algorithms are used in the form of filter and recommender systems: they help with the moderation of content on platforms on the one hand and ensure a better user experience on the other. At the same time, their use is also associated with potentially negative implications and risks. For example, the spread of misinformation, hate speech and other harmful content on online platforms can have a significant negative impact on democracy and social cohesion. The Digital Services Act aims to ensure that algorithmic systems are used transparently and responsibly. In its analysis of the Digital Services Act, the paper primarily employs the method of textual interpretation. This involves a detailed examination of the language used in the Digital Services Act, focusing on specific terms and phrases within the legislative text. By scrutinising the context and usage of these keywords, the paper aims to uncover their precise meanings and implications.
Keywords: algorithms, algorithm regulation, DSA, Digital Services Act, recommender systems, moderation
Online platforms, ie hosting services that store and publicly disseminate information on behalf of a user,[1] are increasingly in focus when it comes to content moderation and curation. It is now standard practice rather than unusual to use technological measures such as algorithms to cope with the increasing pressure to identify and subsequently block, remove, monitor or filter illegal content in order to avoid potential liability.[2] Similar to the E-Commerce Directive,[3] the Digital Services Act (DSA)[4] also obliges platforms to remove illegal content expeditiously as soon as they become aware of it, in order not to lose their exemption from liability.
In addition to automated content moderation, the other area in which algorithm-based systems are used is that of recommender systems. These filter and classify the increasing flow of information and prioritise according to certain parameters that most closely match and appeal to users' interests. Such recommender systems can be found on e-commerce platforms such as Amazon, where products are sorted accordingly, or on dating platforms, for example. Entertainment platforms such as Spotify and Netflix and social media such as YouTube and Facebook use recommender systems to make personalised recommendations to their users based on their specific preferences. Due to their significance in terms of access to and the processing of information on online platforms, such algorithms have a major influence, which is accompanied by systemic risks that can manifest themselves, for example, in filter 'bubbles' of selected information.[5]
Obtaining access to information about algorithms has not been easy in the past, because such information has not always been available in a machine-readable format; this applies especially to recommender systems, which are often not documented in a meaningful way. In order to obtain more information, methods such as scraping or exercising the data subject's right of access under the GDPR have been used to date.[6]
In copyright law, for example, automated content moderation has been common practice for more than a decade with YouTube's Content ID,[7] but also on other music and video platforms that use automated systems to identify and manage content. More recently, such systems have also been used to address hate speech, disinformation, terrorist and other violence-related content. When photos are uploaded, for example, they are compared with the content of industry-wide databases such as Microsoft's PhotoDNA[8] to combat child pornography. In the fight against hate speech, numerous platforms now rely on a process in which texts are analysed for their supposed toxicity and blocked once a certain threshold is reached.
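To make the threshold mechanism described above concrete, the following Python sketch shows a minimal, purely illustrative moderation check: a placeholder scoring function rates a text's supposed toxicity, and the text is blocked once a hypothetical threshold is crossed. The scorer, the word list and the threshold value are invented for illustration and do not reflect any specific platform's system.

```python
# Minimal sketch of threshold-based text moderation.
# The scorer is a stand-in; real platforms use trained classifiers,
# and the threshold value here is purely illustrative.

TOXICITY_THRESHOLD = 0.8  # hypothetical cut-off chosen by the platform

def toxicity_score(text: str) -> float:
    """Placeholder scorer: share of flagged terms, scaled to [0, 1]."""
    flagged_terms = {"insult", "slur"}  # illustrative word list only
    words = text.lower().split()
    if not words:
        return 0.0
    hits = sum(1 for w in words if w in flagged_terms)
    return min(1.0, 10 * hits / len(words))

def moderate(text: str) -> str:
    """Return a moderation decision once the score crosses the threshold."""
    score = toxicity_score(text)
    return "block" if score >= TOXICITY_THRESHOLD else "publish"

print(moderate("a perfectly ordinary comment"))  # -> publish
print(moderate("insult insult insult"))          # -> block
```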
One problem that exists, or has existed, is that traditional legal frameworks and processes have been ill-equipped to monitor the opaque and relatively effective nature of algorithmic content moderation. In particular, over-enforcement against permitted expression (overblocking) on the one hand and under-enforcement against content that is actually unauthorised on the other create problems that may ultimately threaten the rule of law if traditional rule-of-law institutions do not provide sufficient tools.[9]
The algorithmic shift in terms of the governance and regulation of platforms is ultimately the result of technical developments as well as politics and public discourse.[10] The current rules of the DSA are the product of this public and political discussion and the EU's response to how providers of online platforms should deal with problematic content such as misinformation and hate speech. In addition to isolated national measures, such as the German Network Enforcement Act (NetzDG) and the Austrian Communication Platforms Act (KoPlG), which is no longer in force, the DSA has now been adopted as a comprehensive and directly effective EU-wide regulation.
The increasingly important questions of how to deal with issues such as hate speech and misinformation on online platforms, including cross-border hate speech, how to make more transparent the algorithms that are often barely comprehensible (a black box) yet make a decisive contribution to the functioning of platforms, and whether platforms should be allowed to regulate themselves were, and remain, topics of the public discourse that led to the DSA. The guiding principle is to reduce negative effects on democracy and social cohesion. Safeguarding fundamental rights such as freedom of expression and freedom of information is therefore essential, while at the same time a balance and a limit to online hate must be established. Which mechanisms decide what can be expressed publicly and where the boundaries lie? One of the solutions will not simply be to trust supposedly smoothly operating algorithms and AI systems. It is not a given that topics
such as content moderation and governance will be addressed on online platforms at all. On the other hand, the law also expects platforms to take responsibility accordingly, as we have seen at least since the introduction of legislative acts such as the E-Commerce Directive and now the DSA.[11] Algorithms are a welcome technical solution as the responsibility of platforms increases, but their use is not unproblematic in various respects. The DSA is now proposing what a successful solution to these issues could look like in relation to algorithmic systems.[12]
It is important to realise the tension between automated processes and the need for transparency and accountability in algorithmic content moderation. Automated tools efficiently handle large quantities of data but operate opaquely, raising concerns about decision-making and legal implications. The current legislative trends prioritise technology deployment over human rights, risking transparency and accountability. However, new laws like the DSA aim to improve algorithmic governance by enforcing transparency, accountability and risk assessment. The challenge is to integrate these principles through collaboration among experts, lawmakers and technologists to ensure a balanced and ethical digital future.[13]
In relation to online platforms, algorithms are particularly relevant in the context of content moderation. This algorithmic moderation can be summarised as governance mechanisms that structure participation in a community to facilitate collaboration and prevent abuse,[14] or more narrowly, as systems that classify user-generated content based on matches or predictions, leading to a governance decision and outcome (eg removal, geo-blocking or account deletion).[15]
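A minimal sketch of this narrower notion, under the assumption that classification happens either by matching against known reference hashes or by a simple prediction, might look as follows; the hash set, the stand-in prediction and the mapping to governance outcomes are all hypothetical simplifications, not any platform's actual pipeline.

```python
# Illustrative sketch of algorithmic moderation in the narrow sense:
# content is classified (by matching or prediction) and the classification
# is mapped to a governance decision and outcome.

import hashlib

# Hash of a known reference file (here simply the MD5 of an empty byte string).
KNOWN_ILLEGAL_HASHES = {hashlib.md5(b"").hexdigest()}

def classify(content: bytes) -> str:
    """Classify by matching against reference hashes, otherwise by a crude 'prediction'."""
    if hashlib.md5(content).hexdigest() in KNOWN_ILLEGAL_HASHES:
        return "known_illegal"
    # Stand-in for a prediction model: flag implausibly large uploads.
    if len(content) > 10_000_000:
        return "suspicious"
    return "ok"

def govern(label: str) -> str:
    """Map the classification to a governance decision and outcome."""
    outcomes = {
        "known_illegal": "remove content and suspend account",
        "suspicious": "geo-block pending human review",
    }
    return outcomes.get(label, "keep online")

print(govern(classify(b"some user upload")))  # -> keep online
```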
Recommender systems are designed to make it easier for users to use a platform by suggesting content that they are likely to like. However, user-friendliness is only one side of the coin. Online platforms benefit, for example, through longer retention times and, therefore, more advertising potential or directly recommending to users what they should
buy. These recommender systems work best when they are also based on profiling. The DSA now also provides a legal definition of the term. A recommender system is defined in the DSA as 'a fully or partially automated system used by an online platform to suggest in its online interface specific information to recipients of the service or prioritise that information, including as a result of a search initiated by the recipient of the service or otherwise determining the relative order or prominence of information displayed'.[16] It should be noted that this definition does not generally cover hosting services or intermediary services, nor online search engines, but only online platforms, and at the same time, is not limited to certain forms of information. The 'content-agnostic' approach also runs through the entire DSA and is already known from the E-Commerce Directive.[17]
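To illustrate the quoted definition, the following sketch shows a fully automated system that determines the relative order of items by combining a profiling-based interest score with recency. The weights, the user-profile structure and the item fields are assumptions made for this example, not a description of any real platform's ranking.

```python
# Minimal recommender sketch: a fully automated system that prioritises
# information, i.e. determines its relative order, here by combining a
# profiling-based affinity score with freshness.

from dataclasses import dataclass

@dataclass
class Item:
    item_id: str
    topic: str
    age_hours: float

def rank(items: list[Item], user_interests: dict[str, float]) -> list[Item]:
    """Order items by interest match (profiling) plus freshness."""
    def score(item: Item) -> float:
        affinity = user_interests.get(item.topic, 0.0)   # profiling-based component
        freshness = 1.0 / (1.0 + item.age_hours)         # recency component
        return 0.7 * affinity + 0.3 * freshness          # weights chosen arbitrarily

    return sorted(items, key=score, reverse=True)

feed = rank(
    [Item("a", "sports", 2.0), Item("b", "politics", 0.5)],
    user_interests={"politics": 0.9},
)
print([item.item_id for item in feed])  # -> ['b', 'a']: the politics item is ranked first
```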
Recommender systems and the algorithms behind them sometimes only work so well because they work with profiling. The DSA addresses the risk of profiling in several places and considers, for example, in its recital 68 in relation to advertising, that online advertising plays an important role in the provision of services by online platforms but can also entail considerable risks. Online platforms should, therefore, be obliged to ensure that users receive information about the main parameters used to decide which advertising is displayed to them. Users should be provided with conclusive explanations that explain the underlying logic and whether profiling is used.
Interest-optimised personalised advertising, which may target weaknesses, can have serious negative effects on users. Manipulative techniques have the potential to cause social harm to entire groups, for example, through disinformation or discrimination against certain groups. For this reason, providers of online platforms should be prohibited from displaying advertising based on profiling that uses special categories of data such as racial or ethnic origin, political opinion, religion, ideology, trade union membership, genetic or health data, or sex life or sexual orientation.[18]
The parameters of such recommender systems should be presented in a way that is clear and easy for the user to understand. These parameters should include at least the most important criteria used to determine which information is suggested to the user, as well as the reasons why the individual criteria are important. This also affects cases in which information is prioritised on the basis of profiling and the user's online behaviour.[19]
Very large online platforms (VLOPs),[20] in regard to their recommender systems, should consistently ensure that users of their service have alternative options concerning the most
important parameters of their recommender systems that are not based on profiling.[21] This provision can be found in Article 38 DSA. The legal text requires at least one option for each of its recommender systems that is not based on profiling.[22]
Like the GDPR, the DSA can be considered a model law, since there have been attempts to regulate similar aspects of algorithms on online platforms around the world; one example is the US Algorithmic Justice and Online Platform Transparency Act, which has been proposed but not yet enacted. Similar to the DSA, the proposed US law establishes requirements for certain commercial online platforms (eg social media sites) that withhold or promote content through algorithms and related computational processes that use personal information. Online platforms must disclose their collection and use of personal information and their content moderation practices; retain specified records that describe how the algorithms use personal information and assess whether the algorithms produce disparate outcomes based on race and other demographic factors in terms of access to housing, employment, financial services and related matters; employ algorithms safely and effectively; and allow users to access and transfer their personal information.[23]
The first specific provision in relation to algorithmic decision-making can be found in Article 14(1) DSA. This stipulates transparency obligations with regard to the terms and conditions of providers of intermediary services in relation to information on content moderation and algorithmic decision-making.[24]
Providers of intermediary services, including online platforms, must provide information in their terms and conditions on any restrictions they impose on the information provided by users in connection with the use of their service. This includes information on all guidelines, procedures, measures and tools used to moderate content, including algorithmic decision-making. This must be written in clear, simple, understandable, user-friendly and unambiguous language and should be made publicly available in an easily accessible and machine-readable format; in other words, transparency obligations regarding content moderation, with explicit reference to algorithmic decision-making.[25]
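The DSA does not prescribe a particular machine-readable format, so the following structure is only a hypothetical illustration of how information on content moderation tools, including the use of algorithmic decision-making, could be published in a machine-readable way; the service name, tool labels and fields are invented for this example.

```python
# Hypothetical machine-readable disclosure of content moderation information
# in the spirit of Article 14(1) DSA; the format is not prescribed by the DSA.

import json

moderation_disclosure = {
    "service": "example-platform.eu",            # hypothetical provider
    "content_restrictions": ["illegal hate speech", "spam", "counterfeit goods"],
    "moderation_tools": [
        {"tool": "hash matching against reference databases", "automated": True},
        {"tool": "toxicity classifier", "automated": True, "human_review": "on appeal"},
        {"tool": "reports by trusted flaggers", "automated": False},
    ],
    "algorithmic_decision_making": True,
    "last_updated": "2024-01-01",
}

print(json.dumps(moderation_disclosure, indent=2))
```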
The term 'algorithmic decision-making' should not be taken to suggest that the algorithm itself has any decision-making power, as algorithms have no autonomy. The
legislator has merely adopted this common term. Algorithmic decision-making refers to any form of automated content moderation. Therefore, it does not have to be a particularly sophisticated algorithm that meets the criteria for consideration as 'artificial intelligence' as defined in the European Commission's AI Act (Article 3(1) AI Act in conjunction with Annex I). Any use of automated means is sufficient, even if these are based on a simple if-then programming logic.[26]
In the interests of transparency and user protection and in order to avoid unfair or arbitrary results, certain rules should be laid down with regard to terms and conditions. Providers of intermediary services should clearly state in their terms and conditions the reasons why they may restrict the provision of their services. These should then always be kept up to date. In particular, but not exclusively, this should include information on all guidelines, procedures, measures and tools used to moderate content, including algorithmic decision-making. Providers of intermediary services can also use graphical elements such as symbols or images in their terms of use to illustrate the main elements of the information obligations under the DSA. Providers should inform the users of their service in an appropriate manner in the event of significant changes to the terms and conditions, eg if they change the rules for the information permitted in their services or about other such changes that could have a direct impact on the users' ability to use the service, and thus also in the event of changes to the algorithms.[27] Even if some online platforms have already adopted this practice voluntarily in part and with varying degrees of intensity out of goodwill, it is to be welcomed that this has now been legislated and applies uniformly to all online platform operators in the EU.
If the service is primarily aimed at minors, which may be expressed by the design or marketing of the service or by the fact that the service is predominantly used by minors, special efforts should be made to make the terms and conditions easier for minors to understand.[28] This means that the algorithmic decision-making tools must also be clear for minors, which makes the already difficult task of explaining how complex algorithms work even more difficult.
VLOP providers are subject to stricter transparency requirements with regard to their terms and conditions. They should also provide their terms and conditions in the official languages of all EU Member States in which they offer their services. Furthermore, users should be provided with a compact and easy-to-read summary of the most important points of the terms and conditions.[29] The DSA leaves open whether this also includes information on the algorithmic recommender systems, but this can be assumed on the basis of a systematic interpretation.
A central component of the business activities of online platforms is the way in which information is prioritised and presented on their online interface. This includes algorithmic recommendations, ranking and prioritisation of information, indicated by textual or other visual representations, as well as other types of curation of information provided by users. These recommender systems can have a significant impact on users' ability to access and interact with information online. For example, they can facilitate the search for content relevant to users and contribute to an improved user experience. They also play an important role in reinforcing certain messages, spreading information virally and encouraging online behaviour. According to the DSA, online platforms should, therefore, always ensure that users are adequately informed about how recommender systems affect the way information is displayed and how they can influence the way information is presented to users. The parameters of these recommender systems should be clear and easy to understand to ensure that users can understand how the information displayed to them is prioritised. At a minimum, these parameters should include the key criteria used to determine what information is suggested to the user and the reasons why each criterion is important, which explicitly includes profiling.[30]
The DSA does not specify what such important criteria might be. However, this is not unwise, as they can differ from platform to platform: purchase history can be an important parameter on Amazon, for example, while liking certain pages can be an important parameter on Facebook. In any case, it is also to be welcomed that platform operators must now also explain why the respective criteria are relevant, so that users can better reflect on why they are presented with specific content. However, it remains to be seen, and the CJEU will sooner or later have to decide, how much detail online platforms will have to provide when explaining this information. The overriding premise remains the requirement of transparency.
To be a little more specific, regarding transparency, the DSA also requires online platform providers that use recommender systems to clearly and unambiguously set out the most important parameters used in their recommender systems in their terms and conditions, as well as all options for users to change or influence these important parameters.[31] This must be done in clear language. The requirement for clear and understandable language is already known from the GDPR.[32]
The said important parameters include explanations as to why users are shown certain content or why certain information is suggested. The DSA also prescribes a minimum content for this, so these important parameters must at least include the criteria that are
most important for determining the information that is suggested to the user on the one hand[33] and the reasons for the relative importance of these parameters on the other.[34]
If several options are available for recommender systems that determine the relative ranking of information provided by users, there must also be a function that allows users to select and change their preferred option, and this option must be directly available and easily accessible on the online interface.[35] The fact that this option, which is already offered by some platforms, is now legally mandatory is a positive development, particularly from the user's perspective.
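Read in this way, Article 27(3) DSA essentially requires a selectable setting in the online interface. The sketch below illustrates one conceivable implementation in which a user can list the available recommender options and switch to a non-profiling, chronological one; the option names and the settings object are invented for illustration and are not taken from any actual platform.

```python
# Hypothetical sketch of a directly accessible recommender option selector,
# illustrating the functionality described in Article 27(3) DSA.

RECOMMENDER_OPTIONS = {
    "personalised": "ranking based on profiling (interest match)",
    "chronological": "newest first, not based on profiling",
}

class FeedSettings:
    """Holds the recommender option a user has selected for their feed."""

    def __init__(self, default_option: str = "personalised"):
        self.selected_option = default_option

    def available_options(self) -> dict[str, str]:
        # Exposed directly in the online interface, eg as a button or drop-down menu.
        return RECOMMENDER_OPTIONS

    def select_option(self, option: str) -> None:
        if option not in RECOMMENDER_OPTIONS:
            raise ValueError(f"unknown recommender option: {option}")
        self.selected_option = option

settings = FeedSettings()
settings.select_option("chronological")  # the user switches to the non-profiling feed
print(settings.selected_option)          # -> chronological
```

In such a design, the profiling-based option could still be the default, which reflects the opt-out logic discussed further below in relation to Article 38 DSA.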
Criticism can be expressed here, on the one hand, insofar as the DSA does not go so far as to create the possibility of mandatory third-party recommender systems on online platforms.[36] However, in this case, one should consider whether this would already represent a fundamental dilution of the business model of many online platforms and an interference with the fundamental rights of the operators of these platforms.
Furthermore, even if platforms provide useful information, it may be insufficient if it is hidden in the terms of service. User research by the Center for Democracy and Technology shows that users prefer information about recommender systems to be visual, interactive and personalised and to allow direct control over the system. This is not achievable through terms of service alone; it is preferable to include this information on a designated information page or in a transparency report.[37]
Also, the information provided by platforms may not be sufficient due to the ambiguous language of Article 27 DSA, such as unclear definitions of 'main parameters' or 'most significant criteria'. Platforms must be held to high standards to avoid 'transparency theatre'. For instance, simply saying that content is being shown because the end-user liked similar content does not offer meaningful insight into a recommender system's parameters. Additionally, platforms should disclose detailed information for expert audiences, including system design, key metrics and data usage, ensuring comprehensive transparency for different recipients.[38]
Furthermore, Article 27 DSA can be compared to Article 5 Platform-To-Business Regulation [Regulation (EU) 2019/1150, P2B Regulation]. It can be stated that the European legislator has provided the same level of algorithmic transparency for online intermediary services in Article 5 P2B Regulation and Article 27 DSA. However, this can be regarded as cumbersome. On the one hand, because the legislator uses different terminology in the
German versions of the laws (eg, Hauptparameter and wichtigste Parameter, but these do not occur in the English versions) and, on the other hand, because individual criteria are listed elsewhere: once in the general clause itself and once in the specification of the general clause.[39]
The central differences between the requirements of the P2B Regulation and the DSA lie in the fact that the P2B Regulation defines the limits of the transparency obligations [Article 5(6) P2B Regulation] and provides for a guideline competence of the European Commission [Article 5(7) P2B Regulation]. With regard to the guidelines, however, this difference is not really significant because, as they lack the quality of a legal norm, they could also be drawn upon for the interpretation of Article 27 DSA.[40]
Providers of VLOPs that use recommender systems must offer at least one option for each of their recommender systems that is not based on profiling.[41] There is no mention in the DSA that recommender systems should not be based on profiling by default, as initially recommended by the European Data Protection Supervisor with respect to the DSA.[42] In any case, online platforms can use recommender systems with preset profiling. In the case of VLOPs, there is an alternative, according to the DSA, but users must still actively deactivate profiling if it is preset by default on the specific online platform. Recital 94 DSA indicates that the option to select the alternative without profiling should be accessible directly from the online interface on which the recommendations are presented. In my view, this means that in the relevant online interfaces of large online platforms such as YouTube and Facebook, for example, there should be an unambiguous and clear selection option in the feed, for example in the form of a button or at least a drop-down menu, to select a feed that is not based on profiling but, for example, on a chronological timeline. In any case, this is technically possible. Since the income of many VLOPs is also largely attributable to profiling, the opt-out variant chosen in the DSA is certainly the less severe one.
The responsibilities of VLOP providers also include the careful identification, analysis and assessment of all systemic risks arising from the design, including algorithmic systems,
operation and use, of their services in the Union.[43] Such risk assessments should be carried out at least once a year and whenever new functions are to be introduced that are likely to have a critical impact on identified risks.[44] The minimum annual risk assessment is a good choice in times of rapid technological change, especially as each introduction of a major change entails a separate risk assessment anyway.
Among the systemic risks, Article 34(1) DSA includes, in an exhaustive list, the dissemination of illegal content,[45] any actual or foreseeable adverse effects on the exercise of fundamental rights, in particular respect for human dignity, private and family life, the protection of personal data, freedom of expression and information, including media freedom and pluralism, the prohibition of discrimination, the rights of the child and consumer protection,[46] any actual or foreseeable adverse effects on civic discourse, electoral processes and public security,[47] as well as any actual or foreseeable adverse effects in relation to gender-based violence, the protection of public health and of minors, and serious adverse effects on a person's physical and mental well-being.[48] The EU legislator is clearly responding here to the events surrounding the 2016 US presidential election, among other things, and is making an important contribution to stability in the EU by naming systemic risks and the obligation to analyse them.
In such risk assessments of VLOP providers, particular attention is paid to the design of their recommender systems and other relevant algorithmic systems.[49] Although the vast majority of VLOPs[50] use algorithmic systems to moderate their content, select and display advertising and contacts, the factors 'systems for moderating content'[51] and 'systems for selecting and presenting advertising'[52] are listed separately, as much but not everything must necessarily be based on an algorithmic system when it comes to content moderation or the presentation of advertising. The risk assessment should also analyse whether and how risks are influenced by deliberate manipulation of the service offered. In particular, reference is also made to the possible methodology of non-authentic use or automated exploitation of the service.[53]
In addition to the risk assessment, providers of VLOPs are also obliged to take appropriate, proportionate and effective measures to mitigate the systemic risks identified pursuant to Article 34 DSA, with a particular focus on fundamental rights.[54] In a non-exhaustive list, the DSA mentions here, among other things, adapting the design, features or functioning of their services, including their online interfaces,[55] adapting content moderation procedures, including the speed and quality of the processing of reports on certain types of illegal content, and, where necessary, rapidly removing reported content or disabling access to it, in particular in relation to illegal hate speech or cyber violence, as well as adapting all relevant decision-making processes and the means used for content moderation,[56] testing and adapting their algorithmic systems, including their recommender systems,[57] and adapting their advertising systems and adopting targeted measures to restrict or adapt the display of advertising in connection with the service they provide.[58]
The testing of algorithmic systems is, on the one hand, a risk assessment activity (Article 34 DSA); on the other hand, it is also part of the risk mitigation activity under Article 35(1)(d) DSA. However, there are as yet no established control and test procedures and corresponding standards; these are the subject of research and development in science and industry. The algorithmic systems can then be adapted in light of the test results. A large number of weaknesses in automated means are already known, for example, in filter technologies for identifying and possibly making illegal content inaccessible for the purposes of content moderation. Adaptation can also involve selecting a different algorithmic system, no longer having a certain function performed by such systems, or strengthening the role of humans.[59]
In the context of content moderation, for example, a matching technique with perceptual hashing, ie a fingerprinting technology based on the comparison of reference files, can be dispensed with if considerable overenforcement, in particular the enforcement of nonexistent rights, is identified. If the algorithmic system that is used is retained, its settings can be adjusted, eg the tolerance of the filter systems with regard to deviations. These determine, in particular, the risks of overblocking, ie making lawful content inaccessible due to alleged illegality ('false positives') or alleged violations of the platform's terms of use.[60]
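How such a tolerance setting interacts with the risk of false positives can be illustrated with a simplified perceptual-hash filter: an upload is blocked when its Hamming distance to a reference fingerprint falls within the configured tolerance, and a tolerance set too loosely starts to catch unrelated, lawful content. The fingerprints and threshold values below are invented and do not represent a real fingerprinting scheme such as PhotoDNA.

```python
# Simplified sketch of a fingerprint filter and its tolerance setting.

def hamming_distance(a: int, b: int) -> int:
    return bin(a ^ b).count("1")

REFERENCE_FINGERPRINTS = [0b1010110011010101]  # hypothetical reference file

def blocked(upload_fingerprint: int, tolerance: int) -> bool:
    """Block the upload if it lies within `tolerance` bits of any reference fingerprint."""
    return any(
        hamming_distance(upload_fingerprint, ref) <= tolerance
        for ref in REFERENCE_FINGERPRINTS
    )

near_copy = 0b1010110011010100   # differs from the reference in one bit
unrelated = 0b0101001100101010   # differs in every bit

print(blocked(near_copy, tolerance=2))    # -> True: treated as a match
print(blocked(unrelated, tolerance=2))    # -> False: lawful content stays up
print(blocked(unrelated, tolerance=16))   # -> True: overly loose tolerance overblocks
```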
In general, this is also a welcome approach, as ultimately, the online platforms themselves will know best how to implement risk mitigation measures, so why not give them this task?
VLOP providers shall make the data necessary for the monitoring and assessment of compliance with the DSA available to the Digital Services Coordinator or the Commission upon reasoned request.[61] For these purposes, VLOP providers shall, at the request of the Digital Services Coordinator or the Commission, explain the design, logic of operation and testing of their algorithmic systems, including their recommender systems.[62] It can be seen that the transparency requirement runs like a continuous thread through the DSA.
It is obvious that explainability and transparency can quickly reach their limits in connection with algorithms. We should therefore be careful and protect ourselves against the 'transparency fallacy'[63]. How this transparency will ultimately be guaranteed will probably only be decided by the CJEU.
According to Article 49 para 1 DSA, Member States must designate one or more competent authorities responsible for the supervision of providers of intermediary services and the enforcement of the DSA. One of these authorities must then be designated as the Digital Services Coordinator. This coordinator shall be responsible for all matters relating to the supervision and enforcement of the DSA in the Member State unless the Member State concerned has delegated certain specific tasks or sectors to other competent authorities. The Digital Services Coordinator is, in any case, responsible for ensuring the coordination of these matters at the national level and for contributing to effective and consistent supervision and enforcement of the DSA across the EU.[64]
In Austria, this task is performed by the already existing Austrian Communications Authority ('KommAustria'); hence, no new authority has been created. The corresponding national implementing law, the Coordinator for Digital Services Act (KDD-G), came into force on 17 February 2024.
In the context of algorithms, it is worth mentioning that, during inspections, the Commission's delegated officials and other accompanying persons authorised by it shall have the power to request from the VLOP provider concerned access to information on and explanations of the organisation, functioning, IT system, algorithms, data management and business practices, and to record or document such explanations.[65]
Furthermore, during inspections, the officials and other accompanying persons authorised by the Commission, the auditors or experts designated by it, the Digital Services Coordinator, or the other competent authorities of the Member State on whose territory the inspection is carried out may request explanations from the VLOP provider concerned on the organisation, functioning, IT system, algorithms, data management and business practices and may interview its key personnel.[66]
Article 69(2)(d) DSA contains a power for the Commission to require the provider of a very large online platform to provide access to information on the organisation, functioning, IT system, algorithms, data management and business practices, as well as explanations thereof, and to record or document these explanations. This power can be explained above all by the specifics of digital companies, whose organisation, functioning and, in particular, technical infrastructure cannot be easily viewed, recorded and understood by outsiders.[67]
In this respect, the right of 'access' is likely to represent a special form of the general power of inspection pursuant to Article 69(2)(b) DSA - to a certain extent, it is the power of digital inspection. It is aimed at a comprehensive search of the addressee's digital assets, including outsourced cloud capacities and sensitive information such as the operational algorithms of the platform, which is carried out on site, although not limited to the information stored on site. However, the right to 'explanations' (and related records or documentation) goes beyond this and takes into account the fact that the architectures of complex digital platforms and search engines can prove to be 'black boxes' even for the Commission's experts and can make it difficult or even impossible to effectively enforce the related provisions of the DSA.[68]
Article 69(2)(d) DSA gives the Commission (only, but at least) a right to sufficient transparency with regard to the technical facilities of the platform or search engine relevant to the alleged infringements. The addressees of an inspection must, therefore, be willing and able to explain the relevant facilities to the Commission in a comprehensible manner, including the software architecture of the platform, which is probably typically at the centre of the inspection. The scope of this duty to explain is likely to require some clarification in
practice. However, in the interests of effective enforcement of the DSA provisions, it will be necessary to demand that providers cannot simply provide abstract explanations but must comply with the Commission's request with the desired degree of concretisation.[69]
In order to carry out the tasks assigned to it, the Commission may take the necessary measures to monitor the effective implementation of and compliance with the DSA by VLOP providers. The Commission may order them to grant access to their databases and algorithms and provide explanations.[70]
To ensure effective implementation of and compliance with the DSA, the Commission is authorised, pursuant to Article 72 DSA, to monitor providers' adherence to their obligations. For this purpose, it may demand that providers grant access to data and algorithms, as well as to other essential information. To this end, it can also engage independent external experts and auditors who are appointed by the respective national supervisory authorities to assist with the procedure (Article 72(1) DSA). Further, all necessary documents should be provided, and any clarifications required should be furnished. The Commission may establish detailed rules regarding the modalities of the procedure (Article 83 DSA).[71]
To support the Commission in enforcing the DSA, the Commission's Joint Research Centre has established a European Centre for Algorithmic Transparency (ECAT), which will support the Commission with technical and scientific expertise.[72] The ECAT supports the Commission's supervisory function with multidisciplinary internal and external expertise. The centre is based in Seville, Spain, and was officially opened in April 2023. An interdisciplinary team of data scientists, AI experts, social scientists and legal experts will work on evaluating algorithms and identifying and measuring systemic risks. The centre particularly supports the Commission in evaluations to determine whether the functioning of algorithmic systems complies with the obligations of the DSA for risk management by VLOPs and very large
online search engines (VLOSEs). The aim is to centralise research activities concerning transparency and algorithms. The ECAT is part of the European Commission and is operated by the Joint Research Centre (JRC) - the Commission's internal science and knowledge service - in close cooperation with the Directorate-General for Communications Networks, Content and Technology (DG CONNECT).[73] As a research centre, ECAT will especially support the Commission in the evaluation of the functioning of algorithms used on VLOPs and compliance with the associated risk management obligations.[74]
The DSA now forms the core law on algorithms on online platforms. The regulation does not specify the content of the code or the technical or content-related design of algorithms. Rather, the approach chosen is that of transparency obligations and verification options. This gives the respective online platforms a certain degree of freedom regarding the algorithms and recommender systems they use. At the same time, operators of online platforms must now also pay mandatory attention to the systemic risks addressed in the DSA. Many details are still open, such as the question of how specific processes will work in the event of reviews and to what extent transparency will actually be required on the part of online platforms.
In my opinion, the concept of the DSA is sound insofar as it is not aimed at regulating the type and structure of algorithms per se but rather focuses on the potential risks arising from their use. The wording is open enough to ensure that the law has a long shelf life. In the coming years and decades, we can certainly look forward to decisions by the highest courts on unclear formulations and provisions in the respective legal texts. The path taken by the DSA, with its transparency and accountability obligations in relation to algorithm-based systems, is, in any case, a suitable and well-considered path that does not inhibit innovation but rather provides it with a clear guideline by identifying systemic risks that need to be avoided. ■
NOTES
[1] DSA Article 3(i).
[2] Niva Elkin-Koren, Maayan Perel, 'Guarding the Guardians: Content Moderation by Online Intermediaries and the Rule of Law' in Giancarlo Frosio (ed), The Oxford Handbook of Online Intermediary Liability (Oxford University Press 2020, Oxford) 669.
[3] Directive 2000/31/EC of the European Parliament and of the Council of 8 June 2000 on certain legal aspects of information society services, in particular electronic commerce, in the Internal Market ('Directive on electronic commerce') [2000] OJ L178/1.
[4] Regulation (EU) 2022/2065 of the European Parliament and of the Council of 19 October 2022 on a Single Market For Digital Services and amending Directive 2000/31/EC (Digital Services Act) [2022] OJ L277/1.
[5] Natali Helberger and others, Regulation of news recommenders in the Digital Services Act: empowering David against the Very Large Online Goliath <https://policyreview.info/articles/news/regulation-news-recommenders-digital-services-act-empowering-david-against-very-large> accessed 15 October 2024.
[6] Ben Wagner, 'Algorithmic Accountability: Towards Accountable Systems' in Giancarlo Frosio (ed), The Oxford Handbook of Online Intermediary Liability (Oxford University Press 2020, Oxford) 682.
[7] How Content ID works <https://support.google.com/youtube/answer/2797370> accessed 15 October 2024.
[8] Microsoft PhotoDNA <https://www.microsoft.com/en-us/photodna> accessed 15 October 2024.
[9] Elkin-Koren, Perel (n 2) 670.
[10] Christian Katzenbach, Der "Algorithmic turn" in der Plattform-Governance. Die diskursive, politische und technische Positionierung von Algorithmen und KI als "technological fix" für komplexe Herausforderungen, (2022) 74 (19) Kölner Zeitschrift für Soziologie und Sozialpsychologie 285, DOI: https://doi.org/10.1007/s11577-022-00837-4
[11] Gergely Gosztonyi, Censorship from Plato to Social Media (Springer 2023, Cham), DOI: https://doi.org/10.1007/978-3-031-46529-1_4
[12] Katzenbach (n 10) 285.
[13] Giancarlo Frosio, 'Algorithmic Enforcement Tools: Governing Opacity with Due Process' in Simona Francese, Roberto King (eds), Crossing the valley of death: Driving forensic innovation in the 21st Century (Springer 2024, Cham), http://dx.doi.org/10.2139/ssrn.4610556
[14] James Grimmelmann, 'The Virtues of Moderation' (2015) 17 Yale Journal of Law & Technology 42.
[15] Robert Gorwa and others, 'Algorithmic content moderation: Technical and political challenges in the automation of platform governance' (2020) 7 Big Data & Society, DOI: https://doi.org/10.1177/2053951719897945
[16] DSA Article 3(s).
[17] Sebastian Felix Schwemer, 'Recommender Systems in the EU: from Responsibility to Regulation?' (2021) 1 (2) Morals & Machines DOI: https://doi.org/10.5771/2747-5174-2021-2-60
[18] DSA Recital 69.
[19] DSA Recital 70.
[20] DSA Article 33(1): 'online platforms and online search engines which have a number of average monthly active recipients of the service in the Union equal to or higher than 45 million, and which are designated as very large online platforms'.
[21] DSA Recital 94.
[22] DSA Article 38.
[23] Algorithmic Justice and Online Platform Transparency Act, S. 1896, 117th Cong. (2021), <https://www.congress.gov/bill/117th-congress/senate-bill/1896> accessed 15 October 2024.
[24] Bissera Zankova, Gergely Gosztonyi, 'Quo vadis, European's Union New Digital Regulation Package?' (2021) (2) Business and Law 67-90.
[25] DSA Article 14(1).
[26] Benjamin Raue, 'Art 14 - Allgemeine Geschäftsbedingungen' in Franz Hofmann, Benjamin Raue (eds), Digital Services Act (Nomos 2023, Baden-Baden) 259.
[27] DSA Recital 45.
[28] DSA Recital 46 and Article 14(3).
[29] DSA Recital 48.
[30] DSA Recital 70.
[31] DSA Article 27(1).
[32] Schwemer (n 17).
[33] DSA Article 27(2)(a).
[34] DSA Article 27(2)(b).
[35] DSA Article 27(3).
[36] Natali Helberger and others, Regulation of news recommenders in the Digital Services Act: empowering David against the Very Large Online Goliath (2021) <https://policyreview.info/articles/news/regulation-news-recommenders-digital-services-act-empowering-david-against-very-large> accessed 15 October 2024.
[37] Maximilian Gahntz, Claire Pershan, 'Action Recommended: How the Digital Services Act Addresses Platform Recommender Systems' (27 February 2023) VerfBlog, DOI: https://doi.org/10.17176/20230227-185115-0
[38] Gahntz, Pershan (n 37).
[39] Sebastian Schwammberger, 'Zusammenspiel von Friktionen mit anderen Rechtsakten' in Björn Steinrötter (ed), Europäische Plattformregulierung (Nomos 2023, Baden-Baden) 261.
[40] Schwammberger (n 39).
[41] DSA Article 38.
[42] EDPS, Opinion 1/2021 on the Proposal for a Digital Services Act <https://edps.europa.eu/system/files/2021-02/21-02-10-opinion_on_digital_services_act_en.pdf> accessed 15 October 2024.
[43] DSA Article 34(1).
[44] DSA Article 34(1).
[45] DSA Article 34(1)(a).
[46] DSA Article 34(1)(b).
[47] DSA Article 34(1)(c).
[48] DSA Article 34(1)(d).
[49] DSA Article 34(2)(a).
[50] Parul Pandey, 'The Remarkable World of Recommender Systems' (2019) <https://towardsdatascience.com/the-remarkable-world-of-recommender-systems-bff4b9cbe6a7> accessed 15 October 2024; Ankit Jena, '4 Great Platforms That Use Recommendation System' (2022) <https://www.muvi.com/blogs/platforms-that-use-recommendation-system.html> accessed 15 October 2024.
[51] DSA Article 34(2)(b).
[52] DSA Article 34(2)(d).
[53] DSA Article 34(2).
[54] DSA Article 35(1).
[55] DSA Article 35(1)(a).
[56] DSA Article 35(1)(c).
[57] DSA Article 35(1)(d).
[58] DSA Article 35(1)(e).
[59] Katharina Kaesling, 'Art 34 - Risikominderung' in Franz Hofmann, Benjamin Raue (eds), Digital Services Act (Nomos 2023, Baden-Baden) 588.
[60] Kaesling (n 59).
[61] DSA Article 40(1).
[62] DSA Article 40(3).
[63] Miriam C. Buiten, 'Chancen und Grenzen "erklärbarer Algorithmen" im Rahmen von Haftungsprozessen' in Daniel Zimmer (ed), Regulierung für Algorithmen und Künstliche Intelligenz (Nomos 2021, Baden-Baden) 173, DOI: https://doi.org/10.5771/9783748927990-149
[64] DSA Article 49(2).
[65] DSA Article 69(2)(d).
[66] DSA Article 69(5).
[67] Christoph Krönke, 'Art 69 - Befugnis zur Durchführung von Nachprüfungen' in Franz Hofmann, Benjamin Raue (eds), Digital Services Act (Nomos 2023, Baden-Baden) 945.
[68] Krönke (n 67).
[69] Krönke (n 67).
[70] DSA Article 72(1).
[71] Ranjana Andrea Achleitner, 'Durchsetzung: Befugnisse von und Zusammenarbeit mit Behörden' in Björn Steinrötter (ed), Europäische Plattformregulierung (Nomos 2023, Baden-Baden) 239.
[72] European Commission, 'Press Release, Digital Services Act: Commission is setting up new European Centre for Algorithmic Transparency' (2022) <https://digital-strategy.ec.europa.eu/en/news/digital-services-act-commission-setting-new-european-centre-algorithmic-transparency> accessed 15 October 2024.
[73] Achleitner (n 71) 236.
[74] European Commission (n 72).
Footnotes:
[1] The author is LL.M., Research Associate, Department of Innovation and Digitalisation in Law, University of Vienna. This paper was supported by the Jubilee Fund of the Austrian National Bank. https://orcid.org/0009-0006-9102-886X.