This study examines the transformative impact of artificial intelligence (AI) on modern judicial systems. It shows how courts around the world are moving from traditional paper-based procedures to digital and AI-assisted operations. Analyzing technological developments in legal practice, the study explores the benefits and challenges of AI integration. The main areas examined include gains in efficiency, transparency, and accountability, as well as concerns about ethical use, bias, data security, and privacy. Through case studies and policy recommendations from the United States, China, and Germany, the study shows how the responsible use of AI can be ensured. Finally, the research calls for further investigation, improved judicial education, and the development of stronger legal and ethical frameworks so that the future use of AI in judicial systems is well-founded and safe.
Keywords: Artificial Intelligence, judicial modernization, digital transformation, access to justice, ethical AI
This manuscript explores the transformative impact of artificial intelligence (AI) on modern judicial systems. It discusses how courts around the world are shifting from traditional paper-based processes to digital and AI-assisted operations. By examining the evolution of technology in legal practice, this work analyzes both the benefits and challenges associated with AI integration. Key areas of focus include improvements in efficiency, transparency, and accountability, as well as concerns regarding ethical use, bias, data security, and privacy. Drawing on examples from the United States, China, and Germany, the paper presents detailed case studies and policy recommendations to ensure that AI is deployed responsibly. The study concludes with a call for further research, enhanced judicial education, and stronger legal and ethical frameworks to guide the future use of AI in the justice system.
Keywords: Artificial Intelligence, judicial modernization, digital transformation, access to justice, ethical AI
Today, ensuring fair trials and protecting human rights and freedoms through the courts remains a pressing issue worldwide. According to the World Justice Project, more than 1.5 billion people lack access to justice. The restrictions and quarantine measures imposed during the coronavirus pandemic have exacerbated this problem, highlighting the importance of digital technologies and their potential for improving judicial systems.[2]
The pandemic has opened new avenues for increasing the efficiency of judicial systems through digitalization, which matters for the prompt and high-quality review of court cases. Uzbekistan is undergoing broad legal reforms and actively modernizing its institutions, positioning itself as a regional leader in Central Asia. As someone who has worked within the Supreme Court of Uzbekistan, I have a clear understanding of the country's judicial structure and operational challenges. This insider perspective allows for a deeper and more realistic analysis of the opportunities and limitations of judicial transformation through technology. Uzbekistan's position in the Rule of Law Index - ranked 78th globally in 2022 - underscores the need for improvement in judicial efficiency, accessibility, and public trust. Persistent issues such as corruption, outdated infrastructure, and high litigation costs continue to affect the judiciary's reputation. In this context, the introduction of artificial intelligence (AI) presents significant potential to enhance transparency, efficiency, and decision-making. However, it also raises critical concerns, including ethical dilemmas, bias, lack of accountability, security risks, and transparency challenges.[3]
Access to justice remains one of the most important issues facing contemporary societies. Inefficient procedures, inconsistent evidence handling, and protracted proceedings are common problems for courts around the world. These issues are especially acute in regions where high costs, poor infrastructure, and corruption impose further obstacles on underprivileged groups. In this regard, artificial intelligence (AI) is emerging as a game-changing instrument with the potential to transform legal procedures by strengthening accountability, guaranteeing consistency, and increasing efficiency. Digital solutions do, however, have drawbacks. The successful use of AI requires stable electricity, up-to-date equipment, dependable internet access, and adequate technical expertise - resources that might not be readily available in every location. If these infrastructure gaps are not addressed, digitalization risks widening rather than narrowing existing disparities.
The aim of this manuscript is to examine how AI can modernize judicial systems and address longstanding issues such as lengthy case backlogs and inconsistent decision-making. While digital technologies have already begun to change court operations, the integration of AI offers new possibilities for automating routine tasks, analyzing vast datasets, and detecting errors or biases in judicial decisions. However, with these opportunities come significant challenges, including concerns about data security, ethical use, and the transparency of AI-driven tools.
This paper is structured as follows. Section 2 details the opportunities AI offers in enhancing efficiency, transparency, and access to justice. Section 3 outlines the challenges and risks of implementing AI in legal contexts, while Section 4 presents detailed case studies from the United States, China, Germany, and Russia. Section 5 concludes the manuscript by summarizing key findings and suggesting directions for further study.
By examining these topics, the manuscript seeks to provide a balanced view of AI's potential to reform judicial systems while ensuring that its implementation respects ethical standards and upholds the core principles of justice.
What is AI? There are many ways to answer this question, but one place to begin is to consider the types of problems that AI technology is often used to address. In that spirit, we might describe AI as using technology to automate tasks that "normally require human intelligence". This description emphasizes that AI is often applied to tasks involving decision-making, pattern recognition, learning, or problem-solving - activities traditionally associated with human cognition. For instance, researchers have successfully applied AI to complex tasks such as playing chess, translating languages, and driving vehicles. What makes these AI tasks, rather than general automation, is the involvement of higher-order cognitive processes when performed by humans.[4]
In addition to this conceptual view, authoritative bodies such as the European Union and the OECD offer formal definitions. The EU defines AI systems as "software that is developed with one or more of the techniques and approaches... for a given set of human-defined objectives, [that] generates outputs such as content, predictions, recommendations, or decisions influencing the environments they interact with." Similarly, the OECD defines an AI system as "a machine-based system that can, for a given set of human-defined objectives, make predictions, recommendations, or decisions influencing real or virtual environments." These definitions help ground our understanding of AI in current policy and regulatory discourse.

Historically, judicial processes have relied on paper-based systems and in-person hearings. These methods, while rooted in tradition, often lead to inefficiencies, such as delayed case processing, loss of documents, and inconsistent handling of evidence. Paper records can be misplaced or damaged, and physical presence in court may not always be feasible, especially for litigants in remote areas. Moreover, manual processing of documents is labor-intensive and prone to human error. Over recent decades, many courts have begun shifting toward digital methods. Electronic filing (e-filing) systems allow lawyers to submit documents online, reducing the need for physical paperwork and speeding up the administrative process. Video conferencing has become an increasingly popular method for conducting hearings, especially during the COVID-19 pandemic, when traditional courtrooms were not accessible. Digital transformation has allowed for more efficient communication between courts, attorneys, and litigants, laying the groundwork for even more advanced technologies.
When it comes to public administration, one of the key advantages of using artificial intelligence (AI) in government is the potential to increase efficiency and productivity. By automating tasks that are time-consuming or prone to human error, AI enables government agencies to operate more effectively. For example, the Ministry of Defense uses AI to analyze and classify satellite images, allowing analysts to focus on more complex tasks. Similarly, the Internal Revenue Service (IRS) applies AI through natural language processing to automate responses to frequently asked questions, improving customer service delivery. Beyond efficiency, AI enhances decision-making by analyzing large datasets and identifying patterns or anomalies that may not be readily visible to human analysts. For instance, the National Weather Service leverages AI to improve weather forecasting and early warning systems, while the Centers for Medicare and Medicaid Services use AI to detect potential fraud, analyze care patterns, and process medical claims data more accurately and swiftly. However, despite its growing capabilities, AI remains fundamentally limited to recognizing statistical correlations and detecting patterns. It is highly effective in situations requiring fast, automated data processing - such as issuing identification cards, processing applications, or granting subsidies - where decisions are based on clearly defined rules. Yet, in contexts requiring deeper cognitive functions - such as evaluating complex evidence, interpreting conflicting claims, or balancing legal and ethical considerations - AI cannot replace human judgment. It lacks the ability to reason, prioritize values, or understand context in the way human decision-makers do. Recognizing these limitations is essential to ensuring AI is deployed responsibly in the public sector.

Judicial leaders around the world have expressed varied opinions on the digital transformation of courts. Many view the integration of AI as an inevitable step that will improve the legal system, while others caution against overreliance on technology. Chief Justice Sundaresh Menon of Singapore, for example, has stated that technology will be the most potent force reshaping the legal profession in the coming years.[5] His optimism is shared by those who see the benefits of faster case processing and more consistent decision-making.
Many judicial commentators agree that while embracing new technology is essential, it should not come at the expense of judicial independence or the fairness of legal proceedings. AI should serve as a support tool, aiding judges in their decision-making rather than replacing them. Courts must maintain a human oversight element, ensuring that every AI-generated recommendation is carefully reviewed by a human judge.[6]

Turning to the opportunity this presents for my own country: assistant judges in Uzbekistan's criminal courts - often serving as court clerks or "major assistants" - play a pivotal role in ensuring the smooth operation of judicial proceedings. They are tasked with preparing criminal cases for hearings, notifying all process participants about the time and location of each trial, verifying the attendance of summoned individuals, investigating and reporting any absences along with their reasons, and meticulously transcribing trial minutes.[7] In their current workflow, these routine administrative tasks cumulatively demand an estimated 53.7 hours per month. This calculation is based on handling an average workload of 23 cases per month, where case preparation typically takes 30 minutes, participant notifications require about 20 minutes, attendance checks add another 15 minutes per case, and the transcription of trial minutes - covering approximately two hours of court proceedings - consumes around 75 minutes (roughly 140 minutes per case when all components are accounted for).[8]
The promise of integrating advanced technologies into this process is substantial. Artificial intelligence (AI) can streamline these tasks dramatically: it can cut case preparation time by 50%, reducing it from 30 minutes to 15 minutes per case; automate notifications to save up to 80% of the time required, dropping the duration from 20 minutes to just 4 minutes; employ facial recognition or electronic check-ins to lower attendance tracking time by 90%, decreasing it from 15 minutes to a mere 2 minutes per case; and leverage real-time transcription technology to reduce the manual effort for transcribing trial minutes by 95%, trimming the time needed from 75 minutes to only 4 minutes per case. When these improvements are applied, the overall time required per case could drop to roughly 25 minutes, slashing the total monthly administrative burden to approximately 9.6 hours - a reduction of about 44.1 hours or an 82% decrease.
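These figures follow from straightforward arithmetic. As a quick check, the minimal sketch below reproduces them, using only the per-task durations and the 23-case monthly caseload stated above:

```python
# Back-of-the-envelope check of the workload figures cited above.
# Per-case task durations (in minutes) are those stated in the text.
CASES_PER_MONTH = 23

current = {"preparation": 30, "notifications": 20, "attendance": 15, "transcription": 75}
with_ai = {"preparation": 15, "notifications": 4, "attendance": 2, "transcription": 4}

def monthly_hours(per_case_minutes: dict) -> float:
    """Total monthly administrative time, in hours, for the given per-case durations."""
    return sum(per_case_minutes.values()) * CASES_PER_MONTH / 60

before, after = monthly_hours(current), monthly_hours(with_ai)
print(f"current: {sum(current.values())} min/case -> {before:.1f} h/month")  # 140 min -> 53.7 h
print(f"with AI: {sum(with_ai.values())} min/case -> {after:.1f} h/month")   # 25 min -> 9.6 h
print(f"saved:   {before - after:.1f} h/month ({1 - after / before:.0%})")   # 44.1 h, ~82%
```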
One of the most tangible benefits of AI in the judicial system is its ability to enhance efficiency. AI tools can process vast amounts of data quickly, reducing the time needed to review documents and manage case files. For instance, standardized AI-driven case tracking systems work by automatically indexing documents and ensuring that every piece of evidence is handled uniformly. This standardization minimizes errors and ensures that cases are processed consistently, regardless of the workload or individual judge's pace.
When judges and court staff use AI to sift through thousands of pages of digital records, they are less likely to miss critical details. This leads to more accurate and fair outcomes, as inconsistencies and errors can be flagged for human review. The increased efficiency also means that courts can reduce their case backlogs, leading to faster resolutions and improved access to justice.

However, the integration of AI into judicial processes must be approached with caution and guided by clear principles. The European Law Institute's (ELI) 'Standards for Judicial Independence' emphasize that the use of AI in courts must not undermine judicial autonomy or discretion. AI should be seen as a tool to assist - not replace - human decision-making. These standards call for transparency in how AI tools operate, explainability of AI-generated results, and the guarantee that judges remain ultimately responsible for the decisions rendered. By adhering to such safeguards, courts can embrace technological innovation while preserving the core values of justice.
AI's capacity for rapid data analysis means it can identify inconsistencies or biases in judicial decisions. For example, an AI system can analyze historical case data to determine whether similar cases are being treated differently. If it finds that evidence is handled in varying ways or that outcomes differ significantly without clear justification, this information can be used to prompt a review of judicial practices.
Such systems play a pivotal role in reinforcing the integrity of judicial decisions. When every case is processed under the same guidelines, it helps build public trust. Citizens are more likely to trust a system that demonstrates consistency and transparency. Furthermore, the ability to track and compare decisions across different cases provides a means to hold the judicial system accountable. Regular audits and reviews of AI-generated data can ensure that any deviations from standard practice are quickly addressed.
Beyond efficiency, AI holds significant promise for protecting human rights and combating corruption. In regions where human rights violations are frequent, AI tools can analyze large datasets to detect patterns of abuse. For instance, in China's "smart courts" initiative, AI-powered systems are used to monitor digital evidence and ensure that all documentation is processed uniformly. This helps to reduce the opportunity for corrupt practices and improves the overall transparency of the judicial process.
AI can also support international efforts to document and investigate systemic abuses. By rapidly processing and comparing data from numerous cases, AI systems can reveal patterns that may indicate broader issues, such as widespread corruption or discriminatory practices. This information can then be used to hold authorities accountable and to drive reforms aimed at protecting human rights.
One of the most significant challenges in integrating AI into the judiciary is the lack of explainability - commonly referred to as the "black box" problem - which poses a serious threat to transparency, accountability, and fairness in legal decision-making. Advanced AI models, particularly deep neural networks, often operate by processing enormous datasets and producing decisions without offering clear, human-understandable explanations of their reasoning. This opacity means that judges, lawyers, and litigants are frequently unable to trace how a particular conclusion was reached, which undermines the fundamental legal principle of due process. In contexts where decisions can have profound impacts on individuals' lives - such as bail determinations or sentencing - the inability to scrutinize the internal logic of an AI system raises concerns about potential biases and errors. For example, if an AI tool used to predict recidivism does not reveal how it weighs factors like prior records or demographic data, it becomes extremely challenging to contest its outputs in court or to ensure that it does not perpetuate existing inequalities.[9]

In response, scholars and practitioners have proposed several solutions. One promising avenue is the development and adoption of Explainable AI (XAI) methods that focus on creating models with interpretable outputs, allowing legal professionals to understand the decision pathways. Techniques such as model distillation and the use of surrogate models can provide simplified, approximate explanations of complex AI decisions. Another solution involves establishing strict regulatory frameworks and standards that mandate transparency in algorithm design and require regular audits of AI systems used in judicial processes. This could include open-sourcing certain components of the algorithm, or at the very least, requiring that independent experts are granted access to the training data and model parameters under protective orders. Additionally, integrating human oversight - where AI tools are used only as supportive aids rather than as decision-makers - ensures that judges remain "in the loop" and can override or question AI-generated recommendations when necessary. By combining these technical and procedural safeguards, the legal system can work toward harnessing the efficiency benefits of AI while mitigating the risks associated with its current lack of explainability.[10]
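To make the surrogate-model idea concrete, here is a minimal sketch that fits a shallow, human-readable decision tree to the predictions of a black-box classifier. Everything in it is an assumption for illustration - the random forest standing in for the black box, the feature names, and the synthetic data - not any deployed judicial tool:

```python
# Illustrative surrogate-model explanation (synthetic data, hypothetical features).
import numpy as np
from sklearn.ensemble import RandomForestClassifier  # stands in for the "black box"
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(0)
X = rng.random((1000, 3))  # hypothetical features: prior_offenses, age_norm, employment
y = (0.7 * X[:, 0] + 0.3 * rng.random(1000) > 0.5).astype(int)

black_box = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# Fit a shallow tree on the black box's *predictions* (not the true labels),
# so the tree approximates the model's decision pathway in readable form.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, black_box.predict(X))

print(export_text(surrogate, feature_names=["prior_offenses", "age_norm", "employment"]))
# Fidelity: how often the simple explanation agrees with the black box.
print("fidelity:", (surrogate.predict(X) == black_box.predict(X)).mean())
```

A low fidelity score would itself be a warning sign: it would mean the simplified explanation cannot be trusted as an account of how the underlying model actually decides.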
Algorithmic bias and discrimination represent critical challenges when implementing AI in judicial decision-making systems. AI algorithms are inherently dependent on the data used to train them, and if that data reflects historical prejudices or unbalanced distributions, the resulting outputs may inadvertently perpetuate discrimination. For instance, tools such as the COMPAS risk assessment system have been scrutinized for assigning higher risk scores to defendants from minority backgrounds compared to their white counterparts, even when the predictive error rates are comparable across groups. This bias can lead to disproportionate impacts on sentencing, bail decisions, and parole evaluations, undermining the principle of equal treatment under the law. In addition, the design choices made during algorithm development - such as the selection of features and weighting mechanisms - can further embed societal biases into AI systems. These issues are compounded by the "black box" nature of many AI models, which often leaves the reasoning behind bias undetected and uncorrected.[11]

To address these challenges, several solutions have been proposed. One critical measure is to improve the quality and diversity of training data by ensuring that it is representative of all demographic groups and that historical biases are identified and corrected during data pre-processing. Fairness-aware machine learning techniques, such as algorithmic debiasing and the use of fairness constraints during model training, can also be applied to mitigate bias in decision outputs. Additionally, independent audits and regular evaluations of AI systems should be mandated to assess their performance across different demographic groups, with clear standards established by legal and regulatory bodies. Transparency can be further enhanced by requiring developers to disclose key aspects of their algorithms and data sources - subject to protecting legitimate proprietary information - to allow for external validation by experts.[12] Lastly, maintaining robust human oversight remains essential; AI should be used as an aid rather than a replacement for judicial decision-making, ensuring that human judges can review, question, and override AI-generated recommendations when necessary. These multi-faceted approaches aim not only to reduce bias in AI outputs but also to build trust in AI tools as fair and reliable components of the judicial system.[13]
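As a concrete illustration of the kind of demographic audit described above, the sketch below computes false positive rates separately for two groups - the disparity at the centre of the COMPAS findings - over a handful of invented audit records:

```python
# Minimal group-fairness audit: compare false positive rates across groups.
# Records are invented: (group, predicted_high_risk, actually_reoffended).
from collections import defaultdict

records = [
    ("A", 1, 0), ("A", 1, 1), ("A", 0, 0), ("A", 1, 0),
    ("B", 0, 0), ("B", 1, 1), ("B", 0, 0), ("B", 0, 1),
]

false_pos = defaultdict(int)  # flagged high-risk but did not reoffend
negatives = defaultdict(int)  # everyone who did not reoffend

for group, predicted, actual in records:
    if actual == 0:
        negatives[group] += 1
        false_pos[group] += predicted

for group in sorted(negatives):
    print(f"group {group}: false positive rate = {false_pos[group] / negatives[group]:.2f}")
# A large gap between groups is the kind of disparate impact an independent
# audit should flag for review, even when overall accuracy looks similar.
```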
Data Privacy and Security Concerns represent a formidable challenge when integrating AI into judicial systems. AI tools in the legal arena require access to vast amounts of sensitive data - from personal details of litigants and witnesses to confidential case files and criminal records - which, if not properly secured, can be exposed to unauthorized access or cyberattacks. This heightened vulnerability poses significant risks: data breaches may not only lead to the compromise of individual privacy but can also undermine the integrity of judicial processes, potentially affecting outcomes if sensitive information is manipulated or leaked. Moreover, many AI systems are built on large datasets that may contain information collected without robust consent protocols, exacerbating ethical concerns regarding surveillance and data misuse. To address these challenges, robust encryption protocols and strict access controls must be implemented to protect stored data, and regular security audits should be mandated to identify and remedy vulnerabilities. Compliance with stringent regulatory frameworks - such as the EU's General Data Protection Regulation (GDPR) or national data protection laws like China's Data Security Law - is essential to ensure that data collection, processing, and storage practices meet high standards of privacy and security. In addition, transparency about how data is gathered and used, along with mechanisms for anonymization, can help build trust among judicial operators and the public.[14]
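As one concrete example of the anonymization measures mentioned above, the sketch below replaces a direct identifier with a keyed (salted) hash before a case record enters an analytics pipeline. The field names and the environment-variable key are assumptions; a real deployment would need proper key management and legal review:

```python
# Sketch: pseudonymize direct identifiers before case data reaches an AI pipeline.
# Field names and the key source are hypothetical.
import hashlib
import hmac
import os

SECRET_KEY = os.environ.get("PSEUDONYM_KEY", "demo-key-only").encode()

def pseudonymize(value: str) -> str:
    """Deterministic keyed hash: the same person maps to the same token
    (so records stay linkable), but the name is not recoverable without the key."""
    return hmac.new(SECRET_KEY, value.encode(), hashlib.sha256).hexdigest()[:16]

record = {"case_id": "C-2024-0815", "litigant_name": "Jane Doe", "claim": "contract dispute"}
safe_record = {**record, "litigant_name": pseudonymize(record["litigant_name"])}
print(safe_record)  # name replaced by an opaque token
```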
Transparency and accountability in AI systems present a critical challenge in judicial applications because the decision-making processes of many AI models are not readily visible to users, which complicates both verification and contestation of AI outputs. In many cases, AI systems are built on complex algorithms and vast datasets, resulting in outcomes that can be difficult to audit or explain. This "opacity" poses a threat to the legitimacy of judicial decisions, as litigants and oversight bodies must be able to scrutinize how conclusions were reached, particularly when those decisions have significant impacts on individuals' rights and freedoms. To address this challenge, legal systems can mandate the disclosure of sufficient details about AI algorithms, including the data sources used, the model's decision criteria, and its error rates - subject to protections for proprietary information. Independent audits and peer reviews by technical experts should be required, ensuring that any AI system used in court is rigorously validated and that its performance can be reliably challenged if necessary. Regulatory frameworks such as the EU's General Data Protection Regulation (GDPR) and emerging guidelines like those developed by UNESCO advocate for explainability and accountability, providing models for how these systems can be made more transparent.[15] By combining technical measures like explainable AI (XAI) techniques with robust legal oversight, the judiciary can work to ensure that AI tools enhance efficiency and fairness while preserving the accountability essential to maintaining public trust in the legal process.[16]
Threats to judicial independence and adequate human oversight emerge as critical challenges when integrating AI into court processes. The risk is that overreliance on AI-generated recommendations could gradually erode the autonomy of judges by shifting the balance of decision-making authority from human expertise to algorithmic outputs. If judges come to depend on automated systems for routine tasks or even substantive decision-making, there is a danger that the distinctive human judgment - shaped by legal tradition, ethical considerations, and contextual nuance - could be diminished.[17] Moreover, if AI systems operate without clear accountability mechanisms, it becomes difficult to assign responsibility for errors or biases that may influence legal outcomes. This is particularly worrisome in high-stakes cases where even small errors can have profound implications on individuals' rights and the public's trust in the judicial system. Such risks are exacerbated by proprietary "black box" models that do not offer sufficient transparency into their decision-making processes, leaving judges and oversight bodies without the necessary information to evaluate or challenge the outputs effectively.[18]
To mitigate these threats, several measures can be implemented to preserve judicial independence and ensure robust human oversight. First, AI systems should be deployed only as advisory tools rather than decision-makers, ensuring that final judgments rest with human judges who are fully accountable for their decisions. Regulatory frameworks and court policies - such as those recently introduced by the Delaware Supreme Court - can mandate that AI-generated inputs be thoroughly vetted and that any use of AI in drafting opinions or managing case files is accompanied by explicit disclosures and human review. Moreover, specialized training programs for judicial officers on the strengths and limitations of AI can empower them to critically assess AI recommendations, preventing blind reliance on automated outputs. Independent audits and the creation of oversight committees comprising legal and technical experts can further help to verify the performance, reliability, and fairness of AI tools. By ensuring transparency through requirements to disclose the data sources and methodologies behind AI systems - while balancing the need to protect proprietary information - judicial systems can foster a culture of accountability that upholds both the integrity of the legal process and the autonomy of human decision-makers.[19]
Integration with existing judicial processes poses a multifaceted challenge when incorporating AI into court systems. Many judicial institutions rely on legacy systems, established procedures, and long-standing practices that were not designed for digital transformation. This misalignment can create significant technical and operational hurdles when attempting to adopt new AI tools. For instance, courts must deal with data migration from paper-based or outdated electronic systems to modern, secure digital platforms that support AI functions. Moreover, integrating AI requires that the current case management workflows, document handling procedures, and evidence review processes be re-engineered to accommodate automated tasks, all while ensuring that legal standards and due process are maintained. There is also a cultural aspect to this integration: judges, clerks, and legal staff often need to adapt to new ways of working and trust technology that may initially seem opaque or unreliable. This resistance to change can further complicate the smooth transition toward a digital judiciary.[20]
To address these issues, a phased and well-planned integration strategy is essential. Courts can start by piloting AI tools in non-critical areas such as document review or routine administrative tasks, allowing stakeholders to become familiar with the technology without risking core judicial functions. Comprehensive training programs should be developed to build digital literacy among judicial operators, ensuring that they understand both the capabilities and limitations of the AI systems they are using. Furthermore, close collaboration between IT professionals, legal experts, and policymakers is necessary to develop standards and guidelines that align AI implementation with existing legal procedures. Upgrading legacy systems to ensure compatibility with modern digital platforms is also critical; this may involve substantial investment in new hardware and software infrastructure. Finally, establishing clear oversight mechanisms, including independent audits and periodic reviews, can help ensure that AI tools are seamlessly integrated into judicial processes without compromising the integrity, transparency, or reliability of legal decision-making.[21]
The COMPAS (Correctional Offender Management Profiling for Alternative Sanctions) system, developed by Northpointe (now Equivant), is a widely used risk assessment tool in many U.S. jurisdictions designed to estimate the likelihood of a defendant reoffending. By analyzing factors such as criminal history, age, education, employment status, and responses to a detailed questionnaire, COMPAS generates a risk score that judges use to inform decisions on sentencing, parole, and probation. Proponents argue that COMPAS introduces objectivity into the judicial process, reducing reliance on subjective judgments and potentially mitigating human biases.

However, a landmark 2016 investigation by ProPublica revealed significant flaws in the system. While COMPAS correctly predicted general recidivism about 61% of the time, its accuracy dropped to just 20% for violent reoffending. More critically, the investigation found that the algorithm exhibited racial bias: Black defendants were more likely to be incorrectly classified as high-risk, while white defendants were more often incorrectly labeled as low-risk. These findings ignited a fierce debate about the fairness, transparency, and ethical implications of using AI in criminal justice, particularly when such tools may perpetuate or exacerbate systemic inequalities. Critics argue that even if race is not explicitly included as a variable, the historical data used to train the algorithm may embed implicit biases, reflecting and reinforcing existing disparities in the criminal justice system.[22]
In response to these concerns, some jurisdictions have taken steps to address the limitations of COMPAS and similar AI tools. For example, states like Wisconsin and California have implemented measures to ensure regular audits and reviews of algorithmic risk assessment tools to detect and mitigate bias. Additionally, there is growing advocacy for greater transparency in how these algorithms operate, including calls for open-source models that allow independent scrutiny of their design and outcomes. The COMPAS case underscores the potential benefits of AI in improving judicial efficiency and consistency, but it also highlights the critical need for robust oversight, accountability, and ethical safeguards. Without these measures, AI tools risk amplifying existing inequities, undermining public trust in the justice system. This case study serves as a cautionary tale, emphasizing that while technology can enhance judicial processes, its implementation must be carefully managed to ensure fairness, accuracy, and respect for civil liberties.[23]
China. Under the 14th five-year plan, Chinese courts will upgrade to the fourth generation of smart courts by 2025. China is rolling out changes to monitor judges, streamline court procedures, and boost judicial credibility that could result in the world's first AI-integrated legal system.[24] China's Smart Courts Initiative stands as a bold transformation of the judicial landscape, where advanced technologies such as artificial intelligence, blockchain, and big data analytics are woven into the fabric of the legal process. In this system, routine tasks - from assigning cases based on judges' expertise and workload to verifying and securing digital evidence - are automated with remarkable efficiency. For instance, some court hearings are now 67% shorter, and overall case resolution times dropped by 25% between 2017 and 2019.[25] The immutable nature of blockchain helps ensure that digital evidence remains secure and tamper-proof, thereby enhancing transparency and mitigating corruption risks. Moreover, platforms like China Mobile Micro Court have revolutionized access to justice by facilitating remote hearings and online dispute resolution, making the legal system more accessible even as it pushes the boundaries of technological integration. Yet, these sweeping reforms are not without their challenges - issues such as data privacy, algorithmic bias, and the digital divide, especially in rural regions, continue to spark debate. Critics also caution that the drive for efficiency might sometimes compromise due process, particularly in politically sensitive cases, calling for a careful balance between innovation and the protection of individual rights.[26]
In comparison, other countries are charting their own courses in judicial digitization. Singapore, for example, has embraced AI-driven legal research and predictive analytics, focusing on refining case prediction and streamlining legal procedures while maintaining a robust framework that upholds transparency and accountability.[27] Brazil is experimenting with blockchain technology to bolster evidence integrity, albeit on a more experimental basis that reflects its decentralized approach to digital reforms.[28] The United States, despite being a leader in legal technology, has yet to consolidate these tools into a nationwide system comparable to China's integrated model; instead, it relies on a patchwork of AI tools for case management and legal research, which, while effective in isolated applications, lack the cohesive, end-to-end digital transformation seen in China. Ultimately, as nations around the world grapple with the challenges of modernizing their judicial systems, China's initiative offers both inspiration and a cautionary tale - a vivid demonstration of how technology can dramatically enhance judicial efficiency and accessibility, while also underscoring the critical need for safeguards to ensure fairness and protect individual rights.[29]
Germany has emerged as a European frontrunner in judicial digitalization, with a transformative mandate that will require all civil, administrative, social, and criminal proceedings to be managed via electronic file management systems (the e-Akte) by January 1, 2026. This sweeping reform is designed to modernize the country's judicial infrastructure by shifting from traditional paper-based processes to a fully digital workflow. The e-Akte system will allow for secure, centralized storage of case files, enabling multiple users - including judges, clerks, and legal professionals - to access and process documents from anywhere, thereby reducing physical document handling, cutting administrative delays, and lowering overall operational costs. Early initiatives, such as the electronic mailbox for lawyers (beA) and the integration of secure digital communication channels, have already laid a solid foundation for this transformation. These efforts are in line with broader EU digitalization strategies, reinforcing Germany's commitment to efficiency, transparency, and interoperability in its legal processes.[30]
In addition to the mandatory e-Akte implementation, Germany is actively exploring the integration of advanced technologies like artificial intelligence (AI) to further enhance judicial operations. For instance, the judicial system in Hamburg is currently piloting an AI-based assistive system at the Landgericht, which is capable of automatically categorizing and indexing incoming civil documents, thus significantly accelerating case processing and reducing the workload on court staff.[31] Such AI tools promise to support the extraction of metadata, streamline routine tasks, and even facilitate remote hearings via videoconference, ensuring that judges can focus more on substantive legal analysis rather than administrative burdens. Moreover, digitalization initiatives are being guided by comprehensive expert frameworks - such as those outlined in the CMS Expert Guide to Digital Litigation in Germany - which highlight the benefits of standardized electronic case files for improving access to justice and expediting dispute resolution.[32]

The integration of AI in German courts is expected to lead to more consistent and transparent judicial decisions. By ensuring that every document is processed in a standardized manner, AI can help minimize errors and improve the overall quality of legal outcomes. However, successful implementation depends on robust cybersecurity measures and ongoing training for judicial staff to adapt to these new technologies. Together, these measures underscore a holistic approach to reform that not only meets EU mandates but also sets a benchmark for future judicial innovation across Europe.
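The internals of the Hamburg pilot are not public. Purely as a sketch of the general technique - categorizing incoming filings so they can be indexed and routed automatically - the following trains a bag-of-words classifier on invented categories and example texts:

```python
# Generic sketch of document categorization for incoming court filings.
# Categories and training texts are invented; this is not the Hamburg system.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

train_docs = [
    "statement of claim regarding an unpaid invoice",
    "defendant's response disputing the contract terms",
    "expert opinion on alleged construction defects",
    "motion to extend the filing deadline",
]
train_labels = ["claim", "response", "evidence", "motion"]

classifier = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
classifier.fit(train_docs, train_labels)

incoming = "motion requesting an extension of the deadline for submissions"
print(classifier.predict([incoming])[0])  # expected: 'motion' -> indexed and routed accordingly
```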
In Russia, the development of artificial intelligence and the corresponding regulation is at an early stage. However, the Russian authorities and individual legal entities are making serious efforts to develop this industry as quickly as possible. There is no legal definition of predictive policing in Russia; in legal doctrine and the media, however, the term is sometimes used to refer to a preventive strategy based on computer calculations, with the help of which the police can assess the risk that certain crimes will be committed in certain places. Nevertheless, systems based on artificial intelligence are already used extensively in predictive policing and related fields. The modern application of artificial intelligence in any sphere of activity is impossible without working with Big Data. To this end, the Ministry of Internal Affairs of Russia, together with leading research centers and start-ups, holds large joint conferences on the most relevant breakthrough approaches to the use of artificial intelligence and Big Data in combating crime; in December 2021, the second major event of this kind was held at the Academy of Management of the Ministry of Internal Affairs of Russia.

In Russia, the greatest breakthrough has been achieved in the use of artificial intelligence algorithms for video surveillance. In Moscow, the capital, about 70% of all registered crimes are solved using this technology. The Safe City program is a complex of software and hardware systems and organizational measures that provide video and technical security through surveillance. Part of this system is the FindFace Security face recognition system, created by the Russian company NtechLab in 2015. In Moscow alone, more than 178,000 cameras are connected to the face recognition system. The city's video surveillance network includes cameras installed in courtyards, at the entrances of residential buildings, in parks, schools, clinics, shops, and construction sites, as well as in office buildings and other public places. As noted on the NtechLab website, the main goals of the program are advanced analytics, the search for offenders, the search for missing people, the safety of public events, and transport security; in practice, the system is primarily used to search for criminal suspects.

As a result of the implementation of this program, an information and analytical system for monitoring the crime situation (IASMCS) appeared in Moscow. The IASMCS analyzes records on criminal and administrative offenses, road accidents, economic crimes, and more. In partnership with executive authorities and law enforcement in Moscow, it enables ongoing monitoring of the city's crime situation. As a result, targeted interventions have led to a 23% reduction in crime over a ten-month period. Moreover, during events such as the 2018 FIFA World Cup, FindFace Security helped detain over 180 individuals listed in offender databases. The system was also deployed to monitor quarantine compliance during the COVID-19 pandemic and to locate participants in the winter 2021 protest demonstrations.
It is worth exploring comparable systems abroad. Turkey's National Judiciary Informatics System (UYAP) - originally designed for e-filing, case management, and basic decision support - has since integrated artificial-intelligence components. According to official sources and academic studies, UYAP now includes a decision support module that can issue pop-up warnings (e.g. alerting staff if data entered corresponds to fugitives) and suggest procedural corrections to users, preventing errors and improving efficiency. More recently, new projects have added predictive analytics - for example, flagging organizational affiliations in terrorism cases - automatically cross-referencing input data with security databases as part of a broader AI strategy within UYAP.[33] While Moscow's system and UYAP differ in scope and context, they illustrate common themes: data centralization, AI-enhanced error detection, and the potential for efficiency gains. However, as Turkey's experience shows, AI-powered classification tools - like those used to predict links to terrorist organizations - may raise serious concerns about due process, bias, and the presumption of innocence.[34] This comparative insight offers readers a broader perspective on how judicial AI tools operate across jurisdictions.[35]
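UYAP's actual implementation is not documented publicly; the sketch below merely illustrates the cross-referencing pattern described - raising a pop-up-style warning when entered party data matches a watchlist. The names, fields, and exact-match rule are all invented:

```python
# Sketch of a watchlist cross-check of the kind UYAP's warnings are said to
# perform. Watchlist entries, fields, and the matching rule are invented.
WATCHLIST = {("DOE", "JOHN", "1984-03-01"), ("ROE", "RICHARD", "1990-11-23")}

def check_party(last: str, first: str, dob: str) -> str | None:
    """Return a warning message if the entered party matches the watchlist."""
    key = (last.strip().upper(), first.strip().upper(), dob)
    if key in WATCHLIST:
        return f"WARNING: {first} {last} ({dob}) matches a fugitive record - verify before proceeding."
    return None

warning = check_party("Doe", "John", "1984-03-01")
if warning:
    print(warning)  # in a real case-management UI this would surface as a pop-up
```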
The integration of artificial intelligence (AI) into judicial systems presents a transformative opportunity to modernize justice worldwide. As we have explored, AI can significantly enhance efficiency by automating routine tasks such as case preparation, notifications, attendance tracking, and transcription. For instance, AI-driven tools can reduce the time assistant judges in Uzbekistan spend on these tasks from 53.7 hours to just 9.6 hours per month, a reduction of 82%. This allows judges to focus more on substantive legal analysis and decision-making, ultimately leading to faster and more consistent case resolutions.

Beyond efficiency, AI offers the potential to enhance transparency and accountability in the courts. By analyzing vast datasets, AI can identify inconsistencies or biases in judicial decisions, prompting reviews and ensuring fair treatment for all. This capability is crucial for building public trust in the justice system and supporting efforts to combat corruption and protect human rights.

However, the path to integrating AI into the judiciary is fraught with challenges. Ethical concerns, such as bias and the lack of transparency in AI decision-making, must be addressed to ensure that AI tools uphold the principles of justice. The case of COMPAS in the United States highlights the risks of relying on AI systems that may inadvertently perpetuate existing biases. To mitigate these risks, it is essential to develop AI models that are not only accurate but also explainable, allowing judges and the public to understand and, if necessary, challenge the AI's recommendations.

Data security and privacy are also critical issues. Courts handle highly sensitive information, and any breach could have severe consequences. Robust cybersecurity measures, strong encryption, and strict access controls are necessary to protect this data. Additionally, AI systems must comply with data protection regulations like the GDPR in Europe to ensure the privacy of individuals is safeguarded.

To navigate these challenges, comprehensive legal and ethical frameworks are indispensable. These frameworks should address issues such as data privacy, cybersecurity, accountability, and bias, providing clear guidelines for the responsible use of AI in the judiciary. Regulatory bodies or panels should be established to oversee AI implementation, ensuring compliance with these standards and taking corrective action when necessary.[36] Moreover, the integration of AI with traditional judicial processes must be approached thoughtfully. AI should serve as a support tool, enhancing human decision-making rather than replacing it. Pilot projects and case studies from various jurisdictions can provide valuable insights into best practices for creating a hybrid system where AI handles routine tasks while judges retain authority over complex issues.

Finally, international collaboration is key to advancing the responsible use of AI in the judiciary. By comparing experiences from different countries, such as the United States, China, and Germany, we can identify common challenges and successful strategies. Collaborative research projects and international conferences can facilitate the exchange of ideas and promote the development of global standards for ethical AI use in the courts.
In conclusion, while AI holds immense promise for revolutionizing judicial systems, its implementation must be guided by a commitment to ethical responsibility and legal oversight. By developing robust frameworks, investing in continuous training for legal professionals, and fostering international collaboration, we can harness the power of AI to create more efficient, transparent, and fair justice systems. The future of justice depends on our ability to balance innovation with the fundamental rights of citizens, ensuring that AI serves as a tool for empowerment and equity in the pursuit of justice.
J. Angwin - J. Larson - S. Mattu - L. Kirchner: Machine Bias - There's Software Used across the Country to Predict Future Criminals. And It's Biased against Blacks. ProPublica (May 23, 2016).
Angela Jin - Niloufar Salehi: (Beyond) Reasonable Doubt: Challenges that Public Defenders Face in Scrutinizing AI in Court. 2024. arXiv preprint arXiv:2403.13004 (March 2024). Available at: https://arxiv.org/abs/2403.13004.
Sayash Kapoor - Peter Henderson - Arvind Narayanan: Promises and pitfalls of artificial intelligence for legal applications. 2024. arXiv preprint arXiv:2402.01656. Available at: https://arxiv.org/abs/2402.01656.
Jinqi Lai - Wensheng Gan - Jiayang Wu - Zhenlian Qi - Philip S. Yu: Large Language Models in Law: A Survey. 2023. arXiv preprint arXiv:2312.03718. https://arxiv.org/abs/2312.03718.
A. D. Selbst et al.: Fairness and Abstraction in Sociotechnical Systems. Proceedings of the 2019 Conference on Fairness, Accountability, and Transparency, 2019, 59-68.
Changqing Shi - Tania Sourdin - Bin Li: The Smart Court - A New Pathway to Justice in China? International Journal for Court Administration, 2021 (1). https://ssrn.com/abstract=3778345.
Vladislav Gubko - Margarita Novogonskaya - Pavel Stepanov - Maria Yundina: AI and Administration of Justice in Russia. https://www.penal.org/sites/default/files/files/A-07-23.pdf.
https://worldjusticeproject.org/rule-of-law-index/insights.
https://worldjusticeproject.org/rule-of-law-index/global.
https://lexiconlegal.in/ai-takes-the-gavel-contract-laws-new-sidekick-in-automated-decision-making/.
https://en.wikipedia.org/wiki/Ethics_of_artificial_intelligence.
https://en.wikipedia.org/wiki/Automated_decision-making.
https://hdsr.mitpress.mit.edu/pub/hzwo7ax4/release/7.
https://lexir.co/2024/09/20/chinas-smart-courts-initiative-a-case-study-in-ai-integration/.
https://www3.weforum.org/docs/WEF_Blockchain_Government_Transparency_Report.pdf.
https://www.americanbar.org/groups/law_practice/resources/tech-report/.
https://cms.law/en/int/expert-guides/cms-expert-guide-to-digital-litigation/germany. ■
NOTES
[1] PhD student, Doctoral School of Law and Political Sciences, Károli Gáspár University of Reformed Church in Hungary.
[2] https://worldjusticeproject.org/rule-of-law-index/insights.
[3] https://worldjusticeproject.org/rule-of-law-index/global.
[4] https://oecd.ai/en/wonk/ai-system-definition-update.
[6] https://lexiconlegal.in/ai-takes-the-gavel-contract-laws-new-sidekick-in-automated-decision-making/.
[7] https://lex.uz/docs/-111460.
[9] https://en.wikipedia.org/wiki/Ethics_of_artificial_intelligence.
[10] Jinqi Lai - Wensheng Gan - Jiayang Wu - Zhenlian Qi - Philip S. Yu: Large Language Models in Law: A Survey. 2023. arXiv preprint arXiv:2312.03718. https://arxiv.org/abs/2312.03718 (Accessed February 2025.).
[11] J. Angwin - J. Larson - S. Mattu - L. Kirchner: Machine Bias - There's Software Used across the Country to Predict Future Criminals. And It's Biased against Blacks. ProPublica (May 23, 2016).
[12] A. D. Selbst et al.: Fairness and Abstraction in Sociotechnical Systems. Proceedings of the 2019 Conference on Fairness, Accountability, and Transparency, 2019, 59-68.
[13] Lai - Gan - Wu - Qi - Yu, Ibid.
[14] https://en.wikipedia.org/wiki/Automated_decision-making.
[15] EU General Data Protection Regulation (GDPR), Regulation (EU) 2016/679.
[16] https://en.wikipedia.org/wiki/Automated_decision-making.
[18] Angela Jin - Niloufar Salehi: (Beyond) Reasonable Doubt: Challenges that Public Defenders Face in Scrutinizing AI in Court. 2024. arXiv preprint arXiv:2403.13004 (March 2024). Available at: https://arxiv.org/abs/2403.13004 (Accessed February 2025).
[19] Jin - Salehi, Ibid.
[20] https://en.wikipedia.org/wiki/Automated_decision-making.
[21] Sayash Kapoor - Peter Henderson - Arvind Narayanan: Promises and pitfalls of artificial intelligence for legal applications. 2024. arXiv preprint arXiv:2402.01656. Available at: https://arxiv.org/abs/2402.01656 (Accessed: February 2025).
[22] https://www.propublica.org/article/how-we-analyzed-the-compas-recidivism-algorithm.
[23] https://hdsr.mitpress.mit.edu/pub/hzwo7ax4/release/7.
[25] https://lexir.co/2024/09/20/chinas-smart-courts-initiative-a-case-study-in-ai-integration/.
[26] Changqing Shi - Tania Sourdin - Bin Li: The Smart Court - A New Pathway to Justice in China? International Journal for Court Administration, 2021 (1). https://ssrn.com/abstract=3778345.
[28] https://www3.weforum.org/docs/WEF_Blockchain_Government_Transparency_Report.pdf.
[29] https://www.americanbar.org/groups/law_practice/resources/tech-report/.
[32] https://cms.law/en/int/expert-guides/cms-expert-guide-to-digital-litigation/germany.
[34] https://dergipark.org.tr/tr/pub/deuhfd/issue/54409/704837.
[35] Vladislav Gubko - Margarita Novogonskaya - Pavel Stepanov - Maria Yundina: AI and Administration of Justice in Russia. https://www.penal.org/sites/default/files/files/A-07-23.pdf.
[36] Angwin-Larson-Mattu-Kirchner, Ibid.