Artificial Intelligence in Judicial Decision-Making: Can a Robot Replace a Judge?
Abstract
This article examines the concept of the “robot judge” and evaluates the legal, ethical, and human rights implications of using artificial intelligence in judicial decision-making. The study explores whether AI can partially or fully perform judicial functions and assesses the extent to which algorithmic tools may be integrated into courts without undermining the fundamental principles of justice. The article is based on doctrinal legal analysis, comparative review, and a human-rights-oriented approach. It distinguishes between administrative automation, decision-support systems, and fully automated adjudication, arguing that these forms of technological involvement raise different levels of legal concern. The paper demonstrates that AI may offer important benefits for judicial systems, including greater efficiency, faster case processing, improved consistency, and enhanced access to justice, especially in repetitive or low-value disputes. At the same time, the article identifies serious risks associated with algorithmic bias, lack of transparency, limited explainability, accountability gaps, and threats to the right to a fair trial. Attention is given to the relationship between AI and judicial discretion, emphasizing that legal reasoning is not a purely mechanical exercise but a process involving interpretation, contextual evaluation, proportionality, and moral judgment. The article concludes that AI should not replace human judges in the exercise of final judicial authority. A legally acceptable model is the use of AI as a supportive instrument under meaningful human supervision, clear regulatory safeguards, transparency requirements, and effective mechanisms of review. Such an approach best reconciles technological innovation with the rule of law and the protection of human dignity.
Keywords: Algorithmic bias, judicial discretion, fair trial.
Introduction
The rapid advancement of digital technologies and the expanding use of artificial intelligence (AI) across public and private sectors have fundamentally reshaped contemporary legal and institutional frameworks. This transformation is particularly visible in the field of justice, where increasing demands for efficiency, speed, consistency, and accessibility have encouraged the adoption of data-driven and algorithmic tools. The use of AI in judicial systems is no longer a speculative or purely theoretical matter. It is already associated with functions such as case allocation, legal research, document analysis, risk assessment, and other forms of decision support within courts. Against this background, the concept of the “robot judge” has emerged as one of the most provocative and controversial issues in modern legal scholarship. It raises a central question: can artificial intelligence partially or fully perform the functions traditionally exercised by a human judge?
The importance of this question lies not only in technological progress but also in the unique constitutional and social role of the judiciary. The administration of justice is among the most sensitive functions of the state, as it is directly connected with the rule of law, legal certainty, public confidence in courts, and the effective protection of human rights. Judicial decision-making cannot be reduced to the mechanical processing of information. It involves the interpretation of legal norms, the contextual assessment of facts, the balancing of competing principles and interests, the exercise of discretion, and the articulation of reasons capable of justifying the outcome. In this sense, the debate on the “robot judge” concerns far more than the possibility of increasing institutional efficiency. It concerns the very nature of adjudication and whether algorithmic systems can be reconciled with the normative, moral, and human dimensions of justice.
The term “robot judge” itself is not univocal. It may refer to a fully automated system that independently renders judicial decisions; it may describe algorithmic systems that support judges in analysis and reasoning; or it may function as a broader metaphor for the increasing automation of judicial processes. This conceptual ambiguity makes careful legal analysis particularly necessary. It is essential to distinguish between administrative automation, algorithmic decision-support, and autonomous adjudication, as each of these models raises different legal, ethical, and institutional concerns. The central issue is therefore not simply whether technology can be introduced into the judicial sphere, but under what conditions such use remains compatible with the fundamental principles of justice.
This article aims to examine the concept of the “robot judge” and to assess the extent to which AI may be integrated into judicial decision-making without undermining the foundational values of the legal order. More specifically, the article seeks, first, to identify the potential advantages of AI in the administration of justice and, second, to determine the legal and ethical limits that must constrain its use. The article also addresses whether AI may realistically be viewed as a substitute for the human judge or whether it should instead be understood solely as a supportive tool operating under meaningful human control.
The research is guided by several key questions. Can artificial intelligence partially or fully perform judicial functions? What legal, ethical, and human rights risks accompany the use of AI in judicial decision-making? How does the increasing reliance on algorithmic systems affect judicial discretion, independence, and the requirement that decisions be reasoned and reviewable? Finally, what model of AI integration can be regarded as normatively acceptable in contemporary justice systems? These questions are of particular importance because the introduction of AI into adjudication cannot be evaluated solely in terms of efficiency or innovation; it must also be assessed against the standards of fair trial, accountability, transparency, equality, and human dignity.
Methodologically, the article is based on doctrinal legal analysis, comparative legal research, and a human-rights-oriented approach. It examines the theoretical foundations of artificial intelligence and the “robot judge”, analyzes current forms of AI use in judicial systems, and considers relevant comparative practices. At the same time, it evaluates the principal risks associated with algorithmic bias, opacity, limited explainability, accountability deficits, and possible interference with fair trial guarantees. Attention is devoted to judicial discretion, since legal reasoning is not merely a formal or technical exercise but also a context-sensitive and value-laden process that cannot easily be translated into algorithmic logic.
The central argument of the article is that artificial intelligence may serve as a valuable auxiliary instrument in the administration of justice, but it should not replace the human judge as the final bearer of judicial authority. While AI may improve efficiency, consistency, and the management of judicial workloads, adjudication remains a fundamentally human responsibility. It requires interpretation, proportionality assessment, moral judgment, responsibility, and the capacity to provide reasons in a manner that is intelligible and legitimate for the parties and for society. Accordingly, the “robot judge” should be approached not as an inevitable endpoint of technological development, but as a legal and ethical challenge that demands careful normative boundaries.
The structure of the article reflects this objective. It first outlines the theoretical foundations of artificial intelligence and the concept of the “robot judge”. It then examines existing forms of AI use in judicial practice and the potential advantages associated with such technologies. The following sections analyze the principal legal, ethical, and human rights risks, with separate attention devoted to the relationship between AI and judicial discretion. Finally, the article considers possible future models of AI integration in justice and proposes recommendations for legal policy. In doing so, it argues that technological innovation in the judicial sphere is acceptable only where it remains subject to human oversight, clear legal safeguards, and rights-based limitations.
Methodology
This article employs a doctrinal legal method as its primary research approach, supplemented by a comparative legal analysis and a human-rights-oriented analytical framework. The doctrinal method is used to examine the concept of the “robot judge” through the interpretation of legal principles, scholarly literature, and normative standards relevant to judicial decision-making, fairness, accountability, and the rule of law. This method was selected because the subject of the study is fundamentally normative in nature and requires an assessment of whether the use of artificial intelligence in adjudication is compatible with the core principles of justice.
In addition, the article applies a comparative approach to review how different jurisdictions and legal systems address the use of artificial intelligence in courts. Comparative analysis makes it possible to identify common tendencies, regulatory differences, and emerging models of technological integration in judicial practice. This method was chosen because the issue of AI in justice is developing unevenly across countries, and a comparative perspective helps to evaluate both practical experiences and normative responses.
The article also adopts a human-rights-based analytical perspective, particularly in relation to the right to a fair trial, judicial independence, equality, transparency, and data protection. This approach is necessary because the use of AI in judicial decision-making cannot be assessed solely from the standpoint of efficiency; it must also be examined in light of fundamental rights and procedural guarantees.
The combination of these methods supports the aims of the research by enabling a structured evaluation of the legal nature, practical implications, and normative limits of artificial intelligence in the administration of justice.
Theoretical Foundations of Artificial Intelligence and the Concept of the “Robot Judge”
There is still no single and universally accepted definition of artificial intelligence; however, in contemporary scholarly and institutional discourse, AI is generally understood as a machine-based system capable of generating predictions, recommendations, content, or decisions on the basis of input data in ways that affect real or virtual environments.[1] This understanding makes clear that AI is not a single technology, but rather an umbrella concept that encompasses different methods, models, and functional capacities whose common feature is the ability to process information in a manner that, at least partially, resembles human intellectual activity. For this reason, any discussion of AI in the judicial sphere must be based not only on a technical understanding of the technology but also on a functional analysis of the role that the system plays in the decision-making process and the extent of its autonomy.[2]
One of the most important branches of artificial intelligence is machine learning, which refers to the development and use of computer systems that adapt and learn from data with the goal of improving accuracy.[3] Unlike traditional programming, where a machine follows a fixed set of explicitly predefined instructions, machine learning systems detect patterns in data and build models that allow them to generate predictions or assessments. At the current stage of technological development, generative AI has also become especially significant. According to NIST, generative AI refers to a class of AI models that emulate the structure and characteristics of input data in order to generate synthetic content, including text, images, audio, video, and other forms of digital material. This category is particularly relevant for judicial settings, because it is capable not only of classification or prediction, but also of simulating legal reasoning, drafting, and justification.[4]
At the same time, it is essential to distinguish between automation and intelligent decision support. Automation usually concerns the execution of predefined, repetitive, and technical tasks, such as the registration of cases, document sorting, or the management of procedural deadlines. Intelligent decision support, by contrast, is a more complex category: the system processes data, identifies patterns, produces recommendations, and may influence the reasoning of the decision-maker. The key difference lies in the fact that, in the first case, technology replaces only an administrative or procedural action, whereas in the second case, it touches upon the substantive core of legal reasoning itself. For that reason, when AI is introduced into judicial systems, the crucial question is whether the system is merely a technical tool or whether it affects the normative content of adjudication.[5]
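To make this distinction concrete, the following minimal Python sketch contrasts the two categories under stated assumptions: the appeal-deadline rule, the sample filings, and their labels are all invented for illustration and do not describe any real court system.

```python
# Illustrative contrast between administrative automation and intelligent
# decision support. All rules, filings, and labels below are invented.
from datetime import date, timedelta

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Automation: a fixed, explicitly predefined procedural rule.
def appeal_deadline(judgment_date: date, days: int = 30) -> date:
    """A purely technical task: the rule is fully specified in advance."""
    return judgment_date + timedelta(days=days)

# Intelligent decision support: a model that learns patterns from past data
# and produces a recommendation that may influence the decision-maker.
past_filings = [
    "claimant seeks repayment of unpaid invoice for delivered goods",
    "tenant disputes eviction notice served by landlord",
    "claimant seeks damages for breach of supply contract",
    "landlord claims arrears of rent from former tenant",
]
past_labels = ["contract", "housing", "contract", "housing"]

classifier = make_pipeline(TfidfVectorizer(), LogisticRegression())
classifier.fit(past_filings, past_labels)

new_filing = "supplier claims payment for goods sold and delivered"
print("Deadline (automation):", appeal_deadline(date(2025, 1, 10)))
print("Suggested category (decision support):",
      classifier.predict([new_filing])[0])
# The suggestion is a statistical inference from past patterns, not a legal
# determination; this is precisely where the normative concern begins.
```

The design point is that the first function merely replaces a clerical step, whereas the classifier's output can shape how a case is framed before a judge ever considers it.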
The term “robot judge” is not univocal, and its meaning depends on the degree of technological involvement that a particular author, policymaker, or legal system has in mind. In a broad sense, it may refer to any model in which artificial intelligence participates in judicial activity. In a narrower sense, it denotes a system that renders a legally significant decision with little or no human intervention. In legal scholarship, the expression “robot judge” is often used as an analytical concept through which one may assess both the feasibility and the legitimacy of transferring judicial functions to technology.[6] Thus, the term is more than a technical description; it reflects a deeper debate about the relationship between human judgment and algorithmic governance.
From a theoretical perspective, several models of the “robot judge” may be identified. The first is fully automated adjudication, in which a system independently processes facts, links them to relevant legal norms, and produces a final decision. This is the most radical model, because it effectively transfers the judicial function to technology. The second model is partially automated decision support, where AI analyses case materials, offers risk assessments, predicts possible outcomes, or drafts preliminary reasoning, while the final decision still remains in the hands of a human judge. The third, more moderate, model is AI as an auxiliary judicial tool, where technology is used for legal research, the identification of precedents, document organization, or administrative support, without directly exercising normative judgment.[7] Much of the confusion surrounding the notion of the “robot judge” stems from a failure to distinguish clearly among these different levels of technological involvement.
Contemporary European approaches have been particularly clear in stressing that AI in judicial systems should remain a tool “under user control” and should not become a prescriptive mechanism that effectively takes the place of the judge.[8] This position is grounded in the recognition that adjudication cannot be reduced to a purely computational exercise. For this reason, the most realistic contemporary understanding of the “robot judge” does not involve the complete replacement of the human judge, but rather a hybrid model in which AI performs analytical and organizational functions, while legal authority and responsibility remain with the human decision-maker.
The role of the human judge extends well beyond the mechanical application of legal rules to facts. Judicial decision-making requires not only the identification of relevant norms, but also their interpretation, the balancing of competing principles, the individualized assessment of circumstances, the application of proportionality, and the provision of reasons in a manner that is intelligible and legitimate to the parties and to society.[9] It is precisely here that a fundamental distinction emerges between human reasoning and algorithmic calculation. Algorithms may be highly effective at identifying patterns and correlations, yet legal judgment often depends on normative evaluation, contextual sensitivity, and responsibility in ways that exceed statistical inference.
A central element of adjudication is judicial discretion. Discretion does not mean arbitrariness; rather, it refers to the legally bounded capacity to choose among permissible alternatives in light of the specific circumstances of the case. Judges therefore evaluate elements that are difficult to fully formalize: the conduct of the parties, the singularity of factual situations, the proportionality of consequences, broader social implications, and, at times, moral intuition. By contrast, algorithmic systems operate within the parameters of the data they are given, the variables selected by designers, and the structure of the model itself. As a result, their ability to appreciate context in a deep and normatively meaningful way remains limited.[10] For this reason, scholarship increasingly emphasizes that algorithmic systems may assist judges, but they cannot easily replace the discretionary and responsibility-laden role of the human judge.[11]
Moreover, the human judge acts not only as an interpreter of rules, but also as a public authority whose decision must be perceived as fair. In this sense, the legitimacy of adjudication depends not only on the outcome, but also on the process: the opportunity to be heard, the careful consideration of individual circumstances, the reasoning provided, and the perception that the case has been examined by a human decision-maker rather than merely processed by a computational system. Empirical research further suggests that attitudes toward algorithmic involvement in courts vary significantly depending on the stage at which automation is introduced: automation is generally perceived as more acceptable in information acquisition and analytical support than in the final stages of decision selection or implementation.[12] In this respect, the human judge and the algorithmic system should not be understood as fully interchangeable actors. A more accurate view is that they are fundamentally different in nature and function: the latter may support the former, but cannot fully substitute for it.[13]
Accordingly, at the theoretical level, the concept of the “robot judge” demonstrates that the problem is not limited to the speed or technical accuracy of artificial intelligence. The deeper issue is whether core elements of adjudication—discretion, the perception of justice, moral judgment, contextual reasoning, and accountable justification—can truly be translated into algorithmic form. The answer to this question ultimately determines whether AI in the judicial sphere should remain a supportive instrument or whether it could one day challenge the human judge as the author of legal decisions.[14]
Existing Forms of the Use of Artificial Intelligence in the Administration of Justice
The current use of artificial intelligence in judicial systems is neither purely hypothetical nor limited to futuristic experimentation. Contemporary practice shows that AI is already being integrated into courts through a wide variety of tools designed to improve efficiency, accessibility, and the management of growing caseloads. According to the first annual report of the CEPEJ Resource Centre on Cyberjustice and Artificial Intelligence, by the end of 2024, at least 125 cyberjustice tools had been identified, with systems based especially on machine learning and natural language processing becoming increasingly important in courts.[15] These tools are used not only for administrative purposes, but also for tasks that come closer to legal reasoning, such as analyzing case materials, extracting relevant information, generating summaries, identifying legal issues, and supporting the preparation of judicial work.[16] At the same time, the same report emphasizes that no fully autonomous “robot justice” currently exists in actual court practice and that the most advanced systems remain supportive rather than substitutive.[17]
In practical terms, one of the most widespread areas of AI use concerns case management and document processing. Courts and judicial administrations increasingly employ AI-enabled systems to classify incoming materials,[18] extract metadata, anonymize judgments, transcribe hearings, translate texts, and summarize large volumes of legal documents.[19] The OECD has documented, for example, that Spain has developed in-house NLP and generative AI tools to classify, analyze, summarize, and anonymize court-related texts, while also supporting document retrieval and workflow management within the justice system.[20] Such systems do not decide cases themselves, but they substantially reduce the time required for repetitive and information-intensive tasks. AI is also being used in legal research, precedent retrieval, and the identification of relevant norms or comparable decisions. In this respect, modern judicial AI often operates as an advanced research and knowledge-management tool, enabling judges and court staff to navigate complex legal material more quickly and systematically.[21]
A more controversial field of application concerns risk assessment instruments and sentencing prediction models, especially in criminal justice. In the United States, algorithmic tools such as COMPAS have been used to support decisions relating to bail, sentencing, and parole by estimating the likelihood of recidivism.[22] The well-known case of State v. Loomis illustrates this development: the Wisconsin Supreme Court accepted the use of COMPAS as a sentencing support tool, but made clear that such an assessment could not be the determinative factor and had to be treated with caution.[23] More broadly, these tools do not generate a sentence in a legally autonomous sense; rather, they offer probabilistic assessments meant to inform judicial discretion. Even where predictive systems are used, they remain embedded within a human decision-making process, precisely because criminal adjudication involves constitutional guarantees, individualized assessment, and responsibility that cannot be delegated entirely to an opaque model.[24]
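Because COMPAS and comparable instruments are proprietary, their internal methods cannot be reproduced here. The toy sketch below only illustrates the general form of the tools described above: a logistic score offered as one probabilistic input to a human decision. The features, weights, and numbers are invented assumptions, not values from any real instrument.

```python
# A toy probabilistic risk score, illustrating the *form* of the tools
# discussed above. All coefficients are invented for demonstration.
import math

def toy_risk_score(prior_convictions: int, age: int,
                   failed_appearances: int) -> float:
    """Return a pseudo-probability of reoffending via a logistic function."""
    # Hypothetical weights; no real instrument is reproduced here.
    z = -1.5 + 0.4 * prior_convictions - 0.03 * age + 0.6 * failed_appearances
    return 1.0 / (1.0 + math.exp(-z))

score = toy_risk_score(prior_convictions=2, age=27, failed_appearances=1)
print(f"Estimated risk: {score:.0%}")  # roughly 29% with these invented weights

# The Loomis caveat expressed as a comment: the score above is one input
# among many, and the individualized determination remains with the judge.
```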
The United States offers one of the clearest examples of selective and function-specific use of AI in judicial settings. Algorithmic tools have been most visibly used in criminal justice, especially for recidivism and risk assessment, while more recent developments concern generative AI in research, drafting, and court administration.[25] At the institutional level, the U.S. judiciary has taken a cautious but practical approach. The Federal Judicial Center has issued guidance for federal judges explaining how AI may be used in the judicial process and what legal questions arise from its use.[26] In parallel, the National Center for State Courts reports that more than 50 courts are actively testing AI tools for legal research, document review, and case management, while also urging courts to begin with low-risk tasks and to establish governance safeguards.[27] Chief Justice John Roberts likewise recognized in his 2023 Year-End Report that many AI applications can assist courts in achieving “just, speedy, and inexpensive” resolutions, while stressing that judicial work still includes quintessentially human functions that AI can inform but not perform on its own.[28]
China represents a broader and more ambitious model of judicial digitalization. The Chinese “smart court” reform integrates big data, algorithmic tools, online proceedings, and AI-assisted functions into a wider platform of judicial informatization.[29] Scholarly analyses show that Chinese courts use such systems for case management, online filing, evidence exchange, document service, workflow coordination, and analytical support for judges.[30] More recent official materials from the Supreme People’s Court describe AI-generated platforms designed to help judges search legal materials, compare electronic files, extract key points, and reduce the burden of handling rising numbers of cases.[31] Although this model goes further than most Western systems in the breadth of digitization, even there, the language used by official institutions presents AI primarily as a “legal assistant” that enhances efficiency rather than as a fully independent judge.[32] At the same time, the Chinese experience has generated significant debate concerning oversight, surveillance, judicial autonomy, and the relationship between technology and political control.[33]
The case of Estonia is especially important because it is often surrounded by misinformation in public debate. Estonia has frequently been cited as a country developing a “robot judge”, yet the Estonian Ministry of Justice publicly clarified in 2022 that it was not developing an AI judge for small claims or general court procedures to replace human judges.[34] Instead, Estonia has focused on optimization, automation of procedural steps, and other ICT tools intended to reduce administrative burdens.[35] Academic writing confirms that Estonia has long used semi-automatic procedures in certain simplified processes, such as payment-order procedures, and that its judiciary uses AI-related tools in support functions within the e-filing and court information systems, but not for autonomous judicial decision-making.[36] Thus, the Estonian experience is better understood as an example of digital procedural support rather than genuine machine adjudication.[37]
Within the European Union, the prevailing approach is normative caution combined with controlled experimentation. At the policy level, the Council of Europe’s CEPEJ Ethical Charter has long insisted that AI in justice must remain consistent with fundamental rights, transparency, non-discrimination, and user control.[38] This orientation has been reinforced by the EU AI Act. Regulation (EU) 2024/1689 treats certain AI systems used in the administration of justice as high-risk, including systems intended to assist judicial authorities in researching and interpreting facts and the law and in applying the law to concrete facts.[39] Recital 61 of the Act expressly links such systems to potentially significant effects on the rule of law, the right to an effective remedy, and the right to a fair trial.[40] The European model, therefore, does not prohibit the use of AI in courts as such, but it subjects judicial AI to heightened safeguards and signals that systems operating in this field must remain tightly controlled and reviewable.
A comparative assessment of current practice shows that AI is used in courts predominantly as a decision-support tool rather than as a final and autonomous decision-maker. The CEPEJ’s 2025 report is explicit on this point: according to the information currently available, there are no fully automated AI systems functioning entirely independently in courts, and the idea of replacing judges with machines is not supported by the evidence gathered in the Resource Centre.[41] The same report concludes that AI should serve as an assistant rather than a replacement and that judges must retain their own judgment throughout the process.[42] This finding is highly significant because it demonstrates that actual institutional practice remains far more cautious than some public narratives about “robot judges” might suggest.
The reasons for this restraint are structural rather than merely technological. Judicial authority is inseparable from responsibility, reason-giving, procedural fairness, and the protection of rights. Even where AI can classify cases, summarize documents, retrieve precedents, or provide predictive assessments, final adjudication still requires interpretation, contextual judgment, and legal accountability. This is why, in practice, courts and regulators tend to accept AI for low-risk and preparatory functions, while resisting its use as a substitute for judicial authority.[43] The U.S. judiciary’s own reflections emphasize that AI may support judges, but cannot itself perform those “quintessentially human functions” that arise in fact-sensitive and discretionary adjudication.[44] Similarly, the EU regulatory approach classifies judicial AI as high-risk precisely because of its possible effects on fair trial rights and the rule of law.[45] Accordingly, the dominant contemporary model is not that of automated justice, but of augmented adjudication: AI may assist, inform, and streamline, yet the final decision remains the responsibility of a human judge.
Possible Advantages of the Robot Judge
One of the most frequently cited advantages of artificial intelligence in the administration of justice is its potential to improve efficiency and procedural speed. Judicial systems in many countries face chronic delays, heavy caseloads, and administrative congestion, which undermine the timely delivery of justice. In this context, AI-based tools may assist courts by accelerating repetitive and information-intensive tasks, including case registration, document classification, anonymization of judgments, transcription, summarization, and the preparation of draft materials.[46] The CEPEJ has reported that, by early 2025, 125 AI-related tools had already been identified in its Resource Centre on Cyberjustice and AI, many of them expressly aimed at improving judicial efficiency and accessibility.[47] Likewise, the OECD notes that AI applications in justice range from automating routine case-management functions to supporting legal research and predictive analytics, thereby reducing the time spent on tasks that would otherwise consume substantial judicial and administrative resources.[48] The practical implications of such use are significant. Where courts are burdened by large numbers of routine filings and document-heavy proceedings, AI systems can shorten processing time and relieve court personnel of manual tasks. OECD examples show that AI-assisted tools are already being used to automate anonymization, assist in legal research, and generate standardized summaries, thereby streamlining workflows within judicial institutions.[49] Particularly illustrative is the Peruvian “Amauta Pro” system, which, according to the OECD, reduced the time needed to draft protective resolutions in cases of violence from approximately three hours to forty seconds.[50] Although such examples do not imply that AI replaces adjudication itself, they do suggest that AI may contribute to reducing backlogs indirectly by freeing judges and clerks to focus on the substantive and discretionary dimensions of cases. In this respect, the appeal of the “robot judge” lies not only in the prospect of faster decisions, but also in the redistribution of institutional effort within the court system.[51]
A second important advantage commonly associated with AI in justice is its capacity to support greater consistency and predictability in the treatment of similar cases. In legal systems committed to equality before the law and legal certainty, significant divergences in judicial outcomes may undermine trust in courts and make litigation outcomes harder to foresee.[52] AI tools trained on large corpora of case law may assist judges and legal professionals in identifying patterns, locating comparable precedents, and highlighting the range of outcomes that have historically been adopted in similar factual and legal situations.[53] From this perspective, algorithmic support is often presented as a means of strengthening coherence in judicial practice rather than replacing individualized adjudication. The potential value of such consistency should not be overstated, but it is nevertheless important. The OECD’s justice overview explicitly states that AI can support justice systems by improving consistency, efficiency, and accessibility.[54] In a similar vein, the CEPEJ Ethical Charter recognizes that the use of AI in justice may contribute to the efficiency and quality of judicial systems, provided that it remains compatible with fundamental rights.[55] Scholarly analysis has also linked predictive and analytical tools to the broader objectives of foreseeability, legal certainty, and equality in adjudication, especially in areas where courts decide large numbers of structurally similar cases.[56] Used carefully, AI may therefore help reduce unjustified fragmentation in case law, encourage more uniform judicial practice, and offer judges a clearer map of existing legal tendencies. At the same time, this advantage is best understood as an aid to coherence rather than as a promise of mechanical uniformity, since justice still requires sensitivity to the facts and circumstances of each case.[57]
A further advantage of AI lies in its capacity to process and analyze large volumes of legal and factual data at a speed impossible for human actors alone. Courts operate in an environment saturated with legislation, case law, procedural records, expert reports, and evidentiary materials. AI systems—especially those based on natural language processing and machine learning—can assist by searching across massive datasets, detecting patterns, cross-referencing authorities, extracting relevant information, and generating structured reports on specific issues.[58] The CEPEJ’s 2025 reflections note that some AI systems can cross-reference large amounts of data and provide reports on particular points, which can be useful for analyzing case documents and facts as part of preparatory assistance.[59] This kind of analytical support is especially valuable in complex litigation, repetitive claims, and areas of law characterized by extensive case-law development. The significance of this advantage is not limited to speed alone. Large-scale analysis can also improve the informational quality of judicial work. AI tools may help judges and court staff discover tendencies that are difficult to identify manually, such as recurring procedural bottlenecks, statistically unusual outcomes, patterns in case duration, or clusters of similar claims. In legal research and case preparation, this may enhance the capacity of judicial actors to navigate precedent, compare decisions, and identify relevant normative material more comprehensively.[60] In this sense, the “robot judge” is often attractive not because it promises autonomous legal wisdom, but because it appears capable of augmenting the cognitive reach of human decision-makers. The ability to process large datasets rapidly and systematically is therefore one of the clearest functional advantages of AI in judicial settings, even when final legal evaluation remains entirely human.[61]
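As a sketch of one such analytical function, the snippet below ranks invented one-line precedent summaries by textual similarity to a new case description. Real systems index full judgments with far richer models, but the retrieval principle is comparable.

```python
# Minimal similarity-based precedent retrieval. The "precedents" are
# invented one-line summaries used purely for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

precedents = [
    "dismissal of employee without notice held unlawful absent gross misconduct",
    "landlord liable for failure to maintain rented premises in habitable state",
    "contractual penalty clause reduced as disproportionate to actual damage",
    "summary dismissal upheld where employee committed serious breach of trust",
]
query = "worker dismissed without warning claims the termination was unlawful"

vectorizer = TfidfVectorizer()
matrix = vectorizer.fit_transform(precedents + [query])
scores = cosine_similarity(matrix[-1], matrix[:-1]).ravel()

# Rank precedents by textual similarity to the new case description.
for score, text in sorted(zip(scores, precedents), reverse=True):
    print(f"{score:.2f}  {text}")
```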
AI is also frequently associated with the possibility of improving access to justice, particularly for individuals facing barriers related to cost, complexity, distance, and procedural opacity. Digital tools, including chatbots, virtual assistants, automated triage systems, and online dispute resolution platforms, may help users understand procedures, identify relevant legal pathways, and obtain timely information without the need for immediate in-person legal assistance.[62] The OECD emphasizes that digital technologies and data have considerable potential to support access to justice, resilience, efficiency, and fairness, while its Online Dispute Resolution Framework stresses that ODR can significantly enhance access to justice by improving affordability, reducing the need to travel, and enabling more effective and timely dispute resolution.[63] These developments are especially relevant for low-value, repetitive, or standardized disputes, where traditional court processes may be too expensive or cumbersome in relation to the amount at stake.
Examples identified by the OECD and the NCSC illustrate how digital and AI-supported justice services can make justice pathways more user-friendly. In Portugal, an AI-powered chatbot has been used to provide accessible guidance and practical information to the public about justice-related matters.[64] The NCSC similarly highlights the use of triage tools to improve civil justice accessibility and to expedite certain categories of proceedings.[65] More broadly, the prospect of a digital court or AI-supported court ecosystem suggests a model in which certain disputes—especially small claims, consumer conflicts, or administrative matters—could be handled more quickly and more conveniently through online interfaces, supported by automated information tools and structured digital workflows.[66] While such developments do not eliminate the need for human adjudication, they strengthen the argument that AI, when used responsibly, may contribute to making justice systems more accessible, people-centered, and responsive to modern social needs.[67]
Taken together, these advantages explain why artificial intelligence has attracted serious institutional attention in the judicial sphere. AI promises not only procedural acceleration, but also better information management, greater regularity in judicial practice, and new pathways for access to justice. Yet these benefits are strongest when AI is used as a supportive infrastructure rather than as an autonomous adjudicator.[68] The very features that make the “robot judge” attractive—speed, scale, standardization, and digital accessibility—also reveal that its most realistic contribution lies in strengthening the functioning of human-centered justice, not in displacing the human judge.[69]
Artificial Intelligence and Judicial Discretion
Any discussion of artificial intelligence in adjudication must begin with the principle of judicial independence, which is a prerequisite for the rule of law and a fundamental guarantee of a fair trial.[70] Judicial independence has both an institutional and an individual dimension: courts must be free from improper external influence, and judges must also be able to decide cases according to their own assessment of the facts and the law, without pressure from internal hierarchies, political actors, or technological systems.[71],[72] In this respect, the growing use of AI in courts raises a new question: can algorithmic recommendations subtly constrain the judge’s freedom of judgment, even when the final decision is formally taken by a human being? This concern is particularly relevant where AI tools are presented as highly accurate, neutral, or efficient, because such framing may create pressure to follow machine-generated outcomes rather than to exercise genuine independent reasoning.[73],[74],[75]
Judicial independence is therefore closely linked to inner conviction and discretionary authority. A judge does not merely apply predetermined instructions; rather, adjudication requires a reasoned personal evaluation of the case within the bounds of law. If AI is allowed to shape the outcome too strongly, the danger is not only technical dependence but also a gradual erosion of judicial autonomy. For this reason, contemporary European guidance emphasizes that AI in justice must remain under user control and must not replace the judge’s own responsibility for legal assessment.[76],[77]
Artificial intelligence can undoubtedly assist with the reading of legal texts, the retrieval of case law, and the identification of patterns across large volumes of legal material. It can also support judges by summarizing arguments, comparing precedents, or highlighting relevant factors in similar disputes.[78],[79] Yet the central issue is whether AI can move beyond information processing and perform genuine legal evaluation. Legal judgment is not exhausted by locating the applicable norm. It also requires interpretation, the reconciliation of competing principles, the assessment of proportionality, and the justification of why one solution is more consistent with justice than another. These tasks are not purely computational; they involve normative reasoning and context-sensitive judgment.[80]
This limitation becomes especially visible in hard cases. The interpretation of open-textured legal standards, the balancing of rights and public interests, and the application of proportionality tests require more than prediction based on past data. They require legal reasoning that is sensitive to the singularity of the dispute and to the normative meaning of the law in context. Current policy and scholarly analysis, therefore, treat AI as a support mechanism for reasoning rather than as an autonomous bearer of legal judgment.[81],[82],[83]
The limits of AI in adjudication reflect a deeper theoretical point: law is not only a system of rules, but also a practice of value-based judgment. Legal reasoning is not reducible to formal logic alone. Judicial decisions often involve fairness, equity, proportionality, social consequences, and the moral significance of the facts. Even when the legal rule appears clear, its application frequently depends on interpretation and on the weighing of values that cannot be fully translated into code or statistically inferred from prior cases.[84]
Accordingly, the role of the judge goes beyond the mere processing of information. The judge acts as a public authority who must give reasons, assume responsibility, and produce a decision that can be accepted as legitimate by the parties and by society. This human dimension of adjudication explains why the concept of the “robot judge” remains problematic at the level of final decision-making. AI may strengthen judicial work, but it cannot fully replicate the normative responsibility that lies at the heart of judging.[85],[86],[87]
Human Rights Perspective
The use of AI in courts must be assessed through the lens of the right to a fair trial. Article 6 of the European Convention on Human Rights guarantees a hearing by an independent and impartial tribunal established by law, together with core elements of due process such as reasoned adjudication and procedural fairness.[88] In the context of judicial AI, these guarantees raise immediate concerns. If an algorithm influences the resolution of a dispute, the parties must still be able to understand how the outcome was reached, challenge the basis of the decision, and have their arguments genuinely considered by a tribunal that remains independent in substance, not only in form. The EU AI Act reflects the gravity of this issue by classifying certain AI systems used in the administration of justice as high-risk, precisely because of their potentially significant impact on the rule of law, the right to an effective remedy, and the right to a fair trial.[89]
The fair trial dimension also includes the requirement of reviewability. Even where AI is used only in support of a judicial decision, the result must remain open to meaningful human reconsideration and, where applicable, appeal. A system that produces opaque or effectively unchallengeable outcomes would be difficult to reconcile with procedural justice. For that reason, judicial AI cannot be acceptable unless it preserves the parties’ ability to contest the reasoning, the data, and the relevance of machine-generated outputs.[90],[91],[92]
AI systems often depend on the processing of large volumes of data, including personal and sometimes highly sensitive information. In judicial settings, this may involve criminal history, financial status, health-related material, family circumstances, or behavioral indicators. This makes privacy and data protection central concerns. Council of Europe Convention 108+ applies to automated processing of personal data, while the Council of Europe’s AI and data protection guidelines stress that human dignity and fundamental rights must remain central in AI deployment.[93] In the EU context, GDPR Article 22 provides that a person has the right not to be subject to a decision based solely on automated processing that produces legal or similarly significant effects, and accompanying guidance stresses safeguards such as meaningful information, contestability, and human intervention.[94]
In the judicial sphere, the relevance of these safeguards is obvious. Where courts rely on AI tools trained on large datasets, questions arise about data minimization, security, accuracy, secondary use, and the risks created by the inclusion of sensitive personal information. These risks become even more serious when the data reflects structural social inequalities or historical over-policing, since such distortions may later influence judicial outcomes. Data protection is therefore not a peripheral issue in the debate on the robot judge; it is one of the legal foundations for limiting the use of automated decision-making in justice.[95],[96]
The principle of equality before the law is directly implicated by algorithmic decision-making. AI systems learn from existing data, and where that data contains historical bias or structural inequalities, algorithmic outputs may reproduce or intensify discriminatory patterns. The CEPEJ Ethical Charter, therefore, places non-discrimination among the core principles governing the use of AI in judicial systems.[97] Likewise, the EU Agency for Fundamental Rights has warned that biased algorithms can reinforce or even create discrimination against certain groups.[98] These concerns are especially important for vulnerable groups, whose treatment in legal and administrative systems has often already been affected by social disadvantage, unequal surveillance, or discriminatory institutional practices.
In courts, unequal algorithmic impact may appear in multiple ways: through skewed risk scores, distorted predictive models, selective correlations, or the use of proxies that indirectly encode race, class, gender, disability, or other protected characteristics. The fact that an AI system appears neutral on its face does not remove this danger. On the contrary, the opacity and technical authority of algorithmic systems may make discriminatory effects harder to detect. For this reason, equality in the age of judicial AI requires not only formal neutrality but active monitoring, testing, and auditing for disparate impact.[99]
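What such monitoring might look like in its simplest form is sketched below: a disparate-impact check using the four-fifths screening convention as an illustrative threshold. The outcome records are invented, and the threshold is a statistical screening heuristic, not a legal test of discrimination.

```python
# A minimal disparate-impact audit over invented outcome records.
from collections import Counter

# Each record: (protected group, favourable outcome received?)
outcomes = [
    ("A", True), ("A", True), ("A", False), ("A", True),
    ("B", True), ("B", False), ("B", False), ("B", False),
]

favourable = Counter(group for group, ok in outcomes if ok)
totals = Counter(group for group, _ in outcomes)
rates = {group: favourable[group] / totals[group] for group in totals}

ratio = min(rates.values()) / max(rates.values())
print("Favourable-outcome rates:", rates)
print(f"Disparate impact ratio: {ratio:.2f}")
if ratio < 0.8:  # four-fifths screening convention, not a legal test
    print("Flag for human review: possible disparate impact")
```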
For all these reasons, human oversight has become a central principle in the governance of AI. Article 14 of the EU AI Act requires human oversight for high-risk systems, with the explicit aim of preventing or minimizing risks to health, safety, and fundamental rights.[6] In the justice context, this requirement has special significance: it means that a human actor must remain capable of understanding the system’s role, identifying anomalies, disregarding inappropriate outputs, and taking responsibility for the final decision. The same logic is reflected in Council of Europe materials, which stress user control and caution against non-explainable AI influencing the judge’s autonomy.[100]
The human-in-the-loop model is therefore not merely a technical preference but a rights-based necessity. It preserves accountability, protects judicial independence, and ensures that technology remains subordinate to law rather than the reverse. In the field of adjudication, meaningful human control is the minimum condition for reconciling innovation with fundamental rights protection.[101]
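Expressed as control flow, the human-in-the-loop requirement is a structural constraint: no machine output becomes a decision without explicit judicial review, and the judge may discard it entirely. The sketch below is a hypothetical illustration; its names and data structures are assumptions, not any real court system's interface.

```python
# A sketch of meaningful human control: the AI suggestion is an optional,
# inspectable input, while authorship of the decision stays with a judge.
from dataclasses import dataclass
from typing import Optional

@dataclass
class AiSuggestion:
    summary: str
    explanation: str  # reviewability presupposes an intelligible account

@dataclass
class Decision:
    reasoning: str
    decided_by: str   # accountability attaches to a named human judge

def adjudicate(suggestion: Optional[AiSuggestion], judge: str) -> Decision:
    """The final decision is always human, whatever the system proposes."""
    if suggestion is not None:
        # The judge inspects, and may reject, the machine-generated output.
        print("Reviewing AI suggestion:", suggestion.summary)
        print("Stated basis:", suggestion.explanation)
    return Decision(reasoning="Judge's own reasons, given after review",
                    decided_by=judge)

ruling = adjudicate(
    AiSuggestion(summary="comparable cases resolved by payment order",
                 explanation="derived from 120 retrieved precedents"),
    judge="Judge N.",
)
print("Final decision by:", ruling.decided_by)
```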
The most radical vision of the robot judge is the idea of full replacement: the automation of the judicial function itself. This theory assumes that, at least in some categories of cases, adjudication could be reduced to the processing of facts and rules through a sufficiently advanced algorithm. Such a model is most often discussed in relation to small, standardized, and repetitive disputes, where legal rules are relatively clear and factual complexity is limited. In theory, certain procedural areas—such as payment orders, uncontested claims, or highly formalized low-value disputes—may appear more suitable for extensive automation than complex constitutional, criminal, or rights-sensitive litigation.[102]
Yet even in these domains, the idea of full replacement remains problematic. Current evidence does not show the existence of fully autonomous judicial AI operating independently in courts, and European guidance remains skeptical of such a model.[103] More importantly, even seemingly simple cases may raise issues of context, fairness, vulnerability, or procedural justice that resist purely automated treatment. Full replacement, therefore, remains more a theoretical provocation than a legally accepted model of justice.
A more realistic approach is the collaborative model, in which AI functions as a support system within a judge-led process. In this model, the relationship is not “judge or machine”, but judge plus AI. The system may assist with research, pattern recognition, summarization, document management, or draft analysis, while the human judge retains interpretive and decision-making authority. This arrangement allows courts to benefit from technological speed and scale without abandoning the safeguards associated with human adjudication.[104]
The collaborative model also reduces risk through human control. It permits judicial scrutiny of algorithmic outputs, preserves reason-giving and accountability, and makes it easier to correct errors or reject inappropriate recommendations.[105] For these reasons, cooperation rather than substitution has become the dominant practical and normative direction in contemporary debates on AI in justice.[106]
From a normative standpoint, AI should not replace the judge. It may be used as a valuable supportive mechanism, but final judicial authority should remain with a human decision-maker. This position follows not only from technological caution, but from the nature of adjudication itself: judging requires discretion, interpretation, legitimacy, and responsibility in ways that cannot be fully transferred to an algorithm. The most legally acceptable model is therefore a human-centered and human-controlled framework, in which AI augments judicial work but does not determine the outcome.[107]
Recommendations for Legal Policy
Considering the above analysis, a prudent legal policy for AI in justice should include the following elements. First, states should adopt a clear legal framework defining which judicial functions may be supported by AI and which must remain exclusively human. Second, transparency standards should require disclosure of the role played by AI in judicial processes.[108] Third, systems used in courts should be subject to an explainability requirement, at least to the extent necessary for meaningful challenge and review. Fourth, human oversight must be mandatory wherever AI affects legally significant outcomes. Fifth, judicial AI should undergo regular audits for accuracy, robustness, and rights-related risks. Sixth, courts should implement anti-discrimination controls, including testing for biased data and disparate impact. Seventh, judicial institutions should adopt ethical guidelines for the responsible use of AI, consistent with fair trial and data protection standards. Finally, judges and legal professionals should receive training not only in the technical use of AI tools, but also in their legal limits and human rights implications.[109],[110],[111],[112],[113]
Conclusion
Artificial intelligence creates important opportunities for the administration of justice. It can improve efficiency, accelerate the handling of cases, support consistency, strengthen data analysis, and expand access to justice in certain categories of disputes.[114] At the same time, judicial decision-making is not merely a technical process. Justice requires fairness, explainability, accountability, equality, and respect for fundamental rights. It also requires a human judge capable of interpretation, discretion, and responsibility. For this reason, the concept of the “robot judge” should not be understood as a viable substitute for the human judge in the full sense of adjudication.[115]
The most realistic and legally acceptable model is therefore not replacement, but cooperation. AI may serve as a powerful auxiliary instrument within judicial systems, provided that its use is framed by clear law, effective oversight, procedural safeguards, and meaningful human control. Only under such conditions can technological innovation remain compatible with the rule of law and the protection of human dignity.
References
Scholarly Literature:
Barysė, D., Sarel, R. (2023). Algorithms in the court: Does it matter which part of the judicial decision-making is automated? Artificial Intelligence and Law, 32(1). DOI:10.1007/s10506-022-09343-6;
de la Osa, D. U. S., Remolina, N. (2024). Artificial intelligence at the bench: Legal and ethical challenges of informing—or misinforming—judicial decision-making through generative AI. Data & Policy, 6, e59. DOI:10.1017/dap.2024.53;
Dolidze, T. (2026). Integration of Artificial Intelligence in Criminal Investigation and Criminal Proceedings. Generis Publishing;
Donati, F. (2025). The use of artificial intelligence in judicial systems: Ethics and efficiency. In Artificial intelligence, judicial decision-making and fundamental rights (JuLIA Handbook). Scuola Superiore della Magistratura;
Fortes, P. R. B. (2020). Paths to digital justice: Judicial robots, algorithmic decision-making, and due process. Asian Journal of Law and Society. DOI:10.1017/als.2020.12;
Härmand, K. (2023). AI systems’ impact on the recognition of foreign judgements: The case of Estonia. Juridica International, 32;
Morison, J., Harkens, A. (2019). Re-engineering justice? Robot judges, computerised courts and (semi) automated legal decision-making. Legal Studies, 39(4). DOI:10.1017/lst.2019.5;
Reiling, A. D., Papagianneas, S. (2025). Lessons from China’s smart court reform? International Journal for Court Administration, 16(1).
Normative Acts:
Convention for the Protection of Human Rights and Fundamental Freedoms, November 4, 1950, ETS No. 5;
Protocol amending the Convention for the Protection of Individuals with regard to Automatic Processing of Personal Data (Convention 108+), October 10, 2018, CETS No. 223;
Council of Europe, Committee of Ministers. (2010). Recommendation CM/Rec(2010)12 on judges: independence, efficiency and responsibilities;
Council of Europe. (2019). Guidelines on artificial intelligence and data protection;
European Commission for the Efficiency of Justice. (2018). European ethical charter on the use of artificial intelligence in judicial systems and their environment. Council of Europe;
European Data Protection Board. (2018). Guidelines on automated individual decision-making and profiling for the purposes of Regulation 2016/679;
European Parliament and Council of the European Union. (2016). Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data (General Data Protection Regulation). Official Journal of the European Union, L 119;
European Union. (2024). Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 laying down harmonised rules on artificial intelligence (Artificial Intelligence Act). Official Journal of the European Union, L, 2024/1689;
United Nations Office on Drugs and Crime. (2002/2006). The Bangalore Principles of Judicial Conduct. UNODC.
Court Decision:
State v. Loomis, 881 N.W.2d 749 (Wis. 2016).
Supplementary Materials:
Baker, J., Hobart, L., Mittelsteadt, M. (2023). An introduction to artificial intelligence for federal judges. Federal Judicial Center;
European Commission for the Efficiency of Justice. (2025). 1st report on the use of artificial intelligence (AI) in the judiciary, based on the information contained in the CEPEJ’s Resource Centre on Cyberjustice and AI. Council of Europe;
European Commission for the Efficiency of Justice. (2025). Reflections of the AIAB on the use of artificial intelligence in judicial systems. Council of Europe;
European Court of Human Rights. (2022). Guide on Article 6 of the European Convention on Human Rights: Right to a fair trial (civil and criminal limbs);
European Union Agency for Fundamental Rights. (2025). Fundamental Rights Report 2025. FRA;
Ministry of Justice and Digital Affairs of Estonia. (2022). Estonia does not develop AI Judge;
National Center for State Courts. (2026). Guidance for implementing AI in courts;
National Institute of Standards and Technology. (n.d.). Machine learning. NIST Computer Security Resource Center Glossary;
OECD.AI. (n.d.). AI in government: Issues – Justice. OECD;
Organisation for Economic Co-operation and Development. (2024). Explanatory memorandum on the updated OECD definition of an AI system. OECD Artificial Intelligence Papers, No. 8. OECD Publishing;
Organisation for Economic Co-operation and Development. (2024). OECD online dispute resolution framework. OECD Publishing;
Organisation for Economic Co-operation and Development. (2025). AI in justice administration and access to justice. In Governing with artificial intelligence: The state of play and way forward in core government functions. OECD Publishing;
Roberts, J. G., Jr. (2023). 2023 Year-End Report on the Federal Judiciary. Supreme Court of the United States;
Supreme People’s Court of the People’s Republic of China. (2024). SPC launches AI-generated platform to help judges, public.
Footnotes
[1] Organisation for Economic Co-operation and Development. (2024). Explanatory memorandum on the updated OECD definition of an AI system. OECD Artificial Intelligence Papers, No. 8. OECD Publishing.
[2] Ibid.
[3] National Institute of Standards and Technology. (n.d.). Machine learning. NIST Computer Security Resource Center Glossary.
[4] Ibid.
[5] European Commission for the Efficiency of Justice. (2018). European ethical charter on the use of artificial intelligence in judicial systems and their environment. Council of Europe.
[6] Morison, J., Harkens, A. (2019). Re-engineering justice? Robot judges, computerised courts and (semi) automated legal decision-making. Legal Studies, 39(4), 618–635. DOI:10.1017/lst.2019.5.
[7] Fortes, P. R. B. (2020). Paths to digital justice: Judicial robots, algorithmic decision-making, and due process. Asian Journal of Law and Society. DOI:10.1017/als.2020.12.
[8] European Commission for the Efficiency of Justice. (2018). European ethical charter on the use of artificial intelligence in judicial systems and their environment. Council of Europe.
[9] Fortes, P. R. B. (2020). Paths to digital justice: Judicial robots, algorithmic decision-making, and due process. Asian Journal of Law and Society. DOI:10.1017/als.2020.12.
[10] Morison, J., Harkens, A. (2019). Re-engineering justice? Robot judges, computerised courts and (semi) automated legal decision-making. Legal Studies, 39(4), 618–635. DOI:10.1017/lst.2019.5.
[11] Barysė, D., Sarel, R. (2023). Algorithms in the court: Does it matter which part of the judicial decision-making is automated? Artificial Intelligence and Law, 32(1). DOI:10.1007/s10506-022-09343-6.
[12] Ibid.
[13] de la Osa, D. U. S., Remolina, N. (2024). Artificial intelligence at the bench: Legal and ethical challenges of informing—or misinforming—judicial decision-making through generative AI. Data & Policy, 6, e59. DOI:10.1017/dap.2024.53.
[14] Ibid.
[15] European Commission for the Efficiency of Justice. (2025). 1st report on the use of artificial intelligence (AI) in the judiciary, based on the information contained in the CEPEJ’s Resource Centre on Cyberjustice and AI. Council of Europe.
[16] Ibid.
[17] Ibid.
[18] Ibid.
[19] Organisation for Economic Co-operation and Development. (2025). AI in justice administration and access to justice. In Governing with artificial intelligence: The state of play and way forward in core government functions. OECD Publishing.
[20] Ibid.
[21] National Center for State Courts. (2026). Guidance for implementing AI in courts.
[22] State v. Loomis, 881 N.W.2d 749 (Wis. 2016).
[23] Ibid.
[24] Baker, J., Hobart, L., Mittelsteadt, M. (2023). An introduction to artificial intelligence for federal judges. Federal Judicial Center.
[25] State v. Loomis, 881 N.W.2d 749 (Wis. 2016).
[26] Baker, J., Hobart, L., Mittelsteadt, M. (2023). An introduction to artificial intelligence for federal judges. Federal Judicial Center.
[27] National Center for State Courts. (2026). Guidance for implementing AI in courts.
[28] Roberts, J. G., Jr. (2023). 2023 Year-End Report on the Federal Judiciary. Supreme Court of the United States.
[29] Reiling, A. D., Papagianneas, S. (2025). Lessons from China’s smart court reform? International Journal for Court Administration, 16(1).
[30] Ibid.
[31] Supreme People’s Court of the People’s Republic of China. (2024). SPC launches AI-generated platform to help judges, public.
[32] Ibid.
[33] Reiling, A. D., Papagianneas, S. (2025). Lessons from China’s smart court reform? International Journal for Court Administration, 16(1).
[34] Ministry of Justice and Digital Affairs of Estonia. (2022). Estonia does not develop AI Judge.
[35] Ibid.
[36] Härmand, K. (2023). AI systems’ impact on the recognition of foreign judgements: The case of Estonia. Juridica International, 32, 107–118.
[37] Ibid.
[38] European Commission for the Efficiency of Justice. (2018). European ethical charter on the use of artificial intelligence in judicial systems and their environment. Council of Europe.
[39] European Union. (2024). Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 laying down harmonised rules on artificial intelligence (Artificial Intelligence Act). Official Journal of the European Union, L, 2024/1689.
[40] Ibid.
[41] European Commission for the Efficiency of Justice. (2025). 1st report on the use of artificial intelligence (AI) in the judiciary, based on the information contained in the CEPEJ’s Resource Centre on Cyberjustice and AI. Council of Europe.
[42] Ibid.
[43] Roberts, J. G., Jr. (2023). 2023 Year-End Report on the Federal Judiciary. Supreme Court of the United States.
[44] Baker, J., Hobart, L., Mittelsteadt, M. (2023). An introduction to artificial intelligence for federal judges. Federal Judicial Center.
[45] European Union. (2024). Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 laying down harmonised rules on artificial intelligence (Artificial Intelligence Act). Official Journal of the European Union, L, 2024/1689.
[46] Organisation for Economic Co-operation and Development. (2025). AI in justice administration and access to justice. In Governing with artificial intelligence: The state of play and way forward in core government functions. OECD Publishing.
[47] Ibid.
[48] Ibid.
[49] Ibid.
[50] Ibid.
[51] European Commission for the Efficiency of Justice. (2025). Reflections of the AIAB on the use of artificial intelligence in judicial systems. Council of Europe.
[52] OECD.AI. (n.d.). AI in government: Issues – Justice. OECD.
[53] Donati, F. (2025). The use of artificial intelligence in judicial systems: Ethics and efficiency. In Artificial intelligence, judicial decision-making and fundamental rights (JuLIA Handbook). Scuola Superiore della Magistratura.
[54] OECD.AI. (n.d.). AI in government: Issues – Justice. OECD.
[55] European Commission for the Efficiency of Justice. (2018). European ethical charter on the use of artificial intelligence in judicial systems and their environment. Council of Europe.
[56] Donati, F. (2025). The use of artificial intelligence in judicial systems: Ethics and efficiency. In Artificial intelligence, judicial decision-making and fundamental rights (JuLIA Handbook). Scuola Superiore della Magistratura.
[57] Ibid.
[58] European Commission for the Efficiency of Justice. (2025). 1st report on the use of artificial intelligence (AI) in the judiciary, based on the information contained in the CEPEJ’s Resource Centre on Cyberjustice and AI. Council of Europe.
[59] European Commission for the Efficiency of Justice. (2025). Reflections of the AIAB on the use of artificial intelligence in judicial systems. Council of Europe.
[60] Ibid.
[61] Organisation for Economic Co-operation and Development. (2025). AI in justice administration and access to justice. In Governing with artificial intelligence: The state of play and way forward in core government functions. OECD Publishing.
[62] European Commission for the Efficiency of Justice. (2025). Reflections of the AIAB on the use of artificial intelligence in judicial systems. Council of Europe.
[63] Organisation for Economic Co-operation and Development. (2024). OECD online dispute resolution framework. OECD Publishing.
[64] Organisation for Economic Co-operation and Development. (2025). AI in justice administration and access to justice. In Governing with artificial intelligence: The state of play and way forward in core government functions. OECD Publishing.
[65] National Center for State Courts. (2026). Guidance for implementing AI in courts.
[66] Organisation for Economic Co-operation and Development. (2024). OECD online dispute resolution framework. OECD Publishing.
[67] National Center for State Courts. (2026). Guidance for implementing AI in courts.
[68] European Commission for the Efficiency of Justice. (2025). Reflections of the AIAB on the use of artificial intelligence in judicial systems. Council of Europe.
[69] European Commission for the Efficiency of Justice. (2018). European ethical charter on the use of artificial intelligence in judicial systems and their environment. Council of Europe.
[70] United Nations Office on Drugs and Crime. (2002/2006). The Bangalore Principles of Judicial Conduct. UNODC.
[71] Ibid.
[72] Council of Europe, Committee of Ministers. (2010). Recommendation CM/Rec(2010)12 on judges: independence, efficiency and responsibilities.
[73] United Nations Office on Drugs and Crime. (2002/2006). The Bangalore Principles of Judicial Conduct. UNODC.
[74] Council of Europe, Committee of Ministers. (2010). Recommendation CM/Rec(2010)12 on judges: independence, efficiency and responsibilities.
[75] European Commission for the Efficiency of Justice. (2025). Reflections of the AIAB on the use of artificial intelligence in judicial systems. Council of Europe.
[76] Ibid.
[77] European Commission for the Efficiency of Justice. (2018). European ethical charter on the use of artificial intelligence in judicial systems and their environment. Council of Europe.
[78] European Commission for the Efficiency of Justice. (2025). Reflections of the AIAB on the use of artificial intelligence in judicial systems. Council of Europe.
[79] de la Osa, D. U. S., Remolina, N. (2024). Artificial intelligence at the bench: Legal and ethical challenges of informing—or misinforming—judicial decision-making through generative AI. Data & Policy, 6, e59. DOI:10.1017/dap.2024.53.
[80] Ibid.
[81] European Commission for the Efficiency of Justice. (2025). Reflections of the AIAB on the use of artificial intelligence in judicial systems. Council of Europe.
[82] de la Osa, D. U. S., Remolina, N. (2024). Artificial intelligence at the bench: Legal and ethical challenges of informing—or misinforming—judicial decision-making through generative AI. Data & Policy, 6, e59. DOI:10.1017/dap.2024.53.
[83] European Union. (2024). Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 laying down harmonised rules on artificial intelligence (Artificial Intelligence Act).
[84] de la Osa, D. U. S., Remolina, N. (2024). Artificial intelligence at the bench: Legal and ethical challenges of informing—or misinforming—judicial decision-making through generative AI. Data & Policy, 6, e59. DOI:10.1017/dap.2024.53.
[85] United Nations Office on Drugs and Crime. (2002/2006). The Bangalore Principles of Judicial Conduct. UNODC.
[86] European Commission for the Efficiency of Justice. (2025). Reflections of the AIAB on the use of artificial intelligence in judicial systems. Council of Europe.
[87] de la Osa, D. U. S., Remolina, N. (2024). Artificial intelligence at the bench: Legal and ethical challenges of informing—or misinforming—judicial decision-making through generative AI. Data & Policy, 6, e59. DOI:10.1017/dap.2024.53.
[88] European Court of Human Rights. (2022). Guide on Article 6 of the European Convention on Human Rights: Right to a fair trial (civil and criminal limbs).
[89] European Union. (2024). Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 laying down harmonised rules on artificial intelligence (Artificial Intelligence Act).
[90] European Court of Human Rights. (2022). Guide on Article 6 of the European Convention on Human Rights: Right to a fair trial (civil and criminal limbs).
[91] European Union. (2024). Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 laying down harmonised rules on artificial intelligence (Artificial Intelligence Act).
[92] European Data Protection Board. (2018). Guidelines on automated individual decision-making and profiling for the purposes of Regulation 2016/679; European Union. (2016). Regulation (EU) 2016/679 (General Data Protection Regulation), Art. 22.
[93] Council of Europe. (2018). Protocol amending the Convention for the Protection of Individuals with regard to Automatic Processing of Personal Data (Convention 108+); Council of Europe. (2019). Guidelines on artificial intelligence and data protection.
[94] European Data Protection Board. (2018). Guidelines on automated individual decision-making and profiling for the purposes of Regulation 2016/679; European Union. (2016). Regulation (EU) 2016/679 (General Data Protection Regulation), Art. 22.
[95] Ibid.
[96] Council of Europe. (2018). Protocol amending the Convention for the Protection of Individuals with regard to Automatic Processing of Personal Data (Convention 108+); Council of Europe. (2019). Guidelines on artificial intelligence and data protection.
[97] European Commission for the Efficiency of Justice. (2018). European ethical charter on the use of artificial intelligence in judicial systems and their environment. Council of Europe.
[98] European Union Agency for Fundamental Rights. (2025). Fundamental Rights Report 2025. FRA.
[99] Ibid.
[100] European Commission for the Efficiency of Justice. (2018). European ethical charter on the use of artificial intelligence in judicial systems and their environment. Council of Europe.
[101] European Union. (2024). Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 laying down harmonised rules on artificial intelligence (Artificial Intelligence Act).
[102] Ibid.
[103] European Commission for the Efficiency of Justice. (2025). Reflections of the AIAB on the use of artificial intelligence in judicial systems. Council of Europe.
[104] European Commission for the Efficiency of Justice. (2018). European ethical charter on the use of artificial intelligence in judicial systems and their environment. Council of Europe.
[105] European Union. (2024). Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 laying down harmonised rules on artificial intelligence (Artificial Intelligence Act).
[106] European Commission for the Efficiency of Justice. (2018). European ethical charter on the use of artificial intelligence in judicial systems and their environment. Council of Europe.
[107] de la Osa, D. U. S., Remolina, N. (2024). Artificial intelligence at the bench: Legal and ethical challenges of informing—or misinforming—judicial decision-making through generative AI. Data & Policy, 6, e59. DOI:10.1017/dap.2024.53.
[108] Dolidze, T. (2026). Integration of Artificial Intelligence in Criminal Investigation and Criminal Proceedings. Generis Publishing, 20–21.
[109] European Commission for the Efficiency of Justice. (2025). Reflections of the AIAB on the use of artificial intelligence in judicial systems. Council of Europe.
[110] European Union. (2024). Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 laying down harmonised rules on artificial intelligence (Artificial Intelligence Act).
[111] European Data Protection Board. (2018). Guidelines on automated individual decision-making and profiling for the purposes of Regulation 2016/679; European Union. (2016). Regulation (EU) 2016/679 (General Data Protection Regulation), Art. 22.
[112] European Commission for the Efficiency of Justice. (2018). European ethical charter on the use of artificial intelligence in judicial systems and their environment. Council of Europe.
[113] European Union Agency for Fundamental Rights. (2025). Fundamental Rights Report 2025. FRA.
[114] European Commission for the Efficiency of Justice. (2018). European ethical charter on the use of artificial intelligence in judicial systems and their environment. Council of Europe.
[115] European Union. (2024). Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 laying down harmonised rules on artificial intelligence (Artificial Intelligence Act).
This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.
https://orcid.org/0009-0004-3969-9821