The Capabilities and Challenges of Artificial Intelligence in the Justice System


Abstract

In the context of globalization, one of the key pillars for improving the effectiveness of a country’s legal system, ensuring access to justice, and enhancing the quality of legal proceedings lies in innovative and technological advancement. The ongoing global digitization process offers a broad range of services in every field, including the judiciary, enabling improved access to justice for citizens from various social backgrounds through digital transformation. It also allows the integration of artificial intelligence tools into case review and decision-making processes, making the administration of justice faster, more flexible, and efficient. Moreover, the implementation of AI technologies helps create essential tools and mechanisms that, through an integrated approach, contribute to solving global legal challenges—such as prolonged legal proceedings, overburdened judges, limited access to justice, inefficiencies in the legal system, and more.


The primary purpose of artificial intelligence is to simplify administrative processes, increase transparency and efficiency in decision-making, and assist judges, prosecutors, and lawyers in processing documents. AI enables the analysis of legal documents, anonymization of court decisions, and comparison and compliance checks of contracts. These capabilities significantly reduce human error and save time.


 This article discusses examples from various countries where AI is applied in both legal research and the modeling of judicial proceedings. It is essential to emphasize that the successful use of this technology depends not only on its technical capabilities but also on the legal and ethical frameworks that protect citizens’ rights.


           


Keywords: Justice system, AI, ethical challenges, analysis, regulation


 


Introduction


 The process of digitalization in the judicial system accelerated significantly following the global pandemic that began in early 2020. The swift transition to remote (online) court hearings was made possible through the integrated use of justice-oriented digital technologies. This shift posed new challenges for judicial institutions in terms of effectively managing cases, analyzing evidence, ensuring secure digital communication, maintaining data security, and delivering timely justice. The technological environment of artificial intelligence provides courts with the ability to effectively utilize automated resources, adapt them to their workflows and management systems, and thus help formulate a clearer vision and strategy for delivering fast and efficient justice.


In recent years, artificial intelligence has penetrated and fundamentally transformed many spheres of our lives,[1] becoming part of our daily routine. It is now used both in the private sector and across public institutions. Digital platforms and tools have become a kind of guarantee for the continuity of activities in all key sectors.[2] Consequently, AI is increasingly being applied in justice systems around the world, offering both opportunities and risks. Recently, critical scholarship has raised questions about the judiciary’s ability to handle the difficulties and limitations inherent in deploying AI systems.[3] This article examines what kinds of opportunities AI creates within judicial systems where it is already in use, and what risks are associated with its implementation.


The term artificial intelligence was first introduced in 1956 at a workshop held at Dartmouth College in the United States, which focused on logical rather than computational problems.[4] Artificial intelligence can be defined as “a machine’s ability to act in a way that would be considered intelligent”. This definition belongs to John McCarthy, who coined the term “artificial intelligence” for that 1956 workshop and is regarded as its creator.[5],[6]


According to the Duden Dictionary, the term “artificial” describes the imitation of a natural process, while “intelligence” is defined as a human capacity for abstract thinking, reasoning, and purposeful action. Based on this definition, artificial intelligence can be understood as an attempt to create a simulation of human cognitive abilities.[7]


Definitions of artificial intelligence also appear in computer science. For example, the definition of AI as “an attempt to teach computers to think”[8] highlights the imitation of human cognitive processes by systems such as machines or computers. This perspective is reflected in the “Turing Test”,[9] developed by British scientist Alan M. Turing, which an AI system can pass only if it communicates with a human in natural language, acts logically, and adapts to changing circumstances.[10]


The Council of Europe defines artificial intelligence as “a combination of sciences, theories, and technologies whose goal is to reproduce human cognitive abilities through machines. Given the current level of development, artificial intelligence refers to the delegation of complex intellectual tasks, normally performed by humans, to machines”.[11]


According to the definition developed by the European Commission’s High-Level Expert Group on Artificial Intelligence (AI HLEG), “artificial intelligence characterizes systems that, through environmental analysis, demonstrate intelligent behavior and, to a certain degree of autonomy, carry out actions to achieve specific objectives. AI-based systems can exist in a virtual environment as fully software-based (e.g., voice assistants, image analysis software, search engines, voice and facial recognition systems), or AI can be embedded in hardware devices (e.g., advanced robots, autonomous vehicles, drones, and Internet of Things applications)”.[12]


 


Methodology


In the process of working on this research, I employed both comparative and qualitative analysis, focusing on the study of international practices and the possibilities for integrating artificial intelligence (AI) into Georgia’s justice system. The research analyzed legal approaches and practical examples from various countries, including initiatives from the Council of Europe, the European Union, and individual member states regarding the adoption of AI in judicial systems. I also reviewed findings published in high-ranking academic journals.


The primary sources for data collection included binding international legal documents (e.g., the EU’s draft AI Act), the Council of Europe’s principles on the use of AI in the judiciary (CEPEJ guidelines), academic and expert analyses (including reports by the EU Agency for Fundamental Rights), and studies and public statements from organizations engaged in judicial reform. The analysis of current practices in Georgia was carried out using the Desk Research method, which involved evaluating open sources such as public policy documents, strategies, legislation, judicial reform plans, and the national AI strategy. The following areas were specifically examined: stages of digitalization in the court system, implementation and use of electronic management systems, and existing frameworks for personal data protection.


The methodological approach also included the identification of ethical risks regarding algorithmic transparency, impartiality, and the necessity of human oversight in judicial decision-making.


1. Theoretical and Practical Dimensions of AI in Justice Systems


“Artificial intelligence is a complex artificial cybernetic software-hardware system (electronic, including virtual, electromechanical, bio-electromechanical, or hybrid), which possesses a cognitive functional architecture and access, either independently or in relative terms, to the needed high-speed computational power”.[13]


AI systems can also be differentiated based on their performance and domain of application. A common distinction in AI research is that between so-called “strong” and “weak” AI. This distinction is philosophical in nature and hinges on two hypotheses: the weak hypothesis, which claims that a system (e.g., a machine) can behave intelligently, and the strong hypothesis, which posits that such a system may genuinely possess intelligence. Analogously, a strong AI system exhibits intelligent behavior because it genuinely thinks, whereas a weak AI system only mimics intelligent behavior.[14] A strong AI system would operate at a level equal to or beyond the capacity of the human brain. In contrast, a weak AI system is specialized in solving individual tasks and is intended to support, not replace, human cognitive effort.[15]


It is important to distinguish between artificial intelligence (AI) and machine learning (ML), as ML is merely a subcategory of AI; using the terms interchangeably is incorrect. ML is closely associated with statistics and data processing, enabling a system to improve through experience. Deep Learning (DL), in turn a subcategory of ML, uses neural networks to process unstructured data.[16]


Examples of AI use:



  • Navigation services (e.g., Google Maps, Apple Maps);

  • Mobile applications (e.g., Siri, Alexa, Google Assistant);

  • Social media platforms (e.g., Facebook, Twitter, Instagram) use AI to tailor content to user interests.[17]


1.1 AI in the justice system: Transforming courts through technology


AI offers a broad spectrum of possibilities for improving various sectors. AI systems are increasingly being used in judicial procedures and courtrooms around the world—from Australia, China, and the United States to the United Kingdom, Estonia, Mexico, and Brazil. These systems are being built, tested, developed, and adapted for use in courts and tribunals globally. AI has the potential to increase procedural efficiency, accuracy, and accessibility of justice.


Court hearings no longer necessarily require in-person presence, as communication technologies facilitate remote proceedings. The Solution Explorer of British Columbia’s Civil Resolution Tribunal, for example, was used 160,527 times between July 13, 2016, and March 31, 2021. In 2020/2021, the average time to resolve a dispute using this system was 85.8 days, with a median resolution time of 59 days across all case types.[18]


In China, courts use AI to warn judges if their decision deviates from precedent data in a central database.[19]


 AI has also demonstrated the ability to predict rulings of the European Court of Human Rights (ECHR). This tool employs natural language processing and machine learning to forecast whether a provision of the European Convention on Human Rights has been violated in a given case. The system bases its predictions on prior decisions and achieves a 79% accuracy rate in matching human judges’ outcomes.[20]
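As an illustration of the kind of pipeline such prediction tools rely on, the following is a minimal sketch of a bag-of-words Naive Bayes classifier over invented case summaries. The texts, labels, and scale here are assumptions for illustration only; the actual ECHR study used far richer features and thousands of real judgments.

```python
from collections import Counter
import math

# Toy training data: (case summary, outcome) pairs. These texts and labels
# are invented for illustration.
cases = [
    ("applicant detained without judicial review for months", "violation"),
    ("prolonged detention no effective remedy available", "violation"),
    ("complaint examined promptly by an independent tribunal", "no_violation"),
    ("fair hearing held within a reasonable time", "no_violation"),
]

def train(data):
    """Count word frequencies per outcome label (multinomial Naive Bayes)."""
    word_counts = {"violation": Counter(), "no_violation": Counter()}
    label_counts = Counter()
    for text, label in data:
        label_counts[label] += 1
        word_counts[label].update(text.split())
    return word_counts, label_counts

def predict(text, word_counts, label_counts):
    """Return the most probable label, using add-one (Laplace) smoothing."""
    vocab = set().union(*word_counts.values())
    scores = {}
    for label, counts in word_counts.items():
        total = sum(counts.values())
        score = math.log(label_counts[label] / sum(label_counts.values()))
        for word in text.split():
            score += math.log((counts[word] + 1) / (total + len(vocab)))
        scores[label] = score
    return max(scores, key=scores.get)

word_counts, label_counts = train(cases)
print(predict("applicant held in detention with no remedy",
              word_counts, label_counts))  # prints "violation"
```

The real system differs in every practical respect, but the core idea is the same: outcomes are predicted from statistical regularities in the language of prior decisions, which is also why such tools inherit any bias present in that case law.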


Beyond these applications, AI is used during court proceedings to review and analyze documents for compliance with predefined criteria. For example, document review involves identifying relevant materials in a case, and AI can significantly enhance the speed, accuracy, and efficiency of this process. Another AI tool is contract analysis, which can assist with both general transactions and individual contracts. JPMorgan has used an AI-powered tool named COIN (Contract Intelligence) since June 2017 to interpret commercial loan agreements. A task that would typically require 360,000 lawyer hours can now be completed in seconds.[21]
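The document-review and contract-analysis idea can be sketched in a few lines. The clause names and keyword lists below are invented for illustration; real tools such as COIN rely on trained models rather than fixed keyword lists.

```python
# Hypothetical compliance criteria for reviewing an agreement; both the
# clause names and the keywords are assumptions made up for this sketch.
REQUIRED_CLAUSES = {
    "governing law": ["governing law", "governed by the laws"],
    "termination": ["terminate", "termination"],
    "liability cap": ["limitation of liability", "liable"],
}

def review(contract_text: str) -> dict:
    """Report, per required clause, whether it appears in the contract."""
    text = contract_text.lower()
    return {
        clause: any(keyword in text for keyword in keywords)
        for clause, keywords in REQUIRED_CLAUSES.items()
    }

sample = ("This agreement is governed by the laws of England. "
          "Either party may terminate on notice.")
print(review(sample))
# {'governing law': True, 'termination': True, 'liability cap': False}
```

Even this naive version shows where the time savings come from: checking a document against predefined criteria is mechanical, so automating it frees lawyers for the judgment calls the flagged gaps require.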


The Higher Regional Court of Stuttgart uses an AI tool named OLGA (Assistant of the Higher Regional Court). OLGA analyzes lower court decisions, grounds for appeal, and previously set judicial parameters. It functions as an intelligent research assistant with access to judicial rules, but it does not make decisions itself.


In Bavaria, a new system will soon be tested to automate the anonymization of decisions — a task currently performed manually, and which requires significant time and human resources. Anonymization extends beyond obvious identifiers such as names and addresses to include any data that might indirectly identify an individual.[22]
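A rule-based sketch shows where such anonymization starts and why it is hard to finish: direct identifiers can be caught with simple patterns, while the indirect identifiers mentioned above require contextual analysis. The patterns and sample sentence below are invented; the Bavarian system’s actual approach is not described in the source.

```python
import re

# Hypothetical patterns for direct identifiers in a German court decision.
# Real anonymization must also catch indirect identifiers (professions,
# places, dates of events), which is what makes the task AI-assisted.
PATTERNS = [
    (re.compile(r"\b(?:Herr|Frau)\s+[A-ZÄÖÜ][a-zäöüß]+"), "[NAME]"),
    (re.compile(r"\b\d{5}\s+[A-ZÄÖÜ][a-zäöüß]+"), "[ORT]"),   # postcode + town
    (re.compile(r"\b\d{1,2}\.\d{1,2}\.\d{4}\b"), "[DATUM]"),  # dd.mm.yyyy
]

def anonymize(text: str) -> str:
    """Replace matched identifiers with placeholder tokens."""
    for pattern, placeholder in PATTERNS:
        text = pattern.sub(placeholder, text)
    return text

decision = "Herr Schmidt, wohnhaft in 80331 München, wurde am 12.3.2021 gehört."
print(anonymize(decision))
# "[NAME], wohnhaft in [ORT], wurde am [DATUM] gehört."
```

Fixed patterns like these are exactly what manual anonymization automates first; the residual risk of re-identification from context is why human review remains part of the workflow.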


In the United Kingdom, the Money Claim Online (MCOL) portal has been in use since 2002 to manage claims under £100,000 without the claimant needing to enter a courtroom or hire legal representation. A separate portal, Civil Money Claims, launched in 2018, handles claims under £10,000; 80% of surveyed users found it easy to use. The system first determines whether a case qualifies for the MCOL or Civil Money Claims path. If it is eligible and the automatically generated documents are uploaded, the claim can be submitted for mediation or to the court. If the respondent agrees to pay, the claimant enters the terms of a judgment for court approval. The portal can also be used to issue enforcement orders if payment is not made.[23]


Taken together, these examples show that AI has remarkable capabilities in the justice system. It can accelerate dispute resolution, improve document processing, and increase both efficiency and access to justice.


1.2 Ethical challenges and data protection concerns in AI development


It is worth noting that artificial intelligence offers considerable potential and benefits, but at the same time, it is accompanied by significant ethical challenges, particularly the following:


In some cases, artificial intelligence exhibits bias and discrimination, which may result in unjust outcomes. For example, in 2019, it was revealed that Apple’s credit card offered different credit limits for men and women: women were granted lower limits and were made more vulnerable by the algorithm Apple used. A case of algorithmic racism was also reported with Google Photos, whose image-labeling algorithm mislabeled photos of Black individuals as gorillas.[24]


Articles 7 (prohibition of discrimination) and 12 (right to privacy) of the Universal Declaration of Human Rights, along with Articles 2, 3, and 17 of the International Covenant on Civil and Political Rights, are binding on all signatory states when it comes to the use of artificial intelligence. Guidelines highlight the necessity of algorithmic transparency and openness in decision-making processes. AI-generated decisions must be predictable and require human oversight. Transparency of databases and public accessibility to the basis of their processing are essential for the development of AI in an environment regulated by ethical, moral, and legal mechanisms.[25]


One of the key challenges also lies in the protection of personal data and privacy. Massive surveillance and data collection were observed in Amazon’s “Rekognition” project, which was designed for real-time human identification but faced issues concerning privacy and surveillance.[26]


Privacy and data protection are closely related but distinct rights. Privacy is a fundamental right recognized, in some form, by nearly every country in its constitutions or legal frameworks. Additionally, privacy is recognized as a general human right, unlike data protection. The right to privacy and private life is enshrined in Article 12 of the Universal Declaration of Human Rights, Article 8 of the European Convention on Human Rights, and Article 7 of the EU Charter of Fundamental Rights.


Data protection refers to safeguarding any information related to an identified or identifiable natural person, including names, birthdates, photographs, video recordings, email addresses, and phone numbers. The concept of data protection has its roots in the right to privacy, and both are important instruments for the defense of fundamental rights. Data protection serves the specific purpose of ensuring that personal data are processed (collected, used, stored) in good faith by both the public and private sectors.[27]


One example of data insecurity is the case of Cambridge Analytica, a data analytics company that unlawfully used Facebook users’ personal information during the 2016 U.S. presidential campaign. According to records, the company obtained and analyzed the data of 50 million users, which were then used to craft personalized political advertisements.


For data processing to be lawful, merely having a legal basis is not sufficient. The processing of data must comply with specific principles:



  • Fairness and lawfulness: The processing of personal data must be conducted fairly and legally. This means that data must be collected and handled in a way that does not violate the rights and dignity of the person to whom the data belong.

  • Clear purpose: Data must be collected only for specific and legitimate purposes. Further use of the data for other purposes must be prohibited.

  • Proportionality and adequacy: Only the amount of data necessary to achieve the intended purpose should be collected. The data must be sufficient and relevant for the purpose of processing, but not excessive.

  • Truthfulness and accuracy: Data must be true and accurate. When necessary, data must be updated, their reliability checked, and incorrect or inaccurate information corrected.

  • Storage limitation: Personal data should only be retained for the time necessary to achieve the stated purpose. Once the purpose has been fulfilled, the data must either be deleted or stored in a form that no longer allows identification of the individual.
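The storage-limitation principle above lends itself to a simple automated check. The sketch below assumes an invented record layout and a hypothetical one-year retention period; in practice, retention periods derive from the legal basis of each processing purpose.

```python
from datetime import date, timedelta

# Assumed one-year retention period for closed cases (hypothetical value).
RETENTION = timedelta(days=365)

def due_for_deletion(record: dict, today: date) -> bool:
    """A record must be deleted (or de-identified) once its purpose is
    fulfilled and the retention period has elapsed."""
    return record["case_closed"] and today - record["closed_on"] > RETENTION

record = {"name": "J. Doe", "case_closed": True, "closed_on": date(2023, 1, 10)}
print(due_for_deletion(record, today=date(2024, 6, 1)))  # True: over one year
```

The point of the sketch is that storage limitation is auditable: a system that records when each purpose is fulfilled can enforce deletion mechanically rather than relying on ad hoc housekeeping.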


Another major challenge of artificial intelligence is the issue of accountability: who should be responsible for harm caused by the actions of AI—the manufacturer, the user, or the AI itself? Legally, this is a complex question. Responsibility is generally based on wrongful conduct that causes harm. Since the manufacturer is closest to the decision-making around AI development, they are typically held responsible for defects. However, there are exceptions, such as in cases involving medical harm or damage caused by autonomous vehicles. In cases involving medical harm, it is important to investigate whether the physician, who relied on AI for diagnostics, exercised the necessary level of care. In instances of damage caused by autonomous vehicle operation, liability generally falls on the driver, since they are the one who activates and uses the self-driving function. The driver is considered legally responsible for the vehicle even if they are not physically steering it.[28]


This section presents the challenges that, according to current data, may be associated with artificial intelligence. Alongside these challenges, AI also offers possibilities and potential to solve repetitive, labor-intensive tasks more quickly and efficiently. This, in turn, frees up human resources to focus on more complex and creative tasks. AI also has the capacity to play an important role in disease diagnosis and to be used in environmental protection. However, to eliminate the challenges mentioned above, it is essential that the development of artificial intelligence takes ethics into account.


1.3 Building trustworthy AI: Legal frameworks and human rights considerations


Integrating ethical principles into the development of artificial intelligence is crucial to ensuring that AI tools have a positive impact on society. For users and affected individuals, AI systems are often neither understandable nor transparent in terms of how decisions or outcomes are reached. Yet decisions must be understandable if AI systems are to be perceived as trustworthy and legally compliant. Additionally, effective safeguards must be in place to protect against discrimination, manipulation, or other harmful applications.[29]


Considering the circumstances mentioned above, a foundation has been established for ethical standards governing the use of artificial intelligence. According to these standards, the use of AI must always be based on fundamental human rights, which are legally binding under EU treaties and the EU Charter of Fundamental Rights. Above all, these include respect for human dignity, personal freedom, democracy, the rule of law, equality, non-discrimination, and solidarity.


The High-Level Expert Group on Artificial Intelligence (HLEG), established by the European Commission in June 2018, published its Ethics Guidelines for Trustworthy AI in April 2019. The goal of these guidelines is to promote trustworthy AI, which should be characterized by three components throughout its entire lifecycle:[30]



  1. It must be lawful and, therefore, comply with all applicable laws and regulations.

  2. It must be ethical and, therefore, respect ethical values and principles.

  3. It must be robust, both from a technical and social perspective.


In this way, the HLEG provides recommendations for supporting and ensuring ethical and robust artificial intelligence, and it promotes the integration of AI systems into socio-technical environments. The 52-member expert group believes that the use of artificial intelligence has the potential to profoundly transform society: “Artificial intelligence is not an end in itself, but a promising means to enhance human flourishing and, by extension, individual and societal well-being, as well as to promote progress and innovation”.


Based on the European Union’s guiding principles, the Organization for Economic Co-operation and Development (OECD), an international body composed of 36 member states, primarily from Europe and North America, also developed its own set of AI principles. The OECD aims to promote innovative and trustworthy AI that respects human rights and democratic values. The group of AI experts formulated five key recommendations:[31]



  1. Artificial intelligence should be beneficial to people and the planet by supporting inclusive growth, sustainable development, and the improvement of quality of life.

  2. AI systems should be designed in a way that respects the rule of law, human rights, democratic values, and diversity. They must also ensure appropriate safeguards, such as human intervention where necessary, with the aim of promoting a fair and just society.

  3. AI systems must ensure transparency and responsible disclosure so that individuals can understand and question outcomes produced by AI.

  4. AI systems should function securely and reliably throughout their lifecycle, with ongoing assessment and mitigation of potential risks.

  5. Organizations and individuals developing, deploying, or operating AI systems should be held accountable for their proper functioning in line with the above principles.


The OECD document encourages member governments to support both public and private investment in research and development to drive innovation in trustworthy AI, and to create policy environments that enable the safe and responsible deployment of AI systems. In principle, cross-border and cross-industry cooperation will be necessary to advance responsible AI governance.[32]


At the same time, AI systems must ensure compliance with data protection standards throughout their entire lifecycle. This includes both the information provided initially by users and the data generated about users through their interactions with the system.[33]


According to UNESCO (the United Nations Educational, Scientific and Cultural Organization), the world must harness the positive potential of artificial intelligence to achieve the Sustainable Development Goals, foster knowledge societies, and promote socio-economic progress.[34]


Based on all that has been said above, the information presented underscores the vital role of ethics in the development of artificial intelligence. It is extremely important that decisions made with the assistance of AI are transparent, understandable, and compliant with legal standards. This is essential for building trust, protecting fundamental human rights, and ensuring that the use of AI systems aligns with the real needs of society.


Results and discussion


The legal regulation of artificial intelligence is essential for introducing ethical standards and managing its impact on society. Appropriate regulations help minimize risks and maximize opportunities. This section will review existing laws, regulations, and initiatives. In October 2022, the White House released “The Blueprint for an AI Bill of Rights”, which outlines five key principles intended to protect the rights of the American public in the era of artificial intelligence.



  1. Safe and effective systems: AI systems must be protected from harmful or ineffective technologies. This includes developing systems based on broad consultations to identify and reduce potential risks. Systems should undergo pre-deployment testing and ongoing monitoring to ensure their safety and effectiveness.

  2. Protection against algorithmic discrimination: AI systems must be designed to prevent algorithmic discrimination, meaning they should avoid unjustified disparate treatment. Designers and developers should take steps to ensure systems are fair and protect individual rights without exception.

  3. Data privacy: Data protection must be a top priority. AI systems must be designed to safeguard privacy and obtain users’ consent for data use. Proper and secure data handling and confidentiality must be guaranteed.

  4. Notice and explanation: Users must be informed when AI systems are in use and how they influence outcomes. Systems should provide clear explanations so that individuals understand how decisions are made.

  5. Human alternatives, consideration, and fallback: Users should have the option to decline automated systems and request human review and correction when needed. Human involvement should be ensured, especially in high-risk scenarios.[35]


Additionally, on January 6, 2023, the Council of Europe’s Committee on Artificial Intelligence published a draft convention on AI, human rights, democracy, and the rule of law.


The first part of the convention covers general provisions. Article 1 defines the purpose and scope of the convention, which is to establish fundamental principles, rules, and rights to ensure that the design, development, and use of AI systems are fully aligned with human rights, the functioning of democracy, and the rule of law. Article 2 contains definitions, Article 3 outlines the principle of non-discrimination, and Article 4 defines the scope of the convention, namely that it applies to the design, development, and use of AI systems.

The second part of the draft convention concerns the use of AI tools by public authorities. Article 5 outlines the obligations of state bodies: the use of AI systems must fully respect human rights and fundamental freedoms. Any interference with these rights and freedoms resulting from the use of AI systems, whether by public authorities or by private entities acting on their behalf, must align with the fundamental values of democratic societies, be based on law, and be necessary in a democratic society in pursuit of legitimate public interests. Article 6 addresses respect for human rights, while Article 7 covers respect for democratic institutions and the rule of law.

The remaining chapters cover the following: Chapter III concerns the use of AI tools in the provision of goods and services. Chapter IV addresses the fundamental principles of AI system design, development, and deployment. Chapter V focuses on measures and safeguards that ensure accountability and redress. Chapter VI discusses the assessment and mitigation of risks and adverse impacts. Chapter VII outlines provisions for cooperation, stating that parties shall consult with each other to support or improve the effective implementation and application of the convention. Chapter VIII contains the final provisions.[36]


The Organization for Economic Co-operation and Development (OECD) has developed 12 core principles that should guide the use of artificial intelligence tools. Specifically:



  1. Openness, transparency, and inclusiveness;

  2. Participation in decision-making and service delivery;

  3. Development of a public sector based on data analysis;

  4. Protection of personal privacy and ensuring security;

  5. Clarification of the responsibilities of political leadership;

  6. Consistent use of digital technologies;

  7. Development of coordination mechanisms;

  8. Strengthening international cooperation;

  9. Support for business development;

  10. Enhancement of project management capacities for modern technologies;

  11. Procurement of digital technologies;

  12. Establishment of an appropriate legal framework for digital technologies.[37]


On June 13, 2024, the European Union adopted the Artificial Intelligence Act (Regulation (EU) 2024/1689), the first comprehensive legal framework on artificial intelligence. This act aims to ensure the safety, fairness, and accountability of AI systems. The EU AI regulation is based on several core goals and principles that seek to promote the safe and ethical use of AI systems. Its primary objectives include system safety and effectiveness, protection of users’ health and safety, and transparency and fairness of AI-driven decisions. The regulation requires that AI systems be transparent and appropriate, and that users have full access to information about how these systems operate.[38]


Conclusion


Artificial intelligence is increasingly dominating the global landscape, making the integration of ethics essential for maximizing its positive impact and minimizing negative outcomes. This article has demonstrated that AI holds significant potential to improve judicial systems. However, it has also highlighted key ethical challenges, including bias, discrimination, and concerns around data protection and privacy. The future of artificial intelligence and ethics will be shaped by expanded research in ethical domains, stricter regulatory frameworks, and greater public awareness of the issues at stake. Ethics is an inseparable part of AI development, and the responsibility of developers in this regard is becoming increasingly emphasized. Ultimately, the challenge lies in harnessing the power of AI to promote human well-being and progress without compromising human ethics and dignity. This requires continuous, informed, and inclusive dialogue about the ethical questions AI raises to ensure a just and responsible AI future.

According to recent studies, as of today, the Georgian justice system does not yet incorporate AI tools, nor does it have the necessary ethical or legal frameworks in place. Therefore, it is essential for the country to prioritize the development of an ethical framework that ensures the protection of fundamental human rights in the use of AI. At the same time, it is necessary to gradually introduce AI technologies into the justice system, which would contribute to streamlining processes, increasing transparency in decision-making, and alleviating pressure on an overburdened system.


References 


Scientific Literature:



  1. Dias, S. A. de J., Sátiro, R. M. (2024). Artificial intelligence in the judiciary: A critical view. Futures, 164, Article 103493. Available at: <https://doi.org/10.1016/j.futures.2024.103493>;

  2. Donahue, L. (2018). A primer on using artificial intelligence in the legal profession. Jolt Digest. Available at: <http://jolt.law.harvard.edu/digest/a-primer-on-using-artificial-intelligence-in-the-legal-profession> (Last access: 28.09.2023);

  3. Gabisonia, Z. (2022). Internet law and artificial intelligence. Tbilisi: World of Lawyers;

  4. Haugeland, J. (1985). Artificial Intelligence: The Very Idea. s.l.: MIT Press;

  5. Nilsson, N. J. (2010). The Quest for Artificial Intelligence. Cambridge: Cambridge University Press;

  6. Reiling, A. D. (Dory). (2020). Courts and Artificial Intelligence. International Journal for Court Administration, 11(2), Article 8. Available at: <https://doi.org/10.36745/ijca.343>;

  7. Russell, S. J., Norvig, P. (2010). Artificial Intelligence. s.l.: Pearson Education Inc.;

  8. Sidamonidze, N. (2019). Artificial intelligence as a challenge and some methodological aspects of its implementation. Tbilisi: Georgian Technical University;

  9. Turing, A. M. (1950). Computing machinery and intelligence. Mind, 59(236), 433–460.


Legal acts and official documents:



  1. AI Decision-Making and the Courts: A Guide for Judges, Tribunal Members and Court Administrators. (2022);

  2. Council of Europe. Artificial Intelligence. Available at: <https://www.coe.int/en/web/artificial-intelligence/glossary>;

  3. DIN/DKE. (2020). Ethik und Künstliche Intelligenz: Was können technische Normen und Standards leisten? (White paper). Berlin: DIN. Available at: <https://www.din.de/resource/blob/754724/00dcbccc21399e13872b2b6120369e74/whitepaper-ki-ethikaspekte-data.pdf>. (In German);

  4. Goderdzishvili, N. (2020). Artificial Intelligence: Essence, International Standards, Ethical Norms and Recommendations (policy document), Tbilisi: Information Freedom Development Institute (IDFI). Available at: <https://www.idfi.ge/public/upload/Article/1111Artificial-Intelligence-GEO_Web%20Version.pdf>. (In Georgian);

  5. Government of Georgia. (2023). Guiding principles of the national strategy of Georgia on artificial intelligence. Tbilisi: Government of Georgia;

  6. European Commission. (2019). Ethics Guidelines for Trustworthy AI. Publications Office of the European Union. Available at: <https://digital-strategy.ec.europa.eu/en/library/ethics-guidelines-trustworthy-ai>;

  7. OECD (Organisation for Economic Co-operation and Development). Artificial Intelligence. OECD Principles on AI. (Online). Available at: <https://www.oecd.org/going-digital/ai/principles/>;

  8. UNESCO. (2019). On a Possible Standard-Setting Instrument on the Ethics of Artificial Intelligence. Available at: <https://unesdoc.unesco.org/ark:/48223/pf0000369455>;

  9. The White House. (2022). Blueprint for an AI Bill of Rights: Making automated systems work for the American people. Washington, DC. Available at: <https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf> (Last access: 01.10.2023).


Web Sources:



  1. CMS Germany. (2023). Wie diskriminierend ist künstliche Intelligenz? CMSHS Bloggt. Available at: <https://www.cmshs-bloggt.de/rechtsthemen/sustainability/sustainability-social-and-human-rights/wie-diskriminierend-ist-kuenstliche-intelligenz/> (Last access: 30.09.2023);

  2. Committee on Artificial Intelligence (CAI). Available at: <https://rm.coe.int/cai-2023-01-revised-zero-draft-framework-convention-public/1680aa193f>. (Last access: 01.10.2023);

  3. Deutsche UNESCO-Kommission. (2022). DUK Broschüre KI-Empfehlung (Recommendation on the Ethics of AI). Available at: <https://www.unesco.de/sites/default/files/2022-03/DUK_Broschuere_KI-Empfehlung_DS_web_final.pdf>;

  4. Ethik und Künstliche Intelligenz: Was können technische Normen und Standards leisten. Available at: <https://www.din.de/resource/blob/754724/00dcbccc21399e13872b2b6120369e74/whitepaper-ki-ethikaspekte-data.pdf>;

  5. European Data Protection Supervisor. (n.d.). Datenschutz. Available at: <https://edps.europa.eu/data-protection/data-protection_de> (Last access: 30.09.2023);

  6. European Parliament. (2023). EU AI Act: First regulation on artificial intelligence. Available at: <https://www.europarl.europa.eu/topics/en/article/20230601STO93804/eu-ai-act-first-regulation-on-artificial-intelligence>;

  7. Geekflare Team. (2025). 10 Beispiele für Künstliche Intelligenz (KI) im täglichen Leben (Article). Geekflare. Available at: <https://geekflare.com/de/daily-life-ai-example/> (Last access: 20.09.2025);

  8. High-Level Expert Group on Artificial Intelligence. (2019). Ethics guidelines for trustworthy AI. Brussels: European Commission. Available at: <https://digital-strategy.ec.europa.eu/en/library/ethics-guidelines-trustworthy-ai>;

  9. High-Level Expert Group on Artificial Intelligence. European Commission. Available at: <https://ec.europa.eu/futurium/en/ai-alliance-consultation>.

  10. OECD. Available at: <https://www.oecd.org/governance/digital-government/toolkit/12principles/> (Last access: 01.10.2023);

  11. Pfleger, L. Was kann KI an den Zivilgerichten. Available at: <https://www.lto.de/recht/justiz/j/justiz-ki-kuenstliche-intelligenz-e-akte-digitalisierung-zivilgerichte/> (Last access: 28.09.2023);

  12. Snow, J. (2018). Amazon’s Face Recognition Falsely Matched 28 Members of Congress with Mugshots. American Civil Liberties Union. Available at: <https://www.aclu.org/news/privacy-technology/amazons-face-recognition-falsely-matched-28> (Last access: 30.09.2023);

  13. SRD Rechtsanwälte. (2024). Künstliche Intelligenz (KI) – wer haftet, wenn ein Roboter versagt? SRD Rechtsanwälte Blog. Available at: <https://www.srd-rechtsanwaelte.de/blog/kuenstliche-intelligenz-haftung> (Last access: 01.10.2023).


Footnotes


[1] Deutsche UNESCO-Kommission. (2022). DUK Broschüre KI-Empfehlung (Recommendation on the Ethics of AI). Available at: <https://www.unesco.de/sites/default/files/2022-03/DUK_Broschuere_KI-Empfehlung_DS_web_final.pdf>.


[2] Government of Georgia. (2023). Guiding principles of the national strategy of Georgia on artificial intelligence. Tbilisi: Government of Georgia.


[3] Dias, S. A. de J., Sátiro, R. M. (2024). Artificial intelligence in the judiciary: A critical view. Futures, 164, Article 103493. Available at: <https://doi.org/10.1016/j.futures.2024.103493>.


[4] Sidamonidze, N. (2019). Artificial intelligence as a challenge and some methodological aspects of its implementation. Tbilisi: Georgian Technical University.


[5] European Commission. (2019). Ethics Guidelines for Trustworthy AI. Publications Office of the European Union. Available at: <https://digital-strategy.ec.europa.eu/en/library/ethics-guidelines-trustworthy-ai>.


[6] Reiling, A. D. (Dory). (2020). Courts and Artificial Intelligence. International Journal for Court Administration, 11(2), Article 8. Available at: <https://doi.org/10.36745/ijca.343>.


[7] DIN/DKE. (2020). Ethik und Künstliche Intelligenz: Was können technische Normen und Standards leisten? (White paper). Berlin: DIN. Available at: <https://www.din.de/resource/blob/754724/00dcbccc21399e13872b2b6120369e74/whitepaper-ki-ethikaspekte-data.pdf>. (In German).


[8] Haugeland, J. (1985). Artificial Intelligence: The Very Idea. Cambridge, MA: MIT Press.


[9] Turing, A. M. (1950). Computing machinery and intelligence. Mind, 59(236), 433-460.


[10] Russell, S. J., Norvig, P. (2010). Artificial Intelligence: A Modern Approach (3rd ed.). Upper Saddle River, NJ: Pearson Education.


[11] Council of Europe. Artificial Intelligence. Available at: <https://www.coe.int/en/web/artificial-intelligence/glossary>.


[12] High-Level Expert Group on Artificial Intelligence. European Commission. Available at: <https://ec.europa.eu/futurium/en/ai-alliance-consultation>.


[13] Gabisonia, Z. (2022). Internet law and artificial intelligence. Tbilisi: World of Lawyers, p. 446.


[14] Russell, S. J., Norvig, P. (2010). Artificial Intelligence: A Modern Approach (3rd ed.). Upper Saddle River, NJ: Pearson Education.


[15] Nilsson, N. J. (2010). The Quest for Artificial Intelligence: A History of Ideas and Achievements. Cambridge: Cambridge University Press.


[16] Goderdzishvili, N. (2020). Artificial Intelligence: Essence, International Standards, Ethical Norms and Recommendations (policy document), Tbilisi: Information Freedom Development Institute (IDFI). Available at: <https://www.idfi.ge/public/upload/Article/1111Artificial-Intelligence-GEO_Web%20Version.pdf>. (In Georgian).


[17] Geekflare Team. (2025). 10 Beispiele für Künstliche Intelligenz (KI) im täglichen Leben (Article). Geekflare. Available at: <https://geekflare.com/de/daily-life-ai-example/> (Last access: 20.09.2025).


[18] AI Decision-Making and the Courts: A Guide for Judges, Tribunal Members and Court Administrators. (2022).


[19] Ibid.


[20] Reiling, A. D. (2020). Courts and Artificial Intelligence.


[21] Donahue, L. (2018). A primer on using artificial intelligence in the legal profession. Jolt Digest. Available at: <http://jolt.law.harvard.edu/digest/a-primer-on-using-artificial-intelligence-in-the-legal-profession> (Last access: 28.09.2023).


[22] Pfleger, L. Was kann KI an den Zivilgerichten. Available at: <https://www.lto.de/recht/justiz/j/justiz-ki-kuenstliche-intelligenz-e-akte-digitalisierung-zivilgerichte/> (Last access: 28.09.2023).


[23] AI Decision-Making and the Courts: A Guide for Judges, Tribunal Members and Court Administrators. (2022).


[24] CMS Germany. (2023). Wie diskriminierend ist künstliche Intelligenz? CMSHS Bloggt. Available at: <https://www.cmshs-bloggt.de/rechtsthemen/sustainability/sustainability-social-and-human-rights/wie-diskriminierend-ist-kuenstliche-intelligenz/> (Last access: 30.09.2023).


[25] Goderdzishvili, N. (2020). Artificial Intelligence: Essence, International Standards, Ethical Norms and Recommendations (policy document), Tbilisi: Information Freedom Development Institute (IDFI). Available at: <https://www.idfi.ge/public/upload/Article/1111Artificial-Intelligence-GEO_Web%20Version.pdf>. (In Georgian).


[26] Snow, J. (2018). Amazon’s Face Recognition Falsely Matched 28 Members of Congress with Mugshots. American Civil Liberties Union. Available at: <https://www.aclu.org/news/privacy-technology/amazons-face-recognition-falsely-matched-28> (Last access: 30.09.2023).


[27] European Data Protection Supervisor. (n.d.). Datenschutz. Available at: <https://edps.europa.eu/data-protection/data-protection_de> (Last access: 30.09.2023).


[28] SRD Rechtsanwälte. (2024). Künstliche Intelligenz (KI) – wer haftet, wenn ein Roboter versagt? SRD Rechtsanwälte Blog. Available at: <https://www.srd-rechtsanwaelte.de/blog/kuenstliche-intelligenz-haftung> (Last access: 01.10.2023).


[29] DIN/DKE. (2020). Ethik und Künstliche Intelligenz: Was können technische Normen und Standards leisten? (White paper). Berlin: DIN. Available at: <https://www.din.de/resource/blob/754724/00dcbccc21399e13872b2b6120369e74/whitepaper-ki-ethikaspekte-data.pdf>. (In German).


[30] High-Level Expert Group on Artificial Intelligence. (2019). Ethics guidelines for trustworthy AI. Brussels: European Commission. Available at: <https://digital-strategy.ec.europa.eu/en/library/ethics-guidelines-trustworthy-ai>.


[31] OECD (Organisation for Economic Co-operation and Development). Artificial Intelligence. OECD Principles on AI. (Online). Available at: <https://www.oecd.org/going-digital/ai/principles/>.


[32] Ethik und Künstliche Intelligenz: Was können technische Normen und Standards leisten. Available at: <https://www.din.de/resource/blob/754724/00dcbccc21399e13872b2b6120369e74/whitepaper-ki-ethikaspekte-data.pdf>.


[33] High-Level Expert Group on Artificial Intelligence. (2019). Ethics guidelines for trustworthy AI. Brussels: European Commission. Available at: <https://digital-strategy.ec.europa.eu/en/library/ethics-guidelines-trustworthy-ai>.


[34] UNESCO. (2019). On a Possible Standard-Setting Instrument on the Ethics of Artificial Intelligence. Available at: <https://unesdoc.unesco.org/ark:/48223/pf0000369455>.


[35] The White House. (2022). Blueprint for an AI Bill of Rights: Making automated systems work for the American people. Washington, DC. Available at: <https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf> (Last access: 01.10.2023).


[36] Committee on Artificial Intelligence (CAI). Available at: <https://rm.coe.int/cai-2023-01-revised-zero-draft-framework-convention-public/1680aa193f>. (Last access: 01.10.2023).


[37] OECD. Available at: <https://www.oecd.org/governance/digital-government/toolkit/12principles/> (Last access: 01.10.2023).


[38] European Parliament. (2023). EU AI Act: First regulation on artificial intelligence. Available at: <https://www.europarl.europa.eu/topics/en/article/20230601STO93804/eu-ai-act-first-regulation-on-artificial-intelligence>.


Section
Articles

How to Cite

The Capabilities and Challenges of Artificial Intelligence in the Justice System. (2025). Law and World, 11(35), 142-153. https://doi.org/10.36475/11.3.10
Creative Commons License

This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.