
    Abstract - Journal Law and World

    Volume 7, Issue 5

    The Legal Aspects of Artificial Intelligence based on the EU Experience

    Affiliation: Professor, Business and Technology University

    Abstract: In the digital era, technological advances have brought innovative opportunities. Artificial intelligence is a real instrument for automating routine tasks in different fields (healthcare, education, the justice system, foreign and security policies, etc.). AI is evolving very fast. More precisely, robots are reprogrammable multi-purpose devices designed for the handling of materials and tools, for the processing of parts, or as specialized devices utilizing varying programmed movements to complete a variety of tasks. Regardless of these opportunities, artificial intelligence may pose some risks and challenges for us. Because of the nature of AI, ethical and legal questions arise, especially in terms of protecting human rights. The power of artificial intelligence lies in analyzing big data more effectively than a human being can. On the one hand, this causes the loss of traditional jobs; on the other hand, it promotes the creation of digital equivalents of workers capable of performing automatic routine tasks. “Artificial intelligence must serve people, and therefore artificial intelligence must always comply with people’s rights,” said Ursula von der Leyen, President of the European Commission. The EU has a clear vision of the development of the legal framework for AI. In the light of the above, the article aims to explore the legal aspects of artificial intelligence based on the European experience. This is also essential in the context of Georgia’s European integration: analyzing the legal approaches of the EU will promote the approximation of Georgian legislation to EU standards in this field and help define AI’s role in the effective digital transformation of the public and private sectors in Georgia.

    Keywords: Artificial Intelligence, Digitalization, Automatic Routine Tasks



    A new industrial revolution has brought sophisticated robots, bots, androids and other manifestations of artificial intelligence ("AI"). According to The Encyclopedia Britannica, “Artificial intelligence is the ability of a digital computer or computer-controlled robot to perform tasks commonly associated with intelligent beings”. Machine learning offers enormous economic and innovative benefits for society by vastly improving the ability to analyze data. The development and increased use of automated and algorithmic decision-making have an impact on the choices that a private person (such as a business or an internet user) and an administrative, judicial or other public authority make in rendering their final decisions of a consumer, business or authoritative nature. Artificial intelligence (AI) refers to systems that display intelligent behaviour by analysing their environment and taking actions – with some degree of autonomy – to achieve specific goals. AI-based systems can be purely software-based, acting in the virtual world, or AI can be embedded in hardware devices. AI can be used daily, for example, to translate languages, generate subtitles in videos or block email spam. Because of growth in computing power, the availability of data and progress in algorithms, AI has become one of the most strategic technologies of the 21st century.
    Given AI’s potential, the European Union will promote the development of Artificial Intelligence. Through the Digital Europe and Horizon Europe programmes, the Commission will invest EUR 1 billion per year in AI and mobilise additional investments from the private sector and the Member States to reach EUR 20 billion investment per year over the course of this decade.
    AI and other digital technologies can contribute to a sustained post-COVID-19 recovery due to their potential for increasing productivity across all economic sectors, creating new markets and bringing tremendous opportunities for Europe’s economic growth. AI technologies help optimise industrial processes and make them more resilient and efficient, enabling innovative self-learning and real-time solutions, from predictive maintenance to collaborative robots and from digital twins to augmented reality.
    AI can contribute to the objectives of the security policy. It can be a strategic tool to anticipate risks, overcome challenges and counter threats. More precisely, AI can help to fight crime and terrorism, and enable law enforcement to keep pace with the fast-developing technologies used by criminals and their cross-border activities.
    Simultaneously, the use of AI also creates risks that need to be addressed. Certain characteristics of AI, such as the opacity of many algorithms, which makes investigating causal relationships difficult, pose specific and high risks to safety and fundamental rights that existing legislation is unable to address, or in view of which existing legislation is challenging to enforce.
    New technologies have become normative challenges to both domestic and international law. This process requires regulatory and legislative action. Sensible normative frameworks must be collaborative, and governments should work with different actors in adopting fit-for-purpose governance regimes. In the light of the above, new rules should be developed by states to understand AI behavior and clarify appropriate responsibilities for providers and users of AI systems. Such norms will provide a legal basis to protect human rights in the digital era.


    In its Communication of 25 April 2018 and 7 December 2018, the European Commission set out its vision for artificial intelligence (AI), which supports “ethical, secure and cutting-edge AI made in Europe”.
    The goal of the Guidelines is to promote Trustworthy AI. Trustworthy AI has three components, which should be met throughout the system's entire life cycle: (1) it should be lawful, complying with all applicable laws and regulations; (2) it should be ethical, ensuring adherence to ethical principles and values; and (3) it should be robust, both from a technical and a social perspective, since, even with good intentions, AI systems can cause unintentional harm. Each component in itself is necessary but not sufficient for the achievement of Trustworthy AI. All the abovementioned components work in harmony and overlap in their operation. If tensions arise between these components in practice, society should endeavour to align them.
    Generally, AI ethics focuses on the normative issues raised by the design, development, implementation and use of AI. Within ethical discussions, the terms “moral” and “ethical” are used. The term “moral” refers to the factual patterns of behaviour, the customs, and conventions that can be found in specific cultures, groups and individuals. The term “ethical” refers to an evaluative assessment of such concrete actions from a systematic, academic perspective. Ethical AI is used to indicate the development and use of AI that ensures compliance with ethical norms, including fundamental rights as special moral entitlements and core values.
    Even if an ethical purpose is ensured, individuals and society must also be confident that AI systems will not cause unintentional harm. Such systems should perform in a safe, secure and reliable manner, and safeguards should be foreseen to prevent any unintended adverse impacts. In this regard, it is important to ensure that AI systems are robust.
    The robustness of an AI system encompasses both its technical robustness (appropriate in a given context, such as the application domain) and its robustness from a social perspective (ensuring that the AI system takes into account the context and environment in which the system operates). This is crucial to ensure that, even with good intentions, no unintentional harm can occur. Ethical and robust AI are intertwined and complement each other.
    It matters how ethical conflicts are reconciled and how much transparency is required in data analytic solutions. It is also essential how data are integrated into organizational routines. Overall, the ethical principles, with their technical and social perspectives, aim at the development and use of AI with good intention. If providers or users of AI systems take core values into consideration in practice, artificial intelligence will not cause harm.


    On 21 April 2021, the European Commission proposed a transformative legal framework to govern the use of artificial intelligence (AI) in the EU. The proposal adopts a risk-based approach whereby the uses of artificial intelligence are categorised and restricted according to whether they pose an unacceptable, high, or low risk to human safety and fundamental rights. The policy is considered to be one of the first of its kind in the world and would have profound and far-reaching consequences for providers and users of technologies incorporating artificial intelligence.
    The AI Act addresses the risks stemming from the different uses of AI systems and aims to promote innovation in the field of AI. Mark MacCarthy and Kenneth Propp have called the proposed regulation “a comprehensive and thoughtful start to the legislative process in Europe that might prove to be the basis for trans-Atlantic cooperation.”
    The proposed regulatory framework on Artificial Intelligence focuses on the following 4 specific objectives: 1) ensure that AI systems placed on the Union market and used are safe and respect existing law on fundamental rights and Union values; 2) ensure legal certainty to facilitate investment and innovation in AI; 3) enhance governance and effective enforcement of existing law on fundamental rights and safety requirements applicable to AI systems; 4) facilitate the development of a single market for lawful, safe and trustworthy AI applications and prevent market fragmentation.
    To achieve these objectives, this proposal presents a balanced and proportionate horizontal regulatory approach to AI that is limited to the minimum necessary requirements to address the risks and problems linked to AI, without constraining or hindering technological development or otherwise disproportionately increasing the cost of placing AI solutions on the market. It sets a robust and flexible legal framework. On the one hand, it is comprehensive and future-proof in its fundamental regulatory choices, including the principle-based requirements that AI systems should comply with. On the other hand, it puts in place a proportionate regulatory system centred on a well-defined risk-based regulatory approach that does not create unnecessary restrictions to trade, whereby legal intervention is tailored to those concrete situations where there is a justified cause for concern or where such concern can reasonably be anticipated in the near future.
    The use of AI with its specific characteristics can affect a number of fundamental rights enshrined in the EU Charter of Fundamental Rights. This framework provides a high level of protection for those fundamental rights and aims to address various sources of risks through a clearly defined risk-based approach. With a set of requirements for trustworthy AI and proportionate obligations on all value chain participants, the proposal will promote the protection of fundamental rights including the right to human dignity, respect for private life and protection of personal data, nondiscrimination and equality between women and men. It aims to prevent a chilling effect on the rights to freedom of expression and freedom of assembly. Furthermore, the proposal will positively affect the rights of a number of special groups, such as the workers’ rights to fair and just working conditions, a high level of consumer protection, the rights of the child and the integration of persons with disabilities. In case infringements of fundamental rights still happen, effective redress for affected persons will be made possible by ensuring transparency and traceability of the AI systems coupled with strong ex-post controls.
    The AI Act is guided by the idea of the development of trustworthy technologies. Building trust requires the protection of people’s safety and fundamental rights. It can be achieved by establishing boundaries around why and how AI systems are developed and used. The AI Act considers a risk-based approach that bans specific unacceptable uses of AI and regulates some other uses that carry important risks.
    The prohibitions cover practices that have a significant potential to manipulate persons through subliminal techniques beyond their consciousness or exploit vulnerabilities of specific vulnerable groups such as children or persons with disabilities to materially distort their behaviour in a manner that is likely to cause them or another person psychological or physical harm. Other manipulative or exploitative practices affecting adults that might be facilitated by AI systems could be covered by the existing data protection, consumer protection and digital service legislation that guarantee that natural persons are properly informed and have the free choice not to be subject to profiling or other practices that might affect their behaviour. The proposal also prohibits AI-based social scoring for general purposes done by public authorities.
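    The risk-based logic described above can be pictured, very loosely, as a mapping from a use case to a risk tier and its regulatory consequence. The sketch below is purely illustrative: the tier names echo the proposal, but the example use cases, the mapping, and all identifiers are assumptions made for illustration, not the legal text.

```python
# Illustrative sketch of the AI Act's risk-based approach as a simple
# lookup. The tier names follow the proposal's terminology; the example
# use cases and their classification are illustrative assumptions only.
from enum import Enum


class RiskTier(Enum):
    UNACCEPTABLE = "prohibited outright"
    HIGH = "permitted subject to conformity requirements"
    LIMITED = "permitted with transparency obligations"
    MINIMAL = "permitted with no additional obligations"


# Hypothetical mapping of example use cases to tiers (not the legal text).
EXAMPLE_CLASSIFICATION = {
    "social scoring by public authorities": RiskTier.UNACCEPTABLE,
    "subliminal manipulation causing harm": RiskTier.UNACCEPTABLE,
    "CV screening for recruitment": RiskTier.HIGH,
    "chatbot interacting with users": RiskTier.LIMITED,
    "spam filtering": RiskTier.MINIMAL,
}


def obligations(use_case: str) -> str:
    """Return the tier and regulatory consequence for an example use case.

    Unlisted use cases default to the minimal tier in this sketch.
    """
    tier = EXAMPLE_CLASSIFICATION.get(use_case, RiskTier.MINIMAL)
    return f"{tier.name}: {tier.value}"
```

    The point of the sketch is only the shape of the regime: prohibition at the top, graduated obligations below, and no additional obligations for the vast majority of low-risk applications.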
    The AI Act introduces specific transparency obligations for both users and providers of AI systems. Specific transparency obligations apply to automated emotion recognition systems, and limited-risk AI systems likewise necessitate specific transparency obligations. Overall, the Artificial Intelligence Act is a good starting point to ensure that the development of AI in the EU is ethically sound, legally acceptable, socially equitable and environmentally sustainable, and that AI supports the economy, society, and the environment. At the same time, it should be noted that the normative framework of the European Union could be the best model to stimulate the development of rules on AI systems at the global level.


    The EU ethical and legal frameworks focus on the creation of real guarantees for the protection of fundamental rights in the process of using AI. The EU assists its member states in establishing institutional mechanisms for the implementation of appropriate standards concerning AI. It develops a risk-based approach to avoid utilizing machines that can cause damage in various respects. Overall, the EU strategic vision and framework give member states a legal basis to regulate AI-related issues. This process promotes the use of secure applications and the protection of basic rules.
    AI will be used more intensively by different actors in the future. It will have an impact on the decision-making process, especially in big-data cases. It is essential to take into consideration that AI applications should be provided with objective information. Based on such information, robots would be able to make the right conclusions and assist companies, doctors, teachers, lawyers, diplomats, etc. Chatbots are integral parts of the digital world and they can effectively perform automatic routine tasks for various purposes.
    In light of the above, Georgia should develop AI policy based on the European experience. Furthermore, considering the trends of AI policy developments in the EU and its member states is relevant for Georgia due to its European aspirations and the legal approximation duties derived from the Association Agreement between Georgia and the European Union (AA). Ultimately, the development of AI policy would facilitate the adoption of a legal framework focused on safeguarding fundamental rights and the establishment of an institutional mechanism for avoiding the utilization of high-risk applications.


    1. Artificial Intelligence Act. 2021. Regulation of the European Parliament and of the Council
    2. Artificial Intelligence for Europe. 2018. European Commission. Brussels
    3. Ethics Guidelines for Trustworthy AI, High-Level Expert Group on Artificial Intelligence. 2019. European Commission
    4. European Parliament Resolution of 16 February 2017 with recommendations to the Commission on Civil Law Rules on Robotics. https://www.europarl.europa.eu/doceo/document/TA-8-2017-0051_EN.html
    5. Floridi, L., 2021. The European Legislation on AI: a Brief Analysis of its Philosophical Approach, Philosophy & Technology
    6. Fostering a European approach to Artificial Intelligence. 2021. European Commission
    7. Gaumond, E., 2021. Artificial Intelligence Act: What Is the European Approach for AI? https://www.lawfareblog.com/artificial-intelligence-act-what-european-approach-ai
    8. Jarota, M., 2021. Artificial intelligence and Robotization in the EU – Should We Change OHS Law? Journal of Occupational Medicine and Toxicology
    9. Kop, M., 2021. EU Artificial Intelligence Act: The European Approach to AI, Stanford – Vienna Transatlantic Technology Law Forum, Transatlantic Antitrust and IPR Developments, Stanford University, Issue No. 2
    10. Parulava, G., 2021. Georgia – Fit for the Age of Artificial Intelligence? PMCG Research. Tbilisi
    11. Press remarks by President von der Leyen on the Commission’s new strategy: Shaping Europe’s Digital Future. February 19, 2020. https://ec.europa.eu/commission/presscorner/detail/nl/speech_20_294
    12. The Encyclopedia Britannica. https://www.britannica.com/technology/artificial-intelligence
    13. Vihul, L., 2020. International Legal Regulation of Autonomous Technologies, Centre for International Governance Innovation
    14. West, D., 2018. The Future of Work: Robots, AI, and Automation – Artificial Intelligence, Brookings Institution Press
    15. Yaros, O., Bruder, A., Hajda, O., Graham, E., 2021. The European Union Proposes New Legal Framework for AI. https://www.mayerbrown.com/en/perspectives-events/publications/2021/05/the-european-union-proposes-new-legal-framework-for-artificial-intelligence