Article citation information:

Czech, P. Artificial intelligence as a basic problem when implementing autonomous vehicle technology in everyday life. Scientific Journal of Silesian University of Technology. Series Transport. 2024, 122, 49-60. ISSN: 0209-3324. DOI: https://doi.org/10.20858/sjsutst.2024.122.3.

 

 

Piotr CZECH[1]

 

 

 

ARTIFICIAL INTELLIGENCE AS A BASIC PROBLEM WHEN IMPLEMENTING AUTONOMOUS VEHICLE TECHNOLOGY IN EVERYDAY LIFE

 

Summary. Innovative transport technologies that use artificial intelligence, emerging recently around the world, include, among others, autonomous vehicle driving. The use of autonomous vehicle technology affects civil liability (liability and insurance), road safety, the natural environment (energy efficiency, renewable energy sources), data (access, exchange, protection, privacy), IT infrastructure (effective and reliable communication), and employment (creation and loss of jobs, training of truck drivers in the use of automated vehicles). The development of new technologies related to artificial intelligence, including autonomous vehicles, generates inevitable changes in law, the economy, and society. These changes are inevitable because autonomy is undoubtedly a means of achieving the efficiency gains sought in every area of life. The article presents arguments supporting the thesis that the basic factor inhibiting the implementation of autonomous vehicle technology is the problem of artificial intelligence, including its definition and legal regulation.

Keywords: artificial intelligence, autonomous vehicles, legal regulations, innovative technologies, mobility, transport

 

 

1. INTRODUCTION

 

In his actions, man has always set himself new challenges that force innovative and visionary thinking. One of them was to create a vehicle that would move independently, without any human intervention. The idea behind this goal was broadly understood safety. Vehicles of this type could, for example, independently transport hazardous materials or operate in environments that pose a threat to human life [1].

The topic of autonomy, covering all types of vehicles, is therefore particularly connected with military affairs. It was in this area that the first ideas and research appeared and the first financial resources were invested. History shows that the results of such research were often made public only after several years, and then used for non-military purposes.

The analysis carried out confirms the thesis that military needs were the greatest motivation for the development of technologies enabling the creation of vehicles that require only limited human intervention in their operation, or even no intervention at all. We can also indicate the activities of a military organization that directly contributed to the development of autonomous cars.

In 2003, DARPA (Defense Advanced Research Projects Agency), the American government agency responsible for developing military technology, announced a competition aimed at designing an autonomous vehicle [2]. The detailed guidelines stated that the vehicle should independently cover a distance of approximately 250 kilometers in less than 10 hours. The route was set in the Mojave Desert, located mostly in California but also reaching into Utah, Nevada, and Arizona, in the southern part of the Great Basin in the USA. The prize of one million dollars was approved by the Congress of the United States. The competition, called the Grand Challenge, took place on March 13, 2004. Unfortunately, no team successfully completed the designated route from Barstow, California, to Primm, Nevada. The closest to victory was a vehicle called Sandstorm, based on the Humvee military vehicle. It was equipped with four lidars (which determine distance using a laser beam), one radar (which uses radio waves), two cameras, an inertial navigation system, and a GPS satellite navigation receiver. It traveled a relatively short distance of approximately 12 kilometers. Ultimately, the main prize was not awarded [3].

The next year, the winning car, called Stanley and based on the Volkswagen Touareg, completed the route in 6 hours and 54 minutes. A total of four vehicles managed to cover the required distance. The car was equipped with five laser sensors, a radar, a camera, an inertial navigation system, and GPS [4].

In the next edition, the organizers made the race harder for its participants. It took place in simulated urban traffic at a military base in Victorville, California. The cars had to obey California traffic rules, cope with adverse weather conditions such as fog and rain, and function without GPS reception. The maximum stopping time was set at 10 seconds. In addition, the vehicles had to be able to perform turning and parking maneuvers, and no collisions could occur during the competition. Because of its new character, the competition was renamed the Urban Challenge. The winner covered a route of approximately 95 kilometers in 4 hours and 10 minutes. It was a car called Boss, based on the Chevrolet Tahoe and equipped with four lidars, a radar, a camera, and GPS [5].

Looking at the history of the development of autonomy in transport, in addition to the activities undertaken by military-related institutions, we should not forget the significant contribution of civilian visionaries. One such person is undoubtedly Elon Musk, known as the co-founder of Tesla, a leader in implementing the concept of autonomy in cars.

In late 2014, the company began equipping the Model S with hardware that could automate some steering, braking, and acceleration functions, and work began on implementing the first Autopilot functions. Just a year later, the car received a working Autopilot function, which combined adaptive cruise control, allowing driving at a set speed, with a system keeping the vehicle in the lane marked by painted lines on the road. This was not yet a solution enabling autonomous driving, and vehicle control remained in the hands of a human driver. However, it provided the basis for further gradual implementation of the innovative technology [6].
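The combination described above can be illustrated with a minimal control-loop sketch. This is not Tesla's implementation; the function, its gains, and all state values are hypothetical, and a real driver-assistance system involves far more (sensor fusion, planning, safety monitors). The sketch only shows the idea of coupling a speed controller with a lane-centering controller.

```python
# Illustrative sketch (hypothetical, not any vendor's implementation):
# one control step combining two proportional controllers, one holding
# a set speed (adaptive cruise control) and one steering back toward
# the lane center (lane keeping).

def assist_step(speed, set_speed, lateral_offset,
                k_speed=0.5, k_steer=0.8):
    """Return (throttle, steering) corrections for one control step.

    speed          -- current vehicle speed (m/s)
    set_speed      -- driver-selected cruise speed (m/s)
    lateral_offset -- distance from lane center (m), positive = right
    """
    throttle = k_speed * (set_speed - speed)   # close the speed gap
    steering = -k_steer * lateral_offset       # steer back to center
    return throttle, steering


# Vehicle below the set speed and drifting right: the sketch yields
# a positive throttle correction and a leftward (negative) steering one.
throttle, steering = assist_step(speed=25.0, set_speed=30.0,
                                 lateral_offset=0.4)
print(throttle, steering)
```

In this toy form the human remains responsible for supervision, mirroring the article's point that such functions assist rather than replace the driver.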

Analysis of the current situation indicates that introducing autonomous vehicles on public roads is technically possible, or close to it. However, a visible problem lies in the legal system surrounding this technology: deficiencies in the relevant legal regulations hinder, and often prevent, this possibility.

 

 

2.  ARTIFICIAL INTELLIGENCE AS A TOOL TO SUPERVISE THE OPERATION OF AUTONOMOUS MEANS OF TRANSPORT

 

Regardless of the type of autonomous vehicle, its most important part is the one responsible for controlling it without external human participation. To make this possible, it is necessary to use one of the tools belonging to the group of methods called “artificial intelligence”.

People have been interested in artificial intelligence for many years, both in the field of broadly understood entertainment – films and fantasy literature on this topic – and in science, in the engineering, technical, and medical disciplines as well as in the social sciences and humanities. The first scientific journal devoted exclusively to artificial intelligence was “Artificial Intelligence” (ISSN: 0004-3702), founded in 1970 and still published by Elsevier. In the legal sciences, a similar journal, “Artificial Intelligence and Law” (ISSN: 0924-8463), appeared more than twenty years later, in 1992, and is still published by Springer [7].

The term “artificial intelligence” clearly refers to the concept of intelligence. Whether we are talking about machines or people, the term can be vague; it is of interest to psychologists, biologists, and neurobiologists. Researchers working on artificial intelligence mainly use the concept of rationality: the ability to choose the best action to take in order to achieve an intended goal, considering the given criteria and the available resources. Rationality is not the only component of intelligence, but it is an important part of it [8]. It is noted that artificial intelligence at today's level has the characteristics of rational thinking, which means not only copying and imitating human behavior but also fully autonomous operation. Therefore, it can be assumed that artificial intelligence is human-centric [9].
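The notion of rationality described above, choosing the best available action given criteria and resources, can be sketched in a few lines. The actions, costs, and value scores below are invented purely for illustration; the point is only the decision rule: filter by resources, then maximize the criterion.

```python
# Minimal sketch of "rationality" as used in AI research: from the
# actions that fit the available resources, choose the one whose
# expected value toward the goal is highest. All actions and scores
# here are hypothetical.

def rational_choice(actions, budget):
    """actions: list of (name, cost, expected_value) tuples.

    Returns the name of the feasible action with the highest
    expected value, or None if no action fits the budget.
    """
    feasible = [a for a in actions if a[1] <= budget]   # resource constraint
    if not feasible:
        return None
    return max(feasible, key=lambda a: a[2])[0]         # criterion: value


actions = [("walk", 0, 0.2), ("bus", 2, 0.6), ("taxi", 15, 0.9)]
print(rational_choice(actions, budget=5))   # "bus": best value we can afford
```

The same skeleton, with richer models of cost and value, underlies utility-based agents in the AI literature.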

The concept of intelligence itself can be understood as the ability to achieve complex goals using knowledge and skills [10]. The following stand out [9]:

-  narrow intelligence – enables the achievement of a specific goal;

-  general intelligence – enables the achievement of any goal;

-  universal intelligence – enables the acquisition of general intelligence given sufficient data and resources;

-  superintelligence – general intelligence exceeding the normal human level.

 

The term “intelligence” has been defined in various ways by famous scientists, as the capacity for [11]:

-  abstract thinking (according to Terman L.M.);

-  learning and adapting to the environment (according to Colvin S.S.);

-  adapting to a new situation (according to Pintner R.);

-  acquiring new skills (according to Woodrow H.);

-  learning and profiting from one's own experience (according to Dearborn W.F.).

 

It is assumed that the term “artificial intelligence” was probably first used by John McCarthy in 1955, in the proposal for the Dartmouth conference [12]. He referred to the design of machines that operate in ways resembling manifestations of human intelligence, and used the term to name the science and engineering of building intelligent machines. In later years, he formalized the definition of artificial intelligence as the science and engineering of creating intelligent machines, especially intelligent computer programs. This is related to the similar task of understanding human intelligence using computers, but artificial intelligence does not have to be limited to biologically observable methods [13].

In the following decades, artificial intelligence developed in fits and starts, with periods of rapid progress interspersed with stagnation [14]. Over the years, forecasts appeared regarding the future of systems using artificial intelligence, including the following [15]:

-  by the end of the 20th century, computers will emulate human intelligence in a way that makes it impossible to distinguish a machine from a human (Alan Turing, 1950 [16]);

-  within 10 years, intelligent machines will be created in Japan (Ministry of International Trade and Industry, “Knowledge Information Processing Systems (KIPS)” program, 1982 [17]);

-  over a period of 20 years, machines will gain emotions, desires, fears, love, and pride (Rodney Brooks, 2002 [18]).

 

It should be noted that nowadays this concept is often used mainly for marketing purposes, as added value to a product or service that influences consumer choices [15]. There is also a visible tendency to abuse this label for products to which it should not apply. Some designed systems may give the impression of being “intelligent”, but in fact they are based on a sequence of programming commands written in advance by the programmer, and neither the history of the system's operation nor newly acquired data influence the way it operates [19].

The most frequently cited definition of artificial intelligence – derived directly from Turing's analyses – defines it as the ability of a machine to imitate or simulate human intelligence [19].

Instead of defining what artificial intelligence is, one can define what its purpose is: activities such as reasoning, associating, or selecting information, all intended to automate human activities and intellectual work [20].

There are two concepts of artificial intelligence:

-  strong artificial intelligence;

-  weak artificial intelligence.

 

Strong artificial intelligence is also called artificial general intelligence or human-level artificial intelligence, while weak artificial intelligence is also referred to as artificial narrow intelligence [24]. One may also come across a distinction between two models: the connectionist model (so-called “bottom-up”) and the classic model (so-called “top-down”) [22]. The difference between them lies in the data that the autonomous system can process.

The first of the above-mentioned types of artificial intelligence is able to perform any task at a level no worse than that achievable by a human [10]. It can also learn to the greatest possible extent, and during the operation of such a system a mental phenomenon called thinking or understanding may arise [23]. It is also characterized by self-awareness and self-knowledge [24]. Furthermore, it is defined as the ability to think “truly”, i.e., thinking that is not simulated and is connected with awareness of one's own existence [25]. From a philosophical perspective, “thinking” itself – similarly to speaking out loud or writing answers on paper with a pen – can be called symbolic reasoning [22].

In turn, the second type of artificial intelligence enables the performance of only precisely specified tasks [9]. It can be compared to a computer exhibiting intelligent behavior [26]. The difference between the two types of artificial intelligence is indicated to be quantitative rather than qualitative, which means that the technology must develop in an evolutionary manner, not in leaps and bounds [27].

According to information presented on the European Parliament website [28], artificial intelligence is the ability of machines to demonstrate human skills in the form of reasoning, learning, planning, and creativity. It gives the technical systems based on it the ability to observe the environment, deal with what they observe, and solve tasks in order to achieve an intended goal. Systems using artificial intelligence can, to a certain extent, adjust their operation by analyzing the effects of previous actions, and can act autonomously.
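The adaptive behavior mentioned in this definition, adjusting operation by analyzing the effects of previous actions, can be illustrated with a toy learning loop. The two actions and their reward values are invented for illustration; the sketch only shows an agent updating its estimates of each action's effect from repeated, noisy observations.

```python
import random

# Toy illustration (hypothetical scenario) of adjusting behavior by
# analyzing the effects of past actions: the agent tries actions,
# observes noisy outcomes, and keeps a running average per action.

def learn_preferences(rewards, trials=1000, seed=0):
    """rewards: dict mapping action -> true mean reward.

    Returns the agent's learned estimate of each action's effect.
    """
    rng = random.Random(seed)
    estimates = {a: 0.0 for a in rewards}
    counts = {a: 0 for a in rewards}
    for _ in range(trials):
        action = rng.choice(list(rewards))            # explore uniformly
        observed = rewards[action] + rng.gauss(0, 0.1)  # noisy outcome
        counts[action] += 1
        # incremental mean update from the newly observed effect
        estimates[action] += (observed - estimates[action]) / counts[action]
    return estimates


est = learn_preferences({"brake": 0.2, "slow_down": 0.8})
print(max(est, key=est.get))  # the action whose observed effects were best
```

Even this minimal loop exhibits the property the article highlights: the system's future behavior depends on data gathered during operation, not only on instructions fixed in advance.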

 

 

3.  CHALLENGES REGARDING THE USE OF ARTIFICIAL INTELLIGENCE IN AUTONOMOUS VEHICLES

 

It has been noted that the risk of using artificial intelligence should not be a factor inhibiting the development of this technology and the innovative research related to it. However, efforts should be made to anchor new technologies in human rights and in accepted moral and ethical values and principles. At the end of 2019, the UNESCO General Conference decided to develop ethical standards for artificial intelligence, and the resulting recommendations were published in 2022 [29]. Their goal was to provide a basis for artificial intelligence systems to work for the good of humans and the natural environment, while preventing harm; they were also intended to stimulate the peaceful use of such systems. In its recommendations, UNESCO presents an artificial intelligence system as a system that processes data and information in a way resembling intelligent behavior. Its operation includes processes such as reasoning, learning, perceiving, predicting, planning, and controlling. It is pointed out that artificial intelligence systems:

-  integrate models and algorithms, providing the opportunity to learn and perform tasks enabling prediction and decision-making in real and virtual environments;

-  enable operation with varying degrees of autonomy through modeling, knowledge representation, and the use of data and correlation calculations;

-  include machine learning (deep learning and reinforcement learning), machine reasoning (planning, scheduling, knowledge representation and inference, search and optimization);

-  can be used in cyber-physical systems (Internet of Things, robotic systems, social robotics, human-computer interfaces), including control, perception, processing of data collected by sensors, and the operation of actuators;

-  should take ethical issues into account throughout their life cycle (research, design, development, implementation, use, maintenance, operation, trade, financing, monitoring, evaluation, validation, end-of-life, dismantling);

-  imply new ethical challenges (impact on decision-making, employment and work, social interactions, health care, education, media, access to information, the digital divide, personal data and consumer protection, the environment, democracy, the rule of law, security and policing, human rights, freedom of expression, privacy);

-  may strengthen existing prejudices, forms of discrimination, and stereotypes;

-  are able to perform tasks previously reserved for living beings, or even for humans alone;

-  play a new role in human practices and society, and in relations with the natural environment;

-  create a new context for children and young people as they grow up, develop the ability to understand the world and themselves, critically evaluate the media and the information conveyed in them, and learn to make decisions;

-  in the long term, may challenge the human sense of experience and agency, raising concerns related to, among others, human self-understanding, social, cultural, and environmental interactions, autonomy, agency, value, and dignity.

 

The development of legal regulations regarding artificial intelligence is undoubtedly a difficult task. Excessive regulation may lead to the suppression of innovation, while underregulation may lead to serious damage to citizens' rights and loss of opportunities to shape the future of European society [30].

It is worth noting that for the first time in history, responsibility for human life may be entrusted to machines operating autonomously without direct human supervision [31]. When it comes to autonomous vehicles, this raises controversy regarding ethical responsibility for the health and life of people participating in road traffic. It is noted that people act morally, are able to draw conclusions and take responsibility. Machines, on the other hand, are unable to understand the concept of morality, which results in the inability to bear responsibility under civil and criminal law [32].

It is indicated that the operation of artificial intelligence will be mathematical and logical if the applicable legal system is clear and consistent [33].

The use of artificial intelligence methods generates legal risk involving unpredictability, which increases as the technology itself develops [34]. As a result, damage may occur, and the users or producers of a given technology may have to bear legal liability [35]. The question arises who should be liable for damage resulting from the operation of systems using artificial intelligence. The following are mentioned in this context [12]:

-  creators of artificial intelligence systems in the person of programmers and producers of such systems, as well as people involved in training these systems;

-  people installing artificial intelligence software on specific devices;

-  producers of devices containing artificial intelligence systems;

-  owners of artificial intelligence devices;

-  entities using systems with artificial intelligence as part of their business activity;

-  users and consumers of artificial intelligence systems that are used for private purposes.

 

It is worth pointing out that, due to the way artificial intelligence systems are structured and operate, proving a specific error of such a system and establishing the cause-and-effect relationship is very difficult or even impossible. This feature is reflected in the frequent description of artificial intelligence systems as “black boxes”.

It is noted that when designing provisions on liability in the case of the use of systems based on artificial intelligence, the interest of the person responsible for the system should be particularly taken into account. This is extremely important in relation to innovative systems and the resulting product liability [36]. In the case of autonomous systems, there is an increase in the importance of product liability regulations [37]. A situation in which only one person is responsible for the entire risk may lead to reluctance to introduce innovations. This in turn will have an adverse impact on the public interest [38]. Currently, there is support for the view that the greater the risk of using a given autonomous system, the stricter the liability mechanism should be used [26]. This method of proceeding should be treated as an attempt to reconcile the protection of fundamental rights with the need to develop innovative technologies [39].

Currently, there are no norms of international law that regulate the issues of artificial intelligence in a comprehensive and exhaustive manner. However, the authors in [26] point out the possibility of directly applying existing legal regulations to assess the operation of artificial intelligence. These include regulations concerning:

-  human rights;

-  consumer protection;

-  personal data;

-  intellectual property;

-  civil liability;

-  competition.

 

At the same time, however, it is postulated to develop a special legal regime based on the risk principle regarding liability for damages [40].

In 2020, during the debate on civil liability for artificial intelligence, a consensus emerged that the European Union's approach should be based on a combination of strict liability and fault-based liability rules [41]. In the first case, a person may be liable for damage despite the lack of fault, whereas in the second case fault is a necessary condition for liability. Strict liability may, for example, result from the mere fact of using a vehicle, or from an activity the person cannot fully control – as in the case of an animal owner.

According to [41], fault-based liability is the general rule in most European legal systems, while strict liability provisions constitute a narrow set of exceptions. The document presents an analysis of selected European national legal systems with respect to their strict liability provisions. The existing differences can have a significant – even “dramatic” – impact on those affected. Road accidents are an example: in such cases, the different levels of protection in national legal systems may be crucial for the injured party and their relatives.

The analysis of the issue of artificial intelligence in private international law allows the following conclusions to be presented [42]:

-  due to the complexity of legal events related to artificial intelligence, it is impossible to define a general statute of artificial intelligence;

-  conflict rules do not serve to legalize or outlaw; they do not decide on the legal consequences of the use of artificial intelligence, and they do not compare or evaluate legal systems or discriminate against foreign legal regulations related to artificial intelligence;

-  conflict rules indicate only the applicable law;

-  the lack of reference to artificial intelligence in the regulations does not imply a real (structural) gap in the law – a specific loophole or technical gap – but at most an apparent (axiological) gap;

-  if there is no reference to artificial intelligence in the regulations, qualifying efforts are made to indicate the concepts defining the conflict rule;

-  the occurrence of a situation involving artificial intelligence does not automatically trigger the law applicable at the seat of the adjudicating authority; it is possible to apply foreign law, as well as the public policy clause – i.e., to exclude the application of the foreign law indicated by the conflict rule when its effect could be contrary to the fundamental principles of the legal order;

-  when applying conflict rules, the selection of connecting factors should provide a compromise between legal certainty and the need to look for a law that most faithfully reflects the analyzed relationship;

-  indication of the personal statute of artificial intelligence does not automatically define its existence and legal personality;

-  in specific cases, it is worth determining the affiliation of artificial intelligence to a given country based on existing ties;

-  the possibility of applying foreign law in matters related to artificial intelligence should not be automatically rejected.

 

It is important to note that responsibility must rest with the human, not with the artificial intelligence itself. It should depend on the level of autonomy of the system using artificial intelligence, the duration of the learning process, or the ability to self-learn. The greater a system's capabilities, the greater the responsibility of the person conducting its training. However, it should be noted that skills a system acquires through the learning process and skills it acquires through self-learning are not the same.

A possible solution is the introduction of compulsory insurance, similar to the civil liability insurance of car owners. However, such insurance cannot take the form of currently used road accident insurance, which assumes human actions and erroneous decisions. Instead, it should take into account all possible liability along the chain. A system of mandatory insurance against potential damage may apply to manufacturers and/or owners of systems using artificial intelligence, and it can additionally be expanded with a special fund enabling compensation for damage in cases not covered by insurance.

In the case of autonomous vehicles, the introduction of a special insurance fund is suggested [43]. It is possible that manufacturers of vehicles and the software implemented in them may participate in this project due to the fact that their errors cause the risk of failure and, consequently, damage. However, the imposition of responsibility for issues related to the operation of autonomous systems, for example for carrying out the necessary update of the implemented software, is an open issue.

Producers, developers, owners, or users of systems using artificial intelligence should also be able to limit their liability when they pay contributions to a compensation fund or take out joint insurance providing compensation in the event of damage. The fund may be general, covering all systems using artificial intelligence, or specific to a category of systems. The obligation to pay the premium may be one-off – for example, when the system is introduced to the market – or periodic, throughout the product's life cycle.
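The choice between the two premium schemes mentioned above is ultimately arithmetic: a one-off contribution at market introduction versus a stream of periodic contributions over the life cycle. The sketch below is purely illustrative; the amounts, the ten-year lifetime, and the discount rate are invented assumptions, not figures from the article.

```python
# Purely illustrative comparison (all numbers hypothetical) of the two
# premium schemes: a one-off contribution when the system goes on sale
# versus periodic contributions over the product's life cycle.

def one_off_total(premium):
    """A single contribution paid at market introduction."""
    return premium

def periodic_total(annual_premium, years, discount_rate=0.03):
    """Present value of paying annual_premium for `years` years."""
    return sum(annual_premium / (1 + discount_rate) ** t
               for t in range(1, years + 1))


print(one_off_total(1000.0))               # 1000.0
print(round(periodic_total(120.0, 10), 2))  # PV of ten annual payments of 120
```

Such a calculation only frames the funding question; who actually bears the contribution (producer, owner, or user) is the legal issue the article leaves open.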

 

 

4. CONCLUSIONS

 

It has been indicated that systems using artificial intelligence begin to act randomly when a situation beyond the scope of their operation occurs, or when no solution exists. The system then automatically begins to act beyond its competence, like a person gone mad. Even small errors in operation can, through repeated occurrence, lead to irrational and unstoppable behavior. Therefore, complex safeguards must be built before such a situation occurs [44].

The civil liability structure applicable to a traditional (non-autonomous) car appears appropriate also in the case of autonomous vehicles. This thesis can be put forward because of the similarity between the two cases: in both, the insurance holder bears civil liability for the events that occurred despite having no control over the vehicle. For example, in the case of a traditional car, the owner may not even have participated in the event causing the damage and will still bear its consequences; this may happen, for example, when the vehicle is driven by a co-owner or by an employee of the vehicle owner. The situation of an autonomous vehicle is analogous: liability does not depend on the conduct of the person in the vehicle involved in the road incident [45].

The manufacturer of the autonomous car is also pointed out as responsible for the resulting damage [30]. However, such an approach only constitutes a change of the responsible entity, without affecting the concept of responsibility itself [45].

When analyzing the operation of artificial intelligence, it should be remembered that it is the result of intellectual work consisting in developing an algorithm and implementing it in a computer program [46]. Moreover, everything about it – its commissioning, invention, development, implementation, use, modification, and so on – is done by humans.

 

 

References

 

1.        Szczepaniak C. 2000. Motoryzacja na przełomie epok. [In English: Automotive at the turn of eras]. Warsaw: PWN Publishing House. ISBN: 978-83-0113-228-6.

2.        Defense Advanced Research Projects Agency. Available at: https://www.darpa.mil/.

3.        Davies A. 2017. „An Oral History of the Darpa Grand Challenge, the Grueling Robot Race That Launched the Self-Driving Car”. Wired. Available at: https://www.wired.com/.

4.        Thrun S. et al. 2006. „Stanley: The robot that won the DARPA Grand Challenge”. Journal of Field Robotics 23(9): 661-692.

5.        Urmson C. et al. 2008. „Autonomous driving in urban environments: Boss and the Urban Challenge”. Journal of Field Robotics 25(8): 425-466.

6.        Barry K. 2021. „Big bets and broken promises: a timeline of Tesla's self-driving aspirations”. CR Consumer Reports. Available at: https://www.consumerreports.org/.

7.        Bączyk-Rozwadowska K. 2021. „Odpowiedzialność cywilna za szkody wyrządzone w związku z zastosowaniem sztucznej inteligencji w medycynie”. Przegląd Prawa Medycznego 3-4: 5-35. [In English: „Civil liability for damage caused in connection with the use of artificial intelligence in medicine”. Medical Law Review].

8.        A definition of AI: main capabilities and disciplines. Definition developed for the purpose of the AI HLEG’s deliverables. Independent High-Level Expert Group on Artificial Intelligence set up by the European Commission. European Union. 2019.

9.        Płocha E.A. 2019. „O pojęciu sztucznej inteligencji i możliwościach jej zastosowania w postępowaniu cywilnym”. Prawo w działaniu. Vol. 40. Sprawy Cywilne. 2019. P. 273-291. Institute of Justice. [In English: „About the concept of artificial intelligence and the possibilities of its application in civil proceedings”. Law in action. Vol. 40. Civil Cases].

10.    Tegmark M. 2019. Życie 3.0. Człowiek w erze sztucznej inteligencji. [In English: Life 3.0. Man in the era of artificial intelligence]. Warsaw: Prószyński Media Publishing House. ISBN: 978-83-8169-070-6.

11.    Pfeifer R., Ch. Scheier. 2001. Understanding intelligence. Massachusetts Institute of Technology (MIT), Cambridge: The MIT Press. ISBN: 978-0262661256.

12.    Wachowska A., M. Kalinowski. 2020. „Odpowiedzialność za działania sztucznej inteligencji – jest projekt założeń unijnej regulacji”. [In English: „Responsibility for the activities of artificial intelligence – a draft of the assumptions of the EU regulation”]. TKP the Law. Available at: https://www.traple.pl/.

13.    McCarthy J. „What is artificial intelligence?”. John McCarthy's website. Available at: http://jmc.stanford.edu/.

14.    Russell S., P. Norvig. 2019. Artificial Intelligence: A modern approach. Financial Times Prentice Hall. Harlow. ISBN: 978-0134610993.

15.    Wołk K. 2018. „Czy współczesna SI to tylko marketingowy bełkot?”. Komputer Świat. [In English: „Is modern AI just marketing gibberish?”. Computer World]. Available at: https://www.komputerswiat.pl/.

16.    Turing A. 1950. „Computing Machinery and Intelligence”. Mind LIX(236): 433-460.

17.    McCorduck P. 1983. „Introduction to the Fifth Generation”. Communications of the ACM 26(9): 629-630.

18.    Brooks R.A. 2002. Flesh and Machines: How Robots Will Change Us. New York: Pantheon Books. ISBN: 9780375420795.

19.    Lai L., M. Świerczyński. 2020. Prawo sztucznej inteligencji. [In English: The law of artificial intelligence]. Warsaw: C.H. Beck Publishing House. ISBN: 978-83-8198-455-3.

20.    Janowski J. 2019. Trendy cywilizacji informacyjnej. Nowy technototalitarny porządek świata. [In English: Trends of information civilization. New techno-totalitarian world order]. Warsaw: Wolters Kluwer. ISBN: 978-83-8160-346-1.

21.    Searle J.R. 1980. „Minds, brains, and programs”. The Behavioral and Brain Sciences 3(3): 417-424.

22.    Jankowska M. 2015. „Podmiotowość prawna sztucznej inteligencji?”. [In English: „Legal subjectivity of artificial intelligence?”]. P. 171-196. In: Bielska-Brodziak A. (ed.). O czym mówią prawnicy, mówiąc o podmiotowości. [In English: What lawyers talk about when they talk about subjectivity]. Katowice: University of Silesia Publishing House. ISBN: 978-83-8012-439-4.

23.    Kisielewicz A. 2017. Sztuczna inteligencja i logika: podsumowanie przedsięwzięcia naukowego. [In English: Artificial Intelligence and Logic: A Summary of the Scientific Endeavor]. Warsaw: PWN. ISBN: 978-83-01-19492-5.

24.    Rojszczak M. 2019. „Prawne aspekty systemów sztucznej inteligencji – zarys problemu”. [In English: „Legal aspects of artificial intelligence systems – outline of the problem”]. P. 1-10. In: Flaga-Gieruszyńska K., J. Gołaczyński, D. Szostek (eds.): Sztuczna inteligencja, blockchain, cyberbezpieczeństwo oraz dane osobowe. Zagadnienia wybrane. [In English: Artificial intelligence, blockchain, cybersecurity and personal data. Selected issues]. Warsaw: C.H. Beck Publishing House. ISBN: 978-83-8158-596-5.

25.    Kaczmarek-Templin B. 2022. „Sztuczna inteligencja (AI) i perspektywy jej wykorzystania w postępowaniu przed sądem cywilnym”. Studia Prawnicze. Rozprawy i Materiały 2(31): 61-78. [In English: „Artificial intelligence (AI) and prospects for its use in civil court proceedings”. Law Studies. Dissertations and Materials].

26.    Świerczyński M., Z. Więckowski. 2021. Sztuczna inteligencja w prawie międzynarodowym. Rekomendacje wybranych rozwiązań. [In English: Artificial intelligence in international law. Recommendations of selected solutions]. Warsaw: Difin Publishing House. ISBN: 978-83-8270-013-8.

27.    Księżak P., S. Wojtczak. 2020. „Prawa Asimova, czyli science fiction jako fundament nowego prawa cywilnego”. Forum Prawnicze 4(60): 57-70. [In English: „Asimov's Laws, or science fiction as the foundation of new civil law”. Law Forum].

28.    „Artificial intelligence: what is it and what are its applications?”. European Parliament. Available at: https://www.europarl.europa.eu/.

29.    Recommendation on the Ethics of Artificial Intelligence. The United Nations Educational, Scientific and Cultural Organization UNESCO. Paris. 2022.

30.    Bertolini A. 2020. Artificial Intelligence and Civil Liability. Legal Affairs. Policy Department for Citizens' Rights and Constitutional Affairs. Directorate-General for Internal Policies. European Union. PE 621.926.

31.    Drozd W., J. Brzezińska. 2018. „Najważniejsze problemy prawne dotyczące pojazdów autonomicznych w perspektywie globalnej i polskiej”. Przegląd Prawniczy Uniwersytetu Warszawskiego XVII(2): 39-57. [In English: „The most important legal problems regarding autonomous vehicles from a global and Polish perspective”. Law Review of the University of Warsaw].

32.    Hildebrandt M., J. Gaakeer. 2013. Human law and computer law: comparative perspectives. Ius Gentium: Comparative Perspectives on Law and Justice. IUSGENT. Vol. 25. Heidelberg, New York, London: Springer. ISBN: 978-9400794085.

33.    Goździaszek Ł. 2015. „Perspektywy wykorzystania sztucznej inteligencji w postępowaniu sądowym”. Przegląd Sądowy 10: 46-60. [In English: „Prospects for the use of artificial intelligence in legal proceedings”. Judicial Review].

34.    „Liability for Artificial Intelligence and other emerging digital technologies”. Report from the Expert Group on Liability and New Technologies – New Technologies Formation. Justice and Consumers. European Union. 2019. ISBN: 978-92-76-12958-5.

35.    Dignum V. 2019. Responsible artificial intelligence. How to Develop and use AI in a responsible way. Springer Cham. ISBN: 978-3-030-30370-9.

36.    Bożek B., M. Jakubiec. 2017. „On the legal responsibility of autonomous machines”. Artificial Intelligence and Law 25: 293-304.

37.    Shevchenko O. 2020. „Connected Automated Driving: Civil Liability Regulation in the European Union”. Teisė 114: 85-102. Vilnius University.

38.    Świerczyński M., Ł. Żarnowiec. 2019. „Prawo właściwe dla odpowiedzialności za szkodę spowodowaną przez wypadki drogowe z udziałem autonomicznych pojazdów”. Zeszyty Prawnicze 19(2): 101-135. [In English: „Law applicable to liability for damage caused by road accidents involving autonomous vehicles”. Law Notebooks].

39.    Mazur J. 2020. „Unia Europejska wobec rozwoju sztucznej inteligencji: proponowane strategie regulacyjne a budowanie jednolitego rynku cyfrowego”. Europejski Przegląd Sądowy 9: 13-18. [In English: „The European Union towards the development of artificial intelligence: proposed regulatory strategies and building a digital single market”. European Judicial Review].

40.    Mendoza-Caminade A. 2016. „Le droit confronté ? l'intelligence artificielle des robots : vers l'émergence de nouveaux concepts juridiques?”. Recueil Dalloz 8: 445.

41.    Evas T. 2020. „Civil liability regime for artificial intelligence. European added value assessment”. Study. EPRS. European Parliamentary Research Service. Brussels. ISBN: 978-92-846-7127-4.

42.    Świerczyński M. 2019. „Sztuczna inteligencja w prawie prywatnym międzynarodowym – wstępne rozważania”. Problemy Prawa Prywatnego Międzynarodowego 25: 27-41. [In English: „Artificial Intelligence in Private International Law – Preliminary Considerations”. Problems of Private International Law].

43.    Urbanik G. 2019. „Odpowiedzialność za szkody wyrządzone przez pojazd autonomiczny w kontekście art. 446 kc”. Studia Prawnicze. Rozprawy i Materiały 2(25): 83-95. [In English: „Liability for damage caused by an autonomous vehicle in the context of Art. 446 of the Civil Code”. Law Studies. Dissertations and Materials].

44.    Kaku M. 2024. Wizje czyli jak nauka zmieni świat w XXI wieku. [In English: Visions or how science will change the world in the 21st century]. Warsaw: Prószyński i S-ka Publishing House. ISBN: 9788383521985.

45.    Robaczyński W. 2022. „Odpowiedzialność za szkody wyrządzone przez pojazdy autonomiczne”. Forum Prawnicze 1(69): 67-84. [In English: „Liability for damage caused by autonomous vehicles”. Law Forum].

46.    Robaczyński W. 2022. „Sztuczna inteligencja – przedmiot badań czy podmiot kontrolowany”. Kontrola i audyt 6: 8-29. [In English: „Artificial intelligence – a subject of research or a controlled entity”. Control and audit].

Received 02.10.2023; accepted in revised form 29.11.2023


Scientific Journal of Silesian University of Technology. Series Transport is licensed under a Creative Commons Attribution 4.0 International License


[1] Faculty of Transport and Aviation Engineering, The Silesian University of Technology, Krasinskiego 8 Street, 40-019 Katowice, Poland. Email: piotr.czech@polsl.pl. ORCID: https://orcid.org/0000-0002-0884-8765