
<b>THE DEVELOPMENT OF ARTIFICIAL INTELLIGENCE AND LAW IN TAIWAN</b>



TSENG Pin-Chieh41


<b>Table of Contents</b>
I. Introduction
II. Legal Issues of Artificial Intelligence
A. Equality
B. Democracy
C. Freedom
D. Personality
E. Responsibility
III. Perception of Unmanned Vehicles Technology Innovative Experimentation Act
A. The Concept of Unmanned Vehicles
B. Characteristics of UVTIEA
IV. Conclusion: From Private Cars to Mobility Services


<b>I. Introduction </b>


When it comes to Industry 4.0, we refer to the concept of factories in which machines are augmented with wireless connectivity and sensors, connected to a system that can visualize the entire production line and make decisions on its own.42 This means that machines will operate independently, or cooperate with humans, in creating a customer-oriented production field that constantly works on maintaining itself. As a result, such a smart machine becomes an independent entity that is able to collect data, analyze it, and advise upon it. In this context, Artificial Intelligence (AI) is the area of computer science that emphasizes the creation of intelligent machines capable of working and reacting like humans. It is believed that AI will fundamentally transform both human life and industry and create boundless business opportunities in the near future.


Capitalizing on this wave, the Taiwan government declared 2017 the first year of the AI era in the country, and from 2018 it rolled out the AI Taiwan Action Plan (2018-2021) to sharpen Taiwan's advantages, prioritize innovation and real-world implementation, and


41 Professor of Law, Department of Financial and Economic Law, National Chung Cheng University, Taiwan; Docteur en Droit, Université de Nantes, France.

42

develop software and hardware in tandem, thereby injecting greater momentum into Taiwan's industries.43 As for AI applications, a public internet-of-things (IoT) information network called Civil IoT Taiwan has been set up to monitor natural disasters and track air quality. In fact, since late 2018, the government has combined AI with IoT technology to create the public IoT information system and has set up some 5,000 sensor stations to observe water, air, land and natural disasters. The user interface allows environmental inspection authorities, for instance, to use AI computing capabilities to track dangerous pollution.44


Among these AI-related efforts, three initiatives are remarkable in the area of law. Firstly, in order to explore the ethical and legal issues raised in the AI age, Taiwan's MOST (Ministry of Science and Technology) encouraged academics and research institutes to proceed with projects for AI scientific research. Secondly, inspired by the idea of the Regulatory Sandbox and for the purpose of liberalizing administrative regulations on innovative technologies, the government announced on December 19, 2018 the "Unmanned Vehicles Technology Innovative Experimentation Act" (無人載具科技創新實驗條例), which combines AI with mobile vehicles and came into force on June 1, 2019. This is the first law of its kind in the world covering autonomous vehicles on land, at sea, and in the air. Thirdly, for the sake of creating a better environment for autonomous vehicle experimentation, Taiwan also launched its first closed-field testing ground (Taiwan CAR Lab) in Tainan's Shalun for testing self-driving cars.45


Following an introduction to the emergence of AI in Taiwan, Part II will explore the legal issues of AI discussed by Taiwanese doctrine, including equality, democracy, freedom, personality and responsibility. Part III will then analyze the perception of unmanned vehicles and the characteristics of the above Act. Part IV will conclude by offering some principles of imputation for the mobility service provided by traders of unmanned vehicles.


<b>II. Legal Issues of Artificial Intelligence </b>


In Taiwanese legal circles, research on AI mainly involves five aspects: equality, democracy, freedom, personality and responsibility. We will therefore discuss the legal issues of AI along these five dimensions.


<i><b>A. Equality </b></i>


AI does not and cannot have any internal point of view, human self-consciousness, thought, creativity, emotion, moral sentiment or spirit, but it can be capable of communicating and interacting with human beings.46 When we enter an Algorithm Society, in which most social functions would be performed by AI machine learning and mega data, this will certainly raise the problem of equality. First of all, who can use this mega data algorithm? Is



43 See more information about AI Taiwan Action Plan, available at

44 Taiwan Air Quality Monitoring Network, available at

45 See more information about creating best environment for AI innovation, available at

46 YEN Chueh-An, All Watched Over by Machines of Loving Grace: AI, Mind and Algorithm Society, in A Preliminary



it not only those who wield political or economic power who can take advantage of it? Or is it possible that, under certain conditions, everyone has a right of access to the mega data? In addition, AI programmers may unintentionally bring their own prejudices into the design of algorithms. Even when we feed machine learning with mega data of the majority population that has been collected and selected, the output may stylize all kinds of unfairness and discrimination that already exist in society, and eventually result in the exclusion of vulnerable minorities from our social community.47 For example, by analyzing, processing and drawing conclusions from data, AI is able to predict crime hotspots or hot zones. However, the police should not conduct searches or stops based solely on crime hotspots, because on the one hand the drawing of crime hotspots may result in bias and discrimination, and on the other hand it may be incorrect if the training data is incomplete.48
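The feedback problem described above can be made concrete with a toy sketch (all figures are hypothetical and only illustrate the mechanism): if incident data are recorded in proportion to patrol intensity, a naive hotspot ranking reproduces past patrol allocation rather than the true offence rate.

```python
# Toy illustration of sampling bias in crime-hotspot prediction.
# All numbers are hypothetical; they only demonstrate the mechanism.

# True underlying offence rate per district: identical by construction.
true_rate = {"north": 0.10, "south": 0.10}

# Patrol intensity: "south" is patrolled four times as often, so four
# times as many offences are *recorded* there despite equal true rates.
patrols = {"north": 100, "south": 400}

recorded = {d: int(true_rate[d] * patrols[d]) for d in patrols}

# A naive "hotspot" model ranks districts by recorded incidents.
hotspot = max(recorded, key=recorded.get)

print(recorded)  # {'north': 10, 'south': 40}
print(hotspot)   # 'south', an artifact of patrol allocation, not of crime
```

Any stop-and-search policy keyed to such a ranking would then concentrate further patrols in "south" and record still more incidents there: exactly the self-reinforcing bias the doctrine warns about.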



<i><b>B. Democracy </b></i>


How to prevent the Algorithm Society from becoming a Black Box Society is a serious challenge in the era of AI, since transparency and accountability are two pillars of democracy. AI and similar automated decision-making systems usually possess characteristics that are not easy to perceive and understand for those affected by them. Even if we could perceive or know them to some extent, we would still have no idea about the methods and processes by which the AI system makes decisions. For example, the characteristic of smart medical care lies in the opacity of the black-box algorithm. How, then, could we implement the principle of informed consent?49 Therefore, it is very difficult to examine and supervise the processes and results of automated decision-making, not to mention review their justification. This is obviously contrary to the fundamental principles of a democratic society.50


According to the spirit of due process of law, when people are affected by an automated decision-making system, they should have the right to an explanation, that is to say, to obtain an understandable explanation from the system or from the person in charge of it. Besides, people affected by an automated decision-making system shall be entitled to offer a response or to request the system to revise its assessment. Furthermore, is it possible that they have the right to exit from the AI decision-making circle when certain conditions are met? In this regard, Taiwanese doctrine answers in the affirmative.51


<i><b>C. Freedom </b></i>


AI machine learning through mega data concerns the right to personal data regulated by the Taiwan Personal Information Protection Act (TPIPA). This involves the circumstances under which everyone has the right to freedom from systematic monitoring, tracking and


47 LIU Ching-Yi, A Preliminary Analysis on the Ethical and Legal Issues of Artificial Intelligence, in A Preliminary Analysis on the Legal Related Issues of Artificial Intelligence, p. 13, 31 and 37 (LIU Ching-Yi, Angle Publishing ed., 2018).

48 LI Rong-Geng, An Introduction to the Application of Artificial Intelligence and Crime Hotspots, in A Preliminary Analysis on the Legal Related Issues of Artificial Intelligence, p. 117-148 (LIU Ching-Yi, Angle Publishing ed., 2018).

49 LIU Ching-Yi, A Preliminary Analysis on the Ethical and Legal Issues of Artificial Intelligence, in A Preliminary Analysis on the Legal Related Issues of Artificial Intelligence, p. 27-28 (LIU Ching-Yi, Angle Publishing ed., 2018).

50 LIU Ching-Yi, A Preliminary Analysis on the Ethical and Legal Issues of Artificial Intelligence, in A Preliminary Analysis on the Legal Related Issues of Artificial Intelligence, p. 14 and 45 (LIU Ching-Yi, Angle Publishing ed., 2018).

51



profiling by AI. Theoretically speaking, unless informed consent is given and expressly authorized by the data subject, we cannot in principle expose an individual to the surveillance of an automated decision-making system that has a significant impact on him. Nevertheless, studies have revealed that the TPIPA, like other current data protection laws around the world, encounters similar difficulties in coming to terms with data collection and processing for AI learning purposes. Two reasons are put forward. On the one hand, the volume and variety of the data needed for AI development make it unfeasible to rely on prior consent as a legal ground. On the other hand, the impossibility of truly anonymizing personal data and the unforeseeability of further uses threaten the interests and rights of data subjects. All these are challenges that AI imposes on current personal data protection laws and that have yet to be overcome.52


<i><b>D. Personality </b></i>


At the level of legislative policy, should we endow AI with a certain legal personality? On this point, Taiwanese doctrine is inclined to the view that it is still too early to establish a legal or electronic personality for AI such as a robot, because the current technological development of AI has not progressed to a stage close to the characteristics of human personality, in particular the display of dignity, emotion, feeling, ethics and other expressions of free will, and still lacks at least some of their important features. That is why AI need not be qualified as an independent quasi-personality or quasi-legal person and should not be required to bear any independent responsibility.53


In court practice relating to AI, it is interesting to cite a passage from a Taiwanese judgment (臺灣高等法院臺南分院 106 年度上易字第 256 號刑事判決), which distinguishes between reputation <i>in facto</i> and the right to reputation. The criminal court said: <i>《A non-natural person such as an AI, a robot, etc., may have a reputation, but it does not have the right to reputation, ... because rights should belong to natural persons who can embody the individual sovereignty and autonomy of human beings. Thus, apart from some exceptions (for instance, a private legal person or group is allowed to have the right of reputation within the scope of its purpose and nature), only those who possess moral integrity and legal personality are eligible to be subjects enjoying the right of reputation.》</i>54 This jurisprudential position has found favor with certain doctrine.55


Incidentally, a very similar reasoning has also been advanced in the field of intellectual property law. When it comes to works under copyright law and inventions under patent law, some doctrine holds that as long as AI actually participates in the expression of the work





52 CHIOU Wen-Tsong, Evolving Issues of Data Protection and the Conundrum of Antidiscrimination in Artificial Intelligence, in A Preliminary Analysis on the Legal Related Issues of Artificial Intelligence, p. 149-175 (LIU Ching-Yi, Angle Publishing ed., 2018).

53 WU Chung-Jau, A Basic Study of the AI's Civil Liability: Focused Reflection on Taiwan's Practical Opinions, in A Preliminary Analysis on the Legal Related Issues of Artificial Intelligence, p. 88, 94-97 (LIU Ching-Yi, Angle Publishing ed., 2018).

54 See 臺灣高等法院臺南分院 106 年度上易字第 256 號刑事判決, available at

55 WU Chung-Jau, A Basic Study of the AI's Civil Liability: Focused Reflection on Taiwan's Practical Opinions, in A Preliminary



or provides the technical contribution of the invention, AI should become the author (or co-author) or inventor (or co-inventor) according to its level of contribution. Otherwise, once a work is created or an invention is researched and developed by AI, such a work or invention can merely enter the public domain and thus loses the opportunity for protection offered by the relevant law. This of course violates the purpose of promoting innovation pursued by copyright law and patent law. However, under the present legal system, the subjects of rights under private law are still principally natural persons. In order for the work or invention of AI to be protected by copyright law or patent law, certain scholars propose that, through the legal transfer of the relevant right, we give the copyright or the right to apply for a patent to the humans who worked with AI in the creation or made technical contributions to the AI.56 In this way, it seems possible to resolve the authorship and inventorship of AI in intellectual property law.


<i><b>E. Responsibility </b></i>


The legal implications of AI for liability law stem mainly from the fact that AI, especially an automated decision-making system, operates on its own rather than through the act of a natural person with free will and discernment, on which the traditional foundations of fault liability rest. In other words, the present Taiwanese legal system is based on the autonomy of human will in contract and on the behavior of a discerning person in tort. Thus, the most important question to resolve is how to evaluate the result or impact of AI machine decision-making and impose liability on AI itself or on its stakeholders. Should the risk of AI and machine decision-making be attributed to the natural person who profits from it? For example, if an autonomous car hits a pedestrian, who would be responsible: the designer of the hardware, the programmer in the office with the source code, the owner in the car on the road, or the car manufacturer?


In Taiwanese law, if a car accident is caused simply by the driver's negligence, the driver is liable for the injury arising therefrom.57 This is exactly the case when the AI system has repeatedly issued an alarm requesting the driver to place his hands on the steering wheel, but the driver did not follow the instructions of the AI system. On the contrary, if the accident is caused only by erroneous sensor detection of the self-driving car, traders engaged in designing, producing or manufacturing the car shall be jointly and severally liable for injuries and damages to the injured persons.58


Furthermore, doctrine has asked whether an automated decision-making product or service is subject to human control. For instance, should medical personnel who use an automated decision-making machine to conduct medical practice assume, in case of AI misdiagnosis, a


56 SHEN Chung-Lun, From Conflict to Symbiosis between Artificial Intelligence and Intellectual Property Law: In Terms of Inventorship and Authorship, in A Preliminary Analysis on the Legal Related Issues of Artificial Intelligence, p. 177-214 (LIU Ching-Yi, Angle Publishing ed., 2018).

57 Article 191-2 of the Taiwanese Civil Code: 'If an automobile, motorcycle or other motor vehicles which need not to be driven on tracks in use has caused the injury to another, the driver shall be liable for the injury arising therefrom, unless he has exercised reasonable care to prevent the injury.'

58 According to Article 7.3 of the Taiwanese Consumer Protection Act: 'Traders shall be jointly and severally liable in violating



relatively low professional responsibility? Or should such medical personnel meet a stricter standard of medical care, higher than the ordinary professional duty of care?59 In Taiwanese law, if this kind of product or service is still under human control, the person in control is bound to compensate for the harm caused by his negligence. However, if the AI product or service provided is already beyond human control and is security-deficient, the trader who, acting for purposes relating to his business, designs, produces, manufactures, imports or distributes AI goods, or provides AI services, shall bear no-fault liability.60 The reason why business operators of AI should bear no-fault liability is that, relative to consumers or ordinary people, they can effectively disperse or pass on the risk of AI accidents at a lower cost, for example through market pricing mechanisms or via insurance.


<b>III. Perception of Unmanned Vehicles Technology Innovative Experimentation Act</b>
Inspired by the idea of the Regulatory Sandbox, the Unmanned Vehicles Technology Innovative Experimentation Act (UVTIEA) (無人載具科技創新實驗條例) was announced in Taiwan on December 19, 2018,61 to encourage the research, development and application of unmanned vehicle technology, and to create a sound and safe environment for innovative experimentation, so as to advance the development of industry technology and innovative services. This is the first law in the world covering driverless vehicles on land, at sea and in the air. Hence, it is worth giving a panorama of the law. This part will begin by examining the concept of unmanned vehicles and end by delineating the characteristics of UVTIEA.


<i><b>A. The Concept of Unmanned Vehicles </b></i>


In the first paragraph of Article 3 of UVTIEA, 'unmanned vehicles' refers to <i>"a driverless transport vehicle that may be an automobile, aircraft, ship or any combination of these items, which is operated through remote control or autonomous operation, and is equipped with the sensing,62 positioning,63 monitoring,64 and decision-making and control65 technologies"</i>. Based on the literal meaning of this definition, three conditions are necessary for delineating unmanned vehicles: (1) it must be a driverless transport vehicle, whether it flies in the sky, or


59 LIU Ching-Yi, A Preliminary Analysis on the Ethical and Legal Issues of Artificial Intelligence, in A Preliminary Analysis on the Legal Related Issues of Artificial Intelligence, p. 26 (LIU Ching-Yi, Angle Publishing ed., 2018).

60 According to Article 7.1 of the Taiwanese Consumer Protection Act: 'Traders engaging in designing, producing or manufacturing of goods or in the provisions of services, shall ensure that goods or services provided meet and comply with the contemporary technical and professional standards with reasonably expected safety requirements when placing the goods into the stream of commerce, or at the time rendering services.'

61 The English version of the Unmanned Vehicles Technology Innovative Experimentation Act is available at ; see Article 1 of UVTIEA.

62 According to Article 3.1.1 of UVTIEA, Sensing Technology is defined as a technology that can detect and identify information about the surrounding environment or events occurring during driving.

63 According to Article 3.1.2 of UVTIEA, Positioning Technology means using navigation modules or ICT applications for position aid, geographical location transmission, and assisting in planning routes, missions, etc.

64 According to Article 3.1.3 of UVTIEA, Monitoring Technology means that, by using an automatic system, the monitoring operator maintains a continuous two-way communication link with the unmanned vehicle to control the overall operating process, and obtains full control of the unmanned vehicle at any time.

65 According to Article 3.1.4 of UVTIEA, Decision-making and Control Technology means, by integrating the information provided


runs on the ground, or sails at sea, or any combination of these. (2) The driverless vehicle must be operated by remote control or autonomous operation, which is the inevitable technological requirement of driverless vehicles. (3) The driverless vehicle must allow monitoring operators to retrieve full control of it at any time, for reasons of security.

Once these three elements are joined together, an unmanned vehicle is qualified. Then we can develop unmanned vehicle technology and launch innovative experimentation, which signifies the experimentation of unmanned vehicle technology, services and business operation for the purpose of innovative applications.66 That is why the remainder of this part will address the characteristics of UVTIEA.
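The three cumulative elements can be restated as a simple predicate. This is only a paraphrase of the statutory definition; the field names below are our own shorthand, not terms of the Act.

```python
from dataclasses import dataclass

@dataclass
class Vehicle:
    # Hypothetical field names paraphrasing Article 3.1 of UVTIEA.
    driverless: bool                 # (1) driverless transport vehicle (land, sea or air)
    remote_or_autonomous: bool       # (2) operated by remote control or autonomously
    operator_can_take_control: bool  # (3) monitoring operator can retrieve full control

def is_unmanned_vehicle(v: Vehicle) -> bool:
    """All three statutory elements must be joined together."""
    return v.driverless and v.remote_or_autonomous and v.operator_can_take_control

shuttle = Vehicle(True, True, True)
drone_without_override = Vehicle(True, True, False)
print(is_unmanned_vehicle(shuttle))                 # True
print(is_unmanned_vehicle(drone_without_override))  # False: fails element (3)
```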


<i><b>B. Characteristics of UVTIEA </b></i>


Looking at the full text of the Unmanned Vehicles Technology Innovative Experimentation Act, four major legal characteristics are observed: prior application with a review completed within 60 days, the applicant's obligation to act as a good administrator during the experiments, exemption from applicable administrative regulations during the experiments, and the applicant's obligation to submit an innovative experimentation report within 30 days after the expiration of the period. That is to say, <i>before the experiments</i>, the applicant shall submit the application form and the relevant plan to the competent authority, the Ministry of Economic Affairs of Taiwan, ROC, which sets up a single service window. Then, <i>during the experiments</i>, the applicant shall, on the one hand, manage the field involved in the experiments with the care of a good administrator and, on the other hand, be liberated from the restrictions of administrative regulations. Subsequently, <i>after the experiments</i>, the applicant shall submit an innovative experimentation report to the Ministry of Economic Affairs within 30 days after the expiration of the experiments. We will now go further into these four aspects.


<i><b>1. Prior application and review completed within 60 days </b></i>


Before the experiments, according to Article 5.1 of UVTIEA, the applicant shall submit the application form, the applicant's information, and the innovative experimentation plan, including risk management mechanisms and plans for insurance coverage,67 to the competent




66 See Article 3.3 of UVTIEA.

67 Article 5.2 of UVTIEA: The project plan for the innovative experimentation, as mentioned in the preceding paragraph, shall include the following items:
1. A description of its innovativeness;
2. An analysis of the applicability, concerning the exemption from traffic and other related laws and regulations involved, of the innovative experimentation;
3. A description that illustrates the scope, duration, scale of the innovative experimentation, and a completed simulation analysis or a closed field experiment;
4. A description of the person in charge of managing and executing the innovative experimentation;
5. The expected benefits of the innovative experimentation and the benchmarks to be used for measuring the achievement of such benefits;
6. Documents concerning agreements of the government authorities or the owners of the site(s) to cooperate with the implementation of the experimentation;
7. The contracts with the experiment participants;
8. A usage plan, if the experiment involves the use of radio frequency; a certificate of usage approval by the competent authority in charge of the end enterprises concerned shall also be submitted, if the radio frequency to be used is outside the scope of Article 13 Paragraph 1 of this Act;
9. An exit strategy after the applicant submitted an application to stop the innovative experimentation in writing, the competent authority revokes its approval, or the permitted duration for the innovative experimentation expires;

authority for approval to undertake innovative experimentation. Correspondingly, under Article 8.1 of the Act, the competent authority shall, through review meetings, complete the review68 within 60 days after accepting the application for the innovative experimentation,69 make a decision to approve or reject the application, and notify the applicant in writing of the decision. If the decision is a rejection, the reasons for it shall be included.
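The 60-day review clock, together with the restart rule for supplemental documents (Article 8.2, footnote 69), can be sketched as a date calculation. Treating the period as calendar days is our assumption; the Act's rules on computing periods are not detailed here.

```python
from datetime import date, timedelta
from typing import Optional

def review_deadline(accepted: date, supplemented: Optional[date] = None) -> date:
    """Deadline for the authority's decision under Article 8.1 of UVTIEA.

    Assumes 60 calendar days. Per Article 8.2, if supplemental documents
    were requested, the period restarts the day after they are completed.
    """
    start = supplemented + timedelta(days=1) if supplemented else accepted
    return start + timedelta(days=60)

print(review_deadline(date(2019, 7, 1)))                     # 2019-08-30
print(review_deadline(date(2019, 7, 1), date(2019, 7, 20)))  # 2019-09-19
```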


When the competent authority approves an innovative experimentation, in order to offer a one-stop service, the approval concerns at least six matters: the exemption from administrative regulations, the issuance of a license, duration, change, disclosure, and fees. We will relate them successively as follows.


(1) Based on Article 8.3 of the Act, the competent authority shall state the exemptions from applicable laws, regulations, orders, or administrative rules within the scope and during the period of the innovative experimentation, and may adopt relevant measures.70 We will explain this point in detail later.


(2) In the light of Article 8.4 of the Act, the competent transportation authority shall, in accordance with the approved decision, undertake the operational process relating to the issuance of the license.


(3) According to Article 9.1 of the Act, the duration of an approved innovative experimentation is in principle limited to one year. Nevertheless, the applicant may, 60 days before the expiration of the innovative experimentation period, submit an application to the competent authority for an extension. In this regard, extensions are generally limited to one time only, and such an extension shall not be longer than one year. However, under certain




11. The descriptions concerning the setting up of the data recorders of unmanned vehicles and the provision of recorded data;
12. The documents, or descriptions, concerning safety compliance of the unmanned vehicles or its associated devices;
13. The description concerning ensuring the continuous communications link between the unmanned vehicle and the monitoring operators, and obtaining control or other response measures via two-way communications, in the event of an expected or unexpected failure or hazard;
14. The potential risks, risk management mechanisms, and risk reduction measures during the innovative experimentation;
15. An analysis of impacts on traffic and measures for mitigating these impacts;
16. The protection measures for experiment participants and experiment stakeholders;
17. Plans for insurance coverage;
18. The information system and safety control measures, as adopted for the innovative experimentation;
19. Documents, as required, for obtaining relevant licenses from the transportation competent authorities pursuant to Article 8 Paragraph 4;
20. A description of the project, if it involves a business operation;
21. Other matters, as specified, by the competent authority.


68 Article 7 of UVTIEA: The competent authority shall review the following items in the application for innovative experimentation:
1. That it has innovativeness;
2. Confirm that, within its scope of experimentation, it is not possible to obtain the permission or approval of the competent authority in charge of the end enterprises concerned in accordance with current laws and regulations, and that, in order to proceed with the innovative experimentation, certain applicable laws, regulations, orders or administrative rules from which it should be exempt;
3. That it has the feasibility of being a public open field experiment, and includes data of relevant experience and analysis from the simulation or closed field testing;
4. That it can effectively improve the efficiency of transportation services or systems, and improve safety, or reduce operating and usage costs;
5. That it includes measures to maintain smooth traffic flow and ensure traffic safety;
6. That its potential risks have been assessed, and that relevant response measures and other safety or risk control measures, relating to the innovative experimentation program, have been established;
7. That it has established protective measures for experiment participants and experiment stakeholders, and has advanced preparations in place for appropriate compensation, as required;
8. Any other matters, pursuant to the decision of the review meeting, that should be explained by the applicant.

69 Article 8.2 of UVTIEA: If the applicant is notified by the competent authority to submit supplemental documents, the review period, as referred to in the preceding paragraph, shall start the day after the completed documentation is submitted.

70 According to Article 8.3 of UVTIEA, the competent authority may adopt the following measures:



requirements, the number of extensions may exceptionally be increased, and the entire innovative experimentation period may be extended to a maximum of four years.71
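The duration and extension rules of Article 9 can be summarized in a small checker; a "year" is simplified to 365 days, and the exceptional conditions themselves are not modelled, only the absolute four-year ceiling.

```python
ONE_YEAR_DAYS = 365        # simplification: one "year" = 365 days
MAX_TOTAL_DAYS = 4 * 365   # absolute ceiling under Article 9

def may_extend(total_days_so_far: int, extensions_used: int,
               requested_days: int, exceptional: bool = False) -> bool:
    """Check an extension request against the Article 9 limits.

    Ordinary case: at most one extension, itself no longer than one year.
    Exceptional case: more extensions possible, never beyond four years total.
    """
    if total_days_so_far + requested_days > MAX_TOTAL_DAYS:
        return False  # hard ceiling in every case
    if exceptional:
        return True   # the conditions for the exception are not modelled here
    return extensions_used == 0 and requested_days <= ONE_YEAR_DAYS

print(may_extend(365, 0, 365))          # True: the ordinary one-year extension
print(may_extend(730, 1, 365))          # False: a second extension needs the exception
print(may_extend(730, 1, 365, True))    # True: exceptional, still within four years
print(may_extend(1095, 1, 400, True))   # False: would exceed the four-year ceiling
```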


(4) On the basis of Article 10.1 of the Act, no change may be made to an approved innovative experimentation plan. However, if the desired changes do not involve critical elements of the experiment and impose no significant impact on the rights and interests of experiment participants, the applicant may apply to the competent authority and make such changes after obtaining review and approval.


(5) Under Article 11 of the Act, the competent authority shall disclose, on the website of the agency, the name of the applicant, the content of the innovative experimentation, its duration and scope, the exemptions from applicable laws, regulations, orders or administrative rules, and other relevant information.


(6) Founded on Article 12 of the Act, applications, reviews, approvals and on-site visits of innovative experimentations, submitted in accordance with the provisions of this Act, may be exempt from fees.


<i><b>2. Applicant obligated to be a good administrator during the experiments </b></i>


During the period of innovative experimentation, the applicant shall, according to Article 18.3 of the Act, heed the duties of a good manager. In order to fulfill this duty of care regarding the management and safety of the field involved in the experiments, he must pay attention to radio frequency usage, monthly reports, the publication of information regarding the experiments, the information security measures adopted, and compliance with the provisions of both the Personal Information Protection Act and the Consumer Protection Act. We will explain these in sequence.



<i>(1) Founded on Article 13.2 of the Act, the applicant may start using the assigned radio frequency after obtaining the approval for innovative experimentation. </i>


<i>(2) On the basis of Article 14 of the Act, the applicants shall comply with the provisions of this Act, implement all actions required by the competent authority, and provide the status of innovative experimentations pursuant to the instructions of the competent authority. In addition, the competent authority may conduct on-site visits as needed, </i>and the applicants shall not evade, obstruct, or refuse them. Moreover, during the period of the innovative experimentation, <i>the applicant shall report the number of times and the reasons for human intervention in the control of the unmanned vehicle on a monthly basis, </i>as a reference for the competent authority to use in evaluating the safety of the innovative experimentation. <i>Furthermore, applicants should collect and retain all data recorded during the period of the innovative experimentation, and such data should be retained for at least three years after </i>the expiration of the innovative experimentation period.


<i>(3) In the light of Article 15, the applicant shall, prior to the start of the innovative experimentation testing, publish information relating to the experiments via the media or on websites, and shall post relevant information by appropriate means on the unmanned vehicles or in the area of the experiments. </i>If a safety incident occurs during the experiments, the applicant shall, in addition to assuming compensation responsibility in accordance with relevant laws and regulations, <i>immediately suspend the experiment and inform the competent authority and the transportation competent authority of the occurrence of the incident, as well as of the subsequent incident handling measures. </i>After the occurrence of an incident, the competent authority and the transportation competent authority shall assess the situation to ensure that no safety risks remain, before giving consent to resume the experiments.


(4) Based on Article 16 of the Act, the applicant shall, in accordance with the nature of
the innovative experimentation, <i>adopt appropriate and sufficient information security measures during the period of innovative experimentation, to ensure the security of </i>information collection, processing, utilization and transmission.


71



(5) Under Article 17 of the Act, for the purpose of privacy protection, the applicant
shall, when collecting, processing, or utilizing personal data, comply with the provisions of
the Personal Information Protection Act. In the same way, Article 18.1 of the Act states that
the contracts the applicant establishes with experiment participants, concerning their
participation in the experiments, should be based on the principles of fairness,
reasonableness, equality, reciprocity, and good faith. Besides, pursuant to Article 18.2 of the
Act, clauses in the contract that are clearly unfair are invalid, and if there is any doubt about
the clauses of the contract, the interpretation shall be made in favor of the experiment
participants. Evidently, this kind of specification is inspired by the Consumer Protection
Act.72


<i><b>3. Exemption of applicable administrative regulations during the experiments </b></i>



According to the original notion of the Regulatory Sandbox, as long as startups
proposing new ideas apply for sandbox supervision, they can test their own creative business
models and be exempted from national regulatory norms within a certain scope. Enlightened
by this idea, Article 22.1 of UVTIEA provides that: <i>'During the period of innovative experimentation, if the applicant implements the experiments within the scope approved by the competent authority, the innovative experimentation activities are not subject to the applicable laws, regulations, orders or administrative rules that were exempted in the approved decision. However, the provisions of the Money Laundering Control Act, Counter-Terrorism Financing Act and related regulations, orders, or administrative rules, will still apply'. </i>In other words, except for the Money Laundering Control Act, the
Counter-Terrorism Financing Act and related rules, administrative regulations are in principle
excluded from the laws applicable to the approved innovative experimentation activities. Yet,
two questions arise here: Which administrative laws are actually excluded from the innovative
experiment? And are the laws relating to civil and criminal liability also excluded?


For the first question, conforming to Article 22.2 of UVTIEA, the administrative
laws and regulations that should be exempted for the development of unmanned vehicle
technology are the relevant provisions of, for instance, the Road Traffic Management and
Penalty Act, the Highway Act, the Civil Aviation Act, the Law of Ships, the Seafarer Act, and
the Telecommunications Act. These exemptions are mainly meant to solve the problem of how
to make the unmanned vehicle legally move on the road, in the air and at sea, for the
equipment of the unmanned vehicle differs from that of the traditional vehicle (car,
aircraft, ship etc.): there is no driver in the former while there is always a driver in the
latter. That is why the legislator waives some unnecessary restrictions and exempts the
relevant administrative penalties, such as the prohibition on communicating while driving
on the road, the rule that the driver must not drive for more than eight hours, certain
requirements for traditional vehicle equipment and driving behavior, and the driver's
license and practice registration requirements etc.73





72


<i> Paragraph 1 of Article 11 of the Taiwan Consumer Protection Act provides that the terms and conditions adopted in standard contracts shall conform to the principles of equality and reciprocity. Besides, Paragraph 2 of Article 11 of TCPA states that where there is any ambiguity in the wording of the standard terms and conditions, interpretations shall be made in favor of the consumer. </i>


73



As for the second question, whether the innovative experimentation activities are
exempt from civil and criminal responsibility: according to Article 22.2.7 of UVTIEA, the
exemption does not include laws and regulations concerning civil and criminal liability. This
signifies that the user of the unmanned vehicle or its business operator may still be liable for
civil damages and bear criminal liability. Here, let us focus only on the civil liability of
autonomous vehicles. In Taiwan, recent research in contract law revealed that, on the one
hand, selling traders of autonomous cars are bound to ensure that their cars are safe, and they
should in particular maintain and update their self-driving systems after the sale. On the other
hand, when the self-driving capacity of AI reaches the level of a prudent and professional
driver, doctrine proposes that we may perhaps regard the driverless car as a person
performing the obligation on behalf of the user of the AI system.74


In addition, some authors analyze the tort liability of autonomous cars in various
situations based on the SAE levels of driving automation. These studies noted that, in the case
of an autonomous car accident caused by the driver's negligence or fault, fault liability or
presumed-fault liability would apply to Level 3 (at which the driver must take over driving
when the feature requests it) and below (at which the driver must constantly supervise the
driving support features and steer, brake, or accelerate as needed to maintain safety). On the
contrary, when it comes to Level 4 and above (at which the automated driving features do not
require the driver to take over), an accident of a driverless car caused by its safety deficiency
or its non-compliance with the contemporary technical and professional standards should be
subject to the non-fault product liability regulated by Article 7 of the Taiwanese Consumer
Protection Act, according to which traders engaging in designing, producing or manufacturing
of safety-deficient goods shall be jointly and severally liable for damages to consumers or
third parties.75


<i><b>4. Applicant under an obligation to submit an innovative experimentation report </b></i>
<i><b>within 30 days after the expiration of the period </b></i>


First of all, on the basis of Article 19 of UVTIEA, the applicant may begin the
innovative experimentation on the day after receiving the experiment approval, and shall
inform the competent authority in writing of the planned testing start date prior to
implementing experimental testing. Then, when implementing the innovative experimentation,
if any unlawful or harmful circumstance arises, the competent authority may order the
applicant to make improvements within a prescribed period of time.76 Since the nature of the
innovative experimentation is simply a short-term, expedient experiment for a specific
purpose, it must terminate at the end of the period. It is very important to note that, pursuant
<i>to Article 21.1 of the Act, the applicant shall submit an innovative experimentation report to </i>


Chung Cheng University, June 2019, p. 5


74


CHANG Chan, The Civil Liability of Artificial Intelligence System Users- Focusing on Autonomous Vehicles, Master
Thesis of Institute of Financial and Economic Law, National Chung Cheng University, June 2019, p. 40-57



75


TSUNG Lin, Torts Liability of Autonomous Vehicles, Master Thesis of Institute of Financial and Economic Law, National
Chung Cheng University, June 2019, p. 34-55; CHANG Chan, The Civil Liability of Artificial Intelligence System
Users- Focusing on Autonomous Vehicles, Master Thesis of Institute of Financial and Economic Law, National Chung
Cheng University, June 2019, p. 83-97


76



<i>the competent authority within 30 days after the expiration of the approved period for the </i>innovative experimentation. This is a legal obligation imposed on the applicant after the
experiments. This reporting obligation can be seen as the consideration for the preferential
legal treatment the applicant enjoyed during the experiments. But what should the report include?


According to Article 21.2 of the Act, the report shall include the following items:
1. the course of the innovative experimentation and its outcomes; 2. risk occurrences
and traffic incident report records; 3. records of the frequency and circumstances requiring
human intervention to control the unmanned vehicles; 4. other matters specified by the
competent authority. Certainly, this report will permit the competent authority to assess the
results of the applicant's innovative experiment. That is exactly why Article 21.3 of the Act
specifies that the competent authority may convene an evaluation meeting concerning the
outcomes of an innovative experimentation.


<b>IV. Conclusion: From Private Cars to Mobility Services </b>


Unmanned vehicle technology, which combines AI with mobile vehicles, has become
the focus of AI commercialization around the world. This is exactly the economic background
against which Taiwan promulgated the Unmanned Vehicles Technology Innovative
Experimentation Act. In the automotive industry, it is true that the current business model is
based on the manufacturing and selling of private cars; as a result, consumers today buy
private cars from business operators. However, with the gradual maturity of self-driving
technology, consumers may in the future purchase transportation services from business
operators of driverless vehicles. Imagine that tomorrow a business operator replaces human
driving with a self-driving system and uses different driverless cars to operate mass
transportation services, taxi transportation services, cargo transportation services, etc. That is
probably the coming business model of the AI age.


Facing this transformation of the business model, how should we apply the current civil
liability system? In our opinion, if an autonomous car is still under the driver's control, such
as at SAE Level 3 and below, the driver is obligated to compensate for the harm caused by his
negligence. In this regard, it is logical to apply fault liability or presumed-fault liability in
torts to the driver's negligent behavior. Nevertheless, if an autonomous car is already beyond
the driver's control, such as at SAE Level 4 and above, and the car is safety-deficient, the
business operator engaging in the designing, producing or manufacturing of the car shall
assume non-fault product liability. In fact, if non-fault product liability is applicable to
safety-deficient non-smart goods, it should apply all the more to smart goods! But what
responsibility should we impose on the business operator of driverless vehicles, which
provides various kinds of transportation services to consumers?



traders who provide services shall be jointly and severally liable for damage to consumers or
third parties.77 This is the so-called non-fault service liability, established in Taiwan since
1994. Facing the mobility services that will probably be provided by traders of unmanned
vehicles in the AI era, Taiwanese doctrine believes that this non-fault service liability is
already in place and should be applicable to smart service providers.78 The justification for
imposing non-fault liability on traders of AI services is that, compared to consumers, they can
efficiently disperse or transfer the risk of AI accidents through market pricing or insurance at
a lower cost.





77


<i> Paragraph 1 of Article 7 of the Taiwanese Consumer Protection Act: Traders engaging in designing, producing or manufacturing of goods or in the provisions of services, shall ensure that goods or services provided meet and comply with the contemporary technical and professional standards with reasonably expected safety requirements when placing the goods into the stream of commerce, or at the time rendering services. </i>

<i> Paragraph 2 of Article 7 of the Act: All safety warnings and emergency response manuals shall be marked or labeled conspicuously on the goods or services provided which may cause harm to the lives, bodies, health or properties of consumers. </i>

<i> Paragraph 3 of Article 7 of the Act: Traders shall be jointly and severally liable in violating the foregoing paragraphs and thereby causing injury or damage to consumers or third parties, provided that if traders can prove that they have not been negligent, the court may reduce damages. </i>


78

