Artificial Intelligence (AI) Development and Ethics: Seeking Human-Centric AI at a French-German-Japanese Symposium

© DWIH Tokyo/iStock.com/onurdongel

March 19, 2021

[by Toru Kumagai]

Artificial intelligence (AI) is one of the most keenly watched areas of cutting-edge technology in academia, industry, and government worldwide. Basic forms of AI have already been applied in many different fields, from the financial sector to social media and even online stores. AI is expected to develop further in the future, bringing significant benefits to humanity in a wide range of areas, including manufacturing and medicine. However, there is a danger that if AI becomes excessively autonomous, the pursuit of economic benefits and efficiency could come at the expense of ethics and human rights. In other words, humans need to keep a constant, firm rein on machines.

From November 16 to 20, 2020, a large online symposium was held in which experts from Germany, France, and Japan discussed the strengths and weaknesses of AI, as well as the rules that humans should apply to it.

Entitled “Human-centric Artificial Intelligence: 2nd French-German-Japanese Symposium,” this conference was organized by the German Centre for Research and Innovation Tokyo (DWIH Tokyo) and the French Embassy in Japan in cooperation with the AI Japan R&D Network. The event was originally planned to take place at the National Museum of Emerging Science and Innovation (Miraikan) in Odaiba, Tokyo, but was moved to an online format to reduce the risk of COVID-19 infection. Some 100 speakers from government, academia, and finance in France, Germany, and Japan took part, and over 1,100 people attended the conference.

The first French-German-Japanese Symposium was held in October 2018 in Tokyo, laying out the initial steps for cooperation between Japan, Germany, and France on AI. Building on this, the German Research Foundation (DFG), the French National Research Agency (ANR), and the Japan Science and Technology Agency (JST) issued calls for applications for trilateral joint projects on AI.

At this second symposium, government officials and other speakers from France, Germany, and Japan discussed each country’s AI strategy, while lectures and panel discussions covered topics such as “AI & COVID-19,” “Trustworthy AI,” and “Geopolitics of AI.” There was also a section in which startups were able to showcase AI applications. The symposium can be viewed on YouTube at the following link:

Human-centric Artificial Intelligence: 2nd French-German-Japanese Symposium (Day 1 – Day 5) – YouTube

A joint statement published by the participants in the symposium summarized the event as follows: “The central message of the 2018 conference was the need for a ‘human-centric AI’ approach. At this conference, France, Germany, and Japan reaffirmed the shared value that collaboration between humans and AI is of paramount importance, and we discussed the role of AI in areas such as health, agriculture, risk prevention, education, and democracy. We also agreed that humanity is facing global challenges such as the COVID-19 pandemic, climate change, and communal fragmentation.”

The three countries have announced plans to hold a third symposium in 2022 to further discuss the numerous issues that humanity will face in the “Anthropocene”.

The Anthropocene is a proposed geological epoch representing a period of significant human impact on the Earth’s ecosystems and geology, a concept put forward by the Dutch chemist Paul J. Crutzen and others in 2000. Scholars are divided on when this epoch began, but there is broad agreement that human activity has had an unprecedented impact on the Earth’s geology and climate since the onset of industrialization. Climate change due to global warming is just one example of the effects of human activity.

In the joint statement, the participants of the second symposium said, “We will address the issues we face in the Anthropocene, the era in which humans are having a profound impact on the planet and its ecosystems, from a broader perspective that includes not only humans and AI technologies, but also the environment we live in.”

The participants added, “The three countries of France, Germany and Japan, which share the same values and social challenges, will be at the core of these efforts and will invite representatives from other regions and countries to exchange ideas on how AI can help to solve these challenges, not for a single nation or company alone, but for the benefit of all of humanity and our planet itself.” This statement signals the intention to expand the symposium further into an international gathering.

The German government announced an AI strategy in November 2018, committing 3 billion euros (about 378 billion yen) to research and development and the training of AI experts by 2025; this figure was increased to 5 billion euros in December 2020. The strategy emphasizes reinforcing international cooperation, achieving human-centric applications of AI, and prioritizing public welfare in the use of AI.

The countries that have made the greatest progress in AI research and commercialization are the United States and China. The US emphasizes small government, relaxed regulation, non-interference, and market principles, so business considerations are paramount in the development and commercialization of AI. IT giants such as Amazon, Google, Apple, Microsoft, and Facebook are at the global cutting edge of AI research and commercialization, far ahead of European and Japanese efforts. The enormous financial power of these companies will certainly bring about rapid progress in AI commercialization.

At the same time, the emphasis on economic benefits and efficiency in the US, where government regulation of companies is weak, renders ethical considerations and human rights issues a secondary concern. This trend is reflected in the fact that personal information is protected far more weakly in the US than in Europe.

In China, the interests of the state and government take top priority, and so the protection of personal information, ethics, and human rights are given less importance. AI is being used widely for surveilling citizens, for example through facial recognition.

It is a distinguishing characteristic of AI that, if it is used incorrectly, there is a risk that humans might become subservient to machines. Many AI systems are black boxes that can only be fully understood by IT experts, which makes it difficult for ordinary people to challenge decisions made by AI.

AI is not only a technical, scientific, and economic topic, but also an ethical one.

Take the following example: An autonomous car uses AI to control its driving. A 70-year-old person and a 10-year-old child suddenly jump out in front of the car. The AI has the following three options:
(1) Steer the car to avoid the 70-year-old person and hit the 10-year-old child.
(2) Steer the car to avoid the 10-year-old child and hit the 70-year-old person.
(3) Steer the car to avoid both pedestrians, colliding with a wall and injuring the driver.

What choice should the AI make in this situation? An expert committee has been established in Germany to discuss the ethics of AI and autonomous driving technology. At present, the leading view among experts is that humans must not input algorithms into machines that allow them to make life-and-death decisions about humans. In other words, it would be unethical and unacceptable, for example, to input an algorithm that would decide to save the 10-year-old at the expense of the 70-year-old because the 10-year-old has longer to live. This view is grounded in the idea that neither machines nor humans should “play God.” It is necessary to consider ethical issues in any discussion about AI.

Hence the debate about AI needs to be joined by ethicists, philosophers, jurists and political scientists in addition to IT experts.

Also, who is liable to provide compensation if a person suffers physical or economic harm as a result of an input error in the AI algorithm? Going forward, AI will be used at medical institutions for diagnosing illnesses and other purposes, and there is a non-zero chance that it will make an erroneous diagnosis.

France, Germany, and Japan should take a third path, different from those of the US and China, emphasizing ethics, human rights, and privacy in the use of AI. These three countries should distinguish themselves from the business focus seen in the US on the one hand and the prioritization of state interests found in China on the other. In light of this, the participants in the French-German-Japanese symposium deserve recognition for placing significant emphasis on human-centric AI, sending an important message from Japan and Europe to the rest of the world.

Click here to read other articles from the series “Toru Kumagai’s report on R&D trends in Germany”.

About Toru Kumagai

Born in Tokyo in 1959, Kumagai graduated from the Department of Political Science and Economics at Waseda University in 1982 and joined Japan Broadcasting Corporation (NHK), where he gained a wealth of experience in domestic reporting and overseas assignments. He left NHK in 1990 and has since lived and worked as a journalist in Munich, Germany. He has published more than 20 books on Germany and Germany-Japan relations and has appeared in numerous media outlets to report on the situation in Germany.