How China Seeks to Govern AI

Global Challenges Foundation
Sep 5, 2018


Jeffrey Ding, Future of Humanity Institute, Oxford University

The views expressed in this article are those of the author. They are not necessarily endorsed by the affiliated organisations or the Global Challenges Foundation.

Fears of a U.S.-China AI arms race have proliferated in the past year, with leading thinkers highlighting AI as a technology that could provide a decisive strategic advantage to the country best equipped to harness its potential. These fears intensified after China announced its intention to become the world’s “primary” AI innovation center by 2030 in a far-reaching AI development plan (AIDP). While it is important to understand how China’s pursuit of AI could affect international competition over strategic technologies, it is equally important to take note of how China’s efforts to govern AI could promote international cooperation. As the AIDP and other key texts reflect, multi-stakeholder discussions are taking place in China in three key categories: near-term AI governance issues, long-term AI safety risks, and autonomous weapons. Developments in China’s governance of these areas will only grow in significance as China seeks to play a more active role in international efforts to regulate AI technologies.

In the past year, China has emerged as an indispensable actor in the governance of AI. In July 2017, China’s State Council, the central cabinet body that issues national policies, set a benchmark of USD 1.5 trillion for the scale of China’s AI industry by 2030 — a figure that would put China in a world-leading position. This goal, while ambitious, is not outside the realm of possibility: across many drivers of AI development — including hardware, data, talented researchers, and AI firms — China is making enormous gains. Particularly notable is the growth of China’s AI startup scene, which received more funding than U.S. AI startups in 2017.¹ Moreover, the AIDP and follow-up measures set forth China’s whole-of-society approach to spurring AI, exemplified by government guidance funds that funnel money toward AI startups and by initiatives to attract and retain talented researchers.

Alongside the growth of China’s commercial AI ecosystem, Chinese scholars and policymakers have paid increasing attention to issues of AI governance, ranging from near-term issues to existential risks. Under a section on “Safeguard Measures,” the State Council’s AI plan lays out a framework for developing laws, regulations, and ethical norms for AI governance.² The plan’s drafters, who included prominent computer science professors, not only engage with near-term AI safety issues — calling for reforms to the legal system to address the effects of AI on criminal liability, privacy, intellectual property rights, and information security — but also explicitly note long-term risks. Forward-looking governance measures include multi-level structures that determine the morality of various AI systems, ethical frameworks for human-machine collaboration, and codes of conduct for researchers, developers, and designers of AI products. The plan’s primary focus is governance at the national level, but it recognizes that a favorable international environment is crucial to China’s development of AI. To that end, it also calls for China to “strengthen research on global commons problems” — mentioning, in particular, robot malfunctions, in which robots diverge from their manufacturer’s pre-set goals, as one such commons problem — and to “deepen international cooperation in artificial intelligence laws and regulations, international rules, etc., to jointly deal with global challenges.”

Aside from the State Council’s plan, there is evidence that other important stakeholders are taking long-term AI risks seriously. In November 2017, the China Academy of Information and Communications Technology, a government think tank, and three divisions of Tencent jointly published a book titled Artificial Intelligence: A National Strategic Initiative. Two chapters in particular demonstrate deep engagement with AI safety issues and present a contribution to the global conversation on AI from a Chinese perspective. The first, titled “Moral Machines,” warns that intelligent machines may “break” their designers’ pre-set rules in order to protect their own survival and calls for more research into value alignment.³ Another chapter, “23 ‘Strong Regulations’ for AI,” substantively discusses both the 23 Asilomar AI Principles for beneficial AI⁴ and the risks of superintelligence.

China’s rhetoric and diplomatic posture toward AI applications in the military arena have been largely ambiguous. On the one hand, China has exhibited a willingness to regulate autonomous weapons at the international level. At an April 2018 meeting of the Group of Governmental Experts (GGE) on lethal autonomous weapons systems (LAWS), it became the first permanent member of the UN Security Council to support a ban on the use of LAWS. At the same time, China is one of more than a dozen countries developing partly autonomous weapon systems, and its latest position paper on the issue takes such a narrow definition of LAWS that it would allow for the development of very powerful, fully autonomous weapons as long as some degree of human intervention remains.⁵

When it comes to shaping AI ethics and standards globally, China has taken a more active role. In January 2018, the Standardization Administration of China (SAC) issued a White Paper on AI standardization that is more detailed than any similar attempt by other governments. As further evidence of China’s interest in shaping the international governance of AI, the White Paper was presented to a new AI standards committee (SC 42), part of an influential international standards body,⁶ which held its first meeting in Beijing in April 2018. Together with Chinese efforts to translate standards from U.S. bodies such as the Institute of Electrical and Electronics Engineers (IEEE), this demonstrates China’s desire to set the pace in strategic AI technology development.

Given the breadth and depth of Chinese multi-stakeholder engagement with AI governance, discussions should evolve from rudimentary debates over whether China is even aware of AI governance issues to a more substantive exploration of how China’s views on AI governance will shape the global governance of long-term AI risks. For clues to answering these deeper questions, researchers should pay close attention to both the language and the structures surrounding Chinese governance of AI. One important element is the phrase “social ethics” (社会伦理), which has been used in discussions about the long-term risks of AI. Anchoring his reflection in the idea that all societies need a moral base, He Huaihong, professor of Chinese philosophy at Peking University, has argued that China needs to rebuild its social ethics on Confucian values in the face of rapid changes in Chinese society.⁷ Given that this notion of “social ethics” has also been invoked by Chinese researchers discussing the risks of human cloning,⁸ understanding how the risks posed by AI development fit within Chinese discussions of social ethics is an important endeavor. Beyond language, the structures surrounding Chinese governance of AI are also significant. In particular, China’s AI standards system is highly hierarchical and controlled by the central government, in contrast to the arrangements of other countries. Notably, the U.S. standards system that governs AI technologies is much more decentralized and allows the private sector to take the lead.

Following the explosive growth of China’s AI sector, Chinese stakeholders are taking positions on a wide variety of AI governance issues: near-term issues such as technical standardization and privacy, long-term AI safety risks related to superintelligence, and autonomous weapons. These discussions are increasingly taking place in international fora, reflecting both China’s ambition to take a leading role in setting the rules for this strategic technology and the growing number of countries that have outlined their own visions of the safe and ethical development of AI. Undoubtedly, China’s approach to AI governance, at both the domestic and international levels, will significantly influence the extent to which catastrophic risks associated with the development of AI can be prevented.

¹ CB Insights, 2017. “Artificial Intelligence Trends to Watch 2018.” https://www.cbinsights.com/research/report/artificial-intelligence-trends-2018/

² China State Council, 2017. “State Council Notice on the New Generation Artificial Intelligence Development Plan” [国务院关于印发新一代人工智能发展规划的通知]. July 8, 2017. http://www.gov.cn/zhengce/content/2017-07/20/content_5211996.htm. All translations are the author’s own.

³ Tencent Research Institute & China Academy of Information and Communications Technology, 2017. Artificial Intelligence: A National Strategic Initiative. A translation is accessible at https://docs.google.com/document/d/1Lz0vEWsUmgNolVJVw3FjKbyAIH6Nw7LMpnjUwSboqXA/edit?usp=sharing

⁴ Future of Life Institute. “Asilomar AI Principles.” https://futureoflife.org/ai-principles/

⁵ “China’s Strategic Ambiguity and Shifting Approach to Lethal Autonomous Weapons Systems,” Lawfare, 2018. https://www.lawfareblog.com/chinas-strategic-ambiguity-and-shifting-approach-lethal-autonomous-weapons-systems

⁶ The International Organization for Standardization/International Electrotechnical Commission (ISO/IEC) SC 42 committee.

⁷ He Huaihong, Social Ethics in a Changing China: Moral Decay or Ethical Awakening? Brookings Institution Press, 2015. https://www.brookings.edu/book/social-ethics-in-a-changing-china/

⁸ Xinhua, 2018. http://www.xinhuanet.com/english/2018-01/26/c_136927556.htm
