The 2021 International Workshop on Cross-Cultural Cooperation on Artificial Intelligence

16:00-17:15, December 4th, 2021 (Online)

Introducing Cross-Cultural Cooperation on Artificial Intelligence

Realizing globally sustainable AI development requires international cooperation on AI infrastructures, ethical and governance frameworks, and mechanisms for interaction and coordination, especially taking different cultural perspectives into account. Currently, there are many obstacles to achieving this goal, such as limited interest in appreciating differences in cultures and values, distrust between cultures, and coordination challenges across regions. This forum will discuss these challenges and explore how to increase cross-cultural AI cooperation between different countries and regions.

Co-Chairs:
Yi Zeng
Professor and Founding Director, International Research Center for AI Ethics and Governance, Institute of Automation, Chinese Academy of Sciences.
Chief Scientist, Institute of AI International Governance, Tsinghua University.
Seán S. Ó hÉigeartaigh
Co-Director, Centre for the Study of Existential Risk (CSER), University of Cambridge.
Program Director, Leverhulme Centre for the Future of Intelligence (LCFI), University of Cambridge.

Speakers:
Vincent C. Müller
Vincent C. Müller is Professor of Philosophy of Technology at the Eindhoven University of Technology (TU/e), as well as University Fellow at the University of Leeds, Turing Fellow at the Alan Turing Institute, London, President of the European Society for Cognitive Systems, and Chair of the euRobotics topics group on 'ethical, legal and socio-economic issues'. He was Professor at Anatolia College/ACT (Thessaloniki) (1998-2019), Stanley J. Seeger Fellow at Princeton University (2005-6), and James Martin Research Fellow at the University of Oxford (2011-15). Müller is known for his research on the theory and ethics of disruptive technologies, particularly artificial intelligence (AI). He edits the Oxford Handbook of the Philosophy of Artificial Intelligence (OUP) and wrote the Stanford Encyclopedia of Philosophy article on the ethics of AI and robotics. He has organised around 25 conferences and workshops, among them a prominent conference series on the Philosophy and Theory of AI (PT-AI). Müller is one of the 32 experts on the Global Partnership on AI (GPAI) and one of two 'Key Experts' (PI) in the "International Alliance for human-centric AI" (IA-AI) (€2.5M), advising the EU on the coordination of AI policy with other global actors (G7, G20, UNESCO, UN).
Danit Gal
Danit Gal is an associate fellow at the Leverhulme Centre for the Future of Intelligence at the University of Cambridge and a visiting research fellow at the S. Rajaratnam School of International Studies at Nanyang Technological University. Gal is interested in technology ethics, geopolitics, governance, safety, and security. Previously, she was a technology advisor at the United Nations, leading work on AI in the implementation of the United Nations Secretary-General's Roadmap for Digital Cooperation. Gal serves as vice chair of the IEEE P7009 standard on the Fail-Safe Design of Autonomous and Semi-Autonomous Systems, a member of the executive committee of the IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems, a founding editor and editorial board member of Springer's AI and Ethics journal, an executive committee member of the AI4SDGs Cooperation Network, and an advisory board member of the EPSRC and UKRI Trustworthy Autonomous Systems Verifiability Node.
Mark Findlay
Professorial Research Fellow at the School of Law and Director of the Centre for AI and Data Governance at Singapore Management University. Mark Findlay holds honorary Chairs at the Australian National University, the University of Edinburgh, and the University of New South Wales, and is an Honorary Senior Research Fellow at the British Institute of International and Comparative Law and an Honorary Fellow of the Law School, University of Edinburgh. He was at the University of Sydney for over twenty years as the Chair in Criminal Justice and Director of the Institute of Criminology.
Emma Ruttkamp-Bloem
Professor of Philosophy at the University of Pretoria, and Research Fellow at the Centre for AI Research (CAIR), Council for Scientific and Industrial Research (CSIR). Chairperson of the UNESCO Ad Hoc Expert Group on AI Ethics, member of UNESCO's World Commission on the Ethics of Scientific Knowledge and Technology (COMEST), Corresponding Member of the International Academy for the Philosophy of Science, member of the European Network for Philosophy of the Social Sciences (ENPOSS), and member of the Philosophical Society of Southern Africa (PSSA).
Amandeep Gill
Ambassador Amandeep Gill is Project Director and CEO of the International Digital Health & AI Research Collaborative (I-DAIR), a new international platform to promote inclusive, impactful and responsible AI research and the development of digital technologies for health. He was Executive Director and co-Lead of the Secretariat of the UN Secretary-General's High-Level Panel on Digital Cooperation until August 2019, and previously served as India's Ambassador and Permanent Representative to the Conference on Disarmament in Geneva. He is a member of the UNESCO Ad Hoc Expert Group on AI Ethics, which drafted the recommendation on the ethics of AI. Ambassador Gill chaired the Group of Governmental Experts of the Convention on Certain Conventional Weapons (CCW) on emerging technologies in the area of lethal autonomous weapon systems from 2017 to 2018. He has served on the UN Secretary-General's Advisory Board on Disarmament Matters, on the WEF Global Future Council on Values, Ethics, and Innovation, and as a member of the Global Future Council on Global Public Goods.
Seán S. Ó hÉigeartaigh
Seán Ó hÉigeartaigh is Co-Director, and was the founding Executive Director, of Cambridge's Centre for the Study of Existential Risk (CSER), an academic research centre focused on global risks associated with emerging technologies and human activity. He is also a Program Director at the Leverhulme Centre for the Future of Intelligence, University of Cambridge. Since 2011 he has played a central role in international research on the long-term trajectories and impacts of artificial intelligence (AI) and other emerging technologies, project-managing the Oxford Martin Programme on the Impacts of Future Technology from 2011 to 2014, co-developing the Strategic AI Research Centre (a Cambridge-Oxford collaboration) in 2015, and co-developing the Leverhulme Centre for the Future of Intelligence (a Cambridge-Oxford-Imperial-Berkeley collaboration) in 2015/16.
Yi Zeng
Yi Zeng is a Professor at the Institute of Automation, Chinese Academy of Sciences, where he serves as deputy director of the Research Center for Brain-inspired Intelligence and founding director of the International Research Center for AI Ethics and Governance. He is also Chief Scientist in AI ethics and governance at the Institute of AI International Governance, Tsinghua University, and Chair of the Professional Committee on Information Technology and Artificial Intelligence at the Science and Technology Ethics Committee of the Chinese Academy of Sciences. He is a member of the advisory council for the Institute for Ethics in AI, University of Oxford. He serves as a board member of the National Governance Committee for the New Generation Artificial Intelligence, an expert in the UNESCO Ad Hoc Expert Group on AI Ethics, and an expert in the World Health Organization (WHO) Expert Group on Ethics and Governance of AI for Health. In brain-inspired artificial intelligence, he leads the effort on a brain-inspired cognitive engine, a brain-inspired spiking neural network architecture aimed at artificial general intelligence. In AI ethics and governance, he leads the drafting and grounding of the Beijing AI Principles and AI for Children: Beijing Principles, and is one of the main drafters of the National Governance Principles for the New Generation Artificial Intelligence and the Ethical Norms for the New Generation Artificial Intelligence. He is the principal investigator for Linking AI Principles, AI Governance Online, and the AI for SDGs Think Tank.