13:30-17:30 GMT+8 (Beijing Time), May 7th, 2023.
The advancement of Large AI Models has already brought both potential and actual risks to society. The 2023 Workshop on Ethics and Governance of Large AI Models (EGLAIM 2023) addresses the ethics and governance of Large AI Models, inviting experts and scholars from academia and industry to explore the near-term and long-term ethical risks of Large AI Models, especially Generative AI models, and to share the latest thinking, academic and industrial practices, and governance trends. The EGLAIM 2023 Workshop is part of the China Artificial Intelligence Industry Annual Conference organized by the Chinese Association for Artificial Intelligence (CAAI), held in Suzhou from May 7th to 8th, 2023, and it is also a workshop of the International Conferences on AI Ethics and Sustainable Development (AIESD).
Director, Center for Long-term AI
Professor, Institute of Automation, Chinese Academy of Sciences
Yi Zeng is a Professor and the Director of the International Research Center for AI Ethics and Governance and the Brain-inspired Cognitive Intelligence Lab, both at the Institute of Automation, Chinese Academy of Sciences. He is the founding director of the Center for Long-term AI and a board member of the National Governance Committee for the New Generation Artificial Intelligence, China. He is a member of the UNESCO Ad Hoc Expert Group on AI Ethics and a member of the expert group on Ethics and Governance of AI for Health at the World Health Organization (WHO). Yi focuses on Brain-inspired AI, AI ethics and governance, and AI for Sustainable Development. He is the scientific director of "BrainCog", the brain-inspired cognitive intelligence engine, as well as of the Safe and Ethical AI (SEA) Platform Network and the AI for Sustainable Development Goals (AI4SDGs) Cooperation Network.
Director and Researcher of the Research Office of Philosophy of Science and Technology, Institute of Philosophy, Chinese Academy of Social Sciences, and Distinguished Professor at Fudan University. He is the Director of the Science, Technology and Social Research Center of the Chinese Academy of Social Sciences, an expert receiving the State Council special government allowance, a former visiting scholar at Oxford University and the University of Pittsburgh, and a Berggruen scholar.
His main research fields are the philosophy of science and technology, the ethics of science and technology, and the philosophy and ethics of big data and artificial intelligence; his recent works include "The Ethical Foundation of Information Civilization" and "Acceptable Science: Reflections on the Foundation of Contemporary Science". He is currently Vice Chairman of the China Big Data Expert Committee, an academic consultant to the Scientific Standards and Ethics Research Support Center of the Chinese Academy of Sciences, an advisory member of the Meituan Artificial Intelligence Governance Committee, and a member of the Ant Group Science and Technology Ethics Advisory Committee.
Abstract: With the development of generative and conversational artificial intelligence, AI-generated language has produced a kind of compressed, image-like copy of human language. The result of this copying is to reinforce the relatively patterned modes of human language and the thinking behind it as content that can be customized (including anti-patterned, unlimited combinations of linguistic ideas). This has led to a double machine domestication of human thinking: on the one hand, the pseudo-language of machines, by extracting from a vast amount of human language, has acquired a level of linguistic expression above the human average, which enhances humans' patterned and creative expression; on the other hand, the various good and correct words uttered by machines become anthropomorphic knowledge authorities, urging us to think and act in certain conformist ways.
Director and Researcher of the Research Office of Philosophy of Science and Technology, Institute of Philosophy, Chinese Academy of Social Sciences, and Distinguished Professor at Fudan University. He is the Director of the Science, Technology and Social Research Center of the Chinese Academy of Social Sciences, an expert receiving the State Council special government allowance, a former visiting scholar at Oxford University and the University of Pittsburgh, and a Berggruen scholar.
His main research fields are the philosophy of science and technology, the ethics of science and technology, and the philosophy and ethics of big data and artificial intelligence; his recent works include "The Ethical Foundation of Information Civilization" and "Acceptable Science: Reflections on the Foundation of Contemporary Science". He is currently Vice Chairman of the China Big Data Expert Committee, an academic consultant to the Scientific Standards and Ethics Research Support Center of the Chinese Academy of Sciences, an advisory member of the Meituan Artificial Intelligence Governance Committee, and a member of the Ant Group Science and Technology Ethics Advisory Committee.
Abstract: Since 2017, various countries and organizations have published more than 100 guidelines and principles on AI ethics. There are many commonalities among these guidelines, but also different emphases. We wish to understand the philosophical traditions behind these different approaches and the roles and functions of countries in developing these guidelines and principles. We also offer some perspectives on the necessity and challenges of translating these principles and guidelines into AI models, systems, and products through research and development.
Chair Professor of the Department of Electrical and Computer Engineering and Director of the Center for Artificial Intelligence Research (CAiRE) at the Hong Kong University of Science and Technology (HKUST), Visiting Professor at the Central Academy of Fine Arts in Beijing, and Fellow of AAAI, ACL, IEEE, and ISCA. She is an expert on the World Economic Forum's Global Future Council on Artificial Intelligence, representing HKUST in partnerships on artificial intelligence for the benefit of humanity and society. She serves on the Board of Governors of the IEEE Signal Processing Society and is a member of the IEEE Working Group responsible for developing the IEEE standard "Recommended Practice for Organizational Governance of Artificial Intelligence". In 2022, she served as Meta's RAI Distinguished Consultant. She has served as an editor and associate editor of journals such as Computer Speech and Language, IEEE/ACM Transactions on Audio, Speech and Language Processing, Transactions of the ACL, and the Journal of Machine Learning Research. Her team has won several best and outstanding paper awards at ACL and NeurIPS workshops.
Abstract: I will begin by explaining how the potential risks brought by Large AI models quickly become actual risks. Then, I will introduce efforts linking AI ethical principles, AI governance self-assessment, and the crowdsourcing of ethical and safety challenges for Large AI models through the SEA AI Ethics and Safety Platform Network. Finally, I will talk about the limitations of current AI and discuss an approach for creating brain and mind inspired moral AI.
Director, Center for Long-term AI
Professor, Institute of Automation, Chinese Academy of Sciences
Yi Zeng is a Professor and the Director of the International Research Center for AI Ethics and Governance and the Brain-inspired Cognitive Intelligence Lab, both at the Institute of Automation, Chinese Academy of Sciences. He is the founding director of the Center for Long-term AI and a board member of the National Governance Committee for the New Generation Artificial Intelligence, China. He is a member of the UNESCO Ad Hoc Expert Group on AI Ethics and a member of the expert group on Ethics and Governance of AI for Health at the World Health Organization (WHO). Yi focuses on Brain-inspired AI, AI ethics and governance, and AI for Sustainable Development. He is the scientific director of "BrainCog", the brain-inspired cognitive intelligence engine, as well as of the Safe and Ethical AI (SEA) Platform Network and the AI for Sustainable Development Goals (AI4SDGs) Cooperation Network.
Abstract: General models are one of the important directions in the development of artificial intelligence in recent years. As such models are increasingly developed and applied, their social and ethical impact has received extensive attention. From the perspective of philosophy of technology, this report examines the intermediary nature of general models and the ethical challenges they bring, and then reflects on the current countermeasures and their limitations from the two aspects of governance technology and governance mechanisms.
Ph.D., professor and doctoral supervisor at the School of Philosophy, Fudan University, and Distinguished Professor of the Changjiang Scholars Program of the Ministry of Education. Dean of the China Association for Science and Technology-Fudan University Institute of Science and Technology Ethics and the Future of Humanity. Concurrently serves as a member of the Medical Ethics Sub-committee of the National Science and Technology Ethics Committee, co-chairman of the Professional Ethics and Academic Ethics Committee of the China Computer Federation, vice-chairman of the Technology and Engineering Ethics Committee of the China Society for Dialectics of Nature, deputy director of the Technology Ethics Committee of the Chinese Ethics Society, director of the Ethics Committee of the International Human Phenome Project (Shanghai), and chairman of the Shanghai Society for Dialectics of Nature. Research directions: ethics of science and technology, and ethics of artificial intelligence and biomedicine.
Abstract: Since the beginning of this year, innovations in basic algorithms such as generative algorithms, pre-trained models, and multimodal technology have set off a wave of large models, spawning a large number of new models and new business formats, and the artificial intelligence industry has returned to a booming trend. At the same time, the industrialization of artificial intelligence faces complex security issues caused by uncertainties in technology and application forms, such as novel attacks, fake news, data leakage, and other security risks brought by generative large models such as ChatGPT. Coordinating development and security is an unavoidable problem in the development of every new technology, and achieving a benign interaction between high-level development and high-level security is a major proposition for the current artificial intelligence industry. This report will discuss the governance challenges posed by artificial intelligence in the era of large models and possible solutions.
Graduated from Tsinghua University with a major in computer science, he is currently the CEO of Beijing Ruilai Smart Technology Co., Ltd. He has long been engaged in research in the fields of machine learning and knowledge aggregation, has published dozens of high-level papers at top conferences and in journals in the field of artificial intelligence such as NIPS, ICML, AAAI, and PAMI, and holds dozens of invention patents in the field of AI. He is a member of the Expert Committee of the New Generation Artificial Intelligence Industry Technology Innovation Strategic Alliance, was selected for the Beijing Science and Technology Rising Star Program, and was elected vice chairman of the Beijing Haidian District Federation of Industry and Commerce. He won the "Wu Wenjun Artificial Intelligence Outstanding Youth Award" in 2022 and the "Outstanding Young Engineer Award" from the China International Science Exchange Foundation in 2023.
Dialogue Guest
Patrick Huen Wing Ming Professor of Systems Engineering & Engineering Management and Director of the Stanley Ho Big Data Decision Analytics Research Centre, The Chinese University of Hong Kong; Director of the InnoHK Centre for Perceptual and Interactive Intelligence.
Dialogue Guest
Head of AI risk management at Ant Group and a senior expert in comprehensive risk management. He graduated from Shanghai Jiao Tong University and has worked at the Bank of Communications, Ant Group, and other organizations. He has long been engaged in model risk management, algorithm governance, and technology ethics, has rich experience in model risk management and algorithm governance at Internet platform companies and in the banking industry, and has participated in the formulation of a number of standards related to artificial intelligence models.
Dialogue Guest
Director of Artificial Intelligence Ethics and Governance Research at SenseTime, Executive Director of the Computational Law and AI Ethics Research Center of Shanghai Jiao Tong University, and a member of the Technical Steering Committee of the Engineering Research Center of Trusted AI of the Ministry of Education. As an expert member of SenseTime's Artificial Intelligence Ethics and Governance Committee, he is responsible for artificial intelligence ethics research and the development of ethical governance processes and tools, and leads product ethics reviews. He worked for a long time at domestic and foreign public policy research institutions such as the China Academy of Information and Communications Technology, has led or participated in a number of major provincial and ministerial research projects, and many of his research reports have been adopted by departments at or above the provincial and ministerial level. His main research directions include artificial intelligence ethics and governance, science and technology policy, and international strategy.
Dialogue Guest
Senior researcher at Tencent Research Institute and a member of the Social and Legal Committee of the Guangdong Provincial Committee of the China Democratic League. He is also a visiting professor at Shanghai University of Political Science and Law, a distinguished researcher of the Digital Law Research Institute of East China University of Political Science and Law, a researcher at the Digital Economy and Legal Innovation Research Center of the University of International Business and Economics, a member of the Artificial Intelligence Ethics and Governance Working Committee of the Chinese Association for Artificial Intelligence, and a director of the Information and Communication Law Research Association of the Guangdong Law Society, among other social positions. He has long been engaged in research on policy, law, and social ethics related to frontier Internet technology and the digital economy, focusing mainly on artificial intelligence, autonomous driving, blockchain, the metaverse, intellectual property and data systems, platform responsibility, international digital economy governance, and legal technology. In the field of artificial intelligence, he has been invited to participate in and speak at top domestic and international conferences. He led the planning and writing of research reports such as the "2018 Global Autonomous Driving Legal Policy Research Report", the "Explainable AI Development Report 2022", and the "AI Generated Content Development Report 2020". His representative works include "Digital Justice" (lead translator), "Artificial Intelligence: National Artificial Intelligence Strategic Action Grasp" (lead author), and "Industrial Blockchain" (lead author). He has published hundreds of papers and articles in periodicals and media such as "Guangming Daily", "Study Times", and "Rule of Law Daily", and many of his papers have been reprinted by "Chinese Social Science Digest".
Dialogue Guest
Graduated from Zhejiang University, she is committed to researching issues such as the development of the data industry, the value and trading of data elements, algorithm governance, and technology ethics. She previously worked at international consulting companies such as Capgemini and IBM, serving industries including government, communications, energy, manufacturing, and transportation. After joining Alibaba, she personally planned and built a number of national key projects, such as the world's largest data center (State Grid) and the country's largest video recognition algorithm application (free-flow traffic). Based on 20 years of experience in enterprise digital transformation, she rethinks the value measurement, security governance, and construction of ethical rules for the development of the data economy. She is currently the director of the Data Economy Research Center of the Ali Research Institute and the leader of the Ethics Group of the Alibaba Group Science and Technology Ethics Committee. She is responsible for compiling the Group's "White Paper on Artificial Intelligence Governance" and formulating the Group's ethical principles and internal review system, and, together with the committee's technology group and compliance group, for the review and governance of core algorithm projects such as rider scheduling, search and recommendation, and generative artificial intelligence.