The Risks and Governance of Generative Artificial Intelligence: Addressing the “Collingridge Dilemma”
Shen Fangjun |
The Belt and Road College, Zhejiang International Studies University, Hangzhou, Zhejiang 310023, China
Abstract With the rapid development of artificial intelligence (AI) technologies worldwide, their profound and tangible impacts on socioeconomic progress and human civilization are undeniable. AI has become a battleground for technological competition among nations and a key indicator of comprehensive national power and competitiveness. The future of humanity hinges on the development and governance of AI, and the risks and challenges it presents are a shared concern of the international community. The “Collingridge Dilemma” captures the regulatory balancing act between innovation and control, a critical issue for the high-quality and ethical advancement of AI. Breaking free of this dilemma requires reaching consensus on human ethical values, improving the supervisory and governance system, and striking a balance between regulation and development. First, identifying the risks posed by generative AI and determining the areas and methods of governance are prerequisites for overcoming the challenge. These risks extend beyond algorithmic and data issues to national security, public safety, the social trust system, and employment, necessitating more comprehensive and forward-looking risk assessment. Second, the “Collingridge Dilemma” has both epistemological and axiological dimensions, raising profound questions of value orientation. It is imperative to ensure that AI remains conducive to the advancement of human civilization. China must incorporate value considerations into the governance of generative AI, adhere to a people-centered approach, and promote socialist core values in advocating “intelligence for good”. This involves drawing on constructive technology assessment to embed values such as fairness, justice, and harmony into algorithm design, and on the moralization of generated content to uphold the social trust system.
Finally, the existing governance framework must be refined to mitigate the potential risks of generative AI. At the macro level: further clarify the governing bodies, defining the principal role of the State Scientific and Technological Commission of the People’s Republic of China in fulfilling its management functions by establishing an AI Security Review Committee (or Bureau) responsible for reviewing and supervising AI safety. Specialized institutions under unified leadership are essential for enhancing governance efficiency, reducing costs, and facilitating policy implementation. Legislative support should also be strengthened: expedite the development of AI safety laws and regulations, and build a comprehensive governance mechanism encompassing preventive review, in-process intervention, and post-event punishment. At the micro level: introduce access systems for preventive review; establish new “safe harbor” rules that clarify responsible parties and methods of accountability; and shift the focus of data governance toward building data-sharing mechanisms, exploring the development of public data, and obliging private entities to open their data for the public good, laying the groundwork for a future data-sharing market. Through these macro- and micro-level regulatory measures, AI development can be kept safe, trustworthy, and controllable, in keeping with the common values of peace, development, justice, and the aspiration for goodness, so as to promote the progress of human civilization.
Received: 23 October 2023