Manufacture and Dissemination of Fake Information vs. Protection of National Security and People's Rights and Interests in the AIGC Era
Lu Jianping 1, Dang Ziqiang 2
1. College of Media and International Culture, Zhejiang University, Hangzhou 310058, China
2. College of Computer Science and Technology, Zhejiang University, Hangzhou 310058, China
Abstract: With the impressive debut of ChatGPT, the idea of artificial intelligence-generated content (AIGC) assisting or even replacing human authors in content creation is becoming a reality. ChatGPT attracted hundreds of millions of users within months of its release, and a wave of AIGC applications quickly went online and gained popularity, marking the advent of a new era in content production.

Artificial intelligence technology has passed through three major stages since the 1950s: the shift from machine learning to deep learning, the introduction of the Transformer architecture, and the arrival of the foundation-model era. As the technology has advanced, the parameter counts of AI models have grown continuously, from 117 million (GPT-1) to 1.5 billion (GPT-2) and then to 175 billion (GPT-3), culminating in the birth of ChatGPT. At the same time, a variety of remarkable AI systems have emerged that can be flexibly applied in creative fields such as writing, music arrangement, painting, and video production. However, while foundation models are advancing the cognitive capabilities of intelligent agents, they also pose risks and challenges to humans: the training data and objectives of large language models may be ambiguous and uncertain, so the models may exhibit misleading and biased behavior; and once large language models become easy to manipulate, they are highly likely to be maliciously exploited, giving rise to fake news and online fraud and, more seriously, undermining social stability and endangering national security.

Because AI technology is easily accessible to the public and extremely powerful, malicious actors can misuse it to fabricate AI text and image rumours, AI video fraud, AI audio fraud, and AI scene fraud, and even to imitate the voice and behavior of real people, making it difficult to distinguish AI-generated content from the actual truth and genuine public opinion. In addition, the unchecked spread of anthropomorphic AI "water armies" (coordinated networks of fake online accounts) will not only disturb normal public cognition but also disrupt the existing social order. Indeed, owing to the power of AIGC, fabricating fake information has become easier and simpler, and such content is becoming more diverse and realistic. Serious cases of rumour-mongering and fraud have already occurred around the world, greatly jeopardizing the legitimate rights and interests of citizens and threatening national security.

In view of this, on the basis of a brief introduction to the mechanism of AIGC, this study analyses several cases and proposes the following suggestions and measures. (1) At the technical level, research investment should be increased to improve the effectiveness of deepfake detection, e.g., by designing tit-for-tat "AI of justice" to counter malicious or criminal AI; a minimal illustrative sketch of such a detector is given after this abstract. (2) At the legislative level, relevant laws and regulations should be improved to clarify the boundary between crime and non-crime; for example, the responsibility and penalty provisions of the newly promulgated Interim Measures for the Management of Generative Artificial Intelligence Services should be combined with China's existing laws and regulations to identify the actors liable for the manufacture, use, and dissemination of AI-generated false information. (3) At the law-enforcement level, specialized institutions should be established to handle emergencies and to strictly enforce laws and regulations in combating AI crimes. (4) In terms of publicity and education, knowledge of AIGC should be popularized to improve citizens' ability to recognize and remain alert to AI-generated content, so that an effective early-warning mechanism can be established, together with a hazard-handling mechanism, to avoid and reduce damage to people's interests and to safeguard national security.
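Suggestion (1) concerns deepfake detection. Below is a minimal, hedged sketch of a frame-level detector, assuming PyTorch and torchvision are available and that face crops have already been extracted from a suspect video; the ResNet-18 backbone and the names build_detector and score_face are illustrative assumptions for this sketch, not the specific detection systems the study discusses.

import torch
import torch.nn as nn
from torchvision import models, transforms

# Standard ImageNet preprocessing for the pretrained backbone.
preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

def build_detector() -> nn.Module:
    # Repurpose an ImageNet-pretrained ResNet-18 as a two-class
    # (real vs. AI-generated) classifier; the backbone choice is an
    # illustrative assumption, and the model must be fine-tuned on a
    # labelled real/fake face dataset before it is usable.
    model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
    model.fc = nn.Linear(model.fc.in_features, 2)
    return model

@torch.no_grad()
def score_face(model: nn.Module, face_image) -> float:
    # Probability that one face crop (a PIL image) is AI-generated.
    model.eval()
    x = preprocess(face_image).unsqueeze(0)  # shape (1, 3, 224, 224)
    probs = torch.softmax(model(x), dim=1)
    return probs[0, 1].item()  # index 1 = the "fake" class

In practice, a video-level verdict would aggregate such per-frame scores (e.g., by averaging them across sampled frames), and because generators and detectors evolve in an arms race, any fixed classifier decays over time; this is why the study argues for sustained research investment rather than a one-off technical fix.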
Received: 10 May 2023