Saturday, October 25, 2025

OpenAI says suspected Chinese government operatives used ChatGPT to develop mass-surveillance proposals

Recently, Google News online picked up the following:

Suspected Chinese government operatives used ChatGPT to shape mass surveillance proposals, OpenAI says

CNN - By Sean Lyngaas, Jim Sciutto

Oct 7, 2025

Suspected Chinese government operatives asked ChatGPT to help write a proposal for a tool to conduct large-scale surveillance and to help promote another that allegedly scans social media accounts for “extremist speech,” ChatGPT-maker OpenAI said in a report published Tuesday.

The report sounds the alarm about how a highly coveted artificial intelligence technology can be used to try to make repression more efficient and provides “a rare snapshot into the broader world of authoritarian abuses of AI,” OpenAI said.

The US and China are in an open contest for supremacy in AI technology, each investing billions of dollars in new capabilities. But the new report shows how AI is often used by suspected state actors to carry out relatively mundane tasks, like crunching data or making language more polished, rather than any startling new technological achievement.

“There’s a push within the People’s Republic of China to get better at using artificial intelligence for large-scale things like surveillance and monitoring,” Ben Nimmo, principal investigator at OpenAI, told CNN. “It’s not last year that the Chinese Communist Party started surveilling its own population. But now they’ve heard of AI and they’re thinking, oh maybe we can use this to get a little bit better.”

In one case, a ChatGPT user “likely connected to a [Chinese] government entity” asked the AI model to help write a proposal for a tool that analyzes the travel movements and police records of the Uyghur minority and other “high-risk” people, according to the OpenAI report. The US State Department in the first Trump administration accused the Chinese government of genocide and crimes against humanity against Uyghur Muslims, a charge that Beijing vehemently denies.

Another Chinese-speaking user asked ChatGPT for help designing “promotional materials” for a tool that purportedly scans X, Facebook and other social media platforms for political and religious content, the report said. OpenAI said it banned both users.

AI is one of the most high-stakes areas of competition between the US and China, the world’s two superpowers. Chinese firm DeepSeek alarmed US officials and investors in January when it presented a ChatGPT-like AI model called R1, which has all the familiar abilities but operates at a fraction of the cost of OpenAI’s model. That same month, President Donald Trump touted a plan by private firms to invest up to $500 billion in AI infrastructure.

Asked about OpenAI’s findings, Liu Pengyu, a spokesperson for the Chinese Embassy in Washington, DC, said: “We oppose groundless attacks and slanders against China.”

China is “rapidly building an AI governance system with distinct national characteristics,” Liu’s statement continued. “This approach emphasizes a balance between development and security, featuring innovation, security and inclusiveness. The government has introduced major policy plans and ethical guidelines, as well as laws and regulations on algorithmic services, generative AI, and data security.”

The OpenAI report includes several other examples of just how commonplace AI is in the daily operations of state-backed and criminal hackers, along with other scammers. Suspected Russian, North Korean and Chinese hackers have all used ChatGPT to perform tasks like refining their code or making the phishing links they send to targets more plausible.

One way state actors are using AI is to improve in areas where they had weaknesses in the past. For instance, Chinese and Russian state actors have often struggled to avoid basic language errors in influence operations on social media.

“Adversaries are using AI to refine existing tradecraft, not to invent new kinds of cyberattacks,” Michael Flossman, another security expert with OpenAI, told reporters.

Meanwhile, scammers very likely based in the Southeast Asian country of Myanmar have used OpenAI’s models for a range of business tasks, from managing financial accounts to researching criminal penalties for online scams, according to the company.

But an increasing number of would-be victims are using ChatGPT to spot scams before they are victimized. OpenAI estimates that ChatGPT is “being used to identify scams up to three times more often than it is being used for scams.”

CNN asked OpenAI if it was aware of US military or intelligence agencies using ChatGPT for hacking operations. The company did not directly answer the question, instead referring CNN to OpenAI’s policy of using AI in support of democracy.

US Cyber Command, the military’s offensive and defensive cyber unit, has made clear that it will use AI tools to support its mission. An “AI roadmap” approved by the command pledges to “accelerate adoption and scale capabilities” in artificial intelligence, according to a summary of the roadmap the command provided to CNN.

Cyber Command is still exploring how to use AI in offensive operations, including how to use it to build capabilities to exploit software vulnerabilities in equipment used by foreign targets, former command officials told CNN.

Translation

OpenAI says suspected Chinese government operatives used ChatGPT to develop mass-surveillance proposals

Suspected Chinese government operatives asked ChatGPT to help write a proposal for a tool to conduct large-scale surveillance, and to help promote another tool that allegedly scans social media accounts for "extremist speech," ChatGPT-maker OpenAI said in a report published Tuesday.

OpenAI said the report sounds the alarm that a highly coveted artificial intelligence technology could be used to make repression more efficient, and offers "a rare snapshot into the broader world of authoritarian abuses of AI."

The US and China are openly vying for supremacy in AI technology, with each side investing billions of dollars in new capabilities. But the new report shows that suspected state actors typically use AI for relatively mundane tasks, such as crunching data or polishing language, rather than for any startling new technological achievement.

"There's a push within the People's Republic of China to get better at using artificial intelligence for large-scale things like surveillance and monitoring," Ben Nimmo, principal investigator at OpenAI, told CNN. "It's not last year that the Chinese Communist Party started surveilling its own population. But now they've heard of AI and they're thinking, oh, maybe we can use this to get a little bit better."

According to the OpenAI report, in one case a ChatGPT user "likely connected to a [Chinese] government entity" asked the AI model to help write a proposal for a tool that analyzes the travel movements and police records of the Uyghur minority and other "high-risk" people. During the first Trump administration, the US State Department accused the Chinese government of genocide and crimes against humanity against Uyghur Muslims, a charge that Beijing vehemently denies.

The report said another Chinese-speaking user asked ChatGPT for help designing "promotional materials" for a tool that purportedly scans X, Facebook and other social media platforms for political and religious content. OpenAI said it banned both users.

AI is one of the highest-stakes areas of competition between the US and China, the world's two superpowers. In January, the Chinese firm DeepSeek unveiled a ChatGPT-like AI model called R1, alarming US officials and investors; the model offers all the familiar capabilities but runs at a fraction of the cost of OpenAI's model. That same month, President Donald Trump touted a plan by private firms to invest up to $500 billion in AI infrastructure.

Asked about OpenAI's findings, Liu Pengyu, a spokesperson for the Chinese Embassy in Washington, DC, said: "We oppose groundless attacks and slanders against China."

China is "rapidly building an AI governance system with distinct national characteristics," Liu's statement continued. "This approach emphasizes a balance between development and security, featuring innovation, security and inclusiveness. The government has introduced major policy plans and ethical guidelines, as well as laws and regulations on algorithmic services, generative AI and data security."

The OpenAI report lists several other examples of just how commonplace AI has become in the daily operations of state-backed hackers, criminals and other scammers. Suspected Russian, North Korean and Chinese hackers have all used ChatGPT for tasks such as refining their code or making the phishing links they send to targets more plausible.

One way state actors use AI is to shore up areas where they were weak in the past. For example, Chinese and Russian state actors have often struggled to avoid basic language errors in their influence operations on social media.

"Adversaries are using AI to refine existing tradecraft, not to invent new kinds of cyberattacks," Michael Flossman, another security expert at OpenAI, told reporters.

Meanwhile, according to OpenAI, scammers very likely based in the Southeast Asian country of Myanmar have used OpenAI's models for a range of business tasks, from managing financial accounts to researching the criminal penalties for online scams.

But a growing number of would-be victims are using ChatGPT to spot scams before they fall for them. OpenAI estimates that ChatGPT is "being used to identify scams up to three times more often than it is being used for scams."

CNN asked OpenAI whether it was aware of US military or intelligence agencies using ChatGPT for hacking operations. OpenAI did not answer the question directly, instead referring CNN to its policy of using AI in support of democracy.

US Cyber Command, the US military's offensive and defensive cyber unit, has made clear that it will use AI tools to support its mission. According to a summary of the roadmap the command provided to CNN, an "AI roadmap" approved by the command pledges to "accelerate adoption and scale capabilities" in artificial intelligence.

Former command officials told CNN that Cyber Command is still exploring how to use AI in offensive operations, including how to use it to build capabilities for exploiting software vulnerabilities in equipment used by foreign targets.

So, ChatGPT-maker OpenAI says in a report that suspected Chinese government operatives have asked ChatGPT to help write a proposal for a tool to conduct large-scale surveillance, among other things. Meanwhile, the US Cyber Command has made clear that it will use AI tools to support its mission. Apparently, both the US and China are using AI technology to aid their operations.
