State-Backed Hackers Utilize Microsoft-Backed OpenAI Tools for Espionage

Key Points:

  • State-backed hackers from Russia, China, Iran, and North Korea are using tools from Microsoft-backed OpenAI to refine their hacking techniques.
  • Microsoft announces a blanket ban on state-backed hacking groups accessing its AI products.
  • China rejects the allegations, saying it advocates the safe deployment of AI technology.
  • The disclosure heightens concerns about state-sponsored hackers' potential misuse of rapidly proliferating AI technology.

Microsoft has revealed that state-backed hacking groups affiliated with Russia, China, and Iran have been leveraging tools from OpenAI, a company backed by Microsoft, to refine their hacking techniques. In a report published on Wednesday, Microsoft disclosed that hacking groups associated with Russian military intelligence, Iran’s Revolutionary Guard, and the governments of China and North Korea were employing large language models from OpenAI to enhance their hacking capabilities.

Having tracked these groups, Microsoft announced a comprehensive ban barring the state-backed threat actors it identifies and monitors from accessing its AI products.

Tom Burt, Vice President for Customer Security at Microsoft, said that regardless of whether any laws or terms of service were violated, the company is taking a proactive stance and will not provide its technology to these identified threat actors.

According to the report, Russian, Iranian, and North Korean officials did not respond to requests for comment. Responding to the allegations, Liu Pengyu, spokesperson for China’s U.S. embassy, rejected what he called “groundless smears and accusations against China” and said China advocates the “safe, reliable, and controllable” deployment of AI technology.

This revelation underscores concerns about the misuse of rapidly proliferating AI technology by state-sponsored hackers and marks one of the first instances of an AI company publicly discussing how cybersecurity threat actors employ AI technologies. OpenAI and Microsoft described the hackers’ use of their AI tools as “early-stage” and “incremental,” emphasizing that no breakthroughs were observed.

According to Microsoft, hacking groups associated with Russia’s GRU were reportedly researching satellite and radar technologies related to conventional military operations in Ukraine. North Korean hackers used the models for content generation in spear-phishing campaigns, while Iranian hackers employed the technology to draft convincing emails, including an attempt to lure prominent feminists to a malicious website.

Microsoft also revealed that Chinese state-backed hackers experimented with large language models to ask questions about rival intelligence agencies, cybersecurity issues, and notable individuals.

The TechGolly editorial team is led by Al Mahmud Al Mamun, who previously served as Editor-in-Chief at a world-leading professional research magazine. Rasel Hossain and Enamul Kabir serve as Managing Editors. Our team incorporates technologists, researchers, and technology writers with substantial knowledge and backgrounds in Information Technology (IT), Artificial Intelligence (AI), and Embedded Technology.
