Microsoft and OpenAI, the developer of ChatGPT, have jointly disrupted five cyber threat groups affiliated with different nation-states.
According to a report disclosed on Wednesday, Microsoft tracked hacking groups tied to the Chinese and North Korean governments, Russian military intelligence, and Iran's Revolutionary Guard as they sought to refine their hacking tactics using large language models (LLMs).
LLMs are a type of artificial intelligence (AI) program trained on extensive text datasets to generate human-like responses.
OpenAI attributed the activity to two Chinese-affiliated groups, Salmon Typhoon and Charcoal Typhoon, along with Crimson Sandstorm, linked to Iran; Emerald Sleet, linked to North Korea; and Forest Blizzard, linked to Russia.
According to OpenAI, the groups attempted to utilize GPT-4 for the following purposes:
- Studying satellite communication and radar technology
- Debugging code
- Conducting phishing campaigns
- Translating technical papers
- Evading malware detection
The groups also attempted to use GPT-4 to research companies and cybersecurity tools. OpenAI deactivated their accounts as soon as they were detected.
OpenAI made the disclosure as it announced a blanket ban on state-backed hacking groups using its AI products. While the company successfully disrupted these incidents, it acknowledged the difficulty of preventing every misuse.
Following the proliferation of AI-generated deepfakes and scams after ChatGPT's launch, policymakers have stepped up their scrutiny of generative AI developers.
In June 2023, OpenAI unveiled a $1 million grant program to strengthen and evaluate AI-powered cybersecurity technologies.
Despite OpenAI's cybersecurity efforts and the safeguards it has implemented to prevent ChatGPT from generating harmful or inappropriate responses, malicious actors have found ways to bypass these protections and manipulate the chatbot into producing such content.
More than two hundred organizations, including Google, OpenAI, Microsoft, and Anthropic, recently joined with the Biden administration to form the U.S. AI Safety Institute Consortium (AISIC), under the AI Safety Institute.
The consortium aims to foster the safe development of artificial intelligence, counter AI-generated deepfakes, and address cybersecurity concerns.
The U.S. AI Safety Institute (USAISI) was established following President Joe Biden's executive order on AI safety, issued in late October 2023.