Foreign adversaries are increasingly using multiple AI tools to power hacking and influence operations, according to a new OpenAI report released Tuesday.

Why it matters: In the cases OpenAI discovered, the adversaries typically turned to ChatGPT to help plan their schemes, then used other models to carry them out — reflecting the range of applications for AI tools in such operations.

Zoom in: OpenAI banned several accounts tied to nation-state campaigns that appeared to be using multiple AI models to improve their operations.

• A Russia-based actor generating content for a covert influence operation used ChatGPT to write prompts seemingly intended for another AI video model.

• A cluster of Chinese-language accounts used ChatGPT to research and refine phishing automation they wanted t