OpenAI’s ChatGPT has guardrails that are supposed to stop users from generating information that could be used for catastrophic purposes, like making a biological or nuclear weapon. But those guardrails aren’t perfect: some of the models ChatGPT relies on can be tricked and manipulated. In a series of tests on four of OpenAI’s most advanced models, two of which are available in ChatGPT, NBC News was able to generate hundreds of responses with instructions on how to