OpenAI last week unveiled two new free-to-download tools that are supposed to make it easier for businesses to construct guardrails around the prompts users feed AI models and the outputs those systems generate.
The new guardrails are designed so that a company can more easily set up controls to prevent a customer service chatbot from, for example, responding in a rude tone or revealing internal policies about how it decides whether to offer refunds.
But while these tools are designed to make AI models safer for business customers, some security experts caution that the way OpenAI has released them could create new vulnerabilities and give companies a false sense of security.

Fortune