
Microsoft’s new safety system can catch hallucinations in its customers’ AI apps

[Image: Microsoft logo. Illustration: The Verge]

Sarah Bird, Microsoft’s chief product officer of responsible AI, tells The Verge in an interview that her team has designed several new safety features that will be easy to use for Azure customers who aren’t hiring groups of red teamers to test the AI services they built. Microsoft says these LLM-powered tools can detect potential vulnerabilities, monitor for hallucinations “that are plausible yet unsupported,” and block malicious prompts in real time for Azure AI customers working with any model hosted on the platform.

“We know that customers don’t all have deep expertise in prompt injection attacks or hateful content, so the evaluation system generates the prompts needed to simulate these types of attacks. Customers can then get a…
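Microsoft has not published how its groundedness detection works beyond saying it is LLM-powered. As a toy illustration of the underlying idea only (not Microsoft's implementation), a hallucination check of this kind flags answer sentences whose claims are not supported by the source material. The sketch below uses crude word-overlap rather than an LLM; all names and thresholds are invented for the example:

```python
import re

# Minimal stopword list for the toy example; a real system would use an
# LLM or entailment model, not lexical overlap.
STOPWORDS = {"the", "a", "an", "is", "are", "was", "were",
             "of", "to", "in", "and", "that", "it"}


def content_words(text: str) -> set[str]:
    """Lowercased alphabetic words minus common stopwords."""
    return set(re.findall(r"[a-z']+", text.lower())) - STOPWORDS


def unsupported_sentences(source: str, answer: str,
                          threshold: float = 0.5) -> list[str]:
    """Flag answer sentences where fewer than `threshold` of the
    content words appear anywhere in the source document -- a crude
    stand-in for 'plausible yet unsupported' claims."""
    src = content_words(source)
    flagged = []
    for sentence in re.split(r"(?<=[.!?])\s+", answer.strip()):
        words = content_words(sentence)
        if not words:
            continue
        support = len(words & src) / len(words)
        if support < threshold:
            flagged.append(sentence)
    return flagged


source = "Azure AI customers can enable safety filters for any hosted model."
answer = ("Azure AI customers can enable safety filters. "
          "The filters were invented in 1987 by a research lab in Oslo.")
print(unsupported_sentences(source, answer))
```

Here the first sentence is fully supported by the source and passes, while the fabricated second sentence shares almost no content words with it and is flagged.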

