OpenAI is moving to publish the results of its internal AI model safety evaluations more regularly in what the outfit is pitching as an effort to increase transparency. On Wednesday, OpenAI launched the Safety evaluations hub, a web page showing how the company’s models score on various tests for harmful content generation, jailbreaks, and hallucinations. […]
