OpenAI Drastically Shortens AI Model Safety Testing Time
OpenAI has significantly cut the time and resources it spends on safety testing its most powerful artificial intelligence models, raising concerns that its technology is being rushed out without adequate safeguards. Whereas staff and third-party teams previously had months, they were recently given only a few days to "evaluate" OpenAI's latest large language models. According to eight people familiar with OpenAI's testing process, pressure to release new models quickly and maintain a competitive edge has made the company's testing less thorough, with insufficient time and resources devoted to identifying and mitigating risks. OpenAI has been pushing to release its new model, o3, as early as next week, giving some testers less than a week for safety checks; previously, the company allowed several months for safety testing.
—— Financial Times
via Fengxiangqi Reference Express - Telegram Channel