OpenAI Evaluates GPT-4o as Medium Risk: Insights from the System Card

OpenAI has introduced its GPT-4o System Card, a research document detailing the safety measures and risk evaluations undertaken before the release of its latest model. GPT-4o launched publicly in May 2024, and ahead of that debut it was subjected to rigorous testing to ensure its capabilities align with safety standards.

Key Risk Evaluations by External Experts

Before its public debut, OpenAI engaged an external group of red teamers—security experts responsible for identifying potential weaknesses in systems—to assess key risks associated with GPT-4o. This practice is standard within the tech industry to mitigate possible threats. The evaluation focused on potential issues such as:

  • Creation of unauthorized voice clones
  • Production of erotic and violent content
  • Reproduction of copyrighted audio segments

The findings placed GPT-4o's overall risk level at medium. The assessment covered four key risk categories: cybersecurity, biological threats, persuasion, and model autonomy. Cybersecurity, biological threats, and model autonomy were all rated low risk, while persuasion was rated medium; under OpenAI's framework, the overall rating reflects the highest-scoring category.
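To make that scoring logic concrete, here is a minimal Python sketch, assuming (as OpenAI's Preparedness Framework describes) that the overall score is simply the most severe per-category rating. The function name and scorecard dictionary are illustrative, not OpenAI code.

```python
# Sketch: derive an overall Preparedness-style rating from per-category ratings.
# Assumption: the overall score equals the most severe category rating.

RISK_LEVELS = ["low", "medium", "high", "critical"]  # ordered least to most severe

def overall_risk(scorecard: dict[str, str]) -> str:
    """Return the most severe rating in the scorecard."""
    return max(scorecard.values(), key=RISK_LEVELS.index)

# Ratings as reported for GPT-4o
gpt4o_scorecard = {
    "cybersecurity": "low",
    "biological threats": "low",
    "persuasion": "medium",
    "model autonomy": "low",
}

print(overall_risk(gpt4o_scorecard))  # -> medium
```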

Persuasion Risks Highlighted

The researchers noted that some writing samples produced by GPT-4o were more persuasive to readers than comparable human-written text, although the model's output was not more persuasive than human writing on the whole.

Insights from OpenAI's Team

Lindsay McCallum Rémy, a spokesperson for OpenAI, explained that the system card includes evaluations prepared by both internal teams and external testers, such as the Model Evaluation and Threat Research (METR) and Apollo Research groups. These teams contribute significantly to the overall safety assessments of AI systems.

Context of GPT-4o's Release

OpenAI's release of the GPT-4o System Card comes at a crucial juncture, amid growing criticism of the company's safety standards. Concerns have been voiced by various stakeholders, including employees and public officials. Recently, The Verge reported on an open letter from U.S. Senator Elizabeth Warren and Representative Lori Trahan urging OpenAI to clarify its procedures for whistleblowers and safety reviews. The letter points to safety concerns that have already surfaced publicly, including the temporary ousting of CEO Sam Altman in 2023 over board concerns and the departure of a safety executive who said that safety had been overshadowed by the push to ship new products.

Implications Ahead of the Presidential Election

Releasing a highly capable multimodal model such as GPT-4o shortly before the U.S. presidential election carries additional risks: the model could inadvertently spread misinformation or be hijacked by malicious actors. OpenAI says it is actively testing real-world scenarios to mitigate these risks and prevent misuse of its technology.

Calls for Greater Transparency

The tech community has also called on OpenAI to be more transparent about its model training data (the origins of its datasets, such as whether they include YouTube content) and its safety testing processes. In California, State Senator Scott Wiener is working on legislation that would regulate large language models, including holding companies legally accountable if their AI technology is used in harmful ways.

The Future of AI Safety

If enacted, the bill would require OpenAI's frontier models to pass thorough state-mandated risk assessments before becoming publicly available. Ultimately, the most significant takeaway from the GPT-4o System Card is that, despite the involvement of external experts, much of the evaluation still relies on OpenAI assessing its own models.

Conclusion

As OpenAI continues to advance its AI technology, the scrutiny of its safety practices underscores the need for careful oversight and transparent communication. Stakeholders and the public will be watching closely as AI safety standards develop, with the hope that organizations treat ethical responsibility as inseparable from technological advancement.
