
OpenAI introduces identity verification for access to advanced AI models
OpenAI is increasing security by requiring identity verification for access to advanced AI models. Only organizations with a valid government ID from supported countries will qualify for the new "Verified Organization" status. This move aims to prevent misuse of AI tools and limit access from state-backed cyber attackers, addressing growing global concerns.
- Tech
- Agencies and A News
- Published Date: 10:24 | 15 April 2025
- Modified Date: 10:24 | 15 April 2025
OpenAI is enhancing security measures for developers, preparing to make "Verified Organization" status mandatory. The new system will require identity verification for access to advanced AI models. According to OpenAI's website, this initiative aims to ensure the responsible use of AI tools.
Only organizations presenting a valid government ID from a supported country will be eligible for this status, and a single ID can be used to verify only one organization every 90 days. Not all organizations will automatically qualify; eligibility will be assessed against specific criteria.
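The 90-day ID-reuse rule works like a rate limit on verification attempts. As a purely hypothetical sketch (OpenAI has not published its implementation; all names here are invented), such a rule could be enforced by recording when each ID last completed a verification:

```python
from datetime import datetime, timedelta

# Hypothetical sketch, NOT OpenAI's actual system: enforce the rule that a
# government ID may verify only one organization every 90 days.
REUSE_WINDOW = timedelta(days=90)

class VerificationRegistry:
    def __init__(self):
        # Maps a hash of the ID document to the time of its last
        # successful verification (store a hash, never the raw ID).
        self._last_verified: dict[str, datetime] = {}

    def can_verify(self, id_hash: str, now: datetime) -> bool:
        """True if this ID has not verified any organization in the window."""
        last = self._last_verified.get(id_hash)
        return last is None or now - last >= REUSE_WINDOW

    def record_verification(self, id_hash: str, now: datetime) -> None:
        self._last_verified[id_hash] = now

registry = VerificationRegistry()
t0 = datetime(2025, 4, 15)
assert registry.can_verify("abc123", t0)                            # first use: allowed
registry.record_verification("abc123", t0)
assert not registry.can_verify("abc123", t0 + timedelta(days=30))   # too soon
assert registry.can_verify("abc123", t0 + timedelta(days=90))       # window elapsed
```

The sketch only illustrates the stated policy; a real system would also handle the separate eligibility criteria the article mentions.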
The primary goal of the new authentication system is to prevent the misuse of products. OpenAI stated that a small number of developers have deliberately abused the OpenAI API, violating usage policies. To prevent unsafe usage while continuing to offer advanced models, the company is implementing the authentication process.
As AI models grow more capable, the risks of disinformation, data integrity violations, and intellectual property infringement are growing. The new system is presented as a safeguard against these risks.
The authentication step also responds to rising global concerns about AI misuse. According to TechCrunch, the system was designed to limit access by state-backed cyber attackers. OpenAI had previously reported that groups linked to North Korea and Russia attempted to use the platform for malicious purposes.
Tech Times suggested that DeepSeek, a Chinese AI company, lacked the necessary security measures, possibly allowing large amounts of data to be copied through the platform.
Following these developments, OpenAI restricted access from China in mid-2024 as part of a strategy to reduce geopolitical risks and prevent foreign interference. A specific timeline for the new verification system has not been released, but the announcements indicate that verification will become mandatory for access to advanced models in the future.