OpenAI sets out framework for localising AI systems by country
Company establishes red-line principles prohibiting severe harms whilst allowing customisation for local users
OpenAI has outlined how it plans to localise its artificial intelligence systems to serve users and institutions in different countries.
The framework is grounded in its public Model Spec and a set of red-line principles that define which aspects of model behaviour can and cannot be changed for local deployments, the company said.
The red-line principles prohibit uses that enable severe harms, including acts of violence, weapons of mass destruction, terrorism, persecution or mass surveillance.
They also bar targeted or scaled exclusion, manipulation, or actions that undermine human autonomy or civic participation.
Human safety and human rights are paramount to OpenAI's mission, the company said.
When it operates first-party experiences such as ChatGPT, users should have access to trustworthy safety-critical information, and customisation, personalisation and localisation will not override the Model Spec, including its objective point-of-view principle.
Any content omitted for legal reasons or added for local relevance will be transparently indicated to users, the company said.
OpenAI said it is piloting a localised ChatGPT for students in Estonia as part of ChatGPT Edu, incorporating local curricula and pedagogical approaches.
The company is exploring additional pilot efforts with other countries.
OpenAI said it will continue sharing what it learns and evolve its approach transparently.
The Recap
- OpenAI has set out its approach to localising AI systems for different countries.
- The company is piloting a localised ChatGPT for students in Estonia.
- OpenAI will share what it learns and evolve its approach over time.