OpenAI Supports Requiring Licenses for Advanced AI Systems
OpenAI has drafted a memo expressing support for requiring government licenses to develop advanced AI systems, and signaling a willingness to pull back the curtain on the data used to train its image generators.
The company laid out a series of AI policy commitments ahead of a meeting between White House officials and tech executives, including Chief Executive Officer Sam Altman.
“We commit to working with the US government and policy makers around the world to support development of licensing requirements for future generations of the most highly capable foundation models.”
This sets the stage for a possible clash with startups and open-source developers. It is also not the first time the company has raised the idea: back in May, Altman supported the creation of an agency that could issue licenses for AI products and revoke them from rule violators.
The policy comes just as Google and OpenAI are expected to commit to safeguards for developing the technology.
The company said the ideas laid out in the memo are separate from those expected to be announced soon by the White House.
Anna Makanju, OpenAI’s Vice President of Global Affairs, said the company isn’t “pushing” for licenses, but believes permitting is a “realistic” way to track emerging systems.
“It’s important for governments to be aware if super powerful systems that might have potential harmful impacts are coming into existence,” she said, adding that there are “very few ways that you can ensure that governments are aware of these systems if someone is not willing to self-report the way we do.”
The company supports licensing regimes only for models more powerful than ChatGPT, ensuring smaller startups are free from regulatory burden:
“We don’t want to stifle the ecosystem.”
The company also signaled a willingness to be open about the data used to train its image generators, and committed to “incorporating a provenance approach” by the end of the year.
Policymakers have flagged data provenance as critical to keeping AI tools from spreading misinformation and bias.
OpenAI also disclosed it is conducting a survey on watermarking, detection, and disclosure in AI-made content, with plans to publish the results.