- OpenAI updated its usage policies on January 10.
- As part of the update, it eased restrictions on military use of its technology.
OpenAI quietly eased restrictions on military applications of its technology earlier this week.
In an unannounced update to its usage policies on January 10, OpenAI lifted a broad ban on using its technology for “military and warfare.” The new language still prohibits OpenAI’s services from being used for more specific purposes like developing weapons, injuring others, or destroying property, a spokesperson for OpenAI told Business Insider.
The spokesperson added that the company "aimed to create a set of universal principles that are both easy to remember and apply, especially as our tools are now globally used by everyday users who can now also build GPTs." On January 10, OpenAI rolled out its GPT Store, a marketplace where users can share and browse customized versions of ChatGPT known as "GPTs."
OpenAI’s new usage policy now includes principles like “Don’t harm others,” which are “broad yet easily grasped and relevant in numerous contexts,” as well as bans on specific use cases like developing or using weapons, OpenAI’s spokesperson said.
Some AI experts worry that OpenAI’s policy rewrite is too generalized, especially as AI technology is already being used in the conflict in Gaza. The Israeli military has said it used AI to pinpoint targets to bomb inside the Palestinian territory.
“The language that is in the policy remains vague and raises questions about how OpenAI intends to approach enforcement,” Sarah Myers West, managing director of the AI Now Institute and a former AI policy analyst at the Federal Trade Commission, told The Intercept.
Though OpenAI did not offer many specifics about its plans, the language changes could open the door to future contracts with the military. A spokesperson for OpenAI told BI that there are national security use cases that align with the company’s mission, which is in part what led to the changes. OpenAI is already working with the Defense Advanced Research Projects Agency (DARPA), for instance, “to spur the creation of new cybersecurity tools to secure open source software that critical infrastructure and industry depend on.”