Seven leading companies in artificial intelligence have committed to managing risks posed by the tech, the White House has said.
This will include testing the security of AI, and making the results of those tests public.
It follows a number of warnings about the capabilities of the technology.
The pace at which the companies have been developing their tools has prompted fears over the spread of disinformation, especially in the run-up to the 2024 US presidential election.
"We must be clear-eyed and vigilant about the threats emerging technologies can pose - don't have to but can pose - to our democracy and our values," President Joe Biden said during remarks on Friday.
On Wednesday, Meta, Facebook's parent company, announced its own AI tool called Llama 2.
As part of the agreement signed on Friday, the companies agreed to:
- Security testing of their AI systems by internal and external experts before release.
- Ensuring that people are able to spot AI-generated content by implementing watermarks.
- Publicly reporting AI capabilities and limitations on a regular basis.
- Researching risks such as bias, discrimination and the invasion of privacy.