ChatGPT maker OpenAI is discussing whether its users should be allowed to create pornographic material using its artificial intelligence (AI) tools.
The company had previously stated that its tools “should not serve content that is not safe for work (NSFW)”, meaning content that would not be appropriate in a professional conversation or setting.
OpenAI defines this content as including “erotica, extreme gore, slurs, and unsolicited profanity”.
However, OpenAI is now revisiting those rules, saying it is considering allowing users to generate this kind of content.
“We believe developers and users should have the flexibility to use our services as they see fit, so long as they comply with our usage policies,” OpenAI said.
“We’re exploring whether we can responsibly provide the ability to generate NSFW content in age-appropriate contexts through the API and ChatGPT. We look forward to better understanding user and societal expectations of model behaviour in this area.”
As reported by The Guardian, OpenAI employee Joanne Jang told news publication NPR that the company wanted to explore whether its tools should be used for pornographic text and imagery but confirmed that deepfake content would always be prohibited.
“We want to ensure that people have maximum control to the extent that it doesn’t violate the law or other people’s rights, but enabling deepfakes is out of the question, period,” she said.
“This doesn’t mean that we are trying now to create AI porn,” she added, saying that whether the move amounts to creating AI porn depends on one’s definition.
OpenAI’s change of stance is a significant one, particularly as AI regulation is still being developed and there is an epidemic of AI-created porn online.
X, formerly Twitter, temporarily had to ban searches for Taylor Swift after a number of pornographic deepfake images of her appeared on the platform, while the UK Labour Party is considering a ban on nudification tools that turn ordinary photos of people into nude images.
Most concerning is the technology’s potential to create child sexual abuse material: according to the Internet Watch Foundation, paedophiles are already using AI to create such images of children.
Beeban Kidron, a campaigner for child online safety and a member of the UK House of Lords, said that OpenAI was going against its own mission statement with the new proposal.
“It is endlessly disappointing that the tech sector entertains themselves with commercial issues, such as AI erotica, rather than taking practical steps and corporate responsibility for the harms they create,” she said, as reported by The Guardian.