
Stable Diffusion update makes it harder to copy artists and generate porn, and users are angry


Users of the AI image generation tool Stable Diffusion are angry about a software update that hampers its ability to generate NSFW output and images in the style of specific artists.

Stability AI, the company that funds and distributes the software, announced Stable Diffusion Version 2 early this morning European time. The update re-engineers key components of the model and improves certain features like upscaling (the ability to increase an image's resolution) and inpainting (context-aware editing). But the changes also make it harder for Stable Diffusion to produce certain kinds of images that have attracted both controversy and criticism: nude and pornographic output, photorealistic pictures of celebrities, and images that mimic the artwork of specific artists.
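For readers who want to try the new release programmatically, a minimal sketch using the Hugging Face diffusers library might look like the following; the model ID and pipeline class are assumptions based on diffusers' public API, not anything Stability AI specifies in its announcement.

```python
# A minimal sketch of running Stable Diffusion Version 2 via Hugging Face's
# diffusers library. The model ID and API below are assumptions based on
# diffusers' public interface and may differ from Stability AI's own tooling.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2",  # assumed Hugging Face model ID
    torch_dtype=torch.float16,
).to("cuda")  # requires a CUDA-capable GPU

# Prompts that leaned on artist names (e.g., "by Greg Rutkowski") are
# reported to behave differently under Version 2.
image = pipe("a castle at sunset, highly detailed digital painting").images[0]
image.save("castle.png")
```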

“They nerfed the model”

“They nerfed the model,” one user commented on the Stable Diffusion subreddit. “It’s kind of an unpleasant surprise,” said another on the software’s official Discord server.

Users note that asking Version 2 of Stable Diffusion to generate images in the style of Greg Rutkowski (a digital artist whose name has become literal shorthand for producing high-quality images) no longer creates artwork that closely resembles his own. (Compare these two images, for example.) “What did you do to Greg😔?” commented one user on Discord.

Changes to Stable Diffusion are notable because the software is hugely influential and helps set norms in the fast-moving generative AI scene. Unlike rival models such as OpenAI’s DALL-E, Stable Diffusion is open source. This allows the community to improve the tool quickly and lets developers integrate it into their products for free. But it also means Stable Diffusion has fewer constraints on how it can be used, and it has attracted significant criticism as a result. In particular, many artists, like Rutkowski, are upset that Stable Diffusion and other image generation models were trained on their artwork without their consent and can now reproduce their styles. Whether this kind of AI-powered copying is legal is something of an open question. Experts say training AI models on copyrighted data is likely legal, but that certain use cases could be challenged in court.

A comparison of Stable Diffusion’s ability to generate images resembling a specific artist’s work.
Image: lkewis via Reddit

Stable Diffusion users have speculated that Stability AI made the changes to the model to mitigate these potential legal issues. However, when The Verge asked Stability AI founder Emad Mostaque in a private chat whether this was the case, Mostaque did not answer directly. He did confirm, though, that Stability AI has not removed artists’ images from the training data (as many users have speculated). Instead, the model’s reduced ability to copy artists is the result of changes to how the software encodes and retrieves data.

“There’s no artist-specific filtering going on here,” Mostaque told The Verge. (He also detailed the technical underpinnings of these changes in messages posted on Discord.)
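Mostaque’s explanation points at the encoding step: the model turns a prompt into an embedding vector, and a retrained text encoder maps the same artist’s name to a different point in embedding space. As a rough, hedged illustration (not Stability AI’s actual code), the sketch below encodes a prompt with the open_clip library; the model name and checkpoint are assumptions chosen for illustration.

```python
# A hedged illustration of how a CLIP-style text encoder turns a prompt into
# the embedding that conditions image generation. The model and checkpoint
# names are assumptions for illustration, not Stability AI's released config.
import torch
import open_clip

model, _, _ = open_clip.create_model_and_transforms(
    "ViT-H-14", pretrained="laion2b_s32b_b79k"
)
tokenizer = open_clip.get_tokenizer("ViT-H-14")

tokens = tokenizer(["a fantasy castle in the style of greg rutkowski"])
with torch.no_grad():
    text_features = model.encode_text(tokens)  # shape: [1, 1024]

# A differently trained encoder maps the same prompt to a different vector,
# which is one way style prompts can stop "working" without any filtering.
print(text_features.shape)
```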

What has been removed from Stable Diffusion’s training data, however, is nude and pornographic images. AI image generators are already used to generate NSFW output, including both photorealistic and anime-style pictures. But these models can also be used to generate NSFW imagery resembling specific individuals (known as non-consensual pornography) and images of child abuse.

Discussing the changes to Stable Diffusion Version 2 on the software’s official Discord, Mostaque noted that this latter use case is the reason for excluding NSFW content. “You can’t include kids and NSFW in an open model,” said Mostaque (because the two types of images could be combined to create child sexual abuse material).

One user on the Stable Diffusion subreddit described the removal of NSFW content as “censorship” and “against the ethos of the open source community,” arguing that the choice shouldn’t be made “in a limited/censored model.” Others, though, noted that Stable Diffusion’s open source nature means nude training data can easily be added back by the community, and that the new software doesn’t affect earlier versions: “Never mind if V2.0 is missing artists or NSFW; you’ll soon be able to generate your favorite celebs naked anyway, and in any case, you can already do it.”

While the changes to Stable Diffusion Version 2 have annoyed some users, many others praised the potential for deeper functionality, such as the software’s new ability to produce content that matches the depth of an existing image. Others said the changes made it harder to quickly produce high-quality images, but that the community would likely add this functionality back in future versions. As one user summarized the changes: “2.0, in my experience so far, excels at interpreting prompts and creating coherent photographic images. But it can’t make Rutkowski boobs.”
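For a sense of what that depth feature looks like in practice, here is a hedged sketch using diffusers’ depth-guided image-to-image pipeline; the pipeline class, model ID, and file names are assumptions about third-party tooling, not Stability AI’s own instructions.

```python
# A hedged sketch of the depth-matching feature via diffusers' depth-guided
# image-to-image pipeline. Pipeline class, model ID, and file names are
# assumptions about third-party tooling, not Stability AI's instructions.
import torch
from diffusers import StableDiffusionDepth2ImgPipeline
from diffusers.utils import load_image

pipe = StableDiffusionDepth2ImgPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-depth",  # assumed model ID
    torch_dtype=torch.float16,
).to("cuda")

init_image = load_image("photo.png")  # hypothetical local input image

# The pipeline estimates a depth map from the input and uses it so the new
# image keeps the original's spatial layout while changing its content.
image = pipe(
    prompt="a watercolor painting of the same scene",
    image=init_image,
    strength=0.7,
).images[0]
image.save("watercolor.png")
```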

Mostaque himself has compared the new model to a pizza base to which anyone can add the ingredients (that is, training data) they prefer. “A good model should be usable by everyone; if you want to add something, add it,” he said on Discord.

Mostaque also said future versions of Stable Diffusion would use training datasets that allow artists to opt in or out. “We are trying to improve the base model and incorporate community feedback while being very transparent,” he told The Verge.

A public demo of Stable Diffusion Version 2 can be accessed here (though, due to high user demand, the model may be inaccessible or slow).




