Adobe Firefly is doing generative AI differently and it may even be good for you
Friday, December 20, 2024


In generative AI imagery’s short life span, it has traveled a parabolic trajectory from excitement to pillory and back. With each update to leaders like Midjourney and Stable Diffusion, excitement grows, but the unease, especially in the creative community, expands as well. These concerns go back to the source, and I mean the literal source: all the imagery training data these systems used to build their models.

The open internet provided a vast source of original creative data that OpenAI, for instance, used to train DALL-E. Artists discovered their work was being used without their consent or control. Anyone who’s used these systems knows that they do not duplicate art.

Instead, they use their training to create something new, though it can ape anyone’s style and, in some cases, regurgitate, at least in part, the content it was trained on. Some artists have tried, with little success, to sue companies like Stability AI, the maker of Stable Diffusion, on copyright claims. However, the idea that generative AI imagery’s success always requires significant concessions from the artists’ community is not necessarily accurate.

Just ask Adobe. Recently, I sat down at CES 2024 with Adobe VP of Generative AI Alexandru Costin. A 20-year Adobe veteran, Costin has been with the AI group for a year, but it’s been a whirlwind of activity that’s impacting not only Adobe Creative Cloud’s millions of users but the future of Adobe itself.

“I do think a big chunk of our software stack will be replaced by models; we’re transforming from software company to AI company,” said Costin over breakfast. Costin acknowledged that Adobe has been in the AI game for years, adding Content-Aware Fill to Photoshop in 2016, but its approach has generally been rooted not in creating something out of nothing but in manipulating pre-existing imagery, creating a new verb in the process (Was this Photoshopped?). “Our customers, they don’t want to generate images. They want to edit the image,” said Costin.

‘Of course, we’re working on video’

Adobe’s plan as it entered the generative image space was to find ways to use these powerful tools and models to enhance existing imagery while also protecting creators and their work. Some consider Adobe late to the generative image game.

Its breakthrough platform, Firefly, did follow DALL-E, Stable Diffusion, and others into the space. Adobe, though, had enough experience with models, training, and output to know what it didn’t want to do. Costin recounted VoCo, an early Adobe AI project that let you take an audio recording and then, using text prompts, refashion it so the speaker said something different.

While the reception at the Adobe Max conference that year appeared positive, those online quickly raised concerns about VoCo being used to create audio deepfakes. The technology was never released, but it did lead to Adobe creating its own Content Authenticity Initiative. “[We were] reacting in 2018, but since then we’re trying to be ahead of the curve and be a thought leader in how AI is done,” said Costin.

When Adobe decided to approach generative AI imagery, it wanted to build a tool that Photoshop creators could use to output images usable anywhere, including in commercial content. Doing so, though, meant setting a high bar for training material: no copyrighted, trademarked, or branded material. Instead of scraping the internet for visual data, Adobe would look to a different, local source.

In addition to a wide collection of Adobe Creative Cloud apps that includes Photoshop, Premiere, After Effects, Illustrator, InDesign, and more, Adobe has its own vast stock image library: Adobe Stock. Costin said it has hundreds of thousands of contributors and countless assets across photos, illustrations, and 3D imagery. It’s free of commercial, branded, and trademarked imagery, and is also moderated for hate and adult content.

“We’re using hundreds of millions of assets, all trained and moderated to have no IP,” noted Costin. Adobe used that data to train Firefly and then programmed the generative AI system so that it cannot render trademarked, recognizable characters (no putting Bart Simpson in a compromising situation).

Costin noted that there is a gray area as the copyright on some characters, like one version of Mickey Mouse, expires. There could be a case down the road where renders of that version of the iconic mouse are allowed. There’s another wrinkle that sets Adobe Firefly apart from Stable Diffusion and others: Adobe is paying its creators for the use of their work to train its AI.

What they’re paid is based, in part, on how much of their creative output is used in the training. Oddly, Adobe is also allowing creators to add generative imagery to Adobe Stock, which may mean the system is feeding itself. Regardless, Costin sees it all as a win-win.

“Generative AI enables amplified creativity,” he told me. Costin, though, is no Pollyanna. He told me the new models are more powerful and “require more governance.”

His team has trained new models to be safer for the education market, weeding out the ability to create NSFW content, while also providing a window for higher education, where adult artists might need more creative options. Adobe’s generative tools also can’t afford to take a one-size-fits-all approach to generative AI. Costin explained to me how Firefly, for instance, handles location.

The models consider where the requester lives and look at “the skin color distribution of people living in your country” to ensure that’s reflected in the output. They do similar work with gender and age distribution. While it’s hard to know if those efforts entirely weed out bias, it’s clear Adobe is putting in effort to ensure its AI reflects its creators’ communities.

‘It is impossible to trust what you see’

In Costin’s decades of software development experience, this epoch stands out. “We’ve seen incredible progress in short periods of time,” said Costin. “This is the fastest innovation cycle we’ve all experienced. It requires new ways of building software.” Adobe appears to be adjusting well on that front, but it’s hard to match that speed with the proper governance and public perception. “It is impossible to trust what you see,” warned Costin. “This will change how people perceive the internet.” His comments echoed those of his colleague, Adobe Chief Strategy Officer Scott Belsky, who recently noted, “We’re entering an era where we can no longer believe our eyes. Instead of ‘trust but verify,’ it’s going to become ‘verify then trust.’”

Perhaps, though, the road will be a bit easier for Adobe, which, under Costin’s guidance, is focusing less on bespoke image creation and more on its fundamentals of image alteration and enhancement. I’ve rarely used Adobe Firefly to create brand-new images, but I often apply Generative Fill in Photoshop to match the image aspect ratio I need by extending an empty desktop without touching the photo’s subject (I did once use it to expand and alter meme images). Adobe’s AI future looks very much like its recent past: adding more generative AI to key features and apps that speak directly to users’ primary tasks and needs.

I asked Costin to outline where Adobe is going in AI. He didn’t announce any new products but acknowledged a few things. On the Adobe Premiere front: “Of course, we’re working on video.” Costin and his team are talking to the Premiere and After Effects teams, asking them what they need.

Whatever Adobe does there will follow the same AI playbook as Photoshop. I asked about batch processing in Photoshop, and Costin told me, “We’re thinking about it… nothing to announce, but it’s a key workflow.”

Despite the breakneck development pace and challenges, Costin remains positive about the trajectory of generative AI writ large. “I’m an optimist,” he smiled.


From: techradar
URL: https://www.techradar.com/computing/artificial-intelligence/adobe-firefly-is-doing-generative-ai-differently-and-it-may-even-be-good-for-you

DTN
Dubai Tech News is the leading source of information for people working in the technology industry. We provide daily news coverage, keeping you abreast of the latest trends and developments in this exciting and rapidly growing sector.
