Adobe is making a significant push into artificial intelligence, using its annual MAX conference in Los Angeles to unveil a slew of new features for its creative apps. The bulk of the announcements revolve around Firefly, Adobe's flagship app for creating images and videos through generative AI.
Custom models are now available in Firefly, enabling customers to train their own AI models to create specific characters and tones. The feature has been available to businesses and will roll out to individual customers at the end of the year. Training a custom model requires six to 12 images, with slightly more needed for tone training.
The Firefly Image Model 5 is also launching today, featuring a native 4-megapixel resolution and support for prompt-based editing. The new feature allows users to upload an image and have Firefly identify different elements, which can then be moved, resized, or replaced with generative features. In a demo, Adobe showed how the model worked by cutting out and moving chopsticks in a bowl of ramen.
Two new generative AI features are also being introduced in Firefly: Generate Soundtrack and Generate Speech. The former scans an uploaded video and suggests a prompt for a soundtrack, while the latter uses text-to-speech capabilities to generate audio based on written prompts. Both features leverage Adobe's own models, with 15 languages supported by Generate Speech.
The Firefly video editor is also getting an upgrade: users will be able to access a full, multi-track video editor in their browser with Firefly built in. The editor will combine generated and captured content across video, audio, and images.
In addition to these updates, Adobe has announced the integration of its AI assistant into Photoshop and Express. The assistant will provide guidance on specific tasks and offer suggestions for tools to use, while still giving users control over the final output.
Project Moonlight is another significant announcement: Adobe is working to integrate its creative features directly into ChatGPT using OpenAI's technology. The system aims to carry context across Adobe's applications, allowing users to generate content that fits their own style and tone.
The company has not announced an official release date for Project Moonlight, but a private beta is launching at MAX for attendees. A larger public beta is expected soon.
Overall, Adobe is making significant strides in the realm of artificial intelligence, with its creative apps becoming increasingly powerful tools for generating content.