Adobe rolled out Firefly Image 4 alongside a Creative Agent-powered Firefly AI Assistant on April 15 and pushed the assistant to public beta on April 27. The release extends Adobe's strategy of positioning itself as the central platform hosting both its own foundation models and partner models from competitors. The Firefly platform now offers more than 30 image, video, and audio models in a single creative workflow, with native integrations across Photoshop, Illustrator, Premiere, After Effects, and Express. Pricing is bundled into existing Creative Cloud plans for individuals and into the Firefly Enterprise plan for organizations.
Image 4 is the headline upgrade and the model with which Adobe needed to close ground on Midjourney and Black Forest Labs' Flux. It produces measurably better human anatomy, hands, and faces than Image 3, with internal Adobe benchmarks showing a 47 percent improvement in user preference across paired-comparison tests. Animal rendering improved 38 percent, and lighting consistency across multi-character scenes improved 41 percent. The control surface adds camera angle, focal length, and zoom as first-class parameters rather than prompt-engineering tricks. Output speed is 1.6 seconds per image at standard resolution and 4.2 seconds at high resolution on the platform's default GPU tier.
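The difference between first-class controls and prompt tricks can be illustrated with a hypothetical request shape. This is not Adobe's actual Firefly API; the function name, field names, and model identifier are all assumptions made for illustration. The point is that camera settings travel as structured fields the model is trained to honor, rather than free-text phrases buried in the prompt.

```python
# Hypothetical request builder (NOT Adobe's actual API): camera settings
# are structured parameters, not prompt-string suffixes like "shot at 85mm".
def build_generation_request(prompt, camera_angle=None, focal_length_mm=None, zoom=None):
    """Assemble a request dict with camera controls as first-class fields."""
    request = {"prompt": prompt, "model": "firefly-image-4"}  # model id assumed
    camera = {}
    if camera_angle is not None:
        camera["angle"] = camera_angle            # e.g. "low", "eye-level", "overhead"
    if focal_length_mm is not None:
        camera["focal_length_mm"] = focal_length_mm
    if zoom is not None:
        camera["zoom"] = zoom                     # e.g. 1.0 = no zoom
    if camera:
        request["camera"] = camera
    return request

req = build_generation_request(
    "product shot of a ceramic mug on a walnut table",
    camera_angle="low",
    focal_length_mm=85,
)
print(req["camera"])
```

Because the parameters are structured, a UI slider or an automation script can set them deterministically, which is what makes them usable by an agent like the Firefly AI Assistant.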
The Firefly AI Assistant is the more strategically significant release. Adobe built the Assistant as a creative agent that can take a natural-language brief, decompose it across the right tools and models, and produce a complete deliverable. A user can ask the Assistant to design a three-frame social campaign for a product launch, and the Assistant will generate options, propose a layout, generate the imagery, write copy, place type, and export to the right platform specs. Early use-case data shared at the public beta launch shows Assistant-powered campaign builds running 73 percent faster than equivalent manual workflows. The pre-release version was tested with 8,400 paying customers on the Creative Cloud Pro tier through Q1.
The partner model strategy is what separates Adobe from the model labs. Firefly now hosts Kling 3.0 and Kling 3.0 Omni from Kuaishou, Google's Nano Banana 2 and Veo 3.1, Runway's Gen-4.5, and ElevenLabs Multilingual v2. The user picks the model best suited to the task, and Adobe handles the orchestration, IP indemnification, and rendering. The competing strategy from OpenAI, Anthropic, and Midjourney is to keep users inside workflows built around their own models exclusively. Adobe is betting that creative professionals will pay for the platform that gives them every model rather than the one that gives them the best model.
The video editing side of the release added studio-quality sound generation, advanced color adjustments, and Adobe Stock integration for B-roll. Premiere users can generate music, sound effects, and ambient audio directly from text prompts, with multitrack output that integrates into the timeline. The color tools added scene-aware adjustments that automatically match colorimetry across cuts. The Stock integration lets Premiere users generate clips that match the look of footage they have already shot, which solves the persistent problem of mismatched B-roll in documentary, marketing, and corporate video work.
Pricing is the question every Adobe user is now asking. The base Firefly plan included with Creative Cloud All Apps gives 1,000 generative credits per month for images and 100 per month for video. The Firefly Standard plan at $9.99 monthly offers 2,000 image credits and 200 video credits. Firefly Pro at $29.99 monthly offers 7,000 image credits and 500 video credits. The Enterprise plan starts at $30 per user per month and offers unlimited generations within fair-use limits, plus IP indemnification and admin controls. Premium models like Kling and Veo cost additional credits per generation, with rates published per model.
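The tier choice reduces to simple arithmetic against projected monthly usage. The sketch below uses the plan figures quoted above and simplifies in two ways that are assumptions, not Adobe's billing rules: it treats one generation as one credit (premium partner models cost more), and it treats a plan as viable only if both allowances cover usage outright, ignoring overage purchases.

```python
# Plan figures from the article: (name, monthly USD on top of Creative Cloud,
# image credits per month, video credits per month).
PLANS = [
    ("Included with All Apps", 0.00, 1000, 100),
    ("Firefly Standard", 9.99, 2000, 200),
    ("Firefly Pro", 29.99, 7000, 500),
]

def cheapest_plan(images_per_month, videos_per_month):
    """Return the lowest-cost plan whose allowances cover projected usage,
    or None if no listed plan fits (e.g. Enterprise territory)."""
    fitting = [
        plan for plan in PLANS
        if plan[2] >= images_per_month and plan[3] >= videos_per_month
    ]
    return min(fitting, key=lambda plan: plan[1]) if fitting else None

print(cheapest_plan(1500, 150)[0])  # -> Firefly Standard
print(cheapest_plan(8000, 100))     # -> None (beyond the listed tiers)
```

A user generating 1,500 images and 150 videos a month outgrows the included allowance but fits Standard; sustained usage beyond Pro's allowances is where the $30-per-user Enterprise plan with fair-use limits starts to make sense.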
For working creatives, the practical implication is that Adobe just made the calculus around standalone AI tool subscriptions more complicated. A photographer who was paying $50 monthly across Midjourney, Topaz, and a stock service can now get most of that workflow inside Creative Cloud at the existing $59.99 monthly price. A small marketing agency that was running ChatGPT, Midjourney, Runway, and ElevenLabs separately can now consolidate inside Firefly with central billing and IP indemnification at a lower aggregate cost. The model lab subscriptions still have advantages for power users who push specific model capabilities to their limits, but the median user now has a real argument for consolidation.
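The photographer case above is worth making concrete. This back-of-the-envelope calculation assumes the standalone tools are fully replaced and that the Creative Cloud subscription is already being paid either way, neither of which will hold for every workflow.

```python
# Consolidation math for the photographer scenario in the article.
standalone_stack_usd = 50.00   # Midjourney + Topaz + stock service, per month
creative_cloud_usd = 59.99     # already paid regardless, so treated as sunk cost

# If Creative Cloud is a sunk cost, dropping the standalone stack saves
# its full price every month.
monthly_savings = standalone_stack_usd
annual_savings = 12 * monthly_savings
print(annual_savings)  # -> 600.0
```

The calculation cuts the other way for someone not already on Creative Cloud: $59.99 monthly for the bundle versus $50 for the standalone stack is a net increase, which is exactly why the argument is strongest for existing Adobe subscribers.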
The IP indemnification piece is the enterprise hook that no model lab has matched at scale. Adobe trained Image 4 on licensed Adobe Stock content and explicitly indemnifies enterprise customers against IP infringement claims arising from outputs generated through the platform. The Getty Images suit against Stability AI, the New York Times suit against OpenAI, and the music industry suits against Suno and Udio have made enterprise legal teams conservative about deploying generative AI at scale. Adobe's clean training-data story and contractual indemnification have been the closing argument in major enterprise wins over the last 18 months, including JPMorgan, IBM, and Estée Lauder, per the company's Q1 earnings call.