At the annual Adobe MAX conference, Adobe announced chat-based AI assistants for Photoshop, Express, and Firefly, along with significantly expanded support for third-party AI models from Google, OpenAI, Runway, and others, marking a new stage of openness and intelligence in content creation.

Adobe is embedding conversational AI capabilities into its core applications. The new Photoshop AI assistant lets users delegate creative tasks through chat and receive step-by-step guidance. Similar features are under development for Express and Firefly. Additionally, Project Moonlight, currently in preview, will connect Adobe apps with users' social channels, aiming to simplify content management.

For the first time, Adobe is allowing users to directly access a diverse range of external AI models within Photoshop and Firefly, opening up its ecosystem. Partners include Google (Gemini 2.5 Flash Image), Black Forest Labs (FLUX.1 Kontext and FLUX 1.1), OpenAI, Runway, Luma AI, Moonvalley, Pika, Ideogram, Topaz Labs (image upscaling), and ElevenLabs (Multilingual v2 for voiceovers). All models are available through a unified interface and pricing plan. Starting December 1, Creative Cloud Pro and Firefly subscribers get unlimited image generation with any of these models. In the coming months, Adobe plans to add more third-party models for animation, 3D, and audio design, and to let users compare outputs, set preferences, and access detailed usage metrics.

Firefly has undergone a comprehensive upgrade. The new Firefly Image Model 5 (public beta) generates native 4-megapixel images without upscaling, with rich, realistic detail. Prompt-based editing supports modifications described in natural language, while the "Layered Editing" feature enables context-aware image compositing. On the audio side, Firefly adds "Generate Soundtrack" and "Text-to-Speech" (both in public beta) for creating licensed background music and AI voiceovers. The Firefly Video Editor (private beta) offers web-based multi-track timeline editing with various presets, and Firefly Creative Production (private beta) supports batch editing of thousands of images, including background replacement, color adjustment, and cropping.

Adobe's flagship applications have also gained AI features. Photoshop now offers "Generative Fill" powered by partner models, "Generative Upscale" for enhancing images up to 4K, and a "Harmony" feature that automatically matches inserted objects to the background's lighting, color, and tone. In Premiere Pro, "AI Object Mask" (beta) automatically identifies and isolates people and objects in video frames, while tools like "Quick Vector Mask" speed up masking and tracking. In Lightroom, "Assisted Culling" (beta) accelerates finding the best shots in large photo collections.

For enterprise customers, Adobe launched "Firefly Custom Models" (private beta), which lets users upload reference images to train their own AI models and generate assets in their personal style. Firefly Foundry serves enterprises, helping them build custom models trained on their own intellectual property, with support for images, video, and audio. Adobe is also expanding GenStudio, its end-to-end content supply chain platform, with new integrations for Amazon Ads, Innovid, Google Marketing Platform, LinkedIn, and TikTok.

On the partnership front, Adobe announced a collaboration with YouTube that brings a "Create for YouTube" space to mobile creators. Integrated into the free Premiere mobile app, it provides exclusive effects, transitions, and templates, and will soon support editing and publishing directly to YouTube Shorts.