AUTOMATIC1111 Stable Diffusion Web UI (commonly called A1111, SD WebUI, or simply Automatic1111) is an open-source web interface for Stable Diffusion, implemented using the Gradio library. Created by a pseudonymous developer known as AUTOMATIC1111, it was first released on GitHub on August 22, 2022, roughly one month after Stability AI published the original Stable Diffusion model. At a time when Stable Diffusion could only be run through command-line scripts, A1111 provided an accessible browser-based graphical interface that quickly made it the most widely used tool for running diffusion models locally. As of early 2026, the repository has accumulated over 162,000 GitHub stars and more than 30,000 forks, making it one of the most-starred open-source AI projects in history. The project is licensed under the GNU Affero General Public License v3.0 (AGPL-3.0) and has received contributions from over 580 developers.
Stable Diffusion was publicly released on August 22, 2022, by Stability AI in collaboration with researchers from the CompVis Lab at LMU Munich and Runway. The release of this powerful diffusion model as open-source software was unprecedented for a model of its capability, and within days the open-source community had it running on Windows laptops, M1 Macs, and custom home servers. However, the initial release required command-line interaction, which limited accessibility for most users.
A pseudonymous developer operating under the handle "AUTOMATIC1111" published a web-based interface for the model on GitHub on the same day, August 22, 2022. The earliest archived snapshot of the repository on the Internet Archive dates to September 12, 2022, by which point the project was already gaining significant traction. AUTOMATIC1111 built on earlier community efforts to wrap Stable Diffusion in a user-friendly interface, using Gradio Blocks to construct a tabbed browser interface covering text-to-image generation, image-to-image processing, and a growing collection of post-processing tools.
The interface exposed every tunable parameter of the Stable Diffusion pipeline through form fields, sliders, and dropdown menus, making the technology approachable for artists, designers, and hobbyists with no programming experience. It supported Stable Diffusion 1.4 and 1.5 checkpoints and offered features that no other interface provided at the time, including prompt attention syntax, negative prompts, inpainting, outpainting, and textual inversion training. By the end of 2022, the project had already attracted tens of thousands of GitHub stars and a large community of contributors.
The timing of the release was critical to the project's success. When Stable Diffusion launched, no polished graphical interface existed for running the model locally, and AUTOMATIC1111's WebUI filled that gap at exactly the right moment. Its first-mover advantage, rapidly expanding feature set, and lack of any comparable alternative drove explosive adoption.
By early 2023, the repository had become one of the fastest-growing projects on GitHub. The community organized around Reddit, Discord servers, and dedicated tutorial sites, creating a rich ecosystem of guides, custom models, and shared workflows. Model-sharing platforms like Civitai and Hugging Face standardized their model pages around A1111 usage instructions, further cementing its position as the default Stable Diffusion interface.
On January 5, 2023, AUTOMATIC1111's GitHub account was briefly suspended for alleged Terms of Service violations. The suspension also temporarily took down the entire repository, causing significant alarm across the Stable Diffusion community. Various theories circulated about the cause, including speculation about mass reporting by opponents of AI-generated art and possible conflicts with Microsoft (GitHub's parent company). GitHub discussions suggested the suspension may have been related to content in the repository's wiki. The account was restored shortly afterward without a detailed public explanation. The incident highlighted how dependent the community had become on a single developer's repository and spurred discussions about mirroring the project to prevent future disruptions.
For the first several months of its existence, the project did not use formal version numbers. Users tracked changes through Git commits rather than tagged releases. The project adopted formal semantic versioning in early 2023 with version 1.0.0-pre. Throughout 2023, a steady stream of releases introduced support for new model architectures:
| Version | Release Date | Key Highlights |
|---|---|---|
| v1.0.0-pre | January 24, 2023 | First tagged release; extension system formalized |
| v1.1.0 | May 1, 2023 | PyTorch 2.0 support, bringing performance improvements |
| v1.2.0 | May 13, 2023 | Stability improvements and bug fixes |
| v1.3.0 | May 27, 2023 | Additional refinements |
| v1.4.0 | June 27, 2023 | Further features and stability improvements |
| v1.5.0 | July 25, 2023 | Initial SDXL support, allowing generation at 1024x1024 resolution |
| v1.5.1 | July 27, 2023 | SDXL bug fixes |
| v1.6.0 | August 31, 2023 | Full native SDXL support with the refiner pipeline; new samplers (Restart, DPM++ 2M SDE Exponential); multi-model memory caching; style editor |
| v1.7.0 | December 2023 | Settings redesign with search; HyperTile optimization; Intel Arc GPU support via IPEX; LyCORIS GLoRA and OFT network support |
Development continued into 2024 with several significant updates:
| Version | Release Date | Key Highlights |
|---|---|---|
| v1.8.0 | March 2, 2024 | Updated to PyTorch 2.1.2; soft inpainting; FP8 support; SDXL-Inpaint model support; Spandrel integration for upscaling; NPU support |
| v1.9.0 | April 13, 2024 | Scheduler selection in main UI; LyCORIS BOFT and DoRA network support; Extra Networks tree view; SDXL-Lightning scheduler |
| v1.9.4 | May 28, 2024 | Pinned setuptools version to resolve startup error |
| v1.10.0 | July 27, 2024 | Stable Diffusion 3 support; six new schedulers (Align Your Steps, KL Optimal, Normal, DDIM, Simple, Beta); DDIM CFG++ sampler; significant performance improvements |
| v1.10.1 | February 9, 2025 | Bug fix release (CPU image upscale fix) |
After v1.10.1, development effectively stalled. By late 2024, community members began raising concerns about the project's future. GitHub discussions titled "Future of Automatic1111 for 2025" and "Is this a dead project?" noted that no releases had occurred since July 2024, with 44 open pull requests remaining unmerged and nothing approved in months. A further discussion titled "Why did you stop updating?" appeared in early 2025. As of March 2026, AUTOMATIC1111 has not published any new releases since v1.10.1, though the repository remains available and functional.
The developer behind AUTOMATIC1111 operates under this pseudonym and has not publicly disclosed their real identity. Very little is known about them beyond their GitHub activity. The AUTOMATIC1111 GitHub profile shows contributions to several related repositories, including the main web UI, a feature showcase repository, an extensions index repository, and a web assets repository.
Despite this anonymity, AUTOMATIC1111 became one of the most influential figures in the open-source AI art community during 2022 and 2023. The pseudonymous nature of the project's leadership became a point of discussion during the January 2023 GitHub suspension, when the entire community's primary tool temporarily disappeared along with its creator's account. The project's dependence on a single anonymous maintainer is widely cited as both a risk factor and a distinctive feature of its history.
A1111 is built using Gradio Blocks, a Python library developed by Hugging Face for creating interactive web applications. Gradio Blocks provides a low-level framework for designing web apps with customizable layouts and data flows. The UI runs in any standard web browser, typically accessible at http://127.0.0.1:7860 on the local machine. The interface is organized into tabs that separate different functionalities: txt2img, img2img, Extras, PNG Info, Checkpoint Merger, Train, and Settings.
The choice of Gradio allowed rapid development and a familiar interface pattern for the machine learning community. Gradio handles input components (text boxes, sliders, image uploaders), output displays, and event callbacks, while A1111 adds Stable Diffusion-specific logic on top.
Users can customize the interface through a ui-config.json file that controls slider ranges, default values, and visibility of UI elements. A user.css file allows CSS modifications for further visual customization, and a built-in dark theme is available through a URL parameter or command-line flag.
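As an illustration, a ui-config.json might contain entries like the following. These values are hypothetical; the actual file is auto-generated on first launch, with keys following a tab/label/property pattern:

```json
{
  "txt2img/Sampling steps/value": 20,
  "txt2img/Sampling steps/maximum": 150,
  "txt2img/Width/value": 512,
  "txt2img/CFG Scale/value": 7.0
}
```

Editing such entries changes the defaults and ranges the UI presents without modifying any Python code.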
Underneath the Gradio frontend sits a FastAPI backend that exposes a RESTful API. Users can enable it by launching the application with the --api flag, after which auto-generated endpoint documentation becomes available via Swagger UI at http://127.0.0.1:7860/docs. All endpoints live under the /sdapi/v1/ path; the primary ones are /sdapi/v1/txt2img and /sdapi/v1/img2img. The API supports all core operations, including text-to-image generation, image-to-image processing, upscaling, model switching, and interrogation (extracting prompts from images), enabling integration with external applications, automation scripts, Discord bots, and third-party services. HTTP Basic Authentication can be enabled with the --api-auth parameter for secured deployments.
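A minimal client sketch using only the standard library shows the shape of a txt2img request. The field names (prompt, negative_prompt, steps, cfg_scale, and so on) match the API's documented request body; the default values chosen here are illustrative, and any field omitted from the payload falls back to the server's defaults:

```python
import base64
import json
from urllib import request

API = "http://127.0.0.1:7860"  # default local address; server must be launched with --api


def build_txt2img_payload(prompt, negative="", steps=20, width=512,
                          height=512, cfg=7.0, seed=-1):
    """Minimal request body for /sdapi/v1/txt2img; omitted fields use server defaults."""
    return {
        "prompt": prompt,
        "negative_prompt": negative,
        "steps": steps,
        "width": width,
        "height": height,
        "cfg_scale": cfg,
        "seed": seed,  # -1 asks the server for a random seed
    }


def txt2img(payload):
    """POST the payload and return the generated images as raw PNG bytes."""
    req = request.Request(
        f"{API}/sdapi/v1/txt2img",
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    with request.urlopen(req) as resp:
        result = json.load(resp)
    # the API returns images as base64-encoded PNG strings
    return [base64.b64decode(img) for img in result["images"]]


# Example usage (requires a running WebUI started with --api):
#   pngs = txt2img(build_txt2img_payload("a watercolor fox", steps=25))
#   with open("fox.png", "wb") as f:
#       f.write(pngs[0])
```

The same pattern extends to /sdapi/v1/img2img, which additionally accepts base64-encoded input images and a denoising strength.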
The codebase is composed of approximately 87.5% Python, 8.4% JavaScript, 2.1% CSS, and 1.3% HTML. The project has accumulated over 7,600 commits from 586 contributors.
The backend loads Stable Diffusion model checkpoints (in .safetensors or .ckpt format) into GPU memory and handles the full diffusion inference pipeline. This includes text encoding through CLIP, noise scheduling, iterative denoising through the U-Net (or the MMDiT transformer for SD3), and final image decoding through the VAE. The system supports loading multiple model components independently: base checkpoints, VAE files, LoRA adapters, textual inversion embeddings, and hypernetworks.
A1111 includes a broad collection of image generation and manipulation features, many of which were pioneered or popularized by the project.
The primary generation mode, txt2img, takes a text prompt and produces an image. Users can control resolution, sampling steps, CFG (Classifier-Free Guidance) scale, seed value, and batch size. The interface supports positive and negative prompts, allowing users to specify both desired and undesired image characteristics. A1111 handles prompt lengths beyond the standard 75-token CLIP limit by automatically chunking long prompts. Generated images include embedded metadata containing all generation parameters, enabling exact reproduction of results.
Image-to-image (img2img) mode takes an existing image as a starting point and applies the diffusion process with a specified denoising strength. Lower strength values keep the result close to the original image, while higher values allow greater deviation. The img2img pipeline is useful for style transfer, image refinement, and iterative creative workflows. The Loopback feature can automatically feed the output of one generation back as the input for the next, allowing progressive refinement over multiple iterations.
Within the img2img tab, users can draw a mask over specific regions of an image. The model regenerates only the masked area while keeping the rest of the image intact. The "Inpaint area: Only masked" option resizes just the masked region for processing at higher resolution, then composites it back into the original image. This allows detailed work on specific portions of large images. Options for masked content include fill, original, latent noise, and latent nothing. Soft Inpainting, added in v1.8.0, supports soft-edged masks for smoother blending between inpainted and original regions.
Outpainting extends an image beyond its original boundaries. The system creates empty space around the original image and uses inpainting techniques to fill the new regions with contextually appropriate content. Effective outpainting typically requires a descriptive prompt matching the existing image, high denoising strength and CFG scale values, and 50 to 100 sampling steps using ancestral samplers such as Euler Ancestral or DPM2 Ancestral.
Hires. fix is a two-pass generation technique in which the image is first rendered at a lower resolution, upscaled using a selected method (such as Latent, ESRGAN, SwinIR, or other models available through the Spandrel integration), and then refined with a second diffusion pass at the target resolution. This approach avoids the compositional artifacts that often appear when generating directly at high resolutions with Stable Diffusion 1.5, such as duplicated subjects or distorted anatomy.
The Extras tab provides tools for post-processing generated images. Face restoration can be performed using GFPGAN or CodeFormer with adjustable strength. Image upscaling supports multiple algorithms including Real-ESRGAN (with anime-specific variants), ESRGAN, SwinIR, and LDSR. The Spandrel integration added in v1.8.0 provides a unified framework for loading various upscaling and restoration models. Users can chain upscaling with a secondary upscaler at a configurable ratio.
The interface offers a wide selection of samplers, each balancing speed, quality, and determinism differently:
| Sampler | Type | Notes |
|---|---|---|
| Euler | Deterministic | Simplest sampler; fast with good results |
| Euler a (Ancestral) | Stochastic | Adds random noise each step; more creative outputs |
| DPM++ 2M Karras | Multistep | Popular default; good balance of speed and quality |
| DPM++ SDE Karras | Stochastic | High detail; slightly slower |
| DPM++ 3M SDE Karras | Stochastic | Third-order solver; improved detail |
| Heun | Deterministic | Second-order method; higher quality per step |
| LMS | Deterministic | Linear Multi-Step method |
| DDIM | Deterministic | Classic sampler; supports fewer steps |
| PLMS | Deterministic | Pseudo Linear Multi-Step method |
| UniPC | Multistep | Fast convergence; good at low step counts |
| LCM | Distilled | Supports very few steps (4-8); requires LCM LoRA |
| DPM++ 2M SDE Exponential | Stochastic | Added in v1.6.0 |
| Restart | Restart-based | Added in v1.6.0 |
| DDIM CFG++ | Deterministic | Added in v1.10.0 |
Most samplers are borrowed from the k-diffusion library. Starting with v1.9.0, a separate scheduler dropdown allows users to choose scheduling strategies independently from the sampler, including Karras, exponential, Align Your Steps, KL Optimal, Normal, Simple, and Beta schedules.
A1111 introduced or popularized several prompt engineering techniques that became standard across the Stable Diffusion ecosystem:
- (word:1.2) increases attention to a term; square brackets [word] decrease it.
- [from:to:step] switches between prompts at a specified sampling step during generation.
- [word1|word2] cycles between terms at each sampling step.
- The AND operator combines multiple prompts with independent weights.
- Lines beginning with # are ignored during generation.

| Feature | Description |
|---|---|
| Prompt Matrix | Generates all combinations from prompts separated by the pipe character |
| X/Y/Z Plot | Creates grids comparing different parameter values across rows, columns, and batches |
| Loopback | Feeds the output of one generation as the input of the next, iteratively |
| CLIP Interrogator | Generates a text description from an existing image using the CLIP model |
| Composable Diffusion | Combines multiple prompts using AND with adjustable weights |
| Seed Resize | Produces similar compositions at different resolutions using the same seed |
| Batch Processing | Generates multiple images in a single run or processes folders of images |
| Prompt S/R (Search and Replace) | Compares variations by substituting keywords across a batch |
| Prompts from File | Executes batch jobs by reading prompts from a text file |
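A few illustrative prompts show how the attention, scheduling, and alternation syntaxes described above combine in practice (these examples are invented for illustration, not drawn from the project's documentation):

```
# attention weighting: emphasize the armor, de-emphasize the background
a portrait of a knight, (ornate armor:1.3), [background], dramatic lighting

# prompt editing: switch from "fantasy" to "cyberpunk" at sampling step 10
a [fantasy:cyberpunk:10] cityscape

# alternation: swap terms on every sampling step
a photo of a [cat|dog] hybrid

# composable diffusion: two prompts combined, the second weighted at 0.8
a dense forest AND a watercolor painting :0.8
```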
A1111 embeds all generation parameters (prompt, negative prompt, seed, sampler, steps, CFG scale, model hash, and more) directly into the metadata of output PNG files. The PNG Info tab allows users to drag in any previously generated image to retrieve its full generation parameters, making results reproducible and shareable. This metadata embedding convention was adopted by many other tools and became a community standard.
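Because the parameters live in a standard PNG tEXt chunk keyed "parameters", they can be recovered without any imaging library. The following is a minimal standard-library sketch that walks the PNG chunk layout (4-byte length, 4-byte type, data, 4-byte CRC) and collects text chunks; CRCs are not validated:

```python
import struct

PNG_SIG = b"\x89PNG\r\n\x1a\n"


def read_text_chunks(data: bytes) -> dict:
    """Return {keyword: text} for every tEXt chunk in raw PNG bytes.

    A1111 stores generation parameters under the keyword "parameters".
    """
    if data[:8] != PNG_SIG:
        raise ValueError("not a PNG file")
    out, pos = {}, 8
    while pos + 8 <= len(data):
        length, ctype = struct.unpack(">I4s", data[pos:pos + 8])
        if ctype == b"tEXt":
            # chunk data is keyword, NUL separator, then Latin-1 text
            key, _, val = data[pos + 8:pos + 8 + length].partition(b"\x00")
            out[key.decode("latin-1")] = val.decode("latin-1")
        if ctype == b"IEND":
            break
        pos += 12 + length  # 4 (length) + 4 (type) + data + 4 (CRC)
    return out
```

Pillow users can get the same data more conveniently via Image.open(path).info without manual chunk parsing.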
A1111 supports a range of Stable Diffusion model architectures:
| Model | Support Added | Notes |
|---|---|---|
| Stable Diffusion 1.4 / 1.5 | August 2022 (launch) | The original supported architecture; generates 512x512 images; largest ecosystem of fine-tunes and LoRAs |
| Stable Diffusion 2.0 / 2.1 | Late 2022 | Includes 512px and 768px variants, v-prediction and epsilon-prediction modes, depth-guided model, and inpainting model |
| Stable Diffusion XL (SDXL) | v1.5.0 (July 2023), refined in v1.6.0 | Generates 1024x1024 images; supports base + refiner pipeline |
| SSD-1B (Segmind) | 2023 | Distilled SDXL variant |
| SDXL-Lightning | v1.9.0 (April 2024) | Few-step distilled model with Sgm Uniform scheduler |
| Stable Diffusion 3 | v1.10.0 (July 2024) | Uses MMDiT architecture; Euler sampler recommended |
| InstructPix2Pix | 2023 | Text-guided image editing model |
| Alt-Diffusion | 2023 | Multilingual input support |
| SD2 Variation Models | 2023 | CLIP embedding-based image variations |
The interface loads models in .ckpt and .safetensors formats and supports hot-swapping between checkpoints through a dropdown menu in the UI. Notably, AUTOMATIC1111 does not natively support FLUX models by Black Forest Labs. FLUX uses a fundamentally different architecture that requires significant backend changes. Users who want FLUX support through an A1111-style interface typically use the Forge fork or ComfyUI.
A1111 supports several types of supplementary model weights that modify generation behavior without replacing the base checkpoint:
- LoRA adapters are activated by including <lora:filename:multiplier> in the prompt. LoRA became the dominant method for adding custom subjects, styles, and concepts to Stable Diffusion generations. Multiple LoRAs can be combined in a single generation.

The interface provides a unified Extra Networks browser with thumbnail previews for browsing and selecting all installed model supplements.
One of A1111's greatest strengths is its extension system. Extensions are community-developed Python scripts and modules that plug into the web UI to add new features, tabs, or processing steps. The extension architecture allowed the community to expand the tool's capabilities far beyond what the core team built. Extensions can be installed directly from the UI through the Extensions tab, which includes a searchable index of available extensions, or by cloning Git repositories into the extensions/ directory.
| Extension | Developer | Function |
|---|---|---|
| ControlNet | Lvmin Zhang (lllyasviel) | Adds conditional control using edge maps, depth maps, pose keypoints, segmentation maps, scribbles, and other structural inputs. Widely considered one of the most important extensions in the Stable Diffusion ecosystem. |
| ADetailer (After Detailer) | Bing-su | Automatically detects and inpaints faces and hands to fix common generation artifacts |
| AnimateDiff | continue-revolution | Converts text-to-image outputs into animated GIFs or MP4 videos by adding temporal motion |
| Deforum | deforum-art | Creates AI-driven 2D and 3D animations, video stylization, and camera motion sequences |
| Regional Prompter | hako-mikan | Splits the image canvas into regions with independent prompts for precise compositional control |
| Ultimate SD Upscale | Coyote-A | Tile-based upscaling with ControlNet Tile integration for enlarging images on limited VRAM |
| ReActor | Gourieff | Face-swapping extension for replacing faces in generated images |
| Civitai Helper | butaixianran | Allows downloading models directly from the Civitai model repository within the UI |
| SadTalker | OpenTalker | Animates portrait images into talking head videos |
| AgentScheduler | ArtVentureX | Manages and queues multiple generation jobs with scheduling |
| Openpose Editor | fkunn1326 | Visual editor for creating and manipulating human pose skeletons for ControlNet |
| Infinite Image Browsing | zanllp | Gallery browser for generated images with metadata search and filtering |
| Inpaint Anything | Uminosachi | Uses Segment Anything Model (SAM) for precise automatic mask creation |
| Loopback Wave | FurkanGozukara | Advanced loopback with wave-based denoising strength patterns |
The ControlNet extension, in particular, was transformative for the Stable Diffusion ecosystem. Created by Lvmin Zhang (the same developer who later created the Forge fork), ControlNet allowed users to provide structural guidance to the generation process through reference images. Its initial release in February 2023 is widely considered one of the most important moments in the history of open-source image generation. The official AUTOMATIC1111 extensions index on GitHub catalogues hundreds of community extensions.
| Component | Minimum | Recommended |
|---|---|---|
| GPU | NVIDIA with 4 GB VRAM | NVIDIA with 8+ GB VRAM (12 GB for SDXL) |
| RAM | 8 GB | 16 GB |
| Storage | 12 GB free (SSD preferred) | 20-50 GB for multiple models and extensions |
| Python | 3.10.6 | 3.10.x (specific version required for PyTorch compatibility) |
| Git | Required | Required |
| OS | Windows 10/11, Linux, macOS | Windows 10/11 or Linux with NVIDIA GPU |
NVIDIA GPUs with CUDA are the best-supported option. AMD GPU users on Linux can use ROCm, while AMD users on Windows can use DirectML. Apple Silicon users can run on macOS using the MPS (Metal Performance Shaders) backend, though generation speed is significantly slower than on equivalent NVIDIA hardware.
For systems with limited VRAM, A1111 provides several command-line flags:
- --medvram: Splits the model across processing stages to reduce peak VRAM usage. Suitable for GPUs with 6-8 GB VRAM.
- --lowvram: More aggressive memory optimization for GPUs with 4 GB VRAM or less. Significantly slower but allows generation on entry-level hardware.
- --xformers: Enables xformers memory-efficient attention, cutting GPU memory usage roughly in half on many cards.
- --opt-sdp-attention or --opt-sdp-no-mem-attention: Uses PyTorch's built-in Scaled Dot Product Attention for improved performance.

Additional optimization options include FP8 support (added in v1.8.0) for reducing model memory footprint and TAESD (Tiny AutoEncoder for Stable Diffusion) for fast preview generation and larger batch sizes.
The standard installation involves cloning the Git repository and running a platform-specific launch script:
1. Clone the repository: git clone https://github.com/AUTOMATIC1111/stable-diffusion-webui.git
2. Run webui-user.bat (Windows) or webui.sh (Linux/macOS).
3. Open the interface in a browser at http://127.0.0.1:7860.

The first launch typically takes 20-30 minutes as dependencies are downloaded. Users must separately download Stable Diffusion model checkpoint files and place them in the models/Stable-diffusion/ directory. Command-line arguments are configured through the COMMANDLINE_ARGS variable in the webui-user.bat or webui-user.sh file.
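Putting the pieces together, a Linux/macOS webui-user.sh might set COMMANDLINE_ARGS like this; the particular flag combination is only an example, chosen for a mid-range GPU with the API enabled:

```shell
#!/bin/bash
# webui-user.sh -- user launch configuration (illustrative values)
# --xformers : memory-efficient attention
# --medvram  : reduce peak VRAM usage for 6-8 GB GPUs
# --api      : expose the /sdapi/v1/* REST endpoints
export COMMANDLINE_ARGS="--xformers --medvram --api"
```

On Windows, the equivalent line in webui-user.bat is set COMMANDLINE_ARGS=--xformers --medvram --api.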
Stable Diffusion WebUI Forge (commonly called "Forge") is a fork of AUTOMATIC1111's WebUI created by Lvmin Zhang, who uses the GitHub handle "lllyasviel." Zhang is also the creator of ControlNet and Fooocus (a simplified Stable Diffusion interface inspired by Midjourney's approach). The Forge repository was created on January 14, 2024, and publicly released in February 2024. The project name was inspired by Minecraft Forge, reflecting the goal of becoming a modding platform for Stable Diffusion.
Forge completely rewrote the resource management backend. All of A1111's VRAM-related command-line flags (--medvram, --lowvram, --medvram-sdxl, --precision full, --no-half, --no-half-vae, and all --attention_* flags) were removed and replaced with an automatic memory management system. Without any flags, Forge can run SDXL on GPUs with 4 GB VRAM and SD 1.5 on GPUs with 2 GB VRAM.
Beyond the memory-management rewrite, Forge introduced a number of other performance and feature improvements.
Forge added native support for FLUX models, including quantized formats like BitsAndBytes NF4 and GGUF (Q8_0, Q5_0, Q5_1, Q4_0, Q4_1). A configurable GPU weight slider and offload toggle allow users to balance speed and VRAM usage when running these large models. This FLUX support was a significant advantage over the original A1111, which does not natively support FLUX.
As of early 2026, the Forge repository has approximately 12,300 GitHub stars and 1,500 forks. Many A1111 users migrated to Forge because it maintains the familiar A1111 interface while delivering better performance and broader model support. Most A1111 extensions are compatible with Forge, though some require updated versions.
The Forge project has itself spawned community forks.
ComfyUI, created by a developer using the handle "comfyanonymous," represents the primary alternative and eventual successor to A1111 as the leading open-source Stable Diffusion interface. The two tools represent fundamentally different design philosophies.
| Aspect | AUTOMATIC1111 | ComfyUI |
|---|---|---|
| Interface | Traditional form-based tabs (txt2img, img2img) | Node-based graph/flowchart editor |
| Learning Curve | Lower; familiar web form layout | Higher; requires understanding of node connections and data flow |
| GitHub Stars (March 2026) | ~162,000 | ~106,000 |
| First Release | August 2022 | January 2023 |
| Framework | Gradio (Python) | Custom frontend (JavaScript/TypeScript) |
| VRAM Efficiency | Moderate; requires manual flags for low VRAM | Dynamic model loading/unloading; better automatic memory management |
| Generation Speed | Baseline | Approximately 10-20% faster on identical hardware |
| FLUX Support | Not natively supported (available via Forge fork) | Full native support through custom nodes |
| Video Model Support | Limited (via extensions like AnimateDiff) | Extensive (Stable Video Diffusion, Mochi, LTX-Video, Hunyuan Video, Wan 2.1/2.2) |
| 3D Model Support | Not supported | Hunyuan3D 2.0 support |
| Audio Model Support | Not supported | Stable Audio, ACE Step support |
| Workflow Sharing | Via settings and prompt styles | Node graphs exportable as JSON/PNG/WebP files |
| Extension System | Python scripts in extensions directory | Custom nodes with ComfyUI Manager |
| API | RESTful FastAPI endpoints | Built-in WebSocket and REST API with queue system |
| Development Status (March 2026) | Largely inactive since February 2025 | Actively developed with weekly releases |
| Organization | Solo pseudonymous maintainer | Comfy Org with 346+ contributors and venture funding |
| Model Support | SD 1.x, 2.x, SDXL, SD3 | SD 1.x, 2.x, SDXL, SD3, FLUX, Pixart, AuraFlow, HunyuanDiT, HiDream, and more |
| Desktop App | No | Yes (Windows and macOS) |
| License | AGPL-3.0 | GPL-3.0 |
Several factors made A1111 the default choice during the early Stable Diffusion era: its first-mover timing, its comprehensive feature set, and its large extension ecosystem.
Starting in 2024, momentum shifted toward ComfyUI as newer model architectures for image, video, and audio generation gained support there first, while A1111 development slowed.
By 2025, ComfyUI had become the standard tool for advanced users and professionals, while A1111 and its Forge fork remained popular among users who preferred a simpler interface for straightforward image generation.
A1111 played a foundational role in democratizing AI image generation. Before its release, running Stable Diffusion required command-line familiarity and technical setup. A1111 lowered that barrier dramatically, contributing to the rapid growth of the AI art movement in late 2022 and 2023.
Key community resources include Reddit communities, Discord servers, dedicated tutorial sites, and model-sharing platforms such as Civitai and Hugging Face, which standardized their model pages around A1111 usage instructions.
The project's 586 contributors collectively made over 7,600 commits. A1111 helped establish patterns that other interfaces adopted, including the prompt attention syntax (word:weight), the negative prompt field as a standard UI element, and the practice of embedding generation parameters in PNG metadata for reproducibility.
Despite its wide adoption, A1111 has several recognized limitations, including memory management that requires manual command-line flags on low-VRAM hardware, dependence on a single pseudonymous maintainer, and the lack of native support for newer architectures such as FLUX.
AUTOMATIC1111's Stable Diffusion Web UI played a central role in the generative AI movement of 2022-2023. By making Stable Diffusion accessible to non-technical users, it dramatically expanded the audience for open-source image generation technology. The project demonstrated that a community-driven, open-source interface could outpace commercial offerings in features and adoption during a critical period of AI development.
Its extension architecture became a model for other AI tools, and many concepts it popularized have become standard across image generation platforms. The project also spawned notable derivatives beyond Forge, including vladmandic's SD.Next (which added broader model support and a modernized UI) and Fooocus (a simplified interface by lllyasviel).
Even as ComfyUI has taken the lead for advanced workflows and newer model architectures, A1111's influence on the design language and user expectations for AI image generation interfaces remains significant.