Runwayml/stable-diffusion-v1-5 model


Model Description

Runwayml/stable-diffusion-v1-5 is a Multimodal model for the Text-to-Image task. It is used through the Diffusers library, carries the stable-diffusion and stable-diffusion-diffusers tags, and is released under the creativeml-openrail-m license. Associated arXiv references: arxiv:2207.12598, arxiv:2112.10752, arxiv:2103.00020, arxiv:2205.11487, and arxiv:1910.09700.

Clone Model Repository

# HTTPS
# Make sure git-lfs is installed (https://git-lfs.com)
git lfs install
git clone https://huggingface.co/runwayml/stable-diffusion-v1-5

# To clone the repo without the large files (just their pointers),
# prepend the git clone command with the following environment variable:
GIT_LFS_SKIP_SMUDGE=1 git clone https://huggingface.co/runwayml/stable-diffusion-v1-5

# SSH
# Make sure git-lfs is installed (https://git-lfs.com)
git lfs install
git clone git@hf.co:runwayml/stable-diffusion-v1-5

# To clone the repo without the large files (just their pointers),
# prepend the git clone command with the following environment variable:
GIT_LFS_SKIP_SMUDGE=1 git clone git@hf.co:runwayml/stable-diffusion-v1-5
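
Once the repository is cloned (or the weights are pulled directly from the Hub), the checkpoint can be loaded with the Diffusers library listed in the page metadata. A minimal sketch, assuming the diffusers and torch packages are installed and a CUDA GPU is available:

import torch
from diffusers import StableDiffusionPipeline

# Load the pipeline from the Hub (a path to the locally cloned directory also works).
pipe = StableDiffusionPipeline.from_pretrained(
	"runwayml/stable-diffusion-v1-5",
	torch_dtype=torch.float16,  # half precision; drop this argument for CPU-only use
)
pipe = pipe.to("cuda")

prompt = "Astronaut riding a horse"
image = pipe(prompt).images[0]
image.save("astronaut_riding_a_horse.png")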


Deployment

Inference API

# Python
import io

import requests
from PIL import Image

API_URL = "https://api-inference.huggingface.co/models/runwayml/stable-diffusion-v1-5"
API_TOKEN = "hf_xxx"  # replace with your Hugging Face access token
headers = {"Authorization": f"Bearer {API_TOKEN}"}

def query(payload):
	response = requests.post(API_URL, headers=headers, json=payload)
	return response.content

image_bytes = query({
	"inputs": "Astronaut riding a horse",
})

# The returned bytes can be opened as an image with PIL, for example
image = Image.open(io.BytesIO(image_bytes))

// JavaScript
const API_TOKEN = "hf_xxx"; // replace with your Hugging Face access token

async function query(data) {
	const response = await fetch(
		"https://api-inference.huggingface.co/models/runwayml/stable-diffusion-v1-5",
		{
			headers: { Authorization: `Bearer ${API_TOKEN}` },
			method: "POST",
			body: JSON.stringify(data),
		}
	);
	const result = await response.blob();
	return result;
}

query({ inputs: "Astronaut riding a horse" }).then((response) => {
	// response is a Blob containing the generated image
});

# cURL (replace {API_TOKEN} with your Hugging Face access token)
curl https://api-inference.huggingface.co/models/runwayml/stable-diffusion-v1-5 \
	-X POST \
	-d '{"inputs": "Astronaut riding a horse"}' \
	-H "Authorization: Bearer {API_TOKEN}"

Amazon SageMaker
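
A minimal sketch of hosting the model on a SageMaker real-time endpoint with the sagemaker Python SDK and the Hugging Face inference container is shown below; the container versions, the instance type, and text-to-image support in the default handler are assumptions, and diffusion models may require a custom inference script.

# Sketch only: assumes an AWS account with a SageMaker execution role and that
# the chosen Hugging Face DLC versions are available in your region.
import sagemaker
from sagemaker.huggingface import HuggingFaceModel

role = sagemaker.get_execution_role()

hub = {
	"HF_MODEL_ID": "runwayml/stable-diffusion-v1-5",  # model to load from the Hub
	"HF_TASK": "text-to-image",
}

huggingface_model = HuggingFaceModel(
	env=hub,
	role=role,
	transformers_version="4.26",  # assumed container versions
	pytorch_version="1.13",
	py_version="py39",
)

predictor = huggingface_model.deploy(
	initial_instance_count=1,
	instance_type="ml.g4dn.xlarge",  # GPU instance; adjust to your quota
)

response = predictor.predict({"inputs": "Astronaut riding a horse"})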

Spaces

import gradio as gr

# Build a demo UI for the hosted model (served via the Inference API) and launch it.
gr.Interface.load("models/runwayml/stable-diffusion-v1-5").launch()

Training
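
One possible approach, sketched below, is to fine-tune the checkpoint with the diffusers text-to-image example script through a SageMaker Hugging Face estimator; the script location, dataset, hyperparameters, instance type, and container versions are all assumptions to adapt to your setup.

# Sketch only: assumes the diffusers repository (https://github.com/huggingface/diffusers)
# has been cloned locally so its text-to-image example script can be shipped as source.
import sagemaker
from sagemaker.huggingface import HuggingFace

role = sagemaker.get_execution_role()

hyperparameters = {
	"pretrained_model_name_or_path": "runwayml/stable-diffusion-v1-5",
	"dataset_name": "lambdalabs/pokemon-blip-captions",  # placeholder example dataset
	"resolution": 512,
	"train_batch_size": 1,
	"max_train_steps": 1000,
	"learning_rate": 1e-5,
	"output_dir": "/opt/ml/model",  # artifacts here are uploaded to S3 by SageMaker
}

estimator = HuggingFace(
	entry_point="train_text_to_image.py",
	source_dir="./diffusers/examples/text_to_image",
	instance_type="ml.g5.2xlarge",  # GPU instance; adjust to your quota
	instance_count=1,
	role=role,
	transformers_version="4.26",  # assumed container versions
	pytorch_version="1.13",
	py_version="py39",
	hyperparameters=hyperparameters,
)

estimator.fit()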

Model Card