Sentence-transformers/all-mpnet-base-v2 model

{{Model infobox
| hugging-face-uri = sentence-transformers/all-mpnet-base-v2
| creator =
| type = Natural Language Processing, Multimodal
| task = Sentence Similarity, Feature Extraction
| library = PyTorch, Sentence Transformers
| dataset = s2orc, flax-sentence-embeddings/stackexchange_xml, MS Marco, gooaq, yahoo_answers_topics, code_search_net, search_qa, eli5, snli, multi_nli, wikihow, natural_questions, trivia_qa, embedding-data/sentence-compression, embedding-data/flickr30k-captions, embedding-data/altlex, embedding-data/simple-wiki, embedding-data/QQP, embedding-data/SPECTER, embedding-data/PAQ_pairs, embedding-data/WikiAnswers
| language = English
| paper = arxiv:1904.06472, arxiv:2102.07033, arxiv:2104.08727, arxiv:1704.05179, arxiv:1810.09305
| license = apache-2.0
| related-to = mpnet
| all-tags = Sentence Similarity, PyTorch, Sentence Transformers, s2orc, flax-sentence-embeddings/stackexchange_xml, MS Marco, gooaq, yahoo_answers_topics, code_search_net, search_qa, eli5, snli, multi_nli, wikihow, natural_questions, trivia_qa, embedding-data/sentence-compression, embedding-data/flickr30k-captions, embedding-data/altlex, embedding-data/simple-wiki, embedding-data/QQP, embedding-data/SPECTER, embedding-data/PAQ_pairs, embedding-data/WikiAnswers, English, mpnet, feature-extraction, arxiv:1904.06472, arxiv:2102.07033, arxiv:2104.08727, arxiv:1704.05179, arxiv:1810.09305, License: apache-2.0
}}

Sentence-transformers/all-mpnet-base-v2 is a Natural Language Processing and Multimodal model used for Sentence Similarity and Feature Extraction.
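
For example, the embeddings it produces can be compared with cosine similarity. Below is a minimal sketch using the Sentence Transformers library (assuming it is installed, e.g. pip install -U sentence-transformers); the example sentences are illustrative.

from sentence_transformers import SentenceTransformer, util

# Load the model from the Hugging Face Hub
model = SentenceTransformer("sentence-transformers/all-mpnet-base-v2")

# Feature extraction: encode sentences into dense embedding vectors
embeddings = model.encode([
    "That is a happy person",
    "That is a very happy person",
    "Today is a sunny day",
])

# Sentence similarity: cosine similarity of the first sentence against the rest
scores = util.cos_sim(embeddings[0], embeddings[1:])
print(scores)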

Model Description

Clone Model Repository

#Be sure to have git-lfs installed (https://git-lfs.com)
git lfs install
git clone https://huggingface.co/sentence-transformers/all-mpnet-base-v2
  
#To clone the repo without large files – just their pointers –
#prepend git clone with the GIT_LFS_SKIP_SMUDGE env var:
GIT_LFS_SKIP_SMUDGE=1 git clone https://huggingface.co/sentence-transformers/all-mpnet-base-v2

#Be sure to have git-lfs installed (https://git-lfs.com)
git lfs install
git clone [email protected]:sentence-transformers/all-mpnet-base-v2
  
#To clone the repo without large files – just their pointers –
#prepend git clone with the GIT_LFS_SKIP_SMUDGE env var:
GIT_LFS_SKIP_SMUDGE=1 git clone [email protected]:sentence-transformers/all-mpnet-base-v2

Hugging Face Transformers Library
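
The model can also be used directly with the Hugging Face transformers library. The sketch below (assuming transformers and torch are installed; the sentences are illustrative) mean-pools the encoder's token embeddings with the attention mask and L2-normalizes the result, mirroring how Sentence Transformers post-processes this model's output.

from transformers import AutoTokenizer, AutoModel
import torch
import torch.nn.functional as F

# Mean pooling: average the token embeddings, weighted by the attention mask
def mean_pooling(model_output, attention_mask):
    token_embeddings = model_output[0]  # first element holds all token embeddings
    input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
    return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)

sentences = ["That is a happy person", "That is a very happy person"]

tokenizer = AutoTokenizer.from_pretrained("sentence-transformers/all-mpnet-base-v2")
model = AutoModel.from_pretrained("sentence-transformers/all-mpnet-base-v2")

# Tokenize and run the encoder
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors="pt")
with torch.no_grad():
    model_output = model(**encoded_input)

# Pool token embeddings into sentence embeddings and L2-normalize them
sentence_embeddings = mean_pooling(model_output, encoded_input["attention_mask"])
sentence_embeddings = F.normalize(sentence_embeddings, p=2, dim=1)

print(sentence_embeddings.shape)  # (number of sentences, 768)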

Deployment

Inference API

import requests
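
# Note: API_TOKEN is a placeholder for a Hugging Face access token; it is not
# defined in this snippet, so supply your own (for example, read it from an
# environment variable).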

API_URL = "https://api-inference.huggingface.co/models/sentence-transformers/all-mpnet-base-v2"
headers = {"Authorization": f"Bearer {API_TOKEN}"}

def query(payload):
	response = requests.post(API_URL, headers=headers, json=payload)
	return response.json()
	
output = query({
	"inputs": {
		"source_sentence": "That is a happy person",
		"sentences": [
			"That is a happy dog",
			"That is a very happy person",
			"Today is a sunny day"
		]
	},
})

async function query(data) {
	const response = await fetch(
		"https://api-inference.huggingface.co/models/sentence-transformers/all-mpnet-base-v2",
		{
			headers: { Authorization: "Bearer {API_TOKEN}" },
			method: "POST",
			body: JSON.stringify(data),
		}
	);
	const result = await response.json();
	return result;
}

query({"inputs": {
		"source_sentence": "That is a happy person",
		"sentences": [
			"That is a happy dog",
			"That is a very happy person",
			"Today is a sunny day"
		]
	}}).then((response) => {
	console.log(JSON.stringify(response));
});

curl https://api-inference.huggingface.co/models/sentence-transformers/all-mpnet-base-v2 \
	-X POST \
	-d '{"inputs": { "source_sentence": "That is a happy person", "sentences": [ "That is a happy dog", "That is a very happy person", "Today is a sunny day" ] }}' \
	-H "Authorization: Bearer {API_TOKEN}"

Amazon SageMaker
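
A sketch of deploying the model as a real-time endpoint with the SageMaker Python SDK's HuggingFaceModel class. The container versions, instance type, and HF_TASK value below are assumptions; adjust them to the Hugging Face Deep Learning Container versions and hardware available in your account.

import sagemaker
from sagemaker.huggingface import HuggingFaceModel

# IAM role with SageMaker permissions (works inside SageMaker notebooks/Studio)
role = sagemaker.get_execution_role()

# Hub configuration: which model to pull and which pipeline task to serve
# (the HF_TASK value is an assumption for embedding-style inference)
hub = {
    "HF_MODEL_ID": "sentence-transformers/all-mpnet-base-v2",
    "HF_TASK": "feature-extraction",
}

# Container versions below are illustrative
huggingface_model = HuggingFaceModel(
    env=hub,
    role=role,
    transformers_version="4.26",
    pytorch_version="1.13",
    py_version="py39",
)

# Deploy to a real-time endpoint and run a test request
predictor = huggingface_model.deploy(
    initial_instance_count=1,
    instance_type="ml.m5.xlarge",
)
print(predictor.predict({"inputs": "Today is a sunny day"}))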

Spaces

import gradio as gr

gr.Interface.load("models/sentence-transformers/all-mpnet-base-v2").launch()

Training

Model Card
