</pre>
</tabber>

==Hugging Face Transformers Library==
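For local use with the Transformers library, a minimal sketch is shown below (it assumes the standard pipeline API, a 16 kHz audio file named sample1.flac, and that ffmpeg is installed for audio decoding; these assumptions are not taken from this page):

<pre>
from transformers import pipeline

# load the model into an automatic-speech-recognition pipeline
asr = pipeline(
    "automatic-speech-recognition",
    model="jonatasgrosman/wav2vec2-large-xlsr-53-english"
)

# transcribe a local audio file; the pipeline decodes it and resamples to 16 kHz
result = asr("sample1.flac")
print(result["text"])
</pre>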
==Deployment==
===Inference API===
<tabber>
|-|Python=
<pre>
import requests

API_URL = "https://api-inference.huggingface.co/models/jonatasgrosman/wav2vec2-large-xlsr-53-english"
headers = {"Authorization": "Bearer {API_TOKEN}"}  # replace {API_TOKEN} with a Hugging Face access token

def query(filename):
    with open(filename, "rb") as f:
        data = f.read()
    response = requests.post(API_URL, headers=headers, data=data)
    return response.json()
output = query("sample1.flac")
</pre>
|-|JavaScript=
<pre>
const fs = require("fs");  // Node.js >= 18 (for built-in fetch)

async function query(filename) {
    const data = fs.readFileSync(filename);
    const response = await fetch(
        "https://api-inference.huggingface.co/models/jonatasgrosman/wav2vec2-large-xlsr-53-english",
        {
            headers: { Authorization: "Bearer {API_TOKEN}" },
            method: "POST",
            body: data,
        }
    );
    const result = await response.json();
    return result;
}

query("sample1.flac").then((response) => {
    console.log(JSON.stringify(response));
});
</pre>
|-|cURL=
<pre>
curl https://api-inference.huggingface.co/models/jonatasgrosman/wav2vec2-large-xlsr-53-english \
    -X POST \
    --data-binary '@sample1.flac' \
    -H "Authorization: Bearer {API_TOKEN}"

</pre>
</tabber>
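The Inference API responds with JSON. As a small follow-up sketch reusing the query helper from the Python tab (the response shape shown is the usual one for automatic-speech-recognition and is an assumption here):

<pre>
# typical response: {"text": "..."}; error messages come back under an "error" key
output = query("sample1.flac")
print(output.get("text", output))
</pre>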


===Amazon SageMaker===
<tabber>
|-|Automatic Speech Recognition=
=====AWS=====
<pre>
from sagemaker.huggingface import HuggingFaceModel
import sagemaker

role = sagemaker.get_execution_role()
# Hub Model configuration. https://huggingface.co/models
hub = {
'HF_MODEL_ID':'jonatasgrosman/wav2vec2-large-xlsr-53-english',
'HF_TASK':'automatic-speech-recognition'
}


# create Hugging Face Model Class
huggingface_model = HuggingFaceModel(
transformers_version='4.17.0',
pytorch_version='1.10.2',
py_version='py38',
env=hub,
role=role,
)

# deploy model to SageMaker Inference
predictor = huggingface_model.deploy(
initial_instance_count=1, # number of instances
instance_type='ml.m5.xlarge' # ec2 instance type
)

predictor.predict({
'inputs': "sample1.flac"
})
</pre>
=====Local Machine=====
<pre>
from sagemaker.huggingface import HuggingFaceModel
import boto3

iam_client = boto3.client('iam')
role = iam_client.get_role(RoleName='{IAM_ROLE_WITH_SAGEMAKER_PERMISSIONS}')['Role']['Arn']
# Hub Model configuration. https://huggingface.co/models
hub = {
'HF_MODEL_ID':'jonatasgrosman/wav2vec2-large-xlsr-53-english',
'HF_TASK':'automatic-speech-recognition'
}


# create Hugging Face Model Class
huggingface_model = HuggingFaceModel(
transformers_version='4.17.0',
pytorch_version='1.10.2',
py_version='py38',
env=hub,
role=role,
)

# deploy model to SageMaker Inference
predictor = huggingface_model.deploy(
initial_instance_count=1, # number of instances
instance_type='ml.m5.xlarge' # ec2 instance type
)

predictor.predict({
'inputs': "sample1.flac"
})
</pre>
|-|Conversational=
=====AWS=====
<pre>
})
</pre>
|-|Feature Extraction=
=====AWS=====
<pre>
=====Local Machine=====
<pre>
from sagemaker.huggingface import HuggingFaceModel
import boto3
 
iam_client = boto3.client('iam')
role = iam_client.get_role(RoleName='{IAM_ROLE_WITH_SAGEMAKER_PERMISSIONS}')['Role']['Arn']
# Hub Model configuration. https://huggingface.co/models
hub = {
'HF_MODEL_ID':'jonatasgrosman/wav2vec2-large-xlsr-53-english',
'HF_TASK':'feature-extraction'
}
 
# create Hugging Face Model Class
huggingface_model = HuggingFaceModel(
transformers_version='4.17.0',
pytorch_version='1.10.2',
py_version='py38',
env=hub,
role=role,
)
 
# deploy model to SageMaker Inference
predictor = huggingface_model.deploy(
initial_instance_count=1, # number of instances
instance_type='ml.m5.xlarge' # ec2 instance type
)
 
predictor.predict({
'inputs': "sample1.flac"
})
</pre>
|-|Fill-Mask=
=====AWS=====
<pre>
})
</pre>
|-|Image Classification=
=====AWS=====
<pre>
from sagemaker.huggingface import HuggingFaceModel
import sagemaker
 
role = sagemaker.get_execution_role()
# Hub Model configuration. https://huggingface.co/models
hub = {
'HF_MODEL_ID':'jonatasgrosman/wav2vec2-large-xlsr-53-english',
'HF_TASK':'image-classification'
}
 
# create Hugging Face Model Class
huggingface_model = HuggingFaceModel(
transformers_version='4.17.0',
pytorch_version='1.10.2',
py_version='py38',
env=hub,
role=role,
)
 
# deploy model to SageMaker Inference
predictor = huggingface_model.deploy(
initial_instance_count=1, # number of instances
instance_type='ml.m5.xlarge' # ec2 instance type
)
 
predictor.predict({
'inputs': "sample1.flac"
})
</pre>
=====Local Machine=====
})
</pre>
|-|Question Answering=
=====AWS=====
<pre>
})
</pre>
|-|Summarization=
=====AWS=====
<pre>
})
</pre>
|-|Table Question Answering=
=====AWS=====
<pre>
})
</pre>
|-|Text Classification=
=====AWS=====
<pre>
=====Local Machine=====
<pre>
from sagemaker.huggingface import HuggingFaceModel
import boto3
 
iam_client = boto3.client('iam')
role = iam_client.get_role(RoleName='{IAM_ROLE_WITH_SAGEMAKER_PERMISSIONS}')['Role']['Arn']
# Hub Model configuration. https://huggingface.co/models
hub = {
'HF_MODEL_ID':'jonatasgrosman/wav2vec2-large-xlsr-53-english',
'HF_TASK':'text-classification'
}
 
# create Hugging Face Model Class
huggingface_model = HuggingFaceModel(
transformers_version='4.17.0',
pytorch_version='1.10.2',
py_version='py38',
env=hub,
role=role,
)
 
# deploy model to SageMaker Inference
predictor = huggingface_model.deploy(
initial_instance_count=1, # number of instances
instance_type='ml.m5.xlarge' # ec2 instance type
)
 
predictor.predict({
'inputs': "sample1.flac"
})
</pre>
|-|Text Generation=
=====AWS=====
<pre>
})
</pre>
|-|Text2Text Generation=
=====AWS=====
<pre>
})
</pre>
|-|Token Classification=
=====AWS=====
<pre>
})
</pre>
|-|Translation=
=====AWS=====
<pre>
})
</pre>
|-|Zero-Shot Classification=
=====AWS=====
<pre>
})
</pre>
</tabber>
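A deployed endpoint keeps running, and incurring charges, until it is deleted. A minimal clean-up sketch using the standard SageMaker predictor methods (not part of the original snippets):

<pre>
# tear down the model and endpoint created by huggingface_model.deploy(...)
predictor.delete_model()
predictor.delete_endpoint()
</pre>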


===Spaces===
==Training==
===Amazon SageMaker===
<tabber>
|-|Causal Language Modeling=
=====AWS=====
<pre>
huggingface_estimator.fit()
</pre>
|-|Masked Language Modeling=
=====AWS=====
<pre>
huggingface_estimator.fit()
</pre>
|-|Question Answering=
=====AWS=====
<pre>
huggingface_estimator.fit()
</pre>
|-|Summarization=
=====AWS=====
<pre>
huggingface_estimator.fit()
</pre>
|-|Text Classification=
=====AWS=====
<pre>
huggingface_estimator.fit()
</pre>
|-|Token Classification=
=====AWS=====
<pre>
=====Local Machine=====
<pre>
import sagemaker
import boto3
from sagemaker.huggingface import HuggingFace
 
# gets role for executing training job
iam_client = boto3.client('iam')
role = iam_client.get_role(RoleName='{IAM_ROLE_WITH_SAGEMAKER_PERMISSIONS}')['Role']['Arn']
hyperparameters = {
'model_name_or_path':'jonatasgrosman/wav2vec2-large-xlsr-53-english',
'output_dir':'/opt/ml/model'
# add your remaining hyperparameters
# more info here https://github.com/huggingface/transformers/tree/v4.17.0/examples/pytorch/token-classification
}
 
# git configuration to download our fine-tuning script
git_config = {'repo': 'https://github.com/huggingface/transformers.git','branch': 'v4.17.0'}
 
# creates Hugging Face estimator
huggingface_estimator = HuggingFace(
entry_point='run_ner.py',
source_dir='./examples/pytorch/token-classification',
instance_type='ml.p3.2xlarge',
instance_count=1,
role=role,
git_config=git_config,
transformers_version='4.17.0',
pytorch_version='1.10.2',
py_version='py38',
hyperparameters = hyperparameters
)
 
# starting the train job
huggingface_estimator.fit()
</pre>
|-|Translation=
=====AWS=====
<pre>
huggingface_estimator.fit()
</pre>
</tabber>
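Once a training job finishes, the fine-tuned model can be served straight from the estimator. A minimal sketch using the standard SageMaker deploy call (the instance type is an assumption):

<pre>
# deploy the artifacts produced by huggingface_estimator.fit() to a real-time endpoint
predictor = huggingface_estimator.deploy(
    initial_instance_count=1,
    instance_type='ml.m5.xlarge'
)
# predictor.predict(...) can then be used as in the deployment examples above
</pre>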




==Model Card==