NVIDIA Triton Inference Server

{{see also|Model Deployment|artificial intelligence applications}}
== Introduction ==
{{#ev:youtube|1kOaYiNVgFs|400|right}}
NVIDIA continues to invest in Triton's development, incorporating new features and improvements based on user feedback and industry needs. Upcoming advancements may include additional framework support, improved orchestration capabilities, enhanced performance optimization, and more.


[[Category:Model Deployment]] [[Category:Inference]] [[Category:Servers]] [[Category:DevOps]]