Page history
24 February 2023
no edit summary (+42)
Alpha5 moved page LLaMA to LLaMA/Model Card without leaving a redirect
m →Metrics (+3)
no edit summary (+173)
→Quantitative analysis (+233)
→Quantitative analysis (+29)
no edit summary (+159)
Created page with "==Model details== Organization developing the model The FAIR team of Meta AI. Model date LLaMA was trained between December. 2022 and Feb. 2023. Model version This is version 1 of the model. Model type LLaMA is an auto-regressive language model, based on the transformer architecture. The model comes in different sizes: 7B, 13B, 33B and 65B parameters. Paper or resources for more information More information can be found in the paper “LLaMA, Open and Efficient Found..." (+6,353)