Snowflake Arctic is an open large language model released by Snowflake AI Research on April 24, 2024. It uses a Dense Mixture of Experts (Dense-MoE) hybrid architecture with 480 billion total parameters and only 17 billion active parameters per token. The model came in two flavors at launch, Arctic Base and Arctic Instruct, both shipped under the Apache License 2.0 with weights, training code, and a multi-part training cookbook released alongside. Snowflake's pitch was straightforward: instead of chasing general intelligence the way Llama and Mistral did, Arctic was built specifically for what the company called "enterprise intelligence," meaning SQL generation, code synthesis, instruction following, and structured reasoning over business data.
The release landed four weeks after Databricks shipped DBRX, turning April 2024 into a strange dual moment for the data warehouse industry. Both companies suddenly had their own large MoE language models, both were positioned as "open," and both were aimed at the same enterprise customers. Arctic distinguished itself with a much higher total parameter count (480B vs DBRX's 132B), a much lower active parameter count (17B vs 36B), a true Apache 2.0 license rather than DBRX's Databricks Open Model License, and a training run that Snowflake claimed cost less than $2 million on roughly 1,000 H100 GPUs over about three months. The training compute, Snowflake said, was about one-seventeenth that of Llama 3 70B, a model Arctic roughly matched on its enterprise metrics.
Arctic was developed under the technical leadership of Yuxiong He, the DeepSpeed co-creator who joined Snowflake in 2023, and a team that drew heavily on Microsoft Research alumni. The model was distributed through Hugging Face, NVIDIA NIM, AWS, Azure, Together AI, Replicate, Perplexity Labs, and Snowflake's own Cortex AI service. While Arctic never became a household name in the broader chatbot world, it produced something arguably more durable: an open training recipe and an extensive technical cookbook that the broader mixture-of-experts research community continued to cite long after the model itself fell behind on raw capability.
| Attribute | Detail |
|---|---|
| Developer | Snowflake AI Research |
| Initial release | April 24, 2024 |
| Parameters (total) | 480 billion |
| Parameters (active per token) | 17 billion |
| Architecture | Dense-MoE Hybrid Transformer |
| Experts | 128 (top-2 routing) |
| Context length | 4,096 tokens (initial); plans to extend to 32K |
| Training tokens | ~3.5 trillion |
| Training cluster | ~1,000 NVIDIA H100 GPUs |
| Training cost | Less than $2 million (claimed) |
| Training duration | About 3 months |
| Tokenizer | Adopted from Llama 2 |
| License | Apache License 2.0 |
| Variants | Arctic Base, Arctic Instruct |
| Distribution | Hugging Face, NVIDIA NIM, AWS, Azure, Together AI, Replicate, Snowflake Cortex AI |
Snowflake Inc. was founded in 2012 as a cloud data warehouse, and for roughly a decade its public identity was the separation of compute from storage rather than anything resembling AI. The company's pivot toward generative AI accelerated in late 2022 and early 2023 once the ChatGPT shock hit enterprise software, and intensified after the May 2023 acquisition of Neeva, an AI search startup founded by ex-Google executive Sridhar Ramaswamy. Ramaswamy joined Snowflake as senior vice president of AI, then took over as CEO in February 2024 from Frank Slootman. Within roughly two months of becoming CEO, Ramaswamy unveiled both the broader Cortex AI platform and Arctic itself.
The strategic context matters. Snowflake's main competitor, Databricks, had been investing heavily in machine learning since well before the transformer era and had acquired MosaicML for $1.3 billion in 2023. Databricks shipped DBRX on March 27, 2024, an event that left Snowflake conspicuously without an answer. Arctic was Snowflake's answer, although it had been in development for months before the DBRX launch and was not, despite some media framing, a panicked response.
Snowflake AI Research, the group that built Arctic, was assembled rapidly during 2023. Its central figure was Yuxiong He, a former Microsoft Research Principal Researcher who had co-created the DeepSpeed deep learning optimization library used to train many of the largest models of the early transformer era. She joined Snowflake in 2023, bringing several engineers from her DeepSpeed team with her. Other Snowflake AI Research staff came from Meta AI, NVIDIA, and a number of academic labs. Samyam Rajbhandari, another DeepSpeed co-author, served as the model's chief architect. Aurick Qiao acted as the lead author for many of the published cookbooks.
This lineage shaped Arctic's design priorities heavily. The team had spent years optimizing distributed training and inference, so when they sat down to design a model from scratch the natural instinct was to optimize the training-cost and inference-cost frontier rather than to chase the largest possible total parameter count or the highest possible MMLU score. The result is a model whose architectural choices look strange in isolation, but make perfect sense when read as a systems-research paper rather than a capability paper.
From the beginning, Snowflake framed Arctic around the gap between what consumer benchmarks measured (general world knowledge, trivia, creative writing) and what their actual customers wanted to do with large language models inside Snowflake. Customers were running text-to-SQL pipelines, classifying support tickets, summarizing call transcripts, generating reports, and increasingly building retrieval-augmented generation chatbots over their own structured data. None of those tasks rewarded a model for knowing who won the 1968 Best Picture Oscar.
Baris Gultekin, Snowflake's head of product for AI, put it this way at launch: "LLMs in the market do well with world knowledge, but our customers want LLMs to do well with enterprise knowledge." Arctic's training data, evaluation suite, and architectural decisions all reflected that bet. The benchmarks Snowflake highlighted in the launch were Spider (text-to-SQL), HumanEval+ and MBPP+ (code), IFEval (instruction following), and a custom "enterprise intelligence" composite, rather than MMLU, HellaSwag, or GSM8K.
Arctic's defining architectural choice is what Snowflake called the Dense-MoE Hybrid. The model is a standard decoder-only transformer, but every transformer block contains both a dense feedforward network and a parallel residual MoE feedforward network. The dense path is always active for every token. The MoE path routes each token to a small subset of experts, so only a fraction of the expert parameters is active for any given token. The two outputs are summed back into the residual stream. This is structurally different from a pure MoE design like Mixtral 8x7B, where the only feedforward path through each block is the routed one, and there is no dense backbone running in parallel.
The rationale is that the dense path provides a stable, broadly-trained backbone that handles general reasoning competently for every token, while the MoE path adds extra specialized capacity that gets picked up only when the router thinks it would help. In Snowflake's framing, the dense backbone was responsible for general capability and the MoE residual was responsible for enterprise-specific capability. From a parameter accounting perspective, the dense backbone is roughly 10 billion parameters and the MoE residual contributes 128 experts of roughly 3.66 billion parameters each, with top-2 routing. Active parameters per token are 17 billion, total parameters are 480 billion.
| Component | Specification |
|---|---|
| Architecture family | Decoder-only transformer with dense-MoE hybrid feedforward |
| Hidden size | 7,168 |
| Number of transformer layers | 35 |
| Number of attention heads | 56 |
| Dense FFN size | ~2x hidden size (per layer) |
| MoE experts per layer | 128 |
| Routing | Top-2 |
| Expert size | ~3.66 billion parameters each |
| Total parameters | 480 billion |
| Active parameters per token | 17 billion |
| Sequence length (training) | 4,096 |
| Tokenizer | Llama 2 BPE (32K vocabulary) |
| Position encoding | Rotary (RoPE) |
| Normalization | RMSNorm |
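To make the parallel-path wiring concrete, the following is a minimal PyTorch sketch of the hybrid feedforward, with deliberately scaled-down placeholder sizes (Arctic's real values are 7,168 hidden and 128 experts, per the table above). The naive routing loop stands in for the fused expert kernels a production implementation would use, and attention and normalization are omitted.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DenseMoEHybridFFN(nn.Module):
    """Sketch of an Arctic-style hybrid feedforward: a dense FFN and a routed
    MoE FFN run in parallel, and both outputs are summed into the residual
    stream. Sizes are scaled-down placeholders, not Arctic's real config."""

    def __init__(self, hidden=512, n_experts=8, top_k=2, ffn_mult=2):
        super().__init__()
        def ffn():
            return nn.Sequential(
                nn.Linear(hidden, ffn_mult * hidden),
                nn.SiLU(),
                nn.Linear(ffn_mult * hidden, hidden),
            )
        self.dense_ffn = ffn()                       # always-active backbone path
        self.experts = nn.ModuleList(ffn() for _ in range(n_experts))
        self.router = nn.Linear(hidden, n_experts)   # per-token expert scores
        self.top_k = top_k

    def forward(self, x):                            # x: (tokens, hidden)
        dense_out = self.dense_ffn(x)
        scores = self.router(x)                      # (tokens, n_experts)
        top_vals, top_idx = scores.topk(self.top_k, dim=-1)
        gates = F.softmax(top_vals, dim=-1)          # normalize over selected experts
        moe_out = torch.zeros_like(x)
        for k in range(self.top_k):                  # naive routing loop
            for e, expert in enumerate(self.experts):
                mask = top_idx[:, k] == e
                if mask.any():
                    moe_out[mask] += gates[mask, k:k+1] * expert(x[mask])
        return x + dense_out + moe_out               # residual + both parallel paths

block = DenseMoEHybridFFN()
out = block(torch.randn(16, 512))                    # 16 tokens through the hybrid block
```

The key structural point is visible in the final line of `forward`: unlike a pure MoE block, the routed output never replaces the feedforward path; it is an additive residual on top of the always-on dense path.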
The choice of 128 experts is itself notable. Most open MoE models at the time used 8 (Mixtral 8x7B, Mixtral 8x22B) or 16 (DBRX) experts. Snowflake argued that more experts with finer specialization, paired with a small fixed dense backbone, would deliver better quality per active parameter than a smaller number of larger experts, an argument that lined up with parallel research from DeepSeek and others.
| Model | Total params | Active params | Experts | Top-K | Architecture style |
|---|---|---|---|---|---|
| Snowflake Arctic | 480B | 17B | 128 | 2 | Dense backbone + residual MoE |
| DBRX | 132B | 36B | 16 | 4 | Pure MoE |
| Mixtral 8x22B | 141B | 39B | 8 | 2 | Pure MoE |
| Mixtral 8x7B | 47B | 13B | 8 | 2 | Pure MoE |
| Llama 3 70B | 70B | 70B | n/a | n/a | Dense |
| Llama 2 70B | 70B | 70B | n/a | n/a | Dense |
Arctic ends up with by far the highest total-to-active ratio in this peer group, around 28:1, compared to roughly 4:1 for DBRX and Mixtral. That ratio is the source of both the model's most attractive property (cheap inference relative to parameter count) and one of its inconveniences (loading the full weights requires a lot of memory even though only a fraction is active at any moment).
Arctic uses the Llama 2 BPE tokenizer with a 32,000-token vocabulary, a deliberately conservative choice that allowed the team to reuse pre-existing tooling and avoid spending compute on tokenizer-comparison experiments. The training context window was 4,096 tokens, again a conservative call relative to the longer windows that had become standard among frontier models by spring 2024. Snowflake's reasoning was that most enterprise tasks at the time fit comfortably in a 4K window, and that a longer training window would have raised costs without proportionate benefit. The team flagged 32K extension and an attention-sinks-style sliding window as planned follow-ups; inference frameworks at launch could run longer sequences, but quality degraded past the 4K training window.
Arctic was trained on Amazon EC2 P5 instances, each containing eight NVIDIA H100 GPUs. The total cluster size for the main training run was approximately 1,000 H100 GPUs, with the training run completing in roughly three months. Snowflake reported the resulting compute cost at less than $2 million in raw hardware terms, calculated against fewer than 3,000 GPU-weeks. Both numbers should be read with care. They reflect the raw compute of the final training run only, do not include data preparation, ablation experiments, or salaries, and assume internal cost accounting that may differ from what a third party would pay on-demand. They are nonetheless a useful order-of-magnitude figure; by the same accounting, the training compute of Llama 3 70B was roughly seventeen times higher.
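As a back-of-envelope check on those claims, the common 6 × (active parameters) × (training tokens) approximation for training FLOPs reproduces the roughly 17x gap. This is rough arithmetic under standard assumptions, not Snowflake's published accounting.

```python
# Back-of-envelope training-compute comparison using the common
# FLOPs ~= 6 * active_params * training_tokens approximation.
# Rough estimates only, not Snowflake's published accounting.

def train_flops(active_params: float, tokens: float) -> float:
    return 6 * active_params * tokens

arctic = train_flops(17e9, 3.5e12)       # ~3.6e23 FLOPs
llama3_70b = train_flops(70e9, 15e12)    # ~6.3e24 FLOPs (dense: all params active)

print(f"Arctic:      {arctic:.2e} FLOPs")
print(f"Llama 3 70B: {llama3_70b:.2e} FLOPs")
print(f"ratio:       {llama3_70b / arctic:.1f}x")  # ~17.6x, matching the ~1/17 claim
```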
Arctic's pretraining used what Snowflake called a "three-stage data curriculum," a deliberate progression of data mixtures that shifted the model's emphasis as training advanced.
| Stage | Tokens | Emphasis |
|---|---|---|
| Stage 1 | ~1T | Broad web text, books, common-crawl-style data |
| Stage 2 | ~1.5T | Heavier code, math, and reasoning data |
| Stage 3 | ~1T | Enterprise and instruction-following data |
The total training corpus was approximately 3.5 trillion tokens. The curriculum was designed so that the dense backbone would absorb general capability in the early stages while the MoE residual would specialize on enterprise-style data later, a division that the Arctic Cookbook describes in some detail. Specific dataset compositions were not fully disclosed, but the team did publish ablation results showing that the dynamic curriculum produced measurably better enterprise scores than a static, single-mixture training run at matched compute.
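Since the exact mixtures were not disclosed, the sketch below is purely illustrative: it shows one way a token-budgeted, three-stage curriculum can be expressed. The source names and weights are invented for illustration; only the per-stage token budgets follow the table above.

```python
# Hypothetical illustration of a staged data curriculum. Stage token budgets
# follow the table above; source names and mixture weights are invented,
# since Snowflake did not disclose the actual composition.
CURRICULUM = [
    {"stage": 1, "tokens": 1.0e12,
     "mixture": {"web": 0.70, "books": 0.15, "code": 0.10, "other": 0.05}},
    {"stage": 2, "tokens": 1.5e12,
     "mixture": {"web": 0.35, "code": 0.30, "math": 0.20, "reasoning": 0.15}},
    {"stage": 3, "tokens": 1.0e12,
     "mixture": {"sql": 0.25, "code": 0.25, "instructions": 0.30, "web": 0.20}},
]

def stage_for_token(global_token: float) -> dict:
    """Return the active curriculum stage for a position in the token budget."""
    consumed = 0.0
    for stage in CURRICULUM:
        consumed += stage["tokens"]
        if global_token < consumed:
            return stage
    return CURRICULUM[-1]
```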
The Arctic Cookbook details several engineering choices, including custom kernels and communication scheduling, that the team flagged as critical for completing the run within budget.
These choices are arguably more enduring than the Arctic model itself. The kernels and scheduling tricks were upstreamed into DeepSpeed, and the published cookbook became a reference document for several later open MoE projects.
The Arctic Instruct variant was produced through a supervised fine-tuning phase on a curated mix of public and internal instruction datasets, plus a Direct Preference Optimization (DPO) alignment step against a preference dataset. Snowflake did not publish the full instruction mix, but the cookbook describes heavy weighting on enterprise tasks: SQL completion, structured data extraction, reasoning over tabular inputs, and instruction following with multiple constraints. Compared to instruction-tuned consumer models, the Arctic Instruct mix had relatively little creative writing or open-ended chat data.
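Snowflake did not publish its alignment hyperparameters, but the DPO step it describes follows a standard objective. The sketch below is the generic DPO loss (after Rafailov et al., 2023), not Arctic's actual alignment code.

```python
import torch
import torch.nn.functional as F

def dpo_loss(policy_chosen_logps: torch.Tensor,
             policy_rejected_logps: torch.Tensor,
             ref_chosen_logps: torch.Tensor,
             ref_rejected_logps: torch.Tensor,
             beta: float = 0.1) -> torch.Tensor:
    """Standard Direct Preference Optimization loss.

    Each argument is a tensor of summed log-probabilities that the trainable
    policy (or frozen reference model) assigns to the chosen / rejected
    response in a preference pair. beta controls how far the policy may
    drift from the reference."""
    policy_margin = policy_chosen_logps - policy_rejected_logps
    ref_margin = ref_chosen_logps - ref_rejected_logps
    return -F.logsigmoid(beta * (policy_margin - ref_margin)).mean()
```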
Two variants were available at launch. Both were uploaded to Hugging Face under the same Apache 2.0 license.
| Variant | Hugging Face ID | Purpose |
|---|---|---|
| Arctic Base | Snowflake/snowflake-arctic-base | Pretrained foundation model, no instruction tuning |
| Arctic Instruct | Snowflake/snowflake-arctic-instruct | Instruction-tuned and DPO-aligned for enterprise tasks |
No subsequent Arctic LLM successor (Arctic 2, Arctic Next, etc.) was released. Snowflake's later AI strategy moved away from training new foundation models in-house and toward integrating third-party models, while continuing to develop the separate Arctic Embed family of text embedding models, which is unrelated to the Arctic LLM despite the shared name.
Snowflake's launch posts emphasized a composite they called "Enterprise Intelligence," which averaged scores on Spider (SQL), HumanEval+ (Python coding), MBPP+ (Python coding), and IFEval (instruction following). Arctic Instruct's reported scores on the academic and enterprise suite, as published in the Snowflake Arctic blog and on the Hugging Face model card, are summarized below.
| Benchmark | Score | Notes |
|---|---|---|
| MMLU (5-shot) | 67.3 | General knowledge, where Arctic underperforms peers |
| GSM8K (8-shot) | 74.2 | Grade-school math word problems |
| HumanEval+ | 64.3 | Python code synthesis (extended test set) |
| MBPP+ | 60.6 | Python programming problems (extended test set) |
| IFEval | 52.4 | Instruction following with verifiable constraints |
| Spider | 79.0 | Cross-domain text-to-SQL |
| HellaSwag | ~76 | Commonsense reasoning |
| ARC-Challenge | ~57 | Multiple-choice science reasoning |
| WinoGrande | ~73 | Pronoun-resolution commonsense |
| TruthfulQA | ~57 | Truthfulness on adversarial questions |
The main story in this table is not the absolute scores. It is the gap between MMLU (where Arctic scored about 12 points below Llama 3 70B) and Spider/HumanEval+/IFEval (where Arctic was roughly competitive). Snowflake's argument was that the gap on MMLU reflected a deliberate trade-off, since Arctic was not trained heavily on world-knowledge data, and that the rough parity on enterprise tasks was the part their customers should care about.
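Taking the launch materials' description literally, the enterprise composite is an average of the four component scores. The minimal computation below uses the published Arctic numbers and assumes an unweighted mean, which Snowflake did not explicitly confirm.

```python
# Unweighted mean of the four enterprise benchmarks, per the launch post's
# description. Whether Snowflake weighted the components is not confirmed.
enterprise = {"Spider": 79.0, "HumanEval+": 64.3, "MBPP+": 60.6, "IFEval": 52.4}
composite = sum(enterprise.values()) / len(enterprise)
print(f"Enterprise composite: {composite:.1f}")  # 64.1
```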
| Benchmark | Arctic | DBRX Instruct | Mixtral 8x22B | Mixtral 8x7B | Llama 3 70B | Llama 2 70B |
|---|---|---|---|---|---|---|
| MMLU | 67.3 | 73.7 | 77.8 | 70.6 | 79.5 | 68.9 |
| GSM8K | 74.2 | 66.9 | 88.4 | 58.4 | 93.0 | 56.8 |
| HumanEval+ | 64.3 | 61.0 | 56.7 | 36.6 | 76.2 | 32.3 |
| MBPP+ | 60.6 | 56.1 | 53.7 | 40.9 | 69.6 | 41.6 |
| Spider | 79.0 | 76.3 | 79.2 | 70.4 | 80.2 | 56.4 |
| IFEval | 52.4 | 27.6 | n/a | n/a | n/a | n/a |
Note that the IFEval column is sparse because not every comparison model published this benchmark on a comparable evaluation harness at the time of release. On Snowflake's own enterprise composite, Arctic claimed first place against DBRX, Mixtral 8x22B, Llama 2 70B, and Llama 3 8B, and effective parity with Llama 3 70B at one-seventeenth the training compute.
The more straightforward win for Arctic was inference economics. Because only 17 billion of the 480 billion parameters fire per token, the model's per-token compute cost is comparable to that of a 17-billion-parameter dense model rather than a 70-billion-parameter dense model, while the total knowledge stored in the weights is far higher. Snowflake reported that Arctic activated roughly 50% fewer parameters than DBRX and roughly 75% fewer than Llama 3 70B during inference, which translated to lower latency and lower cost per query at equal hardware. The catch is that the full 480 billion parameters still need to be loaded somewhere accessible, which in practice meant a single 8x H100 node (an AWS p5.48xlarge or equivalent) at FP8 or FP6 quantization. For settings with strict memory budgets, that is a less attractive trade than the active-parameter number alone suggests.
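A back-of-envelope calculation makes that trade concrete. The bytes-per-parameter figures below are nominal, and the sketch ignores KV cache, activations, and framework overhead.

```python
# Rough memory footprint of the full 480B weights at different precisions,
# versus the aggregate HBM of one 8x H100 (80 GB) node. Ignores KV cache,
# activations, and framework overhead.
TOTAL_PARAMS = 480e9
ACTIVE_PARAMS = 17e9
NODE_HBM_GB = 8 * 80  # 640 GB on an 8x H100 node

for name, bytes_per_param in [("FP16", 2.0), ("FP8", 1.0), ("FP6", 0.75)]:
    weights_gb = TOTAL_PARAMS * bytes_per_param / 1e9
    fits = "fits" if weights_gb <= NODE_HBM_GB else "does NOT fit"
    print(f"{name}: ~{weights_gb:.0f} GB of weights -> {fits} in {NODE_HBM_GB} GB")

# Per-token compute scales with active parameters only:
print(f"active fraction: {ACTIVE_PARAMS / TOTAL_PARAMS:.1%}")  # ~3.5%
```

The output matches the deployment guidance in the text: FP16 weights (~960 GB) exceed a single node, while FP8 (~480 GB) and FP6 (~360 GB) fit, which is why the single 8x H100 recommendation comes with quantization attached.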
Arctic was released under the Apache License 2.0, with no usage restrictions, no acceptable use policy carve-outs, and no commercial gating. The weights were uploaded directly to Hugging Face. The training code, the cookbook series, and the inference optimizations were all published in the Snowflake-Labs/snowflake-arctic GitHub repository under the same license. Snowflake's marketing leaned heavily on the "truly open" framing, with explicit, repeated comparisons to DBRX, Llama, and other contemporary releases.
That framing had a real basis. DBRX shipped under the Databricks Open Model License (DOML), which permits free commercial use up to a certain user threshold but is not OSI-approved as open source and contains attribution requirements that Apache 2.0 does not. Llama 2 and Llama 3 shipped under the Meta Llama Community License, which restricts use by very large companies (700 million monthly active users or more) and includes an acceptable use policy. By contrast, Arctic's Apache 2.0 release imposed essentially no constraints. A startup, a research lab, or a Fortune 100 enterprise could all download the weights, fine-tune them, and embed them in a product without notifying Snowflake.
| Model | License | OSI-approved | Commercial use | Major restrictions |
|---|---|---|---|---|
| Snowflake Arctic | Apache 2.0 | Yes | Unrestricted | None notable |
| DBRX | Databricks Open Model License | No | Permitted with terms | Attribution and naming requirements |
| Llama 3 | Meta Llama 3 Community License | No | Permitted up to 700M MAU | Acceptable use policy, MAU cap |
| Mixtral 8x7B / 8x22B | Apache 2.0 | Yes | Unrestricted | None notable |
| Falcon 180B | TII Falcon LLM License | No | Permitted with conditions | Hosting restrictions |
| Grok-1 (March 2024) | Apache 2.0 | Yes | Unrestricted | None notable |
In that group, Arctic was joined by Mixtral and Grok-1 as the genuinely permissively licensed models. The cookbook publication, including ablation results and architectural rationale, was an additional layer of openness that even some Apache 2.0 releases did not match.
Arctic Base and Arctic Instruct were uploaded to Hugging Face on launch day at huggingface.co/Snowflake/snowflake-arctic-base and huggingface.co/Snowflake/snowflake-arctic-instruct. The model card included loading instructions, a small inference example, recommended hardware (a single 8xH100 instance with FP8 or FP6 quantization via DeepSpeed), and links to the cookbook. The repository required trust_remote_code=True because Arctic's hybrid feedforward block was not supported by stock transformers at release.
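A minimal loading sketch in the spirit of the model card's example follows; the exact arguments on the card may differ, and the prompt is a placeholder.

```python
# Minimal loading sketch following the model card's pattern. In practice the
# card recommended DeepSpeed-based FP8/FP6 quantization on an 8x H100 node;
# this generic transformers load is illustrative only.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Snowflake/snowflake-arctic-instruct"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    trust_remote_code=True,  # hybrid dense-MoE block not in stock transformers at release
    device_map="auto",
    torch_dtype="auto",
)

inputs = tokenizer("Write a SQL query that ...", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```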
Arctic was added to Snowflake Cortex AI, the company's managed inference platform inside the Snowflake data cloud, as a first-party model alongside hosted versions of Llama 2 and several Mistral models. Customers could call Arctic via SQL functions like SNOWFLAKE.CORTEX.COMPLETE('snowflake-arctic', '...'). Cortex AI's wider model selection eventually grew to include Llama 3, Llama 4, Anthropic Claude (via partnership), OpenAI GPT models, Reka Flash and Reka Core, and Mistral Large. Within that catalogue Arctic served as the default option for SQL-heavy workloads.
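That SQL surface can be driven from any Snowflake client. Below is a minimal sketch using the official snowflake-connector-python package, with placeholder connection parameters.

```python
# Minimal sketch of calling Cortex's COMPLETE function from Python via the
# official Snowflake connector. Connection parameters are placeholders.
import snowflake.connector

conn = snowflake.connector.connect(
    account="your_account", user="your_user", password="...",
    warehouse="your_wh", database="your_db", schema="public",
)
cur = conn.cursor()
cur.execute(
    "SELECT SNOWFLAKE.CORTEX.COMPLETE('snowflake-arctic', %s)",
    ("Summarize the following support ticket: ...",),
)
print(cur.fetchone()[0])
cur.close()
conn.close()
```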
The launch partner list was unusually broad for an open model, spanning Hugging Face, NVIDIA NIM, AWS, Microsoft Azure, Together AI, Replicate, Perplexity Labs, and Snowflake's own Cortex AI.
The NVIDIA NIM packaging was significant, since it made Arctic deployable as a containerized microservice in any environment that supported NIM, and provided the most heavily optimized inference path outside of Cortex itself.
The immediate reception was generally positive, framed within the context of the open-model arms race that had been going on through early 2024.
VentureBeat's Carl Franzen, in coverage that ran on launch day, called Arctic the "most open" of the recent enterprise-grade releases and emphasized the Apache 2.0 license and the cookbook publication. TechCrunch's coverage focused on the comparative-cost angle, highlighting the under-$2-million training claim and putting Arctic in dialogue with both DBRX and Mixtral 8x22B. The Register and HPCwire wrote up the architecture details, with HPCwire noting that the 128-expert design was the largest expert count among open models at that point. Constellation Research analyst Doug Henschen, quoted in TechTarget, said it was "important for data platforms to cover all bases" and treated Arctic as a logical extension of Snowflake's broader strategy.
Critical reception centered on three themes:
First, the niche framing. Several reviewers, including Kyle Wiggers at TechCrunch, pointed out that "enterprise intelligence" was effectively Snowflake-defined and that the deliberate underweighting of MMLU and other general benchmarks left it unclear how the model would perform on tasks customers had not anticipated. The argument was not that Arctic was bad, but that any deviation from the customer's expected workload mix would expose the trade-offs.
Second, the scale of the model relative to inference economics. While 17B active parameters made inference cheap, 480B total parameters made deployment expensive in raw memory terms. For customers without an 8xH100 server already on hand, the practical hardware footprint was higher than DBRX's or Mixtral 8x22B's, despite Arctic being faster per token.
Third, the gap between published benchmarks and external verification. Snowflake's enterprise composite was a custom metric. Independent third-party evaluations on standard suites took time to arrive, and once they did the picture was somewhat more mixed than the launch materials suggested. On Hugging Face's Open LLM Leaderboard at the time, Arctic Instruct landed in the middle of the open-model pack rather than at the top, a fact that several Twitter critics raised within a few days of launch.
Nonetheless, the model was widely seen as a credible technical achievement, and the cookbook was the most-praised single artifact of the release. Researchers at multiple labs cited the Arctic Cookbook through 2024 and 2025 in subsequent papers on MoE training stability, expert load balancing, and dense-MoE hybridization.
The table below sets Arctic against the three open or quasi-open models it was most often benchmarked against during the spring 2024 release cycle.
| Property | Snowflake Arctic | DBRX Instruct | Mixtral 8x22B | Llama 3 70B Instruct |
|---|---|---|---|---|
| Developer | Snowflake AI Research | Mosaic / Databricks | Mistral AI | Meta AI |
| Released | April 24, 2024 | March 27, 2024 | April 17, 2024 | April 18, 2024 |
| License | Apache 2.0 | Databricks Open Model License | Apache 2.0 | Meta Llama 3 Community License |
| Architecture | Dense + residual MoE | Pure MoE | Pure MoE | Dense |
| Total parameters | 480B | 132B | 141B | 70B |
| Active parameters | 17B | 36B | 39B | 70B |
| Experts | 128 (top-2) | 16 (top-4) | 8 (top-2) | n/a |
| Context (initial) | 4K | 32K | 64K | 8K |
| Tokenizer | Llama 2 BPE | GPT-4 tiktoken | Mistral SentencePiece | Llama 3 tiktoken |
| Training tokens | ~3.5T | ~12T | not disclosed | ~15T |
| Training cost (claimed) | <$2M | $10M+ (reported) | not disclosed | tens of millions |
| Focus | Enterprise (SQL, code, IF) | General + enterprise | General | General |
| MMLU | 67.3 | 73.7 | 77.8 | 79.5 |
| HumanEval+ | 64.3 | 61.0 | 56.7 | 76.2 |
| Spider (SQL) | 79.0 | 76.3 | 79.2 | 80.2 |
For SQL and instruction-following, Arctic was competitive. For general knowledge and math, it was clearly behind Llama 3 70B and Mixtral 8x22B. For inference cost at scale, it had the most attractive active-parameter number in the group, although its memory footprint was the largest. The mix made Arctic a sensible default inside Snowflake Cortex for SQL-heavy queries and a less obvious choice for everything else.
Despite the sometimes-rumored "Arctic 2," Snowflake never released a successor large language model under the Arctic name. The Arctic Embed family of text embedding models, including Arctic Embed v1 (April 2024), Arctic Embed M v1.5 (July 2024), and the multilingual Arctic Embed 2.0 (December 2024), continued to be developed and shipped, but those are dense BERT-style retrieval models, not generative LLMs. The decision not to follow up Arctic with a new generation of text-generation models reflected a strategic shift inside Snowflake away from in-house frontier model development and toward a model-broker positioning, where Cortex AI hosts third-party models from Anthropic, Meta, Mistral, Reka, and OpenAI.
In May 2024, less than a month after Arctic's launch, Bloomberg reported that Snowflake was in talks to acquire Reka AI for more than $1 billion. The rumored deal would have given Snowflake an in-house multimodal model team and a follow-up path beyond Arctic. The talks broke down by late May 2024 with no transaction. Snowflake instead announced a deep partnership with Reka, integrating Reka's Flash and Core multimodal models into Cortex AI, and Snowflake Ventures continued to hold a stake in the company through earlier investment rounds. The episode is often read as the moment Snowflake decided not to compete on training frontier models directly and to lean into the partnership-and-platform posture that has characterized Cortex AI ever since.
Following the Reka decision, Snowflake's AI roadmap placed less emphasis on building new foundation models and more on the integration layer. Cortex AI added Anthropic Claude under partnership, then OpenAI GPT models, then Llama 3, then Llama 4, then a steady stream of additional Mistral, Reka, and DeepSeek models. The platform-level features (Cortex Search, Cortex Analyst, Cortex Agents, Snowflake Intelligence) became the differentiation rather than any single hosted model. Arctic continued to serve as the default for SQL-heavy operations but was no longer marketed as Snowflake's flagship technical artifact.
The most durable contributions of the Arctic project were arguably not the model weights but the published artifacts: the multi-part training cookbook with its ablation results and architectural rationale, the Apache 2.0 training code, and the kernels and scheduling optimizations upstreamed into DeepSpeed.
Later open MoE projects, including DeepSeek V3, Qwen MoE variants, and a number of academic models, have cited Arctic's design choices and training tricks. The model itself fell behind on raw capability fairly quickly as Llama 3.1, Llama 3.3, GPT-4o, and frontier MoE models from DeepSeek raised the bar through 2024 and 2025, but the engineering documentation outlasted the model.
Viewed in 2026 retrospect, the spring 2024 cluster of releases (DBRX, Mixtral 8x22B, Llama 3, Arctic) marked the last moment when American enterprise software companies seriously competed on training their own frontier-quality LLMs from scratch. By late 2024 the cost frontier had moved past what was feasible without sustained billions of dollars in compute, and the open-model conversation shifted toward Chinese and frontier-lab releases (DeepSeek V2, V3, Qwen2, Mistral Large 2, Llama 3.3). Arctic was both an exit hatch and a marker. Snowflake demonstrated that a focused team could build a competitive enterprise model on a small budget, then quietly acknowledged that doing it again with each generation was not the company's comparative advantage.