Mercor is an AI-powered hiring and talent marketplace platform founded in 2023 and headquartered in San Francisco, California. The company connects domain experts, including engineers, lawyers, doctors, bankers, and journalists, with frontier AI laboratories that need specialized human intelligence to train and evaluate large language models. By October 2025, Mercor had reached a $10 billion valuation following a $350 million Series C funding round, making its three co-founders the youngest self-made billionaires on record at age 22.
The platform operates on two interlocking sides: a proprietary AI interviewer that screens and assesses talent at scale, and a marketplace that routes those vetted experts to AI lab clients for reinforcement learning from human feedback (RLHF), data annotation, and model evaluation work. Customers include OpenAI, Anthropic, Meta, Google, and Microsoft.
Mercor was co-founded by Brendan Foody (CEO), Adarsh Hiremath (CTO), and Surya Midha (COO, later Chairman). All three grew up in the San Francisco Bay Area as children of software engineers. Hiremath and Midha met at age ten through elementary school debate tournaments. The two later attended Bellarmine College Preparatory in San Jose, where Foody joined them and all three competed together on the school's nationally ranked Speech and Debate team. Their principal later described them as "some of Bellarmine's top debaters in its history and captains of our Speech and Debate team."
After graduating high school, Hiremath enrolled at Harvard University while Foody and Midha both attended Georgetown University in Washington, D.C. During their sophomore year in 2023, the three dropped out to launch Mercor. In March 2024, they received $100,000 Thiel Fellowships, a two-year program founded by investor Peter Thiel that provides funding and professional network access to promising entrepreneurs who leave college.
Mercor was incorporated in January 2023. The company initially focused on connecting freelance software engineers in India with U.S.-based companies, a model the founders developed after noticing that highly capable engineers abroad were not receiving job opportunities commensurate with their skills. CEO Foody described the founding insight: "The company really grew out of working with phenomenally talented people all around the world, particularly in India at the time, where we were amazed that these talented people weren't getting job opportunities."
The early product used a proprietary language model built on OpenAI technology to screen candidates through automated 20-minute video interviews, then matched qualified candidates with employers. The company bootstrapped to seven-figure annual revenue while still operating from Harvard and Georgetown dorm rooms, before raising outside capital.
As demand from AI labs grew, Mercor shifted its primary focus toward the AI training data market, recruiting domain experts across a much wider range of professional fields and routing them to frontier labs needing high-quality human judgment for model training and evaluation.
The company attracted notable individual investors early in its development, including Peter Thiel, Twitter co-founder Jack Dorsey, Quora CEO and OpenAI board member Adam D'Angelo, and former U.S. Treasury Secretary Larry Summers. These relationships predated the formal Series A round and reflected the broader interest among AI-adjacent investors in the emerging market for expert human data.
Mercor raised a $3.6 million seed round in January 2024, led by General Catalyst. At this stage, the company had evaluated approximately 300,000 candidates, served talent across more than 25 countries, and was generating early revenue from employer matching fees.
In September 2024, Mercor raised $32 million in a Series A round led by Benchmark, with Victor Lazarte, Benchmark's newest partner, joining Mercor's board. The round valued the company at approximately $250 million. Other investors included Peter Thiel, Jack Dorsey, Adam D'Angelo, and Larry Summers. At the time of the round, the company reported 50 percent month-over-month revenue growth.
In February 2025, Mercor raised $100 million in a Series B round led by Felicis Ventures, with participation from Benchmark, General Catalyst, DST Global, and Menlo Ventures. The post-money valuation reached $2 billion, an 8x step-up from the Series A valuation five months earlier. The company reported approximately $75 million in annualized recurring revenue at the time of the round, representing a valuation multiple of roughly 27x ARR.
The round coincided with Mercor's emergence as a primary destination for AI labs seeking specialized domain experts, as the company announced it was working with the top five AI laboratories globally. Around this time, Mercor also recruited the former head of Human Data Operations at OpenAI and the former head of Growth at Scale AI.
In October 2025, Mercor raised $350 million in a Series C round at a $10 billion valuation, a 5x increase from its Series B valuation eight months prior. The round was again led by Felicis Ventures, with participation from Benchmark, General Catalyst, and Robinhood Ventures as a new investor. At the time of the announcement, Mercor was paying out more than $1.5 million per day to its contractor network and was on track for $500 million in annualized revenue. By the end of 2025, revenue had reached approximately $760 million annualized, growing to roughly $1 billion annualized by early 2026.
The rapid valuation increase from $250 million to $10 billion in approximately 13 months, combined with the founders' ages, led to widespread coverage of Foody, Hiremath, and Midha as the youngest self-made billionaires on record.
In May 2025, Mercor hired Sundeep Jain as its first president. Jain previously served as chief product officer and SVP of Engineering at Uber and VP of Product Management at Google, where he oversaw Search Ads.
Mercor operates as a two-sided talent marketplace. On the supply side, job seekers and domain experts register, complete the platform's AI-led interview, and receive a verified profile that can be matched to available opportunities. On the demand side, employers and AI laboratory clients upload job descriptions or project briefs and receive ranked shortlists of candidates. The platform manages contracts, invoicing, and payroll within a single interface.
The company generates revenue primarily through hourly finder fees and a roughly 30 percent matching fee on contractor placements. The take rate on gross contractor payments runs approximately 30 to 40 percent. Contractors on the platform earn an average of more than $85 per hour, with domain expert roles commanding higher rates.
As of October 2025, Mercor had screened more than 300,000 professionals, maintained an active roster of approximately 30,000 contractors, and managed around $1.5 million in daily contractor payouts.
Mercor's AI interviewer system, called Monty, conducts approximately 10,000 interviews per day across hundreds of job categories. Each interview lasts roughly 15 to 20 minutes, equivalent to one interview processed approximately every nine seconds at full throughput.
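The published figures can be sanity-checked with a back-of-envelope calculation. The sketch below assumes interviews are spread uniformly across the day, which is an idealization (the article notes volume actually peaks around noon Pacific):

```python
# Back-of-envelope check of the published interview throughput figures.
# Assumes uniform load across 24 hours (an idealization for illustration).

SECONDS_PER_DAY = 24 * 60 * 60
interviews_per_day = 10_000
avg_duration_min = 17.5  # midpoint of the 15-20 minute range

# One interview completed roughly every 8.6 seconds at uniform load.
seconds_between_interviews = SECONDS_PER_DAY / interviews_per_day

# Average number of sessions running at once: total interview-minutes
# per day divided by the minutes in a day.
avg_concurrent_sessions = interviews_per_day * avg_duration_min / (24 * 60)

print(round(seconds_between_interviews, 1))  # 8.6
print(round(avg_concurrent_sessions))        # 122
```

The resulting average concurrency of roughly 120 sessions is consistent with the container counts reported for the production system.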
Monty uses a conversational voice format in which candidates interact with an AI interviewer over video. The system asks technical questions, adapts follow-up questions based on candidate responses, and can push candidates to solve complex problems live. About half of each interview focuses on work experience and half on case studies or skill assessments relevant to the role. For technical roles, coding problems are generated dynamically based on the candidate's resume and the live conversation, with language-specific starter code provided.
Technical architecture. Each Monty session runs in an isolated container on Modal, a cloud compute platform. The system manages roughly 200 containers at peak hours (around noon Pacific time) and maintains an 80-container floor overnight. To keep session startup latency under 200 milliseconds, Mercor uses a warm pool strategy: Modal keeps about 30 containers pre-booted at the compute level, with a background job running every five minutes to keep approximately 10 fully initialized containers ready.
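The second tier of the warm-pool strategy, a periodic job that tops up a set of fully initialized sessions, can be sketched as follows. This is a minimal illustration of the pattern only; the class and method names are hypothetical and do not reflect Mercor's actual code or Modal's API:

```python
# Minimal sketch of a warm-pool top-up job (illustrative; names are
# hypothetical, not Mercor's actual code or Modal's API).
import threading

class WarmPool:
    """Keeps a target number of fully initialized sessions ready to claim."""

    def __init__(self, target_size: int, init_fn):
        self.target_size = target_size
        self.init_fn = init_fn          # expensive per-session setup
        self.ready = []                 # initialized, unclaimed sessions
        self.lock = threading.Lock()

    def top_up(self):
        """Background job: run periodically (e.g. every five minutes)."""
        with self.lock:
            deficit = self.target_size - len(self.ready)
        for _ in range(max(0, deficit)):
            session = self.init_fn()    # boot container, load models, etc.
            with self.lock:
                self.ready.append(session)

    def claim(self):
        """Hand a pre-warmed session to an incoming candidate, if any."""
        with self.lock:
            return self.ready.pop() if self.ready else None

pool = WarmPool(target_size=10, init_fn=lambda: object())
pool.top_up()
print(len(pool.ready))  # 10
```

A claim that arrives when the pool is empty would fall back to the slower path of booting a fresh container, which is what the sub-200-millisecond target is designed to avoid.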
Audio and WebRTC handling is managed through Daily.co, which routes peer connections and records sessions to S3 via the Pipecat open-source voice AI framework. The speech pipeline is fully streaming end-to-end, with automatic failover across both commercial APIs and open-source models at each stage, covering speech recognition, language model inference, and text-to-speech.
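The per-stage failover described above follows a common pattern: attempt providers in priority order and fall back on failure so that an individual outage never surfaces to the candidate. A minimal sketch, with placeholder provider functions standing in for the real commercial and open-source backends:

```python
# Sketch of per-stage failover: try providers in priority order and fall
# back on failure. The provider functions are placeholders, not the
# actual vendors or APIs used in production.

def with_failover(providers, *args, **kwargs):
    """Call each provider in order; return the first successful result."""
    last_error = None
    for provider in providers:
        try:
            return provider(*args, **kwargs)
        except Exception as err:   # a real system would catch narrower errors
            last_error = err       # log and fall through to the next provider
    raise RuntimeError("all providers failed") from last_error

def commercial_stt(audio: bytes) -> str:
    raise TimeoutError("simulated outage")

def open_source_stt(audio: bytes) -> str:
    return "transcribed text"

# The commercial API fails, so the open-source model handles the request.
transcript = with_failover([commercial_stt, open_source_stt], b"audio-bytes")
print(transcript)  # transcribed text
```

Because the same wrapper applies at every stage (speech recognition, LLM inference, text-to-speech), a single vendor outage degrades at most one stage's latency rather than dropping the session.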
Turn detection uses the Smart-turn-v3 ONNX model running on Modal at approximately 150 milliseconds latency at the 50th percentile. Total round-trip latency from candidate silence to Monty's first audio response has a median of approximately 700 milliseconds, with a production threshold of 900 milliseconds set to balance natural conversation feel against perceived lag.
The system uses a 400-millisecond aggregation timeout to capture brief acknowledgments such as "yes" or "uh-huh" that lack trailing silence, and an LLM-based classifier to detect and discard turns where candidates repeat Monty's own output (echo cancellation). A 120-millisecond minimum floor prevents mid-sentence triggering.
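Combining the thresholds above yields a simple turn-finalization rule. The timing constants come from the text; the decision structure below is a simplified assumption for illustration, not Mercor's actual logic:

```python
# Illustrative turn-finalization rule combining the thresholds described
# above. Constants are from the text; the control flow is an assumption.

MIN_SPEECH_MS = 120           # floor: ignore blips shorter than this
AGGREGATION_TIMEOUT_MS = 400  # finalize short utterances lacking trailing silence

def should_finalize_turn(speech_ms: int, silence_ms: int,
                         model_says_done: bool) -> bool:
    """Decide whether the candidate's turn is over."""
    if speech_ms < MIN_SPEECH_MS:
        return False                      # too short to be a real turn
    if model_says_done:
        return True                       # turn-detection model is confident
    # Brief acknowledgments ("yes", "uh-huh") may never produce the
    # trailing silence the model expects; time them out instead.
    return silence_ms >= AGGREGATION_TIMEOUT_MS

print(should_finalize_turn(90, 500, False))   # False: under the 120 ms floor
print(should_finalize_turn(300, 450, False))  # True: aggregation timeout hit
print(should_finalize_turn(5000, 150, True))  # True: model detected turn end
```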
Assessment categories. Rather than maintaining separate assessment rubrics for every job title, Monty clusters positions by underlying skill type. The primary categories are the Domain Expert interview (approximately 2,800 sessions per day), the Language assessment (approximately 750 sessions per day), the Code assessment (approximately 600 sessions per day), and the Professional assessment (approximately 380 sessions per day). Together these categories cover about 90 percent of total interview volume.
Monty scores candidates across dozens of parameters covering communication, reasoning, and technical clarity, generating a structured verified profile. Interviews are personalized using candidate resume data processed before the session begins, which determines question difficulty, probe depth, and which topic areas to skip.
More than 50 percent of job offers sent through the platform are proactive, going to candidates who previously completed an interview but had not applied to that specific role. The system's resume and profile data feed a semantic search layer that allows employers to find qualified candidates through natural-language descriptions of what they need.
Deployment reliability. Mercor uses a blue-green deployment strategy that rolls configuration changes across the interview system over approximately one week to catch regressions before they affect thousands of sessions. Automatic failover is built into each stage of the speech pipeline so that individual service outages remain invisible to candidates.
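One common way to implement such a week-long staged rollout is to hash each session identifier into a stable bucket and ramp the share of sessions on the new configuration over time. The sketch below is an illustration of that general technique under a linear ramp schedule, which is an assumption; the source does not describe Mercor's actual rollout mechanics:

```python
# Sketch of a week-long gradual config rollout: hash each session id to a
# stable bucket and ramp the share of sessions on the new config over
# seven days. The linear schedule is an assumption for illustration.
import hashlib

def rollout_fraction(hours_since_start: float, ramp_hours: float = 168.0) -> float:
    """Linear ramp from 0% to 100% over one week (168 hours)."""
    return min(1.0, max(0.0, hours_since_start / ramp_hours))

def uses_new_config(session_id: str, hours_since_start: float) -> bool:
    """Stable assignment: a session keeps its bucket across retries."""
    digest = hashlib.sha256(session_id.encode()).digest()
    bucket = int.from_bytes(digest[:8], "big") / 2**64  # uniform in [0, 1)
    return bucket < rollout_fraction(hours_since_start)

# One day in (~14% rollout), only low-bucket sessions see the new config.
sample = [uses_new_config(f"session-{i}", 24.0) for i in range(1000)]
print(sum(sample) / 1000)  # roughly 0.14
```

Because the bucket is derived from a hash rather than a random draw, a regression observed in the canary population can be traced to a stable, reproducible set of sessions.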
A significant portion of Mercor's revenue since 2024 has come from connecting frontier AI labs with domain experts for AI training tasks, particularly reinforcement learning from human feedback (RLHF) and direct preference optimization (DPO). These tasks require human evaluators with genuine expertise in specific fields, not generic crowdworkers, because the quality of model behavior in specialized domains depends directly on the quality of human judgment used in training.
Mercor recruits and vets specialists including software engineers, medical professionals, lawyers, bankers, financial analysts, and journalists, then routes them to AI lab clients through a managed project workflow. Individual engagements can involve thousands of contractors working simultaneously; the company has noted single projects involving more than 5,000 contractors. Contractor compensation for specialist roles is significantly higher than typical annotation platforms, with the average across the network exceeding $85 per hour.
AI lab clients pay Mercor a finder fee and matching rate, giving the company a take rate of roughly 30 to 40 percent on gross contractor payments. The remaining 60 to 70 percent flows to contractors.
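The fee split described above can be illustrated with a worked example. The 35 percent take rate below is simply the midpoint of the reported 30 to 40 percent range, and the hourly figures are hypothetical:

```python
# Worked example of the fee split described above. The 35% take rate is
# the midpoint of the reported 30-40% range; all figures are illustrative.

def split_payment(gross: float, take_rate: float = 0.35):
    """Return (platform_revenue, contractor_payout) for a gross payment."""
    platform = gross * take_rate
    contractor = gross - platform
    return platform, contractor

# A contractor billing 40 hours at $100/hour grosses $4,000:
platform, contractor = split_payment(40 * 100.0)
print(platform)    # 1400.0
print(contractor)  # 2600.0
```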
In September 2025, Mercor launched the AI Productivity Index (APEX), a research benchmark studying how effectively different AI models perform tasks in business domains including consulting, investment banking, and law. In January 2026, the company released APEX-Agents, a related leaderboard focused specifically on long-horizon agentic professional workflows. The APEX-Agents report found that frontier AI agents complete fewer than 25 percent of professional tasks on their first attempt and approximately 40 percent with up to eight retry attempts. The benchmarks draw on Mercor's database of domain expert performance as a reference standard for what human professionals accomplish on comparable tasks.
Mercor's customer base for AI training and evaluation work includes the major frontier AI laboratories. The company has publicly confirmed OpenAI and Anthropic as clients and has referenced working with "the world's top five AI labs." Documented customers include OpenAI, Anthropic, Meta, Google, and Microsoft.
Meta paused all work with Mercor in March 2026 following a security incident. Beyond the AI labs, Mercor also works with enterprise employers across the technology, finance, healthcare, and legal sectors for conventional talent acquisition.
Mercor operates in the same general market as Scale AI and Surge AI but takes a meaningfully different position along the quality-versus-volume axis.
| Feature | Mercor | Scale AI | Surge AI |
|---|---|---|---|
| Founded | 2023 | 2016 | 2020 |
| Funding status | $484M raised, $10B valuation (2025) | $15.9B raised; Meta holds ~49% stake | Bootstrapped; ~$1.2B valuation |
| Primary focus | Domain expert sourcing and AI training data | High-volume data labeling at enterprise scale | High-skill annotation with verified credentialed experts |
| Contractor network | 30,000+ active; 300,000+ screened | ~300,000 contractors | Smaller pool, heavily credentialed |
| Average contractor pay | $85+/hour | Not publicly disclosed | $200-$500/hour for medical doctors, $150-$350/hour for PhDs |
| Take rate | ~30-40% | Not publicly disclosed | Not publicly disclosed |
| AI interviewing | Proprietary (Monty), 10,000 interviews/day | No | No |
| Key clients | OpenAI, Anthropic, Meta, Google, Microsoft | Meta, Microsoft, OpenAI, General Motors | OpenAI, Google, Microsoft, Meta, Anthropic |
| Revenue (2025) | ~$500-760M ARR | Not publicly disclosed | ~$1B ARR |
Scale AI was founded in 2016 by Alexandr Wang and built its dominance through software-driven quality control systems that could manage extremely large annotation workloads for automotive, defense, and enterprise customers. Meta made a $14 billion investment in Scale AI in June 2025, acquiring a roughly 49 percent stake. Following that deal, Wang departed to join Meta as Chief AI Officer, and researchers at Meta's TBD Labs reportedly came to prefer Surge AI and Mercor over Scale AI on training data quality. Scale AI cut 14 percent of its workforce in July 2025 as its relationships with former AI lab clients shifted.
Surge AI, founded in 2020, is arguably Mercor's closest peer in the quality tier. Surge built its model around highly educated annotators from the outset, recruiting Fields Medal mathematicians and Supreme Court litigators for tasks requiring elite domain knowledge. Surge crossed $1 billion in annual revenue in 2025 without raising external capital. Both Mercor and Surge target work where generic crowdworker annotation is insufficient and specialist judgment matters directly to model capability.
Mercor's distinguishing characteristic relative to Surge is its proprietary AI interview and matching infrastructure. Where Surge curates its pool through manual vetting, Mercor has built an automated pipeline capable of screening hundreds of thousands of candidates and conducting roughly ten thousand interviews per day. This gives Mercor the ability to scale its expert pool rapidly and to match candidates to specific task requirements programmatically.
Scale AI filed a lawsuit against Mercor in September 2025, alleging that a former Scale employee had downloaded proprietary customer strategy documents before joining Mercor. The litigation was ongoing as of mid-2026.
Mercor's platform supports three broad categories of use cases.
AI lab training and evaluation. Frontier labs use Mercor to source domain experts for RLHF annotation, preference ranking, red-teaming, and model evaluation. These tasks require genuine professional expertise to produce training signals that improve model behavior in specialized domains like medicine, law, or quantitative finance.
Enterprise talent acquisition. Traditional employers use Mercor's job marketplace to source and hire full-time, part-time, or contract knowledge workers. The AI interview and matching system reduces screening time and provides standardized assessment across candidates from different geographies. The platform draws talent particularly from India and the United States.
AI productivity benchmarking. Through APEX, Mercor provides AI labs, enterprise buyers, and researchers with comparative performance data on how frontier AI models handle professional workflows, measured against the baseline of what Mercor's domain expert network accomplishes on the same tasks.
Mercor's rise attracted substantial press attention for both its financial trajectory and the ages of its founders. TechCrunch, Forbes, Bloomberg, CNBC, and the Harvard Crimson all covered the company's funding rounds. Bloomberg included Mercor in its "24 AI Startups to Watch in 2026" list under the Foundation Builders category in October 2025. All three founders appeared on the Forbes 30 Under 30 list for 2025.
The company's revenue growth, moving from approximately $3 million in early annualized revenue to more than $750 million in roughly two years, was widely cited as one of the fastest growth trajectories in enterprise software history. The resulting paper net worth of the three founders, still 22 years old at the time of the Series C, prompted coverage of them as the youngest self-made billionaires ever documented.
Mercor's contractor workforce drew scrutiny in early 2026. Reports from The Verge and New York Magazine documented contractor accounts describing a stressful work environment, poor project management, and declining pay rates over time. Individual contractor rates varied widely, with large-scale AI training projects reportedly paying as little as $21 per hour for some roles, far below the platform's average of $85 per hour across the full network.
In March 2026, The San Francisco Gazetteer published a detailed account of internal workplace complaints at Mercor's San Francisco headquarters. Former employees described expectations of work until 2 a.m., management practices characterized as vindictive toward staff who resisted those norms, and systematic misrepresentation of working conditions during the recruitment process. Multiple employees reportedly quit within their first week.
A December 2025 internal survey asked employees to identify colleagues who "lower the bar," a practice former employees said deepened internal tension. An embezzlement incident in early 2025, in which an employee managing an Anthropic project directed hundreds of thousands of dollars in payments to family members via Venmo transactions logged as bonuses, surfaced as evidence of inadequate internal financial controls. The employee was eventually fired.
Surya Midha stepped away from the COO role in October 2025 to take a board chairman position. Following his departure, former employees described the relationship between Foody and Hiremath as strained. Mercor's spokesperson responded to the workplace coverage by stating that the company makes employment decisions "based on performance, business, and customer needs" and ties bonus structures to individual performance.
On March 27, 2026, a threat group published two malicious versions of the open-source LiteLLM Python package (versions 1.82.7 and 1.82.8) to the Python Package Index (PyPI). The tainted packages were available for roughly 40 minutes before being identified and removed, but the window was sufficient to compromise systems at Mercor and other companies that automatically updated the dependency. The attackers, associated with the group Lapsus$, published samples of allegedly stolen data including Slack messages, internal ticketing data, and video files, and claimed to have obtained approximately four terabytes of data in total including source code and database records.
Mercor confirmed the breach and disclosed that the stolen data included personally identifiable information for approximately 40,000 contractors, video interview recordings, and proprietary source code. Meta indefinitely paused all work with Mercor following the announcement. At least four class-action lawsuits were filed against Mercor in the U.S. District Court for the Northern District of California in early April 2026, including the lead case Gill v. Mercor.io Corporation, proposing a nationwide class.
The breach intersected with a separate controversy involving Delve Technologies, the GRC automation firm that had issued LiteLLM's SOC 2 and ISO 27001 compliance certifications. An anonymous whistleblower accused Delve of faking data for security certifications, complicating the broader question of how the supply chain compromise went undetected.