The International Conference on Learning Representations, almost universally known by its acronym ICLR, is an annual academic conference that focuses on the theory and practice of deep learning and representation learning. Founded in 2013 by Yoshua Bengio and Yann LeCun, ICLR has grown from a small workshop with fewer than a hundred submissions into one of the three most influential venues in machine learning research, sitting alongside NeurIPS and the International Conference on Machine Learning (ICML). The conference is widely credited with two distinctive contributions to the modern research ecosystem: it gave the field a dedicated home for work on how machines learn useful representations of data, and it pioneered the use of fully open peer review through the OpenReview platform, an experiment that has since spread well beyond machine learning.
ICLR is run by a non-profit organization of the same name registered in California. The 2026 edition, the fourteenth in the series, took place at the Riocentro Convention and Event Center in Rio de Janeiro, Brazil, and attracted close to twenty thousand paper submissions, making it one of the largest scientific conferences in any field. Many landmark contributions to modern AI first appeared at ICLR or its workshop track, including the original Word2Vec paper, the Adam optimizer, the variational autoencoder, the Vision Transformer, and a long list of other works that now form part of the standard vocabulary of neural network research.
The immediate origin of ICLR was a sense of frustration with the existing publication culture in machine learning. By the early 2010s, deep learning was beginning its rapid ascent, but two of the dominant venues, NeurIPS and ICML, had inherited a journal-style review process that many practitioners felt was slow, conservative, and biased toward incremental theoretical work. Yann LeCun, then at New York University and a long-time advocate of neural network methods, and Yoshua Bengio at the Université de Montréal both argued that representation learning had grown into a coherent enough subfield to deserve its own conference, one that could move at the pace of an empirical science and that would treat reproducibility and openness as first-class concerns.
The first ICLR was held in Scottsdale, Arizona, on May 2 to 4, 2013. The program was modest by later standards, with sixty-seven submissions to the conference track and twenty-three accepted papers, an acceptance rate of roughly thirty-four percent. The workshop track ran in parallel and featured short, exploratory contributions. Among them was a four-page paper by Tomas Mikolov, Kai Chen, Greg Corrado, and Jeff Dean titled "Efficient Estimation of Word Representations in Vector Space," which introduced the Skip-gram and Continuous Bag of Words architectures and became the seminal Word2Vec reference. The fact that one of the most cited natural language processing papers of the decade first appeared as an ICLR workshop submission helped cement the conference's reputation as a venue where genuinely new ideas surfaced first.
From the very first edition, ICLR adopted an open peer review model in which submissions, reviews, author responses, and final decisions were posted publicly on the OpenReview platform. The platform itself was developed at the University of Massachusetts Amherst under the direction of Andrew McCallum, and ICLR became its anchor venue. This was a radical departure from the closed review processes used by most other conferences and journals at the time, and it provoked a great deal of debate. Supporters argued that public reviews would discipline reviewers, give authors a fair chance to respond, and create a permanent record of the scientific conversation around each paper. Critics worried about reviewer intimidation and the risk that authors might be penalized in future career decisions for having received negative reviews, even on work that was eventually published elsewhere. Over time the model has been adopted, in various forms, by other major venues, and OpenReview now hosts submissions for ICLR, the Conference on Robot Learning, the Conference on Language Modeling, and many smaller workshops.
ICLR runs as a single-track conference with parallel poster sessions and a small number of oral and spotlight presentations. The main conference typically lasts three to four days, followed by one to two days of workshops on focused topics chosen each year by an open call. Tutorials, social events, sponsor exhibits, and an expanding affinity group program for traditionally under-represented communities round out the schedule.
The review process has evolved with the conference's scale but retains its core characteristics. Papers are submitted in the early autumn, typically late September or early October, to the OpenReview platform, where they are immediately visible to the entire community in anonymized form. Submissions are reviewed in a double-blind fashion: reviewers do not see author names, and authors do not see reviewer identities, although the discussion thread itself is public. Each paper typically receives three to four reviews from program committee members, after which an author response period opens for several weeks. During this period authors may revise their submissions, post rebuttals, and engage in asynchronous discussion with their reviewers and the assigned area chair. Reviewers are explicitly encouraged to update their scores based on the discussion, and area chairs synthesize the conversation into a recommendation that the senior area chairs and program chairs use to make the final decision.
This review structure has come in for both praise and criticism. The transparent record means that anyone can read the full evaluation of every submitted paper, accepted or not, which is unusual in scientific publishing and has become a teaching resource for new researchers learning what reviewers expect. The emphasis on iterative revision during the discussion phase rewards authors who engage seriously with feedback. The downside, which has grown more acute as submission counts climbed past ten thousand, is reviewer fatigue and the difficulty of finding qualified reviewers for highly specialized work. Several research papers have analyzed the dynamics of the ICLR review process itself, with findings ranging from measurable but modest score improvements after rebuttals to substantial variance in outcomes depending on which reviewers a paper happens to draw.
ICLR has deliberately rotated its host city each year to broaden international participation, moving among North America, Europe, and, increasingly, other parts of the world, with virtual editions during the COVID-19 pandemic. The 2023 edition in Kigali, Rwanda, was particularly significant as the first major AI conference held in Africa and produced a sharp increase in African participation, from sixteen attendees in 2019 to two hundred ninety-one in 2023.
| Year | Location | Country | Notes |
|---|---|---|---|
| 2013 | Scottsdale, Arizona | United States | Inaugural edition, May 2 to 4 |
| 2014 | Banff | Canada | Workshop and conference tracks expand |
| 2015 | San Diego, California | United States | Adam optimizer paper published |
| 2016 | San Juan | Puerto Rico | First edition outside the continental United States and Canada |
| 2017 | Toulon | France | First European edition; three Best Paper awards |
| 2018 | Vancouver | Canada | Submissions cross one thousand |
| 2019 | New Orleans, Louisiana | United States | Submissions reach roughly fifteen hundred |
| 2020 | Addis Ababa | Ethiopia (planned) | Moved to virtual due to COVID-19 |
| 2021 | Virtual | Global | Originally scheduled for Vienna, Austria |
| 2022 | Virtual | Global | Submissions cross three thousand |
| 2023 | Kigali | Rwanda | First major AI conference in Africa, May 1 to 5 |
| 2024 | Vienna | Austria | First European edition since Toulon 2017 |
| 2025 | Singapore | Singapore | First Southeast Asian edition |
| 2026 | Rio de Janeiro | Brazil | Held at Riocentro, April 23 to 27 |
The choice of host city is made by the ICLR board with input from the local research community and is announced one to two years in advance. Sponsorship from large industrial laboratories, including Google DeepMind, Meta AI, Microsoft Research, OpenAI, Apple, NVIDIA, and Amazon, has become essential to underwriting the cost of these increasingly large gatherings.
Few academic conferences have grown as quickly as ICLR. The number of full-length conference track submissions has roughly doubled every two to three years since the founding, propelled by the explosive growth of deep learning research, the rise of large language models, and the increasing professionalization of industrial AI labs.
| Year | Submissions | Accepted | Acceptance rate |
|---|---|---|---|
| 2013 | 67 | 23 | 34.3% |
| 2014 | 69 | 69 | 100.0% (workshop format) |
| 2017 | 490 | 198 | 40.4% |
| 2018 | 1,013 | 337 | 33.3% |
| 2019 | 1,579 | 502 | 31.8% |
| 2020 | 2,594 | 687 | 26.5% |
| 2021 | 3,014 | 860 | 28.5% |
| 2022 | 3,422 | 1,095 | 32.0% |
| 2023 | 4,955 | 1,575 | 31.8% |
| 2024 | 7,304 | 2,260 | 30.9% |
| 2025 | 11,672 | 3,704 | 31.7% |
| 2026 | ~19,800 | 5,340 | 27.0% |
Acceptance rates have settled into a band between roughly twenty-seven and thirty-three percent, comparable with NeurIPS and ICML. The 2024 conference featured eighty-six oral presentations and three hundred sixty-six spotlight posters, supported by a program committee of about eight thousand nine hundred fifty reviewers and six hundred twenty-four area chairs. By 2026 the reviewer pool exceeded twelve thousand, and the conference faced ongoing discussions about whether continued growth was sustainable, with proposals ranging from longer review cycles to splitting the conference into themed tracks.
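The growth and acceptance figures quoted above can be checked directly; a short sketch, using only the numbers from the table in this article, recomputes the acceptance rate and the average yearly growth factor in submissions (a factor near 1.5 corresponds to the "doubling every two to three years" described earlier):

```python
# Submission and acceptance counts as quoted in the table above
# (conference track; 2014-2016 rows omitted here).
data = {
    2013: (67, 23),
    2017: (490, 198),
    2018: (1013, 337),
    2019: (1579, 502),
    2020: (2594, 687),
    2021: (3014, 860),
    2022: (3422, 1095),
    2023: (4955, 1575),
    2024: (7304, 2260),
    2025: (11672, 3704),
}

def acceptance_rate(year):
    """Acceptance rate in percent for a given year."""
    submitted, accepted = data[year]
    return 100 * accepted / submitted

# Geometric-mean annual growth in submissions, 2017-2025.
growth = (data[2025][0] / data[2017][0]) ** (1 / (2025 - 2017))

print(f"2024 acceptance rate: {acceptance_rate(2024):.1f}%")
print(f"average yearly submission growth 2017-2025: x{growth:.2f}")
```

A growth factor of roughly 1.5 per year means submissions double about every 1.7 to 2 years over that window, consistent with the trend described in the text.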
A striking number of foundational papers in modern AI first appeared at ICLR. The Word2Vec workshop paper at the inaugural 2013 edition is often cited as the moment when distributed word representations entered the mainstream of natural language processing. At ICLR 2014, Diederik Kingma and Max Welling published "Auto-Encoding Variational Bayes," introducing the variational autoencoder and a stochastic variational inference algorithm that scales to large datasets. Kingma returned at ICLR 2015, this time with Jimmy Ba, to present "Adam: A Method for Stochastic Optimization," an optimizer that became the default choice for training deep neural networks; Adam and its variants, notably AdamW, remain the workhorse behind the training of most large language models. ICLR 2015 also hosted Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio's "Neural Machine Translation by Jointly Learning to Align and Translate," which introduced the attention mechanism that would later be generalized into the Transformer.
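For reference, the Adam update mentioned above maintains exponential moving averages of the gradient and its square, applies a bias correction for the warm-up phase, and scales the step by the ratio of the two. A minimal sketch, using the default hyperparameters from the paper and a toy quadratic objective rather than a real model:

```python
import math

def adam_step(theta, grad, m, v, t, lr=0.001, b1=0.9, b2=0.999, eps=1e-8):
    """One Adam update for a scalar parameter (paper's default hyperparameters)."""
    m = b1 * m + (1 - b1) * grad          # first moment: running mean of gradients
    v = b2 * v + (1 - b2) * grad ** 2     # second moment: running mean of squared gradients
    m_hat = m / (1 - b1 ** t)             # bias correction (moments start at zero)
    v_hat = v / (1 - b2 ** t)
    theta -= lr * m_hat / (math.sqrt(v_hat) + eps)
    return theta, m, v

# Toy example: minimize f(theta) = theta^2, whose gradient is 2*theta.
theta, m, v = 5.0, 0.0, 0.0
for t in range(1, 5001):
    theta, m, v = adam_step(theta, 2 * theta, m, v, t, lr=0.05)
print(theta)  # settles close to the minimum at 0
```

Note how the effective step size is roughly `lr` regardless of the raw gradient magnitude, one of the properties that made Adam a robust default across architectures.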
ICLR 2016 saw the publication of "Unsupervised Representation Learning with Deep Convolutional Generative Adversarial Networks" by Alec Radford, Luke Metz, and Soumith Chintala, the DCGAN paper, which demonstrated that convolutional GANs could be trained stably and could synthesize coherent, diverse images while learning representations useful for downstream tasks. The same year, Timothy Lillicrap and colleagues at DeepMind published "Continuous Control with Deep Reinforcement Learning," introducing the Deep Deterministic Policy Gradient (DDPG) algorithm.
Later editions added the Vision Transformer at ICLR 2021 ("An Image is Worth 16x16 Words" by Alexey Dosovitskiy and colleagues), which showed that pure transformer architectures could match or exceed the best convolutional networks on image classification when given enough data. ICLR 2021 also featured the DeBERTa improvements to BERT-style language models. Subsequent years brought a flood of work on diffusion models, retrieval-augmented generation, in-context learning, and multimodal foundation models.
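The "16x16 words" in the Vision Transformer title refers to treating fixed-size image patches as tokens. The arithmetic is simple; a sketch using the standard ViT configuration of a 224x224 input (an assumption here, as the paper evaluates several resolutions):

```python
# A ViT turns an image into a token sequence by splitting it into
# non-overlapping patches; each patch becomes one "word".
image_size, patch_size = 224, 16              # standard ViT-Base setup
patches_per_side = image_size // patch_size   # 224 / 16 = 14
num_tokens = patches_per_side ** 2            # 14 * 14 = 196 patch tokens
patch_dim = patch_size * patch_size * 3       # raw values per RGB patch
print(num_tokens, patch_dim)  # 196 768
```

Each flattened patch is then linearly projected to the model dimension, after which a standard Transformer encoder processes the 196-token sequence exactly as it would a sentence.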
The Test of Time award was inaugurated at ICLR 2024 to recognize papers from a decade earlier that have had a particularly enduring influence. It is now an annual feature, complementing the Outstanding Paper awards selected from the current year's program.
| Year awarded | Test of Time winner | Original year | Authors |
|---|---|---|---|
| 2024 | Auto-Encoding Variational Bayes | 2014 | Diederik Kingma, Max Welling |
| 2025 | Adam: A Method for Stochastic Optimization | 2015 | Diederik Kingma, Jimmy Ba |
| 2025 (runner-up) | Neural Machine Translation by Jointly Learning to Align and Translate | 2015 | Dzmitry Bahdanau, Kyunghyun Cho, Yoshua Bengio |
| 2026 | Unsupervised Representation Learning with Deep Convolutional GANs | 2016 | Alec Radford, Luke Metz, Soumith Chintala |
| 2026 | Continuous Control with Deep Reinforcement Learning | 2016 | Timothy Lillicrap, Jonathan Hunt, Alexander Pritzel, Nicolas Heess, Tom Erez, Yuval Tassa, David Silver, Daan Wierstra |
The Outstanding Paper awards are given each year to a small number of accepted submissions, typically three to five, with a longer list of honorable mentions. ICLR 2017 famously gave three Best Paper awards to "Understanding Deep Learning Requires Rethinking Generalization" by Chiyuan Zhang and colleagues, "Making Neural Programming Architectures Generalize via Recursion" by Jonathon Cai, Richard Shin, and Dawn Song, and "Semi-Supervised Knowledge Transfer for Deep Learning from Private Training Data" by Nicolas Papernot and colleagues. The 2024 edition recognized five Outstanding Papers, including "Generalization in Diffusion Models Arises from Geometry-Adaptive Harmonic Representations" by Zahra Kadkhodaie, Florentin Guth, Eero Simoncelli, and Stéphane Mallat. ICLR 2025 gave Outstanding Paper awards to three submissions selected from a finalist pool of thirty-six, including "AlphaEdit: Null-Space Constrained Knowledge Editing for Language Models" and "Learning Dynamics of LLM Finetuning."
ICLR is generally grouped with NeurIPS and ICML as the three top-tier general machine learning conferences, often called the "big three." Each has a distinct character that reflects its history and the priorities of its founders.
| Feature | ICLR | NeurIPS | ICML |
|---|---|---|---|
| Founded | 2013 | 1987 (as NIPS) | 1980 (annual since 1988) |
| Time of year | Late April or early May | Early December | Mid July |
| Typical scope | Deep learning and representation learning | Broad: ML, computational neuroscience, applications | Core ML methodology, theory, optimization |
| Review platform | OpenReview, fully public | OpenReview since 2021; double-blind, with reviews of accepted papers released | OpenReview; double-blind, reviews not made public |
| Submissions in 2024 | 7,304 | ~17,000 | ~9,500 |
| Acceptance rate (2024) | 30.9% | ~25% | ~27% |
| Distinguishing feature | Most aggressive openness norms; deep learning focus | Largest by attendance; longest history; computational neuroscience track | Strongest theoretical and statistical learning content |
In practice the three conferences share many of the same authors, reviewers, and topics, and a substantial number of papers are submitted first to one venue, rejected or withdrawn, then submitted to the next. ICLR's autumn submission deadline falls shortly after NeurIPS decisions are announced, making it a natural next stop for work rejected there. The conference's deep learning focus has loosened somewhat as the field has matured: by the mid-2020s ICLR routinely accepts work on topics that would have looked at home at NeurIPS or ICML, including reinforcement learning theory, causal inference, and statistical learning. What still distinguishes it is the stronger emphasis on representation learning as a unifying theme, the OpenReview-based public review, and a culture that tends to reward bold empirical claims over pure theoretical contributions.
The one to two day workshop program that follows the main ICLR conference each year is one of the venue's most distinctive features. Workshops are proposed by groups of researchers in the autumn, peer reviewed by the workshop chairs, and selected to balance topical breadth with depth. Successful workshops at ICLR have launched entire subfields. The 2013 workshop track gave Word2Vec to the world. Later years saw influential workshops on adversarial examples, meta-learning, neural network compression, AI for science, large language model evaluation, mechanistic interpretability, and machine learning for code. Many workshops alternate between ICLR, NeurIPS, and ICML, and the workshop community is widely regarded as a key incubator for new research directions.
In addition to formal workshops, ICLR hosts a substantial set of affinity group events organized by communities such as Women in Machine Learning (WiML), LatinX in AI, Black in AI, Queer in AI, North Africans in ML, and Indigenous in AI, alongside the Tiny Papers track, an outreach program for first-time authors. These events combine networking with research presentations and have helped diversify both the demographics and the topical coverage of the broader machine learning community.
The partnership between ICLR and OpenReview is one of the conference's defining institutional features. Every paper ever submitted to ICLR, accepted or not, is permanently archived on OpenReview together with its reviews and author responses. This produces an unusually rich public record of the field's evolution. Researchers have used the corpus to study reviewer behavior, score calibration, the effects of author identity, the dynamics of rebuttals, and the relationship between review scores and long-term citation impact.
Reproducibility has been an explicit priority at ICLR since at least 2018, when the conference hosted the first ICLR Reproducibility Challenge, an effort that later grew into the Machine Learning Reproducibility Challenge (MLRC). The challenge invites the community, often student-led teams, to reproduce the results of accepted ICLR papers and publish their findings, which are themselves reviewed and indexed. The reports produced through MLRC have become a standard reference for understanding the practical claims of well-known papers and have prompted several formal corrections and clarifications.
The conference also encourages authors, through its supplementary materials policy, to release code or detailed pseudocode in support of empirical claims, and recent editions have included an ethics review process, grounded in the ICLR Code of Ethics, for papers raising potential safety, privacy, or dual-use concerns.
The rapid commercial relevance of deep learning has made ICLR a major recruiting and branding event for industrial AI labs. Sponsorship at the gold or platinum tiers brings exhibition booths, networking events, and recruiting access; in recent years sponsors have included Google DeepMind, Meta AI, Microsoft Research, OpenAI, Anthropic, Apple, NVIDIA, Amazon, Bytedance, Huawei, Tencent, and many smaller startups. The visible industrial presence has at times prompted concern about the influence of corporate priorities on the research agenda, and the conference's leadership has experimented with rules limiting sponsor signage in technical sessions, requiring conflict-of-interest disclosures, and ensuring that sponsorship money is not the only path to a conference badge.
Other recurring debates include the carbon footprint of large in-person conferences, the visa challenges that researchers from many countries face when meetings rotate between continents, the appropriate role of large language models in writing and reviewing papers, the question of whether the open review record should be redacted for rejected papers, and the difficulty of recognizing genuine novelty when submission volumes outpace the community's collective reading capacity. The conference has generally addressed these issues incrementally, with a mix of policy changes, town halls during the conference, and public statements from the board.
In just over a decade, ICLR has become one of the most consequential venues for AI research. Its contributions can be grouped under three headings. First, by giving representation learning a dedicated home, it accelerated the maturation of deep learning as a discipline with shared methods, benchmarks, and review standards. Second, by anchoring OpenReview and championing public peer review, it changed the norms of scientific communication in ways that have spread well beyond machine learning. Third, by deliberately rotating its host city across continents and supporting affinity groups, it has helped broaden participation in a research community that was once heavily concentrated in a handful of North American and European institutions.
The challenges facing ICLR in the late 2020s mirror those facing the field more broadly. Submission volumes have grown faster than the supply of qualified reviewers, the line between academic and industrial research has blurred, and the rise of large language models has raised questions about the basic units of scientific contribution that conference review systems were built to evaluate. Whether the conference can continue to scale while preserving the openness and intellectual ambition of its early years is one of the central institutional questions in machine learning today, and it is being worked out in public, on OpenReview, in keeping with the conference's founding ethos.