Papers

  • The Expressive Power of Transformers with Chain of Thought

    William Merrill, Ashish Sabharwal. ICLR 2024. Recent theoretical work has identified surprisingly simple reasoning problems, such as checking if two nodes in a graph are connected or simulating finite-state machines, that are provably unsolvable by standard transformers that answer immediately after…
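
    The connectivity problem cited here is easy to state, which is what makes the result striking: a transformer that must answer in one step provably cannot solve it in general, and the paper studies how adding a chain of thought changes what is expressible. For reference, a minimal Python sketch of the task itself (an ordinary breadth-first search, not the paper's transformer construction):

    ```python
    from collections import deque

    def connected(n, edges, s, t):
        """Is node t reachable from node s in an undirected graph?"""
        adj = [[] for _ in range(n)]
        for u, v in edges:
            adj[u].append(v)
            adj[v].append(u)
        seen, queue = {s}, deque([s])
        while queue:
            u = queue.popleft()
            if u == t:
                return True
            for v in adj[u]:
                if v not in seen:
                    seen.add(v)
                    queue.append(v)
        return False

    print(connected(4, [(0, 1), (1, 2)], 0, 2))  # True: 0-1-2 is one component
    print(connected(4, [(0, 1), (1, 2)], 0, 3))  # False: node 3 is isolated
    ```
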
  • TRAM: Bridging Trust Regions and Sharpness Aware Minimization

    Tom Sherborne, Naomi Saphra, Pradeep Dasigi, Hao Peng. ICLR 2024. By reducing the curvature of the loss surface in parameter space, sharpness-aware minimization (SAM) yields widespread robustness improvements under domain transfer. Instead of focusing on parameters, however, this work considers the transferability of…
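
    For background on the method this entry builds from: SAM first takes a small ascent step along the gradient to a nearby "worst-case" point, then applies the descent gradient computed there, so the update favors flat neighborhoods of the loss surface. A minimal NumPy sketch on a toy quadratic loss (illustrative of SAM only, not of this paper's trust-region variant):

    ```python
    import numpy as np

    def loss_and_grad(w):
        """Toy quadratic loss L(w) = 0.5 * ||w||^2 and its gradient."""
        return 0.5 * w @ w, w

    def sam_step(w, lr=0.1, rho=0.05):
        """One sharpness-aware minimization (SAM) update."""
        _, g = loss_and_grad(w)
        eps = rho * g / (np.linalg.norm(g) + 1e-12)  # ascend toward the local worst case
        _, g_adv = loss_and_grad(w + eps)            # gradient at the perturbed weights
        return w - lr * g_adv                        # descend using that gradient

    w = np.array([1.0, -2.0])
    for _ in range(5):
        w = sam_step(w)
    print(w)  # shrinks toward the flat minimum at the origin
    ```
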
  • What's In My Big Data?

    Yanai Elazar, Akshita Bhagia, Ian Magnusson, Abhilasha Ravichander, Dustin Schwenk, Alane Suhr, Pete Walsh, Dirk Groeneveld, Luca Soldaini, Sameer Singh, Hanna Hajishirzi, Noah A. Smith, Jesse Dodge. ICLR 2024. Large text corpora are the backbone of language models. However, we have a limited understanding of the content of these corpora, including general statistics, quality, social factors, and inclusion of evaluation data (contamination). In this work, we propose…
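
    Of the analyses listed, the contamination check is the most mechanical: it asks whether evaluation examples already occur verbatim in the pretraining corpus. A toy exact-overlap sketch using hashed n-grams (illustrative only; the names and the 8-gram granularity here are assumptions, not the paper's tooling):

    ```python
    import hashlib

    def ngrams(text, n=8):
        toks = text.split()
        return {" ".join(toks[i:i + n]) for i in range(len(toks) - n + 1)}

    def fingerprint(grams):
        return {hashlib.md5(g.encode()).hexdigest() for g in grams}

    corpus_doc = "the quick brown fox jumps over the lazy dog again and again"
    eval_item = "fox jumps over the lazy dog again and"

    corpus_index = fingerprint(ngrams(corpus_doc))         # built once over the corpus
    overlap = fingerprint(ngrams(eval_item)) & corpus_index
    print(f"contaminated: {bool(overlap)} ({len(overlap)} shared 8-grams)")
    ```
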
  • Estimating the Causal Effect of Early ArXiving on Paper Acceptance

    Yanai Elazar, Jiayao Zhang, David Wadden, Boshen Zhang, Noah A. Smith. CLeaR 2024. What is the effect of releasing a preprint of a paper before it is submitted for peer review? No randomized controlled trial has been conducted, so we turn to observational data to answer this question. We use data from the ICLR conference (2018–2022) and…
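
    The methodological crux with observational data is confounding: stronger papers may be both more likely to be arXived early and more likely to be accepted, so a naive rate comparison overstates the effect. A toy sketch of that bias and one standard fix, stratifying on the confounder (entirely synthetic data; not the paper's dataset or estimator):

    ```python
    import random
    random.seed(0)

    # Synthetic papers: quality drives both early arXiving and acceptance,
    # and the true causal effect of arXiving is zero by construction.
    papers = []
    for _ in range(10000):
        quality = random.random()
        arxived = random.random() < 0.3 + 0.4 * quality
        accepted = random.random() < 0.2 + 0.6 * quality
        papers.append((quality, arxived, accepted))

    def rate(rows):
        return sum(a for _, _, a in rows) / max(len(rows), 1)

    treated = [p for p in papers if p[1]]
    control = [p for p in papers if not p[1]]
    print(f"naive effect:      {rate(treated) - rate(control):+.3f}")  # biased upward

    # Stratify on binned quality, then average within-stratum differences.
    effect, weight = 0.0, 0
    for b in range(5):
        stratum = [p for p in papers if b / 5 <= p[0] < (b + 1) / 5]
        t, c = [p for p in stratum if p[1]], [p for p in stratum if not p[1]]
        if t and c:
            effect += (rate(t) - rate(c)) * len(stratum)
            weight += len(stratum)
    print(f"stratified effect: {effect / weight:+.3f}")  # near zero, as constructed
    ```
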
  • The precipitation response to warming and CO2 increase: A comparison of a global storm-resolving model and CMIP6 models

    Ilai Guendelman, Timothy M. Merlis, Kai-Yuan Cheng, Lucas M. Harris, Christopher S. Bretherton, Max Bolot, Lin Zhou, Alex Kaltenbaugh, Spencer K. Clark, Stephan Fueglistaler. Geophysical Research Letters 2024. Global storm-resolving models (GSRMs), which can explicitly resolve some deep convection, are now being integrated for climate timescales. GSRMs are able to simulate more realistic precipitation distributions than traditional CMIP6 models. In this study, we…
  • Emulation of cloud microphysics in a climate model

    W. Andre Perkins, Noah D. Brenowitz, Christopher S. Bretherton, Jacqueline M. Nugent. JAMES 2024. We present a machine learning-based emulator of a microphysics scheme for condensation and precipitation processes (Zhao-Carr) used operationally in a global atmospheric forecast model (FV3GFS). Our tailored emulator architecture achieves high skill (≥94%) in…
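
    To unpack the terms: an emulator is a learned function trained to reproduce the input-output behavior of the physics scheme it replaces, and skill measures how closely it does so. A toy regression emulator of a made-up condensation-like function (the stand-in scheme, the features, and the use of R² as the skill metric are all assumptions for illustration, not the paper's setup):

    ```python
    import numpy as np
    rng = np.random.default_rng(0)

    # Stand-in "scheme": a nonlinear condensation-like response (not Zhao-Carr).
    q = rng.uniform(0.0, 1.0, 2000)      # toy specific humidity
    T = rng.uniform(250.0, 310.0, 2000)  # toy temperature (K)
    y = np.maximum(q - 0.6 + 0.002 * (T - 280.0), 0.0)

    # Emulator: least-squares fit on simple polynomial features.
    X = np.column_stack([np.ones_like(q), q, T, q * T, q**2])
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    pred = X @ coef

    # Skill here is R^2 against the reference scheme's output.
    r2 = 1.0 - np.sum((y - pred) ** 2) / np.sum((y - y.mean()) ** 2)
    print(f"emulator skill (R^2): {r2:.1%}")
    ```
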
  • Closing the Curious Case of Neural Text Degeneration

    Matthew Finlayson, John Hewitt, Alexander Koller, Swabha Swayamdipta, Ashish Sabharwal. ICLR 2024. Despite their ubiquity in language generation, it remains unknown why truncation sampling heuristics like nucleus sampling are so effective. We provide a theoretical explanation for the effectiveness of truncation sampling by proving that truncation…
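
    For reference, the heuristic in question: nucleus (top-p) sampling keeps the smallest set of highest-probability tokens whose cumulative mass reaches p, renormalizes, and samples from that truncated set. A minimal sketch of the standard technique (not the paper's analysis or proposed method):

    ```python
    import numpy as np

    def nucleus_sample(probs, p=0.9, rng=np.random.default_rng(0)):
        """Top-p (nucleus) sampling from a next-token distribution."""
        order = np.argsort(probs)[::-1]        # tokens by descending probability
        cum = np.cumsum(probs[order])
        cutoff = np.searchsorted(cum, p) + 1   # smallest prefix whose mass reaches p
        kept = order[:cutoff]
        return rng.choice(kept, p=probs[kept] / probs[kept].sum())

    probs = np.array([0.5, 0.3, 0.15, 0.04, 0.01])          # toy distribution
    print([int(nucleus_sample(probs)) for _ in range(10)])  # the 0.04/0.01 tail is never drawn
    ```
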
  • A machine learning parameterization of clouds in a coarse-resolution climate model for unbiased radiation

    Brian Henn, Yakelyn R. Jauregui, Spencer K. Clark, Noah Brenowitz, Jeremy McGibbon, Oliver Watt-Meyer, Andrew G. Pauling, Christopher S. Bretherton. JAMES 2024. Coarse-grid weather and climate models rely particularly on parameterizations of cloud fields, and coarse-grained cloud fields from a fine-grid reference model are a natural target for a machine-learned parameterization. We machine-learn the coarsened-fine…
  • A Survey on Data Selection for Language Models

    Alon Albalak, Yanai Elazar, Sang Michael Xie, Shayne Longpre, Nathan Lambert, Xinyi Wang, Niklas Muennighoff, Bairu Hou, Liangming Pan, Haewon Jeong, Colin Raffel, Shiyu Chang, Tatsunori Hashimoto, William Yang Wang. arXiv 2024. A major factor in the recent success of large language models is the use of enormous and ever-growing text datasets for unsupervised pre-training. However, naively training a model on all available data may not be optimal (or feasible), as the quality of…
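
    Of the selection methods such surveys cover, heuristic quality filtering is the simplest to state: drop documents that fail cheap checks before any model-based scoring is applied. A toy filter in that spirit (the thresholds and rules are invented for illustration, not drawn from the survey):

    ```python
    def quality_filter(doc, min_words=50, max_symbol_ratio=0.1):
        """Toy heuristic pretraining-data filter: drop documents that are
        too short, too symbol-heavy, or missing terminal punctuation."""
        words = doc.split()
        if len(words) < min_words:
            return False
        symbols = sum(not c.isalnum() and not c.isspace() for c in doc)
        if symbols / max(len(doc), 1) > max_symbol_ratio:
            return False
        return doc.rstrip().endswith((".", "!", "?"))

    docs = ["short snippet", "A plain sentence with several ordinary words. " * 10]
    print([quality_filter(d) for d in docs])  # [False, True]
    ```
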
  • Application of the AI2 Climate Emulator to E3SMv2's global atmosphere model, with a focus on precipitation fidelity

    James P. C. Duncan, Elynn Wu, Jean-Christophe Golaz, Peter M. Caldwell, Oliver Watt-Meyer, Spencer K. Clark, Jeremy McGibbon, Gideon Dresdner, Karthik Kashinath, Boris Bonev, Michael S. Pritchard, and Christopher S. Bretherton. Authorea 2024. Can the current successes of global machine learning-based weather simulators be generalized beyond two-week forecasts to stable and accurate multiyear runs? The recently developed AI2 Climate Emulator (ACE) suggests this is feasible, based upon 10-year…