Sara Hooker: Prioritize Inclusion

The past year has seen incredible innovation in AI, and I expect as much or more in 2024. The coming year undoubtedly will be a year of rapid progress in models – multimodal, multilingual, and (hopefully) smaller and faster.

To date, the models and datasets used for training have been heavily biased towards English-speaking and Western European countries, offering little representation of languages from the Global South and Asia. Even when languages from the Global South are represented, the data is almost always translated from English or Western European languages. In 2023, the “rich got richer” and the “poor got poorer” as breakthroughs facilitated the use of widely spoken languages like English while further impeding access for speakers of languages with far less available data.

Next year will be the year of Robin Hood, when we try to redistribute the gains by closing the language gap. We will see rapid improvement in state-of-the-art multilingual models, as well as innovation in synthetic data generation to build foundation models for specific languages. I believe we will make progress in closing the language gap and strengthen our collective effort to incorporate research, training data, and individuals from across the globe. This includes projects like Aya, a model from Cohere For AI that will cover 101 languages. Bridging the gap is not just a matter of inclusivity; it’s key to unlocking the transformative power of AI and ensuring that it can serve a global audience, irrespective of language or cultural background.

In addition, I expect 2024 to be a year for research bets. “Multimodal” will become a ubiquitous term as we move away from subfields dedicated to language, computer vision, and audio in isolation. Models will be able to process multiple sensory inputs at once, more like humans do. We will care urgently about model size as we deploy more models in resource-constrained environments. AI models will become smaller and faster. Our lab is already pushing the limits of efficiency at scale, data pruning, and adaptive computing. Localization of models using retrieval augmented generation (RAG) and efficient fine-tuning will be paramount, as everyday users look to unlock the potential of frontier models.
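The core RAG pattern mentioned above — retrieve relevant passages, then prepend them to the prompt so the model grounds its answer — can be sketched in a few lines. This is a minimal, illustrative toy: the bag-of-words similarity and the sample documents are assumptions for the sketch, not any production system's implementation (real deployments use learned dense embeddings and a vector index).

```python
from collections import Counter
import math

def embed(text):
    """Toy bag-of-words 'embedding'; real systems use learned dense vectors."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two sparse word-count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, docs, k=2):
    """Rank documents by similarity to the query and keep the top k."""
    q = embed(query)
    return sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)[:k]

def build_prompt(query, docs, k=2):
    """Prepend retrieved passages so generation is grounded in local context."""
    context = "\n".join(f"- {d}" for d in retrieve(query, docs, k))
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"

# Hypothetical local knowledge base for illustration.
docs = [
    "Aya is a multilingual model covering 101 languages.",
    "RAG retrieves relevant passages before generation.",
    "GPUs remain in short supply for researchers.",
]
prompt = build_prompt("How does RAG work?", docs)
```

The resulting `prompt` string would then be sent to a frontier model; localization comes from the fact that the retrieved context, not the model's weights, supplies the region- or language-specific knowledge.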

In the coming year, it will be even more important to interrogate the defaults of where, how, and by whom research is done. To date, state-of-the-art models have come from a handful of labs and researchers. The community responsible for recent breakthroughs is so small that I know many of the people involved personally. However, we need to broaden participation in breakthroughs to include the best minds. At Cohere For AI, we are in the second cohort of our Scholars Program, which provides alternative points of entry into research for AI talent around the world. 

The compute divide will persist in the coming year. Shortages of compute combined with stockpiling of GPUs mean there won’t be immediate changes in the availability of compute. This year, we launched our research grant program, so independent and academic researchers can access frontier models at Cohere. More needs to be done at national and global scales to bridge the divide for researchers and practitioners. 

We are in an interesting time, and it is rare to work on research that is being adopted so quickly. Our ideas not only resonate at AI conferences but have a profound impact on the world around us. In 2024, expect more rapid change and some breakthroughs that make this technology immediately usable to more people around the world. By prioritizing inclusivity in model training and fundamental research, we can help ensure that AI becomes a truly global technology, accessible to users from all backgrounds.

Sara Hooker is a senior VP of research at Cohere and leads Cohere For AI, a nonprofit machine learning research lab that supports fundamental enquiry and broad access. 
