Smaller Models That Learn More From Less Data

Pelonomi Moiloa

One of my favourite flavours of conversation is listening to reinforcement learning experts talk about their children as reinforcement learning agents. These conversations highlight just how comically far behind humans our machine learning models are, especially in their ability to acquire knowledge without being told explicitly what to learn, and in the amount of information they require for that learning.

My co-founder has a three-year-old son who is obsessed with cars. It would seem his objective function is to be exposed to as many cars as possible. So much so that he came home from a supercar show ranting and raving about the Daihatsu he saw in the parking lot, because he had never seen a Daihatsu before. On another occasion, when my co-founder told him the vehicle he was pointing at and enquiring about was a truck, the child understood immediately that truck was a descriptor for a class of vehicle and not the name of that particular car.

What makes his little brain decide what is important to learn? How does it make connections? How does it make the inference so quickly across such a vast domain? Fuelled solely by a bowl of Otees cereal?

What we have been able to achieve with our models as a species is quite impressive. But what I find far less impressive is how big the models are and the exorbitant resources of data, compute, capital, and energy required to build them. My co-founder's child learns far more from far less data, with a lot less energy. 

This is not only a conundrum of resources for machine learning architects. It has profound implications for implementing AI in parts of the world where not only data but also electricity and computing equipment are severely limited. As AI practitioners, we need to understand how to build smaller, smarter models with less data.

Although efforts to put today's top-performing models on mobile devices are driving the development of smaller models, prioritising small models that learn from relatively small datasets runs counter to mainstream AI development.

AI has the potential to help us understand some of the biggest questions of the universe, and it could provide solutions to some of the most pressing issues of our lifetime, like ensuring that everyone has access to clean energy, clean water, nutritious meals, and quality healthcare; resolving conflict; and overcoming the limitations of human greed. Yet the current mainstream of AI largely overlooks the lives affected by such problems. An approach that does not require the level of capital investment typical of AI would open the AI domain to more people, from more places, so they too can leverage the power of AI for the benefit of their communities. 

I hope for many things for AI: that regulation and governance will improve, that the people who build the technology will do so with intention and with principles and values grounded in the connection of humanity. But the hope I am focusing on for now is the building of smaller, smarter models with less data, so that the benefits of AI can be shared throughout the world. What are we working toward if not to make the world a sustainably better place for more people?

Pelonomi Moiloa is CEO of Lelapa AI, a socially grounded research and product lab that focuses on AI for Africans, by Africans.
