MIT Teaches Kids How To Build AI Models




Written by: Kiara Fabbri, Multimedia Journalist
Fact-Checked by: Justyn Newman, Lead Cybersecurity Editor

In a Rush? Here are the Quick Facts!

  • Little Language Models helps kids learn AI by letting them build small-scale models themselves.
  • The program uses dice to teach probabilistic thinking, a core concept in AI.
  • It demonstrates AI bias by simulating diverse datasets and adjusting probabilities.

In a press release published today, MIT unveiled a new educational tool developed by MIT researchers Manuj and Shruti Dhariwal.

Their application, Little Language Models, invites children to explore how AI works by allowing them to create simplified, small-scale models. This hands-on approach provides an alternative to the often abstract or lecture-based introductions to AI, making concepts accessible through interactive learning.

The program starts by using a pair of dice to introduce probabilistic thinking, one of the foundational concepts behind large language models (LLMs). In AI, probabilistic thinking enables a model to predict the most likely next word in a sentence, accounting for uncertainty and making decisions based on likelihoods, notes MIT Review.

By adjusting the dice to visualize this process, students can grasp that a model's output isn't always flawless but is instead based on probabilities. With Little Language Models, children can modify each side of a die to represent a different variable and adjust the probability of each side appearing, mimicking the decision-making process behind AI models.

By doing so, students can see how varying conditions lead to different outputs, helping to clarify that AI models, like their dice experiment, rely on probabilistic reasoning rather than deterministic rules.
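The dice analogy maps neatly onto weighted random sampling. As a rough sketch (the words, weights, and function names below are invented for illustration, not taken from Little Language Models), a tiny "language model" can be a table of next-word weights, where predicting the next word means rolling a die whose sides have adjustable probabilities:

```python
import random

# Each word maps to possible next words with weights, like a die whose
# sides have adjustable probabilities. All entries here are illustrative.
next_word_weights = {
    "the": {"cat": 5, "dog": 3, "idea": 1},
    "cat": {"sat": 4, "ran": 2},
    "dog": {"ran": 3, "barked": 3},
}

def predict_next(word):
    """Roll the weighted 'die' for the given word to pick the next word."""
    options = next_word_weights[word]
    words = list(options)
    weights = [options[w] for w in words]
    return random.choices(words, weights=weights, k=1)[0]

# The same prompt can yield different continuations on different rolls,
# which is the probabilistic (not deterministic) behavior students observe.
print([predict_next("the") for _ in range(5)])
```

Because the outcome is sampled rather than fixed, repeated runs produce different word sequences, mirroring how the same prompt can yield different AI outputs.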

Beyond illustrating AI fundamentals, the program also addresses bias in machine learning. Educators can use the tool to explain how bias can emerge in AI by having students assign colors to each side of the dice to represent different skin tones.

Initially, students might set the probability of a white hand at 100%—a scenario meant to reflect an imbalanced dataset containing only images of white hands. In response, the AI model generates only white hands when prompted.

Afterward, students can adjust the probabilities to include a more diverse range of skin tones, simulating a balanced dataset. This helps demonstrate how data diversity influences AI outputs and how biases can be mitigated through better data representation.
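The bias exercise above is, in effect, changing the weights on the die. A minimal sketch of that idea (the tone labels, weights, and function name are placeholders, not from the actual tool) shows how an imbalanced "dataset" produces only one kind of output, while rebalanced weights produce a diverse mix:

```python
import random
from collections import Counter

def sample_hands(tone_weights, n=1000):
    """Sample n 'generated hands', where weights stand in for how often
    each skin tone appeared in the training data."""
    rng = random.Random(42)  # fixed seed so the demo is repeatable
    tones = list(tone_weights)
    weights = [tone_weights[t] for t in tones]
    return Counter(rng.choices(tones, weights=weights, k=n))

# Imbalanced "dataset": only white hands ever appear in the output.
print(sample_hands({"white": 1.0, "medium": 0.0, "dark": 0.0}))

# Rebalanced weights: the outputs now reflect all tones.
print(sample_hands({"white": 1/3, "medium": 1/3, "dark": 1/3}))
```

The first call returns only one tone, the second a mixture, which is exactly the before-and-after contrast the classroom exercise is designed to make visible.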

This feature is particularly timely as AI ethics and transparency become key issues in technology education. By introducing children to these concepts early on, the Dhariwals hope to foster a generation of tech-savvy individuals who understand AI’s strengths and limitations.

Emma Callow, a learning experience designer who collaborates with schools on integrating new technology into curricula, praised the program’s approach. “There is a real lack of playful resources and tools that teach children about data literacy and about AI concepts creatively,” Callow explained.

“Schools are more worried about safety rather than the potential to use AI. But it is progressing in schools, and people are starting to kind of use it. There is a space for education to change,” she added.

Little Language Models will launch on the Dhariwals’ online education platform, coco.build, in mid-November. The program will also be piloted in various schools over the next month, providing educators with early feedback and refinement opportunities, as noted by MIT Review.
