Here, your main goal is to get a feel for what “training” and “testing” mean by running a small experiment built around patterns, data, and mistakes.
Begin with a paper-based classification game. Draw or print at least 24 small images divided into two classes you can tell apart, such as “cats vs. dogs” or “triangles vs. circles.” Mix in some strange examples: odd angles, partial shapes, unusual sizes. Ask someone to act as the “model” and show them a training pile of clearly labeled examples. Then, give them a mixed test pile without labels and have them guess the class for each one. Record which guesses are correct and which are wrong. Switch roles and try again. Talk together about which images were confusing and why—maybe the pattern was unclear, or the training examples were too limited.
On a separate page, write a short “training report.” Describe what your two classes were, what your training examples looked like, what kinds of mistakes the “model” made, and what you would change about the data next time to make the model better.
If you have access to Teachable Machine, you can repeat a similar idea with a real machine learning tool. Go to:
https://teachablemachine.withgoogle.com/
Create an image project with two or three classes, such as different hand gestures or different objects held in front of the camera. Collect a variety of training images for each class, train the model, and then test it under different lighting or with different people. Keep a simple table of correct and incorrect guesses. Use your paper training report as a guide and write a comparison: in what ways did the AI model behave like your human “model,” and in what ways was it different?
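If you want to turn your table of guesses into a single number, you can compute an accuracy score. Here is a minimal sketch with made-up tallies (the counts below are invented examples, not real results):

```python
# Turning a tally of guesses into an accuracy score.
# The counts below are made-up example numbers, not real data.
correct = 14
incorrect = 6

# Accuracy = correct guesses divided by total guesses.
accuracy = correct / (correct + incorrect)
print(f"Accuracy: {accuracy:.0%}")  # prints "Accuracy: 70%"
```

Computing the same score for both the human “model” and the AI model gives you a fair way to compare them in your report.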
If you have access to Python in a kid-friendly environment (for example on Replit: https://replit.com/ or in a Jupyter notebook provided at school), you can run a tiny code-based experiment. Create a small list of numbers and label them “big” or “small,” then use a short example script (from a teacher or tutorial) that trains a simple classifier. After you run it, change the training data by adding unusual numbers and see how the predictions change. Add a few sentences to your report explaining what “training data” meant in all of your experiments—paper, Teachable Machine, and Python—and why good data matters so much.
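If no example script is handy, here is a minimal sketch of what such a code-based experiment might look like. Everything in it is invented for illustration: the classifier simply learns one threshold halfway between the biggest “small” training number and the smallest “big” one, and the function names and numbers are made up.

```python
# A tiny "big vs. small" classifier that learns a single threshold
# from labeled training data. All names and numbers are illustrative.

def train(numbers, labels):
    """Learn a threshold halfway between the largest 'small' and smallest 'big'."""
    smalls = [n for n, lab in zip(numbers, labels) if lab == "small"]
    bigs = [n for n, lab in zip(numbers, labels) if lab == "big"]
    return (max(smalls) + min(bigs)) / 2

def predict(threshold, number):
    """Classify a number using the learned threshold."""
    return "big" if number >= threshold else "small"

# Training data: a few clearly labeled numbers.
train_numbers = [1, 2, 3, 50, 60, 70]
train_labels = ["small", "small", "small", "big", "big", "big"]

threshold = train(train_numbers, train_labels)
print(threshold)                 # 26.5
print(predict(threshold, 42))    # big

# Now add an unusual training example and retrain:
train_numbers.append(40)
train_labels.append("small")     # a surprisingly large "small" number
threshold = train(train_numbers, train_labels)
print(predict(threshold, 42))    # small -- the same number, a new answer
```

Notice that adding one odd training example moved the threshold and flipped the prediction for 42, which is exactly the effect the experiment asks you to observe.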