The Case for Child-First (AI) Tech
Are screens really the enemy? The future of the AI-native generation.
Hi all, today’s post may come off as me ‘momming’ too hard, but I think this is an important topic to discuss because it directly touches our future generations, and not enough people are looking at it rationally. With some oversharing and a bit of preaching, I share my thoughts on the relationship between children and technology and try to dissect the economics of it all.
Child-first product design
While I was in San Francisco last week, I was introduced to a very interesting startup called Dex Camera. It looks like a little fan, but it is actually a handheld camera with a small screen that kids can point at objects and ask what they are. It is an AI-powered educational device with a voice that sounds like Ms. Rachel and a design as simple as the Toniebox. If you have a toddler, you know exactly what I’m talking about.
The product seems consciously designed around the most relentless toddler question in the world: What is that? The child points, and the device responds. It asks the child to repeat after it before moving on. It turns curiosity into a loop.
As a mother of a toddler in a trilingual (quadrilingual) environment, this made sense to me instantly. We speak English at home, she speaks Mandarin at nursery, and she is surrounded by Cantonese at the playground. All day I get some version of, “mama, what is that? How do you say it in English? How do you say it in Mandarin? No, but what about Cantonese?” So when I saw Dex, I was immediately taken with a tech-enabled language tool built around how a child’s brain naturally works.
But Dex itself is not really the story here today. Dex just made me realize how rarely we design technology from the child upward rather than the adult downward.
In all our mommy groups and playdate chats, we keep debating whether screen time is good or bad, when maybe we should first ask whether the product was ever designed for a child in the first place.
Most of the digital products children encounter today are adult products with parental controls bolted on later. A smartphone, a tablet, and a chatbot are, by nature, made for adults. All we have done is add time limits, filters, controls, and guardrails, and hope that somehow transforms them into something child-appropriate.
But every once in a while, you see a product, and it becomes obvious that the people behind it have actually spent time with small children and tried to understand them. Toniebox is one example. Dex is another. 小天才手表, the Little Genius watches in China, is another. They are all doing different things, but they share one common trait: they are not adult devices shrunk down for kids. They are built around what children can physically do, cognitively handle, and naturally want.
The universally recognized IPs, such as Ms. Rachel, and devices like Tonieboxes are a testament to the fact that good product design knows no borders. Ms. Rachel herself, a trained early-childhood educator, combined her expertise with that of her husband, who works on Broadway, to create one of the biggest musical and educational IP empires of the modern day. These are the “best” kind of tech learning and entertainment for children, and probably the kind that would earn you the least judgment from the people who love to give unsolicited parenting advice.
And if you think about it, the underlying technology for Toniebox is not new. Audio stories are not new. But the product design is intuitive enough for a small child to use independently. Put the figure on top, and the story starts. Remove it, and the story stops. The ears act as volume buttons and are physically obvious, unlike an iPhone screen, whose flat, textureless surface a small child cannot make sense of. As a mother, I can’t overstate how much that matters. It really is product design with a children-first mentality.
So with that, it made me think about how to approach AI for kids, especially the children who will be the first AI-native generation, and why there is still no clear leader in this vertical.
The overlooked business vertical
I think part of the problem is that we may be misclassifying the category.
If you call these products toys, they look expensive or overly digital. If you call them gadgets, they look niche. But if you think of them as child-native learning and communication devices, the economics start to look different.
What makes this category commercially interesting is that parents are not really making an entertainment purchase. They are making a risk-management and convenience decision with a learning upside.
In practice, these products are competing less with toys than with a messy bundle of substitutes: the old iPhone handed down too early, the extra tutoring session, the endless parental explaining, the need to stay in touch, and the desire to give a child some autonomy without opening the entire internet. That is why the category can support higher pricing than a normal toy aisle product, because if done right, it solves several jobs at once.
That is why I think this may be a surprisingly overlooked vertical, even for me as a parent to young kids. It kind of sits awkwardly between consumer hardware, toys, education, and family safety, so nobody quite owns it. Consumer investors may dismiss it as a low-ticket novelty. Educators may think it is too gadget-driven or risky. Hardware investors may think it is too small a niche. But that blind spot may actually be an opportunity for innovation and something truly new.
Setting aside AI schools and tutoring services like Alpha School, Khanmigo, or Ello, I want to look at three ‘made-for-kids’ tech products whose business models could serve as viable templates for future products to follow.
One model is Tonies. One is a Dex-like product. One is Little Genius.
A. Tonies = IP + content library
Toniebox is not really a hardware story. The box gets into the home, but the business is the library that forms around it. Tonies reported H1 2025 group revenue of €176.6 million, of which €136.7 million came from figurines and only €32.3 million came from Tonieboxes. The company also says it has sold more than 10 million Tonieboxes and about 134 million Tonies worldwide. What looks like a toy is really an IP and content-library business with household lock-in: the library first becomes a habit, and the habit then becomes a hardware-and-software switching cost.
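To put the reported split in perspective, here is a quick back-of-the-envelope calculation of the revenue mix from the figures above. (The small remainder is not broken out in the report; I am assuming it is accessories and other revenue.)

```python
# Tonies H1 2025 revenue split (EUR millions), from the company's reported figures.
group_revenue = 176.6
figurines = 136.7
tonieboxes = 32.3
other = group_revenue - figurines - tonieboxes  # accessories etc. (my assumption)

figurine_share = figurines / group_revenue * 100
toniebox_share = tonieboxes / group_revenue * 100

print(f"Figurines:  {figurine_share:.1f}% of revenue")  # the content library
print(f"Tonieboxes: {toniebox_share:.1f}% of revenue")  # the hardware
print(f"Other:      EUR {other:.1f}M")
```

Roughly three-quarters of revenue comes from the figurines, which is the clearest evidence that the box is the acquisition channel and the library is the business.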
The Tonies model is clever because it combines licensed characters with original IP. The licensed characters do the obvious job: they attract attention, take up shelf space, and secure the first purchase. The originals, in turn, let Tonies own more of the habit, the release calendar, and the relationship with the child and the family. The trade-off, of course, is that licensed IP is helpful for customer acquisition but can also compress margins and leave the company more dependent on third-party franchises than investors may initially appreciate. Recent figurine growth, by the company’s own description, has been driven by a mix of licensed third-party Tonies and Tonies Originals, including Sleepy Friends.
And that is why I think Tonies is such an important template for what AI-native children’s products could become because the underlying design principles are so strong: durability, simplicity, low-friction engagement, and real autonomy. The child has agency, but inside a bounded system. They choose and react, and through that autonomy, they build taste, and they are not just swiping through an infinite sludge of screens.
B. Dex-like = hardware + recurring AI service
A Dex-like product is a very different business. Dex currently sells a $249 device (which I do think is a bit steep) and offers a free plan alongside paid tiers at $9.99 and $19.99 a month. The plans add features like LTE connectivity, daily activities, and more customized learning, while the current version still relies on internet access, whether via Wi-Fi or an always-on eSIM.
That is a fundamentally different economic model. Tonies can sell the same figurine again and again. A Dex-like product, by contrast, likely needs a subscription (much like Spotify's business model) to justify the ongoing compute, moderation, safety layers, and fresh content. And that is where the challenge becomes more interesting. Parents do not want to feel like they are paying per token or being nickel-and-dimed every time a child points at a flower or a bus. But the company also cannot pretend that inference, safety, and personalization are free forever. So the product has to make pricing feel simple, while the economics underneath are much more complex.
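To make that tension concrete, here is a toy unit-economics sketch of a flat-subscription AI device. Every number in it is a hypothetical illustration, not Dex’s actual costs, usage, or margins:

```python
# A toy unit-economics sketch for a flat-subscription AI device.
# All numbers are hypothetical illustrations, not any company's real figures.

def monthly_margin(subscription_price: float,
                   queries_per_day: float,
                   cost_per_query: float,
                   fixed_cost_per_user: float) -> float:
    """Flat subscription revenue minus variable inference cost and fixed
    per-user costs (moderation, content refresh, connectivity, etc.)."""
    variable_cost = queries_per_day * 30 * cost_per_query  # ~30 days/month
    return subscription_price - variable_cost - fixed_cost_per_user

# A light user on a $9.99 flat plan comfortably covers costs...
light = monthly_margin(9.99, queries_per_day=20, cost_per_query=0.01,
                       fixed_cost_per_user=3.0)
# ...while a very chatty toddler on the same flat plan may not.
heavy = monthly_margin(9.99, queries_per_day=150, cost_per_query=0.01,
                       fixed_cost_per_user=3.0)

print(f"light user margin: ${light:.2f}/mo")   # positive
print(f"heavy user margin: ${heavy:.2f}/mo")   # deeply negative
```

The point is simply that a flat price caps revenue while usage does not cap cost, which is why bounded daily activities, caching, and careful product design matter as much as the model itself.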
Which is why I do not think the moat here can just be “we wrapped an LLM in a cute device.” At this point, anyone can build a wrapper. The moat would have to come from a combination of hardware and software design, including child user behavior, trust, guardrails, saved progress, family language setup, curriculum, and habit. In other words, the defensibility is not solely in the model. It is in how well the entire experience is designed for a child and how trusted it is by a parent.
(btw, I asked the founder where I could get one of these; he pointed me to their Shopify site)
C. Little Genius = network + controlled communication
Then there is Little Genius 小天才手表, which points to a third model altogether. Not IP. Not token usage. More like a network and family-safe ecosystem as the selling point. Bright colors and big buttons. Easy calling and location tracking. Saved contacts and controlled communication. No truly open internet, but essentially all the technology that modern parents want their kids to carry around.
According to SCMP, citing Counterpoint Research, Little Genius held more than 48% of the global children’s smartwatch market in the first half of 2025. That number matters not just because it is large, but because it tells you this category is not imaginary. I think there is real demand for devices that sit between no technology at all and full internet access.
It also hints at where this business model can get tricky. A communication device can become a network. A network can become a social graph. And once children want to be where their friends are, the moat is no longer just the hardware. It is the network density and the norms that form within it – it’s like a child-only, safe-space social media network. That may turn out to be the most defensible model of all, but it is also a slippery slope, because it makes the product the most exposed to regulation, to interoperability pressure, or to both. Wired recently reported on how social features inside Xiaotiancai have already produced like-chasing, status games, and cyberbullying concerns, which is exactly what happens when a child-safe device starts drifting toward social media logic.
So when people talk about “AI for children” as if it is one market, I think they are already thinking too loosely. Tonies, Dex-like devices, and Little Genius are three very different businesses. One is IP plus content-library lock-in. One is hardware plus a recurring AI service. One is network plus controlled communication.
Technology does not have to replace thinking
I feel that the broader debate around AI and children right now gets too binary. It also folds into a bigger question that often gets collapsed into one: is technology improving our lives, or is it ruining them? My own view is that we cannot reverse technological progress, and our children will eventually need to learn how to coexist with it. What we can do is design technology more mindfully for them.
Technology, broadly speaking, has improved the quality of life throughout human history. The question is what kind of technology we are placing in front of children, and what kind of habits it is training into them.
On one hand, there is the view that AI and smartphones are melting children’s brains, killing attention spans, and replacing real thinking. On the other, there is a kind of super-charged techno-optimism that puts children in front of technology designed for adults and expects them to simply learn the tools early, because the future is inevitable anyway.
I find both sides lacking in nuance; the reality lives in the tension between them.
I do not think it is paranoid to worry about addiction, bad content, persuasive design, privacy, or the habit of outsourcing thought. A lot of those concerns are justified. This is where I agree with many of Jonathan Haidt’s points in The Anxious Generation, a book that has basically become gospel among millennial parents. My husband certainly treated it that way for a while and preached it to everyone we met.
The reason the book hit so hard is that it captured something many parents already felt, especially for us who did not see the dark side of social media coming when we were introduced to it. The gist of his book is that there is an irony in how we parent today. While we have become overprotective of children physically, we have also turned a blind eye to how they’re exposed to the digital world. We monitor where they go, but hand them devices that expose them to distraction, negativity, age-inappropriate content, and potentially unsafe contacts.
But not all technology belongs in the same bucket. Bluey and YouTube autoplay are not the same thing. A reading coach and an AI companion are not the same thing. A handheld language camera and a general-purpose chatbot are not the same thing. We flatten these categories, then act surprised when the debate goes nowhere and yields no real constructive solutions.
That nuance matters because the key to learning is still thinking. Observation. Retrieval. Pattern recognition. Frustration tolerance. Curiosity. Technology does not need to replace any of that. In fact, the best technology for children should do the opposite. It should protect those muscles, because that is what teaches critical thinking and continuous learning.
A child using a camera to identify a leaf, repeat the word, hear it in another language, and then go look for another object is still observing the world. A child using a reading tool that listens while they decode words is still doing the reading. A child using an always-on AI companion as a best friend or all-purpose answer engine is in a very different kind of setting.
Common Sense Media warned in January that no child under 5 should be given an AI toy companion, urged extreme caution for ages 6 to 12, and found that 27% of AI toy outputs in its testing were inappropriate for children. That is not a small miss rate when the user is a child.
A good child-first product should encourage the child to engage with the real world and do what they are intuitively curious to do: try new things and learn by making and asking. A bad one does the opposite. It fills the silence, removes friction, gives answers too quickly, and slowly trains the child to eliminate uncertainty rather than sit with it.
The reality is that children today will grow up in a world saturated with AI and devices. In fact, I think it would probably harm them if they were completely siloed away from technology and then suddenly became the only ones in the classroom who did not know how to use a device, or were seen as weird for being the only one not online. What I think we should really be pushing tech companies, and maybe educators too, to do is build child-safe and children-first products.
It is accepted across all parenting philosophies that children need some form of guidance on how they interact with others, what literature to read, and where to go running without falling into a pit, but shouldn’t that consensus now extend to how they interact with technology, too?
The future of children’s technology should not just be about making kids better at using machines. I reject the “vocational school” approach here, too. It should be about designing tools that still leave room for noticing, wondering, remembering, struggling a little, and making sense of the world.
In many ways, AI can absolutely be used to kill thinking if what you expose children to is a general LLM. But it can also be designed to harness better thinking, and a more realistic embrace of the digital world we already live in.
I know some people may dismiss this topic as “soft,” but I do not think it is soft at all. It is about our children, about technology, and about a business vertical that still feels underbuilt and underthought. I hope more people take it seriously, think about the consequences of putting children and technology together, and design and build more mindfully.
Disclosure: this is not a sponsored piece. I just came away genuinely intrigued by some of the devices now being designed for children, and by how much more intentional they feel than simply handing a child an adult device with a few filters slapped on top.