Qwen Launches Personal Assistant. Notes From AGI Next Summit. China's Consumer AI, Superapp Ecosystems, and Bottlenecks
A super-AI app launch, coffee chat, an AGI joke, and a reality check
Hi all,
1) Personal reflections
A few days ago, I was at a coffee chat with about ten women in AI. Some were building consumer-facing startups, some worked at mega-cap infra companies, some were ex–Mag 7, some were investors, and so on. The conversation was casual, but it drifted fast from how AI might reshape education to how we preserve our humanity: finding time to shut off our phones, make art, read a book, and finish a puzzle.
Where we kept getting stuck was the same question everyone keeps circling: is human intelligence really replaceable? We ended up debating whether a model is a product or infrastructure. We also debated whether the Manus deal is a good example and path forward for Chinese AI companies trying to go global, or just a one-off situation. Many of us were young mothers, anxious about what AI may mean for education and our children’s future.
One person shared a joke she’d heard from an AI researcher in Silicon Valley. If AGI is achieved, what will humans do? He replied calmly: “We should exercise and strengthen our bodies.” The irony. But jokes aside, there’s something true in it. When new technology develops, some jobs get replaced, but new ones also emerge that we couldn’t have imagined. When cars were invented, horse-carriage drivers probably lost their jobs, but driving became a profession. As self-driving cars go mainstream, drivers may be displaced again, and we don’t yet know what replaces that work. Ten years ago, there was no gig economy and no food delivery riders. I guess my point is: we need a cautious embrace, but also the optimistic hope that new technology does not simply wipe out our purpose. For a period, some may be displaced, but then balance within society will resume.
To be honest, I think of it more as we are teaching our children how to eventually use these tools, just as my parents taught me how to type or search the web. I’m in the camp that traditional credential signaling (fancy degrees) may mean less, but agency cannot be replaced. How does pattern recognition help us understand the intricate relationship between people, their emotions, and the way people make decisions irrationally? How do machines ever piece together sensory things like lighting in a room, the mood of a crowd, smell, and color that can evoke emotions?
In a recent AI summit held in Beijing, Alibaba’s Lin Junyang said, “Personally, I think technological progress is linear, but human perception is exponential.” That’s exactly it, isn’t it? Even as technology removes “friction” like cars, planes, the internet, AI in whatever form, and even our fear, our greed, our lust, our love… in their rawest form, these haven’t really evolved or changed for centuries. It’s just now we cry over texts instead of pigeon-couriered handwritten letters, maybe.
So no matter how AI develops, I believe the truest form of intelligence is within our ability to connect the dots and make sense of the world of noise, sensory input, and emotions. It makes me believe more and more that no matter what my children do one day, I will encourage them to read literature, read non-fiction, and study liberal arts in school. Vocational skills can be picked up in jobs. Execution, more and more, can be done by agents. But the ability to think critically will not be replaced, and if anything, will matter more than ever.
That’s my little opening rant.
And then, like these conversations always do, it snapped from philosophy back to economics. Because underneath the “will AI replace us” anxiety is a much more practical question: what exactly is AI trying to solve, and what’s actually holding it back?
I want to double-click on Alibaba’s Qwen AI Assistant App launch today and then share some notes on my thinking from rereading the AGI Next Summit transcript. The event featured some of the leading voices in China's AI industry: Alibaba’s Lin Junyang, Tencent’s Yao Shunyu, and Zhipu/Z.ai’s Tang Jie on the bottlenecks and trends in China’s AI diffusion.
2) Where the deployment bottleneck is
The model-versus-product debate sounds abstract, but it’s becoming a heated one for a reason. On one side, you have labs committed to research and not caring much about user numbers; on the other, you have “wrappers” betting on usage and distribution, which also means betting their fate on someone else’s roadmap. Then you have the in-betweeners, and Manus is the obvious example, where the claim is that the raw material (the model) matters, but how you “cook” it is what separates a meh restaurant from a Michelin-star experience.
If the model is the product, the logic is straightforward: benchmarks, rankings, “best model wins,” and everyone else becomes a substitute. If the model is infrastructure, then “best model” is only the beginning. The value migrates to distribution, integration, and the interfaces people already live inside, because that’s where behavior is and where humans actually interact. That’s where switching costs become emotional, not technical.
Yao Shunyu, Tencent’s latest high-profile hire from OpenAI, was unusually direct about what he thinks the real bottleneck is. Not parameters. Not the model alone. Education first:
“It’s not about humans being replaced by AI. It’s about people who know how to use tools replacing those who don’t. Instead of obsessing over model parameters, it’s more meaningful, at this stage in China, to teach people how to use Claude, Kimi, Zhipu, and other tools effectively.”
Then deployment:
“From my experience in To-B companies, even if the model doesn’t get any smarter, just deploying it better in real-world environments can bring 10x or even 100x returns. Right now, AI’s impact on GDP is still under 1%, so the room for growth is huge.”
If you take those seriously, the “model is the product” story starts to wobble. Because the hard part is no longer only building intelligence. It’s getting intelligence to show up where it matters, inside workflows, inside organizations, inside habit.
And yes, there’s a cynical read here too: it’s also a neat way to justify moving from OpenAI to Tencent. If you believe deployment is the real bottleneck, then Tencent, of all companies, should be able to win simply by embedding AI into the default interface of daily life.
3) What AI is even trying to solve
The AGI discussion largely revolves around benchmarks and performance, but in conversations with practitioners in real life, it’s often a more practical question: can the system help you solve problems that don’t come with a clear recipe?
This is where the Manus thinking fits naturally. Lin Junyang framed it by borrowing Peak’s take (Manus co-founder), and it’s one of the clearest definitions I’ve heard of what “general agents” are actually trying to do:
“Just because I work on foundation models doesn’t mean I get to be a startup mentor for this. I’ll borrow something Peak (Manus co-founder) said: The most interesting part of building general agents is solving long-tail problems. That’s also the most compelling aspect of AGI, helping people solve problems that have no clear solution elsewhere. Popular problems (like top product recommendations) are easy. True AGI means solving those edge-case issues that no one else has figured out.
So, if you’re a great product wrapper that has solved an issue better than the model company itself, then you’ve got a shot. But if you lack that confidence, model companies will likely win out. When they hit a wall, they can just “burn GPUs” and train deeper models. That’s a low-level advantage, and it’s hard to compete with. So really, it depends on your approach.”
I like this because it drags AGI out of sci-fi and back into product reality. If “long-tail problems” are the point, then the bottleneck isn’t only technical. It becomes taste.
Which long-tail problems matter? Which are valuable? Which are solvable? Which are too regulated, too messy, too expensive? Which benchmarks do you chase, and which do you ignore? Those aren’t neutral choices. They’re incentives. They’re business models. They’re judgment calls.
Zhipu’s chairman Tang Jie, fresh off the company’s IPO hype, seemed very alert to commercialization during his public remarks, describing what separates survivable agent companies from disposable ones in a way that’s both harsh and useful:
“There are three key factors that will shape where agents go:
Value: how valuable is the problem the agent is solving? Early GPTs failed because they were too simplistic, just prompt wrappers. Agents must tackle real, complex problems to be useful.
Cost and boundaries: here’s the paradox. If a problem can be solved with a quick API tweak, the foundation model will absorb it. So applications need to find their niche before the base model catches up.
Speed: This is all about timing. If we can go deeper into a use case and polish the experience ahead of the curve, even by six months, we can survive. Speed is the name of the game. If our code is solid, we might just outpace others in coding or agent development.”
That’s not “wrappers are dead.” It’s also not “wrappers are the future.” If the foundation-model lab can push out a similar feature and swallow yours overnight, you have no business. That’s why startups with proprietary data in niche use cases, or those that have doubled down on a vertical, have a proper shot: the moat is expertise. Startups will need to solve something valuable and hard enough, or niche enough, to resist absorption, or else outrun the absorption clock.
4) China’s current AI landscape at a glance
That brings me to where the Chinese AI landscape looks right now, at a high level. I’d still break it down into big tech versus startups, and within big tech, companies look similar at a distance, but in detail, they’re very different.
Big tech in China is mostly doing some version of “models plus ecosystem,” but their strengths and paths to monetization diverge.
Alibaba started with an enterprise-first mentality. Whether it was open-sourcing Qwen or selling API services, all roads led back to the cloud. They’ve been China’s cloud anchor for a long time, and the model strategy has always been tied to that.
More recently, there’s been a conscious move to make the consumer surface look like the model itself, renaming the app to Qwen, signaling: here’s China’s best open-source model, directly accessible. But the bigger play isn’t the app in isolation. It’s what the model can call inside Alibaba’s ecosystem: Fliggy (travel), Gaode (navigation and ride-hailing), Ele.me (food delivery), and, of course, Taobao. If OpenAI is trying to invent “mini programs” and shopping assistant features from scratch, Alibaba already has the services, the data, and the payment rails inside one world.
Tencent’s path has been more consumer-native and more distribution-led. They were early to integrate DeepSeek into WeChat—pretty bold in a market where many incumbents were still guarding their platforms and sticking to single-model pride. But the thinking was simple: Yuanbao, frankly, lagged behind on model performance, and distribution is Tencent’s unfair advantage.
That move mattered because it showed the power of the ecosystem and functional adjacency. Opening WeChat and giving people a chat interface to interact with AI is natural; MAU rises quickly when you don’t ask users to change habits. But the deeper question is what happens after the first wave of curiosity. Distribution can give you the first hundred million. It can’t automatically give you retention, usefulness, and monetization.
ByteDance is the third pillar. Doubao (both the app and the model family) has the largest sheer consumer volume in China right now. ByteDance has also gone all in on multimodality. If this is a consumer crown race, they’re not leaving it to anyone else.
Then you have the startups, the “four tigers”: Zhipu, MiniMax, Moonshot, and Baichuan. Each has had to make a strategic choice under tighter capital conditions and tougher competition from big tech. Baichuan has leaned into healthcare. Moonshot is still associated with frontier research and Kimi. Zhipu is pushing model-as-a-service and a developer/coding angle. MiniMax has been strong on multimodality. It’s crowded, and the reality is that startups are tighter on resources while big tech has distribution, compute access, and more room to absorb mistakes.
If anything, that pushes startups toward verticals. Not because vertical is trendy, but because horizontal consumer distribution is a bloodsport and the incumbents own the on-ramps.
5) Agentic and the ecosystems of Alibaba and Tencent
Much of the AI discourse remains obsessed with replacement. But the more honest lens is: what gets taken off our plates first, and where does AI show up without asking us to relearn life? For much of the last year, we’ve been writing about the superapp ecosystem that exists in China, which is very different from the US. This isn’t just a walled-garden moat; it’s integrating the data, the user, and the experience within one interface, organically creating an AI operating system as the company pushes the direction top-down.
Just today, Alibaba’s VP Wujia, who oversees the Qwen chatbot app and Quark AI (the enterprise product), announced the Qwen assistant app. The company believes that everyone will have an AI assistant in their pocket one day. Qwen Task Assistant (任务助理) 1.0 will be free for everyone to use; think of an agentic assistant in your palm.
Reportedly, it can support almost every task within Alibaba’s digital infrastructure. You can go straight to Fliggy (China’s Booking.com) and Amap (think Google Maps plus Uber) to plan your itinerary and book travel tickets, book entertainment tickets through Damai (大麦, China’s Ticketmaster), pay through Alipay, and order goods via Taobao. So consumers can actually access everything from the Qwen app. This is exactly what I predicted: everything can be completed within the ecosystem through a chatbot interface. It’s what OpenAI is trying to do, except OpenAI has to onboard all kinds of external partners, while Alibaba can connect it all in the backend.
The key is all that “context” Qwen has, which is the part that quietly decides whether consumer AI becomes a default interface or just a feature. That context, the data, will also be the hardest part to incorporate and translate across different apps for Qwen to dissect.
Right now, most people don’t use AI as they would a new operating system. They use it like a turbocharged search box. That matters because it means the leap isn’t “bigger model.” The leap is what happens when the model becomes relevant in context, not just competent in abstraction.
That’s why Yao’s deployment line is so sobering: even if the model doesn’t get smarter, better deployment can bring “10x or even 100x returns.” That’s a diffusion argument. At this point, capability is not the bottleneck. Integration is.
China has a distinctive setup here: the distribution layer. If your “agent” can sit inside a superapp and call existing services, payments, bookings, commerce, and customer service, then the agent doesn’t need to invent the ecosystem. It just needs to orchestrate it. That’s why the agent story in China is so tied to WeChat mini programs, and why Alibaba’s ecosystem is interesting, because it can call Taobao, Ele.me, Gaode, Fliggy, all inside one world.
Yao touched something sensitive here, but it’s central to the consumer story:
“If the model knows it’s cold out, which neighborhood I’m in, what my spouse likes to eat, or what we talked about last week, well, that changes everything. Imagine forwarding your WeChat chat history to a bot like Yuanbao. Suddenly, the model has much richer context to work with, and the value to the user increases significantly.”
The nuance matters: forwarding implies intent and consent. That’s the difference between creepy and useful. It’s also why existing platforms (search bars, chat interfaces, super apps) are such powerful diffusion engines. They already have behavior and context surfaces. They just need permissioned pathways to make that context actionable.
Yao summarized the thesis cleanly: “In short, leveraging extra context, paired with a strong model, is key to unlocking better consumer AI experience.” And he explained why Tencent believes it has a structural edge beyond just “having a model”: “Startups often rely on externally labeled data or APIs. Our edge is that we already have rich, real-world use cases.” He also pointed to internal data loops as a moat: “We can tap into the actual development and debugging processes of our thousands of engineers—that’s way more valuable than synthetic data.” Then he closed the loop: “For To-C, it’s about going deep on context. For To-B, it’s about leveraging internal use cases and real-world data.”
And then, beneath all of it, there’s a deeper bottleneck that doesn’t show up on leaderboards.
Yao said it most starkly: “The real bottleneck is imagination. We understand what reinforcement learning can do; we’ve seen things like O1 solving math problems. But with self-learning, we haven’t even defined what the ‘target task’ looks like.”
Tang Jie from Z.ai describes the same wall through economics. When you pour in resources and stop seeing gains, you hit “efficiency bottlenecks.” “What we need is a new paradigm to measure return: Intelligence Efficiency.” And his conclusion is a shot across the bow: “That’s why I truly believe a paradigm shift is coming in 2026. And we’re working hard to make sure we’re the ones leading it.”
Lin adds another under-discussed constraint: AI is sliding into daily life, but “we lack a unified metric to judge how well it’s working.” If you can’t measure it, you can’t govern it. If you can’t govern it, you can’t deploy it widely.
So if the last two years were about “how smart can the model get,” the next two may feel more like “where does intelligence actually become real?” In interfaces. In deployment. In education. In permissioned context. In power constraints. In whether we can define the right tasks and measure whether they’re being solved.
In the U.S., stricter regulation around opening up functionality and antitrust concerns mean there is no superapp. But what OpenAI is building looks more and more like a superapp, and what Chinese big tech has, even with somewhat less capable models right now, is an uncanny ability to call on all of its apps and functionality across its ecosystem in one interface.
The companies that can deploy well will outpace the ones that can’t. And the most enduring human edge may not be raw intelligence; it may be taste, judgment, and the ability to decide what problems are worth solving in the first place.
FYI: A ton of other big tech coverage, and details of how each BAT is running their AI strategy: AI Proem Big Tech