Podcast: China and AI supply chains
What are the other parts of the AI stack beyond models and chips? Where does China stand on these layers?
In this special crossover episode, I talk with TP Huang about the broader AI supply chain beyond just models and chips. We cover the new DeepSeek V4 model and China’s efforts to integrate with domestic AI chips. We talk about energy infrastructure, like inverters and transformers. And we discuss the robotics supply chain and how this overlaps with smartphones, EVs, and other industries in China.
Note: TP Huang is not in the video version on YouTube for privacy reasons.
Links:
@tphuang on Twitter / X
Transcript
Kyle Chan (00:00)
Welcome to a special crossover episode of the High Capacity Podcast and the China Tech Podcast. I’m Kyle Chan, a fellow at Brookings.
TP Huang (00:09)
And I am TP Huang. I’m just an anonymous dude who writes about China tech stuff online. I’m very excited to join Kyle today to talk about China and AI.
Kyle Chan (00:16)
TP, you don’t give yourself enough credit. TP Huang is one of the sharpest observers and analysts of China’s tech scene across a huge range of sectors: AI, EVs, and more. If you don’t follow him, you really don’t follow China tech.
Today we’re going to talk about China and the AI supply chain. A lot of the discussion around AI focuses on the latest models and maybe some chips, but there’s a lot more to the broader AI stack, from hardware systems and energy to robotics and other applications. We’re not going to cover every part in this episode, but we’ll talk about a few layers of the broader AI stack that we think deserve more attention.
This will be a free-form discussion. Maybe we can start with DeepSeek V4 and the integration with Huawei Ascend chips. TP, what do you make of the new DeepSeek model, and especially this emphasis on working with Huawei chips and the CANN ecosystem?
TP Huang (01:36)
I think one of the issues facing smaller Chinese AI startups is that, unlike the Anthropics and OpenAIs of the world, they really lack a lot of funding and they also lack access to a lot of chips. It’s less of an issue for big players like ByteDance and Tencent, and maybe even Alibaba, but you can really see that companies like DeepSeek, Z.ai, and Moonshot have done some great work but clearly lack compute.
One of the things they do, at least in DeepSeek’s case, is that because they were the first to really stand out domestically in China, they attracted the attention of Huawei and also the Chinese government. I don’t know for sure if this is the case, but it definitely seems like Huawei and DeepSeek had a lot of cooperation to make sure that the new Ascend chips work better with DeepSeek’s algorithms.
There were some recent talks on Huawei’s blog, and they also made a video about it. They compared inference speed and talked about the low-level kernel changes they needed to make to support various features so that the chips can run inference on the models better.
Just as a heads-up for everyone out there, chips are typically used for two different purposes. One is the actual training process. Training also includes pre-training and post-training. The other purpose is inference. Once a model is trained and released, you need to actually run it, and that is the inference part.
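As a rough illustration of the split TP describes, here is a minimal sketch in Python using a toy linear model rather than any real LLM (all numbers here are made up for illustration): training repeatedly updates the weights from data, while inference just runs the frozen weights forward.

```python
import numpy as np

# Toy linear model: y = x @ w. Training updates w; inference only reads it.
rng = np.random.default_rng(0)
x = rng.normal(size=(64, 4))            # training inputs
true_w = np.array([1.0, -2.0, 0.5, 3.0])
y = x @ true_w                          # targets

w = np.zeros(4)                         # model weights, initially untrained

# --- Training (covers pre-training and post-training): repeated weight updates ---
for _ in range(500):
    grad = 2 * x.T @ (x @ w - y) / len(x)   # gradient of mean squared error
    w -= 0.1 * grad                          # weights change on every step

# --- Inference: weights are frozen; we only run forward passes on new inputs ---
new_x = rng.normal(size=(3, 4))
predictions = new_x @ w                 # no gradients, no weight updates

print("recovered weights:", np.round(w, 3))
```

The two phases stress hardware differently, which is why a chip can be co-designed for one and not the other: training is dominated by gradient computation and weight updates across huge batches, while inference is forward passes only.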
Huawei basically looks like it worked with DeepSeek on both the training part and the inference portion. I think a lot of the initial training was still likely done with Nvidia chips, but their stash of Nvidia chips is probably pretty small. The reason I say that is that they seem to be working so closely here with Huawei. If they had more chips, V4 would be better.
The right word is that V4 feels a little undertrained. I don’t know if you’ve tried running DeepSeek yet and compared it with other Chinese models online, but if you run Kimi and DeepSeek side by side on the same queries, DeepSeek runs so much faster. It’s not even thinking all that much, whereas Kimi takes a lot of compute resources.
Based on my observation, the Kimi models are better right now in terms of results. But I think that’s because the DeepSeek models are still relatively undertrained. Part of that is that they’re waiting for these superpods from Huawei to become available. They’re working with Huawei to deeply integrate and make sure they have all the features they need to actually train the models. Once they get access to them, their models will hopefully, in their mind, improve a lot more quickly.
Kyle Chan (05:35)
This is super interesting because it’s part of this bigger shift in China’s whole AI industry toward domestic hardware and decreasing reliance on Nvidia chips. Another major player leading the charge seems to be Zhipu, or Z.ai. I believe their GLM-Image was one of the first major models that was fully trained on Huawei Ascend, and that’s training rather than inference. Their latest GLM-5.1 models have been optimized for day-zero inference on a whole range of Chinese chips: Huawei, Cambricon, and a whole bunch of others, it seems.
Some of the labs seem to be racing every few months, or even every month, to turn out a new version and prove that they’re still SOTA, or state of the art. But there’s this longer current of a shift toward domestic hardware integration. This is something Nvidia thinks a lot about: co-design and what it means to work with the model side. It’s not just about making better chips on your own. It’s about integrative development, right?
TP Huang (07:08)
Jensen really gets it. When I hear Jensen talk about this, I feel like he understands why this is such a dire situation. He wants to make sure that Nvidia, and CUDA in general, keeps its monopoly as the source that everyone uses to do their training and other work.
He doesn’t want a scenario where the Chinese domestic supply chain gets totally figured out. My personal experience is that once China gets into something, the profit is gone in that industry.
Kyle Chan (07:47)
You can point to a whole bunch of industries right now where that is happening.
TP Huang (08:04)
I can totally understand why he talks about it this way. For labs like Z.ai and Moonshot, if they have a dual-track mandate of, one, keeping up with Anthropic, and two, making sure that domestic chipmakers can get up to par, that’s a lot of work for these smaller companies and labs.
Kyle Chan (08:37)
They’re not trillion-dollar startups like OpenAI and Anthropic in the U.S., much less the actual American hyperscalers, in terms of capital resources, staffing, and researchers who can experiment. So this is a heavy lift. Some of these companies, including Moonshot and Z.ai, are at the hundreds-of-employees level, or even smaller. They’re trying to juggle a lot of things at once, it seems.
TP Huang (09:10)
It’s not just that the Chinese labs are smaller in valuation. They have maybe one-tenth the researchers that the U.S. labs have. And because there are so many of them, they’re not only battling each other for funding; they also have to compete against Xiaomi, ByteDance, and all these guys, because Xiaomi has a competitive model too.
Kyle Chan (09:18)
Good luck. I would not want to be competing against all these guys.
TP Huang (09:37)
I haven’t had a chance to listen to your interview with Z.ai yet. What did they say about the pressure facing them?
Kyle Chan (09:50)
That was super interesting. One thing he mentioned was that while they feel compute-constrained now, they expect that issue to get better later this year, in the next few months. That triangulates with DeepSeek’s whole thing about maybe being able to lower prices later on as the Huawei Ascend 950s really ramp up.
That was a super interesting conversation. It also covered their overall strategy, how they’re trying to differentiate from some of the other major players in the Chinese market, and their global strategy.
TP Huang (10:35)
I have to listen to that one. That’s on my to-do list for sure. I also really want to talk to the people behind Moonshot because I don’t think they have been as out there talking about their company as Z.ai has, which has been really good, honestly.
Kyle Chan (10:39)
There are too many podcasts now, in a good way. We’re guilty of adding to that.
On the Chinese-language side, there have been a number of these Chinese AI founders talking on Chinese podcasts. The recent one with MiMo — not MiniMax, MiMo — was super interesting because there was a big focus on the agentic layer, the sudden rise of OpenClaw, and what it can signal for the future of development: not just having the single best model.
I think a lot of Chinese AI labs still look up to Anthropic, especially Opus 4.6 or 4.7, but there is also agent orchestration across different models and trying to draw on the strengths of each of them. Some are stronger on image generation or dealing with images. Some are stronger on coding. So even without the absolute best coding model, you can still get a lot of work done.
TP Huang (12:02)
In terms of getting coding work done, these days I use GLM and Kimi for pretty much all my coding. Just on that front, they’re already better than me at programming. So as far as I’m concerned, the Chinese models are already good enough in certain areas. It’s more that Anthropic’s models are simply bigger.
OpenAI’s models are big, though they don’t tell us how many trillions of parameters they have. But my sense is that, if you see the compute-resource issues Anthropic is running into, they’re running some pretty big models that use a lot of memory. In light of all these DRAM shortages, this is actually a major constraint for some of these guys.
Kyle Chan (13:01)
Do you want to say more about the broader hardware front? There’s so much focus on the single-chip level. You’ve talked before and posted some interesting things about looking at the system level, like these Huawei Atlas superpods, or trying to fill gaps on the memory side. There’s so much more than just the GPUs or NPUs per se.
TP Huang (13:33)
I find this interesting because if you read online, some Silicon Valley people are very obsessed with AGI. I don’t quite understand why, but they’re very obsessed with it. They’re also very hyperbolic about the need to win AGI and America’s chip advantages.
I have a close friend who works on building data centers for Meta, so I have a pretty good idea from talking to him about the shortages in AI data centers. Typically, it’s not the GPUs themselves. I asked him about transceivers; transceivers are not a problem. I asked what the problems were, and he said, “We’re running short on CPUs these days.” Several months later, we hear all this stuff about CPUs being short. He also said electricity is a problem. They can’t meet their original goals for 2026 because they just don’t have enough power supply for it.
That makes sense. That seems reasonable. He also mentioned fiber-optic cables, which are becoming an issue. NIC cards too. We’ll see if some of this actually plays out. My sense is that when we’re looking at AI data centers, we’re not talking about just the GPUs themselves. You have to put them in racks, and Nvidia is not going to make the racks for you. They find suppliers and qualify them for their programs. If the suppliers meet their specifications, then Nvidia says, “Okay, this is a qualified partner.”
But when these suppliers are trying out for Nvidia’s supplier programs, they’re sending their best prototypes or samples to Nvidia. When you have to double, quadruple, or 10x your production in a couple of years, you can imagine what that does to quality. One of the issues that AI data centers in America are running into — and I don’t know if this is the same case in China or not — is that once production comes up, quality becomes an issue.
So you could have the best GPUs, but if you’re not assembling them right, your GPU gets wasted.
Kyle Chan (16:23)
Putting together these clusters means putting together tens of thousands or hundreds of thousands of chips and dealing with mean time to failure and all the different issues that can go wrong. I don’t know what the curve is, whether it’s exponential or linear, but increasing complexity makes it much harder not just to set up these systems, but to keep them running and operating in a performant way.
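On the question of whether that curve is exponential or linear: under the common simplifying assumption that chips fail independently, the expected time to the first failure anywhere in the cluster shrinks linearly with cluster size. The per-chip reliability figure below is a made-up number purely for illustration, not a measured one.

```python
# Back-of-envelope: how cluster size shrinks mean time between failures (MTBF).
# The per-chip MTBF is a hypothetical round number, not a vendor figure.
chip_mtbf_hours = 50_000.0   # assume one failure per chip every ~5.7 years

for n_chips in [1_000, 10_000, 100_000]:
    # With independent failures, the expected time to the *first* failure
    # anywhere in the cluster scales as 1/N -- linear in cluster size.
    cluster_mtbf = chip_mtbf_hours / n_chips
    print(f"{n_chips:>7} chips -> a failure somewhere every {cluster_mtbf:5.1f} hours")
```

At the 100,000-chip scale, even very reliable components imply a failure somewhere in the system every hour or less, which is why checkpointing and fault-tolerant scheduling matter so much at that scale.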
TP Huang (16:58)
I think so. I don’t know if you’ve seen those X posts, or the Bloomberg article, about how many data centers are delayed in America and how many planned ones haven’t even started. I think it’s because building all this stuff is difficult. It’s not as easy as some Silicon Valley people thought.
I do data stuff. I’m in software. Software people are smart, but most software people have never done manufacturing in their lives. When their entire life is about how AI can figure anything out because it can write code for them, they might think AI can solve the energy problem, manufacturing problems, and everything else. They might think we can cut all the government funding for science and all the other things that are needed, and it won’t cause any problems because AI will solve it.
Kyle Chan (18:00)
I think it’s ironic that now there are bottlenecks in many different areas, but some of the most glaring ones are at the traditional services and industry level: electricians, plumbers, people who can literally do the construction, and the engineers to put together these data centers and the energy systems. That’s not even to mention the grid connection and grid infrastructure.
In some ways, those are areas where China has some advantage, partly because it has been building out energy at such a rapid pace for manufacturing for so long. It has also been building, so some of that skill set can transfer, though some of it is new. There are potential issues in both countries in terms of maintaining quality as you’re putting together these very complex systems. The bottlenecks aren’t always where you think they are. They’re not always at the GPU level or what have you.
TP Huang (19:13)
I don’t know if you’ve seen one of the people online — his name on Twitter is Cashkey or something like that. He always writes about these issues: GPUs are probably collecting dust in a warehouse somewhere because we still need to figure out how to actually put them in racks and supply power to them.
Kyle Chan (19:19)
Exactly. Speaking of rackmakers, a lot of them are in Asia too.
TP Huang (19:42)
I actually have some deep info on that one. Meta apparently builds theirs with Foxconn’s help in Mexico, in cartel country. As you can imagine, when something is built in the middle of Jalisco cartel territory, conditions aren’t the best. They run into issues where people are eating tacos on the factory floor. Inspectors will go to these factories and see liquid leaking on the floor. So you can imagine spending billions on these AI cards only to have them destroyed because the sanitation standards in the factories aren’t good enough.
Kyle Chan (20:25)
This goes back to the idea that even with super high-tech stuff, it comes back to the basics. It’s not easy to pull this off. I do think it’s very interesting to watch a company like Foxconn pivot into servers and racks. Now that seems to be driving a huge amount of its growth, compared with, say, iPhones, which I think most people associate Foxconn with.
TP Huang (20:53)
They’re pretty good at managing supply chains and manufacturing stuff in general.
Kyle Chan (20:56)
I was wondering if you had thoughts about deeper parts of the energy supply chain as well. There’s a transformer shortage in the U.S., and you need those to set up the substations that go along with the data centers. There are bureaucratic issues or bureaucratic capacity issues related to interconnection queues and all these issues.
China is now one of the largest exporters of transformers. Huawei is a major producer of inverters. That’s another key component. These are things you don’t hear about much when people talk about AI and data centers, but they are some of the underlying backbone infrastructure that goes into this.
TP Huang (21:52)
I always find it interesting that people online say things like, “I see no reason why we should sell any AI chips to China.” Then people on the China side say, “We get to sell more to America: inverters, transformers, everything. Whatever you need, we’ve got it here.”
It’s an interesting mindset. People in China are hustling through it and trying to increase production. I think they did get the electricity expansion figured out pretty well. That’s why they’re scaling production. A lot of times when you see them talking about exports, they talk about two markets: North America and Southeast Asia. The North America part is very explainable.
Southeast Asia is because all the Chinese big tech companies are building their data centers there. A lot of them basically have some third party run the facilities, buy Nvidia GPUs, and then rent that capacity back to run their models for training and things like that.
The entire grid infrastructure ecosystem in China is very advanced because they’ve invested so much money in this. Dave Fishman showed me a chart a few months ago that basically showed that, whatever the expected expansion of electricity demand is over the next five years, only about 21 percent of that is projected to be data center demand. Something close to that is going to be used for EV-related stuff. There is also a lot being used for hydrogen production, which is another major one.
Kyle Chan (23:58)
And then industrial end-use.
TP Huang (24:08)
If you imagine that the demand for electricity for data centers in the U.S. and China is probably similar — I don’t know for sure, but that seems to be where it’s at — I did a calculation for China, and my guess is they’re going to try to build around 80 gigawatts of data centers in the next five years.
That’s a huge amount. Chinese data centers are likely to use older chips that are less power-efficient. Maybe they have half the amount of compute that the U.S. has, but it’s still going to be a big number. To compensate for that, they need a lot of cheap electricity from the western parts of China and Inner Mongolia.
Then you need a lot of inverters and transformers too. They need all the high-voltage equipment that moves electricity from the energy-generating part of the country, like the northwest, to where the actual data centers are. A lot of data centers are built next to renewable farms themselves, but many of them have to be close to where the actual companies are. I read today that Hangzhou has the second-largest amount of AI compute among Chinese cities, which makes a lot of sense because it has Alibaba and DeepSeek.
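TP’s power-versus-compute argument can be sketched roughly as follows. Every number here is an illustrative assumption, not a figure from the conversation or from any vendor; the point is only that at equal power, chips with half the FLOPS per watt deliver half the compute.

```python
# Rough sketch of the compute-per-gigawatt argument. All inputs are
# illustrative assumptions chosen to make the ratio easy to see.
cn_capacity_gw = 80.0        # TP's guess for China's five-year build-out
us_capacity_gw = 80.0        # assume a similar U.S. build-out for comparison

# Hypothetical efficiency gap: older-process chips deliver fewer FLOPS per watt.
us_pflops_per_mw = 2.0       # assumed effective petaFLOPS per megawatt (U.S.)
cn_pflops_per_mw = 1.0       # assumed half the efficiency for older chips

us_compute = us_capacity_gw * 1_000 * us_pflops_per_mw   # in petaFLOPS
cn_compute = cn_capacity_gw * 1_000 * cn_pflops_per_mw

print(f"U.S.: {us_compute:,.0f} PFLOPS  China: {cn_compute:,.0f} PFLOPS")
print(f"Same power, compute ratio: {cn_compute / us_compute:.1f}x")
```

The inverse also holds: to match the other side’s compute with half the efficiency, you need twice the power, which is where cheap electricity from the western provinces and Inner Mongolia comes in.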
Kyle Chan (25:32)
If you want low latency and to be directly accessible to your customers, that’s a key demand center. But at the same time, you have this broader national distributed network that is trying to leverage, as you mentioned earlier, abundant renewable energy resources out in the western or northern provinces.
You need the infrastructure to connect all that together, such as ultra-high-voltage lines, which are super expensive. That’s something China has been building out for a long time and now seems to be starting to export as a capability: producing ultra-high-voltage lines at scale. That’s interesting to follow as part of China’s broader infrastructure expansion, and its ability to turn that into potential overseas opportunities as well.
TP Huang (26:34)
You probably follow the U.S. energy side of things more than I do. My feeling is that U.S. data centers are very reliant on gas turbines and natural gas as they’re expanding. Are we constrained here? I hear about it, but I don’t know how bad things are.
Kyle Chan (27:04)
It seems pretty constrained. There’s a backlog for gas turbines, and then there are things like diesel generators for backup. Regardless of the politics in the U.S. and people’s attitudes toward different energy sources, it seems like the companies themselves are looking for the broadest array of energy sources. Maybe there is a slight preference for renewables, but overall they are in a race to deal with this energy bottleneck.
My sense is that they all see this as an acute problem. And as I mentioned earlier, it’s not just the sheer physical shortage. It’s also getting things hooked up to the grid and figuring out these thornier infrastructure issues. That’s not even to say anything of local community backlash, like Maine passing a moratorium on new data center build-out because people are worried about rising energy costs for local communities. I don’t know if the governor will veto it. But it’s not as easy as just stringing a bunch of chips together. So many other things are involved.
TP Huang (28:27)
I feel like this is going to be a major midterm election issue that politicians want to deal with. People who say they want to put a moratorium on data centers are going to get elected. That seems very likely.
Kyle Chan (28:44)
It’s emerging as a bipartisan issue. Or at least there are voices on the left and the right who are very concerned about this in the U.S. That’s why I think one of the biggest issues is not on the physical technology side, but on the social and political side for the U.S.: trying to make sure the AI boom doesn’t become a negative issue for average Americans who ask, “What is this technology, and why is my energy bill so high?” That’s what they see from AI. And also, “Why are jobs being threatened?” It’s all risk, and where’s the upside?
TP Huang (29:27)
The job-loss issue is interesting. We are here in America, so we see all the job-loss announcements on the news. I know China is also having this issue, where young people are having trouble finding jobs. AI is probably only going to make that worse. It’s interesting to see what the future impact of all this is.
Kyle Chan (29:57)
I want to pivot. We were digging deeper down into the AI stack, and I want to pivot up into deployment, applications, and even the embodied AI part. But one thing I wanted to highlight was something you raised, which was these smaller models.
In the U.S., there’s this push toward ever bigger models, with multi-trillion-parameter, massive frontier models. In China, there are a lot of very good small models, like Qwen3 6B and 27B, that you can run locally on your own MacBook. Some are small enough that you can put them on your phone and run them straight on your smartphone. What do you make of that strategy, and what do you think about edge AI, being able to have that capability without connecting to the cloud, and having local data control?
TP Huang (31:01)
I think it’s actually a big failure on America’s part not to put more effort into this area because there is a pretty big market for it. I remember the last company I worked at, which was an AI company, wanted to create an AI robot. Basically, we couldn’t use any American models because they just didn’t exist. We tried Llama, but it just wasn’t good. So pretty much everyone uses Qwen for smaller on-device AI.
What we were trying to get at was not necessarily the one-billion-parameter models, but the slightly larger ones that are still pretty state of the art. They’re close enough to state of the art that they can do real work, but they’re also small enough to fit on your MacBook. They’re kind of a game changer, in my opinion, because then there is essentially no cost to running AI. You don’t need data centers. That’s going to change a lot of things.
I feel like that’s really hard on a lot of young people coming into the industry, because now you can run anything and there are no bottlenecks to using AI. You don’t have to worry about Anthropic suddenly having issues on its servers at 3 p.m. because everyone is trying to use it at the same time. You can just run it on your home server.
I think it’s going to be a giant disaster for countries like India and the Philippines. I don’t know if you’ve thought about the India side of things, because I think you’ve written about the Indian economy quite a bit. I don’t know what it’s going to do to India if it loses the software outsourcing part of its economy because it becomes so cheap to run AI locally on your computer.
Kyle Chan (32:52)
The service sector, yes. In particular, India and the Philippines have a lot of that outsourcing. Those are huge multibillion-dollar industries and a really important source of international revenue and profit. That could get hit pretty hard, if it hasn’t already.
TP Huang (33:30)
We saw Infosys stock and some other stocks go down quite a bit.
Kyle Chan (33:39)
What else do you think edge AI enables? In addition to the economic implications, what are the cool uses? And in general, how do you see this broader stratification of the AI market? I think there was, and maybe still is, this big bet in the U.S. that you need bigger and bigger, more powerful models. That’s the primary dimension you’re focused on.
But in reality, you get a mix. You get enterprise. You get a lot of interesting use from very strong models that are not the biggest. Then there are special cases where maybe you can get models that run on your car, your phone, or wearable devices. Then you don’t have the latency that comes from having to send a prompt or query to the cloud and get a response back.
TP Huang (34:47)
Even if it runs in the cloud, if it has a smaller footprint — if it’s just a smaller model somewhere in the cloud — you don’t need as much DRAM, so more people can use it at the same time.
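The DRAM point can be made concrete with some rough arithmetic. The model sizes, the two-bytes-per-parameter weights, and the 640 GB node are all hypothetical round numbers, and this ignores KV cache and activations, which add real overhead in practice.

```python
# Why a smaller model serves more users per machine: weight memory scales
# linearly with parameter count. All sizes below are illustrative assumptions.
def weight_memory_gb(params_billions: float, bytes_per_param: int = 2) -> float:
    """Memory for weights alone (FP16/BF16 = 2 bytes/param), ignoring KV cache."""
    return params_billions * 1e9 * bytes_per_param / 1e9

server_memory_gb = 640.0     # hypothetical node, e.g. 8 x 80 GB accelerators

for size_b in [7, 32, 1000]:                    # 7B, 32B, and a 1T-class model
    need = weight_memory_gb(size_b)
    copies = int(server_memory_gb // need) if need <= server_memory_gb else 0
    print(f"{size_b:>5}B params -> {need:,.0f} GB of weights, {copies} copies fit")
```

Under these assumptions, a 7B model fits dozens of times on one node while a trillion-parameter model does not fit at all, which is the footprint argument in miniature: smaller weights mean more concurrent replicas per unit of DRAM.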
I find it interesting that we’re so focused here on creating the largest frontier models. I’m not a scientist, so I can’t really answer this question. But if you had the most powerful reasoning model, one that can think like humans, how much can it do? How many new things can it develop without trying them out in the real world? How much can be done virtually versus physically?
I think the cool thing about embodied AI is that it can train itself long-term. If you have a robot, it can be like a baby. Babies grab things, touch things, learn how to stand up, and walk around. It takes them a long time, but they do it.
If they are tasked with discovering or doing a science experiment, once they get really good with their hands, they can start doing lab experiments. You don’t necessarily need humans to do it. You can put a bunch of robots in the corner somewhere, and they don’t have to worry about environmental standards or anything like that. They can just figure it out. But that doesn’t seem like something you can do with a purely digital, virtual LLM. To me, if you have a scientific theory, you still need scientists to figure it out.
Kyle Chan (36:41)
You need that physical presence to manipulate the real world, gather data, and have the whole feedback loop. That reminds me of Zhang Peng at Z.ai, who was asked about the path to AGI. He basically said it involves having AI out there in the world, interacting and learning from the real world, rather than just being on machines in the cloud.
This is obviously a huge priority for China’s AI industry and Chinese policymakers. They’re really pushing embodied AI, or physical AI, as people in the U.S. usually refer to it. You see this massive push into robotics and especially the humanoid space. I’m sure your Twitter feed is the same as mine, where every so often there’s a flood of new Unitree videos, whatever cool backflips they can do.
TP Huang (37:41)
To me, it’s funny to see how quickly things are moving. It’s also scary in a way. Look at the marathon we had recently, and compare what happened a year ago to now. Now the robots are able to run by themselves, which is kind of crazy. A lot of them fell and couldn’t get up, but a lot of them did get up afterward and complete the race. They figured out how to fall on the ground and get up.
Just with lasers, their versions of GPS, and some camera sensors in front of them, they can figure it out without human help. It’s mind-boggling how quickly it’s already improving. I was talking to someone recently on my podcast, and he was saying that China has no problem finding lots of college students who are having trouble getting real jobs and don’t mind going to these places to provide real-world data for the robots.
Kyle Chan (38:59)
You can put on motion capture or something and do these complicated manipulation tasks. That data is very valuable training data.
TP Huang (39:14)
From my experience here, it’s sad to see how hard it is to do a lot more of this stuff in the U.S. as a startup. The whole supply chain is over there. Even if the stuff is not produced inside China itself, they have the sourcing figured out pretty well already. They always have a huge stock of these parts on hand. If you want to make a robot, they can get you the battery you need. If you’re worried about heat, they can say, “We have this fan we can put in,” or “We can use liquid cooling,” or whatever.
Whereas here, you have to figure it out by yourself. If DRAM triples in price, what are we going to do now? That’s a real issue for a lot of startup developers.
Kyle Chan (40:06)
Shenzhen in particular was the hardware capital of the world even before the recent rise of humanoids. You have the whole smartphone industry, the EV industry, battery technology, and then all the components that go into robotics: sensors, actuators. That feeds into the fully assembled humanoid robot companies, and then also some of the component companies, like those focused on making hands or different parts.
There was a recent white paper that Shenzhen released, listing the number of firms per segment of the robotics supply chain. It was on the order of thousands of companies in almost every category. That kind of depth means that whatever part you need, you can just drive over and get it. Maybe that supplier doesn’t have exactly what you need, so you drive next door and there’s another guy who can probably figure it out and get you the part you need. We should do a trip there and hop around to these different suppliers. That would be awesome.
TP Huang (41:21)
We need to go to Shenzhen.
My cousin actually works there right now. He’s the typical Shenzhen guy you’d imagine. He went there to hustle and chase his dream. I think they make medical testing equipment and stuff like that. He said there are only six people in his company and they work nonstop, day and night. He loves it because it’s the only place in China where there is a complete absence of politics and people can just get together and exchange ideas. Apparently, they’re all workaholics over there. They don’t sleep; they just get up and work.
Kyle Chan (42:06)
I’m not surprised. I guess that’s what it takes to have a globally competitive robotic supply chain.
Your comment about cooling systems also stood out. You’ve highlighted this previously, but for the Beijing humanoid robot half-marathon, the winner was a robot made by Honor, which I used to associate with cheap smartphones. Honor is a spinoff of Huawei, and as you pointed out, they have cooling systems they’ve developed for consumer electronics and smartphones. That technology can be reapplied in the robotics space.
This is an example of crossover linkages between different industries, where strength in one area, even at the firm level, can transfer over to other industries.
TP Huang (43:08)
If we’re really going to get robots to do real work out there, they have to be able to operate in the global environment. Think about all these Chinese companies with products in the Middle East and Sub-Saharan Africa, where it’s super hot. The phones themselves have to deal with very adverse conditions. They have to have the right materials. They have to have cooling. The sensors and cameras still need to work even when it’s 100 degrees outside.
It’s hot in those places. I remember I was in Morocco two or three years ago, and it was 110 in the afternoon outside. I just stayed indoors.
Kyle Chan (43:56)
When I was in India, it was 110 every day for months. Also, when I was in Harbin once to see the ice sculpture festival, as one does, my iPhone kept dying. The battery couldn’t handle the cold in the middle of winter. I forget exactly how cold it was, but it was way below zero Fahrenheit.
Other Chinese tourists had smartphones, and they told me, “You need a Chinese smartphone because it can handle the cold.” I don’t know exactly what was in there, but it seemed to be working better than my iPhone.
TP Huang (44:36)
I think that’s one advantage China definitely has over America, and not just America but the rest of the world. When you make everything else, that gives you an advantage in making robots too. I guess that’s where we are.
America is really good at digital stuff. LLMs aren’t constrained by the ability to make physical stuff.
Kyle Chan (45:09)
It’s like different kinds of supply chains: a digital one versus a physical one. The question is what exactly you can transfer across these, and to what extent they are mutually dependent. To put it a different way, to what extent are there massive costs to trying to split them apart? Then you’re trying to rebuild from scratch versus being able to build off the best available platform.
TP Huang (45:42)
That’s what they forced the Chinese semiconductor industry to do. I forgot to mention this earlier, but I think some of the innovations that we saw with Ascend SuperNodes, and that we are likely to see when it gets applied to DeepSeek later this year, are basically things that Chinese industries had to do to deal with deficits elsewhere.
For example, they have lower yields on advanced processes, so they have to make their chips smaller. They can’t use a three-nanometer process, so they have to use a seven-nanometer process. They make up for that by producing smaller chips. As a whole, that uses more electricity, but they link them together with a larger web of optical cables. Then they try to take advantage of Huawei’s advantages in telecom technology and how to build networks to try to get the same performance, but with more chips, consuming more electricity along the way.
So I think constraint forces innovation. That’s where things are.
Kyle Chan (47:16)
This is getting almost philosophical, but the idea is that technology is not one path; there is not one single route to an end goal. You can think about different paths and different options. Right now, when China feels blocked on one route, it is looking around to see where it is strong and where it can leverage strengths in other areas to make up for deficits.
As you mentioned, Huawei has become a huge global player in networking technology. That’s one of its key strengths. So it is leaning heavily on that expertise, and not just Huawei but a whole bunch of other Chinese firms are leaning on it as well.
A lot of other people have pointed out the same thing: to get down to the seven-nanometer level without EUV, or maybe even a quasi-five-nanometer level, how far can you push multi-patterning? Do you need EUV in the long run, or can you make up for lower yields in other ways? Some of these are question marks. It’s not clear exactly which paths will play out. But it does prompt not just a desire to innovate, but the necessity of it because you’re trying to make progress in a certain area and you’re blocked. You’re not going to give up. That’s what I see again and again: searching, like water always looking for cracks, for some new way to get where you want to go.
TP Huang (49:05)
It’s interesting that you mentioned EUV and different tech paths because of all the export controls put on China. The one that seems to have been the most effective is still the EUV one from 2018, in Trump’s first term. I almost feel like the ones put in during the Biden term more jolted Chinese industries to move faster on indigenization because they realized they just can’t rely on American systems at all. Longer term, that has had more negative effects.
The EUV restrictions probably put a limit on how small Chinese fabs can make chips. That obviously puts certain ceilings on Chinese chips. I don’t know if that’s just a matter of something you can overcome with more electricity, or whether it gets to a certain point where you can’t overcome it at all. I don’t know if there is a right answer. But the EUV issue definitely seems to be a pretty big hurdle. That’s probably also why the Chinese effort in EUV is so hidden from the outside world. They just don’t want people to know what’s going on with that factory in Dongguan somewhere.
Kyle Chan (50:26)
The rest of the world might hear a little snippet, like a Reuters report about a secret complex where they’re trying to build a prototype for EUV. But it’s hard to get information for a reason. There are even reports about trying to use particle accelerators as a more powerful light source. That’s a very different avenue for technological innovation. Who would have thought that something used for experimental physics would potentially be tried out for making chips? Who knows where that will go.
TP Huang (51:21)
You can see that they’re trying every aspect and every avenue. At the end of the day, physics is the same everywhere. You try every avenue you can, and maybe you can even come out ahead if you try a more novel approach.
It seems kind of sad in America. I really must say this. It feels like we’ve reached an anti-science age in the U.S. government, where the approach to development isn’t to improve funding for science; it’s to try to stop the other guys by blocking stuff. Funding more science research feels like the right way to win.
Kyle Chan (52:09)
It’s hard to convince a lot of people that this is a long-term investment in basic research and that it will pay off. It will lead to cancer treatments and the future AI boom — maybe not next year, but maybe in 10 or even 20 years, especially for the big stuff like fusion or quantum. These are much longer-term bets.
It’s been a struggle for me to watch as well. I spent so long at Princeton, and I can see that there’s incredible research being done at American universities. Individual labs don’t need that much money relative to what you can enable in terms of hiring more researchers or getting more equipment, and relative to the potential impact. You could unlock quantum computing, or one of these really transformative technologies. That should be an easy investment decision, but apparently it is not always the most politically salient one.
TP Huang (53:32)
Can I ask you a couple of questions about that? This is something I have no knowledge of. One is that I was talking to my contact, and he mentioned that in China, the line between the public and private sectors is pretty thin. Tsinghua University people can go off and create their humanoid robot labs, or Zhipu or Moonshot can come from graduates of those programs. From what you’re saying, it sounds like that’s a lot harder to do in America, as someone from Princeton.
Kyle Chan (54:12)
It depends. In some areas, the academic-to-commercialization pipeline is very strong in the U.S. There are very good incubator policies at the university level that try to help spin off companies. For some universities, I believe a sizable chunk of their income comes from connections to spin-off companies.
The most famous example might be the University of Florida and Gatorade, which is a little off-topic. There’s some Gatorade recipe, and they get a steady income stream from that. But one of the biggest areas is biotech. You get the Boston biotech cluster, with the Broad Institute, Harvard Medical School, and MIT spinning off a bunch of biotech startups that then get bought by the big pharmaceutical companies.
There is some of that. But at the same time, it comes back to what you were saying earlier: it depends on where your existing supply-chain strengths are. On the robotics side, there is interesting work being done at the university level on machine vision and control systems. But if you want to turn that into scaled-up mass manufacturing of practical robots, it will be tough unless you are deeply integrated with the Chinese robotics supply chain.
TP Huang (55:49)
I just want to make clear to people out there how easy it is to do this in China versus here. I’ve been listening to this Chinese podcast about what they do to get started, and they said something along the lines of: “We had this idea in our mind. We went to talk to the suppliers, and in a month we had a prototype.”
In America, what we have to do is go on Amazon. First I look for the board I need to get, and it needs to be shipped from China. That takes a week. Then I have to get multiple of them, just hoping one works. Then my local setup...
Kyle Chan (56:24)
And then it’s the wrong board, and you have to wait another week.
TP Huang (56:39)
Once that’s figured out, I need to find myself a manufacturer. Where else do I go? Okay, I need to go to China now. I need to find a contact there and find a local factory that can produce this stuff for me. Then now we have a tariff issue. How are we going to get this manufactured without paying too much? These are headaches that someone developing this stuff in China does not have to worry about at all.
I even saw one of their websites. It was pretty interesting. You go there, and they have a list of factories willing to work with you. You can select the factories. You can select the physical board you want. I guess you can select how much battery life and things like that. Then maybe you need to draw some inspiration for what your robot or machinery should look like. They will help you figure out which AI model is best for your application. After several iterations, you can get your initial product. If it works, you can order more from that company.
Kyle Chan (57:58)
That’s the thing. That’s just getting to the initial product. Scaling is going to be even easier in China for anything hardware-related. The point is very well taken.
TP Huang (58:11)
I was reading recently about how humanoid robot companies in America have to develop all their own actuators because that supply chain is not available here. That’s hard. You have to have core competency in everything. You can’t just plug and play. You can’t just use the supply chain and do it.
Kyle Chan (58:37)
That’s a big barrier to entry. In the last few minutes, maybe I can ask you about your background, and then you can ask questions about mine. I was curious how you got interested in following such a wide range of developments in Chinese tech. For me, a pretty good chunk of the value of Twitter is following your account on EVs, clean energy, very granular military technology, defense tech. You know a ton about all of this.
TP Huang (59:22)
I guess I spent some time following it when I was a kid. I had a sense of Chinese nationalism growing up in Canada, and I spent a lot of time on Chinese technology back in the early 2000s. Military technology was part of it. Back then it was more about seeing where they were. They were still making cheap knockoffs compared with what was available in the West.
Then I went away from that for a little bit. Because I was getting interested in EVs, this was a great thing for me to come back to the entire China tech scene. I had heard of BYD a long time ago. Then we had the October surprise, and that’s why I got into the semiconductor side of things a little more.
At a certain point, because I’m in the software industry and AI is such a big growing sub-industry here, I decided to quit my job at a finance firm to do AI for a year. I got to try all the different AIs, and that’s how I stayed very interested in it. Now I use all these AI tools more for my personal productivity gains. I had a lot of experience with the embodied AI part because I actually worked on a project from here, as you can probably tell. Halfway through, I thought, “I don’t know how America is going to compete in this.” So I quit.
Kyle Chan (1:00:59)
Firsthand experience competing.
TP Huang (1:01:25)
Then I came back to finance, but in the work I’m doing right now, I still use a lot of AI. Once you get into one side of the tech competition, following the rest of it is very interesting. That’s where I’m at.
What about you, Kyle? One thing I saw you posting that was really interesting early on was both China and India stuff. This was early on, when I only had about a thousand followers.
Kyle Chan (1:02:00)
I remember those days.
TP Huang (1:02:08)
But then you definitely shifted more to China stuff later. In the beginning, I saw you writing a lot about both the China and India sides of things. How did you get into that area as an academic?
Kyle Chan (1:02:17)
I’ve long been interested in both China and India. The way I put it is that I’m even more interested in this broader topic of development. I was very fortunate to be able to travel — hostels, buses, that kind of travel — in many parts of the world. I was struck by the massive inequality: where you are born and where you live shapes your whole life.
Why does someone in the U.S. from a rich family in New York not have to work that hard and still live a pretty good life, while someone in Delhi can work incredibly hard and still have such a tough life? Why does it have to be that way? Even getting to work is so difficult because you have a poor road network, potholes everywhere, and maybe no transportation to get there, so you’re taking a three-wheeler or something like that.
I was interested in this global inequality and what drives it. The India and China story in particular matters because they are such fast-changing countries. I felt I had to be there on the ground trying to figure this out. Especially on the infrastructure part, Americans take it for granted.
We just whine when the subway stops working or there’s traffic on the highway. But in much of the rest of the world, infrastructure matters a ton and you don’t take it for granted. You notice it a lot in your daily life when you can’t get where you need to go or you don’t have reliable power. So I wanted to understand how this was being done in these two countries.
I was particularly focused on high-speed rail because the Chinese high-speed rail build-out has to be one of the biggest stories of our lifetimes. When I tried to explain this to Americans, they’d say, “Yeah, I guess we have Amtrak. We don’t really care.” And I’m like, “No, no, you don’t understand. It’s a big deal for China. It’s a big deal for the rest of the world. It should be a bigger deal for us, but that’s another story.”
That was a lens for me to explore not just infrastructure, but how the government works, how industrial policy works, and how technology works.
The thing with the Chinese high-speed rail story — and you know this, but for people who don’t know — is that it’s not just that China bought a bunch of trains from Siemens and imported them. In theory, any country with money can do that. It’s that they indigenized this whole process. They had those bullet trains manufactured locally through these JVs and this whole process of re-innovation, as they call it. I wanted to understand how they did that and what was really happening.
I was also annoyed by very simplistic top-down models of how China did this for railways. The more I dug into it, the more I found that they actually gave a lot of power to smaller groups within the railway administration. That was really key because they were more nimble and had more accountability at that level. Anyway, I’m going on a rant here.
TP Huang (1:05:48)
No, but I do find the Chinese case very interesting. Last summer, when I was in China — I hadn’t been there for 12 years — I tried high-speed rail inside China for the first time. My cousins showed me the app they use to book high-speed rail tickets, and I was like, “This is government-made and it completely works.” You try to get the U.S. government to make some software for you — forget about it.
I don’t know how they got the government part working so well in China. That’s interesting to me. The high-speed rail story is amazing. They not only built such a network; they also built an entire industry of people who know how to build high-speed rail and all the components for it. If we were to start here, we would still need to train people to do all this.
Kyle Chan (1:06:40)
Totally. Some government officials in China I talked to said that even a thirty-something Chinese engineer who has been working in the high-speed rail industry has more experience under their belt than someone close to retirement outside China in terms of building these things. They may have built several multibillion-dollar lines. That kind of experience and depth of technical knowledge is huge.
And it was deliberate. That’s the other big thing: long-term investment not just in physical hardware, but in people, training, institutions, and standardization. That’s another story that I feel like most people don’t care about, but China really tries to standardize things. They create a standard technical platform and scale it up so they don’t have to reinvent the wheel every time they build a new high-speed train.
TP Huang (1:07:44)
That’s interesting because in America, with the standards we developed, it took 100 years to develop some of them. China basically fast-tracked that entire process. They made a lot of mistakes along the way, but now their standards are pretty good, from what I can tell.
Kyle Chan (1:07:53)
At least on the high-speed rail side, for sure. Anyway, that’s a long answer to your question about how I got into this.
I started to see patterns, as I’m sure you did. Some of the same industrial policy tools and hypercompetitive industry dynamics you see in EVs, you see in the solar industry. It comes up again and again. So I thought, “Okay, there’s something deeper here.” That’s why I try to follow a broader range of sectors, kind of like you. There are many people who know semiconductors way better and can drill down layer by layer through the entire supply chain. But being able to connect some of these things together, and maybe tweet about it, is what we do, for better or worse.
TP Huang (1:09:01)
We spend too much time doing it, clearly. We’re giving our free time online for everyone to learn. We should get paid somehow in this process.
Kyle Chan (1:09:07)
That’s right. It’s better than taking a course on this stuff.
I think maybe we should wrap up here. Speaking of Twitter, I highly recommend everyone follow TP Huang, @tphuang, on Twitter/X. When I post this for the High Capacity Podcast, I’ll be sure to link to your Twitter account and your Substack, where people can also listen to your China Tech Talk podcast, which has been really cool too. Thanks for doing this, TP.
TP Huang (1:09:54)
Thank you, and I will do the same. Can you just spell out the... Actually, I should be able to put this on my post because I don’t think your Twitter handle is exactly K-Y-L-E-C-H-A-N.
Kyle Chan (1:10:09)
It’s Kyle I.
TP Huang (1:10:12)
There’s an “i” in there. There we go. I will definitely link your Substack, High Capacity, and your Twitter account. Yours is definitely one that I recommend people follow. Kyle is on YouTube quite a bit these days, and I believe you even talked to the Select Committee. I recommend everyone watch that. I listened to you on Heard and several other ones.
Kyle Chan (1:10:32)
Those are interesting. You never know what questions you’re going to get or what angle people will take.
Kyle Chan (1:10:55)
All right, well, let’s leave it there. Thanks a ton, TP. We should do this again before too long.
TP Huang (1:11:00)
Definitely.