
Kai-Fu Lee's Latest Fortune Forum AI Interview: Chinese AI Is Only One Year Behind the U.S.

Sun, May 26 2024 07:32 AM EST

Reproduction without permission is prohibited. Please retain the original source link. Source: Lord of the Sky City

At the recent Fortune Innovation Forum, Kai-Fu Lee gave an interview in English on the hot topic of AI and offered a comparative assessment of AI development in China and the United States. As one of the few industry figures with significant influence in both China and the U.S., his representing the voice of Chinese AI at a top forum is both needed and commendable.

In contrast to recent claims that "China is at least n years behind the world in AI," Kai-Fu Lee asserted that China's AI development trails the United States by less than a year.

The overall content of this interview is noteworthy and worth sharing.

Host:

It's a pleasure to have a conversation with Kai-Fu. Welcome.

Kai-Fu:

Thank you, Alison.

Host:

You mentioned that artificial intelligence is humanity's greatest breakthrough, representing the final step in understanding ourselves. I'm curious, Kai-Fu, why do you hold this view? What does it mean for us to understand ourselves?

Kai-Fu:

I explored this question in my doctoral thesis. When I first went into artificial intelligence in 1983, my thought was that once we understood how to build artificial intelligence, we would understand how we think. That's why I chose the field, even though it was during an AI winter. However, I now realize that we may create entities more powerful than humans, not necessarily artificial general intelligence (AGI), but entities with capabilities far beyond ours. Moreover, these entities may not need to mimic our brains at all. So the good news is that we can create entities more powerful than I imagined; the bad news is that this may not help us deeply understand how the brain works. The world still needs cognitive scientists and neuroscientists.

Host:

Wait, are you saying there could be entities other than artificial general intelligence (AGI)? I thought AGI was our ultimate goal. So, when can we achieve that goal, and how do we surpass it?

Kai-Fu:

Yes, AGI is defined as a superset of human intelligence, meaning AI can do everything humans can. I think this is a very narrow and self-centered view. Because we are human, we always hope that aliens, pets, monkeys, and so on can become like us, but that's not how it works. AI is a massive machine, and with more GPUs, computing power, and data, its performance will keep getting stronger. It can perform many tasks, even better than humans. But that doesn't mean it will do everything humans do, because its brain operates differently from ours. We don't compare marathon runners to cars, and similarly, I believe we shouldn't compare AI to ourselves, even though it exhibits many intelligent behaviors.

Therefore, I believe that if we compare what humans can do with what AI can do, five years ago, humans could do many things, while AI could only do a part, with some overlap. Now, AI can do more than humans, but it's not a complete superset. I predict that in the next two to three years, if the range of human capabilities is a circle, AI's capabilities will be as vast as the Earth, but it may still not be able to do everything we can. It may lack consciousness, love, empathy, compassion, or other skills.

Host:

So, we might see a superhuman, greater than all of us, but without compassion. It sounds like we are building a great world...

Kai-Fu:

But it can feign compassion.

Host:

Oh my. We have a lot to discuss.

You are an entrepreneur running a startup, and a venture capitalist who has been investing for many years. You have worked at almost all the major tech companies that lead the field of artificial intelligence. So there is a lot worth exploring. First, I'd like to talk about your startup, ZeroOne.ai, where you are the CEO and founder. About a year ago, when the company was founded, you didn't have a team. Yet your company is now valued at $1 billion, without any revenue. Or is there some revenue?

Kai-Fu:

There is some revenue, and we expect to generate more.

Host:

So, what is ZeroOne.ai exactly? What is its position in the world of artificial intelligence?

Kai-Fu:

Yes. People in China cannot access ChatGPT. Many of you may not be aware of this, but OpenAI has blocked access from China. I believe China should not be excluded from this revolution. As I wrote in my book "AI Superpowers," the U.S. leads in breakthrough innovation, but China excels at execution. A year ago, I began to think that now is the time for China to prove its execution. This time, I didn't plan to invest in others but to act myself. That was my initial motivation.

On the other hand, I found that this field is becoming increasingly closed. Despite being named OpenAI, it is not truly open; it may be the most closed company in the field, even more closed than Google, Grok, Meta, and others. Therefore, I believe we need to collaborate with academia. We want to work with the open-source community, entrepreneurs, and people like you. We should not only harness the power of AI and create value but also make it accessible to everyone. If the best companies in this field keep all their technology closed, never open-sourcing, never releasing, then we cannot attract all the talented people in the world. So we decided to take an open approach, which is notable given that American companies are usually considered more open than Chinese companies; we, a Chinese company, chose openness. So far, every one of our best models, from text models to multimodal models, has been open-sourced. You can find them on Hugging Face and other sites, because we want to change people's mindset and put these models in more people's hands.

Lastly, I find something quite striking: it seems I'm witnessing in the AI field what happened with PCs and smartphones. I know people like to talk about AGI, but in terms of making money, it's very similar. PCs created a stack running from CPUs to operating systems, applications, servers, cloud, end users, enterprises, and B2B, which made Microsoft a great company and earned it huge profits. Similarly, smartphones brought enormous revenue to Google and Apple.

I believe the same opportunity exists, but many LLM companies are run by researchers focused solely on creating outstanding models. I think the science-fair phase should come to an end. No matter how great your demo is, at some point investors will ask: what do you have to show? What's your financial picture? How much revenue do you generate? What's your growth rate? When will you break even?

I've learned a lot from this process myself. I started as a researcher, but now I've been in venture capital for 14 years, and I believe I have the capability to make this project profitable. Therefore, we won't spend too much time on this issue, but our ambition in creating the company is to have a complete tech stack with infrastructure, models, applications, and data. We've come up with many ways we think can be profitable in the future.

We're not doing this for the sake of money, but because we see the need to continue raising funds to purchase the GPUs we require. We don't want to create an investment bubble, make lofty promises, or achieve our goals through presentations. We aim to achieve our goals through actual revenue growth and profits.

Host:

This sounds expensive, seemingly unsustainable on venture capital alone. Google and Meta have billions of dollars; how do you compete in this world?

Kai-Fu:

When you think of GPUs and models, most people assume that whoever has the most GPUs will surely do well. However, Google's DeepMind division has over 2,000 employees, all competing for GPUs and resources, and Google's strategy is to diversify across many bets.

This is why OpenAI was able to stay ahead: they bet on one approach, with far fewer GPUs than Google had a few years ago. Now they both have plenty of GPUs. Therefore, I believe our strategy should be a very small AI modeling team and a very large infrastructure team.

Because if you have too many researchers and a culture where everyone can try out ideas, then, as you said, as a startup you'll quickly run out of funds. But if we stay very focused, we expect our researchers to read every paper, understand every technology, discuss things theoretically, debate from first principles, and then run a few small experiments to settle the matter. Then, before we truly spend on expensive GPU runs, we either reach a consensus or I have to make the call.

This is a culture very different from what Google and other companies are trying to build. We essentially try to gather a lot of data and make critical decisions up front so we don't burn too many GPUs. For example, you may have read Stephen Levy's article about Huawei and polar codes. I won't go into detail, but essentially Huawei required its engineers to read all the papers, and they found a Turkish professor who had invented something called polar codes. That discovery brought significant change and allowed Huawei to lead in 5G.

We're taking a similar approach, working very hard to conserve GPUs. Then there's infrastructure, one of the most underrated yet crucial technologies, because these GPUs don't work very well: the failure rate for each GPU is about 4%. So if you have a cluster of 10,000 GPUs, you might lose half the time in a month, because when one GPU fails, the whole cluster can fail with it.
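
As an aside, the downtime claim can be sanity-checked with a quick back-of-the-envelope calculation. The sketch below assumes the cited 4% failure rate is per GPU per month (the interview does not specify the period), that failures are independent, and that each failure costs about one hour of detection and restart-from-checkpoint time; all numbers are illustrative.

```python
# Back-of-the-envelope: why a 10,000-GPU synchronous training cluster
# loses so much time to individual GPU failures.
# Assumption: 4% per-GPU failure probability per month, independent failures.

n_gpus = 10_000
p_fail_month = 0.04            # assumed per-GPU failure probability per month

# Expected GPU failures across the cluster in one month.
expected_failures = n_gpus * p_fail_month          # 400 failures

# Mean time between failures anywhere in the cluster, in hours,
# assuming failures are spread evenly over a 30-day month.
hours_per_month = 30 * 24                          # 720 hours
mtbf_cluster_hours = hours_per_month / expected_failures   # 1.8 hours

# If every failure halts the whole synchronous job for about one hour
# (detection plus restart from the last checkpoint), the fraction of
# the month lost to recovery is:
recovery_hours = 1.0                               # assumed recovery cost
downtime_fraction = expected_failures * recovery_hours / hours_per_month

print(f"expected failures per month: {expected_failures:.0f}")
print(f"mean time between failures:  {mtbf_cluster_hours:.1f} h")
print(f"fraction of month lost:      {downtime_fraction:.0%}")
```

Under these assumptions the cluster spends roughly half of each month recovering from failures, which is consistent with the figure quoted in the interview and explains why a large infrastructure team matters as much as the modeling team.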

You may have heard Jensen Huang discuss Blackwell's recovery mechanisms. We have built similar recovery capabilities in-house for our H800s. Additionally, there's a metric called MFU, Model FLOPs Utilization, which measures what fraction of the hardware's peak floating-point throughput a training run actually uses. Most LLM companies sit around 40%, and Google and NVIDIA run slightly higher, but our utilization reaches 63%.
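
For readers unfamiliar with the metric, MFU is conventionally computed as the FLOPs the model actually needs divided by the peak FLOPs the hardware could theoretically deliver, commonly using a ~6 FLOPs per parameter per token estimate for dense transformers. The sketch below is illustrative only: the model size, token throughput, GPU count, and per-GPU peak are hypothetical values, not numbers from the interview.

```python
# Illustrative Model FLOPs Utilization (MFU) calculation for
# dense-transformer training. All concrete numbers are hypothetical.

def mfu(params: float, tokens_per_s: float,
        n_gpus: int, peak_flops_per_gpu: float) -> float:
    """Fraction of peak hardware FLOP/s actually used by training.

    Uses the common ~6 * params FLOPs-per-token estimate, which
    covers the forward and backward passes of a dense transformer.
    """
    achieved_flops_per_s = 6 * params * tokens_per_s
    peak_flops_per_s = n_gpus * peak_flops_per_gpu
    return achieved_flops_per_s / peak_flops_per_s

# Hypothetical run: a 34B-parameter model at 32,000 tokens/s on
# 52 GPUs, each with an assumed ~198 TFLOP/s peak throughput.
u = mfu(params=34e9, tokens_per_s=32_000,
        n_gpus=52, peak_flops_per_gpu=198e12)
print(f"MFU: {u:.0%}")   # roughly 63% under these assumptions
```

Raising MFU means squeezing more useful model computation out of the same hardware, which is why infrastructure work translates directly into lower training cost.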

Therefore, I firmly believe that innovation is demand-driven. Compared to Google and OpenAI, small companies like ours face constraints that force us to choose methods carefully, commit decisively to the ones we pick, and build a large infrastructure team to cut computing costs, because we don't have as many computing resources.

Host:

I understand this, and there's still a long way to go.

You're hailed as a tech prophet, so I'd like to get some predictions from you. You've written two books, both predicting the future in different ways. In the superpowers book, it's the U.S. versus China: who will win? Where are we now? Where do both stand? Were your predictions accurate?

Kai-Fu:

For your last question, my answer is definitely yes. In the book, I predicted that data would become the new oil and bring about a whole new revolution. What we now call generative AI is exactly that: it trains on all the data in the world and uses generative methods to optimize objective functions.

The second prediction in the book is that the U.S. excels in innovation breakthroughs, while China performs better in execution. We have seen this in the early days of artificial intelligence. I believe we are about to witness this in the latest generation of AI. We need to see how things unfold.

Taking my company as an example, we were eight years behind just a year ago. Now, we might be less than a year behind the top U.S. companies. So, at least so far, we have been executing well. Of course, the past year has been the toughest, considering the strides OpenAI has made in that time. Therefore, we do not take it for granted. But we have closed the gap because in the past year, my company and other Chinese companies have indeed performed better. So, I still believe in this.

Another point I raised in the book is the question of when the U.S. could build an unassailable lead. My view was that this technology is invented inside companies, not by academics, since academics publish their papers. When a company chooses to stop publishing, which is what we see now, the U.S. can absolutely extend its lead, as OpenAI and, to a lesser extent, Google have stopped publishing.

Host:

We will see the same split between China and the U.S. Will they take different paths and compete in their respective AI markets? Or will the world have to choose between Chinese and American versions of AI? And if the two are separate, how do we manage all of this?

Kai-Fu:

Yes, we have gone far beyond that point; we are now in parallel universes. It's not an ideal situation, and I don't think any of us like it. But that's the reality. Given that, I believe we will see many interesting American solutions that cannot enter China, and interesting Chinese solutions that cannot enter the U.S.

So, whether you are an entrepreneur, a venture capitalist, or simply curious, it is worth observing both worlds and seeing what exciting things each has to offer. When you compete in your own world, whether the American one or the Chinese one, the edge gained from having studied the other parallel universe will feel like a superpower in your work.

But we shouldn't have unrealistic expectations of head-to-head competition. In reality, AI companies from China and the U.S. won't actually compete in the same country, except in a few countries friendly to both.

Host:

When you look at the largest tech companies in the world today, you'll find that you've worked for at least half of them: Apple, Microsoft, Google. So, in ten or even five years, who will still be leading? Who will fall behind? And which startups will surpass them?

Kai-Fu:

Yes. On one hand, Microsoft is the darling right now, Apple is working on something, Google is frustrated by a lot of negative commentary, and OpenAI is the rising star. Despite my earlier comments about OpenAI, I am very optimistic about their future. They have accomplished admirable, incredible work. Even today, GPT-4 remains the gold standard. You will see Gemini Ultra and Claude 3 making claims, but GPT-4 and GPT-4 Turbo perform incredibly well, striking a balance between performance and cost.

I believe that in the near future, OpenAI could become a trillion-dollar company.

Host:

How far or how close is that?

Kai-Fu:

Possibly just two or three years.

Host:

Reaching a trillion dollars within two to three years?

Kai-Fu:

I think that's a possible outcome. Of course, they might make mistakes in execution, or other companies might do great things, but despite my concerns about their lack of openness, I still greatly admire them. If I could invest in any of them, although I can't, I would invest in OpenAI.

NVIDIA is another company worth considering as a safe choice. Obviously, it's extremely expensive, but with its processors, CUDA, and its ecosystem of libraries, it's hard for competitors to displace. They are indeed very expensive, but in the secondary market, as most people know, you buy the expensive names, not the cheap ones. These companies won't lower their prices.

Microsoft has been performing exceptionally well. I am somewhat disappointed with Microsoft's Copilot, because it simply grafts generative AI (GenAI) onto existing products that are outdated and should be phased out. I understand why they did it, but I would rather see a brand-new product that replaces Microsoft Office, where AI does most of the writing and humans only make minor adjustments.

However, I must admit that Microsoft made a very wise decision in partnering with OpenAI. Microsoft CEO Satya Nadella showed incredible leadership in trying to bring in Sam Altman; even though that attempt failed, he handled it gracefully, and now they have Mustafa Suleyman. Watching Satya become one of the most outstanding CEOs is truly remarkable. As for Google, I still have some expectations for it. Despite its well-known issues, it remains the global epicenter of AI talent, surpassing OpenAI, Microsoft, and other companies. The question is whether they can start executing better.

Host:

In any case, the incumbents are undoubtedly in a favorable position, with everyone holding a lot of cash. But for entrepreneurs like yourself, this is also a new era. If you're an entrepreneur looking to enter the world of AI, where should you start? And what kind of wealth creation are we discussing here? Wealth and equity have become so massive; a company like OpenAI could skyrocket to a trillion dollars almost overnight, within a few years. What a rapid trajectory of wealth creation! What does that trajectory mean for capitalism, for the rich and the poor?

Kai-Fu:

This is the most advanced and astonishing technology to date, ten times bigger than any technology before it. Looking back, electricity, the internet, personal computers, and mobile devices pale in comparison. If you agree with that view, then any company we've mentioned, regardless of size, has no reason not to grow tenfold, including us; we even hope for more, because we are cheaper. Even for less mature companies, I don't see why they can't grow tenfold. Many of the problems plaguing these systems, such as hallucinations, I predict will be mostly resolved within about a year and a half. So yes, they have hindered progress, but I am very optimistic about the development of these technologies.

Of course, regarding your other question, I am also concerned that a few large companies will dominate more than ever before, which would be a very frightening scenario. It would be much better if 5 or 10 companies did well. But even then, this will accelerate unemployment and the difficulties faced by small entrepreneurs. If you can only raise $5 million or $10 million and decide to build an application, consider Jasper. They built a great app at the time, but the underlying model absorbed all the learning, making the app's value questionable. This is not malicious design by the platform provider but the natural power of the foundation model to absorb everything you feed into it. Therefore, I believe this poses a serious challenge for less-resourced players and for researchers.

As for the shortage of GPUs, I could go on about how scarce ours are; yet we still have 100 times more than 99.9% of universities. So how will professors cope?

The resulting gap between rich and poor could become a historic chasm, with disparities between professors, entrepreneurs, unskilled workers, and those in white-collar and ordinary jobs, which deeply concerns me. This is why I have made the democratization of this great technology the mission of ZeroOne.ai. The emphasis is on "accessible," because I believe all of us should strive to avoid extreme disparities between rich and poor.

Host:

Regarding the loss of jobs, I think we all know this is inevitable. You said around 2017 that you believed about 40% to 50% of jobs would be replaced by AI within 10 to 15 years. Do you still think that is an accurate prediction? And how should people cope if everyone is unemployed in three years?

Kai-Fu:

In fact, that prediction was quite accurate. People criticized me for being too aggressive in 2017, 2018, and 2019, and I was indeed a bit nervous at the time. However, once GenAI emerged, everyone came around and saw that the prediction was directionally right. I think the loss of white-collar jobs will come faster, while blue-collar losses may come slower, because white-collar work can be replaced by software alone. This is a very important issue.

I believe some governments are finally waking up and realizing they must act on this. In my book "AI 2041," I outline many creative, though not necessarily feasible, solutions aimed at sparking people's thinking. So I suggest you pick up the book. We still have a lot to do.

Host:

We are running out of time, but I do want to pose a question: can we find some hope here? How should we prepare our children to coexist with machines? If this is indeed about to happen quickly, never mind our jobs, we need to consider how to deal with this situation and help employees, help everyone. But when our children ask, "What should I do when I grow up?" how should we answer?

Kai-Fu:

Yes, I believe the first thing all of us need to do, something that affects everyone around us, is to stop the nonsense about children cheating with ChatGPT. It is no more serious than cheating with Word or Photoshop. When children enter the workforce, they will be judged on their final output, not on whether they used ChatGPT or Google search.

Therefore, I think we need to encourage people to use AI and all available tools to bring out their best. These tools can give children good guidance about what is worth pursuing and what is not. We should actively embrace artificial intelligence rather than trying to catch so-called cheating. This is not cheating; it is producing tremendous output. It is no more deceptive than a Fortune reporter using Microsoft Word's spell check or a Fortune photographer using Photoshop. ChatGPT is a tool that everyone should use, learn from, and accept.

At the same time, I believe we need to trust in what makes humans unique. I have always believed that we have souls. Machines cannot give us compassion and empathy. We have emotions and the ability to love, to connect with others, to build and earn trust. As successful people in business, you all know that your success depends not only on your technical skills but also on your people skills.

For the past 20 years, I have been telling young people that the most important skill is earning the trust of others. Earning trust requires sincerity, teamwork, sharing, and emotional intelligence, not just IQ. So I believe there is something in this for all of us. You don't need to be a genius to have high emotional intelligence, and you don't need to be a genius to have love and compassion. In both of my books I have treated this as the essence of humanity, and I still believe it.

Can artificial intelligence mimic these things? Yes. Do I think people will accept faked compassion from AI, at least in the next 50 years? No. That gives your children enough time to survive and to plan the next steps for their own children.

Host:

Alright. Our children will survive, and they will find other things to do in the future.

Thank you very much, Kai-Fu. It's been a pleasure chatting with you.

Kai-Fu:

Thank you, Alison.