What it means for nations to have "AI sovereignty" – Marketplace

Imagine that you could walk into one of the world's great libraries and leave with whatever you wanted: any book, map, photo or historical document, forever. No questions asked.

There is an argument that something like that is happening to the digital data of nations. In a lot of places, anyone can come along and scrape the internet for the valuable data that's the backbone of artificial intelligence. But what if raw data generated in a particular country could be used to benefit not outside interests, but that country and its people?

Some nations have started building their own AI infrastructure to that end, aiming to secure AI sovereignty. And according to venture capitalist Vinod Khosla, the potential implications, and opportunities, are huge.

The following is an edited transcript of Khosla's conversation with Marketplace's Lily Jamali.

Vinod Khosla: These language models are trained in English, but there are 13 Indian scripts, and within that there are probably a couple of hundred languages or language variants. So the cultural context for these languages is different. We do think it deserves an effort to have cultural context and nuances, like in India: You don't speak Hindi and you don't speak English, you mix the two, what's sometimes called Hinglish. So those kinds of things have to be taken into account. Then you go to the other level. Will India rely on a technology that could be banned, like a U.S. model?

Lily Jamali: So you were just talking about the cultural context. There is a huge political overlay ...

Khosla: Political and national security. So imagine India is buying oil [from] Iran, which it does. If there's an embargo on Iranian trade, is it possible that they can't get oil or they can't get AI models? So every country will need some level of national security independence in AI. And I think that's a healthy thing. Maybe it'll make the world more diversified and a little bit safer.

Jamali: More safe. Why? Why do you say that?

Khosla: Because everybody can't be held hostage to just an American model. The Chinese are doing this for sure. But if there's a conflict between India and China, can [India] 100% predict what the U.S. will do? They may care more about Taiwan than the relationship between India and China, for example.

Jamali: And can you explain why you think it is important for each country to have its own model?

Khosla: I'm not saying in India they'll only use the Indian model. They will use all sorts of models from all over the world, including open-source models. Now China, I have a philosophical view [that we are] competitors and enemies, and I take a somewhat hawkish view on China. The best way to protect ourselves is to be well-armed, to be safe against China and avoid conflict, if it's mutually assured destruction, so to speak. In countries like India or Japan, they'll use all sorts of models from everywhere in the world, including their own local models, depending upon the application or the context.

Jamali: As some of our listeners may know, you were very early to the AI trend, and we'd love to know what you think might come next. So what do you think?

Khosla: Here's what I would say. AI has surprised us in the last two years. But it's taken us 10 years to get to that ChatGPT moment, if you will. What has happened since is that a lot of resources have poured in. And that will accelerate development. But also, it diversified the kinds of things we worked on pretty dramatically. And so I think we'll see a lot of progress. Some things are predictable, like systems will get much better at reasoning and logic, some of the things they get critiqued for. But then there'll be surprises that we can't predict.

Jamali: Although we may try.

Khosla: Other kinds of capabilities will show up in these systems. Reasoning is an obvious one. The embodied world, which generally means what happens in the real world and is mostly robotics, will see a lot of progress in the next five years. So think of logic and reasoning, rapid progress. Think of robotics and artificial intelligence, rapid progress. Think of diversity in the kinds of algorithms being used. They'll be really interesting and probably not the ones people are generally expecting.

Jamali: Diversity in the kinds of algorithms. What kind of diversity are we talking about?

Khosla: If you take the human brain, sometimes we do pattern matching, and there's all kinds of emergent behavior that comes from that. And [large language models] are going to keep going. And they may do everything. And we may reach AGI, or artificial general intelligence, just with LLMs. But it's possible there are other approaches, what's sometimes called neurosymbolic computing. Reasoning is symbolic computing; planning is being able to make long-term plans, things like that. We do a lot of probabilistic thinking: this might happen or that might happen, what's the likelihood of this happening? That's generally called probabilistic thinking. Those will start to emerge. So those are just some examples. And of course, I'll be surprised.

Another person talking a lot about this is Jensen Huang, CEO of Nvidia, which designs industry-leading graphics processing units. This week, the company announced a collaboration with software and cloud company Oracle to deliver sovereign AI solutions to customers around the world.

Huang envisions AI factories that can run cloud services within a country's borders. The pitch: Countries and organizations need to protect their most valuable data, and Oracle CEO Safra Catz said in a statement that strengthening one's digital sovereignty is key to making that happen.
