How AWS is computing the future of the cloud – SiliconANGLE News

The highlight of Amazon Web Services Inc.'s annual re:Invent conference is always the marathon three-hour keynote by Chief Executive Officer Andy Jassy, and despite the show going virtual this year, it'll be the same today.

As he always has, the longtime leader of Amazon.com Inc.'s cloud computing company will debut a dizzying array of new and upgraded services, which AWS is keeping close to the vest until Jassy's keynote starts at 8 a.m. PST today.

But in an exclusive video conversation, Jassy (pictured) revealed more than a few hints of what's coming: He offered a deep dive into the processors AWS' Annapurna operation has designed, the challenges and opportunities of serverless computing, the company's plan to democratize artificial intelligence and machine learning, and its let-a-thousand-flowers-bloom strategy for purpose-built databases. The conversation was lightly edited for clarity.

Look for more strategic and competitive insights from Jassy in my summary and analysis of the interview, as well as in two more installments coming in the next week or so, and in the first part that ran Monday. And check out all the re:Invent coverage through Dec. 18 by SiliconANGLE, its market research sister company Wikibon and its livestreaming studio theCUBE.

Q: What's your vision on the compute piece of cloud at AWS?

A: There are three basic modes of compute that people run. There are instances, which is the traditional way people run compute, often chosen when they want deeper access to the resources on the machine. Then there are containers, the smaller units of compute that are really coming on strong and growing at a very rapid clip. And there's serverless. I think all three are here to stay and all growing at a very rapid rate.

If I look at instances, we have an unmatched array of them, not just in the number of instances but in the detail. We have the fastest networking instances, with 400-gigabit-per-second capability; the largest high-memory instances, at 24 terabytes; and the largest storage instances. We've got the most powerful machine learning training instances and the most powerful inference instances. We have a very broad array of instances, with more coming in the next few weeks that will further extend that differentiation.

But I think one of the things our customers are really excited about, and it's changing the way they think about compute on the instance side, is the Graviton2 chips that we have built and launched in families like our R6g, M6g and T4g. We've used Intel and AMD processors in our instances for a long time, and I actually expect that we will for a very long time. Those partnerships are deep, they matter and they will be around for a long time. But we know that if we want to continue to push the price-performance of these chips, and our customers want us to, we're going to have to design and build some of those chips ourselves.

Q: What's behind doing your own chip designs?

A: We bought a business, Annapurna, with a very experienced team of chip designers and builders, and we put them to work on chips we thought could really make a big difference to our customers. We started with generalized compute: we built the first Graviton chips for the A1 instances we launched a few years ago, which were really for scale-out workloads like the web tier or microservices, things like that. They offered 30% better price-performance and customers really liked them, but there were some limitations to their capabilities that made them appropriate for a smaller set of workloads.

We didn't know how fast customers would pick up those Graviton chips, but they adopted them a lot quicker than we even thought. And customers said, "Can you build a version of that chip that allows us to run all our workloads on it?" That's what we did with Graviton2. If you look at the performance of what we've delivered with the Graviton2 chips in those instances I mentioned, it's 40% better price-performance than the latest processors from the large x86 providers. That's a big deal. So we have customers trying to move as many workloads as they can, as quickly as possible, to these Graviton2 instances.

Q: The Annapurna team doesn't get a lot of public attention. What else has it been doing?

A: We've also put the Annapurna team to work on some hard machine learning challenges. We felt that training was reasonably well-covered. The reality is that when you run big machine learning models at scale, 90% of your cost is in the inference, the predictions. So we built a chip to optimize inference, called Inferentia, and it's already growing incredibly quickly. Alexa, which is one of the biggest machine learning models and inference machines around, already has 80% of its predictions being made through Inferentia. That's saving it 30% on cost and 25% on latency.

So we're continuing to build chips. We have the scale, the number of customers and the customer input that allow us to optimize for the workloads that really matter to them.

Q: Turning to containers, what's your strategy given all the competition there?

A: Most providers have a single container offering, a managed Kubernetes offering. But we've learned from builders that they don't believe in one tool to rule the world. Different developers and teams have different tastes, different interests and different needs. So if you're a developer who wants to optimize for the open-source Kubernetes framework, you'll use our Elastic Kubernetes Service, or EKS, and it's growing incredibly quickly. If you're somebody who wants the container framework with the deepest integration with AWS, you'll work on our Elastic Container Service, or ECS. Since we own it, we can launch everything right from the get-go without having to run it through anybody else, so you get deep integration.

And if you want to run containers without thinking about servers or clusters, you run Fargate. By far the largest number of net new container customers on AWS get going with Fargate, because it's serverless and they don't have to worry about it. You and I talked about it a few years ago on theCUBE. I remember on your show I said that if Amazon were starting from scratch today, we would build it on top of Lambda and our serverless services. I think it was Stu [Miniman] who said, "You're not serious about that." I said, "Oh no, I am serious about it."

Q: How serious are developers about it?

A: In 2020, half of the new applications Amazon built were built on top of Lambda compute. I think the next generation of developers is going to grow up building in this serverless fashion, which combines an event-driven serverless computing service like Lambda with a bunch of enabling services, like API Gateway, our EventBridge event bus and things like Step Functions for orchestration and workflow, but also all the services that can fire event-driven serverless triggers. We have 140 of those services at this point, which is seven times more than anybody else has.

You can really build end-to-end serverless workflows and applications that you couldn't a few years ago. I think compute is being totally reinvented, and we're working hard to give customers capabilities that are better, more cost-effective and more agile.
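For readers new to the pattern Jassy describes, an event-driven serverless function is just code that reacts to an event and returns a response, with the platform handling the servers. The sketch below is purely illustrative: the event shape and the API Gateway-style response format are assumptions for the example, not details from the interview.

```python
import json

# A minimal, illustrative Lambda-style handler. The event fields here are
# invented for the sketch; real AWS event payloads vary by trigger.
def handler(event, context=None):
    # Pull a value out of the triggering event, with a default.
    name = event.get("name", "world")
    # Return the kind of JSON response an API Gateway trigger expects.
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}"}),
    }

# Invoked locally for illustration; in AWS, the Lambda runtime calls handler().
print(handler({"name": "re:Invent"}))
```

The point of the model is that the function above is the entire deployable unit: there is no server, cluster or capacity to manage, and enabling services such as API Gateway or an event bus decide when it runs.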

Q: One surprising success is the partnership between AWS and VMware, which many people took for VMware simply capitulating to Amazon when you and VMware CEO Pat Gelsinger announced it back in 2016. How's it going, and does it represent a trend?

A: You're right that VMware Cloud on AWS, or VMC as the offering is called, has been a huge success for both VMware and AWS. There have been a lot of things purported to make hybrid easier that were really all hype and no traction. The traction on VMC is very significant: even just in the last year, double the number of nodes, double the number of VMs, and big enterprises making their move to the cloud through VMC together.

Because most of the world is virtualized on VMware, being able to use the same tools you've used to run your infrastructure on VMware for many years to deploy and manage your AWS deployments is super-attractive. That's why it's compelling. But I will tell you that everybody aspires to have partnerships like that. We have a lot of them, and lots of other companies do, but there aren't many partnerships that work as well as the one between VMware and AWS.

Q: Why did it work out that way?

A: It takes both companies being really willing to lean in, to commit engineering resources together, to build something and to get your [people in the field] connected. You can't just put out a press release and then fire and forget. Those teams are meeting all the time, at every level. Both Pat and I are very passionate and supportive and prioritize it, and we meet frequently, together and with our teams. I think those teams really function as one, and our customers are seeing that. Even if you aspire to have a unique partnership like that, it takes a lot of work.

Q: How are developers viewing AWS now? How would you grade yourself in terms of ease of developer use and developer satisfaction?

A: Well, if you rate it based on how people are voting with their workloads, and the number of applications and workloads people are running on AWS, I think we're doing quite well. But I would also argue that's not necessarily the right bar. I would say we are perpetually dissatisfied: we want to make it as easy as possible for developers, abstracting away as much of the heavy lifting as we can. And I think we're going to be working on that forever.

Look at containers and serverless, the smaller units of compute that more and more customers are moving to. Even though we've radically improved how easy it is to get going, customers are moving really fast, and Fargate is a totally different serverless offering nobody else has, I think we still have a long way to go to get to where we want to be.

Q: What are some of the challenges of serverless?

A: If you use containers and serverless together, or I'll say Lambda on the compute side, you actually want to be able to deploy both of them from the same tools. Nobody's made that easy; no one's really made it possible today. Just think, John, about how different deploying containers is from deploying traditional instance-based applications. A traditional instance-based application is one code base. You use infrastructure-as-code tools like CloudFormation, and you build a CI/CD pipeline to deploy it. It's a block of code; if you have to change it, you change the block of code.

That's very different from containers, where people are building in smaller chunks, really microservices, each with its own code base and its own CI/CD pipeline. Lots of teams operate on pieces that together make up a full application. For just one application with all those microservices, it's really hard to keep them consistent, deploy in a high-quality way and track what everybody contributing to that application is doing. And there are loads and loads of those applications. There are no tools today, really no tools, that do that well. That's something that really matters to developers and something we're working on.
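The consistency problem Jassy describes can be made concrete with a toy sketch. The service names and version numbers below are invented for illustration; the idea is simply to compare what an application manifest says each microservice should be running against what each service's own pipeline actually deployed.

```python
# Toy sketch of tracking deployment consistency across microservices.
# All service names and versions here are invented for illustration.

def find_drift(expected: dict, deployed: dict) -> dict:
    """Return services whose deployed version differs from the manifest,
    including services that were never deployed at all."""
    drift = {}
    for service, want in expected.items():
        have = deployed.get(service)
        if have != want:
            drift[service] = {"expected": want, "deployed": have}
    return drift

# The application manifest: what each microservice should be running.
manifest = {"cart": "1.4.2", "checkout": "2.0.1", "search": "3.3.0"}
# What each service's independent CI/CD pipeline actually deployed.
live = {"cart": "1.4.2", "checkout": "1.9.9"}

# Flags "checkout" (version mismatch) and "search" (not deployed).
print(find_drift(manifest, live))
```

With dozens of services and pipelines per application, and many such applications, keeping this view accurate is exactly the tooling gap described above.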

Q: Last year on theCUBE, we were riffing on the fact that there are going to be thousands of databases out there, not one database to rule the world. First, I wanted to ask how customers are viewing database licensing issues that may affect which clouds they use.

A: For many years, most companies ran only relational databases. When you were in the neighborhood of gigabytes, and sometimes terabytes, of data, that might have been OK. But in this new world, where we're in the neighborhood of terabytes and petabytes, and sometimes even exabytes, relational databases are not appropriate for a lot of those workloads.

If you look at the commercial-grade relational databases, which carried all of the workloads back when people ran relational for everything, they're basically Oracle and Microsoft SQL Server. And if you look at those two companies, their offerings are expensive and proprietary and come with high amounts of lock-in.

Then there are the licensing terms, which are really punitive: they're constantly auditing their customers, and when they find things, they try to extract more money from them, or they'll let you off the hook if you buy more. And I think those companies have no qualms about changing the licensing terms midstream to benefit themselves.

Q: Examples?

A: Just look at what Microsoft did with SQL Server over the last year or two, where it basically told customers who had bought SQL Server licenses that they couldn't use them on any cloud other than Microsoft's. Now, is that better for customers? Hell no. Is it better for Microsoft? I think they think so. I happen to think it's really short-term thinking, because customers really resent that, and as quickly as they can flee, they will.

But I think customers in general are fed up and sick of these commercial-grade, old-guard database providers, who change the licensing terms and the pricing whenever they want to suit themselves. I think that's why so many companies have moved as quickly as they can to open-source engines like MySQL. It's why we built Aurora, which is 100% compatible, with editions for MySQL and PostgreSQL, and it's why Aurora has been the fastest-growing service five, six years running.

So I think customers are fed up with it, and they're moving as fast as they can. We have an accelerating number of customers looking to move away not just from Oracle but from SQL Server, because they're really sick of what's happening and they don't trust those companies anymore.

Q: More broadly, what's up with all these new databases from Amazon and others, and what's the value for customers?

A: In this new world, there's so much more data. What's happened over time is that people have realized relational databases are more expensive, more complex and more overkill than a lot of use cases need, and that they're better off with purpose-built databases: key-value stores, in-memory databases, graph databases, time-series databases, document databases, all those types of things.
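The appeal of a purpose-built engine is easiest to see in a deliberately simplified sketch. Below, the same single-record lookup is done two ways: scanning rows the way an unindexed relational query would, versus a hash-based key-value lookup, which is the access pattern key-value stores are built around. The data and numbers are invented; this illustrates the idea, not any particular AWS service.

```python
# Toy contrast between a row-scan lookup (relational-style, no index)
# and a key-value lookup. The data here is invented for illustration.

rows = [{"user_id": i, "plan": "pro" if i % 2 else "free"}
        for i in range(100_000)]

# Relational-style lookup without an index: examine every row until found.
def scan_lookup(user_id):
    for row in rows:
        if row["user_id"] == user_id:
            return row
    return None

# Key-value style: partition the data by key once, then look up in O(1).
kv = {row["user_id"]: row for row in rows}

def kv_lookup(user_id):
    return kv.get(user_id)

# Both return the same record; only the cost per lookup differs.
assert scan_lookup(99_999) == kv_lookup(99_999)
```

Real engines differ in durability, distribution and much else, but the core trade-off is the same: if your workload is key lookups at scale, a store organized around the key wins.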

Most companies have built central data lakes to run analytics and machine learning. Yet at the same time they're using more and more of these purpose-built databases and purpose-built analytics services, like Athena, Redshift, EMR and Kinesis. A lot of customers are trying to come to grips with, "How do I think about having this data in the middle and this data in all of these external nodes, which I need for a lot of my applications for operational performance?"

What a lot of customers are asking for help with is how to move that data from the inside out, from the outside in, and between those purpose-built databases along the perimeter. Because if you can take some of those same views of the data and materialize them into other spots, it opens up all kinds of opportunities that today are really arduous and hard to realize. That's another area we're squarely focused on.

Q: One of the things we've always said is that the huge thing about cloud is horizontal scalability. You can have purpose-built databases, but if you can tie them together horizontally, that's a benefit, and you can still have vertical specialty for the application. So are the old guard, these old mission-critical workloads, going to be replaced or cloudified, or what?

A: An accelerating number of companies are not just building their new databases from the get-go on top of things like Aurora or our purpose-built databases, but migrating away from those old-guard databases as fast as they can. Since we built our Database Migration Service, more than 350,000 databases have been moved with it.

The Database Migration Service makes it quite doable to move the data and the database to another target, and our Schema Conversion Tool lets you move the schemas. The last piece customers really want help with is moving the application code that's unique to some of these databases, because some of these old-guard databases have built unique dialects that work only with their particular database engine.
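The dialect problem is easiest to see with a tiny example. A conversion tool has to rewrite engine-specific constructs into the target engine's equivalents; the toy translator below handles exactly one such construct, Oracle's NVL() versus the ANSI-standard COALESCE() used by engines such as PostgreSQL. It illustrates the idea only, not how the Schema Conversion Tool actually works.

```python
import re

# Illustrative only: rewrite the Oracle-specific NVL(a, b) into the
# ANSI-standard COALESCE(a, b). Real dialect conversion covers far more
# (stored procedures, types, built-ins) than a one-line regex can.
def translate_nvl(sql: str) -> str:
    return re.sub(r"\bNVL\s*\(", "COALESCE(", sql, flags=re.IGNORECASE)

oracle_sql = "SELECT NVL(nickname, username) FROM accounts"
print(translate_nvl(oracle_sql))
```

Multiply this single rewrite by every vendor-specific function, type and procedural extension scattered through years of application code, and it is clear why the application-code layer is the hard remaining piece of a migration.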
