Category Archives: Cloud Servers
Your Digital Transformation Will Be as Successful as the Foundation It's Built On – CMSWire
Reaching an enlightened state of digital transformation is a different journey, often with a different destination, for every company. The phrase "digital transformation" itself is up for interpretation. Some define it as simply moving to SaaS apps or cloud infrastructure. The purer definition, however, is more about reimagining business processes to drive greater business success in the fast-paced digital age.
Working in the software space, I have a front-row seat to many companies' varying quests toward digital maturity. We tend to expect our largest financial institutions, insurance companies and, to some degree, healthcare companies to be digitally transformed and cloud-ready. This is due in part to the nature of their businesses (sensitive consumer information and massive volumes of data) as well as to the fierceness of the competition. If they don't evolve quickly, someone else will steal precious market share. So while they're not always striving to be innovative, they're highly conscious of remaining competitive.
The over 30 million small businesses in the US, however, are always weighing the cost of digital transformation against the cost of maintaining business continuity and being happy (and profitable) with the status quo. Small businesses account for 99.9% of our economic engine and jobs, and they all fall in different places along the spectrum of digital maturity.
One thing most of them have in common? They were not ready for 2020.
Here are three big-picture lessons companies across the board learned during the digital transformation crash course we all got this year.
Being agile is arguably more about culture than adopting specific processes. Agile companies are trained to fail fast, iterate and keep moving forward. This requires complete transparency and open communication across teams. Getting negative results is critical to learning and growing.
Because agile companies are constantly in experimentation mode, where friction is a part of growth rather than something to avoid, their teams know how to adapt quickly when unideal circumstances present themselves, no matter their origin.
Companies that hadn't established agile cultures prior to COVID-19 lockdowns had difficulty adapting to 2020. Growth and velocity were likely secondary to simply keeping up.
Related Article: The 3 Fundamental Pillars of Organizational Agility
We've all heard stories of the critical server under someone's desk that's responsible for a core business function. For companies that still operate this way, whether due to slowness to adapt or fear that cloud servers aren't secure enough, 2020 was extremely stressful when they learned they weren't allowed in the office the next morning.
There are many areas that can be overlooked when you haven't embraced the cloud. For instance, do you use on-premises source control on servers with out-of-date hardware? Is your CRM a modern SaaS tool like Salesforce, or is it a legacy client-server product? Are there third-party connectors or adapters you have built internally over the years that aren't cloud-ready? For example, a C++ plug-in that works perfectly on your LAN but does not expose a Web API for the same functions over HTTP?
Every one of us can relate to this because it wasn't too long ago that most organizations were still vulnerable to these issues, even at the largest enterprises. The takeaway is that on-prem and legacy issues can have a significant adverse effect when unforeseen emergencies happen in the physical environment.
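As a hedged sketch of closing that last gap (the function, route and catalog data below are all hypothetical, not from the article), a legacy LAN-only routine could be fronted with a minimal Web API so the same function becomes callable over HTTP:

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

# Hypothetical stand-in for a legacy native routine (e.g. a C++ plug-in)
# that today only works on the LAN.
def legacy_price_lookup(sku: str) -> dict:
    catalog = {"A100": 19.99, "B200": 4.50}
    return {"sku": sku, "price": catalog.get(sku)}

class ApiHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Expose the legacy function as GET /price/<sku>
        if self.path.startswith("/price/"):
            sku = self.path.rsplit("/", 1)[-1]
            body = json.dumps(legacy_price_lookup(sku)).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()

    def log_message(self, *args):
        pass  # silence per-request logging

if __name__ == "__main__":
    HTTPServer(("127.0.0.1", 8080), ApiHandler).serve_forever()
```

A real migration would add authentication and TLS, but even this small a wrapper makes the function reachable from outside the office.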
Related Article: The Role of Distributed Cloud Computing in the Enterprise
While distributed teams aren't a new concept for most of us, a 100% remote workforce is.
SaaS communication and collaboration tools had already become a part of some companies' everyday processes over the past few years, but the majority of businesses were still slow to adopt. Many remained dependent on drive-by hallway conversations between team members or executives to get things done. To these companies, casual kitchen conversations that led to major business breakthroughs were the norm and instilled in their culture.
Now, with workforces still mostly remote, driving business performance will depend on using digital platforms that focus on team and company alignment on goals, enable decision-making with analytics, hold teams and individuals accountable, and keep everyone in the know.
The takeaway? Establish a data culture. Use data to drive decision-making and understand the data behind your key metrics.
Doing this will increase your team's performance by letting you prioritize the initiatives that help you meet larger organizational goals.
If I were to sum all of this up into a single thought, it would be: You must keep up with the rapid pace of technological change or you will not survive. What was once optional is now critical.
Jason is the SVP of Developer Tools at Infragistics, where for 16 years he's held roles at the intersection of tech evangelism and product management. He and his team spearhead the customer-driven, innovative features and functionality throughout all Infragistics testing, developer and user experience products.
Kubernetes: What You Need To Know – Forbes
Kubernetes is a system that helps with the deployment, scaling and management of containerized applications. Engineers at Google built it to handle the explosive workloads of the company's massive digital platforms. Then in 2014, the company made Kubernetes available as open source, which significantly expanded its usage.
Yes, the technology is complicated, but it is also strategic. This is why it's important for business people to have a high-level understanding of Kubernetes.
"Kubernetes is extended by an ecosystem of components and tools that relieve the burden of developing and running applications in public and private clouds," said Thomas Di Giacomo, who is the Chief Technology and Product Officer at SUSE. "With this technology, IT teams can deploy and manage applications quickly and predictably, scale them on the fly, roll out new features seamlessly, and optimize hardware usage to required resources only. Because of what it enables, Kubernetes is going to be a major topic in boardroom discussions in 2021, as enterprises continue to adapt and modernize IT strategy to support remote workflows and their business."
In fact, Kubernetes changes the traditional paradigm of application development. "The phrase 'cattle vs. pets' is often used to describe the way that using a container orchestration platform like Kubernetes changes the way that software teams think about and deal with the servers powering their applications," said Phil Dougherty, who is the Senior Product Manager for the DigitalOcean App Platform for Kubernetes and Containers. "Teams no longer need to think about individual servers as having specific jobs, and instead can let Kubernetes decide which server in the fleet is the best location to place the workload. If a server fails, Kubernetes will automatically move the applications to a different, healthy server."
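To make the "cattle" model concrete, here is the shape of a typical Kubernetes Deployment manifest, written as a plain Python dict purely for illustration (in practice it would be YAML; the name, image and resource figures are invented):

```python
# A minimal Deployment: declare the desired state and let Kubernetes
# keep it true, rescheduling pods onto healthy servers if one fails.
deployment = {
    "apiVersion": "apps/v1",
    "kind": "Deployment",
    "metadata": {"name": "web"},
    "spec": {
        # Kubernetes keeps 3 replicas running at all times.
        "replicas": 3,
        "selector": {"matchLabels": {"app": "web"}},
        "template": {
            "metadata": {"labels": {"app": "web"}},
            "spec": {
                "containers": [{
                    "name": "web",
                    "image": "example.com/web:1.0",
                    # Resource requests let the scheduler pick a fitting
                    # node, rather than a human assigning servers jobs.
                    "resources": {
                        "requests": {"cpu": "250m", "memory": "128Mi"},
                    },
                }],
            },
        },
    },
}
```

Nothing in the manifest names a specific server; that placement decision belongs to the scheduler, which is exactly the shift Dougherty describes.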
There are certainly many use cases for Kubernetes. According to Brian Gracely, who is the Sr. Director of Product Strategy at Red Hat OpenShift, the technology has proven effective across a wide range of workloads.
Now, all this is not to imply that Kubernetes is an elixir for IT. The technology does have its drawbacks.
"As the largest open-source platform ever, it is extremely powerful but also quite complicated," said Mike Beckley, who is the Chief Technology Officer at Appian. "If companies think their private cloud efforts will suddenly go from failure to success because of Kubernetes, they are kidding themselves. It will be a heavy lift to simply get up to speed because most companies don't have the skills, expertise and money for the transition."
Even the setup of Kubernetes can be convoluted. "It can be difficult to configure for larger enterprises because of all the manual steps necessary for unique environments," said Darien Ford, who is the Senior Director of Software Engineering at Capital One.
But over time, the complexities will get simplified. It's the inevitable path of technology. And there will certainly be more investments from venture capitalists to build new tools and systems.
"We are already seeing the initial growth curve of Kubernetes with managed platforms across all of the hyperscalers, like Google, AWS and Microsoft, as well as the major investments that VMware and IBM are making to address the hybrid multi-cloud needs of enterprise customers," said Eric Drobisewski, who is the Senior Architect at Liberty Mutual Insurance. "With the large-scale adoption of Kubernetes and the thriving cloud-native ecosystem around it, the project has been guided and governed well by the Cloud Native Computing Foundation. This has ensured conformance across the multitude of Kubernetes providers. What comes next for Kubernetes will be the evolution to more distributed environments, such as through software-defined networks, extended with 5G connectivity that will enable edge and IoT based deployments."
Tom (@ttaulli) is an advisor/board member to startups and the author of Artificial Intelligence Basics: A Non-Technical Introduction and The Robotic Process Automation Handbook: A Guide to Implementing RPA Systems. He also has developed various online courses, such as for the COBOL and Python programming languages.
3 cloud computing trends to watch in 2021 – TechHQ
Technology has enabled businesses to continue operating this year, and the cloud has taken center stage; going forward, it will only play a larger role in the enterprise.
According to CloudTech, public cloud spending is expected to grow from US$229 billion in 2019 to US$500 billion in 2023, with a compound annual growth rate (CAGR) of 22.3%.
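As a quick sanity check on those figures (assuming the growth is measured over the four years from 2019 to 2023), the implied compound annual growth rate can be computed directly; it lands near the cited 22.3%, with the small gap presumably down to rounding in the source:

```python
# Projected public cloud spending, per the figures quoted above.
start, end, years = 229e9, 500e9, 4  # US$229B in 2019 -> US$500B in 2023

# CAGR = (end / start) ** (1 / years) - 1
cagr = (end / start) ** (1 / years) - 1
print(f"Implied CAGR: {cagr:.1%}")  # close to the cited 22.3%
```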
Key players Amazon Web Services (AWS), Microsoft Azure, Google Cloud, and Alibaba are expected to grow yet bigger, and by the end of 2021, 60% of companies will leverage containers on public cloud platforms and 25% of developers will use serverless.
As this market continues its growth and evolution, here are three trends to watch in 2021.
Serverless computing implements functions in the cloud on an as-needed basis. Enterprises rely on serverless computing because it frees them to work on core products without the pressure of operating or managing servers. Microsoft CEO Satya Nadella favors the serverless cloud, believing that serverless computing will not only reshape back-end computing but also become key to the imminent future of distributed computing.
Serverless was among the top five fastest-growing PaaS cloud services for 2020, according to the Flexera 2020 State of the Cloud report.
Choosing between public, private, or hybrid cloud environments has proved challenging for some organizations: each offers advantages and disadvantages when it comes to flexibility, performance, security, and compliance.
According to Gartner, in 2019 the number of companies using hybrid cloud was 58%, up 6% from 2018. Hybrid cloud benefits include speed, control, and security. In terms of speed, it optimizes the network to reduce latency and move data faster to where it needs to be. For control, companies can customize their end of the hybrid cloud model, optimize it, and adjust it according to their needs rather than entrusting everything to a third-party cloud provider.
The continued demand for hybrid cloud could lead the world's biggest providers to partially break out from their walled-garden approach. By collaborating and introducing some interoperability, they can continue to satisfy multi-cloud demands. This will enable better data sharing and access between partners, who may be working across diverse applications and standards.
A virtual cloud desktop refers to a setup in which a device's software requirements are fully managed by cloud service providers. The user just needs a screen and some basic hardware, while the rest of the processing is handled seamlessly by cloud-based services.
Virtual cloud desktop users pay only for cloud usage, eliminating costs associated with acquiring powerful new hardware, updating existing hardware, and disposing of redundant computing equipment.
Sometimes known as desktop-as-a-service, this model of computing is offered by Amazon via the Workspaces platform and Microsoft with Windows Virtual Desktop. Google also offers functionality through its Chromebook devices. In practice, this can increase efficiency across a workforce by ensuring everyone is using up-to-date, synchronized technology.
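The economics above can be sketched with a toy break-even comparison. All figures here are made-up assumptions for illustration, not vendor pricing; the point is that the model trades an up-front hardware cost for usage-based spending:

```python
# Invented example figures: amortized workstation cost vs. a
# pay-as-you-go virtual cloud desktop.
hardware_cost = 1200.0        # up-front cost per physical workstation
hardware_lifetime_months = 36
desktop_rate_per_hour = 0.40  # hypothetical cloud desktop hourly rate
hours_per_month = 160

monthly_hardware = hardware_cost / hardware_lifetime_months
monthly_cloud = desktop_rate_per_hour * hours_per_month
print(f"hardware ~ ${monthly_hardware:.2f}/mo, cloud ~ ${monthly_cloud:.2f}/mo")
```

With these invented numbers the cloud desktop actually costs more per month; the real draw is that spending now scales with hours of use (and drops to near zero for idle seats) instead of being a fixed capital outlay.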
Amazon Web Service Explains Its Major OutageAnd Other Small Business Tech News – Forbes
Here are five things in technology that happened this past week and how they affect your business. Did you miss them?
1 Amazon Web Services revealed what caused the major outage last week.
Many AWS operations were impacted last week by an outage in the Northern Virginia region that occurred after capacity was added to its Kinesis servers, which are used by other AWS services such as Cognito and CloudWatch as well as by developers. While the capacity addition set off the outage, it was not the sole reason for it. As capacity was being added, the front-end fleet servers started to exceed the number of threads permitted by the system and, when the maximum was reached, a domino effect was triggered that created the outage. (Source: ZDNet)
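The failure mode described (a fixed thread ceiling crossed by a capacity addition) can be illustrated with a toy model; this is not AWS's implementation, and the numbers are invented:

```python
# Toy model of the failure mode: a front end that needs one OS thread per
# back-end peer will cross a fixed thread cap as the fleet grows.
MAX_THREADS = 100  # hypothetical per-process limit, not AWS's real number

def threads_needed(fleet_size: int, threads_per_peer: int = 1) -> int:
    # Each front-end server keeps a thread per peer in the fleet.
    return fleet_size * threads_per_peer

def within_limit(fleet_size: int) -> bool:
    return threads_needed(fleet_size) <= MAX_THREADS

print(within_limit(90))   # headroom remains
print(within_limit(120))  # the capacity addition breaches the cap
```

The lesson generalizes: any per-peer resource (threads, file handles, sockets) turns "adding capacity" into a way to exhaust a fleet-wide limit.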
Why this is important for your business:
Just a reminder: big brands like Netflix, Twitch, LinkedIn and Facebook, among many others, rely on AWS to deliver their cloud-based products and services. It's an $8 billion business for Amazon and a major part of the company's future strategic plans. And yet, even with all that Amazon money, resources and technical know-how, it still went down. The cloud is powerful. But it's also not infallible.
2 Shopify merchants generated record sales of $5.1 billion over the holiday weekend.
Merchants on the popular e-commerce site Shopify broke records over Black Friday and Cyber Monday, pulling in $5.1 billion total for the holiday shopping weekend. In 2019, Shopify set a $2.9 billion record, which was broken this year by 5pm on the Saturday after Black Friday, a 76 percent increase YOY (year-over-year). According to the data released by Shopify, online sales also ramped up 19 days earlier than in years past, with an 84 percent jump in sales the week of Thanksgiving. Sales during the weekend peaked on Black Friday by 12pm, reaching $102 million within a one-hour window. (Source: Motley Fool)
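The year-over-year figure checks out from the totals quoted above (a quick computation using only those two numbers):

```python
# Reported holiday-weekend totals: US$2.9B in 2019, US$5.1B in 2020.
prior, current = 2.9e9, 5.1e9

yoy = (current - prior) / prior
print(f"YOY growth: {yoy:.0%}")  # about 76%, matching the reported figure
```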
Why this is important for your business:
The smartest small business owners, retailers especially, who pivoted to ecommerce this year are reaping the benefits. It's platforms like Shopify, and their competitors, that enabled many small firms to stay in business and even profit during this unprecedented recession. This is not a trend. This is permanent.
3 Small businesses that pivoted to e-commerce saw record sales during Black Friday weekend.
Due to the difficulties brought on for retailers by the coronavirus pandemic, nearly a fourth of small businesses had to close. However, those that survived shifted to online sales in hopes of continuing operations. Many businesses, which used to rely on their in-store offers and special customer experiences, have had to pivot to rely more heavily on their websites, many offering pick-up and online ordering. According to information released by Adobe Analytics, small businesses saw a 110 percent average increase in their online sales throughout the 2020 holiday season so far, with a big boost coming during Black Friday weekend. (Source: NBC News)
Why this is important for your business:
Uh, see number 2 above, please.
4 Small business digital platform GoSite landed $40 million.
GoSite, a small business digital platform, recently received $40 million in funding. The funds will go toward hiring needed personnel and will help develop and add more features and offerings for small businesses. (Source: Pymnts)
Why this is important for your business:
It's not just ecommerce that's helping small businesses navigate Covid. It's the ability to leverage all the digital tools that help a business grow. With so many small businesses needing to transition online due to the coronavirus, GoSite helps small businesses that operate online with their customer transactions as well as payments, bookings, reviews, messages, websites and listings, all in one place. Over the last year, the platform has doubled its users.
5 Microsoft Teams got an overhauled calling interface, CarPlay support, and more.
Microsoft announced changes and upgrades to calling features within Teams this past week which include updates to CarPlay support and the calling interface, to name a few. Some of these and additional features will be ready early next year. (Source: The Verge)
Why this is important for your business:
Get your people ready. The new calling interface is now going to include calling history, voicemail, and a contact list in one location with hopes that Teams will be able to replace the traditional desk phone. Users will soon have the ability to transfer phone calls between desktop and mobile, allowing more mobility as many continue to work from home due to the pandemic. CarPlay will now also make it easier for users to make or answer calls using Siri.
How AWS is computing the future of the cloud – SiliconANGLE News
The highlight of Amazon Web Services Inc.'s annual re:Invent conference is always the marathon three-hour keynote by Chief Executive Officer Andy Jassy, and despite the show going virtual this year, it'll be the same today.
As he always has, the longtime leader of Amazon.com Inc.'s cloud computing company will debut a dizzying array of new and upgraded services, which AWS is keeping close to the vest until Jassy's keynote starts at 8 a.m. PST today.
But in an exclusive video conversation, Jassy (pictured) revealed more than a few hints of what's coming: He offered a deep dive into the processors AWS' Annapurna operation has designed, the challenges and opportunities of serverless computing, its plan to democratize artificial intelligence and machine learning, and its let-a-thousand-flowers-bloom strategy for purpose-built databases. The conversation was lightly edited for clarity.
Look for more strategic and competitive insights from Jassy in my summary and analysis of the interview, as well as in two more installments coming in the next week or so, and in the first part that ran Monday. And check out all the re:Invent coverage through Dec. 18 by SiliconANGLE, its market research sister company Wikibon and its livestreaming studio theCUBE.
Q: What's your vision on the compute piece of cloud at AWS?
A: There are three basic modes of compute that people run. There are instances, which is really the traditional way people run compute, often when they want deeper access to the resources on the machine. Then there are the smaller units of compute that are really coming on strong and growing at a very rapid clip: containers. And there's serverless. I think all three are here to stay and all are growing at a very rapid rate.
If I look at instances, we have an unmatched array of instances, not just in the number of instances, but in the detail. We have the fastest networking instances, with 400 gigabit-per-second capabilities; the largest high-memory instances, at 24 terabytes; and the largest storage instances. We've got the most powerful machine learning training instances and the most powerful inference instances. We have a very broad array of instances, with more coming in the next few weeks that will further extend that differentiation.
But I think that one of the things our customers are really excited about, and it's changing the way they think about compute on the instance side, is the Graviton2 chips that we have built and launched in our families, like our R6G and M6G and T4G. We've used Intel and AMD processors in our instances for a long time, and I actually expect that we will for a very long time. Those partnerships are deep and matter and will be around for a long time. But we know that if we want to continue to push the price-performance of these chips, and our customers want us to, we're going to have to design and build some of those chips ourselves.
Q: Whats behind doing your own chip designs?
A: We bought a business, the Annapurna business, and they were a very experienced team of chip designers and builders. We put them to work on chips that we thought could really make a big difference to our customers. We started with generalized compute and we built these Graviton chips initially in the A1 instances we launched a few years ago, which really were for scale-out workloads like the web tier or microservices, things like that. It was 30% better price-performance and customers really liked them, but there were some limitations to their capabilities that made it much more appropriate for a smaller set of workloads.
We didn't know how fast customers would pick up those Graviton chips, but they adopted them a lot quicker than we even thought. And customers said, "Can you build a version of that chip that allows us to run all our workloads on it?" That's what we did with Graviton2, and if you look at the performance of what we've delivered with the Graviton2 chips in those instances I mentioned, it's 40% better price-performance than the latest processors from the large x86 providers. That's a big deal. So we have customers trying to move as many workloads as they can as quickly as possible to these Graviton2 instances.
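"40% better price-performance" is a ratio of work done per dollar. A small sketch with invented throughput and price numbers (not real instance pricing) shows how such a figure is derived:

```python
# Price-performance = throughput per dollar of instance cost.
def price_performance(throughput: float, hourly_price: float) -> float:
    return throughput / hourly_price

# Invented example figures: slightly higher throughput at a lower price.
baseline = price_performance(throughput=100.0, hourly_price=1.00)
graviton_like = price_performance(throughput=112.0, hourly_price=0.80)

improvement = graviton_like / baseline - 1
print(f"improvement: {improvement:.0%}")  # 40% with these example numbers
```

Note that the gain can come from either side of the ratio: a modest throughput edge combined with a lower price compounds into a large price-performance difference.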
Q: The Annapurna team doesnt get a lot of public attention. What else has it been doing?
A: We've also put that Annapurna team to work on some hard machine learning challenges. We felt like training was something that was reasonably well-covered. The reality is that when you do big machine learning models at scale, 90% of your cost is on the inference, or the predictions. So we built a chip to optimize inference, called Inferentia. And that already is growing incredibly quickly. Alexa, which is one of the biggest machine learning models and inference machines around, already has 80% of its predictions being made through Inferentia. That's saving it 30% on costs and 25% on latency.
So were continuing to build chips. We have the scale and the number of customers and the input from customers that allow us to be able to optimize for workloads that really matter to customers.
Q: Turning to containers, whats your strategy given all the competition there?
A: Most providers have a single container offering, which is a managed Kubernetes offering. But we realized with builders that they don't believe in one tool to rule the world. Different developers and teams have different tastes, different interests and different needs. So if you're a developer who wants to optimize for the open-source Kubernetes framework, you'll use our Elastic Kubernetes Service (EKS), and it's growing incredibly quickly. If you're somebody who wants to optimize for the container framework with the deepest integration with AWS, you'll work on our Elastic Container Service, or ECS. Because we own it, we can launch everything right from the get-go without having to run it through anybody else. So you have deep integration.
And if you're running containers without thinking about servers or clusters, then you run Fargate. By far, the largest number of net new container customers to AWS get going using Fargate, because it's serverless and they don't have to worry about it. You and I talked about it a few years ago on theCUBE. I remember on your show I said that if Amazon were starting from scratch today, we would build it on top of Lambda and on top of our serverless services. I think it was Stu [Miniman] who said, "You're not serious about that." I said, "Oh no, I am serious about it."
Q: How serious are developers about it?
A: In 2020, half of the new applications that Amazon built were built on top of Lambda compute. I think the next generation of developers are going to grow up building in this serverless fashion, which is a combination of an event-driven, serverless computing service like Lambda and a bunch of enabling services like API Gateway and our event bus, and things like Step Functions for orchestration workflow, but also all the services that can set event-driven serverless triggers. We have 140 services at this point, which is seven times more than anybody else has.
You can really build end-to-end serverless workflows and applications that you couldn't a few years ago. I think compute is totally being reinvented, and we're working hard to help customers have better capabilities that are more cost-effective and more agile.
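A minimal sketch of the event-driven style being described: a Lambda-like handler is just a plain function invoked with an event. The event shape and field names below are invented for illustration, not a real AWS event:

```python
import json

def handler(event, context=None):
    # Compute an order total from the incoming event and return an
    # HTTP-style response, as a function behind an API gateway might.
    total = sum(item["qty"] * item["price"] for item in event.get("items", []))
    body = {"order_id": event.get("order_id"), "total": total}
    return {"statusCode": 200, "body": json.dumps(body)}
```

The appeal is that there is no server in the code at all: the platform decides when and where this function runs, and billing follows invocations.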
Q: One surprise success is the partnership between AWS and VMware, which many people back in 2016 when it was announced by you and VMware CEO Pat Gelsinger thought was VMware simply capitulating to Amazon. Hows it going, and does it represent a trend?
A: You're right that VMware Cloud on AWS, or VMC as the offering is called, has been a huge success for both VMware and AWS. There were a lot of things purported to make it easier to do hybrid that really were a lot of hype and got no traction. The traction on VMC is very significant, even just in the last year: double the number of nodes, double the number of VMs, and big enterprises making their move to the cloud through VMC together.
Because most of the world is virtualized on VMware, being able to use the same tools that you've used to run your infrastructure on VMware for many years to deploy and manage your AWS deployments is super-attractive. That's why it's compelling, but I will tell you that everybody aspires to have partnerships like that. We have a lot of them, and lots of other companies do, but I would say that there aren't that many partnerships that work as well as the way it's working with VMware and AWS.
Q: Why did it work out that way?
A: It takes both companies really willing to lean in and commit engineering resources together, to build something and to get your [people in the field] connected. You can't just make a press release and then let it go, fire and forget. Those teams are meeting all the time at every level. And both Pat and I are very passionate and supportive and prioritize it. And we meet frequently, together and with our teams. And I think those teams really function as one. I think our customers are seeing that. Even if you aspire to have a unique partnership like that, it takes a lot of work.
Q: How are developers viewing AWS now? How would you grade yourself in terms of ease of developer use and developer satisfaction?
A: Well, if you rate it based on how people are voting with their workloads and the amount of applications and workloads people are running on AWS, I think we're doing quite well. But I would also argue that's not necessarily the right bar. I would say that we are perpetually dissatisfied: we want to make it as easy as possible for developers and to abstract away as much of the heavy lifting as we can. And I think we're going to be working on that forever.
If you look at containers and you look at serverless, which are the smaller units of compute that more and more customers are moving to, even though we've radically improved how easy it is to get going, and customers are moving really fast, and Fargate is this totally different serverless offering nobody else has, I think we have a long way to go to get to where we want to be.
Q: What are some of the challenges of serverless?
A: If you use containers and serverless together, or I'll say Lambda on the compute side, you actually want to be able to deploy both of them from the same tools. Nobody's made that easy. No one's made it possible today. Just think about the challenge, John, in the difference between deploying containers and traditional instance-based servers. A traditional instance-based application is one code base. You use infrastructure-as-code tools like CloudFormation, you build a CI/CD pipeline to deploy it. It's a block of code. If you have to change it, you change the block of code.
That's very different from containers, where people are building in these smaller chunks, really microservices, where they all have their own code. They all have their own CI/CD pipelines. There are lots of teams operating on these that end up comprising a full application. For just one application where you have all those microservices, it's really hard to keep them consistent and deploy in a high-quality way, to track what everybody's doing that contributes to that application. And there are loads and loads of those. There are no tools today, really no tools, that do that well. And that's something that really matters to developers and something we're working on.
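One small piece of the consistency problem can be sketched as a drift check across service pipelines. The service names and runtime versions below are invented; the idea is simply to flag services whose pinned version differs from the fleet's most common one:

```python
from collections import Counter

def find_drift(service_versions: dict) -> list:
    # Treat the most common pinned version as the fleet baseline and
    # report every service that has drifted away from it.
    baseline = Counter(service_versions.values()).most_common(1)[0][0]
    return sorted(s for s, v in service_versions.items() if v != baseline)

services = {
    "cart": "python3.11",
    "checkout": "python3.11",
    "search": "python3.9",   # lagging behind the rest of the fleet
    "payments": "python3.11",
}
print(find_drift(services))  # ['search']
```

Real tooling would track images, dependencies and deploy states across many pipelines, but even this toy check shows why the problem grows with every microservice added.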
Q: Last year on theCUBE, we were riffing on the fact that there are going to be thousands of databases out there, not one database to rule the world. First, I wanted to ask how customers are viewing database licensing issues that may affect which clouds they use.
A: For many years, most companies only ran on relational databases. And when you were in the neighborhood of gigabytes, and sometimes terabytes, of data, that might've been OK. But in this new world, where we're in the neighborhood of terabytes and petabytes, and even sometimes exabytes, those relational databases are not appropriate for a lot of those workloads.
If you look at the commercial-grade relational databases, which really had all of the workloads on them back when people were running relational for everything, they're basically Oracle and Microsoft SQL Server. And if you look at those two companies and those offerings, the offerings are expensive, proprietary and have high amounts of lock-in.
And then they have licensing terms that are really punitive, where they're constantly auditing their customers. And when they find things, they try to extract more money from them, or they'll let you off the hook if you buy more from them. And I think those companies have no qualms about changing the licensing terms midstream to benefit themselves.
Q: Examples?
A: Just look at what Microsoft did with SQL Server over the last year or two, where they basically told customers who had bought SQL Server licenses they couldn't use them on any other cloud than Microsoft's. Now, is that better for customers? Hell no. Is it better for Microsoft? I think they think so. I happen to think it's really short-term thinking, because customers really resent that. And as quickly as they can flee, they will.
But I think customers in general are really fed up and sick of these commercial-grade, old-guard database providers who change the licensing terms whenever they want and the pricing whenever they want to suit themselves. And I think it's why so many companies have moved as quickly as they can to open-source engines like MySQL. It's why we built Aurora, which is 100% compatible, with editions for MySQL and PostgreSQL. That's why it's been the fastest-growing service five, six years running.
So I think that customers are fed up with it. They're moving as fast as they can. We have an accelerating number of customers who are looking to move away, not just from Oracle but from SQL Server, because they're really sick of what's happening and they don't trust those companies anymore.
Q: More broadly, what's up with all these new databases from Amazon and others, and what's the value for customers?
A: In this new world, there's so much more data. What's happened over time is that people have realized that relational databases are more expensive and complex than they need, and overkill, for a lot of use cases, and that they're better off with purpose-built databases, like key-value stores, or in-memory databases, or graph databases, or time series databases, or document databases, all those types of things.
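The distinction Jassy is drawing can be made concrete with a toy sketch (illustrative only, not AWS code): many workloads only ever read and write a value by its key, so a full relational engine is more machinery than the job needs. Here Python's built-in dict stands in for a key-value store like DynamoDB, and sqlite3 for a relational database:

```python
# Toy comparison of the two access patterns for a session lookup.
import sqlite3

# Relational approach: schema definition, SQL parsing, a general-purpose
# query planner -- all invoked just to fetch one value by its key.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE sessions (session_id TEXT PRIMARY KEY, user TEXT)")
db.execute("INSERT INTO sessions VALUES (?, ?)", ("abc123", "alice"))
row = db.execute(
    "SELECT user FROM sessions WHERE session_id = ?", ("abc123",)
).fetchone()

# Key-value approach: the entire access pattern is get/put by key.
kv = {}
kv["abc123"] = {"user": "alice"}

assert row[0] == kv["abc123"]["user"] == "alice"
```

When the query is always "fetch by key," the second pattern scales out trivially by hashing keys across nodes, which is the property purpose-built stores exploit.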
Most companies have central data lakes to run analytics and machine learning. And yet at the same time they're using more and more of these purpose-built databases and purpose-built analytics services like Athena and Redshift and EMR and Kinesis and things like that. A lot of customers are trying to come to grips with, "How do I think about having this data in the middle and this data in all of these external nodes, which I need for a lot of my applications for operational performance?"
What a lot of customers are asking for help on is how to move that data from the inside out, from the outside in, and from those purpose-built databases on the outside, along the perimeter, to other outside databases. Because if you can actually take some of those same views from databases and materialize them into other spots, they open up all kinds of opportunities which today are really arduous and hard to pursue. And that's another area that we're really squarely focused on.
Q: One of the things we've always said is that the huge thing about cloud is horizontal scalability. You can have purpose-built databases, but if you can tie them together horizontally, that's a benefit, and you can still have vertical specialty for the application. So are the old guard, these old mission-critical workloads, going to be replaced or cloudified or what?
A: An accelerating number of companies are not just building their new databases from the get-go on top of things like Aurora or the purpose-built databases we have, but are migrating away from those old-guard databases as fast as they can. Since we built our Database Migration Service, more than 350,000 databases have been moved.
The Database Migration Service actually makes it quite doable to move the data and the database to another source, and the Schema Conversion Tool we have allows you to move those schemas. And the last piece customers really want help with is how to move the application code that's unique to some of these databases, because some of these old-guard databases have built unique dialects that work only with their particular database engine.
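To illustrate what a "unique dialect" means in practice, here is a deliberately tiny, hypothetical translation pass (nothing like the real Schema Conversion Tool's coverage) that rewrites a few T-SQL idioms into their PostgreSQL equivalents:

```python
import re

def translate(tsql: str) -> str:
    """Very small, lossy T-SQL -> PostgreSQL rewriter, for illustration only."""
    out = tsql
    # SELECT TOP n ...  ->  SELECT ... LIMIT n
    m = re.search(r"\bTOP\s+(\d+)\s+", out, flags=re.I)
    if m:
        out = out[: m.start()] + out[m.end():] + f" LIMIT {m.group(1)}"
    out = re.sub(r"\bGETDATE\(\)", "NOW()", out, flags=re.I)   # current timestamp
    out = re.sub(r"\bISNULL\(", "COALESCE(", out, flags=re.I)  # null fallback
    return out

print(translate("SELECT TOP 5 name, ISNULL(last_seen, GETDATE()) FROM users"))
# SELECT name, COALESCE(last_seen, NOW()) FROM users LIMIT 5
```

Even this three-rule toy shows why the code, not the data, is the hard part of a migration: every proprietary function, hint and procedural extension in the application has to be mapped to something the target engine understands.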
Read the original:
How AWS is computing the future of the cloud - SiliconANGLE News
ONF Announces Aether 5G Connected Edge Cloud Platform Being Used as the Software Platform for Pronto Project – sUAS News
Today, the Open Networking Foundation (ONF) announced that ONF's Aether 5G Connected Edge Cloud platform is being used as the software platform for the $30M DARPA Pronto project, pursuing research to secure future 5G network infrastructure.
DARPA is funding ONF to build, deploy and operate the network to support research by Cornell, Princeton and Stanford universities in the areas of network verification and closed-loop control. ONF will enhance and deploy its open source Aether software platform as the foundation for the Pronto research work, and in turn the research results will be open sourced back into Aether to help advance Aether as a platform for future secure 5G network infrastructure.
Aether 5G Connected Edge Cloud Platform
Aether is the first open source 5G Connected Edge Cloud platform. Aether provides mobile connectivity and edge cloud services for distributed enterprise networks as a cloud-managed offering. Aether is an open source platform optimized for multi-cloud deployments, and it simultaneously supports wireless connectivity over licensed, unlicensed and lightly-licensed (CBRS) spectrum.
Aether is a platform for enabling enterprise digital transformation projects. Coupling robust cellular connectivity with connected edge cloud processing creates a platform for supporting Industrial Internet-of-Things (IIoT) and Operational Technology (OT) services like robotics control, onsite inference processing of video feeds, drone control and the like.
Given Aether's end-to-end programmable architecture, coupled with its 5G and edge cloud capabilities, Aether is well suited to supporting the Pronto research agenda.
Aether Beta Deployment
ONF has operationalized and is running a beta production deployment of Aether. This deployment is a single unified cloud-managed network interconnecting the project's commercial partners AT&T, Ciena, Intel, Google, NTT, ONF and Telefonica. This initial deployment supports CBRS and/or 4G/LTE radio access at all sites, and is cloud managed from a shared core running in the Google public cloud.
The university campuses are being added to this Aether deployment in support of Pronto. Campus sites will be used by Pronto researchers to advance the Pronto research, serving as both a development platform and a testbed for use case experimentation. The Aether footprint is expected to grow on the university campuses as Aether's 5G Connected Edge Cloud capabilities are leveraged both for research on additional use cases and for select campus operations.
Aether Ecosystem
A growing ecosystem is backing Aether, collectively supporting the development of a common open source platform that can serve as an enabler for digital transformation projects, while also serving as a common platform for advanced research poised to help unlock the potential of the programmable network for more secure future 5G infrastructure.
"At Google Cloud, we are working closely with the telecom ecosystem to help enable 5G transformation, accelerated by the power of cloud computing. We are pleased to support the Open Networking Foundation's work to extend the availability of 5G and edge capabilities via an open source platform." – Shailesh Shukla, VP and GM, Networking, Google Cloud
"Cornell is deploying Aether on campus to bring private 5G/LTE connectivity services with edge cloud capabilities into our research facilities. We expect private 5G/LTE with connected edge cloud to become an important and integral part of our research infrastructure for many research and operational groups on the campus. We also see the value of interconnecting a nation-wide leading infrastructure with Stanford, Princeton and ONF for collaborative research among university researchers across the country." – David Lifka, Vice President for Information Technologies and CIO, Cornell University
"Princeton University is deploying Aether on campus in the Computer Science Department in order to support the Pronto research agenda and offer it as an experimental infrastructure for other research groups. This deployment will enable private 5G/LTE connectivity and edge cloud services and will complement Princeton's existing P4-enabled infrastructure on campus. We plan to also explore how some of our mission-critical production use cases can be supported on a private 5G Connected Edge Cloud." – Jay Dominick, Vice President & CIO, Princeton University
"Ciena is pleased to be an early collaborator on the ONF's Aether project. We have an Aether site running in our 5G lab in Montreal, and we are excited by the prospect of helping enterprises leverage the 5G and edge cloud capabilities of Aether to help build transformative solutions." – Stephen Alexander, Senior Vice President and Chief Technology Officer, Ciena
"Intel is an active participant in the ONF's innovative Aether project to advance the development of 5G and edge cloud solutions on high-volume servers. ONF has been leading the industry with advanced open source implementations in the areas of disaggregated Mobile Core, e.g. the Open Mobile Evolved Core (OMEC), and we look forward to continuing to innovate by applying proven principles of disaggregation, open source and AI/ML with Aether, the Enterprise 5G/LTE Edge-Cloud-as-a-Service platform. As open source, Aether will help accelerate the availability of innovative edge applications. Aether will be optimized to leverage powerful performance, AI/ML, and security enhancements, which are essential for 5G and available in Intel Xeon Scalable Processors, network adapters and switching technologies, including the Data Plane Development Kit (DPDK), Intel Software Guard Extensions (Intel SGX), and the Intel Tofino Programmable Ethernet Switch." – Pranav Mehta, Vice President of Systems and Software Research, Intel Labs
Learn More
The Aether ecosystem is open to researchers and other potential partners who wish to build upon Aether, and we welcome inquiries regarding collaboration. You can learn more at the Aether website and the Project Pronto website.
ONF is also hosting the live virtual event 5G Connected Edge Cloud for Industry 4.0 Transformation, December 8-10, where several of the talks will provide insight into Aether and Project Pronto. Featured speakers include:
Andre Fuetsch, President & CTO, AT&T
Yousef Khalidi, CVP, Microsoft
Kang-Won Lee, VP & Head of Cloud, 5G MEC, SK Telecom
Nick McKeown, Professor of EE and CS, Stanford, and PI for Project Pronto
Guru Parulkar, Executive Director, ONF
Shailesh Shukla, VP/GM, Google
Click here to register for free.
About the Open Networking Foundation: The Open Networking Foundation (ONF) is an operator-led consortium spearheading disruptive network transformation. Now the recognized leader for open source solutions for operators, the ONF first launched in 2011 as the standard bearer for Software Defined Networking (SDN). Led by its operator partners AT&T, China Unicom, Deutsche Telekom, Google, NTT Group and Türk Telekom, the ONF is driving vast transformation across the operator space. For further information visit http://www.opennetworking.org
Read more from the original source:
ONF Announces Aether 5G Connected Edge Cloud Platform Being Used as the Software Platform for Pronto Project - sUAS News
International health IT week in review: December 6 – Pulse+IT
Written by Kate McDonald on 06 December 2020.
Pulse+IT's weekly round-up of international health IT news for the week ending December 6: Germany's digital transformation, dawn of digital medicine, phishing vaccine cold chain, pyjama time charting, Japanese doctors go digital, consult prep tool for patients, standardising patient addresses, linking vaccination records, Fitbit predicts COVID onset, rapid hospital at home
Want to see the future of digital health tools? Look to Germany. – Harvard Business Review ~ Ariel D Stern ~ 02/12/2020
In late 2019, Germany's parliament passed the Digital Healthcare Act (Digitale-Versorgung-Gesetz, or DVG), an ambitious law designed to catalyze the digital transformation of the German health care system.
The dawn of digital medicine – The Economist ~ Staff writer ~ 02/12/2020
The pandemic is ushering in the next trillion-dollar industry.
IBM uncovers global email attack on Covid vaccine supply chain – CNBC ~ Noah Higgins-Dunn ~ 03/12/2020
The company's task force dedicated to tracking down Covid-19 cybersecurity threats said it discovered fraudulent emails impersonating a Chinese business executive at a credible cold-chain supply company.
Ambient documentation with Epic helps reduce clinician burnout at Monument Health – Healthcare IT News ~ Bill Siwicki ~ 03/12/2020
The goal of its Nuance deployment is to significantly reduce after-hours "pajama time" charting while introducing the voice of the patient to the EHR, say its CIO and CMIO.
Google Health and HHS' AHRQ detail new pilot project helping patients prep for their doctor's appointment – MobiHealthNews ~ Dave Muoio ~ 02/12/2020
The tool would help patients develop health questions for their clinician, and reminds them to bring along any necessary test results or other materials.
The pandemic is inducing Japanese doctors to go digital – The Economist ~ Staff writer ~ 02/12/2020
Telemedicine and electronic record-keeping are at last on the rise.
New ONC initiative aims to standardize patient address formats – MedCity News ~ Anuja Vaidya ~ 02/12/2020
Launching in 2021, the ONC's Project US@ aims to enhance efforts to correctly link patients with their health data (a key part of interoperability) by standardizing the way patients' mailing addresses are formatted.
Thousands of US lab results and medical records spilled online after a security lapse – TechCrunch ~ Zack Whittaker ~ 02/12/2020
NTreatment, a technology company that manages electronic health and patient records for doctors and psychiatrists, left thousands of sensitive health records exposed to the internet because one of its cloud servers wasn't protected with a password.
Electronic flu vaccine notifications service expanded – Digital Health News ~ Hannah Crouch ~ 01/12/2020
A service that sends information about flu vaccinations electronically from pharmacies to GP practices has been expanded ahead of winter.
Fitbit study predicts onset of COVID-19 and hospitalization likelihood – MobiHealthNews ~ Mallory Hackett ~ 01/12/2020
Consumer wearable devices can be key tools in predicting the onset of illnesses like COVID-19 by using health metrics like breathing rate, resting heart rate and heart rate variability (HRV), according to findings published in npj Digital Medicine from Fitbit.
Brigham and Women's, Biofourmis launch hospital-at-home solution nationwide – MedCity News ~ Anuja Vaidya ~ 01/12/2020
In January, Brigham published a study in the Annals of Internal Medicine that showed the efficacy of its home hospital program, which utilized the co-developed solution.
See the original post:
International health IT week in review: December 6 - Pulse+IT
Lenovo boosts low end all-flash array with end-to-end NVMe – Blocks and Files
Lenovo has juiced up its entry-level all-flash array with NVMe SSDs, NVMe/FC access and faster Fibre Channel support. The company said the new ThinkSystem DM5100F array is suitable for analytics and AI workloads.
Lenovo teamed up with NetApp in August to produce the all-flash ThinkSystem DM Series. According to the company, the new system delivers 45 per cent higher performance than its precursor, the DM5000, which uses SAS SSDs and 16Gbit/s FC access.
DM arrays use NetApp ONTAP software, while the hybrid flash/disk DE Series uses SAN OS, NetApp's software for its E-Series arrays.
The DM5100F scales out to 48 NVMe SSDs, with capacity topping out at 737.28TB. This is less than the DM5000, which holds 144 SAS SSDs for a maximum 2.2PB capacity.
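Those quoted figures imply a per-drive capacity, which is worth a quick back-of-envelope check (the per-drive number is derived here, not stated in the article):

```python
# 737.28 TB spread across 48 NVMe slots implies 15.36 TB per drive,
# a standard enterprise SSD capacity point.
drives = 48
max_capacity_tb = 737.28
per_drive_tb = max_capacity_tb / drives
print(round(per_drive_tb, 2))  # 15.36
```

So the lower total versus the DM5000 comes purely from the smaller slot count (48 vs 144), not from smaller drives.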
The DM5100F's maximum controller memory is 128GB, twice that of the DM5000F's 64GB. The new model also has 16GB of NVRAM, double the DM5000F's 8GB. The increases reflect the greater burden placed on the DM5100F controller by the NVMe SSDs, NVMe/FC access and overall increased IOPS performance.
Lenovo's new array requires ONTAP 9.8, which is also available for the other DM Series models.
All the DM Series arrays now get S3 object access support, adding to the existing block and file access protocols (FC, iSCSI, NFS, pNFS, SMB, NVMe/FC). There is transparent failover and management of object storage. Customers can add cold-data tiering from the SSDs to the cloud, or replicate data to the cloud.
A new DB720S Fibre Channel switch links servers to the DM and DE Series arrays, adding 64Gbit/s Fibre Channel speed plus lower access latency to the existing 32Gbit/s and 16Gbit/s switches in Lenovo's product locker. (This is an OEM version of the Broadcom G720 switch.)
Lenovo has released Intelligent Monitoring 2.0, an update of its cloud-based management tool for the DM and DE Series arrays. This enables customers to monitor and manage storage capacity and performance for multiple locations from a single cloud-based interface. V2.0 improves the analytics and adds AI-based prescriptive guidance.
See the original post here:
Lenovo boosts low end all-flash array with end-to-end NVMe - Blocks and Files
Bull of the Day: Baidu (BIDU) – Yahoo Finance
After peaking above $270 in 2018, Baidu (BIDU) shares have been stuck in bull market purgatory, spending most of the last 18 months under $140.
That punishment could be ending soon as the company reinvents itself and recovers from government restrictions on its advertising revenues.
Here's what I wrote on June 13 when I profiled BIDU as the Bear of the Day...
Baidu, the $40 billion web search and marketing portal that used to be considered "the Google of China," especially as they forayed into AI technologies and self-driving cars, rallied over 16% since their strong Q1 earnings report on May 18. But guidance was cloudy enough to cause analysts to lower estimates and drive the stock into the cellar of the Zacks Rank.
BIDU shares didn't remain in the cellar of the Zacks Rank for long though as analysts re-worked their models based on company guidance and the conference call. And it gave me the green light to begin a new position. Here's what I wrote in the Buy Alert for my TAZR Trader members on July 6...
It was my plan this weekend to buy BIDU this morning, but little did I know Alibaba (BABA) would wake up like a beast too. Both have gapped higher and should run as BABA will enter a new phase of its bull market and BIDU is the AI-focused player in Chinese big data. Start with a 5% position this week and we'll add on any gap fills back toward $125.
Flash forward to this month's Q3 report, and the now $47 billion BIDU has ascended to a Zacks #1 Rank Strong Buy as analysts raise estimates and view the business transitions as gaining traction.
After a strong earnings beat and raised guidance, the Zacks EPS Consensus for this year jumped 10% to $8.25 in the past two weeks.
Immediately following the company presentation and conference call, Mizuho analyst James Lee raised his firm's price target on Baidu to $185 from $170, noting that the company's core revenue reached positive growth one quarter earlier than expected and was guided up 5% year-over-year for Q4.
Baidu also launched an expansion into "livestreaming," the new platform rage among Chinese youth for shopping and following influencers, by acquiring JOYY (YY) Live's Chinese business for $3.6 billion. Lee likes this move because it diversifies the revenue stream strongly into ecommerce and subscriptions and maintained Baidu as his top pick in China.
Lee also anticipates the AD (Autonomous Driving) segment's asset value could be unlocked through a strategic investment with leading OEMs, which could provide a 20% upside to the stock price.
I have always believed that AI and AD represented the primary growth drivers for Baidu, not internet search and advertising, as many Alphabet (GOOGL) investors must also believe about their beloved.
But for Baidu, I think these growth levers are stronger here because of the strong Chinese government support for advanced technologies. In early 2018, I described the development of the first urban "AI park" outside of Beijing where Baidu would be the primary R&D company to build and test AD technologies.
Since then, an "AI park" sprung up around Shanghai in 2019. And while NVIDIA (NVDA) gets all the attention as the premier builder of AI hardware and software stacks, Baidu's pedigree in AI is beginning to bear fruit.
More coming up on Baidu's evolution into an AI powerhouse in China, right after we take care of some recent negative news.
Muddy Waters shorts JOYY, calls company "almost entirely fraudulent"
Unfortunately, 2 days after Baidu's strong report, I had to share this update with my TAZR members...
Carson Block's Muddy Waters Research said via Twitter, "MW is short $YY bc we conclude it is a near-total fraud. We conclude its businesses, users, and cash are a fraction of what it reports. We estimate that ~90% of YY Live revenue is fraudulent, and ~80% of Bigo rev is fraudulent...When $BIDU diligences $YY Live, the massive scale of fraud will be apparent. Is BIDU so desperate to show growth that it will pay ~7% of its market cap for an almost entirely fraudulent business? Is China Inc that rotten?. Bigo's rot stems from inception & the lie about who founded it. This lie enabled Chmnn Li to defraud at least $156.1 million of real money from $YY shareholders & YY to fraudulently report substantial remeasurement gains."
YY shares dropped over 25% late on Nov 18, from new highs above $105 all the way down to the low $70s. But they have since recovered to $90 as more is learned about the two businesses. And BIDU, who had just made new 18-month highs above $150, fell back to $140.
The short report claims that YY meaningfully misled investors regarding its financials by misrepresenting how revenues flow between itself and talent agencies. Essentially, YY controls talent agencies that manage influencers on the livestreaming platform and paid for more than 50% of the total volume of virtual gifts. To support that claim, the Muddy Waters report points to the PRC's Credit Bureau, indicating that the five largest MCNs (talent agencies) on YY contributed only 15% of YY Live's reported revenues.
Here was reaction from Mizuho's Lee...
We do not cover YY, but based on our understanding of the Livestreaming industry, a platform typically has an internal talent agency that acquires, trains and manages influencers, so the claim by the report is not unusual, but materiality of revenues is the issue that YY needs to address, in our view.
In light of this report, we have confidence in Baidu management to conduct additional due diligence on issues raised by the report. At the same time, we believe that the acquisition could be delayed if YY hires an independent advisor to conduct its own reviews, very similar to what IQ did when facing an allegation of accounting improprieties a few months ago. Furthermore, a potential SEC inquiry could also slow the process.
If Baidu cannot move forward with the acquisition due to MAC (material adverse change), we believe that the company could either build a livestreaming platform internally, or seek other acquisition candidates.
Lee maintained Baidu as their top China Internet pick with a $185 price target, based on a SOTP (sum of the parts) valuation, noting that the stock trades at only 4X FY22 Baidu core EBITDA, against their estimated CAGR of 16%. He said the buy thesis has not changed as all previously outlined catalysts are still in play.
Baidu's Industrial AI Frontier in China
I've always admired Baidu for its committed role in AI, especially as I learned more about the vision and ethics of former chief scientist Andrew Yan-Tak Ng. As a technologist and investor, Ng co-founded and led Google Brain and was a former Vice President and Chief Scientist at Baidu, building the company's Artificial Intelligence Group into a team of several thousand people.
Ng is now an adjunct professor at Stanford University (formerly associate professor and Director of its AI Lab). Also a pioneer in online education, Ng co-founded Coursera and deeplearning.ai where he has successfully spearheaded many efforts to "democratize deep learning," teaching over 2.5 million students through his online courses.
In July when I bought shares, I highlighted news for TAZR members on Baidu's "new infrastructure" plan for the smart economy (courtesy of company PR excerpts)...
Baidu Unveils Plan to Increase Investments in New Infrastructure to Power the Rise of Industrial AI
Plans to Deploy 5 Million AI Cloud Servers by 2030 and Train 5 Million AI Professionals
Baidu announced that it will increase its investments in cloud computing, AI education, AI platforms, chipsets, and data centers in the coming ten years as part of its efforts to construct "new infrastructure" for the smart economy of the future.
"New infrastructure -- which encompasses emerging technologies like AI, cloud computing, 5G, IoT, and blockchain -- will be the driver for China's economic development in the coming decades," said Baidu Chief Technology Officer Haifeng Wang.
Under the plan, Baidu aims to have 5 million intelligent cloud servers by 2030 and train 5 million AI professionals within 5 years, which will help facilitate the widespread application of AI in transportation, city management, finance, energy, health care, and manufacturing to eventually achieve industrial intelligence.
Deploying 5 million intelligent cloud servers by 2030 is an ambitious target that would create a combined computing capability equal to seven times the total calculable computing power of the world's existing top 500 supercomputers.
Viewing human capital as a core component of the new infrastructure, the Baidu goal to train 5 million AI professionals in the next five years keeps humans at the center of this massive AI R&D. Baidu has been working with more than 200 leading universities in China to develop courses related to AI and deep learning and has already trained more than 1 million AI experts.
It sounds like China might be better paced than the US to migrate college students into jobs of the future.
Baidu has more than 7,000 published AI patent applications in China, the highest in the country. The AI open platform Baidu Brain has made available more than 250 core AI capabilities to over 1.9 million developers, while PaddlePaddle, the largest open-source deep learning platform in China, services 84,000 enterprises.
Baidu's Kunlun and Honghu AI chips are among the highest performing AI chips and are built for a wide range of scenarios. Baidu Cloud is China's leader in public cloud and AI cloud services with more than ten data centers across the country.
This new infrastructure is already allowing Baidu to lead the intelligent transformation of different industries. Baidu's smart finance products serve nearly 200 financial institutions, while Baidu's intelligent healthcare products are deployed at more than 300 hospitals and 1500 grassroots medical institutions.
Baidu Brain for Cities is already in place in Chongqing, Suzhou, and other cities, supporting more intelligent city management. Baidu's new investments will enhance its ability to roll out AI applications in these scenarios, as well as in manufacturing, energy, and transportation.
Bottom line on Baidu: The transformation into an AI powerhouse is real and streaming/social deals won't determine the fortunes of BIDU. I would remain a buyer of pullbacks to $130.
Disclosure: I own shares of BIDU, BABA, and NVDA for the Zacks TAZR Trader portfolio.
Looking for Stocks with Skyrocketing Upside?
Zacks has just released a Special Report on the booming investment opportunities of legal marijuana.
Ignited by referendums and legislation, this industry is expected to blast from an already robust $17.7 billion in 2019 to a staggering $73.6 billion by 2027. Early investors stand to make a killing, but you have to be ready to act and know just where to look.
See the pot stocks we're targeting >>
Want the latest recommendations from Zacks Investment Research? Today, you can download 7 Best Stocks for the Next 30 Days. Click to get this free reportJOYY Inc. (YY) : Free Stock Analysis ReportNVIDIA Corporation (NVDA) : Free Stock Analysis ReportAlphabet Inc. (GOOGL) : Free Stock Analysis ReportBaidu, Inc. (BIDU) : Free Stock Analysis ReportAlibaba Group Holding Limited (BABA) : Free Stock Analysis ReportTo read this article on Zacks.com click here.Zacks Investment Research
View original post here:
Bull of the Day: Baidu (BIDU) - Yahoo Finance
A re:Invent like no other shows an AWS capitalizing on 2020 chaos – Diginomica
After more than a decade of explosive growth and eight previous re:Invent conferences, cloud watchers are used to the annual firehose of information packed into CEO Andy Jassy's keynotes.
However, the zeitgeist of 2020 thwarted the throngs that would normally pack into the Sands Convention Center and relegated Jassy to an empty facsimile of the typical expo hall and stage.
Regardless, Jassy didn't disappoint, regaling a larger-than-usual online audience with dozens of announcements spread over a three-hour tour de force of vocal, adrenal and vesical stamina.
With revenue of almost $130 million per day and still doubling every two to three years, AWS has grown to resemble its retail parent: the cloud service with something for everyone. As re:Invent has expanded, it has become progressively more difficult to identify overarching themes in the dozens of product announcements and updates. Indeed, finding significant patterns amidst the barrage of keynote slides and AWS blog posts has become a Rorschach test for cloud watchers: what you highlight is more a reflection of personal preferences and biases than of AWS priorities. Nonetheless, I'll highlight some topics I found significant, rationalizing a few by noting their prominent position early in Jassy's address, before audience fatigue and temporal distractions took over.
I expected AWS to emphasize two areas, homegrown hardware and serverless services, and indeed both got top billing in Jassy's keynote. With serverless (which, for the clueless and cynics out there, doesn't mean devoid of computational hardware, but rather services that are used and decommissioned on demand, automatically and without previously provisioning instances, storage or capacity), AWS is emulating Google by expanding the definition beyond event-driven functions like Lambda to include managed applications and platforms, including an API gateway, pub/sub notifications and message queuing, an event bus and a workflow scheduler. This year, Jassy emphasized the Aurora DBMS service, where a new version brings faster performance and scaling along with better support for SQL Server applications.
Jassy claims that Aurora, its Oracle killer now also being aimed at Microsoft SQL Server customers, is the fastest-growing service in AWS history. With version 2, Aurora can scale to hundreds of thousands of transactions in a fraction of a second without requiring customers to pre-provision peak capacity. AWS estimates that Aurora's improved-granularity auto-scaling cuts costs by up to 90% compared with traditional cloud databases. AWS also introduced Babelfish for Aurora PostgreSQL, a code translation layer that can parse SQL Server's network protocol and commands, making Aurora interoperable with existing SQL Server drivers and applications. AWS also open sourced the Babelfish code to facilitate the migration of SQL Server developers and tools to Aurora.
Containers and serverless functions are the foundation of modern applications, and AWS isn't about to let Google get a significant lead in mindshare or technology. Containers are the vehicle for custom applications and services, with Kubernetes now the preferred workload and cluster management platform. AWS has long hedged its bets by offering two managed orchestration systems, ECS (native, using EC2 instances) and EKS (Kubernetes), and announced forthcoming support for running them in a hybrid configuration on internal bare metal or VMware servers. Both remain managed services (CaaS) that use the same configuration and management UI as their cloud-based alternatives.
More interesting is how AWS continues to expand Lambda into a full-fledged execution environment. One obstacle to building sophisticated Lambda functions is their system dependencies, such as libraries and runtime environments. These can now be accommodated by Lambda's support for container images as large as 10GB. AWS has built base images for Python, Node.js, Java, .NET, Go and Ruby, but supports custom runtimes by bundling the requisite components and the Lambda Runtime API into an Amazon Linux image.
AWS also made Lambda more attractive for frequently used, short-duration workloads by reducing the billing granularity from 100ms to 1ms. AWS gave the example of a function in a 100,000-user web app that is called 20 times per day per client. Although the hypothetical code takes only 28ms to execute, the old pricing model rounded each invocation up to 100ms, costing almost three times as much per month as the new billing scheme. Together, these improvements make Lambda significantly more compelling than a persistent VM for glue logic, scheduled jobs or heavily used, short-duration functions requiring low latency and high scalability. Jassy noted that almost half of the new applications deployed on AWS this year use Lambda, and that Lambda functions collectively run over a million transactions per second.
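The arithmetic behind that example is easy to reproduce. The sketch below compares duration charges only (per-request fees, which are identical under both schemes and pull the total ratio down toward the article's "almost three times," are excluded); the memory size and per-GB-second rate are assumptions for illustration, and the duration-cost ratio itself, 100/28 ≈ 3.6, is independent of both.

```python
# Rough comparison of Lambda duration charges under 100ms vs 1ms billing
# granularity, using the keynote's hypothetical app.
USERS = 100_000
CALLS_PER_USER_PER_DAY = 20
DAYS = 30
EXEC_MS = 28                         # actual execution time per invocation

RATE_PER_GB_SECOND = 0.0000166667    # assumed duration rate, USD
MEMORY_GB = 0.125                    # assumed 128MB function

invocations = USERS * CALLS_PER_USER_PER_DAY * DAYS  # per month

def duration_cost(billed_ms: float) -> float:
    """Monthly duration charge for a given billed duration per invocation."""
    gb_seconds = invocations * (billed_ms / 1000) * MEMORY_GB
    return gb_seconds * RATE_PER_GB_SECOND

old_cost = duration_cost(100)      # 28ms rounded up to the 100ms boundary
new_cost = duration_cost(EXEC_MS)  # billed at 1ms granularity

print(f"old: ${old_cost:.2f}/month, new: ${new_cost:.2f}/month, "
      f"ratio: {old_cost / new_cost:.2f}x")
```

Under these assumptions the old scheme's duration charge is about 3.6 times the new one; adding the flat per-request fee to both sides brings the overall monthly ratio closer to the roughly 3x the keynote cited.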
This year marks a turning point in the adoption of specialized, often custom-designed processors and SoCs in place of general-purpose CPUs. Apple continues to lead the way in customized consumer components, with its A14 smoking competitive SoCs from Qualcomm and Samsung and its new M1 chip outperforming everything Intel has to offer on the PC side. AWS was the first to bring a custom Arm SoC to cloud workloads with its Graviton instances two years ago. Last year it made significant improvements with the Graviton2, which got a further speed boost this year.
The C6g instances target compute-heavy workloads by quadrupling network bandwidth to 100Gbps and boosting block storage (EBS) throughput to 38Gbps. Overall, Jassy claims that the Graviton2 family of instances delivers 40% better price-performance across workloads than Xeon-based alternatives. He also highlighted broad Graviton compatibility across the AWS service portfolio, including ECS, EKS and CodeX developer services, and support by third-party vendors and Linux distros. Indeed, he noted that customers are often surprised by how quickly they can port applications to Graviton.
AWS introduced its first custom chip for machine learning (ML) inference two years ago with Inferentia, leaving the NVIDIA V100 GPUs in its P3 instances as the preferred option for model training, until now. Jassy announced two new offerings, the homegrown AWS Trainium and an instance using Intel's Habana Labs Gaudi processor, both designed to provide better price-performance. AWS didn't offer technical or performance details about Trainium, which won't be available until sometime next year, but says it is compatible with the Neuron SDK used to develop Inferentia models. Since Neuron includes interfaces for popular ML frameworks like TensorFlow and MXNet, AWS expects developers to have little difficulty moving model training from GPU to Trainium instances.
Habana's Gaudi, which Intel acquired in 2019, is another special-purpose processor designed for AI model training; it uses an approach similar to Google's TPU, with eight tensor (vector) processing cores per chip. AWS will bundle eight Gaudi accelerators in each instance, which it expects to deliver up to 40% better price-performance than current GPU-based EC2 instances for training deep learning models. Like Neuron, the Gaudi SDK supports most popular AI frameworks. Both Trainium and Gaudi instances will be usable in EKS and ECS clusters or with the SageMaker development platform.
In reviewing another successful year, Jassy touted AWS as the broadest and deepest cloud platform, and one glance at the accompanying eye chart shows that he isn't exaggerating. Although AWS has a service for everyone, data from the cloud consultancy 2ndWatch shows that core IaaS products like EC2, RDS, DynamoDB and S3 remain the most popular services. However, 2ndWatch also finds that newer SaaS and platform services like Transcribe, Comprehend, Personalize and Athena have the fastest uptake. While these services often get lost amidst the annual flood of re:Invent announcements, AWS has found and filled a latent need for packaged, automated platforms and applications that relieve IT and developers of the burden of provisioning and administering cloud services.
It was telling that although Jassy led his keynote with the meat and potatoes of compute, containers, storage and components, he spent the latter half discussing SaaS business products like Amazon Connect (contact center), QuickSight (BI), Glue (ETL), Lookout (anomaly detection), Monitron (predictive maintenance) and DevOps Guru (operational analysis). I'll have more to say about these later, but collectively they're turning AWS into a cloud supermart that defies traditional classifications.
My focus here is the increasingly differentiated foundation AWS has built for its core infrastructure services, which is creating the same sort of competitive moat that parent Amazon has built through sustained investment in distribution and logistics infrastructure. AWS has recreated the virtuous cycle Bezos first envisioned for Amazon, and the resulting flywheel effect poses nearly insurmountable challenges for AWS's cloud competitors.
More:
A re:Invent like no other shows an AWS capitalizing on 2020 chaos - Diginomica