Category Archives: Cloud Servers

From 5G to 6G: The race for innovation and disruption – TechRepublic

Connectivity is all about faster, better and higher-volume data transfer between endpoints. The race for wireless connections, beginning in 1979 with the first 1G technology deployed in Tokyo by Nippon Telegraph and Telephone (NTT), has led the world to 5G and 6G four decades later.

McKinsey's Technology Trends Outlook 2022 reveals that advanced connectivity, which includes 5G, 6G, low-Earth-orbit satellites and other technologies, is driving growth and productivity across industries, attracting $166 billion in investment in 2021. Unlike newer technologies such as artificial intelligence (AI) or mobility, advanced connectivity has a high adoption rate.

In a report shared with TechRepublic, Market Research Future explains that the COVID-19 pandemic was a significant catalyst for implementing 5G globally.

With the power to transform industries faster, with greater capacity and less latency, 5G tech will impact transportation, banking systems, traffic control, remote healthcare, agriculture, digitized logistics and more, Market Research Future says.

New technologies like AI, machine learning, industrial Internet of Things (IIoT), new intelligent cars, and augmented and virtual reality applications in the metaverse also require faster download times and increased data communications in real-time. 5G and 6G are expected to boost these new trends.

SEE: Metaverse cheat sheet: Everything you need to know (free PDF) (TechRepublic)

Market Research Future explains that the deployment of 5G does not come without challenges; the standardization of spectrum and the complexity of 5G network installation are the most prominent. MIT Tech Review adds that 6G will also face challenges and require cross-disciplinary innovation, new chips, new devices and software.

The next generation of cellular technologies, offering higher spectrum efficiency and high bandwidth, has seen its share of debate. As McKinsey explains, many still wonder whether 5G can completely replace the 4G LTE network and what percentage of networks will have 5G.

By May 2022, the Global Mobile Suppliers Association had identified 493 operators in 150 countries investing in 5G technology, plus an additional 200 companies with technology that could potentially be used for 5G. Announcements of new 5G smartphones rose by 164% by the end of 2020, and the number of cataloged 5G devices increased by 60%.

While new consumer products have rapidly adapted to 5G capabilities, industrial and business devices have not.

Shifting from 4G LTE to private 5G may not be cost-effective for all players; this would depend on a player's technological aspirations and planned use cases, McKinsey said.

Market Research Future explains that $61.4 billion is driving this very competitive market, which is expected to reach $689.6 billion by 2027. However, infrastructure equipment, device and software providers have been restraining growth.

MIT explains that 6G shares similar challenges with 5G but also presents new ones. 6G engineers must work on infrastructure, devices and software to build next-generation communication systems. 6G connectivity cannot be achieved by simply scaling or updating today's technology.

MIT adds that 6G uses more sophisticated active-antenna systems, which integrate further using other Radio Access Technologies such as WLAN (wireless local area network), Bluetooth, UWB (ultra-wideband) and satellite. Fitting all this tech into a smartphone requires reimagining components like chips and radio transceiver technology.

"This will require very creative electrical and computer engineering as well as disruptive industrial engineering and power management," MIT explained.

New 6G chips are essential to handle the increased computing demands. Low latency, the capacity to process a very high volume of data messages with minimal delay, is already a challenge for 5G and will be even more demanding with 6G.

Low latency is essential for interactive data, real-time data and applications, and virtual environments or digital twins. These are all requirements for AI, the metaverse and the industrial sector. 6G latency will be reduced by using nearby devices, creating a signal on a 3-dimensional network.
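
As a back-of-the-envelope illustration of why nearby devices help, the short Python sketch below sums the main contributors to end-to-end delay; the distances, link rates and processing times are illustrative assumptions, not measured 5G or 6G figures.

```python
# Illustrative latency-budget arithmetic (assumed numbers, not 5G/6G measurements).
def propagation_ms(distance_km: float, speed_km_per_ms: float = 200.0) -> float:
    """Propagation delay; ~200 km/ms approximates light in fiber."""
    return distance_km / speed_km_per_ms

def transmission_ms(payload_bits: float, link_bps: float) -> float:
    """Time to push the payload onto the link."""
    return payload_bits / link_bps * 1000

def end_to_end_ms(distance_km, payload_bits, link_bps,
                  processing_ms=1.0, queuing_ms=0.5):
    return (propagation_ms(distance_km)
            + transmission_ms(payload_bits, link_bps)
            + processing_ms + queuing_ms)

# A distant core-cloud server versus a nearby edge node (assumed values).
print(f"core cloud (1,000 km): {end_to_end_ms(1000, 12_000, 1e9):.2f} ms")
print(f"nearby edge (10 km):   {end_to_end_ms(10, 12_000, 1e9):.2f} ms")
```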

SEE: Artificial Intelligence Ethics Policy (TechRepublic Premium)

To solve these problems, new semiconductor materials, intelligent surfaces, AI and digital twin technology developments are being used to test concepts, develop prototypes, and manage and enhance the network.

McKinsey stresses that, so far, only a few telecommunications companies have been able to monetize 5G enough to get a good return on investment (ROI). Therefore, capital expenditures and maintenance costs will also be closely watched. Additionally, large capital investments are required to build new technology and networks, representing another business challenge.

In its Dresden plant in Germany, Volkswagen replaced wired connections between machinery and now updates finished cars with over-the-air updates and connects unmanned vehicles with edge-cloud servers. Michelin uses new connectivity technologies for real-time inventory management, and Bosch equipped their first factory with 5G, enabling automation, connecting hundreds of end-points and synchronizing robotics with human factory workers. These are just some examples McKinsey gives of how advanced connectivity is disrupting industries.

Connectivity is expected to increase the annual rate of data creation by up to 25%, connect 51.9 billion devices by 2025 and impact the global GDP (gross domestic product) by more than $2 trillion. Additionally, 5G and 6G are expected to contribute to closing the digital divide, allowing hundreds of millions of people to be connected for the first time.

In automotive and assembly, 5G and 6G are used to enhance maintenance and navigation, prevent collisions and drive the first fleets of autonomous vehicles. Healthcare devices and sensors connected to low-latency networks will improve patient treatment and monitoring with real-time data, significantly impacting treatment for patients with chronic diseases that require constant checks.

Aerospace and defense are using 5G to boost their capacity and performance, while retail has improved inventory management, supply chain coordination and payment processing, and has created metaverse experiences thanks to the technology. The construction and building industry is printing 3D structures and using high-speed digital twins and applications, and the mining and natural resources sector is turning to smart exploration and exploitation with the digitalization of practices and automation of operations.

Leaders from almost every industry are considering engaging with new connectivity technologies. McKinsey says they should consider advanced connectivity a key enabler of revolutionary capabilities. From digital transformations to driving efficiency through automation and enabling technologies reliant on high-quality connectivity, such as cloud computing and IoT, connectivity will continue to drive the way the world works and lives.

More here:
From 5G to 6G: The race for innovation and disruption - TechRepublic

The Network Binds The Increasingly Distributed Datacenter – The Next Platform

Before founding software-defined networking startup PlumGrid and then moving to VMware when it bought his company in 2016, Pere Monclus spent almost 12 years with Cisco Systems at a time when, while much of enterprise networking was still in the corporate datacenter, the shift to network virtualization and the migration to the cloud were getting underway.

Cisco was dominant in the datacenter networking space and fed organizations a steady stream of hardware, from routers to switches to silicon. The company took an expansive view of its role in networking.

"At Cisco, we were thinking always we have to control the end-to-end of the network," Monclus, vice president and chief technology officer of VMware's Networking and Security business unit, tells The Next Platform. "The idea was we have to control the edge of the network so the core doesn't fall, because the core was where most of the markets were. We would have core routers, core switches and then take it all the way to the access to create the end-to-end networking as a principle, because from a Cisco perspective, what we were delivering was an end-to-end connectivity solution with our protocols."

About a year after Monclus left Cisco to found PlumGrid, VMware bought Nicira for $1.26 billion, a move that allowed the company that already was a significant datacenter presence through its server and storage virtualization to absorb networking into its increasingly software-defined world. NSX and networking have evolved over the past ten years to become a key part of VMware's own adaptation to an IT world that has broken well beyond the datacenter boundaries and out to the cloud and the edge. With containers, microservices and Kubernetes, software now dictates to hardware rather than the other way around.

It's also a world where the network is now the tie that binds this increasingly decentralized IT environment, becoming the main thoroughfare for applications and data moving between the datacenter, cloud and edge, and a central focus for organizations' security measures. All this was on full display this week at VMware's Explore 2022 conference, which allowed the company to tout its ongoing expansion into the cloud and out to the edge, and its networking portfolio's central role in helping to make this happen.

The evolution of networking at VMware has taken several steps, Monclus says. At the time of the Nicira acquisition, enterprises would spend weeks or months putting the network in place before applications that would run on top of it could be put into production.

When VMware got into networking, the company heard from customers that they could quickly create an application and get a server up and running, but it took them weeks to configure the network, he says. "We started that journey with network virtualization and the first story [for networking] was about automation and agility. The question was, 'If I create a VM, could I just connect it to the network and give it an IP address?' That was kind of the early days of network virtualization."

As more workloads and data were making their way out of the datacenter, security of the network became increasingly important, which is why VMware embraced micro-segmentation, a way to manage network access and separate workloads from one another to reduce an organization's attack surface and more easily contain breaches by preventing the lateral movement of attackers. The acquisition two years ago of network security startup Lastline helped fuel the vendor's distributed IDS/IPS technology to complement the east-west protection delivered by micro-segmentation.
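
To make the micro-segmentation idea concrete, here is a minimal, vendor-neutral sketch of an allow-list policy check in Python; real products such as NSX define and enforce such policies very differently, so treat this purely as an illustration of how default-deny segmentation limits lateral movement.

```python
# Vendor-neutral micro-segmentation sketch: default-deny with an explicit allow-list.
from dataclasses import dataclass

@dataclass(frozen=True)
class Workload:
    name: str
    segment: str  # e.g. "web", "app", "db"

# Only these (source segment, destination segment, port) flows are permitted;
# everything else is denied, which is what limits lateral movement after a breach.
ALLOWED_FLOWS = {
    ("web", "app", 8443),
    ("app", "db", 5432),
}

def is_allowed(src: Workload, dst: Workload, port: int) -> bool:
    return (src.segment, dst.segment, port) in ALLOWED_FLOWS

web = Workload("web-01", "web")
db = Workload("db-01", "db")
print(is_allowed(web, db, 5432))  # False: a compromised web VM cannot reach the database
```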

In June, the company added to its lateral security for network and endpoint technologies with a broad threat intelligence capability called Contexa. It sits in the infrastructure and offers visibility into both traditional and modern applications.

VMware over the years has put networking and security capabilities into the hypervisor and made them available as services in its own cloud offering and those of hyperscalers like Amazon Web Services and Google Cloud. It's also making NSX, and its expanding security capabilities, including those from Carbon Black, which it bought in 2019 for $2.1 billion, key parts of the multicloud strategy.

The vendor at Explore rolled out a broad range of enhancements to its networking and security portfolio, all aimed at making it easier for enterprises to manage and secure their multicloud environments. It also offered a look at the near-term future with the introduction of a number of network- and security-focused projects.

VMware is embedding network detection and visibility capabilities into Carbon Black Cloud's endpoint protection program, a move that is now in early access and brings together visibility into both the network and endpoints. It also is adding threat prevention tools like IDPS, malware analysis, sandboxing and URL filtering to its NSX Gateway Firewall, and enhanced bot management to the NSX Advanced Load Balancer (ALB). The last two, along with Project Watch, which aims to offer a continuous risk and compliance assessment model for multicloud environments, are part of VMware's Elastic App Secure Edge (EASE), a strategy announced last year to offer a range of data plane services around networking and security.

As we noted earlier this week, VMware also is embracing data processing units (DPUs) from Nvidia for a number of its cloud-based offerings, including vSphere 8 and, in this case, NSX. Cloud providers like AWS and Oracle already are using DPUs, and many in the industry believe that servers and other hardware in the near future will routinely include the chips. Monclus says customers will gravitate toward DPUs or smartNICs for performance and security. For organizations like telcos, which demand high performance and whose datacenters are revenue-generating facilities, enabling CPUs to offload networking or compute tasks to DPUs is attractive.

There is a tradeoff: they may save 15 percent in CPU utilization, which they can sell back to customers, but there also is the cost of the DPUs themselves. Where datacenters are a cost center, however, the draw is increased security through the workload isolation the DPUs offer, and that likely will be a fast-growing use case for the chips, Monclus says.
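
A rough sketch of that tradeoff arithmetic; the 15 percent CPU reclaim is the figure cited above, while the server value and DPU price are assumptions picked purely for illustration.

```python
# Back-of-the-envelope DPU offload break-even (assumed prices, cited 15% reclaim).
cpu_reclaim_fraction = 0.15      # share of CPU capacity freed by offloading to the DPU
server_value_per_year = 6_000.0  # assumed sellable value of one server's compute, $/year
dpu_cost = 2_500.0               # assumed DPU price, $
dpu_lifetime_years = 3

reclaimed_value = cpu_reclaim_fraction * server_value_per_year * dpu_lifetime_years
net_benefit = reclaimed_value - dpu_cost
print(f"reclaimed compute value: ${reclaimed_value:,.0f}")
print(f"net benefit over {dpu_lifetime_years} years: ${net_benefit:,.0f}")
```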

Looking to the near future, VMware offered a look at Project Northstar and Project Trinidad, along with the aforementioned Project Watch. Project Northstar, in technical preview, is a software-as-a-service (SaaS) network and security offering that will deliver services, visibility and controls to NSX users, who can manage them via a central cloud control plane.

The services include VMware's NSX Intelligence, ALB, Network Detection and Response, and Web Application Firewall.

"We are taking the control plane of NSX and turning it into a SaaS service to enable true multicloud solutions," Monclus says. "When we have a policy as a service, it works on vSphere environments, but it works across VMware Cloud, VMware Cloud Network, AWS, Google, Azure, and we have the same advanced protection, we have the same load balancer."

Both Project Trinidad and Project Watch are aimed at addressing the needs of modern workloads, he says. They're not tied to physical endpoints; instead, the API becomes the endpoint. Project Trinidad uses AI and machine learning models to understand what normal and expected east-west API traffic patterns between microservices look like, so that if something anomalous pops up, it can be quickly detected.

"We basically discover all the APIs, the schemas, the API data, and we create a baseline, and we can start from the baseline," Monclus says. What Project Trinidad introduces is AI/ML-driven deep correlation between workflows and microservices.
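
The toy Python sketch below illustrates the baseline-then-flag idea described here; Project Trinidad's actual models are not public, so the data structure and threshold are purely illustrative.

```python
# Toy east-west API baseline: flag service-to-service calls not seen during learning.
from collections import Counter

# Observed API calls between microservices during a "learning" window.
baseline_calls = Counter({
    ("checkout", "payments", "POST /charge"): 950,
    ("checkout", "inventory", "GET /stock"): 1200,
})

def is_anomalous(src: str, dst: str, api: str, min_baseline: int = 10) -> bool:
    """Flag any call pattern that was rare or absent in the learned baseline."""
    return baseline_calls[(src, dst, api)] < min_baseline

print(is_anomalous("checkout", "payments", "POST /charge"))    # False: expected traffic
print(is_anomalous("checkout", "payments", "GET /customers"))  # True: never baselined
```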

As noted, Project Watch brings continuous security, compliance and risk assessment, as well as automated and encrypted connectivity across clouds (AWS, Google Cloud and Microsoft Azure virtual private clouds and virtual networks) and security operations, and it integrates workflows from areas such as security, cloud operations and lines of business onto a single platform.

It also addresses the challenge of not only enabling networks and security to adapt to modern workloads but also ensuring that legacy hardware that can't make that change remains secure.

VMware will assess and report the security risks enterprises face, giving them the data necessary to make decisions, he says, adding that the vendor wants to create a continuous monitoring model in the same way as high availability, which uses the metric of three 9s, four 9s and so forth. "We are trying to create a metric of how well you're running your datacenter or your applications from across security points," he says.
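
The "three 9s, four 9s" shorthand maps directly onto allowed downtime per year, as a few lines of Python make concrete:

```python
# Availability "nines" translated into allowed downtime per year.
MINUTES_PER_YEAR = 365 * 24 * 60

for nines in (3, 4, 5):
    availability = 1 - 10 ** (-nines)
    downtime_min = MINUTES_PER_YEAR * (1 - availability)
    print(f"{nines} nines ({availability:.5f}): ~{downtime_min:.1f} minutes of downtime/year")
```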

Read more here:
The Network Binds The Increasingly Distributed Datacenter - The Next Platform

Opinion: The line between data and privacy is measured only by success – Gambling Insider

If there is a modern subject of discussion which elicits a strong response from the public, it is that of data privacy.

Tech businesses such as Apple have made a conscious push towards informing the public about the subject, while making their products ever more resistant to underhanded data retrieval.

In the gaming industry, data has become a way of life. Every bit that can be used is used by the industry to target new players, improve the flow of casinos (thereby making it easier and more profitable to attract customers) and generally improve market trends.

However, there is a catch.

At G2E Asia this year, Qlik's Senior Director of Solutions and Value Engineering, Chin Kuan Tan, revealed the results of Qlik's research into player preferences relating to data usage in the gaming and hospitality industry, which threw up some interesting conundrums.

Tan's presentation showed that 72% of people will stop engaging with a company completely if they have concerns over data collection, while also showing that 76% of players prefer hyper-personalisation over mass marketing techniques.

The duality of these two statistics shows that gaming operators are walking a knife edge when it comes to how the data gleaned from customers is used; and with the increased focus on data on an individual scale, the manner in which operators market themselves to customers has to evolve.

The more the industry uses servers and algorithms to solve and modernise everyday tasks, the more it relies on data collection to operate. This is something that Oosto CMO Dean Nicolls spoke about in a recent interview with Gambling Insider in relation to the company's facial recognition software.

When asked how Oosto's system protects the faces of the millions of people who enter the locations where its technology is used, Nicolls spoke in depth about the subject in the upcoming September/October edition of Gambling Insider magazine:

"You might think a lot of the data is traversing from the casino to our central servers or to our cloud servers; that's not the case. Everything is done locally. Traditionally, in a Vegas casino, all the servers are sitting on the premises and they are running our algorithms themselves, so we're not getting that data on our servers. Now, naturally any data that goes from the camera to servers still needs to be encrypted; and it is, both in transit and at rest, but it isn't going anywhere on our servers."
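
For readers unfamiliar with the jargon, "encrypted at rest" simply means the stored blob is ciphertext rather than raw data. The sketch below shows the idea in miniature using the third-party Python cryptography package; it is illustrative only and has nothing to do with Oosto's actual implementation.

```python
# Minimal "encrypted at rest" illustration (not Oosto's code).
from cryptography.fernet import Fernet

key = Fernet.generate_key()   # in practice the key would live in an HSM or key vault
cipher = Fernet(key)

face_embedding = b"\x01\x02\x03"               # stand-in for locally derived data
stored_blob = cipher.encrypt(face_embedding)   # this ciphertext is what lands on disk
assert cipher.decrypt(stored_blob) == face_embedding
```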

The comments of the Oosto CMO show a willingness to please the audience, though perhaps they also show a brushing-off, a desire to get past the question as quickly and easily as possible without being drawn into a larger conversation about ethical data practices.

On the whole, Oosto appears to do a good job of protecting the data of innocents; a difficult task when your business relies on filming and recognising large quantities of people en masse. However, the failure to categorically explain the safety precautions in place, outside of using the term encrypted, feels telling.

Data and the gaming industry is an odd mix, then.

In the modern day, the industry demands that players be protected from those that would do harm by obtaining data, while players themselves are ready to quit if they feel vulnerable for a second in signing up to a service.

Gaming companies want to use data to further the consumer experience while retaining customers and reassuring them that any data provided will not be sold or used in other, nefarious ways, as has been reported frequently since Edward Snowden's revelations in 2013.

Customers want what they always have wanted: a seamless service that benefits them without risk. But with the online nature of the modern world, this risk is accepted as long as it is mitigated, leaving gaming companies juggling the subjects of personalised experiences, data loss and customer satisfaction.

More:
Opinion: The line between data and privacy is measured only by success - Gambling Insider

VMware's Project Monterey bears hardware-accelerated fruit – The Stack

Back in September 2020, VMware announced what it called Project Monterey as a material rearchitecture of infrastructure software for its VMware Cloud Foundation (VCF), a suite of products for managing virtual machines and orchestrating containers. The project drew in a host of hardware partners as the virtualisation heavyweight looked to respond to the way in which cloud companies were increasingly using programmable hardware like FPGAs and SmartNICs to power data centre workloads that could be offloaded from the CPU, in the process freeing up CPU cycles for core enterprise application processing work and improving performance for the networking, security, storage and virtualisation workloads offloaded to these hardware accelerators.

Companies, including numerous large VMware customers looking to keep workloads on-premises, had struggled to keep pace with the way in which cloud hyperscalers were rethinking data centre architecture like this. Using emerging SmartNICs, or what are increasingly called Data Processing Units (DPUs), required extensive software development to adapt them to the user's infrastructure. Project Monterey was an ambitious bid to tackle that from VMware's side, in close collaboration with SmartNIC/DPU vendors Intel, NVIDIA and Pensando (now owned by AMD) and server OEMs Dell, HPE and Lenovo, in order to improve performance.

This week Project Monterey bore some very public fruit, with those server OEMs and chipmakers lining up to offer new products optimised for VMware workloads and making some bold claims about total cost of ownership (TCO) savings as well as infrastructure price-performance. Ultimately, by relieving server CPUs of networking services and running infrastructure services on the DPU, isolated from the workload domain, security will also markedly improve, all partners emphasised, as will workload latency and throughput for enterprises.

NVIDIA, for example, thinks using its new data processing units (DPUs) can save $8,200 per server in total cost of ownership (TCO) terms, or $1.8 million in efficiency savings over three years for 1,000-server installations. (A single BlueField-3 DPU replaces approximately 300 CPU cores, NVIDIA CEO Jensen Huang has earlier claimed.) Customers can get the DPUs with new Dell PowerEdge servers that start shipping later this year and which are optimised for vSphere 8: "This is a huge moment for enterprise computing and the most significant vSphere release we've ever seen," said Kevin Deierling, a senior VP at Nvidia. "Historically, we've seen lots of great new features and capabilities with VMware's roadmap. But for the first time, we're introducing all of that goodness running on a new accelerated computing platform that runs the infrastructure of the data centre."

(The BlueField-3 DPUs accelerating these VMware workloads feature 16 Arm A78 cores, 400Gbit/s bandwidth and PCIe Gen 5 support, plus accelerators for software-defined storage, networking, security, streaming, line-rate TLS/IPSEC cryptography, and precision timing for 5G telco and time-synchronised data centres.)

"Certain latency- and bandwidth-sensitive workloads that previously used virtualization pass-thru can now run fully virtualized with pass-thru-like performance in this new architecture, without losing key vSphere capabilities like vMotion and DRS," said VMware CEO Raghu Raghuram. Infrastructure admins can rely on vSphere to also manage the DPU lifecycle, Raghuram added: "The beauty of what the vSphere engineers have done is they have not changed the management model. It can fit seamlessly into the data center architecture of today."

The releases come amid a broader shift to software-defined infrastructure. As NVIDIA CEO Jensen Huang has earlier noted: "Storage, networking, security, virtualisation and all of that: all of those things have become a lot larger and a lot more intensive, and it's consuming a lot of the data centre; probably half of the CPU cores inside the data center are not running applications. That's kind of strange, because you created the data center to run services and applications, which is the only thing that makes money. The other half of the computing is completely soaked up running the software-defined data centre, just to provide for those applications. [That] commingles the infrastructure, the security plane and the application plane and exposes the data centre to attackers. So you fundamentally want to change the architecture as a result of that; to offload that software-defined virtualisation and the infrastructure operating system, and the security services, and to accelerate it."

Vendors are lining up to offer new hardware and solutions born out of the VMware collaboration, meanwhile. Dell's initial offering of NVIDIA DPUs is on its VxRail solution and takes advantage of a new element of VMware's ESXi (vSphere Distributed Services Engine), moving network and security services to the DPU. These will be available via channel partners in the near future. Jeff Boudreau, president of Dell Technologies' Infrastructure Solutions Group, added: "Dell Technologies and VMware have numerous joint engineering initiatives spanning core IT areas such as multicloud, edge and security to help our customers more easily manage and gain value from their data."

AMD, meanwhile, is making its Pensando DPUs available through Dell, HPE and Lenovo. Those with budgets for hardware refreshes and a need for performance for distributed workloads will be keeping a close eye on price and performance data as the products continue to land in coming weeks and months.

More:
VMware's Project Monterey bears hardware-accelerated fruit - The Stack

Microsoft to enact new cloud outsourcing and hosting licensing changes which still don’t address core customer… – ZDNet

On August 29, Microsoft went public with promised cloud outsourcing and hosting changes which officials first outlined earlier this year. These changes, which will take effect on October 1, 2022, still don't address some of the core customer and partner complaints which led to Microsoft revising its policies in these areas.

Microsoft introduced outsourcing restrictions in 2019, resulting in customers paying more to run Microsoft software in non-Microsoft cloud environments. Customers who had been using AWS and Google Cloud as dedicated hosts for running Windows Server and clients were affected directly, but some of them didn't realize the extent of the impact until their contracts with Microsoft were up for renewal this year. Microsoft's changes around its bring-your-own-license terms made their contracts more expensive if they wanted to run Microsoft software on anything but Azure.

Some European partners and customers took their complaints to European antitrust authorities. Microsoft responded with a set of "European Cloud Principles", which officials said would level the playing field -- to some extent -- for partners and customers who wanted to run Microsoft software on certain non-Microsoft cloud infrastructures.

What those principles didn't include was what many customers cared most about: The ability to run Microsoft software on Amazon Web Services, Google and Alibaba. They focused on customers who wanted to move their software licenses to other clouds outside of those "Listed Providers."

"This is just them circling back and lightening up the Cloud Solution Providers (CSPs) rules. It doesn't change the complexities and limitations that affect the 'listed providers': Amazon, Google, and Alibaba, and their joint customers with Microsoft," said Directions on Microsoft analyst Wes Miller. "While this is good news for a set of providers, there's no change to the complex and encumbered rules that affect those three providers and customers. "

Earlier this year, Microsoft officials said they would address some of the complaints from European cloud vendors about restrictive cloud licensing policies that resulted in customers paying more to run Microsoft software in non-Microsoft cloud environments. The list of changes Microsoft outlined today, company officials said, will make it easier for customers to bring their software to partners' clouds, ensure partners have access to the products they need to sell cost-effective solutions that customers want, and empower partners to build solutions with speed and scale.

Specifically, Microsoft is adding a new Flexible Virtualization benefit which officials said will allow customers with Software Assurance or subscription licenses to use their own licensed software to build and run solutions on any infrastructure except the Listed Providers (AWS, Google, Alibaba). Customers who want to run their own Microsoft-licensed software on those providers' infrastructure will have to buy the licenses from those providers. And any user with a Microsoft 365 F3, E3 or E5 license will be able to virtualize Windows 10 or 11 on their own servers or outsourcers' servers -- as long as those outsourcers are not AWS, Google or Alibaba -- without any additional licenses required. (Currently, customers need a VDA add-on license to virtualize qualifying Windows 10 or 11 editions.)

Microsoft also is adding a new Windows Server virtual core licensing option as part of the Flexible Virtualization benefit, which will allow Windows Server to be licensed on a virtual core basis rather than on a physical core basis, as is currently the case. Microsoft officials said this change will help with moving Windows Server workloads to the cloud.
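
A hedged sketch of how the rule described above might be expressed in code; actual Microsoft licensing terms are considerably more nuanced, so this only encodes the two conditions named in this article.

```python
# Encodes only the rule as described in this article; real licensing terms are more complex.
LISTED_PROVIDERS = {"aws", "google", "alibaba"}

def can_bring_own_license(has_sa_or_subscription: bool, target_cloud: str) -> bool:
    """Flexible Virtualization benefit: BYOL anywhere except the Listed Providers."""
    if not has_sa_or_subscription:
        return False
    return target_cloud.lower() not in LISTED_PROVIDERS

print(can_bring_own_license(True, "partner-hoster"))  # True: eligible outsourcer
print(can_bring_own_license(True, "AWS"))             # False: licenses must come from AWS
```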

Microsoft's August 29 blog post outlines in more detail these coming hosting/outsourcing licensing changes, along with a few additional ones.

See the rest here:
Microsoft to enact new cloud outsourcing and hosting licensing changes which still don't address core customer... - ZDNet

Teradata takes on Snowflake and Databricks with cloud-native platform – VentureBeat

Database analytics giant Teradata has announced cloud-native database and analytics support. Teradata already had a cloud offering that ran on top of infrastructure as a service (IaaS), enabling enterprises to run workloads across cloud and on-premises servers. The new service supports software-as-a-service (SaaS) deployment models that will help Teradata compete against companies like Snowflake and Databricks.

The company is launching two new cloud-native offerings. VantageCloud Lake extends the Teradata Vantage data lake to a more elastic cloud deployment model. Teradata ClearScape Analytics helps enterprises take advantage of new analytics, machine learning and artificial intelligence (AI) development workloads in the cloud. The combination of cloud-native database and analytics promises to streamline data science workflows, support ModelOps and improve reuse from within a single platform.

Teradata was an early leader in advanced data analytics capabilities that grew out of a collaboration between the California Institute of Technology and Citibank in the late 1970s. The company optimized techniques for scaling analytics workloads across multiple servers running in parallel. Scaling across servers provided superior cost and performance properties compared to other approaches that required bigger servers. The company rolled out data warehousing and analytics on an as-a-service basis in 2011 with the introduction of the Teradata Vantage connected multicloud data platform.

"Our newest offerings are the culmination of Teradata's three-year journey to create a new paradigm for analytics, one where superior performance, agility and value all go hand-in-hand to provide insight for every level of an organization," said Hillary Ashton, chief product officer of Teradata.

Teradata's first cloud offerings ran on specially configured servers on cloud infrastructure. This allowed enterprises to scale applications and data across on-premise and cloud servers. However, the data and analytics scaled at the server level. If an enterprise needed more compute or storage, it had to provision more servers.

This created an opening for new cloud data storage startups like Snowflake to take advantage of new architectures built on containers, meshes and orchestration techniques for more dynamic infrastructure. Enterprises took advantage of the latest cloud tooling to roll out new analytics at high speed. For example, Capital One rolled out 450 new analytics use cases after moving to Snowflake.

Although these cloud-native competitors improved many aspects of scalability and flexibility, they lacked some aspects of governance and financial controls baked into legacy platforms. For example, after Capital One moved to the cloud, it had to develop an internal governance and management tier to enforce cost controls. Capital One also created a framework to streamline the user analytics journey by incorporating content management, project management and communication within a single tool.

This is where the new Teradata offerings promise to shine, combining the kinds of architectures pioneered by cloud-native startups with the governance, cost controls and simplicity of a consolidated offering.

"Snowflake and Databricks are no longer the only answer for smaller data and analytics workloads, especially in larger organizations where shadow systems are a significant and growing issue, and scale may play into workload management concerns," Ashton said.

The new offering takes advantage of Teradata's R&D into smart scaling, allowing users to scale based on actual resource utilization rather than simple static metrics. It also promises a lower total cost of ownership and direct support for more kinds of analytics processing. For example, ClearScape Analytics includes a query fabric, governance and financial visibility, which also promise to simplify predictive and prescriptive analytics.

ClearScape Analytics includes in-database time series functions that streamline the entire analytics lifecycle, from data transformation and statistical hypothesis tests to feature engineering and machine learning modeling. These capabilities are built directly into the database, improving performance and eliminating the need to move data. This can help reduce the cost and friction of analyzing a large volume of data from millions of product sales or IoT sensors. Data scientists can code analytics functions into prebuilt components that can be reused by other analytics, machine learning, or AI workloads. For example, a manufacturer could create an anomaly detection algorithm to improve predictive maintenance.
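
The pandas sketch below stands in for the kind of time-series feature engineering and anomaly detection described above. It does not use ClearScape's in-database SQL functions; it simply illustrates the workflow on simulated sensor data.

```python
# Simulated IoT sensor stream: rolling features plus a z-score anomaly flag.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
sensor = pd.Series(rng.normal(50, 2, 500))  # simulated temperature readings
sensor.iloc[400] = 80                        # inject a fault for the demo

features = pd.DataFrame({
    "rolling_mean": sensor.rolling(24).mean(),
    "rolling_std": sensor.rolling(24).std(),
})
zscore = (sensor - features["rolling_mean"]) / features["rolling_std"]
anomalies = sensor[zscore.abs() > 4]
print(anomalies)  # the injected spike surfaces as a predictive-maintenance candidate
```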

"Predictive models require more exploratory analysis and experimentation. Despite the investment in tools and time, most predictive models never make it into production," said Ashton. New ModelOps capabilities include support for auditing datasets, code tracking, model approval workflows, monitoring model performance and alerting when models become non-performing. This can help teams schedule model retraining when models start to lose accuracy or show bias.
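
A minimal sketch of that monitor-and-retrain trigger; the metric, baseline and tolerance below are assumptions, not Teradata defaults.

```python
# Alert when rolling accuracy drifts materially below the approved baseline.
def should_retrain(recent_accuracy: list[float],
                   baseline_accuracy: float = 0.92,
                   tolerated_drop: float = 0.05) -> bool:
    if not recent_accuracy:
        return False
    rolling = sum(recent_accuracy) / len(recent_accuracy)
    return rolling < baseline_accuracy - tolerated_drop

print(should_retrain([0.91, 0.90, 0.92]))  # False: within tolerance
print(should_retrain([0.84, 0.85, 0.83]))  # True: schedule retraining
```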

"What sets Teradata apart is that it can serve as a one-stop shop for enterprise-grade analytics, meaning companies don't have to move their data," Ashton said. "They can simply deploy and operationalize advanced analytics at scale via one platform."

Ultimately, it is up to the market to decide if these new capabilities will allow the legacy data pioneer to keep pace or even gain an edge against new cloud data startups.

Read this article:
Teradata takes on Snowflake and Databricks with cloud-native platform - VentureBeat

Dell Servers with NVIDIA DPUs and VMware vSphere 8 Announced – StorageReview.com

NVIDIA used VMware Explore to announce a new data center solution with Dell Technologies bringing AI training, AI inference, data processing, data science, and zero-trust security capabilities to enterprises globally. The solution combines Dell PowerEdge servers with NVIDIA BlueField DPUs, NVIDIA GPUs, and NVIDIA AI Enterprise software and is optimized for the newly announced VMware vSphere 8 enterprise workload platform.

NVIDIA has already added the technology to the NVIDIA LaunchPad hands-on lab to allow enterprises to experience the combination of these technologies and get access to hardware and software for end-to-end workflows in AI, data science, and more.

Manuvir Das, head of Enterprise Computing at NVIDIA, said:

"AI and zero-trust security are powerful forces driving the world's enterprises to rearchitect their data centers as computing and networking workloads are skyrocketing. VMware vSphere 8 offloads, accelerates, isolates, and better secures data center infrastructure services onto the NVIDIA BlueField DPU and frees the computing resources to process the intelligence factories of the world's enterprises."

Dell's Travis Vigil, senior vice president of portfolio and product management, Infrastructure Solutions Group, added:

"Dell and NVIDIA's long tradition of collaborating on next-generation GPU-accelerated data centers has already enabled massive breakthroughs. Now, through a solution that brings NVIDIA's powerful BlueField DPUs along with NVIDIA GPUs to our PowerEdge server platform, our continued collaboration will offer customers performance and security capabilities to help organizations solve some of the world's greatest challenges."

vSphere on BlueField DPUs will unlock hardware innovation, helping customers meet the throughput and latency needs of modern distributed workloads. vSphere will enable this by offloading and accelerating network and security infrastructure functions from CPUs onto DPUs.

The BlueField DPUs will be managed through the iDRAC BMC with a special cable, allowing full out-of-band management versus a traditional edge-card installation. This means that the DPU in this deployment won't usually be a customer-installed device, but one that you select when the server is configured.

Customers running applications that demand high network bandwidth and fast cache access, such as in-memory databases, can expect to reduce the number of cores required while achieving better performance. Reducing the number of cores will also improve TCO. Offloading to DPUs can also result in a higher transaction rate with lower latency by leveraging freed CPU cores and better cache locality, all while benefitting from vSphere DRS and vMotion.

By running infrastructure services on DPUs and isolating them from the workload domain, vSphere on DPUs will boost infrastructure security. Additionally, now in beta, NSX Distributed Firewall will offload to DPUs to scale customers' security operations by securing east-west traffic at line rate without the need for software agents.

vSphere 8 will dramatically accelerate AI and machine learning applications by doubling the virtual GPU devices per VM, delivering a 4x increase in passthrough devices, and adding vendor device groups that allow binding of high-speed networking devices and the GPU.

Krish Prasad, VMware's senior vice president and general manager of the Cloud Platform Business Unit, explained:

"Dell PowerEdge servers built on the latest VMware vSphere 8 innovations, and accelerated by NVIDIA BlueField DPUs, provide next-generation performance and efficiency for mission-critical enterprise cloud applications while better protecting enterprises from lateral threats across multi-cloud environments."

As NVIDIA-Certified Systems, the Dell PowerEdge servers can run the NVIDIA and VMware AI-Ready Enterprise Platform, featuring the NVIDIA AI Enterprise software suite and VMware vSphere.

NVIDIA AI Enterprise is a comprehensive, cloud-native suite of AI and data analytics software optimized to enable organizations to use AI on familiar infrastructure. It is certified for any deployment, from the enterprise data center to the public cloud, and includes global enterprise support to keep AI projects on track.

An upcoming release of NVIDIA AI Enterprise will support new capabilities introduced in VMware vSphere 8, including the ability to support larger multi-GPU workloads, optimize resources and efficiently manage the GPU lifecycle.

With NVIDIA LaunchPad, enterprises can get access to a free hands-on lab of VMware vSphere 8 running on the NVIDIA BlueField-2 DPU.

Dell servers with vSphere 8 on NVIDIA BlueField-2 DPU will be available later in the year.

NVIDIA AI Enterprise with VMware vSphere is now available and can be experienced in the NVIDIA LaunchPad hands-on lab.

Read the original here:
Dell Servers with NVIDIA DPUs and VMware vSphere 8 Announced - StorageReview.com

NAS vs. server: Which storage option should you choose? – TechTarget

Next to computer processing and applications, data storage is one of the most important IT activities. Data storage ensures all information created by a private user, small business or multinational corporation can be stored in a safe and secure location for future retrieval and use. Data storage technology is also essential from a disaster recovery (DR) perspective, as properly backed up files and information can help businesses recover operations if a disruption affects their ability to conduct business.

Before choosing a storage option, IT administrators should examine how NAS and server-based storage address planning, evaluating, selecting and implementing data storage.

NAS provides cost-effective and easy-to-implement options to increase storage capacity. As the term network-attached storage implies, the storage device attaches to a network -- most often, Ethernet or other TCP/IP-based networks -- and can be launched quickly into production. Other devices on the network use NAS storage capabilities. NAS devices are freestanding devices that typically have at least two bays into which storage modules are inserted. The more bays there are, the more storage can be implemented.

NAS devices typically come with their own OS and network interface software, so devices can be easily connected to an existing LAN, powered up and quickly placed into service. NAS devices are typically file-based, as opposed to server-based devices that can be either block- or file-based. This makes them compatible with most OSes. Capacities can range from a few terabytes to dozens of terabytes. NAS is ideal for individual users and SMBs that need easy-to-use storage with flexibility, convenience and moderate investments.

Server-based storage typically connects to a primary file server; uses the file handling functions of the server and its processing power; and connects either directly to the primary server(s) or via a network, such as Ethernet or a SAN designed for high-capacity data transfers among users and storage arrays. Other servers, such as application servers, coexist in the infrastructure.

Server storage is the vehicle of choice for large organizations because capacities can be dramatically expanded by adding more capacity to existing servers -- known as scale-up storage -- or by adding more physical storage servers to the infrastructure -- called scale-out storage. Server-based storage can support block and file storage formats, which makes it ideal for larger organizations with a variety of storage requirements.
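
A quick bit of capacity arithmetic contrasts the two expansion paths; the server counts and capacities here are assumptions chosen purely for illustration.

```python
# Scale-up vs. scale-out capacity arithmetic (illustrative sizes only).
current_servers = 4
capacity_per_server_tb = 100

# Scale-up: add drives to the existing servers.
scale_up_added_tb_per_server = 50
scale_up_total = current_servers * (capacity_per_server_tb + scale_up_added_tb_per_server)

# Scale-out: add two more servers of the same size.
scale_out_total = (current_servers + 2) * capacity_per_server_tb

print(f"scale-up total:  {scale_up_total} TB")   # 600 TB on the same four servers
print(f"scale-out total: {scale_out_total} TB")  # 600 TB, plus extra CPU and network
```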

Unlike NAS, storage servers are typically implemented in different forms, such as standalone towers or rack-based devices. In these situations, IT management must ensure the servers have enough power, are physically secure and are properly cooled. These and other criteria -- such as additional floor space to place server racks -- typically make server storage more expensive than NAS. Given the need to potentially manage dozens and possibly hundreds of storage arrays, the need for experienced storage technicians is significant; NAS devices are more of a DIY platform.

While the focus of this article is NAS and server-based storage, cloud-based storage offerings can accommodate virtually any storage requirement. NAS and server-based requirements can also be implemented in cloud environments, which bring advantages of their own.

Cloud storage can serve as primary storage, secondary storage for specific operational requirements and backup storage for DR planning.

The table below provides a more detailed comparison of NAS and server storage based on specific criteria. It's important to note that, before any storage offering is selected, user and storage requirements -- now and in the near to long term -- must be defined, as well as the type of storage activities that will need to be supported, for example, primary storage or data backups.

Depending on the business requirements, both NAS and server-based storage can occupy the storage infrastructure. Both can be used for primary file storage for specific applications and backup storage for DR.

NAS can work alongside server storage, potentially for specialized requirements, such as secure data storage and retrieval.

Server storage can work alongside NAS; the key is to define the requirements and select the appropriate storage offering.

Read the original here:
NAS vs. server: Which storage option should you choose? - TechTarget

Apple eases subscription path to Xcode Cloud to keep devs in the ecosystem DEVCLASS – DevClass

Apple has opened subscriptions for Xcode Cloud, a continuous integration and delivery (CI/CD) service designed to work with Xcode, the official IDE for macOS and iOS development.

Xcode Cloud was introduced in June and is an add-on subscription for developers who are already signed up to the Apple Developer Program.

The cost starts at $14.99 per month for up to 25 compute hours, though this basic plan is free until the end of December 2022. A fee of $99 per year is still required for the developer program itself. Further compute hours are available at extra cost, for example 250 hours for $99 per month, and can now be obtained via the Apple Developer App.
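
The effective price per compute hour for the two plans mentioned above works out as follows (the separate $99-per-year developer program fee is excluded):

```python
# Effective Xcode Cloud cost per compute hour, from the prices quoted above.
plans = {"base (25 h/month)": (14.99, 25), "250 h/month": (99.00, 250)}

for name, (monthly_usd, hours) in plans.items():
    print(f"{name}: ${monthly_usd / hours:.2f} per compute hour")
# base (25 h/month): $0.60 per compute hour
# 250 h/month: $0.40 per compute hour
```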

Xcode Cloud is based on workflows defined in Xcode. The core actions in a workflow are build, analyze, test, and archive. The service also supports post-actions, such as distributing a new version of an app, and custom build scripts. When a build completes, the artifacts (output from the build) are stored online for 30 days so they can be downloaded, for example by App Store Connect, the web-based tools Apple offers for managing apps in its Store, including those for iPhone, iPad, Mac and Watch. There is also a service called TestFlight, which is for distributing preview releases to testers.

Apple considers these three services (Xcode, TestFlight, and App Store Connect) the core elements of its CI/CD system.

The service works in conjunction with a git repository, which must be one of Bitbucket, GitHub or GitLab, though self-managed instances are supported as well as cloud-hosted ones. Xcode Cloud clones a repository temporarily onto its own servers, though Apple says it "doesn't store your source code and securely handles any stored data (for example, your derived data) and keeps it private."

Xcode Cloud is all about keeping developers within the Apple ecosystem. CI/CD is widely adopted, and without Xcode Cloud, devs will use competing systems such as GitHub Actions or CircleCI. The advantage of Xcode Cloud is its integration.

"I liked that with a single git push I could compile, archive, deploy to TestFlight, and send for beta review. I even pushed a fix from my iPhone using Working Copy one time while I was on a train," said one developer on Hacker News.

Developers who work entirely with Apple products may be pleased, but the company seems uninterested in scenarios such as cross-platform development, developing web applications on a Mac, or using an IDE other than Xcode. Another disappointment is that Apple's cloud build service does not enable development of Mac or iOS software from non-Mac computers.

See the original post here:
Apple eases subscription path to Xcode Cloud to keep devs in the ecosystem DEVCLASS - DevClass

Public cloud to double to $90b by 2025 in nation – China Daily

China's next wave of cloud migration is expected to be spearheaded by critical industrial and manufacturing sectors, and the country's public cloud market will more than double from $32 billion in 2021 to $90 billion by 2025, said global management consulting firm McKinsey & Company.

According to the latest report from McKinsey, despite a relatively late start, China has made enormous progress in terms of cloud migration speed and has become the world's second-largest cloud market.

Over the next few years, the speed of cloud migration in China will be broadly in line with the rest of the world, with a 19-percentage-point increase expected in IT workloads shifting to the cloud between 2021 and 2025.
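
The market-size figures above, $32 billion in 2021 rising to $90 billion by 2025, imply roughly a 30 percent compound annual growth rate; a quick check of the arithmetic:

```python
# Implied compound annual growth rate for China's public cloud market.
start_usd_bn, end_usd_bn, years = 32, 90, 2025 - 2021

cagr = (end_usd_bn / start_usd_bn) ** (1 / years) - 1
print(f"implied CAGR: {cagr:.1%}")  # about 29.5% per year
```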

However, China differs from other countries in its high proportion of private cloud, which is expected to reach 42 percent by 2025, compared with 36 percent for the public cloud.

McKinsey's survey suggested that only 11 percent of the companies surveyed plan to be mostly on the public cloud. The remainder will continue to use a private cloud with traditional servers or use a hybrid cloud.

"Cloud adoption is strongly correlated with digital transformation. By 2025, 78 percent of all IT workloads will be on cloud in China," said Kai Shen, partner at McKinsey. "But when we look across the cloud adoption of business use cases with P&L impact, we find that adoption rates are much lower at between 0 percent to 25 percent."

P&L is an indicator that can show a company's ability to increase its profit, either by reducing costs and expenses or increasing sales.

"It demonstrates that Chinese companies still have enormous opportunities to develop, adopt and scale use of cloud, for example in dynamic pricing and personalization, digital twins and three-dimensional simulations, sales forecasting and inventory optimization," he said.

In terms of industries, the report also pointed out that sectors with numerous tech-savvy and digital-native companies, such as e-commerce and education, have already shifted a significant portion of their IT workloads to the cloud in China.

Labor-intensive industrial and manufacturing sectors, on the contrary, have not done that. But that could quickly change given the latest national policy guidance, it added.

The rest is here:
Public cloud to double to $90b by 2025 in nation - China Daily