
Why Microsoft thinks you don’t need another free Hyper-V Server SKU – TechRepublic

Running Windows Server on your own infrastructure and using cloud services to do it offers lots of benefits. But that cloud harmony means subscriptions rather than unsupported free operating systems.

Image: Shutterstock/alphaspirit.it

With the release of Windows Server 2022, there was a notable omission from the list of server versions: There won't be a new version of the free Windows Hyper-V Server.

Based on Server Core, this stripped-down server OS contains just the Microsoft Hyper-V hypervisor, Windows Server driver support, the virtualisation components needed for the Hyper-V role and some basic options for management. You can run Control Panel, Task Manager, Notepad and (if you install the Core App Compatibility feature) graphical admin tools like File Explorer and MMC snap-ins for disk management, failover clustering and the like. That doesn't include Hyper-V Manager though: The intention is that you manage Hyper-V Server remotely using Windows Admin Center, Remote Server Administration Tools or PowerShell.

Hyper-V Server was designed to give organizations wanting to run Linux VMs a free way to do that while still managing the infrastructure with the same tools they use for Windows Server, whether that's scripts or System Center. If you want to run Windows Server in a VM on Hyper-V Server, you need a Standard or Datacenter Edition licence for the virtualisation rights, so you might as well just run that pay-for version of Windows Server with the Hyper-V role.

Not releasing a new version of Hyper-V Server doesn't mean Microsoft's Hyper-V hypervisor is in any danger of going away: Hyper-V underpins the security features that are part of the reason for the Windows 11 hardware requirements, and it's used in Microsoft products from Azure to Xbox. Microsoft is even working on porting it to run on Linux (instead of just hosting Linux VMs).


In fact, there are new virtualisation features in Windows Server 2022 that Hyper-V Server 2019 doesn't have, like nested virtualization support on AMD processors; Hyper-V Server only supports nested virtualization with Intel CPUs.

So why drop Hyper-V Server from the Windows Server 2022 list? Microsoft is directing customers to Azure Stack HCI instead: a virtualisation service that runs on your own hardware, combining the benefits of hyperconverged infrastructure with the convenience, management, simple billing and regular updates of a cloud service. You don't have to buy a Windows Server licence for your virtualisation platform. Just pay monthly fees for the cloud service, and you get the software-defined storage and networking that customers kept asking Microsoft to add to Hyper-V Manager (which would have effectively turned it into an HCI platform). You can also run cloud services like Azure Kubernetes Service on it.

Vijay Kumar, director of product marketing for Windows Server and Azure Arc, calls Azure Stack HCI "the modern virtualization host" but also a path to containers.

"All of our innovation and all of our thinking about Hyper V, whether it's how do we think about Hyper V [on premises] or how do we think about it in harmony with Azureall of this investment we're trying to focus onto Azure Stack HCI and do it there. For customers that really want to want to use Hyper V and adopt it as a virtualization platform, we really want them to go to Azure Stack HCI," he told TechRepublic.

"We're also adding more and more ways, including through Azure Arc, to connect to Azure, so they can manage the lifecycle of virtual machines on Azure Stack HCI, they can manage the lifecycle of Kubernetes clusters or on AKS HCI that they're running on Azure Stack, from Azure.

"In the future, we'll come up with more and more investments to improve [virtualization on Azure Stack HCI]. Azure Stack HCI will have faster innovation than the long term servicing channel, so you will see more and faster implementation of functionality and capabilities there."

As a free product, Hyper-V Server appealed both to enthusiasts and to commercial organizations "kicking the tires" before adopting. For commercial customers, he noted that it doesn't have the same technical product support customers get with Azure Stack HCI, which is also available as a free 60-day trial that will help organizations do the kind of testing they used Hyper-V Server for.


That might also be quicker, he suggests. "We would argue customers don't have to spend a whole lot of time in the first 60 days configuring all of that and they can just test what it is that they want to test."

If 60 days isn't quite enough, he also points out that although you can't use old hardware you have lying around for Azure Stack HCI, you also don't have to buy a Windows Server licence or pay for the hardware up front the way you're used to. "You can buy it as cloud-based billing or modern billing, where you don't have to invest a whole lot of money in terms of capital expenditure to buy Azure Stack HCI. You can buy per core, you can pay on a per-month basis, and then you can get it for however short a period of time you want to continue testing."
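For a sense of what that per-core, per-month model adds up to, here is a minimal back-of-the-envelope sketch in Python. The $10 per physical core per month host service fee is an assumption based on launch-era Azure Stack HCI pricing, so check Azure's current price list before relying on it.

```python
# Back-of-the-envelope cost of an Azure Stack HCI trial cluster under
# per-core monthly billing. The $10/core/month host fee is an assumption
# based on launch pricing; hardware and any Azure services are extra.
CORE_RATE_USD = 10  # assumed fee per physical core per month

def monthly_cost(nodes: int, cores_per_node: int) -> int:
    """Software cost for one month of running the cluster."""
    return nodes * cores_per_node * CORE_RATE_USD

# A modest two-node cluster with 16-core CPUs costs 2 * 16 * $10 = $320
# per month, and the subscription can stop when the evaluation ends.
print(monthly_cost(nodes=2, cores_per_node=16))  # 320
```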

That kind of cloud pricing on your own infrastructure is no coincidence; the shift from a free server version to a low-price hybrid option fits in with the hybrid direction of Windows Server generally, because Microsoft really does think this kind of hybrid is the future.

"Increasingly we're trying to harmonise Windows Server with the cloud offering on Azure and trying to add value to customers that want to continue to use workloads on premises and give them more abilities to use Azure in harmony with their on-premises Windows Server," Kumar said.

"If they want to continue to use Windows Server on premises and not have anything to do with Azure, they can continue to do so, but we also are investing in related technologies with Azure Stack HCI, and other things to give them more value. If customers choose to run soup to nuts Microsoft technology so they have one throat to choke, so to speak, they can have Azure Stack HCI, they can run Windows Server 2022 VMs on Azure Stack HCI that are connected to Azure, lifecycle managed by Azure, and they're adding security, monitoring and policy and other features on that.

Azure Arc brings those same hybrid benefits to any infrastructure you run yourself, Windows or Linux, and that's something Kumar said Microsoft would continue to significantly invest in.

"If I could have my way, I'd have every Windows Server out there enabled with Arc, connected to Arc, managed from Azure and secured from Azure. You can easily use features like Azure Security centre or Azure Defender, Azure Monitor, Azure Policy. You can use Azure Policy across however many data centres you have and manage it all from Azure and you'll benefit from all the security investments that we make on Azure. It's a way to continue to build the workloads on premises that you're comfortable doing, but getting the value of the cloud to your on premises investments."

The way you buy Extended Security Updates (ESUs) for Windows Server reflects this same shift to hybrid. If you run an older version of Windows Server on your own infrastructure, you pay high prices for security updates once extended support is over, but they're free as an extra cloud benefit if you run the same workloads in the Azure public cloud or on Azure Stack HCI.

Similarly, when Microsoft finally introduced the new Windows Server certification that IT pros had been asking for (which will likely be generally available in early 2022), it wasn't just about Windows Server. The Windows Server Hybrid Administrator Associate certification covers hybrid and cloud IaaS management as well as servers on your own infrastructure. And even there, the certification covers integrating Windows Server environments with Azure services like Azure Arc, Azure Automation Update Management, Azure Backup, Azure Security Center, Azure Migrate and Azure Monitor.

This is certainly an in-depth Windows Server certification; it covers configuring and administering core features like Active Directory, Desired State Configuration, Hyper-V, containers, backup, monitoring, networking, storage (including Storage Spaces Direct), security, failover clustering and a broad range of troubleshooting scenarios, and it will be useful even if you're only dealing with Windows Server on premises. But a substantial portion of the skills it measures are about running Windows Server in Azure or using Azure services with your own servers, reflecting Microsoft's pitch that Windows Server is better when you integrate it with Azure services.


As Jeff Woolsey, principal program manager for Azure Stack HCI, explained in a video introducing the exams for the new certification, this isn't just what Microsoft thinks matters but what enterprises are asking for and what IT pros will be using in their jobs. "IT managers are looking for someone that can manage this hybrid world because that's where folks are going.

"We've really got a body of people that are ready for a hybrid certification. They're going to be having some stuff down here and some stuff up here for a long time; hybrid coexistence is where people want to be."

And if you're not interested in any of this Azure integration, and you want Hyper-V Server for something Linux-based that will take longer than the 60 days of the Azure Stack HCI trial, without even the small monthly charge, the best option is simply to carry on using Hyper-V Server 2019, which continues to be free and is supported until January 2029 (by which time Microsoft may have other options).

At the recent Windows Server 2022 event, Woolsey pointed out that "it continues to be a great hypervisor available at no cost and can be managed from Windows Admin Center, also at no cost."

Given that many organizations run on older versions of Windows Server rather than upgrading to the latest release immediately, customers might well have stayed on Hyper-V Server 2019 for some time anyway. And seven years of support should give Microsoft enough time to discover if it's losing future customers by not giving the free Hyper-V enthusiasts a new version.



Google goes all in on hybrid cloud with new portfolio of edge and managed on-prem solutions – TechCrunch

Today at Google Cloud Next, the company's annual customer conference, Google announced a broad portfolio of hybrid cloud services designed to deliver computing at the edge of Google's network of data centers, in a partner facility or in a customer's private data center, all managed by Anthos, the company's cloud-native management console.

There's a lot to unpack here, but Sachin Gupta, Google's GM and VP of Product for IaaS, says the strategy behind the announcement was to bring along customers who might have specialized workloads that aren't necessarily well suited to the public cloud, a need he says they were continually hearing about from potential customers.

That means providing them with some reasonable alternatives. "What we find is that there are various factors that prevent customers from moving to the public cloud right away," Gupta said. "For instance, they might have low latency requirements or large amounts of data that they need to process, and moving this data to the public cloud and back again may not be efficient. They also may have security, privacy, data residency or other compliance requirements."

All of this led Google to design a set of solutions that work in a variety of situations that might not involve the pure public cloud. A solution could be installed on the edge in one of Google's worldwide data centers, in a partner data center like a telco or a colo facility like Equinix, or as part of a managed server inside a company's own data center.

With that latter component, it's important to note that these are servers from partner companies like Dell and HPE, as opposed to a server manufactured and managed by Google, as Amazon does with its Outposts product. It's also interesting to note that these machines won't be connected directly to the Google cloud in any way, but Google will manage all of the software and provide a way for IT to manage cloud and on-prem resources in a single way. More on that in a moment.

The goal with a hosted solution is a consistent and modern approach to computing using either containers and Kubernetes or virtual machines. Google provides updates via a secure download site, and customers can check these themselves or let a third-party vendor handle all of that for them.

The glue here that really holds this approach together is Anthos, the control software the company introduced a couple of years ago. With Anthos, customers can control and manage software wherever it lives, whether that's on premises, in a data center or on public clouds, even from competitors like Microsoft and Amazon.

[Image: Google Cloud hybrid portfolio architecture diagram. Image Credits: Google Cloud]

The whole approach signals that Google is attempting to carve out its own share of the cloud by taking advantage of a hybrid market opening. While this is an area that both Microsoft and IBM are also trying to exploit, taking this comprehensive platform approach while using Anthos to stitch everything together could give Google some traction, especially in companies that have specific requirements that prevent them from moving certain workloads to the cloud.

Google reached 10% market share in the cloud infrastructure market for the first time in the most recent quarterly report from August with a brisk growth rate of 54%, showing that they are starting to gain a bit of momentum, even though they remain far behind Amazon with 33% and Microsoft with 20% market share.


NASA Turns to the Cloud for Help With Next-Generation Earth Missions – NASA Jet Propulsion Laboratory

However, with missions like SWOT and NISAR, that won't be feasible for most scientists. If someone wanted to download a day's worth of information from SWOT onto their computer, they'd need 20 laptops, each capable of storing a terabyte of data. If a researcher wanted to download four days' worth of data from NISAR, it would take about a year to perform on an average home internet connection. Working with data stored in the cloud means scientists won't have to buy huge hard drives to download the data or wait months as numerous large files download to their system. "Processing and storing high volumes of data in the cloud will enable a cost-effective, efficient approach to the study of big-data problems," said Lee-Lueng Fu, JPL project scientist for SWOT.
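A quick calculation shows why downloading is off the table. The Python sketch below uses the article's 20 TB/day figure for SWOT (the "20 one-terabyte laptops" claim) and assumes roughly 25 TB/day for NISAR and a 25 Mbps home connection; the NISAR volume and the link speed are illustrative assumptions, not mission specifications.

```python
# Sanity-checking the download-time claims. SWOT's ~20 TB/day is from the
# article; NISAR's ~25 TB/day and the 25 Mbps home link are assumptions
# chosen to illustrate the scale.
TB_BITS = 8e12          # bits in one terabyte
LINK_BPS = 25e6         # assumed average home connection, bits per second

def download_days(volume_tb: float) -> float:
    """Days to pull `volume_tb` terabytes over the assumed link."""
    return volume_tb * TB_BITS / LINK_BPS / 86_400

print(f"One day of SWOT data:    {download_days(20):.0f} days")      # ~74 days
print(f"Four days of NISAR data: {download_days(4 * 25):.0f} days")  # ~370 days, about a year
```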

Infrastructure limitations won't be as much of a concern, either, since organizations won't have to pay to store mind-boggling amounts of data or maintain the physical space for all those hard drives. "We just don't have the additional physical server space at JPL with enough capacity and flexibility to support both NISAR and SWOT," said Hook Hua, a JPL science data systems architect for both missions.

NASA engineers have already taken advantage of this aspect of cloud computing for a proof-of-concept product using data from Sentinel-1. The satellite is an ESA (European Space Agency) mission that also looks at changes to Earth's surface, although it uses a different type of radar instrument than the ones NISAR will use. Working with Sentinel-1 data in the cloud, engineers produced a colorized map showing the change in Earth's surface from more vegetated areas to deserts. "It took a week of constant computing in the cloud, using the equivalent of thousands of machines," said Paul Rosen, JPL project scientist for NISAR. "If you tried to do this outside the cloud, you'd have had to buy all those thousands of machines."

Cloud computing won't replace all of the ways in which researchers work with science datasets, but at least for Earth science, it's certainly gaining ground, said Alex Gardner, a NISAR science team member at JPL who studies glaciers and sea level rise. He envisions that most of his analyses will happen elsewhere in the near future instead of on his laptop or personal server. "I fully expect in five to 10 years, I won't have much of a hard drive on my computer, and I will be exploring the new firehose of data in the cloud," he said.

To explore NASAs publicly available datasets, visit:

https://data.nasa.gov/


Oracle opens Israel cloud centre to withstand rocket attacks – Reuters

Jerusalem mayor Moshe Leon, Oracle Israel Country Manager Eran Feigenbaum, Israel's Minister of Economy Orna Barbivai and Israel's Minister of Communications Yoaz Hendel attend an event where Oracle announce the opening of a regional cloud facility, in Jerusalem October 13, 2021. REUTERS/Steven Scheer

JERUSALEM, Oct 13 (Reuters) - Oracle (ORCL.N) opened the first of two planned public cloud centres in Israel on Wednesday, enabling companies and other Israeli customers to keep their data on local servers rather than rely on other countries.

The underground data centre is nine floors - about 50 metres - below one of Jerusalem's technology parks. Designed to operate in the face of potential terror acts, the centre is estimated to have cost hundreds of millions of dollars.

"This facility ... can withstand a rocket direct hit, a missile direct hit or even a car bomb - and the services will keep running with customers not even knowing that something so horrible has happened," Eran Feigenbaum, Oracle's Israel manager, told Reuters.

The site, which has its own generators in case of power loss, is one of 30 such cloud centres globally. Until now, the closest to Israel was in the United Arab Emirates. Oracle also has a research and development centre in Israel.

Feigenbaum said there will be a second data centre in Israel as part of a plan to open 14 more centres by the end of 2022 to meet growing demand from Israeli technology companies and serve as a back-up to ensure data stays within Israel's borders.

Oracle has already signed up a number of customers in Israel, Feigenbaum said.

The company has said its cloud operation has been gaining momentum globally in the past year by adding video conferencing platforms Zoom and 8X8 in addition to being a security partner of the U.S. government.

For Israeli companies, having a local cloud could reduce costs because they would have the ability to rent storage instead of building their own servers or relying on other countries.

"They will not have to move to Silicon Valley or other places. They can do everything from here, with strong back-up and short distances," said Communications Minister Yoaz Hendel.

"It's good for us to keep our own information inside Israel."

The new cloud facility comes after Oracle lost out to Google (GOOGL.O) and Amazon (AMZN.O) this year in a government tender to provide cloud services for the country's public sector and military.

Reporting by Steven Scheer; Editing by Elaine Hardcastle and David Goodman



How to End-to-End Encrypt Your WhatsApp Chat Backups in iCloud – Mac Rumors

WhatsApp end-to-end encrypted backups are now rolling out for iPhone users, Facebook has announced. Until now, WhatsApp let users back up their chat history to iCloud, but the messages and media contained in the backups weren't protected by WhatsApp's end-to-end encryption while in Apple's cloud servers.

End-to-end encryption ensures only you and the person you're communicating with can read or listen to what is sent, and nobody in between, not even WhatsApp, can gain access to this content. With the advent of end-to-end encrypted backup, you can now also add the same layer of protection to your iCloud backup.

That's important from a security perspective. Given that Apple holds the encryption keys for iCloud, a subpoena of Apple or an unauthorized iCloud hack could potentially allow access to WhatsApp messages backed up there. That security vulnerability has now been resolved because you can encrypt and password-protect your chat history before uploading it to Apple's cloud-based platform.
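The underlying idea is simple: derive an encryption key from a secret only the user knows, encrypt the backup locally, and upload only ciphertext. The Python sketch below illustrates that pattern with PBKDF2 and Fernet from the `cryptography` package. It is a conceptual illustration only, not WhatsApp's actual scheme, which uses a random 64-digit key that can optionally be protected by a password via a hardware-backed key vault.

```python
# Conceptual sketch of a password-protected backup: the key never leaves the
# device, so the cloud provider only ever sees ciphertext. This is NOT
# WhatsApp's real protocol, just the general pattern it relies on.
import base64
import os
from cryptography.fernet import Fernet
from cryptography.hazmat.primitives.hashes import SHA256
from cryptography.hazmat.primitives.kdf.pbkdf2 import PBKDF2HMAC

def encrypt_backup(password: bytes, chat_history: bytes) -> tuple[bytes, bytes]:
    salt = os.urandom(16)  # stored alongside the backup; not secret
    kdf = PBKDF2HMAC(algorithm=SHA256(), length=32, salt=salt, iterations=600_000)
    key = base64.urlsafe_b64encode(kdf.derive(password))
    return salt, Fernet(key).encrypt(chat_history)  # upload salt + ciphertext only

salt, ciphertext = encrypt_backup(b"a strong passphrase", b"...chat history bytes...")
```

Because the key is derived on the device from the user's password, whoever holds the uploaded blob, Apple included, cannot decrypt it; it also means a forgotten password is unrecoverable, which is exactly the caveat below.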

The following steps show you how. Note: If you don't see the encrypted backup option, sit tight; the feature is still rolling out to WhatsApp's more than 2 billion users.

Bear in mind that you won't be able to restore your backup if you lose your WhatsApp chats and forget your password or key. WhatsApp can't reset your password or restore your backup for you.

It's also worth noting that if you have iCloud Backups turned on for your entire iPhone, an unencrypted version of your chat history is also backed up to iCloud. To ensure your WhatsApp chats and media are only backed up with end-to-end encryption, turn iCloud Backup off on your device. You can do this in the Settings app by tapping your Apple ID banner at the top, selecting iCloud, and turning off iCloud Backup.


SAP cloud hype leaves its shares in the gutter – Reuters

The SAP logo is pictured in Walldorf, Germany, May 12, 2016. REUTERS/Ralph Orlowski/File Photo

LONDON, Oct 13 (Reuters Breakingviews) - Christian Klein's strategy for the 145 billion euro software giant SAP (SAPG.DE) seems to be working, but investors aren't giving him credit. The chief executive wants so-called cloud revenue, meaning sales from IT products that are hosted remotely rather than on local servers, to hit 22 billion euros by 2025. An ad-hoc market update on Wednesday, which pushed the share price up by 4.6%, showed he's on track. Cloud sales grew by 20% as corporate clients bought more of its subscription software; that's roughly the pace at which revenue needs to increase to hit Klein's 2025 target.

But the shares are still lower than they were last October, before Klein released his five-year plan, and investors don't value SAP like a fast-growing cloud specialist. U.S. rivals Salesforce.com (CRM.N), Workday (WDAY.O) and ServiceNow (NOW.N) on average trade at 20 times 2025 revenue. Apply the same multiple to Klein's targeted 22 billion euros of cloud sales, and that division alone should be worth 447 billion euros including debt, roughly three times SAP's total enterprise value. Klein has won over customers but not yet investors. (By Liam Proud)
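The arithmetic behind that claim is worth spelling out, since a flat 20x multiple on 22 billion euros gives 440 billion, not 447; the piece presumably used an unrounded peer multiple of roughly 20.3x. A minimal sketch:

```python
# Reconstructing Breakingviews' sum-of-the-parts arithmetic. The ~20.3x
# figure is inferred from the quoted 447 billion euro result; the article
# rounds it to "20 times 2025 revenue".
cloud_sales_2025 = 22.0     # Klein's target, billions of euros
peer_multiple = 447 / 22    # implied unrounded peer multiple, ~20.3x
sap_value = 145.0           # SAP's quoted value, billions of euros

implied_cloud_ev = cloud_sales_2025 * peer_multiple
print(f"{implied_cloud_ev:.0f}bn euros, {implied_cloud_ev / sap_value:.1f}x SAP")  # 447bn, 3.1x
```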


Editing by Neil Unmack and Karen Kwok



Microsoft has crushed the worst DDoS attack its Azure servers have ever encountered – PC Gamer

An unnamed Microsoft Azure customer has recently been targeted by a profound 2.4 Tbps DDoS attack. Thankfully the cloud service was able to fend off the onslaught and, despite its intensity, the customer's site remains unaffected.

Azure caters for huge household names such as Ubisoft, eBay, Samsung, and Boeing... even the City of Taipei council relies on the cloud data service. As such, we're pretty glad to hear the attacks were unsuccessful.

The charge came in the form of a User Datagram Protocol (UDP) flood, in which attackers target random host ports with IP packets in order to overwhelm their network and force sites offline.


The intrusion, which originated from around 70,000 sources across the USA, Vietnam, and Taiwan, among other countries, lasted just 10 minutes. But each short volley took mere seconds to reach heights of 2.4 Tbps, 0.55 Tbps, and 1.7 Tbps (via The Verge).

That puts it down as the most intense barrage Azure has ever had to deal with, 140% higher than the biggest DDoS attack the company recorded in 2020.
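To put 2.4 Tbps in perspective at the packet level, here is a rough Python calculation; the average datagram size is an assumption, since Microsoft didn't disclose the packet mix.

```python
# Rough packet rate implied by the 2.4 Tbps peak, assuming an average UDP
# datagram of ~1,000 bytes on the wire (the actual mix wasn't disclosed).
PEAK_BPS = 2.4e12        # reported peak throughput, bits per second
PACKET_BYTES = 1_000     # assumed average packet size

pps = PEAK_BPS / (PACKET_BYTES * 8)
print(f"~{pps / 1e6:.0f} million packets per second")  # ~300M pps to absorb
```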

An announcement from Microsoft Azure notes: "Attacks of this size demonstrate the ability of bad actors to wreak havoc by flooding targets with gigantic traffic volumes trying to choke network capacity.

"However, Azures DDoS protection platform, built on distributed DDoS detection and mitigation pipelines, can absorb tens of terabits of DDoS attacks."

It makes a nice change to report on an attack that was sidestepped, rather than one that caused a mega-calamity. But as always, this is a frightful reminder that cybercrime is not going away, or letting up, any time soon.

It'll keep evolving, as we know: "All things change in a dynamic environment. Your effort to remain what you are is what limits you." Ghost in the Shell quotes aside, do make sure you're keeping up with your cybersecurity best practices. Stay safe out there.


Five things to watch October 15: Dell unveils CSP offerings, – Capacity Media

Saf Malik

Here are the five things you need to know this morning, October 15, 2021, from around the world.

Dell Technologies unveils new offerings for CSPs

Dell Technologies has introduced its new telecom software, solutions and services to help communications services providers (CSPs) accelerate their open, cloud-native network deployments.

Dell says its new software modernises network deployment and management, and its Bare Metal Orchestrator telecom software offers the scale to automate the deployment and management of servers across geographic locations to support O-RAN and 5G deployments.

The company adds that the software will give CSPs the tools to discover and inventory servers, bring them online and deploy software. Bare Metal Orchestrator tells the targeted server what to do so that tasks can be completed quickly and efficiently.

Dennis Hoffman, senior vice president and general manager at Dell Technologies Telecom Systems Business, said: "As server technology proliferates through increasingly open telecom networks, the industry sees an immediate and growing need for remote lifecycle management of a highly distributed compute fabric.

"Bare Metal Orchestrator gives communication services providers an easier way to deploy and manage open network infrastructure while saving costs and time, allowing them to focus on delivering new and differentiated services to their customers."

Colt and Telia among companies committed to TM Forum Pilot of Inclusion

Colt and Telia are among the companies that have committed to TM Forum's pilot of the Inclusion and Diversity Score to drive top talent to telecoms.

The TM Forum has completed the trial of the Inclusion and Diversity Score (IDS), which aims to create a single, universal index that can be used to measure both diversity and cultural inclusiveness.

Following a successful pilot with five companies, which also included Accedian, Bain & Company and Rostelecom, IDS will now move to a beta phase with a broader set of organisations, ahead of a full launch in 2022.

"As an industry, we have made very little progress in driving real change when it comes to inclusion and diversity," said Keri Gilder, CEO of Colt and chair of the TM Forum Inclusion & Diversity Council.

"We all know that we need executive support and data to drive decisions to drive change in our business.

"Although this is only the beginning of our journey, I truly believe that by having this metric in place, we can now set about actioning real change to ensure we have a healthy environment for the industry to attract and develop the talent that we require to drive the tremendous growth we are seeing and the innovation that we need in an environment for everyone to thrive."

DISH selects Spirent for 5G core automated testing

DISH has selected Spirent Communications to test its 5G network core and validate its performance.

This will allow DISH to continuously integrate functionality into its 5G network and deliver edge solutions to both retail and enterprise customers.

DISH is partnering with AWS on cloud infrastructure, and it is set to become the world's first telecom company to run its service on the public cloud. Spirent's solutions will allow DISH to realise the greater agility of its Open RAN, cloud-native 5G network, according to the company.

"DISH is transforming the industry as it prepares to deploy 5G in a public cloud network and pioneer Open Radio Access Network (Open RAN) technology," said Doug Roberts, general manager of Spirent's lifecycle service assurance business.

"Spirent understands the inherent challenges that come with building an open, secure network and supporting a new, first-of-its-kind delivery model.

"We're excited to assist DISH in driving operational excellence across the entire lifecycle with industry-leading automation, coverage and analytics capabilities."

KPN turns to Oracle to modernise operations

Oracle has revealed that it has been chosen by KPN to deploy its Oracle Fusion Applications programme for finance, supply chain management and human resources to help streamline the company's business operations.

Dutch-based KPN has over six million broadband and mobile customers in the Netherlands and has been diversifying its business towards digital services over the last few years. Oracle's integrated cloud platform will enable KPN to optimise financial planning and forecasting while modernising its HR processes.

It will also improve the employee experience and consolidate and streamline procurement and supply chain management.

"Digitalisation is transforming how we work and live, and also offers significant opportunities to recover from the pandemic and address many of the social and environmental challenges we face, said Chris Figee, CFO at KPN.

"KPN is transforming its business to support our customers in this new world, and this requires us to simplify and consolidate our operations to become more agile, more adaptable, and more flexible in what is a continually shifting environment.

"We believe Oracle can support us in this transformation."

Rogers 5G expands to 11 new markets across Quebec

Rogers Communications has announced it has extended its 5G network to 11 new cities and towns throughout Quebec.


The network expansion is part of a long-term plan to bring 5G to Quebec and will see additional deployments in 2022.

"These are critical investments to keep Quebecers, wherever they may reside, connected to award-winning wireless technology that will drive innovation and prosperity across our province," said Edith Cloutier, president of Québec, Rogers Communications.

The company also recently announced an enhanced wireless network across more than 162 Quebec communities since January 2020 and plans to improve that to a total of 360 communities by the end of the year.


Overcoming crucial barriers to cloud adoption in the telecommunications sector (Reader Forum) – RCR Wireless News

The most significant role of telecommunications companies is keeping people connected: keeping companies in touch with clients globally, keeping colleagues in sync while working from home, and keeping relatives in contact. During the COVID-19 crisis, this role has become even more evident and crucial. We've watched how telcos have facilitated remote work and learning, enhanced support for healthcare systems, aided governments at both national and local levels, and provided a solid backup for corporate customers, all to keep vital processes going.

Even though telcos are used to relying on hardware to provide their network connections, the last few years have brought many changes. With the rise of cloud-native 5G technology, an unexpected spike in data traffic due to the global pandemic, a surge in broadband services usage, and increasing customer demands, telecom companies are challenged to find ways to modernize networks. To do so, they turn to virtualized and cloud architectures.

The main reason for telecommunications providers to start adopting cloud computing is the need to save money. I don't mean only general reductions in spending, which by all means is a goal in itself, but rather cutting the expenses of various parts of the organization. On-premises systems and related license costs take a lot of money and dedication to maintain. By contrast, cloud deployments, with on-demand scalability and additional services (and without the need to maintain costly servers), are cheaper.

Additional key factors here are the flexibility, scalability and agility of cloud systems compared to on-premises systems. As we see in real-world cases, telecom systems face activity peaks, both operationally and in terms of data storage. This happens, for example, at Christmas or on any other holiday. Most of the time, activity is much lower. Thus, on-premises systems have to be configured to withstand those peaks, requiring banks of servers, at a cost of billions of dollars, that sit unused on regular days. With a cloud deployment, you pay as you go, using automatic vertical and horizontal scaling to keep expenses lower on regular days while ensuring the immediate availability of additional capacity.
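A toy model makes the peak-provisioning point concrete. The load profile and unit cost in the Python sketch below are invented for illustration, and the model deliberately ignores that cloud unit prices are usually higher than on-premises ones; it simply shows why capacity sized for a holiday peak sits mostly idle.

```python
# Toy comparison: capacity sized for the holiday peak all year round versus
# pay-as-you-go capacity that tracks demand. All numbers are illustrative.
daily_demand = [100] * 360 + [400] * 5   # capacity units needed; 5 peak days
UNIT_COST = 1.0                          # assumed cost of one unit for one day

peak_sized = max(daily_demand) * len(daily_demand) * UNIT_COST  # on-premises model
pay_as_you_go = sum(daily_demand) * UNIT_COST                   # cloud model

print(f"peak-sized: {peak_sized:.0f}  pay-as-you-go: {pay_as_you_go:.0f}")
print(f"idle share of peak-sized capacity: {1 - pay_as_you_go / peak_sized:.0%}")  # ~74%
```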

Apart from the scalability factor, the cloud offers AI/ML tools as commercial off-the-shelf (COTS) services, which tempts telcos to create cloud-native data lakes, exploring enterprise data end to end while still saving on database license costs. Obviously, every case should be considered separately, including regulatory requirements and total costs. But the synergy of cloud tools cannot be ignored.

It's fair to say that telcos are not new to cloud computing, as the first big shift to the cloud happened in 2012. Some major telecom operators, like AT&T, Orange, Telecom Italia, Deutsche Telekom and Telefonica, introduced the network functions virtualization (NFV) concept and transitioned from purely physical networking to virtual network functions (VNFs) to automate portions of their infrastructure.

Cloud-native network functions (CNFs) essentially offer a new way of providing network functionality and configuring VNFs that is more dynamic, flexible, and easily scaled. They also appear to make a better solution for a smooth transition to 5G. According to Analysys Mason, CSPs will cumulatively spend $114 billion on network cloud (which includes network functions, cloud software, hardware, and related professional services) between 2019 and 2025. The next few years are about to bring a massive shift of telcos to the cloud, which will mean greater focus on essential business services rather than on IT, server updates, and maintenance.

The first thing telcos have to do is build on their internal expertise with the help of top IT companies. Telcos can use IT companies for quick solutions or system implementation and to gain skills and increase the level of their own expertise. This is what the Ukrainian telecom services provider we worked with did, relying on our help as an expert in the field.

In this way, operators gain experience from IT companies, get a background in cloud solutions, accumulate expertise and keep all the key roles (architects, managers of similar projects, security architects) inside the company.

Cloud computing has already had a significant impact on the revenues and budgets of telco businesses. It has proven to be a more cost-efficient, flexible and agile way to store and work with data. With the help of cloud services providers, telcos can increase the popularity of their services, broaden their offering and improve overall business performance.

The final success of cloud adoption by telcos is related to several variables. First is the gradual development of internal expertise with the help of top IT companies. Here, telcos can learn from experience and use IT services as an agile on-demand resource to run their own projects.

Apart from that, it is immensely important to build close partnerships with cloud service providers, providing solid discounts for creating and implementing common strategic projects to cultivate the local market. These two strategies will ensure growth in the number of telcos shifting their workloads from on-premises data centers to public clouds.



Supermicro Expands GPU System Portfolio with Innovative New Servers to Accelerate a Wide Range of AI, HPC, and Cloud Workloads – PRNewswire

"Supermicro engineers have created another extensive portfolio of high-performance GPU-based systems that reduce costs, space, and power consumption compared to other designs in the market," said Charles Liang, president and CEO, Supermicro. "With our innovative design, we can offer customers NVIDIA HGX A100 (code name Redstone) 4-GPU accelerators for AI and HPC workloads in dense 2U form factors. Also, our 2U 2-Node system is uniquely designed to share power and cooling components which reduce OPEX and the impact on the environment."

The 2U NVIDIA HGX A100 server is based on the 3rd Gen Intel Xeon Scalable processors with Intel Deep Learning Boost technology and is optimized for analytics, training, and inference workloads. The system can deliver up to 2.5 petaflops of AI performance, with four A100 GPUs fully interconnected with NVIDIA NVLink, providing up to 320GB of GPU memory to speed breakthroughs in enterprise data science and AI. The system is up to 4x faster than the previous generation GPUs for complex conversational AI models like BERT large inference and delivers up to 3x performance boost for BERT large AI training.
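That 2.5-petaflops figure is consistent with four A100s running at their peak FP16/BF16 tensor throughput with structured sparsity, which NVIDIA's A100 datasheet lists at 624 TFLOPS per GPU; dense throughput would be half that. A quick check:

```python
# Checking the headline AI-performance number: 4 x 624 TFLOPS (the A100's
# FP16/BF16 tensor rate with structured sparsity) = ~2.5 petaflops.
# Dense FP16 tensor math would give roughly half, ~1.25 petaflops.
A100_SPARSE_TFLOPS = 624
gpus = 4
print(f"{gpus * A100_SPARSE_TFLOPS / 1000:.1f} petaflops")  # 2.5
```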

In addition, the advanced thermal and cooling designs make these systems ideal for high-performance clusters where node density and power efficiency are priorities. Liquid cooling is also available for these systems, resulting in even more OPEX savings. Intel Optane Persistent Memory (PMem) is also supported on this platform, enabling significantly larger models to be held in memory, close to the CPU, before processing on the GPUs. For applications that require multi-system interaction, the system can also be equipped with four NVIDIA ConnectX-6 200Gb/s InfiniBand cards to support GPUDirect RDMA with a 1:1 GPU-to-DPU ratio.

The new 2U 2-Node is an energy-efficient resource-saving architecture designed for each node to support up to three double-width GPUs. Each node also features a single 3rd Gen Intel Xeon Scalable processor with up to 40 cores and built-in AI and HPC acceleration. A wide range of AI, rendering, and VDI applications will benefit from this balance of CPUs and GPUs. Equipped with Supermicro's advanced I/O Module (AIOM) expansion slots for fast and flexible networking capabilities, the system can also process massive data flow for demanding AI/ML applications, deep learning training, and inferencing while securing the workload and learning models. It is also ideal for multi-instance high-end cloud gaming and many other compute-intensive VDI applications. In addition, Virtual Content Delivery Networks (vCDNs) will be able to satisfy increasing demands for streaming services. Power supply redundancy is built-in, as either node can use the adjacent node's power supply in the event of a failure.

About Super Micro Computer, Inc.

Supermicro (SMCI), the leading innovator in high-performance, high-efficiency server technology, is a premier provider of advanced Server Building Block Solutions for Enterprise Data Center, Cloud Computing, Artificial Intelligence, and Edge Computing Systems worldwide. Supermicro is committed to protecting the environment through its "We Keep IT Green" initiative and provides customers with the most energy-efficient, environmentally-friendly solutions available on the market.

