Category Archives: Cloud Servers

What does Apple's Xcode Cloud mean for the future of apps? Here's what devs say – Digital Trends

For consumers and outside observers, Apple's Worldwide Developers Conference (WWDC) is always a chance to see what lies in store when the next versions of its operating systems come to their devices. For developers, though, it is all about learning what Apple is doing under the hood. At this year's event, Apple revealed Xcode Cloud, a new feature of its Xcode development app that Apple believes will make life easier and simpler for app builders.

Folks at Apple told us they were incredibly excited for Xcode Cloud and disappointed that developers could not be on-site when it was announced at the company's online event, and a quick perusal of the Twittersphere brings up a wealth of devs giddy with expectation for the new feature.

But what exactly is Xcode Cloud, and why is Apple convinced it is such a big deal? To find out, we sat down with both engineers at Apple and the developers it's targeting to see how Xcode Cloud might impact their work, to hear out any apprehensions they might have, and to tease out what it could mean for the future of apps.

Let's start with the basics. To make apps for Apple platforms, developers use an Apple-created Mac app called Xcode. It's been around since 2003 and remains one of the most important pieces of software in Apple's catalog. Xcode Cloud is one of the biggest updates to Xcode in years, bringing new functionality that many developers previously had to leave Xcode for.

Apple positions Xcode Cloud as a tool that puts previously complex capabilities within reach of all developers. I asked Wiley Hodges, the director for Tools and Technologies at Apple, what the company was hearing from developers that led to the creation of Xcode Cloud.

"We've seen that there are tasks like distributing the apps to beta testers, like managing feedback and crash reports, that are really critical to building great apps," Hodges said. "And we've seen that more and more of our developers have been interested in continuous integration and using this automated build and automated test process to constantly verify the quality of software while it's being built."

Those are exactly the problems Xcode Cloud is meant to address.

Xcode Cloud lets developers run multiple automated tests at once and uses continuous integration (CI) so app code can be iterated and updated quickly. It also simplifies the distribution of app builds to beta testers and lets devs catch up on feedback. It can build apps in the cloud rather than on a Mac to reduce local load, and it allows for the creation of advanced workflows that automatically start and stop depending on set conditions.
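The "start and stop depending on set conditions" idea boils down to predicate logic over source-control events. Xcode Cloud configures this through Xcode and App Store Connect rather than in code, but a rough Python sketch of how such a trigger might be evaluated (all names here are illustrative, not Apple's API) looks like this:

```python
from dataclasses import dataclass

@dataclass
class ChangeEvent:
    """A hypothetical source-control event that could trigger a CI workflow."""
    branch: str
    files_changed: list

def should_start_build(event, watched_branches=("main", "release"),
                       ignored_suffixes=(".md", ".png")):
    """Start a build only for watched branches, and skip pushes that
    touch nothing but documentation or assets."""
    if event.branch not in watched_branches:
        return False
    return any(not f.endswith(ignored_suffixes) for f in event.files_changed)

# A docs-only push to main should not trigger a build:
print(should_start_build(ChangeEvent("main", ["README.md"])))        # False
print(should_start_build(ChangeEvent("main", ["App/Model.swift"])))  # True
```

Real workflow conditions (branch filters, file-change filters, scheduled runs) follow the same shape; only the configuration surface differs.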

"We wanted to bring these tools and services in the reach of all our developers, because right now it's been something that I think was more on the advanced level for developers to get this set up and running as part of their process," Hodges explained.

That sounds promising enough. But what do actual developers think?

Putting those tools front and center is something several developers told us was a key attraction of Xcode Cloud. Now that previously quite specialized capabilities have been integrated into the main tool they use to build apps, there is much less need to find third-party alternatives and add extra steps to their workflows.

Denys Telezhkin, a software engineer at ClearVPN, summed this feeling up in an interview with Digital Trends.

"I was very interested [in Xcode Cloud] as there have been a variety of problems with different CIs," he told me. "For example, Microsoft Azure is difficult to configure, GitHub Actions is expensive, and so on."

With everything integrated into Xcode Cloud, leaning on unreliable alternatives could become unnecessary. Of course, Apple will be happy to steer developers away from its rivals.

But the chief impetus, Hodges insists, was something different: "The motivation for Xcode Cloud came from our observation that while there was a group of devoted Xcode Server users, most developers still weren't implementing continuous integration. We started looking at the obstacles that prevented adoption and came to the conclusion that a cloud-hosted CI offering would be the best way to get broad adoption of CI as a practice, particularly with smaller developers for whom setting up and managing dedicated build servers was a bigger challenge."


For devs, it's about more than just CI, though. Scott Olechowski, Chief Product Officer and Co-Founder of Plex, got to try out a beta version of Xcode Cloud before Apple's WWDC announcement. He told me the potential benefits are wide-ranging.

"Seeing tools and services like Xcode Cloud integrated directly into the dev platform got us excited, since it should really help us be more efficient in our development, QA [quality assurance], and release efforts."

Part of that increased efficiency will likely come from Xcode Cloud's collaboration tools. Each team member can see project changes from their colleagues, and notifications can be sent when a code update is published. The timing is auspicious, given the way the ongoing pandemic has physically separated teams all over the globe. Yet it was also coincidental, said Hodges.

"The reality is we've been on this path for quite a while, literally years and years, and so I think the timing may be fortuitous in that regard. This is definitely a long-term project that was well underway before our unfortunate recent events."

If there is one thing Apple is great at, it's building an ecosystem of apps and products that all work together. Unsurprisingly, Xcode Cloud reflects that: it connects to TestFlight for beta testers, lets you run builds on multiple virtual Apple devices in parallel, plays nice with App Store Connect, and more. For many developers, that integration could have a strongly positive impact on their work.

Vitalii Budnik, a software engineer at MacPaw's Setapp, told me having everything in one place will mean more time spent actually coding and less time juggling multiple tools and options. For Budnik's MacPaw colleague, Bohdan Mihiliev of Gemini Photos, the app distribution process will be faster and smoother than it currently is.

Apple sees Xcode Cloud as something that can improve life for developers large and small. Alison Tracey, a lead developer on Xcode Cloud at Apple, emphasized the way Xcode Cloud levels the playing field for smaller developers as well.

"With the range of options that exist to you in the configuration experience when you're setting up your workflows, you really can support the needs of a small developer or somebody that's a small development shop or somebody that's new to continuous integration, all the way up to more of the advanced power users."

This ranges from a simple four-step onboarding process to integrating Mac apps and tools like Slack and dashboards thanks to built-in APIs.
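The article does not detail those APIs, but the general shape of a build notification is easy to picture: a small JSON payload posted to a team chat webhook. A hypothetical sketch using Slack's incoming-webhook message format (the function and workflow names are invented, and actually posting the payload to a webhook URL is omitted):

```python
import json

def build_notification(workflow: str, status: str, commit: str) -> str:
    """Format a CI result as a Slack incoming-webhook payload.
    Posting it to a (hypothetical) webhook URL is left out."""
    payload = {
        "text": f"{workflow}: build {status} at {commit[:7]}"
    }
    return json.dumps(payload)

msg = build_notification("iOS Release", "succeeded",
                         "9f8c2d1a77e4b3c5d6e7f8a9b0c1d2e3f4a5b6c7")
print(msg)  # {"text": "iOS Release: build succeeded at 9f8c2d1"}
```

Dashboards would consume the same kind of event data from the CI system's API rather than a chat webhook.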

It's not all smooth sailing, though. Apple refused to divulge pricing details for Xcode Cloud at WWDC, saying more information would not be available until the fall. Many developers I spoke to were concerned about that to one degree or another, and it seems to be putting a slight damper on the excitement a lot of devs are feeling about Xcode Cloud's potential.

Questions have also been raised about Xcode Cloud's value to developer teams that create apps for both Apple and non-Apple platforms, since Xcode can only be run on the Mac. I put this to Alex Stevenson-Price, Engineering Manager at Plex, since Plex has apps for Mac, Windows, Linux, Android, iOS, and many other systems. He told me that Plex's various apps are built by different teams using different tools, so while it is a great new string in the Apple team's bow, it will not be of much use to the non-Apple teams because they will not be using Xcode anyway.


Of course, it should not come as a surprise that Apple has limited interest in providing tools for rival ecosystems. If you want to get Xcode Cloud's benefits when building an Android app, you are out of luck, but Xcode has always been restricted (Apple might say focused) in that way. That could pose problems for developers who have the same app on both iOS and Android or any number of other platforms.

Other developers told me they will have to wait and see whether Xcode Cloud's reputed benefits play out in reality. Its use for solo developers was also questioned, partly because a number of its features are aimed at teams with multiple members.

For instance, Lukas Burgstaller, the developer behind apps like Fiery Feeds and Tidur, told me Xcode Clouds utility depends on the setting.

"While I don't think I'm going to use it for my personal projects [as] I feel like continuous integration is moderately helpful at best for a solo developer setup, I will definitely start using it in my day job as an iOS team lead, where we were planning to set up some sort of CI for over a year but never got to it."

But even if he might not use every feature, Burgstaller still described Xcode Cloud as a "finally" announcement, saying he was extremely happy Apple is adding it to Xcode.

It is still early days for Xcode Cloud. Like many of the other updates and new features announced at WWDC 2021, from iOS 15 to macOS Monterey, it is currently only available to beta testers. Despite a few concerns, and bad memories from the spotty launch of another developer tool, Mac Catalyst, a few years ago, the benefits seem to far outweigh the drawbacks, at least according to the developers I spoke to.

In fact, none of those devs said Xcode Cloud was completely without merit, suggesting there will be something for most people who work to create apps for the Apple ecosystem. Provided Apple continues to improve it as developer needs change, and as long as its pricing is not exorbitant, Apple might be onto a winner with Xcode Cloud.

As always, the proof is in the pudding, and a lot will depend on the state Xcode Cloud finds itself in at launch. For many developers, though, its fall release can't come soon enough.


Travelling to the cloud – ITWeb

When lockdown began, all industries suffered, but some were impacted more than others. Much like travel and tourism, the transport industry in general, and cross-country passenger bus services in particular, struggled to survive. Unable to operate at all for a period, these businesses found that their customers, and the need to keep them happy, became even more important: they certainly wanted their clients to return in droves once the lockdown ended.

The forward-thinking ones have been investing in customer-facing applications as well as internal IT workloads, such as their core platform for information management, online ticketing, storage, analytics and other services.

Stone He, President, Huawei Cloud (Southern Africa), points out it is no surprise to discover that these companies are determined to continuously improve their end-user experience and provide travellers with more real-time information.

"A critical foundation for doing so is cloud computing, which is why an increasing number were already shifting to the cloud even before the pandemic struck. During lockdown, cloud proved critical in making sure their business and production systems could continue to run without any impact," he says.

These organisations prioritised the development and consolidation of hybrid workloads, not only for the customer-facing systems, but also for their internal IT systems. Many are now searching for more agile, resilient and cost-effective cloud services to replace their current service providers or eliminate the need to host their own data centres.

"Another key advantage of the cloud is that it helps them overcome scalability challenges related to the processing and analysis of their data," he adds. For businesses that deal with multiple national routes, thousands of passengers and any number of destinations, the sheer volume of data that needs to be processed is enormous.

This is why these entities need a world-class cloud provider that offers local hosting and more cost-efficient solutions to support their businesses.

Some key challenges a move to a locally hosted, international cloud provider can help to solve, adds He, include: significant cost reductions; access to local support services and a dedicated local cloud team; and a reduction in operational complexity.

It goes without saying that any such move would require a seamless migration, since these businesses, like those in most other sectors, simply cannot afford to have their production workloads go down during a migration.

Asked what sort of benefits such businesses would glean from a move to a locally sited cloud data centre, He suggests it would enable access to real-time travel status updates and the opportunity to suggest alternative routes if things do go awry. Most vitally, it would allow head office to easily communicate with staff on the ground, who would have a full view of the situation. Furthermore, they can leverage the cloud to provide greater safety, thanks to real-time vehicle diagnostics, not to mention keeping track of passenger counts and, ultimately, being able to effectively deploy resources when responding to the needs of the business.

"The days of maintaining and replacing on-premises servers at Intercape are over and the decision to move our production servers to cloud was made in March 2020," says Karl Rosch, IT Manager at Intercape. He continued: "After investigating various platforms, the decision was made to move to Huawei Cloud, and the cost and reliability of Huawei Cloud made it a very attractive offering."

Rosch also found the transition to Huawei Cloud seamless, with local support from a Huawei engineer who assisted with the set-up and migration project. "After more than a year on the Huawei platform, the uptime has been excellent and support from the Huawei team has been outstanding," said Rosch.

The transnational transport sector is already being impacted by new disruptors like Uber (albeit not yet direct competition), but these companies still need to offer the flexibility that customers exposed to ride-sharing apps have come to expect. This means a move away from the rigid approaches to timetables and scheduling of the past. "The flexibility and scalability of the cloud will be a huge benefit with regard to how they manage their operations and approach their customers moving forward," explains He. "What's more, only the cloud can provide a genuine foundation to ensure easy adoption of future technology advances, particularly around machine learning and the Internet of Things. Thanks to the cloud, these digital technologies can be quickly deployed, enabling these organisations to not only keep their future innovations on track, but also any potential disruptors at bay," He concluded.


Intel: I'm already the biggest DPU shipper – Blocks and Files

Big beast Intel has come crashing out of the semiconductor jungle into the Data Processing Unit gang's watering hole, saying it's the biggest DPU shipper of all, and it got there first anyway.

Data Processing Units (DPUs) are programmable processing units dedicated to running data centre infrastructure-specific operations such as security, storage and networking, offloading them from existing servers so those servers can run more applications. DPUs are built from specific CPU chips, FPGAs and/or ASICs by suppliers such as Fungible, Nvidia, Pensando, and Nebulon. It's a fast-developing field, and device types include in-situ processors, SmartNICs, server offload cards, storage processors, composability hubs and components.

Navin Shenoy, Intel EVP and GM of its Data Platforms Group, said at the Six Five Summit that Intel has already developed and sold what it calls Infrastructure Processing Units (IPUs), which is what everyone else calls DPUs. Intel designed them to enable hyperscale customers to reduce server CPU overhead and free up cycles for applications to run faster.

Guido Appenzeller, Data Platforms Group CTO at Intel, said in a statement: "The IPU is a new category of technologies and is one of the strategic pillars of our cloud strategy. It expands upon our SmartNIC capabilities and is designed to address the complexity and inefficiencies in the modern data centre."

An IPU, he said, enables customers to balance processing and storage, and Intel's system has dedicated functionality to accelerate applications built using a microservice-based architecture. That's because Intel says inter-microservice communications can take up from 22 to 80 per cent of a host server CPU's cycles.

Intel's IPU can:

Patty Kummrow, Intel VP in the Data Platforms Group and GM of the Ethernet Products Group, offered this thought: "As a result of Intel's collaboration with a majority of hyperscalers, Intel is already the volume leader in the IPU market with our Xeon-D, FPGA and Ethernet components. The first of Intel's FPGA-based IPU platforms are deployed at multiple cloud service providers and our first ASIC IPU is under test."

The Xeon-D is a system-on-chip (SoC) microserver Xeon CPU, not a full-scale server Xeon CPU. Fungible and Pensando have developed specific DPU processor designs instead of relying on FPGAs or ASICs.

There are no DPU benchmarks, so comparing performance between different suppliers will be difficult.

Intel says it will produce additional FPGA-based IPU platforms and dedicated ASICs in the future. They will be accompanied by a software foundation to help customers develop cloud orchestration software.


Using Zero Trust Security to Protect Applications and Databases – Server Watch

Applications and databases play vital roles for organizations hosting services and for consumers accessing data resources, and protecting them is a top priority for any data center.

Connected to an internet full of hackers, billions of devices, and malware, networks are vulnerable to an array of web-based threats. Not long ago, the priority for network security was securing the network perimeter, but forces like remote work and the widespread adoption of cloud and edge computing make defending the perimeter increasingly tricky.

It's no longer a question of if malicious actors can gain access; it's whether they're able to move laterally within the network when they do. As zero trust has evolved from buzzword to product in the last decade, a consensus has emerged that a microsegmentation-based framework is the surest defense against the next generation of threats. To preserve server security, zero trust ensures intruders will never reach an organization's crown jewels.

Here we look at why zero trust is a significant boost to application and database security and how to adopt a zero trust architecture.

Downtime, machine failure, and cyberattacks can be devastating to organizations. When data is offline or unavailable, personnel and customers alike aren't pleased. Knowing this, administrators secure the network with a suite of software and security tools to keep the network running and data available. For the data center, power redundancy and backup and disaster recovery solutions are essential protections.

Another crucial example of a network tool is the traditional firewall, placed at the network edge to prevent intruders and malicious packets from gaining entry. While the perimeter has long been a cybersecurity priority, security policies inside the network, and for traffic between network segments, have changed little. As the years have transformed network perimeters, accessing a network gateway has never been easier.


A malicious actor with initial access can move laterally through the network, escalate privileges, and compromise sensitive data. Several attacks this year, including the SolarWinds Orion breach, showed how skilled advanced persistent threats (APTs) can mask their activity while spreading malware across network systems.

In reversing the paradigm of designing devices to inherently trust other devices [allow all], zero trust calls for granular controls between network segments and, eventually, a day where only pre-categorized traffic is permissible [deny all]. Because organizations from SMBs up to large enterprises require extensive data- and application-sharing capabilities, the network architect's objective isn't to disrupt business-critical access; instead, it is to ensure abnormal traffic gets identified and managed.

By following the steps provided, network stakeholders can ensure that the organizations most important assets are secure, maximize visibility into network traffic, and adjust control policies to maintain regular business.

Today's network perimeter is rarely still. From the rise of remote work to the boom in endpoint devices in use, protecting an organization's attack surface is no longer entirely possible.

Network administrators need to take a bird's-eye view of their network and define where the most critical data and resources reside. Every organization has network segments, dubbed the protect surface, that are vital to business continuity and likely deserve more substantial security than other segments. Applications with client data, operational technology (OT) that controls industrial processes, and Active Directory come to mind.

With protect surfaces identified, the process of defining users and privileges begins. Who is accessing what resources? Does a user with initial access have access to the whole network segment or just a fraction of the data resources within an application?


Applications and databases are responsible for storing and transmitting critical data across global networks. When resources move from defined protect surfaces, the flow, destination, device, time, location, user and role are all data points administrators need to inform next steps.

Analyzing how data moves will produce a picture of how malicious actors could access your most important data and system controls. Equipped with valuable insight into traffic flows and vulnerabilities, administrators can start to test their findings.

At the heart of zero trust in practice is microsegmentation, the act of segmenting network components to ensure appropriate access levels for the relevant data resources.

The network fabric makes enforcing access between segments in your infrastructure seamless for data centers and software-defined data centers (SDDCs). By contrast, network fabrics aren't ideal for microsegmentation in cloud environments. Fit for an SDDC environment, a virtual machine manager, also known as a hypervisor, can serve as an enforcement point for comprehensive network management.

And last but not least, next-generation firewalls (NGFW) are a popular choice for implementing microsegmentation because of their flexibility in deployment. Across environments, NGFWs can form a distributed internal layer of security throughout the network.


No matter the microsegmentation route, administrators now can establish granular policy rules based on their prior findings. Essential information for establishing valid policies includes clearly defining:

With the organization's network mapped out, and all packets, users, privileges, and protect surfaces defined, it's time to configure policies to reflect an optimized security approach. Policies can be applied one application at a time, or en masse once the approach is found successful. Administrators can then test flipping the trust switch for the first time. From allowing all traffic to denying all traffic except what's prescribed, the network has taken a giant leap.
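That "deny all except what's prescribed" posture reduces to a default-deny rule match. A toy sketch of the idea (segment names, ports, and rules are invented for illustration):

```python
# Deny-by-default policy check: traffic passes only if some rule
# explicitly allows the (source segment, destination segment, port) tuple.
ALLOW_RULES = [
    # (source segment, destination segment, destination port)
    ("web-tier", "app-tier", 8443),
    ("app-tier", "db-tier", 5432),
]

def is_permitted(src: str, dst: str, port: int) -> bool:
    """Anything not explicitly allowed is denied."""
    return (src, dst, port) in ALLOW_RULES

print(is_permitted("app-tier", "db-tier", 5432))  # True: a prescribed flow
print(is_permitted("web-tier", "db-tier", 5432))  # False: lateral hop blocked
```

A real NGFW or hypervisor enforcement point matches on far richer fields (user, device, application identity), but the allow-list logic is the same.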

Flipping the trust switch comes with its share of hiccups. As key personnel and clients begin using the network in its zero trust infrastructure, the IT department is sure to see a rise in technical support requests. Every request for greater access informs network and database administrators on adjusting controls to reflect the living organization's security framework. Monitor these requests and continue to track how sensitive data moves to optimize changes to policies.


There are no one-size-fits-all zero trust solutions. While vendors offer support, insight, and experience in implementing zero trust, a zero trust framework is custom to the organization and network it serves. With that in mind, the process for implementation described above isnt concrete. Organizations with initiative can take steps today to start the process of building a zero trust network architecture.

Zero trust covers the gamut of the OSI model to protect the organization's digital infrastructure. Implementing zero trust from network to application layers, databases, and software programs gives stakeholders the visibility to feel confident about the organization's security posture.

While an intimidating endeavor, moving towards zero trust is a process worth initiating to organize and secure your organization's data resources for years to come.

While databases and applications have long been mainstream components of the enterprise network, security services for protecting them are still a complex marketplace. To learn more about the industry, check out eSecurity Planet's Top Database Security Solutions for 2021.



Unsecured servers and cloud services: How remote work has increased the attack surface that hackers can target – ZDNet

The increase in the use of cloud services as a result of organisations and their employees shifting to remote work because of the COVID-19 pandemic is leaving corporate networks exposed to cyberattacks.

Many businesses had to swiftly introduce working from home at the start of the pandemic, with employees becoming reliant on cloud services including Remote Desktop Protocol (RDP), Virtual Private Networks (VPNs) and application suites like Microsoft Office 365 or Google Workspace.

While this allowed employees to continue doing their jobs outside the traditional corporate network, it has also increased the potential attack surface for cyber criminals. Malicious hackers are able to exploit the reduced level of monitoring activity, while successfully compromising credentials used to remotely log in to cloud services provides a stealthy route into corporate environments.


Cybersecurity researchers at security company Zscaler analysed the networks of 1,500 companies and found hundreds of thousands of vulnerabilities in the form of 392,298 exposed servers, 214,230 exposed ports and 60,572 exposed cloud instances, all of which can be discovered on the internet. It claimed the biggest companies have an average of 468 servers exposed, while large companies have 209 at risk.

The researchers defined 'exposed' as something that anyone can connect to if they discover it, including remote and cloud services. Organisations are likely to be unaware that these services are exposed to the internet in the first place.
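In that sense, 'exposed' simply means a TCP connection can be completed by anyone who finds the service. A minimal Python check along those lines (demonstrated against a throwaway local listener rather than a real server):

```python
import socket

def is_exposed(host: str, port: int, timeout: float = 1.0) -> bool:
    """'Exposed' in the sense above: anyone who discovers the service
    can complete a TCP connection to it."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Demonstrate against a throwaway local listener (port chosen by the OS):
listener = socket.socket()
listener.bind(("127.0.0.1", 0))
listener.listen(1)
open_port = listener.getsockname()[1]

print(is_exposed("127.0.0.1", open_port))  # True: the service is reachable
listener.close()
print(is_exposed("127.0.0.1", open_port))  # False once it is gone
```

Attack-surface scanners apply the same test across entire address ranges and port lists, which is why unintentionally internet-facing services are so easy to find.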

In addition to this, researchers discovered unpatched systems with 202,000 Common Vulnerabilities and Exposures (CVEs), an average of 135 per organisation, with almost half classified as 'Critical' or 'High' severity.

It's possible that cyber criminals will be able to discover and exploit these vulnerabilities in order to enter corporate networks and lay the foundations for cyberattacks including data theft, ransomware and other malware campaigns.

"The sheer amount of information that is being shared today is concerning because it is all essentially an attack surface. Anything that can be accessed can be exploited by unauthorised or malicious users, creating new risks for businesses that don't have complete awareness and control of their network exposure," said Nathan Howe, vice president for emerging technology at Zscaler.

While an increased attack surface can impact organisations of all sizes, international and large employers are the most at risk, due to their number of employees and a distributed workforce.

A global workforce may also make it more difficult to detect anomalous activity because the company is used to employees accessing the network from around the world, so a malicious intruder may not be immediately obvious.

But it's possible to take steps to reduce the attack surface and the potential risk to the organisation as a result. Zscaler recommends three steps for minimising corporate network risk.


The first is to know your network: by being aware of what applications and services are in use, it's easier to mitigate risk. The second is to know your potential vulnerabilities: researchers recommend that information security teams stay informed about the latest vulnerabilities and the patches that can be applied to counter them.

The third thing organisations should do is adopt practices that minimise risk and act as a deterrent to cyber criminals. For example, secure login credentials for cloud services with multi-factor authentication, so in the event of a username and password being breached, it isn't as simple for criminals to actually access accounts and services.
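The one-time codes behind most multi-factor authentication are standardized: TOTP (RFC 6238) is just HOTP (RFC 4226), a truncated HMAC-SHA1, computed over a 30-second time counter. A minimal sketch:

```python
import hmac, hashlib, struct, time

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    """RFC 4226 HMAC-based one-time password."""
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F  # dynamic truncation: low nibble picks the offset
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def totp(secret: bytes, step: int = 30) -> str:
    """RFC 6238: HOTP over the current 30-second time window."""
    return hotp(secret, int(time.time()) // step)

# RFC 4226 test vectors for the shared secret "12345678901234567890":
print(hotp(b"12345678901234567890", 0))  # 755224
print(hotp(b"12345678901234567890", 1))  # 287082
```

Because the server and the authenticator app share only the secret and the clock, a stolen password alone is not enough to log in.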

"By understanding their individual attack surfaces and deploying appropriate security measures, including zero trust architecture, companies can better protect their application infrastructure from recurring vulnerabilities that allow attackers to steal data, sabotage systems, or hold networks hostage for ransom," said Howe.


Google Announces AMD Milan-based Cloud Instances – Out with SMT vCPUs? – AnandTech

Today, Google announced the planned introduction of its new set of Tau VMs, or T2D, in its Google Compute Engine VM offerings. The hardware consists of AMD's new Milan processors, which is a welcome addition to Google's offerings.

The biggest news of today's announcement, however, was not Milan, but what Google is doing in terms of vCPUs, how this impacts performance, and the consequences it has in the cloud provider space, particularly in the context of the new Arm server CPU competition.

The most important data point Google is presenting today is that the new GCP Tau VMs showcase a staggering performance advantage over the competitor offerings from AWS and Azure. The comparison VM details are published here:

Google's SPECrate2017_int methodology largely mimics our own internal usage of the test suite in terms of flags (a few differences like LTO and allocator linkage), but the most important figure comes down to the disclosure of the compilers, with Google stating that the +56% performance advantage over AWS's Graviton2 comes from an AOCC run. They further disclose a GCC run achieving a +25% performance advantage, which clarifies some aspects:

"Note that we also tested with GCC using -O3, but we saw better performance with -Ofast on all machines tested. An interesting note is that while we saw a 56% estimated SPECrate2017_int_base performance uplift on the t2d-standard-32 over the m6g.8xlarge when we used AMD's optimizing compiler, which could take advantage of the AMD architecture, we also saw a 25% performance uplift on the t2d-standard-32 over the m6g.8xlarge when using GCC 11.1 with the above flags for both machines."

With this 25% figure in mind, we can fall back on our own internally tested data for the Graviton2, as well as the more recently tested AMD Milan flagship, for a rough positioning of where things stand:

Google doesn't disclose any details of which SKU it is testing; however, we do have 64-core and 32-core vCPU data on the Graviton2, with estimated scores of 169.9 and 97.8 and per-thread scores of 2.65 and 2.16, respectively. Our internal numbers for an AMD EPYC 7763 (64-core, 280W) CPU show an estimated score of 255 (1.99 per thread) with SMT, and 219 (3.43 per thread) without SMT, for 128-thread and 64-thread runs per socket, respectively. Scaling the scores down to a thread count of 32, matching what Google states as the vCPU count for the T2D instance, would get us to scores of either 63.8 with SMT or 109.8 without SMT. The SMT run with 32 threads would notably underperform the Graviton2, whereas the non-SMT run would deliver +12% higher performance. We estimate that the actual scores in a 32-vCPU environment, with less load on the rest of the SoC, would be notably higher, and this would roughly match up with the company's quoted +25% performance advantage.
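The scaling above is simple proportional arithmetic; here is a sketch of that back-of-the-envelope method using the estimated scores quoted in this article (this is our rough scaling, not Google's actual benchmark setup):

```python
# Rough reproduction of the article's scaling arithmetic: take full-socket
# SPECrate2017_int estimates for the EPYC 7763 and scale them down linearly
# to a 32-thread (32-vCPU) slice. Linear scaling understates real 32-vCPU
# scores, since a smaller slice puts less load on the rest of the SoC.

smt_rate, smt_threads = 255.0, 128      # full-socket run with SMT (128 threads)
nosmt_rate, nosmt_threads = 219.0, 64   # full-socket run without SMT (64 threads)
vcpus = 32                              # T2D instance size under comparison

scaled_smt = smt_rate * vcpus / smt_threads        # 16 cores + SMT
scaled_nosmt = nosmt_rate * vcpus / nosmt_threads  # 32 physical cores

graviton2_32vcpu = 97.8  # our estimated score for a 32-vCPU Graviton2 instance

print(f"SMT slice:     {scaled_smt:.1f}")     # ~63.8
print(f"non-SMT slice: {scaled_nosmt:.1f}")   # ~109.5
print(f"vs Graviton2:  {(scaled_nosmt / graviton2_32vcpu - 1) * 100:+.0f}%")  # ~+12%
```

The non-SMT slice comfortably clears the Graviton2's 97.8, while the SMT slice falls well behind it, which is the crux of the vCPU argument that follows.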

And here lies the big surprise of today's announcement: for Google's new Milan performance figures to make sense, the company must be using instances with vCPU counts that actually match the physical core count, which has large implications for benchmarking and performance comparisons between instances of an equal vCPU count.

Notably, because Google is focusing on the Graviton2 comparison at AWS, I see this as a direct response to Amazon's and Arm's cloud performance claims regarding VMs with a given number of vCPUs. Indeed, when we reviewed the Graviton2 last year, we made note of this discrepancy: in x86 cloud offerings with SMT, a vCPU essentially just means you're getting a logical core instead of a physical core, in contrast to the newer Arm-based Graviton2 instances. In effect, we had been benchmarking Arm CPUs with double the core counts versus the x86 incumbents at the same instance sizes. This is still what Google is doing today when comparing a 32-vCPU Milan Tau VM against a 32-vCPU Azure Cascade Lake VM: it's a 32-core versus 16-core comparison, just with SMT enabled on the latter.

Because Google is now essentially levelling the playing field against the Arm-based Graviton2 VM instances at equal vCPU count, by actually offering the same number of physical cores, it has no trouble competing in terms of performance with the Arm competitor, and it naturally also outperforms other cloud provider options where a vCPU is still only a logical SMT CPU.

Google is offering a 32-vCPU T2D instance with 128GB of RAM at USD 1.35 per hour, compared to a comparable AWS m6g.8xlarge instance, also with 32 vCPUs and 128GB of RAM, at USD 1.23 per hour. While Google's use of AOCC to reach the higher performance figures (compared to our GCC numbers) plays some role, and Milan's performance is great, it's really the fact that we now seem to be comparing physical cores to physical cores that makes the new Tau VM instances special compared to the AWS and Azure offerings (physical to logical in the latter case).
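The value argument comes down to per-core arithmetic. The sketch below uses the list prices quoted above and assumes, per this article's inference, that every vCPU on both instances maps to a physical core:

```python
# Price per physical core-hour for the two instances quoted in the article.
# Assumption (the article's inference): both map 1 vCPU to 1 physical core --
# T2D appears to expose physical Milan cores, and Graviton2 has no SMT at all.
# A hypothetical SMT-based 32-vCPU instance would expose only 16 physical
# cores, halving the cores you get per dollar.

instances = {
    "GCP t2d-standard-32": (1.35, 32),  # (USD per hour, physical cores)
    "AWS m6g.8xlarge":     (1.23, 32),
}

for name, (price, cores) in instances.items():
    print(f"{name}: ${price / cores:.4f} per physical core-hour")
# GCP t2d-standard-32: $0.0422 per physical core-hour
# AWS m6g.8xlarge: $0.0384 per physical core-hour
```

On this reading, the T2D premium of roughly $0.004 per core-hour buys the Milan performance advantage discussed above.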

In general, I applaud Google for the initiative here, as being offered only part of a core as a vCPU until now was a complete rip-off. In a sense, we also have to thank the new Arm competition for finally pushing the ecosystem toward what appears to be the beginning of the end of such questionable vCPU practices and VM offerings. It also wouldn't have been possible without AMD's new high-core-count CPU offerings. It will be interesting to see how AWS and Azure respond, as I feel Google is upending the cloud market in terms of pricing and value.

Go here to read the rest:
Google Announces AMD Milan-based Cloud Instances - Out with SMT vCPUs? - AnandTech

With new servers to the hybrid cloud – BioPrepWatch

Cisco recently introduced its new UCS-X series servers. These have a new architecture that combines blade and rack servers, as well as management software to integrate hybrid cloud environments.

Cisco has expanded the Unified Computing System (UCS) with a new class of servers that should be more flexible, equipped with management software designed for hybrid clouds.

According to the network specialist, the UCS-X series is the largest redesign since the launch of UCS in 2009. In essence, the UCS-X can now combine blade and rack servers in the same chassis. Older UCS chassis were either blade systems for energy efficiency or rack systems for expandability.

UCS-X servers must also be able to handle a wide range of tasks, from virtual workloads, traditional company applications, and databases, to private cloud applications. In terms of network technology, individual modules are connected to form a fabric that can support IP networks, Fibre Channel SANs (Storage Area Networks) and communication for administrative purposes.

UCS-X should also be able to integrate third-party devices, including volumes from NetApp, Pure Storage, and Hitachi.

The X series isn't just about hardware. It comes with a suite of new software, including Cisco Intersight Cloud Orchestrator, which can be used to simplify complex workflows. Additionally, the Intersight Cloud Orchestrator workflow designer can be used to create and automate workflows using a drag-and-drop interface. Furthermore, the Intersight Workload Engine provides a level of abstraction on Cisco devices with which to implement virtualized, container-based workloads running directly on the server.

Finally, Cisco introduced the Service Network Manager. This is an extension of the Intersight Kubernetes service, in which Kubernetes containers can be installed and managed in hybrid cloud environments.

"Thanks to today's announcements, Cisco makes it possible to operate and manage highly complex hybrid IT environments more easily. Companies can now implement their cloud strategy more easily, no matter where they are located and no matter which provider they want," says Christoph Koch, chief technology officer at Cisco Switzerland.

View post:
With new servers to the hybrid cloud - BioPrepWatch

Insights on the Cloud Security Software Global Market to 2026 – by Type, Deployment, End-user, Vertical and Region – PRNewswire

DUBLIN, June 14, 2021 /PRNewswire/ -- The "Cloud Security Software Market: Global Industry Trends, Share, Size, Growth, Opportunity and Forecast 2021-2026" report has been added to ResearchAndMarkets.com's offering.

The global cloud security software market exhibited strong growth during 2015-2020. Cloud security software, also known as cloud computing security software, is employed for executing specific tasks to protect the cloud-based system, data and infrastructure. Companies nowadays transfer most of their data, applications and networks on cloud servers, which are highly distributed, dynamic and more susceptible to unauthorized access, data exposure, cyberattacks and other threats. Cloud security software provides multiple levels of control in network infrastructure to protect the privacy of the users, support regulatory compliance and establish authentication rules for individual users and devices. As a result, both government and private organizations utilize cloud storage and security software as they eliminate the need to invest in dedicated hardware and reduce administrative overheads. Looking forward, the publisher expects the global cloud security software market to grow at a CAGR of around 15% during 2021-2026.
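For scale, a ~15% CAGR compounds quickly; the snippet below shows the cumulative growth implied over the five-year 2021-2026 forecast window (illustrative arithmetic only, since no absolute market size is quoted here):

```python
# Cumulative growth implied by a ~15% CAGR over the 2021-2026 forecast window.
cagr = 0.15
years = 5  # 2021 -> 2026

factor = (1 + cagr) ** years
print(f"Cumulative growth factor: {factor:.2f}x")  # ~2.01x: the market roughly doubles
```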

Frequent cyberattacks and breaches have led to an increase in concerns regarding the security of information and data. Additionally, due to the dependence of organizations on cloud-based services for operations and data management, there has been a rise in the adoption of cloud security software to safeguard the integrity and continuity of resources at different levels. Apart from this, with the continuous development of innovative technology solutions using artificial intelligence and machine learning, the functionality of security software has improved significantly. Different services such as Software as a Service (SaaS), Platform as a Service (PaaS), Infrastructure as a Service (IaaS) and private cloud have been introduced by market players for managing the security of an organization's applications and networks. For instance, IBM Cloud provides a core set of network segmentation and network security services to protect workloads from network threats. Moreover, various researchers are focusing on software testing as 'Testing as a Service' (TaaS) in the cloud computing paradigm, using a variety of new technologies and innovative service models with multiple features that differ from traditional software testing.

This report provides a deep insight into the global cloud security software market covering all its essential aspects. This ranges from macro overview of the market to micro details of the industry performance, recent trends, key market drivers and challenges, SWOT analysis, Porter's five forces analysis, value chain analysis, etc. This report is a must-read for entrepreneurs, investors, researchers, consultants, business strategists, and all those who have any kind of stake or are planning to foray into the cloud security software market in any manner.

Competitive Landscape:

The report has also analysed the competitive landscape of the market with some of the key players being Broadcom, Inc., TrendMicro, IBM Corporation, Cisco Systems, RSA Security, McAfee, Microsoft Corporation, Dell Corporation, Hewlett Packard Enterprise, BMC Software, Bitium, CipherCloud, Cloudpassage, Check Point Software Technologies, Fortinet, VMware, Sophos, Gemalto NV, Imperva, Inc, etc.

Key Questions Answered in This Report:

Key Topics Covered:

1 Preface

2 Scope and Methodology

3 Executive Summary

4 Introduction
4.1 Overview
4.2 Key Industry Trends

5 Global Cloud Security Software Market
5.1 Market Overview
5.2 Market Performance
5.3 Impact of COVID-19
5.4 Market Breakup by Type
5.5 Market Breakup by Deployment
5.6 Market Breakup by End-User
5.7 Market Breakup by Vertical
5.8 Market Breakup by Region
5.9 Market Forecast

6 Market Breakup by Type
6.1 Cloud Identity and Access Management
6.1.1 Market Trends
6.1.2 Market Forecast
6.2 Data Loss Prevention
6.2.1 Market Trends
6.2.2 Market Forecast
6.3 Email and Web Security
6.3.1 Market Trends
6.3.2 Market Forecast
6.4 Cloud Database Security
6.4.1 Market Trends
6.4.2 Market Forecast
6.5 Network Security
6.5.1 Market Trends
6.5.2 Market Forecast
6.6 Cloud Encryption
6.6.1 Market Trends
6.6.2 Market Forecast

7 Market Breakup by Deployment
7.1 Public
7.1.1 Market Trends
7.1.2 Market Forecast
7.2 Private
7.2.1 Market Trends
7.2.2 Market Forecast
7.3 Hybrid
7.3.1 Market Trends
7.3.2 Market Forecast

8 Market Breakup by End-User
8.1 Small and Midsize Businesses (SMBs)
8.1.1 Market Trends
8.1.2 Market Forecast
8.2 Large Enterprises
8.2.1 Market Trends
8.2.2 Market Forecast
8.3 Cloud Service Providers
8.3.1 Market Trends
8.3.2 Market Forecast
8.4 Government Agencies
8.4.1 Market Trends
8.4.2 Market Forecast
8.5 Others/Third Party Vendors
8.5.1 Market Trends
8.5.2 Market Forecast

9 Market Breakup by Vertical
9.1 Healthcare
9.1.1 Market Trends
9.1.2 Market Forecast
9.2 Banking, Financial Services and Insurance (BFSI)
9.2.1 Market Trends
9.2.2 Market Forecast
9.3 Information Technology (IT) & Telecom
9.3.1 Market Trends
9.3.2 Market Forecast
9.4 Government Agencies
9.4.1 Market Trends
9.4.2 Market Forecast
9.5 Retail
9.5.1 Market Trends
9.5.2 Market Forecast
9.6 Others
9.6.1 Market Trends
9.6.2 Market Forecast

10 Market Breakup by Region
10.1 North America
10.1.1 Market Trends
10.1.2 Market Forecast
10.2 Europe
10.2.1 Market Trends
10.2.2 Market Forecast
10.3 Asia Pacific
10.3.1 Market Trends
10.3.2 Market Forecast
10.4 Middle East and Africa
10.4.1 Market Trends
10.4.2 Market Forecast
10.5 Latin America
10.5.1 Market Trends
10.5.2 Market Forecast

11 SWOT Analysis

12 Value Chain Analysis

13 Porter's Five Forces Analysis

14 Price Analysis

15 Competitive Landscape
15.1 Market Structure
15.2 Key Players
15.3 Profiles of Key Players
15.3.1 Broadcom, Inc.
15.3.2 TrendMicro
15.3.3 IBM Corporation
15.3.4 Cisco Systems
15.3.5 RSA Security LLC
15.3.6 McAfee
15.3.7 Microsoft Corporation
15.3.8 Dell Corporation
15.3.9 Hewlett Packard Enterprise
15.3.10 BMC Software
15.3.11 Bitium
15.3.12 CipherCloud
15.3.13 Cloudpassage
15.3.14 Check Point Software Technologies
15.3.15 Fortinet
15.3.16 VMware
15.3.17 Sophos
15.3.18 Gemalto NV
15.3.19 Imperva Inc.

For more information about this report visit https://www.researchandmarkets.com/r/z2tiyg

Media Contact:

Research and Markets
Laura Wood, Senior Manager
[emailprotected]

For E.S.T Office Hours Call +1-917-300-0470
For U.S./CAN Toll Free Call +1-800-526-8630
For GMT Office Hours Call +353-1-416-8900

U.S. Fax: 646-607-1904 Fax (outside U.S.): +353-1-481-1716

SOURCE Research and Markets

http://www.researchandmarkets.com

Here is the original post:
Insights on the Cloud Security Software Global Market to 2026 - by Type, Deployment, End-user, Vertical and Region - PRNewswire

Western Digital takes it to the edge with its rugged Ultrastar Edge servers – ITP.net

Western Digital launched its new high-performance Ultrastar Edge server family that brings compute closer to where data is generated for faster processing, lower latency and real-time decision making, even when disconnected.

With the growing adoption of 5G, IoT and the cloud, businesses and consumers expect super-fast performance with their applications. This is creating demand for new, distributed intelligent architectures outside of core data centers to help ingest, analyse and transform data at the edge. In addition, organisations are running applications in extremely remote locations, such as deserts, seas or jungles, and are driving the need for ruggedised compute and storage where networks can be expensive, intermittent or nonexistent.

Designed for cloud service providers, telcos and system integrators, Ultrastar Edge servers are easy to transport, deploy and scale in the field, at colocation (colo) facilities, in a factory, or in remote data centers. The new family includes the Ultrastar Edge-MR, an extremely rugged, stackable and transportable server for military and specialised field teams working in harsh remote environments, and the Ultrastar Edge, a transportable 2U rack-mountable server with a portable case for colos and edge data centers. Both solutions are now sampling and orderable from Q4 2021.

"As a storage technology leader, we're constantly looking ahead and anticipating how we'll continue to serve our customers' needs," said Kurt Chan, vice president, data center platforms at Western Digital.

"The growth in data creation at the edge, the opportunities to extract value from that data, and the total available markets and customers innovating and doing work at the edge give us a great opportunity for our new Ultrastar Edge server family."

The Ultrastar Edge-MR is an extremely rugged, stackable solution that is designed and tested in accordance with MIL-STD-810G-CHG-1 standards for limits of shock and vibration, and to the MIL-STD-461G standard for electromagnetic interference. The unit is also rated IP32 to provide ingress protection against water and debris. Whether conducting a military operation, doing research in the Amazon, or analyzing data during oil and gas explorations, the Ultrastar Edge-MR can handle extremes.

Both Ultrastar Edge solutions also feature Trusted Platform Module 2.0 and a tamper-evident enclosure, and are built to meet the FIPS 140-2 Level 2 security standard to help store, secure, transfer and disseminate sensitive data.

"From healthcare to intelligence missions, it's critical for the federal government and its agencies to gather information quickly, and deliver insight when and where it's needed, including at the network's edge," said Jeff Johnson, co-founder, Aeon Computing.

"We're thrilled to have the new Ultrastar Edge-MR server in our arsenal, as meeting these stringent military specs is no small feat. We can now deliver a secure, rugged solution that brings the power of the cloud to virtually any edge or tactical environment around the world."

The core of each Ultrastar Edge solution is a durable, high-speed server that supports up to 40 cores with two 2nd Gen Intel Xeon Scalable processors, an NVIDIA T4 GPU and eight Ultrastar NVMe SSDs providing up to 61TB of storage. This unique combination delivers blazing speed and capacity for real-time analytics, AI, deep learning, ML training and inference, and video transcoding at the edge. It features two 50Gb Ethernet connections or one 100Gb Ethernet connection for sending critical data back to the cloud or data center when connected.

Link:
Western Digital takes it to the edge with its rugged Ultrastar Edge servers - ITP.net

Vapor IO to Enable 5G Services on Shared Infrastructure via Google Anthos and the Kinetic Grid – PRNewswire

"As telcos virtualize their network functions while looking for more cost-effective and agile ways to deploy next generation wireless infrastructure, platforms like Google Cloud's Anthos become a key part of the equation," said Cole Crawford, founder and CEO of Vapor IO. "By delivering Anthos on Vapor IO's Kinetic Grid platform, communications service providers can deploy 5G RAN and MEC services in a multi-cloud, shared-infrastructure environment capable of delivering solutions that require ultra-low latencies."

Google Anthos on the Kinetic Grid

Google Anthos enables operators to run Kubernetes clusters anywhere, including on multiple clouds, on virtualized infrastructure, or on bare metal. Because Vapor IO's Kinetic Grid platform is neutral-host infrastructure, service providers have the choice of deploying Anthos on their own private servers or on servers owned by the public clouds or bare metal providers. The Kinetic Grid platform combines all of the economic benefits of shared infrastructure with the microsecond latencies required by 5G radio access networks.

By leveraging Google Anthos, operators can get a consistent managed Kubernetes experience across all of their environments as well as the flexibility of using their own servers or servers provided by third parties, including Google Cloud, Microsoft Azure and Amazon Web Services.

"As connectivity at the network edge increases, businesses with edge presences can increasingly benefit from cloud capabilities and applications, delivered securely and with low latency on 5G and other networks," said Tanuj Raja, Global Head, Strategic Partnerships at Google Cloud. "We're excited to partner with Vapor IO to help communications service providers deliver cloud-native applications and capabilities to these customers, across multiple networks and infrastructure."

An Edge-to-Edge Platform for the Internet We Need

Tightly integrated with public and private first and last mile networks, the Kinetic Grid platform supports software-driven, real-time applications operating between locally distributed sites, capable of supporting the sub-100 microsecond latencies required by 5G RAN and other services. Built as a platform for the deployment of public and private 5G, the Kinetic Grid also supports cloud providers, CDNs, IoT and immersive entertainment, and Industry 4.0 applications over shared infrastructure.

Built upon Vapor IO's award-winning Kinetic Edge architecture, currently being deployed in 36 U.S. cities, the Kinetic Grid combines software-driven networking, colocation, interconnection, and intelligence into a comprehensive, carrier-neutral platform. Because of its platform-level integration, customers can certify once and deploy everywhere. No other platform delivers edge-to-edge consistency and pre-integration of services across multiple markets.

First Deployment in Las Vegas

Vapor IO has recently announced plans and partnerships to deliver Kinetic Grid infrastructure in Las Vegas to support a multi-party testbed for Open Grid services. As part of its efforts in Las Vegas, Vapor IO will also bring Google Anthos to the testbed environment and invite public and private 5G service providers to build upon the platform.

Supporting Resources

About Vapor IO

Vapor IO is developing the largest nationwide edge-to-edge networking, colocation and interconnection platform capable of supporting the most demanding low-latency workloads at the edge of the wireless and wireline access networks. The company's Kinetic Grid platform combines multi-tenant colocation with software-defined interconnection and high-speed networking. The company's technologies deliver the most flexible, highly distributed edge infrastructure at the edge of the wireless network. Vapor IO has deployed its Kinetic Edge services in Chicago, Atlanta, Dallas and Pittsburgh, and is actively deploying in 36 additional markets. Follow @VaporIO on Twitter.

Vapor, Kinetic Edge, Kinetic Grid and Kinetic Edge Exchange are registered trademarks or trademarks of Vapor IO, Inc.

Media Contact
Jessica Rees
Phone: +1.415.889.7444
Email: [emailprotected]

SOURCE Vapor IO

See the article here:
Vapor IO to Enable 5G Services on Shared Infrastructure via Google Anthos and the Kinetic Grid - PRNewswire