Category Archives: Cloud Servers
AMD will finally give investors its data-center data as business soars – MarketWatch
Advanced Micro Devices Inc.'s data-center business will finally get its own spotlight.
After reporting record sales Tuesday and predicting another record this quarter, AMD AMD, +1.44% Chief Financial Officer Devinder Kumar said that beginning in the second quarter, the chip maker will delineate sales specifically from the division that has boosted its stock in recent years. This column has advocated, as have several Wall Street analysts, for AMD to break out this business segment separately for investors for more than two years.
AMD has appeared to be gaining some market share in the important server business at the expense of its biggest rival Intel Corp. INTC, +0.22%, but it was hard to compare the two directly because AMD did not provide raw sales information for its data-center segment, as Intel does. After closing its merger with chip maker Xilinx Inc. and announcing the acquisition of data-center software company Pensando in recent weeks, though, AMD plans to solve that problem.
From 2018: Why AMD believes it can challenge Intel in servers
While the change took too long, it arrives at a perfect time, as the information AMD does provide shows that the data-center business is booming. AMD said revenue from its data-center business doubled from a year ago, helping the segment in which it currently resides (the enterprise, embedded and semi-custom group) increase revenue 88%. Intel, in contrast, said it saw data-center sales jump 22% in the first quarter, which was solid but still a slower rate than AMD's. AMD will move from reporting two segments to four: data center, client, gaming and embedded.
There have been fears of a slowdown in spending by cloud companies, such as Amazon.com Inc. AMZN, -0.20%, so an independent data-center segment should show signs of that. When asked on the call about recent comments by some cloud companies about slowing down their investments, Chief Executive Lisa Su described AMD's demand as still robust.
"We haven't seen that," Su said. "We haven't seen that particular phenomenon. What we do see is that there needs to be good planning, so good planning with our server customers and our large cloud customers, and we're doing that. And our planning extends beyond 2022, extends into 2023 as well. And from what we can see, it's robust demand."
More from Therese: The pandemic PC boom is over, but its legacy will live on
The server, or data-center business, has always been a big potential growth area for AMD, after it spent years with very slim market share before Su decided to challenge Intel, the dominant player. AMD has been trying to return as a serious challenger in that market, a role it played for a few years in the early 2000s.
Its more recent success in servers has joined big gains from personal computers and gaming consoles, leading to AMD's first $5 billion quarter and predictions of its first $6 billion quarter in the current period, even as the overall PC market is now slowing after a huge boost during the pandemic.
Investors were clearly pleased with AMD's progress, sending shares up 7% in after-hours trading Tuesday. Maybe executives at companies that still refuse to break out important business segments, such as Microsoft Corp. MSFT, -0.95% and its Azure cloud-computing business, or Meta Platforms Inc. FB, +0.43% and Instagram, will see those gains and finally take the plunge as well.
Supercharged IT, superclouds, and superpowered healthcare: what they can deliver – MedCity News
When it comes to information technology (IT), the "whether and why" discussion about cloud use is pretty much over. As noted in some recent analysis from Accenture, "The last two years have laid bare the power and agility of cloud, and a new understanding that cloud at scale is essential for operations maturity, and ultimately, value."
Even in the slow-to-digitize healthcare sector, contemporary estimates indicate around 90% of the industry has leveled-up to using some degree of cloud computing for some functions and in various incarnations (private-, public-, hybrid-, multi-cloud).
Supercharging IT with cloud power may now be essential, but that doesn't necessarily mean it's simple. Despite an accelerated cloud adoption curve over the past couple of years, a huge swath of healthcare organizations still rely on infrastructure predating the advent of the iPhone. And as everyone knows, hordes of valuable data remain confined to countless racks of servers siloed in hospital basements and assorted colocation data centers far and wide.
Working with assemblages of those very old systems and very new cloud deployments can get very, very complicated.
It's difficult to reconcile the sheer magnitude of differences in both fundamental operation and capability between legacy on-premises infrastructure and cloud infrastructure. Picture someone from the horse-and-buggy age being presented with access to a rocket ship and trying to conceptualize whether it will fit in the barn or what to feed it. That's kind of where healthcare finds itself.
The world of technology moves at lightning speed. For a host of reasons, the healthcare sector has struggled to keep pace. What lies between is a gulf of IT complexity that stymies even the most sophisticated organizations. Thus a host of promising models and solutions are continually evolving to help bridge the gap. The latest of these is the supercloud.
Supercloud
The supercloud term dates back to a 2016 Cornell University project describing an architecture that enables application migration as a service across different availability zones or cloud providers. The supercloud "provides interfaces to allocate, migrate, and terminate resources such as virtual machines and storage and presents a homogeneous network to tie these resources together [and] span across all major public cloud providers as well as private clouds."
Much of the current excitement about the concept is centered around making everything portable across existing hyperscalers such as Amazon Web Services (AWS), Microsoft Azure, and Google Cloud, and the big business potential in constructing specialized clouds on top of them. Trendy examples of this model can be found in Snowflake's recently launched Healthcare & Life Sciences Data Cloud and Databricks' new Lakehouse for Healthcare and Life Sciences.
The central value of the supercloud concept really hinges on provisioning the best cutting-edge technology available while simplifying the way the organization interacts with it. The real magic of cloud power today isn't really in portable IT workloads; it's in the vendor-specific cloud-native services that the big hyperscalers supply. For example, Amazon Web Services has some really cool database and stream-management technology. Microsoft Azure has some really cool storage technology. Google Cloud has some really cool machine learning technology. But your average healthcare business can't afford to staff a huge IT department that masters each of those platforms, integrates them, and keeps the whole assemblage compliant and secure. It's just not feasible.
However, cloud resources are now incredibly varied and accessible, with a large ecosystem of industry-specific cloud-based managed services specializing in these complexities. Which means the average healthcare organization can, indeed, afford to tap into supercloud power; it just gets it as a service.
Essentially, healthcare organizations can get a service layer designed for their industry with sets of application programming interfaces (APIs) that are called to implement best-of-breed cloud services in a hybrid fashion amongst the appropriate hyperscalers. The right cloud is picked for particular use cases, and a mesh service layer covers all of it. Unique compliance and security requirements are automated, and the underlying implementation complexities are hidden from the business users of those services. So the healthcare organization's IT department can pretty much offload those tasks and focus entirely on innovating ways to help the business.
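To make the service-layer idea concrete, here is a minimal sketch (not any real product's API; all names and the use-case-to-cloud mapping are invented for illustration) of how a routing layer could pick a best-fit cloud per use case while hiding the provider from the caller:

```python
# Hypothetical use-case -> best-fit cloud mapping (illustrative only).
BEST_FIT = {
    "stream_processing": "aws",
    "object_storage": "azure",
    "machine_learning": "gcp",
}

class SuperCloudLayer:
    """Toy service layer: one interface, provider choice hidden inside."""

    def __init__(self, providers):
        # providers: name -> callable that runs a job on that cloud.
        self.providers = providers

    def run(self, use_case, payload):
        # Route the workload to the mapped cloud; the caller never
        # needs to know which hyperscaler actually served it.
        provider = BEST_FIT.get(use_case)
        if provider is None:
            raise ValueError(f"no provider mapped for {use_case!r}")
        return self.providers[provider](payload)

# Stand-in backends; a real layer would call hyperscaler SDKs here.
layer = SuperCloudLayer({
    "aws": lambda p: f"aws ran {p}",
    "azure": lambda p: f"azure ran {p}",
    "gcp": lambda p: f"gcp ran {p}",
})

print(layer.run("machine_learning", "train-model"))
```

The point of the design is that business users call `run()` with a use case, while the mapping table (and the compliance automation a real product would wrap around it) stays an internal concern of the service layer.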
You'll sometimes see a similar ideal touted as "industry cloud." As recently noted by Brian Campbell of Deloitte Consulting in HealthITSecurity, industry clouds are "a portfolio of business transformation-focused solutions, assets, and accelerators that ultimately help to reinvent and transform the business side of that specific industry," supplying an excellent option for healthcare organizations looking to keep pace with the changing digital landscape.
Superpower
Regardless of how a healthcare organization goes about increasing IT agility, reducing complexity, and reinventing business processes, the cloud should be central to the effort. A simple fact has been established: Cloud power increases healthcare power.
To demonstrate, consider a recent six-month study where a team of researchers shattered the record for diagnosing rare genetic diseases with DNA sequencing, and set a new Guinness World Record of 5 hours and 2 minutes to sequence a patient's genome. At Stanford Hospital, the team dedicated specialized flow cell sequencing hardware to try to speed sequencing of a single patient's genome. But the amount of data being produced overwhelmed the lab's computational systems.
According to Stanford study team member Euan A. Ashley, "We weren't able to process the data fast enough. We had to completely rethink and revamp our data pipelines and storage systems." Team member Sneha Goenka found a way to funnel the data straight to a cloud-based storage system where computational power could be amplified enough to sift through the data in real time.
The results?
They were able to sequence and diagnose a genetic illness in 7 hours and 18 minutes, about twice as fast as the previous record. For one teenage patient in the study, sequencing data showed within a matter of hours that his condition was rooted in genetics, and he was immediately placed on a heart transplant list. He received a new heart three weeks later, and as of January this year, his mom says he's doing exceptionally well.
Super!
Photo: shylendrahoode, Getty Images
Enpass Business allows organizations to choose where they store their data – Help Net Security
Enpass released a new solution, Enpass Business, built for enterprises who want to maintain complete control of their password data. Enpass Business offers a strong value proposition as it eliminates the burden and overhead associated with on-premises server installation and ongoing monitoring and management.
Supporting Microsoft 365 integration, Enpass Business also allows businesses to leverage their existing cloud storage of OneDrive/SharePoint for storing their sensitive data.
Most password management solutions on the market today store data outside of an organization's IT infrastructure, most commonly in the service provider's cloud. This poses a compliance problem for many businesses that have mandates in place preventing them from storing data outside their IT infrastructure, or that are concerned about the data security issues associated with storing sensitive data outside their zone of safety.
With Enpass Business, all passwords remain within the trusted boundaries of the organization's local IT systems. Enterprises have the option to store data on employee devices, or use their existing cloud storage, enabling them to maintain control over their data without the need to host additional servers.
"We have had great success with our B2C solution that creates strong and unique passwords, provides secure and convenient storage and automatic website login. In fact, many businesses were onboarding our consumer-centric solution because of its offline capability," said Hemant Kumar, CEO and Co-Founder of Enpass. "Notably, there are major corporations out there who have spent millions securing their businesses, yet still don't use a password manager because they don't want their data to be stored in the service provider's cloud. We developed Enpass Business to respond to this market need and provide enterprise-level password management across all platforms, with security top of mind."
Enpass Business maintains compliance while reducing security risk, as passwords, credentials and other information never leave the organization.
"With Enpass, none of the user data ever reaches our servers; we never have access to it. All the data is 100% encrypted with 256-bit AES and stored either locally on employee devices, or in the organization's business cloud," said Kumar. "The concept of leveraging Microsoft 365 to provide all the capabilities of vault sharing and access rights management is unique, and we are very excited to be bringing this one-of-a-kind solution to the market."
Enpass Business and the consumer version of Enpass are both available now, sold via subscription directly through the website.
Concerned about cloud costs? Have you tried using newer virtual machines? – The Register
Better, faster, and more efficient chips are driving down cloud operating costs and pushing prices lower, according to research from IT infrastructure standards and advisory group, the Uptime Institute.
"With each generation of processor family, cloud pricing has trended downward with one notable exception," Owen Rogers, research director for cloud computing at Uptime Institute, explained in a write-up this week.
The research tracked Amazon Web Services (AWS) pricing across six generations of AMD and Intel CPUs and three generations of Nvidia GPUs using data obtained from the cloud provider's price-list API. While Rogers acknowledged AWS' Graviton series of Arm-compatible CPUs, they weren't included in testing.
All tests were conducted in AWS' US-East-1 region; however, Rogers notes his findings should be similar across all AWS regions.
Of the eight AWS instances Rogers tracked, the majority saw a steady decline in customer pricing with each subsequent CPU generation. Pricing for the AWS m-family of general purpose instances, for example, dropped 50 percent from the first generation to present.
Some instances, AWS' storage-optimized instances in particular, saw even more precipitous pricing drops, which he attributed to other factors including memory and storage.
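The generational-decline figures above are simple arithmetic on list prices. As a sketch, here is how one might compute them from a price series; the hourly prices below are made up (chosen so the overall drop matches the 50 percent figure reported for the m-family), not actual AWS list prices:

```python
# Hypothetical on-demand $/hour for one instance size, generations 1..6.
prices = [0.40, 0.34, 0.28, 0.24, 0.22, 0.20]

# Overall decline from first generation to the most recent.
overall_drop = (prices[0] - prices[-1]) / prices[0]

# Decline between each pair of consecutive generations.
per_gen = [(old - new) / old for old, new in zip(prices, prices[1:])]

print(f"overall decline: {overall_drop:.0%}")
print(["{:.0%}".format(d) for d in per_gen])
```

With real data from the AWS price-list API, the same two lines of arithmetic reproduce the per-generation trend Rogers describes.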
"It comes as no surprise that CPU performance in these instances tends to improve with each generation," Rogers noted, citing the various performance and efficiency advantages of architectural and process improvements.
For example, AMD's third-gen Epyc Milan processor family and Intel's Ice Lake family of Xeon Scalable processors claim a 19-20 percent performance advantage over previous-generation chips. Both families are now available in a variety of AWS instances, including a storage-optimized instance announced last week.
Users can expect greater processing speed with newer generations compared with older versions while paying less. "The efficiency gap is more substantial than simply pricing suggests," he wrote, adding that it is plain to see in AWS' pricing.
In other words, while intuitively you may think instances based on older processor tech should be less expensive, more modern, more power efficient instances are often priced lower to incentivize their adoption.
"However, how much of the cost savings AWS is passing on to its customers versus adding to its gross margin remains hidden from view," he wrote.
Some of this can be attributed to customer buying habits, specifically those that favor cost over performance. "Because of this price pressure, cloud virtual instances are coming down in price," he wrote.
The exception to this rule is GPU instances, which have actually become more expensive with each generation, Rogers found.
His research tracked AWS' g- and p-series GPU-accelerated instances over three and four generations, respectively, and found that the rapid growth of total performance, alongside the rise of demanding AI/ML workloads, has allowed cloud providers and Nvidia to raise prices.
"Customers are willing to pay more for newer GPU instances if they deliver value in being able to solve complex problems quicker," he wrote.
Some of this can be chalked up to the fact that, until recently, customers looking to deploy workloads on these instances have had to do so on dedicated GPUs, as opposed to renting smaller virtual processing units. And while Rogers notes that customers, in large part, prefer to run their workloads this way, that may be changing.
Over the past few years, Nvidia, which dominates the cloud GPU market, has introduced features that allow customers to split GPUs into multiple independent virtual processing units using a technology called Multi-Instance GPU, or MIG for short. Debuted alongside Nvidia's Ampere architecture in early 2020, the technology enables customers to split each physical GPU into up to seven individually addressable instances.
And with the chipmaker's Hopper architecture and H100 GPUs, announced at GTC this spring, MIG gained per-instance isolation, I/O virtualization, and multi-tenancy, which open the door to their use in confidential computing environments.
Unfortunately for customers, taking advantage of these performance and cost savings isn't without risk. In most cases, workloads aren't automatically migrated to newer, cheaper infrastructure, Rogers noted. Cloud subscribers ought to test their applications on newer virtual machine types before diving into a mass migration.
"There may be unexpected issues of interoperability or downtime while the migration takes place," Rogers wrote, adding: "Just as users plan server refreshes, they need to make virtual instance refreshes a part of their ongoing maintenance."
"By supporting older generations, cloud providers allow customers to upgrade at their own pace," Rogers said. "The provider doesn't want to appear to be forcing the user into migrating applications that might not be compatible with the new server platforms."
SSE kicks the A out of SASE – The Register
Analysis The emergence of secure access service edge (SASE) dominated the networking market for the last few years as enterprises sought to address increasingly distributed IT environments.
SASE entered the lexicon in 2019 as enterprises started to see a possible route forward in the convergence of software-defined WAN (SD-WAN) and network security functions for threat protection, zero-trust features, firewall-as-a-service (FWaaS) and cloud access security broker (CASB), all delivered as a cloud service.
Now comes security service edge (SSE), which pulls the security functions of SASE into a unified services offering that includes CASB, zero-trust network access (ZTNA) and secure web gateway (SWG). SSE came in the wake of the COVID-19 pandemic, with most employees being sent home to work, putting in motion the ongoing trend toward hybrid work.
With many people working from home at least part of the time, the role of branch offices is lessened, and the need for security features that follow workers wherever they are, with work days starting at home and then moving to offices or other locations, is growing.
What role SSE plays in the larger network security space, and what it means for the future of SASE, are the subjects of some debate in the industry. Either way, it puts a spotlight on the ongoing evolution of networking as the definition of work continues to change and the focus of IT shifts from the traditional central data center to data and workloads in the cloud and at the edge.
Once the pandemic hit, "it was no longer about branch offices," said John Spiegel, director of strategy at Axis Security, which in April launched Atmos, its SSE platform. "It was our users taking their branch office to the home, to their garages, to their basements ... [and] collaborating with their fellow workers via Zoom. The whole thing changed and that's where we saw the utility of SD-WAN really decline."
Enterprises could put WAN devices in every employee's home, but that's expensive and complex, Spiegel told The Register.
"Instead, we pivoted back to this SSE model, which is really about delivering applications," he said. "At the end of the day, that's what a CIO, a leader cares about. It's the delivery of an application. We're getting down to that lowest common denominator and that's the user and that's really where secure service edge is and that's where we see the opportunity."
Gur Schatz, founder and COO at Cato Networks, sees it another way. The company in recent months has added features to its SASE platform, such as risk-based application access control, to address what officials see as limitations in offerings that focus only on ZTNA, SSE and CASB. People will continue to go to offices to work; there will always be SD-WAN and firewalls, data centers and cloud providers like Amazon Web Services and Microsoft Azure, Schatz told The Register.
The long-term trend will be adding more functions into the SASE environment, he said. SASE is not easy for enterprises to adopt, and SSE is a step down the inevitable path toward SASE, which addresses issues of cost and complexity when trying to merge networking and security.
"Maybe the topology changed from having branch offices communicating with headquarters to branch offices communicating with data centers or with SaaS applications, but the network is still there with you," Schatz said. "Everything converges and you have a single security posture that covers holistically what you need. It's unreasonable to get this amount of complexity and try to maintain security on top of it."
Gartner, which defined SASE, did the same with SSE last year and in February released its SSE Magic Quadrant, with Zscaler, Netskope and McAfee (which created Skyhigh Security by combining its SSE tools with FireEye's) as leaders and others like Palo Alto Networks, Cisco, Forcepoint and Lookout in play.
In addition, Gartner analysts last fall listed both SASE and SSE as must-have cloud security technologies for 2022, with SASE predicted to have a transformational impact in the next two to five years and SSE a high impact over three to five years.
While global SD-WAN revenue did slow in 2020 due to the pandemic and the dramatic shift to work-from-home, Dell'Oro Group analysts said the market came roaring back last year, growing 35 percent year-over-year and hitting record revenue of more than $2 billion as organizations optimized their branches for cloud services and adopted SD-WAN for their widely distributed workforces.
That said, there are issues with SD-WAN, including the costs that come with adopting it and an implementation phase that can take years, according to Netskope Chief Strategy Officer Jason Clark. In addition, SD-WAN tends to be an on-premises technology that addresses east-west network traffic, which doesn't fit as well when users are going into the cloud.
"For anything north-south, I'm going to my SSE," Clark told The Register.
SASE essentially has been trying to create a Frankenstein monster-like tool package, with network technologies coming from networking vendors and security tools from various security players, he said. Palo Alto is one of the few companies that owns both and is working to meld them together.
"The reality is that you have really strong SD-WAN vendors who suck at security," Clark said. "You have really, really good security companies, but they're not SD-WAN companies. Then you've got people who are trying to play in the middle. What happened is the buyers told Gartner the security-minded buyers need the best-of-breed security. Two-thirds of them said, 'I need the best SD-WAN and I need the best security. I found nothing that does both awesome.'"
When a user moves off the SD-WAN and into the cloud from home, a lot of the controls in the on-prem network are gone. Netskope's worldwide network is designed to deliver security capabilities once the user hops into the cloud, which is important given that about half an enterprise's traffic is in the cloud, Clark said. Before the pandemic hit, it was about 15 percent, he said.
David Hughes, who was founder and CEO of SD-WAN vendor Silver Peak until Hewlett Packard Enterprise bought it last year for $925 million and folded it into its Aruba Networks business, said Gartner defining SSE is a plus because it clarifies what SASE is: the on-prem SD-WAN plus cloud-delivered security services.
"It gives the IT administrator a clearer idea of the tradeoffs they would be making if they go with one vendor for everything vs. going with a cloud vendor plus an on-prem vendor," Hughes, now Aruba's chief product and technology officer, told The Register.
"We've always felt that, especially for the larger enterprises, going with a leader in the cloud-delivered security plus a leader on-prem [is best]. That's what we see happening in the large enterprise. As you come down-market, there's a desire for being able to have one throat to choke. What the Magic Quadrant shows is as you come down there, you're having to make some compromises. The split in the analysis helps people see what those compromises might be."
However, the evolving demands for networking security will continue to push the market toward convergence, Cato's Schatz said.
"Eventually all roads lead to SASE," he said.
White Box Server Market Anticipated to Surpass USD 27.48 Billion by the Year 2030 with a CAGR of 17.2% – Report by Market Research Future (MRFR) -…
New York, US, May 03, 2022 (GLOBE NEWSWIRE) -- Market Overview: According to a comprehensive research report by Market Research Future (MRFR), "White Box Server Market Information by Form Factor, Processor and Region - Forecast to 2030," the market is projected to reach USD 24.48 billion by 2030, growing at a compound annual growth rate (CAGR) of 17.2%.
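For readers who want to sanity-check forecasts like this, the compound-growth arithmetic is straightforward. Below is a short sketch that inverts the CAGR formula to find the base-year market size the forecast implies; the 2021 base year and nine-year horizon are assumptions (the report does not state them here), so treat the result as a rough check, not a figure from the report:

```python
# future = base * (1 + r) ** years  =>  base = future / (1 + r) ** years
future = 24.48   # USD billions, forecast for 2030
cagr = 0.172     # 17.2% compound annual growth rate
years = 9        # assumed horizon: 2021 -> 2030

base = future / (1 + cagr) ** years
print(f"implied base-year market size: ${base:.2f}B")
```

Under those assumptions the implied starting size works out to roughly $5.9 billion, which gives a feel for how aggressive a 17.2% CAGR actually is over a decade.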
Market Scope: The increased acceptance of open platforms such as Project Scorpio, the Open Compute Project, and others is likely to move the white box server market forward.
Dominant Key Players on White Box Server Market Covered are:
Market USP Exclusively Encompassed:
Market Drivers: The market is expected to expand due to rising demand for low-cost servers, improved uptime, and a high degree of customization and flexibility in hardware design.
Market Restraints: Less consistency, coupled with a lack of redundancy, may act as a market restraint over the forecast period.
Market Challenges: High manufacturing as well as research & development costs may act as market challenges over the forecast period.
Segmentation of Market Covered in the Research: The global white box server industry is bifurcated based on applications, form factors, operating systems, and components.
By operating systems, the white box server market is segmented into Windows, Linux, UNIX, and others.
By application, the global white box server market is segmented into data centers and enterprise.
By components, the white box server market is segmented into memory, processor, network adapter, motherboard, and power supply.
By form factors, the white box server market is segmented into rack towers, blade servers, and others.
Regional Analysis: North America to Lead the White Box Server Industry. North America is expected to dominate the white box server market over the forecast period and plays a critical role in its expansion. Industrialized countries such as Canada and the US host large numbers of data centres; in the United States, data centres can be found in regions like New York, Silicon Valley, eastern Washington, and many others. Because North America is home to multiple multinational firms and the highest concentration of data centre facilities, including some of the world's largest, the region holds a significant portion of the worldwide industry. Increased usage of ICT technologies and enterprise digitalization are transforming the old IT environment, which is helping to boost the regional market.
APAC to See Favorable Growth in the White Box Server Market. The Asia-Pacific region will also heavily influence the market's expansion, driven by the growing popularity of high-capacity mobile devices; China in particular is significantly responsible for the region's growth. Because of the expanding presence of major cloud service providers, APAC is likely to be the fastest-growing market for white box servers over the projection period. The increasing number of internet users, rising demand for infrastructure refresh in older data centres, and the expanding significance of data sovereignty as data privacy regulations mature in Southeast Asia are all driving the expansion of the Asia-Pacific data centre sector. Hong Kong and Singapore are critical sites for the white box server market, with major businesses such as Tencent Holdings Ltd. (China), Alibaba Group Holding Ltd. (China), and Baidu Inc. (China) playing a significant role. In addition, companies such as Microsoft Corp. (USA), Facebook Inc. (USA), Amazon Web Services Inc. (USA), and Google Inc. (USA) are expanding their presence in the region. Furthermore, many cloud service companies prefer white box servers over branded servers, and large organizations in APAC are also projected to employ them in the coming years. Rising usage of mobile devices and digital services is expected to drive demand for data centres to accommodate a variety of consumer and enterprise needs throughout the projected period.
COVID-19 Impact on the Global White Box Server Market: Due to the technical movement to cloud-based services and reliance on data centres, the ongoing COVID-19 situation has had a favourable impact on the worldwide white box server industry. Several firms' work-from-home policies, as well as the development of online education systems, are boosting the demand for powerful servers around the world. Furthermore, travel limitations imposed by some governments, the growing importance of the e-commerce industry, and expanding internet usage are driving the demand for efficient and adaptable servers, facilitating market expansion. Such factors are expected to aid market growth in the post-pandemic period.
View original post here:
White Box Server Market Anticipated to Surpass USD 27.48 Billion by the Year 2030 with a CAGR of 17.2% - Report by Market Research Future (MRFR) -...
What is Server Virtualization? Benefits and advantages discussed – TheWindowsClub
Server virtualization: have you ever heard of it? You'd be surprised how important it is and how widely it is used around the world. Since not many people are familiar with it, we aim to explain all the important bits.
Virtualization is the creation of a virtual variant of something. A virtual machine has no dedicated physical hardware of its own; instead, it shares the underlying physical hardware with a host operating system, which exposes virtual devices to it.
Server virtualization is the process of creating virtual servers that act like real servers. To make this possible, virtualization software is installed on a host computer designed to deliver the necessary computing power and hardware.
The problem with a traditional server setup is that each machine is usually designed to support a single application, forcing the server to run a single workload. This can effectively waste resources, and no one wants that.
Virtual servers are better because they allow companies to cut down on the cost of having to deploy multiple physical servers which will take up additional space and use more electrical power.
Virtual servers are made possible by a layer of software known as a hypervisor. What is this? It is a layer that abstracts the underlying hardware from all the software that runs above it.
In layman's terms, a hypervisor is similar to an emulator: virtualization software, if you will. It is designed to run several virtual machines on a single physical computer, and it is responsible for allocating the physical server's resources to the different virtual machine instances.
There are two types of hypervisors that can be used in a virtual server: Type 1 hypervisors, which run directly on the host's bare metal, and Type 2 hypervisors, which run as an application on top of a conventional operating system.
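To make the hypervisor's resource-allocation role concrete, here is a toy Python model (purely illustrative, not how any real hypervisor is implemented): it hands out slices of the host's CPUs and memory to virtual machines and refuses any request that would oversubscribe the hardware.

```python
# Toy model of a hypervisor's resource-allocation job. It tracks the host's
# physical CPUs and memory and hands out slices to VM instances, refusing
# any request that would oversubscribe the hardware.

class ToyHypervisor:
    def __init__(self, cpus: int, ram_gb: int):
        self.free_cpus = cpus
        self.free_ram_gb = ram_gb
        self.vms = {}

    def create_vm(self, name: str, cpus: int, ram_gb: int) -> bool:
        """Allocate resources for a VM; return False if the host is full."""
        if cpus > self.free_cpus or ram_gb > self.free_ram_gb:
            return False
        self.free_cpus -= cpus
        self.free_ram_gb -= ram_gb
        self.vms[name] = (cpus, ram_gb)
        return True

    def destroy_vm(self, name: str) -> None:
        """Return a VM's resources to the host pool."""
        cpus, ram_gb = self.vms.pop(name)
        self.free_cpus += cpus
        self.free_ram_gb += ram_gb

host = ToyHypervisor(cpus=16, ram_gb=64)
print(host.create_vm("web", cpus=4, ram_gb=16))    # True
print(host.create_vm("db", cpus=8, ram_gb=32))     # True
print(host.create_vm("batch", cpus=8, ram_gb=32))  # False: only 4 CPUs left
```

Real hypervisors do far more (scheduling, device emulation, memory overcommit), but the core bookkeeping idea is the same.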
Cost: A virtual server is cheaper because the user will not have to worry about hardware maintenance. This is a huge boon for companies because their IT department won't have to invest in on-site resources or a separate space to house massive physical servers.
Read: How to enable Automatic .NET Updates in Windows Server
Server virtualization is all about separating the physical server from the guest operating system, which provides additional benefits and capabilities.
As for network virtualization, this is where network applications are moved onto a network device, which provides more capabilities and benefits as well.
A server is a physical computer that runs services designed to serve the needs of other computers on a network.
Read more here:
What is Server Virtualization? Benefits and advantages discussed - TheWindowsClub
Has the cloud industry solved a big problem for digital pathology? – Digital Health
Pathology produces immense amounts of imaging data compared to other disciplines, but could a different approach to cloud storage prevent a potential cost crisis? Sectra's sales director, Chris Scarisbrick, explores a sustainable strategy some healthcare providers are now taking.
Digitisation in pathology is taking place at an unprecedented pace. Healthcare providers almost everywhere are now progressing their plans for the biggest transformational change that the centuries-old discipline has ever seen.
Such progress is exciting and important, with significant implications for clinical collaboration and enhanced patient care. The UK government has placed such importance on modernising diagnostics that it is currently investing hundreds of millions of pounds in digitisation within the space of a single year. Gone are the days when we could expect pathologists to stand over microscopes, working in relative isolation from each other.
But as necessary as digital pathology is, an inevitable challenge to the longer-term sustainability of initiatives has continued to trouble some people: the cost of storage.
How big is the problem?
It has been a big challenge, from a data generation point of view at least. Pathology is by far the largest consumer of digital storage when compared to other diagnostic disciplines. In radiology, a typical x-ray might consume about 35 megabytes of data. A more complex examination, like a CT scan, might produce images in the region of 300 megabytes. But in pathology, digital images created from the scanned biopsy slides associated with just a single average patient examination generate as much as five gigabytes of data.
Putting the challenge into context, one of the world's most advanced digital diagnostic initiatives recently reported that it had produced half a petabyte of radiology data over a 10-year period. Having also now digitised pathology, the programme soon expects to produce around three petabytes of data every single year from scanned slides. That's 3,000 terabytes of data every year, for a relatively modest regional population, and just from digital pathology.
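A quick back-of-the-envelope check of those figures, using only the per-exam sizes and annual volume quoted above:

```python
# Sanity check on the storage figures quoted in the article
# (35 MB per x-ray, 5 GB per pathology exam, 3 PB per year).

GB = 1
TB = 1_000 * GB
PB = 1_000 * TB

xray_exam = 0.035 * GB        # ~35 megabytes per typical x-ray
pathology_exam = 5 * GB       # ~5 gigabytes per average digitised slide set

annual_pathology = 3 * PB     # projected yearly output of the programme
exams_per_year = annual_pathology / pathology_exam

print(f"A pathology exam is ~{pathology_exam / xray_exam:.0f}x an x-ray")
print(f"3 PB/year corresponds to ~{exams_per_year:,.0f} pathology exams")
```

So a single pathology exam carries roughly 140 times the data of a plain x-ray, and 3 PB a year corresponds to about 600,000 exams.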
For healthcare organisations with ready access to expansive storage options, this is less of a challenge. But for many others, who might produce several times the data in the above example, alternative solutions are being sought to ensure the cost of digital pathology storage remains sustainable.
Solving the storage problem
Despite its immense storage footprint, pathology has one very significant advantage: once digital slides have been reported and the clinical diagnostic cycle is complete, images are relatively unlikely to be needed again.
This differs from other diagnostic arenas. In radiology, for example, access to historical imaging is clinically important, allowing healthcare professionals to quickly see what might be historically normal for a patient, or to monitor progression of areas of interest over time. A single x-ray might be looked at many times as a point of reference during a person's life, especially if it highlights potential areas of concern.
But in the vast majority of pathology cases this isnt a requirement. Any valuable information is typically extracted at the point of reporting. Once a clinical decision has been made and the patient is on a pathway, biopsies are not usually revisited for ongoing patient care.
Some recent regional digital pathology initiatives I have spoken to are now taking strategic advantage of this situation, coupled with emerging developments in cloud computing. In particular, they are opting to utilise archive storage capabilities that started to emerge a few years ago and which have now become common solutions from major cloud providers.
Retrieving data from such deep layers of archive storage can come with a cost, but overall, it means that vast quantities of data can be stored at scale whilst remaining affordable and sustainable.
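As a rough illustration of that trade-off, the sketch below compares a hot storage tier with a deep-archive tier. All prices and the retrieval fraction are hypothetical placeholders, not real provider rates; the point is the shape of the comparison, not the exact figures.

```python
# Hypothetical cost comparison: hot object storage vs. deep-archive storage
# with a per-gigabyte retrieval fee. None of these prices are real rates.

hot_price_per_gb_month = 0.02       # assumed hot-tier price
archive_price_per_gb_month = 0.001  # assumed deep-archive price
retrieval_price_per_gb = 0.02       # assumed archive retrieval fee

data_gb = 3_000_000                 # 3 PB of annual pathology output
retrieved_fraction = 0.01           # assume only 1% of images are ever re-read

hot_annual = data_gb * hot_price_per_gb_month * 12
archive_annual = (data_gb * archive_price_per_gb_month * 12
                  + data_gb * retrieved_fraction * retrieval_price_per_gb)

print(f"hot tier:     ${hot_annual:,.0f}/year")
print(f"archive tier: ${archive_annual:,.0f}/year")
```

Because so few pathology images are ever retrieved, the retrieval fee barely registers, and the archive tier comes out an order of magnitude cheaper under these assumptions.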
Ending the storage of glass slides altogether?
If such images are so infrequently needed, you might legitimately question why they need to be stored in the first place.
Some initiatives have decided to try to manage without storing images in the longer term. They have chosen to purge imaging data from servers, instead opting to spend time retrieving the original physical slide that is kept in storage and to then re-scan that slide at the point the image is needed.
When slides are revisited, it is often for medico-legal reasons. For example, if a cancer has been missed, an inquiry may want to understand whether it should have been detected, and to see what was visible to the pathologist at the time of reporting.
One potential challenge with this approach is that the quality of physical slides can degrade over time, meaning that what is visible when a slide is rescanned might differ from the original image at the time the diagnostic report was made. A high-quality digital image, on the other hand, will remain the same indefinitely, providing a highly reliable record that might also offer significant value for research or for the training of AI, for example.
Novel cloud archiving options being put into practice now are likely to defeat the case for data-purging strategies. Indeed, they might even raise questions as to whether physical slides should be retained. Current guidance from organisations like the Royal College of Pathologists does, for the time being, require tissues to be retained and stored. But is the storage of slides an unnecessary cost in itself if a reliable digital image is all that is needed?
Cloud is the way forward
Nearly every digital pathology initiative I have encountered recently is reliant on the cloud, for many reasons. It is the more secure option when it comes to cyber security: cloud providers invest vast resources in their cyber resilience, whereas an on-premises solution managed by an already busy hospital IT team can only defend against so much.
Cloud also offers flexibility of scale, and to pay as you go rather than investing large amounts of capital into hardware, capital that does not exist for many healthcare providers.
Cloud helps to drive forward consolidation and regional multi-organisation pathology programmes. It can help to standardise and simplify digital pathology deployments. And it can help to reduce the time to deployment, with projects not dependent on sourcing increasingly scarce hardware that would otherwise dictate timescales.
For that and other reasons, cloud is the way forward. But storing petabytes upon petabytes of data in traditional online environments would likely become too expensive, too quickly, for most initiatives. Archives might now be the answer many have been searching for.
Read more here:
Has the cloud industry solved a big problem for digital pathology? - Digital Health
Varjo Reality Cloud: Ultra-Reality Experience The Easy Way – Ubergizmo
Varjo has announced the Varjo Reality Cloud, a secure SaaS platform that lets customers stream XR content from Autodesk VRED rendered on powerful enterprise-class hardware to consumer-level clients such as XR headsets, laptops, and mobile devices.
Varjo is well-known for its world-class VR headsets that feature eye-tracking and impressive human-resolution displays. The level of detail is so high that things like cockpit flight instruments are completely readable: an absolute necessity for the most immersive XR apps.
Previously, such a level of detail required connecting the XR headsets to a powerful local workstation to drive the rendering with a beefy (and very expensive) GPU and CPU configuration.
There are many situations where having physical possession of such an expensive computer might not be convenient. For instance, you might want to set up demo rooms in different locations, and physically deploying these computers is costly and might require on-site engineering expertise.
Varjo's Reality Cloud is a remote rendering solution that solves this elegantly and efficiently. The rendering is done in a data center (AWS in this case) and streamed at high speed and low latency to a cheap thin client. For the end user, it's no more complicated than launching a regular app.
There are other cloud solutions like this, but Varjo's is the only one I've seen that takes full advantage of the company's Human Resolution rendering and displays. That's a massive advantage in the enterprise XR business and an excellent reason to pay attention to this service.
Although you need a good Internet connection, it works on typical consumer-level broadband. I tested it when Varjo rented a photo studio in San Francisco with run-of-the-mill internet connectivity. The service requires only 35 Mbps (megabits per second).
Varjo uses little bandwidth thanks to a proprietary foveated transport algorithm. Foveated refers to tracking the user's eye gaze so that detail can be prioritized where the eye is actually looking.
Varjo supports foveated rendering during the construction of each frame. However, foveated transport happens in the compression and transmission of the rendered frame over the network. The company claims it can achieve a lossless compression ratio of 1000:1.
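As a sanity check on those numbers, the sketch below estimates the raw bitrate of a hypothetical dual-display headset (the per-eye resolution, frame rate, and colour depth are illustrative assumptions, not Varjo's published specs) and applies the quoted 1000:1 ratio:

```python
# How does a 1000:1 compression ratio square with a 35 Mbps link?
# The headset parameters below are illustrative assumptions only.

width, height = 2880, 2720   # assumed pixels per eye
eyes = 2
bits_per_pixel = 24          # 8-bit RGB
fps = 90

raw_bps = width * height * eyes * bits_per_pixel * fps
raw_mbps = raw_bps / 1e6
compressed_mbps = raw_mbps / 1000   # the 1000:1 ratio quoted in the article

print(f"raw:       {raw_mbps:,.0f} Mbps")
print(f"at 1000:1: {compressed_mbps:,.1f} Mbps")
```

Under these assumptions the uncompressed stream would be in the tens of gigabits per second, and a 1000:1 reduction brings it to roughly 34 Mbps, which is consistent with the 35 Mbps requirement quoted above.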
Me (middle) with Varjo's CTO Urho Konttori (right) and CBO Jussi Mäkinen (left)
Overall, the experience was great and comparable to an offline Varjo XR experience. There was no noticeable compression artifact, and the framerate was very acceptable for applications such as architectural previews, where you don't always need 90+ FPS.
The first demo was a virtual car showroom with excellent integration of the car into the real-world studio. Varjo did a great job capturing the local light probes to render the 3D car as if it was in the physical room. I could even peek inside the vehicle, and every instrument was sharp and readable.
The meta-human demo. Photo from Varjo's website, not from my actual session
The second demo was a meta-human (virtual character) that needed to be rendered realistically. Again, the extremely high resolution of Varjo headsets makes a world of difference when it comes to fine details such as hair or clothes texture (jeans, etc.). A lot of small things aggregate into very perceptible improvements. The meta-human is real enough that it felt weird to enter their personal space.
I haven't created any Varjo Reality Cloud servers myself, but I'm well familiar with the concept, and there's little doubt that the Varjo Reality Cloud is attractive simply because it makes life much easier.
Additionally, renting virtual workstation instances for a short period instead of buying them makes it possible to rapidly scale capacity up and down for special events or even weekly executive content reviews. That's potentially a massive increase in usage for a modest increase in cost, and the added value is straightforward to measure.
Filed in Gaming. Read more about Virtual Reality (VR).
See more here:
Varjo Reality Cloud: Ultra-Reality Experience The Easy Way - Ubergizmo
IBM outlines first major update to i OS for Power servers in three years – The Register
IBM has outlined a major update to the "i" operating system it offers for its Power servers.
i 7.5, which will debut on May 10, supersedes version 7.4, which appeared in April 2019. If that feels like a long time between updates, remember that servers packing IBM's POWER CPUs can also run IBM's own AIX, or Linux (a variant of which IBM also packages, thanks to its ownership of Red Hat and its Linux distros).
The i OS update (which should not be confused with Apple's iOS or Cisco's IOS) runs only on Power 10 or Power 9 hardware. IBM will happily talk to users of earlier Power servers about an upgrade: proprietary hardware and associated software are massive contributors to the company's revenue and profit.
The new release improves scalability to a maximum of 48 processors per partition in SMT8 mode. That change lets servers packing Power 10 or Power 9 CPUs run up to 384 threads (48 processors × 8 threads each).
Other additions include:
IBM's announcement of the update also mentions a couple of odd-seeming changes. One allows clients to change the scope of two-digit year date ranges, so that base years can be moved from 1940 to 1970. If you've been hanging out for that feature, huzzah. Another allows the operating system's FTP client to accept a server certificate that is not signed by a trusted certificate authority but sensibly leaves that turned off by default.
Another change to the Power ecosystem revealed today is the introduction of a module that allows the use of U.2 15mm NVMe solid state disks. Power 9 and 10 boxes can now run such disks with capacity of 800GB, 1.6TB, 3.2TB, or 6.4TB.
Curiously, IBM's announcement of that feature includes verbose cautions about the lifecycle of such drives, and notes that IBM considers three full disk writes a day for five years to be the devices' expected working life.
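For reference, the total bytes written implied by that rating can be computed directly from the article's figures (three full drive writes per day for five years):

```python
# Endurance implied by IBM's stated expectation of three full drive writes
# per day (DWPD) over five years, for the listed NVMe capacities.

capacities_tb = [0.8, 1.6, 3.2, 6.4]
dwpd = 3
years = 5

for cap in capacities_tb:
    tbw = cap * dwpd * 365 * years   # terabytes written over rated life
    print(f"{cap:>4} TB drive: ~{tbw:,.0f} TB written over its rated life")
```

For the largest 6.4 TB module that works out to roughly 35,000 TB written, which explains the verbose lifecycle cautions.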
Big Blue has also teased an update to the Enterprise Edition of AIX but it's mainly a change to the bundle offered for one cut of that OS, rather than the more significant update of features offered in i 7.5.
Read this article:
IBM outlines first major update to i OS for Power servers in three years - The Register