Category Archives: Cloud Servers
Global Hyper-Converged Infrastructure Markets to 2025 – IBM, Cisco, Huawei and Microsoft are the Forerunners of this $27 Billion-Projected Industry -…
Dublin, March 19, 2020 (GLOBE NEWSWIRE) -- The "Global Hyper-Converged Infrastructure Market, by Component, by Organization Size, by Application, by End-user, by Region, Industry Analysis and Forecast, 2019-2025" report has been added to ResearchAndMarkets.com's offering.
The Global Hyper-Converged Infrastructure Market size is expected to reach $27 billion by 2025, rising at a market growth of 33.2% CAGR during the forecast period.
Hyper-converged technology gained momentum early in its development because of its ability to deliver virtual desktops. With digital data growing constantly and coming from increasingly diverse sources, companies are moving toward high-performance architectures that scale linearly, which is the primary focus of hyper-converged infrastructure. As hyper-converged architecture has expanded from traditional server workloads such as web servers, standard software, and research & development to mission-critical workloads such as SAP, SQL, and Oracle, adoption rates have risen, which influences market growth positively.
The need for scalable infrastructure, together with rising demand for architectures that can manage heavy workloads such as business analytics and big data tools, is one of the key drivers of growth in the global hyper-converged infrastructure market. Major business leaders push companies to embrace software-centric architecture that lets them integrate, process, and store data in a single suite. This has encouraged leading virtualization providers to move toward hybrid cloud solutions that help their customers migrate transactional workloads to the public cloud while executing heavy, typically mission-critical, workloads on-premises.
Partnerships are the major strategy followed by the market participants. Based on the analysis presented in the Cardinal matrix, IBM Corporation, Cisco Systems, Inc., Huawei Technologies Co., Ltd., and Microsoft Corporation are some of the forerunners in the Hyper-Converged Infrastructure Market. Companies such as Hewlett Packard Enterprise Company, Fujitsu Limited, Dell Technologies, Inc., Nutanix, Inc., NetApp, Inc., NEC Corporation, and Hitachi, Ltd. are some of the key innovators in the Hyper-Converged Infrastructure Market.
Partnerships, Collaborations, and Agreements
Acquisition and Mergers
Product Launches and Product Expansions
Market Segmentation
By Component
By Organization Size
By Application
By End-user
Companies Profiled
For more information about this report visit https://www.researchandmarkets.com/r/g27vxt
Research and Markets also offers Custom Research services providing focused, comprehensive and tailored research.
Pockets of Density Will Be More Common in Future Data Centers – Data Center Frontier
High-density racks will become more common as AI gains traction. (Photo: Rich Miller)
Will high-density, liquid-cooled racks become more common in data centers? That's today's topic as we continue our Data Center Executive Roundtable, a quarterly feature showcasing the insights of thought leaders on the state of the data center industry, and where it is headed. In today's discussion, our panel of experienced data center executives, Iron Mountain's Michael DeVito, Chris Sharp of Digital Realty, Kristen Kroll-Moen from Chatsworth Products, Intel's Jeff Klaus, Gary Niederpruem of Vertiv, and Amber Caramella of Netrality and Infrastructure Masons, discuss trends in rack power density and their implications for design and operations.
The conversation is moderated by Rich Miller, the founder and editor of Data Center Frontier.
Data Center Frontier: Last year, several studies indicated that rack density levels are increasing. How might the growing use of artificial intelligence and edge computing impact rack density and the world of data center cooling?
GARY NIEDERPRUEM, Vertiv
Gary Niederpruem: What we are seeing from our customers is that average rack densities aren't rising dramatically for many operators. But in select applications, such as AI, extremely high densities are becoming more common. That's created an interesting environment where you have some users with relatively stable densities and some with densities that weren't even practical 10 years ago. This is occurring both within the core data center and on the edge.
From a cooling perspective, operators need the ability to efficiently cool facilities that support standard-density racks (5 kW to 8 kW), high-density racks (30 kW and higher), or some mix of both. Key to accomplishing that is being able to adapt the right cooling technology to the application.
Vertiv is enabling this by offering a range of different technologies that include air-based, compressor-based and liquid-based cooling. That has given high-performance computing facilities, for example, the flexibility to design racks around their specific requirements and then adapt the appropriate cooling technology rather than designing racks based on the capacity of their cooling system.
KRISTEN KROLL-MOEN, Chatsworth Products
Kristen Kroll-Moen: As rack density increases, airflow management becomes critical in traditional air-cooled facilities. In the late 2000s, the industry was projecting significant rack density increases. CPI conducted lab tests to observe and verify performance of airflow management technology and we discovered that air cools very dense racks, up to 30 kW, with disciplined airflow management. Since then, there have been more hardware improvements: hard drives have transitioned to solid state, power supplies have increased in efficiency, and equipment operating ranges have widened.
These improvements provide even more opportunity to accommodate new computing with traditional airflow cooling models. Where space is limited, perhaps in edge sites or where there is a high-density compute node, an alternative is to shift to liquid cooling, either supplemental indirect cooling or direct liquid cooling. The challenge for enterprise operators is the availability of off-the-shelf direct liquid-cooled solutions.
AMBER CARAMELLA, Netrality Data Centers and Infrastructure Masons
Amber Caramella: As artificial intelligence (AI) and edge computing proliferate, rack density will continue to increase. More data-intensive workloads naturally require more compute power, which increases the amount of electricity used by servers and the amount of heat the servers produce. This makes powering and cooling data centers more expensive and increases their carbon footprint.
Conventional data center cooling methods will not scale under the processing demands of AI, 5G wireless, Internet of Things (IoT), and the rise of Smart Cities. Luckily, there are a growing number of new approaches to cooling that actually reduce energy consumption and costs. As data center needs transform, dynamic infrastructure is needed to respond to high, mixed and variable power densities, enabling environments to evolve without stranding capacities. Newer cooling systems are being developed that are purpose-built for data centers.
These and other promising new cooling methods will be needed to ensure that data centers can keep up with data processing demands and become more green and sustainable.
CHRIS SHARP, CTO, Digital Realty
Chris Sharp: With the explosion of AI and IoT in the enterprise, data gravity has become one of the biggest challenges to successful digital transformations. The explosive growth of data means it is now heavier, denser, and more expensive to move. At the same time, these new technologies require data centers to support a higher level of computational power, electricity usage, and heat generation, requiring specialized power and cooling techniques that aren't available in the enterprise basement.
Organizations across all industries are using AI to meet business challenges and increase efficiency. It's also important to note that not every colocation facility is prepared to support these compute-intensive technologies in a multi-tenant environment. Per-rack power demands for AI can easily and regularly exceed what standard data centers can deliver. To put this into context, the average power per rack is around 7 kW, but AI applications can pull more than 30 kW per rack. As per-rack power demands rise, so does the need for highly efficient cooling.
One of the ways data center operators are addressing these data-intensive technologies is through various next-generation cooling technologies, including liquid cooling and direct air cooling. While it isn't the right fit for all workloads, liquid cooling enables ultra-high-density equipment to be deployed in otherwise low- or medium-density facilities, essentially retrofitting a data center for future applications.
As part of our commitment to lead the data center into the future, we're partnering with companies like Submer Technologies to help customers evaluate the potential applications of new cooling technologies and support their future data center infrastructure needs.
MICHAEL DeVITO, Iron Mountain
Michael DeVito: At a high level, AI requires a lot of processing. It must go through many different data sets quickly and requires algorithms capable of processing that data.
Along with the demand for low latency and high processing comes greater compute capability. This will increase the power required at the rack level. Once you increase power at the rack level, you must be able to cool it.
As the need for greater cooling grows, data centers must ensure they are doing this efficiently. As we consume more and more cooling, we need to be cognizant of any waste and operate in the most sustainable manner possible. Finding sustainable sources of power is key.
JEFF KLAUS, Intel
NEXT: Our panel discusses the data center industry's progress on diversity.
Google: This is what caused CPU throttling at our cloud data center – ZDNet
Google says a set of crushed wheels used for moving its server racks triggered a chain reaction that may have disrupted Search, Gmail, and other services for some users.
A rack of servers at one of its data centers started overheating to the point where CPUs were automatically throttled, ultimately because a set of rack wheels couldn't bear the weight of Google's cloud kit.
Steve McGhee, a solutions architect at Google Cloud, says Google users "most likely" wouldn't have noticed errors caused by the rack's crushed wheels. But the chain of events resulted in enough CPU throttling to cause "user harm".
Fortunately, the incident wasn't as serious as one from June last year, caused by a failure in Google's automation software, which took down Gmail, YouTube, and customers' applications. That incident prompted a big apology to customers and a commitment to do better in future.
This time the company has decided to tell the story to illustrate the lengths it goes to in finding the root cause of disruptions, even when they don't noticeably impact users.
The latest event came to light when Google recently kicked off an investigation after a site reliability engineer noticed a spike in errors from machines on its edge network that cache content users frequently access. The machines were immediately taken offline to stop them impacting customers, allowing other machines to take up the slack.
Google engineers noticed some border gateway protocol (BGP) network errors but their characteristics suggested issues with the machines rather than the router. Further investigation turned up kernel messages in machines on the edge network that revealed CPU clock throttling.
The engineers found that failing systems were isolated to machines on a single rack. All of this investigation was happening remotely. Unable to explain why the rack was overheating enough to cause kernel errors, the engineers then requested Google's on-site data-center workers to physically check out the problem rack.
Soon after, the data-center team reported back with a brief message and a picture of the rack's crushed wheels.
"Hello, we have inspected the rack. The casters on the rear wheels have failed and the machines are overheating as a consequence of being tilted," the team explained.
"The wheels (casters) supporting the rack had been crushed under the weight of the fully loaded rack," said McGhee.
"The rack then had physically tilted forward, disrupting the flow of liquid coolant and resulting in some CPUs heating up to the point of being throttled."
It's not clear why the wheels were crushed, but Google engineers feared it could be a more widespread problem, so they replaced all the racks that could be vulnerable to the same broken-wheel tilting issue.
The problem has caused Google to reconsider how it moves new racks into its data centers when they're being built.
Google's engineers discovered that casters on the rear wheels had failed, ultimately causing the machines to overheat.
The alarming tilt of a refrigeration unit also pointed to the underlying problem.
Cloud of uncertainty for restaurants, bars, and servers – WKBW-TV
EAST AURORA, N.Y. (WKBW) -- Changes are coming to restaurants and bars, as all establishments will have to close their dining and bar areas effective Monday at 8 p.m. Businesses will have to operate under take-out service only.
"We all have children and we all have homes and mortgages and payments that we have to make," Lori Cubins, a server at Bar-Bill Tavern said. "It creates a lot of uncertainty."
Cubins has been a server at the restaurant in East Aurora for 17 years. For the single mother of three, serving is her main source of income.
"If we can get back to business within two weeks, we're probably going to be okay," Cubins said. "But if it continues to go on longer than that, a lot of us will struggle with that."
The location in East Aurora has always had a take-out service but every patron knows how packed the dining area normally is.
"Ultimately we're buckling down and preparing for an extended negative impact to the business," Bar-Bill owner Clark Crook said.
Not only is the hit to business expected to be painful, Crook says he's worried about his staff.
"The impact on them is enormous," he added. "It can't be understated so either way, our employee impact is something we're the most concerned about."
In Tonawanda, Mississippi Mudds has moved completely to curbside service and is offering free delivery within a two-mile radius. It's all to help provide customers with a quick meal.
"It's gonna impact everybody but it's the right move to do," part-owner Tony Berrafato said. "I still believe people want the food they enjoy and we're keeping people working as best you can."
And during this time of uncertainty, employers can only look for the light at the end of the tunnel and hope one day soon, it's business as usual.
"It will be a struggle if it continues on for more than the two weeks," Cubins said. "But we will get through this, absolutely."
Google reveals the wheels almost literally fell off one of its cloudy server racks – The Register
Google has revealed that the wheels almost literally fell off some of its servers.
A late Friday post about the virtues of its site reliability engineering (SRE) teams told the story of a recent incident in which its uptime squad found evidence of packet loss, isolated to a single rack of machines.
On closer inspection the servers in said rack were found to be rife with CPU throttling and some border gateway protocol weirdness to boot.
After plenty of remote probing by the SRE team failed to diagnose the problem, a Googler was despatched to endure the indignities of meatspace and inspect the problem rack with their actual eyes.
And here's what they found:
Crushed castors beneath a Google server rack.
"The wheels (castors) supporting the rack had been crushed under the weight of the fully loaded rack," wrote Google Cloud Solutions Architect Steve McGhee. "The rack then had physically tilted forward, disrupting the flow of liquid coolant and resulting in some CPUs heating up to the point of being throttled."
The rack was duly propped back up, and McGhee says Google has since performed a systematic replacement of all racks with the same issue while avoiding any customer impact, and has also considered how to better transport and install its kit.
The post is of course self-promotion for how seriously Google takes its quest for uptime. But it is nonetheless interesting for revealing that Google has two internal aphorisms. One states that "All incidents should be novel" and should never occur more than once. The other posits that "At Google scale, million-to-one chances happen all the time."
The Register suggests the first is applicable anywhere. And the second is, thankfully, hardly ever a problem for our readers. Until they move into a hyperscale cloud.
One more thing to note: the post includes a photo of the leaning rack, a rare image of a Google bit barn's innards even if it reveals very little.
The Last Hurrah Before The Server Recession – The Next Platform
Excepting some potholes here and there and a few times when the hyperscalers and cloud builders tapped the brakes, it has been one hell of a run in the last decade for servers. But thanks to the coronavirus outbreak and some structural issues with sections of the global economy (let's stop pretending economies are national things anymore, because they clearly are not), this could be peak server for at least a few quarters. Maybe a few years.
We started The Next Platform in 2015, but our experience in the systems market goes back to the aftermath of the 1987 stock market crash that eventually caused a recession in the late 1980s and early 1990s that really didn't get resolved until the dot-com boom came along and injected a whole lot of hope and cash into the tech sector and then into every other sector that needed to become an e-business. When we think about transition points in IT, we think that the Great Recession was the point in time when a lot of different industries pivoted. And thus our financial analysis usually goes back to the Great Recession (when we are able to get numbers back that far) because we want to see how what is going on now compares to the difficult time we were going through then.
According to market researcher IDC, in the fourth quarter of 2019, which is technically a dozen years since the last recession started, server shipments were up 14 percent to 3.4 million units and revenues rose by 7.5 percent to $25.35 billion.
The big reason for that revenue increase was that the hyperscalers and cloud builders invested heavily in machinery in the quarter, with 1.05 million machines being sold by the ODMs who supply iron to these companies, up a stunning 53 percent and driving revenues up 37.9 percent to $6.47 billion. Clearly, with the hyperscalers and cloud builders buying mostly X86 servers and with increasing competition between Intel and AMD, the hyperscalers are getting great deals on processors, with AMD leading the price/performance drive and Intel huffing and puffing to try to keep up without wrecking its profit margins. Don't feel bad for Intel, though: the chip giant is driving historic revenues and very high operating profits in its Data Center Group even with the competitive pressure. IBM's System z mainframes also perked up in the quarter, driving revenues for Big Blue up 17.6 percent to just a tad under $2.3 billion. Inspur, thanks to a very aggressive X86 and Power server business in China, saw revenues grow by 12.1 percent to $1.74 billion.
The rest of the server makers were either up a few points or down a few points. As we recently discussed in our analysis of the datacenter businesses of Dell and Hewlett Packard Enterprise, these two companies are exemplary of what is happening in the enterprise and among smaller Tier 2 clouds, telcos, and service providers. Dell and HPE have fiscal years that are distinct from calendar years, so IDC reconciles their numbers to the solar cycle for us. In the fourth quarter, IDC reckons that Dell had $3.99 billion in sales, down 9.9 percent, against 549,488 servers shipped out of its factories to the channel or to customers, down 5.4 percent. HPE, including its H3C partnership in China, actually saw shipment growth of 4.7 percent to 507,228 units and raked in $4.14 billion in revenues against that, down 3.4 percent, but giving HPE the mantle of top server shaker money maker in the quarter, the first time that has happened in a while. Lenovo had $1.42 billion in server sales, down 2.6 percent, Huawei Technology had $1.28 billion, up 1.8 percent, and we estimated that Cisco had just under $1 billion in sales, up 6 percent.
To be fair, OEMs had a pretty good fourth quarter in 2018, making it a tough compare to this time around, even as the ODMs saw a pretty steep decline, making it an easier compare.
Here's the table of server revenues, which we have had to estimate at a few points (shown in red bold) for the past two years, by source:
And here is the same data extended back to the belly of the Great Recession, presented in a chart:
Now, if you do a little math on these numbers from IDC, you will see that if you take out the effect of the ODMs, who together comprised 25.5 percent of the sales in Q4 2019, the rest of the server market was flat as a pancake revenue-wise. That was better than the 8 percent revenue decline in Q2 and the 6.6 percent revenue decline in Q3, but the big difference is really those incremental System z15 sales from IBM. Take those out and we are back in negative territory for X86 servers in the enterprise, service providers, and telcos once again. (At least as a group. Intel said that its telco and service provider customers from Data Center Group had 14 percent growth in its fourth quarter and enterprises were off 7 percent, which matches this period in server sales being analyzed by IDC.)
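For readers who want to check that arithmetic themselves, here is a minimal sketch, using only the IDC figures quoted above (total Q4 2019 revenue of $25.35 billion, up 7.5 percent, and ODM revenue of $6.47 billion, up 37.9 percent), of how the ODM share and the roughly flat ex-ODM market fall out of the numbers:

```python
# Back-of-the-envelope check using only the IDC figures quoted in this article.
total_q4_2019 = 25.35e9   # total server revenue, up 7.5% year over year
odm_q4_2019 = 6.47e9      # ODM direct revenue, up 37.9% year over year

odm_share = odm_q4_2019 / total_q4_2019
print(f"ODM share of Q4 2019 revenue: {odm_share:.1%}")        # ~25.5%

# Derive year-ago figures from the stated growth rates, then compare
# the non-ODM slice of the market year over year.
total_q4_2018 = total_q4_2019 / 1.075
odm_q4_2018 = odm_q4_2019 / 1.379
non_odm_growth = (total_q4_2019 - odm_q4_2019) / (total_q4_2018 - odm_q4_2018) - 1
print(f"Non-ODM revenue growth: {non_odm_growth:+.1%}")        # essentially flat
```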
Thanks in large part to competition between Intel and AMD in the server CPU racket and falling DRAM and flash memory prices, the average cost of an X86 server has been trending downwards, as you can see:
The X86 server platform still represents something north of 98 percent of shipments; X86 shipments grew by 12.9 percent to 3.35 million units (98.5 percent of total shipments), with revenues of $22.44 billion (89 percent of total revenues). Sales of non-X86 servers rose by 17.8 percent to $2.91 billion, and IBM's System z and Power Systems machines accounted for 78.8 percent of that non-X86 slice of the server pie.
The amount of compute we are consuming is growing a lot faster than the price is dropping these days, as we have calculated since the Great Recession:
The amount of compute acquired in recent years, mostly due to the hyperscalers and cloud builders, is enormous, as you can see. How much steeper can that curve get if there is a recession? We may find out.
There was not a peep out of IDC about the coronavirus outbreak, and that is to be expected, because the effects of the interruption to the supply chain for servers, as well as the impact on buying patterns for enterprises, governments, service providers of all stripes, hyperscalers, and cloud builders, remain unclear. And the reason why is simple: No one knows. The error bars on any thought experiment, much less simulation, about the global economy right now are too large because some, many, or all (take your pick) of the underlying variables that go into trends in the economy are changing.
What we can say honestly is this: If we do go into a recession, there is no question that some aspects of the platforms that we build will change. This has happened time and time again. Platform transitions are not caused by recessions, but they are often accelerated by them, particularly if companies can save money or do things they have always wanted to do or, better still, never even dreamed of doing. Let's walk through it.
The move to proprietary minicomputers was certainly helped by the recession in the mid-1970s, which lingered for a while, and then there was a mild one in 1980 and again in 1981 and 1982 after the Iranian Revolution in 1979. Again, an oil pricing shock jolted the system, although unlike last week, where we were worried that oil prices would be too low, in those two cases we knew they were going to be too high. IBM's and Hewlett-Packard's proprietary minicomputers took off then because companies wanted to computerize their back offices and factories, but they could not afford mainframes.
In the late 1980s, another oil price shock combined with irrational exuberance on Wall Street shocked the economy, and the RISC/Unix transition was there to benefit. The client/server revolution of the late 1980s to early 1990s was not only a reaction to a sluggish economy where central host systems were wildly more expensive than PCs, which were on everyone's desktops at work and which had to be made more useful for the sake of the IT budget, but it was also a precursor to the Internet age, where hybrid computing across PCs and servers became so normal that we don't really talk about it much anymore.
The dot-com bubble from around 1995 through 2001 coincided with the Unix revolution and then the rise of Intel iron and Linux and Windows Server, and this was an architectural change that was funded by fear of missing out and being stuck in a personal corporate recession as upstarts blew by you and left you in the ditch of economic ruin. We could argue about how much of the spending in the dot-com boom was wasted on hope and ideas, but the fear was running pretty high and companies like Sun Microsystems, EMC, and Oracle benefited mightily from all the hype and hope.
And after the September 11 attacks in the United States, we had another recession, and that really put the nail in the coffin of RISC/Unix systems and marked the rise of Intel X86 server chips and, within a few years, AMD Opterons; these systems rose until the Great Recession kicked in during 2009, when Intel essentially copied off AMD's homework and created the Nehalem Xeon architecture that we are still using predominantly in the datacenter today. When that last recession hit, VMware was there with a credible, enterprise-grade server virtualization platform that allowed companies to get their existing iron to run at higher utilization by converging the workloads on physical servers onto virtual machines on a physical server, and this helped save the day. AMD had made some architectural compromises and also had some bugs in its chips, and server makers were in no mood to be patient. They all fell in behind the Nehalems, and Cisco Systems went so far as to converge compute and networking, setting off the whole server industry on a tear for converged platforms at the same time Nutanix was being founded to offer us hyperconvergence, which emerged in 2011.
This time around, if a recession should come to pass (and we surely hope that it does not), then AMD, Ampere Computing, and Marvell might be the big beneficiaries. Not Intel.
Google Translate's real-time transcription feature is out now for Android – The Verge
Google Translate's new transcription feature, first demoed back in January, is out now for Android users as part of an update to the artificial intelligence-powered mobile app. The feature will allow you to record spoken words in one language and transform them into translated text on your phone, all in real time and without any delay for processing.
The feature will begin rolling out starting today and will be available to all users by the end of the week. The starting languages will be English, French, German, Hindi, Portuguese, Russian, Spanish, and Thai. That means you'll be able to listen to any one of those languages spoken aloud and translate it into any one of the other available languages.
This will work live for speeches, lectures, and other spoken word events and from pre-recorded audio, too. That means you could theoretically hold your phone up to computer speakers and play a recording in one language and have it translated into text in another without you having to input the words manually. Google told The Verge in January that it will not support the option to upload audio files at launch, but listening to a live audio source, like your laptop, should work as an alternative method.
Prior to this feature, you could have used Google Translate's voice option for turning a spoken word, phrase, or sentence from one language into another, including in both text and verbal form. But a Google spokesperson says that part of the app wasn't well suited to listen to a longer translated discussion at a conference, a classroom lecture or a video of a lecture, a story from a grandparent, etc.
To start, this feature will require an internet connection, as Google's software has to communicate with its Tensor Processing Units (TPUs), a custom type of AI-focused processing chip for use in cloud servers, to perform the transcription live. In fact, a Google spokesperson says the feature works by combining the existing Live Transcribe feature built into the Recorder app on Pixel phones, which normally works offline, with the power of its TPUs in the cloud, thereby creating real-time translated transcription so long as you have that internet connection to facilitate the link.
Google says the new transcription feature will be Android-only at launch, but the company has plans to bring it to iOS at some point in the future. It should show up as its own transcribe option in the app after you've updated it. Google also says you'll be able to pause or restart the transcription by tapping the mic icon, as well as change the text size and customize dark theme options in the Translate settings menu.
Data storage in the cloud: 5 ways to make it faster and cheaper – TechGenix
Given the way cloud computing is becoming ubiquitous and major vendors like Google, Microsoft, and Amazon are competing to stay ahead, it is safe to say that cloud-based services will become more accessible and cheaper to use. In the next few years, not just large organizations and governments but also numerous smaller businesses and individuals are expected to adopt the cloud. In such a situation, it becomes important to understand the basic factors that determine the cost and performance of any application you wish to host in the cloud. The key factors are explained here to help you make the right choice for faster and cheaper data storage in the cloud.
The main elements that work together to form the fundamental architecture of the cloud for any application are the frontend, the backend platforms, applications, databases, and software capabilities. Different cloud types, namely private, public, hybrid, and multicloud, have different combinations of these elements and user controls, making them suitable for different needs. For example, a public cloud is highly scalable, cost-effective, and highly reliable. In contrast, a private cloud is a bit more expensive but provides better security and customization. A hybrid cloud combines public and private cloud solutions into a single storage environment, while multicloud offers multiple public cloud services in a single heterogeneous architecture.
A business can have unique requirements for hosting its applications, so it is important to understand those requirements first. Then, based on requirements and budget, the business can identify the right cloud data storage architecture, one that delivers storage on demand in a scalable way.
Again, since businesses have different needs and offer various kinds of services to their customers, the nature of their files or data will also be different. For example, an enterprise that offers a streaming service may have a vast amount of media data, so it would probably need large volumes of storage and high bandwidth support.
After choosing the right architecture (public, private, or hybrid), you need to understand how the data will be stored, which can be defined at three levels: file, block, and object-based storage. File-based storage refers to the storage of an individual file (a document or spreadsheet) as a single entity. It can be used by applications that often need shared access to files and require a file system, and it works well for organizing data in a simple, arranged, and accessible way. Block-level storage is used in SAN (storage area network) architectures and refers to individual blocks of raw storage data. This format is convenient for enterprise applications like databases or ERP systems. Object-based storage is useful for unstructured data such as videos, audio, photos, and scanned images, and it suits modern applications built from scratch that require scale and flexibility. Selecting the right storage type can help improve the performance of the application.
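To make the object-storage model concrete, here is a minimal sketch, assuming the AWS SDK for Python (boto3) and a hypothetical bucket name, of storing and retrieving a scanned image as an object in Amazon S3; other object stores follow the same store-by-key pattern through their own SDKs:

```python
# Minimal object-storage sketch using boto3 (the AWS SDK for Python).
# The bucket and key names are placeholders, and credentials are assumed
# to be configured in the environment.
import boto3

s3 = boto3.client("s3")

# Each object is stored whole under a key; there is no file-system hierarchy,
# which is why object storage suits unstructured data such as media files.
s3.upload_file(Filename="scan-0001.jpg",
               Bucket="example-media-archive",          # hypothetical bucket
               Key="scans/2020/scan-0001.jpg")

# Retrieving it later:
s3.download_file(Bucket="example-media-archive",
                 Key="scans/2020/scan-0001.jpg",
                 Filename="scan-0001-copy.jpg")
```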
Some examples of data storage solutions provided by cloud vendors are Azure Storage and Amazon S3. You can get the required amount of storage capacity and other features by paying a monthly or annual subscription fee. For such subscription-based services, public cloud options are often considered economical, but some enterprises are cautious about using them because the stored data is sent outside their network premises. So, if the privacy of the stored data is a major concern, the organization can choose a private cloud, where the management of data always remains within the premises of the enterprise's network. Some organizations even use a hybrid cloud, in which some resources are handled in-house while others are handled by third-party cloud providers. Leading enterprise storage vendors who sell these types of services include Dell EMC Enterprise Hybrid Cloud, IBM Elastic Storage Server, and Microsoft Azure Stack, and there are many more to choose from.
Another major concern with data storage in the cloud is security. The cloud offers a less expensive alternative to expanding physical storage, but it also brings security-related concerns. Organizations must tackle challenges like security and performance to prevent any data breach or compromise.
To protect such sensitive information, one straightforward option is encryption. All data stored in the cloud is first encrypted so that if any hacker gets access to sensitive data, they won't be able to misuse it without knowing the correct decryption key. But this method has its own concerns: choosing an outdated or broken algorithm (MD5, for example, which is in any case a hashing algorithm rather than an encryption cipher) instead of a modern, well-vetted one may doom the entire effort. Encryption also impacts the performance of your application, as it may slow down transfer rates when the volume of data is high.
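As a rough illustration of client-side encryption before upload, here is a minimal sketch assuming the third-party cryptography package and a placeholder file name; real deployments would keep the key in a key-management service rather than in the script:

```python
# Encrypt data locally before it leaves your network, so a breach of the
# cloud store exposes only ciphertext. Uses Fernet (authenticated symmetric
# encryption) from the third-party "cryptography" package.
from cryptography.fernet import Fernet

key = Fernet.generate_key()      # in practice, store this in a key-management service
cipher = Fernet(key)

with open("customer-records.csv", "rb") as f:            # placeholder file name
    ciphertext = cipher.encrypt(f.read())

with open("customer-records.csv.enc", "wb") as f:
    f.write(ciphertext)                                   # this is what gets uploaded

# On retrieval, the same key decrypts the data:
plaintext = cipher.decrypt(ciphertext)
```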
Another major factor is high availability of data, which can be ensured by opting for geo-redundancy (physical separation of datacenters between geographic locations). This can ensure that your application will always be available, but it increases the overall cost and network complexity of the system. When opting for geo-redundancy, IT teams should also consider issues related to regulatory compliance, administration, and cost, as well as latency, performance, and resiliency requirements, before making such investments.
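A very naive way to picture geo-redundancy at the object level is the sketch below, which copies an object into a second bucket in another region using boto3; the bucket names are placeholders, and in practice you would normally rely on the provider's built-in cross-region replication rather than manual copies:

```python
# Naive geo-redundancy sketch: keep a second copy of an object in a bucket
# that lives in a different region. Bucket names and the key are placeholders;
# real deployments typically use the provider's built-in replication features.
import boto3

source_bucket = "example-data-us-east"
replica_bucket = "example-data-eu-west"
key = "reports/2020/q1-summary.pdf"

s3 = boto3.client("s3", region_name="eu-west-1")
s3.copy_object(Bucket=replica_bucket,
               Key=key,
               CopySource={"Bucket": source_bucket, "Key": key})
```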
There are many other factors you should look for with your cloud storage, such as automated upload and sync of data, auto-scaling options, or capping/notifications for max limits. Having auto-upload enabled may result in increased storage cost or may exhaust your existing data storage limit quickly. You also need to consider if your application requires auto-scaling of storage capacity (for example, automated subscription of extra storage spaces as soon as it reaches existing capacity). Turning on this feature by default may be a very convenient and hassle-free option for your application, but it may easily lead to high operational budgets. Setting up alerts or notification when storage reaches a threshold capacity gives you ample time to consider whether to expand capacity or clean up existing data and create additional storage space.
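One way to approximate the capping/notification idea is a periodic job along the lines of the sketch below, which totals the object sizes in a bucket and warns when usage crosses a threshold; the bucket name and the 500 GiB limit are placeholders:

```python
# Periodic usage check: total the bytes stored in a bucket and warn when
# usage crosses a threshold, so you can clean up or buy capacity before
# auto-scaling quietly inflates the bill. Names and limits are placeholders.
import boto3

BUCKET = "example-media-archive"
THRESHOLD_BYTES = 500 * 1024**3          # alert at 500 GiB

s3 = boto3.client("s3")
total = 0
for page in s3.get_paginator("list_objects_v2").paginate(Bucket=BUCKET):
    total += sum(obj["Size"] for obj in page.get("Contents", []))

if total > THRESHOLD_BYTES:
    # In production this would go to email, Slack, or a monitoring system.
    print(f"WARNING: {BUCKET} holds {total / 1024**3:.1f} GiB, over the cap")
```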
Besides the above-mentioned visible factors, there are several additional factors to look for, based on special needs. For example, your business may need to hold a two-day online event with thousands of customers joining in, or to provide time-based discount schemes to your customers. Such events may require additional storage space for your applications to cater to the peak load of traffic. You must ensure that the selected cloud storage provides support for these kinds of special requirements and that you've configured your resources appropriately.
An unsuitable cloud can increase your expenditure or negatively impact the performance of your application. Cloud providers offer many data storage services, and each one serves a different purpose. So, individuals or businesses should carefully analyze their requirements and then choose a suitable option. We have listed several considerations that can help you improve your data storage capabilities in the cloud, but awareness of the latest trends and offerings is surely a major factor in identifying the fastest and cheapest option for your cloud storage.
Spectro Cloud Launches With $7.5 Million to Help Enterprises Realize the Promise of Kubernetes – Container Journal
First Company to Hit Sweet Spot Between Managed Kubernetes Offerings That Are Restrictive and Complex DIY
SANTA CLARA, Calif., March 17, 2020 (GLOBE NEWSWIRE) -- Today Spectro Cloud, an enterprise cloud-native infrastructure company, emerged from stealth and unveiled its first product: Spectro Cloud. Spectro Cloud provides scalable, policy-based management of Kubernetes for enterprises that need a high degree of control over their infrastructure, whether it is in public cloud, private cloud, bare metal, or any combination. The product has been in private beta since January and will be generally available next quarter.
"Enterprises are struggling to realize the promise of Kubernetes due to its operational complexity. While the managed Kubernetes services solve this problem for those that want/need a completely pre-packaged approach, for the majority they can become too restrictive for the varied needs that enterprises have. Spectro Cloud has created a flexible solution that provides the scalable automation and ease-of-use of the managed services, but enables enterprises to retain greater control," said Roy Illsley, Distinguished Analyst, Enterprise IT, Omdia.
Spectro Cloud lets enterprises customize a Kubernetes infrastructure stack for specific business needs by using a declarative model to define cluster profiles. Spectro Cloud uses these cluster profiles to automate deployment and maintenance of clusters across the enterprise. Canary deployments, patterns for rolling out releases to a subset of users or servers, ensure Kubernetes upgrades don't break dependencies on other ecosystem components while keeping everything consistent with enterprise-wide standards.
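Spectro Cloud does not publish its profile schema in this announcement, so the sketch below is purely illustrative rather than its actual API: it shows the general idea of a declarative cluster profile applied first to a small canary subset of clusters and then to the rest. Every field and function name here is hypothetical.

```python
# Generic illustration of declarative cluster profiles plus a canary rollout.
# This is NOT Spectro Cloud's API; every field and function here is hypothetical.
import random

cluster_profile = {
    "kubernetes_version": "1.17.3",
    "cni": "calico",
    "monitoring": "prometheus",
    "policy": {"pod_security": "restricted"},
}

clusters = [f"cluster-{i:02d}" for i in range(20)]   # the managed fleet

def apply_profile(cluster: str, profile: dict) -> None:
    # Placeholder for whatever reconciles a cluster toward the declared state.
    print(f"reconciling {cluster} to Kubernetes {profile['kubernetes_version']}")

# Canary rollout: upgrade roughly 10% of the fleet first, verify, then do the rest.
canaries = random.sample(clusters, k=max(1, len(clusters) // 10))
for c in canaries:
    apply_profile(c, cluster_profile)

# ...after health checks pass on the canaries...
for c in clusters:
    if c not in canaries:
        apply_profile(c, cluster_profile)
```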
Sébastien Morissette, P.Eng., IT Architect Specialist Infrastructure, Security and IT Services at Intact Financial Corporation, Canada's largest provider of property and casualty insurance, said: "Our business units end up choosing different Kubernetes providers as they all have different niches and varying maturity levels in different fields like AI, machine learning, public cloud vs. on-premises offerings, etc. Operationally, this becomes a nightmare because IT needs multiple support structures to address the different infrastructure stacks."
Morissette continued: "A platform like Spectro Cloud addresses both the day 1 and day 2 operations of our Kubernetes ecosystem by normalizing the way IT deploys, operates and manages Kubernetes clusters over a broad spectrum of endpoints, both on premises and in the cloud. The control IT gets from Spectro Cloud's cluster profiles means they can customize offerings to each business unit while maintaining responsibility for overall operations."
"We've seen enterprises struggle with managed Kubernetes options, and we've also seen them waste time and money trying to do everything in-house. With Spectro Cloud, we're giving enterprises a way to run Kubernetes at scale without having to convert their entire way of working to whatever one large vendor thinks is correct. They've been burned by that approach before," said Tenry Fu, co-founder and CEO of Spectro Cloud. Fu most recently led the architecture for the Cisco CloudCenter Suite and Cisco Container Platform after his previous company, CliQr, was acquired by Cisco. CliQr's technology enabled applications to run more efficiently across public and private clouds.
Instead of converting their entire business to a single way of working, enterprises can experiment with new approaches at the pace that makes sense for them. Developers can work at the speed they need, while security and audit controls are embedded into the process, regardless of where clusters are deployed. Enterprises can make use of public cloud, private cloud, whatever suits their needs at the time, and change their mind as circumstances require.
With Spectro Cloud, the promise of Kubernetes can finally be realized.
Today Spectro Cloud also announced $7.5 million in seed funding led by Sierra Ventures with participation from Boldstart Ventures.
"The market for Kubernetes has crossed the chasm. What we've heard from our CXO Advisory Board of Global 1000 IT executives is that enterprises are still struggling with the operational complexity that comes with Kubernetes. Spectro Cloud's team has a deep understanding of the needs of enterprises and has found a unique way to make Kubernetes easy to use for its rapidly growing customer base," said Mark Fernandes, managing director at Sierra Ventures.
"From our dozens of conversations with Fortune 500s, it was clear that deploying Kubernetes was a top priority but there was still no solution that met their needs. Spectro Cloud is the first company that not only gives customers fine-grained control, flexibility and multi-cloud capabilities for their Kubernetes stack but also the ease of use and scalability of a managed SaaS platform. The team's deep background in cloud infrastructure (they founded CliQr, which was sold to Cisco) and their design-first ethos has been well received by large enterprises, and we're thrilled to be partnered with Spectro Cloud as they redefine the infrastructure ecosystem," said Ed Sim, founder and managing partner at Boldstart Ventures.
About Spectro Cloud
Spectro Cloud is an enterprise cloud-native infrastructure company that makes Kubernetes manageable at scale for enterprises that need superior control and flexibility. Spectro Cloud provides solutions that help enterprises run Kubernetes their way, anywhere. Spectro Cloud is founded by multi-cloud management experts and is backed by Sierra Ventures and Boldstart Ventures. For more information, visit https://www.spectrocloud.com or follow @spectrocloudinc.
Need to build a high-performance private cloud? You need the QNAP TVS-1282T3 Thunderbolt 3 NAS – ZDNet
The other day I covered Synology's new DS220j and DS420j NAS boxes. Great devices for those looking for an entry-level network attached storage box to create a private cloud. But some of you need more. More power. More storage capacity. More performance. More of everything.
If you need more, then you should take a look at the QNAP TVS-1282T3 Thunderbolt 3 NAS.
This is a beast of a system, with performance -- and price -- that isn't for the faint of heart.
Must read: The ultimate MacBook Pro accessory just got cheaper
There's a lot to the QNAP TVS-1282T3 Thunderbolt 3 NAS.
QNAP TVS-1282T3 Thunderbolt 3 NAS
Tech specs QNAP TVS-1282T3 Thunderbolt 3:
There are a lot of customization options there, from picking the processor you need, to the RAM options, to how to load the system out with drives.
The 2.5-inch SSD trays and 3.5-inch hard drive trays have been designed to be tool-less for easy installation and replacement, although if you want to dig deeper into the system you will need to wield a screwdriver. That said, the TVS-1282T3 has been designed to be taken apart and rebuilt, which is nice. Everything is well thought out, and engineered in such a way that makes it easy to take apart and put back together.
The NAS is also fast. Using Thunderbolt 3, the QNAP TVS-1282T3 can achieve file transfer speeds of up to 1,600 MB/s. The Thunderbolt 3 ports are also compatible with USB-C cables and devices and support 10 Gbps USB 3.2 Gen 2, allowing compatibility with a broad range of external drives and enclosures.
Thanks to QNAP's Qtier Technology, the TVS-1282T3 is smart, and features Auto Tiering that continuously optimizes storage efficiency across M.2 SSD, SSD and SATA drives by allowing the system to move frequently-used "hot" data to high-performance storage tiers and less-accessed "cold" data to lower-cost, higher-capacity drives. This allows you to get the very best out of your investment in drives.
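QNAP does not spell out Qtier's internal policy here, but the auto-tiering idea can be sketched generically: classify data by how often it has been accessed recently and migrate it toward faster or cheaper tiers accordingly. The thresholds, tier names, and data structure below are hypothetical, not QNAP's implementation.

```python
# Generic auto-tiering sketch (not QNAP's actual Qtier algorithm): move hot
# data toward fast storage and cold data toward high-capacity drives.
# Thresholds and tier names are hypothetical.
from dataclasses import dataclass

@dataclass
class Extent:
    name: str
    tier: str                 # one of "m2_ssd", "sata_ssd", "hdd"
    accesses_last_7d: int

def target_tier(extent: Extent) -> str:
    if extent.accesses_last_7d > 100:
        return "m2_ssd"       # hot data
    if extent.accesses_last_7d > 10:
        return "sata_ssd"     # warm data
    return "hdd"              # cold data

def rebalance(extents):
    for e in extents:
        dest = target_tier(e)
        if dest != e.tier:
            print(f"migrating {e.name}: {e.tier} -> {dest}")
            e.tier = dest

rebalance([Extent("project-video.mov", "hdd", 250),
           Extent("old-backup.tar", "m2_ssd", 0)])
```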
The system is surprisingly quiet in normal use, although the harder you push it, the more cooling it will require, and the noisier the cooling fans will be.
It's not cheap though. A diskless QNAP TVS-1282T3 with a Core i5 processor and 16GB of RAM will set you back over $3,300. But, if you need power and performance, it doesn't get much better than this.