Category Archives: Cloud Servers

OnShip Brings its Parcel & Freight Shipping Transportation Management Platform to the Cloud with Cameyo – Supply and Demand Chain Executive

Cameyo announced that The OnShip Group selected Cameyo to enable its new OnShip Cloud offering. Utilizing Cameyo, OnShip was able to web-enable its powerful shipping software suite, allowing its customers to access OnShip from the browser with no software to install.

OnShip includes all the features and functionality of Fortune 1000 shipping software in an easily configurable and affordable package, accessible to companies of any size. The premise-based version of OnShip is currently used by hundreds of businesses to streamline workflows, reduce expenses, and maximize bottom-line revenues. But increasingly, OnShip's customers needed the ability to quickly deploy OnShip to all employees who needed access, across a corporate office and in multiple shipping locations.

"We built OnShip to make powerful shipping software available for organizations of any size. When our customers began asking for any time, anywhere access to the software, we knew we needed to web-enable OnShip and make it available via the cloud," said Jay Rajcevich, President of The OnShip Group, Inc.

But to completely redevelop the application for the web would have taken months and required a significant financial investment. It also would have resulted in a web version without full feature parity with their powerful premise-based software, which was a non-starter for OnShip.

"Our customers choose OnShip because we offer the best of both worlds: full-featured, powerful shipping software at a purposely low price point. In other words, our customers get all the features they need at a fraction of the cost of competitors still holding on to their overpriced, one-size-fits-all pricing models. We were not willing to pursue any path to web-enablement that would compromise the user experience or deliver anything less than the full power of our existing software suite," said Rajcevich. "With Cameyo, we were able to make our full software version of OnShip available, across an enterprise, at any time, via a web browser. The customer experience doesn't change one bit, and there's nothing new they need to learn. That's critically important."

While OnShip helps simplify and reduce the cost of shipping for any organization, the launch of OnShip Cloud makes OnShip easily accessible for smaller organizations that do not have full-time IT support. Cameyos virtual application delivery platform enables OnShip Cloud to be utilized by smaller organizations with no installation or management required.

"Our premise-based shipping software is installed on a PC or server, behind corporate firewalls. However, most smaller shippers don't have dedicated IT resources, so that can sometimes present a challenge," said Rajcevich. "By utilizing Cameyo, we can instantly give smaller shippers access to the platform and have them up and running without any IT support, because nothing needs to be installed or managed locally on their servers or employee workstations. And we can do all of this within hours, not weeks."

"OnShip Cloud is the result of OnShip's commitment to constantly enhance its platform's capabilities without compromising its users' experience," said Andrew Miller, Co-Founder and CEO of Cameyo. "The launch of OnShip Cloud via Cameyo gives organizations of all sizes access to incredibly powerful and efficient shipping software, now simply and securely available. And as they continue to offer new capabilities over time, OnShip Cloud can provide users with instant access to those features without having to manage updates."

View original post here:
OnShip Brings its Parcel & Freight Shipping Transportation Management Platform to the Cloud with Cameyo - Supply and Demand Chain Executive

Couchbase Announces $105 Million Equity Investment Led by GPI Capital to Fuel Its Next Phase of Growth and Cloud Innovation – GlobeNewswire

Santa Clara, May 21, 2020 (GLOBE NEWSWIRE) -- Couchbase, the creator of the enterprise-class, multicloud to edge NoSQL database, today announced that it has completed a $105 million all-equity Series G round of fundraising. The latest funding round, led by GPI Capital, also included oversubscribed participation from existing investors Accel, Sorenson Capital, North Bridge Venture Partners, Glynn Capital, Adams Street Partners, and Mayfield. Proceeds from this financing will be used to expand product development and global go-to-market capabilities.

Today more than 500 enterprises, including over 30% of the Fortune 100, rely on the Couchbase NoSQL database. In its latest fiscal year, Couchbase delivered over 70% total contract value growth, 50%+ new business growth, and 35%+ growth in average subscription deal size. The company has nearly $100M in committed annual recurring revenue.

Couchbase is accelerating its trajectory and will use the funding to build further differentiation in already industry-leading products and services, while simultaneously expanding its customer-facing operations. The company will complement feature development in its best-in-class enterprise NoSQL server and mobile database platform with Couchbase Cloud, a fully managed Database-as-a-Service offering. Now, more than ever, enterprises are accelerating their migration to both NoSQL databases and cloud deployments to increase agility and flexibility while simultaneously reducing costs.

"To be competitive today, enterprises must transform digitally and use technology to get closer to their customers and improve the productivity of their workforces. To do so, they require a cloud-native database built specifically to support modern web, mobile and IoT applications. Application developers and enterprise architects rely on Couchbase to enable agile application development on a platform that performs at scale, from the public cloud to the edge, and provides operational simplicity and reliability," said Couchbase President and CEO Matt Cain. "More and more, the largest companies in the world truly run their businesses on Couchbase, architecting their most business-critical applications on our platform. This has become even more pronounced today as all companies are closely evaluating their digital strategies while carefully managing their capital allocation plans. Completing this funding round in the current climate is a testament to the importance of modern databases and the relevance of Couchbase as we continue our path to becoming a large, public company."

"We are excited to partner with Couchbase and view Couchbase Server's highly performant, distributed architecture as purpose-built to support mission-critical use cases at scale," said Alex Migon, Partner at GPI Capital and new member of the company's board of directors. "Couchbase has developed a truly enterprise-grade product, with leading support for cutting-edge application development and deployment needs. We are thrilled to contribute to the next stage of the company's growth."

Khai Ha, Partner at GPI Capital, added, "Having evaluated how companies innovate across a wide variety of industries and are increasingly turning to modern non-relational databases for their operational needs, we were pleased to see the increasing awareness of and growth acceleration at Couchbase. We look forward to serving as a partner to the company over the coming years."

The latest funding round occurs at a time when companies across all industries are looking to increase their investment in solutions that enable powerful digital experiences for both employees and customers. The reality is that the database market will undergo a generational transition over the next several years. The market is already worth over $100 billion, and the NoSQL database market in particular is projected to grow at a 32% CAGR from 2018 to 2023, according to 451 Research.

Since January, the company has been named to JMP Securities' Hot 100 List of Best Privately-Held Software Companies, earned a Top Rated 2020 Award from TrustRadius for Best NoSQL Database, and been named a 2020 Bay Area Best Place to Work. Couchbase's growing list of customers includes American Express, Cisco, Comcast/Sky, Disney, LinkedIn, Tesco and more.

About GPI Capital

GPI Capital is an alternative investment firm specializing in leading growth equity investments in technology, consumer and industrial companies. GPI focuses on identifying high quality businesses looking to accelerate growth and execute on transformational opportunities with an engaged and value-add partner. For more information about GPI, please visit http://www.gpicap.com/.

About Couchbase

Unlike other NoSQL databases, Couchbase provides an enterprise-class, multicloud to edge database that offers the robust capabilities required for business-critical applications on a highly scalable and available platform. As a distributed cloud-native database, Couchbase runs in modern dynamic environments and on any cloud, either customer-managed or fully managed as-a-service. Couchbase is built on open standards, combining the best of NoSQL with the power and familiarity of SQL, to simplify the transition from mainframe and relational databases. For more information, visit http://www.couchbase.com

Continue reading here:
Couchbase Announces $105 Million Equity Investment Led by GPI Capital to Fuel Its Next Phase of Growth and Cloud Innovation - GlobeNewswire

What are the different types of cloud load balancing? – TechTarget

Load balancing is the process of distributing network traffic across two or more instances of a workload. IT teams use load balancing to ensure each instance performs at peak efficiency, without any one instance becoming overburdened or failing due to excess network traffic.

Traditionally, a load balancer exists in a local data center as a dedicated physical network appliance. However, load balancing is more frequently performed by an application installed on a server and offered as a network service. Public cloud providers use the service paradigm and provide software-based load balancers as a distinct feature.

Once a load balancer is implemented, it acts as a network front end and often uses a single IP address to receive all network traffic intended for the target workload. The load balancer can evenly distribute the network traffic to each available workload instance, or it can throttle traffic to send specific percentages of traffic to each instance.
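The two distribution modes described above, spreading traffic evenly versus sending fixed percentages to each instance, can be sketched in a few lines of Python. This is an illustrative sketch only; the instance names and weights are hypothetical and not tied to any particular cloud service:

```python
import random

def pick_instance_even(instances):
    """Even distribution: every instance is equally likely to get a request."""
    return random.choice(instances)

def pick_instance_weighted(weights):
    """Throttled distribution: route a fixed percentage of traffic to each
    instance, e.g. {"a": 70, "b": 30} sends roughly 70% of requests to "a"."""
    names = list(weights)
    return random.choices(names, weights=[weights[n] for n in names], k=1)[0]

# Over many requests, the weighted picker approximates the configured split.
counts = {"a": 0, "b": 0}
for _ in range(10_000):
    counts[pick_instance_weighted({"a": 70, "b": 30})] += 1
```

Real load balancers layer health checks and connection tracking on top of a policy like this, but the core per-request routing decision is the same.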

With a load balancer, the target workloads can be in different physical locations. Cloud load balancing provides similar benefits, enabling users to distribute network traffic across multiple instances within the same region, or across multiple regions or availability zones.

Load balancing is categorized by the layer at which network traffic is handled, based on the traditional seven-layer Open Systems Interconnection (OSI) network model. Each layer corresponds to specific traffic types. Cloud load balancing is most commonly performed at Layer 4 (the transport or connection layer) or Layer 7 (the application layer).

For example, AWS' Network Load Balancer service operates at Layer 4 to direct data from transport layer protocols, including Transmission Control Protocol (TCP), User Datagram Protocol (UDP) and Transport Layer Security (TLS). Google Cloud Platform (GCP) refers to this as TCP/UDP Load Balancing, while Microsoft calls its Layer 4 service Azure Load Balancer. Since traffic is handled at a lower level of the network stack, Layer 4 load balancing provides the best performance. Cloud load-balancing services can handle millions of network requests per second and ensure low latencies. They are, therefore, great options for erratic or unpredictable network traffic patterns.

At the top of the network stack, Layer 7 handles more complex traffic, such as HTTP and HTTPS requests. Each of the major cloud providers has its own feature or service for this, such as AWS' Application Load Balancer, Google Cloud's HTTP(S) Load Balancing and Microsoft's Azure Application Gateway.

Since this traffic is much higher up the network stack, IT teams can implement more advanced options, such as content- or request-based routing decisions. This type of cloud load balancing works well with modern application instances and architectures, including microservices and container-based workloads.
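As a rough illustration of what content- or request-based routing means in practice, here is a hedged sketch of the kind of rule a Layer 7 load balancer evaluates per request; the pool names and path rules are hypothetical:

```python
def route_request(host: str, path: str) -> str:
    """Choose a backend pool by inspecting the request's host and path,
    information only a Layer 7 (application layer) balancer can see."""
    if path.startswith("/api/"):
        return "api-pool"       # e.g. a microservice handling API calls
    if path.startswith("/static/"):
        return "static-pool"    # e.g. container-based static content servers
    if host.startswith("admin."):
        return "admin-pool"
    return "web-pool"           # default pool for all other traffic

print(route_request("example.com", "/api/v1/orders"))  # api-pool
```

A Layer 4 balancer cannot make this decision, because it never parses the HTTP request; it sees only addresses, ports and protocol.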

The choice of a cloud load balancer should extend beyond traffic types alone. Cloud providers also differentiate load-balancing services based on scope and framework. For example, GCP suggests global load-balancing services when workloads are distributed across multiple regions, while regional load-balancing services are a good fit when all workloads are in the same region. Similarly, GCP suggests external load balancers when traffic is coming into the workloads from the internet and internal load balancers when traffic is intended for use within GCP.

Be sure to consider the broader suite of features and capabilities available with cloud load-balancing services. In particular, features can include support for a single front-end IP address, support for automatic workload scaling, and integration with other cloud services, such as monitoring and alerting.

Continue reading here:
What are the different types of cloud load balancing? - TechTarget

The Register calls for aid, and Microsoft’s Rohan Kumar will answer… our questions about SQL Edge and Azure Synapse – The Register

Build There was SQL Edge and Azure Synapse news at Microsoft's reimagined Build gathering this week, so The Register had a chat with corporate vice president Rohan Kumar about the company's database ambitions.

Having lurked in limited preview for a while, Azure SQL Edge has been pushed out to a broader audience as the platform creeps toward general availability. Running on x64 or ARM, with Windows or Linux doing OS duty, Azure SQL Edge is all about shunting the smarts of its bigger SQL Server brother to edge devices for more local processing and storage rather than maintain a constant, and potentially laggardly and expensive, connection to the mothership.

Kumar cited AI training as an example use case for the technology. "[We] see a lot of customers using our big data analytics in the cloud to train and run their machine learning models," he said, "and once they come up with a model that they believe is meeting the requirements, then they mass deploy onto the SQL Edge devices."

Those devices can then run in environments that are online, occasionally connected or fully offline.

"A lot of decisions can then be made very close to where the data originates," he added.

Kumar told us that the content of the wider public preview was pretty much what would go on to hit general availability, although the gang still had a way to go in order to reduce the current 500MB footprint down to 300MB. The limited capacity of edge devices, such as the popular Raspberry Pi, makes the bloat loss essential.

"This is not the SQL Server of 15 or 20 years ago," laughed Kumar. "It runs everywhere." Indeed, the first SQL Server this writer used was version 4.21 and it ran happily enough under NT with a massive 64 megabytes of memory. Still, it is heartening to see the footprint being reduced while other products continue to pile on the pounds.

Also hitting public preview, and some way from the grungy edge shenanigans of SQL Edge, was Azure Synapse Link, a "cloud native implementation of hybrid transactional analytical processing (HTAP)", according to Microsoft.

Kumar had previously shown off Azure Synapse, a revved-up and rebranded version of Azure SQL Data Warehouse (or "evolution", as Microsoft would have it), and its analytical capabilities at last year's Ignite event.

Synapse also gained some new toys in Public Preview at Build (something Kumar described as a "milestone"). The analytics service added new Synapse SQL features and, inevitably, built-in Power BI authoring. The widening of the preview also means that more users can check out if the team's performance and ease of use claims cut the mustard.

However, getting data from operational systems into analytical services has traditionally been a bit of a pain. The new Link functionality allows customers to hit that real-time transactional data in Azure without adding a burden to operational systems.

There are, of course, gotchas.

It is undoubtedly neat if you're in the Microsoft ecosystem, but right now Azure Synapse Link only supports Azure CosmosDB in the preview. Other Azure databases such as PostgreSQL, MySQL or even Azure SQL are still in the "coming soon" bucket, which is a little disappointing.

Kumar assured us that the gang were "actively working on both SQL and Postgres", but that "there is a certain set of priorities that every team works through" and CosmosDB came out ahead.

A frequent complaint levelled at Microsoft is that much of the new stuff depends on the Azure cloud while many customers still prefer to keep their data close at hand. Azure Stack is an oft-touted solution to the problem, but Kumar pointed to Microsoft's crack at multi-cloud, Azure Arc, for an indication of where Synapse might go next.

"That could be the next step in the journey," he said, "if a customer is willing to maintain connectivity with Azure, where we're able to get a certain amount of telemetry, then progressively we can add more and more capabilities."

The cloud-versus-on-premises argument continues to rage concerning SQL Server 2019. Could that be the last major on-premises release of the former flagship? "No, absolutely not," stated Kumar. However, he added: "Here's the thing: if you look at the kind of investments we are making in SQL Server, even on-prem, it's essentially not just to support the customers, but to prepare them for the transition to the cloud."

That transition continues to rumble on as Microsoft attempts to persuade those still on the elderly 2008 and 2008 R2 platforms to move to something more modern (faced with the IT curse that those boxes simply work, and most admins know that if something works, it is best not to go fiddling with it).

And the on-premises versus cloud balance? At present it stands at around half and half. There are an awful lot of SQL Servers still alive and well in data centres around the world.

The rest is here:
The Register calls for aid, and Microsoft's Rohan Kumar will answer... our questions about SQL Edge and Azure Synapse - The Register

How data centers will become automated and self-reliant – TechHQ

In the thick of the pandemic, TechHQ covered the story of the unsung heroes of the tech world: data center workers.

Stringent lockdown measures have impacted the daily workflow of various businesses, and only key essential workers are given the green light to enter offices and other facilities.

Ambiguity arises when contract data center workers are not given the same pass for movement. Yet these operators and contractors are the frontline workers when it comes to maintaining and keeping data centers running.

In this light, the significant role of data centers is highlighted more than ever, and some trends are taking shape due to the unique challenges brought on by the pandemic.

To understand the impact of COVID-19 and remote working on the evolution of the data center, TechHQ interviewed Lenovo DCG's APAC Director for Software Defined Infrastructure, Kumara Raghavan.

In the past, data center administrators would have to schedule downtime during the weekends and be on standby to do upgrades and updates when the power users were using the mainstream applications. Raghavan added that those were the norms by and large, especially if you have a quality hyperconverged infrastructure and software combination.

In today's climate, recurring themes Raghavan noted were automation and self-managing tech in data centers that minimize the reliance on human workers being physically present all the time.

"We have administrators who do their firmware updates while doing their shopping. It's a one-click upgrade.

"That's just an example of the various levels of automation that have crept into the data centers," he said.

Besides that, companies with a hybrid cloud are able to preset the provisioning for servers and applications for when a spike occurs. Citing this as a result of automation and scripting, Raghavan shared that this is the reality for companies with a high degree of resilience, due to the increasing level of automation baked into their data centers.
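The preset provisioning described here can be reduced to a simple scripted rule. A minimal sketch follows, assuming a hypothetical per-instance capacity figure and leaving the actual provisioning call to the platform:

```python
import math

def instances_needed(current_load: float, capacity_per_instance: float,
                     headroom: float = 1.2) -> int:
    """How many instances are needed to serve current_load with 20% headroom,
    rounded up to a whole instance."""
    return math.ceil(current_load * headroom / capacity_per_instance)

def scale_decision(running: int, current_load: float,
                   capacity_per_instance: float) -> int:
    """Positive result: scale out by that many instances; negative: scale in;
    zero: no change. A real script would feed this to a provisioning API."""
    return instances_needed(current_load, capacity_per_instance) - running
```

For example, during a spike from 300 to 600 requests per second with instances rated at 100 each, scale_decision(4, 600, 100) asks for ceil(600 * 1.2 / 100) - 4 = 4 more instances, with no human intervention.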

Furthermore, Raghavan shared that companies are focused on balancing the workload of data servers by diversifying their traffic through multiple data centers across specific geographical regions. In light of the pandemic, another consideration for companies would be to fulfill the demand from a rise in the number of employees working from home.

These may be some of the general trends observed, but the decisions and evolving role of data center management depend on a company's available resources, budgets, purposes, and, essentially, experience with the cloud and data centers.

Raghavan explained that there are two kinds of behaviors driving the way companies manage their data centers in the current environment.

Companies that are still at the early stages of digital transformation are most likely to pivot as they face the drastic surge in work-from-home capacity requirements.

The heightened demand includes the need for extra bandwidth, more compute power, or storage, but without change to existing budgets. Hence, companies have to rethink how best to cater to these growing needs without completely blowing out their cash reserves.

"Once they have planned their own transformation, I think it's easy for them to move capacity, and that is one of the facilities that comes from the principle of the hybrid cloud," Raghavan shared.

As for companies that didn't have the full capacity or were inadequately prepared for the upheaval of the pandemic, opening up capacity on public clouds seems a viable solution. Raghavan noted these companies want an immediate fix to their problems and are looking to the public cloud.

Despite the spike of interest in the public cloud as a temporary relief, Raghavan said, "we don't see this as a permanent move."

"I think the underlying concerns over why everything has not shifted to the public cloud, contrary to some expectations, haven't really gone away," he said.

Some of the considerations impeding mass migration to the public cloud relate to the control of data and costs. In the short term, a monthly bill, as opposed to a single capital expenditure, may be a more feasible option for companies, but in the long run, the cost proves to be quite expensive.

Raghavan also pointed to latency in the public cloud as a factor in the trend not taking off, especially in today's environment, where there is so much demand on the network.

"In sum, we see some temporary increase in public cloud utilization, but in reality, customers are now thinking through accelerating their digital transformation [...] and we see that they have better management of the workflow."

In essence, the management of data centers is noted to be influenced by emerging technologies such as the cloud, edge, and artificial intelligence (AI).

Raghavan shared that AI tools related to common management tasks will gain prominence.

Besides that, edge devices have the potential to help companies monitor and manage different zones of data centers. For instance, companies can change the settings of data center cooling based on the volume of workload, potentially saving huge amounts of the cost spent on cooling and driving efficiency. "The key is to try and reduce the dependency on humans when it comes to intervening, and that happens within typical servers," said Raghavan.

Adding to the mix, self-managing technologies and predictive technologies will be major players in the space. Cutting-edge technologies will be able to monitor a range of drive parameters. For instance, algorithms can track the performance of disks and predict malfunctions with a fair amount of accuracy. By doing so, data center administrators are alerted to possible failures and are able to act swiftly, minimizing disruption.
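A minimal sketch of that kind of predictive check, assuming a hypothetical per-drive error counter sampled at intervals (real systems would draw on SMART attributes and far more sophisticated models):

```python
def failure_risk(error_counts, window=5, threshold=3):
    """Flag a drive whose error count grew by more than `threshold`
    across the last `window` samples, hinting at a worsening trend."""
    if len(error_counts) < window:
        return False  # not enough history to judge a trend yet
    recent = error_counts[-window:]
    return (recent[-1] - recent[0]) > threshold
```

A drive whose error counter jumps from 0 to 8 over five samples would be flagged for replacement before it fails outright, while a drive creeping up by one error stays in service.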

In sum, the utilization of AI tools in data centers may not yet be scaled completely and widely to the level discussed, but it does indicate the direction data center management is heading.

View original post here:
How data centers will become automated and self-reliant - TechHQ

Masayoshi Son says AWS and Microsoft will buy more chipsets from the SoftBank Vision Fund-backed Arm, and not – Business Insider India

Out of the 88 companies that the SoftBank Vision Fund 1 has invested in, Son assumes that as many as 15 companies could go bankrupt, while 60 others could post average performance. That leaves 13 companies, which Son believes could end up being successful. British chipmaker Arm Holdings could be one among those 13 companies. In fact, Son considers Arm to be one of the important assets of his fund. And why won't he? Arm's shipments have been growing exponentially.

Amazon Web Services, the current market leader in public cloud services, has announced the Graviton2 chipset, based on Arm's designs, for its cloud infrastructure. According to SoftBank, Graviton2 is believed to be 65% faster than Intel's Xeon chipsets, and could help save up to 40% of the cost.

Arm is reportedly looking at an IPO in the next five years. However, that may make it difficult for SoftBank, which needs at least $3 billion every year in equity dividends alone. SoftBank reported its first annual loss in 15 years, posting a net loss of $894 million in FY2019. During the same period last year, it posted a net profit of $19.6 billion.

SoftBank acquired Arm in 2016 for $32 billion. In 2018, it sold 25% of the stake for $8 billion. In its latest earnings, the Japanese group revealed that its stake in Arm is valued at $25 billion, which implies an appreciation of $1 billion in the chipmaker's value.

Arm shipped 100 billion chipsets in its first 27 years; the next 60 billion came in three years

Established in 1991, Arm has shipped more than 160 billion chipsets to date. The first 100 billion shipments took 27 years, while the next 60 billion came in just three years.

More:
Masayoshi Son says AWS and Microsoft will buy more chipsets from the SoftBank Vision Fund-backed Arm, and not - Business Insider India

Chinese IPOs hang in the balance as Senate and Nasdaq change rules – Data Economy

Empowering India's Enterprise Cloud Adoption, the service provides customers an easy, scalable and secure way to connect to multiple Cloud platforms

GPX India Pvt. Ltd., a global data center and interconnection leader providing next-generation, carrier-neutral, cloud-agnostic services, announces the launch of GPX Open Cloud Exchange at its Mumbai data center campus. GPX is the first data center and interconnection provider in India to offer an Open Cloud Exchange service, which enables direct, private and secure connection to multiple Cloud Service Providers (CSPs) hosted inside the same GPX data center campus where the GPX Open Cloud Exchange is hosted.

GPX Cloud Exchange strengthens GPX's market-leading, high-performance Interconnection Ecosystem in the GPX Mumbai data center campus. Utilizing the GPX Open Cloud Exchange service, enterprises can seamlessly connect to multiple Cloud providers via a single port, accelerating their Cloud adoption and establishing enterprise edge nodes to optimize their hybrid-cloud and multi-cloud network strategies. With a burgeoning Digital Transformation underway in India, the nation's Cloud market is expected to grow threefold to $7.1 billion by 2022, according to a recent Nasscom report.

"Building upon GPX's theme of service neutrality, we are looking forward to making our customers' journey to the Cloud seamless. Currently, we have 8 CSPs present in GPX's Mumbai data center campus, three of which offer private direct connection services. The GPX Interconnection Ecosystem consists of 12 Carriers, 130+ ISPs, 4 IXPs, 8 CSPs, 9 CDNs and leading global content providers, with geographic proximity to subsea cables, offering the richest interconnection platform in India," said Nick Tanzi, President and CEO of GPX Global Systems, Inc.

As part of GPX Cloud Solutions, GPX has been offering the GPX Direct Cloud Connect service to enterprise customers for over four years. This service provides direct connection services to GPX's three CSP partners: AWS Direct Connect, Google Cloud Dedicated Interconnect and Oracle Cloud FastConnect, all of whom have an edge node hosted inside the GPX Mumbai data center campus.

Through the GPX Open Cloud Exchange service offering, GPX will provide an easy and scalable way for enterprises to offload their IT workloads to these CSPs, and to shift loads with high flexibility, with scalable interconnection capacity from 100 Mbps to 100 Gbps. GPX will be partnering with additional CSPs with direct connection capabilities to increase options for customers. In addition, as part of GPX Cloud Value-Added Services, GPX offers a Cloud Data Upload Service that enables customers to upload their data to AWS in a cost-effective and efficient manner.

"We have been using GPX's Direct Cloud Connect service for almost two years to connect to Amazon Web Services. GPX has taken the complexity out of Cloud connectivity and made our journey to the Cloud easy and efficient. We like GPX's pricing model of a combined charge for port and Cross Connect, with the option to upgrade rapidly. The GPX team is very process-oriented and supportive," said Bhisham Sharma, Manager, Network & Security, Jubilant Life Sciences.

"Our innovative Open Cloud Exchange service will empower Indian enterprises to leverage multiple Cloud platforms in an easy and cost-efficient way. GPX is the only data center and interconnection provider to offer an Open Cloud Exchange service connected to multiple CSPs hosted within the GPX Mumbai data center campus, therefore offering a highly reliable and scalable service," said Manoj Paul, Managing Director, GPX India Pvt. Ltd.

About GPX

Incorporated in August 2002, GPX develops and operates next-generation, private, carrier-neutral data centers and interconnection platforms in fast-growing commercial markets at cable landing stations in the African and South Asian regions. GPX's data centers are thriving carrier-neutral and connectivity-rich Internet Ecosystems, home to the largest carriers, content providers, cloud service providers, content distribution networks, Internet companies and enterprise edge nodes. It launched its Indian data center, Mumbai 1, in 2012 to provide Tier-IV Colocation and Interconnection Services. GPX's second data center in Mumbai was launched in 2018, which further expands this ecosystem, backed by state-of-the-art infrastructure, and connects to Mumbai 1 via GPX's Data Center Interconnect (DCI) service. Visit the website here.

Read more from the original source:
Chinese IPOs hang in the balance as Senate and Nasdaq change rules - Data Economy

Portworx upbeat on container storage revenues Blocks and Files – Blocks and Files

Portworx, the California container storage startup, today issued a so-called momentum release, boasting of customer and revenue growth.

As usual with US startups, the company does not mention actual figures, but by any reckoning it is a small fish in the data storage world: annual revenues, according to this possibly out-of-date estimate, are $14m. However, Portworx's bullishness is an indicator that the container storage market could be shaping up into serious money.

Portworx today said it had more than 145 customers, including 54 Global 2000 or government accounts. It reports 136 per cent growth in revenue year over year in Q1 2020, and 92 per cent revenue growth from Q4 2019. Thirteen sales were over $100,000, up from five sales over $100,000 in Q1 2019.

A 2019 Portworx survey showed 87 per cent of respondents said that they used container technologies, compared with 80 per cent in 2018 and 55 per cent in 2017.

Almost ninety per cent of enterprises are already running container technology, and more than half the containers they run are stateful, according to the 451 Research survey Voice of the Enterprise: DevOps, Security, AI/ML, and Cloud Native 2020. And those customers need storage.

Portworx emerged from stealth in 2015 and has bagged $55.5m funding over three rounds. Its software runs on commodity servers and aggregates their storage into a virtual SAN providing scale-out block storage. It provides storage from this pool for containers, at container granularity, and with a global namespace. File and object storage are on its roadmap.

Portworx's pitch is that the storage supplied to containers through an orchestration layer like Kubernetes should be containerised itself and also enterprise-class, with features like security, data protection, backup and recovery, disaster recovery, SLA management, and compliance.

It says traditional enterprise storage, with all those features, is suited to virtual server environments but not cloud-native ones, even if they have Kubernetes CSI plug-ins. Storage provision for containers has to be supplied at the speed and scale of container instantiation, deployment and removal. Portworx claims that only cloud-native storage, meaning its own cloud-native storage, can meet this need, not legacy SANs.
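To illustrate what provisioning "at the speed of container instantiation" looks like in practice, the sketch below shows how a Kubernetes user requests dynamically provisioned, container-granular storage declaratively. The provisioner name is Kubernetes' in-tree Portworx plugin; the class name, replication parameter value and volume size are illustrative only, not taken from any Portworx deployment:

```yaml
# StorageClass: tells Kubernetes how to dynamically provision volumes.
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: px-block                             # illustrative name
provisioner: kubernetes.io/portworx-volume   # in-tree Portworx plugin
parameters:
  repl: "2"                                  # two replicas per volume (illustrative)
---
# PersistentVolumeClaim: a per-container request; the volume is carved
# out of the aggregated pool the moment the claim is created.
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: app-data
spec:
  storageClassName: px-block
  accessModes: [ReadWriteOnce]
  resources:
    requests:
      storage: 10Gi
```

Because the claim is satisfied on demand from the virtual SAN, no storage administrator sits in the loop between a pod being scheduled and its volume appearing.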


New study Global Managed Servers Market 2019 | Growth Opportunities, Investment Feasibility, Market Share And Forecast 2025 – Cole of Duty

Global Managed Servers Market Research Report offers comprehensive knowledge, forecasts and statistical analysis of past, present and future industry conditions. The risks and growth opportunities associated with the Managed Servers market are highlighted in this study.

The Managed Servers study will drive investment decisions and strategic business plans for a successful and sustainable business. The market growth in terms of CAGR value is presented from 2019-2025. The high-level data pertaining to Managed Servers market trends, supply-demand statistics, production volume and market demand is evaluated. Also, the cost structures, the latest Industry plans and policies and management strategies are explained.

FREE Sample Report Copy Here: https://www.globalmarketers.biz/report/technology-and-media/global-managed-servers-market-report-2019,-competitive-landscape,-trends-and-opportunities/136852#request_sample

The Outlook Of Global Managed Servers Market:

Sungard Availability Services, Capgemini, Hivelocity Ventures, Atos, iPage, LeaseWeb, Albatross Cloud, XLHost, Hostway, Viglan Solutions, Easyspace, Hetzner, Tata Consultancy Services, Infosys, IBM

The Global Managed Servers Market data is represented in graphical format to ease the understanding. This report also lists the Managed Servers driving factors, growth and development opportunities and restraints. Additionally, the Global Managed Servers Market Report provides complete study on product types, Managed Servers applications, research regions and other sub-segments.

The company profile covers the end-user applications, sales channel analysis, competitive landscape view, and expansion plans. The industry plans & policies, value analysis, downstream consumers and Managed Servers market dynamics are presented. The sales value, industry share, growth opportunities and threats to the development are explained. The contribution of worldwide players to the Global Managed Servers Market and its impact on forecast development is analyzed in this study. The global position of Global Managed Servers Industry players, their profit margin, volume analysis, and market dynamics are studied.

Types Of Global Managed Servers Market:

Cloud-Based, On-Premise

Applications Of Global Managed Servers Market:

BFSI, IT & Telecommunication, Education, Government, Retail, Manufacturing, Consumer Goods, Energy & Utility, Others

Fill Out Inquiry Form For More Details: https://www.globalmarketers.biz/report/technology-and-media/global-managed-servers-market-report-2019,-competitive-landscape,-trends-and-opportunities/136852#inquiry_before_buying

Implemented Data Sources And Research Methodology:

The Global Managed Servers Market details are obtained via primary and secondary research techniques. The data is gathered from vendors, service providers, Global Managed Servers industry experts and third-party data providers. Also, various distributors, service providers and suppliers are interviewed in this study. Besides, Managed Servers Report also states the competitive scenario, SWOT analysis and market size.

The supply-demand side of Global Managed Servers Industry is analyzed by the data gathered from paid primary interviews and through secondary sources. The secondary research techniques involve the Managed Servers data gathered from company reports, consumer surveys, Government databases, economic and demographic data sources. Also, product sources like sales data, custom group data and case studies are analyzed.

Enquire Here For Customization: https://www.globalmarketers.biz/inquiry/customization/136852

There Are 8 Sections In Managed Servers Report As Follows:

Section 1: Objectives, Definition, Scope, Global Managed Servers Market Overview, Market Size Estimation, Concentration Ratio and Growth Rate from 2014-2025;

Section 2: Global Managed Servers Industry Segmentation by Type, Application and Research Region;

Section 3: Top Regions of Global Managed Servers Industry (North America, Europe, Asia-Pacific, Middle East & Africa, South America) with the Production Value and Growth Rate;

Section 4: The Changing Global Managed Servers Market Dynamics, Growth Drivers, Limitations, Industry Plans & Policies, and Growth Opportunities are Explained.

Section 5: Industry Chain Analysis, Manufacturing Base, Cost Structures, Production Process, Marketing Channels, and Downstream Buyers.

Section 6: The Top Managed Servers Players, Market Share, Competition, Market Size and Regional Presence is Specified.

Section 7: Forecast Market Trends, Consumption, Value, Production Forecast and Growth Estimates are Analyzed.

Section 8: Lastly, Vital Conclusions, Research Techniques, and Data Sources are Listed.

Thanks for reading. We also provide a report based on custom requirements from our clients.

Request for more detailed information (TOC and Sample): https://www.globalmarketers.biz/report/technology-and-media/global-managed-servers-market-report-2019,-competitive-landscape,-trends-and-opportunities/136852#table_of_contents


Get your head in the cloud: why cloud is crucial for sustainable business – New Zealand News Centre – Microsoft

The Covid-19 outbreak has reinforced two lessons for businesses: the importance of cloud-based services and the need to ensure their model is sustainable. Cloud platforms have really come into their own, providing accessibility for remote workers and customers while offering the ultimate scalability for businesses facing an uncertain future. But in a world where both the economy and the environment face unprecedented challenges, it is more vital than ever for business owners and CFOs to make informed business decisions.

Choosing the right cloud option can be daunting, and a truly sustainable business needs a clear understanding of the financial and business case drivers to help make the right decisions.

Changes are happening at a rapid pace in today's business environment, with many companies looking at downsizing and improving their remote working capabilities. Even beyond the extremes of a pandemic, acquisitions, new ventures, upturns and downturns all provide daily challenges for senior managers.

Over the last few years, the focus of major decision-makers has moved away from in-house tech infrastructure and experts and towards more flexible and agile cloud models and platforms like Microsoft Azure that best fit their way of working. Before making any investment in IT infrastructure or platforms, ask whether they can adapt if your business needs change in the future.

Nothing illustrates how quickly the business environment can change better than the infographic below.

The infographic shows the vast drop in consumption of electricity since Covid-19 hit New Zealand. Many businesses were unable to operate from their normal offices and stores. While most have adapted to working from home, the shutdown had a huge impact on commercial electricity consumption.

While major industry consumes a third of all our energy, the wider commercial sectors consume a further quarter of New Zealand's electricity demand. It is this quarter that we can reasonably assume to have almost completely evaporated when the lights went off.

Although the future is opening up and consumption is starting to rebound, businesses are now focused on new ways of consuming energy or delivering services.

Many CFOs and CIOs are trying to figure out this new way of working and how it affects their own businesses. The days of the road warrior salesperson may be coming to an end. How we engage with, incentivise and add value to our clients will look very different, as some may prefer a call from the office or a virtual meeting to a corporate lunch. Customers too will be feeling the impact, with online engagement becoming the predominant form of communication, and Microsoft Teams, Zoom, and Hangouts becoming an integrated part of working culture.

In addition, Covid-19 has caused a large shift in the global economy and supply chain. Secure production and supply are increasingly of greater importance than the cheapest or most efficient options, which has led to a greater focus on in-house production, multiple suppliers or regional stockpiling.

The result of all this is we can no longer trust the stability of the surrounding environment. While we may see some return to the old ways of working, some business processes will never be the same again. Business managers therefore need to be prepared to constantly review business models and consider whether their technology needs are still being met by their current system. This will help ensure their business remains sustainable in a world that can change drastically in what seems like a second.

Businesses are now furiously planning ahead for the future. Many will need to start scaling up or down to stay on track with fluctuating demand. Working through the challenges including staffing, supply chain logistics, stock management and managing demand will impact most businesses, not to mention the increasing expectation they will reduce their environmental footprint.

Kiwifruit producer Zespri is a classic example of how to approach this kind of situation. In 2010, it was dealing with Psa, a bacterial vine disease that caused entire crops of kiwifruit to fail. The popular gold kiwifruit was the most affected variety, spurring Zespri scientists to research a resistant strain.

They knew the situation could have gone either way: production could double if the new strain was successful, or ultimately halve if the research failed. Zespri was concerned about having enough computing power to cover existing demand while preparing for both the best- and worst-case scenarios.

With support from Total Utilities to assess its existing IT costs and consumption, Zespri was given a list of options that projected the business outcomes and costs for each. It could either continue managing and maintaining its own data centre, outsource its data to another local vendor, or switch to the public cloud so it could replicate the same platforms around the world. Zespri chose the latter, moving its infrastructure and associated platforms onto the Microsoft Azure Cloud.

This decision helped Zespri cover a multitude of potential problems by removing the financial risk of investing in its own tech infrastructure, allowing rapid expansion of a global supply chain and delivering detailed cost control mechanisms. Providing Zespri with financial operations toolsets allowed it to manage costs and consumption efficiently, an investment that was repaid as Zespri's research gamble paid off and the business grew in scale. Measures such as cost per tray of kiwifruit shipped have become an important way of tracking success. Zespri has used subscription cloud services as an effective way to manage, analyse and contain its costs ever since.

Not only does that mean Zespri is able to adapt its model to any scenario; not having a data centre on site also reduces both energy consumption and space. The business is therefore more sustainable in every sense of the word, something consumers around the world increasingly expect. Microsoft itself has committed to removing from the environment, by 2050, all the carbon it has ever emitted directly or through energy consumption, reinforced by its pledge to support New Zealand's sustainability goals through its new datacentre investment. As every organisation on the planet is challenged to review its impact on the environment, choosing greener IT options is a great way to minimise your footprint.

As the adage goes, the only certainty in life is change. While an upfront investment during a downturn can be daunting, the best way for any business to safeguard its sustainability long-term is to invest in an IT system that doesn't become obsolete, that meets modern expectations around environmental impact, and that allows workers the greatest accessibility in an era when many of us are working remotely. And that means embracing the cloud.

A resilient network and good technical support are essential to every modern business. There is an expectation for email, purchasing or sales automation to be working around the clock. Software updates, testing or hardware failures are no longer an excuse for disrupted services, which can instantly see customers go elsewhere.

Just five years ago, businesses were put off moving their platforms and operations to the cloud because they weren't sure about achieving the level of compliance and technical competence they needed to operate the systems. Every business we consulted felt the skills to manage cloud migration were a barrier to digitising their operations, and that only in-house experts could provide the support needed. That figure is now just 40 per cent. Trust in the cloud and in cloud providers to manage businesses and tailor services to their needs has skyrocketed.

Likewise, secure and reliable connections are more available than ever. While some thought the demand put on the internet during this period of working from home wouldnt hold up, the Covid-19 lockdown has proven how resilient the internet can be. It is a credit to our network providers, whether fibre, copper or mobile data-based, that these services have remained largely in place as millions of people have suddenly put tremendous demand on capacity.

This shows that network availability is no longer a constraint holding back businesses from placing their operations and services in the cloud. Those organisations using public cloud services are also better placed to combat the predatory players who sought to take advantage of the situation via scams and cyber-attacks, with regular security upgrades not available to those using an outdated server in the back office.

No longer can you place your trust in simply doing it yourself. Instead, managed cloud-based services can prove more secure and reliable. Security and connectivity are complex to establish and even more challenging to maintain, especially when scarce, skilled resources are in high demand.

Whichever cloud service you use, make sure to choose a partner or platform that can provide real-time analysis and reporting so you can see exactly how its working for your business and change your plan if you need to.

Governance, cost control and resource efficiency have always been top priorities for businesses. Now more than ever, businesses are focused on getting the best value for money from their technological solutions while growing a sustainable business. One thing cloud-based platforms do very well, thanks to their sheer accessibility and ease of use, is enable workers to use a huge range of resources and implement their own changes and updates. However, if unconstrained, this can result in wastage and bill-shock.

While governance and budget setting have provided the framework for cost control and planning for decades, moving to the cloud requires a new level of collaboration between Finance and IT. Ensuring these two teams remain communicative through the cloud integration process is vital to ensuring it runs smoothly and efficiently, that the right functionality is baked in from the beginning and there are no budget surprises.

As well as close co-operation, the key to ensuring total visibility and that cloud services are providing the best value, is using a monitoring service to provide real-time data on how cloud is being consumed across the business. The best services can illustrate exactly how resources are being used either energy or data and enable CFOs and other decision-makers to rapidly change to new plans that are better for the environment and the bottom line. Clear and regular reporting is essential and takes a great deal of time and effort out of maintaining good governance.
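The reporting loop described above can be sketched in a few lines: take a periodic cost export and roll it up per service so a CFO can see where consumption is going. The CSV columns and figures below are entirely hypothetical and do not reflect any particular cloud provider's export format:

```python
import csv
import io
from collections import defaultdict

# Hypothetical daily cost export; real providers offer similar CSV
# downloads, but these column names and numbers are illustrative only.
export = """date,service,cost_nzd
2020-05-01,compute,42.10
2020-05-01,storage,7.85
2020-05-02,compute,44.30
2020-05-02,storage,7.90
"""

def spend_by_service(csv_text):
    """Sum the cost column per service across the whole export."""
    totals = defaultdict(float)
    for row in csv.DictReader(io.StringIO(csv_text)):
        totals[row["service"]] += float(row["cost_nzd"])
    return dict(totals)

print(spend_by_service(export))
```

Feeding this kind of roll-up into a regular report is what lets decision-makers spot runaway consumption early and switch plans before the bill-shock arrives.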

Cloud is the future of many businesses, and in a time of so much change and uncertainty it is important to know your business has a model it can rely on to save costs and make governance far less onerous. To know that your business is making the most out of its cloud service, make sure you have reporting in place so you can accurately reflect usage in real time, limit your expenses and energy consumption, and create a business model that's truly sustainable for many years to come.


Total Utilities is a multi-utility consulting company and Microsoft partner with a client base of over 1,500 businesses throughout New Zealand, offering contract management and reporting, procurement and strategic consulting. Using world-class frameworks of business and financial data analysis, Total Utilities develops and implements solutions for organisations that are long-term oriented, sustainable, and fit for purpose.
