Category Archives: Cloud Hosting
If you run a Fast-Casual or QSR and want to improve efficiency, Redcat is on hand – Hospitality Magazine
Whether you're a restaurant, cafe, franchise or any other type of hospitality business, Redcat is the specialist hospitality platform that can meet your IT needs and help you grow your business.
Redcat improves QSR efficiency and connects you to your customers through online ordering, loyalty, iPhone and Android apps, integration with delivery partners (like Uber Eats) right through to self-service kiosks, QR code ordering, and the technology that helps QSRs run the kitchen.
Providing end-to-end integrated POS, Accounting & Business Management software solutions, Redcat allows you to run your business more efficiently.
So you can spend less time on laborious administrative tasks and more time focusing on your customers, providing them with a seamless experience across all of your touchpoints.
Our solution, Redcat Polygon, provides real-time reporting and full visibility of every aspect of your business. And with reliable and scalable AWS cloud hosting, you don't need to be tech-savvy to work with Polygon (although if you do need our help, round-the-clock support designed with the hospitality industry in mind is available).
To make things even simpler, there's only one number to call, and when you do, you'll speak to local people who have themselves worked in hospitality, so they'll know where you're coming from.
Our systems cover everything from ordering and deliveries to loyalty & marketing, through to operations & reporting. And our 3rd-party integrations let you manage rostering, inventory control and payments.
Our key functionalities for Hospitality include:
Loyalty & Marketing – with iOS and Android apps, your customers can use vouchers, buy and receive gift cards, use their digital wallets, view digital menu boards and more
Ordering – incorporating contactless table and mobile app ordering, along with Redcat online ordering, Google ordering and virtual brands
Delivery – integrating with a range of delivery partners including Uber Eats, Menulog, Deliveroo, DoorDash and others
3rd Party Integration – seamlessly integrating with partners like Tyro, eliminating the need for double entry, which speeds up transaction workflows and reduces keying errors
Operations & Reporting – whether you're using POS terminals, tablets, kiosks or a paperless kitchen, real-time reporting is at the ready
To make things even better, you'll only pay for what you need right now, and you can simply add additional functionality as your business grows.
Call 1300 473 322 or visit redcat.com.au/contact to request a demo and find out how you can take control of your business.
ShapeShift AG Forms FOX Foundation to Support the Decentralization and Success of ShapeShift DAO – PR Newswire
The majority of DAOs form from the ground up, as DAOs from the outset. ShapeShift made history as the first company to announce it would fully decentralize all corporate structure and open source all code in July of 2021. Venturing into uncharted waters, the establishment of the FOX Foundation was a solution that offered some degree of neutrality: the Foundation isn't led by either ShapeShift AG or the ShapeShift DAO, and instead has the singular charter of decentralizing assets and infrastructure as efficiently as possible, using the decentralized solutions that are currently or soon to be available. Once its mission is complete, the foundation will dissolve and distribute all remaining funds to the ShapeShift DAO treasury.
"ShapeShift has made strong progress in decentralizing already, such as launching the DAO, releasing the new open-source platform and implementing solutions such as SafeSnap, which give FOX Token holders autonomous control of the treasury," said Willy Ogorzaly, head of decentralization for the FOX Foundation. "While the necessary tooling and infrastructure are maturing rapidly, it will take time for the DAO to achieve its final, fully autonomous form. The FOX Foundation exists as a stepping stone in this journey, fulfilling the legacy centralized responsibilities while supporting the DAO in implementing sustainable, decentralized alternatives. I'm super excited about the stellar team we have at the foundation to take this over the finish line."
The FOX Foundation team includes:
About ShapeShift
Since 2014, ShapeShift has been pioneering self-custody for digital asset trading. Today's ShapeShift DAO is an engaged community of builders working to advance the state of crypto trading, investing and access to open, decentralized financial systems. Our web and mobile platforms empower users to safely buy, hold, trade, invest and interact with digital assets such as Bitcoin, Ethereum and Cosmos.
All Chains, All Protocols, All Wallets. Share Our Vision at app.shapeshift.com.
Learn more at ShapeShift.com
Media Contact: Lindsay Smith [email protected]
SOURCE ShapeShift DAO
How the Metaverse Is Giving Birth to Humans as a Service – ITPro Today
Although definitions of the metaverse are still being hashed out, most center on the idea that the metaverse is a virtual 3D world where people can interact. In other words, the metaverse is what you get when you fuse social media with virtual reality.
But here's another way of thinking about what the metaverse means: It's what happens when you combine cloud computing architectures with human beings. In other words, what the metaverse proposes to do is turn humans into a service by deploying people as virtualized resources, just as the cloud has done with servers and software.
Here's what this means, and why looking at the metaverse from this angle is important.
First, let me explain what I mean by terms like humans as a service, or HaaS, and how they relate to cloud computing.
The concept at the core of the cloud, of course, is that resources can be delivered "as a service" over the internet. Instead of standing up your own servers in your own data center, you can use cloud-based servers, which is an example of infrastructure as a service, or IaaS. Instead of installing and managing your own applications, you can use software as a service, or SaaS.
Viewed from one perspective, the metaverse does exactly the same thing to people: It makes them available as a hosted, fully managed service that anyone can consume via the internet.
More specifically, consider how the metaverse is similar to cloud computing in these respects:
The list could go on, but I hope the point is clear: The metaverse promises to transform humans and human relationships into abstract, scalable resources that can be consumed on demand with no strings attached.
Viewing the metaverse as the application of cloud computing architectures to human relationships is useful because it provides new perspective on both the positive and negative potentials of the metaverse.
On a positive front, the "cloudification" of humans promises to make it easier to interact with other people. Just as cloud computing brought world-class infrastructure within reach of businesses that might not otherwise be able to access it, a humans-as-a-service metaverse would extend access to human communities for people who would otherwise not engage with them due to geographic, political, cultural, or other barriers.
On the other hand, expect folks to criticize the metaverse for cheapening human relationships by, for example, placing constraints around how humans can interact, and how humans can represent themselves within virtual worlds. Such criticisms would not be unlike complaints that cloud computing limits the control that organizations have over their computing infrastructure: you typically can't access the bare-metal hardware, for instance, or control how SaaS applications manage your data.
Such worries about the metaverse could eventually push some early adopters of virtual communities to retreat from virtual communities back into the "real world." If that happens, it would be sort of like the cloud repatriation trend, which involves businesses migrating workloads from the cloud back on-premises.
To compare the metaverse to cloud computing is not to draw a mere analogy. In many cases, actual cloud infrastructure will be responsible for hosting metaverse communities, so there's already a clear technical link between metaverse and cloud.
I think, however, that it's valuable also to recognize the clear conceptual and cultural links between the metaverse and cloud computing. Ultimately, the metaverse stands to do to human beings what the cloud has done to servers and software: Make us available as an on-demand, scalable service.
If you thought the cloud computing revolution was over, that IaaS and SaaS were as far as cloud computing would evolve, just wait. The emerging metaverse suggests that there is a whole new chapter playing out in the cloud industry, and its focus is not on servers or code. It's on us.
The Envoy Gateway project wants to bring Envoy to the masses – TechCrunch
The Cloud Native Computing Foundation (CNCF) is hosting its semi-annual KubeCon + CloudNativeCon conference this week, so it's maybe no surprise that we'll hear quite a bit of news around open source cloud infrastructure projects in the next few days. But even a day before the event, the CNCF has a bit of news: it's launching a new project built around Envoy, the popular proxy originally developed and open sourced by Lyft in 2016. The new Envoy Gateway takes the core of Envoy and adds a simplified deployment model and API layer to make it easier for new users to get started with Envoy as an API gateway.
In addition, the CNCF is also merging two existing CNCF API gateway projects, Contour and Emissary, with Envoy Gateway. Both of these projects were already building out API gateway features for Envoy, but the CNCF argues that this new approach will allow the community to converge around a single Envoy-branded API gateway core. The new project, the organization explains in today's announcement, is meant to reduce duplicative efforts around security, control plane technical details, and other shared concerns, and to allow vendors to focus on building on top of Envoy and this new project instead of trying to re-invent the wheel.
The Envoy Gateway API will essentially be the Kubernetes Gateway API with Envoy-specific extensions, and the overall project aims to reduce the complexities of deploying Envoy as an API gateway.
"The flip side of Envoy's success as a component of many different architecture types and vendor solutions is that it is inherently low level; Envoy is not an easy piece of software to learn," the CNCF explains. "While the project has had massive success being adopted by large engineering organizations around the world, it is only lightly adopted for smaller and simpler use cases, where nginx and HAProxy are still dominant."
OAuth Security in a Cloud Native World – The New Stack – thenewstack.io
These days most software companies use cloud deployment with modern hosting capabilities that make everyone productive, from developers to DevOps and InfoSec staff.
Gary Archer
Gary is a product marketing engineer at Curity. For 20 years, he has worked as a lead developer and solutions architect.
However, not all cloud deployments are the same, and you still need to make sound choices to meet your architectural requirements.
In this article, I will explain how my thinking has evolved after working with various cloud deployment types and integrating security into many kinds of apps.
I will start with a discussion on APIs and then highlight the key supporting security components. One of the most important of these is your identity and access management (IAM) system.
Nowadays, most application-level components implement security using the OAuth family of specifications, which provides modern security capabilities for web apps, mobile apps and APIs. This provides companies with the most cutting-edge options for authenticating users with one or more proofs of their identity, and protecting data in APIs according to business rules.
The authorization server defined in the OAuth specification deals with authentication, token issuing and user management. It enables many security solutions, or flows, to be built over time. It's the heart of any modern IAM system.
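To make the token-issuing role concrete, here is a minimal sketch of a client obtaining an access token from an authorization server's token endpoint with the OAuth 2.0 client credentials grant. The endpoint URL, client ID, secret and scope below are hypothetical placeholders rather than values from any particular product.

```python
import requests

# Hypothetical authorization server details; substitute the values published
# by your own identity provider. The path follows common OAuth conventions.
TOKEN_ENDPOINT = "https://login.example.com/oauth/token"
CLIENT_ID = "example-client"
CLIENT_SECRET = "example-secret"

def get_access_token(scope: str) -> str:
    """Fetch an access token using the OAuth 2.0 client credentials grant."""
    response = requests.post(
        TOKEN_ENDPOINT,
        data={"grant_type": "client_credentials", "scope": scope},
        auth=(CLIENT_ID, CLIENT_SECRET),  # HTTP Basic client authentication
        timeout=10,
    )
    response.raise_for_status()
    return response.json()["access_token"]

if __name__ == "__main__":
    print(get_access_token("orders:read"))
```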
When I first started using cloud deployment, like many people, I was attracted by the thought of not having to host any backend servers and using the cloud infrastructure as a black box instead. For a single page application, this might lead to the following backend components that use PaaS:
Technologies like serverless enable you to develop APIs that use PaaS hosting. This can be a cost-effective solution for small startups or for developers hosting their own solutions. Meanwhile, developers can use the cloud provider's built-in authorization server when getting started with OAuth integration. This is sometimes referred to as Identity as a Service (IDaaS).
Your APIs or microservices are your core intellectual property (IP), and most companies implement them in a mainstream programming language, such as Java or C#. In doing so, organizations will want to leverage these technologies to their full capabilities without restrictions. In addition, code should be kept portable in case you want to use multiple cloud providers in the future. This can enable you to extend your digital solutions to emerging markets, where certain cloud providers may be blocked.
One downside to using PaaS for APIs is that you may run into limitations that lead to vendor lock-in, making it expensive to migrate APIs to another host in the future. Some compute-based API hosting may also have other limitations. For example, in-memory storage may be impossible if a system must spin up a new API instance for every request. These issues can add complexity and work against your technical architecture.
You must also control which API endpoints are exposed to the internet and secure the perimeter in your preferred way. A zero-trust approach is recommended for connections between APIs, as it can enforce both infrastructure and user-level security. Finally, APIs connect to sensitive data sources, so they should be hosted behind a reverse proxy or API gateway as a hosting best practice. This makes it more difficult for attackers to gain access to that data.
These requirements lead many companies to host APIs using a different cloud building block. Although virtual machines used to be more common, container orchestration platforms such as Kubernetes now provide the best API hosting features. This creates an updated deployment picture for APIs, where they are hosted inside the cluster while you continue to use PaaS for some other components:
Once API hosting is updated to use container-based deployment, there are no restrictions on code execution, and you have a portable backend that can be migrated between clouds. Your technical staff will also learn how to use modern patterns that deal with deployment and availability in the best ways. You then need to think more about other critical components that support your APIs.
As you integrate OAuth into your applications and APIs, you will realize that the authorization server you have chosen is a critical part of your architecture that enables solutions for your security use cases. Using up-to-date security standards will keep your applications aligned with security best practices. Many of these standards map to company use cases, some of which are essential in certain industry sectors.
APIs must validate JWT access tokens on every request and authorize them based on scopes and claims. This is a mechanism that scales to arbitrarily complex business rules and spans multiple APIs in your cluster. Similarly, you must be able to implement best practices for web and mobile apps and use multiple authentication factors.
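As an illustration of that kind of enforcement, the sketch below uses the PyJWT library to validate a JWT access token against the authorization server's published signing keys and then checks a scope before handing the claims to business logic. The issuer, audience, JWKS URI and the custom claim name are hypothetical, and a real API would normally run this in middleware rather than per endpoint.

```python
import jwt  # PyJWT, installed with the "crypto" extra for RS256 support
from jwt import PyJWKClient

# Hypothetical values; use the metadata published by your authorization server.
ISSUER = "https://login.example.com/oauth"
AUDIENCE = "orders-api"
JWKS_URI = "https://login.example.com/oauth/jwks"

jwks_client = PyJWKClient(JWKS_URI)

def authorize_request(token: str, required_scope: str) -> dict:
    """Validate a JWT access token, enforce a scope, and return its claims."""
    signing_key = jwks_client.get_signing_key_from_jwt(token)
    claims = jwt.decode(
        token,
        signing_key.key,
        algorithms=["RS256"],
        audience=AUDIENCE,
        issuer=ISSUER,
    )
    # Scopes are conventionally a space-separated string in the "scope" claim.
    if required_scope not in claims.get("scope", "").split():
        raise PermissionError(f"Missing required scope: {required_scope}")
    # Business rules can then use custom claims, e.g. a hypothetical "customer_id".
    return claims
```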
The OAuth framework provides you with building blocks rather than an out-of-the-box solution. Extensibility is thus essential for your APIs to deal with identity data correctly. One critical area is the ability to add custom claims from your business data to access tokens. Another is the ability to link accounts reliably so that your APIs never duplicate users if they authenticate in a new way, such as when using a WebAuthn key.
All of this leads to the preferred option of using a specialist cloud native authorization server. This is more efficient because the authorization server is hosted right next to your APIs. It also gives you the best control over security, limiting which authorization server endpoints are exposed to the internet.
As well as being the hosting entry point, the API gateway (or reverse proxy) is a crucial architectural component. The API gateway can perform advanced routing and security-related tasks like token translation before your APIs receive requests. By externalizing this security plumbing, you keep your API code simpler and more business-focused.
It is recommended to use the Phantom Token pattern so that internet clients receive only opaque access tokens. Unlike JSON Web Tokens (JWTs), which are easily readable, Phantom Tokens cannot reveal any private details that might disclose personally identifiable information (PII). When a client calls an API, the gateway can then perform introspection to translate from opaque access tokens to JWT access tokens. This flow is illustrated below.
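As a rough sketch of the gateway's role in this flow, the function below introspects an opaque token and returns the JWT to forward upstream. It assumes an authorization server that returns a signed JWT when the introspection request asks for the application/jwt content type, as the Phantom Token pattern describes; the endpoint and gateway credentials are placeholders, and in practice this logic would live in a gateway plugin rather than application code.

```python
import requests

# Hypothetical endpoint and credentials used by the gateway to authenticate
# its introspection calls to the authorization server.
INTROSPECTION_ENDPOINT = "https://login.example.com/oauth/introspect"
GATEWAY_CLIENT_ID = "api-gateway"
GATEWAY_CLIENT_SECRET = "gateway-secret"

def exchange_opaque_token(opaque_token: str) -> str:
    """Swap an opaque access token for the JWT to forward to upstream APIs."""
    response = requests.post(
        INTROSPECTION_ENDPOINT,
        data={"token": opaque_token},
        headers={"Accept": "application/jwt"},  # ask for the JWT form directly
        auth=(GATEWAY_CLIENT_ID, GATEWAY_CLIENT_SECRET),
        timeout=5,
    )
    if response.status_code != 200 or not response.text:
        raise PermissionError("Access token is invalid or expired")
    # The response body is the JWT, placed in the Authorization header upstream.
    return response.text
```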
There are many other gateway use cases, but a critical capability is running plugins that can perform both HTTP translation and routing as a single unit of work. There should be no limitations on the code you can write in the plugin. This is another area where cloud native solutions may provide better capabilities than the cloud provider's generalist solution.
The authorization server and API gateway are key security components, and some companies also use an entitlement management system for their business authorization. Meanwhile, additional specialized components are required to support your APIs. These must also be chosen wisely, based on the provider's capabilities and your requirements.
Each company must decide which third-party components they need. For example, it is common to host individual components for monitoring, log management and event-based data flows alongside your APIs. A possible setup is shown below:
PaaS is still an excellent choice for some component roles, though, and these days I follow a mix-and-match approach. Components that are a vital part of your API architecture should be hosted inside the cluster. I often prefer a serverless approach for other components when it is easier to manage.
The classic example where PaaS works better than CaaS is when delivering static web content to browsers. A content delivery network (CDN) can push the content to many locations at a low cost to enable globally equal web performance. This is more efficient than hosting CaaS clusters in all of those locations. See the Token Handler pattern for further details on using this approach, while also following current browser security best practices.
When companies are new to OAuth, there is often a fear that the authorization server could become unavailable, leading to downtime for user-facing applications. This concern remains valid, but when using cloud native APIs, you are already assuming this risk, and you should be able to follow identical patterns for third-party components. When using a cloud native authorization server, check that its deployment and availability behavior provides what you need.
Also, consider the people-level requirements. An InfoSec stakeholder will want a system with good auditing of identity events. These days DevOps staff should be able to perform zero-downtime upgrades of the authorization server or use canary deployment, where both old and new versions run simultaneously. The system should also have modern logging and monitoring capabilities so that technical support staff can troubleshoot effectively when there are configuration or connection problems.
Companies need to push their software down a pipeline, and discovering issues early on saves costs considerably. The benefits of a productive developer setup are often overlooked, but it is an area where cloud native provides some compelling advantages.
A developer, architect or DevOps person can run most cloud native components on a local computer. This can be a great way to first test the cloud native authorization server and API gateway and design end-to-end application flow.
Operational behavior such as upgrades can then be verified early, using a local cluster. Once the system is working with the desired behavior, you can simply update your Docker-based deployment, and the rest of the pipeline will also work in the same way.
Cloud native architecture provides the most portable and capable platform for hosting and managing your APIs, but keep an eye on the important security requirements. This will lead you to choose best-of-breed supporting components and host all of them inside your cluster. Choose an authorization server based on the security features you need and review it from an operational viewpoint.
At Curity, we provide a powerful identity and access management system designed to be cloud native from the ground up. It also integrates with modern cloud native platforms. As well as having rich support for standards, the system is based on a separation-of-concerns philosophy and is extensible to provide customers with the behaviors they need. There is also a free Community Edition, and it is trivial to spin up an initial system using a Docker container.
As a final note, the security components in your cloud native cluster will enable many powerful design patterns. Still, good architecture guidance is also a key ingredient when building cloud native security solutions. Our resource articles, guides, and code examples provide many end-to-end cloud native flows to help you along the way.
The New Stack is a wholly owned subsidiary of Insight Partners, an investor in the following companies mentioned in this article: Docker.
New 5 Star Community Center will support veterans and community groups – SC Times
Following a move, a former St. Cloud business building is now a new community center to be used by veterans groups, the St. Cloud Police Department and other community groups.
The former 5 Star Plumbing, Heating and Air location at 1522 Third St. N in St. Cloud is now 5 Star Community Center. Owner Dave Sherwood said the company donated the location after its move to Sauk Rapids.
Sherwood said he has several veterans in his family, and he saw them go through difficult times and need support. When he saw the work St. Cloud Stand Down does, he was impressed with the program.
"It was a really easy choice for me to make," Sherwood said of the building donation.
St. Cloud Stand Down is a non-profit veterans organization formed in 1998 that focuses on providing resources to veterans, including clothes and necessities, haircuts, meeting spaces, connections and help accessing assistance.
Bob Behrens, president of St. Cloud Stand Down, said the organization's presence at the building will be relatively minimal because the group already has a location at 724 33rd Ave. N, across from the former Electrolux building. However, he said the new community center will "be very well utilized" by the police department, veterans organizations and the community.
St. Cloud Stand Down owns the building, but the St. Cloud Police Department will manage the community meeting rooms in the basement and the Disabled American Veterans Organization will have a drop site for its clothing donation program in the back portion of the main floor. The main level also has space for veterans groups and community groups to meet, and will be managed by St. Cloud Stand Down.
St. Cloud Police Department Sgt. Tad Hoeschen said the department plans to use the space for community engagement work in the neighborhood. That could mean hosting neighborhood meetings, doing outreach with local businesses and community members, hosting parent education or working with other community partners.
"Honestly, the sky's the limit," Hoeschen said.
The community center won't be occupied like the COP House is, but Hoeschen said it's another opportunity to "meet people where they're at."
"This is just one more option for us," Hoeschen said.
The building was signed over to St. Cloud Stand Down in the summer, while Sherwood was in the hospital with a serious case of COVID-19 that he didn't expect to outlive. He signed the paperwork from the hospital, he said.
"I wanted to make sure no matter what happened to me, we got the place (taken care of)," he said.
Sarah Kocher is the business reporter for the St. Cloud Times. Reach her at 320-255-8799 or skocher@stcloudtimes.com. Follow her on Twitter @SarahAKocher.
OVH: The cloud should be open, reversible, interoperable – The Register
Interview OVHcloud is perhaps best known for cloud computing, hosting, and dedicated servers in its network of datacenters but the company has also made news in other arenas. For instance, taking on Microsoft in the European courts or the fire at one of its datacenters that destroyed customer data.
Backups? It's the cloud, right?
CEO Michel Paulin addresses the fallout from the March 2021 fire during a chat with The Register. "We have decided to increase the level of resilience," he says, "above all the regulations."
This comprises, according to Paulin, containerization, fire extinguishing systems, and batteries. The opening of a new datacenter is being delayed "just to readapt with higher standards of security and safety of our servers."
A recent report on the conflagration highlighted a lack of an automatic fire extinguisher system and a wooden ceiling that was only rated to resist fire for an hour. Expensive lessons, we'd wager, were learned.
Affected customers who assumed their data was safe were not pleased. A class-action lawsuit was filed over the situation and Paulin notes that more clarity is required over who is responsible for backups. "Many customers, especially small customers, were not really very interested in that and they believed 'because it's in the cloud... my backups are secured.'"
The devil will be in the legal detail. As for the assumption that backups were part of the deal in the time before the "incident," Paulin says: "In fact, it was not. So we decided to clarify that."
Backups are now the default and it is up to the customer to opt out.
It's a timely reminder to all those who have shoveled their data into the cloud over the years to check their contracts to see exactly what Ts&Cs they and their vendor have agreed.
OVHcloud's legal team was also recently called to action for an entirely different reason. Along with several other companies, OVH has filed a complaint with the European Commission's antitrust unit over how Microsoft runs its licensing operation. Namely, trying to run the Windows giant's services somewhere that isn't Azure can result in some hefty fees. Not ideal for competing cloud vendors.
"We see today that there are a lot of practises to avoid [allowing] the customer to choose, but also to use trojan horses to impose their cloud. And this is a type of claim we've made against Microsoft," says Paulin.
CEO Michel Paulin, left, and Hiren Parekh, VP for Northern Europe
"Because we are not on Azure, and we refuse to resell Azure, the conditions financially, legally, and technically are very different. We pay more... if we agree to resell Azure, most of them change; the pricing changes, it's less expensive..."
Unsurprisingly, Paulin also isn't keen on the antics of the other major cloud vendors, such as Google and AWS. "I think it's not good for customers. Because in the end, if only one or two companies or three companies have 100 per cent of the market, innovation, pricing, everything will be very bad."
In calendar Q1, AWS, Microsoft and Google sucked in 62 percent of customers' spending globally on infrastructure cloud services, according to Canalys.
Paulin worries that due to their size, dominance, and deep pockets, the big tech vendors "have a lot of capacity to impose their solutions on the rest of the world and on the rest of the customers," thus eroding freedom of choice and innovation in the long term.
"Each time there is a new player, which is threatening a little bit, this type of monopoly immediately buys them or they are killed," he tells us.
It all sounds awfully familiar to anyone who has followed the activities of some of the current cloud giants over the years, even those that seem to have a born-again attitude to openness nowadays.
"The cloud should stay open," says Paulin. "Reversible, interoperable."
By "reversible," the CEO refers to the occasionally heart-stopping fees demanded by vendors for extracting a customer's data. "Interoperable," however, brings the EU GAIA-X initiative to mind.
"The principle of GAIA-X," explains Paulin, "is to maintain openness. On paper, everybody is OK; the governments, the customers, and industry."
For the uninitiated, GAIA-X is Europe's data infrastructure initiative to take on the US and Chinese cloud businesses, the idea being to address the concept of digital sovereignty. It started off with 22 members and has swelled to 343, including Microsoft, Amazon, Google Ireland and other titans.
Paulin used the word "sabotage" in reference to some of the players in the project. "They don't want to make in practice the objective, which is to create an open cloud," he said.
As the charge to the cloud accelerates, there is a danger that by the time GAIA-X is ready to go, the world will have moved on to the extent that the initiative is irrelevant. "Yes, this is a threat," Paulin admits. "Speed of execution [and] speed of implementation will be a key factor of success."
"As we are many, it's difficult to find sometimes a consensus to be able to move forward quickly," he adds. And, of course, the processes could end up being delayed by other factors (or contributors perhaps more keen on the status quo), "to be sure that they will never be visible, and they will never be implemented."
According to Paulin, 80 percent of OVHcloud's revenues come from Europe, although the company is keen to expand outside of the region. He also reckons the future is not just hybrid, but also multi-cloud as customers minimize their exposure to local regulations.
"Multi-cloud is the fact that the customer will have the capacity to give workloads, storage, and to exchange data across the different vendors easily. And not very costly. And again, sometimes to be compliant with the constraints of data sovereignty or any type of regulations."
OVHcloud also makes its own servers, and while Paulin is keen to boast of the company's approach to sustainability thanks to its designs that require "zero air-conditioning in our datacenters," OVH is up against the supply chain issues faced by the rest of the industry.
"We have a capacity to switch our CPUs and GPUs and to offer a similar type of services but with different hardware, depending on the how the supplies are working.
"We have stocks," he adds, which goes some way to mitigate the supply chain risk and keep its French and Canadian production lines ticking over.
OVHcloud's stock price is currently not far off the level of its IPO last year, having risen as 2021 drew to a close and subsequently fallen back during 2022. It reported €382 million in revenues for its first half-year 2022 results, an increase of 13.3 percent year-on-year. The company subsequently raised its revenue growth guidance to 15-17 percent, from 12.5-15 percent. France, however, remains the big beast in terms of its revenues, accounting for nearly half at €190 million.
NZ could become land of the long ‘green’ cloud, NZTE report finds – Stuff
New Zealand is well-placed to build a green data centre industry providing cloud computing services to Australians, a report commissioned by NZTE has concluded.
But the report by management consultant Analysys Mason found high wholesale electricity prices could be a fly in the ointment, and that the country was likely to face competition from Tasmania, which shares much of the advantage of New Zealand's cool climate.
Technology giants Microsoft and Amazon Web Services (AWS) have budgeted billions of dollars to build new data centres in Auckland that will allow them to serve more of their customers locally, instead of from Sydney and further afield.
AWS has estimated it will invest $7.5 billion in its New Zealand facilities over 15 years.
But Analysys Mason also emphasised the potential for the South Island to become a hub for serving up cloud computing services to Australians from green data centres, following the expected availability of a new submarine fibre-optic cable network connecting Southland to multiple continents from 2025.
NZTE general manager of investment Dylan Lawrence said the report showed New Zealand, and specifically the South Island, are strong potential locations to build green data centres and serve overseas demand.
"Data centres all over the world are power hungry. Building them here in New Zealand means investors and centre operators can take advantage of our renewable energy sources," he said.
"In the South Island, they can also take advantage of the lower temperatures, which in turn helps lower the centres' emissions."
Analysys Mason estimated that co-location data centres set up to provide services to third parties could be raking in annual revenues of $898m in New Zealand by 2030, almost double their revenues last year.
It also forecast the power requirements of data centres in New Zealand would grow from 81 megawatts last year to 303MW by 2030, which would be equivalent to a little over half of the demand of the Tiwai Point aluminium smelter.
Lawrence said the further development of the industry would have a number of benefits.
More data centres here will help support our businesses to move to the cloud, improve connectivity and decrease our reliance on offshore centres. And it will provide a potential export opportunity and therefore income.
The data centre industry has been booming worldwide, creating weightless export opportunities for countries with cool climates and cheap renewable power.
New Zealand-founded technology company Datagrid is planning to build a huge data centre on a 43 hectare site in North Makarewa, near Invercargill, with an eye to serving the Australian market.
Its chief executive, Remi Galasso, is also the driving force behind the venture to connect Southland to Australia, North America and Southeast Asia with the Hawaiki Nui cable network, and is involved in a Chilean government initiative that could connect Southland to South America and potentially Antarctica.
Analysys Mason said Southland's 240 terabit-per-second connection on the Hawaiki Nui cable network would give it an advantage over Tasmania in becoming a hub for Australian cloud computing services.
Two existing internet cables connecting Tasmania had a limited capacity of 1Tbps each while a third cable had multiple reliability issues, it said.
Tasmania is the obvious regional competitor to Southland in the bid to build a bigger data centre industry, but both may share the spoils, a report suggests.
Those cables were also not connected directly to landing stations in Sydney or Melbourne.
But spot electricity prices were much lower in Tasmania at about 2.9 US cents (NZ 4.6c) a kilowatt-hour and all of its electricity was renewably generated, Analysys Mason said.
Spot electricity prices have hovered above NZ 20c/kWh in New Zealand for much of this year and spiked above 50c/kWh on Friday.
But data centres in New Zealand could still be competitive if they negotiated favourable supply contracts direct with generators, the consultant said.
Overall, it expected the South Island and Tasmania might share the spoils.
Galasso said Southland and Tasmania were the only locations in the region that had the advantage of large hydropower stations and cool weather.
The demand for electricity from the data centre industry was likely to be such that both would become hubs for cloud computing, he said.
Hawaiki Cable founder Remi Galasso's drive to get NZ better connected with subsea cables has been key to creating more opportunities for hosting cloud computing services in the country.
Analysys Mason's estimate that Kiwi data centres would need 303MW of power by 2030 could be an underestimate if the Chilean government's Humboldt subsea cable progressed and New Zealand became an important data exchange between South America and Singapore, he said.
Datagrid was finalising its design ahead of putting in resource consent for its North Makarewa data centre, he said.
British data centre company Lake Parime plans to open a smaller data centre facility in Central Otago by October that will be powered by Contact Energy's Clyde hydro scheme.
It is expected to be used in part for the environmentally-controversial practice of Bitcoin mining.
Another company, Grid Share, is involved in a similar initiative at Pioneer Energys Monowai Power Station.
A fourth business, local start-up T4 Group, is planning to build a mid-sized high-spec $50m data centre in Southland that would be used to house more critical cloud computer applications.
Significance of colocation in hybrid working – CXOToday.com
Colocation in itself is not sufficient for a complete digital transformation; full functionality now depends on a host of supporting services. Business owners may struggle to access data due to security breaches, disasters, or decentralised networks that disrupt business continuity. A range of supplementary measures therefore needs to be taken to ensure your hybrid model of work has uncontested data availability.
With geographically distributed teams, an added onus falls on technology to ensure virtual collaboration, asynchronous communication, and results-based tracking. In these scenarios, Colocation Data Centers play a critical role in handling Cloud storage, virtual meetings, VOIP traffic, and web-based applications.
Here are some of the crucial services to expect from your Colocation Data Center provider:
It is not a revelation that effective business continuity depends on creating ample redundancies within the Data Center ecosystem. Where IT teams are already overextended in the pandemic age, managing dispersed endpoints and increasingly complex cyber threats, Backup as a Service (BaaS) protects the organisation's information by replicating its entire contents to an offsite location. This makes an organisation less susceptible to evolving threats and frees up resources for revenue-generating operations.
In essence, organisations should look for:
When there are bundles of assets and information to protect, an organisation needs to develop a varied set of capabilities to keep up with today's DDoS and data loss threats. Security as a Service (SECaaS) consolidates them all to provide network security, vulnerability scanning, identity and access management, encryption, intrusion prevention, continuous monitoring and security assessments. It is, in effect, outsourcing the security of your company within a Cloud architecture. This ensures that you don't have to increase your costs while scaling your business in a hybrid model.
Essentially, organisations must look for:
Data loss is accelerating at an unprecedented pace due to both natural and man-made disasters. In the face of this uncertainty and its impact on a hybrid style of working, it is vital to document and deploy a Disaster Recovery (DR) solution that provides failover for your Cloud computing environment. Disaster Recovery as a Service (DRaaS) is an effective option that can help you save recurring costs, ensure a faster recovery time, build in-house controls, and get cohesive data security.
Organisations should look for:
Storage as a Service (STaaS) is Cloud storage rented from a Cloud Service Provider (CSP) for data repositories, multimedia storage, backup, and DR. It is designed to handle heavy workloads without disrupting ongoing business operations. A key benefit is the offloading of the cost and effort involved in managing a full-fledged infrastructure and technology, while giving you the bandwidth to scale up resources on demand. You can respond to market conditions faster and spin up new applications, which turn out to be service differentiators in the evolving digital landscape.
Organisations should look for:
Round-the-clock monitoring and issue resolution are the two indispensable aspects required to keep an organisation's remote environment functional. This includes software patching, reporting, and performance tuning on a need-to-implement basis. In totality, a third-party vendor within a Colocation Data Center can cover your applications, middleware, web hosting, data management and other mission-critical functions within the ecosystem, allowing you to access technology and scalability without breaking the budget.
To ensure 24/7 availability, organisations should look for:
A unified policy across Clouds is crucial to optimise the application experience and automate Cloud-agnostic connectivity. Hence, a Cloud networking solution should deliver extensive integrations, flexibility, scalability and security while promising a consistent Colocation environment that consolidates all enterprise functions. The ideal way to ensure seamless remote communication is to go for a low-latency route optimisation engine with maximum network performance.
Organisations should look for:
To sum up
The pandemic has given rise to this new trend of working from home and office simultaneously, thus calling for a better infrastructure to support the trend. As this model is taken up by most companies, a hybrid data center architecture is needed as well, particularly for colocation providers. It is only this way that they can adjust to the changing dynamics of digital India.
(The author is Mr. Nikhil Rathi, Founder & CEO, Web Werks, and the views expressed in this article are his own.)
Global Virtual IT Lab Software Market Forecasts, 2021-2022 & 2028: Benefits of Using Virtual Sandbox Tests & Increasing Spending on…
DUBLIN--(BUSINESS WIRE)--The "Virtual IT Lab Software Market Forecast to 2028 - COVID-19 Impact and Global Analysis By Deployment and Organization Size" report has been added to ResearchAndMarkets.com's offering.
The Virtual IT Lab Software Market is expected to grow from US$ 1,461.29 million in 2021 to US$ 3,174.88 million by 2028; it is estimated to grow at a CAGR of 11.7% during 2021-2028.
Benefits of Using Virtual IT Lab Software for Educating Employees on Advanced Tools/Projects
Several businesses use Virtual IT Lab Software to educate employees on new development practices and train them on advanced and modern tools/projects. The software is well-suited for employee training because it provides real-world resources without impacting the live applications, websites, or networks.
Moreover, practical training for employees is essential for the success of each organization as it increases the productivity of an employee. As technology has become an integral part of operations, employees must be trained to use the advanced tools effectively. Also, the virtual employment training approach is a cost-effective method.
Additionally, employee training extends an employee's knowledge, creates subject matter and in-house experts, reduces attrition rates and recruitment costs, and improves working productivity.
In the Virtual IT Lab Software Market ecosystem, some online course providers offer virtualized IT lab platforms as their training platform alongside a range of courses. Several core Virtual IT Lab Software products are used within companies for internal purposes. For instance, ReadyTech Corporation offers employee training with a virtual training labs platform.
ReadyTech's virtualized IT labs platform provides employees with a hands-on virtual training environment where they can practice actual on-the-job skills without jeopardizing the company's production systems. Virtual employment training with the help of Virtual IT Lab Software offers several benefits to the organization, which drives the Virtual IT Lab Software Market.
With digital transformation, the use of cloud-based platforms is increasing due to their simple deployment and reduced deployment time and cost. Moreover, internet infrastructure has matured in developed countries and is flourishing in several developing countries, allowing end users to access cloud-based platforms.
A few benefits of cloud-based Virtual IT Lab Software are the secure hosting of critical data, improved security and scalability, and quick recovery of files. The backups are stored on a private or shared cloud host platform. Cloud-based Virtual IT Lab Software also reduces repair and maintenance costs and enhances customer satisfaction.
Therefore, due to the multiple benefits of cloud-based Virtual IT Lab Software, the adoption of the software is increasing by large enterprises and small & medium enterprises (SMEs) for educational purposes, which fuels the Virtual IT Lab Software Market growth.
Key Findings of Study:
In 2021, the cloud-based segment led the Virtual IT Lab Software Market. Virtual training has a promising future. As technology becomes more integrated into daily life and work, more advancements and changes are expected in the near future. The cloud-based segment generated a significant share of revenue in 2021.
Most virtual training labs are cloud-based, allowing learners to be educated from any location in the world, in any time zone, at any time, and at any speed. Users do not need an IT team to set up a complex training lab with virtual training labs. Instead of setting up each workstation with the appropriate software and files for in-person instruction, the virtual lab is set up once in the cloud and easily accessible by learners anywhere, contributing to the Virtual IT Lab Software Market growth.
Based on organization size, the Virtual IT Lab Software Market is segmented into SMEs and large enterprises. In 2021, the large enterprises segment led the Virtual IT Lab Software Market. The growing popularity among large enterprises and the benefits associated with the utilization of virtual IT labs are the major factors anticipated to drive the segment.
Virtual IT Lab Software for large enterprises offers an integrated corporate training platform for every aspect of the business. Large enterprises can educate their employees online faster and more efficiently. Companies can use this software to intelligently simplify, automate, and optimize their entire learning process.
It enables businesses to teach their personnel by building online courses, encouraging active learning, and increasing course completion rates in a simple and user-friendly environment.
In 2021, North America accounted for the largest share in the global Virtual IT Lab Software Market.
Key Market Dynamics
Market Drivers
Market Restraints
Market Opportunities
Future Trends
Company Profiles
For more information about this report visit https://www.researchandmarkets.com/r/yzcnqx