
What Is Cloud Automation?: a Quick, All-Around Guide – TechDecisions

Automation is about making life easier, as simple as using a free meeting notes template.

For large-scale companies, automation reduces errors, saves time, and performs repetitive tasks. This frees up their human employees to focus on more important jobs.

One area where it's highly effective is in the cloud.

Companies like Netflix and Amazon, and even those who buy vanity numbers, use public clouds for their services. It allows them to innovate without being set back by legacy technology.

But, as organizations grow, the number of cloud-based tasks increases. It can take an army of humans days to complete them all. Cloud automation will take only minutes.

Solutions like Google Cloud RPA or any alternatives to WebEx can empower enterprises. Yet, cloud automation is often lacking in enterprise environments. It can sound intimidating.

In this guide, we'll discuss this technology, answering two important questions.

What is cloud automation, and how can it help you reach the full potential of the cloud?

Cloud automation uses automated tools to carry out workflows in a cloud environment. These workflows would otherwise occur manually.

It's like on-premises automation, and many of the same tools are used for both. But there are specialized tools for the cloud.

Unlike on-premises automation, cloud automation focuses on automating services and virtual infrastructure. It's also suited to handle the scalability and complexity of the cloud.

Automation isn't built into the cloud, so it can be costly and hard to set up. It requires expertise to carry out, but it's a crucial element for any cloud strategy.

Good cloud infrastructure encourages automation. This enables you to get the most value from your cloud services.

Cloud automation means cloud resources are used efficiently. It reduces manual workloads, minimizes errors, and improves security.

Automation, together with a B2B SEO agency, provides big opportunities for scalability, agility, and efficiency. In simple terms, it's able to perform complex tasks with the click of a button.

This speeds up how well your organization can adapt, which is helpful in today's business climate. With cloud automation, you can respond to challenges and innovate faster.

Thus, it's essential to understand what you can automate and which software you'll need, from the best affiliate marketing tools to the most reliable data backup software, to help achieve your aims.

From this, you can build an effective cloud strategy and accelerate digital transformation.

We've answered two questions: what is cloud automation, and why should you use it? Now let's look at a few examples of how it can be applied.

Reducing manual workloads is a crucial part of any automation. Infrastructure provisioning is a typical use case when it comes to cloud automation.

Imagine you want to set up a collection of virtual servers. Configuring them one-by-one would take a team a long time.

Cloud automation tools can perform this task by automating template creation. The templates define each virtual server configuration. The tool can then apply them.

Tools for this type of infrastructure provisioning are known as infrastructure-as-code (IaC) tools. They can also be used to configure other types of cloud resources.

This type of automation allows organizations to scale their cloud infrastructure quickly. It gives them the added advantage of agility, and the ability to innovate more rapidly.
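To make the idea concrete, here is a minimal sketch of template-driven provisioning, assuming an AWS environment and the boto3 Python SDK; the AMI ID, instance type, and tags are illustrative placeholders rather than anything from the article.

```python
# Minimal sketch of template-driven provisioning with boto3 (AWS SDK for Python).
# The AMI ID, instance type, and tags below are illustrative placeholders.
import boto3

# One "template": a single definition reused for every server in the batch.
SERVER_TEMPLATE = {
    "ImageId": "ami-0123456789abcdef0",   # placeholder AMI
    "InstanceType": "t3.micro",
    "TagSpecifications": [{
        "ResourceType": "instance",
        "Tags": [{"Key": "role", "Value": "web"}],
    }],
}

def provision(count: int):
    """Launch `count` identical virtual servers from the shared template."""
    ec2 = boto3.client("ec2")
    response = ec2.run_instances(MinCount=count, MaxCount=count, **SERVER_TEMPLATE)
    return [instance["InstanceId"] for instance in response["Instances"]]

if __name__ == "__main__":
    print(provision(3))   # three identically configured servers, one call
```

The same pattern scales from three servers to hundreds: the template is written once, and the tool applies it as many times as needed.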

Some organizations could have hundreds of staff members, each requiring different privileges.

Setting up each policy manually will be a drawn-out process, long and potentially error-ridden. As employees come and go, managing access rights to cloud resources will be difficult.

Like in the example above, cloud automation can create templates. This time it's for Identity and Access Management (IAM). These templates set up the user roles within your cloud environment.

This can be integrated into a central enterprise directory service. With it, identities across both cloud and non-cloud resources can be managed. This could be helpful for organizations that use mdm software solutions.

Using automation here goes beyond organizational agility. Onboarding new team members and modifying roles become easier and more efficient.

Not only does this save time, but it ensures a greater level of security.
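As a rough illustration of what such a template might look like in code, the sketch below uses Python and boto3 to create a role from a shared trust policy and attach a managed policy; the role name and policy ARN are hypothetical examples, not details from the article.

```python
# Minimal sketch of template-driven IAM setup with boto3.
# The role name is an illustrative placeholder.
import json
import boto3

# Shared trust-policy "template" reused for every role created this way.
ASSUME_ROLE_POLICY = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"Service": "ec2.amazonaws.com"},
        "Action": "sts:AssumeRole",
    }],
}

def onboard_role(role_name: str, policy_arn: str):
    """Create a role from the shared trust template and attach one managed policy."""
    iam = boto3.client("iam")
    iam.create_role(
        RoleName=role_name,
        AssumeRolePolicyDocument=json.dumps(ASSUME_ROLE_POLICY),
    )
    iam.attach_role_policy(RoleName=role_name, PolicyArn=policy_arn)

# Onboarding a new team's access becomes one repeatable call.
onboard_role("analytics-readonly", "arn:aws:iam::aws:policy/ReadOnlyAccess")
```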

It's not unusual for companies to use many private and public clouds at once.

In this situation, cloud automation is crucial. It allows teams to deploy workloads to many clouds at once. They can then manage them from a single interface.

Organizations with a multi-cloud strategy can increase efficiency with centralized, automated tools.

These are a few common cloud automation examples. Several other typical tasks can be automated in the cloud:

Cloud automation can be tricky to set up, so why would you go through with it at all?

Repetitive tasks are tedious tasks. By automating low-level manual processes, your staff saves time. With less pressure, they're able to focus on more exciting projects and tasks.

The more people who can access a sensitive task, the more likely an accidental security leak will be. Ransomware attacks have also become a cause for concern. Automation reduces security vulnerabilities like these by limiting non-essential access.

With cloud automation, tasks are carried out faster, with the same high quality. What once took several days might now take a few minutes.

Human error is as sure as death. Using enterprise robotic process automation reduces errors for non-cloud processes. The same applies in the cloud.

As long as automation rules are correctly configured, errors will be a rarity. Constant oversight is also no longer required.

You can manage a small environment without automation.

But, if you want to grow and scale your business, cloud automation is a necessity. After all, more users mean more tasks.

Cloud automation and orchestration are often used as if they mean the same thing. But there is a difference when it comes to cloud automation vs. cloud orchestration.

Cloud automation refers to the automation of a single task. Cloud orchestration refers to the automation of a host of tasks. It involves the automation of workflows across separate services and clouds.

In simpler terms, different cloud automation tasks can be coordinated and automated with orchestration.

For example, imagine you want to install an operating system on a server. The cloud automation steps you might use could be:

With cloud automation, these are four distinct tasks. Each task has to be done individually and in the right order.

With orchestration, these tasks would be combined into one workflow. This permits the entire server setup to be automated in the correct order. It's like pressing one button instead of four.
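Since the article's four steps aren't enumerated here, the sketch below uses hypothetical stand-ins (provision, install the OS, configure, verify) purely to show the difference: automation gives you the individual tasks, and orchestration chains them into one ordered workflow.

```python
# Minimal sketch of orchestration: individual automation tasks chained into one
# ordered workflow. The four steps are hypothetical stand-ins for the article's
# server setup example.
def provision_server():      print("server provisioned")
def install_os():            print("operating system installed")
def apply_configuration():   print("configuration applied")
def verify_health():         print("health checks passed")

# Cloud automation: each task can be run on its own, in the right order.
INDIVIDUAL_TASKS = [provision_server, install_os, apply_configuration, verify_health]

def orchestrate(tasks):
    """Cloud orchestration: one 'button' that runs the whole workflow in order."""
    for task in tasks:
        task()

orchestrate(INDIVIDUAL_TASKS)
```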

Cloud orchestration is essential for an enterprise setting. Here there are often too many cloud automation processes to manage on an individual basis.

Using a mix of both cloud automation and orchestration is vital. Used in conjunction, they increase productivity and create efficient workflows.

It's particularly useful for multi-cloud solutions. Here, you may need to coordinate tasks across different services, teams, and environments. This would also reduce costly errors.

Cloud automation is in many ways a sub-category of cloud orchestration. You can have cloud automation without cloud orchestration, but cloud orchestration needs automation.

The tools available for cloud native automation could fill a book. But they can be split into two distinct categories:

These automation tools are built into their respective platforms. As a result, they offer the highest level of integration. New cloud functionalities are thus immediately available.

Some examples are AWS CloudFormation and Azure Resource Manager.
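As a hedged sketch of how a platform-native tool is typically driven, the snippet below submits a tiny CloudFormation template through boto3; the stack name and the single S3 bucket resource are placeholders chosen for illustration.

```python
# Minimal sketch of driving a platform-native tool (AWS CloudFormation) with boto3.
# The stack name and template body are illustrative placeholders.
import json
import boto3

TEMPLATE = {
    "AWSTemplateFormatVersion": "2010-09-09",
    "Resources": {
        "AppBucket": {"Type": "AWS::S3::Bucket"},   # one example resource
    },
}

cfn = boto3.client("cloudformation")
cfn.create_stack(
    StackName="demo-stack",
    TemplateBody=json.dumps(TEMPLATE),
)
# The platform now provisions everything declared in the template.
```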

As with anything though, there is a downside.

These tools generally only support the clouds that they are a part of. You can't apply them to any other cloud, and you're very much locked into your platform.

Independent vendors normally create third-party tools that are usable on any platform.

In general, these tools will work with any public, private or hybrid cloud platform. They are often open-source, though there are commercial options available.

They can have extra features and versatility that built-in tools lack.

Some examples include Puppet, Ansible, Chef, Salt, and HashiCorp Terraform.
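These tools are normally driven through their own CLIs or APIs rather than a cloud console. As a small sketch, assuming the Terraform CLI is installed and a configuration lives in a local ./infra directory, the standard init/plan/apply workflow can be wrapped from Python like this:

```python
# Minimal sketch of wrapping a third-party IaC tool (Terraform) from Python.
# Assumes the Terraform CLI is installed and ./infra holds the configuration.
import subprocess

def terraform(*args, workdir="./infra"):
    """Run a Terraform subcommand and fail loudly if it errors."""
    subprocess.run(["terraform", *args], cwd=workdir, check=True)

terraform("init")                  # download providers and set up the backend
terraform("plan", "-out=tfplan")   # compute the changes and save the plan
terraform("apply", "tfplan")       # apply exactly the saved plan
```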

Unfortunately, these automation tools lag in implementing new functionality. As they aren't as integrated, they're often playing catch-up.

So, when a cloud provider introduces a new feature, it could be a while before you can use it.

As with any decision in the business world, the one you make will be based on your needs. Sticking to mature, established platforms will bring you greater stability than newer technology.

The last few years have shown that things can change very quickly for businesses. How well you can adapt can be the defining trait that determines your survival.

A cloud-based business process automation solution can streamline, optimize, and scale your business. It's the only way to reach the full potential of your cloud environment.

By automating cloud management tasks, businesses are more agile. This gives them the ability to innovate more quickly when faced with challenges. It's essential for any large-scale cloud environment.

Furthermore, employees no longer have to spend time and resources on repetitive tasks. They can focus on developing exciting new ideas and on tasks that aren't automated.

Your long-term cloud management success depends on choosing an appropriate automation tool.

Define your budget and goals to have a clear picture of what you want to achieve.

Now that you have an idea of what you can automate, consider what tools you need for your use cases to orchestrate your clouds. The rest is child's play.

Grace Lau is the Director of Growth Content at Dialpad, an AI-powered cloud communication platform for better and easier team collaboration through a better local caller ID service. She has over 10 years of experience in content writing and strategy. Currently, she is responsible for leading branded and editorial content strategies, partnering with SEO and Ops teams to build and nurture content.

More:
What Is Cloud Automation?: a Quick, All-Around Guide - TechDecisions


Why the Metaverse Will Be a Boon for Cloud Computing – ITPro Today

What does the metaverse, the set of interconnected 3D environments that is poised to revolutionize the way people interact on the internet, mean for cloud computing?

The answer is anyone's guess, given that the metaverse remains a somewhat fuzzy concept and that we're still in the very early stages of actually building a metaverse.

Related: DevOps Teams to Play Big Role in Tackling Metaverse Challenges

Still, the ideas behind the metaverse are sufficiently well-formed at this point to facilitate informed guesses about how the rise of the metaverse may change the way we use and manage the cloud.

Toward that end, here's a look at five main ways that the metaverse, whatever form it ends up taking, is likely to affect the cloud.

Related: 10 Ways IT Can Get Ready for the Metaverse

First and foremost, the rise of the metaverse is likely to increase demand for cloud computing services even further.

The reason why is simple: Hosting 3D environments requires a lot of compute and storage resources, and it's a safe bet that few businesses that want to run a metaverse environment will be purchasing their own hardware to do so. Instead, they'll turn to the cloud to host the metaverse, as they already do for a majority of other workloads.

So, add the metaverse to the list of reasons why cloud computing providers are likely to become even richer in coming years.

That said, the profits that the metaverse drives for cloud computing platforms may not flow primarily to the major generic public cloud providers, namely Amazon, Google, and Microsoft.

Instead, we may see alternative clouds emerge that specialize in metaverse hosting. That's especially true given that the infrastructure necessary to run metaverse environments may require specialized hardware such as GPUs that are not currently a major focus of large public cloud providers. AWS, Microsoft Azure, and Google Cloud Platform (GCP) offer some GPU-enabled VM instances, but they don't specialize in that market, which creates an opportunity for smaller cloud providers to fill the gap.

To the extent that the large public clouds do invest in metaverse hosting, I suspect they will do it by launching managed services that will amount to metaverse-as-a-service, which means fully hosted and managed offerings that enable customers to deploy their own, custom metaverse environments with little effort.

It's possible the cloud providers will build these metaverse services entirely from scratch, just as AWS built its ECS container orchestration service from scratch, for example. Or, they may leverage open source metaverse platforms, like Vircadia, as the foundation for metaverse-as-a-service offerings, in a fashion very similar to what they have done with Kubernetes.

One of the major technical challenges that may arise as the metaverse expands is ensuring that bandwidth limitations or internet connectivity disruptions don't disrupt users' ability to experience the metaverse seamlessly.

Another challenge may be securing personal data that users store or create within the metaverse. Although it remains to be seen how regulators will define or treat personal information in the metaverse, there is already good reason to believe that it will need to be protected in the same way that personally identifiable information is protected in a conventional cloud environment.

Both of these challenges, the need for better performance and the need for high data security, are likely to result in demand for hybrid cloud architectures as a means of hosting metaverse environments. Hybrid cloud can boost performance and availability by placing hosting resources closer to end users. It can also improve data security by allowing data to stay on private servers instead of exposing it to the public cloud.

Expect at least some metaverse entities, then, to be hosted in hybrid cloud environments.


Another way to improve performance and availability for the metaverse is to push metaverse hosting and analytics to the "edge." In other words, users' own devices, instead of cloud data centers and servers, will be responsible for running at least some of the software that powers the metaverse. That approach will allow organizations to sidestep performance problems associated with relying on the internet as the only means of delivering metaverse connectivity.

Thus, expect edge computing, however you define it, to become even more important thanks to the metaverse. By extension, we'll likely see more investment in edge computing management platforms, like Kubernetes, that can help businesses keep track of the distributed edge infrastructure that hosts their metaverse environments.

Again, it's too early to say definitively how the metaverse will reshape cloud computing. But if I had to make early bets, they'd center on increased use of the cloud overall, and of hybrid and edge cloud architectures in particular. I also think we'll see cloud providers start building their own metaverse-as-a-service offerings, and there may be an opening for alternative cloud providers to serve the metaverse market in a way that the large public clouds won't.

Originally posted here:
Why the Metaverse Will Be a Boon for Cloud Computing - ITPro Today


Global Hyperscale Cloud Market Report 2022-2026 – SaaS Vendors Re-platform Onto Hyperscale Infrastructure & Hyperscalers to Dominate the IT…

DUBLIN--(BUSINESS WIRE)--The "Global Hyperscale Cloud Market: Analysis By End-User, By Region Size and Trends with Impact of COVID-19 and Forecast up to 2026" report has been added to ResearchAndMarkets.com's offering.

In 2021, the global hyperscale cloud market was valued at US$191.15 billion and is expected to grow to US$693.49 billion by 2026.

Some of the reasons companies are switching to hyperscale cloud computing are speed, reduced downtime losses, easier management, easier transition into the cloud, scalability based on demand, etc. The hyperscale cloud market is projected to expand at a CAGR of 29.40% over the forecast period of 2022-2026.
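As a quick sanity check of those figures, the implied compound annual growth rate can be recomputed directly from the 2021 and 2026 values quoted above (a simple Python sketch, not part of the report):

```python
# Recompute the CAGR implied by the report's 2021 and 2026 market values.
start, end, years = 191.15, 693.49, 2026 - 2021   # US$ billion, 5-year span
cagr = (end / start) ** (1 / years) - 1
print(f"{cagr:.2%}")   # roughly 29.4%, matching the report's stated CAGR
```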

Market Segmentation Analysis:

In 2021, the BFSI segment led the hyperscale cloud market, accounting for a 25.0% share. The BFSI industry is expected to experience high growth, owing to the increasing number of banking applications, which has resulted in the exponential growth of data in the banking and financial services industry.

The manufacturing hyperscale cloud market is expected to grow at the highest CAGR of 31.35%. The scope for scaling operations up and down via the cloud enables manufacturing companies to mitigate market demand volatility.

The future of all levels of the manufacturing industry is expected to incorporate cloud computing technology to stay more securely connected with consumers and the supply chain, hence contributing to market growth.

North America dominated the market in 2021 with almost 38% share of the global market. North America is anticipated to lead the global hyperscale computing market during the forecast period, due to the presence of well-established providers of hyperscale computing and increasing investment in technological advancements.

North America is further divided into three regions: The US, Canada, and Mexico. The emergence of 5G technology along with growth in Industrial IoT (IIoT), complemented by technologies like big data, blockchain, and artificial intelligence (AI) would boost the adoption of hyperscale cloud services in the US.

The hyperscale cloud market in the Asia Pacific is expected to hold a significant share, due to the presence of various developing countries and a growing number of hyperscale data centers. China held a major share of more than 45% of the Asia Pacific hyperscale cloud market in 2021.

Global Hyperscale Cloud Market Dynamics:

Growth Drivers

One of the most important factors impacting hyperscale cloud market dynamics is the increasing adoption of cloud in SMEs. Most IT enterprises among SMEs need the advanced technology of cloud computing services to grow their businesses and leave their footprints in various geographies. The increase in demand for cloud computing by SMEs has led to growth in the hyperscale cloud market.

Furthermore, the market has been growing over the past few years due to factors such as the increasing penetration of IoT devices, growing usage of video streaming apps, the growing AI software market, growing internet traffic, and an increasing number of hyperscale data centers.

Challenges

However, the market has been confronted with some challenges, specifically the insecurity of data and the need to incur huge capital expenditure as technology advances.

Trends: The market is projected to grow at a fast pace during the forecast period, due to various latest trends such as SaaS vendors re-platform onto hyperscale infrastructure, acceleration of digital transformation, hyperscalers dominating the IT spending, increasing 5G adoption, escalating edge computing, big data analytics, etc.

Spending on IT would increase significantly, with companies increasingly using IT to digitalize their service offerings. The three main hyperscalers are likely to dominate this new additional total addressable market (TAM) as they become increasingly integrated into company service offerings. This would allow the hyperscalers to maintain high levels of growth over the coming years.

Impact Analysis of COVID-19 and Way Forward:

Due to the pandemic, most companies have increased their cloud usage by more than they planned, resulting in higher cloud spending. In fact, according to a recent study by McKinsey & Company, companies globally have accelerated their cloud adoption compared to pre-pandemic adoption rates.

This marks a significant shift in the use of cloud-based solutions, from being purely data storage solutions to environments in which data is used transactionally and supports day-to-day business operations. Therefore, an increase in the demand for cloud computing services has led to significant growth in the hyperscale cloud market.

Demand for hyperscaling would continue to be driven by the accelerated digital transformation post-COVID, which would see corporates accelerate their shifting of on-premise systems to the cloud, and the adoption of hyperscale platforms as the main resource for software development, testing, and deployment.

Competitive Landscape:

The global hyperscale cloud market is highly concentrated, with a few major players holding almost two-thirds of the market share.

The top infrastructure cloud providers, called hyperscalers, such as AWS, continue to invest massively in their data centres. After a relatively weak 2019, their capex growth started to accelerate again in 2020 and has continued to do so.

In the hyperscale revolution, Amazon, Google and Microsoft used their software development skills to disrupt several traditional industries, such as retailing (Amazon.com), advertising (Google Search) and productivity (Microsoft Office 365).

Then, these hyperscalers have extended their capabilities in data processing and IT networking to disrupt the IT industry itself, providing massive storage and computing platforms to enterprises, replacing the need to own datacenters filled with servers and customised software. This act is set to accelerate further over the next three years, with COVID triggering an acceleration of digitalisation trends.

Key Topics Covered:

1. Executive Summary

2. Introduction

2.1 Hyperscale Cloud: An Overview

2.1.1 Introduction to Cloud Computing

2.1.2 Introduction to Hyperscale

2.1.3 How Does Hyperscale Work

2.1.4 Benefits of Hyperscale Cloud

2.2 Hyperscale Cloud Segmentation: An Overview

2.2.1 Hyperscale Cloud Segmentation by End-Users

3. Global Market Analysis

3.1 Global Hyperscale Cloud Market: An Analysis

3.1.1 Global Hyperscale Cloud Market by Value

3.1.2 Global Hyperscale Cloud Market by End-User (Banking, Financial Services, and Insurance (BFSI), IT & Telecom, Retail & Consumer Goods, Media & Entertainment, Manufacturing, Energy & Utilities, Government & Public Sector, Healthcare, and Others)

3.1.3 Global Hyperscale Cloud Market by Region (North America, Europe, Asia Pacific, Latin America, and Middle East & Africa)

3.2 Global Hyperscale Cloud Market: End-User Analysis

3.2.1 Global Banking, Financial Services, and Insurance (BFSI) Hyperscale Cloud Market by Value

3.2.2 Global IT & Telecom Hyperscale Cloud Market by Value

3.2.3 Global Retail & Consumer Goods Hyperscale Cloud Market by Value

3.2.4 Global Media & Entertainment Hyperscale Cloud Market by Value

3.2.5 Global Manufacturing Hyperscale Cloud Market by Value

3.2.6 Global Energy & Utilities Hyperscale Cloud Market by Value

3.2.7 Global Government & Public Sector Hyperscale Cloud Market by Value

3.2.8 Global Healthcare Hyperscale Cloud Market by Value

3.2.9 Global Others Hyperscale Cloud Market by Value

4. Regional Market Analysis

4.1 North America Hyperscale Cloud Market: An Analysis

4.2 Europe Hyperscale Cloud Market: An Analysis

4.3 Asia Pacific Hyperscale Cloud Market: An Analysis

4.4 Latin America Hyperscale Cloud Market: An Analysis

4.5 Middle East & Africa Hyperscale Cloud Market: An Analysis

5. Impact of COVID-19

5.1 Impact of COVID-19 on Hyperscale Cloud Market

5.2 Impact of COVID-19 on IaaS Public Cloud Services Market

5.3 E-Commerce Boom

5.4 Post COVID-19 Impact

6. Market Dynamics

6.1 Growth Drivers

6.1.1 Increasing Penetration of IoT Devices

6.1.2 Growing Usage of Video Streaming App

6.1.3 Growing AI Software Market

6.1.4 Growing Internet Traffic

6.1.5 Increasing Number of Hyperscale Data Centers

6.1.6 Increase in Adoption Of Cloud in SMEs

6.2 Challenges

6.2.1 Insecurity of Data

6.2.2 Need to Incur Huge Capital Expenditure as Technology Advances

6.3 Market Trends

6.3.1 SaaS Vendors Re-platform Onto Hyperscale Infrastructure

6.3.2 Acceleration of Digital Transformation

6.3.3 Hyperscalers to Dominate the IT Spending

6.3.4 Increasing 5G Adoption

6.3.5 Escalating Edge Computing

6.3.6 Big Data Analytics

7. Competitive Landscape

7.1 Global Hyperscale Cloud Players by Market Share

7.2 Global Hyperscale Cloud Players by Cloud Revenue

8. Company Profiles

8.1 Business Overview

8.2 Operating Segments

8.3 Business Strategy

For more information about this report visit https://www.researchandmarkets.com/r/r15t42

Read more:
Global Hyperscale Cloud Market Report 2022-2026 - SaaS Vendors Re-platform Onto Hyperscale Infrastructure & Hyperscalers to Dominate the IT...


ExpressVPN deal: Get the world’s best VPN with free cloud backup from Backblaze – TechRadar

In our latest round of VPN testing, ExpressVPN came out as the overall winner - once again. The provider beat its competitors on pretty much all fronts, deserving its place once again at the top of our list.

If you're not yet a subscriber, but like the idea of getting the benefit of all the privacy tools and unblocking smarts ExpressVPN has to offer, then you should ensure that you take advantage of its current offer that bundles in not one, but two freebies.

TechRadar readers who sign up to its best-value one-year plan will enjoy three extra months of protection for free, together with a whole year of cloud backup from Backblaze at no added cost.

Ready to protect your digital privacy? Read on to discover how to benefit from this tempting deal as well as what makes ExpressVPN our favorite provider right now.

There are several reasons why ExpressVPN keeps scoring a five-star rating every time we test it out.

One of those is that the service keeps improving. As our last check shows, the company's router app has seen a notable upgrade. In fact, you can now sort your devices into groups, while connecting each of those to a different location.

The provider has also fully open-sourced its speedy Lightway protocol, making it available on all platforms. This technology is a key contributor to the service's excellent and stable performance. As our cybersecurity specialist Mike Williams wrote: "ExpressVPN's Lightway protocol more than doubled OpenVPN performance to 570-580Mbps in the UK."

If you are looking for a good streaming VPN, Express won't let you down as it unlocks more than 25 platforms - these include Netflix, BBC iPlayer and Amazon Prime. Plus, during our tests, its live support has always been helpful on the rare occasions there have been issues.

When we looked at its privacy features, we couldn't have been happier. As Mike Williams highlighted: "The company doesn't just tell you how great it is; it also has an impressively lengthy list of features to help justify every word."

Among those, its DNS support is worth a mention. In fact, alongside DNS leak protection, Express also runs its own private, zero-knowledge, 256-bit encrypted DNS on each of its servers.

Check out our full ExpressVPN review to discover all its features in more detail.


Continue reading here:
ExpressVPN deal: Get the world's best VPN with free cloud backup from Backblaze - TechRadar


Elastics.cloud, Inc. Announces an Additional $17M of Funding to Accelerate Global Growth and Product Development – PR Newswire

SAN JOSE, Calif., April 14, 2022 /PRNewswire/ -- Elastics.cloud, a Smart Interconnect Technology company, has raised an additional $17 million in funding, bringing the total Pre-Series A capital investment to over $26 million. The proceeds will enable the company to expand its engineering teams in San Jose, CA and Austin, TX. In addition, the company has opened a new design center in Bangalore, India, while also strengthening its strategic customer partner/support resources in the USA, Asia, and Europe.

"This funding comes as a direct effect of our technology and our engagement with the ever-growing ecosystem of companies worldwide that have embarked on the architecture revolution towards composability," said George Apostol, Founder and CEO of Elastics.cloud. "We are now able to grow the execution team globally to deliver our robust, innovative solutions rooted in the evolution of Compute Express Link (CXL)."

The company is planning to provide a glimpse into its technology at the Intel Vision Event held on May 10th and May 11th, 2022, in Grapevine, TX. The technology demonstration will showcase two methods of memory expansion and pooling using FPGA cards connected via a CXL interface:

These are the first steps in creating more efficient and performant composable architectures, which allow memory to be expanded and shared across multiple intelligent servers or compute complexes to meet the varying demands of heterogeneous workloads, enabling system solutions with the best performance, flexibility, and lowest total cost of ownership.

About Elastics.cloud

Elastics.cloud, Inc. is a Smart Interconnect technology company focused on enabling efficient and performant architectures to create flexible, scalable, low latency composable systems. The company provides silicon, hardware, and software which leverages the Compute Express Link (CXL) interconnect standard to provide high-performance connectivity to a broad ecosystem of components.

For more information visit: www.elastics.cloud.

Media Contact: Kishore Moturi, Sr. Director Corporate Strategy, [emailprotected], 408-396-5962

SOURCE Elastics.cloud

See more here:
Elastics.cloud, Inc. Announces an Additional $17M of Funding to Accelerate Global Growth and Product Development - PR Newswire


Is Your Business at Risk With Your Current IT Infrastructure? – Mighty Gadget


If past events like the slowly subsiding pandemic are any indicator, it's not enough to have an IT setup. You also need to adapt rapidly or die just as fast, not forgetting security, redundancy, backup, and the other nitty-gritty that your tech team emphasises.

Do you have your vital corners covered, or is your IT infrastructure putting your business at risk? Here's what you need to know.

The corona pandemic was a wake-up call in many ways. It limited movement and forced people to work remotely, which caused other challenges, especially for businesses operating on-premise IT infrastructure. They include:

Colo, too, makes for a great alternative to on-premise IT infrastructure. You get all the advantages of owning private servers, including privacy, customizability, and compliance, while enjoying better security, tech support, cost-saving, and scalability.

For instance, a colocation hosting provider such as Safehosts would offer you state-of-the-art premises with excellent security and the optimal environment for your servers. You would also get a redundant network and power infrastructure that guarantees uptime and scalability.

As for accessibility, your IT team may operate your servers physically or remotely. The best colocation hosts will usually have a team of experts on-site 24/7/365 to help you keep things running even if you can't get to your servers in time.

The more you take your staff and data online, the more ways hackers can get into your system. Businesses often get crippled after even the slightest data breaches. For example, TalkTalk recently lost half its profits following a cyber-attack. You can't afford to compromise!

Hackers will get to your data if they are determined, no matter how good your security is. You should always have a great backup and disaster recovery plan.

Custom-building everything might seem cool and unique, but standard protocols and APIs save you from many future headaches. They allow you to run updates as required and without any limitations. Moreover, open APIs are less subject to back doors and security risks.

This option can significantly help you keep your IT infrastructure up to date. It also helps you avoid relying too much on a small internal IT department that may have limited skills and resources. Consequently, your IT team gets more freedom to deal with strategic objectives in-house.

In any case, popular cloud server hosts such as OCI have comprehensive teams of some of the most qualified experts in IT. Using their services lets you tap into various skills you would otherwise be unable to stock within your company.

The only caveat with cloud hosting is that it may raise compliance concerns, especially when dealing with sensitive data. It may also be difficult to scale up beyond a certain point when your business requires a more complicated data exchange and storage model.

You may also have a hybrid setup, such that you store your primary servers with a colocation provider and still use cloud hosting, either private or public.

This arrangement leaves room for future migration to the cloud. At the same time, you retain control over your main servers and most sensitive data operations.

Continued here:
Is Your Business at Risk With Your Current IT Infrastructure? - Mighty Gadget


Web Hosting Services Market Size 2022 Is Approximately to Reach US$ 170 Billion and Growing at CAGR of 14.7% by 2028 – Digital Journal

The Web Hosting Services market report included major key players analysis & Regional Estimations of Amazon Web Services, Inc., Endurance International Group, 1&1 IONOS Inc., Liquid Web, LLC, & more.

This press release was orginally distributed by SBWire

London, UK (SBWIRE) 04/12/2022 Intelligencemarketreport.com has published a new market report, "Web Hosting Services Global Size & Share Report Forecasts 2022-2028".

Web hosting is the service that allows you to publish your website on the Internet. The web hosting services provided by companies allow you to host web applications or websites on their servers. The services include virtual private servers, colocated hosting, dedicated hosting, shared hosting, and cloud hosting. Advanced web hosting services include many benefits, such as better performance and increased security.

This report examines global Web Hosting Services market trends and developments in past years and critically evaluates the most promising products and technological innovations in the global market. It details market size in both regional and country-specific terms. The report brings together data analytics, prospecting insights, and industry expert opinions to provide a comprehensive study of the Web Hosting Services market's competitive landscape.

Get a Sample Report of Web Hosting Services Market @ https://www.intelligencemarketreport.com/report-sample/101386

For more information or customization, mail us at [emailprotected]

The major key players covered in the Web Hosting Services market are:

- Amazon Web Services, Inc.
- Endurance International Group
- 1&1 IONOS Inc.
- Liquid Web, LLC
- GoDaddy Operating Company, LLC
- Google LLC
- Hetzner Online GmbH
- Alibaba Cloud
- Equinix, Inc.
- WPEngine, Inc.

The report covers the global market's emerging and high-growth segments; high-growth regions; and market drivers, restraints, and opportunities. This research combines industry analysis with a global Web Hosting Services market share analysis of major players, as well as company profiles, which collectively provide fundamental opinions about the market landscape.

Web Hosting Services Market Segmentation Analysis

This study segments the Web Hosting Services market by product type, application, and geography. The report provides a critical perspective on the market. It analyzes each market segment in terms of current and future developments. It also determines the most profitable sub-segments in terms of revenue contribution for both the base year and the estimate year. The report includes information on the fastest-growing sub-segments in terms of revenue growth over the previous five years.

The Web Hosting Services Market Segments and Sub-Segments are Listed Below:

Type Outlook
- Shared Hosting
- Dedicated Hosting
- Virtual Private Server (VPS) Hosting
- Colocation Hosting
- Others

Application Outlook
- Intranet Website
- Public Website
- Mobile Application

Deployment Outlook
- Public
- Private
- Hybrid

End-user Outlook
- Enterprise
- SMEs
- Large Enterprises
- Individual

Regional Analysis Covered in this report:
- North America [United States, Canada]
- Europe [Germany, France, U.K., Italy, Russia]
- Asia-Pacific [China, Japan, South Korea, India, Australia, China Taiwan, Indonesia, Thailand, Malaysia]
- Latin America [Mexico, Brazil, Argentina]
- Middle East & Africa [Turkey, Saudi Arabia, UAE]

Enquiry before buying @ https://www.intelligencemarketreport.com/send-an-enquiry/101386

(Do you have any specific query regarding this research? Let's talk to our market experts to analyse better.)

In this study, the years considered to estimate the market size of Web Hosting Services are as follows:

- History Year: 2016-2020
- Base Year: 2021
- Estimated Year: 2022
- Forecast Year: 2022 to 2028

Research Methodology of Web Hosting Services Market

In order to analyze the target market, several methodologies and tools were used in this study. The research report's market estimates and predictions are based on extensive secondary research, primary interviews, and in-house expert opinions. It aims to estimate the global Web Hosting Services market's current market size and growth potential across various segments such as application and representatives.

The analysis also includes a comprehensive examination of the global market's key players, including company profiles, SWOT analysis, the most recent advancements, and business plans. The impact of various political, social, and economic factors, as well as current market conditions, on market growth is examined in these market projections and estimates.

Competitive Outlook

The Web Hosting Services market-prospects section will include a look at company competition, including company overview, business description, product portfolio, major financials, and so on. The research report will also include market-probability scenarios, a PEST analysis, a Porter's Five Forces analysis, a supply-chain analysis, and market expansion strategies. This section will look at the various industry competitors currently operating in the global market.

Table of Contents Major Key Points

1 Web Hosting Services Market Overview
2 Market Competition by Manufacturers
3 Production and Capacity by Region
4 Global Web Hosting Services Consumption by Region
5 Production, Revenue, Price Trend by Type
6 Consumption Analysis by Application
7 Key Companies Profiled
8 Web Hosting Services Manufacturing Cost Analysis
9 Marketing Channel, Distributors and Customers
10 Market Dynamics
11 Production and Supply Forecast
12 Consumption and Demand Forecast
13 Forecast by Type and by Application (2022-2027)
14 Research Finding and Conclusion
15 Methodology and Data Source

Buy Single User PDF of Web Hosting Services Market Report @ https://www.intelligencemarketreport.com/checkout/101386

About Us:

Intelligence Market Report includes a comprehensive rundown of market research reports from many publishers around the world. We boast a database spanning virtually every market category and an even more complete assortment of market research reports under these categories and sub-categories.

Intelligence Market Report offers premium progressive statistical surveying, market research reports, analysis, and forecast data for businesses and governments around the world.

For more information on this press release visit: http://www.sbwire.com/press-releases/web-hosting-services-market-size-2022-is-approximately-to-reach-us-170-billion-and-growing-at-cagr-of-147-by-2028-1356046.htm

Originally posted here:
Web Hosting Services Market Size 2022 Is Approximately to Reach US$ 170 Billion and Growing at CAGR of 14.7% by 2028 - Digital Journal


With Aquila, Google Abandons Ethernet To Outdo InfiniBand – The Next Platform

Frustrated by the limitations of Ethernet, Google has taken the best ideas from InfiniBand and Cray's Aries interconnect and created a new distributed switching architecture called Aquila and a new GNet protocol stack that delivers the kind of consistent and low latency that the search engine giant has been seeking for decades.

This is one of those big moments when Google does something thatmakes everyone in the IT industry stop and think.

The Google File System in 2003. The MapReduce analytics platform in 2004. The BigTable NoSQL database in 2006. Warehouse-scale computing as a concept in 2009. The Spanner distributed database in 2012. The Borg cluster controller in 2015 and again with the Omega scheduler add-on in 2016. The Jupiter custom datacenter switches in 2015. The Espresso edge routing software stack in 2017. The Andromeda virtual network stack in 2018. Google has never done a paper on its Colossus or GFS2 file system, the successor to GFS and the underpinning of Spanner, but it did mention it in the Spanner paper above and it did give a video presentation during the coronavirus pandemic last year about Colossus to help differentiate Google Cloud from its peers.

Two asides: Look at where the problems are. Google is moving out from data processing and the systems software that underpins it and through scheduling and into the network, both in the datacenter and at the edge, as it unveils its handiwork and technical prowess to the world.

The other interesting thing about Google now is that it is not revealing what it did years ago, as in all of those earlier papers, but what it is doing now to prepare for the future. The competition for talent in the upper echelon of computing is so intense that Google needs to do this to attract brainiacs who might otherwise go to a startup or one of its many rivals.

In any event, it has been a while since the search engine and advertising behemoth dropped a big paper on us all, but Google has done it again with a paper describing Aquila, a low-latency datacenter fabric that it has built with custom switch and interface logic and a custom protocol called GNet that provides low latency and more predictable and substantially lower tail latencies than the Ethernet-based, datacenter-wide Clos networks the company has deployed for a long time now.

With Aquila, Google seems to have done what Intel might have been attempting to do with Omni-Path, if you squint your eyes a bit as you read the paper, which was published during the recent Network Systems Design and Implementation (NSDI) conference held by the USENIX Association. And specifically, it borrows some themes from the Aries proprietary interconnect created by supercomputer maker Cray and announced in the Cascade CX30 machines back in November 2013.

You will remember, of course, that Intel bought the Aries interconnect from Cray back in April 2012, and had plans to merge some of its technologies with its Omni-Path InfiniBand variant, which it got when it acquired that business from QLogic in January 2012. Aries had adaptive routing and a modicum of congestion control (which sometimes got flummoxed) as well as a dragonfly all-to-all topology that is distinct from the topologies of Clos networks used by the hyperscalers and cloud builders and the Hyper-X networks sometimes used by HPC centers instead of dragonfly or fat tree topologies. It is harder to add capacity to dragonfly networks without having to rewire the whole cluster, but if you are podding up machines, then it is perfectly fine. The Clos network allows for machines to be added fairly easily, but the number of hops between machines, and therefore the latency, is not as consistent as with a dragonfly network.

Steve Scott, the former chief technology officer at Cray who led the design of its SeaStar and Gemini and aforementioned Aries interconnects, which were at the heart of the Cray XT3, XT4, and XC machines, joined Google back in 2013 and stayed through 2014 before rejoining Cray to lead its supercomputing resurgence with the Rosetta Slingshot interconnect. Scott told us that being a part of Google's Platform Group made him really appreciate the finer points of tail latency in networks, but it looks like Scott impressed upon them the importance of proprietary protocols tuned for specific work, high radix switches over absolute peak bandwidth, the necessity of congestion control and adaptive routing that is less brittle than the stuff used by the Internet giants, and the dragonfly topology. (Scott joined Microsoft Azure as a Technical Fellow in June 2020, and it is reasonable to expect that the cloud giant is up to something relating to networks with Scott's help.)

In short, Intel, which spent $265 million buying those networking assets from QLogic and Cray and heaven only knows how much more developing and marketing Omni-Path before selling it off to Cornelis Networks, is probably now wishing it had invented something like Aquila.

There are a lot of layers to this Aquila datacenter fabric, which is in a prototype phase right now, and it is not at all clear how this will interface and interleave with the Mount Evans DPU that Intel designed in conjunction with Google and that the hyperscaler is presumably already deploying in its server fleet. It could turn out that the converged switch/network device that is at the heart of the Aquila fabric and the Mount Evans DPU have a common destination on their respective roadmaps, or they drive on parallel roads until one gets a flat tire.

Aquila, which is the Latin word for eagle, explicitly does not run on top of or in spite of the Ethernet, IP, or TCP and UDP protocols that underpin the Internet. (For a good image of the differences between these nested protocols, here is a great explanation: imagine one of those pneumatic tube message systems. Ethernet is the tube used to send the message, IP is an envelope in the tube, and TCP/UDP is a letter in the envelope.)

Forget all that. Google threw it all out and created what it calls a cell-based Layer 2 switching protocol and related data format that is not packets. A cell, in fact, is smaller than a packet, and this is one of the reasons why Google can get better deterministic performance and lower latency for links between server nodes through the Aquila fabric. The cell format is optimized for the small units of data commonly used in RDMA networks, and with the converged network functionality of a top of rack switch ASIC and a network interface card, across the 1,152 node scale of the Aquila interconnect prototype, it can do an RMA read in an average of 4 microseconds.

This converged switch/NIC beast, the cell data format, the GNet protocol for processing it very efficiently with its own variant of RDMA called 1RMA, the out-of-band software-defined networking fabric, and the dragonfly topology create a custom, high speed interconnect that bears a passing resemblance to what would happen if Aries and InfiniBand had a lovechild in a hyperscale datacenter and that child could speak and hear Ethernet at its edges where necessary.

Let's start with the Aquila hardware and then work our way up the stack. First off, Google did not want to spend a lot of money on this.

To sustain the hardware development effort with a modest sized team, we chose to build a single chip with both NIC and switch functionality in the same silicon, the 25 Google researchers who worked on Aquila explain in the paper. Our fundamental insight and starting point was that a medium-radix switch could be incorporated into existing NIC silicon at modest additional cost and that a number of these resulting NIC/switch combinations called ToR-in-NIC (TiN) chips could be wired together via a copper backplane in a pod, an enclosure the size of a traditional Top of Rack (ToR) switch. Servers could then connect to the pod via PCIe for their NIC functionality. The TiN switch would provide connectivity to other servers in the same Clique via an optimized Layer 2 protocol, GNet, and to other servers in other Cliques via standard Ethernet.

There is a lot of stuff on this ToR-in-NIC chip. Amin Vahdat, who is the engineering fellow and vice president who runs the systems and services infrastructure team at Google and who led the network infrastructure team for a long time before that, told us this time last year that the SoC is the new motherboard and the focus of innovation, and it comes as no surprise to us that each Aquila chip is in fact a complex with two of these TiNs in the same package (but not necessarily on the same die, mind you). Vahdat is one of the authors of the Aquila paper, and no doubt drove the development effort.

As you can see, there are a pair of PCI-Express 3.0 x16 slots coming out of the device, which allows for one fat 256 Gb/sec pipe into a single server or two 128 Gb/sec half-fat pipes for two servers. Sitting on the other side of this PCI-Express switch is a pair of network interface circuits one that speaks 100 Gb/sec IP and can pass through the chip to speak Ethernet and another that speaks the proprietary 1RMA protocol and that hooks into the GNet cell switch.

That cell switch has 32 ports running at 28 Gb/sec, not that many ports as switches go and not that fast as ports go, as Google pointed out above. With encoding overhead taken off, these GNet cell switch lanes run at 25 Gb/sec, which is the same speed as IBM's Bluelink OpenCAPI ports on the Power9 processor and as the lanes in the NVLink 3.0 pipes in the Ampere A100 GPU and related NVSwitch switches. There are 24 of these 25 Gb/sec lanes that are used to link all of the server nodes in a pod over copper links, and there are eight links that can be used to interconnect up to 48 pods into a single GNet fabric, called a clique, using optical links. The dragonfly topology used at Cray and now at Google is designed explicitly to limit the number of optical transceivers and cables needed to do long-range linking of pods of servers. Google has apparently also designed its own GNet optical transceiver for these ports.

The Aquila TiN has input and output packet processing engines that can interface with the IP NIC and the Ethernet MAC if the data from the cell switch needs to leave the Aquila fabric and reach out into the Ethernet networks at Google.

Google says that the single chip design of the Aquila fabric was meant to reduce chip development costs and also to streamline inventory management. Anyone who has been waiting for a switch or NIC delivery during the coronavirus pandemic knows exactly what Google is talking about. The dearth of NICs is slowing down server sales for sure. It was so bad a few months ago that some HPC shops we heard about through resellers were turning to 100 Gb/sec Omni-Path gear from Intel because it was at least in stock.

The main point of this converged network architecture is that Google is nesting a very fast dragonfly network inside of its datacenter-scale Clos network, which is based on a leaf/spine topology that is not an all-to-all network but does allow for everything to be interlinked in a cost-effective and scalable fashion.

Google says that the optical links allow for the Aquila pods to be up to 100 meters apart and impose a 30 nanosecond per hop latency between the interconnected pods, and that is with forward error correction reducing the noise and adding some latency.

These days, most switches have compute built in, and Google says that most switches have a multicore, 64-bit processor with somewhere between 8 GB and 16 GB of main memory of its own. But by having an external SDN controller and by using the local compute on the Aquila chip package as an endpoint local processor for each TiN pair, the Aquila package can get by with a 32-bit single-core Cortex-M7 processor with 2 MB of dedicated SRAM to handle the local processing needs of the SDN stack. The external servers running the GNet stack were not divulged, but this is a common design for Google.

The SDN software is written in C and C++ and comprises around 100,000 lines of code; the Aquila chip runs the FreeRTOS real-time operating system and the lwIP library. The software exposes all of the low-level APIs up to the SDN controller, which can reach in and directly manipulate the registers and other elements of the device. Google adds that having the firmware for Aquila distributed, with most of it on the controller and not the device, was absolutely intentional, and that the idea is that the TiN device can bring up the GNet and Ethernet links, attempt to link to the DHCP server on the network, and await further configuration orders from the central Aquila SDN controller.

One interesting bit about the Aquila network is that because it is a dragonfly topology, you have to configure all of the nodes in the network from the get-go or you have to recable every time you add machines to get the full bandwidth of the network. (This is the downside of all-to-all networks.) So Google does that and then adds servers as you need them. Here is what the schematic of the servers and network look like all podded up:

The Aquila setup has two 24-server pods in a rack and 24 racks in a clique. Google is using its standard server enclosures, which have NICs on adapter cards, in this case a PCIe adapter card that links to a switch chassis that has a dozen of the TiN ASICs on six adapter cards. The first level of the dragonfly network is implemented on the chassis backplane, and there are 96 optical GNet links coming out of the pod to connect the 48 pods together, all-to-all, with two routes each.

One side effect of having many ASICs implementing the network is that the blast radius for any given ASIC is pretty small. If two servers share a TiN and the TiN package has two ASICs, then the failure of one package only knocks out four servers. If a top of rack switch in a rack of 48 servers burns up, then 48 servers are down. If a whole Aquila switch chassis fails, it is still only 24 machines that get knocked out.
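To keep those numbers straight, here is a small Python sketch that simply restates the article's pod, clique, and blast-radius arithmetic:

```python
# Restating the Aquila scale and blast-radius arithmetic described in the article.
SERVERS_PER_POD = 24
PODS_PER_RACK = 2
RACKS_PER_CLIQUE = 24
PODS_PER_CLIQUE = PODS_PER_RACK * RACKS_PER_CLIQUE        # 48 pods
SERVERS_PER_CLIQUE = SERVERS_PER_POD * PODS_PER_CLIQUE    # 1,152 nodes

# Failure domains: a TiN package covers two ASICs at two servers each,
# a conventional top-of-rack switch covers a whole rack, a switch chassis one pod.
BLAST_RADIUS = {
    "TiN package (2 ASICs x 2 servers)": 4,
    "conventional top-of-rack switch": PODS_PER_RACK * SERVERS_PER_POD,   # 48
    "Aquila switch chassis (one pod)": SERVERS_PER_POD,                   # 24
}

print(SERVERS_PER_CLIQUE)   # 1152
for domain, servers in BLAST_RADIUS.items():
    print(f"{domain}: {servers} servers lost")
```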

Looking ahead, Google is investigating adding more compute to the TiN device in future Aquila devices, on the order of a Raspberry Pi to each NIC, so that it can run Linux. This would allow Google to add a higher-level P4 programming language abstraction layer to the network, which it most definitely wants to do.

In early tests, the Aquila fabric was able to have tail latencies of under 40 microseconds for a fabric round trip time (RTT in the network lingo), and had a remote memory access of under 10 microseconds across 500 host machines on a key-value store called CliqueMap. This tail latency is 5X smaller compared to an existing IP network, even under high load.

One last thought. The scale of the Aquila network is not all that great, and to scale the compute more will mean scaling up the TiN ASICs with more ports and possibly, but not necessarily, with higher signaling rates to increase the bandwidth to match PCI-Express 5.0 speeds. (This was a prototype, after all.) We think Google will choose higher radix over higher bandwidth, or at least split the difference.

There is another performance factor to consider, however. When Google was talking about Borg seven years ago, it had 10,000 to 50,000 servers in a pod, which is a lot. But the servers that Google was using had maybe a handful to a dozen cores per socket and probably two sockets per machine. Aim high and call it an average of 20 cores. But today, we have dozens of cores per server socket and we are on the verge of several hundred cores per socket, so it may only take a few thousand nodes to run all but the biggest jobs at Google. And even the big jobs can be chunked across Aquila pods and then aggregated over regular Ethernet links. There is a factor of 10X improvement in core count along with about a 2X factor increase in instructions per clock (IPC) for integer work over that time; floating point performance has gone up by even more. Call it a 20X factor of improvement in performance per node. For all we know, the pod sizes at Google dont need to be stretched all that far.

More importantly, (O)1000 clusters, as technical papers abbreviate clusters on the order of thousands of nodes, are big enough to do important HPC and AI workloads, even if they cannot run the largest models. It will be interesting to see what jobs fit into an Aquila fabric and which ones do not, and interestingly, this technology might be perfect for the scale of many startups, enterprises, academic institutions, and government organizations. So even if Aquila doesn't scale far now, it could be the foundation of a very high performance HPC and AI service on Google Cloud where (O)1000 is just right.

See the original post:
With Aquila, Google Abandons Ethernet To Outdo InfiniBand - The Next Platform

Read More..

Thick Client vs. Thin Client: Learn The Difference to Choose the Best For You. – TechGenix

Do I need a thick client or a thin client?

In the computing world, clients are essential to system architecture. A client is a program that interacts with a server so you can get information from that server.

Some clients also let you work with data without a constant connection to another computer. They come in many forms, like desktop, web-based, or mobile applications.

Generally, clients split into two types: thick clients and thin clients. Each serves a different purpose, so it's essential to understand the distinctions between the two to make an informed decision about your business or personal computing needs. Let's explore which is best for you.

I'll first dive into thick clients.

The most expensive and powerful computers in the world are nothing without someone to put them to work. A thick client, or fat client, is a computing workstation that includes most or all of the components essential for operating software applications independently. That also includes a monitor with input capabilities, so you can interact directly on-screen.

We can't say that a computer system that only has a monitor is a thick client. Why? Because a thick client has to bring its own processing power, not just a screen and a keyboard.

A thick client has access to resources on a server, but doesn't depend on the server's processing power to do its work. It's also been the go-to for many years because of its customizable features and greater control over system configuration.

Workplaces often provide thick clients to their employees so they can keep working even when they disconnect. Thick client computers can also communicate with one another in a peer-to-peer (P2P) fashion. As a result, they don't require constant server communication, because they always have at least one active connection between them.

Users with thick client operating systems also see faster response times and greater durability. Conversely, those who don't use thick clients need to lease server computing resources from an outside source. Unfortunately, that'll cost them both speed and money.

If your environment has limited storage and computing capacity, you'll likely need a thick client. That said, the rise of the work-from-home model may create issues with thick clients, because you'll need access to them at all times. A thick client that is slow while online might not always function correctly, unless the connection goes uninterrupted.

A thick client is often the computer that company employees receive. In general, it's safe to assume most of them will need the same applications and files on their device. That's why the thick client is also a great option for businesses that want to provide all the hardware and software employees need. The employee only needs to connect their computer to company servers and download any required updates or data; they won't ever be cut off from work.

Thick clients are also excellent if you want to work remotely. You can get your job done without an internet connection, which means you won't be cut off from the office even if you're in the field. You also won't be wasting money on data plans. Finally, a thick client will let you work with all the files saved on its hard drive, assuming you don't need internet access.

Let's now move on to thin clients.

Thin clients are the new wave of computing. They work in an environment where most applications and sensitive data live on servers, not locally.

They don't need the power of a typical laptop or PC. Thin clients lean on networked servers for the memory, storage, and processing needed to run applications and in-house computing tasks, so they carry very little in the way of local resources. A fast connection to those servers also cuts down the waiting time to fetch data from afar.

The concept of a thin client device is to function as a virtual desktop, using computing power that resides on networked servers. The central server may be an on-premises or cloud-based system.

Companies with limited resources may use thin clients because they don't want employees to use up data while browsing online. Thin clients are also a good fit because they still allow workers to perform essential tasks without any hiccups in service.

A thin client is an excellent choice if you're after the right balance of performance and portability. In addition, machine learning solutions can help businesses optimize their resources by analyzing data from all over your network in real time. Many companies specialize in this field, but some very reputable manufacturers, like Dell and HP, offer both desktop computers and laptops.

Generally, a thick client is developed by an in-house developer and resides on a local machine. With a thin client, on the other hand, all of the processing happens on the server side, and the results are displayed to the user through a browser or app. In the table below, I summarize the differences between a thick client and a thin client.

Basically, this is a head-to-head comparison of thick and thin clients. Consider these features carefully and decide which client you want to adopt. Each type has its advantages and disadvantages, so you should weigh the risks against the benefits.

Thick clients are programs that reside on the local machine. With thin clients, on the other hand, all the processing happens on the server side, and the results are displayed to users through a browser or app. If you're looking for an easy way to decide which type of application is best for your needs, think about how much control you want over the user interface and how important security is to you. In this article, I've explained everything you need to know about thick and thin clients, so you can make the best decision for your applications.
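To make that split concrete, here is a minimal, purely illustrative sketch in Python; the data, function names, and the stand-in for the network call are all hypothetical, and in a real thin-client deployment the server-side function would sit behind an HTTP service rather than a local call.

    # Illustrative sketch only: contrasts where the processing happens in a
    # thick-client vs. thin-client design, per the article's description.
    # All names here are hypothetical, not taken from any real product.
    from statistics import mean

    SALES = [120, 95, 143, 88, 110]  # sample data both designs work with

    def thick_client_report(sales):
        """Thick client: the workstation holds the data and does the math
        locally. It only needs the server to sync files or fetch updates."""
        return {"total": sum(sales), "average": mean(sales)}

    def server_side_report(sales):
        """In a thin-client design this function would run on the server (or
        in the cloud); the client never does the computation itself."""
        return {"total": sum(sales), "average": mean(sales)}

    def thin_client_view():
        """Thin client: just asks the server for the finished result and
        displays it, e.g. in a browser. Here the 'request' is a plain function
        call standing in for an HTTP round trip."""
        report = server_side_report(SALES)  # would be a network request in practice
        return f"Total: {report['total']}, Average: {report['average']:.1f}"

    if __name__ == "__main__":
        print("Thick client (local processing):", thick_client_report(SALES))
        print("Thin client (server does the work):", thin_client_view())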

Still have questions? Check out the FAQs and Resources below.


Microsoft Outlook, G-Talk, Yahoo Messenger, and online trading portals are examples of thick clients. A thick client is basically a fully functional computer that can connect to a server. It has its own operating system, software, and processing capabilities. In all, thick clients are ideal for workplaces that encourage remote work, because they also allow for working offline.

Thin client applications are web-based, browser-delivered programs that don't require any installation on the user's side. A thin client is mainly a gateway to the network. Thin clients are also best suited to minimal workloads, as they can't handle much data processing themselves. The most common thin client we see today is the web browser.

Laptops may be small and portable, but they're not always the best option. They need configuration to sync with your company's resources, and you may be stuck working across two devices when you go to the office. A better alternative is a thin client: an economical desktop device that pulls most of its resources from a server and can be accessed remotely. Everything you pay for with a thin client carries over to a desktop setup, and it's a good fit if you want to begin working from home.

Employees across industries use thin clients because they're cost-effective and convenient. They can also replace full computers, as long as the processing power they need is available on your network.

Thin clients are a great way to get online without having an expensive computer. You can use them at home, so long as you have good internet access. If you're working from home, you can have thin clients supported, managed, and configured remotely. That makes them an amazing option if you're worried about configuration time. They're also good for those who lack the IT knowledge to manage their own client.

Learn all about network segmentation here.

Explore the top 5 open source storage projects for Kubernetes in this article.

Learn more about cloud cost management: purpose, advantages, and best practices here.

Understand the limitations of TCP vs. UDP here.

Find out all about restructuring a legacy network with a VLAN here.

Read the original here:
Thick Client vs. Thin Client: Learn The Difference to Choose the Best For You. - TechGenix

Read More..

Atlassian comes clean on what data-deleting script behind outage actually did – The Register

Who, Us? Atlassian has published an account of what went wrong at the company to make the data of 400 customers vanish in a puff of cloudy vapor. And goodness, it makes for knuckle-chewing reading.

The restoration of customer data is still ongoing.

Atlassian CTO Sri Viswanath wrote that approximately 45 per cent of those afflicted had had service restored, but repeated the fortnight estimate the company gave earlier this week for undoing the damage to the rest of the affected customers. As of the time of writing, the figure of customers with restored data had risen to 49 per cent.

As for what actually happened well, strap in. And no, you aren't reading another episode in our Who, Me? series of columns where readers confess to massive IT errors.

"One of our standalone apps for Jira Service Management and Jira Software, called 'Insight Asset Management,' was fully integrated into our products as native functionality," explained Viswanath, "Because of this, we needed to deactivate the standalone legacy app on customer sites that had it installed."

Two bad things then happened. First, rather than providing the IDs of the app marked for deletion, the team making the deactivation request provided the IDs of the entire cloud site where the apps were to be deactivated.

The team doing the deactivation then took that incorrect list of IDs and ran the script that did the 'mark for deletion' magic. Except that script had another mode, one that would permanently delete data for compliance reasons.

You can probably see where this is going. "The script was executed with the wrong execution mode and the wrong list of IDs," said Viswanath, with commendable honesty. "The result was that sites for approximately 400 customers were improperly deleted."
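To make the failure pattern easier to picture, here is a hypothetical sketch, not Atlassian's actual tooling, of a deletion script with the two modes Viswanath describes, plus the kind of cheap guard that would catch a wrong ID list and a wrong mode before anything irreversible happens.

    # Hypothetical sketch of the failure pattern described above -- NOT
    # Atlassian's actual script. One tool, two modes, and an ID list that can
    # mean two different things is a recipe for exactly this incident.
    from enum import Enum

    class Mode(Enum):
        MARK_FOR_DELETION = "mark"   # recoverable: soft delete, backups retained
        PERMANENT_DELETE = "purge"   # irreversible: meant only for compliance erasure

    def deactivate(ids, mode, id_kind, dry_run=True):
        """Run the job against a list of IDs.

        id_kind says what the IDs refer to ("app" or "site"); requiring it
        makes the 'wrong list of IDs' mistake visible instead of silent.
        """
        if mode is Mode.PERMANENT_DELETE and id_kind != "app":
            raise ValueError("refusing to permanently delete anything but app IDs")
        if dry_run:
            return [f"DRY RUN: would {mode.value} {id_kind} {i}" for i in ids]
        return [f"applied {mode.value} to {id_kind} {i}" for i in ids]

    if __name__ == "__main__":
        app_ids = ["app-123", "app-456"]               # what should have been supplied
        site_ids = ["site-cust-001", "site-cust-002"]  # what was actually supplied

        # Intended run: mark the legacy app for deletion (recoverable).
        print(deactivate(app_ids, Mode.MARK_FOR_DELETION, id_kind="app"))

        # The incident: wrong mode AND wrong IDs. With the guard above, this
        # raises instead of wiping whole customer sites.
        try:
            deactivate(site_ids, Mode.PERMANENT_DELETE, id_kind="site", dry_run=False)
        except ValueError as err:
            print("Blocked:", err)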

Yikes.

The good news is that there are backups, and Atlassian retains them for 30 days. The bad news is that while the company can restore all customers into a new environment or roll back individual customers that accidentally delete their own data, there is no automated system to restore "a large subset" of customers into an existing environment, meaning data has to be laboriously pieced together.

The company is moving to a more automated process to speed things up, but currently is restoring customers in batches of up to 60 tenants at a time, with four to five days required end-to-end before a site can be handed back to a customer.

"We know that incidents like this can erode trust," understated Viswanath.

Viswanath's missive did not mention compensation for businesses suffering a lengthy outage other than stating he and his team were committed to "doing what we can to make this right for you."

The Register contacted the company to clarify what this includes and will update should Atlassian respond.

Many other companies are not this transparent, especially while a problem is still ongoing, so it's commendable to get a proper explanation.

Read more from the original source:
Atlassian comes clean on what data-deleting script behind outage actually did - The Register

Read More..