Category Archives: Cloud Servers

Salesforce to ramp up hiring by 1,500 by end of this year – Mint

BENGALURU: India is a critical market for cloud-based software company Salesforce.com Inc., whose India unit has grown to be its largest centre outside of its headquarters in San Francisco. The India unit is currently hiring to address growing demand from its customers, especially small and medium businesses (SMBs). In an interview, Arundhati Bhattacharya, chairperson and CEO of Salesforce India, talks about the growth in the India business, the SMB opportunity and drivers for cloud solutions. Edited excerpts:

How has the India business grown and what are your hiring plans?

We have doubled our headcount in the last 18 months in India alone. When I came in, the headcount was around 2,500 and it's around 5,000 now. We plan to exit the fiscal year at about 6,500 people (Salesforce's fiscal year ends on 31 January). So, we are doing a lot of recruitment currently. We may not double exactly, but our plans are pretty large. And we will definitely be growing quite fast, even next year. Our people are not just doing sales and distribution; there is also a very large team that supports our global operations. Like most other large multinationals, we have global innovation centres in India comprising engineering, R&D, support and all of the services. We are the largest centre for Salesforce outside of the US.

For which skills and roles are you hiring?

Almost 90% of the people we hire are for technology roles and they are basically engineers. There are, of course, people in other areas like sales and finance, but most of the roles are very technology-oriented. We look at people who are Salesforce certified. Salesforce has a very nice gamified platform that is already available in the public domain and is free as well. It's called Trailhead, and you can actually get on to Trailhead and certify yourself. But even if someone is not Salesforce certified, it does not prevent them from coming in as long as they can do Java programming and things like that. We are also recruiting in the area of HR, because if you are doing so much recruitment, you also need recruiters and employee success business partners. The roles will be across all areas of a rapidly growing organization, ranging from general administration to technology to sales.

What is the opportunity from SMBs in India?

SMB is probably the largest opportunity for Salesforce. When Salesforce was initially set up 23 years back, it started with the SMB segment. The enterprise segment only came about 10 years later. The idea was to solve SMB issues. For instance, in India, most SMBs are not capital-rich, so they do not want to lock up their capital to get the best systems. We offer subscription-based solutions so that they are not required to set up their own hardware. And it's a monthly subscription, which also ensures that we stay on our toes to give them the best service possible so that those subscriptions get renewed year after year. Three-quarters of our customer base consists of SMBs.

A recent Salesforce-IDC report stated that cloud-related technologies will account for 27% of all IT spending in India. What's driving the demand for cloud?

India is a capital-poor country, and on-premise systems can be very costly because you are not only putting down servers in one place but actually need them in three places: you need a business continuity plan at a near site as well as disaster recovery at a far site. Getting the servers itself is a long process. Whereas if you do the same job with a cloud service provider, the provider can provision you in a matter of hours. What could have taken days and a lot of money can actually get done in a matter of hours. It's not only a question of convenience, but also of cost, as with cloud you are paying as you go.

When you are trying to stay aligned to your customer, you need a number of analytics and artificial intelligence tools. The larger the data set, the better the outcome. Having that kind of elasticity in an on-premise system would be very costly.


More here:
Salesforce to ramp up hiring by 1,500 by end of this year - Mint

5 cybersecurity best practices for businesses to support their workforces – Review – Review

It has been almost two years since the Covid-19 pandemic began, with the first lockdown in March 2020 forcing businesses to adopt a remote working approach. Now that South Africa is opening up, a hybrid model is quickly becoming the norm, with employees splitting their time between the office and their home.

As a result, the IT department's role has become more complicated than ever, owing to the rapid increase in remotely connected devices. Cyberattacks have, in turn, become more common.

The most high-profile cyberattack happened in July, when Transnet, the state-owned railway company, was forced to shut down for a week. This attack, however, was only one of many, as global statistics show that a cyberattack takes place every 11 seconds.

This article looks at five practices that businesses can implement to secure their workforces:

When establishing policies and standards, companies must consider their cloud platforms, software development lifecycles, DevOps procedures and technologies, and compliance with regional regulations. Basic security hygiene alone is not sufficient at enterprise level to protect against advanced cyberattacks.

When putting together policies, businesses should keep in mind the following:

It is important to educate all employees on the evolving threat landscape. Businesses should educate all stakeholders about the many types of dangers, from phishing to ransomware to social engineering. Are your staff aware of these threats and the damaging results of such an attack, and trained to know what to do and whom to call in the event of an attack?

Businesses should provide basic security tools to their employees, such as password managers, multi-factor authentication, data backup, and behaviour threat analytics. Threat analytics, especially, can help warn users and administrators when an account is accessed from an unknown IP during odd hours.
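As an illustration of the behaviour-analytics idea above, here is a minimal Python sketch that flags a login from an unknown IP address or one made outside office hours. The known-IP list, office-hours window and alert hook are illustrative assumptions, not any particular product's logic.

```python
from datetime import datetime

# Illustrative values; a real deployment would learn these per user/account.
KNOWN_IPS = {"203.0.113.10", "198.51.100.25"}
OFFICE_HOURS = range(7, 20)  # 07:00-19:59 local time

def notify_security_team(user, source_ip, flags):
    # Hypothetical hook; in practice this might open a ticket or page on-call.
    print(f"ALERT: {user} login from {source_ip} flagged: {', '.join(flags)}")

def screen_login(user: str, source_ip: str, when: datetime) -> list:
    """Return the list of risk flags raised by a single login event."""
    flags = []
    if source_ip not in KNOWN_IPS:
        flags.append("unknown_ip")
    if when.hour not in OFFICE_HOURS:
        flags.append("odd_hours")
    if flags:
        notify_security_team(user, source_ip, flags)
    return flags

screen_login("alice", "192.0.2.77", datetime(2021, 11, 6, 2, 30))
```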

Perhaps consider incentivising employees with a rewards programme. For instance, internal cybersecurity and bug bounty initiatives at Zoho have aided immensely in educating and rewarding responsible staff.

Identity and key protection should be a primary priority for every cybersecurity team. It's critical to securely authenticate and authorise individuals, services, devices, and apps to ensure that only valid accounts/devices are able to access the company's data. For example, many businesses now use SSH keys and SSL certificates in the background to perform safe cryptographic operations.

When it comes to identity management, the starting point is to implement tactics such as strong passwords, passwordless authentication, multi-factor authentication, role-based access, identity-based perimeters, and zero-trust access control strategies.

Once an identity has been granted access, a user can gain access to numerous endpoints and applications owned by the company using that identity. In a hybrid environment, enterprise data is communicated over smartphones, IoT devices, BYOD, cloud servers, and more, and many companies still rely on traditional firewalls and VPNs to restrict access.

Rather than relying on these legacy models, companies should adopt a least-privilege access strategy for users, applications, systems, and connected devices. It's important to provide only a minimum level of access based on job roles and responsibilities. This technique has the following important benefits:

Unpatched systems and apps are some of the easiest targets for hackers. Whenever a new security patch is issued, attackers will attempt to exploit the flaw before the patch is applied in order to obtain access to corporate data. Thus, enterprises should take advantage of patch management and vulnerability management tools that offer immediate implementation. Other benefits include improved efficiency and simplified compliance, helping avoid unwarranted fines.

Businesses in South Africa are currently more interconnected than they have ever been. While this is a development that will help many industries thrive, it also implies that businesses must prioritise cybersecurity to ensure successful benefits realisation. The truth is that it's a matter of when, not if, your company will be targeted, and being prepared with a robust cybersecurity and resilience strategy is the greatest defence.

See the original post here:
5 cybersecurity best practices for businesses to support their workforces - Review - Review

St. Cloud Schools Looking to Solve Worker Shortage – WJON News

St. Cloud School District 742 is still experiencing a bus driver and substitute teacher shortage, according to Superintendent Willie Jett. He says the Human Resources office is very busy trying to fill many open positions, not just the aforementioned ones. Jett says they are seeing people in the community stepping up, coming to them saying they are getting training and/or volunteering their services. He says they are working "tirelessly to fill the shortages". Jett says this challenge of a worker shortage isn't unique to them; they are seeing it happen in education throughout the nation.

Jett says their students and staff are doing a great job of handling the necessary requirements to accommodate COVID-19. He says both students and staff have adjusted well to mask wearing and social distancing when they can. Jett is pleased they've been able to offer in-person learning five days a week every week since the school year began. He says students and staff are also staying home when they aren't feeling well, and that is very important.

Jett says this past month their Regional Center of Excellence in Education recognized 20 of the District 742 licensed staff members for their accomplishments as outstanding education leaders. They are recognized as positive role models for students and staff. This is called a LEO award.

My conversation with Willie Jett is available below.

See the original post:
St. Cloud Schools Looking to Solve Worker Shortage - WJON News

The New Agility and Resiliency Model Businesses Need to Survive – CEOWORLD magazine

Organizations operating today balance flexibility and time-to-market. They're managing cyberattacks, COVID-19 shutdowns, and other threats to a business's ability to function. Managing these issues and moving forward in a digital world requires a responsive and iterative infrastructure.

Businesses require hybrid work models offering the flexibility of home or the office. Making this happen seamlessly requires having the right technology and resources in place to enable people to do their jobs regardless of their location or access device. There's a need for a new model for business infrastructure, where digital tools, security tech, and connectivity all come together. Realizing this new modern infrastructure involves a host of strategies, including a more secure virtual desktop, hybrid clouds, and more available business applications.

Leverage Virtual Desktop Infrastructure (VDI)

To manage security concerns for remote workers, many firms offer virtual desktop infrastructure, or VDI. With VDI, companies can offer virtual desktop PC environments. These are hosted on a server, and users access virtual desktop images that run on a central server. All the virtual machines are host-based and managed through a virtual machine monitor, which establishes and runs the various VMs on the VDI.

It's an efficient way to give organizations the workforce flexibility they and their workers need, along with enhanced security. VDI centralizes the infrastructure, so information never flows outside of the data center, where it stays under strict monitoring. If a device is compromised, the data is not at risk. This eliminates a core concern about virtual desktop usage: employees opening desktops on unsecured devices that aren't under corporate control.

For firms handling large remote workforces, VDI is purpose-built for scale. It reduces capital costs because there's less hardware required, and the company does not need to provide laptops for employees' home use. There is also inexpensive deployment for firms that already have server architectures in place, and there are long-term cost benefits even for firms that need a bigger initial investment.

VDI simplifies remote working by allowing access through any device and location, for maximum flexibility for today's global teams. This boosts productivity, and VDI's reliability removes downtime issues, so workers can gain access with confidence. The centralized control means IT and management can provide seasonal workers, consultants, and other vendors with access without exposing the company's data.

Use Containers for Applications

Remote workers utilize network resources throughout the day and night. With global teams and flexible hours, some staff need application access at midnight on a Saturday. Supporting this requires firms to transform their legacy applications so they function with a remote, mobile device-based workforce. Pick some of the company's applications accessed through mobile and introduce a container architecture, where code and dependencies come together, so IT can shift an application to a different server without needing any adjustments. This removes downtime caused by server maintenance because the application moves to another server, so there's little to no downtime and reduced infrastructure costs.
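As a rough sketch of the "code plus dependencies travel together" idea, the snippet below uses the Docker SDK for Python to start a containerized application on whichever host the client is pointed at. The image name, port mapping and restart policy are illustrative assumptions, not a prescription.

```python
import docker

# Connect to the local Docker daemon; pointing the client at another host's
# daemon would run the same image there without changing the application.
client = docker.from_env()

container = client.containers.run(
    "legacy-crm-api:latest",          # hypothetical containerized application
    detach=True,
    ports={"8080/tcp": 8080},         # expose the app on the host
    restart_policy={"Name": "on-failure"},
)
print(f"Started container {container.short_id}")
```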

These containerized applications also offer enhanced security, since they're running as their own processes, independent of others. With this structure, a threat would not reach the centralized host system, giving IT more time and opportunity to remedy the situation. It improves application availability to enable productivity and reduces the costs of a potential breach.

Bring in a Qualified Vendor for Hybrid Cloud Capabilities

Developing a hybrid cloud strategy is an ideal way to add agility to an organization. It provides flexibility and security for remote workforces. Firms using this model can support both distributed and remote employees with instant data access that does not come through a single central location. The firm can shift sensitive data to on-premises secure servers as it moves apps and various employee- and partner-focused services to a public cloud. IT and senior leadership leveraging hybrid clouds also hedge against spikes in demand because they can simply pay for more cloud resources, instead of worrying about the massive capital costs of growing their infrastructure.

Consider shifting applications to public clouds within a hybrid model to reduce latency. If some of the remote workforce lives within a narrow area, then pick cloud services close by for the optimal performance. Encourage IT to look at the most often used applications and those that receive more complaints and a poor user experience due to connectivity issues. Another tactic is using edge caching to reduce latency. Talk to IT about caching some content on your internal servers, such as static information like profile data or product documentation.
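The edge-caching tactic mentioned above can be as simple as a time-bounded in-memory cache sitting in front of the origin. This is a minimal sketch; the TTL, key scheme and fetch_from_origin helper are assumptions for illustration.

```python
import time

CACHE_TTL_SECONDS = 300          # how long static content stays fresh
_cache = {}                      # key -> (stored_at, payload)

def fetch_from_origin(key: str) -> bytes:
    # Hypothetical slow, cross-network call to the central server.
    return f"<content of {key}>".encode()

def get_static_asset(key: str) -> bytes:
    """Serve profile data or product documentation from the local cache when possible."""
    entry = _cache.get(key)
    if entry and time.time() - entry[0] < CACHE_TTL_SECONDS:
        return entry[1]                          # served from the edge cache
    payload = fetch_from_origin(key)             # cache miss: go to the origin
    _cache[key] = (time.time(), payload)
    return payload

print(get_static_asset("docs/product-manual.pdf"))
```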

Leveraging a third-party firm for hybrid cloud implementation gives companies a guiding hand and outside perspective. The best partners will utilize a DevOps-down approach that includes discussions with the development and application departments. They'll propose cloud strategies that align with the client's change management initiatives, operations, complementary technology choices, and conceptual architecture. It's part of a new way of looking at business infrastructure that optimizes security, connectivity, and growth.

Written by Michael Norring.


Read this article:
The New Agility and Resiliency Model Businesses Need to Survive - CEOWORLD magazine

Function-as-a-Service Poised for Rapid Growth – RTInsights

FaaS provides businesses without massive IT teams the opportunity to build and deploy the applications they need without having to support servers.

Function-as-a-Service (FaaS) allows users to forgo servers while developing applications. It has use cases in the microservices world, especially IT automation, chatbots, data processors, and the like. It relieves the headaches of server maintenance, allowing companies to focus more on app-specific code, and supports companies that don't have in-house teams capable of maintaining servers.
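To make the model concrete, here is a minimal sketch of a function in the handler style used by most FaaS platforms (AWS Lambda, for example): the provider invokes the function per event, and the team writes no server code at all. The event shape and field names are illustrative assumptions.

```python
import json

def handler(event, context):
    """Hypothetical order-total function invoked per HTTP event by a FaaS platform."""
    order = json.loads(event.get("body", "{}"))
    total = sum(item["qty"] * item["price"] for item in order.get("items", []))
    return {
        "statusCode": 200,
        "body": json.dumps({"order_id": order.get("id"), "total": total}),
    }

# Local smoke test; on a real platform the provider supplies event and context.
print(handler({"body": '{"id": 7, "items": [{"qty": 2, "price": 4.5}]}'}, None))
```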

As such, the adoption of Function-as-a-Service is on the rise. A recent report by Reports and Data expects the FaaS market to make huge gains in the next few years. FaaS had a 2018 market size of $3.33 billion, and companies needing management capabilities for multiple platforms have driven the market share up. It is expected to be worth $31.53 billion by 2026, which represents a CAGR of 32.3%.


Serverless mobile apps offer the same capabilities. They're cheaper and faster to deploy and reduce barriers to development. FaaS will take advantage of cloud adoption and provide more flexibility for business.

FaaS allows customers to budget only for the functions they need without wasted resources. It's a viable solution for teams looking to streamline operations and take on a more composable architecture.

According to the report, North American FaaS companies dominated the market, but this won't hamper development elsewhere. FaaS has a high penetration rate in a number of industry verticals. As businesses seek to reduce latency, jump into digital transformation, and find cost-effective, disruption-proof solutions for their operational needs, FaaS could offer just the capability they require.

FaaS is part of a suite of smart services gaining traction. These focus on reducing the cumbersome nature of traditional operations and deployment and on streamlining business areas such as decision-making and development. FaaS provides businesses without massive IT teams the opportunity to build and deploy the applications they need while the provider supports the infrastructure side. The report highlights the potential for FaaS to become the status quo in the near future.

Continue reading here:
Function-as-a-Service Poised for Rapid Growth - RTInsights

Pushing to the edge with hybrid cloud – iTWire

Over the last few years, technology has evolved through an acceleration of innovation across industries, bringing forth new combinations of technologies, new use cases and new business models. Technologies such as the Internet of Things (IoT), cloud computing, machine learning and big data have combined to solve business challenges that plagued industries for decades.

What is a hybrid cloud?

According to IBM, a hybrid cloud is an infrastructure that connects at least one public cloud and at least one private cloud, but the definition can vary. A hybrid cloud provides orchestration, management, and application portability between public and private clouds to create a single, flexible, optimal cloud infrastructure for running a company's computing workloads.

Despite the growth in public cloud computing, enterprises often need to use a combination of public and private (on-prem) clouds. Often overlooked in the hype around public cloud computing, private clouds offer greater flexibility, security and compliance.

A private cloud environment is generally accessible only through private and secure network links, rather than the public internet. Industries such as healthcare and finance have specific regulations about storing and processing data and thus favour using private clouds. A company can run a private cloud on-premises in its data centre or local server room, or access it as a securely hosted offering from a cloud service provider (CSP).

Crucially, hybrid cloud computing enables companies to accelerate their digital transformation efforts, particularly if they work with legacy hardware and infrastructure. They can extend their existing infrastructure by adding one or more public cloud deployments, modernizing applications and processes in stages rather than in a complete digital transformation upheaval.

What is computing on the edge?

IoT technology is ubiquitous, with connected devices collecting more and more information through sensors, cameras, accelerometers, LiDAR and depth sensors. All this information requires collection, storage, processing and analysis to create data-driven insights. Some of this data comes from mission-critical applications where a split-second delay can have significant consequences: for example, factories, smart traffic consoles, insulin pumps, and smoke and noxious gas monitoring.

As a consequence, edge computing use cases have grown. Edge computing places processing (and some storage) capabilities close to the data source, enabling fast data analysis in real time. It's particularly useful in poorly connected environments such as oil refineries, mines and wells. Companies are moving more of their compute and financial investments toward edge computing. Grand View Research predicts that companies will spend $43.4 billion on edge computing by 2027, a compound annual growth rate of 37.4%.

Despite the predictions of some analysts, this does not mean the death of cloud computing. Cloud computing and edge computing have a beneficial functional relationship. And this relationship extends the hybrid cloud concept.

According to Gartner, "Edge computing augments and expands the possibilities of today's primarily centralized, hyper-scale cloud model and supports the systemic evolution and deployment of the IoT and entirely new application types, enabling next-generation digital business applications."

Combining hybrid cloud and edge computing

A hybrid environment with workloads at the edge and in various cloud locations offers advantages to companies seeking greater efficiency and cost savings. Running business- and time-critical workloads at the edge ensures low latency and self-sufficiency. This means transactions can occur even in rugged environments where internet connections are poor.

Take the example of industrial IoT and a factory that uses sensors to monitor machines for temperature, sound, pressure and vibration. The factory can use a locally hosted compute device from a nearby cloud provider, or even something like a Raspberry Pi, to process, filter and aggregate data from the machines in near real time. If this edge compute instance detects an urgent anomaly, it can generate an alert for investigation. During regular operation it can send the filtered and aggregated data to a public cloud instance for further analysis, machine learning processing, decision making, and storage with a service that provides better efficiency and value for such tasks.
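A minimal sketch of that edge-side filter-and-aggregate step might look like the following; the temperature threshold, alert hook and cloud-upload call are assumptions for illustration, not the factory's actual pipeline.

```python
import statistics

TEMP_ALERT_C = 85.0   # illustrative threshold for an "urgent anomaly"

def raise_local_alert(summary):
    # Immediate, low-latency action taken at the edge.
    print(f"ALERT: machine {summary['machine_id']} peaked at {summary['max_temp']} C")

def send_to_cloud(summary):
    # Placeholder for a batched upload to a public cloud instance for ML/storage.
    pass

def process_window(readings):
    """Aggregate one window of sensor readings on the edge device."""
    temps = [r["temperature"] for r in readings]
    summary = {
        "machine_id": readings[0]["machine_id"],
        "mean_temp": round(statistics.mean(temps), 2),
        "max_temp": max(temps),
        "samples": len(temps),
    }
    if summary["max_temp"] > TEMP_ALERT_C:
        raise_local_alert(summary)
    send_to_cloud(summary)
    return summary

process_window([{"machine_id": "press-4", "temperature": t} for t in (71.2, 73.0, 90.4)])
```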

Connected cars, which are effectively data centres on wheels with hundreds of in-car sensors creating a deluge of data, are another example. Autonomous driving systems, such as those tested by Equinix customer Continental, must aggregate, analyse and distribute that data, as well as data from other sources such as traffic and weather information, in real time with all the necessary security and privacy controls in place. And as the degree of autonomy advances (from level one for some driver assistance to level five for fully autonomous), the amount of data to aggregate and analyze will continue to soar. Current test drives for L2 autonomy are generating up to 20 terabytes (TB) of data a day, while more advanced sensor sets for higher levels of autonomy (L4 and above) may generate up to 100 TB/day.

A car needs some of this data in real time to make split-second decisions, like whether to move lanes or whether the road is clear of pedestrians. The processing of this data could happen on the onboard computer or on any available local edge compute instances the vehicle happens to be near at the time. When the car returns to a WiFi connection, it can then upload any other less important data to a public cloud instance, receive software and machine learning model updates, and let a driver review their data or the manufacturer download it for analytical purposes. The communication between edge computing and the rest of the hybrid cloud needn't be in one direction. Once compute services have processed, analyzed and reached decisions on the data they have, they can then push relevant updates to edge compute instances.

Are you looking to introduce a hybrid cloud solution?

Like many other aspects of modern infrastructure, containers and orchestrating them with Kubernetes can help standardize edge and cloud deployments. Kubernetes' standard runtime layer enables you to develop, run and operate workloads consistently across computing environments and move workloads between edge and cloud.

Equinix Metal provides the foundational building blocks that give businesses the ability to create and consume interconnected infrastructure with the choice and control of physical hardware and the low overhead and developer experience of the cloud. Digital leaders use Equinix Metal to create a digital advantage by activating infrastructure globally, connecting it to thousands of technology ecosystem partners, and leveraging DevOps tools to deploy, maintain and scale their applications. This means that on-demand bare metal servers with dedicated GPUs optimized for edge-type workloads such as machine learning are within your reach.

Metal integrates with a range of common hybrid cloud tooling such as Anthos, VMware Tanzu, and Red Hat OpenShift, allowing public cloud vendors and users alike to leverage any existing infrastructure and tooling.

Equinix Fabric supplements Equinix Metal by offering software-defined interconnection to connect Equinix Metal and your other infrastructure together, including all leading cloud providers. Equinix Fabric helps companies who want to take advantage of hybrid multi-cloud but need to reinforce privacy and security for data as it travels between edge and public cloud locations. On top of providing these security guardrails, Equinix Fabric is affordable and performant, not adding any other overheads to applications.

To learn more about how to enable the hybrid cloud for your organisation today, download the Equinix Whitepaper on enabling the hybrid cloud.

By Equinix


View original post here:
Pushing to the edge with hybrid cloud - iTWire

7 Open Source Cloud-Native Tools For Observability and Analysis – Container Journal

In 2021, observability is close to gaining buzzword status. This is perhaps because, for years, monitoring wasn't as standardized in software development. Tracing was given less forethought, and applications produced logs in varying formats and styles. Without unifying layers to analyze a growing number of services, this led to a chaotic mess of jumbled application analysis.

Now, with cloud-native technology, engineers are trying not to repeat these mistakes of the past. Also, with increased user expectations and the demands of digital innovation, there is now more focus on maintaining overall stability, performance, and availability. This has given rise to the growth of observability and analysis tools. These open source projects are making logs more actionable, tracing events with detailed metadata, and exposing valuable metrics from Kubernetes environments. Such insights can inform business metrics, help pinpoint bugs and spur quick recovery measures. For these reasons, deep observability across the cloud-native application stack is a must.

So, below we'll explore six well-established CNCF projects related to observability, telemetry and analysis. Many of these projects help collect and manage observability data such as metrics, logs and traces.

The popular monitoring system and time series database

GitHub | Website

Prometheus is the most popular graduated CNCF project related to observability and likely needs no introduction, as many engineers are already familiar with it. Large companies such as Amadeus, Soundcloud, Ericsson and others already use Prometheus to power their monitoring and alerting systems.

Prometheus has built-in service discovery and functions by collecting data via a pull model over HTTP. It then stores metrics organized as time-series key-value pairs. These metrics can be customized to the application at hand and set to trigger alerts; for example, an e-commerce site may need to identify slow load times to stay competitive. Prometheus has great querying abilities; the PromQL query language can be used to search data and generate visualizations.
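For a feel of how application metrics reach Prometheus' pull model, here is a minimal sketch using the official Python client: it exposes a /metrics endpoint that a Prometheus server could scrape. The metric names and simulated load times are assumptions for illustration.

```python
import random
import time

from prometheus_client import Counter, Histogram, start_http_server

# Hypothetical metrics for an e-commerce checkout path.
REQUESTS = Counter("checkout_requests_total", "Total checkout requests")
LATENCY = Histogram("checkout_latency_seconds", "Checkout page load time in seconds")

if __name__ == "__main__":
    start_http_server(8000)   # metrics served at http://localhost:8000/metrics
    while True:
        REQUESTS.inc()
        LATENCY.observe(random.uniform(0.05, 1.5))   # simulated page load time
        time.sleep(1)
```

A PromQL query such as histogram_quantile(0.95, rate(checkout_latency_seconds_bucket[5m])) could then chart the 95th-percentile load time and back an alert on slow pages.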

A Prometheus environment comprises the main Prometheus server, client libraries, a push gateway, special-purpose exporters, an alert manager and various support tools. To get started, developers can review the getting started guide here.

Open source, end-to-end distributed tracing

GitHub | Website

With the move toward distributed systems, the process of debugging, networking and supporting observability for many components has become exponentially more challenging. Jaeger is one project that aims to solve this dilemma; it's designed to monitor and troubleshoot transactions in complex distributed systems. According to the documentation, its features are as follows:

Jaeger works by implementing various APIs for retrieving data. This data follows the OpenTracing standard, which organizes traces into spans; each span records granular details such as the operation name, a start timestamp, a finish timestamp and other metadata. Jaeger backend modules can export Prometheus metrics, and logs are structured using zap, a logging library.
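As a sketch of what a span looks like from application code, the snippet below uses the jaeger-client Python library (built on the OpenTracing API) to record one operation with a tag and a log entry; the service and operation names are illustrative assumptions.

```python
from jaeger_client import Config

config = Config(
    config={"sampler": {"type": "const", "param": 1}, "logging": True},
    service_name="checkout-service",   # hypothetical service name
)
tracer = config.initialize_tracer()

# One span: operation name plus start/finish timestamps and metadata.
with tracer.start_span("process-order") as span:
    span.set_tag("order.id", "12345")
    span.log_kv({"event": "payment_authorized"})

tracer.close()   # flush buffered spans to the Jaeger agent
```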

A unified logging layer

GitHub | Website

Fluentd is a logging layer designed to be decoupled from backend systems. The philosophy is that a Unified Logging Layer can rid the chaos of incompatible logging formats and disparate logging routines.

Fluentd can track events from many sources, such as web apps, mobile apps, NGINX logs and others. Fluentd centralizes these logs and can also port them to external systems and database solutions, like Elasticsearch, MongoDB, Hadoop and others. To enable this, Fluentd sports over 500 plugins. Using Fluentd could be helpful if you need to send out alerts in response to certain logs or enable asynchronous, scalable logging for user events.
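For instance, an application can hand events to a local Fluentd agent through the fluent-logger library, and Fluentd then routes them to whichever backends are configured. The tag, port and event fields below are illustrative assumptions.

```python
from fluent import sender

# Assumes a Fluentd agent listening on localhost:24224 (its default forward port).
logger = sender.FluentSender("webapp", host="localhost", port=24224)

logger.emit("user.signup", {"user_id": 42, "plan": "trial"})
logger.emit("user.error", {"user_id": 42, "message": "payment declined"})

logger.close()
```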

To get started with Fluentd for logging, one can download it here for any operating system or find it on Docker. Once installed, Fluentd offers a graphical UI to configure and manage it.

Highly available Prometheus setup with long-term storage capabilities

GitHub | Website

For those who want to get more out of Prometheus, Thanos is an option. It's framed as a highly available metric system with unlimited storage capacity that can be placed on top of existing Prometheus deployments. Using Thanos to obtain a global view of metrics could be helpful for organizations that use multiple Prometheus servers and clusters. Thanos also enables extensions to your own storage of choice, making data retention theoretically limitless. As Thanos is designed to work with larger amounts of data, it incorporates downsampling to speed up queries.

Horizontally scalable, highly available, multi-tenant, long-term Prometheus.

GitHub

Cortex is another CNCF project designed to work with multiple Prometheus setups. Using Cortex, teams can collect metrics from various Prometheus servers and perform globally aggregated queries on all the data. Availability is a plus with Cortex, as it can replicate itself and run on multiple machines. Like Thanos, Cortex provides long-term storage capabilities, with integrations for S3, GCS, Swift and Microsoft Azure.

According to the documentation, Cortex is primarily used as a remote write destination for Prometheus, with a Prometheus-compatible query API. To begin working with Cortex, check out the getting started guide here.

An observability framework for cloud-native software.

GitHub | Website

OpenTelemetry is a project built to collect telemetry data, such as metrics, logs and traces, from various sources to integrate with many types of analysis tools. The package supports integrations with popular frameworks such as Spring, ASP.NET Core, Express and Quarkus, making it easy to add observability mechanics to a project. Of note is that OpenTracing and OpenCensus recently merged to form OpenTelemetry, making this one powerhouse of an open source telemetry solution.
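A minimal OpenTelemetry Python sketch looks like the following: it configures a tracer provider, exports spans to the console (a real setup would export to Jaeger, an OTLP collector or similar), and records one span. The span name and attribute are illustrative assumptions.

```python
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor, ConsoleSpanExporter

# Wire up the SDK: spans are batched and printed to stdout for demonstration.
trace.set_tracer_provider(TracerProvider())
trace.get_tracer_provider().add_span_processor(
    BatchSpanProcessor(ConsoleSpanExporter())
)

tracer = trace.get_tracer(__name__)

with tracer.start_as_current_span("handle-request") as span:
    span.set_attribute("http.route", "/orders")   # hypothetical request attribute
```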

In today's digital age, metrics are the lifeblood of a business. Having a holistic assortment of application performance data and end-user action information is vital for analysis. But that's not the only end goal; quality filtering and navigation for such data are just as crucial for turning stale metadata into actionable insights.

Above, we've covered some of the most adopted CNCF projects related to observability, monitoring, and analysis. But these aren't the only options available; there is a lot more exciting development occurring within CNCF-hosted projects and the surrounding ecosystem.

At the time of writing, CNCF hosts the following projects in sandbox status. As you can see, these emerging projects involve more active monitoring, such as via chaos engineering and Kubernetes health checks, as well as deeper Kubernetes-first observability.


Read the original:
7 Open Source Cloud-Native Tools For Observability and Analysis - Container Journal

Pros and cons of cloud infrastructure types and strategies – Information Age

Abby Kearns, CTO of Puppet, delves into the pros and cons of cloud infrastructure types and strategies in the market today

Establishing what kind of environment and strategy is right for your business is key to cloud success.

You don't have to do much Googling to find articles, podcasts and tweets featuring me talking about both multi-cloud and hybrid cloud. More companies than ever are choosing a hybrid cloud approach that leverages a multi-cloud strategy, so it's prudent to revisit the pros and cons of hybrid and multi-cloud, as well as public and private cloud infrastructure.

I am defining public and private clouds as the environments in which an organisation chooses to host its infrastructure. Hybrid and multi-cloud are the strategies organisations employ for these environments.

I will caveat everything I'm about to say with the fact that you should choose the right environment for the right workloads. What problem are you solving, and why? There is a case to be made for private cloud infrastructure, and there is a case to be made for public cloud; it entirely depends on what type of applications you are running and what the requirements are around each application.

Today, one of the key private cloud advantages is data, be it addressing data sovereignty requirements, having a large data lake in your private cloud that you need close access to (for low-latency application requirements), or meeting specific regulation requirements on who has access to that data and where it sits. Data is often at the heart of private cloud strategies.

Another benefit of private cloud to an organisation is the customisation it gives the business, granting greater flexibility and the means to design a bespoke environment for specific business needs and users.

So, what are some of the drawbacks of private cloud?

They can be high maintenance. A dedicated team is required to manage the environment full-time, keeping it up to date (including addressing any CVEs) and ensuring reliability by minimising failures and downtime. A private cloud can be costly, as it requires a data centre as well as the physical infrastructure, in addition to the customised private cloud software needed to manage the environment in a way that mimics a public cloud experience (ease of access, self-service, etc.). You also run into limitations on scale that you would not have in a public cloud; while this is addressable, it does require forward-looking planning on what scale your environment would need to run at in a variety of scenarios.

What about public clouds?

Public cloud can be less expensive because the data centre, hardware, and software are owned and operated by a third-party provider. Because a public cloud provider is responsible for dozens or hundreds or thousands of customers, the network of servers is vast and largely diminishes the risk of failure, so count high reliability as yet another perk of public cloud. This combination of a massive network of servers and a 24/7 service team provides an additional benefit: scalability.

Are there any downsides to public cloud?

Remember how private clouds can be customised for an organisation's specific business needs? Public cloud is often a one-size-fits-all solution, meaning a company no longer has as much control or flexibility with the public cloud. Public clouds can also be costly, especially if you have a growing footprint of workloads with intensive data requirements. Additionally, egress fees can be quite high if you are looking to pull your data out of the cloud.


A multi-cloud strategy simply means that an organisation has chosen to use multiple public cloud providers to host its environments. A hybrid cloud approach means that a company is using a combination of on-premises infrastructure, private cloud and public cloud (and possibly more than one of the latter), meaning that company would be implementing a multi-cloud strategy with a hybrid approach. At times, these terms are used interchangeably.

Companies choose a multi-cloud strategy for a multitude of reasons, not least of which is avoiding vendor lock-in. Spreading workloads across multiple cloud providers increases reliability, as a company is able to fail over to a secondary provider if another provider experiences an outage.

Optionality is a huge benefit to companies who want to be able to pick and choose which services will most seamlessly integrate into their environments, as each major public cloud provider provides some unique services for different types of workloads. Furthermore, when a company uses multiple public cloud providers, it retains flexibility and can transfer workloads from one provider to another. Finally, global organisations can leverage multi-cloud to address complex compliance requirements, which vary from country to country.

These are all very strong cases for multi-cloud, but what are the downsides?

Cost forecasting and containment can be challenging when using multiple providers charging different rates for different services. Also, spreading workloads across multiple cloud providers does increase reliability, but it can also increase risk and make it more difficult to know where data is and who has access to it. There are both benefits and downsides to multi-cloud, but I am a proponent of a multi-cloud strategy whenever possible.

Why do organisations choose a hybrid approach?

Many companies, especially large enterprises that have existed for decades, host their environments in the data centre, and a lack of resources, funding, staff, executive buy-in, or a host of other reasons may prevent them from shifting their legacy architecture to the cloud. However, certain teams within the company may be spinning up cloud-native environments for new projects, and other teams may be working on implementing a lift-and-shift to the cloud from the data centre.

For most organisations large and small, a multi-cloud strategy with a hybrid cloud approach is the way of the future. As applications grow across organisations, their infrastructure needs change as well. For example, you might be running a large CRM system in a private cloud, but you may choose to run newer, cloud-native applications in a public cloud where you can leverage the cloud infrastructure to the fullest extent.

At the end of the day, organisations should choose the right infrastructure for the right workload and business needs, whether that's a hybrid and/or multi-cloud strategy using either/both public and/or private clouds.

View post:
Pros and cons of cloud infrastructure types and strategies - Information Age

AWS admits cloud ain’t always the answer, intros on-prem vid-analysing box – The Register

Amazon Web Services, the outfit famous for pioneering pay-as-you-go cloud computing, has produced a bit of on-prem hardware that it will sell for a once-off fee.

The device is called the "AWS Panorama Appliance" and the cloud colossus describes it as a "computer vision (CV) appliance designed to be deployed on your network to analyze images provided by your on-premises cameras".

"AWS customers agree the cloud is the most convenient place to train computer vision models thanks to its virtually infinite access to storage and compute resources," states the AWS promo for the new box. But the post also admits that, for some, the cloud ain't the right place to do the job.

"There are a number of reasons for that: sometimes the facilities where the images are captured do not have enough bandwidth to send video feeds to the cloud, some use cases require very low latency," AWS's post states. Some users, it adds, "just want to keep their images on premises and not send them for analysis outside of their network".

Hence the introduction of the Panorama appliance, which is designed to ingest video from existing cameras and run machine learning models to do the classification, detection, and tracking of whatever your cameras capture.

Sometimes the facilities do not have enough bandwidth to send video feeds to the cloud

AWS imagines those ML models could well have been created in its cloud with SageMaker, and will charge you for cloud storage of the models if that's the case. The devices can otherwise run without touching the AWS cloud, although there is a charge of $8.33 per month per camera stream.

The appliance itself costs $4,000 up front.
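Using only the figures quoted above, a rough back-of-the-envelope cost for a deployment works out as follows (ignoring tax, cloud storage of SageMaker models and any other fees):

```python
APPLIANCE_COST = 4000.00        # one-off hardware price
PER_STREAM_MONTHLY = 8.33       # per camera stream, per month

def panorama_cost(cameras: int, months: int) -> float:
    return APPLIANCE_COST + PER_STREAM_MONTHLY * cameras * months

# e.g. 10 cameras for three years comes to roughly $6,998.80
print(panorama_cost(cameras=10, months=36))
```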

Charging for hardware is not AWS's usual modus operandi. Its Outposts on-prem clouds are priced on a consumption model. The Snow range of on-prem storage and compute appliances are also rented rather than sold.

The Panorama appliance's specs page states that it contains Nvidia's Jetson Xavier AGX AI edge box, with 32GB RAM. The spec doesn't mention local storage, but lists a pair of gigabit ethernet ports, the same number of HDMI 2.0 slots, and two USB ports.

AWS announced the appliance at its re:invent gabfest in December 2020, when The Register opined that the cloudy concern might be taking a rare step into on-prem hardware, but by doing so would be eating the lunches of server-makers and video hardware specialists alike. Panorama turns out not to have quite the power to drive cloud services consumption that other Amazonian efforts do, since the ML models it requires could come from SageMaker or other sources. That fact, and the very pre-cloud pricing scheme, mean the device could therefore be something of a watershed for AWS.

See the article here:
AWS admits cloud ain't always the answer, intros on-prem vid-analysing box - The Register

Bust latency with monitoring practices and tools – TechTarget

Latency originates from two separate sources: a data center's network or its storage system. To reduce latency in your data center, consider its potential causes, then evaluate the various ways to troubleshoot it.

You can implement a variety of tools to help manage latency. Consider using latency monitoring software -- such as EdgeX Foundry or traceroute -- to pinpoint bottlenecks or keep tabs on network speeds, or adopt more latency-resistant hardware, including NVMe drives, persistent memory and SD-WANs.

Latency is the primary way to judge overall performance when it comes to storage systems. Low latency leads to faster transactions, which in turn leads to reduced storage costs for your business.

Storage latency comes from four main sources: storage controllers, storage software stacks, internal interconnects and external interconnects. You can reduce latency in each of these sources by selecting a fast CPU for your storage controller server, adopting storage software that prioritizes efficiency and CPU offload, implementing remote direct memory access networking and utilizing NVMe drives.

Persistent memory can also optimize storage and cut down on storage latency. It connects directly to the memory bus and offers two separate operating modes -- one to convert it to volatile memory, and the other to use it as a high-performance storage tier.

Network latency determines how long it takes between a request for data and the delivery of that data, which affects an entire infrastructure. High network latency can increase load times and even render certain applications unusable. Network latency usually comes from sources such as poor cabling, routing or switching errors, storage inefficiencies or certain security systems.

To improve network latency, start by measuring packet delay. Know how long it takes for your network to fulfill a request. Tools such as Ping, Traceroute and MTR can help you with this. Next, identify potential bottlenecks in your network. Depending on the source of your network latency, you can take steps such as improving routers or implementing network speed amplifiers. Finally, introducing nearby edge servers can also reduce networking strain and improve latency. Such edge servers can shorten the distance that a request packet must travel, thereby improving your system's response time.
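Alongside tools such as Ping, Traceroute and MTR, a quick latency probe can be scripted directly; the sketch below times TCP connection setup to a host as a rough round-trip proxy. The target host, port and sample count are illustrative assumptions.

```python
import socket
import statistics
import time

def tcp_latency_ms(host: str, port: int = 443, samples: int = 5) -> dict:
    """Time TCP connection setup as a rough stand-in for network round-trip latency."""
    times = []
    for _ in range(samples):
        start = time.perf_counter()
        with socket.create_connection((host, port), timeout=2):
            pass
        times.append((time.perf_counter() - start) * 1000)
    return {
        "min_ms": round(min(times), 1),
        "avg_ms": round(statistics.mean(times), 1),
        "max_ms": round(max(times), 1),
    }

print(tcp_latency_ms("example.com"))
```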

Cloud latency can create significant issues for both organizations and end users. Distance often causes the majority of cloud latency, but equipment such as WANs can also create cloud latency problems.

Implementing SD-WAN instead of WAN networking can reduce cloud latency. Most SD-WAN offerings feature increased reliability, end-to-end security, and extensibility and management automation. You can improve remote connections, but SD-WAN requires virtual endpoint appliances.

Edge computing moves data and calculations out of the data center to edge locations. To minimize decision-to-action latency, some cloud providers have even moved their cloud environments to the edge. This process cuts out public commercial internet traffic to enable faster and more efficient delivery of services to customers.

However, due to its remote nature, the edge can present its own problems with latency. Software that monitors edge devices should measure latency in real time. Edge device monitoring services such as AWS IoT services, EdgeX Foundry and FNT Command all possess latency monitoring tools or features.

When monitoring latency in large, complex systems, first ensure that monitoring latency won't increase latency. Synthetic monitoring and log monitoring tools can often do more harm than good when it comes to latency issues. Metrics- and event-based monitoring tools cause less strain in comparison but can also still increase latency.

You can ensure your latency monitoring tools don't negatively affect latency by evaluating and altering the sequence of scripts your monitoring tools run. This enables you to scale back on the frequency of latency testing and prevents your latency monitoring tools from creating issues.

Continued here:
Bust latency with monitoring practices and tools - TechTarget