Category Archives: Cloud Servers

The art of the multicloud deployment in your organization – TechRepublic


As cloud adoption has constantly been on the rise, it is becoming increasingly risky for organizations to host all of their applications and data on one cloud provider. Risk can be mitigated through multicloud deployment, which spreads resources across multiple cloud providers.


Multicloud refers to a deployment that depends on cloud services provided by two or more cloud vendors; in other words, it involves running workloads across multiple cloud providers.

Multicloud deployments also involve a calculated approach to the design and deployment of resources to ensure application architecture and the strengths of prospective infrastructure providers are complementary.

SEE: Multicloud explained: A cheat sheet (TechRepublic)

A key benefit of a multicloud deployment approach is that it ensures mission-critical services do not suffer outages when a cloud provider suffers an outage. Such resilience is crucial for systems and applications that need to serve end users around the clock.

Today's business needs are constantly changing. Multicloud deployments enable organizations to stay flexible and agile in the face of constant and rapid change. They also allow organizations to satisfy different data needs and ensure data availability.

Organizational IT compliance requirements around areas of data privacy and data sovereignty often vary. When dealing with data that involves stringent data security measures, multicloud deployments allow organizations to store sensitive data in a hardened private cloud and control how public cloud environments query them.

Multicloud deployments provide enterprises with a way out of being tied to one provider, as the alignment between a provider and an enterprise may change with time. Misalignment may yield increased costs and ineffective service delivery, and changing providers as a result may be expensive and time-consuming. Multicloud environments limit organizations' exposure to vendor lock-in.

Multicloud deployments may provide an organization with the means to optimize the costs of cloud technologies and the reliability of workloads. As cloud providers vary in offering and cost, organizations can choose which providers cost-effectively align with their strategic initiatives.

Even a single cloud provider can introduce a sharp learning curve, given the processes and systems IT teams are required to learn on top of familiarity with that provider's services. Now consider the impact of adopting more providers: it may be challenging to ensure teams remain competent across all environments.

Overall, cost proves to be a challenge for multicloud deployments. An extra cost is generated from the additional traffic and management layer between cloud environments. Unnecessary expenses can arise when organizations fail to grasp the differences in costs between cloud providers.

The cost of hiring and training staff for every cloud environment, along with the cost of unutilized resources that can go unnoticed in complex deployments, also shows that costs can easily spiral out of control without proper management and monitoring.
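The arithmetic behind that spiral is easy to sketch. The following Python snippet, with purely illustrative prices (real provider rates differ), shows how the inter-cloud traffic a multicloud deployment generates becomes a recurring line item:

```python
# Sketch: estimating cross-cloud data-transfer cost, one of the "extra costs"
# a multicloud deployment introduces. All prices are hypothetical placeholders,
# not real provider rates.
EGRESS_PRICE_PER_GB = {   # hypothetical $/GB egress rates
    "provider_a": 0.09,
    "provider_b": 0.08,
}

def monthly_egress_cost(transfers):
    """transfers: list of (source_provider, gigabytes) tuples."""
    return sum(EGRESS_PRICE_PER_GB[src] * gb for src, gb in transfers)

# 500 GB/month flowing out of each provider toward the other
cost = monthly_egress_cost([("provider_a", 500), ("provider_b", 500)])
print(f"${cost:.2f}")  # the inter-cloud traffic alone adds $85.00/month
```

Multiplied across environments and teams, charges like these are exactly the kind of expense that goes unnoticed without active cost monitoring.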

There are a number of considerations that need to be made for successful multicloud deployment. These include infrastructure, operations and applications.

A multicloud deployment plan should be specific about the target infrastructure based on the current and future needs of various stakeholders. The plan has to also take into consideration the impact of advanced technologies like software-defined infrastructure, virtualization and more.

The deployment plan needs to consider a multicloud deployment that supports these advanced infrastructure technologies in complex hybrid and multicloud environments. It is also crucial to determine how required data format conversions will be carried out during the movement of data across public cloud and on-premises environments. This consideration still holds for the transit of data between different cloud providers.

It is also important to determine whether a prospective multicloud deployment supports infrastructure self-provisioning as fully as possible. This includes support for infrastructure-as-code (IaC) templates, particularly since the IaC tools offered by cloud providers are vendor-specific and often tough to manage in multicloud environments.
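To see why vendor-specific IaC is painful, consider a minimal, hypothetical abstraction layer that maps one neutral resource description onto per-provider parameters. The size mappings here are only examples of the translation such tooling must maintain:

```python
# Sketch: a vendor-neutral resource description translated into per-provider
# parameters, the kind of abstraction multicloud IaC tooling provides.
# The mapping below is illustrative and would need constant upkeep as
# providers change their offerings.
SIZE_MAP = {
    "aws":   {"small": "t3.small",     "large": "m5.xlarge"},
    "azure": {"small": "Standard_B1s", "large": "Standard_D4s_v3"},
}

def render_instance(provider: str, size: str, region: str) -> dict:
    """Turn a neutral (size, region) request into provider-specific settings."""
    if provider not in SIZE_MAP:
        raise ValueError(f"unsupported provider: {provider}")
    return {
        "provider": provider,
        "instance_type": SIZE_MAP[provider][size],
        "region": region,
    }

print(render_instance("aws", "small", "eu-west-1"))
```

Every provider added to a deployment adds another column to maintain in this mapping, which is why managing provider-specific IaC across clouds is a recurring operational burden.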

Finally, the data that is stored in containerized environments needs to be correctly managed and secured. Containerized environments benefit multicloud environments, as they run code in the same way, regardless of deployment infrastructure.

A multicloud deployment plan ought to address a number of operational issues. There should be an understanding of the impact of the deployment on the IT landscape and where new roles may need to be established.

For example, business relationship management roles may need to be introduced to ensure business needs and IT services work in alignment. These roles should also be created with access control and multicloud security in mind.

One of the greatest challenges plaguing multicloud deployments is cost management. As a result, the deployment plan must incorporate a cost management process to handle both current and future right-sizing.

It should also be easy to move data from one cloud to another when required. Users need to consider multicloud deployment tools that handle data replication, synchronization and multicloud data transfer cost-effectively.

Organizations should also consider multicloud deployment tools that manage and deploy the whole data fabric from a unified dashboard to provide transparency to the whole spectrum of multicloud end users. Such transparency ought to also cover the billing and pricing models for these end users.

For effective multicloud application deployment, teams should evaluate which applications and workloads are best suited for specific cloud platforms. This can be determined by the availability of specialized compute, how simple it is to integrate a cloud provider's services and resources with other cloud environments, and the geographic locations of the providers' data centers.

Securing and protecting data must be a priority, as data security stands as one of the top challenges to multicloud deployments. Multicloud application deployment should be augmented by effective authorization and authentication features to secure data.

Encryption of data at rest and data in transit is one of the approaches that can be taken to secure data. Additionally, this data needs to be protected against corruption and loss, and that protection has to be a consideration in a multicloud deployment plan.
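Protection against corruption can be sketched with a simple content digest, here using Python's standard library. This is a minimal illustration of the idea, not a substitute for a real data-protection product:

```python
import hashlib

# Sketch: detecting corruption of replicated data with a content digest.
# Complements (does not replace) encryption at rest and in transit.
def digest(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

original = b"customer-record-batch-0001"   # hypothetical object contents
stored_checksum = digest(original)          # recorded at write time

# Later, after the object has been copied between clouds:
replica = b"customer-record-batch-0001"
assert digest(replica) == stored_checksum   # replica arrived intact

corrupted = b"customer-record-batch-0002"
print(digest(corrupted) == stored_checksum)  # False: corruption detected
```

Storing checksums alongside objects lets every environment in a multicloud deployment independently verify that data survived transit and storage unaltered.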

Furthermore, standardization and coordination of development stacks across clouds have to be considered to ensure consistent and swappable deployments across multiple clouds. Considering continuous integration and delivery solutions for multicloud environments can ease the shift to multicloud environments and make multicloud application deployment more consistent and manageable.

SEE: iCloud vs. OneDrive: Which is best for Mac, iPad and iPhone users? (free PDF) (TechRepublic)

Flexera is a cloud management tool with a rich array of discovery, operational monitoring, management, governance, template-based provisioning, orchestration and automation, and cost optimization across multicloud environments and virtual and bare-metal servers. It is suitable but not limited to small and medium-sized businesses in need of a potent orchestration engine and workflow automation capabilities.

VMware's multicloud solutions offer organizations the ability to seamlessly migrate to the cloud without having to recode their applications. They enable organizations to modernize their infrastructure and operate consistently across multicloud environments, data centers and the edge. VMware offers numerous multicloud products, including VMware Cloud Foundation, Tanzu, Cloud on AWS, vRealize Cloud Management, CloudHealth by VMware Suite and more.

Azure Arc extends the Azure platform to enable users to create applications and services that can flexibly run in multicloud environments, at the edge and across data centers. Arc runs on new and legacy hardware, integrated systems, IoT devices, and Kubernetes and virtualization platforms.

Formerly known as Nutanix Beam, Nutanix Cloud Manager Cost Governance is a cloud management platform that offers organizations visibility into cloud consumption patterns and provides solutions for cost management and security optimization. Nutanix Cloud Manager Cost Governance also simplifies and drives multicloud governance. Cloud teams seeking insight into their expenditures will find great value in this tool.

Mist is an open-source multicloud management platform aiming to simplify multicloud and provide a unified interface for multicloud management. Mist supports all relevant infrastructure technologies such as private and public clouds, containers, bare-metal servers, and hypervisors.

Organizations should keep an eye on multicloud if they seek options that a single provider cannot offer. If flexibility, resilience and control over applications and data appeal to you, you should consider multicloud deployment. However, as multicloud deployments are large-scale transformative endeavors for any enterprise, the deployment plan should be executed in an agile manner.


VMware Carbon Black causing BSOD crashes on Windows – BleepingComputer

Windows servers and workstations at dozens of organizations started to crash earlier today because of an issue caused by certain versions of VMware's Carbon Black endpoint security solution.

According to some reports, systems at more than 50 organizations started to display the dreaded blue screen of death (BSOD) a little after 15:00 (GMT+1) today.

The root of the problem is a ruleset deployed today to Carbon Black Cloud Sensor 3.6.0.1979 - 3.8.0.398 that causes devices to crash and show a blue screen at startup, denying access to them.
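As an illustration, checking whether a sensor falls inside the affected range above (3.6.0.1979 - 3.8.0.398) is a straightforward version comparison. The version strings come from the article; the helper itself is a hypothetical sketch of the kind of triage check an admin might script:

```python
# Sketch: is a Carbon Black Cloud sensor version inside the affected range
# reported for this incident? Four-part versions compare naturally as tuples.
AFFECTED_LOW  = (3, 6, 0, 1979)
AFFECTED_HIGH = (3, 8, 0, 398)

def parse(version: str) -> tuple:
    return tuple(int(part) for part in version.split("."))

def is_affected(version: str) -> bool:
    return AFFECTED_LOW <= parse(version) <= AFFECTED_HIGH

print(is_affected("3.7.0.1253"))  # True  - the version cited in the article
print(is_affected("3.8.0.399"))   # False - just past the affected range
```

A check like this lets a fleet inventory quickly separate endpoints that need the Bypass-mode workaround from those that can be left alone.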

Microsoft Windows operating systems impacted by the issue are Windows 10 x64, Server 2012 R2 x64, Server 2016 x64, and Server 2019 x64.

On systems impacted by the issue, the stop code may identify the error as "PFN_LIST_CORRUPT."

Tim Geschwindt, an incident responder for S-RM Cyber, told BleepingComputer that starting at 15:30 (GMT+1), clients started to complain that their servers and workstations were crashing and suspected Carbon Black to be at fault.

After investigating, the researcher determined that all clients running Carbon Black sensor 3.7.0.1253 were affected. "They couldn't boot into any of their devices at all. Complete no go," Geschwindt said.

One admin said that they had "about 500 endpoints BSOD across our estate from approx 15:15 UK time."

It appears that there is a conflict between Carbon Black and AV signature pack 8.19.22.224.

VMware explains in a knowledge base article today that an updated Threat Research ruleset was rolled out to Prod01, Prod02, ProdEU, ProdSYD, and ProdNRT after internal testing showed no signs of issues.

An investigation is ongoing right now and the troublesome ruleset is being rolled back, which is expected to eliminate the problem.

As a temporary workaround, VMware recommends putting sensors into Bypass mode via the Carbon Black Cloud Console. This enables affected devices to boot successfully so the faulty ruleset can be removed.

VMware is advising clients experiencing this issue to open a support case and include the following info: Org_Key, Device Name(s), Device ID(s), and Operating System(s).

Update [August 23rd, 17:50]: VMware provided the following statement to BleepingComputer shortly after the article was published:

"VMware Carbon Black is aware of an issue affecting a limited number of customer endpoints, where certain older sensor versions were impacted by an update of our behavioral preventative capabilities. The issue has been identified and corrected, and VMware Carbon Black is working with impacted customers."


Antivirus Software Global Market Report 2022: Cloud-Based Antivirus a Key Trend Gaining Traction and Pres – Benzinga

DUBLIN, Aug. 24, 2022 /PRNewswire/ -- The "Antivirus Software Global Market Report 2022, By Type, Operating System, End User" report has been added to ResearchAndMarkets.com's offering.

The global antivirus software market is expected to grow from $3.92 billion in 2021 to $4.06 billion in 2022 at a compound annual growth rate (CAGR) of 3.6%. The change in growth trend is mainly due to the companies stabilizing their output after catering to the demand that grew exponentially during the COVID-19 pandemic in 2021. The market is expected to reach $4.75 billion in 2026 at a CAGR of 4.0%.
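The growth figures quoted above are internally consistent, as a quick compound-annual-growth-rate check shows:

```python
# Worked check of the report's market-size figures.
def cagr(start: float, end: float, years: int) -> float:
    """Compound annual growth rate between two values over `years` years."""
    return (end / start) ** (1 / years) - 1

# $3.92B (2021) -> $4.06B (2022): one year
print(f"{cagr(3.92, 4.06, 1):.1%}")   # 3.6%
# $4.06B (2022) -> $4.75B (2026): four years
print(f"{cagr(4.06, 4.75, 4):.1%}")   # 4.0%
```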

The antivirus software market consists of sales of antivirus software by entities (organizations, sole traders, and partnerships); the software is used to protect computers from viruses by scanning for, detecting, and removing them. Most antivirus software operates in the background once downloaded, providing real-time protection against virus attacks. All programs' behavior is monitored by the antivirus software, which flags any questionable activity.

The main types of antivirus software, segmented by device, cover computers, tablets, smartphones, and others. Computer antivirus software is used to prevent, scan for and detect viruses and malware that harm the computer. The operating systems covered include Windows, Mac, Android, iOS and Linux, and the software is used by various verticals such as corporate, personal and government.

North America was the largest region in the antivirus software market in 2021. Europe was the second-largest region in the antivirus software market. The regions covered in this report are Asia-Pacific, Western Europe, Eastern Europe, North America, South America, Middle East, and Africa.

The increasing number of cyber-attacks is expected to propel the growth of the antivirus software market in the coming years. A cyberattack is a cyberspace-based attack aimed at disrupting, disabling, destroying, or manipulating a computer or other device intentionally.

There is an increase in hacking and data breaches on computers, laptops, and mobiles using viruses or malware. Antivirus software can block or prevent the virus or malware from entering the device and so prevents cyberattacks. For instance, in 2020, Air India, an India-based carrier airline company, reported hackers had compromised its servers and accessed the personal data of 4.5 million fliers. In India, 1.16 million cybersecurity cases were registered in 2020 alone. Therefore, the increasing number of cyberattacks drives the market for antivirus software.

Cloud-based antivirus is a trend gaining popularity in the antivirus software market. Cloud antivirus is a solution that offloads the work to a cloud-based server instead of bogging down the computer with a full antivirus suite. Cloud antivirus protects PCs, laptops, and mobile devices by providing behavior-based screening and continuously updated malware protection. For instance, according to a 2021 report by Tracxn Technologies Limited, an India-based software company, major companies including Malwarebytes, Avast, Panda Security, Qihoo 360 Technology and AVG Technologies are using cloud-based antivirus solutions.

In July 2020, NortonLifeLock, a US-based cybersecurity software company, acquired Avira in a deal worth $360 million. Avira is a Germany-based company offering security software and specializing in antivirus software. Through this acquisition, Avira serves a large customer base in Europe and important emerging markets with a consumer-focused array of cybersecurity and privacy solutions.

Scope

Markets Covered:

1) By Type: Computers; Tablets; Smart Phones; Others

2) By Operating System: Windows; Mac; Android, iOS or Linux

3) By End User: Corporate; Personal; Government

Key Topics Covered:

1. Executive Summary

2. Antivirus Software Market Characteristics

3. Antivirus Software Market Trends And Strategies

4. Impact Of COVID-19 On Antivirus Software

5. Antivirus Software Market Size And Growth

6. Antivirus Software Market Segmentation

7. Antivirus Software Market Regional And Country Analysis

8. Asia-Pacific Antivirus Software Market

9. China Antivirus Software Market

10. India Antivirus Software Market

11. Japan Antivirus Software Market

12. Australia Antivirus Software Market

13. Indonesia Antivirus Software Market

14. South Korea Antivirus Software Market

15. Western Europe Antivirus Software Market

16. UK Antivirus Software Market

17. Germany Antivirus Software Market

18. France Antivirus Software Market

19. Eastern Europe Antivirus Software Market

20. Russia Antivirus Software Market

21. North America Antivirus Software Market

22. USA Antivirus Software Market

23. South America Antivirus Software Market

24. Brazil Antivirus Software Market

25. Middle East Antivirus Software Market

26. Africa Antivirus Software Market

27. Antivirus Software Market Competitive Landscape And Company Profiles

28. Key Mergers And Acquisitions In The Antivirus Software Market

29. Antivirus Software Market Future Outlook and Potential Analysis

30. Appendix

Companies Mentioned

For more information about this report visit https://www.researchandmarkets.com/r/9zwldu

Media Contact:

Research and Markets
Laura Wood, Senior Manager
press@researchandmarkets.com

For E.S.T office hours, call +1-917-300-0470
For U.S./CAN toll free, call +1-800-526-8630
For GMT office hours, call +353-1-416-8900

U.S. fax: 646-607-1904
Fax (outside U.S.): +353-1-481-1716


SOURCE Research and Markets


StorPool adds NVMe/TCP and NFS, and ports to AWS – Blocks and Files

StorPool Storage has added NVMe/TCP access to its eponymous block storage system, file access with NFS, and ported it to AWS.

This v20.0 release also adds business continuity, management and monitoring upgrades, and extends the software's compatibility. StorPool was started in Bulgaria in 2011 to provide a virtual SAN using the pooled disk and SSD storage of clustered servers running KVM. It has been extensively developed and steadily improved over the years since then. For example, v19.3 came along in August last year, adding management features and broad NVMe SSD support. v19.4 in February brought faster performance, updated hardware and software compatibility, management and monitoring changes, and improvements in the business continuity area.

A statement from CEO Boyan Ivanov said: "With each iteration of StorPool Storage, we build more ways for users to maximize the value and productivity of their data. These upgrades offer substantial advantages to customers dealing with large data volumes and high-performance applications, especially in complex hybrid and multi-cloud environments."

StorPool says its storage systems are targeted at storing and managing data for primary workloads such as databases, web servers, virtual desktops, real-time analytics solutions, and other mission-critical software. The company's product was classed as a challenger in GigaOm's Radar report on Primary Storage for Midsize Businesses in January this year.

The added NVMe/TCP access, which is becoming a block access standard, provides an upgrade for iSCSI access using the same Ethernet cabling. Customers get high-performance, low-latency access to standalone NVMe SSD-based StorPool storage systems, using the standard NVMe/TCP initiators available in VMware vSphere, Linux-based hypervisors, container nodes, and bare-metal hosts. The NVMe target nodes are highly available: if one fails, StorPool fails over the targets to a running node in the cluster.

The NFS server software instances on v20 are also highly available. They run in virtual machines backed by StorPool volumes and managed by the StorPool operations team. These NFS servers can have multiple file shares. The cumulative provisioned storage of all shares exposed from each NFS Server can be up to 50TB.

StorPool is careful to say that NFS is for specific use cases, mentioning three. First, this NFS supports moderate-load use cases for access to configuration files, scripts, images, and for email hosting. Second, it can support cloud platform operations, such as secondary storage for Apache CloudStack and NFS storage for OpenStack Glance. Third, it's good for throughput-intensive file workloads shared among internal and external end users, such as video rendering, video editing, and heavily loaded web applications.

However, NFS file storage on StorPool is not suitable for IOPS-intensive file workloads like virtual disks for virtual machines.

StorPool storage can now be deployed in sets of three or more i3en.metal instances in AWS. The solution delivers more than 1.3 million balanced random read/write IOPS to EC2 r5n and other compatible compute instances (m5n, c6i, r6i, etc.). StorPool on AWS frees users of per-instance storage limitations and can deliver this level of performance on any instance type with sufficient network bandwidth. It achieves these numbers while utilizing less than 17 percent of client CPU resources for storage operations, leaving the remaining 83 percent for the user application(s) and database(s).

A chart shows latency vs IOPS of 4KB mixed read/write storage operations on an r5n client instance. The StorPool storage system, when running on 5x i3en instances, delivers more than 1,200,000 IOPS at very low latency, compared to io2 Block Express, which tops out at about 260,000 of the same type of IOs.

Read the technical details about StorPool on AWS here.

StorPool on AWS is intended for single-node workloads needing extremely low latency and high IOPS, such as large transactional databases, monolithic SaaS applications, and heavily loaded e-commerce websites. Also, workloads that require extreme bandwidth block performance can leverage StorPool to deliver more than 10 GB/sec of large block IO to a single client instance. Several times more throughput can be delivered from a StorPool/AWS storage system when serving multiple clients.

ESG Practice Director Scott Sinclair said: Adding NVMe/TCP support, StorPool on AWS and NFS file storage to an already robust storage platform enables StorPool to better help their customers achieve a high level of productivity with their primary workloads.

With its 20th major release, StorPool's storage software is mature, reliable, fast and feature-rich. Think of it as competing with Dell PowerStore, NetApp ONTAP, HPE Alletra, IBM FlashSystem, and Pure Storage in the small and medium business market, where customers may need unified file and block access in a hybrid on-premises and AWS cloud environment. Find out more about StorPool v20 here.


EFI Fiery Command WorkStation Integration with IQ Cloud Services Increases User Productivity – WhatTheyThink

Industry-leading unified job management interface now includes EFI IQ cloud services for a fast, flexible way to roll out tailored job management preferences and resources

FREMONT, Calif. Fiery, the digital front end (DFE) server and print workflow business of graphic arts technology company Electronics For Imaging, Inc., has introduced powerful new cloud capabilities and additional productivity enhancements for print businesses in the new version 6.8 of the EFI Fiery Command WorkStation job management solution.

An EFI IQ cloud integration gives Command WorkStation users the ability to back up and restore their customized user interface settings, local presets, and imposition templates. With the click of a button, they can invite other Command WorkStation users in their print shop, or at other company locations, to download and install common settings and resources. A shop can optimize Command WorkStation for all users, even across multiple locations, for a faster and more flexible way to consistently manage jobs, with the confidence that their configurations are saved securely in their company's EFI IQ cloud account.

EFI IQ cloud capabilities for better business management and control

With the same EFI IQ account, managers can easily take advantage of cloud applications to extract value from their print shop data to minimize bottlenecks, optimize equipment utilization, and track performance by shift. Features available at no charge for use with an unlimited number of cut-sheet digital printers include:

EFI IQ Dashboard, an application that provides a personalized view of printer status, consumables and job status right now;

EFI Insight, an application that helps managers transform print production trend data into actionable analytics that drive business improvement;

EFI Notify, an application that delivers alerts for production-blocking events and enables automatic production report distribution; and

EFI Go, a mobile app that delivers EFI IQ dashboard metrics, notifications and more to busy managers to monitor operations even while they are not on site.

"This new version of Command WorkStation is a significant milestone in our strategy to improve Fiery capability with EFI IQ cloud services," said John Henze, vice president of sales and marketing, EFI Fiery. "Using the familiar Command WorkStation interface, users can now access cloud services to better manage both their print jobs and their business."

New enhancements for more efficient, faster job management

Print businesses also gain important flexibility and faster job setup by using the new selective preset capability in Command WorkStation 6.8. Now, users can define settings that apply only to specific aspects of a job, leaving all other original settings untouched. Using server presets can reduce the time it takes to prepare incoming files and get jobs ready for production by 80%.

Additional features in version 6.8 are being well received by customers. As Darin Lerbs, Production Print Solutions Architect at Marco, an EFI reseller and technology services company located in Minneapolis, commented: "It's great to see several of my, and my customers', suggested features in this new Command WorkStation release. For me, being able to rearrange Fiery servers in the server list and easily see server IP addresses helps a great deal. And my customers are going to love the ability to see how long it's going to take for a job to complete printing; that's really going to help them plan their workload."

The latest version of EFI Fiery Command WorkStation is available for download at no charge at http://www.efi.com/cws. For more information about advanced digital print production solutions from EFI, visit http://www.efi.com.


Follow EFI online:
Follow us on Twitter: https://twitter.com/EFIPrint
Find us on Facebook: http://www.facebook.com/EFIPrint
View us on YouTube: http://www.youtube.com/EFIDigitalPrintTech


Why app awareness is key to clearing the cloud visibility haze – IT Brief Australia

Article by Gigamon's John Gudmundson.

As organisations flock to the cloud, they are initiating new architectures and migrating existing applications to Infrastructure-as-a-Service (IaaS) providers and hybrid clouds via 'lift and shift' or refactoring.

They are scaling deployments with more servers and VMs, running high-capacity links, leveraging containers, and routinely adding new observability, security, and monitoring tools. On top of that, they're often running hundreds or even thousands of apps which, unknown to IT, could include rogue software such as crypto mining or BitTorrent.

With ever-increasing volumes of application-oriented data, it's difficult for IT teams and tools to focus on the most actionable activity and avoid wasting resources processing irrelevant traffic.

Often we inundate security, observability, compliance and network monitoring tools with low-risk, low-value traffic, making them less effective and requiring needless scaling.

Additionally, false positives and alerts can overwhelm NetOps, CloudOps and SecOps teams, obscuring the root causes of network and application performance issues and the real threats buried in volumes of undifferentiated traffic.

'Old school' solutions

Traditionally, IT teams have taken laborious steps to identify applications based on network traffic by either hardwiring ports to specific applications or writing regular expressions to inspect traffic patterns and identify apps.

Such manual workarounds bring their own challenges. When change occurs, such as growth in an application's usage or the introduction of new applications, NetOps teams must update network segmentation. And app updates can change traffic patterns and behaviour, meaning IT must constantly test and update their homegrown regex signatures. For the cloud, implementing such stopgap measures is difficult, if not impossible.
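A toy version of both "old school" techniques makes the maintenance burden obvious. The port map and the signature below are illustrative, not production rules:

```python
import re

# Sketch of the manual approaches described above: a static port-to-app map
# plus a handwritten regex signature. Both must be updated by hand whenever
# applications or their traffic patterns change.
PORT_MAP = {3306: "mysql", 5432: "postgresql", 443: "https"}

# Homegrown signature: match the start of an HTTP/1.x request line.
HTTP_SIG = re.compile(rb"^(GET|POST|PUT|DELETE) \S+ HTTP/1\.[01]")

def identify(dst_port: int, payload: bytes) -> str:
    if HTTP_SIG.match(payload):
        return "http"
    return PORT_MAP.get(dst_port, "unknown")

print(identify(3306, b"\x00\x01"))                 # mysql   (port-based)
print(identify(8080, b"GET /index HTTP/1.1\r\n"))  # http    (signature-based)
print(identify(9999, b"\xde\xad"))                 # unknown (needs a new rule)
```

Every new application, non-standard port, or protocol revision lands in the "unknown" bucket until someone hand-writes another rule, which is exactly the upkeep problem the article describes.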

Until now, it's been hard to isolate cloud traffic by application type and specify whether or not it gets inspected by tools. Visibility has been siloed, and filtering options often only go up to Layer 4 elements, forcing organisations to pass all traffic through their tools or risk missing potential threats.

However, having each tool (intrusion detection system, data loss prevention, advanced threat detection, network analytics, forensics and so on) inspect packets to filter irrelevant traffic is inefficient and costly, as most tool pricing is based on traffic volume and processing load.

While packet brokering can reduce traffic, it requires programming knowledge to maintain complex rules. And although some systems provide a level of application filtering, it's hard to use, identifies a limited number of applications, and doesn't typically share this insight. Further, the filters require ongoing maintenance to keep up with changing application behaviour.

Visualise and filter cloud apps

Application filtering intelligence (AFI), such as my own company's, brings application awareness to multi-cloud environments. The technology automatically extends Layer 7 visibility to identify more than 3,500 common business and network applications traversing the network and lets users select and deliver only high-value or high-risk data based on application, location and activity.

Applications are classified into categories that are automatically updated as the landscape evolves. This allows a team to take actions on a 'family' of applications versus setting policies on individual apps. Examples of application families include antivirus, audio/video, database, ERP, gaming, messenger, peer-to-peer, telephony, webmail, and dozens more.
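In code, family-level policy might look like the following sketch; the family assignments and actions are invented for illustration:

```python
# Sketch: applying one policy per application 'family' instead of per-app
# rules. Family assignments and policy actions below are hypothetical.
APP_FAMILY = {
    "bittorrent": "peer-to-peer",
    "zoom": "audio/video",
    "postgres": "database",
}
FAMILY_POLICY = {
    "peer-to-peer": "drop",     # never forward to tools
    "audio/video": "sample",    # forward only a sample of the traffic
    "database": "inspect",      # full inspection
}

def action_for(app: str) -> str:
    family = APP_FAMILY.get(app, "uncategorised")
    # Fail safe: fully inspect anything without an explicit family policy.
    return FAMILY_POLICY.get(family, "inspect")

print(action_for("bittorrent"))  # drop
print(action_for("slack"))       # inspect (uncategorised)
```

When a newly identified app is assigned to an existing family, it inherits the family's policy automatically, so the rule set does not grow with every new application.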

Now each tool is more efficient since it no longer needs to store and process large volumes of irrelevant traffic. NetOps can apply existing tools across a larger area by prioritising only core business applications and accelerate the investigation of network and application performance issues with easier data isolation.

SecOps teams can extend current tools to a larger attack surface, securing more of the network and preventing sensitive data, such as personally identifiable information (PII), from being routed to monitoring and recording tools.

While identifying applications is a serious challenge in the cloud, obtaining even basic metadata such as NetFlow is problematic in public IaaS. However, it's possible to derive basic details such as which IP addresses are used and by whom, along with port and protocol details.

But the real need is for summarised, context-aware information about raw packets, based on Layers 4-7, that provides insights into user behaviour, security breaches, customer experience and infrastructure health.

Advanced metadata attributes expand on app layer visibility and support a comprehensive approach to obtaining application behaviour. Especially when deploying workloads in the cloud, users can acquire critical flow details, reduce false positives by separating signal from noise, identify nefarious data extraction, and accelerate threat detection through proactive, real-time traffic monitoring as well as troubleshooting forensics.

Observability and SIEM solutions use this information to correlate and analyse log data from servers and security appliances. Network security and monitoring tools leverage this metadata to deliver the insight and analytics needed to manage the opportunities and risks associated with cloud deployments.

And administrators can automate anomaly detection, stop cyber threats that overcome perimeter or end-point protection, identify bottlenecks, and understand latency issues.

Based on Layers 4-7, application metadata intelligence (AMI) supplies network and security tools with more than 5,000 metadata characteristics that shed light on the application's performance, customer experience, and security. Advanced tech extracts and appends these elements to NetFlow and IPFIX. Records include:

Advanced L7 metadata can be applied in a variety of use cases. AMI's principal deployment is in providing metadata to SIEM and observability tools for security analysis. This can help to:

While IaaS and private cloud orchestration and management platforms are remarkably resilient, dynamic, and infinitely scalable, they don't offer next-generation network packet brokers (NGNPB) with a deep observability pipeline. Such brokers aggregate, filter and distribute all traffic to the proper security and networking tools. They also provide the compute power behind AFI and AMI.

See the original post here:
Why app awareness is key to clearing the cloud visibility haze - IT Brief Australia

Cloud Performance Management Market Worth $3.9 Billion By 2027 – Exclusive Report by MarketsandMarkets – PR Newswire

CHICAGO, Aug. 18, 2022 /PRNewswire/ -- The Cloud Performance Management Market is expected to grow from USD 1.5 billion in 2022 to USD 3.9 billion by 2027, at a Compound Annual Growth Rate (CAGR) of 17.6% during the forecast period, according to a new report by MarketsandMarkets. The major factors driving the growth of the Cloud Performance Management market include the increasing demand for AI, Big Data, and cloud solutions.

Browse in-depth TOC on "Cloud Performance Management Market" - 233 Tables, 47 Figures, 225 Pages

Download PDF Brochure: https://www.marketsandmarkets.com/pdfdownloadNew.asp?id=239116385

Large Enterprises segment to hold the highest market size during the forecast period

Organizations with more than 1,000 employees are categorized as large enterprises. The traction of cloud performance management in large enterprises is higher than in SMEs, as large enterprises are adopting cloud performance management solutions to improve business operational efficiency across regions.

The increasing deployment of SaaS offerings such as customer relationship management, human capital management, enterprise resource management, and other financial applications creates an advantageous environment for cloud monitoring adoption, particularly in large organisations seeking to improve their overall cloud systems, strengthen cloud monitoring, and sustain themselves in intense competition. Large enterprises also reflect on and refine best practices to ensure effective performance management. CMaaS (Cloud-Monitoring-as-a-Service) is a popular software solution for large businesses seeking a fully managed cloud monitoring service for cloud and virtualized environments. These solutions are provided by third-party providers and are monitored 24 hours a day by IT experts with access to the most recent APM technologies and services.

Banking, Financial Services, and Insurance to record the fastest growth during the forecast period

The BFSI vertical is crucial as it deals with financial data, and economic changes significantly affect it. Regulatory compliance and the demand for new services have created an environment where financial institutions are finding cloud computing more important than ever to stay competitive. A recent worldwide survey on public cloud computing adoption in BFSI states that 80% of financial institutions are considering hybrid and multi-cloud strategies to avoid vendor lock-in. Such strategies provide these critical financial institutions the much-needed flexibility to switch to alternate public cloud operators in case of an outage, avoiding any interruption in services.

New competitors, new technologies, and new consumer expectations are impacting the BFSI sector. Digital transformation provides organizations access to new customer bases and offers enhanced visibility into consumer behaviour through advanced analytics, which helps organizations create targeted products for their customers. Most banks are adopting cloud performance management solutions owing to benefits such as configuration management and infrastructure automation, which increase stability, security, and efficiency. The BFSI vertical is expected to hold a significant share of the cloud performance management market due to the advantages offered by cloud-based technologies, such as improved performance, reduced total cost of ownership, improved visibility, and standard industry practices. Mission-critical verticals such as BFSI adopt cloud performance management extensively to improve revenue generation, increase customer insights, contain costs, deliver market-relevant products quickly and efficiently, and help monetize enterprise data assets.

Request Sample Pages: https://www.marketsandmarkets.com/requestsampleNew.asp?id=239116385

Asia Pacific is projected to register the highest CAGR during the forecast period

The Asia Pacific region comprises economies such as China, Japan, Australia and New Zealand, and the rest of Asia Pacific. The demand for managed cloud and professional services is growing, particularly in countries with a mature cloud landscape, such as Japan, due to the increasing migration of complex Big Data workloads such as enterprise resource planning (ERP) to cloud platforms. The expansion of open-source technologies, as well as advancements in API-accessible single-tenant cloud servers, also helps to promote acceptance of managed private cloud providers. Furthermore, with the rise of the Internet of Things (IoT), the cloud is becoming increasingly important in enabling the development and delivery of IoT applications. To deal with the data explosion, more businesses in Asia Pacific are redesigning their networks and deploying cloud services.

The huge amount of data makes managing workloads and applications manually complex, which is a major factor in the adoption of cloud performance management solutions among enterprises in this region. The affordability and ease of deployment of these solutions also drive the adoption of cloud technologies among enterprises, and the increasing trend toward cloud-based solutions is expected to trigger the growth of the cloud performance management market in the region. Integration of the latest technologies, such as AI, analytics, and ML, drives demand for cloud performance management solutions, and the availability of advanced and reliable cloud infrastructure presents attractive opportunities for cloud-based technologies. Increasing investment in Asia Pacific by giant cloud providers such as Google, strong technological advancements, rising urbanization, and government support for the digital economy through suitable policies and compliance regulations have all driven the growth of the cloud performance management market in the region.

Get 10% Free Customization on this Report: https://www.marketsandmarkets.com/requestCustomizationNew.asp?id=239116385

Market Players

Some prominent players across all service types profiled in the Cloud Performance Management Market study include Microsoft (US), IBM (US), HPE (US), Oracle (US), VMware (US), CA Technologies (US), Riverbed (US), Dynatrace (US), App Dynamics (US), BMC Software (US).

Browse Adjacent Markets: Cloud Computing Market Research Reports & Consulting

Related Reports:

Cloud Storage Marketby Component (Solutions and Services), Application (Primary Storage, Backup and Disaster Recovery, and Archiving), Deployment Type (Public and Private Cloud), Organization Size, Vertical and Region - Global Forecast to 2027

Integrated Cloud Management Platform Marketby Component (Solutions and Services), Organization Size, Vertical (BFSI, IT & Telecom, Government & Public Sector) and Region - Global Forecast to 2027

About MarketsandMarkets

MarketsandMarkets provides quantified B2B research on 30,000 high-growth niche opportunities/threats that will impact 70% to 80% of worldwide companies' revenues. It currently services 7,500 customers worldwide, including 80% of global Fortune 1000 companies as clients. Almost 75,000 top officers across eight industries worldwide approach MarketsandMarkets for their pain points around revenue decisions.

Our 850 full-time analysts and SMEs at MarketsandMarkets track global high-growth markets following the "Growth Engagement Model" (GEM). The GEM aims at proactive collaboration with clients to identify new opportunities, identify the most important customers, write "attack, avoid and defend" strategies, and identify sources of incremental revenue for both the company and its competitors. MarketsandMarkets now publishes 1,500 MicroQuadrants (positioning top players across leaders, emerging companies, innovators, and strategic players) annually in high-growth emerging segments. MarketsandMarkets is determined to benefit more than 10,000 companies this year in their revenue planning and to help them take their innovations/disruptions to market early by providing research ahead of the curve.

MarketsandMarkets' flagship competitive intelligence and market research platform, "Knowledge Store", connects over 200,000 markets and entire value chains for a deeper understanding of unmet insights, along with market sizing and forecasts of niche markets.

Contact: Mr. Aashish Mehra
MarketsandMarkets INC.
630 Dundee Road, Suite 430
Northbrook, IL 60062
USA: +1-888-600-6441
Email: [emailprotected]
Research Insight: https://www.marketsandmarkets.com/ResearchInsight/cloud-performance-management-market.asp
Visit Our Website: https://www.marketsandmarkets.com
Content Source: https://www.marketsandmarkets.com/PressReleases/cloud-performance-management.asp

Logo: https://mma.prnewswire.com/media/660509/MarketsandMarkets_Logo.jpg

SOURCE MarketsandMarkets

Follow this link:
Cloud Performance Management Market Worth $3.9 Billion By 2027 - Exclusive Report by MarketsandMarkets - PR Newswire

Google is exiting the IoT services business. Microsoft is doing the opposite – ZDNet

Credit: Microsoft

Google will be shuttering its IoT Core service, the company disclosed last week. Its stated reason: partners can better manage customers' IoT services and devices. (So much for the idea that IoT workloads are key to growing the cloud business....)

While Microsoft also is relying heavily on partners as part of its IoT and edge-computing strategies, it is continuing to build up its stable of IoT services and more tightly integrate them with Azure. CEO Satya Nadella's "intelligent cloud/intelligent edge" pitch is morphing into more of an intelligent end-to-end distributed-computing play.

Following a reorg in April this year, which resulted in the Azure IoT engineering and PM teams moving into the Azure Edge + Platform group, Microsoft has been working to consolidate its IoT and edge-computing teams and merge those offerings more seamlessly with Azure. Microsoft officials said at the time that they wanted to integrate IoT/edge with the company's Azure Arc hybrid-management service; Azure Stack, its family of appliances and hyper-converged infrastructure (HCI) products; and Azure Edge Zones, its 5G-connected cloud services available from edge facilities. By doing so, Microsoft can pitch edge devices as being manageable from Azure across the globe.

Among Microsoft's current IoT offerings: Azure IoT Hub, a service for connecting, monitoring and managing IoT assets; Azure Digital Twins, which uses "spatial intelligence" to model physical environments; Azure IoT Edge, which brings analytics to edge-computing devices; Azure IoT Central; and Windows for IoT, which enables users to build edge solutions using Microsoft tools. On the IoT OS front, Microsoft has Azure RTOS, its real-time IoT platform; Azure Sphere, its Linux-based microcontroller OS platform and services; Windows 11 IoT Enterprise; and Windows 10 IoT Core -- a legacy IoT OS platform which Microsoft still supports but which hasn't been updated substantially since 2018.

(I'm not anywhere near as familiar with what AWS has in the space, but a quick search indicates it has a full suite of IoT services for industrial, commercial and automotive. It also offers FreeRTOS, the IoT Greengrass open-source edge runtime, and a dev kit for education-centric IoT devices. Like Microsoft, AI/ML looks to be a key workload here. Unlike Microsoft, AWS also has a substantial home/consumer IoT presence.)

I've been asking Microsoft since April this year for an update on the company's IoT and edge-computing plans and have been told repeatedly that it wasn't a good time for a briefing.

However, at the company's Build developers conference in May, Microsoft officials presented a few sessions about the company's evolving IoT and edge strategies.

A few takeaways:

Microsoft also played up heavily at Build this year the idea of a "hybrid loop." The concept: Hybrid apps will be able to allocate resources locally on PCs and in the cloud dynamically. The cloud becomes an additional computing resource for these kinds of applications, and applications -- especially AI/ML-enabled ones -- can opt to do processing locally on an edge device or in the cloud (or both). This concept definitely relies on IoT and edge devices and services becoming more deeply integrated with Azure.

I'm thinking we'll hear more about Microsoft's updated IoT and edge-computing vision at its upcoming Ignite 2022 IT pro conference in mid-October, if not before.

Originally posted here:
Google is exiting the IoT services business. Microsoft is doing the opposite - ZDNet

Top 20 IT KPIs and Metrics You Must Track Today – Security Boulevard

Non-technical executives have long, and unjustly, considered IT as a call center function. However, we (in the IT industry) of course know that IT is in fact a strategic business function. This gap between perceived value and actual value stems from IT historically not setting or tracking many key performance indicators (KPIs).

Since we are potentially headed into a recession, it's never been more important to define KPIs, set baselines, measure current performance, evaluate trends, report on your successes and take action where needed. This goes for both internal and external IT service providers. Internal IT teams need to protect both their headcount and budgets. MSPs, on the other hand, need to make sure their clients understand the value they provide, to reduce churn and ensure there are upsell channels.

What are IT KPIs?

A KPI or key performance indicator is a measure of how effectively a particular department in an organization is achieving its key business objectives. As the name suggests, IT KPIs are used to evaluate the performance of internal IT departments and MSPs.

IT KPIs track all critical aspects of quality associated with IT projects and help deliver them most effectively in a timely manner and within allocated budgets. This is achieved by tracking, analyzing and optimizing various critical parameters associated with IT such as IT cost management, problem-solving and ticket management.

Why is it important to track IT KPIs?

Tracking IT KPIs helps:

20 IT KPIs and metrics to track

We have segregated the various IT KPIs and metrics into four categories based on the measures of success they track: financial metrics, operational metrics, system metrics and security metrics. Let's take a look at these metrics in detail below:

Financial metrics

Tracking the performance of an IT initiative is imperative to understand the value of the invested resources. Financial metrics help evaluate the financial performance of IT projects and initiatives and help bridge the gap between their perceived value and actual value. Some of the most important financial metrics are:

1. IT spend vs. plan

This financial metric helps keep track of your IT expenses and analyze how effectively you are spending the IT budget allocated to you. Are you spending the entire budget? Are you consistently saving money on your IT initiatives? Is your function well-managed? This metric helps answer these finance-related questions and more.

2. Money saved in negotiations

Through this financial metric, you can find out whether you have been able to save any additional costs on negotiations. It tracks savings from cutting unused seats, consolidating multiple tools into a single solution, swapping to a lower cost solution or swapping to a longer-term contract with a cheaper per-year rate.

3. IT return on investment (ROI)

This metric helps evaluate the actual return on investment (ROI) for the dollars spent on IT projects and initiatives. Successful organizations usually aim for a 3:1 ROI to make the most of their IT investments. You can calculate your IT ROI by dividing the benefits of your IT program by the cost of investment.
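The calculation described above is a simple ratio. A minimal sketch (the dollar figures are illustrative, not from the article):

```python
def it_roi(benefits: float, cost: float) -> float:
    """Return ROI as the ratio of IT program benefits to cost of investment."""
    if cost <= 0:
        raise ValueError("cost of investment must be positive")
    return benefits / cost

# Illustrative figures: $300k in benefits on a $100k spend meets the 3:1 target.
ratio = it_roi(300_000, 100_000)  # 3.0
```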

Operational metrics

Operational metrics are used to track the performance of an organization in real time or over a specific time period. For the IT department, operational metrics are focused on measuring the performance of IT functions and resources such as services, technologies and workforce used to conduct business operations. Common operational metrics include:

4. Project success rate

This operational metric helps measure the percentage of successfully completed IT projects as well as the percentage of projects that are successfully completed on time.

5. SLA hit rate

To calculate the SLA hit rate, internal IT teams and MSPs agree upon the numbers in terms of performance and quality and measure them either monthly or quarterly to see whether the service-level agreements are being delivered upon. You can then calculate the percentage of tickets resolved within the previously agreed-upon service-level agreement.
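As a sketch, the hit rate is just the share of tickets that met the SLA (the ticket counts below are illustrative):

```python
def sla_hit_rate(tickets_within_sla: int, total_tickets: int) -> float:
    """Percentage of tickets resolved within the agreed SLA."""
    if total_tickets == 0:
        return 0.0  # no tickets in the period: report 0 rather than divide by zero
    return 100.0 * tickets_within_sla / total_tickets

rate = sla_hit_rate(95, 100)  # 95.0 (%)
```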

6. First contact resolution rate

With this metric, you can calculate the percentage of tickets that are resolved on the first touch point. This is an important measure of how efficiently the IT team is working to resolve incidents.

7. Number of tickets generated

Most IT departments use this metric to track the number of tickets generated on a daily or weekly basis. How you track this operational metric depends entirely on what your executives or clients care to know about.

8. Number of tasks automated

This metric should always record a YoY increase. Common tasks that you can automate to boost efficiency are patching, user onboarding and auto-remediation of common tickets. If your RMM or endpoint management solution isn't helping you automate common IT processes, you should upgrade to a best-in-class solution that can help you automate your everyday tasks.

9. Endpoints per technician

This particular metric will vary largely depending on whether you're an internal IT department or an MSP. It helps you evaluate the number of endpoints each of your technicians is responsible for. A best-in-class RMM solution can easily support a ratio of 500 endpoints to one technician without burning out the technicians, thanks to automation.

10. Retention rate of staff

With this metric, you can measure how long you retain your staff. If turnover is high, you might want to track training costs, the time it takes to train a new hire or the time it takes to hire new staff. While measuring the retention rate, you must exclude staff members who have been terminated for poor performance.

System metrics

System metrics are focused on ensuring that all IT systems such as hardware and applications are operating reliably. These metrics help organizations evaluate historical system performance and accordingly predict future performance. System metrics equip IT teams with the information required to scale their business and also pursue new opportunities that are largely reliant on a stable IT infrastructure.

11. Number of IT assets

With this KPI, you can track and record the number of IT assets in your IT infrastructure. You can also segment this asset information based on the type of IT asset, such as the number of desktops, laptops, phones and servers.

12. System availability (uptime)

Another important system KPI to track is system availability or uptime, which is the percentage of time that end users are able to work on your IT systems. To adequately measure this KPI, refer to the rule of 9s: you should aim for a minimum of 99.9% uptime, which allows for only about 9 hours of downtime per year, or 10 minutes per week.
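The rule of 9s can be turned into a concrete downtime budget. A minimal sketch (note that 99.9% works out to about 8.8 hours per year, close to the 9 hours cited above):

```python
MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600

def allowed_downtime_minutes_per_year(uptime_pct: float) -> float:
    """Downtime budget per year implied by an uptime target, in minutes."""
    return MINUTES_PER_YEAR * (1 - uptime_pct / 100)

# 99.9% uptime ("three nines") allows about 525.6 minutes (~8.8 hours) a year.
budget = allowed_downtime_minutes_per_year(99.9)
```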

13. Server availability (uptime)

Server availability is measured as the percentage of time that the servers on your network are up and running. You can calculate server uptime by subtracting the total downtime from the total time and dividing the result by the total amount of time over a specific period. Similar to system availability, server uptime of over 99.9% is considered favorable.
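The formula above, as a sketch (a 30-day month with half an hour of downtime is an illustrative input):

```python
def uptime_percentage(total_hours: float, downtime_hours: float) -> float:
    """Uptime as a percentage: (total time - downtime) / total time."""
    return 100.0 * (total_hours - downtime_hours) / total_hours

# 30 minutes of downtime in a 30-day month (720 hours) is roughly 99.93% uptime.
month_uptime = uptime_percentage(720, 0.5)
```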

14. Server and/or cloud utilization

Server utilization is yet another important metric that helps monitor and track system performance. With this metric, you can track the amount of time a server is busy. Some organizations also track cloud utilization as a way of measuring their system performance.

15. Aggregate workstation utilization

Workstation utilization is another important KPI serving as a measure of system performance. It tracks the percentage of total workstation memory utilized in an organization.

Security metrics

Security metrics are a critical IT KPI focused on measuring how efficiently your security efforts are working toward keeping your systems and networks protected from security threats. Tracking security metrics is critical to maintaining the integrity of your IT infrastructure and making regular adjustments to ensure you stay on top of your security efforts.

16. Antivirus/antimalware deployment

This metric tracks the deployment status of antivirus/antimalware on your systems. As an important security metric, this KPI ideally should be 100% for your IT infrastructure to be performing most efficiently and securely.

17. Number of open vulnerabilities

With this metric, IT teams and MSPs can track the number of open vulnerabilities for both computers and servers on their network. It helps monitor your IT infrastructure's exposure to potential threats and enables you to come up with strategies to bolster your company's IT security posture through quicker incident remediation.

18. Patch deployment success rate

Monitoring the success rate of patch deployment is critical for ensuring that your systems are well-patched and up to date. The patch deployment success metric captures the percentage of patches deployed successfully and helps track the progress of your IT department's patch deployment process.

19. Days since last incident

With this metric, you can calculate the number of days that have passed since the last incident in your IT infrastructure. This helps IT teams and MSPs measure the efficacy of any strategic and technological changes they may have made to reduce the likelihood of incidents. Ideally, the gap between two incidents should steadily increase as security upgrades take effect.
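A minimal sketch of the calculation (the dates are illustrative):

```python
from datetime import date

def days_since_last_incident(last_incident: date, today: date) -> int:
    """Number of whole days elapsed since the last recorded incident."""
    return (today - last_incident).days

elapsed = days_since_last_incident(date(2022, 8, 1), date(2022, 8, 23))  # 22
```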

20. Percentage of machines backed up

This is another great security metric that can help monitor and improve your security health. It tracks the percentage of machines backed up and measures the number of days since the last backup. With this metric, you can stay on top of your backup routine and significantly minimize the possibility of losing your critical data to a security incident.

Improve IT performance with Kaseya

If you feel like your current IT stack is getting in the way of tracking your KPIs and boosting IT performance, let's talk. Kaseya has a wide range of industry-leading IT management solutions that can be yours at 30% less cost than traditional IT solutions. Learn more about how Kaseya VSA can automate your everyday tasks and help improve your IT performance by requesting your free demo today.

The post Top 20 IT KPIs and Metrics You Must Track Today appeared first on Kaseya.

*** This is a Security Bloggers Network syndicated blog from Blog - Kaseya authored by Kaseya. Read the original post at: https://www.kaseya.com/blog/2022/08/23/it-kpis-metrics/

Original post:
Top 20 IT KPIs and Metrics You Must Track Today - Security Boulevard

Meet New ODBC Drivers for Cloud Data Warehouses and Services – openPR

Prague, Czech Republic, August 24, 2022 --(PR.com)-- Devart, a recognized vendor of world-class data connectivity solutions for various data connection technologies and frameworks, has released 14 new ODBC drivers for Cloud Data Warehouses and Services. These drivers allow easy access to the data sources listed below from various ETL, BI, reporting, and database management tools and programming languages. Data access is possible on 32-bit and 64-bit Windows, as well as Linux and macOS.

Also, the drivers fully support standard ODBC API functions and data types and enable fast access to live data from anywhere.

Here is the detailed list of the sources:
1. Cloud Data Warehouses: Azure Synapse Analytics, QuestDB, Snowflake
2. Cloud CRM: PipeDrive
3. Communication: Slack
4. E-commerce: WooCommerce
5. Help Desk: Zendesk
6. Marketing: Active Campaign, EmailOctopus, Klaviyo, Marketo
7. Payment Processing: Square
8. Project Management: Asana
9. Other Applications: WordPress

To learn more about the recent release, visit: https://blog.devart.com/14-new-odbc-drivers-for-cloud-data-warehouses-and-services-released.html

ODBC Drivers are high-performance connectivity solutions with enterprise-level features for accessing the most popular database management systems and cloud services from ODBC-compliant reporting, analytics, BI, and ETL tools on 32-bit and 64-bit Windows, macOS, and Linux.
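To illustrate how such a driver is consumed, an ODBC connection string names the installed driver and the target data source. Everything below (driver name, host, database, user) is a hypothetical placeholder, not Devart's documented configuration:

```python
def odbc_conn_str(driver: str, server: str, database: str, uid: str) -> str:
    """Build a DSN-less ODBC connection string."""
    return f"DRIVER={{{driver}}};Server={server};Database={database};UID={uid}"

conn_str = odbc_conn_str(
    "Devart ODBC Driver for Snowflake",   # hypothetical driver name
    "myaccount.snowflakecomputing.com",   # placeholder host
    "SALES",                              # placeholder database
    "analyst",                            # placeholder user
)

# With the driver installed, any ODBC-compliant tool or language can use the
# string, e.g. from Python via the third-party pyodbc module:
#   import pyodbc
#   with pyodbc.connect(conn_str + ";PWD=...") as conn:
#       rows = conn.cursor().execute("SELECT 1").fetchall()
```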

About Devart

Devart is one of the leading developers of database tools and administration software, ALM solutions, data providers for various database servers, data integration, and backup solutions. The company also implements Web and Mobile development projects.

For additional information about Devart, visit https://www.devart.com/.

See the article here:
Meet New ODBC Drivers for Cloud Data Warehouses and Services - openPR