Category Archives: Cloud Servers

Inspur AI Servers Achieve Record-Breaking Results in the Latest MLPerf v2.1 Inference Benchmarks – Business Wire

SAN JOSE, Calif.--(BUSINESS WIRE)--Inspur Systems, a leading data center, cloud computing and AI solutions provider, announced that Inspur AI servers achieved record-breaking results with massive performance gains in the newly released MLPerf Inference v2.1 AI benchmark results. Inspur AI servers took the lead in more than half of the tasks in the Closed division, posting performance improvements of over 100% in multiple tasks compared to previous results.

Inspur AI servers were top-ranked in 19 out of 30 tasks in the Closed division, which offers an apples-to-apples performance comparison between submitters. Among them, Inspur AI servers won 12 titles out of 16 tasks in the data center category and 7 titles out of 14 tasks in the edge category. Inspur successfully defended 11 performance records and saw performance improvements of approximately 100% in several tasks like BERT (natural language processing) and 3D U-Net (medical image segmentation).

Strong lead in BERT, greatly improving Transformer performance

21 global companies and research institutions submitted more than 10,000 performance results for the Inference v2.1 benchmarks. The Inspur NF5468M6J AI Server has a pioneering design with 24 GPUs in a single machine. Inspur improved inference performance for BERT, which is based on the Transformer architecture, with strategies including in-depth optimization of round-robin scheduling of GPUs to make full use of the performance of each GPU, enabling the completion of 75,000 question-and-answer tasks per second. This is a massive 93.81% jump compared with the previous best performance in the v2.0 results. It also marked the fourth time that an Inspur AI server was the benchmark leader for the MLPerf Inference BERT task.

The Inspur NF5468M6J AI Server achieved record-breaking performance that was 20% higher than the runner-up in the BERT task. The success of the NF5468M6J is due to its excellent system design: it supports up to 24 A100 GPUs with a layered and scalable computing architecture, and it earned 8 titles with excellent performance. Among the participating high-end mainstream models utilizing 8 GPUs with NVLink technology, Inspur AI servers achieved top results in 7 out of 16 tasks in the Data Center category, showing leading performance among high-end models. The NF5488A5, Inspur's flagship high-performance AI server, supports 8 A100 GPUs interconnected with third-generation NVLink plus 2 AMD Milan CPUs in a 4U space. The NF5688M6 is an AI server with extreme scalability optimized for large-scale data centers; it supports 8 A100 GPUs, 2 Intel Ice Lake CPUs and up to 13 PCIe Gen4 I/O expansion cards.

Optimization on algorithm and architecture, further enhancing performance

Inspur was the first to apply a hyperparameter optimization solution in MLPerf Training, which greatly improves performance. Inspur pioneered a ResNet convergence optimization solution: on the ImageNet dataset, only 85% of the original iteration steps were needed to reach the target accuracy. This optimization scheme improved training performance by 15%. Inspur was also the first to use its self-developed convolution merging algorithm plugin operator solution in the MLPerf Inference benchmarks. The algorithm improves performance from 123 TOPS to 141 TOPS, a performance gain of 14.6%.

In terms of architecture optimization, Inspur took the lead in using a JBOG (just a bunch of GPUs) solution to greatly improve the ability of Inspur AI servers to accommodate a large number of GPUs in a single node. In addition, high-load multi-GPU collaborative task scheduling and the data transmission performance between NUMA nodes and GPUs were deeply optimized. This enables linear scaling of CPU and GPU utilization and the simultaneous operation of multiple concurrent tasks, which greatly improves performance.

Inspur is committed to full-stack innovation across its AI computing platform, resource platform and algorithm platform, and works with its MetaBrain ecosystem partners to jointly accelerate AI industrialization and the intelligent development of various industries.

As a member of MLCommons, Inspur has actively promoted the development and innovation of the MLPerf benchmark suite, participating in the benchmarks 10 times and winning multiple performance titles. Inspur continues to innovate in areas such as overall system optimization, software-hardware synergistic optimization, and reduction of energy consumption, constantly breaking MLPerf performance records. It shares these technologies with the MLCommons community, where they have been adopted by a large number of participating manufacturers and are widely used in subsequent MLPerf benchmarks.

To view the complete results of MLPerf Inference v2.1, please visit: https://mlcommons.org/en/inference-datacenter-21/ and https://mlcommons.org/en/inference-edge-21/

About Inspur Systems

Inspur Systems is a leading data center, cloud computing and AI solutions provider, ranked among the world's top 3 server vendors. It is Inspur Information's San Francisco-based subsidiary company. Inspur's cutting-edge hardware products and designs are widely delivered and deployed in major data centers around the globe, serving important technology arenas like open computing, cloud, AI and deep learning. Inspur works with customers to develop purpose-built, performance-optimized solutions that empower them to tackle different workloads, overcome real-world challenges, and grow their business. To learn more, visit https://www.inspursystems.com.

Excerpt from:
Inspur AI Servers Achieve Record-Breaking Results in the Latest MLPerf v2.1 Inference Benchmarks - Business Wire

Camcloud: More than just a cottage industry – IT World Canada

Let's say you're at the cottage and your home alarm goes off. But home is hundreds of kilometres away. What do you do? If you're like Brendan Harrison, co-founder of Camcloud, you start a company.

"Our origins stem from one time I was at my cottage in Eastern Ontario," said Harrison in a recent interview. "My home then had a traditional alarm system. I received a call, and as I wasn't close to home, the security company offered to disable my home alarm remotely or call the police. I told them that as I didn't want to drive all the way back home for a false alarm, they had better call the police. They did so, and I ended up getting a $150 bill from the police for what indeed turned out to be a false alarm."

This incident inspired Harrison, along with co-founders Dan Burkett and Alen Zukich, to start Camcloud.

"We've made it our mission to deliver a solution that is easy to use, reliable, completely cloud-driven, and requires no specialized on-premise systems," said Harrison.

The cottage false-alarm incident got Harrison thinking about the flaws of most video surveillance solutions, which were too expensive, too complicated, and too time-consuming to manage, especially for a small business.

The standard solution at the time was little more than cameras linked to a standalone computer that did the recording for a single location. Harrison, Burkett, and Zukich wondered if it was possible to eliminate the server, moving the management into the cloud.

"We eliminated the need to have a piece of hardware, like an appliance or a server, at 100 different locations. So we avoided replicating all that infrastructure. We reduced the maintenance costs, and it simplified the deployment."

The case for the cloud-based solution was compelling. Storing critical surveillance footage safely offsite and reducing complex onsite hardware were two key benefits. It also provided web and mobile access to, and management of, a private cloud archive spanning multiple cameras.

Replacing proprietary hardware with the cloud for many clients involved interfacing with a variety of cameras. As a result, Camcloud's solution is hardware-agnostic: it can work with a wide range of cameras. This allows companies to knit together cameras across many locations.

The emerging solution filled a real market need, particularly with small- and medium-sized businesses. But Camcloud had yet to find its true focus and achieve its biggest success.

Multi-site

Eliminating the single remote machine, and taking the recording and monitoring to the cloud, has made Camcloud's solution particularly valuable to companies with multiple locations. Camcloud's multi-user feature allows users to efficiently manage a large number of cameras, users, and physical locations.

Remote management was expanded to include camera health checks that detect problems with a customer's cameras.

"You get to store all your video surveillance in the cloud securely in your own account. You get away from stolen equipment and gain camera-management scalability. These are the benefits we bring. We offer a hardware-free approach to this space."

By offering a modern solution to the age-old problem of security installation, access, and deployment, Camcloud has earned huge accolades for its solution. In 2016 it won the Best New Product of the Year Award for Cloud Solutions and Service from Security Products Magazine.

Artificial Intelligence and Machine Learning For Intelligent Monitoring

Once the solution was in the cloud, Camcloud was able to take advantage of developments in artificial intelligence (AI), using machine learning to solve a problem that plagues other systems.

The issue arises with motion detection. Monitoring cameras 24/7 is costly for businesses, especially those with a large number of locations and cameras. Motion detection is critical: it sets off an alert when the camera detects movement. The problem is that cameras can't distinguish between something harmless, like a tree branch moving in the wind, and a real threat such as a person on the property. Once again, false alarms are costly in terms of time, and can also result in charges when police or security respond to a non-threat.

The answer, said Harrison, is in leveraging machine learning to understand when motion should be ignored and when it requires investigation.

How was it possible for a relatively small startup to develop a complex AI solution? The company did receive the support of a government-funded university collaboration, but their solution required much more than a clever algorithm. To be fully implemented, machine learning needs to learn. That requires a huge amount of data designed to teach the algorithm to tell the difference between, for example, the movement of a tree branch and that of a person. And it must have data for a wide range of other conditions, and be able to learn how to make distinctions.

This is where having their solution in the cloud gave them leverage. This type of library of data, it turns out, was already available from Amazon Web Services (AWS), their cloud provider.

Today, Camcloud offers a world-class AI solution that detects people, vehicles, animals, and hundreds of other types of objects in video surveillance footage and responds appropriately. Customers can now use the optional and affordable Cloud AI module to analyze and classify motion events detected by their cameras.

Despite the high level of technical sophistication, Camcloud is able to make its solution affordable to a wide range of companies. Cloud hosting eliminates many hardware costs. AI-based monitoring allows companies to manage a large number of cameras in various locations without expensive human intervention. This has allowed Camcloud to expand its customer base, which is highly varied and is now drawn from many sectors, including "small businesses, food chains and schools [who] all want security and peace of mind." And yes, Camcloud also serves cottage owners.

Solutions to Client Challenges

Many current video surveillance solutions are too complicated and expensive, and can be time-consuming and difficult to manage. Customers often have problems to solve, and their requirements vary; many want surveillance that offers easy access, requires no hardware, and is entirely cloud-based with mobile access.

Harrison is excited about solving those problems. What is he most proud of? "Seeing our customers use our platform, use the technology for more active operational management and drive efficiencies in their business."

Camcloud has found reliable ways to solve the many complex problems that can arise when the security of its customers is paramount. Business owners have many other important things to focus on every day; they dont need to be worrying about surveillance.

Engaging the Customers in a Competitive Market

Surveillance has become an increasingly competitive market. With the likes of Google (Nest), Amazon (Ring) and Apple's smart home products, the market has been swallowed up by the home automation space.

Leveraging the cloud, Camcloud found a niche and addressed a real customer need. From their initial cottage solution, they found that customers were coming to Camcloud and saying, "We love your open platform and want to use it for our business."

Camcloud has found that other businesses, integrators and resellers are contacting them, asking how they can resell Camcloud's service.

Camcloud recruits channel partners who install the cameras. "But then," said Harrison, "what they can do is offer our cloud-based service instead of an on-premise service. Our partners love our platform because it allows them to mix and match the cameras they use depending on their specific needs."

See the article here:
Camcloud: More than just a cottage industry - IT World Canada

Update your domain’s name servers | Cloud DNS | Google Cloud

After you create a managed zone, you must change the name servers that are associated with your domain registration to point to the Cloud DNS name servers. The process differs by domain registrar provider. Consult the documentation for your provider to determine how to make the name server change.

If you don't already have a domain name, you can create and register a new domain name at Google Domains or Cloud Domains, or you can use a third-party domain name registrar.

If you are using Cloud Domains, see Configure DNS for the domain in the Cloud Domains documentation.

If you are using Google Domains, follow these instructions to update your domain's name servers.

For Cloud DNS to work, you must determine the name servers that have been associated with your managed zone and verify that they match the name servers for your domain. Different managed zones have different name servers.

In the Google Cloud console, go to the Cloud DNS zones page.

Go to Cloud DNS zones

Under Zone name, select the name of your managed zone.

On the Zone details page, click Registrar setup at the top right of the page.

To return the list of name servers that are configured to serve DNS queries for your zone, run the dns managed-zones describe command:
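
A minimal form of the command, assuming the gcloud CLI is installed and authenticated (the original code sample was not preserved in this excerpt):

  gcloud dns managed-zones describe ZONE_NAME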

Replace ZONE_NAME with the name of the managed zone for which you want to return a list of name servers.

The IP addresses of your Cloud DNS name servers change, and may be different for users in different geographic locations.

To find the IP addresses for the name servers in a given name server shard, run the following command:
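
For example, assuming your zone was assigned the "a" name server shard (the shard letter appears in the name server host names returned by the describe command), a dig query along the following lines returns the addresses; the host name shown is illustrative:

  dig ns-cloud-a1.googledomains.com +short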

For private zones, you can't query name servers on the public internet. Therefore, it's not necessary to find their IP addresses.

To find all the IP address ranges used by Google Cloud, see Where can I find Compute Engine IP ranges?

Verify that the name servers for the domain match the name servers listed in the Cloud DNS zone.

To look up name servers that are currently in use, run the dig command:
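
For example, with example.com standing in for your own domain:

  dig +short NS example.com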

Now that you have the list of Cloud DNS name servers hosting your managed zone, use your domain registrar to update the name servers for your domain. Your domain registrar might be Google Domains, Cloud Domains, or a third-party registrar.

Typically, you must provide at least two Cloud DNS name servers to the domain registrar. To benefit from Cloud DNS's high availability, you must use all the name servers.

After changing your domain registrar's name servers, it can take a while for resolver traffic to be directed to your new Cloud DNS name servers. Resolvers could continue to use your old name servers until the TTL on the old NS records expires.

More here:
Update your domain's name servers | Cloud DNS | Google Cloud

Cloud servers are proving to be an unfortunately common entry route for cyberattacks – TechRadar

Cloud servers are now the number one entry route for cyberattacks, new research has claimed, with 41% of companies reporting it as the first entry point.

The problem is only getting worse: the number of attacks using cloud servers as their initial point of entry rose 10% year-on-year, and cloud servers have also leapfrogged corporate servers as the main way for criminals to find their way into organizations.

The data, collected by cyber insurer Hiscox from a survey of 5,181 professionals from eight countries, found it's not just cloud servers that are letting hackers in, as 40% of businesses highlighted business emails as the main access point for cyberattacks.

Other common entry methods included remote access servers (RAS), which were cited by 31% of respondents, and employee-owned mobile devices, which were cited by 29% (a 6% rise from the year before).

Distributed denial of service (DDoS) attacks were also a popular method, cited by 26% of those surveyed.

The data also provided some insight into how cyberattacks are impacting different countries.

Businesses in the United Kingdom were found to be the least likely of all the countries surveyed to have experienced a cyberattack in the last year at 42%, significantly beating out the Netherlands and France, which had figures of 57% and 52% respectively.

However, on the flip side, the UK had the highest median cost for cyberattacks out of all the countries looked at, coming in at $28,000.

It's not just the smaller, underfunded firms that can fall victim to cloud server-based attacks.

Accenture, one of the world's largest IT consultancy firms, recently suffered an attack involving the LockBit ransomware strain that impacted a cloud server environment.

View original post here:
Cloud servers are proving to be an unfortunately common entry route for cyberattacks - TechRadar

Recycling the Cloud: Singapore facility gives second life to mega servers – Recycling International

Microsoft has opened a plant to tackle the growing stream of e-scrap from data centres. The Circular Center in Singapore provides services for the reuse of computer components in schools, for job training, and much more.

Microsoft aims to reuse 90% of its cloud computing hardware assets by 2025. The launch of this first facility in Asia is claimed to be an important step towards that goal, while also reducing Microsoft's carbon footprint and creating jobs.

Microsoft Cloud is powered by millions of servers in hundreds of data centres around the world, and demand for cloud services is growing exponentially. At these facilities, decommissioned servers and other types of hardware can be repurposed or disassembled by technicians before the components and equipment move on to another phase of life.

Microsoft's Intelligent Disposition and Routing System (IDARS) uses AI and machine learning to establish and execute a zero-waste plan for every piece of decommissioned hardware. IDARS also works to optimise routes for these hardware assets and provide Circular Center operators with instructions on how to dispose of each one.

Singapore, with strong government and private sector commitments and an agile policy environment, has already laid the foundations for an advanced recycling infrastructure to take advantage of those opportunities. A Microsoft Circular Center in Singapore is in line with this approach, says the tech multinational.

Microsoft's first Circular Center opened in Amsterdam in 2020. Since its inception, the company has reused or recycled 83% of all decommissioned assets. Plans are underway to expand the programme to Washington, Chicago, Sydney and other locations.

Read this article:
Recycling the Cloud: Singapore facility gives second life to mega servers - Recycling International

Application Server Market to Hit Valuation of $40.96 Billion by 2028 | Increasing Number of Cyberattacks is growing Concerns among End-Users -…

Westford, USA, Sept. 08, 2022 (GLOBE NEWSWIRE) -- As the world continues to become more digital, businesses are increasingly looking for application servers that can facilitate large-scale web and mobile deployments. The growth of the global application server market is only expected to increase in the coming years, as market players figure out new ways to stay competitive.

There is a growing demand for application servers, and companies are rushing to invest in these technologies in order to meet the needs of their customers. Application servers are now essential for any business that depends on web applications, as well as traditional desktop applications. This demand in the global application server market is due to the popularity of cloud-based services and the need for businesses to reduce IT costs. Many businesses are seeking solutions that allow them to use existing hardware and software infrastructure while offloading some of the processing burden to a third party. This can be especially beneficial for companies that have limited resources or cannot afford to hire additional IT staff.

To meet this growing demand, vendors in the application server market are investing in new product lines and innovation. For example, IBM introduced its Bluemix platform in 2018, which makes it easier for developers to build cloud-based applications using IBM's hypervisor technology. Hewlett Packard Enterprise has also made considerable investments in its Applied Data Science Platform, which provides databases and analytics capabilities for application development.

Get sample copy of this report:

https://skyquestt.com/sample-request/application-server-market

Why Are Businesses Rapidly Turning to Application Servers?

There are a number of reasons why businesses are turning to the application server market. For one, these systems can help speed up web and mobile deployments by handling the heavy lifting required to run complex applications. Additionally, application servers offer reliability and security benefits that can be priceless for organizations that depend on their websites and apps for business success.

Today, web-based applications are increasingly being used to replace desktop applications. In addition, businesses are finding that application servers are a more efficient way to manage their software infrastructure than traditional hosting providers, because they offer higher performance and reliability.

SkyQuest's research on the global application server market found that most businesses use the product to configure and run multiple applications simultaneously without slowdowns. This means that businesses can use application servers to run their business applications, as well as their own personal websites and applications.

Also, there's the increasing demand from cloud services providers for application servers. Cloud services providers want to use application servers so that they can provide customers with a scalable infrastructure. By using an application server, a cloud service provider can reduce the amount of time and effort it takes to set up a new service.

SkyQuest's report on the application server market offers insights on market dynamics, opportunities, trends, challenges, threats, pricing analysis, average spend per user, major spenders by company, consumer behavior, market impact, competitive landscape, and market share analysis.

Browse summary of the report and Complete Table of Contents (ToC):

https://skyquestt.com/report/application-server-market

High Risk of Ransomware in Application Server Infrastructure Poses a Challenge

Over the last few years, the global application server market witnessed around 129 major attacks on application server infrastructure. The increasing risk of cyberattacks on application servers is something businesses need to be aware of: these servers are a key part of many organizations, and when they are attacked, it can open up a lot of opportunities for criminals. Cyber risks to application servers have increased in recent years, as attackers have become increasingly skilled at targeting these systems. At the same time, companies are increasingly reliant on these systems to provide critical services, making them targets for hackers. In 2021, the average ransomware attack on an application server cost around $17,000.

SkyQuest recently conducted a survey on the application server market to understand the frequency of, and insights about, cyberattacks across 150 large and 150 small enterprises. It was observed that 13% of surveyed organizations suffered at least one cyberattack in the past two years, and small enterprises were at least 200% more susceptible to cyberattacks. More than 26% of these organizations suffered two or more attacks during that period. Additionally, 44% of these same organizations reported that their cybersecurity capabilities were inadequate to respond to the attacks they experienced. Per the findings, 88% of all detected data breaches began with stolen or illegally accessed user credentials.

Top cyberattacks in application server market

SkyQuest has published a report on the global application server market. The report provides a detailed analysis of cyberattacks on application server consumers and their overall performance. The report also offers valuable insights about top players and the key advancements they have made to avoid such attacks.

Speak to Analyst for your custom requirements:

https://skyquestt.com/speak-with-analyst/application-server-market

Top Players in Global Application Server Market

Related Reports in SkyQuests Library:

Global Electronic Data Interchange (EDI) Software Market

Global Human Resource (HR) Technology Market

Global Smart Label Market

Global Field Service Management (FSM) Market

Global Point Of Sale (POS) Software Market

About Us:

SkyQuest Technology is a leading growth consulting firm providing market intelligence, commercialization and technology services. It has 450+ happy clients globally.

Address:

1 Apache Way, Westford, Massachusetts 01886

Phone:

USA (+1) 617-230-0741

Email:sales@skyquestt.com

The rest is here:
Application Server Market to Hit Valuation of $40.96 Billion by 2028 | Increasing Number of Cyberattacks is growing Concerns among End-Users -...

Improving Splunk and Kafka Platforms with Cloud-Native Technologies – InfoWorld

Intel Select Solutions for Splunk and Kafka on Kubernetes use containers and S3-compliant storage to increase application performance and infrastructure utilization while simplifying the management of hybrid cloud environments.

Executive Summary

Data architects and administrators of modern analytic and streaming platforms like Splunk and Kafka continually look for ways to simplify the management of hybrid or multi-cloud platforms, while also scaling these platforms to meet the needs of their organizations. They are challenged with increasing data volumes and the need for faster insights and responses. Unfortunately, scaling often results in server sprawl, underutilized infrastructure resources and operational inefficiencies.

The release of Splunk Operator for Kubernetes and Confluent for Kubernetes, combined with Splunk SmartStore and Confluent Tiered Storage, offers new options for architectures designed with containers and S3-compatible storage. These new cloud-native technologies, running on Intel architecture and Pure Storage FlashBlade, can help improve application performance, increase infrastructure utilization and simplify the management of hybrid and multi-cloud environments.

Intel and Pure Storage architects designed a new reference architecture called Intel Select Solutions for Splunk and Kafka on Kubernetes and conducted a proof of concept (PoC) to test the value of this reference architecture. Tests were run using Splunk Operator for Kubernetes and Confluent for Kubernetes with Intel IT's high-cardinality production data to demonstrate a real-world scenario.

In our PoC, a nine-node cluster reached a Splunk ingest rate of 886 MBps, while simultaneously completing 400 successful dense Splunk searches per minute, with an overall CPU utilization rate of 58%. We also tested Splunk super-sparse searches and Splunk ingest from Kafka data stored locally versus data in Confluent Tiered Storage on FlashBlade, which exhibited remarkable results. The outcomes of this PoC informed the Intel Select Solutions for Splunk and Kafka on Kubernetes.

Keep reading to find out how to build a similar Splunk and Kafka platform that can provide the performance and resource utilization your organization needs to meet the demands of today's data-intensive workloads.

Solution Brief

Business challenge

The ongoing digital transformation of virtually every industry means that modern enterprise workloads utilize massive amounts of structured and unstructured data. For applications like Splunk and Kafka, the explosion of data can be compounded by other issues. First, the traditional distributed scale-out model with direct-attached storage requires multiple copies of data to be stored, driving up storage needs even further. Second, many organizations are retaining their data for longer periods of time for security and/or compliance reasons. These trends create many challenges.

Beyond the challenges presented by legacy architectures, organizations often face other difficulties. Large organizations often have Splunk and Kafka platforms in both on-prem and multi-cloud environments. Managing the differences between these environments creates complexity for Splunk and Kafka administrators, architects and engineers.

Value of Intel Select Solutions for Splunk and Kafka on Kubernetes

Many organizations understand the value of Kubernetes, which offers portability and flexibility and works with almost any type of container runtime. It has become the standard across organizations for running cloud-native applications; 69% of respondents from a recent Cloud Native Computing Foundation (CNCF) survey reported using Kubernetes in production. To support their customers' desire to deploy Kubernetes, Confluent developed Confluent for Kubernetes, and Splunk led the development of Splunk Operator for Kubernetes.

In addition, Splunk and Confluent have developed new storage capabilities: Splunk SmartStore and Confluent Tiered Storage, respectively. These capabilities use S3-compliant object storage to reduce the cost of massive data sets. In addition, organizations can maximize data availability by placing data in centralized S3 object storage, while reducing application storage requirements by storing a single copy of data that was moved to S3, relying on the S3 platform for data resiliency.

The cloud-native technologies underlying this reference architecture enable systems to quickly process the large amounts of data today's workloads demand; improve resource utilization and operational efficiency; and help simplify the deployment and management of Splunk and Kafka containers.

Solution architecture highlights

We designed our reference architecture to take advantage of the previously mentioned new Splunk and Kafka products and technologies. We ran tests with a proof of concept (PoC) designed to assess Kafka and Splunk performance running on Kubernetes with servers based on high-performance Intel architecture and S3-compliant storage supported by Pure Storage FlashBlade.

Figure 1 illustrates the solution architecture at a high level. The critical software and hardware products and technologies included in this reference architecture are detailed in Tables 2, 3 and 4 at the end of this document.

Additional information about some of these components is provided in the A Closer Look at Intel Select Solutions for Splunk and Kafka on Kubernetes section that follows.

Figure 1. The solution reference architecture uses high-performance hardware and cloud-native software to help increase performance and improve hardware utilization and operational efficiency.

A Closer Look at Intel Select Solutions for Splunk and Kafka on Kubernetes

The ability to run Splunk and Kafka on the same Kubernetes cluster connected to S3-compliant flash storage unleashes seamless scalability with an extraordinary amount of performance and resource utilization efficiency. The following sections describe some of the software innovations that make this possible.

Confluent for Kubernetes and Confluent Tiered Storage

Confluent for Kubernetes provides a cloud-native, infrastructure-as-code approach to deploying Kafka on Kubernetes. It goes beyond the open-source version of Kubernetes to provide a complete, declarative API to build a private cloud Kafka service. It automates the deployment of Confluent Platform and uses Kubernetes to enhance the platforms elasticity, ease of operations and resiliency for enterprises operating at any scale.
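
To illustrate that declarative approach, a minimal Kafka resource for Confluent for Kubernetes might look like the sketch below. This is an illustrative example, not the PoC's actual configuration: the exact fields and image tags depend on your Confluent for Kubernetes release and environment.

  apiVersion: platform.confluent.io/v1beta1
  kind: Kafka
  metadata:
    name: kafka
    namespace: confluent
  spec:
    replicas: 3                        # three brokers, mirroring the PoC's bare-metal layout
    image:
      application: confluentinc/cp-server:7.0.1          # Confluent Platform version from Table 3
      init: confluentinc/confluent-init-container:2.2.0  # matches Confluent for Kubernetes 2.2.0
    dataVolumeCapacity: 100Gi          # local broker storage; tiered data lives in the object store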

Confluent Tiered Storage architecture augments Kafka brokers with the S3 object store via FlashBlade, storing data on the FlashBlade instead of the local storage. Therefore, Kafka brokers contain significantly less state locally, making them more lightweight and rebalancing operations orders of magnitude faster. Tiered Storage simplifies the operation and scaling of the Kafka cluster and enables the cluster to scale efficiently to petabytes of data. With FlashBlade as the backend, Tiered Storage has the performance to make all Kafka data accessible for both streaming consumers and historical queries.
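
As a sketch of how Tiered Storage is pointed at an S3-compatible store such as FlashBlade, broker settings along the following lines enable the feature. The property names follow Confluent's Tiered Storage documentation; the bucket name and endpoint are illustrative placeholders:

  confluent.tier.feature=true
  confluent.tier.enable=true
  confluent.tier.backend=S3
  confluent.tier.s3.bucket=kafka-tiered-storage                           # illustrative bucket name
  confluent.tier.s3.region=us-east-1
  confluent.tier.s3.aws.endpoint.override=https://flashblade.example.com  # S3-compatible FlashBlade endpoint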

Splunk Operator for Kubernetes and Splunk SmartStore

The Splunk Operator for Kubernetes simplifies the deployment of Splunk Enterprise in a cloud-native environment that uses containers. The Operator simplifies the scaling and management of Splunk Enterprise by automating administrative workflows using Kubernetes best practices.
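
For a sense of what that automation looks like, a minimal custom resource for the Splunk Operator might resemble the sketch below. Treat this as illustrative only: the apiVersion and the available kinds (Standalone, IndexerCluster, SearchHeadCluster and so on) vary across operator releases.

  apiVersion: enterprise.splunk.com/v2
  kind: Standalone
  metadata:
    name: splunk-standalone
    namespace: splunk-operator
  spec:
    replicas: 1   # a single standalone instance; clustered kinds scale out similarly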

Splunk SmartStore is an indexer capability that provides a way to use remote object stores to store indexed data. SmartStore makes it easier for organizations to retain data for a longer period of time. Using FlashBlade as the high-performance remote object store, SmartStore holds the single master copy of the warm/cold data. At the same time, a cache manager on the indexer maintains the recently accessed data. The cache manager manages data movement between the indexer and the remote storage tier. The data availability and fidelity functions are offloaded to FlashBlade, which offers N+2 redundancy.
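
A minimal indexes.conf sketch for SmartStore against an S3-compatible FlashBlade endpoint could look like the following; the volume name, bucket, endpoint and credentials are placeholders, not values from the PoC:

  [volume:remote_store]
  storageType = remote
  path = s3://smartstore-bucket
  remote.s3.endpoint = https://flashblade.example.com   # S3-compatible FlashBlade endpoint
  remote.s3.access_key = PLACEHOLDER_ACCESS_KEY
  remote.s3.secret_key = PLACEHOLDER_SECRET_KEY

  [main]
  remotePath = volume:remote_store/$_index_name   # warm/cold buckets are offloaded to the volume above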

Remote Object Storage Capabilities

Pure Storage FlashBlade is a scale-out, all-flash file and object storage system that is designed to consolidate complete data silos while accelerating real-time insights from machine data using applications such as Splunk and Kafka. FlashBlade's ability to scale performance and capacity is based on five key innovations.

A complete FlashBlade system configuration consists of up to 10 self-contained rack-mounted servers. A single 4U-chassis FlashBlade can host up to 15 blades, and a full FlashBlade system configuration can scale up to 10 chassis (150 blades), potentially representing years of data for even higher-ingest systems. Each blade assembly is a self-contained compute module equipped with processors, communication interfaces and either 17 TB or 52 TB of flash memory for persistent data storage. Figure 2 shows how the reference architecture uses Splunk SmartStore and FlashBlade.

Figure 2. Splunk SmartStore using FlashBlade for the remote object store.

Proof of Concept Testing Process and Results

The following tests were performed in our PoC.

For all the tests, we used Intel IT's real-world high-cardinality production data from sources such as DNS, Endpoint Detection and Response (EDR) and Firewall, which were collected into Kafka and ingested into Splunk through Splunk Connect for Kafka.

Test #1: Application Performance and Infrastructure Utilization

In this test, we compared the performance of a bare-metal Splunk and Kafka deployment to a Kubernetes deployment. The test consisted of reading data from four Kafka topics and ingesting that data into Splunk, while dense searches were scheduled to run every minute.

Bare-Metal Performance

We started with a bare-metal test using nine physical servers. Five nodes served as Splunk indexers, three nodes as Kafka brokers and one node as a Splunk search head. With this bare-metal cluster, the peak ingest rate was 301 MBps, while simultaneously finishing 90 successful Splunk dense searches per minute (60 in cache, 30 from FlashBlade), with an average CPU utilization of 12%. The average search runtime for the Splunk dense search was 22 seconds.

Addition of Kubernetes

Next, we deployed Splunk Operator for Kubernetes and Confluent for Kubernetes on the same nine-node cluster. Kubernetes spawned 62 containers: 35 indexers, 18 Kafka brokers and nine search heads. With this setup, we reached a peak Splunk ingest rate of 886 MBps, while simultaneously finishing 400 successful Splunk dense searches per minute (300 in cache, 100 from FlashBlade), with an average CPU utilization of 58%. The average search runtime for the Splunk dense search was 16 seconds, a 27% decrease from the Splunk average search time on the bare-metal cluster. Figure 3 illustrates the improved CPU utilization gained from containerization using Kubernetes. Figure 4 shows the high performance enabled by the reference architecture.

Figure 3. Deployment of the Splunk Operator for Kubernetes and Confluent for Kubernetes enabled 62 Splunk and Kafka containers on the nine physical servers in the PoC cluster.

Figure 4. Running Splunk Operator for Kubernetes and Confluent for Kubernetes enabled up to a 2.9x higher ingest rate, up to 4x more successful dense searches, and a 27% reduction in average Splunk search time, compared to the bare-metal cluster.

Test #2: Data Ingest from Kafka Local Storage versus Confluent Tiered Storage

Kafka's two key functions in event streaming are producer (ingest) and consumer (search/read). In the classic Kafka setup, the produced data is maintained on the brokers' local storage, but with Tiered Storage, Confluent offloads the data from local storage to the object store and enables infinite retention. If a consumer is looking for data that is not in the local storage, the data is downloaded from the object storage.

To compare the consumer/download performance, we started the Splunk Connect workers for Kafka after one hour of data ingestion into Kafka with all data on the local SSD storage. The Connect workers read the data from Kafka and forwarded it to the Splunk indexers, where we measured the ingest throughput and elapsed time to load all the unconsumed events. During this time, Kafka read the data from the local SSD storage, and Splunk was also writing the hot buckets into the local SSD storage that hosts the hot tier.

We repeated the same test when the topic was enabled with Tiered Storage by starting the Splunk Connect workers for Kafka, which initially read the data out of FlashBlade and later from the local SSD storage for the last 15 minutes. We then measured the ingest throughput and the elapsed time to load all the unconsumed events.

As shown in Figure 5, there is no reduction in the Kafka consumer performance when the broker data is hosted on Tiered Storage on FlashBlade. This reaffirms that offloading Kafka data to the object store, FlashBlade, gives not only similar performance for consumers but also the added benefit of longer retention.

Figure 5. Using Confluent Tiered Storage with FlashBlade enables longer data retention while maintaining (or even improving) the ingest rate.

Test #3: Splunk Super-Sparse Searches in Splunk SmartStore

When data is in the cache, Splunk SmartStore searches are expected to be similar to non-SmartStore searches. When data is not in the cache, search times are dependent on the amount of data to be downloaded from the remote object store to the cache. Hence, searches involving rarely accessed data or data covering longer time periods can have longer response times than experienced with non-SmartStore indexes. However, FlashBlade accelerates the download time considerably in comparison to any other cheap-and-deep object storage available today.

To demonstrate FlashBlade's ability to accelerate downloads, we tested the performance of a super-sparse search (the equivalent of finding a needle in a haystack); the response time of this type of search is generally tied to I/O performance. The search was initially performed against the data in the Splunk cache to measure the resulting event counts. The search returned 64 events out of several billion events. Following this, the entire cache was evicted from all the indexers, and the same super-sparse search was issued again, which downloaded all the required data from FlashBlade into the cache to perform the search. We discovered that FlashBlade supported a download of 376 GB in just 84 seconds with a maximum download throughput of 19 GBps (see Table 1).

Table 1. Results from Super-Sparse Search

Downloaded Buckets: 376 GB
Elapsed Time: 84 seconds
Average Download Throughput: 4.45 GBps
Maximum Download Throughput: 19 GBps

In short, the super-sparse search downloaded 376 GB in 84 seconds.

Configuration Summary

Introduction

The previous pages provided a high-level discussion of the business value provided by Intel Select Solutions for Splunk and Kafka on Kubernetes, the technologies used in the solution and the performance and scalability that can be expected. This section provides more detail about the Intel technologies used in the reference design and the bill of materials for building the solution.

Intel Select Solutions for Splunk and Kafka on Kubernetes Design

The following tables describe the required components needed to build this solution. Customers must use firmware with the latest microcode. Tables 2, 3 and 4 detail the key components of our reference architecture and PoC. The selection of software, compute, network, and storage components was essential to achieving the performance gains observed.

Table 2. Key Server Components

CPU: 2x Intel Xeon Platinum 8360Y (36 cores, 2.4 GHz)
Memory: 16x 32 GB DDR4 @ 3200 MT/s
Storage (Cache Tier): 1x Intel Optane SSD P5800X (1.6 TB)
Storage (Capacity Tier): 1x SSD DC P4510 (4 TB)
Boot Drive: 1x SSD D3-S4610 (960 GB)
Network: Intel Ethernet Network Adapter E810-XXVDA2 (25 GbE)

Table 3. Software Components

Kubernetes: 1.23.0
Splunk Operator for Kubernetes: 1.0.1
Splunk Enterprise: 8.2.0
Splunk Connect for Kafka: 2.0.2
Confluent for Kubernetes: 2.2.0
Confluent Platform: 7.0.1 (using Apache Kafka 3.0.0)

Table 4. S3 Object Storage Components

Read more from the original source:
Improving Splunk and Kafka Platforms with Cloud-Native Technologies - InfoWorld

3 practical ways to fight recession by being cloud smart – IT Brief New Zealand

As COVID almost starts to feel like a distant memory, you'd think we'd all cop a break. But no, the threat of recession now darkens the horizon. This makes it an excellent time to get smart about how you use cloud and ensure it delivers short- and long-term value to your organisation.

In this article, we suggest three specific ways to nail down some genuine savings or optimise the benefits (and savings) from your cloud and cloud applications.

1. Save more when you choose a cloud-native application

Depending on where you are on your roadmap to cloud adoption, you may want to look sideways at some of your legacy line-of-business applications and ask if they will serve you equally well in your transformed state.

If you have enough budget, practically any application can be retrospectively modernised to work in the cloud. And, unwilling to be left behind, some vendors have re-engineered their applications to run in the cloud with varying degrees of success. But it's important to realise that unless the application was specifically built from the ground up to run on the cloud (i.e., cloud-native), it may not deliver an ROI or enable your business to keep up with the current pace of change.

Think of it this way: it's like adapting your petrol-fuelled car to run on an EV battery. While the innovation may prolong your beloved vehicle's life, it will never perform to the standard of a brand spanking new state-of-the-art Tesla.

Cloud-native applications are built from birth to be inherently efficient; to perform to a much better standard than applications with non-native features, and to cost less to run.

Let's break those benefits down a bit.

2. Check out that cloud sprawl

It's easy to rack up spikes on your cloud invoice when your organisation has gone cloud crazy. Cloud sprawl is when your cloud resources have proliferated out of control and you are paying for them, often unknowingly.

So, how does that happen? It usually comes about because of a failure to eliminate services that are no longer, or never were, part of your overall cloud strategy. It's like still paying a vehicle insurance policy on a Ferrari when you've made a sensible downgrade to a family-friendly Toyota.

Cloud sprawl can come around through departments adding on or trialling cloud applications, then not unsubscribing from them. Or from maintaining unneeded storage despite deleting the associated cloud server instance. Or from services you once needed when making the original move to the cloud and not decommissioning them.

Make your cloud strategy a living document to ensure you're only paying for what you need and use: one that's shared and compared with the real-life status quo regularly. Implement policies to retire those random or one-off cloud application trials when they're done with. Talk to your technology partner about setting up automated provisioning to shut down old workloads that are no longer of value or could be managed off-peak and therefore more cost-effectively.

And compare every invoice to identify whether you are paying for cloud services that you no longer need or use. If it's all sounding a bit hard, a cloud sprawl health check by your managed services partner could provide a great ROI.

3. Get more value from your nowhere-near-dead legacy applications

While cloud-native applications may seem to offer it all, we all know that sometimes it's simply not practical to move on from your investment in a legacy solution. In that case, a lift and shift (think of it as uplifting your house as is, where is, from a slightly down-at-heel suburb to a more upmarket one with better facilities) may be the best option to breathe life into ageing technology without having to invest in renovations (or buy new servers).

When done well, lift and shift is a very cost-effective way to onramp your organisation onto the cloud. Just be aware that while you will save money by not modernising your application, you'll not realise the true cloud benefits of native constructs (i.e., cheaper storage, elasticity, or additional security).

Don't forget to count your savings

If you're wondering where else you can make immediate or long-term savings, don't forget that your original decision to move to the cloud has delivered your organisation a positive ROI since day one.

And if you've chosen fully managed services, you've saved even more.

You've already walked away from the significant overheads of expensive servers stacked in a dust-free, temperature-controlled environment, the disruption caused by software upgrades or server downtime, and the need for IT resources to manage your environment and safeguard your data from cyberattacks. And you've said hello to a low-risk, secure, highly available environment accessible from anywhere your people work, at any time.

If you'd like to discuss how to optimise your cloud benefits, and get some well-considered, practical answers, contact us here.

Continue reading here:
3 practical ways to fight recession by being cloud smart - IT Brief New Zealand

Security pros say the cloud has increased the number of identities at their organizations – SC Media

The Identity Defined Security Alliance (IDSA) on Wednesday reported that the vast majority of companies surveyed (98%) confirmed that the number of identities has increased in their organization, with 52% saying it's because of the rapid adoption of cloud applications.

Other factors increasing identities at organizations are an increase in third-party relationships (46%) and in new machine identities (43%).

Given the growing number of identities in organizations as they migrate to cloud, it makes sense that 84% of respondents report having had an identity-related attack in the past year.

The IDSA report said managing and monitoring permissions at such a high scale and in convoluted environments has become extremely difficult. Attackers are exploiting this challenge and continuously attempting to escalate their attack capabilities.

"Identity breaches are by far one of the most common breaches," said Alon Nachmany, field CISO at AppViewX, who dealt with two breaches of this kind when he was a CISO. Nachmany said the industry slowly evolved toward privileged identities and ensured that privileged accounts were a separate identity, but when organizations moved to the cloud, the lines blurred.

"The days of managing your own systems with your own systems were gone," Nachmany said. "As an example, with on-prem Microsoft Exchange Servers migrating to Microsoft O365, we no longer managed the authentication piece. Our local accounts were now accessible from everywhere. And a lot of security best practices were overlooked. Another issue is that as some companies blew up and more systems came onboard, they were quickly deployed with the thought that we will go back and clean it up later. With the cloud making these deployments incredibly easier and faster, the issues just evolved."

Darryl MacLeod, vCISO at LARES Consulting, said that while it's effective to invest in IAM solutions, organizations need to go back to the basics and educate their employees about the importance of security. MacLeod said employees need to understand the dangers of phishing emails and other social engineering attacks. They should also know how to properly manage their passwords and other sensitive information; in doing so, MacLeod said, organizations can significantly reduce their identity-related risks.

"With the growth of cloud computing, organizations are now entrusting their data to third-party service providers without thinking of the implications," MacLeod said. "This shift has led to a huge increase in the number of identities that organizations have to manage. As a result, it's made them much more vulnerable to attack. If an attacker can gain access to one of these cloud-based services, they can potentially access all of an organization's data. If an organization doesn't have the right security controls in place, they could be left scrambling to contain the damage."

Joseph Carson, chief security scientist and advisory CISO at Delinea, said the growth in mobility and the cloud greatly increases the complexity of securing identities. Carson pointed out that organizations still try to secure them with the existing security technologies they already have, which results in many security gaps and limitations.

"Some organizations even fall short by trying to checkbox security identities with simple password managers," Carson said. "However, this still means relying on business users to make good security decisions. To secure identities, you must first have a good strategy and plan in place. This means understanding the types of privileged identities that exist in the business and using security technology designed to discover and protect them. The good news is that many organizations understand the importance of protecting identities."

Originally posted here:
Security pros say the cloud has increased the number of identities at their organizations - SC Media

Hardcoded API keys threaten data security in the cloud – TechHQ

Mobile apps are ubiquitous. Smartphones do a great job of running software that would have previously meant lugging a laptop around, and that's made mobile apps a popular choice for enterprises. Delving into the software, there are a number of use cases, but a common one is the use of mobile apps as a gateway to accessing information in the cloud, for example, to query a corporate database (or more likely, several databases). For businesses, the productivity gains are clear. And often software designed to be run on smartphones, tablets, or other devices (vehicles are becoming another popular environment for app developers) turns out to be more widely used than its PC equivalent. So far so good, until security issues get in the way.

Search online for hardcoded credentials on mobile apps and the problem becomes clear (for the MITRE definition, check out CWE-798). In the early stages of software development it can be tempting to write API keys into the code, for example, to quickly test an app idea or prototype different solutions. These keys (which are unique and allow servers to identify the application making the request) provide authorization for software on a remote device to read values stored in a database hosted in the cloud. API keys, in themselves, work well and can help servers in other ways too, for example, by allowing them to rate-limit requests, quenching denial of service (DoS) attacks. But keys are keys, and like the real-world versions in your pocket or bag, you wouldn't want everyone to have access to them while they remain valid.

Security search engines such as CloudSEK's BeVigil found (in April 2021) that 0.5% of mobile apps expose AWS API keys. And given how many apps are out there, that's a lot of keys, and a lot of data that is potentially at risk of being breached. It's important to note that AWS is not the story here. AWS is one of the most popular cloud hosts on the planet, so there's no surprise in seeing its keys being widely used. The problem is at the app level and in the software supply chain that goes with it. More recently, Symantec looked into the issue and reported in September 2022 that a survey of 1,859 publicly available apps (spanning Android and iOS) found widespread exposure of hardcoded credentials.

Worse still, hardcoded credentials are a problem that hasn't gone away; Symantec's team raised the same issue three years ago. One reason for the problem's persistence is that there are numerous ways these API issues can arise. Kevin Watkins, a security researcher at the firm, notes that some companies outsource the development of their mobile apps, which can lead to vulnerable external software libraries and SDKs being unknowingly introduced. Internally, the use of cross-team libraries can also present issues when vulnerabilities haven't been picked up. And shared libraries add to the problem too, where access tokens have been hardcoded.

If the issue lies in an upstream library, vendors may not even realize that they are using hardcoded credentials, which emphasizes the importance of running security scanning and safe coding tools during software development; Snyk is one example, and there are others too. And touching back on the software supply chain issue raised by Symantec, there are solutions that can be deployed here as well, such as the software composition analysis integrations provided by Sonatype.

At this point in the discussion, it's worth noting that adversaries may have to do a little digging to get their hands on the baked-in secrets. But they will find them. And if those API keys open the doors to a treasure trove of sensitive business data, then victims of the breach will be in trouble. There are online guides showing how easy it is to scan code repositories such as GitHub for secrets and credentials. And even simply running the Linux command strings (which lists all of the strings used in a program) could be enough to reveal clumsily hidden secrets. Tools such as MobSF, a security framework for analyzing Android and iOS software, are useful indicators of how good, or bad, the situation is. And Microsoft is very clear in its advice to coders. "When a key is hard-coded, it is easily discovered," writes the computing giant. "Even with compiled binaries, it is easy for malicious users to extract it."
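
As a rough illustration of how little effort that digging can take, the strings command mentioned above can be combined with a filter for a known key prefix (AWS access key IDs typically begin with AKIA); the binary name here is hypothetical:

  strings app_binary | grep "AKIA"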

Data breaches happen for all kinds of reasons, but shutting the door on the use of hardcoded credentials will certainly help to raise defenses. And there are lots of useful cheat sheets on how to implement secure cryptographic key management. Also, cloud providers such as Google and Amazon offer tools for keeping secrets such as API keys safe. Solutions such as AWS's Secrets Manager take much of the heavy lifting out of belt-and-braces approaches, which include key rotation, a way of further bolstering security. API hubs can help too. RapidAPI has a useful section explaining how to perform API key rotation or reset an API key that you know to be compromised.
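
As a sketch of the alternative to hardcoding, the Python snippet below fetches an API key from AWS Secrets Manager at runtime using boto3. The secret name and region are hypothetical, and error handling is omitted for brevity:

  import boto3

  def get_api_key(secret_id: str) -> str:
      # Look the key up at request time rather than baking it into the binary.
      client = boto3.client("secretsmanager", region_name="us-east-1")
      response = client.get_secret_value(SecretId=secret_id)
      return response["SecretString"]

  # "prod/mobile-backend/api-key" is an illustrative secret name.
  api_key = get_api_key("prod/mobile-backend/api-key")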

Continued here:
Hardcoded API keys threaten data security in the cloud - TechHQ