Category Archives: Cloud Servers
Uncover and overcome cloud threat hunting obstacles – TechTarget
Threat hunting -- the process of proactively searching for signs of malware or an unauthorized intruder -- is a critical part of modern cybersecurity programs. Traditional antivirus programs and intrusion detection systems often miss cutting-edge malware, such as Emotet, or the subtle signs of an advanced persistent threat. An informed, manual threat hunting program can help to find these threats in time to prevent the next stage of attacks, such as ransomware installation.
But what happens when threats invade your cloud environment? Effective cloud threat hunting depends on strong threat intelligence: you need good information in order to successfully hunt down invaders. Many organizations have advanced threat intelligence capabilities in their on-premises environment, but when it comes to the cloud, they are nearly blind.
Now is the time to build your cloud threat hunting program. The problem is that, unlike in on-premises environments, defenders do not have ready access to the same wealth of threat intelligence in the cloud. Here are some of the challenges to threat hunting in the cloud, and tips for surmounting them.
Availability. The cloud is just "someone else's computer," goes the joke. When it comes to logging and monitoring, this is often painfully clear. Many cloud providers offer only very limited event logs, such as records of user authentication, and some do not even provide that. Under pressure from customers, some providers are expanding logging and monitoring capabilities, but security professionals are often foiled by decision-makers who see these features as nice to have rather than as required.
Advanced environments, such as AWS and Azure, offer you an enormous amount of control over "your" systems -- but due to the nature of their shared environments, the ability for users to monitor network traffic is limited. In on-premises environments, defenders can collect network flow records and sniff traffic to detect malicious activity. In the cloud, tools for monitoring virtual networks are not as readily accessible. Amazon and Microsoft both introduced virtual network terminal access point (TAP) capabilities in recent years, but few security professionals have experience using these tools, and the Azure virtual network TAP appears to be under development (the feature has not been consistently available).
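On AWS, the virtual TAP capability is exposed as VPC Traffic Mirroring. As a rough sketch only -- the ENI, target and filter IDs below are placeholders, and parameters should be checked against current AWS documentation -- setting up a mirror session with boto3 might look like this:

```python
import boto3

# Placeholder IDs -- replace with resources from your own VPC.
SOURCE_ENI = "eni-0123456789abcdef0"  # interface whose traffic we want to mirror
TARGET_ID = "tmt-0123456789abcdef0"   # pre-created traffic mirror target
FILTER_ID = "tmf-0123456789abcdef0"   # pre-created traffic mirror filter

ec2 = boto3.client("ec2", region_name="us-east-1")

# Copy packets from the source ENI to the target, where an IDS or
# packet-capture appliance can inspect them.
session = ec2.create_traffic_mirror_session(
    NetworkInterfaceId=SOURCE_ENI,
    TrafficMirrorTargetId=TARGET_ID,
    TrafficMirrorFilterId=FILTER_ID,
    SessionNumber=1,  # priority among sessions on the same ENI
    Description="Threat-hunting packet capture",
)
print(session["TrafficMirrorSession"]["TrafficMirrorSessionId"])
```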
Aggregation. To hunt for threats efficiently, practitioners need to be able to easily access intelligence from various sources, ideally using one central console. In on-premises environments, it's easy enough to set up a central server or SIEM to collect logs from various applications and pieces of network equipment. When it comes to the cloud, however, aggregating logs is not so simple. Cloud providers may or may not support log export. When they do, the format of data can vary widely -- and it may change without notice, unexpectedly foiling SIEM ingestion.
This brief video outlines threat hunting's objectives and the key ingredients for a fruitful hunting program.
Expense. Detailed logging in the cloud is rarely on by default. In AWS, for example, CloudWatch monitoring is disabled unless explicitly turned on -- and then a pop-up warns, "additional charges apply." In Microsoft's Office 365, Exchange mailbox auditing is now on by default for all new commercial instances -- a change that took place in 2019, after a huge number of customers suffered business email compromise breaches and found that they did not have the mailbox logs they needed to investigate. However, the default retention time is limited to 90 days for many tenants, and customers have to pay for longer retention.
When it comes to aggregating threat intelligence in the cloud, customers may be charged at every step of the way: for turning logging on, for storing log data in the cloud, for the bandwidth or processing power needed to transfer data to another system, and more. For example, let's say you want to collect log data from AWS and send it to a central Splunk server on Azure. Enabling CloudWatch on AWS requires opening a new Simple Storage Service bucket for local log storage, which costs money. You can use Kinesis Data Firehose to push data to another destination, which means you are charged for processing power. On Azure, you have to pay for the underlying VM that you use to set up Splunk, as well as for the Splunk license itself. All of this adds up.
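To make one link of that chain concrete, here is a minimal boto3 sketch that subscribes a CloudWatch Logs group to a Kinesis Data Firehose delivery stream, which can then forward events onward (for example, to a Splunk HTTP Event Collector). The ARNs are placeholders, and the Firehose stream and IAM role are assumed to already exist.

```python
import boto3

logs = boto3.client("logs", region_name="us-east-1")

# Placeholder ARNs -- the role must allow CloudWatch Logs to write
# to the Firehose delivery stream.
FIREHOSE_ARN = "arn:aws:firehose:us-east-1:111122223333:deliverystream/to-splunk"
ROLE_ARN = "arn:aws:iam::111122223333:role/cwlogs-to-firehose"

# Stream every event in the log group to Firehose (empty pattern = match all).
# Note: Firehose throughput, S3 storage and data transfer are all billed separately.
logs.put_subscription_filter(
    logGroupName="/aws/cloudtrail/management-events",
    filterName="ship-to-splunk",
    filterPattern="",
    destinationArn=FIREHOSE_ARN,
    roleArn=ROLE_ARN,
)
```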
Analysis Tools. Tools for cloud threat hunting are nascent. More advanced cloud providers, such as Microsoft and Amazon, have built-in analysis tools, but they often have surprising -- and poorly understood -- limitations. For example, security professionals frequently use Microsoft's graphical Security & Compliance Center to pull Unified Audit Logs (UAL) from Office 365 -- not realizing that the results are limited to 5,000 sorted records or 50,000 unsorted records. Incomplete threat intelligence, of course, leads to shoddy results! Instead, hunters need to use third-party products or custom PowerShell scripts to recursively extract large volumes of UAL records. For analysis, products such as Splunk, ExtraHop or the open-source Kibana are invaluable.
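The usual workaround for the record cap is to shrink the query window until each slice returns fewer records than the limit, recursing on any slice that hits it. The sketch below is deliberately API-agnostic: fetch_ual is a hypothetical stand-in for whatever call your tooling makes (PowerShell's Search-UnifiedAuditLog, a Management Activity API client, and so on).

```python
from datetime import datetime

RESULT_CAP = 5000  # server-side cap per query assumed in this sketch

def fetch_ual(start: datetime, end: datetime) -> list:
    """Hypothetical stand-in for a real Unified Audit Log query."""
    raise NotImplementedError

def fetch_all(start: datetime, end: datetime) -> list:
    """Recursively split the time window until no slice hits the cap."""
    records = fetch_ual(start, end)
    if len(records) < RESULT_CAP:
        return records
    # This slice may be truncated: halve it and query each half.
    mid = start + (end - start) / 2
    return fetch_all(start, mid) + fetch_all(mid, end)

# Example: pull a week of records in cap-safe slices.
# results = fetch_all(datetime(2020, 5, 1), datetime(2020, 5, 8))
```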
The cloud is the emerging battleground for bleeding-edge cybersecurity threats. Unfortunately, the constant evolution of threat intelligence, the difficulty and expense of aggregation, and nascent cloud-based analysis tools are all challenges for today's defenders. The good news is that cloud monitoring and logging are slowly maturing, and security professionals who push for cloud threat hunting capabilities will reap the rewards.
More here:
Uncover and overcome cloud threat hunting obstacles - TechTarget
This extraordinary motherboard is being used by server CPU scavengers – TechRadar India
It's hard to believe, but even motherboard vendors have factory outlet stores (FOS). Straight from AliExpress comes the Shenzhen FOS, which specialises in new motherboards for obsolete server processors.
Dual X79 motherboard - $76.50 from AliExpress (roughly £63/AU$120). Every now and again, eBay and AliExpress are awash with old servers ditched by the world's cloud computing giants. With this competitively-priced motherboard from Shenzhen FOS, you can take full advantage of these server CPU flash sales.
Shenzhen FOS has managed to carve out a niche based on the fact that, every now and then, tens of thousands of server CPUs flood the market as hyperscalers and cloud computing providers (web hosting, cloud storage, website builders, VPN companies etc.) change platforms.
Suddenly, eBay and AliExpress are awash with old (but still useful) servers dumped by the likes of Microsoft, Google and Amazon - and they're extremely cheap. The problem, however, is that they don't have a consumer-focused, user-friendly motherboard to slot into.
Enter the Shenzhen FOS and a handful of other craftspeople, who fulfil that specific need at a very competitive price.
For example, take this dual X79 motherboard, which can accommodate a pair of Intel Xeon CPUs, supporting E5-1600/E5-2600 Series V1/V2 processors.
You can get a pair of them for sometimes as little as $10 (about £8, AU$12), delivering up to eight cores. Add in the motherboard, which costs $76.50 excluding delivery (about £63, AU$120), and you have a decent barebones system.
If this product comes from mainland China, it will take at least a month to reach either the US or the UK (and potentially more). You may be levied a tax either directly or through the courier.
Have you managed to get hold of a cheaper product with equivalent specifications, in stock and brand new? Let us know and we'll tip our hat to you.
However, we haven't tested this motherboard, and the usual caveats apply, especially when the website's opening statement reads: "Due to different batches of productions, there might be some difference between the pictures you've seen and the motherboard you get. Retail boxes, colors of DIMM slots, SATA ports, PCI or PCI-E Slots and other ports, are subject to change without prior notice."
TL;DR: you may end up with a motherboard that's rather different from the one you thought you were ordering.
Read the original:
This extraordinary motherboard is being used by server CPU scavengers - TechRadar India
VMware reduces hardware footprint of its shiny new K8s-on-vSphere toys – The Register
VMware has shrunk the hardware requirements for its shiny new native Kubernetes on vSphere product, making it rather more affordable.
The new offering runs on Cloud Foundation, VMware's software-defined-data-centre bundle aimed at service providers and users that wish to build hybrid clouds that touch VMware-powered cloud operators. Cloud Foundation requires a four-host "Management Domain" as a first infrastructure effort.
But as discussed in March by "vNinja" Christian Mohn in a post titled "The Problem with VMware vSphere 7 with Kubernetes", taking the new K8s product for a spin required the Management Domain and another three hosts for the Kubernetes infrastructure.
"That's a tall order that comes with a hefty price tag, if someone wants to dip their toes in the sea of containers," he wrote.
It's a fine observation because VMware wants to use its strength among operations folks to improve its standing with developers and then have them all hold hands and sing Kubernetes-Ba-Yah together. Larger organisations may have seven hosts to spare. Plenty won't.
Mohn thinks he has spotted the way out: he noticed a new VMware white paper titled "Announcing VMware vSphere with Kubernetes Support on the VMware Cloud Foundation Management Domain" [PDF] that reveals VMware has reduced the required host count to four.
The four are all from the Management Domain and all need to be vSAN Ready Nodes - the storage-centric servers with plenty of disk slots and at least half a dozen Xeon or EPYC cores. Unlike Raspberry Pis or home-lab-centric micro servers from the likes of HPE or Supermicro, which are all options for testing Kubernetes clusters, Ready Nodes are not cheap or small or something you'll plug into that old power board in your bottom drawer.
But it's still less hardware than was required last month, leading Mohn to observe: "This should make it much easier to set up a Proof-of-Concept, or lab environment. It's even supported for production, although for small environments."
And VMware needs those tests to take place if it is to achieve its ambition of becoming a K8s player.
Excerpt from:
VMware reduces hardware footprint of its shiny new K8s-on-vSphere toys - The Register
How Zoom plans to better secure meetings with end-to-end encryption – TechRepublic
A new document from Zoom illustrates how the company hopes to beef up the security and privacy of its virtual meeting platform.
As the coronavirus has forced quarantines, there's been a surge in demand for virtual meeting and video chat apps. Though many such apps have seen an increase in use, Zoom has been one of the top beneficiaries, popular both with individuals and organizations. But Zoom has also been criticized for its weak security and privacy measures, leading to problems such as Zoom bombing. Further, Zoom currently lacks the full type of end-to-end encryption that more traditional business services employ. A document posted by Zoom on Friday explains how the company hopes to more fully protect sensitive meeting data and communications.
In its Friday blog post, Zoom announced the draft publication for its end-to-end-encrypted offering. Contending that security and privacy are the two "pillars" of its new plan, Zoom has published its document on GitHub for peer review, hoping to kick off discussions and get feedback from cryptographic experts, nonprofits, advocacy groups, and customers.
Zoom meetings currently offer encryption but with certain limitations. Encryption is used to protect the identity of users, call data between Zoom clients and Zoom's infrastructure, and meeting contents. When a Zoom client is authorized to join a meeting, that client is given a 256-bit security key from Zoom's server. But the Zoom server retains the security key provided to meeting participants, thereby lacking true end-to-end key management and encryption.
The lack of full end-to-end encryption means that an attacker who can monitor Zoom's server infrastructure and gain access to the memory of the relevant Zoom servers could defeat the encryption for a specific meeting. As such, that person could then view the shared meeting key, derive session keys, and decrypt all meeting data.
To fix some of its security holes, Zoom outlined the goals of its proposal as follows: 1) Only authorized meeting participants should have access to their meeting's data; 2) Anyone excluded from a meeting should not have the ability to corrupt the content of that meeting; 3) If a meeting participant engages in abusive behavior, there should be an effective way to report that person to prevent further abuse.
To advance its goals, Zoom has organized its proposal into four phases.
Phase 1. In the first phase, every Zoom application will generate and manage its own public/private security key pairs, with those keys known only to the client. Clients will be able to generate and exchange their session keys without needing to trust the server. During this initial phase, this specific security key improvement will support only native Zoom clients and Zoom Rooms, and only scheduled meetings.
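As a toy illustration of the general pattern -- not Zoom's actual protocol, which the draft specifies in far more detail -- a host can encrypt a random session key to each participant's public key, so the server only ever relays ciphertext it cannot read. This sketch uses the PyNaCl library's sealed boxes:

```python
import os
from nacl.public import PrivateKey, SealedBox

# Each client generates its own keypair; only the public half leaves the device.
alice_sk = PrivateKey.generate()
bob_sk = PrivateKey.generate()

# Host picks a random 256-bit meeting session key.
session_key = os.urandom(32)

# Host encrypts the session key to each participant's public key.
# A server relaying these blobs cannot read them.
for_alice = SealedBox(alice_sk.public_key).encrypt(session_key)
for_bob = SealedBox(bob_sk.public_key).encrypt(session_key)

# Each participant decrypts with their own private key.
assert SealedBox(alice_sk).decrypt(for_alice) == session_key
assert SealedBox(bob_sk).decrypt(for_bob) == session_key
```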
Phase 2. In the second phase, Zoom plans to unveil two features for users to track each other's identities without having to trust Zoom's servers. One feature is an Identity Provider Initiated Single Sign-On (SSO IdP) that can cryptographically vouch for the identity of each user.
Phase 3. In the third phase, Zoom will launch a feature that forces its servers to sign and immutably store each user's security keys, ensuring Zoom provides a consistent reply to all clients about the keys. This will be created through a "transparency tree," a feature similar to those used in Certificate Transparency and Keybase.
Phase 4. In the final phase, devices will be even more strongly authenticated. A meeting participant will have to sign new devices using existing devices, use an SSO IdP to reinforce device additions, or delegate authentication to an IT manager. Until one of these conditions is met, the participant's devices will not be trusted.
With these new security initiatives, Zoom also proposed certain changes to its client application.
The interface for setting up a meeting will feature a new checkbox called End-to-End Security. If this box is checked, the "Enable Join Before Host" checkbox becomes grayed out and deselected, the cloud recording feature becomes disabled, and all clients must run the official Zoom client software; those using the Zoom website, legacy Zoom-enabled devices, or a dial-in connection will be locked out of the meeting.
After the meeting starts, all participants will see a meeting security code they can use to verify that no one's connection to the meeting was intercepted. The host can read this code out loud, and all participants can check that their clients display the same code.
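A common way to build such a code -- a sketch of the general idea, not necessarily Zoom's exact construction -- is to hash the shared meeting secret and render a short truncation of the digest in human-readable groups:

```python
import hashlib

def meeting_security_code(meeting_key: bytes, digits: int = 12) -> str:
    """Derive a short, human-comparable code from the shared meeting key."""
    digest = hashlib.sha256(b"security-code:" + meeting_key).digest()
    number = int.from_bytes(digest[:8], "big") % (10 ** digits)
    code = str(number).zfill(digits)
    # Group into chunks of four for easy reading aloud.
    return "-".join(code[i:i + 4] for i in range(0, digits, 4))

# All clients holding the same key display the same code; a man-in-the-middle
# who swapped keys on some participants would produce a visible mismatch.
print(meeting_security_code(b"\x00" * 32))  # prints a 12-digit grouped code
```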
"We have proposed a roadmap for bringing end-to-end encryption technology to Zoom Meetings," Zoom said in its document. "At a high level, the approach is simple: use public key cryptography to distribute a session key to a meeting's participants and provide increasingly stronger bindings between public keys and user identities. However, the devil is in the details, as user identity across multiple devices is a challenging problem, and has user experience implications. We proposed a phased deployment of end-to-end security, with each successive stage giving stronger protections."
After reviewing the feedback from customers and other interested parties, Zoom will update and refine its document and finally announce its plans for deploying the new end-to-end encryption and other security enhancements.
Link:
How Zoom plans to better secure meetings with end-to-end encryption - TechRepublic
VMware, Dell level up their combined on-prem cloud with much more computing grunt – The Register
VMware and Dell have revealed version 2.0 of their combined on-prem cloud, and upgraded it to handle meatier workloads.
VMware Cloud on Dell EMC sees the latter ship customers a rack full of servers based on the VxRail hyperconverged infrastructure products, running VMware's best private cloud bits: vSphere, vSAN and NSX. Users are expected to let Dell techs into their data centres and watch as the hardware is installed and switched on, then leave it all alone, because Dell owns the rig and is responsible for managing every aspect of its operations, including software updates. Buyers can pay as they go, as if it were a public cloud.
Users get a cloud console with which to manage workloads, and more-or-less the same experience as using a public cloud in terms of not having to worry about hardware ops or the underlying software. The product therefore includes dark nodes - spare servers that kick in if one of the main nodes has a problem. The dark nodes are included in recognition that stuff sometimes breaks, techs can't teleport in to perform repairs, and fixing stuff takes time.
Version 1.0 came in a half-height rack and was aimed at the edge and/or important-but-not-enormous workloads.
Version 2.0 ups the ante with new and more powerful host types, full-height racks and the addition of a tech preview of the HCX cloud migration tool, all in the service of taking on more demanding applications, or just more applications. There's a little more flexibility in that adding nodes is now a scaling option. Support for VMware Horizon VDI and Dell's PowerProtect data management products is a new addition, while Veeam's backup wares have also been certified for the systems.
Also new is a tweak to the cloud console so that it can manage the on-prem VMware Cloud on Dell EMC and VMware Cloud on AWS.
As is VMware's wont, the product presents as vanilla vSphere, so can be stretched into hybrid clouds across the many clouds that run Virtzilla's stack.
Kit Colbert, veep and CTO of VMware's Cloud Platform business unit, told The Register customers have asked for bigger rigs, so VMware and Dell have delivered. He said the product will keep evolving, envisaging future variants that employ GPUs or FPGAs.
Read the original here:
VMware, Dell level up their combined on-prem cloud with much more computing grunt - The Register
Accelerator Card Market Will Witness Substantial Growth in the Upcoming years by 2027 – WaterCloud News
What is Accelerator Card?
An accelerator card is used in cloud servers, high-performance computing, and data centers to accelerate various workloads. Accelerator cards can be plugged in via a PCIe slot and are programmable, enabling the user to instruct the card to perform various tasks. They are more efficient at these workloads than general-purpose microprocessors. GPUs are among the most extensively used accelerator cards in high-performance computing and data centers, while FPGAs and ASICs are also being utilized to accelerate machine learning applications in data centers.
The latest market intelligence study on Accelerator Card relies on the statistics derived from both primary and secondary research to present insights pertaining to the forecasting model, opportunities, and competitive landscape of the Accelerator Card market for the forecast period 2021-2027.
The COVID-19 (coronavirus) pandemic is impacting society and the overall economy across the world. The impact of this pandemic is growing day by day, as is its effect on the supply chain. The COVID-19 crisis is creating uncertainty in the stock market, a massive slowing of the supply chain, falling business confidence, and increasing panic among customer segments. The overall effect of the pandemic is impacting the production process of several industries, including Electronics and Semiconductor, and many more. Trade barriers are further restraining the demand-supply outlook. As governments in different regions have announced total lockdowns and temporary shutdowns of industries, the overall production process has been adversely affected, hindering the overall Accelerator Card market globally. This report on the Accelerator Card market provides analysis of the impact of COVID-19 on various business segments and country markets. The report also showcases market trends and forecasts to 2027, factoring in the impact of the COVID-19 situation.
Get Sample Copy of this Report @ https://www.theinsightpartners.com/sample/TIPRE00010524/
Scope of the Report
The research on the Accelerator Card market concentrates on extracting valuable data on swelling investment pockets, significant growth opportunities, and major market vendors, to help business owners understand what their competitors are doing best to stay ahead in the competition. The research also segments the Accelerator Card market on the basis of end user, product type, application, and demography for the forecast period 2021-2027.
The rising growth of the cloud computing market and increasing demand for AI and HPC technologies in data centers are some of the significant factors anticipated to drive the accelerator card market. Additionally, integration with emerging technologies is predicted to act as an opportunity for the global accelerator card market during the forecast period.
The report also includes the profiles of key Accelerator Card Market companies, along with their SWOT analysis and market strategies. In addition, the report focuses on leading industry players, with information such as company profiles, components and services offered, financial information for the last three years, and key developments in the past five years.
Here we have listed the top Accelerator Card Market companies in the world:
1. Achronix Semiconductor Corporation
2. Advanced Micro Devices, Inc.
3. Cisco Systems, Inc.
4. Huawei Technologies Co., Ltd.
5. IBM Systems
6. Intel Corporation
7. NVIDIA Corporation
8. Oracle
9. Xilinx, Inc.
Our reports will help clients solve the following issues:
Insecurity about the future:
Our research and insights help our clients anticipate upcoming revenue compartments and growth ranges. This helps our clients decide where to invest or divest their assets.
Understanding market opinions:
It is extremely vital to have an impartial understanding of market opinions for a strategy. Our insights provide a keen view of market sentiment. We maintain this reconnaissance by engaging with Key Opinion Leaders along the value chain of each industry we track.
Understanding the most reliable investment centers:
Our research ranks the market's investment centers by considering their future demand, returns, and profit margins. Our clients can focus on the most prominent investment centers by procuring our market research.
Interested in purchasing this Report? Click here @ https://www.theinsightpartners.com/buy/TIPRE00010524/
The research provides answers to the following key questions:
About us: The Insight Partners is a one-stop industry research provider of actionable intelligence. We help our clients get solutions to their research requirements through our syndicated and consulting research services. We are a specialist in Technology, Healthcare, Manufacturing, Automotive and Defense.
Contact us: Call: +1-646-491-9876 | Email: [emailprotected]
Read more from the original source:
Accelerator Card Market Will Witness Substantial Growth in the Upcoming years by 2027 - WaterCloud News
Uber India deploys Canon information management solution - Therefore - for operational workflow – CRN.in
Inefficient and inconvenient information access prompted Uber India to look for a solution that could organise its business data on a central, secure and easy-to-use platform. It was essential for Uber India to integrate the solution into its existing cloud-based infrastructure so that different departments and branches could share and retrieve information easily. Hence, on the advice of Canon's team, Uber India implemented Therefore Online, a cloud-based information management solution designed to securely store, manage and process all types of business information.
"We evaluated several solutions, but found them too complex to deploy, and they were unable to meet the security standards that we needed. With Therefore Online, the cloud-based information management solution allowed us to manage our documents more efficiently without additional expenditure on server infrastructure, greatly improving our operational workflow," said Brish Bhan Vaidya, Head of Strategic Sourcing, APAC, Uber India.
As a notable player in the industry, Uber India is expected to collect, store and appropriately use the data of its users via a secure platform. A large part of Uber Indias processes involves conducting background checks of the drivers it on-boards and retaining their records, which could then be provided to regulators and authorities, when needed.
Before Canon's solution, Uber India depended on the several external vendors performing the background checks to store the records separately and provide detailed reports upon request. Maintaining a high volume of paperwork and a large amount of sensitive information in the vendors' repositories also appeared risky, on top of the labour-intensive task of retrieving information. The cab aggregator had to spend precious time searching for current and updated versions of the reports, as the vendors did not provide version search capabilities for the records.
Canon also helped to create a cloud-based folder that allowed the upload of records to Therefore Online automatically by simply dropping files into the folder. The solution cut down considerable time and effort spent on storing and calling up reports, boosting productivity across the board.
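The watched-folder pattern itself is easy to picture. As a rough sketch only -- using the Python watchdog library, with upload_to_therefore a purely hypothetical stand-in for whatever upload API the product exposes -- it might look like this:

```python
import time
from watchdog.observers import Observer
from watchdog.events import FileSystemEventHandler

WATCHED_DIR = "/data/background-check-reports"  # hypothetical drop folder

def upload_to_therefore(path: str) -> None:
    """Hypothetical stand-in for the real document-upload call."""
    print(f"uploading {path}")

class DropFolderHandler(FileSystemEventHandler):
    def on_created(self, event):
        # Ignore new sub-directories; upload each new file as it lands.
        if not event.is_directory:
            upload_to_therefore(event.src_path)

observer = Observer()
observer.schedule(DropFolderHandler(), WATCHED_DIR, recursive=False)
observer.start()
try:
    while True:
        time.sleep(1)
finally:
    observer.stop()
    observer.join()
```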
Uber India found a scalable, cost-effective and value-adding total solution with Therefore Online. The information management solution integrated seamlessly with Uber India's existing cloud-based infrastructure. Instead of purchasing expensive servers, the company opted for a pay-per-use subscription that could be upgraded as and when required. The solution also helps Uber generate real-time reports on usage workflows, providing useful data to inform and improve business processes.
The rest is here:
Uber India deploys Canon information management solution - Therefore - for operational workflow - CRN.in
Potential Impact of COVID-19 on Research Report prospects the Server Backup Software Market – Cole of Duty
Global Server Backup Software Market Growth Projection
The new report on the global Server Backup Software market is an extensive study on the overall prospects of the Server Backup Software market over the assessment period. Further, the report provides a thorough understanding of the key dynamics of the Server Backup Software market, including the current trends, opportunities, drivers, and restraints. The report examines the micro- and macro-economic factors that are expected to nurture the growth of the Server Backup Software market in the upcoming years, and the impact of the COVID-19 pandemic on the Server Backup Software market. In addition, the report offers valuable insights pertaining to the supply chain challenges market players are likely to face in the upcoming months and solutions to tackle the same.
The report suggests that the global Server Backup Software market is projected to reach a value of ~US$XX by the end of 2029 and grow at a CAGR of ~XX% through the forecast period (2019-2029). Key indicators such as year-on-year (Y-o-Y) growth and CAGR growth of the Server Backup Software market are discussed in detail in the presented report. This data is likely to provide readers with an understanding of the qualitative and quantitative growth prospects of the Server Backup Software market over the considered assessment period.
Get Free Sample PDF (including COVID-19 Impact Analysis, full TOC, Tables and Figures) of Market Report @ https://www.marketresearchhub.com/enquiry.php?type=S&repid=2665102&source=atm
The report addresses the following questions related to the Server Backup Software market:
Do You Have Any Query Or Specific Requirement? Ask Our Industry [emailprotected] https://www.marketresearchhub.com/enquiry.php?type=E&repid=2665102&source=atm
Segmentation of the Server Backup Software Market
Regional and Country-level Analysis

The report offers an exhaustive geographical analysis of the global Server Backup Software market, covering important regions, viz, North America, Europe, China, Japan, Southeast Asia, India and Central & South America. It also covers key countries (regions), viz, U.S., Canada, Germany, France, U.K., Italy, Russia, China, Japan, South Korea, India, Australia, Taiwan, Indonesia, Thailand, Malaysia, Philippines, Vietnam, Mexico, Brazil, Turkey, Saudi Arabia, U.A.E, etc. The report includes country-wise and region-wise market size for the period 2015-2026. It also includes market size and forecast by each application segment in terms of revenue for the period 2015-2026.

Competition Analysis

In the competitive analysis section of the report, leading as well as prominent players of the global Server Backup Software market are broadly studied on the basis of key factors. The report offers comprehensive analysis and accurate statistics on revenue by the player for the period 2015-2020. It also offers detailed analysis supported by reliable statistics on price and revenue (global level) by player for the period 2015-2020.

On the whole, the report proves to be an effective tool that players can use to gain a competitive edge over their competitors and ensure lasting success in the global Server Backup Software market. All of the findings, data, and information provided in the report are validated and revalidated with the help of trustworthy sources. The analysts who have authored the report took a unique and industry-best research and analysis approach for an in-depth study of the global Server Backup Software market.

The following players are covered in this report:
Acronis
MSP360
SolarWinds
Veeam Availability Suite
NAKIVO Backup & Replication
Cohesity DataPlatform
Rubrik
Altaro VM Backup
Veeam
Unitrends

Server Backup Software Breakdown Data by Type:
Cloud-based
Web-based

Server Backup Software Breakdown Data by Application:
Large Enterprises
SMEs
You can Buy This Report from Here @ https://www.marketresearchhub.com/checkout?rep_id=2665102&licType=S&source=atm
Vital Information Enclosed in the Report
Here is the original post:
Potential Impact of COVID-19 on Research Report prospects the Server Backup Software Market - Cole of Duty
Do You Know Where Your Servers Come From? Here's Why Securing The Supply Chain Matters – Forbes
Supply chain - it's a term and topic now discussed around dinner tables, as families and friends debate COVID-19's spotlight on US dependence on other countries to provide the essential products and materials we need in times of crisis. The other dinner-table discussion seems to focus on cybersecurity. With a precipitous increase in attacks and exploits in this new remote-work environment, IT pros and those working from the home office are equally concerned.
While the focus of supply chain discussions has largely been on medicines and emergency supplies, there is another conversation that has been simmering in the tech sector for a long time. Flare-ups occur every time the press reports on a suspected exploit found in infrastructure. How dependent can supply chain infrastructures become on foreign suppliers (or suppliers with factories based in other countries) before they become overly dependent? And how can an IT infrastructure manufacturer (in this case, a server maker) assure that the components being used in a bill of materials (BOM) are genuine and contain no microcode or other components that can be used to exploit that equipment over time? We will address this over the next few paragraphs.
Supply chain concerns are legitimate
The supply chain can be used as another attack vector for bad actors to exploit data, be it hackers looking to hold IP and sensitive information for ransom, or nation states looking to wreak havoc or disable critical functions of our companies or government. Such actors can insert motherboard implants that can go overlooked or insert malicious microcode that can create a backdoor once a platform is in production. Additionally, components such as a baseboard management controller (BMC), the control plane of the server, can have built-in vulnerabilities. These are all exploits that are not just theoretically possible - they have, in fact, already happened.
We often talk about cybersecurity in terms of perimeter defenses such as firewalls, or access control from companies like Aruba. IT organizations that are more serious about security look to technologies like HPE's Silicon Root of Trust (SiRoT) as the point where cybersecurity starts. While SiRoT is a critical and fundamental element of a cybersecurity strategy, the reality is that cybersecurity starts in the supply chain - with ordering the parts and components that go into the server, from storage and memory to the CPU, to the inductors, capacitors and resistors that go onto the motherboard.
Strangely enough, securing the supply chain is not just about security. It's also about ensuring quality, which is assured through authenticity. One of the challenges that exists today is ensuring that the components that populate a server once it sits in a datacenter are the same components that were in that server at the time of assembly - and that those parts are genuine manufacturers' parts.
Supply chain is really complex
Per John Grosso, Vice President of Global Operations Engineering, Global Supply Chain at HPE, the average 1U or 2U ProLiant rack server has between 3,500 and 4,000 components. That is, 3,500 to 4,000 components that have to be tracked across hundreds of suppliers around the world - checked for security and for quality purposes.
The HPE supply chain is complex.
Consider the very simplified graphic above. The team at HPE (or any manufacturer) must ensure quality and integrity from left to right. Meaning, every component coming from every supplier is authentic and untainted as it leaves the supplier's factory and arrives at HPE's manufacturing facility. The team then must ensure the servers are assembled with those very same authentic components and that the integrity of the server is intact. After assembly, every server must be tested prior to shipping out to customers or distributors and resellers (in the case of an indirect sale, HPE must ensure that these servers are not modified or compromised in any way as they sit on warehouse floors, ready to fulfill orders). Upon arrival at a customer's datacenter, HPE must ensure that the server boots up with the hardware, firmware and software components that were installed when it left the factory floor.
But how does this happen? Grosso described his team's approach to driving integrity across the process, and it's quite comprehensive. He uses a term called roving cyber validation, whereby team members embedded with suppliers perform regular audits and informal spot checks to ensure the genuineness of components. As components are shipped to HPE factories, random x-raying takes place to ensure no tampering took place during shipment.
As servers are assembled in HPE or partner facilities, a cryptographic manifest is built once assembly and validation of components is complete. This manifest is attested at first boot in the customer's datacenter, through HPE's Silicon Root of Trust (SiRoT).
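To make the idea concrete, here is a heavily simplified, hypothetical sketch of a signed component manifest. HPE's actual implementation is anchored in silicon and firmware rather than application code, and the field names below are invented for illustration; the sketch uses PyNaCl's signing primitives.

```python
import json
from nacl.signing import SigningKey

# Invented example inventory: component identifiers and firmware hashes
# recorded at assembly time.
components = {
    "bmc_firmware": "sha256:9f2a...",
    "bios": "sha256:77c1...",
    "nic_0_serial": "MX7341002A",
    "dimm_0_serial": "HMA84GR7AFR4N",
}

# The factory signs a canonicalised serialisation of the inventory.
factory_key = SigningKey.generate()
manifest_bytes = json.dumps(components, sort_keys=True).encode()
signed_manifest = factory_key.sign(manifest_bytes)

# At first boot, the platform re-reads its inventory and verifies it
# against the manifest signed at the factory.
verify_key = factory_key.verify_key
attested = verify_key.verify(signed_manifest)  # raises BadSignatureError if tampered
assert json.loads(attested) == components
print("manifest verified")
```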
Supply chain management does not end at first boot. HPE (and other server vendors) must be able to ensure the integrity and quality of that server throughout its lifecycle. Through maintenance, upgrades and repairs, Grosso's team is charged with ensuring the integrity of that server.
In the case of HPE, SiRoT and other utilities built into the company's integrated lights out (iLO) management platform can immediately detect, remove and recover a server from malware and ransomware (to learn more about this, check out my coverage here and here).
How does HPE do it?
At a time when "buy American" is starting to gain steam, it's unrealistic to believe that the server supply chain can be brought onto US shores to guarantee security and quality. This may be an unpopular view, but it's realistic. Given this reality, infrastructure companies need to be vigilant about these things, and this commitment to securing the supply chain must be viewed as a pillar of a company's strategy.
Securing the supply chain at HPE appears to be equal parts organizational, technical and cultural. And based on conversations I was able to have with Grosso and Security CTO Gary Campbell, this is nothing new. Campbell says the seeds of today's focus on supply chain management were sown about 10 years ago in a briefing he and Antonio Neri (now CEO) had with a large government customer.
In 2014, HPE started to develop a holistic security architecture that could help the company in its fight against the counterfeiting of products and with overall security. Out of this effort, SiRoT was developed and implemented across the HPE portfolio.
Upon Neri's appointment as CEO, one of his top priorities was to embed a security-first mindset across HPE, understanding this could be of real value to companies of all sizes and a true differentiator in the market. It feels as though this message has permeated the company. Security is a key messaging pillar for every product the company introduces to the market, and its Pointnext services arm has a very healthy consulting practice focused on cybersecurity.
One of the interesting things I learned from speaking with Campbell was the fact that HPE is the only server company to design and develop its own BMC. Why is this important? Think of the BMC as the control plane of the server. It is the lowest level management interface and provides the basis for all of the physical monitoring of a servers condition. A compromised BMC can lead to a compromised server, and as previously mentioned, there are many news articles about this very thing happening. By developing its own BMC, HPE not only ensures the security of its servers, it has the ability to enable greater controls through its iLO management technology.
Bringing in Grosso and centralizing the management of the product lifecycle under his direction was a smart move by HPE. This enabled a single view of the product, spanning design, NPI (new product introduction), supplier quality, factory output and customer quality. Why does this matter? It enables a critical input to the product requirements and development process, ensuring security is fleshed out and given appropriate consideration across all stages of product life.
To ensure the team was being complete in its thinking and efforts, a Supply Chain Center of Excellence (COE) was built with representation from across the company. Its charter included three areas: capturing the needs (and feedback) of customers and the market, sharing best practices across the various teams, and ensuring consistency of security practices across all product lines.
Finally, to make sure product and supply chain security remains a priority to HPE, its board of directors (BoD) has a committee headed by Mary Agnes Wilderotter that receives quarterly reports on the status of end-to-end product security, including the supply chain.
Whats next?
Considering the typical server has 3,500 to 4,000 components (upwards of 7,000 components for converged infrastructure), it is hard to envision shifting the supply chain entirely to domestic suppliers. However, companies like HPE continue to work on reducing their dependence on suppliers who may not be able (or willing) to deliver in a time of need or crisis. As Grosso says, his team never stands still.
Given the scrutiny the government has put on foreign suppliers over the last couple years and the bright spotlight COVID has put on supply chain, I do expect to see further development from companies like HPE in ensuring these risks around dependency are not only mitigated, but minimized or removed.
Closing thoughts
As an analyst who has experience as an IT executive, I can fully appreciate the approach HPE takes to supply chain security. I never considered the integrity of the servers coming into my datacenter (or the quality of their performance), because I never had to worry. The upfront work of companies like HPE simplified my life and allowed me to deploy and run infrastructure with one less thing to worry about.
While securing the supply chain may not be as cool to talk about as edge computing, data analytics or cloud-native application development, it is arguably the most important consideration in choosing infrastructure to enable those environments. It's something we should all be thinking about, even after this COVID craziness passes.
Disclosure: My firm, Moor Insights & Strategy, like all research and analyst firms, provides or has provided research, analysis, advising, and/or consulting to many high-tech companies and consortia in the industry, including HPE. I do not hold any equity positions with any companies or organizations cited in this column.
See original here:
Do You Know Where Your Servers Come From? Here's Why Securing The Supply Chain Matters - Forbes
Live analytics without vendor lock-in? It’s more likely than you think, says Redis Labs – The Register
In February, Oracle slung out a data science platform that integrated real-time analytics with its databases. That's all well and good if developers are OK with the stack having a distinctly Big Red hue, but maybe they want choice.
This week, Redis Labs came up with something for users looking for help with the performance of real-time analytics of the kind used for fraud detection or stopping IoT-monitored engineering going kaput without necessarily locking them into a single database, cloud platform or application vendor.
Redis Labs, which backs the open-source in-memory Redis database, has built what it calls an "AI serving platform" in collaboration with AI specialist Tensorwerk.
RedisAI handles model deployment, inferencing and performance monitoring within the database itself, bringing analytics closer to the data and improving performance, according to Redis Labs.
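In practice that looks roughly like the following sketch, which drives RedisAI's AI.* commands through redis-py's generic command interface. The TorchScript model file, key names and tensor shape are placeholders, and the exact command syntax varies slightly between RedisAI versions.

```python
import redis

r = redis.Redis(host="localhost", port=6379)

# Load a pre-trained TorchScript model into the database (placeholder file).
with open("model.pt", "rb") as f:
    r.execute_command("AI.MODELSET", "mymodel", "TORCH", "CPU", "BLOB", f.read())

# Write an input tensor, run inference where the data lives, read the result.
r.execute_command("AI.TENSORSET", "in", "FLOAT", 1, 4,
                  "VALUES", 0.1, 0.2, 0.3, 0.4)
r.execute_command("AI.MODELRUN", "mymodel", "INPUTS", "in", "OUTPUTS", "out")
print(r.execute_command("AI.TENSORGET", "out", "VALUES"))
```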
Bryan Betts, principal analyst with Freeform Dynamics, told us the product was aimed at a class of AI apps where you need to constantly monitor and retrain the AI engine as it works.
"Normally you have both a compute server and a database at the back end, with training data moving to and fro between them," he said. "What Redis and Tensorwerk have done is to build the AI computation ability that you need to do the retraining right into the database. This should cut out a stack of latency at least for those applications that fit its profile, which won't be all of them."
Betts said other databases might do the same, but developers would have to commit to specific AI technology. To accept that lock-in, they would need to be convinced the performance advantages outweigh the loss of the flexibility to choose the "best" AI engine and database separately.
IDC senior research analyst Jack Vernon told us the Redis approach was similar to that of Oracle's data science platform, where the models sit and run in the database.
"On Oracle's side, though, that seems to be tied to their cloud," he said. "That could be the real differentiating thing here: it seems like you can run Redis however you like. You're not going to be tied to a particular cloud infrastructure provider, unlike a lot of the other AI data science platforms out there."
SAP, too, offers real-time analytics on its in-memory HANA database, but users can expect to be wedded to its technologies, which include the Leonardo analytics platform.
Redis Labs said the AI serving platform would give developers the freedom to choose their own AI back end, including PyTorch and TensorFlow. It works in combination with RedisGears, a serverless programmable engine that supports transaction, batch, and event-driven operations as a single data service and integrates with application databases such as Oracle, MySQL, SQL Server, Snowflake or Cassandra.
Yiftach Shoolman, founder and CTO at Redis Labs, said that while researchers worked on improving the chipset to boost AI performance, this was not necessarily the source of the bottleneck.
"We found that in many cases, it takes longer to collect the data and process it before you feed it to your AI engine than the inferences itself takes. Even if you improve your inferencing engine by an order of magnitude, because there is a new chipset in the market, it doesn't really affect the end-to-end inferencing time."
Analyst firm Gartner sees increasing interest in AI ops environments over the next four years to improve the production phase of the process. In the paper "Predicts 2020: Artificial Intelligence Core Technologies", it says: "Getting AI into production requires IT leaders to complement DataOps and ModelOps with infrastructures that enable end-users to embed trained models into streaming-data infrastructures to deliver continuous near-real-time predictions."
Vendors across the board are in an arms race to help users "industrialise" AI and machine learning - that is, to take it from a predictive model that tells you something really "cool" to something that is reliable, quick, cheap and easy to deploy. Google, AWS and Azure are all in the race, along with smaller vendors such as H2O.ai and established behemoths like IBM.
While big banks like Citi are already some way down the road, vendors are gearing up to support the rest of the pack. Users should question who they want to be wedded to, and what the alternatives are.
Originally posted here:
Live analytics without vendor lock-in? It's more likely than you think, says Redis Labs - The Register