Category Archives: Cloud Servers
DARPA plans shift from AWS and on-prem to multicloud by 2022 – DatacenterDynamics
Slides presented at an industry day held earlier this month by the agency's internal IT administrative division, the Information Technology Directorate (ITD), detail DARPA's internal computing resources.
Network Ops maintains 2.3 petabytes of storage and 542 servers for unclassified work, along with 600 terabytes of storage and 294 servers for classified work. Servers are refreshed every 48 months. Its HPC support group lists "15 HPC Projects" with access to 25 million CPU hours.
Separate documents reveal that email services are "currently based on Exchange 2013 Servers," while active directory services are "currently based on Windows Server 2012 R2."
The presentation continues: "ITD procured nearly 7,000 substantial items (servers, network infrastructure, laptops, monitors, etc.) over the past year. This is in addition to smaller items (e.g., cables, mice, phone chargers, etc.)." Reference is made to an internal data center, as well as a disaster recovery site.
But, as the document notes, DARPA must achieve "compliance with US Government and DoD mandates to migrate to consolidated data centers or utilize commercial cloud." Since 2015, the documents reveal, DARPA has used AWS GovCloud for some unclassified workloads.
"Currently migrating all unclassified workloads to Amazon Web Services GovCloud," a slide states. "[Approximately] 30% of unclassified workloads have been migrated."
Here is the original post:
DARPA plans shift from AWS and on-prem to multicloud by 2022 - DatacenterDynamics
What AMD And Intel Quarterly Numbers Say About Datacenter Business – Forbes
Earnings season is in full swing; both AMD and Intel announced fourth-quarter (and annual) results recently. Both companies claimed strong quarters with robust growth. The following few paragraphs will attempt to dissect those numbers in a little more detail and provide some guidance on what these numbers say about the 2020 business outlook.
AMD EESC: strong results, lots of EPYC activity
AMD quarterly results.
Trying to pull actual EPYC numbers from AMD's reporting is tricky, as they are grouped with the company's embedded and semi-custom businesses, which tend to be much higher volume and much lower margin.
At first glance, these numbers can seem a little mixed. Net revenue numbers look fairly flat to down across the board, while operating income looks pretty strong year over year. What this says to me is that the embedded and semi-custom business was soft, while EPYC continues to ramp in the enterprise. Consider this: Y/Y segment revenue was down 14%, while operating income was up 61%. And when looking at Q4'19 versus Q4'18, revenue was up only 7% while operating income rose 850%.
In addition to these numbers, AMD showed strength in building market momentum for EPYC, with over 100 EPYC-based server platforms in market. Perhaps the strongest indicator of EPYC's momentum is this line from AMD's presentation: "Dell (EMC) began shipping full portfolio of servers powered by EPYC processors." Why is this so significant? One of Dell EMC's strengths is its pragmatism. Fully embracing AMD as a server silicon partner and having that manifest in a full suite of platforms is an indication that customers are asking again and again.
As difficult as it is to discern AMD's EPYC results for the past quarter, it's near impossible to look at its 2020 guidance as an indicator for continued EPYC ramp. I can only go on what I hear from the industry. Demand continues to build in the enterprise market, and the addition of ex-Intel executive Dan McNamara should help in the go-to-market (GTM) drive. So expect to see EPYC continue to gain traction in the enterprise and for the numbers to (indirectly) reflect this growth.
A continued focus on building a strong channel presence is critical for EPYC's long-term success. Channel programs are more than MDF and campaign budget. It's about the people, relationships and joint strategic planning that drive meaningful revenue and a run rate transactional business.
Intel DCG: cloud growth is staggering; enterprise and government down
Intel quarterly results.
Intel's Data Center Group (DCG) had a killer quarter. There's no other way to describe its performance after looking at the numbers. The company had a strong showing in platform (Xeon), and strong growth in adjacencies. The company saw especially strong growth in the cloud service provider (CSP) space, seeing a 48% YoY jump, accompanied by a healthy 14% growth in the comms space (no doubt buoyed by 5G rollouts).
In addition to this strong growth and record revenue, the company continued to ramp its 14nm part, codenamed Cascade Lake. It's important for the company to show a strong rollout with performance and power efficiency numbers that stand against AMD's 7nm Rome CPU. It's fending off arguably the stiffest competition it has ever faced in the datacenter.
Intel's Q4 DCG numbers, along with Q3's, can indicate a few things. First, the server market contraction is reversing, and the cloud providers have resumed their buying trends. Secondly, the comms providers have resumed infrastructure acquisitions in support of 5G rollouts. And finally, the impact of cloud is being felt in the rollout of servers at enterprise and government. This trend is nothing new to anybody who has been following the server market, but the chart below clearly shows the correlation.
Intel DCG growth.
One question that pops out from looking at the above chart is whether Intel is facing pricing pressures from AMD. ASP has tracked strong relative to unit volume (UV) and has generally mapped to cloud and comms growth. However, Q4 showed a slight dip in ASP, while cloud showed very strong growth alongside a healthy comms quarter. There could be a number of reasons for this. Still, it is an interesting break in the ASP trend given the fact that it follows the quarter in which AMD's Rome hit the market. Perhaps Intel is using pricing to hold off a competitive AMD? This could be especially interesting in the comms space, where Rome should be a good fit without the customizations required by the large cloud providers. Regardless, Intel's numbers look impressive and its guidance for DCG in 2020 is high single-digit growth.
What does all of this say?
While Xeon obviously had a strong quarter, digging into the numbers shows that EPYC also had a solid quarter of growth. Further, the partnerships and activities that help build a solid run rate business seem to be there for AMD, as demonstrated by Dell EMC's strong EPYC portfolio.
For those looking for spikes in AMD's EESC numbers as evidence of EPYC ramp, be patient. Pardon the pun, but Rome was not built in a day. The qualification and deployment cycle of servers in the enterprise market is slow. It will take another quarter or two to see the strength of EPYC's ramp, and that will be seen through the numbers and announced wins.
Expect to see Intel's continued growth in cloud and for it to find new opportunities as the AI space begins to heat up. Additionally, Cascade Lake should bolster the company's prospects for the year. Also, watch for continued EPYC growth in 2020. Methinks AMD may be a little conservative in its guidance.
Disclosure: Moor Insights & Strategy, like all research and analyst firms, provides or has provided research, analysis, advising and/or consulting to many high-tech companies in the industry, including AMD and Intel. The author does not have any investment positions in any of the companies named in this article.
Here is the original post:
What AMD And Intel Quarterly Numbers Say About Datacenter Business - Forbes
Netskope hauls in another $340M investment on nearly $3B valuation – TechCrunch
Netskope has always focused its particular flavor of security on the cloud, and as more workloads have moved there, it has certainly worked in its favor. Today the company announced a $340 million investment on a valuation of nearly $3 billion.
Sequoia Capital Global Equities led the round, but in a round this large, there were a bunch of other participating firms, including new investors Canada Pension Plan Investment Board and PSP Investments, along with existing investors Lightspeed Venture Partners, Accel, Base Partners, ICONIQ Capital, Sapphire Ventures, Geodesic Capital and Social Capital. Today's investment brings the total raised to more than $740 million, according to Crunchbase data.
As with so many large rounds recently, CEO Sanjay Beri said the company wasn't necessarily looking for more capital, but when brand-name investors came knocking, they decided to act. "We did not necessarily need this level of capital, but having a large balance sheet and a legendary set of investors like Sequoia, Lightspeed and Accel putting all their chips behind Netskope for the long term to dominate the largest market in security is a very strong signal to the industry," Beri said.
From the start, Netskope has taken aim at cloud and mobile security, eschewing the traditional perimeter security that was still popular when the company launched in 2012. "Legacy products based on traditional notions of perimeter security have gone obsolete and inhibit the needs of digital businesses. Today's urgent requirement is security that is fast, delivered from the cloud, and provides real-time protection against network and data threats when cloud services, websites, and private apps are being accessed from anywhere, anytime, on any device," he explained.
When Netskope announced its $168.7 million round at the end of 2018, the company had a valuation over $1 billion. Today, it announced it has almost tripled that number, with a valuation close to $3 billion. That's a big leap in just two years, but it reports 80% year-over-year growth, and claims to be "the fastest-growing company at scale in the fastest-growing areas of cybersecurity: secure access service edge (SASE) and cloud security," according to Beri.
The next natural step for a company at this stage of maturity would be to look to become a public company, but Beri wasn't ready to commit to that just yet. "An IPO is definitely a possible milestone in the journey, but it's certainly not limited to that and we're not in a rush and have no capital needs, so we're not commenting on timing."
See the article here:
Netskope hauls in another $340M investment on nearly $3B valuation - TechCrunch
How an Accounting Tweak Will Make Amazon’s Most Profitable Business Even More Profitable – The Motley Fool
Amazon (NASDAQ:AMZN) reported blockbuster fourth-quarter results last week, sending the e-commerce giant's market cap back above $1 trillion. The move higher was fueled by heavy investments in one-day delivery that are already starting to pay off. As usual, the Amazon Web Services (AWS) cloud infrastructure business carried overall profitability, representing two-thirds of the company's total operating income during the quarter.
CFO Brian Olsavsky also disclosed that AWS is about to get even more profitable.
Image source: Getty Images.
On the conference call, Olsavsky noted that Amazon's guidance for the first quarter includes $800 million less in depreciation expenses due to an accounting tweak: Amazon is extending the useful life of its data center servers.
Amazon has historically estimated the useful life of its servers at three years but is now increasing that time frame to four years effective this year. The move follows the completion of a useful life study that Amazon conducted in Q4, the company notes in its 10-K. Changing that estimate is expected to boost operating income in 2020 by a whopping $2.3 billion.
The adjustment does not affect any depreciation that Amazon has already recognized or cash it has already spent, but merely changes how the company accounts for depreciation of those assets going forward. Importantly, this isn't purely an accounting adjustment. Amazon has been working hard to improve the operating efficiency of its cloud infrastructure, and AWS has continued to refine its software in a way that makes its servers last longer by reducing stress on the hardware, Olsavsky added. Those improvements apply to both AWS and the server infrastructure that powers the core e-commerce platform.
"So we are essentially reflecting the fact that we have gotten better at extending the useful life here and [are] now building that into our financials looking forward," the finance chief said. The improvements will also reduce the capital intensity of the AWS business, as Amazon can extend its capital expenditure cycles and increase capital efficiency.
"We expect technology and content costs to grow at a slower rate in 2020 due to an increase in the estimated useful life of our servers, which will impact each of our segments," Amazon states in its annual report.
When companies invest in long-lived assets, instead of expensing those costs up front, those investments are capitalized and placed on the balance sheet. Management then needs to estimate the useful life of those assets and determine a depreciation method, such as accelerated or straight-line, among others.
For server infrastructure, Amazon uses straight-line depreciation over the estimated useful life; extending the useful life of an asset results in lower depreciation expense per year. With 13 years of experience under its belt, AWS continues to get even stronger as competition in the cloud infrastructure market heats up.
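To make the arithmetic concrete, here is a minimal Python sketch of how extending a server's useful life from three to four years lowers the annual straight-line expense. The $30,000 unit cost is an illustrative assumption, not a figure from Amazon's filings.

```python
def annual_straight_line(cost: float, useful_life_years: int) -> float:
    """Annual expense under straight-line depreciation: cost / useful life."""
    return cost / useful_life_years

cost = 30_000.0  # hypothetical cost of one server, not an Amazon figure
over_three = annual_straight_line(cost, 3)  # $10,000 per year
over_four = annual_straight_line(cost, 4)   # $7,500 per year

print(f"3-year life: ${over_three:,.0f}/yr, 4-year life: ${over_four:,.0f}/yr")
print(f"annual expense drops by ${over_three - over_four:,.0f} per server")
```

Scaled across the hundreds of thousands of servers in a fleet like AWS's, a per-unit reduction of this kind is how the change adds up to billions in reported operating income.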
See more here:
How an Accounting Tweak Will Make Amazon's Most Profitable Business Even More Profitable - The Motley Fool
Difference Between Authorization and Authentication – Security Boulevard
By Cassa Niedringhaus Posted February 6, 2020
Authentication (AuthN) and authorization (AuthZ) are industry terms that are sometimes confused or used interchangeably. They're also presented together in AAA (authentication, authorization, and accounting). However, they're individual concepts with separate effects on organizational security.
Here, we'll cover how they're defined and how to implement them in enterprises.
Authentication refers to identity: It's about verifying that a user is who they say they are.
Just as in the real world, where we might verify a person's identity by their facial features, we need measures to verify a user's digital identity. A user can authenticate their identity with credentials such as a username and password, an SSH key, or biometrics.
Multi-factor authentication (MFA) strengthens the process by requiring a user to enter something they know (e.g. a password) and something they have (e.g. a time-based one-time password, or TOTP). That way, even if a password is compromised, an account is still protected by the TOTP, which is more difficult to compromise.
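As a rough illustration of the mechanics, here is a minimal Python sketch using the pyotp library to generate and verify a TOTP. The enrollment flow is simplified, and the library choice is an illustrative assumption rather than anything the article prescribes.

```python
import pyotp  # pip install pyotp

# Enrollment: generate a shared secret once and provision it to the user's
# authenticator app (typically via a QR code).
secret = pyotp.random_base32()
totp = pyotp.TOTP(secret)

code = totp.now()  # the 6-digit code the user would type in; rotates every 30s
print("current code:", code)

# Login: the server recomputes the code from the same shared secret and compares.
print("valid:", totp.verify(code))
```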
Newer methods of authentication, such as biometrics or hardware keys, still stem from the idea that users provide something they know and/or something they have to authenticate their identities.
There are many considerations for organizations as they decide how users will authenticate, and whether that process should differ by resource, such as requiring MFA for critical systems and SSH keys for cloud servers. They also need to ensure that verification happens over secure channels.
Authorization is an orthogonal concept to authentication: It's about privilege, and verifying what resources a user is allowed to access after you've verified their identity.
Organizations should heed the concept of least privilege so users have access only to the resources and data they need to get their jobs done and nothing more.
In an enterprise, for example, employees in the engineering department would be granted access to a different set of resources than employees in the sales department. Furthermore, within individual resources, different users might be granted different access levels.
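A least-privilege check can be as simple as a deny-by-default lookup. The Python sketch below is a minimal illustration; the role and permission names are hypothetical.

```python
# Deny-by-default role lookup; role and permission names are hypothetical.
ROLE_PERMISSIONS = {
    "engineering": {"repo:read", "repo:write", "ci:run"},
    "sales":       {"crm:read", "crm:write"},
}

def is_authorized(role: str, permission: str) -> bool:
    """Grant access only when the role explicitly includes the permission."""
    return permission in ROLE_PERMISSIONS.get(role, set())

print(is_authorized("engineering", "repo:write"))  # True
print(is_authorized("sales", "repo:write"))        # False: least privilege
```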
See the original post here:
Difference Between Authorization and Authentication - Security Boulevard
IGEL Teams with AMD to Optimize the UD3 Endpoint for Cloud Workspaces – PRNewswire
MUNICH, Feb. 6, 2020 /PRNewswire/ -- IGEL, provider of the next-gen edge OS for cloud workspaces, announced the newly updated IGEL UD3 (Universal Desktop model 3) endpoint powered by the AMD Ryzen Embedded R1505G system-on-chip (SoC). A versatile endpoint for accessing virtualized apps, desktops, and cloud workspaces, IGEL UD3 is designed to offer a high-performance computing experience that drives productivity and collaboration across all industries.
"We are proud to be collaborating with AMD on the launch of the new generation of the IGEL UD3," said Matthias Haas, CTO, IGEL. "We have enjoyed a very long and successful relationship with AMD, and found the AMD Ryzen Embedded R1505G SoC processor to be the best option for providing our customers with fast and secure access to their cloud workspaces."
Optimized for Productivity, Flexibility and Efficiency
Leveraging the powerful AMD Ryzen Embedded R1505G SoC with Radeon Vega 3 Graphics and extensive connectivity options, IGEL UD3 provides a secure, high-performance computing experience for a broad range of demanding tasks across all industries.
Key features available with IGEL UD3 include integrated WiFi and Bluetooth. Both features are optional and this is the first time IGEL has them integrated into its endpoint hardware. Additional configurable connectivity options designed to offer flexibility, seamless integration and ease of use across a broad range of use cases include integrated smart card readers and VESA mount. IGEL UD3 also features support for two 4K displays, SuperSpeed USB Type-C and standard legacy ports for convenience and productivity.
"One of the things we are most excited about with the new UD3 offering is the optimization of the processor for maximum energy efficiency," said Haas. "Conserving energy is important to us. That's why we are the only endpoint device manufacturer to have taken an extra step to implement a customized version of the AMD Ryzen Embedded R1505G SoC, which has a low 10W TDP at 2.0GHz base and up to 2.7GHz boost frequency."1
Secure "Chain of Trust" Safeguards Cloud WorkspacesIGEL and AMD have extended the secure "chain of trust" which extends all the way to the target server or cloud, with a step before the Unified Extensible Firmware Interface (UEFI) boot, to include the AMD Secure Processor technology, a hardware-based security processor built right into the AMD Ryzen Embedded R1505G SoC. Putting the protection right on the processor, this integration leverages a dedicated security system,initiating IGEL's secure chain of trust at the physical hardware layer.
The AMD Ryzen Embedded processor checks whether the UEFI binary is cryptographically signed by IGEL, verifying that the UEFI binary is authentic and has not been manipulated. The UEFI then checks the bootloader for a UEFI Secure Boot signature. Next, the bootloader checks the IGEL OS Linux kernel, and if the OS partitions' signatures on disk are correct, IGEL OS is initiated and the partitions are mounted. Finally, for users connecting to a VDI or cloud environment, access software such as Citrix Workspace App or VMware Horizon 7 checks the certificate of the connected server, thus creating a complete "chain of trust."
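To get a feel for how staged verification works, here is a toy Python sketch. It substitutes bare SHA-256 digests for the cryptographic signatures IGEL and AMD describe, and the stage payloads are placeholders, so this illustrates the pattern rather than IGEL's implementation.

```python
import hashlib

# Stage payloads standing in for the real UEFI binary, bootloader and kernel.
stages = {
    "uefi":       b"uefi image bytes",
    "bootloader": b"bootloader bytes",
    "kernel":     b"kernel image bytes",
}

# Digests each preceding stage is assumed to hold for the stage it launches.
trusted = {name: hashlib.sha256(blob).hexdigest() for name, blob in stages.items()}

def verify_chain(payloads: dict, digests: dict) -> bool:
    """Verify each stage before handing control to it; halt on first mismatch."""
    for name, blob in payloads.items():
        if hashlib.sha256(blob).hexdigest() != digests[name]:
            print(f"chain of trust broken at {name!r}; refusing to boot")
            return False
        print(f"{name} verified; handing control to the next stage")
    return True

verify_chain(stages, trusted)                      # all stages pass

tampered = dict(stages, bootloader=b"evil bytes")  # simulate a modified bootloader
verify_chain(tampered, trusted)                    # halts at the bootloader
```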
"We are pleased to work together with IGEL to integrate this low power AMD Ryzen Embedded R1505G into the newly optimized generation of IGEL UD3," said Stephen Turnbull, director of product management and business development, Embedded Solutions, AMD. "Together, AMD Ryzen Embedded processors and IGEL endpoints offer advanced performance, power efficiency and security features that begin where it all starts at the processor level."
New UD3 the First IGEL Hardware to Feature Teradici's PCoIP Ultra
Though IGEL OS has supported PCoIP Ultra since June 2019, the IGEL UD3 is the first IGEL endpoint hardware to be optimized for remote cloud connectivity with Teradici's PCoIP Ultra Software Client for Linux. With PCoIP Ultra and the UD3, end users benefit from greater flexibility of choice, with the ability to securely connect with Teradici Cloud Access Software for a rich, high-fidelity user experience to any cloud, including AWS (including Amazon WorkSpaces), Microsoft Azure and Google Cloud.
Availability and Support
IGEL UD3 is part of IGEL's family of Universal Desktop endpoints, and is designed for virtual desktops and cloud workspace environments. IGEL UD3 with the AMD Ryzen Embedded R1505G SoC will be generally available starting in May 2020 through IGEL's network of Platinum- and Gold-level Partners, Authorized IGEL Partners (AIPs) and resellers.
For more information on the IGEL UD3 with the AMD Ryzen Embedded R1505G SoC, read the info sheet "AMD and IGEL optimize the AMD Ryzen Embedded R1505G system-on-chip for the IGEL UD3." You can learn more about IGEL's next-gen endpoint hardware design here. For more information on IGEL OS, visit https://www.igel.com/igel-os-universal-desktop-operating-system/.
IGEL on Social Media
Twitter: http://www.twitter.com/IGEL_Technology
Facebook: www.facebook.com/igel.technology
LinkedIn: http://www.linkedin.com/company/igel-technology
YouTube: http://www.youtube.com/user/IGELTechnologyTV
IGEL Community: http://www.igel.com/community
AMD, the AMD Arrow logo, Ryzen, and combinations thereof are trademarks of Advanced Micro Devices, Inc. Other product names used in this publication are for identification purposes only and may be trademarks of their respective companies.
1. Max boost for AMD Ryzen processors is the maximum frequency achievable by a single core on the processor running a bursty single-threaded workload. Max boost will vary based on several factors, including, but not limited to: thermal paste; system cooling; motherboard design and BIOS; the latest AMD chipset driver; and the latest OS updates.
About IGEL
IGEL provides the next-gen edge OS for cloud workspaces. The company's world-leading software products include IGEL OS, IGEL UD Pocket (UDP) and IGEL Universal Management Suite (UMS). These solutions comprise a more secure, manageable and cost-effective endpoint management and control platform across nearly any x86 device. Easily acquired via just two feature-rich software offerings (Workspace Edition and Enterprise Management Pack), IGEL software presents outstanding value per investment. Additionally, IGEL's German-engineered endpoint solutions deliver the industry's best hardware warranty (5 years), software maintenance (3 years after end of life) and management functionality. IGEL enables enterprises to save vast amounts of money by extending the useful life of their existing endpoint devices while precisely controlling all devices running IGEL OS from a single dashboard interface. IGEL has offices worldwide and is represented by partners in over 50 countries. For more information on IGEL, visit http://www.igel.com.
SOURCE IGEL
Read the original post:
IGEL Teams with AMD to Optimize the UD3 Endpoint for Cloud Workspaces - PRNewswire
Options Partners with Pure, Leverages Pure as-a-Service to Deliver All-NVMe, All Flash Cloud – HPCwire
NEW YORK and LONDON, Feb 5, 2020 Options, the leading provider of cloud-enabled managed services to the global financial markets, today announced that it has collaborated with Pure Storage to become the first managed service provider (MSP) to deliver all-NVMe, all-flash cloud to all capital markets.
Built exclusively on Storage-as-a-Service (STaaS) infrastructure from Pure Storage, this new solution allows Options customers to reliably store data at scale and access the information instantly across its global financial network. Customers can also more effectively deploy containerized environments, large-scale datasets and other low-latency applications.
Options' Pure as-a-Service deployment will be integrated into the firm's enterprise-grade network, comprised of 40+ data center sites worldwide. The first deployment of its kind, the solution offers fully optimized accessibility, intra-regional replication, and enhanced performance standards, outclassing precursory data storage solutions.
As one of the fastest-growing enterprise IT companies in history and one of the world's leading data storage providers, Pure develops flash-based, enterprise-class storage products and storage-as-a-service to deliver a modern data experience for customers. The Pure as-a-Service platform includes block, file and object storage services, available on-premises, in co-located/hosted environments or within the public cloud, and all backed by an advanced management framework using artificial intelligence and machine learning.
Options VP Head of Infrastructure, James Laming, commented: "Our collaboration with Pure represents a step change in how Options delivers storage services for its clients and partners. The petascale implementation of all-flash Pure as-a-Service over Options' robust global financial network dramatically increases our ability to meet the demanding uptime and performance SLAs of our customers in capital markets. Our combined and continued focus on delivering world-leading services will undoubtedly enhance and inform how the financial markets understand, leverage, and consume data."
Options VP Product Development, Michael Russo, added: "Given the performance, availability, and durability underpinning it, Options' collaboration with Pure demonstrates a significant milestone for our managed data storage service. With an initial multi-regional launch across 13 Options data centers, Pure's full suite of storage is now available over our global network backbone and will provide clients with unrivalled storage performance and replication capabilities. With enhancements to machine learning and increases in processing performance, organizations now require an equally optimised, cost-effective way in which to rapidly store, recall and configure their data. Options' data storage solution will be transformational, offering clients a departure from costly, antiquated data storage providers and vendors."
"For large and legacy-style big data applications, there is a direct and significant correlation between application value and the performance of underlying storage," said Rob Walters, General Manager for Pure as-a-Service, Pure Storage. "With best-in-class, all-NVMe Storage-as-a-Service from Pure Storage, Options can provide customers with an architecture that dramatically improves performance across the entire application stack."
Today's announcement comes following recent news of Options' growth investment from Abry Partners.
About Options
Options Technology is the leading provider of cloud-enabled managed services to the global financial services sector. Founded in 1993, the company began life as a hedge fund technology services provider. Today over 200 firms globally leverage our award-winning front-to-back-office managed infrastructure: Managed Platform, Managed Colocation, Managed Applications and technology consultancy services. Options clients include the leading global investment banks, hedge funds, funds of funds, proprietary trading firms, market makers, broker/dealers, private equity houses and exchanges. For more on Options, please visit www.options-it.com, follow us on Twitter at @Options_IT and visit our LinkedIn page.
About Pure Storage
Pure Storage, the market's leading independent solid-state array vendor, enables the broad deployment of flash in the data center. The company's all-flash enterprise arrays offer significant performance and efficiency gains over mechanical disk, at a lower price point per gigabyte stored. Pure Storage FlashArrays are ideal for performance-intensive applications, including server virtualization and consolidation, VDI, OLTP database, real-time analytics and cloud computing. To learn more, visit: www.purestorage.com.
Source: Options
See the original post here:
Options Partners with Pure, Leverages Pure as-a-Service to Deliver All-NVMe, All Flash Cloud - HPCwire
Infrastructure-as-code templates are source of cloud infrastructure weaknesses – TechCentral.ie
(Image: Stockfresh)
High percentage of IaC template misconfigurations in cloud deployments vulnerable to attack
In the age of cloud computing where infrastructure needs to be extended or deployed rapidly to meet ever-changing organisational needs, the configuration of new servers and nodes is completely automated. This is done using machine-readable definition files, or templates, as part of a process known as infrastructure as code (IaC) or continuous configuration automation (CCA).
A new analysis by researchers from Palo Alto Networks of IaC templates collected from GitHub repositories and other places identified almost 200,000 such files that contained insecure configuration options. Using those templates can lead to serious vulnerabilities that put IaC-deployed cloud infrastructure and the data it holds at risk.
"Just as when you forget to lock your car or leave a window open, an attacker can use these misconfigurations to weave around defences," the researchers said. "This high number explains why, in a previous report, we found that 65% of cloud incidents were due to customer misconfigurations. Without secure IaC templates from the start, cloud environments are ripe for attack."
There are multiple IaC frameworks and technologies, the most common, based on Palo Alto's collection effort, being Kubernetes YAML (39%), Terraform by HashiCorp (37%) and AWS CloudFormation (24%). Of these, 42% of identified CloudFormation templates, 22% of Terraform templates and 9% of Kubernetes YAML configuration files had a vulnerability.
Palo Alto's analysis suggests that half the infrastructure deployments using AWS CloudFormation templates will have an insecure configuration. The report breaks this down further by type of impacted AWS service: Amazon Elastic Compute Cloud (Amazon EC2), Amazon Relational Database Service (RDS), Amazon Simple Storage Service (Amazon S3) or Amazon Elastic Container Service (Amazon ECS).
For example, over 10% of S3 storage buckets defined in templates were publicly exposed. Improperly secured S3 buckets have been the source of many publicly reported data breaches in the past.
The absence of database encryption and logging, both of which are important for protecting data and investigating potential unauthorised access, was also a commonly observed issue in CloudFormation templates. Half of them did not enable S3 logging and half did not enable S3 server-side encryption.
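Checks for these specific issues can be automated before deployment. The Python sketch below scans a CloudFormation template, parsed as JSON, for AWS::S3::Bucket resources that omit the BucketEncryption or LoggingConfiguration properties; it is a minimal illustration, not Palo Alto Networks' tooling.

```python
import json

def audit_s3_buckets(template: dict) -> list:
    """Flag AWS::S3::Bucket resources missing encryption or access logging."""
    findings = []
    for name, resource in template.get("Resources", {}).items():
        if resource.get("Type") != "AWS::S3::Bucket":
            continue
        props = resource.get("Properties", {})
        if "BucketEncryption" not in props:
            findings.append(f"{name}: no server-side encryption configured")
        if "LoggingConfiguration" not in props:
            findings.append(f"{name}: access logging not enabled")
    return findings

# A tiny inline template; in practice you would json.load() a template file.
template = json.loads("""
{"Resources": {"DataBucket": {"Type": "AWS::S3::Bucket", "Properties": {}}}}
""")
for finding in audit_s3_buckets(template):
    print(finding)
```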
A similar situation was observed with Amazon's Redshift data warehouse service. Eleven percent of configuration files produced Redshift instances that were publicly exposed, 43% did not have encryption enabled, and 45% had no logging turned on.
Terraform templates, which support multiple cloud providers and technologies, did not fare any better. Around 66% of Terraform-configured S3 buckets did not have logging enabled, 26% of AWS EC2 instances had SSH (port 22) exposed to the internet, and 17% of template-defined AWS Security Groups allowed all inbound traffic by default. The report lists a number of other common misconfigurations in Terraform templates as well.
Kubernetes YAML files had the smallest incidence of insecure configurations, but those that did were significant. Of the insecure YAML files found, 26% had Kubernetes configurations that ran as root or with privileged accounts.
"Configurations allowing containers as root provide attackers with an opportunity to own virtually any aspect of that container," the Palo Alto researchers said. "This also makes the process of performing container escape attacks easier, thus opening the host system to other potential threats. Security and DevOps teams should ensure that containers do not run with root or privileged accounts."
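A similar pre-deployment check can catch these Kubernetes settings. The Python sketch below, which assumes PyYAML is available, inspects a pod manifest for privileged containers or containers that may run as root; the manifest itself is illustrative.

```python
import yaml  # pip install pyyaml

# An illustrative manifest; a real check would iterate over files in a repo.
MANIFEST = """
apiVersion: v1
kind: Pod
metadata:
  name: demo
spec:
  containers:
  - name: app
    image: nginx
    securityContext:
      privileged: true
      runAsUser: 0
"""

pod = yaml.safe_load(MANIFEST)
for container in pod["spec"]["containers"]:
    ctx = container.get("securityContext") or {}
    if ctx.get("privileged"):
        print(f"{container['name']}: runs as a privileged container")
    if ctx.get("runAsUser") == 0 or not ctx.get("runAsNonRoot", False):
        print(f"{container['name']}: may run as root; set runAsNonRoot: true")
```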
The types of IaC template misconfigurations and their prevalence, such as the absence of database encryption and logging or publicly exposed services, are in line with the types of issues Palo Alto Networks has detected in real-world cloud infrastructure deployments and covered in past reports.
This suggests that the use of IaC templates in automated infrastructure deployment processes without first checking them for insecure configurations or other vulnerabilities is a big contributing factor to the cloud weaknesses observed in the wild.
Cybercriminal groups often target cloud infrastructure to deploy cryptomining malware that takes advantage of the processing power paid for by the victims. However, some of these groups are also venturing beyond cryptomining and use hacked cloud nodes for other malicious purposes.
"It is readily apparent that attackers are using the default configuration mistakes implemented by weak or insecure IaC configuration templates, bypassing firewalls, security groups, or VPC policies and unnecessarily exposing an organisation's cloud environment to attackers," the Palo Alto researchers said. "Shift-left security is about moving security to the earliest possible point in the development process. Organisations that consistently implement shift-left practices and procedures within cloud deployments can quickly outpace competitors. Work with DevOps teams to get your security standards embedded in IaC templates. This is a win-win for DevOps and security."
IDG News Service
Originally posted here:
Infrastructure-as-code templates are source of cloud infrastructure weaknesses - TechCentral.ie
Windows Server and the future of file servers in the cloud computing world – TechRepublic
We still run our businesses on files. How is Microsoft upgrading Windows Server to use files in a hybrid world?
We do a lot with servers today -- much more than the age-old file and print services that once formed the backbone of business. Now servers run line-of-business applications, host virtual machines, support collaboration, provide telephony services, manage internet presence. It's a list that goes on and on -- and too often we forget that they're still managing and hosting files.
There are occasional reminders of Windows as a file server, with Microsoft finally deprecating the aging SMB 1 file protocol, turning it off in Windows 10. It was a change that forced system administrators to confront insecure connections and the applications that were still using them. There's an added problem: many legacy file servers are still running the now-unsupported Windows Server 2008 R2.
Microsoft hasn't forgotten the Windows File Server and the services that support it. There's still a lot of work going into the platform, using it as a bridge between on-premise storage and the growing importance of cloud-scale storage in platforms like Azure. New hardware is having an effect, with technologies like Optane blurring the distinction between storage and memory and providing a new fast layer of storage that outperforms flash.
As much as organizations use tools like Teams and Slack, and host documents in services like SharePoint and OneDrive, we still run our businesses on files. We might not be using a common shared drive for all those files anymore, but we're still using those files and we still need servers to help manage them. Windows Server's recent updates have added features intended to help modernize your storage systems, building on key technologies including Storage Replica and new tools to build and run scale-out file servers.
Much of Microsoft's thinking around modern file systems is focused on hybrid storage scenarios, bridging on-premise and cloud services. It's a pragmatic choice: on-premise storage can benefit from cloud lessons, while techniques developed for new storage hardware on-premise can be used in the cloud as new hardware rolls out. That leads to a simple process for modernizing file systems, giving you a set of steps to follow when updating Windows Server and rolling out new storage hardware. In a presentation at Ignite 2019, Ned Pyle, principal program manager on the Windows Server team, breaks it down into four steps: Learn/Inventory, Migrate/Deploy, Secure, and Future.
You can manage multiple server migrations (to newer hardware or VMs) from the Windows Admin Center interface.
Image: Microsoft
The latest version of SMB, SMB 3.1.1, adds new security features to reduce the risks to your files. It improves encryption and adds protection from man-in-the-middle attacks. It's a good idea to migrate much of your file system traffic over to it, removing NTLM and SMB 1 from your network.
You shouldn't forget Microsoft's alternate file system technology, ReFS. Offering up to 4TB files, it can use its integrity stream option to validate data integrity, as well as supporting file-system-level data deduplication. You can get a significant data saving with ReFS as part of Windows Server's Storage Spaces.
Microsoft now offers a Storage Migration Service to help manage server upgrades. As well as supporting migrations from on-premise to Azure, it can help bring files from older Windows Server versions to Windows Server 2019 and its newer file system tools and services. It will map storage networks, copy data, ensure file security and validity, before obfuscating old endpoints and cutting over to the new.
Part of the future for Windows Server's file protocols is an implementation of SMB over the QUIC protocol, using UDP. It's designed to be spoofing resistant, using TLS 1.3 over port 443. Microsoft is working on adding SMB compression to file traffic, reducing payload size and offering improved performance in congested networks and over low-bandwidth connections.
One option for building a hybrid file system is using Azure Files. On-premise systems can use VPN connections with either NFS or SMB 3.0 connections to Azure to work with what looks like a familiar share, except that it's hosted on Azure. If you're not using a VPN you still have secure connectivity options, with SMB 3.0 over port 445 or using the Azure File Sync REST API over SSL. All you need is the Windows network name of the share, using it the same way you'd use any Windows Server share locally.
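Those shares can also be reached programmatically. Below is a minimal Python sketch using the azure-storage-file-share SDK; the connection string, share name and file path are placeholders, and this usage pattern is our illustration rather than anything from the article.

```python
# pip install azure-storage-file-share
from azure.storage.fileshare import ShareFileClient

# Placeholders: swap in a real storage-account connection string and paths.
client = ShareFileClient.from_connection_string(
    conn_str="DefaultEndpointsProtocol=https;AccountName=...;AccountKey=...",
    share_name="team-share",
    file_path="reports/q4.txt",
)

client.upload_file(b"quarterly numbers")   # write from on-premise
print(client.download_file().readall())    # read back from anywhere
```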
Those Azure file shares aren't only for on-premise data; they're accessible using the same protocols inside Azure. With data now a hybrid resource, you can use Azure for scalable compute and analytics, or for gathering and sharing IoT analytics with on-premise applications, or as a disaster recovery location that's accessible from anywhere in the world. There's no change to your servers, or the way you work, only to where that data is stored. With Azure storage able to take advantage of its economies of scale, you can expand those shares as needed, without having to invest in physical storage infrastructure.
There's certainly a lot of capacity in Azure file shares: over 100TB of storage per share, with 10,000 IOPS in standard drives (which can be 10 times faster if you pay for premium services). There's support for Azure Active Directory, so you can apply the same access control rules as in your on-premise systems. Ignite 2019 saw Microsoft add support for NFS shares, as well as increasing the maximum file size to 4TB, and adding support for Azure Backup. To simplify things further, Azure File Shares can be managed through Windows Admin Center.
Perhaps the most important recent change is the shift to workload-optimized service tiers. By picking a plan that's closest to your needs you can be sure that you're not paying for features you don't want. At one end of the scale is high I/O and throughput, with Premium storage on SSDs, while at the other archival storage on Cool disks with slow startup times keeps costs to a minimum.
Users will be able to access these Azure-hosted file shares as if they're a Windows Server file share, allowing you to begin phasing out local file servers and reduce the size of the attack surface on your local systems. Attackers will not be able to use the file system as a route into line-of-business servers, or as a vector for privilege escalation. Domain-joined Azure file shares will be accessible via SMB 3.0 over VPN connections or via ExpressRoute high-speed dedicated links to Azure.
A modern file server architecture will mix on-premise and cloud. Tiering to Azure makes sense, as it gives you business continuity as well as providing an extensible file system that no longer depends on having physical hardware in your data center. You're not constrained by space or power and can take advantage of it when it's needed.
Similarly, moving traffic to SMB 3.1.1 and using Windows Admin Center will improve performance and give you a future-proof management console that will work for both on-premise and in-cloud storage resources. Putting it all together, Microsoft is delivering a hybrid filesystem solution that you really should be investigating.
Read more:
Windows Server and the future of file servers in the cloud computing world - TechRepublic
Government proposal to put police child abuse image database on the cloud raises hacking fears – Telegraph.co.uk
The Government is considering putting a police cache of tens of millions of child abuse images onto Amazon's cloud network, in a move privacy advocates warned would introduce new risks for the highly sensitive data-set.
Documents seen by The Telegraph show the Home Office has launched a study into uploading the "child abuse image database" onto the cloud. The database was set up in 2014 and comprises millions of images and videos seized during previous operations.
Up until now, the images have only been accessible within police premises given they have been deemed "incredibly sensitive".
However, according to study documents, there have been a "number of limitations and concerns" in having the data-set only accessible on physical sites.
The Home Office is looking into what the challenges would be in creating a copy of the images and then putting them on a cloud server, the documents said.
Such a move would likely prompt concerns over whether the database would be at higher risk of being stolen by criminals, given previously physical access has had to be granted.
A report released last year by cyber security firm Palo Alto Networks suggested there were tens of millions of vulnerabilities across cloud server providers, with these at risk of being exploited by hackers to gain access to uploaded material, although that same report said the fault did not lie with cloud providers themselves, but with the way their systems were used.
The Home Office appears to have held initial conversations with Amazon Web Services, the cloud arm, over the database.
Amazon's cloud servers already play host to some of the most sensitive police data, with the company having been chosen as a supplier for the police super-database set up recently, which combined criminal conviction records with intelligence information. The company is among the biggest investors in security and compliance.
It is thought the feasibility study is being conducted as a "fact-finding" exercise, and that there are currently no plans to upload the images.
However, in the documents, it said such a move would bring more flexibility, as currently police cannot access the data remotely. The Home Office declined to comment.
Privacy advocates raised concerns over whether images of child abuse required even more protections, given they would be a "high value target".
A spokeswoman for Privacy International said the move would remove physical access controls and introduce a "different set of risks to what is a highly sensitive database".
"As the Home Office increasingly turns to cloud providers to hold sensitive data which would constitute a high value target, the public needs a great deal of reassurance."
The group urged that a consultation be held with children's charities and those with technical expertise on whether the risks outweighed the benefits.
"Some of the justifications for such a move include a desire to facilitate remote access to the database and permit 'innovation activity'. This indicates that a broadening of access to a greater number of individuals outside the police, which is a clear cause for concern," the spokeswoman said.
Follow this link:
Government proposal to put police child abuse image database on the cloud raises hacking fears - Telegraph.co.uk