
Critical RCE Vulnerability Found in VMware vCenter Server – Patch Now! – The Hacker News

VMware has rolled out patches to address a critical security vulnerability in vCenter Server that could be leveraged by an adversary to execute arbitrary code on the server.

Tracked as CVE-2021-21985 (CVSS score 9.8), the issue stems from a lack of input validation in the Virtual SAN (vSAN) Health Check plug-in, which is enabled by default in the vCenter Server. "A malicious actor with network access to port 443 may exploit this issue to execute commands with unrestricted privileges on the underlying operating system that hosts vCenter Server," VMware said in its advisory.

VMware vCenter Server is a server management utility that's used to control virtual machines, ESXi hosts, and other dependent components from a single centralized location. The flaw affects vCenter Server versions 6.5, 6.7, and 7.0 and Cloud Foundation versions 3.x and 4.x. VMware credited Ricter Z of 360 Noah Lab for reporting the vulnerability.

The patch release also rectifies an authentication issue in the vSphere Client that affects the Virtual SAN Health Check, Site Recovery, vSphere Lifecycle Manager, and VMware Cloud Director Availability plug-ins (CVE-2021-21986, CVSS score: 6.5), which could allow an attacker to carry out actions permitted by the plug-ins without any authentication.

While VMware strongly recommends that customers apply the "emergency change," the company has also published a workaround that sets the affected plug-ins as incompatible. "Disablement of these plug-ins will result in a loss of management and monitoring capabilities provided by the plug-ins," the company noted.

"Organizations who have placed their vCenter Servers on networks that are directly accessible from the Internet [...] should audit their systems for compromise," VMware added. "They should also take steps to implement more perimeter security controls (firewalls, ACLs, etc.) on the management interfaces of their infrastructure."

CVE-2021-21985 is the second critical vulnerability that VMware has rectified in the vCenter Server. Earlier this February, it resolved a remote code execution vulnerability in a vCenter Server plug-in (CVE-2021-21972) that could be abused to run commands with unrestricted privileges on the underlying operating system hosting the server.

The fixes for the vCenter flaws also come after the company patched another critical remote code execution bug in VMware vRealize Business for Cloud (CVE-2021-21984, CVSS score: 9.8), which stemmed from an unauthorized endpoint that could be exploited by a malicious actor with network access to run arbitrary code on the appliance.

Previously, VMware had rolled out updates to remediate multiple flaws in VMware Carbon Black Cloud Workload and vRealize Operations Manager solutions.

Read the rest here:
Critical RCE Vulnerability Found in VMware vCenter Server - Patch Now! - The Hacker News


Meet the Influential Ex-Amazon Cloud Employees Making Waves in Tech – Business Insider

Teresa Carlson, Splunk's new president and chief growth officer, was the vice president of worldwide public sector at AWS. Splunk

Teresa Carlson was the vice president of worldwide public sector and industries at AWS, and first founded the Amazon cloud giant's public sector division in 2010.

Prior to joining Amazon, Carlson was in charge of Microsoft's federal government business. Her over two decades of experience in the public sector have made her a well-known leader in federal IT circles, and she is credited with building up the bulk of Amazon cloud's now significant federal business, as well as leading the charge for Amazon's bid for the Joint Enterprise Defense Infrastructure (JEDI) cloud contract.

After Carlson was announced as the new president and chief growth officer of $22 billion data company Splunk in April, CEO Doug Merritt told Insider that she would operate as a "mini-CEO within the business," running her own playbook across its sales, marketing, and services organizations, and utilizing her relationships within the public sector.

Now, Carlson seems to have set a trend: Less than two months after Splunk announced her appointment, Splunk revealed another prominent AWS leader is joining the company.

Shawn Bice first joined AWS in 2016, where he was responsible for database products including Aurora, DynamoDB, DocumentDB, and many others. Like Carlson, Bice also worked at Microsoft prior to Amazon, and spent 17 years overseeing Microsoft products including SQL Server and Azure data services.

When he joins Splunk as President of Products and Technology on June 1, Bice will be responsible for the company's technical units, including the CIO, CTO, and CISO functions, and have a focus on cloud technologies. Splunk recently lost its CTO Tim Tully to venture capital firm Menlo Ventures.

"When it comes to data, we have only scratched the surface, and there is a tremendous opportunity for customers to reimagine and accelerate their business, both in the cloud and on-premises edge," Bice said in the company's press release.

According to the company, Splunk has attracted over a dozen new execs over the past year from companies including Salesforce, Google, Okta, Dropbox, and Autodesk to help navigate its next phase of growth.

Read more from the original source:
Meet the Influential Ex-Amazon Cloud Employees Making Waves in Tech - Business Insider


Red Hat brings JBoss platform to Azure, easing the shift of Java apps to the cloud – DataCenterNews Asia

Red Hat has brought its JBoss enterprise application platform to Microsoft Azure, helping to ease the transition to the cloud for traditional Java applications.

The open source company has announced the JBoss Enterprise Application Platform (JBoss EAP) on Microsoft Azure, which enables organisations to benefit from a cloud-based architecture and update their existing Jakarta EE (previously Java EE) applications and build new ones on Azure.

JBoss EAP is currently available as a native offering in Azure that comes fully configured and ready to run, and will be available in the near future as a fully supported runtime in the Azure app service managed by Microsoft.

Java continues to be a popular programming language, with an estimated 8 million developers worldwide and steady growth in the use of Jakarta EE to accelerate application development for the cloud. The Jakarta EE-compliant application server, JBoss EAP, offers Java developers management and automation capabilities designed to improve productivity, and a lightweight architecture for building and deploying modern cloud-native applications.

Red Hat says that by bringing JBoss EAP to Azure, it will ease the shift to the cloud and give organisations more choice and flexibility in how they plan for the future. Customers can bring existing applications to Azure, including JBoss EAP applications running on-premises or other Jakarta EE applications running on different application servers, and choose how they want to manage business critical, Java-based applications in the cloud.

"Red Hat and Microsoft have long been strategic partners. Red Hat's JBoss platform continues to be the cornerstone of Red Hat's commitment to enterprise Java," says Red Hat senior director of product management, Rich Sharples.

"By offering JBoss EAP on Azure, we are combining our expertise and enabling customers to successfully choose how they want to manage applications on the cloud."

According to Red Hat, JBoss EAP on Microsoft Azure allows customers to:

"As two of the biggest names in enterprise software, it just makes sense that we have such a strong relationship with Red Hat," says Microsoft's principal group manager, Martijn Verburg.

"Bringing JBoss EAP to Azure customers means not only faster time to market and remaining competitive, but also yields more options for building, deploying, and managing a security-focused cloud environment that meets business needs today while adapting for future change."

Continued here:
Red Hat brings JBoss platform to Azure, easing the shift of Java apps to the cloud - DataCenterNews Asia


Datto to Present at the Bank of America Global Technology Conference – Business Wire

NORWALK, Conn.--(BUSINESS WIRE)--Datto Holding Corp. (Datto) (NYSE: MSP), the leading global provider of cloud-based software and security solutions purpose-built for delivery by managed service providers (MSPs), today announced that Tim Weller, Chief Executive Officer, and John Abbot, Chief Financial Officer, are scheduled to present virtually at the Bank of America Global Technology Conference on Wednesday, June 9, 2021 at 9:15 a.m. ET. A live webcast of the presentation will be accessible by visiting Datto's investor website at investors.datto.com. An archived version will be available shortly after the completion of the presentation.

About Datto

As the world's leading provider of cloud-based software and security solutions purpose-built for delivery by managed service providers (MSPs), Datto believes there is no limit to what small and medium businesses (SMBs) can achieve with the right technology. Datto's proven Unified Continuity, Networking, and Business Management solutions drive cyber resilience, efficiency, and growth for MSPs. Delivered via an integrated platform, Datto's solutions help its global ecosystem of MSP partners serve over one million businesses around the world. From proactive dynamic detection and prevention to fast, flexible recovery from cyber incidents, Datto's solutions defend against costly downtime and data loss in servers, virtual machines, cloud applications, or anywhere data resides. Since its founding in 2007, Datto has won numerous awards for its product excellence, superior technical support, rapid growth, and for fostering an outstanding workplace. With headquarters in Norwalk, Connecticut, Datto has global offices in Australia, Canada, China, Denmark, Germany, Israel, the Netherlands, Singapore, and the United Kingdom.

MSP-F

Read more:
Datto to Present at the Bank of America Global Technology Conference - Business Wire


Oil and gas firm halves backup licence cost with Hycu move – ComputerWeekly.com

Oil and gas exploration company Summit E&P has cut backup licence costs in half after switching from Veeam to Hycu and migrating from VMware to Nutanix and its native hypervisor, AHV.

Summit, a UK-based subsidiary of the Sumitomo Corporation, has only 17 employees but holds about 250TB of data, which is gathered by geophysical survey boats and exploratory wells.

Projects arrive as large files (300GB to 400GB) grouped into large flat-file datasets (up to 1TB), which are analysed on high-end workstations.

The infrastructure had comprised NetApp storage, VMware virtualisation and Veeam and Symantec (for physical servers) backup software.

An initial move to Nutanix hyper-converged infrastructure came in 2017 with the deployment of a three-node cluster. "Then, in 2020, we decided Nutanix was the way to go," said Summit IT and data manager Richard Inwards.

"NetApp had become expensive and difficult to maintain and we got rid of ESX for [the Nutanix] AHV [hypervisor]. We got rid of Veeam because we got rid of ESX, but also because Hycu for Nutanix could back up physical servers."

He added: "Veeam didn't back up physical servers very well at the time."

Summit holds about 200TB of data on-site with some held off-site and streamed to the cloud. It runs 13 virtual machines (VMs) plus four physical servers.

So, Summit deployed Nutanix, with the AHV hypervisor and Hycu backup, which provides incremental backup.

Inwards said licensing costs for Hycu are about half those for Veeam, but the key benefits are in ease of use.

"It's a much more simple interface," he said. "And we can now use one product for virtual and physical instead of two. Also, when we moved from ESX to Nutanix, Hycu handled the migration. We backed up from ESX and restored to Nutanix."

"The big benefit of Hycu is that it can do what Veeam couldn't do, which is to integrate well with the Nutanix environment."

Summit backs up about 50GB to 60GB per day via Hycu.

Hycu was spun off from the Comtrade Group into its own company in 2018. It offers backup software tailored to Nutanix and VMware virtualisation environments, as well as Google Cloud, Azure and Office 365 cloud workloads. It also offers a product aimed at Kubernetes backup.

See the original post:
Oil and gas firm halves backup licence cost with Hycu move - ComputerWeekly.com


The Increasingly Uneven Race To 3nm/2nm – SemiEngineering

Several chipmakers and fabless design houses are racing against each other to develop processes and chips at the next logic nodes in 3nm and 2nm, but putting these technologies into mass production is proving both expensive and difficult.

It's also beginning to raise questions about just how quickly those new nodes will be needed and why. Migrating to the next nodes does boost performance and reduce power and area (PPA), but it's no longer the only way to achieve those improvements. In fact, shrinking features may be less beneficial for PPA than minimizing the movement of data across a system. Many factors and options need to be considered as devices are designed for specific applications, such as different types of advanced packaging, tighter integration of hardware and software, and a mix of processing elements to handle different data types and functions.

"As more devices become connected and more applications become available, we're seeing exponential growth in data. We've also seen fundamentally different workloads, and can expect to see more changes in workloads as data and different usage models continue to evolve. This data evolution is driving changes to hardware and a different need for compute than what was historically experienced," said Gary Patton, vice president and general manager of design enablement at Intel, during a keynote at SEMI's recent Advanced Semiconductor Manufacturing Conference. "We absolutely need to continue to scale the technology, but that's not going to be enough. We need to address heterogeneous integration at the system level, co-optimization of the design in the process technology, optimization between software and hardware, and importantly, continue to drive AI and novel compute techniques."

So while transistor-level performance continues to be a factor, on the leading edge it's just one of several. But at least for the foreseeable future, it's also a race that the largest chipmakers are unwilling to abandon or concede. Samsung recently disclosed more details about its upcoming 3nm process, a technology based on a next-generation transistor type called a gate-all-around (GAA) FET. This month, IBM developed a 2nm chip, based on a GAA FET. Plus, TSMC is working on 3nm and 2nm, while Intel also is developing advanced processes. All of these companies are developing one type of GAA FET called a nanosheet FET, which provides better performance than today's finFET transistors. But they are harder and more expensive to make.

With 3nm production expected to commence by mid-2022, and 2nm slated for 2023/2024, the industry needs to get ready for these technologies. But the landscape is confusing, and announcements about new nodes and capabilities aren't quite what they seem. For one thing, the industry continues to use the traditional numbering scheme for different nodes, but the nomenclature doesn't really reflect which company is ahead. In addition, chipmakers are moving in different directions at the so-called 3nm node, and not all 3nm technologies are alike.

The benefits of each new node are application-specific. Chip scaling is slowing and price/performance benefits have been shrinking over the past several process nodes, and fewer companies can afford to design and manufacture products based solely on the latest nodes. On the other side of that equation, the cost of developing these processes is skyrocketing, and so is the cost of equipping a leading-edge fab. Today, Samsung and TSMC are the only two vendors capable of manufacturing chips at 7nm and 5nm.

After that, transistor structures begin to change. Samsung and TSMC are manufacturing chips at 7nm and 5nm based on today's finFETs. Samsung will move to nanosheet FETs at 3nm. Intel is also developing GAA technology. TSMC plans to extend finFETs to 3nm, and then will migrate to nanosheet FETs at 2nm around 2024.

IBM also is developing chips using nanosheets. But the company has not manufactured its own chips for several years, and currently outsources its production to Samsung.

Scaling, confusing nodes
For decades, the IC industry has attempted to keep pace with Moore's Law, doubling the transistor density in chips every 18 to 24 months. Acting like an on-off switch in chips, a transistor consists of a source, drain and gate. In operation, electrons flow from the source to the drain and are controlled by the gate. Some chips have billions of transistors in the same device.

Nonetheless, at an 18- to 24-month cadence, chipmakers introduce a new process technology with more transistor density, thereby lowering the cost per transistor. At this cadence, referred to as a node, chipmakers scaled the transistor specs by 0.7X, enabling the industry to deliver a 40% performance boost for the same amount of power and a 50% reduction in area. This formula enables new and faster chips with more functions.
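As a quick sanity check on those figures, a 0.7X shrink applied to both linear dimensions works out to roughly half the area, or about double the transistor density:

```latex
A_{\text{new}} = (0.7L)(0.7W) = 0.49\,LW \approx 0.5\,A_{\text{old}},
\qquad
\frac{\text{density}_{\text{new}}}{\text{density}_{\text{old}}} = \frac{1}{0.49} \approx 2
```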

Each node is given a numerical designation. "Years ago, the node designation was based on a key transistor metric, namely gate length. For example, the 0.5µm technology node produced a transistor with a 0.5µm gate length," explained Nerissa Draeger, director of university engagements at Lam Research.

Over time, gate length scaling slowed, and at some point, it didn't match the corresponding node number. "Over the years, the technology node definition has evolved, and is now considered more of a generational name rather than a measure of any key dimension," Draeger said.

And for some time, the node numbers have become mere marketing names. For example, 5nm is the most advanced process today, but there is no agreed-upon 5nm spec. The same is true for 3nm, 2nm and so on. It's even more confusing when vendors use different definitions for the nodes. Intel is shipping chips based on its 10nm process, which is roughly equivalent to 7nm for TSMC and Samsung.

For years, vendors more or less followed the transistor scaling specs defined by the International Technology Roadmap for Semiconductors (ITRS). In 2015, the ITRS work was halted, leaving the industry to define its own specs. In its place, IEEE implemented the International Roadmap for Devices and Systems (IRDS), which instead focuses on, among other things, continued scaling ("More Moore") and advanced packaging and integration ("More Than Moore").

"What remains the same is our expectation that node scaling will bring better device performance and greater power efficiency and cost less to build," Draeger said.

That hasn't been easy. For years, vendors developed chips using traditional planar transistors, but these structures hit the wall at 20nm a decade ago. Planar transistors still are used in chips at 28nm/22nm and above, but the industry needed a new solution. That's why Intel introduced finFETs at 22nm in 2011. Foundries followed with finFETs at 16nm/14nm. In finFETs, the control of the current is accomplished by implementing a gate on each of the three sides of a fin.

FinFETs enabled the industry to continue with chip scaling, but they are also more complex with smaller features, causing design costs to escalate. The cost to design a mainstream 7nm device is $217 million, compared to $40 million for a 28nm chip, according to Handel Jones, CEO of IBS. In this case, the costs are determined two or more years after a technology reaches production.

At 7nm and below, static leakage has become problematic again, and the power and performance benefits have started to diminish. Performance increases are now somewhere in the 15% to 20% range.

On the manufacturing front, meanwhile, finFETs require more complex processes, new materials and different equipment. This in turn drives up manufacturing costs. "If you compare 45nm to 5nm, which is happening today, we see a 5X increase in wafer cost. That's due to the number of processing steps required," said Ben Rathsack, vice president and deputy general manager at TEL America.

Over time, fewer companies had the resources or saw the value in producing leading-edge chips. Today, GlobalFoundries, Samsung, SMIC, TSMC, UMC and Intel are manufacturing chips at 16nm/14nm. (Intel calls this 22nm). But only Samsung and TSMC are capable of manufacturing chips at 7nm and 5nm. Intel is still working on 7nm and beyond, and SMIC is working on 7nm.

Moving to nanosheets
Scaling becomes even harder at 3nm and below. Developing low-power chips that are reliable and meet spec presents some challenges. In addition, the cost to develop a mainstream 3nm chip design is a staggering $590 million, compared to $416 million for a 5nm device, according to IBS.

Then, on the manufacturing front, foundry customers can go down two different paths at 3nm, presenting them with difficult choices and various tradeoffs.

TSMC plans to extend finFETs to 3nm by shrinking the dimensions of 5nm finFETs, making the transition as seamless as possible. "TSMC's volume ramp of 3nm finFETs is planned for Apple in Q3 2022, with high-performance computing planned for 2023," IBS' Jones said.

It's a short-term strategy, though. FinFETs are approaching their practical limit when the fin width reaches 5nm, which equates to the 3nm node. The 3nm node equates to a 16nm to 18nm gate length, a 45nm gate pitch, and a 30nm metal pitch, according to the new IRDS document. In comparison, the 5nm node equates to an 18nm to 20nm gate length, a 48nm gate pitch and a 32nm metal pitch, according to the document.

Once finFETs hit the wall, chipmakers will migrate to nanosheet FETs. Samsung, for one, will move directly to nanosheet FETs at 3nm. Production is slated for the fourth quarter of 2022, according to IBS.

TSMC plans to ship nanosheet FETs at 2nm in 2024, according to IBS. Intel also is developing GAA. Several fabless design houses are working on devices at 3nm and 2nm, and companies such as Apple plan to use that technology for next-generation devices.

A nanosheet FET is an evolutionary step from a finFET. In a nanosheet, a fin from a finFET is placed on its side and is then divided into separate horizontal pieces. Each piece, or sheet, forms a channel. The first nanosheet FETs will likely have three or so sheets. A gate wraps around all of the sheets, or channels.

Nanosheets implement a gate on four sides of the structure, enabling more control of the current than finFETs. "In addition to having better gate control versus a finFET, GAA-stacked nanosheet FETs offer higher DC performance thanks to higher effective channel width," said Sylvain Barraud, a senior integration engineer at Leti.

Nanosheet FETs have other advantages over finFETs. In finFETs, the width of the device is quantized, which impacts the flexibility of designs. In nanosheets, IC vendors have the ability to vary the widths of the sheets in the transistor. For example, a nanosheet with a wider sheet provides more drive current and performance. A narrow nanosheet has less drive current, but takes up a smaller area.

"The wide range of variable nanosheet widths provide more design flexibility, which is not possible for finFETs due to a discrete number of fins. Finally, GAA technology also proposes multiple threshold voltage flavors thanks to different workfunction metals," Barraud said.

The first 3nm devices are starting to trickle out in the form of early test chips. At a recent event, Samsung disclosed the development of a 6T SRAM based on a 3nm nanosheet technology. The device addresses a major issue. SRAM scaling shrinks the device, but it also increases bitline (BL) resistance. In response, Samsung incorporated adaptive dual-BL and cell-power assist circuits into the SRAM.

"Gate-all-around SRAM design techniques are proposed, which improve SRAM margins more freely, in addition to power, performance, and area," said Taejoong Song, a researcher from Samsung, in a paper. "Moreover, SRAM-assist schemes are proposed to overcome metal resistance, which maximizes the benefit of GAA devices."

IBM, meanwhile, recently demonstrated a 2nm test chip. Based on nanosheet FETs, the device can incorporate up to 50 billion transistors. Each transistor consists of three nanosheets, each of which has a width of 14nm and a height of 5nm. All told, the transistor has a 44nm contacted poly pitch with a 12nm gate length.
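To put a rough number on the "higher effective channel width" benefit Barraud describes, the effective width of a stacked nanosheet device is often approximated as the summed perimeter of its sheets, since the gate wraps all the way around each one. Treating each sheet's contribution as twice its width plus height, a first-order approximation applied to the IBM dimensions above, gives:

```latex
W_{\text{eff}} \approx N_{\text{sheets}} \cdot 2\,(W_{\text{sheet}} + H_{\text{sheet}})
= 3 \cdot 2\,(14\,\text{nm} + 5\,\text{nm}) = 114\,\text{nm}
```

This is only a back-of-the-envelope figure; real devices are characterized with far more detailed electrostatic models.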

Still in R&D, IBM is targeting the chip for 2024. But at any node, nanosheet devices face several challenges before they move into production. There no limit of the number of challenges, said Mukesh Khare, vice president of hybrid cloud research at IBM. I would say the biggest challenges include leakage. How do you reduce power? How do you improve performance in that small dimension when your sheet thickness is 5nm and in the channel length is 12nm? How do you get reasonable RC benefit in 2nm? At the end, the chip has to be superior compared to the prior node.

Making a nanosheet FET is difficult. "In gate-all-around nanosheets/nanowires, we have to do processing underneath the structure where we can't see, and where it's much more challenging to measure. And that's going to be a much more difficult transition," said David Fried, vice president of computational products at Lam Research.

In a process flow, a nanosheet FET starts with the formation of a super-lattice structure on a substrate. An epitaxial tool deposits alternating layers of silicon-germanium (SiGe) and silicon on the substrate.

That requires extreme process control. "In-line monitoring of the thickness and composition of each Si/SiGe pair is essential," said Lior Levin, director of product marketing at Bruker. "These parameters are key for the device performance and yield."

The next step is to develop tiny vertical fins in the super-lattice structure. Then, inner spacers are formed. Then, the source/drain are formed, followed by the channel release process. The gate is developed, resulting in a nanosheet FET.

More than transistors
Still, transistor scaling is only part of the equation. And while the scaling race continues, competition is becoming equally fierce on the heterogeneous integration side. Instead of just one monolithic chip developed at a single process node, many of the most advanced architectures incorporate multiple processing elements, including some highly specialized ones, and different types of memories.

"Distributed computing is driving another trend: a growing range of architectures that are domain specific," Intel's Patton said. "Another trend we are seeing is domain-specific architectures that are disaggregated from the whole, mainly driven by AI and tailored for efficiency gains."

Advanced packaging, which integrates complex dies in a package, is playing a role. "Packaging innovations are now starting to play more of a role in achieving improvements in product performance," Patton said.

"There's definitely more factors involved in performance, power and area from one node to another," said Peter Greenhalgh, vice president of technology and fellow at Arm. "If the world was relying just on the fab for all of its gains, you'd be pretty disappointed. Arm provides one piece of the LEGO design. That LEGO is added to other LEGO pieces to build a really interesting chip. There are many expensive ways to do this, but there also will be some level of commoditization and harmonization."

Concurrent with the shift toward heterogeneous architectures is the build-out of the edge, which spans everything from IoT devices to various levels of server infrastructure, as well as moves by systems companies such as Google, Alibaba, AWS and Apple to design their own hardware to optimize their particular data flow inside of enormous data centers. This has set off a frenzy of design activity that incorporates both custom and non-custom hardware, non-standard packages, and a variety of approaches such as in-memory and near-memory processing that never gained much traction in the past. It also has put a focus on how to partition processing, which components and processes need to be prioritized in a microarchitecture, and what is the optimum process node for various components based upon a particular heterogeneous design.

"A great example of that is video acceleration," said Greenhalgh. "If you're a cloud server company and you're doing huge amounts of video decode and encode, you don't want to do that on a CPU. You want to put a video accelerator in there. This is a paradigm shift."

So there are more and different kinds of processor elements. There also are more extensions being developed for existing processor cores.

"We've always had the ability to extend the architecture (for ARC processors) by adding custom instructions or bolting on custom accelerators," said Rich Collins, senior segment marketing manager at Synopsys. "What's different now is that more and more customers are taking advantage of that. AI is a big buzzword and it means a lot of different things, but behind that term we're seeing a lot of changes. More and more companies are adding a neural network engine onto a standard processor."

These changes are more than just technological. They also require changes inside chip companies, from the makeup of various engineering teams to the structure of the companies themselves.

"It used to be that you would invent a bunch of products, put them in a list in a bunch of data books, and people would try to find them," said Shawn Slusser, senior vice president of sales, marketing and distribution at Infineon. "That is not going to work anymore because of the complexity and longevity of devices. We're now looking at a model that is more like a superstore for semiconductors. If you want to link the real world to the digital world, everything is there in one place, including the products, the people and the expertise."

Bigger companies have been developing this expertise in-house. This is evident in Apple's M1 chip. The chip was developed using TSMC's 5nm process. It incorporates Arm V8 cores, GPUs, custom microarchitectures, a neural engine, and an image signal processor, all of which is bundled together in a system-in-package. While that design may not perform as well as other chips using standard industry benchmarks, the performance and power improvements running Apple applications are readily apparent.

As of today, some 200 companies either have developed, or are currently developing accelerator chips, according to industry estimates. How many of those will survive is unknown, but the move toward disaggregation is inevitable. On the edge, there is simply too much data being generated by cars, security systems, robots, AR/VR, and even smart phones, to send everything to the cloud for processing. It takes too long and requires too much power, memory and bandwidth. Much of that data needs to be pre-processed, and the more the hardware is optimized for handling that data, the longer the battery life or lower the power costs.

This is why VC funding has been pouring money into hardware startups for the past several years. Over the next 12 to 24 months, the field is expected to narrow significantly.

"On the inferencing side, the window will start to close as companies come to market and engage with customers," said Geoff Tate, CEO of Flex Logix. "Over the next 12 months, investors will start to get hard data to see which architectures actually win. For the last few years, it was a matter of who had the best slide deck. Customers view acceleration as a necessary evil to run a neural network model. For my model, how fast will it run, how much power will it take, and how much will it cost? They're going to pick the horse that's the best in their race or for their conditions."

Designs are changing on the cloud side, as well. In the cloud, faster processing and the ability to determine exactly where that processing happens can have a big impact on energy efficiency, the amount of real estate required, and the capacity of a data center. For example, rather than just connecting DRAM to a chip, that DRAM can be pooled among many servers, allowing workloads to be spread across more machines. That provides both more granularity for load balancing, as well as a way of spreading out heat, which in turn reduces the need for cooling and helps prolong the life of the servers.

"You've got tens of thousands of servers in some of these data centers, and many tens of data centers worldwide," said Steven Woo, fellow and distinguished inventor at Rambus. "Now you have to figure out how to lash them together. There are some new technologies that will be coming out. One is DDR5, which is more power efficient. And a little further out is Compute Express Link (CXL). For a long time, the amount of memory that you could put into a server has been limited. You can only get so much in there. But with the ability to do more work in the cloud, and to rent virtual machines, there's a much larger range of workloads. CXL gives you this ability to have a base configuration in your system, but also to expand the amount of memory bandwidth and capacity that's available to you. So now you can suddenly support a much larger range of workloads than before."

Conclusion
The race is still on to reach the next few process nodes. The question that remains is which companies will be willing to spend the time and money needed to develop chips at those nodes when they may achieve sufficient gains through other means.

The economics and dynamics of different markets are forcing chipmakers to assess how to best tackle market opportunities with a maximum return on investment, which in some cases may extend well beyond the cost of developing an advanced chip. There are many options for achieving different goals, and often more than one way to get there.


Go here to see the original:
The Increasingly Uneven Race To 3nm/2nm - SemiEngineering


Cybersecurity and Smart Manufacturing – Automation.com

Summary

Data from automation systems used to stay put. It was produced by sensors, PLCs, and recorders; stored on local OPC servers and databases; and accessed by a few skilled operators and engineers. Although highly secure, data access was limited.

Smart Manufacturing and IoT are driving a variety of positive business outcomes, and data must be shared with new systems, new networks, and a variety of tools for diverse users and roles. Determining the best security strategy for Smart Manufacturing efforts is a struggle, but this article offers a brief review of these key security issues:

New applications and data destinations

New user roles and expectations

Evolving threat landscape

Data collection and analysis represent a significant competitive advantage. Data is the new oil; access to data and expert analysis can drive significant cost savings and revenue increases. IoT buzzwords and ad-hoc technologies have given way to real solutions that drive measurable business outcomes. The challenge is to create and execute a digital transformation strategy to become a secure Smart Manufacturing environment.

Operations networks were walled gardens, managed by groups unrelated to those managing business infrastructure. Over the last twenty years, business intelligence, network analysis, data gathering, and real-time analytics have become commonplace. Data sharing and analysis from manufacturing systems can no longer stop at purpose-built software solutions, supervisory control applications, stand-alone statistical process control, process historians, and relational databases.

New applications enable digital transformation. They do not usually need to occupy layers 0/1/2/3 of a Purdue Model manufacturing network (nor should they!), but it's imperative to obtain data from these layers. How do we securely enable access between business systems and process control networks? How do we secure data in the cloud?

Purdue and ISA define network layers, and many standards, protocols, and applications can move data securely. Especially for critical operations infrastructure, air gaps can be maintained while providing access to data. Custom hardware-based data diodes offer a physical air gap (true network isolation) through unidirectional physical mediums where not a single electron can pass back to the control network. These use custom protocols that flow over the unidirectional cable from diode input hardware to diode output hardware. In the output hardware, data server interfaces facilitate passage of data to upper-level applications like Kepware Server or the destination application. These interfaces might include HTTP/HTTPS, MQTT, OPC DA, or OPC UA.

While a true data diode air gap is one of the best ways to prevent unintended access, it might be enough to implement data diodes using standard Ethernet hardware. Unidirectional protocols, such as Ethernet Global Data over UDP, can be sent over bidirectional mediums like CAT5 or CAT6, with the Ethernet infrastructure and networking rules, operating systems, and application stacks configured to prevent bidirectionality; inbound access to the control network is not permitted.

Transport Layer Security (TLS) and Secure Sockets Layer (SSL) protocols have become common for bidirectional protocols from demilitarized zones (DMZs) or higher-level network segments interacting with systems on control networks. TLS and SSL offer unambiguous identification of requester and requestee, message authenticity, and message encryption. The ease of integration of plug-and-play protocols like OPC UA, which need only a single inbound open port in the control network firewall, can outweigh the concerns about network access. Note: Protocols are only as secure as the certificate maintenance and product update strategy behind them. To stay secure, you must embrace the administrative overhead of reissuing certificates frequently and maintaining products with vendor-released updates. Despite the overhead and maintenance of the applications, protocols, and security practices around firewalls and network segmentation, this approach is relatively simple, low-cost, and secure.

These solutions can create a foundation for secure communication from business management to the plant floor: for operator feedback from manufacturing, or real-time changes to a PLC for process efficiency.

Once data is securely accessible, it must reach the right destination. If access is from a DMZ, access from applications within that network can typically be realized. Moving data between DMZs is typically accomplished through a bidirectional TLS-based protocol (such as OPC UA, HTTPS, MQTT, or a proprietary offering from a software vendor). If moving data to a public cloud, assuming the DMZ has an Internet-facing connection, MQTT or HTTPS can secure travel across the public Internet. Cloud vendors may offer software for the network segment with Internet access, gathering data from local systems using OPC UA, MQTT, HTTP, database, or file access and transferring the data to the cloud using HTTP, MQTT, AMQP, or custom solutions. VPNs may also be employed to increase security between data source and destination.

New users and roles demand data access and the secure infrastructure to provide it. Do these new users want fast refresh rates for real-time analytics, or are they using historical data for trend analysis? If real time isn't necessary, a SQL replication from a relational database on the process control network to a relational database on the DMZ, or secure OPC UA between the control network and DMZ to populate a database on the DMZ, may be adequate. Direct access to a control network's protocol stream isn't often necessary. Understand what these new users and roles need before designing to accommodate them.

Threats to industrial control systems occur with increasing frequency. It's almost enough for this author to only recommend air gaps with hardware data diodes for any digital transformation effort! However, it's unrealistic for all organizations, unnecessary for every environment, and still not a flawless guarantee of security!
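As one concrete illustration of the TLS-secured, DMZ-to-cloud publishing path described above, the sketch below sends a single plant-floor reading to a broker over MQTT with mutual TLS. It is a minimal example under stated assumptions rather than a reference design: the broker address, topic, certificate paths, and payload are placeholders, and it targets the Eclipse Paho Python client's 1.x API (pip install paho-mqtt).

```python
import json
import time

import paho.mqtt.client as mqtt  # Eclipse Paho MQTT client (1.x API assumed)

BROKER = "broker.example.com"        # placeholder cloud or DMZ broker
PORT = 8883                          # conventional MQTT-over-TLS port
TOPIC = "plant1/line3/temperature"   # placeholder topic

client = mqtt.Client(client_id="dmz-gateway-01")

# Mutual TLS: CA bundle plus a client certificate issued to this gateway.
# Reissuing these certificates regularly is part of the maintenance
# overhead noted above.
client.tls_set(ca_certs="ca.pem", certfile="gateway.crt", keyfile="gateway.key")

client.connect(BROKER, PORT, keepalive=60)
client.loop_start()

# In practice this value would come from an OPC UA read or another
# data-server interface on the DMZ side; here it is hard-coded.
payload = json.dumps({"value": 72.4, "unit": "degC", "ts": time.time()})
info = client.publish(TOPIC, payload, qos=1)
info.wait_for_publish()

client.loop_stop()
client.disconnect()
```

The same pattern applies to HTTPS or AMQP transfers; what matters is that the gateway in the DMZ initiates the outbound connection, so no inbound path to the control network is required.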

To quote Robert Rash, Principal Architect at solutions provider Microland: "The greatest myth is the idea of air gapping. The idea that a separate network, VLAN, or segment that isn't connected to the Internet stays that way and keeps them isolated and protected is almost always false. There's always a technician, engineering station, or remote connection that provides connectivity to these 'air gapped' networks, and typically it is done without any guidance or control and without the SecOps knowledge."

With attack vectors even in isolated environments, it is more important than ever to secure and manage every aspect of networks. This includes training and behavior modifications for users, the use of only company-approved software and hardware, and multiple layers of authentication.

This article covered problems and solutions related to security in Smart Manufacturing initiatives. Data is critical to the future of business, and proper use of technology and well-developed strategies can ensure a high degree of security while businesses transform.

Sam leads and manages PTC's Kepware Applications Engineers, a global team of industrial connectivity experts who help our users create connectivity solutions for industrial automation and enterprise digital transformation. He has over fifteen years' experience in IT (information technology), OT (industrial operations technology), and business development. Sam has proven expertise in systems design, industrial networking and systems integration, enterprise and technical account management, technical training and education programs, and business operations.


Continue reading here:
Cybersecurity and Smart Manufacturing - Automation.com


Better cybersecurity means finding the unknown unknowns – MIT Technology Review

During the past few months, Microsoft Exchange servers have been like chum in a shark-feeding frenzy. Threat actors have attacked critical zero-day flaws in the email software: an unrelenting cyber campaign that the US government has described as "widespread domestic and international exploitation" that could affect hundreds of thousands of people worldwide. Gaining visibility into an issue like this requires a full understanding of all assets connected to a company's network. This type of continuous tracking of inventory doesn't scale with how humans work, but machines can handle it easily.

For business executives with multiple, post-pandemic priorities, the time is now to start prioritizing security. "It's pretty much impossible these days to run almost any size company where if your IT goes down, your company is still able to run," observes Matt Kraning, chief technology officer and co-founder of Cortex Xpanse, an attack surface management software vendor recently acquired by Palo Alto Networks.

You might ask why companies don't simply patch their systems and make these problems disappear. If only it were that simple. Unless businesses have implemented a way to find and keep track of their assets, that supposedly simple question is a head-scratcher.

But businesses have a tough time answering what seems like a straightforward question: namely, how many routers, servers, or assets do they have? If cybersecurity executives don't know the answer, it's impossible to then convey an accurate level of vulnerability to the board of directors. And if the board doesn't understand the risk, and is blindsided by something even worse than the Exchange Server and 2020 SolarWinds attacks, well, the story almost writes itself.

That's why Kraning thinks it's so important to create a minimum set of standards. And, he says, "Boards and senior executives need to be minimally conversant in some ways about cybersecurity risk and analysis of those metrics." Because without that level of understanding, boards aren't asking the right questions, and cybersecurity executives aren't having the right conversations.

Kraning believes attack surface management is a better way to secure companies, with a continuous process of asset discovery, including the discovery of all assets exposed to the public internet: what he calls "unknown unknowns." New assets can appear from anywhere at any time. "This is actually a solvable problem largely with a lot of technology that's being developed," Kraning says. "Once you know a problem exists, actually fixing it is actually rather straightforward." And that's better for not just companies, but for the entire corporate ecosystem.
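To make the idea of continuous external discovery slightly more concrete, here is a toy sketch of one signal such a process automates at internet scale: connect to candidate hosts, pull their TLS certificates, and record who the certificates say they belong to. It is illustrative only; the hostnames are placeholders, and a real attack surface management platform correlates far more signals than certificate subjects.

```python
import socket
import ssl

# Placeholder candidate hosts -- in practice these come from continuously
# scanning the internet, not from a hand-written list.
CANDIDATES = ["www.example.com", "mail.example.org"]

def cert_common_name(host: str, port: int = 443, timeout: float = 5.0) -> str:
    """Fetch the host's TLS certificate and return its subject common name."""
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=timeout) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            cert = tls.getpeercert()
    # getpeercert() returns the subject as a tuple of RDN tuples.
    subject = dict(item for rdn in cert["subject"] for item in rdn)
    return subject.get("commonName", "<no common name>")

if __name__ == "__main__":
    for host in CANDIDATES:
        try:
            print(f"{host}: certificate CN = {cert_common_name(host)}")
        except OSError as exc:
            print(f"{host}: unreachable or TLS error ({exc})")
```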

"A leadership agenda to take on tomorrow," Global CEO Survey, PwC

Laurel Ruma: From MIT Technology Review, I'm Laurel Ruma, and this is Business Lab, the show that helps business leaders make sense of new technologies coming out of the lab and into the marketplace.

Our topic today is attack surface management. Where will your next cybersecurity breach come from? Enterprises have more and more things attached to their internet, including ever-expanding networks and aging infrastructure. And as attackers become more creative, executives will have to as well.

Two words for you: unknown unknowns.

My guest is Matt Kraning, who is the chief technology officer and co-founder of Expanse, which was recently acquired by Palo Alto Networks. Matt is an expert in large-scale optimization, distributed sensing, and machine learning algorithms run on massively parallel systems. Prior to co-founding Expanse, Matt worked for DARPA, including a deployment to Afghanistan. Matt holds PhD and master's degrees from Stanford University. This episode of Business Lab is produced in association with Palo Alto Networks. Welcome, Matt.

Matt Kraning: Thank you so much. Very happy to be here.

Laurel: From the very beginning, youve been an expert in large-scale distributed sensing and machine learning algorithms run on massively parallel systems. How did that expertise lead you to co-found a company in the field of attack surface management?

Matt: Well, I'll say a few things. Attack surface management is what we wound up calling it, but it was actually a very long journey to that and we didn't really set out knowing that that's exactly what it would be called or what precisely we would be doing. So there's not even a Gartner category, which is a certain way of validating the existence for a market segment. That is actually still coming out. So the field of attack surface management, we actually invented ourselves. And a lot of invention means that there's a lot of discovery going into that.

Unlike a lot of enterprise security and IT companies where, in a lot of cases, most companies founded are usually going into an existing marketthey're doing usually an incremental or evolutionary advancement on top of what has already been inventedwe actually took another approach and said, "We're really, with fresh eyes, asking, What is not being served in the market today?" And came up with the idea of, "Is the internet, with all of its promise, actually going to be a strategic liability for organizations, no longer just a strategic asset?"

We developed a lot of techniques and technologies to basically look at all of the internet as a dataset: to gather, continuously, information about the internet, which is really where our backgrounds came in both from academia and then also from our work in the defense and intelligence communities, in places like DARPA, and at various places in the US intelligence agencies. And we said, actually, there seems to be a whole bunch of stuff broken on the internet, and surprisingly, a lot of it is actually associated with very large, very important companies. It was scratching on that question that actually led us to both founding Expanse and then also creating what would be the first and is the leading product in what is now known as attack surface management, which is really understanding all of the assets that you have, understanding the risks that they might pose and then also fixing problems.

But when we founded Expanse back in 2012, we didn't know that it was going to be attack surface management. We didn't even have the name attack surface management. Instead it was very problem-focused on, "We're seeing a lot of weird and dangerous things on the internet and a lot of security vulnerabilities. Let's double-click on that a lot and actually see if there's a way to build a business around that."

Laurel: And how much the internet has changed in these nine short years, right? When you talk about that data set and in trying to find information of where the biggest security risks are, how hard was it to find? Did you look around and see, "Oh, look, there are entire datasets, you could track back easily to these companies. They're leaking." Or, "Things aren't secure."

Matt: I love the phrase, "Everything is obvious once you know the answer." I think initially one of the main challenges is that in order to even show how large this problem is, you actually need to gather the data. And gathering the data is not easy, especially on a continuous or regular basis, you actually have to have a lot of systems engineering background, a lot of distributed systems background to actually gather data on everything. I think what made our approach unique is that we actually said, "What if we gather data on every single system on the internet?" Which is actually enabled by a lot of both cost advantages enabled by things like cloud computing, but also software advantages both in open source and things that we would write ourselves. And then, rather than starting from things that you know about a company and trying to assess their risks, we said, "Why don't we start with everything on the internet and then try to whittle it down to what is interesting?"

And a lot of very good insights came out of that where again, almost by accident, we started discovering that we would actually find many, many more security problems than organizations actually knew about themselves. When I'm talking to organizations, I'm not talking to small businesses. I'm talking military services. I am talking Fortune 500 companies, Fortune 100 companies, Fortune 10 companies. Even the largest, most complex, but also the best-financed, most elite customers had security problems. And what really our discovery and our journey in creating the category, in creating attack surface management as an idea was that we find all of these security vulnerabilities and all of these assets in far-flung places anywhere on the internet, and they will occur for a multitude of reasons.

But it was actually interesting because while the security challenges and security risks were very real, the real symptoms that we found, that we discovered, were actually that organizations did not have an effective means to track all of the assets that they had online and to simultaneously assess the security posture of those assets and to simultaneously fix and remediate and mitigate the risks those posed to the organization.

And I think that was one of the very interesting things was that looking back, we can now say, "Obviously, you want to do all of these activities." But because we were actually doing something new that had never been done before, it was a new category, we had to discover all of that starting from the point of really, "There seems to be a lot of stuff broken on the internet. We don't exactly know why, but let's go investigate."

Laurel: That's a good way of thinking of it, starting with a different place and then working your way backwards. So Matt, according to a recent PwC survey of more than 5,000 CEOs around the world, 47% are extremely concerned about cybersecurity. Now, 47% doesn't sound like a large number to me, shouldn't it be closer to 100%?

Matt: I would say that every CEO I've talked to is concerned about it on some level. And I think a lot depends on where they are. Overall, what we've noticed is a very large uptick, especially in the last five years, of the attentiveness of the CEOs and boards of directors to cybersecurity issues. Where I think we've seen a lag, though I think there are a few exceptions in this area, is that a lot of both tools and presentations that go, especially for executive audiences, for cybersecurity risks do not effectively convey everything that those people need to make effective decisions. And I think this is challenging for a variety of reasons, especially that a lot of CEOs and boards do not necessarily have the full technical background in order to do so. But I think it's also been a failure to date in industry to be able to provide those tools. And I think we're going to see more and more changes there.

I equate it to really the state of finance before Sarbanes-Oxley that basically started to require CEOs to get training, and boards as well, to start to understand certain financial metrics, to actually have certain controls in place. I think at the high level, we are going to have to see something like that in the coming years be implemented in some way to say that there are a minimum set of standards and that boards and senior executives need to be minimally conversant in some ways about cybersecurity risk and analysis of those metrics. Right now, I've seen a lot of people say, "I am concerned about this, but then I also don't really know where to go next" or, "I'm conversant. We got a report. We hired some firm. They had this presentation that had a whole bunch of PowerPoint slides with a lot of charts that would have Christmas tree lights that made my brain melt. And I could not really understand the concepts."

I think people get it, but we're still in the early days of, How do you have effective controls over this? And then how do you actually have programs that are robust around it? Again, we need to move in that direction because more and more boards need to see this as a foundational aspect of their company, especially as pretty much all companies today, I don't care what industry you're in, what size, your company actually runs on IT. It's pretty much impossible these days to run almost any size company where if your IT goes down, your company is still able to run. And as a result of the understanding of cybersecurity at those levels, with attack surface being now a part of that, is very important for organizations to be able to understand, because otherwise you will put your organization at a very large amount of risk by not being able to properly assess things like that.

Laurel: Yeah. And that gets back to the old adage, every company is a technology company. But maybe this is a more specific example of how it is. Could you briefly describe what attack surface management is, maybe perhaps for that executive audience?

Matt: The way that we describe attack surface management is it's effectively a three-step process where all steps are done continuously in the form of cycle, but it is a process and procedure by which you, or really a vendor, in this case Expanse or Palo Alto Networks, continuously discover all assets that an organization has. In our case, from external attack surface, all assets that you have on the public internet. And that is a continuous process because at any given time, and I can go into this later, but at any given time, new assets could appear from anywhere on the internet. So you need to have a continuous discovery process that says, "At any given time, I might not know everything about my assets so I should have mechanisms to gather information about anywhere that they could be and try to associate them to my organization."

At the same time as soon as an asset is discovered, you have to have means to evaluate it across a variety of different characteristics. In many cases, if I've discovered a new asset, is this asset actually truly new? And if it is not, then matching, normalizing, deduplicating that with other things. If it is a new asset, then in most cases, it's actually going to be unmanaged. So how do I actually start a slew of activities to say, "This is an asset that exists with mine, but it usually exists outside of an intended set of security controls. So how do I start a process to both assess what controls need to be put in place and then bring it under management." And the third part of evaluation is also understanding what is the risk that this poses immediately to my organization to help me prioritize activities.

The final step is what we call mitigation. Once you've evaluated everything that you've discovered, what do you actually do about it? What actions do you take, and how do you do so in highly automated and effective ways? For us, mitigation involves two primary steps beyond prioritization. One is bringing systems under management. For most systems associated with our large customers, that actually means taking them off the internet directly, so putting them behind a VPN or another corporate device, or making sure that they are known and up to date. In a lot of cases, the real symptom of the security problems we find is that an asset was simply unmanaged for a very long time and contains security vulnerabilities that were discovered later, because security patches exist for known security issues but had not been applied.
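As a rough illustration of that discover-evaluate-mitigate cycle, here is a minimal Python sketch. It is not Expanse's implementation; every function, field, and threshold below is a hypothetical placeholder meant only to show how the three steps chain together continuously.

```python
import time
from dataclasses import dataclass, field

@dataclass
class Asset:
    hostname: str
    ip: str
    managed: bool = False          # is it under an intended set of controls?
    risk_score: float = 0.0
    tags: set = field(default_factory=set)

# Known inventory keyed by IP; a real platform would persist and enrich this.
inventory: dict[str, Asset] = {}

def discover() -> list[Asset]:
    """Step 1 (discover): continuously gather candidate assets from external
    sources such as DNS records, certificate logs, and internet-wide scan data.
    Left empty here; plug in whatever collection pipeline you use."""
    return []

def evaluate(candidates: list[Asset]) -> list[Asset]:
    """Step 2 (evaluate): deduplicate against the known inventory, decide whether
    each candidate really belongs to the organization, and score its risk."""
    newly_found = []
    for asset in candidates:
        if asset.ip in inventory:
            continue                                        # already known; merge/normalize instead
        asset.risk_score = 0.2 if asset.managed else 1.0    # crude placeholder heuristic
        inventory[asset.ip] = asset
        newly_found.append(asset)
    return sorted(newly_found, key=lambda a: a.risk_score, reverse=True)

def mitigate(prioritized: list[Asset]) -> None:
    """Step 3 (mitigate): bring unmanaged assets under management -- open a
    ticket, move the service behind a VPN, or queue it for patching."""
    for asset in prioritized:
        if not asset.managed:
            print(f"[ticket] bring {asset.hostname} ({asset.ip}) under management")
            asset.managed = True

if __name__ == "__main__":
    while True:                    # the cycle runs continuously, not as a one-off audit
        mitigate(evaluate(discover()))
        time.sleep(3600)
```

The point is the shape of the loop: discovery is never finished, every new finding is evaluated against what is already known, and mitigation is driven off the prioritized output rather than ad hoc requests.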

In certain cases, such as zero-day attacks, it's actually most important just to know where all the assets are so you can patch them as soon as possible. But for the larger majority of assets that we discover for our customers as we help manage their attack surface, the real problem is that the assets are just not known. And for executives, the real key is that the existing processes and tools a lot of companies use can be very good at certain aspects of security, but they assume that networks are effectively much more static than they really are.

Laurel: So what are the ramifications of an enterprise not knowing their actual attack surface?

Matt: The largest, most obvious one is an increased risk of breach. There was an adage throughout a lot of the 2000s, helped along in no small part by vendors, that everything started from email phishing. And there are very, very large email security vendors that still pump this message that every single security incident is effectively a phishing email, that humans are the weakest link when they're clicking on things, and that therefore you should buy more email security.

I don't think that's wrong; phishing is a real problem, and you can buy tools for it. But it's also much easier to mitigate, especially now with a lot of good tools, because you actually have full visibility over all emails being sent to employees; they have to go through a central mail server. It's really just a question of being able to detect bad things, not of discovering that there were, say, emails being sent that you didn't have visibility into.

In contrast, what we've seen, especially over the last decade and really even the last five years, is that some of the absolute worst breaches, the ones that cause hundreds of millions to billions of dollars in damage, are not coming from phishing. They are coming from usually unknown and unmonitored assets that, in many cases, were actually on the public internet. Some of the largest examples of this are things like the WannaCry attack, which caused an estimated $10 billion or more in damage worldwide, shut down entire companies, and put most of the health-care system of the United Kingdom back on pen and paper for actual days.

And the real ramifications are that you have all these extra avenues in, because there are so many more assets online that are not being tracked by organizations, and that is how attackers are getting in; it turns out there are very efficient, automated ways for attackers to understand, probe for, and exploit these attack surfaces. The ramifications are quite severe. You see most of the health care of a first-world country reduced to pen and paper for days. Very, very serious, because it's not just hacking someone's email, it's hacking the critical infrastructure of the network itself.

Laurel: Speaking of critical infrastructure, another recent attack is the water treatment plant in Florida, where an attacker was able to remotely change the chemical makeup of the water to add lye to it, which could have poisoned an entire community. So then, infrastructure is an enormous issue for organizations like water treatment plants or oil and gas companies, etc.?

Matt: Absolutely. In that case, to the best of my understanding, the attack vector was a remote access server that someone at that plant left open on the internet, which allowed someone to get in. A lot of what our technology surfaces is exactly these kinds of ways in: tools of IT convenience that can be subverted by attackers, because those tools are not hardened to the same degree as things that are meant to be on the internet, and they get left exposed as a matter of course. We have this line that most of us view the internet as what we experience through our web browsers or on our phones. It's this really nice, polished consumer experience, and all of the webpages we visit look very pleasing.

And there's a good analogy to the physical world. I guess soon after we're all vaccinated against covid-19, we'll be back shopping outside. You might go to a Starbucks, and the store is really nice, you have this great experience, you get your latte, you go out. But if you look beneath all of the glitz on the street, you actually have much older infrastructure. You have things like old sewer pipes and other things that are greasy and cracking. And that's the infrastructure that supports the more beautiful world on top.

A lot of what we see as part of attack surface is the IT analogue of that. Most people view the internet as just what's in their web browser or on their phone, these nice consumer websites, but there's an entire backend IT infrastructure that supports that, and it's somewhat creaky and not always well configured. Without something like ASM, you don't actually know the state of your network, because it's so large, distributed, and complex. And as in the case in Florida, which by the way was a smaller organization, it goes to the heart of, how do you know that something like that is not going on? Under any IT security policy, having a remote access service on the internet should not be allowed. But it's very hard, even for smaller organizations, to get continuous visibility with legacy tools into what they actually look like from the outside, what they look like to an attacker.

Laurel: And that's a good example of an attack that's not a phishing attack; it has nothing to do with email. While we're on the discussion of attacks, most memorably this year, SolarWinds and Exchange: how would implementing ASM have changed those outcomes for organizations? Or what about those lucky organizations that actually understood their attack surface management options and were able to find and thwart the attack?

Matt: I'll speak to both, because a number of our customers had both of those kinds of systems and we helped them respond. First, the Microsoft Exchange hacks. For your listeners, a bit of background: a set of zero-days was announced for several versions of the Microsoft Exchange email server in February and March of this year. Very, very dangerous, because these are, in effect, the mail servers of an organization, and if you followed the exploit chain, it basically allowed you to send a message to a mail server that granted you effectively unfettered administrative access to the entire mail server. There were actually hundreds of thousands of these that we detected online. And if you think about it, an attacker being able to download all or most of a corporate mail server, with all of the sensitive information stored there, is a very serious attack.

So what we noticed were actually two things. Large organizations were very aware of this and were patching very, very rapidly. But there were a number of customers we were able to help that are so large they don't even have one central set of mail servers. Without Expanse, they wouldn't have been able to find all of their mail servers and patch them in time, because they are so distributed; they actually needed an inventory just of their mail servers. And it's very hard to aggregate that in one central place unless you're using an ASM tool like Expanse. Instead, in a lot of cases, you're using Microsoft Outlook and Microsoft Excel: you're sending emails to different business units, you're asking IT leaders in those business units whether they're patched, and they're sending emails and spreadsheets back. It's a very, very manual process.

So being able to identify that and help them, in very short order, like a day, find and fix every single server they had on their estate really changed the outcome, because in certain cases they could have been vulnerable for weeks. For SolarWinds, the details are a bit different, because not all SolarWinds assets are necessarily exposed to the internet, and in a lot of cases the attackers had been there for months. As part of the broader Palo Alto Networks portfolio, we had other products, in particular our endpoint framework called XDR, that were able to stop the SolarWinds attack. But even there, once the attack was known, customers still had the problem that they didn't even know where all of their SolarWinds servers were, which again goes back to this inventory problem. Using capabilities like Expanse and others we now have as part of Palo Alto Networks, we were able to help customers very rapidly understand everywhere they had a SolarWinds exposure so that they could mitigate it very quickly. So there was effectively a two-step process: at Palo Alto Networks, we were able to prevent the attack on our customers even without knowing that the supply chain had been breached, and then once it was public, we were able to help everyone identify all of the servers they had and make sure they were all up to date and not infected with the supply-chain Trojan.
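To make the "find everything affected, fast" point concrete, here is a small hypothetical sketch: given a normalized inventory of services (the kind of data an ASM tool would produce), it flags every asset running a product version below the first fixed release for a new advisory. The product names and version thresholds are illustrative only, not real advisory data.

```python
from dataclasses import dataclass

@dataclass
class Service:
    hostname: str
    product: str            # e.g. "example-mail-server"
    version: tuple          # parsed version, e.g. (15, 1, 1900)
    internet_facing: bool

# Hypothetical advisory data: product -> first fixed version (made-up numbers).
FIRST_FIXED = {
    "example-mail-server": (15, 1, 2176),
    "example-monitoring-agent": (2020, 2, 4),
}

def affected(services: list[Service]) -> list[Service]:
    """Every service below its product's first fixed version,
    internet-facing assets first so they can be patched soonest."""
    hits = [
        s for s in services
        if s.product in FIRST_FIXED and s.version < FIRST_FIXED[s.product]
    ]
    return sorted(hits, key=lambda s: s.internet_facing, reverse=True)

estate = [
    Service("mail-emea.example.com", "example-mail-server", (15, 1, 1900), True),
    Service("mail-apac.example.com", "example-mail-server", (15, 1, 2300), True),
    Service("monitor01.corp.example.com", "example-monitoring-agent", (2020, 2, 1), False),
]

for svc in affected(estate):
    version = ".".join(map(str, svc.version))
    print(f"PATCH: {svc.hostname} ({svc.product} {version})")
```

The contrast with the spreadsheet-and-email process is the point: once the inventory exists in one queryable place, answering "where are we exposed?" is a filter, not a project.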

Laurel: That's really interesting, because some companies may be thinking, "Oh, well, we don't have water plants and aging infrastructure to worry about." But do you actually know where all your mail is stored and how many different servers or cloud instances it may be on? And when you only have a matter of hours to make a critical patch, how quickly can you do it?

Matt: Exactly. And a lot of the questions I ask our customers are just, "How do you have confidence that your systems are up to date?" Answering even seemingly basic-sounding questions with existing IT, if you don't have Expanse or ASM, is surprisingly hard. I'll give another fun example. I ask chief information security officers this all the time: "How many routers does your organization have?" It seems like a pretty basic question; the IT team should know, at least to a very good approximation, exactly how many routers they have. They're very important pieces of networking equipment, and at the enterprise level they're expensive, not like the home Wi-Fi hotspots we're used to. These things can cost tens, in some cases hundreds, of thousands of dollars to handle enterprise-grade workloads.

And what we find is that when you ask that question, there's usually not one central place where all of that is tracked. Instead, it will be tracked by local development and IT teams in different ways, in multiple spreadsheets; there may be certain local IT management systems that know it. But at the end of it, if you ask, "How many routers do you have right now?", the process they would use to answer that is not logging into a system, it's starting an email chain. That's actually one of the main problems that attack surface management attempts to solve: how do you have an accurate and up-to-date inventory of everything, so that you can then build a variety of processes on top of that, including security? If you don't have an up-to-date inventory, or you think you do but you don't, then when you start to pull on that thread, a lot of business processes, IT processes, and security processes that you want to apply across your entire enterprise turn out to be only partially implemented. Because if I don't have a full inventory, how do I know those processes cover all of my assets, as opposed to just the assets I know about? And that's what we talk about when we say unknown unknowns. As you mentioned at the top, I know some of my systems, but do I know all of them? That delta can be everything for organizations, because most of their risk is in the parts of their network they did not even know to investigate.
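One minimal way to picture the unknown-unknowns gap, as a sketch and not any vendor's method, is to diff what internal spreadsheets claim against what external discovery observes. File names, column names, and IPs below are all hypothetical.

```python
import csv

def load_claimed_ips(paths: list[str]) -> set[str]:
    """Union of every IP that any business unit's inventory spreadsheet claims.
    Assumes each CSV has an 'ip' column."""
    claimed = set()
    for path in paths:
        with open(path, newline="") as fh:
            for row in csv.DictReader(fh):
                claimed.add(row["ip"].strip())
    return claimed

def unknown_unknowns(observed: set[str], claimed: set[str]) -> set[str]:
    """Assets attributed to the organization by external discovery that no
    internal inventory knows about -- the gap where most of the risk hides."""
    return observed - claimed

if __name__ == "__main__":
    claimed = load_claimed_ips(["it_emea.csv", "it_apac.csv", "network_team.csv"])
    observed = {"203.0.113.7", "203.0.113.42", "198.51.100.9"}  # from external discovery
    for ip in sorted(unknown_unknowns(observed, claimed)):
        print(f"untracked asset: {ip}")
```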

Laurel: What other data-driven decisions can be made from this sort of focus on actually knowing where all your assets are? How else can this help the business?

Matt: Two areas where this really helps organizations are cloud governance and M&A, particularly for very sprawling enterprises. A lot of our customers might actually have hundreds of different accounts across the public cloud providers, so AWS, Azure, Oracle, Google, Alibaba in a lot of cases, and they have no way to rationalize this because they have a whole bunch of different development teams and no central view. So when they say that they are moving to the cloud, a typical refrain from our customers will be, "Yes, we are. We have deals with Amazon and we're hedging our bets a little bit. We're also exploring Azure so we're not solely locked into one cloud." What we find is that the average customer for Expanse is in 11 different infrastructure providers.

I'm not talking SaaS; I'm talking about places where you're actually renting a server and putting data on it yourself. It's amazing and astronomical, and we can say, "Well, yeah, you are on Azure. You're also on AWS. Did you know that you're also in DigitalOcean? You're also in Linode. Your general manager in Europe probably put you in OVH or Orange hosting. You have something else in a Malaysian data center; I'm not exactly sure what that is." And that's typical. One customer of ours was actually in over a hundred different providers, because they're a very large multinational. That's when we see that people's cloud governance plans and their cloud reality are dramatically different, and helping them close that gap enables them to move to the cloud both securely and quickly.
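As a sketch of how a discovered asset list can be turned into a provider footprint, the snippet below maps IPs to infrastructure providers using CIDR ranges. The ranges shown are placeholders from documentation address space, not real provider data; real provider ranges are published by each provider and change often, so they would be fetched rather than hard-coded.

```python
import ipaddress
from collections import Counter

# Placeholder CIDR blocks (documentation address space), not real provider ranges.
PROVIDER_RANGES = {
    "provider-a": ["198.51.100.0/25"],
    "provider-b": ["198.51.100.128/25"],
    "provider-c": ["203.0.113.0/26"],
}

def provider_for(ip: str) -> str:
    """Attribute one IP to the first provider whose range contains it."""
    addr = ipaddress.ip_address(ip)
    for provider, cidrs in PROVIDER_RANGES.items():
        if any(addr in ipaddress.ip_network(cidr) for cidr in cidrs):
            return provider
    return "unknown/other"

discovered_ips = ["198.51.100.10", "198.51.100.200", "203.0.113.5", "192.0.2.77"]
footprint = Counter(provider_for(ip) for ip in discovered_ips)
for provider, count in footprint.most_common():
    print(f"{provider}: {count} asset(s)")
```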

The second one is mergers and acquisitions. This is something that is happening more and more; as a lot of industries consolidate, there has been a lot of M&A activity recently. When you think about it, an M&A deal is one of the largest IT change events an organization can have, especially if it's a large acquisition. I know a little bit about this, having recently gone through the process ourselves with Palo Alto Networks, on the other side of the table; the number of things you have to integrate is quite large. In the case of Expanse, we integrated with a top security company in the world and we are relatively small, so the integration headaches have been almost nonexistent and it's been a really great process.

But for larger deals, where, say, an organization with 50,000 people is acquiring an organization with 10,000 people, the number of different steps you have to go through, the amount of IT you have to transfer, and the amount of legacy you have to understand is gigantic. In a lot of cases, these integrations are only partially completed, because as an acquirer you might not even know where all the assets you're acquiring are. As an example, one airline went through a series of mergers, and we were able to find assets of a merged airline that no longer exists but were still on the internet more than a decade after the merger.

Which gives you an idea of just how long some of these things take. That's the other way we really help our customers: when you actually acquire a set of assets, how do you complete that process? How do you measure it, how do you monitor it, and how do you do that at the scale of the internet rather than with a lot of consultants, Excel spreadsheets, pieces of paper, and emails?

Laurel: So from our conversation today, I feel like this is the "if you don't know what you don't know, you should really figure it out" warning, if you haven't heard it before. But there are glimmers of hope in this, right? Because if the asset exists, you can at least find it, track it, assess what you're going to do with it, remediate any changes you need to make, or bring it back into full cybersecurity compliance. What gives you hope about what's possible after seeing the first three months of this year, what's happened with attacks, and the ongoing issues we're going to have? There is opportunity there, right? There is hope. What are you seeing that makes you optimistic about cybersecurity and what we're looking forward to in the next five years?

Matt: Yeah, I'm actually quite optimistic, not just in the long term but even in the medium term, three or four years out. Near term, there are definitely going to be some rough seas ahead, but here's what makes me most optimistic. One, I think this is actually a solvable problem, largely with technology that's being developed now. Once you know a problem exists, actually fixing it is rather straightforward: there are a lot of mechanistic steps to get better at it, a lot of automation that can be applied, and a lot of things coming to bear. In many cases, the hard part is seeing what you actually need to fix, knowing the full set of problems, prioritizing them effectively, and then starting to work on them.

And within the industry in particular, I think there are a lot of technologies that in the next few years are going to meet the marketing hype that has been around for years. I talk a lot with industry partners, and we use substantial amounts of data. With my background, I have a PhD from Stanford in operations research and machine learning, and we actually do use some real machine learning in our products. We also use a lot of heuristics. I joke that sometimes we have machine learning classifiers that solve a problem; other times we have SQL queries that solve the problem.

We have some really well-written SQL queries; I'm very proud of those. But from the industry's marketing material, you would think that everything in cybersecurity is this automated, AI- and ML-enabled everything. In a lot of cases across the industry, though not all, and this is especially true in startups, it's just a pitch line, and what companies call AI is really just standard software rules; there's nothing special going on.

Or there's an old joke: "Oh, I have this great AI thing." "What is it?" "Well, we have a bunch of analysts who are former intelligence officers, usually in Maryland or outside of Tel Aviv, and they're the ones doing everything. But we have a system that efficiently routes work to them, and that's our AI." "Wait, that's people." What I've seen is that, one, automation, broadly defined, is a real thing. But what automation actually means on the ground is that you take something that previously took hours or days and 10 people, and with software you bring it down to 15 minutes and two or three people.

I think that we're going to see even larger gains, and even start to take humans out of the loop entirely in certain business processes. What we're seeing, and this is a lot of what we're working on now, is that over the coming months and years, actual large-scale machine learning capability is being deployed in production. There is some of that out there piecemeal, and there are a lot more rules than anyone wants to talk about. But there is now enough assemblage and normalization of data, especially at the larger companies, and enterprises are more willing to share information with vendors if it demonstrably improves the security service they're getting, that we are actually going to be able to deploy increasingly sophisticated capabilities along those lines and have the product reality match what the broader industry marketing zeitgeist has claimed.

I've seen a lot of these capabilities; they are very, very real and they're very much coming, and they're coming at an industrial scale for defenders. That's what I'm most excited about, because despite the old adage that attackers need to be right once while defenders need to be right all the time, it is increasingly scalable for defenders to be right much of the time and to set up vast monitoring networks, so that if the attackers slip up once, the defenders can completely wipe out that attack. That asymmetrically affects cost, and I think it will help tilt the field back to defense.

Matt: When you had partial AI and ML solutions and partial automation, it helped attackers much more, because they could duct-tape together a few different parts, scale certain things up very highly, and then just see what came back to them. I think defenders are now going to have similar capabilities that are effective because they actually cover everything going on in an enterprise. And that's going to allow us to turn the tide.

Laurel: Matt, thank you so much for joining us today in what has been a fantastic conversation on the Business Lab.

That was Matt Kraning, the chief technology officer and co-founder of Expanse, who I spoke with from Cambridge, Massachusetts, the home of MIT and MIT Technology Review, overlooking the Charles River.

That's it for this episode of Business Lab. I'm your host, Laurel Ruma. I'm the director of Insights, the custom publishing division of MIT Technology Review. We were founded in 1899 at the Massachusetts Institute of Technology. And you can also find us in print, on the web, and at events each year around the world.

For more information about us and the show, please check out our website at technologyreview.com.

This show is available wherever you get your podcasts. If you enjoyed this episode, we hope you'll take a moment to rate and review us. Business Lab is a production of MIT Technology Review. This episode was produced by Collective Next. Thanks for listening.

This podcast episode was produced by Insights, the custom content arm of MIT Technology Review. It was not produced by MIT Technology Review's editorial staff.

Continued here:
Better cybersecurity means finding the unknown unknowns - MIT Technology Review

Read More..

Going to the Moon via the Cloud – The New York Times

Before the widespread availability of this kind of computing, organizations built expensive prototypes to test their designs. "We actually went and built a full-scale prototype, and ran it to the end of life before we deployed it in the field," said Brandon Haugh, a core-design engineer, referring to a nuclear reactor he worked on with the U.S. Navy. "That was a 20-year, multibillion-dollar test."

Today, Mr. Haugh is the director of modeling and simulation at the California-based nuclear engineering start-up Kairos Power, where he hones the design for affordable and safe reactors that Kairos hopes will help speed the world's transition to clean energy.

Nuclear energy has long been regarded as one of the best options for zero-carbon electricity production, except for its prohibitive cost. But Kairos Power's advanced reactors are being designed to produce power at costs that are competitive with natural gas.

"The democratization of high-performance computing has now come all the way down to the start-up, enabling companies like ours to rapidly iterate and move from concept to field deployment in record time," Mr. Haugh said.

But high-performance computing in the cloud also has created new challenges.

In the last few years, there has been a proliferation of custom computer chips purposely built for specific types of mathematical problems. Similarly, there are now different types of memory and networking configurations within high-performance computing. And the different cloud providers have different specializations; one may be better at computational fluid dynamics while another is better at structural analysis.

The challenge, then, is picking the right configuration and getting the capacity when you need it, because demand has risen sharply. And while scientists and engineers are experts in their domains, they aren't necessarily experts in server configurations, processors, and the like.

This has given rise to a new kind of specialization: experts in high-performance cloud computing, and new cross-cloud platforms that act as one-stop shops where companies can pick the right combination of software and hardware. Rescale, which works closely with all the major cloud providers, is the dominant company in this field. It matches computing problems for businesses, like Firefly and Kairos, with the right cloud provider to deliver computing that scientists and engineers can use to solve problems faster or at the lowest possible cost.
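A toy version of that matching problem, purely illustrative and not Rescale's algorithm, might score providers per workload class and pick the best fit, optionally breaking ties on cost. All names and numbers below are made up.

```python
# Hypothetical fit scores (0-1) per workload class, plus an indicative price.
CAPABILITIES = {
    "provider-a": {"cfd": 0.9, "structural": 0.6, "cost_per_core_hour": 0.08},
    "provider-b": {"cfd": 0.7, "structural": 0.9, "cost_per_core_hour": 0.06},
}

def pick_provider(workload: str, prefer_cost: bool = False) -> str:
    """Return the provider with the best fit for a workload class; when
    prefer_cost is set, cheaper providers win among equally good fits."""
    def score(item):
        _, caps = item
        fit = caps.get(workload, 0.0)
        return (fit, -caps["cost_per_core_hour"]) if prefer_cost else (fit,)
    best, _ = max(CAPABILITIES.items(), key=score)
    return best

print(pick_provider("cfd"))                           # provider-a
print(pick_provider("structural", prefer_cost=True))  # provider-b
```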

Read the rest here:
Going to the Moon via the Cloud - The New York Times

Read More..

Investors Should Check Out This Cloud Computing Stock That’s Down More Than 25% – The Motley Fool

As corporate computing infrastructures become more complex, it's important for information technology teams to keep tabs on how their tech stack is performing. Datadog (NASDAQ:DDOG) was founded to make this easier by helping its customers observe and monitor all facets of their network and every application users need. The stock has more than doubled since the company went public in September 2019, but shares have pulled back with the tech sell-off in the market. Even with shares off double digits from their recent high, Fool contributor Brian Withers explains why this cloud specialist is worth a look on a Fool Live episode that was recorded on May 13.

Brian Withers: I'm going to talk about Datadog. I really like this company; it's set up for the future in the cloud. If you don't know what Datadog does, it has a set of what it calls observability tools, which allow information technology teams to observe, or look into, their applications, their network, and their logs. If you're not familiar with software, it creates a bunch of logs, which are just dumps of huge amounts of data. What Datadog does is pull all of that together so you can look at it on one pane of glass.

What happens over time is that networks and applications are getting more complicated. Companies may have stuff they host on-premises in their own data center that has been around for a while, like some Oracle instance. I remember when I was in corporate, we had an Oracle instance to run our manufacturing business, and that was hosted on site. But then there are all these cloud platforms that are hosted somewhere else, on Azure, AWS, whatnot. More and more companies are getting into this hybrid environment, where it's almost like users can be anywhere and the software can be anywhere. It's really important for companies that depend on their websites being available for customers, which is just about everybody nowadays.

Datadog is really helpful in pulling things together. In fact, they shared that one customer this past quarter took the eight different observability tools they were using and consolidated down to just Datadog.
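As a generic illustration of the "one pane of glass" idea, and not Datadog's API, the sketch below merges independently sorted log streams into a single chronological feed. The log lines and source names are invented.

```python
import heapq

# Hypothetical per-source log streams, each already sorted by ISO timestamp.
app_logs = [("2024-05-13T10:00:01", "app", "order service started")]
lb_logs  = [("2024-05-13T10:00:02", "lb",  "502 from upstream")]
db_logs  = [("2024-05-13T10:00:03", "db",  "slow query: 1.8s")]

def unified_view(*streams):
    """Merge sorted streams into one timeline; ISO-8601 strings sort
    chronologically, so tuple comparison on the timestamp is enough."""
    for timestamp, source, message in heapq.merge(*streams):
        yield f"{timestamp} [{source:>3}] {message}"

for line in unified_view(app_logs, lb_logs, db_logs):
    print(line)
```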

Some of the numbers are just really impressive if you look at the most recent quarter: 51% top-line revenue growth. I really like how they're having customers land and then expand; they have a bunch of different products. Customers spending $100,000 or more make up a majority of their annual recurring revenue, about 75%, and that's growing at a healthy 50% a year. They have about 1,400 customers spending more than $100,000 a year.

Their remaining performance obligations, the sum of all their contracts together, factoring in how long they run, the monthly fees, and whatnot, grew 81% year over year. To me, since that's growing faster than revenue, it says customers are signing up for bigger and potentially longer contracts. That really bodes well for the future of this company.

I talked about the products being a land-and-expand model. At the end of the first quarter, 75% of all their customers were using more than one product, which is up from 63% last year, and those using four or more doubled to 25%.

This company has an addressable market of about $35 billion, a figure it shared in its filing to go public. But this monitoring and observability space is really just getting started.

To me, I'm still really super positive about this company. The only thing that's changed for me is the stock price. If you haven't taken a look at this one, maybe it's time you did.

This article represents the opinion of the writer, who may disagree with the official recommendation position of a Motley Fool premium advisory service. We're motley! Questioning an investing thesis -- even one of our own -- helps us all think critically about investing and make decisions that help us become smarter, happier, and richer.

Continued here:
Investors Should Check Out This Cloud Computing Stock That's Down More Than 25% - The Motley Fool

Read More..