Category Archives: Cloud Servers

To HADES and Back: UNC2165 Shifts to LOCKBIT to Evade Sanctions – Mandiant

The U.S. Treasury Department's Office of Foreign Assets Control (OFAC) sanctioned the entity known as Evil Corp in December 2019, citing the group's extensive development, use, and control of the DRIDEX malware ecosystem. Since the sanctions were announced, Evil Corp-affiliated actors appear to have continuously changed the ransomware they use (Figure 1). Specifically, following an October 2020 OFAC advisory, there was a cessation of WASTEDLOCKER activity and the emergence of multiple closely related ransomware variants in relatively quick succession. These developments suggested that the actors faced challenges in receiving ransom payments following their ransomware's public association with Evil Corp.

Mandiant has investigated multiple LOCKBIT ransomware intrusions attributed to UNC2165, a financially motivated threat cluster that shares numerous overlaps with the threat group publicly reported as "Evil Corp." UNC2165 has been active since at least 2019 and almost exclusively obtains access into victim networks via the FAKEUPDATES infection chain, tracked by Mandiant as UNC1543. Previously, we have observed UNC2165 deploy HADES ransomware. Based on the overlaps between UNC2165 and Evil Corp, we assess with high confidence that these actors have shifted away from using exclusive ransomware variants to LOCKBIT, a well-known ransomware as a service (RaaS), in their operations, likely to hinder attribution efforts in order to evade sanctions.

OFAC sanctions against Evil Corp in December 2019 were announced in conjunction with the Department of Justice's (DOJ) unsealing of indictments against individuals for their roles in the Bugat malware operation, updated versions of which were later called DRIDEX. DRIDEX was believed to operate under an affiliate model with multiple actors involved in the distribution of the malware. While the malware was initially used as a traditional banking Trojan, beginning as early as 2018, we increasingly observed DRIDEX used as a conduit to deploy post-exploitation frameworks onto victim machines. Security researchers also began to report DRIDEX preceding BITPAYMER deployments, which was consistent with a broader emerging trend at the time of ransomware being deployed post-compromise in victim environments. Although Evil Corp was sanctioned for the development and distribution of DRIDEX, the group was already beginning to shift towards more lucrative ransomware operations.

UNC2165 activity likely represents another evolution in Evil Corp affiliated actors' operations. Numerous reports have highlighted the progression of linked activity including development of new ransomware families and a reduced reliance on DRIDEX to enable intrusions. Despite these apparent efforts to obscure attribution, UNC2165 has notable similarities to operations publicly attributed to Evil Corp, including a heavy reliance on FAKEUPDATES to obtain initial access to victims and overlaps in their infrastructure and use of particular ransomware families.

The following BEACON C&C domains are paired with descriptions of the associated public reporting below.

BEACON C&C: mwebsoft[.]com, rostraffic[.]com, consultane[.]com, traffichi[.]com, amazingdonutco[.]com, cofeedback[.]com, adsmarketart[.]com, websitelistbuilder[.]com, advancedanalysis[.]be, adsmarketart[.]com

In June 2020, NCC Group reported on the WASTEDLOCKER ransomware, which they attributed to Evil Corp with high confidence. In these incidents, the threat actor leveraged FAKEUPDATES for initial access.

BEACON C&C: cutyoutube[.]com, onlinemoula[.]com

In June 2021, Secureworks reported on HADES ransomware intrusions attributed to "GOLD WINTER." In these incidents, the threat actor leveraged FAKEUPDATES or VPN credentials for initial access. This activity was later attributed to GOLD DRAKE (aka Evil Corp) after further analysis of the ransomware and overlaps with other families believed to be operated by GOLD DRAKE.

BEACON C&C: potasip[.]com, advancedanalysis[.]be, firsino[.]com, currentteach[.]com, newschools[.]info, adsmarketart[.]com

In February 2022, SentinelOne published an in-depth report on the Evil Corp lineage in which they assessed with high confidence that WASTEDLOCKER, HADES, PHOENIXLOCKER, PAYLOADBIN, and MACAW were developed by the same threat actors. The researchers also noted overlaps in infrastructure between FAKEUPDATES and BITPAYMER, DOPPELPAYMER, WASTEDLOCKER, and HADES ransomware.

Overlaps With SilverFish Reporting

UNC2165 also has overlaps with a cluster of activity dubbed "SilverFish" by ProDaft. Mandiant reviewed the information in this report and determined that the analyzed malware administration panel is used to manage FAKEUPDATES infections and to distribute secondary payloads, including BEACON. We believe that at least some of the described activity can be attributed to UNC2165 based on malware payloads and other technical artifacts included in the report.

While UNC2165 activity dates to at least June 2020, the following TTPs are focused on intrusions where we directly observed ransomware deployed.

Initial Compromise and Establish Foothold

UNC2165 has primarily gained access to victim organizations via FAKEUPDATES infections that ultimately deliver loaders to deploy BEACON samples on impacted hosts. The loader portion of UNC2165 Cobalt Strike payloads has changed frequently, but the group has continually used BEACON in most intrusions since 2020. Beyond FAKEUPDATES, we have also observed UNC2165 leverage suspected stolen credentials to obtain initial access.

Escalate Privileges

UNC2165 has taken multiple common approaches to privilege escalation across its intrusions, including Mimikatz and Kerberoasting attacks, targeting authentication data stored in the Windows registry, and searching for documents or files associated with password managers or that may contain plaintext credentials.

Internal Reconnaissance

Following UNC1543 FAKEUPDATES infections, we commonly see a series of built-in Microsoft Windows utilities such as whoami, nltest, cmdkey, and net used against newly accessed systems to gather data and learn more about the victim environment. The majority of these commands are issued using one larger, semicolon-delimited list of enumeration commands, followed by additional PowerShell reconnaissance (Figure 4). We attribute this initial reconnaissance activity to UNC1543 as it occurs prior to UNC2165 BEACON deployment; however, the collected information almost certainly enables decision-making for UNC2165. During intrusions, UNC2165 has used multiple common third-party tools to enable reconnaissance of victim networks and has accessed internal systems to obtain information used to guide its intrusion operations.
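As a starting point for spotting that pattern in telemetry, the sketch below flags command lines that chain several of these utilities with semicolons. It assumes process command lines are available as plain text (for example, exported from EDR or Sysmon process-creation logs); the parsing and threshold are illustrative assumptions, not a tested detection.

```python
import re

# Built-in utilities the reporting above associates with initial reconnaissance.
RECON_TOOLS = {"whoami", "nltest", "cmdkey", "net"}

def flags_chained_enumeration(cmdline: str, min_hits: int = 3) -> bool:
    """Return True when a command line chains several recon utilities with
    semicolons. Threshold and parsing are illustrative only."""
    if ";" not in cmdline:
        return False
    segments = [s.strip() for s in cmdline.split(";") if s.strip()]
    first_tokens = (re.split(r"[\s.]", seg.lower(), maxsplit=1)[0] for seg in segments)
    return sum(tok in RECON_TOOLS for tok in first_tokens) >= min_hits

# A chained command similar in shape to the activity described above.
sample = "whoami /all; nltest /domain_trusts; cmdkey /list; net view /all"
print(flags_chained_enumeration(sample))  # True
```

Hits from a heuristic like this are leads for an analyst, not verdicts; legitimate administration scripts can produce the same shape.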

Lateral Movement and Maintain Presence

UNC2165 relies heavily on Cobalt Strike BEACON to enable lateral movement and maintain presence in a victim environment. Beyond its use of BEACON, UNC2165 has also used common administrative protocols and software to enable lateral movement, including RDP and SSH.

Complete Mission

In most cases, UNC2165 has stolen data from its victims to use as leverage for extortion after it has deployed ransomware across an environment. In intrusions where the data exfiltration method could be identified, there is evidence to suggest the group used either Rclone or MEGASync to transfer data from the victims' environments prior to encryption. The Rclone utility is used by many financially motivated actors to synchronize sensitive files with cloud storage providers, and MEGASync synchronizes data to the MEGA cloud hosting service.
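A correspondingly simple sweep can surface the exfiltration tooling named above in process telemetry. The sketch below matches on process names only, which is noisy (both tools have legitimate uses), so the output should be treated as hunting leads; the input format is an illustrative assumption.

```python
# Process names associated with the exfiltration tooling described above.
EXFIL_TOOLS = {"rclone.exe", "megasync.exe"}

def exfil_tool_leads(process_names):
    """Return observed process names matching known exfiltration tooling.
    Name matching alone is weak evidence; correlate with outbound traffic."""
    return sorted({name for name in process_names if name.lower() in EXFIL_TOOLS})

observed = ["chrome.exe", "Rclone.exe", "svchost.exe", "megasync.exe"]
print(exfil_tool_leads(observed))  # ['Rclone.exe', 'megasync.exe']
```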

UNC2165 has leveraged multiple Windows batch scripts during the final phases of its operations to deploy ransomware and modify systems to aid the ransomware's propagation. We have observed UNC2165 use both HADES and LOCKBIT; we have not seen these threat actors use HADES since early 2021. Notably, LOCKBIT is a prominent Ransomware-as-a-Service (RaaS) affiliate program, which we track as UNC2758, that has been advertised in underground forums since early 2020 (21-00026166).

Based on information from trusted sensitive sources and underground forum activity, we have moderate confidence that a particular actor operating on underground forums is affiliated with UNC2165. Additional details are available in Mandiant Advantage.

The U.S. Government has increasingly leveraged sanctions as a part of a broader toolkit to tackle ransomware operations. This has included sanctions on both actors directly involved in ransomware operations as well as cryptocurrency exchanges that have received illicit funds. These sanctions have had a direct impact on threat actor operations, particularly as at least some companies involved in ransomware remediation activities, such as negotiation, refuse to facilitate payments to known sanctioned entities. This can ultimately reduce threat actors' ability to be paid by victims, which is the primary driver of ransomware operations.

The adoption of an existing ransomware is a natural evolution for UNC2165 to attempt to obscure their affiliation with Evil Corp. Both the prominence of LOCKBIT in recent years and its successful use by several different threat clusters likely made the ransomware an attractive choice. Using this RaaS would allow UNC2165 to blend in with other affiliates, requiring visibility into earlier stages of the attack lifecycle to properly attribute the activity, compared to prior operations that may have been attributable based on the use of an exclusive ransomware. Additionally, the frequent code updates and rebranding of HADES required development resources and it is plausible that UNC2165 saw the use of LOCKBIT as a more cost-effective choice. The use of a RaaS would eliminate the ransomware development time and effort allowing resources to be used elsewhere, such as broadening ransomware deployment operations. Its adoption could also temporarily afford the actors more time to develop a completely new ransomware from scratch, limiting the ability of security researchers to easily tie it to previous Evil Corp operations.

It is plausible that the actors behind UNC2165 operations will continue to take additional steps to distance themselves from the Evil Corp name. For example, the threat actors could choose to abandon their use of FAKEUPDATES, an operation with well-documented links to Evil Corp actors, in favor of a newly developed delivery vector, or may look to acquire access from underground communities. Some evidence of this developing trend already exists, given that UNC2165 has leveraged stolen credentials in a subset of intrusions, which is consistent with a suspected member's underground forum activity. We expect these actors, as well as others who are sanctioned in the future, to take steps such as these to obscure their identities in order to ensure that sanctions are not a limiting factor to receiving payments from victims.

MITRE ATT&CK Mapping

Mandiant has observed UNC2165 use the following techniques.

Impact

T1486: Data Encrypted for Impact
T1489: Service Stop
T1490: Inhibit System Recovery
T1529: System Shutdown/Reboot

Defense Evasion

T1027: Obfuscated Files or Information
T1027.005: Indicator Removal from Tools
T1036: Masquerading
T1055: Process Injection
T1055.002: Portable Executable Injection
T1070.001: Clear Windows Event Logs
T1070.004: File Deletion
T1070.005: Network Share Connection Removal
T1070.006: Timestomp
T1078: Valid Accounts
T1112: Modify Registry
T1127.001: MSBuild
T1134: Access Token Manipulation
T1134.001: Token Impersonation/Theft
T1140: Deobfuscate/Decode Files or Information
T1202: Indirect Command Execution
T1218.005: Mshta
T1218.011: Rundll32
T1497: Virtualization/Sandbox Evasion
T1497.001: System Checks
T1553.002: Code Signing
T1562.001: Disable or Modify Tools
T1562.004: Disable or Modify System Firewall
T1564.003: Hidden Window
T1620: Reflective Code Loading

Command and Control

T1071: Application Layer Protocol
T1071.001: Web Protocols
T1071.004: DNS
T1090.004: Domain Fronting
T1095: Non-Application Layer Protocol
T1105: Ingress Tool Transfer
T1573.002: Asymmetric Cryptography

Collection

T1056.001: Keylogging
T1113: Screen Capture
T1115: Clipboard Data
T1560: Archive Collected Data
T1602.002: Network Device Configuration Dump

Discovery

T1007: System Service Discovery
T1010: Application Window Discovery
T1012: Query Registry
T1016: System Network Configuration Discovery
T1033: System Owner/User Discovery
T1049: System Network Connections Discovery
T1057: Process Discovery
T1069: Permission Groups Discovery
T1069.001: Local Groups
T1069.002: Domain Groups
T1082: System Information Discovery
T1083: File and Directory Discovery
T1087: Account Discovery
T1087.001: Local Account
T1087.002: Domain Account
T1482: Domain Trust Discovery
T1518: Software Discovery
T1614.001: System Language Discovery

Lateral Movement

T1021.001: Remote Desktop Protocol
T1021.002: SMB/Windows Admin Shares
T1021.004: SSH

Exfiltration

T1020: Automated Exfiltration

Execution

T1047: Windows Management Instrumentation
T1053: Scheduled Task/Job
T1053.005: Scheduled Task
T1059: Command and Scripting Interpreter
T1059.001: PowerShell
T1059.003: Windows Command Shell
T1059.005: Visual Basic
T1059.007: JavaScript
T1569.002: Service Execution

Persistence

T1098: Account Manipulation
T1136: Create Account
T1136.001: Local Account
T1543.003: Windows Service
T1547.001: Registry Run Keys / Startup Folder
T1547.009: Shortcut Modification

Credential Access

T1003.001: LSASS Memory
T1003.002: Security Account Manager
T1552.002: Credentials in Registry
T1558: Steal or Forge Kerberos Tickets
T1558.003: Kerberoasting

Initial Access

T1133: External Remote Services
T1189: Drive-by Compromise

Resource Development

T1588.003: Code Signing Certificates
T1588.004: Digital Certificates
T1608.003: Install Digital Certificate

LOCKBIT YARA Rules

The following YARA rules are not intended to be used on production systems or to inform blocking rules without first being validated through an organization's own internal testing processes to ensure appropriate performance and limit the risk of false positives. These rules are intended to serve as a starting point for hunting efforts to identify LOCKBIT activity; however, they may need adjustment over time if the malware family changes.
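One way to run that validation is to compile the rules and scan a known-clean corpus first, gauging false positives before any wider rollout. Below is a minimal sketch using the yara-python package; the rule file path and corpus directory are placeholders.

```python
import os
import yara  # pip install yara-python

# Compile the LOCKBIT hunting rules (path is a placeholder).
rules = yara.compile(filepath="lockbit_hunting.yar")

def scan_directory(root: str) -> None:
    """Scan files under root and print any rule matches. Run this against a
    known-clean corpus first to estimate the false-positive rate."""
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                matches = rules.match(filepath=path, timeout=30)
            except yara.Error:
                continue  # unreadable file; skip it
            if matches:
                print(path, [m.rule for m in matches])

scan_directory("./clean_corpus")
```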

Follow this link:
To HADES and Back: UNC2165 Shifts to LOCKBIT to Evade Sanctions - Mandiant

Bridging On-Premise and Cloud Data – International Society of Automation

Hybrid data architectures empower process manufacturers to more quickly realize the business benefits from their cloud and IIoT investments.

By 2028, cloud computing and the Internet of Things (IoT) in manufacturing are poised to achieve the plateau of productivity, or the phase when they drive transformational impact on business outcomes, according to business analyst firm Gartner. At this point in their digital transformation journeys, many manufacturers have completed their Industrial Internet of Things (IIoT) pilot projects and are approaching mid- to late-stage adoption in operations.

While the term IIoT was coined just a few years ago, the large volumes of data associated with it are familiar to the process control and automation industries. For decades, manufacturers have generated and collected more data than they know what to do with via sensors, legacy digital networks, and various host systems.

But a great deal of data was stranded in process historians and other databases, collecting dust. Today, manufacturers can fully benefit from this data and information in the cloud by using hybrid data architectures coupled with advanced analytics applications.

Transitioning to agile production requires optimizing the entire supply chain, from improving overall equipment effectiveness and asset reliability to reducing inventory. IIoT implementations can help organizations clear common optimization hurdles, because they empower staff to access, collect, and analyze more data in near real time. This enables process experts and operators to make timely and productive decisions to enhance product quality, optimize operations, and reduce waste.

With Internet connectivity, IIoT implementations can directly access the vast computing power and scalability of the cloud. Each year, the variability, speed, and volume of process data grow exponentially, making IIoT architectures the only suitable option for compute-intensive Industry 4.0 projects.

Some of the leading cloud applications and components include digital twins, machine learning (ML) tools, autonomous robot artificial intelligence (AI) repositories, and augmented reality simulators. Each of these use cases requires high CPU processing power, which can be difficult for on-premise servers to provide because information technology (IT) teams cannot scale up the required computing resources on demand.

According to Gartner, when it comes to cloud computing for manufacturing operations, the industry is currently in a trough of disillusionment, or a state of lowered expectations. This mindset is largely a result of the unproven idea that IIoT and related databases must feed a central data lake, which is intended to serve as the single source of truth and common access point for all users worldwide.

If this were true, cloud-based data lakes would need to replace all existing process historians, along with other host systems such as those used for asset management, laboratory information, or inventory tracking, to provide the data required for analysis. In reality, this is not the best approach because many legacy on-premise servers, such as those hosting process historians, collect and store highly valuable operational technology (OT) data. The context housed in these rich data archives is required to ensure Industry 4.0 initiatives, such as predictive maintenance via ML, succeed. Attempts to move or copy this OT data to the cloud are often time consuming and costly.

To properly aggregate and analyze the data produced by legacy sensors and infrastructure alongside new born-in-the-cloud IIoT sensor data, a bridge is required.

To address this issue and provide combined access to OT, IIoT, and other data, process manufacturers use a hybrid data architecture approach.

This is not a rip-and-replace approach but is instead a bridge connecting traditional manufacturing data infrastructure with cloud-native data to leverage the best data from both sides by creating a continuum of access. Process automation systems can continue to use on-premise or edge data for real-time decision making where low latency is required. Simultaneously, the hybrid model empowers organizations to apply global reporting and compute-intensive tasks, like ML, to cloud-native IIoT data (figure 1).

This approach requires a data abstraction layer to facilitate traffic flow among various data sources (figure 2).

Figure 1. Hybrid data architectures empower manufacturing organizations to leverage IIoT in the cloud for compute-intensive processes, while executing real-time process control using on-premise data.

Figure 2. Data abstraction layers facilitate data access and transfer among multiple data sources, including on-premise and cloud databases.

Data abstraction indexes and facilitates access to data in its native locations, a key differentiating point from data-lake functionality. Because data is not copied or moved, its management is significantly simplified. Once data abstraction is implemented, organizations can add advanced analytics applications to simultaneously query and make use of information from multiple, and often previously disparate, data sources. This improves awareness and predictive maintenance capabilities across the organization.

For example, when training and executing ML models, organizations must access maintenance records and historical process data. Staff must then access results to proactively identify issues and adjust the operational model. Abstraction makes it easy for personnel and software applications to access multiple datasets through a single source.
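As an illustration of the pattern, the sketch below shows an abstraction layer that fans a single query out to sources in their native locations instead of copying data into a central lake. All source names and adapter interfaces are hypothetical.

```python
from typing import Callable, Dict, List

QueryFn = Callable[[str], List[dict]]

class DataAbstractionLayer:
    """Federates queries across registered data sources in place, rather than
    copying their contents into a central store. Names are illustrative."""

    def __init__(self) -> None:
        self._sources: Dict[str, QueryFn] = {}

    def register(self, name: str, query_fn: QueryFn) -> None:
        # Each adapter translates the query into its source's native API.
        self._sources[name] = query_fn

    def query(self, expression: str) -> Dict[str, List[dict]]:
        # Fan the same expression out to every source and label the results.
        return {name: fn(expression) for name, fn in self._sources.items()}

# Hypothetical adapters: an on-premise historian and a cloud IIoT store.
layer = DataAbstractionLayer()
layer.register("historian", lambda q: [{"tag": "PUMP-101", "source": "on-prem"}])
layer.register("iiot_cloud", lambda q: [{"tag": "PUMP-101", "source": "cloud"}])
print(layer.query("temperature for PUMP-101"))
```

In a real deployment, each adapter would hold a connection to its source and push the query down to it; the point is that callers see one interface while the data never leaves its native location.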

Asset monitoring is a critical task for many process manufacturers. For common assets, including pumps, valves, heat exchangers, and others, manufacturers deploy a variety of maintenance methods to maximize productivity over the asset's life. At the two extremes, these methods include run to fail in the most basic case, and condition monitoring for predictive maintenance in more advanced situations.

By monitoring asset performance to detect anomalies in near real time, manufacturers can identify potential issues before failure, reducing unplanned downtime and maintenance costs. When these anomalies are detected, advanced analytics software can generate alerts to inform personnel, so they can schedule inspections and maintenance of affected assets.

These monitoring applications can be scaled to hundreds of assets across multiple sites. Therefore, it is critical to normalize data before generating alerts and to streamline notification paths so the right personnel are informed.

By working together, OT and IT teams can use a hybrid data architecture to achieve these asset monitoring goals. First, OT teams must deploy suitable sensors, in addition to data acquisition and storage technologies, to populate asset hierarchies with data for grouping equipment and devices of a common process or location. These asset hierarchies include sets of metadata collected for each asset of a common taxonomy. Once the hierarchies are in place, assets can be analyzed within process groups, rather than individually, or solely as unrelated assets of the same type.

Next, OT works with IT personnel to ensure the former group can access this data securely by implementing cloud data storage, advanced analytics, and workflow automation tools. IT and data science teams collaborate with OT subject matter experts to configure ML models that create insights and effectively predict asset failure, generating intelligent alerts to improve issue remediation and decrease downtime.

When evaluating hybrid data infrastructure, organizations should consider several key questions before implementation.

Hybrid data architectures empower process manufacturers to more quickly realize the business benefits from their cloud and IIoT investments. By using IIoT data and pipelines, on-premise process data, abstraction, and advanced analytics, organizations can quickly pass through the trough of disillusionment and reach the digitalization plateau of productivity.

We want to hear from you! Please send us your comments and questions about this topic to InTechmagazine@isa.org.

See the original post:
Bridging On-Premise and Cloud Data - International Society of Automation

Everything is gone: Russian business hit hard by tech sanctions – Ars Technica


Russian companies have been plunged into a technological crisis by Western sanctions that have created severe bottlenecks in the supply of semiconductors, electrical equipment, and the hardware needed to power the nation's data centers.

Most of the world's largest chip manufacturers, including Intel, Samsung, TSMC, and Qualcomm, have halted business with Russia entirely after the US, UK, and Europe imposed export controls on products using chips made or designed in the US or Europe.

This has created a shortfall in the type of larger, low-end chips that go into the production of cars, household appliances, and military equipment. Supplies of more advanced semiconductors, used in cutting-edge consumer electronics and IT hardware, have also been severely curtailed.

And the country's ability to import foreign tech and equipment containing these chips, including smartphones, networking equipment, and data servers, has been drastically stymied.

"Entire supply routes for servers to computers to iPhones, everything, is gone," said one Western chip executive.

The unprecedented sweep of Western sanctions over President Vladimir Putin's war in Ukraine is forcing Russia into what the central bank said would be a painful structural transformation of its economy.

With the country unable to export much of its raw materials, import critical goods, or access global financial markets, economists expect Russia's gross domestic product to contract by as much as 15 percent this year.

Export controls on dual-use technology that can have both civilian and military applications, such as microchips, semiconductors, and servers, are likely to have some of the most severe and lasting effects on Russia's economy. The country's biggest telecoms groups will be unable to access 5G equipment, while cloud computing products from tech leader Yandex and Sberbank, Russia's largest bank, will struggle to expand their data center services.

Russia lacks an advanced tech sector and consumes less than 1 percent of the world's semiconductors. This has meant that technology-specific sanctions have had a much less immediate impact on the country than similar export controls had on China, the behemoth of global tech manufacturing, when they were introduced in 2019.

While Russia does have several domestic chip companies, namely JSC Mikron, MCST, and Baikal Electronics, Russian groups have previously relied on importing significant quantities of finished semiconductors from foreign manufacturers such as SMIC in China, Intel in the US, and Infineon in Germany. MCST and Baikal have relied principally on foundries in Taiwan and Europe for the production of the chips they design.

See original here:
Everything is gone: Russian business hit hard by tech sanctions - Ars Technica

Next-Generation Memory Market Estimated to Reach USD 14 Billion, at a CAGR of 29.9% by 2030 – Report by Market Research Future (MRFR) – GlobeNewswire

New York, US, June 03, 2022 (GLOBE NEWSWIRE) -- Market Overview: According to a comprehensive research report by Market Research Future (MRFR), "Next-Generation Memory Market Information by Product, by Application and Region - Forecast to 2030," the market size will reach USD 14 billion, growing at a compound annual growth rate of 29.9% by 2030.

Market Scope: Over the past few years, next-generation memory has evolved extensively due to rising use in small, niche appliances, driven by its low switching energy, non-volatility, and low power consumption. Next-generation memory is becoming critical for many applications as it needs a very small amount of energy for polarization compared to other technologies.

Report Scope:

Get Free Sample PDF Brochure: https://www.marketresearchfuture.com/sample_request/2448

Storage and memory have become foundational pillars of today's digital-first economy. This trend is projected to continue through 2022 and the years ahead, with robust demand for storage and memory technologies. Increasing development of next-generation memory technologies to address gaps in today's storage hierarchy and needs for real-time data processing influences the market landscape.

For many years, the automotive industry has been looking for a breakthrough memory solution that has the same attributes as DRAM and flash and could transform the memory space. MRAM, PCM, and ReRAM technologies have emerged as these solutions to meet the market demand. STT-MRAM features the speed of SRAM and the non-volatility of flash with unlimited endurance.

Market USP Exclusively Encompassed:

Market Drivers

Over recent years, several next-generation memory types have emerged rapidly, and many research activities in the pipeline are working toward product launches. Moreover, increasing R&D funding required for extensions of new memory technologies, and advances in MRAM, phase-change memory (PCM), and ReRAM, are shaping market dynamics.

Browse In-depth Market Research Report (100 Pages) on Next-Generation Memory Market:https://www.marketresearchfuture.com/reports/next-generation-memory-market-2448

Some new memories are entirely based on advanced technologies or involve architectural changes, such as near- or in-memory computing, which brings processing tasks inside of memory. Additionally, increasing R&D investments are being poured into developing solutions that help overcome several technical and business hurdles. Some of them are potentially targeted to replace new-age DRAM, NAND, and SRAM.

Furthermore, efforts in pushing the development of 3D SRAM that stacks SRAM and can replace planar SRAM in the future are likely to influence the market scenario. Also, there are vast improvements in new memory types with new materials, storage concepts, and materials technology. This indicates significant potential challenges in the material and structural characterization.

Segmentation of Market Covered in the Research:

The next-generation memory market is segmented into products, applications, and regions. The product segment comprises volatile memories (static random access memory (SRAM), dynamic random access memory (DRAM), and others) and non-volatile memories (PCM, FeRAM, MRAM, ReRAM, and others).

The application segment comprises consumer electronics (mobile phones, smartphones, laptops, iPods, tablets, others), manufacturing, IT & telecommunication, aerospace & defense, etc. The region segment comprises the Asia Pacific, Middle East & Africa, North America, Europe, and rest-of-the-world.

Talk to Expert: https://www.marketresearchfuture.com/ask_for_schedule_call/2448

Regional Analysis

North America dominates the global next-generation memory market. Factors such as increased investments in technology developments and the wide adoption of PCM memory, MRAM, and DRAM among end users drive the region's market share. Besides, the presence of several notable players and their technical expertise drives regional market growth. The introduction of innovative memory and storage solutions developed using disruptive technologies, such as AI, ML, and IoT, to cater to autonomous vehicle manufacturing companies boosts the market size.

Europe is the second-largest market for next-generation memories. The rising demand for next-generation memory in various end-use applications and the adoption of next-generation memory devices in data centers for cloud computing applications to gain scalability and improve the memory capacity escalate the market value. Additionally, the growing IT industry and high deployments of cloud computing technologies in SMEs & large enterprises in the region push market revenues.

APAC has emerged as a rapidly growing market for next-generation memory solutions. The rapid rise of the financial and healthcare sectors and the increasing uptake of on-premises and on-cloud database infrastructures foster regional market growth. Furthermore, the rising numbers of data centers and cloud servers in the region boost the size of the market.

With the growing investments in efficient next-generation memory technology, the APAC next-generation memory market is likely to create a substantial revenue pocket during the assessment period. Also, the rising trend of outsourcing services in real-time intelligence and the advanced predictive analytics on different applications propel the region's market share.

Competitive Analysis

The next-generation memory market witnesses significant strategic initiatives, such as collaborations, mergers & acquisitions, expansions, and advanced technology integration. Matured industry players make strategic investments in driving research and development activities and expansion plans. The market witnesses several innovative product and related technology launches each year.

Dominant key players in the next-generation memory market are covered in the report.

Buy this Report:https://www.marketresearchfuture.com/checkout?currency=one_user-USD&report_id=2448

For instance, on April 14, 2022, Keysight Technologies, Inc., a leading technology company, announced a funding deal with SK Hynix to speed semiconductor memory technology development. SK Hynix can use integrated peripheral component interconnect express (PCIe) 5.0 test platforms by Keysight to speed the development of memory semiconductors used to design advanced solutions capable of supporting high data speeds and managing massive data.

Keysight delivers advanced design and validation solutions to help accelerate innovation to connect and secure the world. Keysight's integrated PCIe test solutions are used by leading memory chip makers to validate Compute Express Link (CXL) technology. Keysight's PCIe test platforms enable SK Hynix to validate the performance of bandwidth memory expansion modules deployed in many Industry 4.0 applications.

Related Reports:

Next Generation Biometrics Market, By Component, Technology, Verticals, Authentication Type - Forecast till 2027

Next-Generation Power Semiconductors Market Information, by Product Material, by Device, by Applications- Forecast- 2027

Next Generation Batteries Market Research Report: By Type and By Application - Forecast till 2027

About Market Research Future:

Market Research Future (MRFR) is a global market research company that takes pride in its services, offering a complete and accurate analysis regarding diverse markets and consumers worldwide. Market Research Future has the distinguished objective of providing optimal-quality and granular research to clients. Our market research studies on products, services, technologies, applications, end users, and market players for global, regional, and country-level market segments enable our clients to see more, know more, and do more, which helps answer your most important questions.

Follow Us: LinkedIn | Twitter

Go here to see the original:
Next-Generation Memory Market Estimated to Reach USD 14 Billion, at a CAGR of 29.9% by 2030 - Report by Market Research Future (MRFR) - GlobeNewswire

Security and Backup Alignment Critical to Ransomware Recovery | eWEEK – eWeek

Ransomware continues to plague organizations all over the world. The most common entry points of ransomware into enterprise IT environments are phishing emails, malicious links, and dubious websites, according to Veeam Software's 2022 Ransomware Trends Report.

The provider of backup, recovery, and data management solutions found that 44 percent of organizations have experienced such cyberattacks in the past year, while 41 percent named infected patches and software packages as the other ransomware culprits.

Veeam surveyed 1,000 independent IT leaders in 16 countries to determine the impact ransomware has had on various organizations. All respondents experienced at least one cyberattack in 2021, with many experiencing at least two attacks. Approximately half (47 percent) of their data was encrypted by ransomware.

Also see: The Successful CISO: How to Build Stakeholder Trust

Once bad actors enter the IT environment, they typically target mainstream platforms like backup repositories (94 percent) and production platforms (80 percent). Many ransomware attacks are based on known vulnerabilities within mainstream hypervisors, operating systems, or database servers. Hence, organizations should have broader conversations not only with their cybersecurity teams but also with database administrators to ensure that database servers are secure, hypervisors are patched, and updates are routinely administered.

Veeam also surveyed 3,393 organizations in a separate 2022 Data Protection Trends Report. According to those findings, 76 percent of organizations experienced at least one ransomware attack last year, while 24 percent either avoided attacks or weren't aware of an attack. This data underscores the fact that ransomware attacks are extremely common and affect most organizations.

The most shocking data points in the study revolved around what happens when customers pay for the ransom. Only about half (52 percent) of those with encrypted data successfully recovered their data when the ransom was paid.

A whopping 24% of organizations were not able to get their data back, even when the ransom was paid. When paying ransom, most (72 percent) organizations used some form of insurance. Fifty-seven percent of respondents said they have cyber insurance that includes ransomware coverage, 30 percent have cyber insurance without ransomware coverage, and 13 percent don't have cyber insurance.

The likelihood that organizations will pay a ransom following a cyberattack is high, not only because they value their data but also to avoid remediating significant portions of their infrastructure. Veeams ransomware report found most organizations were able to start remediation either the same or the following day. Those respondents said recovery took an average of 18 days to complete.

The usual remediation options for ransomware are either restoring from backups or paying a ransom to recover data. Restoring from backup can result in data loss if an organization doesn't have the right capabilities in place. Organizations should have tested and secure backups that can be restored quickly. For this reason, a backup infrastructure should be part of every organization's cybersecurity defense plan.

It's important to understand the difference between protecting backup repositories and having clean data within the repositories. Just because a repository is protected doesn't mean the contents are malware-free. Using clean backups is key to successful recovery. Nearly a third (31 percent) of organizations surveyed by Veeam relied on immutability, while 46 percent used a sandbox and 36 percent restored directly to production and then scanned for safety.

Also see: Secure Access Service Edge: Big Benefits, Big Challenges

Secure backup is the last line of defense against ransomware. Organizations should have at least three copies of data on two different media, with one copy offsite, one offline, air-gapped, or immutable, and zero errors on backup verification. By applying this modern 3-2-1-1-0 rule, organizations can be better prepared to deal with cyberattacks, especially as the threat of ransomware keeps growing.
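To make the rule concrete, the sketch below checks a backup inventory against each clause of 3-2-1-1-0: three copies, two media types, one offsite, one isolated (offline, air-gapped, or immutable), and zero verification errors. The copy attributes are hypothetical fields, not any vendor's schema.

```python
from dataclasses import dataclass

@dataclass
class BackupCopy:
    media: str          # e.g. "disk", "tape", "object-storage"
    offsite: bool
    isolated: bool      # offline, air-gapped, or immutable
    verify_errors: int  # errors from the last restore test

def satisfies_3_2_1_1_0(copies: list) -> bool:
    """3 copies, 2 media types, 1 offsite, 1 isolated, 0 verification errors."""
    return (
        len(copies) >= 3
        and len({c.media for c in copies}) >= 2
        and any(c.offsite for c in copies)
        and any(c.isolated for c in copies)
        and all(c.verify_errors == 0 for c in copies)
    )

copies = [
    BackupCopy("disk", offsite=False, isolated=False, verify_errors=0),
    BackupCopy("object-storage", offsite=True, isolated=True, verify_errors=0),
    BackupCopy("tape", offsite=True, isolated=True, verify_errors=0),
]
print(satisfies_3_2_1_1_0(copies))  # True
```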

Luckily, 95 percent of organizations now use at least one method of retaining isolatable backup data. The report found 74 percent of organizations use some type of cloud service and 67 percent use on-premises storage.

Most organizations use a combination of cloud services and tape data storage, which is a lower cost alternative to other storage solutions. Smaller organizations are likely to choose cloud services, while large enterprises are more selective when it comes to long-term data retention due to scale and regulatory challenges.

While backup is fundamental to data recovery, the alignment between cybersecurity and backup teams is currently lacking at most organizations. In fact, 52% of the respondents said their organization needs to make significant improvements for cybersecurity and IT backup teams to collaborate successfully.

Therefore, in addition to having backup solutions and comprehensive disaster response plans in place, sharing those plans with other security teams is what will help organizations improve their incident response.

Also see: Best Website Scanners

Visit link:
Security and Backup Alignment Critical to Ransomware Recovery | eWEEK - eWeek

Amazon Is About to Kill Your Cloud Cam (And Offer a Replacement) – Review Geek


Since its launch in 2017, the Amazon Cloud Cam has only grown more and more obscure. It was quickly superseded by Ring and Blink, and on December 2nd of 2022, the Cloud Cam will die. Thankfully, Amazon will offer you a replacement.

In an email sent to active Cloud Cam users, Amazon confirmed that it's abandoning the Cloud Cam platform in favor of more popular and modern smart home devices. Your Cloud Cam will work as normal until December 2nd, at which point it will become a doorstop. I suggest downloading any important recordings from the device now.

As the number of Alexa smart home devices continues to grow, we are focusing efforts on Ring, Blink, and other technologies that make your home smarter and simplify your everyday routines. Therefore, we have decided to no longer continue support for Amazon Cloud Cam and its companion apps.

What this means for you: On December 2, 2022, you will no longer be able to use your Cloud Cam device or its companion apps. Until then, you will be able to download any video recordings if available. All video history will be deleted on December 2, 2022.

On the bright side, Amazon will send a complimentary Blink Mini (plus a year of Blink Subscription Plus) to all affected customers. The Blink Mini is nearly identical to the old Cloud Cam, though of course, it's a bit more modern and offers better integration with Alexa devices.

A free replacement camera is better than nothing. Still, we're frustrated by how many smart devices have kicked the bucket this year. It seems that connecting all this stuff to cloud servers is a bad idea; it guarantees an early end of life.

Amazon will offer you the free Blink Mini and Blink Subscription Plus before the December 2nd cutoff date. Keep an eye on your inbox so you don't miss the offer.

Read the original:
Amazon Is About to Kill Your Cloud Cam (And Offer a Replacement) - Review Geek

The state of global cloud ecosystem, report by MIT Technology – Wire19

The concept of cloud computing has been around for quite some time now. It started as an experiment in cost savings, flexibility, and innovation but quickly became indispensable to businesses everywhere, enabling new technologies like 5G wireless networks that can deliver data faster than ever before.

MIT Technology Review Insights, in partnership with Infosys Cobalt, recently released The Global Cloud Ecosystem Index 2022, which paints a picture of worldwide cloud development and innovation. The key findings of the survey are as follows.

Infrastructure

Cloud infrastructure indicates how well each country is served by telecommunications networks and computing resources that enable cloud-centric production models.

The fast-evolving digital infrastructure is promoting access to cloud capabilities and helping bring computing resources, and the applications they enable, closer to consumers and enterprises with greater speed and trust. It is now necessary to build smaller facilities that use edge computing resources, instead of relying on vast clusters of data centers. This can also accelerate growth in broadband networks and 5G infrastructure, leading to the creation of high-capillary networks. Enhancing a nation's cloud capabilities requires many computing resources that are widely dispersed, rather than just a few that are densely packed together. The list of countries leading and lagging in cloud infrastructure development is shown below.

Ecosystem adoption

Global cloud ecosystem adoption represents an aggregation of indicators that rate the usage of digital channels among a nation's constituents, the density of SaaS organizations in the economy, and the relative affordability of broadband prices. Viewed together, these indicators show how much and how well people and businesses use cloud resources to enhance their productivity and promote economic progress.

Cloud computing is becoming more popular because it makes consumer markets and industry more efficient. It also makes governments more responsive to the people. By fostering greater efficiency of cloud usage, digital transformation can be accelerated across the economy to create a virtuous cycle for the entire ecosystem.

Growth in the adoption of open API-based architectures is enabling organizations to share data, insights, and computing resources with their partners more efficiently and reciprocally. It also enables the ecosystem partners to integrate their applications much better. Cloud-aspirational governments are experimenting with frameworks that will accelerate the propagation of APIs and open development platforms.

Scores of countries based on ecosystem adoption are shown below.

Security and assurance

Since the cloud is globally interconnected, public policymakers and regulatory environments must support cross-border flows of secure data and access to application and computing resources. For this, national governments must try to increase their unique digital assets to attract investment and increase their international competitiveness. At the same time, they must also promote open, global, and frictionless digital connectivity.

According to the report, governments must promote a trust infrastructure as they work to nurture their cloud-centric digital economies. This implies that public policy, regulatory frameworks, and social conventions must assure that the digital channels used by an economy's consumers and businesses are efficient, effective, and secure. The survey rates countries for security and assurance based on a collection of indicators, including the quality of their cybersecurity regime, the effectiveness of their regulatory framework, and press freedom. The countries that score highest in this pillar are Finland and the Netherlands.

Talent and human affinity

Cloud-related talent is important for a country's market. The availability of talent and skills is an important consideration for cloud services companies and SaaS providers when determining where to set up service nodes and regional operations globally.

Cloud service providers can improve performance by locating their servers closer to users. This is because digital infrastructure helps reduce the amount of time it takes for data to travel between users and providers. Additionally, having a talent pool nearby allows providers to better serve their customers.

Many cloud ecosystem leaders attract and foster pools of talent that are important for cloud-centric operations in site locations, including having many skilled workers available. For cloud companies looking to implement infrastructure- and process-intensive projects at scale, the availability of skilled workers is also an important consideration. This helps India, Vietnam, and the Philippines rank second, third, and fourth, respectively, among the top ten emerging economies in the Global Cloud Ecosystem Index 2022.

Conclusion: "The future is extremely exciting, almost unimaginably so, when we consider the metaverse and the future of digital business, and the cloud is a very important enabler of both," says Vishal Salvi, senior vice president, chief information security officer, and head of Infosys cybersecurity practice.

According to reports, the global cloud infrastructure spending in the last quarter of 2021 grew 34% year-on-year, to $53.5 billion, and about two-thirds of it was captured by the world's top three hyperscale providers. However, increasingly cloud-dependent businesses are struggling to control their spending.

To maintain their leadership in the cloud ecosystem, countries will require vigilance to ensure that cloud resources are maintained in an environment that is distributed, readily accessible, and fit for purpose. In the case of cloud laggards, the challenges are no less intimidating and are aggravated by the need for foundational infrastructure and equitable access to digital services to start boosting consumer and business adoption.

Also read:IDC expects the Indian public cloud services market to grow at a CAGR of 24% by 2026

See the article here:
The state of global cloud ecosystem, report by MIT Technology - Wire19

The next iteration of cloud transformation: a cloud-like experience, anywhere – IT World Canada

When people think about digital transformation, they often think public cloud. However, the public cloud isn't suited to every application or every project. There is massive appeal to the cloud experience: not having to pay anything up front, paying only for what you use, and having a monthly bill that is metered up and down. It has also become impactful in this new working age of remote, hybrid, and onsite workers, where data needs to be accessible everywhere and at any time while still providing security. So, if not everything can be moved to the public cloud, how can we bring a cloud-like experience to data and applications, no matter where they are housed?

Perhaps we should stop thinking about cloud as a destination, and start thinking about cloud as an experience (or operating model), one where you can scale your technology requirements up and down on demand, and only pay for what you use. I posit that the multi-cloud experience is the next iteration of digital transformation, one where data can be stored in a combination of places, public and/or private, on premises, in colocation centres, at the edge, and in the cloud, while accessible through a single platform.

A decade after the public cloud emerged, more than 70 per cent of applications remain outside the public cloud due to challenges related to compliance, data privacy, latency, and app entanglement. For these workloads, the multi-cloud model will be a worthy solution, especially in an age where working anywhere at any time has become table stakes.

In the past, clients had to balance running their main business, plus managing a datacentre, and managing the real estate to house that datacentre. They had to predict their needs several years in advance of use, and pivoting for any changes was a major challenge. In the months ramping up a project, they would have costly servers sitting dormant. The RFP process could be daunting in this regard: determining needs for a project that hadn't yet started and finding ways to maximize datacentres while launching new ideas and projects.

With a decade of buzz surrounding cloud models, the technology of the past will ultimately not be replaced by public cloud on its own, but by the multi-cloud model, which will reign for the next decade at minimum. This new archetype of a mixed approach to data processing and storage, with some on-premises technology for items under regulatory or latency pressures, some at the edge (for example, in factories, branch offices, or on distant oil rigs), and some in the public cloud, will prove to be the most competitive, effective, and flexible solution that organizations will adopt for their needs.

The multi-cloud model helps prepare clients for what they know they have coming and the unknowns that the future can bring. The benefits of a multi-cloud model are many, and they include:

In Canada, we have a unique opportunity to not only bring this multi-cloud approach to our clients' applications and data, but to help them monetize their data in new and unique ways. By establishing their digital core in highly available real estate with global software-defined interconnection capabilities, networks can be rearchitected on demand, optimizing data transmission from remote sites to cloud apps in minutes, not months. So, not only are clients experiencing a hyper-scalable and metered costing system that provides them full control of their operating expenses, they're also seeing value from data sharing at the edge.

More:
The next iteration of cloud transformation: a cloud-like experience, anywhere - IT World Canada

Why mTLS Should Be Everywhere in Kubernetes and Not Just Entry and Exit – Security Boulevard

Kubernetes has rapidly become one of the most widely used tools for managing containerized applications. The 2021 Cloud Native Computing Foundation Annual Survey found that 96% of the more than 2300 Kubernetes-specific respondents were either already using Kubernetes or evaluating it. In addition, the survey identified 5.6 million Kubernetes users, an increase of nearly 70% in a single year.

Accordingly, Kubernetes is also a common target for cyberattacks. In a survey by Veritas Technologies, 94% of respondents expressed concern about ransomware attacks on Kubernetes environments, and 56% have already suffered at least one such attack.

Unfortunately, many organizations are rushing ahead with Kubernetes deployments before fully understanding all relevant security issues. By doing so, they are unnecessarily increasing their attack surface and exposing themselves to hacks.

There are many steps companies can take to secure their Kubernetes workloads. One best practice that Kubernetes itself recommends is the extensive use of transport layer security (TLS). TLS helps prevent traffic sniffing in a client-server connection by verifying the server and encrypting the traffic between the client and server.

An even better option that developers should apply everywhere possible is mutual TLS (mTLS). This article discusses the benefits of mTLS and how developers can use it to their advantage to frustrate would-be attackers.

Mutual TLS takes TLS to the next level by authenticating both sides of the client-server connection before exchanging communications. This may seem like a common-sense approach, but there are many situations where the client's identity is irrelevant to the connection.

When only the server's identity matters, standard unidirectional TLS is the most efficient approach. TLS uses public-key encryption, requiring a private and public key pair for encrypted communications. To verify the server's identity, the client sends a message encrypted using the public key (obtained from the server's TLS certificate) to the server. Only a server holding the appropriate private key can decrypt the message, so successful decryption authenticates the server.

To have bi-directional authentication would require that all clients also have TLS certificates, which come from a certificate authority. Because of the sheer number of potential clients (browsers accessing websites, for example), generating and managing so many certificates would be extremely difficult.

However, for some applications and services, it can be crucial to verify that only trusted clients connect to the server. Perhaps only certain users should have access to particular servers. Or maybe you have API calls that should only come from specific services. In these situations, the added burdens of mTLS are well worth it. And if your organization reinforces security with zero trust policies where every attempt to access the server must be verified, mTLS is necessary.

mTLS adds a separate authentication of the client following verification of the server. Only after verifying both parties to the connection can the two exchange data. With mTLS, the server knows that a trusted source is attempting to access it.
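To make the distinction concrete, here is a minimal sketch using Python's standard ssl module; the certificate file names are placeholders. The structural difference from one-way TLS is that the server loads a CA bundle for its trusted clients and sets verify_mode to CERT_REQUIRED.

```python
import ssl

# Server side: in addition to presenting its own certificate, the server now
# requires and verifies a certificate from the client. This single setting is
# what turns one-way TLS into mutual TLS.
server_ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
server_ctx.load_cert_chain(certfile="server.crt", keyfile="server.key")
server_ctx.load_verify_locations(cafile="trusted-clients-ca.crt")
server_ctx.verify_mode = ssl.CERT_REQUIRED

# Client side: verify the server as in ordinary TLS, but also present a
# certificate of its own for the server to check during the handshake.
client_ctx = ssl.create_default_context(cafile="server-ca.crt")
client_ctx.load_cert_chain(certfile="client.crt", keyfile="client.key")
```

With these contexts in place, any client that cannot present a certificate signed by the trusted CA fails the handshake before a single byte of application data is exchanged.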

mTLS is generally valuable for defeating a variety of attacks, particularly those that depend on impersonating a legitimate client or intercepting traffic in transit.

With Kubernetes, many different communication pathways can benefit from mTLS, particularly communications between microservices and communications between microservices and any API server. Using mTLS secures these communications without needing any specific identity management or verification process at the application level.

Containerized applications may include many different microservices that exchange data, including sensitive customer data. mTLS keeps hackers from intercepting these communications and creating a data breach.

mTLS also gives you added visibility into potential attacks. Just as with other tools that give you real-time information on anomalous behaviors (network traffic analysis tools like BlueHexagon, website alteration and defacement monitoring tools like Visualping, etc.), by reviewing audit logs, you can quickly pick up unauthorized activity.

Finally, mTLS need not create added difficulties for the user. Instead, properly implemented, mTLS can give users an added sense of security without extra complexity, enhancing the overall user experience. When mTLS acts at the platform level, only a single authentication action is necessary. Users don't have to reauthenticate for every microservice within the application.

While using mTLS in your Kubernetes environment can give you an added comfort level about security, it comes with a cost. That cost comes from the need for an effective certificate provisioning and management system.

When applying mTLS widely in a Kubernetes container deployment, recall that many clients are individual services. Each of these services will need its own certificate, and there will be many.

In addition, Kubernetes services, by their very nature, are impermanent. With replicas of Kubernetes services being created and destroyed dynamically, the challenge of managing certificates can be daunting.

Your certificate management system must also be robust enough to handle the deprovisioning and reprovisioning of certificates according to your internal security policies. If, like many, you rely on certificate rotation (giving certificates limited lifetimes and issuing new certificates on expiration) to minimize the chances that a hacker can exploit them, you must be able to assign new certificates to all affected services quickly.
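In practice, most teams delegate this work to an issuer such as cert-manager rather than hand-rolled scripts. As a bare-bones illustration of the underlying operation, the sketch below uses the official Python Kubernetes client to swap freshly issued material into a TLS secret; the secret name, namespace, and file paths are placeholders.

```python
import base64
from kubernetes import client, config  # pip install kubernetes

def rotate_tls_secret(name: str, namespace: str, crt_path: str, key_path: str) -> None:
    """Replace a kubernetes.io/tls secret with freshly issued material.
    Workloads pick up the new certificate on restart or reload."""
    config.load_kube_config()  # use load_incluster_config() inside a cluster
    v1 = client.CoreV1Api()

    def b64(path: str) -> str:
        with open(path, "rb") as f:
            return base64.b64encode(f.read()).decode()

    body = client.V1Secret(
        metadata=client.V1ObjectMeta(name=name, namespace=namespace),
        type="kubernetes.io/tls",
        data={"tls.crt": b64(crt_path), "tls.key": b64(key_path)},
    )
    v1.replace_namespaced_secret(name, namespace, body)

# Hypothetical usage after re-issuing a service certificate:
# rotate_tls_secret("orders-mtls", "prod", "new.crt", "new.key")
```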

Fortunately, many different tools are available to help you easily manage certificates, even if you have a massive number of certificates for your Kubernetes services.

As more and more businesses transition to containerization and the use of container orchestration services like Kubernetes, more need will exist for effective and efficient methods for securing data flows between services in the containers. By applying mTLS at every point where it is possible, developers can reduce an applications attack surface and minimize the risk that a hacker can access sensitive data and systems.

Continued here:
Why mTLS Should Be Everywhere in Kubernetes and Not Just Entry and Exit - Security Boulevard

GigaIO Announces Series of Composability Appliances Powered by AMD, First Edition Purpose-Built for Higher Education and Launched at ISC – Business…

SAN DIEGO--(BUSINESS WIRE)--GigaIO, provider of the world's only open rack-scale computing platform for advanced scale workflows, today announced the launch of a new composability appliance. The GigaIO Composability Appliance: University Edition, powered by AMD, is a flexible environment for heterogeneous compute designed for Higher Education that can easily accommodate the different workloads required for teaching, professor research, and grad-student research. Future iterations of the appliance will bring the benefits of composability to Manufacturing and Life Science users over the coming year.

"With the launch of this rack-scale appliance, we are bringing easy-to-use infrastructure to the classroom, where composability can provide students a wide array of flexible technology for learning and growth," said Alan Benjamin, CEO of GigaIO. "AMD is the perfect partner for this venture because we share a commitment to create an open, industry standards-based platform. We are keen to make it easy for people to avail themselves of this new technology, and with the experience and success that the company has had in the Higher Ed space, our first joint product with AMD is well positioned for critical success."

"Composability can supply students with access to a range of the equipment they will use in the real world, so they will be better prepared for the job market," said Brock Taylor, Director, Global HPC Solutions, AMD. "Our recent joint deployments of composable infrastructure at the San Diego Supercomputing Center at the University of California San Diego and the Texas Advanced Computing Center at the University of Texas, Austin demonstrate the promise of composability to solve complex computational problems."

The GigaIO Composability Appliance: University Edition, powered by AMD, was built with ease of use in mind, so that it can be used in a classroom or laboratory setting without requiring dedicated IT expertise. It is a complete, highly efficient, future-proofed composable infrastructure solution that provides cloud-like agility to on-prem infrastructure, allowing cloud bursting as needed within a single interface. Flexibility and composability mean that systems don't remain idle while not being used for teaching; they can instead be reconfigured for actual simulation work and swapped back into teaching mode as needed.

For ease of use, the GigaIO Composability Appliance: University Edition is delivered with NVIDIA Bright Cluster Manager pre-installed, combining its ability to easily build and manage clusters with GigaIO's ability to connect AMD accelerators, AMD-powered servers, and other devices in a seamless dynamic fabric. Native integration of GigaIO's universal dynamic memory fabric, FabreX™, within NVIDIA Bright Cluster Manager allows owners to easily assign configurations prior to use, dividing hardware among students to allow them the experience of running actual simulation workloads on the same compute infrastructure they will utilize upon graduation.

FabreX enables an entire server rack to be treated as a single compute resource, handling all compute communication, including server-to-server traffic (such as MPI and NVMe-oF). Resources normally located inside of a server, including accelerators, storage, and even memory, can now be pooled in accelerator or storage enclosures, where they are available to all of the servers in a rack. These resources and servers continue to communicate over a native PCIe memory fabric for the lowest possible latency and highest possible bandwidth performance, just as they would if they were plugged into the server motherboard.

GigaIO Composability Appliances are designed to accommodate a variety of accelerator types and brands and provide a truly vendor-agnostic environment. The University Edition units are container-ready and easily composed via bare metal, and feature AMD EPYC™ processors and AMD Instinct™ MI210 accelerators. The GigaIO Composability Appliance: University Edition, powered by AMD, is offered in three configurations and is available now. Learn more.

About GigaIO

Headquartered in Carlsbad, California, GigaIO democratizes AI and HPC architectures by delivering the elasticity of the cloud at a fraction of the TCO (total cost of ownership). With its universal dynamic infrastructure fabric, FabreX, and its innovative open architecture using industry-standard PCI Express (and soon CXL) technology, GigaIO breaks the constraints of the server box, liberating resources to shorten time to results. Contact info@gigaio.com, visit http://www.gigaio.com, or follow on Twitter and LinkedIn.

AMD, the AMD Arrow logo, EPYC, AMD Instinct, and combinations thereof are trademarks of Advanced Micro Devices, Inc.

See the original post:
GigaIO Announces Series of Composability Appliances Powered by AMD, First Edition Purpose-Built for Higher Education and Launched at ISC - Business...