
AWS launches fresh challenges to on-prem hardware vendors – The Register

Amazon Web Services has launched two significant challenges to on-prem hardware.

One is the addition of Dedicated Hosts to its on-prem cloud-in-a-box Outposts product.

Outposts see AWS drop a rack full of kit, or individual servers, onto customers' premises. AWS manages that hardware, which is designed to run its own cloud services such as the Elastic Compute Cloud (EC2) on-prem.

AWS slices and dices its physical hardware into many different virtual server configurations, either in its cloud or on Outposts. But EC2 also allows customers to rent Dedicated Hosts that AWS describes as "a physical server fully dedicated for your use."

As of June 1, Dedicated Hosts can run on Outposts, meaning AWS effectively offers standalone on-prem servers. They're not quite bare metal, because Dedicated Hosts ship with a hypervisor. But they're otherwise just servers that customers can configure as they choose, rather than being confined to the possibilities offered by EC2 instance types.
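For illustration, here is a minimal sketch of how allocating a Dedicated Host on an Outpost looks with boto3, assuming the EC2 AllocateHosts API's Outpost support; the account ID, Outpost ARN, region, and instance family are placeholder values, not details from the article.

```python
# Minimal sketch: allocating an EC2 Dedicated Host on an Outpost via boto3.
# The Outpost ARN, account ID, region, and instance family are placeholders.
import boto3

ec2 = boto3.client("ec2", region_name="us-west-2")

response = ec2.allocate_hosts(
    AvailabilityZone="us-west-2a",
    InstanceFamily="c5",          # host accepts any c5 instance size
    Quantity=1,
    OutpostArn="arn:aws:outposts:us-west-2:123456789012:outpost/op-0123456789abcdef0",
    AutoPlacement="off",          # require explicit host targeting at launch
)
print(response["HostIds"])
```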

AWS pitches Dedicated Hosts as a way to make software licenses more portable. Many software vendors charge by the server, socket, or core for on-prem products, but offer different terms when their wares run in a public cloud. Because Dedicated Hosts aren't abstracted into EC2 instances, customers can apply those on-prem licenses.

AWS still manages the Outposts hardware employed as Dedicated Hosts, meaning this isn't quite AWS renting out an on-prem server, but it's very, very close.

Another move that brings AWS closer to pre-cloud computing is its decision to offer its on-prem Storage Gateway Hardware Appliance for sale through real-world resellers.

The Gateway devices serve as local storage and sync to the Amazonian cloud. They can present as a single logical storage resource, spanning the on-prem box and AWS storage services.

Distribution giant TD Synnex has picked up the product, meaning its tens of thousands of resellers can offer the AWS box.

Selling the Gateways through the channel means AWS has the muscle to challenge on-prem storage vendors like never before. And bringing Dedicated Hosts into Outposts means AWS has an offering that challenges server vendors in their own backyards.

Excerpt from:
AWS launches fresh challenges to on-prem hardware vendors - The Register


FDT Group introduces the FDT Unified Environment (UE) for field to cloud data harmonisation – Control Engineering Website

06 June 2022

Driven by digital transformation use cases that support new Industrial Internet of Things (IIoT) business models, the standard has evolved to include a new distributed, multi-user FDT Server application with built-in, pre-wired OPC UA and Web servers. The result is an FDT Unified Environment (FDT 3.x) that merges IT/OT data analytics and supports service-oriented architectures.

The new Server environment, deployable in the cloud or on-premise, delivers the same use cases and functionality as the previous generation FDT hosting environment, but now provides data storage for the whole device lifecycle at the core of the architecture. This allows information modeling and consistent data delivery to authenticated OPC UA and browser-based clients (tablets and phones) to address the challenges of IIoT.

"Collaboration and data harmonisation are the keys to manufacturing modernisation," said Steve Biegacki, managing director at FDT Group. "FDT UE delivers a data collaborative engineering specification and toolset to enable modern distributed control, improving operations and production reliability and impacting the bottom line for new IIoT architectures. I am proud to witness our first group of members showcasing their FDT 3.0 WebUI-based DTM prototypes mixed with 2.0 DTMs in the new Server and Desktop environments, running IO-Link and HART, at Hannover Messe 2022, live and in person. To be present as a guest in the OPC Foundation booth to demonstrate field-to-cloud connectivity, OPC UA enterprise access and services, along with mobile field device operation, is one for the industry history books."

FDT UE consists of FDT Server, FDT Desktop, and FDT DTM components. System and device suppliers can take a well-established standard they are familiar with and easily create and customize standards-based, data-centric, cross-platform FDT 3.0 solutions, expanding their portfolio offerings to meet requirements for next-generation industrial control applications. Each solution auto-enables OPC UA integration and allows the development team to focus on value-added features that differentiate their products, including WebUI and App support. FDT Desktop applications are fully backward compatible, supporting the existing install base.
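To give a flavour of the field-to-cloud access described above, here is a hedged sketch of an OPC UA client reading a device parameter from a server such as the one built into FDT UE, using the community asyncua Python library. The endpoint URL and node ID are invented for the example; a real FDT Server would publish its own address space.

```python
# Hedged sketch: reading a device parameter over OPC UA with the asyncua library.
# The endpoint URL and NodeId below are illustrative placeholders.
import asyncio
from asyncua import Client

async def main():
    url = "opc.tcp://fdt-server.example.com:4840"  # hypothetical FDT Server endpoint
    async with Client(url=url) as client:
        node = client.get_node("ns=2;i=1001")      # hypothetical device-parameter node
        value = await node.read_value()
        print("Device parameter:", value)

asyncio.run(main())
```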

Read the original here:
FDT Group introduces the FDT Unified Environment (UE) for field to cloud data harmonisation - Control Engineering Website


Global DevOps Market Expected to Witness Remarkable Growth by 2027 due to the Increasing Demand for Advanced Technologies to Optimize Business…

New York, USA, June 06, 2022 (GLOBE NEWSWIRE) -- According to a report published by Research Dive, the global DevOps market is anticipated to generate revenue of $23,362.8 million and grow at a noteworthy CAGR of 22.9% over the analysis timeframe from 2020 to 2027.

As per our analysts, the increasing demand for advanced DevOps technologies to improve various business operations with the rapidly changing market requirements is expected to fortify the growth of the DevOps market over the estimated period. Besides, the rising need for fast and constant application delivery systems is expected to bolster the growth of the market during the forecast period. Moreover, the increasing incorporation of innovative technologies such as machine learning and artificial intelligence to deliver scalable DevOps platforms and solutions is expected to create massive investment opportunities for the market throughout the analysis timeframe. However, the high costs of implementing advanced DevOps technologies may hamper the growth of the market during the forecast period.


Segments of the DevOps Market

The report has divided the market into various segments based on solution, deployment type, end-user, and region.

Solution: Monitoring and Performance Sub-Segment to be Most Lucrative

The monitoring & performance management sub-segment is predicted to garner a revenue of $6,410.3 million during the forecast timeframe. This is mainly because of the increasing utilization of DevOps tools for performance management of infrastructures such as cloud networks, apps, web servers, and many more. Moreover, the constant monitoring of customer behavior to optimize the timely response to customers and deliver complete customer satisfaction is expected to propel the growth of the market sub-segment over the forecast period.

Deployment Type: Cloud Sub-Segment to be Most Productive

The cloud deployment type sub-segment accounted for $2,944.2 million in the year 2019 and is expected to experience exponential growth over the analysis period. This is mainly because of the numerous benefits of cloud-based platforms, such as remote access to files, lower deployment costs, and many more. Moreover, the growing demand for software automation is enhancing the demand for cloud-based DevOps services, which is expected to foster the growth of the DevOps market sub-segment during the forecast timeframe.

End-User: Small and Medium Enterprises Sub-Segment to be Most Profitable

The small and medium enterprises (SMEs) sub-segment generated $2,292.1 million in 2019 and is expected to continue steady growth over the analysis timeframe. This is majorly due to the rapid adoption of DevOps platforms by SMEs in software optimization and development services. In addition, several other benefits of DevOps technologies such as saving time for testing, designing, ideas, and many more, are expected to augment the growth of the market sub-segment during the estimated period.

Region: North America Region to Have Expansive Growth Opportunities

The North America region of the DevOps market held the maximum share of the market, growing at a CAGR of 47.5%, and is expected to see significant growth throughout the forecast period. This is mainly due to the presence of technically advanced economies that adopt DevOps technologies in this region. Moreover, strong competitive rivalry in this region, which has a high focus on application and software development, is predicted to drive the growth of the market over the analysis period.


Covid-19 Impact on the DevOps Market

The outbreak of the Covid-19 pandemic has devastated various industries; however, it has had a positive impact on the DevOps market. Many businesses adopted cloud systems and platforms to increase their business growth during the pandemic period. Moreover, many organizations launched highly scalable, reliable, and secure IT infrastructure to continue their business operations. All these factors boosted the growth of the market during the period of crisis.


Key Players of the Market

The major players in the DevOps market include

These players are widely working on the development of new business strategies, such as product development, mergers and acquisitions, and partnerships and collaborations, to acquire a leading position in the global industry.

For instance, in August 2021, Lucid, a leading provider of visual collaboration software, announced a collaboration with Microsoft Azure DevOps, a Microsoft product that provides version control, reporting, requirements management, and more, for its virtual whiteboard, Lucidspark. With this collaboration, the companies aimed to provide a flexible workspace to visualize backlogs, work items, and delivery plans, enabling users to identify project delays, complete reviews, and observe the customer journey.

Further, the report also summarizes other important aspects such as the financial performance of the key players, SWOT analysis, the latest strategic developments, and product portfolio.


More here:
Global DevOps Market Expected to Witness Remarkable Growth by 2027 due to the Increasing Demand for Advanced Technologies to Optimize Business...


Millions of MySQL servers found exposed online – is yours among them? – TechRadar

Millions of MySQL servers were recently discovered to be publicly exposed to the internet and using the default port, researchers have found.

Nonprofit security organization The ShadowServer Foundation discovered a total of 3.6 million servers configured in such a way that they can easily be targeted by threat actors.

Out of the total 3.6 million, 2.3 million are connected over IPv4, while 1.3 million are reachable over IPv6. They're all using the default TCP port 3306.

"While we do not check for the level of access possible or exposure of specific databases, this kind of exposure is a potential attack surface that should be closed," the non-profit explained in an announcement.

Most of the servers are found in the United States (more than 1.2 million), with China, Germany, Singapore, the Netherlands, and Poland also hosting significant numbers of servers.

Internet-connected servers are a major pillar of today's enterprise, as they allow web services and applications to operate remotely. But misconfigured servers are one of the most frequent errors that lead to data loss, as many ransomware attacks and remote access trojan (RAT) deployments have started with a misconfigured database.

Researchers have been very vocal about the need to properly secure databases, which includes strict user policies, changing and monitoring ports, enabling binary logging, keeping a close eye on queries, and encrypting all of the data, BleepingComputer reminds in its report.
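A sensible first step is simply confirming whether a host answers on the default port at all. The following minimal Python sketch checks TCP reachability on port 3306; it says nothing about credentials or database contents, and the address shown is a documentation-range placeholder.

```python
# Minimal sketch: check whether a host accepts TCP connections on MySQL's default port.
# Reachability only; this says nothing about authentication or data exposure.
import socket

def mysql_port_open(host: str, port: int = 3306, timeout: float = 3.0) -> bool:
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

print(mysql_port_open("203.0.113.10"))  # documentation-range IP; substitute your own host
```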

A report from IBM published in May 2021 claimed that 19% of data breaches happen because IT teams fail to properly protect the assets found within their cloud infrastructure.

This time last year, the company polled 524 organizations that suffered a data breach between August 2019 and April 2020, and also found that the average cost of a data breach increased by half a million dollars during that time.

Via: BleepingComputer

See more here:
Millions of MySQL servers found exposed online - is yours among them? - TechRadar


How to Watch Love Island UK in the US and Abroad in 2022 – Cloudwards

Popular British dating series Love Island packs its glamorous contestants off to a Mallorcan villa in the hopes that love will blossom. If you're in the U.K., catching the new season on ITV2 or the ITV Hub will be a breeze. If you're overseas, though, read on to find out how to watch Love Island U.K. in the U.S. and abroad.

If you've been worried about missing out on all the action, we hope this guide will bring back your summer chill. You can access Love Island from anywhere with a little bit of know-how.

You can watch Love Island U.K. on ITV2 or online on the ITV Hub.

Yes. Seasons 1 through 7 of Love Island U.K. are on Hulu, so season 8 is likely to appear on Hulu sometime soon, though it's unclear when. Meanwhile, you can watch it on the ITV Hub with a VPN.

Love Island season 8 will premiere in the U.K. on ITV2 on Monday, June 6, 2022 at 9:00 p.m. GMT. Viewers in the U.S. and abroad hoping to catch the summer season may find themselves the victims of regional restrictions when they try to access U.K. streaming sites. This geoblocking means certain content can only be streamed in a limited number of locations worldwide.

For example, if you're in the U.S. and try to access the ITV Hub, you won't be able to, because the ITV Hub is only available to U.K. viewers. Fortunately, there's a workaround using a VPN.

There are a couple of options for watching Love Island U.K. from the U.S. All seven seasons of Love Island U.K. are currently on U.S. streaming service Hulu, so season 8 will likely also appear at some point. However, when that will happen is unclear.

Alternatively, you can connect to a quality streaming VPN like ExpressVPN and watch Love Island episodes on the ITV Hub on June 6. Check out our tutorial below to find out how to make that happen.

Follow the below steps to watch Love Island U.K. on the ITV Hub with a VPN. If you want to stream on Hulu from outside the U.S., you can follow the same steps but connect to a U.S. server instead.

Go to ExpressVPNs website and sign up for a plan. All plans come with a 30-day money-back guarantee.

Go to products and download the ExpressVPN app for your device.

Open the ExpressVPN app, then click the three horizontal dots in the location box to view the servers. Finally, choose a U.K. server and click connect.

Go to ITV Hub and sign up for an account. For the postcode, do a quick Google search and pick a random U.K. postcode to enter.

Stream the season or episode you want to watch on the ITV Hub. Just bear in mind that the free version is not ad-free.

You'll need a VPN to access U.K. streaming sites, including watching Love Island. If you're new to VPNs, it can be frustrating to sift through all the reviews to find out which ones are good and which are not. To make things a little easier, here are our top recommendations.

ExpressVPN is the best VPN for streaming, including reality shows like Love Island.


ExpressVPN is always our go-to recommendation for streaming. We test a lot of VPNs with streaming services, and ExpressVPN is very consistent when it comes to breaking through geoblocks. ExpressVPN is blazing fast, especially on its Lightway protocol, increasing your chances of uninterrupted binge-watching.

If you're looking for a VPN that's unfailingly secure, consistent and fast for streaming Love Island, you can't go wrong with ExpressVPN. To find out more about ExpressVPN, check out our ExpressVPN review or try ExpressVPN with its 30-day money-back guarantee.

NordVPN has plenty of British servers to get you the UK IP address needed for UK streaming sites.


NordVPN is another great VPN for watching content online, with its NordLynx protocol keeping your streaming experience smooth and pain-free. It's also comparable to ExpressVPN in terms of security, ease of use and consistency with streaming services. NordVPN is also the cheaper of the two.

NordVPN is in second place to ExpressVPN because you have to enable its obfuscation servers, whereas they're a default with ExpressVPN. Additionally, ExpressVPN is slightly easier to use. Otherwise, this VPN is an excellent option. If you'd like to learn more about what NordVPN has to offer, check out our NordVPN review or use its 30-day money-back guarantee.

Surfshark is an easy-to-use VPN that's great for streaming.


If you're looking for a VPN on the cheaper side, Surfshark is a wallet-friendly service with a lot to like. One of our favorite things about Surfshark, other than the price, is that it allows unlimited simultaneous connections. It's also pretty consistent with major streaming services. We tested it with the ITV Hub, and we were able to stream Love Island in a matter of seconds (after the ads, of course).

Surfshark is behind NordVPN and ExpressVPN in our recommendations because it offers fewer features. That said, it's still a great streaming VPN at a budget price on the two-year plan. For more on Surfshark, check out our Surfshark review or try it out with the 30-day money-back guarantee.


We recommend caution when using free VPNs, as you can't guarantee that every service is secure and private. However, the free VPNs we're happy to vouch for to stream Love Island U.K. are Windscribe and TunnelBear, which are both trustworthy services. Windscribe offers 10GB of free data per month, and TunnelBear offers 500MB.

These data caps mean you'll likely only get through a few episodes before your data runs out. ProtonVPN is another secure VPN that offers a free plan, but unfortunately, its free plan doesn't include U.K. servers. All three of these services offer paid plans, though, if you'd like to benefit from unlimited bandwidth.

We hope we've been able to ease your worries about missing the latest season of Love Island from overseas. Fortunately, it's pretty straightforward to stream if you're using a quality VPN like ExpressVPN, NordVPN or Surfshark.

Do you plan on using a VPN to stream Love Island? If so, which VPN are you going to use? Let us know in the comments and, as always, thanks for reading!


See the article here:
How to Watch Love Island UK in the US and Abroad in 2022 - Cloudwards


What Hugging Face and Microsofts collaboration means for applied AI – The Next Web

This article is part of our series that explores the business of artificial intelligence.

Last week, Hugging Face announced a new product in collaboration with Microsoft called Hugging Face Endpoints on Azure, which allows users to set up and run thousands of machine learning models on Microsoft's cloud platform.

Having started as a chatbot application, Hugging Face made its fame as a hub for transformer models, a type of deep learning architecture that has been behind many recent advances in artificial intelligence, including large language models like OpenAI's GPT-3 and DeepMind's protein-folding model AlphaFold.


Large tech companies like Google, Facebook, and Microsoft have been using transformer models for several years. But the past couple of years have seen a growing interest in transformers among smaller companies, including many that don't have in-house machine learning talent.

This is a great opportunity for companies like Hugging Face, whose vision is to become the GitHub for machine learning. The company recently secured $100 million in Series C funding at a $2 billion valuation. The company wants to provide a broad range of machine learning services, including off-the-shelf transformer models.

However, creating a business around transformers presents challenges that favor large tech companies and put companies like Hugging Face at a disadvantage. Hugging Face's collaboration with Microsoft can be the beginning of a market consolidation and a possible acquisition in the future.

Transformer models can do many tasks, including text classification, summarization, and generation; question answering; translation; writing software source code; and speech-to-text conversion. More recently, transformers have also moved into other areas, such as drug research and computer vision.

One of the main advantages of transformer models is their capability to scale. Recent years have shown that the performance of transformers grows as they are made bigger and trained on larger datasets. However, training and running large transformers is very difficult and costly. A recent paper by Facebook shows some of the behind-the-scenes challenges of training very large language models. While not all transformers are as large as OpenAI's GPT-3 and Facebook's OPT-175B, they are nonetheless tricky to get right.

Hugging Face provides a large repertoire of pre-trained ML models to ease the burden of deploying transformers. Developers can directly load transformers from the Hugging Face library and run them on their own servers.
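For a sense of how little code that involves, here is a minimal sketch using the transformers library's pipeline helper; the task's default pre-trained model is downloaded on first use, and the input sentence is just an example.

```python
# Minimal sketch: loading and running a pre-trained Hugging Face model locally.
from transformers import pipeline

# Downloads the task's default pre-trained model on first use.
classifier = pipeline("sentiment-analysis")
print(classifier("Deploying transformers on our own servers was easier than expected."))
```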

Pre-trained models are great for experimentation and fine-tuning transformers for downstream applications. However, when it comes to applying the ML models to real products, developers must take many other parameters into consideration, including the costs of integration, infrastructure, scaling, and retraining. If not configured right, transformers can be expensive to run, which can have a significant impact on the product's business model.

Therefore, while transformers are very useful, many organizations that stand to benefit from them don't have the talent and resources to train or run them in a cost-efficient manner.

Hugging Face Endpoints on Azure

An alternative to running your own transformer is to use ML models hosted on cloud servers. In recent years, several companies launched services that made it possible to use machine learning models through API calls without the need to know how to train, configure, and deploy ML models.

Two years ago, Hugging Face launched its own ML service, called Inference API, which provides access to thousands of pre-trained models (mostly transformers), as opposed to the limited options of other services. Customers can rent Inference API based on shared resources or have Hugging Face set up and maintain the infrastructure for them. Hosted models make ML accessible to a wide range of organizations, just as cloud hosting services brought blogs and websites to organizations that couldn't set up their own web servers.
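A hosted call looks roughly like the following sketch, which posts to the public Inference API endpoint; the model name and access token are placeholders rather than anything specific to the article.

```python
# Hedged sketch: calling a hosted model through the Hugging Face Inference API.
# The model name and token below are placeholders.
import requests

API_URL = "https://api-inference.huggingface.co/models/distilbert-base-uncased-finetuned-sst-2-english"
HEADERS = {"Authorization": "Bearer hf_your_token_here"}

def query(payload: dict):
    response = requests.post(API_URL, headers=HEADERS, json=payload)
    response.raise_for_status()
    return response.json()

print(query({"inputs": "Hugging Face Endpoints on Azure looks promising."}))
```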

So, why did Hugging Face turn to Microsoft? Turning hosted ML into a profitable business is very complicated (see, for example, OpenAI's GPT-3 API). Companies like Google, Facebook, and Microsoft have invested billions of dollars into creating specialized processors and servers that reduce the costs of running transformers and other machine learning models.

Hugging Face Endpoints takes advantage of Azure's main features, including its flexible scaling options, global availability, and security standards. The interface is easy to use, and it only takes a few clicks to set up a model for consumption and configure it to scale at different request volumes. Microsoft has already created a massive infrastructure to run transformers, which will probably reduce the costs of delivering Hugging Face's ML models. (Currently in beta, Hugging Face Endpoints is free, and users only pay for Azure infrastructure costs. The company plans a usage-based pricing model when the product becomes available to the public.)

More importantly, Microsoft has access to a large share of the market that Hugging Face is targeting.

According to the Hugging Face blog, "As 95% of Fortune 500 companies trust Azure with their business, it made perfect sense for Hugging Face and Microsoft to tackle this problem together."

Many companies find it frustrating to sign up and pay for various cloud services. Integrating Hugging Face's hosted ML product with Microsoft Azure ML reduces the barriers to delivering its product's value and expands the company's market reach.


Hugging Face Endpoints can be the beginning of many more product integrations in the future, as Microsoft's suite of tools (Outlook, Word, Excel, Teams, etc.) has billions of users and provides plenty of use cases for transformer models. Company execs have already hinted at plans to expand their partnership with Microsoft.

"This is the start of the Hugging Face and Azure collaboration we are announcing today as we work together to bring our solutions, our machine learning platform, and our models accessible and make it easy to work with on Azure. Hugging Face Endpoints on Azure is our first solution available on the Azure Marketplace, but we are working hard to bring more Hugging Face solutions to Azure," Jeff Boudier, product director at Hugging Face, told TechCrunch. "We have recognized [the] roadblocks for deploying machine learning solutions into production [emphasis mine] and started to collaborate with Microsoft to solve the growing interest in a simple off-the-shelf solution."

This can be extremely advantageous to Hugging Face, which must find a business model that justifies its $2-billion valuation.

But Hugging Face's collaboration with Microsoft won't be without tradeoffs.

Earlier this month, in an interview with Forbes, Clément Delangue, co-founder and CEO at Hugging Face, said that he has turned down multiple meaningful acquisition offers and won't sell his business like GitHub did to Microsoft.

However, the direction his company is now taking will make its business model increasingly dependent on Azure (again, OpenAI provides a good example of where things are headed) and possibly reduce the market for its independent Inference API product.

Without Microsoft's market reach, Hugging Face's product(s) will have greater adoption barriers, a lower value proposition, and higher costs (the roadblocks mentioned above). And Microsoft can always launch a rival product that will be better, faster, and cheaper.

If a Microsoft acquisition proposal comes down the line, Hugging Face will have to make a tough choice. This is also a reminder of where the market for large language models and applied machine learning is headed.

In comments that were published on the Hugging Face blog, Delangue said, "The mission of Hugging Face is to democratize good machine learning. We're striving to help every developer and organization build high-quality, ML-powered applications that have a positive impact on society and businesses."

Indeed, products like Hugging Face Endpoints will democratize machine learning for developers.

But transformers and large language models are also inherently undemocratic and will give too much power to a few companies that have the resources to build and run them. While more people will be able to build products on top of transformers powered by Azure, Microsoft will continue to secure and expand its market share in what seems to be the future of applied machine learning. Companies like Hugging Face will have to suffer the consequences.

This article was originally published by Ben Dickson on TechTalks, a publication that examines trends in technology, how they affect the way we live and do business, and the problems they solve. But we also discuss the evil side of technology, the darker implications of new tech, and what we need to look out for. You can read the original article here.

Excerpt from:
What Hugging Face and Microsofts collaboration means for applied AI - The Next Web


Amazon finally opens doors to its serverless analytics – The Register

If you want to run analytics in a serverless cloud environment, Amazon Web Services reckons it can help you out, all while reducing your operating costs and simplifying deployments.

As is typical for Amazon, the cloud giant previewed this EMR Serverless platform (EMR once meaning Elastic MapReduce) at its Re:Invent conference in December, and only opened the service to the public this week.

AWS is no stranger to serverless with products like Lambda. However, its EMR offering specifically targets analytics workloads, such as those using Apache Spark, Hive, and Presto.

Amazon's existing EMR platform already supported deployments on VPC clusters running in EC2, Kubernetes clusters in EKS, and on-prem deployments running on Outposts. And while this provides greater control over the application and compute resources, it also requires the user to manually configure and manage the cluster.

What's more, the compute and memory resources needed for many data analytics workloads are subject to change depending on the complexity and volume of the data being processed, according to Amazon.

EMR Serverless promises to eliminate this complexity by automatically provisioning and scaling compute resources to meet the demands of open-source workloads. As more or fewer resources are required to accommodate changing data volumes, the platform automatically adds or removes workers. This, Amazon says, ensures that compute resources aren't underutilized or over-committed. And customers are only charged for the time and number of workers required to complete the job.

Customers can further control costs by specifying a minimum and maximum number of workers and the virtual CPUs and memory allocated to each worker. Each application is fully isolated and runs within a secure instance.

According to Amazon, these capabilities make the platform ideal for a number of data pipeline, shared cluster, and interactive data workloads.

By default, EMR Serverless workloads are configured to start when jobs are submitted and stop after the application has been idle for more than 15 minutes. However, customers can also pre-initialize workers to reduce the time required to start processing.

EMR Serverless also supports shared applications using Amazon's identity and access management roles. This enables multiple tenants to submit jobs using a common pool of workers, the company explained in a release.

At launch, EMR Serverless supports applications built using the Apache Spark and Hive frameworks.
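At the API level, using the service amounts to creating an application and submitting job runs. The sketch below is a hedged example using boto3's emr-serverless client; the IAM role ARN, S3 path, and capacity figures are placeholders, with the capacity cap included to illustrate the cost controls described above.

```python
# Hedged sketch: creating an EMR Serverless application and submitting a Spark job
# with boto3. The IAM role ARN, S3 paths, and capacity figures are placeholders.
import boto3

emr = boto3.client("emr-serverless", region_name="us-east-1")

app = emr.create_application(
    name="demo-analytics",
    releaseLabel="emr-6.6.0",
    type="SPARK",
    maximumCapacity={"cpu": "16 vCPU", "memory": "64 GB"},  # cost ceiling
)

run = emr.start_job_run(
    applicationId=app["applicationId"],
    executionRoleArn="arn:aws:iam::123456789012:role/EMRServerlessJobRole",
    jobDriver={"sparkSubmit": {"entryPoint": "s3://my-bucket/scripts/etl.py"}},
)
print(run["jobRunId"])
```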

Regardless of how the application is deployed, workloads are managed centrally from Amazon's EMR Studio. The control plane also allows customers to spin up new workloads, submit jobs, and review diagnostics data. The service also integrates with AWS S3 object storage, enabling Spark and Hive logs to be saved for review.

EMR Serverless is available now in Amazon's North Virginia, Oregon, Ireland, and Tokyo regions.

Read more here:
Amazon finally opens doors to its serverless analytics - The Register


To HADES and Back: UNC2165 Shifts to LOCKBIT to Evade Sanctions – Mandiant

The U.S. Treasury Department's Office of Foreign Assets Control (OFAC) sanctioned the entity known as Evil Corp in December 2019, citing the group's extensive development, use, and control of the DRIDEX malware ecosystem. Since the sanctions were announced, Evil Corp-affiliated actors appear to have continuously changed the ransomware they use (Figure 1). Specifically, following an October 2020 OFAC advisory, there was a cessation of WASTEDLOCKER activity and the emergence of multiple closely related ransomware variants in relatively quick succession. These developments suggested that the actors faced challenges in receiving ransom payments following their ransomware's public association with Evil Corp.

Mandiant has investigated multiple LOCKBIT ransomware intrusions attributed to UNC2165, a financially motivated threat cluster that shares numerous overlaps with the threat group publicly reported as "Evil Corp." UNC2165 has been active since at least 2019 and almost exclusively obtains access into victim networks via the FAKEUPDATES infection chain, tracked by Mandiant as UNC1543. Previously, we have observed UNC2165 deploy HADES ransomware. Based on the overlaps between UNC2165 and Evil Corp, we assess with high confidence that these actors have shifted away from using exclusive ransomware variants to LOCKBIT, a well-known ransomware as a service (RaaS), in their operations, likely to hinder attribution efforts in order to evade sanctions.

OFAC sanctions against Evil Corp in December 2019 were announced in conjunction with the Department of Justice's (DOJ) unsealing of indictments against individuals for their roles in the Bugat malware operation, updated versions of which were later called DRIDEX. DRIDEX was believed to operate under an affiliate model with multiple actors involved in the distribution of the malware. While the malware was initially used as a traditional banking Trojan, beginning as early as 2018 we increasingly observed DRIDEX used as a conduit to deploy post-exploitation frameworks onto victim machines. Security researchers also began to report DRIDEX preceding BITPAYMER deployments, which was consistent with a broader emerging trend at the time of ransomware being deployed post-compromise in victim environments. Although Evil Corp was sanctioned for the development and distribution of DRIDEX, the group was already beginning to shift towards more lucrative ransomware operations.

UNC2165 activity likely represents another evolution in Evil Corp affiliated actors' operations. Numerous reports have highlighted the progression of linked activity including development of new ransomware families and a reduced reliance on DRIDEX to enable intrusions. Despite these apparent efforts to obscure attribution, UNC2165 has notable similarities to operations publicly attributed to Evil Corp, including a heavy reliance on FAKEUPDATES to obtain initial access to victims and overlaps in their infrastructure and use of particular ransomware families.

BEACON C&C domains observed in this activity, each set paired with a description from related public reporting:

mwebsoft[.]com, rostraffic[.]com, consultane[.]com, traffichi[.]com, amazingdonutco[.]com, cofeedback[.]com, adsmarketart[.]com, websitelistbuilder[.]com, advancedanalysis[.]be, adsmarketart[.]com

In June 2020, NCC Group reported on the WASTEDLOCKER ransomware, which they attributed to Evil Corp with high confidence. In these incidents, the threat actor leveraged FAKEUPDATES for initial access.

cutyoutube[.]com, onlinemoula[.]com

In June 2021, Secureworks reported on HADES ransomware intrusions attributed to "GOLD WINTER." In these incidents, the threat actor leveraged FAKEUPDATES or VPN credentials for initial access. This activity was later attributed to GOLD DRAKE (aka Evil Corp) after further analysis of the ransomware and overlaps with other families believed to be operated by GOLD DRAKE.

potasip[.]com, advancedanalysis[.]be, firsino[.]com, currentteach[.]com, newschools[.]info, adsmarketart[.]com

In February 2022, SentinelOne published an in-depth report on the Evil Corp lineage in which they assessed with high confidence that WASTEDLOCKER, HADES, PHOENIXLOCKER, PAYLOADBIN, and MACAW were developed by the same threat actors. The researchers also noted overlaps in infrastructure between FAKEUPDATES and BITPAYMER, DOPPELPAYMER, WASTEDLOCKER, and HADES ransomware.

Overlaps With SilverFish Reporting

UNC2165 also has overlaps with a cluster of activity dubbed "SilverFish" by ProDaft. Mandiant reviewed the information in this report and determined that the analyzed malware administration panel is used to manage FAKEUPDATES infections and to distribute secondary payloads, including BEACON. We believe that at least some of the described activity can be attributed to UNC2165 based on malware payloads and other technical artifacts included in the report.

While UNC2165 activity dates to at least June 2020, the following TTPs are focused on intrusions where we directly observed ransomware deployed.

Initial Compromise and Establish Foothold

UNC2165 has primarily gained access to victim organizations via FAKEUPDATES infections that ultimately deliver loaders to deploy BEACON samples on impacted hosts. The loader portion of UNC2165 Cobalt Strike payloads has changed frequently, but the group has continually used BEACON in most intrusions since 2020. Beyond FAKEUPDATES, we have also observed UNC2165 leverage suspected stolen credentials to obtain initial access.

Escalate Privileges

UNC2165 has taken multiple common approaches to privilege escalation across its intrusions, including Mimikatz and Kerberoasting attacks, targeting authentication data stored in the Windows registry, and searching for documents or files associated with password managers or that may contain plaintext credentials.

Internal Reconnaissance

Following UNC1543 FAKEUPDATES infections, we commonly see a series of built-in Microsoft Windows utilities such as whoami, nltest, cmdkey, and net used against newly accessed systems to gather data and learn more about the victim environment. The majority of these commands are issued using one larger, semicolon-delineated list of enumeration commands, followed up by additional PowerShell reconnaissance (Figure 4). We attribute this initial reconnaissance activity to UNC1543 as it occurs prior to UNC2165 BEACON deployment; however, collected information almost certainly enables decision-making for UNC2165. During intrusions, UNC2165 has used multiple common third-party tools to enable reconnaissance of victim networks and has accessed internal systems to obtain information used to guide its intrusion operations.

Lateral Movement and Maintain Presence

UNC2165 relies heavily on Cobalt Strike BEACON to enable lateral movement and maintain presence in a victim environment. Beyond its use of BEACON, UNC2165 has also used common administrative protocols and software to enable lateral movement, including RDP and SSH.

Complete Mission

In most cases, UNC2165 has stolen data from its victims to use as leverage for extortion after it has deployed ransomware across an environment. In intrusions where the data exfiltration method could be identified, there is evidence to suggest the group used either Rclone or MEGASync to transfer data from the victims' environments prior to encryption. The Rclone utility is used by many financially motivated actors to synchronize sensitive files with cloud storage providers, and MEGASync synchronizes data to the MEGA cloud hosting service.

UNC2165 has leveraged multiple Windows batch scripts during the final phases of its operations to deploy ransomware and modify systems to aid the ransomware's propagation. We have observed UNC2165 use both HADES and LOCKBIT; we have not seen these threat actors use HADES since early 2021. Notably, LOCKBIT is a prominent Ransomware-as-a-Service (RaaS) affiliate program, which we track as UNC2758, that has been advertised in underground forums since early 2020 (21-00026166).

Based on information from trusted sensitive sources and underground forum activity, we have moderate confidence that a particular actor operating on underground forums is affiliated with UNC2165. Additional details are available in Mandiant Advantage.

The U.S. Government has increasingly leveraged sanctions as a part of a broader toolkit to tackle ransomware operations. This has included sanctions on both actors directly involved in ransomware operations and cryptocurrency exchanges that have received illicit funds. These sanctions have had a direct impact on threat actor operations, particularly as at least some companies involved in ransomware remediation activities, such as negotiation, refuse to facilitate payments to known sanctioned entities. This can ultimately reduce threat actors' ability to be paid by victims, which is the primary driver of ransomware operations.

The adoption of an existing ransomware is a natural evolution for UNC2165 to attempt to obscure their affiliation with Evil Corp. Both the prominence of LOCKBIT in recent years and its successful use by several different threat clusters likely made the ransomware an attractive choice. Using this RaaS would allow UNC2165 to blend in with other affiliates, requiring visibility into earlier stages of the attack lifecycle to properly attribute the activity, compared to prior operations that may have been attributable based on the use of an exclusive ransomware. Additionally, the frequent code updates and rebranding of HADES required development resources and it is plausible that UNC2165 saw the use of LOCKBIT as a more cost-effective choice. The use of a RaaS would eliminate the ransomware development time and effort allowing resources to be used elsewhere, such as broadening ransomware deployment operations. Its adoption could also temporarily afford the actors more time to develop a completely new ransomware from scratch, limiting the ability of security researchers to easily tie it to previous Evil Corp operations.

It is plausible that the actors behind UNC2165 operations will continue to take additional steps to distance themselves from the Evil Corp name. For example, the threat actors could choose to abandon their use of FAKEUPDATES, an operation with well-documented links to Evil Corp actors, in favor of a newly developed delivery vector, or may look to acquire access from underground communities. Some evidence of this developing trend already exists, given UNC2165 has leveraged stolen credentials in a subset of intrusions, which is consistent with a suspected member's underground forum activity. We expect these actors, as well as others who are sanctioned in the future, to take steps such as these to obscure their identities in order to ensure that sanctions are not a limiting factor to receiving payments from victims.

MITRE ATT&CK Mapping

Mandiant has observed UNC2165 use the following techniques.

Impact

T1486: Data Encrypted for Impact
T1489: Service Stop
T1490: Inhibit System Recovery
T1529: System Shutdown/Reboot

Defense Evasion

T1027: Obfuscated Files or Information
T1027.005: Indicator Removal from Tools
T1036: Masquerading
T1055: Process Injection
T1055.002: Portable Executable Injection
T1070.001: Clear Windows Event Logs
T1070.004: File Deletion
T1070.005: Network Share Connection Removal
T1070.006: Timestomp
T1078: Valid Accounts
T1112: Modify Registry
T1127.001: MSBuild
T1134: Access Token Manipulation
T1134.001: Token Impersonation/Theft
T1140: Deobfuscate/Decode Files or Information
T1202: Indirect Command Execution
T1218.005: Mshta
T1218.011: Rundll32
T1497: Virtualization/Sandbox Evasion
T1497.001: System Checks
T1553.002: Code Signing
T1562.001: Disable or Modify Tools
T1562.004: Disable or Modify System Firewall
T1564.003: Hidden Window
T1620: Reflective Code Loading

Command and Control

T1071: Application Layer Protocol
T1071.001: Web Protocols
T1071.004: DNS
T1090.004: Domain Fronting
T1095: Non-Application Layer Protocol
T1105: Ingress Tool Transfer
T1573.002: Asymmetric Cryptography

Collection

T1056.001: Keylogging
T1113: Screen Capture
T1115: Clipboard Data
T1560: Archive Collected Data
T1602.002: Network Device Configuration Dump

Discovery

T1007: System Service Discovery
T1010: Application Window Discovery
T1012: Query Registry
T1016: System Network Configuration Discovery
T1033: System Owner/User Discovery
T1049: System Network Connections Discovery
T1057: Process Discovery
T1069: Permission Groups Discovery
T1069.001: Local Groups
T1069.002: Domain Groups
T1082: System Information Discovery
T1083: File and Directory Discovery
T1087: Account Discovery
T1087.001: Local Account
T1087.002: Domain Account
T1482: Domain Trust Discovery
T1518: Software Discovery
T1614.001: System Language Discovery

Lateral Movement

T1021.001: Remote Desktop Protocol
T1021.002: SMB/Windows Admin Shares
T1021.004: SSH

Exfiltration

T1020: Automated Exfiltration

Execution

T1047: Windows Management Instrumentation
T1053: Scheduled Task/Job
T1053.005: Scheduled Task
T1059: Command and Scripting Interpreter
T1059.001: PowerShell
T1059.003: Windows Command Shell
T1059.005: Visual Basic
T1059.007: JavaScript
T1569.002: Service Execution

Persistence

T1098: Account Manipulation
T1136: Create Account
T1136.001: Local Account
T1543.003: Windows Service
T1547.001: Registry Run Keys / Startup Folder
T1547.009: Shortcut Modification

Credential Access

T1003.001: LSASS Memory
T1003.002: Security Account Manager
T1552.002: Credentials in Registry
T1558: Steal or Forge Kerberos Tickets
T1558.003: Kerberoasting

Initial Access

T1133: External Remote Services
T1189: Drive-by Compromise

Resource Development

T1588.003: Code Signing Certificates
T1588.004: Digital Certificates
T1608.003: Install Digital Certificate

LOCKBIT YARA Rules

The following YARA rules are not intended to be used on production systems or to inform blocking rules without first being validated through an organization's own internal testing processes to ensure appropriate performance and limit the risk of false positives. These rules are intended to serve as a starting point for hunting efforts to identify LOCKBIT activity; however, they may need adjustment over time if the malware family changes.
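As a starting point for such a hunting workflow, here is a hedged Python sketch that applies a compiled rule file across a directory of samples using the yara-python bindings; the rule file name is a placeholder for whatever vetted rules an organization deploys, not Mandiant's published rules.

```python
# Hedged sketch: hunting across a sample directory with yara-python.
# "lockbit_hunting.yar" is a placeholder for a vetted rule file.
import os
import yara

rules = yara.compile(filepath="lockbit_hunting.yar")

for root, _dirs, files in os.walk("/samples"):
    for name in files:
        path = os.path.join(root, name)
        try:
            matches = rules.match(path)
        except yara.Error:
            continue  # skip unreadable or oversized files
        if matches:
            print(path, [m.rule for m in matches])
```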

Follow this link:
To HADES and Back: UNC2165 Shifts to LOCKBIT to Evade Sanctions - Mandiant


Bridging On-Premise and Cloud Data – International Society of Automation

Hybrid data architectures empower process manufacturers to more quickly realize the business benefits from their cloud and IIoT investments.

By 2028, cloud computing and the Internet of Things (IoT) in manufacturing will be poised to achieve the plateau of productivity, or the phase when they drive transformational impact on business outcomes, according to business analyst firm Gartner. At this point in their digital transformation journeys, many manufacturers have completed their Industrial Internet of Things (IIoT) pilot projects and are approaching mid- to late-stage adoption in operations.

While the term IIoT was coined just a few years ago, the large volumes of data associated with it are familiar to the process control and automation industries. For decades, manufacturers have generated and collected more data than they know what to do with via sensors, legacy digital networks, and various host systems.

But a great deal of data was stranded in process historians and other databases, collecting dust. Today, manufacturers can fully benefit from this data and information in the cloud by using hybrid data architectures coupled with advanced analytics applications.

Transitioning to agile production requires optimizing the entire supply chain, from improving overall equipment effectiveness and asset reliability to reducing inventory. IIoT implementations can help organizations clear common optimization hurdles, because they empower staff to access, collect, and analyze more data in near real time. This enables process experts and operators to make timely and productive decisions to enhance product quality, optimize operations, and reduce waste.

With Internet connectivity, IIoT implementations can directly access the vast computing power and scalability of the cloud. Each year, the variability, speed, and volume of process data grow exponentially, rendering IIoT architectures the only suitable options for compute-intensive Industry 4.0 projects.

Some of the leading cloud applications and components include digital twins, machine learning (ML) tools, autonomous robot artificial intelligence (AI) repositories, and augmented reality simulators. Each of these use cases requires high CPU processing power, which can be difficult for on-premise servers to provide because information technology (IT) teams cannot scale up the required computing resources on demand.

According to Gartner, when it comes to cloud computing for manufacturing operations, the industry is currently in a trough of disillusionment, or a state of lowered expectations. This mindset is largely a result of the unproven idea that IIoT and related databases must feed a central data lake, which is intended to serve as the single source of truth and common access point for all users worldwide.

If this were true, cloud-based data lakes would need to replace all existing process historians (along with other host systems, such as those used for asset management, laboratory information, or inventory tracking) to provide the data required for analysis. In reality, this is not the best approach, because many legacy on-premise servers, such as those hosting process historians, collect and store highly valuable operational technology (OT) data. The context housed in these rich data archives is required to ensure Industry 4.0 initiatives, such as predictive maintenance via ML, succeed. Attempts to move or copy this OT data to the cloud are often time consuming and costly.

To properly aggregate and analyze the data produced by legacy sensors and infrastructure alongside new born-in-the-cloud IIoT sensor data, a bridge is required.

To address this issue and provide combined access to OT, IIoT, and other data, process manufacturers use a hybrid data architecture approach to:

This is not a rip-and-replace approach but is instead a bridge connecting traditional manufacturing data infrastructure with cloud-native data to leverage the best data from both sides by creating a continuum of access. Process automation systems can continue to use on-premise or edge data for real-time decision making where low latency is required. Simultaneously, the hybrid model empowers organizations to apply global reporting and compute-intensive tasks, like ML, to cloud-native IIoT data (figure 1).

This approach requires a data abstraction layer to facilitate traffic flow among various data sources (figure 2).

Figure 1. Hybrid data architectures empower manufacturing organizations to leverage IIoT in the cloud for compute-intensive processes, while executing real-time process control using on-premise data.

Figure 2. Data abstraction layers facilitate data access and transfer among multiple data sources, including on-premise and cloud databases.

Data abstraction indexes and facilitates access to data in its native locations, a key differentiating point from data-lake functionality. Because data is not copied or moved, its management is significantly simplified. Once data abstraction is implemented, organizations can add advanced analytics applications to simultaneously query and make use of information from multiple, and often previously disparate, data sources. This improves awareness and predictive maintenance capabilities across the organization.

For example, when training and executing ML models, organizations must access maintenance records and historical process data. Staff must then access results to proactively identify issues and adjust the operational model. Abstraction makes it easy for personnel and software applications to access multiple datasets through a single source.
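As a toy illustration of that single-source access pattern, the following sketch aligns historian process data with the most recent prior maintenance record for each asset using pandas; the file names and column names are assumptions made for the example.

```python
# Illustrative sketch: aligning historian data with maintenance records in pandas.
# File names and column names are assumptions for the example.
import pandas as pd

process = pd.read_csv("historian_export.csv", parse_dates=["timestamp"])
maintenance = pd.read_csv("maintenance_records.csv", parse_dates=["serviced_at"])

# For each process sample, attach the most recent prior maintenance event per asset.
training_frame = pd.merge_asof(
    process.sort_values("timestamp"),
    maintenance.sort_values("serviced_at"),
    left_on="timestamp",
    right_on="serviced_at",
    by="asset_id",
    direction="backward",
)
print(training_frame.head())
```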

Asset monitoring is a critical task for many process manufacturers. For common assets (including pumps, valves, heat exchangers, and others), manufacturers deploy a variety of maintenance methods to maximize productivity over the asset's life. At the two extremes, these methods include run-to-fail in the most basic case, and condition monitoring for predictive maintenance in more advanced situations.

By monitoring asset performance to detect anomalies in near real time, manufacturers can identify potential issues before failure, reducing unplanned downtime and maintenance costs. When these anomalies are detected, advanced analytics software can generate alerts to inform personnel, so they can schedule inspections and maintenance of affected assets.
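One simple way to generate such alerts is a rolling-baseline check. The sketch below flags readings that deviate sharply from the recent mean; the window size and threshold are tuning assumptions, not recommendations.

```python
# Illustrative sketch: flagging anomalous sensor readings against a rolling baseline.
# Window size and threshold are tuning assumptions for the example.
import numpy as np

def detect_anomalies(readings, window: int = 60, threshold: float = 3.0):
    """Return indices where a reading deviates more than `threshold`
    standard deviations from the mean of the preceding `window` samples."""
    x = np.asarray(readings, dtype=float)
    alerts = []
    for i in range(window, len(x)):
        baseline = x[i - window:i]
        mu, sigma = baseline.mean(), baseline.std()
        if sigma > 0 and abs(x[i] - mu) > threshold * sigma:
            alerts.append(i)
    return alerts
```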

These monitoring applications can be scaled to hundreds of assets across multiple sites. Therefore, it is critical to normalize data before generating alerts and to streamline notification paths so the right personnel are informed.

By working together, OT and IT teams can use a hybrid data architecture to achieve these asset monitoring goals. First, OT teams must deploy suitable sensors, in addition to data acquisition and storage technologies, to populate asset hierarchies with data for grouping equipment and devices of a common process or location. These asset hierarchies include sets of metadata collected for each asset of a common taxonomy. Once the hierarchies are in place, assets can be analyzed within process groups, rather than individually, or solely as unrelated assets of the same type.

Next, OT works with IT personnel to ensure the former group can access this data securely by implementing cloud data storage, advanced analytics, and workflow automation tools. IT and data science teams collaborate with OT subject matter experts to configure ML models that create insights and effectively predict asset failure, generating intelligent alerts to improve issue remediation and decrease downtime.

When evaluating hybrid data infrastructure, organizations should consider these questions before implementation:

Hybrid data architectures empower process manufacturers to more quickly realize the business benefits from their cloud and IIoT investments. By using IIoT data and pipelines, on-premise process data, abstraction, and advanced analytics, organizations can quickly pass through the trough of disillusionment and reach the digitalization plateau of productivity.

We want to hear from you! Please send us your comments and questions about this topic to InTechmagazine@isa.org.

See the original post:
Bridging On-Premise and Cloud Data - International Society of Automation


Everything is gone: Russian business hit hard by tech sanctions – Ars Technica


Russian companies have been plunged into a technological crisis by Western sanctions that have created severe bottlenecks in the supply of semiconductors, electrical equipment, and the hardware needed to power the nation's data centers.

Most of the world's largest chip manufacturers, including Intel, Samsung, TSMC, and Qualcomm, have halted business with Russia entirely after the US, UK, and Europe imposed export controls on products using chips made or designed in the US or Europe.

This has created a shortfall in the type of larger, low-end chips that go into the production of cars, household appliances, and military equipment. Supplies of more advanced semiconductors, used in cutting-edge consumer electronics and IT hardware, have also been severely curtailed.

And the country's ability to import foreign tech and equipment containing these chips (including smartphones, networking equipment, and data servers) has been drastically stymied.

"Entire supply routes, for servers to computers to iPhones, everything is gone," said one Western chip executive.

The unprecedented sweep of Western sanctions over President Vladimir Putin's war in Ukraine is forcing Russia into what the central bank said would be a "painful structural transformation" of its economy.

With the country unable to export much of its raw materials, import critical goods, or access global financial markets, economists expect Russia's gross domestic product to contract by as much as 15 percent this year.

Export controls on dual-use technology that can have both civilian and military applications (such as microchips, semiconductors, and servers) are likely to have some of the most severe and lasting effects on Russia's economy. The country's biggest telecoms groups will be unable to access 5G equipment, while cloud computing products from tech leader Yandex and Sberbank, Russia's largest bank, will struggle to expand their data center services.

Russia lacks an advanced tech sector and consumes less than 1 percent of the world's semiconductors. This has meant that technology-specific sanctions have had a much less immediate impact on the country than similar export controls had on China, the behemoth of global tech manufacturing, when they were introduced in 2019.

While Russia does have several domestic chip companies, namely JSC Mikron, MCST, and Baikal Electronics, Russian groups have previously relied on importing significant quantities of finished semiconductors from foreign manufacturers such as SMIC in China, Intel in the US, and Infineon in Germany. MCST and Baikal have relied principally on foundries in Taiwan and Europe for the production of the chips they design.

See original here:
Everything is gone: Russian business hit hard by tech sanctions - Ars Technica
