Category Archives: Cloud Servers

Central African Republic: Supporting the reconstruction of the national statistical system to help with better data for decision-making – Central…

WASHINGTON, July 28, 2022 - To strengthen the capacity of the statistical system to produce and manage data and to enhance the measurement of living conditions in the Central African Republic, the World Bank today approved $3 million in additional grant financing for the Data for Decision Making Project.

The Central African Republic (CAR) is one of the world's poorest and most fragile countries. In 2019, per capita gross domestic product (GDP) averaged $468, much lower than the average of $1,130 in countries affected by fragility, conflict, and violence in Sub-Saharan Africa. The extreme poverty rate remained high at 71.4 percent in 2020. The 2012 politico-military crisis left the National Statistical System (NSS), which was reasonably developed before the crisis, in poor condition. The looting of the offices of the Central African Institute for Statistics and Economic and Social Research (Institut Centrafricain des Statistiques et des Etudes Economiques et Sociales, ICASEES) caused the loss of much of the country's statistical data records, as well as much of its statistical infrastructure and physical records.

The original project made it possible to recover the data dispersed during the 2012 crisis, archive it on a digital platform, and store it on remote cloud servers to avoid future losses. In addition, the capacity of ICASEES staff was increased. Several databases that had become too outdated for decision-making were updated, including the Consumer Price Index (CPI), the agricultural survey, the Communal Monographies Survey, and the Living Conditions Survey.

"This additional financing will make it possible to update the National Accounts and the census cartography, and to improve ICASEES' physical infrastructure," said Guido Rurangwa, World Bank Country Manager for the Central African Republic. "Updating the census cartography is an important step toward the implementation of the population census," he added.

Financed by a grant from the International Development Association (IDA), this additional financing will cover two components of the Data for Decision Making Project: (i) statistical recovery, rehabilitation, professionalization, and modernization of ICASEES; and (ii) data collection, production, and dissemination.

PRESS RELEASE NO: 2023/005/AFW

Contacts

Bangui: Boris Ngouagouni, (00236) 7513 5080, pngouagouni@worldbank.org


E-CORE RECOGNIZED BY ATLASSIAN AS SPECIALIZED PARTNER FOR ALL THREE CLOUD, AGILE AT SCALE, AND ITSM CATEGORIES – Yahoo Finance

Distinction Validates e-Core's Rigorous Training, Industry-Leading Specializations and High Customer Satisfaction

WHITE PLAINS, N.Y., July 28, 2022 /PRNewswire/ -- e-Core, one of the technology industry's most trusted partners for helping customers around the globe unlock the value of tech investments, was recognized today by Atlassian as an Atlassian Specialized Partner, having achieved cloud, agile at scale, and ITSM certifications.

Whether companies are navigating extremely complex migrations from on-premises to the cloud, expanding to enterprise agility, or managing end-to-end delivery of workplace technology from laptops to servers to apps, customers can now depend on e-Core to help them succeed and accelerate growth. These specializations aren't easily won; they take ongoing expertise and dedication in a competitive field spanning some of the most accomplished tech brands around the globe.

Atlassian is one of the world's most prestigious software companies. Based in Australia with major operations in San Francisco and the U.K., the company is sought after by leading software developers and project managers. In the third quarter of 2022 alone, Atlassian reported serving 234,575 customers in over 190 countries, with 10 million monthly active users.

Launched in May 2021, the Atlassian Specialization Program provides a clear distinction to companies such as e-Core who have completed rigorous training and demonstrated a consistent track record of delivering high-quality services and customer satisfaction.

"When we first partnered with Atlassian more than 14 years ago, we wanted to help companies overcome immense technology integration challenges, break silos and grow faster than they ever thought possible," said Marco Roman, Head of North American Field Operations at e-Core. "Achieving Atlassian Specialized Partner certification in three areas that are key to the future of the technology industry is a testament to our deep knowledge and commitment to innovation that we tap every day to fuel our customers' growth."


For more information on e-Core's Atlassian Specialization Programs certifications in cloud, agile at scale, and ITSM, visit: https://www.e-core.com/na-en/blog-post/atlassian-specialized-badges/

About e-Core

Let our experience be the core of our partnership with you. For more than 22 years, e-Core has been a trusted technology partner for customers around the globe, helping them to unlock the value of technology investments. Leverage e-Core's expertise to improve processes, expand your software team, or build custom solutions on your behalf. Transform your business, scale for growth, and continuously improve your competitive advantage.

Contact: Michael Johnston, Co-Communications, (617) 549-0639, mjohnston@cocommunications.com


View original content:https://www.prnewswire.com/news-releases/e-core-recognized-by-atlassian-as-specialized-partner-for-all-three-cloud-agile-at-scale-and-itsm-categories-301595632.html

SOURCE e-Core


Examining New DawDropper Banking Dropper and DaaS on the Dark Web – Trend Micro

| SHA-256 | Package name | Release date | Detection name | C&C server | Payload address | Payload family |
| --- | --- | --- | --- | --- | --- | --- |
| 022a01566d6033f6d90ab182c4e69f80a3851565aaaa386c8fa1a9435cb55c91 | com.caduta.aisevsk | 05/01/2021 | AndroidOS_DawDropper.HRX | call-recorder-66f03-default-rtdb[.]firebaseio[.]com | hxxps://github.com/uliaknazeva888/qs/raw/main/1.apk | Octo |
| e1598249d86925b6648284fda00e02eb41fdcc75559f10c80acd182fd1f0e23a | com.vpntool.androidweb | 11/07/2021 | AndroidOS_DawDropper.HRXA | rooster-945d8-default-rtdb[.]firebaseio[.]com | hxxps://github.com/butcher65/test/raw/main/golgofan.apk | Hydra |
| 8fef8831cbc864ffe16e281b0e4af8e3999518c15677866ac80ffb9495959637 | com.j2ca.callrecorder | 11/11/2021 | AndroidOS_DawDropper.HRXA | call-recorder-ad77f-default-rtdb[.]firebaseio[.]com | hxxps://github.com/butcher65/test/raw/main/gala.apk | Octo |
| 05b3e4071f62763b3925fca9db383aeaad6183c690eecbbf532b080dfa6a5a08 | com.codeword.docscann | 11/21/2021 | AndroidOS_DawDropper.HRXA | doc-scanner-cff1d-default-rtdb[.]firebaseio[.]com | hxxps://github.com/lotterevich/lott/raw/main/maina.apk | TeaBot |
| f4611b75113d31e344a7d37c011db37edaa436b7d84ca4dfd77a468bdeff0271 | com.virtualapps.universalsaver | 12/09/2021 | AndroidOS_DawDropper.HRXA | universalsaverpro-default-rtdb[.]firebaseio[.]com | hxxps://github.com/uliaknazeva888/qs/raw/main/1.apk | Octo |
| a1298cc00605c79679f72b22d5c9c8e5c8557218458d6a6bd152b2c2514810eb | com.techmediapro.photoediting | 01/04/2022 | AndroidOS_DawDropper.HRXA | eaglephotoeditor-2d4e5-default-rtdb[.]firebaseio[.]com | hxxps://github.com/butcher65/test/raw/main/lolipop.apk | Hydra |
| eb8299c16a311ac2412c55af16d1d3821ce7386c86ae6d431268a3285c8e81fb | com.chestudio.callrecorder | 01/2022 | AndroidOS_DawDropper.HRXA | call-recorder-pro-371bc-default-rtdb.firebaseio.com | hxxps://github.com/sherrytho/test/raw/main/golgol.apk | Hydra |
| d5ac8e081298e3b14b41f2134dae68535bcf740841e75f91754d3d0c0814ed42 | com.casualplay.leadbro | 04/23/2022 | AndroidOS_DawDropper.HRXA | loader-acb47-default-rtdb[.]firebaseio[.]com | hxxps://github.com/briangreen7667/2705/raw/main/addon2.apk | Hydra |
| b4bd13770c3514596dd36854850a9507e5734374083a0e4299c697b6c9b9ec58 | com.utilsmycrypto.mainer | 05/04/2022 | AndroidOS_DawDropper.HRXA | crypto-utils-l-default-rtdb[.]firebaseio[.]com | hxxps://github.com/asFirstYouSaid/test/raw/main/110.apk; hxxps://github.com/asFirstYouSaid/test/raw/main/SecureChat%20(1).apk | Ermac |
| 77f226769eb1a886606823d5b7832d92f678f0c2e1133f3bbee939b256c398aa | com.cleaner.fixgate | 05/14/2022 | AndroidOS_DawDropper.HRXA | fixcleaner-60e32-default-rtdb[.]firebaseio[.]com | hxxps://github.com/butcher65/test/raw/main/latte.apk | Hydra |
| 5ee98b1051ccd0fa937f681889e52c59f33372ffa27afff024bb76d9b0446b8a | com.olivia.openpuremind | 05/23/2022 | AndroidOS_DawDropper.HRX | crypto-sequence-default-rtdb[.]firebaseio.com | N/A | N/A |
| 0ebcf3bce940daf4017c85700ffc72f6b3277caf7f144a69fbfd437d1343b4ab | com.myunique.sequencestore | 05/31/2022 | AndroidOS_DawDropper.HRX | coin-flow-a179b-default-rtdb.firebaseio.com | N/A | N/A |
| 2113451a983916b8c7918c880191f7d264f242b815b044a6351c527f8aeac3c8 | com.flowmysequto.yamer | 05/2022 | | | | |


Raccoon Stealer v2: The Latest Generation of the Raccoon Family – Security Boulevard

Introduction

Raccoon is a malware family that has been sold as malware-as-a-service on underground forums since early 2019. In early July 2022, a new variant of this malware was released. The new variant, popularly known as Raccoon Stealer v2, is written in C, unlike previous versions, which were mainly written in C++.

The Raccoon malware is a robust stealer that allows the theft of data such as passwords, cookies, and autofill data from browsers. Raccoon Stealer also supports theft from all cryptocurrency wallets.

In this blog, ThreatLabz will analyze Raccoon Stealer v2 in the exe format and highlight key differences from its predecessors. The authors of the Raccoon Stealer malware have announced that other formats are available, including DLLs and versions embedded in other PE files.

Detailed Analysis

Raccoon v2 is an information stealing malware that was first seen on 2022-07-03. The malware is written in C and assembly.

Though we noticed a few new features in the newer variant, as listed below, the data-stealing mechanism is still the same as in its predecessor:

- Base64 + RC4 encryption scheme for all string literals
- Dynamic loading of WinAPI functions
- Discarded the dependence on the Telegram API

We have noticed a significant change in the way the list of command and control servers is obtained. Raccoon Malware v1 was seen abusing the Telegram network to fetch the list of command and control servers, whereas the newer variant has abandoned the use of Telegram. Instead, it uses a hardcoded IP address of a threat-actor-controlled server to fetch the list of command and control servers, from which the next-stage payload (mostly DLLs) is downloaded.

File Information

Malware Name: Raccoon Stealer v2
Language: C
File Type: exe
File Size: 56832
MD5: 0cfa58846e43dd67b6d9f29e97f6c53e
SHA1: 19d9fbfd9b23d4bd435746a524443f1a962d42fa
SHA256: 022432f770bf0e7c5260100fcde2ec7c49f68716751fd7d8b9e113bf06167e03

Debug Information

The analyzed file has its debug data intact. According to the debug headers, the compilation date was Thursday, 26/05/2022, 13:58:25 UTC, as shown in Figure 1.

Figure 1: Raccoon v2 Debug Headers

We have also seen a change in how Raccoon Stealer v2 hides its intentions by using a mechanism where API names are dynamically resolved rather than being loaded statically. The stealer uses LoadLibraryW and GetProcAddress to resolve each of the necessary functions (shown in Figure 2). The names of the DLLs and WinAPI functions are stored in the binary as clear text.

Figure 2: Raccoon v2 dynamic resolution

List Of Loaded DLLs

kernel32.dll
Shlwapi.dll
Ole32.dll
WinInet.dll
Advapi32.dll
User32.dll
Crypt32.dll
Shell32.dll

Raccoon v1 did not employ dynamic resolution for used functions, therefore packed samples were often observed in the wild to evade detection mechanisms. Conversely, Raccoon v2 is often delivered unpacked. Figure 3 shows the imported DLLs for raccoon v1.

Figure 3: Raccoon Stealer v1 imports (unpacked)

Once the resolution of functions is done, the stealer runs its string decryption routine. The routine is simple: RC4-encrypted strings are stored in the sample with base64 encoding. The sample first decodes the base64 encoding and then decrypts the encrypted string with the key edinayarossiya. This routine is followed for all the strings in the function string_decryption(). The string_decryption() routine is shown in Figure 4.

Figure 4: Raccoon v2 String Decryption Routine
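The decode-then-decrypt scheme described above is straightforward to reproduce. The sketch below is our own minimal Python model, not the malware's code; only the Base64+RC4 layering and the key edinayarossiya come from the sample:

```python
import base64

def rc4(key: bytes, data: bytes) -> bytes:
    """Textbook RC4: key scheduling (KSA) followed by the keystream loop (PRGA)."""
    S = list(range(256))
    j = 0
    for i in range(256):
        j = (j + S[i] + key[i % len(key)]) % 256
        S[i], S[j] = S[j], S[i]
    out = bytearray()
    i = j = 0
    for byte in data:
        i = (i + 1) % 256
        j = (j + S[i]) % 256
        S[i], S[j] = S[j], S[i]
        out.append(byte ^ S[(S[i] + S[j]) % 256])
    return bytes(out)

def decrypt_string(blob: str, key: bytes = b"edinayarossiya") -> bytes:
    # Raccoon v2 stores each literal as base64(RC4(key, plaintext)),
    # so recovery is a base64 decode followed by RC4 (its own inverse).
    return rc4(key, base64.b64decode(blob))
```

Because RC4 is symmetric, the same rc4() call both encrypts and decrypts; swapping in a different key reproduces the C2 address decryption discussed later in the post.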

Previous versions of Raccoon Stealer did not encrypt string literals other than hard coded IP addresses. The Raccoon v2 variant overcomes this by encrypting all the plain text strings. Several of the plaintext strings of Raccoon v1 are shown in Figure 5.

Figure 5: Plaintext Strings In Raccoon v1

After manual decryption of the Raccoon v2 sample strings, the following strings (Figure 6 and Figure 7) were obtained in plaintext format.

Figure 6: Raccoon v2 Decrypted Strings

Figure 7: Raccoon v2 Decrypted Strings

The command and control IP addresses are saved in the malware and follow the same decryption routine but have a different key, 59c9737264c0b3209d9193b8ded6c127. The IP address contacted by the malware is hxxp://51(.)195(.)166(.)184/. The decryption routine is shown in Figure 8.

Figure 8: IP Address Decryption Raccoon v2

Decrypting Command and Control IP Address

The encrypted command and control IP address can be easily decrypted using public tools such as CyberChef, as shown in Figure 9.

Figure 9: Raccoon v2 IP Address (via cyberchef utils)

This technique is common between both versions of the malware. Figure 10 shows the same routine employed in Raccoon v1.

Figure 10: Raccoon v1 setting up overhead before IP Address decryption

Once all the overhead of setting up the functions and decryption of the strings is done, the malware will perform some checks before contacting the command and control server to download malicious DLLs and exfiltrate information.

Overhead Before Exfiltration

Before executing its core, the malware makes certain checks to understand the execution environment. This includes making sure the malware isn't already running on the machine. Further, the malware also checks whether it's running as NT Authority/System.

The malware gets a handle to a mutex and checks whether it matches a particular value. If it matches, the malware continues execution.

Value: 8724643052.

This technique is used to make sure only one instance of the malware is running at any one time. Figure 11 depicts the mutex check and creation for Raccoon v2, while Figure 12 depicts the similar procedure used in Raccoon v1.

Figure 11: Raccoon v2 Mutex Check

Figure 12: Raccoon v1 Mutex Check
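The single-instance check above can be modeled portably. The malware itself uses a Windows named mutex (the hardcoded value 8724643052); the Python sketch below is our own analogue that swaps the named mutex for an exclusive lock file, with all names ours:

```python
import os
import tempfile

MUTEX_VALUE = "8724643052"  # hardcoded value from the sample

def is_first_instance(name: str = MUTEX_VALUE) -> bool:
    """Return True for the first caller, False for every later one.

    O_CREAT|O_EXCL fails if the lock file already exists, mirroring how
    CreateMutex signals an already-owned named mutex via ERROR_ALREADY_EXISTS.
    """
    path = os.path.join(tempfile.gettempdir(), f"{name}.lock")
    try:
        fd = os.open(path, os.O_CREAT | os.O_EXCL | os.O_WRONLY)
        os.write(fd, str(os.getpid()).encode())  # record the owner's PID
        os.close(fd)
        return True
    except FileExistsError:
        return False
```

A second process (or a second call) sees the existing lock and bails out, just as a second copy of the malware exits when the mutex value is already present.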

By retrieving the process token and matching the text "S-1-5-18," as shown in Figure 13, the malware determines whether or not it is operating as the SYSTEM user.

Figure 13: Raccoon v2 Enumerating Process Token

If running as the SYSTEM user, the malware enumerates all running processes with the help of fun_CreateToolhelp32Snapshot. Otherwise, the malware moves forward without the enumeration. Figure 14 shows the 'enumerate_processes()' function being called, while Figure 15 shows the malware iterating over the processes.

Figure 14: Raccoon v2 Enumerate Process

Figure 15: Raccoon v2 Iterating Process Struct

Fingerprinting Host

Once the malware is aware of the environment in which it's running, it starts to fingerprint the host. This malware uses functions such as:

- RegQueryValueExW, for fetching the machine ID
- GetUserNameW, for fetching the username

Figure 16 depicts the malware retrieving the machine ID from the registry key SOFTWARE\Microsoft\Cryptography via the RegOpenKeyExW and RegQueryValueExW functions. Figure 17 depicts the malware using the GetUserNameW function to retrieve the username.

Figure 16: Raccoon v2 Fetching MachineID

Figure 17: Raccoon v2 Fetching Username

Figure 18: Raccoon v2: Username Buffer

After all this is done, the malware enumerates information such as the machine ID and username and then sends the data to the remote command and control server.

For this purpose, the malware creates a character string and starts appending these values to it, beginning with the machine ID and username. Figure 19 shows the built payload in a buffer.

Figure 19: Raccoon v2: Fingerprinting Payload

Next, it generates and appends configId, which is the RC4 encryption key.

machineId=|&configId=
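Concretely, the fingerprint body is a flat key-value string. The helper below is a hypothetical reconstruction based on the template and the description above (machine ID and username share the machineId field, joined by '|', and configId carries the RC4 key); the function name and sample values are ours:

```python
def build_fingerprint_payload(machine_id: str, username: str, config_id: str) -> str:
    # machineId field holds "<machine id>|<username>", per the post's description;
    # configId carries the generated RC4 key.
    return f"machineId={machine_id}|{username}&configId={config_id}"

# illustrative values only
example = build_fingerprint_payload("aabbccdd-guid", "victim-user", "rc4keyhere")
```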

Communications with Command and Control

Communication with command and control takes place over the plaintext HTTP protocol. The previously decrypted IP address hxxp://51(.)195(.)166(.)184/ is used for command and control communication.

The malware contacts the list of previously decrypted command and control IP addresses (stored in local_3c). Since this sample only contains one command and control IP address, the POST request is made to that one alone, as seen in Figure 20.

Figure 20: Raccoon v2: Command and Control communication

Command and Control URL

Figure 21: Raccoon v2 URL in buffer

Request Headers

Figure 22: Raccoon v2 Request Headers

Once the request has been made, the malware checks whether the content body length is zero. If no content is received from command and control, or the content body length is zero, the malware exits. This check is made because the exfiltration mechanism of the malware requires command and control to respond with a list of IP addresses to exfiltrate data to. In Figure 23, this condition can be seen along with the 'ExitProcess()' function call.

Figure 23: Raccoon v2 Verifying Response Content

Discarded the dependence on Telegram bot

Raccoon v1 relied on the Telegram Bot API description page to fetch command and control IP addresses and establish connections. The recent variants (v2) of this family have started to hard-code IP addresses in the binary to achieve this task. Raccoon Malware v2 uses 5 hard-coded IP addresses and iterates over them.

Data Exfiltration

The malware relies on the response from the command and control server to download the required DLLs and decide on the next course of action.

As of the writing of this blog, the command and control IP has died; thus, analysis of traffic toward the host is not possible. ThreatLabz has previously observed that the command and control server provides information on where to download additional payloads from and which IP address to use for further communication.

Figure 24: Raccoon v2 pinging extracted IP Address

Grepped DLLs

Figure 25: Raccoon v2 DLLs that are downloaded

The malware uses a WinAPI call to SHGetFolderPathW to get the path to C:\Users\<username>\AppData and appends \Local to it, using the result as the path to store stolen information before sending it to the command and control.

Figure 26: Raccoon v2 Storage Path In Buffer

Indicators Of Compromise

IP contacted by the analyzed sample of Raccoon v2:

51(.)195(.)166(.)184

A list of other IPs that act as a C2 for other samples can be found here.

Downloaded DLLs

nss3.dll
sqlite3.dll
GdiPlus.dll
Gdi32.dll

Path Used By the Malware

C:\Users\<username>\AppData\Local

Other Raccoon v2 samples observed in the wild:

0123b26df3c79bac0a3fda79072e36c159cfd1824ae3fd4b7f9dea9bda9c7909
022432f770bf0e7c5260100fcde2ec7c49f68716751fd7d8b9e113bf06167e03
048c0113233ddc1250c269c74c9c9b8e9ad3e4dae3533ff0412d02b06bdf4059
0c722728ca1a996bbb83455332fa27018158cef21ad35dc057191a0353960256
2106b6f94cebb55b1d55eb4b91fa83aef051c8866c54bb75ea4fd304711c4dfc
263c18c86071d085c69f2096460c6b418ae414d3ea92c0c2e75ef7cb47bbe693
27e02b973771d43531c97eb5d3fb662f9247e85c4135fe4c030587a8dea72577
2911be45ad496dd1945f95c47b7f7738ad03849329fcec9c464dfaeb5081f67e
47f3c8bf3329c2ef862cf12567849555b17b930c8d7c0d571f4e112dae1453b1
516c81438ac269de2b632fb1c59f4e36c3d714e0929a969ec971430d2d63ac4e
5d66919291b68ab8563deedf8d5575fd91460d1adfbd12dba292262a764a5c99
62049575053b432e93b176da7afcbe49387111b3a3d927b06c5b251ea82e5975
7299026b22e61b0f9765eb63e42253f7e5d6ec4657008ea60aad220bbc7e2269
7322fbc16e20a7ef2a3188638014a053c6948d9e34ecd42cb9771bdcd0f82db0
960ce3cc26c8313b0fe41197e2aff5533f5f3efb1ba2970190779bc9a07bea63
99f510990f240215e24ef4dd1d22d485bf8c79f8ef3e963c4787a8eb6bf0b9ac
9ee50e94a731872a74f47780317850ae2b9fae9d6c53a957ed7187173feb4f42
bd8c1068561d366831e5712c2d58aecb21e2dbc2ae7c76102da6b00ea15e259e
c6e669806594be6ab9b46434f196a61418484ba1eda3496789840bec0dff119a
e309a7a942d390801e8fedc129c6e3c34e44aae3d1aced1d723bc531730b08f5
f7b1aaae018d5287444990606fc43a0f2deb4ac0c7b2712cc28331781d43ae27

Conclusion

Raccoon Stealer, sold as malware-as-a-service, has become popular over the past few years, and several incidents involving this malware have been observed. The authors of this malware are constantly adding new features to this family. This is the second major release of the malware after the first release in 2019, which shows that the malware is likely to evolve and remain a constant threat to organizations.

Zscaler coverage

We have ensured coverage for the payloads seen in these attacks via advanced threat signatures as well as our advanced cloud sandbox.

Figure 27: Zscaler Sandbox Detection

Zscaler's multilayered cloud security platform detects indicators at various levels, as shown below:

Win32.PWS.Raccoon

*** This is a Security Bloggers Network syndicated blog from Blog Category Feed authored by Sarthak Misraa. Read the original post at: https://www.zscaler.com/blogs/security-research/raccoon-stealer-v2-latest-generation-raccoon-family


Google And Oracle Cloud Servers Fail In The UK Heatwave And Take Down Websites July 2022 – Inventiva


As the UK reached 40°C, clouds exploded. Due to cooling problems, Google's and Oracle's cloud services and servers in the UK went offline as the country underwent a record-breaking heatwave.

Datacenters couldn't handle the heat when the temperature in eastern England reached 40.3°C (104.5°F), the hottest ever recorded in a nation not accustomed to such temperatures. Various machines were turned off in order to prevent long-term harm, which rendered some resources, services, and virtual machines unusable and forced websites and similar services offline.

Networking, storage, compute, and other Oracle Cloud Infrastructure resources were unavailable on its servers in the south of the UK. The equipment was shut down by specialists to prevent hardware from overheating because cooling systems were at fault, according to a status update from Team Oracle.

"The UK South (London) Data Center's cooling infrastructure has experienced a difficulty as a result of unseasonal temperatures in the region," Oracle stated on Tuesday at 1638 UTC. "As a result, some customers might not be able to use or access resources hosted by Oracle Cloud Infrastructure in the region."

"In order to stop more hardware failures, we are identifying service infrastructure that may be safely scaled back. The necessary service teams have been notified and are attempting to repair the damaged infrastructure. This measure is being taken to reduce the risk of any long-term effects on our clients."

According to reports, Oracle's cooling system malfunctioned at least in part at noon, UK time.

Oracle is not the only IT behemoth to have reported temperature-related disruptions. One of Google Cloud's London facilities, europe-west2-a, has systems that are experiencing high error rates, latencies, or service unavailability, according to the company.

BigQuery, SQL, and Kubernetes are just a few of the storage and compute services impacted by these problems. At 1615 UTC, Google acknowledged the outage. One effect of the outage has been to take down WordPress websites hosted by WP Engine in the UK and backed by Google Cloud.

"There has been a cooling-related issue in one of our facilities that hosts zone europe-west2-a for region europe-west2," according to another Google advisory.

Because it caused a partial breakdown of capacity in that zone, a limited number of our clients experienced VM terminations and machine losses as a result. We are working very hard to increase capacity and restore cooling in that zone. There shouldn't be any further effects on zone europe-west2-a, and the same goes for any virtual machines that are now running.

We have shut down a portion of the zone and are restricting GCE preemptible launches to prevent damage to equipment and a prolonged outage. A limited percentage of recently launched Persistent Disk volumes are experiencing regional effects, and we're working to restore redundancy for the affected replicated Persistent Disk devices.

The Register has reached out to Oracle and Google for further information.

In addition to causing fires, the extreme heat in parts of England has disrupted train, power, and road services. A melting runway forced Luton Airport to close temporarily. If other internet services are impacted, we'll let you know.

The cooling problems in their data centres have been fixed by both Google and Oracle, with Google's service restored on Tuesday and Oracle's on Wednesday.

At 11:45 p.m. Eastern Time on Tuesday, Google's services were again available.

A cooling-related problem was resolved in one of our facilities, which houses a portion of zone europe-west2's capacity. The impact on GCE, Persistent Disk, and Autoscaling has been mitigated. Virtual machines (VMs) can be launched by customers in any zone within europe-west2. Some Persistent Disk volumes with HDD backing will still experience the effects and show IO faults. If you continue to experience issues with these services, get in touch with Google Cloud Product Support and mention this message.

Oracle's cooling was finally restored on Wednesday at 7:00 a.m. EST after taking a little longer.

Two more cooling units in the data centre failed after they were forced to work above their design limits due to unusually high temperatures in the UK South (London) region. As a result, a portion of the computing infrastructure went into protective shutdown as data centre temperatures rose.

Edited by Prakriti Arora


Announcing Android Cloud Gaming & Media Processing & Delivery Solutions Based on the New Intel Data Center GPU Codenamed Arctic Sound-M -…

Supermicro to Expand its Total IT Solutions Using Intel Data Center GPU codenamed Arctic Sound-M (ATS-M), to Deliver Outstanding Performance for the Modern Enterprise - Over 540 1080p @60Hz Transcoded Streams Per System*

SAN JOSE, Calif., July 27, 2022 /PRNewswire/ -- Super Micro Computer, Inc. (Nasdaq: SMCI), a global leader in enterprise computing, storage, networking, and green computing technology, is announcing future Total IT Solutions for availability with Android Cloud Gaming and Media Processing & Delivery. These new solutions will incorporate the Intel Data Center GPU, codenamed Arctic Sound-M, and will be supported on several Supermicro servers.

Supermicro solutions that will contain the Intel Data Center GPUs, codenamed Arctic Sound-M, include: the 4U 10x GPU server for transcoding and media delivery; the Supermicro BigTwin system with up to eight Intel Data Center GPUs in 2U for media processing applications; the Supermicro CloudDC server for edge AI inferencing; and the Supermicro 2U 2-Node server with three Intel Data Center GPUs per node, optimized for cloud gaming. Additional systems will be made available later this year.

"Supermicro will extend our media processing solutions by incorporating the Intel Data Center GPU," said Charles Liang, President, and CEO, Supermicro. "The new solutions will increase video stream rates and enable lower latency Android cloud gaming. As a result, Android cloud gaming performance and interactivity will increase dramatically with the Supermicro BigTwin systems, while media delivery and transcoding will show dramatic improvements with the new Intel Data Center GPUs. The solutions will expand our market-leading accelerated computing offerings, including everything from Media Processing & Delivery to Collaboration, and HPC."

Systems Tuned to Workloads

Transcoding is a critical technology for today's media-centric world. The Supermicro 4U 10x GPU server is ideal for various industries, where multiple formats must be delivered to a wide range of video devices.


Cloud-based gaming requires significant computing power and the ability to deliver interactive frame rates to gaming enthusiasts. With the Supermicro 2U 2-Node server featuring the Intel Data Center GPU, codenamed Arctic Sound-M, Android cloud-based gaming takes on a new level of performance.

Media Processing for Content Delivery Networks (CDN) requires fast and efficient delivery of all media types. The Supermicro BigTwin system with the Intel Data Center GPU will deliver multiple video content streams to consumers.

"The Intel Data Center GPU is a highly-flexible solution to accelerate workloads that have become an integral part of daily life. The combination of advanced media capabilities, graphics rendering pipeline, and an open software stack deliver the high-density, low-latency, and exquisite visual quality required for next-generation video conferencing, media delivery, and cloud gaming deployments. Supermicro's new product offerings with the Intel Data Center GPU pave the way for rapid deployments in the market," said Jeff McVeigh, vice president and general manager of the Super Compute Group at Intel.

The Supermicro Building Block Solutions approach allows Supermicro to bring a range of products to market faster so that new technologies can be quickly incorporated into its server lines. For example, Android Cloud gaming with the new Intel Data Center GPU codenamed Arctic Sound-M will support more interactive users than previously possible. The streaming performance will see a dramatic increase with the industry's first hardware AV1 encoder and open-source media software stack, compared with software-only video transcoding.

The Intel Data Center GPU codenamed Arctic Sound-M is designed to be an open, and flexible solution for cloud gaming and media processing & delivery. This GPU will be supported by a full solution stack offering developers an open-source software stack for streaming media and Android cloud gaming, with broad support for the latest codecs, graphics APIs, and frameworks. With Intel oneAPI for unified programming across architectures, the Intel Data Center GPU codenamed Arctic Sound-M will provide an open alternative to proprietary language lock-in that will enable the full performance of the hardware with a complete, proven set of tools that complement existing languages and parallel models. This will allow developers the ability to design open, portable codes that will take maximum advantage of various combinations of CPUs and GPUs.

To learn more about these new solutions, please visit: https://www.supermicro.com/en/accelerators/intel

To learn more about Supermicro, visit www.supermicro.com

About Supermicro

Supermicro (NASDAQ: SMCI) is a global leader in Application-Optimized Total IT Solutions. Founded and operating in San Jose, California, Supermicro is committed to delivering first-to-market innovation for Enterprise, Cloud, AI, and 5G Telco/Edge IT Infrastructure. We are transforming into a Total IT Solutions provider with server, AI, storage, IoT, and switch systems, software, and services while delivering advanced high-volume motherboard, power, and chassis products. The products are designed and manufactured in-house (in the US, Taiwan, and the Netherlands), leveraging global operations for scale and efficiency and optimized to improve TCO and reduce environmental impact (Green Computing). The award-winning portfolio of Server Building Block Solutions allows customers to optimize for their exact workload and application by selecting from a broad family of systems built from our flexible and reusable building blocks that support a comprehensive set of form factors, processors, memory, GPUs, storage, networking, power and cooling solutions (air-conditioned, free air cooling or liquid cooling).

Supermicro, Server Building Block Solutions, and We Keep IT Green are trademarks and/or registered trademarks of Super Micro Computer, Inc.

All other brands, names, and trademarks are the property of their respective owners.

* Results achieved with a Supermicro SYS-420GP-TNR server with 2x Intel Xeon Gold 6338 processors, 1TB of DRAM, Samsung PM863a 1.92TB drive, Ubuntu 20.04, AMC Firmware 4.2.2.0, 10x Intel Data Center GPU, codenamed Arctic Sound-M, 75W GPUs, IFWI Software Stack on card (ECC Memory Disabled version)


View original content to download multimedia: https://www.prnewswire.com/news-releases/announcing-android-cloud-gaming--media-processing--delivery-solutions-based-on-the-new-intel-data-center-gpu-codenamed-arctic-sound-m-301594858.html

SOURCE Super Micro Computer, Inc.


Edge Cloud 4 Production: How Audi is revolutionizing factory automation – Automotive World

Centralized, not decentralized; local servers, not hundreds of industrial PCs; software, not hardware: with the local server solution Edge Cloud 4 Production, Audi is initiating a paradigm shift in automation technology. After successful testing in the Audi Production Lab (P-Lab), three local servers will take over directing workers in the Böllinger Höfe. If the server infrastructure continues to operate reliably, Audi wants to roll out this automation technology, the only one of its kind in the world, for serial production throughout the entire Volkswagen Group.

Henning Löser, head of Audi's Production Lab, pulls the plug on a server in Ingolstadt's P-Lab. Simulated production in the Böllinger Höfe keeps going without interruption. Two other servers reliably continue controlling 36 cycles during the lab test in the P-Lab in Gaimersheim. Audi wants to be the first manufacturer in the world to turn to these kinds of centralized server solutions in cycle-dependent production. In the Böllinger Höfe near Neckarsulm, the Audi e-tron GT quattro and the R8 share an assembly line. The small-scale series produced there are particularly well suited for testing projects from the P-Lab and trying things out for large-scale series.

With the Edge Cloud 4 Production, a few centralized, local servers will take on the work of countless expensive industrial PCs. The server solution makes it possible to level out spikes in demand over the total number of virtualized clients, a far more efficient use of resources. Production will save time and effort, particularly where software rollouts, operating system changes, and IT-related expenses are concerned. "What we're doing here is a revolution," says Gerd Walker, Member of the Board of Management of AUDI AG for Production and Logistics. "We used to have to buy hardware when we wanted to introduce new functions. With Edge Cloud 4 Production, we only buy applications in the form of software. That is the crucial step toward IT-based production." For P-Lab boss Löser, the project is "an operation at the heart of our automation technology and production management." Audi is the first manufacturer to put a centralized server solution into test operation in cycle-dependent production.

The crucial advantage of Edge Cloud 4 Production is that countless industrial PCs can be replaced along with their input and output devices and no longer need to be individually maintained. Process safety is also greatly improved. In the event of a disruption, the load can be shifted to other servers. In contrast, a broken industrial PC would have to be replaced, and that takes time. On top of that, the solution reduces the workload for employees. In the future, thin clients capable of what is known as power-over-Ethernet will set the pace. These terminal devices get their electrical power via Ethernet cables and most of their computational power through local servers. They have USB ports for output devices. That enables managers directing workers to look at a monitor and see what needs to be mounted onto which vehicle. In the future, an oversized PC with processing and storage capacity will not be necessary for these tasks. "Software-based infrastructures have proven themselves in data processing centers. We're convinced they will also work well in production," says Löser.

Together with the experts from the P-Lab, the IT managers around Christoph Hagmüller, the Head of IT Services at Audi in Neckarsulm and co-manager for Production IT in the Böllinger Höfe, are rolling out the new solution. With its comparatively low unit and cycle numbers, the Böllinger Höfe is ideally suited to functioning as a real lab for testing the new concept in series production. Edge Cloud 4 Production has a hyper-converged infrastructure (HCI). This software-defined system combines all the elements of a small data processing center: storage, computing, networking, and management. The software defines functionalities like web servers, databases, and managing systems. The cloud solution can also be quickly scaled at will to adapt to changing production requirements. However, a public cloud link is out of the question due to production's stringent security requirements. Additionally, local servers make the necessary, very short latencies possible. "These are the reasons why we install the servers near us. That's also why we call the solution Edge Cloud: because it's close to our shop floor environment," says P-Lab boss Löser.

The new IT concept also improves ease of maintenance and IT security. With industrial PCs, the patch cycles (the intervals between necessary updates) are usually longer. On top of that, updates can only be installed during pauses in production. With the cloud-based infrastructure, IT experts can roll out patches in all phases within a few minutes via the central servers. Moreover, IT colleagues install functionality updates, such as a new operating system, in all virtual clients at the same time. Hagmüller explains that the need for additional functionality will get increasingly elaborate and expensive in the future. He estimates that the cost of an update (for instance, from Windows 10 to Windows 11) can be reduced by about one-third. "Additionally, with the server solution, we aren't dependent on loose timeframes in Production anymore. It gives us tremendous flexibility to ensure our software and operating systems are always completely up to date."

Both data processing centers at the Neckarsulm plant are slated for use in subsequent mass production. A fiber optic cable connects them with the Böllinger Höfe. According to Henning Löser, 5G will be relevant in the second stage. Thus far, a separate computer has been installed in every automated guided vehicle (AGV). Here too, experts must install costly security updates and new operating systems. It is conceivable that the AGVs could acquire new functionalities, but these are seldom transferable to their onboard computers. "We need a fast, high-availability network for that," says Löser. "In our testing environment in the P-Lab, we have taken another step forward concerning 5G."

SOURCE: Audi


How Our Bare Metal Cloud Keeps up with All the New OS Releases – thenewstack.io

When you run a bare metal cloud service, you spend a lot of time ensuring your hardware supports all the popular operating systems in their latest versions. Each new OS release has to be validated on each server configuration in your fleet, and each OS that's already on the list has to be tested on each new server config you add.

Sarah Funkhouser

Sarah manages the Delivery Engineering team at Equinix Metal. She's an expert in deploying and running Kubernetes on bare metal, and has recently been writing lots of Go to keep things running smoothly. When she's not working, you can find her hanging out with her two kids, playing with her three dogs, or maybe out running, biking and swimming around Raleigh, North Carolina.

New OS releases come out all the time, so keeping the list of operating systems validated on our product, Equinix Metal, up to date can easily eat up a lot of engineering hours. Still, this is a crucial capability for our business, so until recently, we would dedicate resources to manually adding each new operating system. A few months ago, however, we replaced that time-consuming and repetitive manual process with an automated CI pipeline we created using Buildkite. We dubbed it, informally, Bob the Builder. Bob has already saved us tons of engineering hours, hours that can be spent doing creative, more impactful work, and I'm here to share our experience.

For perspective, validating a new OS on Equinix Metal using the old process could take months. A simple update, say, from Ubuntu 18.04 to 20.04, was usually a weeks-long project. The problem was twofold. First, we didn't have a simple and automated CI pipeline for adding packages to OS images, customizing the configurations, or running the builds. We did all that manually.

Second, each new image needed to be deployed and tested on multiple hardware platforms to ensure that it worked on each server configuration available in our bare-metal cloud. We would verify that each image worked as required by manually installing it to various servers, then poking around to validate functionality.

Not having a CI pipeline for OS images and having to test manually were the main reasons it took so long to prepare each new OS image. The process got the job done, but needless to say, it wasn't exactly an ideal way to spend valuable engineering resources.

Bob the Builder addresses exactly those two issues. Buildkite is a platform for running flexible and scalable CI pipelines. Using Buildkite automates our OS build and testing pipelines, dramatically speeding things up and freeing up our engineers.

Here's how the new process works:

Bob the Builder can run on a laptop and trigger one-off builds (indeed, being a Go-based CLI tool that can run basically anywhere is one of Bob's benefits), but we mostly use it as part of Buildkite pipelines.

We chose Buildkite to orchestrate OS image pipelines because several of its features address our needs particularly well.

One is that it supports dynamic pipelines. Instead of defining a pipeline as a single set of steps and then running it, Buildkite allows us to set up conditions and stages. This means we can reuse the same pipeline for multiple image builds, which beats having to create a separate pipeline for each type of image.

It also lets us collect user input at any point in the pipeline. This makes our pipelines interactive, allowing for a lot of flexibility and control over complex OS build processes.

Most important of all is that it gives us total control over where builds happen. We can run them inside a container or directly on the hardware of our choosing. That's a big deal for Metal because we often need to run builds on specific types of hardware, like a bare-metal Arm server, for instance.

In the end, we can easily publish an image across the various x86 and Arm server configurations that we want to support without having to set up a different pipeline for each. That's not something your average CI server can manage.
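The dynamic-pipeline and agent-targeting features described above can be sketched in code. The following is a minimal, hypothetical illustration, not Equinix Metal's actual tooling: the `bob build` command, image names, and agent tags are all assumptions. It shows a small Go program that generates one Buildkite build step per image/architecture pair and prints the pipeline as JSON, the format `buildkite-agent pipeline upload` accepts:

```go
// A sketch of a dynamic Buildkite pipeline generator. All image names,
// the `bob build` command, and the agent tags are hypothetical.
package main

import (
	"encoding/json"
	"fmt"
	"os"
)

// Step mirrors the small subset of Buildkite's step schema used here.
type Step struct {
	Label   string            `json:"label"`
	Command string            `json:"command"`
	Agents  map[string]string `json:"agents,omitempty"` // routes the step to matching hardware
}

// Pipeline is the top-level document uploaded to Buildkite.
type Pipeline struct {
	Steps []Step `json:"steps"`
}

// buildPipeline emits one build step per (image, arch) pair, so a single
// generator replaces a hand-written pipeline for every OS image.
func buildPipeline(images, arches []string) Pipeline {
	var p Pipeline
	for _, img := range images {
		for _, arch := range arches {
			p.Steps = append(p.Steps, Step{
				Label:   fmt.Sprintf("build %s (%s)", img, arch),
				Command: fmt.Sprintf("bob build --image %s --arch %s", img, arch), // hypothetical CLI
				Agents:  map[string]string{"arch": arch},
			})
		}
	}
	return p
}

func main() {
	p := buildPipeline(
		[]string{"ubuntu-20.04", "ubuntu-22.04"},
		[]string{"x86_64", "arm64"},
	)
	// Emit JSON on stdout, ready to pipe into `buildkite-agent pipeline upload`.
	enc := json.NewEncoder(os.Stdout)
	enc.SetIndent("", "  ")
	if err := enc.Encode(p); err != nil {
		os.Exit(1)
	}
}
```

Piping a generator's output into `buildkite-agent pipeline upload` at the start of a build is Buildkite's standard mechanism for dynamic pipelines; the `agents` tags are what would let a step land on, say, a bare-metal Arm server rather than a generic container host.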

Of course, Buildkite's advanced feature set came with a learning curve. We didn't get our first Bob the Builder-powered pipeline up and running in a day. It was a months-long process, but once we learned to pair Buildkite and Bob the Builder, rolling out a new OS image became a breeze.



Four Cultural Shifts Enterprises Should Embrace When Migrating to the Cloud – EnterpriseTalk

Simply implementing cloud technologies won't result in strategic innovation; the C-suite is still required for that. However, cutting-edge concepts can now be transformed into ground-breaking new products thanks to cloud computing, which will fuel future growth.

Nowadays, a lot of organizations have a hybrid cloud approach. It's getting harder and harder to deny how widely used the cloud is. Before making such a significant decision, there are many things to consider. Cloud adoption also transforms the corporate culture. The cultural aspect of an organization's operations is sometimes overlooked, and if it isn't managed properly, cloud adoption might be detrimental to the organization. But if done right, the business and its people will quickly embrace the many good improvements that result from it.

Shadow IT was a problem before cloud computing, and it is still a problem now that the cloud has been incorporated. It is advised that organizations hold a cloud amnesty: since they almost certainly already use some cloud technologies in their operations, they ought to be aware of them. To enable CEOs to identify where further efficiencies may be obtained through aggregation, departments should be asked to indicate which cloud products they use. Without this, services might be duplicated, resulting in lost money, a weakened negotiating position, and increased shadow IT issues.


If a firm tries to force cloud computing upon a reluctant IT culture, the project can be challenging to complete. Technical resources will need to comprehend the change, accept it, and outgrow the previous perspective, since servers will probably become an obsolete resource. Since losing access to the physical servers may impact their confidence and competence, employees may even fight against the cloud plan. Therefore, it is essential to include everyone, allowing them the freedom and time to assess new technology, which will help personnel feel empowered and open-minded.

A company needs new skills if it wants to experiment with a new technology or operating strategy. The most important talents for cloud migration are project management, financial and business management, and technical, security, and compliance expertise. Simply put, if a person hasn't learned and developed the particular new skills, they won't be able to use cloud-based project management solutions efficiently. Although this work can be outsourced, having in-house expertise is more cost-effective. The most crucial skill sets for managing cloud services are those related to contracts and negotiations; the required combination of expertise, from technical to commercial and legal, is truly exceptional.


As it is their technology, cloud service providers can have a significant impact on the overall roadmap, even if the team is proactive, knowledgeable, and effective in negotiations. CIOs will have to acknowledge this reality and not be astounded if the newest advancements in cloud services are what are guiding their plans and workloads. Cloud service providers have a great deal of control over the products they choose, but they may also alter their terms of service or product capabilities if they feel it would benefit the customer.

Enterprises nowadays encounter a wide range of difficulties. New digital technologies and the economy are producing great new opportunities as well as competitive challenges. For numerous companies, it is up to the CFOs to respond to these problems, and they can only do so by making their organizations lean, agile, and responsive. This is why cloud computing is so important: it gives modern businesses the competitive edge they require to succeed. Technology use must fuel growth and offer real control, visibility, and agility so that firms can adapt and operate as they need to.



OVHcloud, the best dedicated server provider in the market – Startup.info

Managing multiple websites, or large, traffic-heavy websites, on a normal hosting plan can be a huge challenge as your business grows. It's also important to upgrade when you experience extensive downtime, because this significantly impacts your business. A dedicated server is a type of web hosting where your business gets its own server and doesn't compete with others for traffic.

This improved service comes at a higher price. Therefore, you should consider multiple factors before upgrading to a dedicated server to ensure it's the best option for your business. Indeed, getting resources committed entirely to your site eliminates downtime and bottlenecks.

While there are many dedicated server hosting providers in the market, OVHcloud has positioned itself as the best hosting provider. This article reviews what a dedicated server is all about and the cost of upgrading to it.

A dedicated server offers advanced features, so the price is much higher than other hosting plans. Actually, a dedicated server is the most expensive hosting method. However, it's certainly worth it because it gives you complete control and is highly customizable.

Other benefits include:

Dedicated Resources: The hosting gives you exclusive server resources such as disk space, CPU usage, bandwidth, and RAM. This eliminates risks such as network congestion, occasional crashes, frequent downtime, and more.

Enhanced Website Security: A dedicated server allows you to customize your security settings. You can limit admin access, set up new firewalls, use a preferred malware protection program, install an Intrusion Detection System (IDS), and much more.

Fast Load Speed: A dedicated server is much faster than a shared one. Your website will likely rank higher in search results when its faster. Also, speed has a huge impact on your bottom line because it affects the conversion rates.

OVHcloud offers hosting, servers, and cloud computing solutions. You can take advantage of its experience and expertise in bare metal servers and choose a dedicated server from its wide range of servers.

The following are the prices of its dedicated servers.

These are the most affordable servers and suitable for most applications.

These are versatile servers for SMEs.

These servers are designed for complex and high resilience infrastructures.

These are the most efficient servers and are optimized for critical workloads.

You can save much more with OVHcloud dedicated servers. Contact them for more information.
