Category Archives: Cloud Computing

Cloud computing and blue-sky thinking: An atmospheric scientist … – Purdue University

WEST LAFAYETTE, Ind. – Alexandria Johnson does hard science on the most nebulous of subjects: clouds. As an atmospheric scientist and assistant professor of practice in Purdue University's College of Science, she studies clouds wherever they are: in her lab, on Earth, throughout the solar system and into the galaxy.

"The coolest thing about my research is that I can see clouds every day," Johnson said. "I can look up into our own atmosphere and watch them change and evolve. Then I can take that knowledge and apply it to other planetary bodies, both within and outside our solar system."

The science of clouds covers a lot of ground. Her research shines light on topics ranging from rainfall and microplastic precipitation in Indiana to the climates of moons and planets far outside the realm of human experience.

Studying clouds in their natural environments can be complex and subject to the variations of climate, weather and observation devices. Johnson's solution is to create her own homegrown clouds to study in her lab in the Department of Earth, Atmospheric, and Planetary Sciences. She strips the systems down to their basics to get a clear understanding of how the particles that make up clouds form, develop and interact with their environment. Nothing in her lab actually looks like a cloud; there are no mists swirling picturesquely in glass bottles. It's mostly lasers and big black boxes. But the behavior of these lab-based cloud particles mimics the behavior of cloud particles in massive sky-sweeping clouds, only in miniature.

"Of course, we don't grow them at quite the same scale you see in an atmosphere," Johnson said. "Instead, we can take one particle that is representative of a cloud, pump in different gases, and change the temperature and pressure of the system. We then watch as that particle grows, shrinks or changes phase with time, which are processes that happen in clouds everywhere."

Clouds on Earth don't often form without the aid of a nucleus, or seed particle, and in some cases what would be considered a nucleus on Earth may itself be an exotic cloud elsewhere. The particles in Johnson's lab, like all particles, have a charge. Johnson and her team use an electric field to levitate and contain individual particles so that they can't move around. These particles are then stable for extended periods of time, which enables long-term experiments in which the pressure, temperature, electric field and laser illumination may be tweaked and observations recorded. Other methods build upon these to allow the team to look at groups of particles and observe how they scatter and polarize light.

Using methods like these, Johnson can study how clouds form and what different cloud particle shapes and compositions can reveal, and she is able to understand the conditions that lead to different cloud types and behaviors. Like aeronautical engineers using a wind tunnel to observe how currents move around structures, Johnson uses these particles to understand the microphysics that underpin vast and complex systems.

Many scientists (climatologists, meteorologists and planetary scientists, to name a few) study clouds as part of their broader research. But Johnson is one of the few who studies the particular physics of clouds in the laboratory.

"There are not many of us who dig into the microphysics of how clouds form," Johnson said. "Anyone who studies the atmosphere has a general sense of knowledge about clouds. But none of those systems work without the physics. We have to understand the microphysics to truly grasp the complexities and implications."

It's a long-running joke that the nights of notable astronomical events on Earth seem to be almost supernaturally disposed to be cloudy. That is true of other planets, too.

Using enormous, advanced, vastly powerful telescopes, astronomers can peer through miles and light-years of space just to find clouds blocking their view of the planet itself. Rather than the planet's surface, they can only perceive the opaque atmosphere that enswathes it.

Every planetary body in the solar system that has a dense atmosphere, and many outside of it, has clouds in that atmosphere. Even bodies with thin, wispy or intermittent atmospheres, like Pluto, have particulates hanging in the atmosphere that, while not true clouds, form a haze of particles and share many of clouds' properties.

"Clouds are a ubiquitous feature of planetary atmospheres," Johnson said. "This is something we've seen from our own solar system, and when we look at exoplanet atmospheres, it's no surprise that we find clouds there too. Unfortunately, they tend to block our view of the atmosphere that is below."

Scientists have been able to send probes and rovers to close planetary neighbors, including Venus and Mars. But for bodies that are farther away, including exoplanets (planets in other star systems entirely), scientists must come up with clever ways to conduct science.

"The astronomers find the clouds to be an annoyance. They get in the way of the data they want, whether that's learning about the surface of the planet or its atmospheric composition," Johnson said. "We see it a little differently. Yes, they're there. We can't get rid of them. So let's use our understanding of clouds on Earth and planetary atmospheres of our solar system to learn about these things that we can't observe in exoplanets."

Most of the planets Johnson studies are cool planets. While Earth seems balmy (with planetary temperature averages around 60 degrees Fahrenheit), it is actually chilly by planetary standards when contrasted with large gas giants orbiting close to their stars, like "hot Jupiters."

Johnson and her team accumulate information about planetary bodies in Earth's solar system and about exoplanets. Astronomers can collect spectrographic data to analyze the chemical compounds that make up an atmosphere and use mathematical models, observations and gravitational studies to determine a planet's mass, speed and orbit. Combining that information with insights from her laboratory studies, Johnson can help astronomers determine what a planet's atmosphere might be like and extrapolate its chances of hosting life.

"Our big questions are when, where and why do clouds form in these atmospheres?" Johnson said. "If we want to understand these enshrouded exoplanets, we need to understand the clouds. That understanding gives us insights into the atmospheric chemistry at work, atmospheric circulation and the climate. In a way we ground-truth astronomical observations."

Johnson is also looking up at the clouds from below, a little closer to home. In a current study, she is examining the role microplastics play in cloud formation. Microplastic pollution, which has been found just about everywhere, including in large bodies of water like the Great Lakes, may form part of clouds or be scavenged by precipitation, then shower the landscape in rainstorms and snowfall. Those microplastics have dire implications for ecosystem health, human health and agriculture.

Understanding how they become attached to clouds, move through weather systems and impact the landscape when deposited can help Johnson and her team protect life on Earth, just as they explore the possibility of livable conditions on other planets.

"It's the same physics," Johnson said. "It's the same processes, all throughout the universe, and it brings me a huge amount of wonder and joy. As an undergraduate physics major, I chose a senior research project studying how water droplets froze under varying conditions. I literally watched a droplet freeze hundreds of times to study the process and was entranced. I said, 'This is what I want to do with my life. This is amazing. I want to study clouds.'"

About Purdue University

Purdue University is a top public research institution developing practical solutions to today's toughest challenges. Ranked in each of the last five years as one of the 10 Most Innovative universities in the United States by U.S. News & World Report, Purdue delivers world-changing research and out-of-this-world discovery. Committed to hands-on and online, real-world learning, Purdue offers a transformative education to all. Committed to affordability and accessibility, Purdue has frozen tuition and most fees at 2012-13 levels, enabling more students than ever to graduate debt-free. See how Purdue never stops in the persistent pursuit of the next giant leap at https://stories.purdue.edu.

Writer/Media contact: Brittany Steff, bsteff@purdue.edu

Source: Alexandria Johnson, avjohns@purdue.edu

Link:
Cloud computing and blue-sky thinking: An atmospheric scientist ... - Purdue University

Singapore on track to reach cloud migration goals, asks suppliers to re-apply – The Register


Original post:
Singapore on track to reach cloud migration goals asks suppliers to re-apply - The Register

Ampere Computing launches its custom chips aimed at cloud … – Reuters

May 18 (Reuters) - Ampere Computing on Thursday released a new family of data center chips with technology it has custom-designed for cloud computing companies.

Founded by former Intel Corp (INTC.O) president Renee James, Ampere has focused on courting cloud companies that buy thousands of chips at a time and in turn rent them out. The company has deals in place with Alphabet Inc's (GOOGL.O) Google Cloud, Microsoft Corp's (MSFT.O) Azure and Oracle Corp's (ORCL.N) cloud unit, among others.

Unlike Intel, Ampere uses a computing architecture from SoftBank Group Corp-owned (9984.T) Arm Ltd, which is also an investor in Ampere. But the new AmpereOne offerings announced Thursday are the first to use Ampere's own custom-designed computing cores, the most important part of the chips, which are in turn the brains of the data center servers that power everything from business apps to social media sites.

The new Ampere chips will have as many as 192 of those cores, whereas Intel chips tend to have only a few dozen. The high core counts matter because cloud companies make money by slicing up chips and selling just a piece of their computing power to customers, and having a large number of cores makes doing so easier.

After Ampere disclosed its approach, Advanced Micro Devices (AMD.O) announced a 128-core chip based on what is called the "x86" architecture used by AMD and Intel. Intel also has a high-core-count chip in the works.

"It's flattering that the x86 vendors have been able to get closer to us, but we're well on our way to higher core counts now," said Jeff Wittich, Ampere's chief product officer.

Ampere last year filed a confidential registration with U.S. securities regulators for an initial public offering. Oracle, where Ampere's CEO James sits on the board, is a major investor. James declined to say when Ampere might go public.

"We did not pull our registration. We are ever hopeful that the market will open and that it will open for growth companies," James said.

Reporting by Stephen Nellis in San Francisco; editing by Mark Potter

Our Standards: The Thomson Reuters Trust Principles.

The rest is here:
Ampere Computing launches its custom chips aimed at cloud ... - Reuters

Red Hat Summit’s first day reveals key themes for the future of cloud … – SiliconANGLE News

As day one of Red Hat Summit came to a close in Boston, analysts and attendees were left reflecting on the key insights and takeaways from the event.

"The big theme is how to make it simpler for the end users," said theCUBE analyst Rob Strechay (pictured, left), emphasizing the focus on driving users toward cloud, Kubernetes and Red Hat Inc.'s OpenShift, all with an end goal of improving accessibility and efficiency.

This push toward simplification was reiterated throughout the day. "The announcements from day 1 were all about simplification," according to analyst Paul Gillin (right). Red Hat's new offerings, including Lightspeed and an event-driven version of Ansible, are designed to reduce complexity and ease the lives of end users and developers.

Strechay, Gillin and co-analyst John Furrier broke down Red Hat Summit day 1 during an exclusive broadcast on theCUBE, SiliconANGLE Media's livestreaming studio. (* Disclosure below.)

Ansible, an automation platform acquired by Red Hat in 2015, saw a significant shift in positioning during this week's Summit. Strechay observed a shift in emphasis from Ansible as a small configuration-management niche to a central theme of the conference.

"They made Ansible the star of the show today," Gillin said, adding that he saw this as a sign of Red Hat recognizing the prime opportunity in addressing the escalating complexity of information technology landscapes with Ansible's automation capabilities.

The integration of Ansible into Red Hat's event agenda was further underlined by Furrier.

"They're shutting down and folding in AnsibleFest – that's coming into the fold," he said. "That's big. And they were dominating most of the thematic content."

Another significant topic that emerged from the discussions was the relationship between AI and cloud computing. The panel debated the concept of AI guardrails, necessary guidelines that prevent AI from spiraling out of control.

Strechay connected this to Red Hat's emphasis on hybrid cloud: "Nobody knows where AI is really going to live, and all that data."

On this theme, Gillin highlighted how AI's potential disasters are lurking in our future. While AI's potential problems are a hot topic, there are likely young innovators emerging, ready to solve these problems and create safer, more effective AI systems, he added.

Concerning the concept of multicloud, theCUBE's analysts expressed a certain level of skepticism. While the idea is full of promise, implementation often falls back on homegrown solutions, according to Gillin.

Strechay concurred, noting that the vendors selling the software are not the ones living with the complexities of implementation.

Despite these challenges, the analysts agreed on the essential role of open source in the future of cloud computing. Gillin asserted that the natural pull of the market now is toward open. In the context of AI, the analysts acknowledged the need for open-source AI to improve transparency and prevent monopolistic moats.

Here's the complete video interview, part of SiliconANGLE's and theCUBE's coverage of Red Hat Summit:

(* Disclosure: This is an unsponsored editorial segment. However, theCUBE is a paid media partner for the Red Hat Summit event. Red Hat Inc. and other sponsors of theCUBE's event coverage have no editorial control over content on theCUBE or SiliconANGLE.)


Original post:
Red Hat Summit's first day reveals key themes for the future of cloud ... - SiliconANGLE News

Cloud Computing: Quality and Cataloging are Top Challenges … – Formtek Blog

By Dick Weisinger

Businesses are moving their data to the cloud but are being faced with challenges managing their data once it is there. A report by Forrester on behalf of Capital One found that two of the biggest challenges are data quality and data that is not cataloged or categorized.

Hugo Noreno, editorial director at Forbes, said that the better the data quality, the more confidence users will have in the outputs they produce, lowering risk in the outcomes and increasing efficiency.

Capital One told Edward Segal, a senior contributor at Forbes, that without data cataloging, decision-makers struggle to understand what data they have, how the data is used, and who owns the data.

The Capital One report found that decision-makers need to address key challenges to ensure they are getting the most out of their data and can leverage that data at scale, gaining agility, increasing cost efficiency, and making better-informed decisions. Firms that fail to do this will miss the moment and fall behind.

Excerpt from:
Cloud Computing: Quality and Cataloging are Top Challenges ... - Formtek Blog

Evolution of Cloud Security | Looking At Cloud Posture Management … – SentinelOne

When cloud computing saw its earliest waves of adoption, businesses only had to decide whether or not they wanted to adopt it. The notion of cloud security in these first few years came as a secondary consideration. Though cloud computing has undergone many improvements since it made a splash following the advent of the World Wide Web, the challenge of cloud security has only become more complex and the need for it more acute.

Today's hyperconnected world sees the cloud surface facing a variety of risks, from ransomware and supply chain attacks to insider threats and misconfigurations. As more businesses have moved their operations and sensitive data to the cloud, securing this environment against developing threats continues to be an ever-changing challenge for leaders.

This post walks through a timeline of how cloud security has grown over recent years to combat new and upcoming risks associated with its use. Following this timeline, security leaders can implement the latest in cloud security based on their own unique business requirements.

When businesses first began to embrace the web in the '90s, the need for data centers boomed. Many businesses had a newfound reliance on shared hosting as well as the dedicated servers upon which their operations were run. Shortly after the turn of the century, this new, virtual environment became known as the cloud. Blooming demand for the cloud then spurred a digital race between Amazon, Microsoft, and Google to gain more share of the market as cloud providers.

Now that the idea and benefits of cloud technology had gained widespread attention, the tech giants of the day focused on relieving businesses of the big investments needed for computing hardware and expensive server maintenance. Amazon Web Services (AWS) and, later, Google Docs and Microsoft's Azure and Office 365 suite all provided an eager market with more and more features and ways to rely on cloud computing.

However, the accelerating rates of data being stored in the cloud bred the beginnings of a widening attack surface that would signal decades of cloud-based cyber risks and attacks for many businesses. Cyberattacks on the cloud during this time mostly targeted individual computers, networks, and internet-based systems. These included:

Cloud security in this decade thus focused on network security and access management. Dedicated attacks targeting cloud environments became more prominent in the following decades as cloud computing gained traction across various industries.

In the 2000s, the cybersecurity landscape continued to evolve rapidly, and the specific types and sophistication of attacks targeting cloud environments expanded. Cloud computing was becoming more popular, and cyberattacks specifically targeting cloud environments started to emerge. This decade marked a new stage of cloud security challenges directly proportional to the significant increase in the adoption of cloud.

While past its infancy, cloud computing was not as prevalent as it is now, and many businesses still relied on traditional on-premises infrastructure for their computing needs. Consequently, the specific security concerns related to cloud environments were not widely discussed or understood.

Cloud security measures in the 2000s were relatively basic compared to today's standards. To secure network connections and protect data in transit, security measures for the cloud primarily focused on Virtual Private Networks (VPNs), commonly used to establish secure connections between on-premises infrastructure and the cloud provider's network. Further, organizations relied heavily on traditional security technologies that were adapted for these new cloud environments. Firewalls, intrusion detection systems, and access control mechanisms were employed to safeguard network traffic and protect against unauthorized access.

The 2000s also saw few industry-specific compliance standards and regulations explicitly addressing cloud security. Since compliance requirements were generally focused on traditional on-premises environments, many businesses had to find their own way, testing out combinations of security measures through trial and error, since there were no standardized cloud security best practices.

Cloud security at the beginning of the millennium was largely characterized by limited control and visibility, and was heavily reliant on the security measures implemented by the cloud service providers. In many cases, customers had limited control over the underlying infrastructure and had to trust the provider's security practices and infrastructure protection. This also meant that customers had limited visibility into their cloud environments, adding to the challenge of monitoring and managing security incidents and vulnerabilities across the cloud infrastructure.

In the 2010s, cloud security experienced significant advancements as cloud computing matured and became a staple of many businesses' infrastructures. In turn, attacks on the cloud surface also evolved into much more sophisticated and frequent events.

Data breaches occupied many news headlines in the 2010s, with attackers targeting cloud environments for cryptojacking or to gain unauthorized access to sensitive data. Many companies fell victim to compromises that leveraged stolen credentials, misconfigurations, and overly permissive identities. A lack of visibility into the cloud surface meant breaches could go undiscovered for extended periods.

Many high-profile breaches exposed large amounts of sensitive data stored in the cloud, including:

The severity of cloud-based attacks led to increased awareness of the importance of cloud security. Organizations recognized the need to secure their cloud environments and began implementing specific security measures. As cloud adoption continued to grow, so did the motivation for attackers to exploit cloud-based infrastructure and services. Cloud providers and organizations responded by increasing their focus on cloud security practices, implementing stronger security controls, and raising awareness of globally recognized countermeasures.

Enter the Cloud Shared Responsibility Model. Introduced by cloud service providers (CSPs) to clarify the division of security responsibilities between the CSP and the customers utilizing their services, the model gained significant prominence and formal recognition in the 2010s.

During this period, major providers such as Amazon Web Services (AWS), Microsoft Azure, and Google Cloud Platform (GCP) began emphasizing the shared responsibility model as part of their cloud service offerings. They defined the respective security responsibilities of the provider and the customer, outlining the areas for which each party was accountable. This model helped a generation of businesses better understand their role in cloud security and enabled them to implement appropriate security measures to protect their assets.

This decade also popularized the services of cloud access security brokers (CASBs), a term coined by Gartner in 2012 and defined as:

"On-premises, or cloud-based security policy enforcement points, placed between cloud service consumers and cloud service providers to combine and interject enterprise security policies as the cloud-based resources are accessed. CASBs consolidate multiple types of security policy enforcement. Example security policies include authentication, single sign-on, authorization, credential mapping, device profiling, encryption, tokenization, logging, alerting, malware detection/prevention and so on."

To help businesses navigate and address the changing cloud security landscape, CASBs emerged as a critical security solution for organizations, acting as intermediaries between cloud service providers and consumers. Their main goals were to provide visibility, control, and security enforcement across cloud environments through services such as data loss prevention (DLP), cloud application discovery, encryption and tokenization, compliance, and governance.

The 2010s saw the emergence of Cloud Security Posture Management solutions and were also the starting point for improved compliance and standardization for the use of cloud in modern businesses. Industry-specific compliance standards and regulations began to address cloud security concerns more explicitly. Frameworks such as the Cloud Security Alliance (CSA) Cloud Controls Matrix and both ISO 27017 and ISO 27018 sought to provide guidelines for cloud security best practices.

In current times, cloud technology has laid down a foundation for a modern, digital means of collaboration and operations on a large scale. Especially since the COVID-19 pandemic and the rise of remote workforces, more businesses than ever before are moving towards hybrid or complete cloud environments.

While cloud technologies, services, and applications are mature and commonly used across all industry verticals, security leaders still face the challenge of securing this surface and meeting new and developing threats. Modern businesses need a cloud posture management strategy to effectively manage and secure their cloud environments. This involves several key elements to ensure agile and effective protection against today's cloud-based risks.

CSPM solutions have now gained a large amount of traction, enabling organizations to continuously assess and monitor their cloud environments for security risks and compliance. CSPM tools offer visibility into misconfigurations, vulnerabilities, and compliance violations across cloud resources, helping organizations maintain a secure posture.

An essential element of CSPM is cloud attack surface management. Since cloud environments introduce unique security challenges, a cloud posture management strategy helps businesses assess and mitigate risks. It allows organizations to establish and enforce consistent security controls, monitor for vulnerabilities, misconfigurations, and potential threats, and respond to security incidents in a timely manner. A robust strategy enhances the overall security posture of the cloud infrastructure, applications, and data.

CSPM also encompasses what's called the shift-left paradigm, a cloud security practice that integrates security measures earlier in the software development and deployment lifecycle. Rather than implementing security as a separate and downstream process, shifting left addresses vulnerabilities and risks at the earliest possible stage, reducing the likelihood of security issues and improving overall security posture. It emphasizes the proactive inclusion of security practices and controls from the initial stages of development, rather than addressing security as an afterthought or at later stages.
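As a rough sketch of what shifting left can look like in practice, the commands below add an infrastructure-as-code scan to a CI pipeline before anything reaches a cloud account. The choice of Checkov as the scanner and the directory names are illustrative assumptions, not something prescribed by any particular CSPM vendor:

    # Example pre-deployment CI step: scan IaC and Kubernetes manifests
    # before they are applied to a cloud account (tool and paths are examples only).
    pip install checkov
    checkov -d ./infrastructure --framework terraform    # scan Terraform templates
    checkov -d ./manifests --framework kubernetes        # scan Kubernetes manifests
    # A non-zero exit code fails the pipeline, so misconfigurations are caught
    # during development rather than after deployment.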

In addition, Cloud Infrastructure Entitlement Management (CIEM) tools have emerged to help organizations manage access entitlements across multicloud environments, helping to reduce the risks associated with excessive permissions.

As cloud adoption rates continue to increase, many businesses have turned to Kubernetes (K8s) to help orchestrate and automate the deployment of containerized applications and services. K8s has risen as a popular choice for many security teams that leverage its mechanism for reliable container image build, deployment, and rollback, which ensures consistency across development, testing, and production.

To better assess, monitor and maintain the security of K8s, teams often use the Kubernetes Security Posture Management (KSPM) framework to evaluate and enhance the security posture of Kubernetes clusters, nodes, and the applications running on them. It involves a combination of activities including risk assessments of the K8s deployment, configuration management for the clusters, image security, network security, pod security, and continuous monitoring of the Kubernetes API server to detect suspicious or malicious behavior.
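As a small, hedged example of the routine checks a KSPM practice automates, the commands below use stock kubectl features (Pod Security admission labels and RBAC introspection); the namespace and service account names are placeholders:

    # Enforce the "restricted" Pod Security Standard on a namespace (placeholder name)
    kubectl label --overwrite namespace production \
        pod-security.kubernetes.io/enforce=restricted \
        pod-security.kubernetes.io/warn=restricted
    # Spot-check RBAC: can the default service account create pods in that namespace?
    kubectl auth can-i create pods \
        --as=system:serviceaccount:production:default -n production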

Additionally, Cloud Workload Protection Platforms (CWPPs) and runtime security help protect workloads against active threats once the containers have been deployed. Implementing K8s runtime security tools protects businesses from malware that may be hidden in container images, privilege escalation attacks exploiting bugs in containers, gaps in access control policies, or unauthorized access to sensitive information that running containers can read.

The zero trust security model has gained prominence in the 2020s. It emphasizes the principle of "trust no one" and requires authentication, authorization, and continuous monitoring for all users, devices, and applications, regardless of their location or network boundaries. Zero trust architecture helps mitigate the risk of unauthorized access and lateral movement within cloud environments.

Implementing the zero trust security model means taking a proactive and robust approach to protecting cloud environments from evolving cyber threats. Compared to traditional network security models, which relied on perimeter-based defenses and assumed that everything inside the network was trusted, zero trust architecture:

Cloud-native security solutions continue to evolve, providing specialized tools designed specifically for cloud environments. These tools offer features such as cloud workload protection, container security, serverless security, and cloud data protection. Many businesses leverage cloud-native tools to address the unique challenges of modern cloud deployments in a way that is scalable, effective, and streamlined to work in harmony with existing infrastructure.

Cloud-native security tools often leverage automation and orchestration capabilities provided by cloud platforms. Based on predefined templates or dynamically changing conditions, they can automatically provision and configure security controls, policies, and rules to reduce manual effort. Since many cloud breaches are the result of human errors, such tools can help security teams deploy consistent and up-to-date security configurations across their business's cloud resources.

Continuous monitoring of cloud environments is essential for early threat detection and prompt incident response. Cloud-native security tools enable centralized monitoring and correlation of security events across cloud and on-premises infrastructure. As they are designed to detect and mitigate cloud-specific threats and attack vectors, cloud-native solutions can cater to characteristics of cloud environments, such as virtualization, containerization, and serverless computing, identifying the specific threats targeting these technologies.

The use of advanced analytics, threat intelligence, artificial intelligence (AI) and machine learning (ML) is on the rise in cloud security. These technologies enable the detection of sophisticated threats, identification of abnormal behavior, and proactive threat hunting to mitigate potential risks.

Both AI and ML are needed to accelerate the quick decision-making process needed to identify and respond to advanced cyber threats and a fast-moving threat landscape. Businesses that adopt AI and ML algorithms can analyze vast amounts of data and identify patterns indicative of cyber threats. They can detect and classify known malware, phishing attempts, and other malicious activities within cloud environments.

By analyzing factors such as system configurations, vulnerabilities, threat intelligence feeds, and historical data, the algorithms allow security teams to prioritize security risks based on their severity and potential impact. This means resources can be focused on addressing the most critical vulnerabilities or threats within the cloud infrastructure.

From a long-term perspective, the adoption of AI and ML in day-to-day operations enables security leaders to build a strong cloud security posture through security policy creation and enforcement, ensuring that policies adapt to changing cloud environments and truly address emerging threats.

Securing the cloud is now an essential part of a modern enterprise's approach to risk and cyber threat management. By understanding how the cloud surface has evolved, businesses can better evaluate where they are on this development path and where they are headed. Business leaders can use this understanding to ensure that the organization's security posture includes a robust plan for defending and protecting cloud assets. By prioritizing and investing in cloud security, enterprises can continue to safeguard their organizations against developing threats and build a strong foundation for secure and sustainable growth.

SentinelOne focuses on acting faster and smarter through AI-powered prevention and autonomous detection and response. SentinelOne's Singularity Cloud ensures organizations get the right security in place to continue operating in their cloud infrastructures safely.

Learn more about how Singularity helps organizations autonomously prevent, detect, and recover from threats in real time by contacting us or requesting a demo.

Singularity Cloud

Simplifying security of cloud VMs and containers, no matter their location, for maximum agility, security, and compliance.

More:
Evolution of Cloud Security | Looking At Cloud Posture Management ... - SentinelOne

Integrating Network Function Virtualization with the DevOps Pipeline … – Open Source For You

The fourth part of this series on integration of network function virtualization with the DevOps pipeline discusses open source cloud computing platforms in general, and OpenStack in particular.

It is very easy these days to deploy any server and make your service public through the internet. All you have to do is opt for a paid hosting infrastructure such as Amazon Elastic Compute Cloud (EC2), Google Cloud Platform, Microsoft Azure, or any of the many others available. You can choose an internet-based system as per your requirements and that's it – your service is online. You don't have to bother about what system is driving your application or how it is being hosted; you just pay for the specifications you are using. This new model for providing computing services is called cloud computing.

There's been a surge in the use of technology in the past decade, and today's applications have numerous computing and storage requirements. Since this demand cannot be catered to by in-house infrastructure, many companies are now looking to vendors of cloud services to fulfil this requirement. Other factors pushing adoption include the reliability and robustness that the cloud provides, and the fact that applications using cloud services experience less downtime. Users don't have to care about where this infrastructure is deployed, because for them everything is available locally through the internet. With the availability of cloud services, organisations can now choose their hardware configurations based on their requirements, operating systems, middleware applications, and other platform-based tools. As the traffic on an application changes, the infrastructure can easily be scaled up or down, while eliminating any cost associated with internal deployment.

Cloud computing is usually viewed under three stacks of service models. These are: Infrastructure as a Service (IaaS), Platform as a Service (PaaS), and Software as a Service (SaaS). SaaS gives the capability to use an application over the cloud environment of a service provider. The application can be used via a web browser. PaaS allows the user to provision, facilitate, and run the applications over the cloud computing environment. In this environment, users need not worry about the infrastructure underneath. It's typically associated with developing and deploying the application and its configurations. IaaS provides its users with computation, networking, and storage resources in the cloud computing environment. The user has control over the operating system, storage, and deployed applications.

The open source community is also making numerous contributions to open source cloud computing projects. These projects are being developed to help deploy cloud computing solutions and the interfaces to manage the infrastructure underneath. An open source cloud promises no vendor lock-in and enables seamless integration of applications deployed over different platforms. The source code is widely available for these cloud projects and adopters can modify it as per their requirements. Today, there is a growing concern about the confidentiality of organisations' data, and in-house open source cloud computing platforms help secure the perimeters of that data. A few common open source cloud computing projects are OpenStack, CloudStack, and OpenNebula.

One of the more popular open source cloud computing projects is OpenStack. It is deployed worldwide as Infrastructure-as-a-Service both in public and private clouds. OpenStack serves virtual machines and other computing resources to its users while abstracting the physical hardware it is deployed on. It controls large pools of resources, which include computation, networking, and storage. These resources are managed via APIs, which can be accessed through the command-line interface or the graphical user interface. OpenStack is well regarded as the operating system for cloud setup. Its features are not limited to the basic services of the cloud platform. It also provides orchestration, fault management, and service management, and ensures high availability for the user applications deployed over it.

Scalability and openness have always been the selling points of OpenStack. However, it has also made a great name in the IT industry and among researchers because of its unique landscape. OpenStack is an amalgam of various components that come together to provide cloud computing services. Its architecture offers plug-and-play scenarios, where components can be included within OpenStack based on users' needs. It has a modular architecture (Figure 1) and provides various services such as computing, hardware life cycle, storage, networking, shared services, orchestration, workload provisioning, application life cycle, API proxies, and web front-ends. Basically, OpenStack is designed for administrators and researchers to deploy IaaS infrastructure while providing tools and services to manage virtual machines on top of existing resources.

Compute, networking, block storage, identity, image, object storage, and dashboard comprise the major components of OpenStack (Figure 2). All these components collaborate to produce an environment that is viable and reliable for IaaS. The dashboard provides the user interface (UI) to all the other components of the system. Similarly, identity provides authentication (auth) services to all the installed components in the OpenStack cluster. The networking component provides networking to OpenStack's compute. Compute provides volumes to the running instances via block storage and uses cloud images from object storage.

Compute: Nova is the project associated with the computation component of OpenStack. The main role of Nova is to manage the life cycle of virtual machines, which are initiated by OpenStack users. It is also responsible for managing CPU, memory usage, disk usage, and network interfaces on these virtual machines. Nova runs as a set of daemons on top of existing Linux servers to provide that service.

Networking: Neutron is a software-defined networking (SDN) project of OpenStack that is responsible for delivering networking as a service in virtual computing environments. Neutron's key responsibilities include providing IP addresses to virtual machines, along with subnets, topologies, and traffic routing. IP address allocation can be both static and dynamic. Users can also configure floating IPs to forward or reroute traffic. Neutron manages all networking facets for the virtual networking infrastructure (VNI) and the access layer aspects of the physical networking infrastructure (PNI) in the OpenStack environment.

Block storage: Cinder handles the block storage devices in OpenStack. It is responsible for providing APIs to users so they can manage and consume block storage on their virtual instances. It provides volumes to Nova virtual machines, Ironic bare metal hosts, containers, and more, while ensuring high availability, fault tolerance, recoverability, and open standards.

Identity: Keystone is responsible for providing API client authentication, service discovery, and distributed multi-tenant authorisation by implementing OpenStack's identity API. It provides role-based access control for OpenStack components.

Image: Image service is provided by project Glance in OpenStack. With this service, users can upload and discover data assets that are meant to be used with other services. This currently includes images and metadata definitions. Glance manages virtual machine disk images and provides image delivery to virtual machines, as well as snapshot (backup) services.

Object storage: Swift in OpenStack provides object-level storage via a RESTful API. Swift is a highly available, distributed, eventually consistent object/blob store. Organisations can use it to store lots of data efficiently, safely, and cheaply.

Dashboard: The Horizon project provides administrators with a graphical user interface to administer OpenStack and its various components. Horizon is the canonical implementation of OpenStack's dashboard, which provides a web-based user interface to OpenStack services including Nova, Swift, Keystone, etc.
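To make the division of labour between these components more concrete, the short CLI session below exercises Neutron, Cinder, Glance and Nova in turn. It assumes an environment like the DevStack setup described later, with its default CirrOS image and m1.tiny flavor; the resource names are arbitrary placeholders:

    openstack network create demo-net                        # Neutron: create a virtual network
    openstack subnet create --network demo-net \
        --subnet-range 10.0.0.0/24 demo-subnet                # Neutron: add a subnet to it
    openstack volume create --size 1 demo-vol                 # Cinder: create a 1 GB block volume
    openstack server create --flavor m1.tiny --image cirros \
        --network demo-net demo-vm                            # Nova: boot a VM from a Glance image
    openstack server add volume demo-vm demo-vol              # Attach the Cinder volume to the VM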

OpenStack is a combination of various systems and separately installed components. These services connect via APIs and provide users with useful resources like computing, networking, and storage. You can either install OpenStack via a script or individually install its various components.

DevStack: DevStack is an Ubuntu-based minimal installation of OpenStack. It follows a certain template and installs all the components and services. The installation given here is meant for getting hands-on OpenStack experience and for development environments.

The local installation was done on an Ubuntu 20.04.4 LTS virtual machine, with 4 CPUs, 12GB of memory, and 150GB of storage.

First, let's update and upgrade our target platform:
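On an Ubuntu host this typically amounts to:

    sudo apt update
    sudo apt -y upgrade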

We create a stack user who will be responsible for handling all the DevStack services on the created virtual machine. DevStack services should be run as a non-root user with sudo permissions. The following commands create a stack user with the appropriate permissions:
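A typical sequence, following the upstream DevStack documentation, looks like this:

    sudo useradd -s /bin/bash -d /opt/stack -m stack          # create the stack user
    echo "stack ALL=(ALL) NOPASSWD: ALL" | \
        sudo tee /etc/sudoers.d/stack                         # passwordless sudo for stack
    sudo -u stack -i                                          # switch to the stack user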

We'll clone the DevStack repository and change the directory accordingly:
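For example:

    git clone https://opendev.org/openstack/devstack
    cd devstack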

In the devstack directory, we'll have to make changes in the local.conf file and add passwords for the various services in DevStack:
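A minimal local.conf along the lines of the DevStack sample looks like this (the passwords below are placeholders and should be changed):

    [[local|localrc]]
    ADMIN_PASSWORD=secret
    DATABASE_PASSWORD=$ADMIN_PASSWORD
    RABBIT_PASSWORD=$ADMIN_PASSWORD
    SERVICE_PASSWORD=$ADMIN_PASSWORD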

And that's it; you have successfully configured DevStack. It's time to install the services, so run the following command:
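On a stock DevStack checkout, that command is:

    ./stack.sh    # the full run usually takes roughly 15-30 minutes, depending on the machine and network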

Figure 3 showcases the successful installation of DevStack and its various default services. The IP address shown can be used to visit the DevStack dashboard. The figure also shows the passwords and default users created.

Figure 4 shows the login page of DevStack – the user name and passwords can be obtained from the end of the installation script's output. The login page is produced by the Horizon service in OpenStack.

Figure 5 shows the default dashboard of OpenStack. The dashboard is produced via the Horizon component, where all the other services of OpenStack communicate their metadata, which is published on the dashboard. The Overview page shows the limit summaries for Compute, Volume, and Network. A default network and its related components are created by DevStack on installation.

Manual installation: Manual installation is a bit more complicated than the DevStack installation. Here, you have to individually handle the network and configurations on multiple nodes – Compute, Controller and Block. Many services such as Etcd, Memcached, MySQL, and RabbitMQ are installed and configured to work on all the nodes. After the installation of these basic services, all the OpenStack components are installed, such as Identity, Glance, Neutron, Nova, and more.

The installation procedure is quite complex and involves a plethora of steps. The complete installation instructions are kept in the GitHub repository, and the link for the same can be found at https://github.com/shubhamaggarwal890/nginx-vod/blob/master/OpenStack-Manual.md.

Figure 6 shows the login page of the Ubuntu-based OpenStack; the user name, password, and domain are set by the administrator during the installation process. The login page is produced by Horizon service in OpenStack.

Figure 7 shows the default dashboard of OpenStack produced by the Horizon component. Here, all the other services of OpenStack communicate their metadata, and it is published on the dashboard. Since in our installation of OpenStack we didn't install the Cinder block storage component, the limit summaries for Compute, Volumes, and Network are not fully visible.

Figure 8 shows all the installed services contributing to the OpenStack system. Every service exposes endpoint APIs, which can be invoked by the admin, internally, and publicly.

Figure 9 shows all the hypervisors attached to the OpenStack cluster. In the cluster produced here, we went with one Compute node of type Quick Emulator (QEMU). The Hypervisor summary also shows the VCPUs used, along with other details such as RAM and storage size.

Figure 10 shows all the compute services running on their respective host nodes. The columns further detail the status and the current state of these services. The zone is the logical partition of the services.
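For readers following along without the screenshots, roughly the same information shown in Figures 8, 9 and 10 can be pulled from the OpenStack CLI once admin credentials have been sourced:

    openstack service list            # installed services and their types (Figure 8)
    openstack endpoint list           # admin / internal / public API endpoints
    openstack hypervisor list         # hypervisors attached to the cluster (Figure 9)
    openstack compute service list    # compute services, hosts, zones and states (Figure 10)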

One can argue that DevStack is easier to install than each OpenStack service manually, as it handles all the components and their communication through an extensible script, bringing up the OpenStack environment in no time. But this type of installation has its own limitations. The DevStack environment cannot be tailored to the administrator's requirements. Moreover, DevStack is only for developer environments; such a cluster cannot and should not be deployed on production systems. To enable distributed systems and their communication, one must go for a manual installation of OpenStack.

We saw that OpenStack abstracts most of the network functions, and we can deploy various networking functionalities through its dashboard or via API calls. Traditionally, the setup of such an infrastructure would require the use of plenty of proprietary hardware. But today that's not the case, because with software-defined networking, all these networking functionalities have been virtualised as software.

See original here:
Integrating Network Function Virtualization with the DevOps Pipeline ... - Open Source For You

Global Cloud Computing in Banking Market Intelligence Report … – Business Wire


Read the rest here:
Global Cloud Computing in Banking Market Intelligence Report ... - Business Wire

DaaS In Cloud Computing: Benefits And Risks – Dataconomy

DaaS in cloud computing has revolutionized the way organizations approach desktop management and user experience, ushering in a new era of flexibility, scalability, and efficiency. DaaS transcends the limitations of traditional desktop infrastructures, offering a seamless and immersive virtual desktop experience accessible from anywhere, at any time, on any device. Whether it's a bustling metropolis or a remote corner of the world, DaaS empowers users to unlock their full productivity potential, untethered by the constraints of physical workstations. As the demand for remote work, mobility, and collaboration intensifies, DaaS in cloud computing emerges as a transformative force, revolutionizing the way businesses operate and paving the path to a future where the desktop is no longer confined to a desk, but becomes an ethereal gateway to boundless possibilities.

Desktop as a Service is a cloud computing model that delivers virtual desktop environments to end-users over the internet. It provides a complete desktop experience, including the operating system, applications, and data, all hosted and managed in the cloud. With DaaS, users can access their virtual desktops from any device with an internet connection, allowing for increased mobility and flexibility.

In the DaaS model, the desktop infrastructure is hosted and maintained by a cloud service provider. This eliminates the need for organizations to manage and maintain their own physical desktop hardware and infrastructure. Instead, businesses can subscribe to a DaaS service and pay for the resources they need on a usage basis.

The primary purpose of Desktop as a Service is to provide a cloud-based solution for delivering virtual desktop environments to end-users. DaaS aims to decouple the desktop infrastructure from physical hardware by hosting and managing virtual desktops in the cloud. This architectural shift allows businesses to achieve greater flexibility, scalability, and cost efficiency in their desktop management approach.

From a technical standpoint, DaaS serves to abstract the complexities of desktop provisioning, maintenance, and management by centralizing these tasks in the cloud. It leverages virtualization technologies and remote display protocols to deliver a rich desktop experience to end-users, regardless of their location or the device they are using. By encapsulating the entire desktop stack, including the operating system, applications, and data, within a virtual instance, DaaS enables seamless access and collaboration, improved disaster recovery capabilities, and enhanced security controls.

There are primarily two different types of Desktop as a Service models: multi-tenancy DaaS and single-tenancy DaaS.

Both multi-tenancy and single-tenancy DaaS models offer benefits and considerations depending on the specific needs of an organization. Organizations should evaluate their requirements, budget, security concerns, and customization needs to determine which type of DaaS model aligns best with their business objectives.

Yes, Desktop as a Service is a specific type of Software as a Service (SaaS). While SaaS is a broad category encompassing various cloud-based software applications delivered over the internet, DaaS specifically refers to the delivery of virtual desktop environments as a service.

SaaS refers to the model where software applications are hosted and provided by a service provider to end-users over the internet. Users access these applications through web browsers or specialized client software, eliminating the need for local installation and maintenance.


The main difference between Software as a Service (SaaS) and Desktop as a Service (DaaS) lies in the nature of the services they provide:

DaaS, on the other hand, is specifically designed to deliver complete virtual desktop environments. It includes the operating system, applications, and user data, all hosted and managed in the cloud. DaaS enables users to access their desktops remotely from any device with an internet connection, providing a full desktop experience.

DaaS, in contrast, provides an entire desktop experience. It includes the operating system and allows users to access a virtual desktop environment that mimics a traditional local desktop. Users can run multiple applications, customize their desktop settings, and perform tasks similar to what they would do on a physical desktop.

DaaS, on the other hand, requires a more complex infrastructure to host and manage complete desktop environments. It includes virtualization technologies, remote display protocols, and storage systems to deliver the desktop experience to end-users. DaaS infrastructure needs to handle not only application delivery but also the complexities of operating systems, user profiles, data storage, and access controls.

One example of Desktop as a Service is Amazon WorkSpaces, provided by Amazon Web Services (AWS). Amazon WorkSpaces is a fully managed DaaS solution that allows users to access their virtual desktops securely from anywhere using various devices.

With Amazon WorkSpaces, organizations can provision and manage virtual desktops in the cloud, eliminating the need for on-premises infrastructure and maintenance. Users can access their virtual desktops through a web browser or the Amazon WorkSpaces client application, enabling a consistent desktop experience across different devices.

Amazon WorkSpaces offers a range of features, including customizable hardware configurations, persistent user profiles, and integration with other AWS services for seamless data storage and management. It provides security controls such as encryption, multi-factor authentication, and network isolation to protect sensitive data and ensure compliance.
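As a hedged sketch of how provisioning looks in practice, the AWS CLI exposes WorkSpaces operations along these lines; the directory ID, user name and bundle ID below are placeholders that would come from your own AWS environment:

    # List the Amazon-provided desktop bundles (hardware/software combinations)
    aws workspaces describe-workspace-bundles --owner AMAZON
    # Provision a WorkSpace for a directory user (all identifiers are placeholders)
    aws workspaces create-workspaces --workspaces \
        "DirectoryId=d-1234567890,UserName=jdoe,BundleId=wsb-abcd1234e"
    # Check provisioning status of the new WorkSpace
    aws workspaces describe-workspaces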

DaaS empowers organizations to unlock the value of their data without the need for extensive infrastructure investments or specialized expertise. In this section, we will explore the benefits that DaaS brings to the table.

DaaS in cloud computing offers exceptional scalability and flexibility. With DaaS, businesses can easily scale up or down their desktop infrastructure based on their needs, without worrying about the underlying hardware limitations. The cloud provides the necessary resources to accommodate increased workloads or expanding teams, ensuring seamless operations and user satisfaction. Whether an organization needs to add new users, upgrade software, or allocate additional storage, DaaS in cloud computing allows for quick and efficient adjustments.

This scalability and flexibility eliminate the need for manual hardware upgrades, reducing costs and administrative burden. By leveraging the cloud, businesses can easily adapt to changing requirements and focus on their core operations, while enjoying the benefits of a dynamic and responsive desktop infrastructure.

DaaS in cloud computing offers significant cost savings compared to traditional desktop infrastructures. Instead of investing heavily in on-premises hardware, businesses can opt for a subscription-based model where they pay only for the resources they need. This eliminates the upfront capital expenditure associated with purchasing and maintaining physical infrastructure. Moreover, DaaS reduces ongoing operational costs by eliminating the need for IT staff to manage hardware, perform updates, or troubleshoot issues.

With cloud-based desktops, businesses can also benefit from centralized management, enabling efficient resource allocation and reducing wastage. The pay-as-you-go model of DaaS allows organizations to align costs with actual usage, making it a cost-effective solution for businesses of all sizes.

Security is a critical concern for businesses, and DaaS in cloud computing addresses this issue comprehensively. Cloud service providers implement robust security measures to protect desktops and data. These include data encryption, access controls, regular backups, and disaster recovery options.

By leveraging the cloud, businesses can ensure that their desktop infrastructure is hosted in secure environments with round-the-clock monitoring and advanced threat detection systems. DaaS also reduces the risk of data loss or theft due to physical damage or theft of hardware devices. Centralized data storage and backup mechanisms in the cloud provide an added layer of protection against potential data breaches or system failures, providing peace of mind for businesses.

DaaS in cloud computing enables enhanced accessibility and collaboration among users. Desktops hosted in the cloud can be accessed from any device with an internet connection, allowing employees to work remotely or access their workspaces on the go. This flexibility promotes productivity, as users can easily access their personalized desktop environments from various locations and devices.

Additionally, DaaS facilitates seamless collaboration among geographically dispersed teams. Multiple users can access and work on the same virtual desktop simultaneously, enabling real-time collaboration and reducing the need for file transfers or version control issues. These capabilities empower businesses to embrace remote work policies and foster a more collaborative and agile work environment.

DaaS in cloud computing simplifies IT management and maintenance tasks. Rather than dealing with complex hardware and software configurations, businesses can offload the responsibility to cloud service providers. DaaS providers handle backend operations such as software updates, security patches, and system maintenance, ensuring that desktop environments are up to date and running smoothly.

This reduces the burden on internal IT teams, allowing them to focus on strategic initiatives and core business functions. Additionally, DaaS provides centralized management tools that enable administrators to easily provision, monitor, and manage desktops from a single interface. This simplifies tasks such as user onboarding, resource allocation, and troubleshooting, enhancing operational efficiency and reducing IT overhead.
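An administrator's single-pane view can be approximated with a short boto3 script that lists every desktop in a directory along with its state and last known connection. The directory ID is again a placeholder.

# Minimal sketch: inventory and connection status for all desktops in a directory.
import boto3

workspaces = boto3.client("workspaces", region_name="us-east-1")

paginator = workspaces.get_paginator("describe_workspaces")
ids = []
for page in paginator.paginate(DirectoryId="d-exampledirectory"):
    for ws in page["Workspaces"]:
        ids.append(ws["WorkspaceId"])
        print(ws["WorkspaceId"], ws["UserName"], ws["State"])

# Connection status is queried in batches of up to 25 WorkSpace IDs.
for i in range(0, len(ids), 25):
    status = workspaces.describe_workspaces_connection_status(WorkspaceIds=ids[i:i + 25])
    for s in status["WorkspacesConnectionStatus"]:
        print(s["WorkspaceId"], s["ConnectionState"],
              s.get("LastKnownUserConnectionTimestamp"))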

DaaS in cloud computing offers increased mobility and device independence for users. Since desktop environments are hosted in the cloud, employees can access their virtual desktops from a wide range of devices, including laptops, tablets, and smartphones. This mobility allows for greater flexibility in work practices, enabling employees to be productive from any location and on any device.

Moreover, device independence means that users are not tied to a specific device or operating system. They can seamlessly switch between devices without any loss of data or functionality, providing a consistent and personalized desktop experience. DaaS in cloud computing empowers organizations to embrace the growing trend of Bring Your Own Device (BYOD) policies, promoting employee satisfaction and work-life balance.

Deploying traditional desktop infrastructures can be a time-consuming process that involves procuring hardware, installing software, and configuring systems. DaaS in cloud computing eliminates these complexities and enables rapid deployment of desktop environments. With cloud-based desktops, businesses can provision new desktops and applications within minutes, significantly reducing the time-to-value.

This agility is especially beneficial in scenarios where businesses need to onboard new employees quickly or scale up operations to meet growing demands. By leveraging the cloud, organizations can accelerate their time-to-market, gain a competitive edge, and respond swiftly to business opportunities. DaaS in cloud computing streamlines the deployment process, allowing businesses to focus on their core activities and achieve faster results.
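A bulk onboarding script is one way to picture this speed. The boto3 sketch below provisions desktops for a small batch of assumed new-hire user names; the user names, directory ID, and bundle ID are placeholders.

# Minimal sketch: onboarding a batch of new hires in one API call.
import boto3

workspaces = boto3.client("workspaces", region_name="us-east-1")

new_hires = ["a.kumar", "b.chen", "c.ortiz"]   # assumed example user names

response = workspaces.create_workspaces(
    Workspaces=[
        {
            "DirectoryId": "d-exampledirectory",   # placeholder directory
            "UserName": user,
            "BundleId": "wsb-examplebundle",       # placeholder bundle
            "WorkspaceProperties": {
                "RunningMode": "AUTO_STOP",
                "RunningModeAutoStopTimeoutInMinutes": 60,
            },
        }
        for user in new_hires
    ]
)

print(f"{len(response['PendingRequests'])} desktops provisioning, "
      f"{len(response['FailedRequests'])} requests rejected")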

Ensuring business continuity and recovering from unexpected disruptions are crucial for organizations. DaaS in cloud computing offers robust disaster recovery capabilities that help businesses quickly resume their operations in the event of a disaster or system failure. Cloud service providers implement backup and replication mechanisms to safeguard desktop environments and data.

In case of a hardware failure or natural disaster, businesses can easily restore desktops and access their critical applications and data from alternate locations. This resilience provides peace of mind and minimizes downtime, ensuring that employees can continue working without significant disruptions. DaaS in cloud computing offers a reliable and cost-effective solution for disaster recovery, allowing businesses to protect their operations and maintain high levels of productivity.
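On Amazon WorkSpaces, for instance, recovery can be scripted against the snapshots the service keeps of the root and user volumes. The sketch below is illustrative only; the WorkSpace ID is a placeholder.

# Minimal sketch: recovering a broken desktop from provider-side snapshots.
import boto3

workspaces = boto3.client("workspaces", region_name="us-east-1")

# Roll a single desktop back to its last healthy snapshot.
workspaces.restore_workspace(WorkspaceId="ws-exampleid")

# Or rebuild it from a fresh system image plus the most recent
# user-volume snapshot.
workspaces.rebuild_workspaces(
    RebuildWorkspaceRequests=[{"WorkspaceId": "ws-exampleid"}]
)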

Managing software licenses can be a complex and time-consuming task for businesses. DaaS in cloud computing simplifies software management by providing centralized control and licensing options. With cloud-based desktops, businesses can easily provision and manage software applications for their users from a single platform.

This centralized approach streamlines license allocation, updates, and compliance monitoring. It eliminates the need for individual installations and license management on each desktop, saving time and reducing administrative overhead. Additionally, cloud service providers often offer flexible licensing models, allowing businesses to scale up or down their software usage based on their needs. This flexibility ensures cost optimization and helps organizations stay compliant with software licensing agreements.

DaaS in cloud computing offers improved performance and user experience compared to traditional desktop infrastructures. By leveraging the cloud's robust infrastructure, businesses can provide users with high-performance virtual desktops that are responsive and capable of handling resource-intensive applications.

Cloud service providers optimize their environments to deliver low-latency, high-bandwidth connections, ensuring smooth and efficient desktop interactions. Users can access their desktops quickly, launch applications seamlessly, and experience minimal lag or downtime. Moreover, DaaS allows for personalized desktop configurations, enabling users to customize their environments according to their preferences and work requirements. This level of performance and customization enhances user satisfaction, productivity, and overall work efficiency.
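Providers also expose the underlying performance metrics. The boto3 sketch below assumes the AWS/WorkSpaces CloudWatch namespace and its InSessionLatency metric as the monitoring hook, and pulls the last hour of client round-trip latency for one desktop; the WorkSpace ID is a placeholder.

# Minimal sketch: checking in-session latency for one desktop over the last hour.
# Assumes the AWS/WorkSpaces CloudWatch namespace and InSessionLatency metric.
from datetime import datetime, timedelta, timezone
import boto3

cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

now = datetime.now(timezone.utc)
stats = cloudwatch.get_metric_statistics(
    Namespace="AWS/WorkSpaces",
    MetricName="InSessionLatency",
    Dimensions=[{"Name": "WorkspaceId", "Value": "ws-exampleid"}],  # placeholder
    StartTime=now - timedelta(hours=1),
    EndTime=now,
    Period=300,                 # five-minute buckets
    Statistics=["Average"],
)

for point in sorted(stats["Datapoints"], key=lambda p: p["Timestamp"]):
    print(point["Timestamp"], round(point["Average"], 1), "ms")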


DaaS in cloud computing removes the constraints of traditional, physical workstations. It combines flexibility, efficiency, and a consistent user experience with the mobility, scalability, and security described above.

Freed from dedicated hardware, organizations gain simpler management, improved productivity, and easier collaboration: teams spread across continents and time zones can work in the same environments without geographic boundaries getting in the way.

See original here:
DaaS In Cloud Computing: Benefits And Risks - Dataconomy

BASF strengthens R&D with more powerful supercomputer – BASF

BASF has started up a new supercomputer at its Ludwigshafen site to replace the existing one. With 3 petaflops of computing power, the new supercomputer is considerably more powerful than its 1.75 petaflop predecessor.

"Digital technologies are among the most important instruments to further expand our research and development capabilities," said Dr. Melanie Maas-Brunner, member of the Board of Executive Directors and Chief Technology Officer of BASF. As one example, she noted that above-average computing power is required these days to work out the most promising polymer structures from thousands of possibilities. "Over the past five years, we have worked very successfully worldwide with our supercomputer Quriosity. It enabled us to considerably shorten the development time for innovative molecules and chemical compounds and thus accelerate the market launch of new products," Maas-Brunner said. "But the computing capacity was no longer sufficient. Moreover, the complexity of our research projects and thus the demands on the supercomputer have increased. We therefore decided to invest in a new high-performance computer."

The new supercomputer was manufactured by Hewlett Packard Enterprise (HPE) and works with AMD processors (CPUs). It has an innovative cooling concept based on warm-water cooling. The system absorbs the heat directly where it is generated in the supercomputer and transports it away, which significantly reduces the energy required and therefore the operating costs. The new BASF supercomputer, named Quriosity like its predecessor, is the world's largest supercomputer used in industrial chemical research. The previous supercomputer will be refurbished by HPE, with a recovery rate of more than 95 percent.

BASF also relies on additional cloud computing power when needed

In addition to its own on-site supercomputer, BASF also plans to use cloud computing power. "This hybrid solution offers us the best possible technical and operational flexibility," said Maas-Brunner. "It allows us to handle requests requiring exceptionally large processing power as well as work on special tasks that our own supercomputer is not designed for."

Supercomputer enables fundamentally new research approaches

As a digital tool, the supercomputer is an enormous timesaver. Calculations that would have taken around a year in the past can be carried out by a supercomputer in just a few days. This has not only reduced product development times: "We were able to identify and utilize previously hidden connections to drive completely new research approaches," said Maas-Brunner. "Modeling, virtual experiments and simulations are becoming increasingly complex and require more computing power. With the new supercomputer, which is approximately twice as fast, we can now provide our researchers with the necessary computing power."

Entire company using Quriosity since 2017

The Quriosity supercomputer has been deployed at BASF since 2017. Since then, it has carried out an average of 20,000 tasks per day and is used by more than 400 employees worldwide. In the personal care business area, for example, the supercomputer's complex simulations help researchers better understand the composition of personal care products and more precisely predict which cosmetic ingredients harmonize optimally to achieve the desired effect. Simulations also help to plan and optimize reaction processes: the distribution of substances and the temperature in a reactor can be simulated, and this information used to continuously improve production. At an early development stage for crop protection products, the supercomputer uses molecular modeling to quickly identify suitable compounds that will be effective and environmentally sound. The supercomputer is also used in projects outside of research and development; it helps, for example, to optimize the fluid dynamics of plant components in production operations.

Original post:
BASF strengthens R&D with more powerful supercomputer - BASF