Category Archives: Cloud Servers
The future potential of the cloud – IT-Online
Despite an increased awareness by decision-makers of cloud computing, many organisations around the world are yet to fully adopt it.
By Frik van der Westhuizen, CEO of EQPlus
Research shows that 41% of companies in the EU used cloud computing last year, predominantly for email and file storage. Meanwhile, in EMEA, revenue from cloud-related services is expected to approach $400 billion by 2025, up from $330 billion in 2020. With so much potential in the cloud, decision-makers will have no choice but to embrace it.
Improving cost management and making businesses more agile for a digitally driven market have been the most common benefits of going the cloud route for some time.
While these are still relevant today, the cloud also provides a platform for the development of advanced technologies that can foster innovation within an organisation. For example, the cloud supports automation that, when combined with no-code applications, makes it possible for non-technical people to create a range of digital services that cater for an increasingly sophisticated customer base.
Evolving environment
But while the likes of artificial intelligence (AI) and automation can help businesses across industry sectors drive innovation, they also provide malicious users with the means to perpetrate more sophisticated cyberattacks. This results in a perpetual chicken-and-egg situation when it comes to organisational defences: a company must use the cloud to defend itself while hackers exploit those same solutions for attack.
By 2025, it is anticipated that 80% of companies globally will be using the cloud. Of those, 84% will leverage a multi-cloud approach to benefit from specific service provider advantages. As part of this, the cloud provides the means to more easily create tailored solutions that address virtually any business need.
This modular approach can be seen in the increasing adoption of Kubernetes and containers to provide more agile and affordable ways for businesses to deliver micro-services.
Data at scale
Linking the cloud, big data, and the Internet of Things (IoT) will provide business and technology leaders with even more opportunities to grow. The increasing number of IoT devices generates more data, which the cloud and edge computing services can analyse at scale. Doing so faster than was previously possible empowers organisations to rapidly adapt to market demands.
The exponential growth of data has resulted in companies struggling to make sense of it. This presents massive untapped potential at a time where even the smallest iterative change can make a significant difference when it comes to competitive advantage. The cloud provides the means to not only store this data, but also analyse it at speed. When combined with AI and automation, decision-makers can access improved insights that can create much-needed differentiation.
Sustainability
One thing which South African companies will look to exploit when it comes to the cloud is its ability to help position sustainable business practices. The uncertainties around the stability of the national electricity grid mean businesses can turn to the cloud as a safe haven for their mission-critical systems.
There is therefore no need to run powerful (and energy expensive) on-premises servers. Instead, the cloud provides the means to optimise physical technology resources. The cloud does not compromise on flexibility, the ability to scale as needed, and the availability of resources on-demand. Companies can remain focused on realising their strategic mandate and worry less about energy efficient on-premises resources.
The cloud is here to stay. But that does not mean it will stagnate. If anything, it will continue to drive innovation and the adoption of more sophisticated solutions to harness new opportunities in a rapidly evolving market.
Read the original here:
The future potential of the cloud - IT-Online
Bringing AWS-Style DPU Offload To The VMware Base – The Next Platform
Databases and datastores are by far the stickiest things in the datacenter. Companies make purchasing decisions that end up lasting for one, two, and sometimes many more decades because it is hard to move off a database or datastore once it is loaded up and feeding dozens to hundreds of applications.
The second stickiest thing in the datacenter is probably the server virtualization hypervisor, although this stickiness is more subtle in its inertia.
The choice of hypervisor depends on the underlying server architecture, of course, but inevitably the management tools that wrap around the hypervisor and its virtual machines end up automating the deployment of systems software (like databases and datastores) and the application software that rides on top of them. Once an enterprise has built all of this automation with VMs running across clusters of systems, it is absolutely loath to change it.
But server virtualization has changed with the advent of the data processing unit, or DPU, and VMware has to change with the times, which is what the Project Monterey effort with Nvidia and Intel is all about.
The DPU offload model enhances the security of platforms, particularly network and storage access, while at the same time lowering the overall cost of systems by dumping the network, storage, and security functions that would have been done on the server onto the DPU, thus freeing up CPU cores on the server that would have been burdened by such work. Like this:
Offload is certainly not a new concept to HPC centers, but the particular kind of offload the DPU is doing is inspired by the Nitro family of SmartNICs created by Amazon Web Services, which have evolved into full-blown DPUs with lots of compute of their own. The Nitro cards are central to the AWS cloud, and in many ways, they define the instances that AWS sells.
We believe, as do many, that in the fullness of time all servers will eventually have a DPU to better isolate applications from the control plane of the cluster that provides access to storage, networking, and other functions. DPUs will be absolutely necessary in any multi-tenant environment, but technical and economic benefits will accrue to those using DPUs on even single-node systems.
With the launch of the ESXi 8 hypervisor and its related vSphere 8 management tools, Nvidia and VMware have worked to get much of the VMware virtualization stack to run on the Arm-based server cores of the BlueField-2 DPU line, virtualizing the cores running on the X86 systems that the DPU is plugged into. Conceptually, this next generation of VMware's Cloud Foundation stack looks like this:
With the Nitro DPUs and a homegrown KVM hypervisor (which replaced a custom Xen hypervisor that AWS used for many years), AWS was able to reduce the amount of server virtualization code running on the X86 cores in its server fleet down to nearly zero. Which is the ultimate goal of Project Monterey as well. But as with the early Nitro efforts at AWS, shifting the hypervisor from the CPUs to the DPU took time and many steps, and Kevin Deierling, vice president of marketing for Ethernet switches and DPUs at Nvidia, admits to The Next Platform that this evolution will take time for Nvidia and VMware as well.
"I think it is following that similar pattern, where initially you will see some code running on the X86 and then a significant part being offloaded to the BlueField DPUs," Deierling explains. "Over time, I think you will see more and more of that being offloaded, accelerated, and isolated to the point where, effectively, it's a true bare metal server model where nothing is running on the X86. But today, there's still some software running out there."
The BlueField-2 DPU includes eight 64-bit Armv8 Cortex-A72 cores for local compute as well as two acceleration engines, a PCI-Express 4.0 switch, a DDR4 memory interface, and a 200 Gb/sec ConnectX-6dx network interface controller. That NIC interface can speak 200 Gb/sec Ethernet or 200 Gb/sec InfiniBand, as all Nvidia and prior Mellanox NICs for the past many generations can. That PCI-Express switch is there to provide endpoint and root complex functionality, and we are honestly still trying to sort out what that means.
It is not clear how many cores the vSphere 8 stack is taking to run on a typical X86 server without a DPU or how many cores are cleared up by running parts of the vSphere 8 stack on the BlueField-2 DPU. But Deierling did illustrate the principle by showing the effect of offloading virtualized instances of the NGINX Web and application server from the X86 CPUs to the BlueField-2.
In this case, NGINX was running on a two-socket server with a total of 36 cores, and eight of the cores were running NGINX and their work could be offloaded to the Arm cores on the BlueField-2 DPU and various security and networking functions related to the Web server also accelerated. The performance of NGINX improved, the latency of Web transactions dropped. Here is how Nvidia calculates the return on investment:
Deierling says that using the DPU offered a near immediate payback and made the choice of adding a DPU to systems a "no brainer."
We don't know which editions of the vSphere 8 stack (Essentials, Standard, Enterprise Plus, and Essentials Plus) are certified to offload functions to the BlueField-2 DPU, and we don't know what a BlueField-2 DPU costs either. So it is very hard for us to reckon what ROI running virtualization on the DPU might bring, specifically.
But even if the economics of the DPU were neutral, meaning the cost of the X86 cores freed up was the same as the cost of the BlueField-2 DPU, it still makes sense to break the application plane from the control plane in a server to enhance security and to accelerate storage and networking. And while the benefits of enhanced security and storage and networking acceleration will be hard to quantify, they might even be sufficient for IT organizations to pay a premium for a DPU instead of just using a dumb NIC or a SmartNIC.
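As a rough illustration of that break-even reasoning, here is a minimal sketch that plugs the NGINX example above (8 of 36 host cores freed) into a simple payback calculation. The per-core and DPU prices are purely hypothetical placeholders, not figures from Nvidia, VMware, or Dell.

```python
# Back-of-the-envelope DPU payback sketch. All prices are hypothetical
# placeholders; only the core counts come from the NGINX example above.

CORES_PER_SERVER = 36          # two-socket server in the NGINX demo
CORES_FREED_BY_DPU = 8         # cores previously burned on offloadable work

# Hypothetical economics (assumptions, not vendor pricing):
COST_PER_X86_CORE = 300.0      # amortized cost of one server core, USD
DPU_PRICE = 2500.0             # assumed street price of one DPU, USD

value_of_freed_cores = CORES_FREED_BY_DPU * COST_PER_X86_CORE
net_cost = DPU_PRICE - value_of_freed_cores
share_reclaimed = CORES_FREED_BY_DPU / CORES_PER_SERVER

print(f"Compute reclaimed per server: {share_reclaimed:.0%}")
print(f"Value of freed cores:   ${value_of_freed_cores:,.0f}")
print(f"Net cost of adding DPU: ${net_cost:,.0f}")
# A negative net cost would mean the freed cores alone pay for the DPU,
# before counting the security and acceleration benefits discussed above.
```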
Here is one case in point that Deierling brought up just as an example. For many years, hyperscalers and cloud builders did not have security across the east-west traffic between the tens to hundreds of thousands of servers interlinked in their regions, which constitute their services. The DPU was invented in part to address this issue, encrypting data in motion across the network as application microservices chatter. A lot of hyperscalers and cloud builders as well as other service providers, enterprise datacenters, and HPC centers similarly are not protecting data in transit between compute nodes. It has just been too expensive and definitely was not off the shelf.
With Project Monterey, Nvidia and VMware are suggesting that organizations run VMwares NSX distributed firewall and NSX IDS/IPS software on the BlueField-2 DPU on every server in the fleet. (The latter is an intrusion detection system and intrusion prevention system.) The idea here is that no one on any network can be trusted, outside the main corporate firewall and inside of it, and the best way to secure servers and isolate them when there are issues is to wrap the firewall around each node instead of just around each datacenter.
The NSX software can make use of the Accelerated Switching and Packet Processing (ASAP2) deep packet inspection technology that is embedded in the Nvidia silicon, which is used to offload packet filtering, packet steering, cryptography, stateful connection tracking, and inspection of Layer 4 through Layer 7 network services to the BlueField-2 hardware.
The first of the server makers out the door with the combined VMware stack and Nvidia BlueField-2 is Dell, which has certified configurations of its PowerEdge R650 and R750 rack servers and its VxRail hyperconverged infrastructure with the Nvidia DPUs and vSphere 8 preinstalled to offload as much work as possible to those DPUs. These systems will be available in November. Pricing is obviously not available now; hopefully it will be when they start shipping so we can figure out the real ROI of DPU offload for server virtualization. The numbers matter here. In a way, the ROI will pay for enhanced security for those who have to justify the added complexity and cost. Those who want the enhanced security at nearly any cost won't care as much about the DPU ROI. The trick for VMware and Nvidia is to price this low enough that it is indeed a no-brainer.
See the original post here:
Bringing AWS-Style DPU Offload To The VMware Base - The Next Platform
Underwater data centres are coming. Can they slash CO2 emissions and make the Internet faster? – Euronews
The first US commercial subsea data centre is set to touch the seabed of the Pacific Ocean by the end of this year.
The Jules Verne Pod is scheduled for installation near Port Angeles, on the northwestern coastline of the United States, and could revolutionise how servers are run.
The pod, which is similar in size to a 6-metre shipping container, will accommodate 800 servers and be just over nine metres underwater. The innovation is intended to reduce carbon emissions by 40 per cent.
"We are environmentally conscious and we take full advantage of sustainability opportunities, including energy generation, construction methods and materials," Maxie Reynolds, founder of Subsea Cloud, the company making the pods, told Euronews Next.
In February, Subsea Cloud said their first 10 pods would aim to offset more than 7,683 tons of CO2 in comparison with an equivalent land-based centre by reducing the need for electrical cooling.
At the time, Subsea said their data centres would be aimed at healthcare, finance, and the military in the US.
Data centres are used to centralise shared information technology (IT) operations and are essential for the running of our daily lives; the Cloud, Google, and Meta all have data centres that they use to run their products.
Data centres are currently built on land, sometimes in rural areas far from big populated areas.
The Jules Verne Pod comes after a previous government project by Subsea and will be followed by the Njord01 pod in the Gulf of Mexico and the Manannan pod in the North Sea.
The Gulf of Mexico pod depth will likely be around 250m, while the North Sea depth will likely be around 200m.
A seabed data centre is also planned by Chinese company Highlander off the coastal city of Sanya, on Hainan Island.
Subsea says the seabed data centres cost 90 per cent less to make compared with land-based operations.
"The savings are the result of a smaller bill of materials, and fewer complexities in terms of deployment and maintenance, too," said Reynolds.
"It's complex and costly to put in the infrastructure in metropolitan areas and actually in rural areas too. There are land rights and permits to consider, and labour is slower and can be more expensive."
For example, Reynolds said installing and burying a subsea cable takes about 18 minutes and costs about 1,700 for each mile (1.6 km) of cable. On land, it would take about 14 days and cost about 165,000 per mile.
The feasibility of underwater data storage was proved by Microsoft in 2020.
In 2018, the software giant launched Project Natick, dropping a data centre containing 855 servers to the seabed off Orkney, an archipelago on the northeastern coast of Scotland.
Two years later, the centre was reeled up to reveal that only eight servers were down, when the average on land in the same time frame would have been 64.
As well as reducing costs and their environmental footprint, underwater data centres could also provide a faster internet connection.
Subsea claims that latency - or data lag - can be reduced by up to 98 per cent with its underwater pods.
"Latency is a byproduct of distance, so the further these data centres get away from metropolitan areas, the more of it is introduced," said Reynolds.
Around 40 per cent of the world's population lives within 100 km of a coast, and in major urban coastal centres like Los Angeles, Shanghai, and Istanbul, the installation of Subsea's data centres could vastly improve how people use their devices.
Signals travel at 200 km per millisecond and the average data centre is 400 km away from an internet user, meaning a round trip takes 40 milliseconds. This could be reduced by up to 20 times, to 2 milliseconds, by Subsea's pods due to the reduced distance.
"My competitors are the inefficient, wasteful, timeworn data centres that create long-lasting business, environmental and societal problems from the day they are built," said Reynolds.
"Players in the subsea data centre space are, for now at least, long-distance allies rather than competitors. We need one another if we are to reshape and redesign the industry for the better."
Read the original post:
Underwater data centres are coming. Can they slash CO2 emissions and make the Internet faster? - Euronews
What is Cloud PC? Will JioCloud PC be a game changer? – The Mobile Indian
Most of us probably use a desktop or laptop computer for our daily work. But what is a cloud PC? Cloud PCs are a type of computer that stores all of its files and programs on the internet instead of on its own hard drive. This means that you can access your cloud PC from any internet-connected device, making it a convenient way to work from anywhere. In this article, we'll introduce you to the basics of cloud PCs and JioCloud PC and how they can benefit you.
A cloud PC is simply a computer that runs on a remote server, accessed via the internet, something like a Chromebook but more powerful. That means all your files and applications are stored off-site, and you can access them from any internet-connected device with a monitor, keyboard, mouse, and any other peripheral you need for input or output connected via a USB hub.
There are plenty of advantages to using a cloud PC. For starters, you can say goodbye to expensive hardware upgrades. In addition, since everything is stored in the cloud, you can access the latest versions of your favourite software without installing anything locally.
Plus, cloud PCs are much more secure than traditional computers. For example, with on-site data storage, your business is at risk if your computer is stolen or damaged. But cloud storage keeps your data safe and sound in a remote location.
Finally, cloud PCs are incredibly convenient. Whether at home or on the go, you can always pick up right where you left off. And since everything is stored online, you can easily share files and collaborate with others without having to email attachments back and forth.
If you're looking for a new way of computing, a cloud PC might be suitable for you.
The Jio Cloud PC will be a cloud PC solution from Jio. It will be a small box similar in structure to a set-top box (STB), which we use to stream entertainment content on a TV. It will have a few ports that you can use to attach devices such as a monitor, keyboard, mouse, and so on, giving you a virtual desktop or laptop at a cost lower than a traditional PC.
There are many benefits to using a cloud PC, including the following:
As more and more businesses move to the cloud, it's essential to understand the benefits of a cloud PC. A cloud PC can help companies save money, improve efficiency, and scale quickly.
The most significant benefit of cloud PC is that it can help businesses save money. By moving to the cloud, companies can avoid the high costs of buying and maintaining on-premise hardware and software. In addition, they can take advantage of pay-as-you-go pricing models that can save them even more money.
Another benefit of cloud PC is that it can help businesses improve efficiency. With on-premise systems, companies often have to deal with complex IT infrastructure that can be difficult to manage. By moving to the cloud, businesses can simplify their IT infrastructure and make it easier to manage. In addition, they can take advantage of features like automatic updates and self-service provisioning that can further improve efficiency.
Finally, a cloud PC can help businesses scale quickly. With on-premise systems, companies often must invest in additional hardware and software as they grow. With the cloud, they can quickly add or remove capacity as needed without making a large upfront investment. This makes it easy for businesses to respond quickly.
There are a few drawbacks to cloud PCs. One is that they can be less reliable than traditional PCs, since you're relying on an internet connection to access your files and applications. This can be a problem if you have a spotty or unreliable internet connection.
Another drawback is that cloud PCs can be more expensive than traditional PCs in some cases; for example, you may have to pay for additional storage if you use a lot of data.
Finally, cloud PCs can be less private and secure than traditional PCs if you are on a public cloud. This is because your data is stored on someone else's servers, which means that they could potentially access your data. You may also have to share resources with other users of the same cloud service.
There are many cloud PC providers out there; here are a few examples.
Amazon Web Services (AWS) is a leading cloud computing platform. It offers a wide range of services, including computing, storage, database, and networking. AWS is used by some of the world's largest companies, including Netflix, Airbnb, and Samsung.
Google Cloud Platform (GCP) is a cloud computing platform that offers various services, including computing, storage, database, and networking. GCP is used by some of the world's largest companies, including Spotify, Coca-Cola, and Ubisoft.
Microsoft Azure is a cloud computing platform that offers various services, including computing, storage, database, and networking. Azure also has a good following and is used by companies such as Walmart, Honda, and GE.
Cloud PC is a type of computing where data processing and storage occur on remote servers accessed over the internet rather than on a local computer or server. This allows users to access their files and applications from anywhere with an internet connection.
Cloud PC can be used for both personal and business purposes and offers many benefits over traditional computing models.
Whether JioCloud PC makes a mark on consumers will depend on the pricing and the bundling offer. It also remains to be seen when Jio will launch it in the market because, as of now, it doesn't have a release date.
Read more:
What is Cloud PC? Will JioCloud PC be a game changer The Mobile Indian - The Mobile Indian
Cloud Performance Management Market Worth $3.9 Billion By 2027 Exclusive Report by MarketsandMarkets – Benzinga
Chicago, Aug. 30, 2022 (GLOBE NEWSWIRE) -- The Cloud Performance Management Market is expected to grow from USD 1.5 billion in 2022 to USD 3.9 billion by 2027, at a Compound Annual Growth Rate (CAGR) of 17.6% during the forecast period, according to a new report by MarketsandMarkets. The major factors driving the growth of the Cloud Performance Management market include the increasing demand for AI, big data, and cloud solutions.
Browse in-depth TOC on "Cloud Performance Management Market": 233 Tables, 47 Figures, 225 Pages
Download PDF Brochure: https://www.marketsandmarkets.com/pdfdownloadNew.asp?id=239116385
Large Enterprises segment to hold the highest market size during the forecast period
Organizations with more than 1,000 employees are categorized as large enterprises. The traction of cloud performance management in large enterprises is said to be higher than in SMEs, as large enterprises are adopting cloud performance management solutions to improve business operational efficiency across regions.
The increasing deployment of SaaS offerings such as customer relationship management, human capital management, enterprise resource management, and other financial applications creates an advantageous environment for cloud monitoring adoption, particularly in large organisations that want to improve their overall cloud systems, improve cloud monitoring, and sustain themselves amid intense competition. Large enterprises introspect and retrospect on implementing best practices to ensure effective performance management. Cloud-Monitoring-as-a-Service (CMaaS) is a popular software solution for large businesses seeking a fully managed cloud monitoring service for cloud and virtualized environments. These solutions are provided by third-party providers and are monitored 24 hours a day by IT experts with access to the most recent APM technologies and services.
Banking, Financial Services, and Insurance to record the fastest growth during the forecast period
The BFSI vertical is crucial as it deals with financial data. Economic changes significantly affect this vertical. Regulatory compliances and the demand for new services have created an environment where financial institutions are finding cloud computing more important than ever to stay competitive. A recent worldwide survey on public cloud computing adoption in BFSI states that 80% of the financial institutions are considering hybrid & multi-cloud strategies to avoid vendor lock-in. It provides these critical financial institutions the much-needed flexibility to switch to alternate public cloud operators in case of an outage to avoid any interruptions in the services.
New competitors, new technologies, and new consumer expectations are impacting the BFSI sector. Digital transformation provides organizations access to new customer bases and offers enhanced visibility into consumer behaviour through advanced analytics, which helps organizations in creating targeted products for their customers. Most banks are adopting cloud performance management solutions owing to their benefits, such as configuration management and infrastructure automation to increase stability, security, and efficiency. The BFSI business is expected to hold a significant share of the cloud performance management market due to different advantages offered by cloud-based technologies, such as improved performance, reduced total cost of ownership, improved visibility, and standard industry practices. Cloud performance management is adopted for mission-critical industry verticals, such as BFSI, extensively to improve revenue generation, increase customer insights, contain costs, deliver market-relevant products quickly and efficiently, and help monetize enterprise data assets.
Request Sample Pages: https://www.marketsandmarkets.com/requestsampleNew.asp?id=239116385
Asia Pacific is projected to register the highest CAGR during the forecast period
The Asia Pacific region comprises emerging economies, such as China, Japan, Australia and New Zealand, and the rest of Asia Pacific. The demand for managed cloud and professional services is growing, particularly in countries with a mature cloud landscape, such as Japan. This is due to the increasing migration of complex Big Data and workloads such as enterprise resource planning (ERP) to cloud platforms. The expansion of open source technologies, as well as advancements in API-accessible single-tenant cloud servers, also helps to promote acceptance of managed private cloud providers. Furthermore, with the rise of the Internet of Things (IoT), the cloud is becoming increasingly important in enabling the development and delivery of IoT applications. To deal with the data explosion, more businesses in Asia-Pacific are redesigning their networks and deploying cloud services.
The huge amount of data leads to complexity in managing workloads and applications manually, which acts as a major factor in the adoption of cloud performance management solutions among enterprises in this region. Also, the affordability and ease of deployment of cloud performance management solutions act as driving factors for the adoption of cloud technologies among enterprises. The increasing trend toward cloud-based solutions is expected to trigger the growth of the cloud performance management market in this region. Integration of the latest technologies, such as AI, analytics, and ML, drives the demand for cloud performance management solutions in the region. The availability of advanced and reliable cloud infrastructure presents attractive opportunities for cloud-based technologies. An increase in investments in Asia Pacific by giant cloud providers such as Google is another driver for the growth of the CPM market in this region. Strong technological advancements and government initiatives have driven the cloud performance management market, as have increasing urbanization, technological innovation, and government support for the digital economy with suitable policies and compliance (regulations).
Get 10% Free Customization on this Report: https://www.marketsandmarkets.com/requestCustomizationNew.asp?id=239116385
Market Players
Some prominent players across all service types profiled in the Cloud Performance Management Market study include Microsoft (US), IBM (US), HPE (US), Oracle (US), VMware (US), CA Technologies (US), Riverbed (US), Dynatrace (US), AppDynamics (US), BMC Software (US).
Browse Adjacent Markets:Cloud Computing Market Research Reports & Consulting
Related Reports:
Cloud Storage Market by Component (Solutions and Services), Application (Primary Storage, Backup and Disaster Recovery, and Archiving), Deployment Type (Public and Private Cloud), Organization Size, Vertical and Region - Global Forecast to 2027
Integrated Cloud Management Platform Market by Component (Solutions and Services), Organization Size, Vertical (BFSI, IT & Telecom, Government & Public Sector) and Region - Global Forecast to 2027
Here is the original post:
Cloud Performance Management Market Worth $3.9 Billion By 2027 Exclusive Report by MarketsandMarkets - Benzinga
Critical hole in Atlassian Bitbucket allows any miscreant to hijack servers – The Register
A critical command-injection vulnerability in multiple API endpoints of Atlassian Bitbucket Server and Data Center could allow an unauthorized attacker to remotely execute malware, and view, change, and even delete data stored in repositories.
Atlassian has fixed the security holes, which are present in versions 7.0.0 to 8.3.0 of the software, inclusive. Luckily there are no known exploits in the wild.
But considering the vulnerability, tracked as CVE-2022-36804, received a 9.9 out of 10 CVSS score in terms of severity, we'd suggest you stop what you're doing and update as soon as possible as it's safe to assume miscreants are already scanning for vulnerable instances.
As Atlassian explains in its security advisory, published mid-last week: "An attacker with access to a public repository or with read permissions to a private Bitbucket repository can execute arbitrary code by sending a malicious HTTP request."
Additionally, the Center for Internet Security has labeled the flaw a "high" security risk for all sizes of business and government entities. These outfits typically use Bitbucket for managing source code in Git repositories.
Atlassian recommends organizations upgrade their instances to a fixed version, and those with configured Bitbucket Mesh nodes will need to update those, too. There's a compatibility matrix to help users find the Mesh version that's compatible with the Bitbucket Data Center version.
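For admins triaging their estates, here is a minimal sketch that flags a Bitbucket Server instance whose self-reported version falls inside the affected 7.0.0 to 8.3.0 range. It assumes the commonly available application-properties REST endpoint is reachable and that the instance URL below is replaced with your own; treat Atlassian's advisory as the authoritative list of fixed versions.

```python
# Rough check for the CVE-2022-36804 affected range (7.0.0 - 8.3.0 inclusive).
# Assumes the standard Bitbucket Server REST endpoint below is reachable;
# always confirm against Atlassian's advisory before acting on the result.
import requests

BASE_URL = "https://bitbucket.example.com"  # hypothetical instance URL


def parse_version(version_string: str) -> tuple:
    """Turn a string like '8.3.0' into a comparable tuple like (8, 3, 0)."""
    return tuple(int(part) for part in version_string.split(".")[:3])


def is_in_affected_range(version_string: str) -> bool:
    version = parse_version(version_string)
    return (7, 0, 0) <= version <= (8, 3, 0)


resp = requests.get(f"{BASE_URL}/rest/api/1.0/application-properties", timeout=10)
resp.raise_for_status()
reported = resp.json().get("version", "")

if reported and is_in_affected_range(reported):
    print(f"Version {reported} is within the affected range - check the advisory and upgrade.")
else:
    print(f"Version {reported} appears outside the affected range (still verify fix versions).")
```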
And if you need to postpone a Bitbucket update, Atlassian advises turning off public repositories globally as a temporary mitigation. This will change the attack vector from an unauthorized to an authorized attack. However, "this can not be considered a complete mitigation as an attacker with a user account could still succeed," according to the advisory.
Security researcher @TheGrandPew discovered and reported the vulnerability via Atlassian's bug bounty program.
This latest bug follows a series of hits for the popular enterprise collaboration software maker.
Last month, Atlassian warned users of its Bamboo, Bitbucket, Confluence, Fisheye, Crucible, and Jira products that a pair of years-old, critical flaws threaten their security. It detailed the so-called Servlet Filter dispatcher vulnerabilities in its July security updates, and said the flaw allowed remote, unauthenticated attackers to bypass authentication used by third-party apps.
In June, Atlassian copped to another critical flaw in Confluence that was under active attack.
Plus, there was also the two-week-long, embarrassing cloud outage that affected almost 800 customers this spring. This is less than half a percent of the company's total customers, but still, as co-founder and co-CEO Mike Cannon-Brookes admitted on the firm's most recent earnings call, even "one customer is too many." And definitely not a good look for a cloud collaboration business.
Continue reading here:
Critical hole in Atlassian Bitbucket allows any miscreant to hijack servers - The Register
Crypto Quantique’s quantum-driven silicon IP enables root-of-trust in the Intel Pathfinder for RISC-V environment – Design and Reuse
LONDON, August 30, 2022 --- Crypto Quantique, a specialist in quantum-driven cyber security for the internet of things (IoT), announces that the company's QDID silicon IP block has been selected for the recently announced Intel Pathfinder for RISC-V* integrated development environment. Intel Pathfinder enables RISC-V cores and other IP to be evaluated in FPGAs and simulator programs before committing to the final silicon design and fabrication. The environment is supported by industry-standard toolchains.
QDID is the first security IP chosen for Intel Pathfinder. QDID is independently verified as resilient against all currently-known cyberattack mechanisms. Its analog block measures random, quantum tunnelling current in the fabric of silicon wafers to produce high-entropy, random numbers from which unique, unforgeable identities and cryptographic keys are created on-demand. These identities and keys form a root-of-trust for each device, and this is the foundation for IoT security when QDID chips are deployed.
Because QDID's random numbers are generated within the fabric of the silicon, both key injection and the need to store cryptographic keys in memory are eliminated. This gives semiconductor users total control of their security framework and enables them to create a zero-trust supply chain that eliminates several security vulnerabilities.
QDID is complemented by Crypto Quantique's QuarkLink end-to-end IoT security software. The QuarkLink platform enables thousands of endpoint devices to be connected to on-premises or cloud servers automatically through cryptographic APIs. Via a simple graphical user interface, users can achieve secure provisioning, onboarding, security monitoring, and certificate and key renewal or revocation with just a few keystrokes.
Crypto Quantique's CEO, Dr. Shahram Mossayebi, said, "QDID for Intel Pathfinder for RISC-V* is further validation of our technology leadership in IoT security. QDID is easily integrated into the security framework of the RISC-V architecture and, just as Intel is helping democratize chip design with its Intel Pathfinder development environment, QDID and QuarkLink are democratizing semiconductor security by making it technically and economically accessible for the broadest possible range of applications."
"The integration of Crypto Quantique with Intel Pathfinder demonstrates our commitment to addressing key end-user concerns like security at an early stage in the development process," said Vijay Krishnan, General Manager, RISC-V Ventures from Intel, "thus paving the way for increased RISC-V adoption in segments like IoT."
Intel Pathfinder is available in both Starter and Professional Editions. The Starter Edition is available as a free download from https://pathfinder.intel.com. More information on QDID can be found at https://www.cryptoquantique.com/products/qdid/ and a detailed description of QuarkLink is available at https://www.cryptoquantique.com/products/quarklink/.
Two Thirds Ethereum Nodes Are With AWS, Hetzner & OVH Cloud Servers – Infostor magazine
Currently, 4,653 active Ethereum nodes are managed by different centralized web providers. A trending topic of discussion these days is that three cloud providers account for two-thirds of these 4,653 Ethereum nodes.
Source: Messari- Twitter
According to crypto analytics platform Messari, "Three major cloud providers are responsible for 69% of the 65% of @Ethereum nodes hosted in data centers. Of the estimated 95% of @Solana nodes hosted in data centers, 72% are hosted with the same cloud providers as @Ethereum."
This special report shared by Messari clearly shows that Ethereum and Solana blockchain nodes are centralized to a considerable extent by the three primary cloud providers.
As per the data shared by Messari, three major cloud providers are responsible for 69% of the 65% of Ethereum nodes hosted in data centers. Also, the same three cloud providers are responsible for 72% of 95% of Solana nodes hosted in data centers.
Analyzing this tweet and the image shared by Messari makes it clear that Amazon Web Services is responsible for 50% of the hosted nodes on the Ethereum mainnet. Apart from Amazon, the two other primary cloud providers, Hetzner and OVH, are responsible for 15% and 4% of the hosted nodes, respectively.
The situation is similar for the hosted Solana nodes. Hetzner accounts for 42% of the hosted Solana nodes, OVH stands in second position with 26%, and AWS takes 3% of the total hosted Solana nodes.
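To put those nested percentages in context, the short calculation below combines the figures quoted above into each provider trio's share of all nodes, not just the data-centre-hosted ones.

```python
# Combine Messari's nested percentages into shares of ALL nodes.
# Figures come directly from the report quoted above.

eth_hosted_share = 0.65        # share of Ethereum nodes hosted in data centers
eth_top3_of_hosted = 0.69      # share of those hosted nodes on the top 3 clouds

sol_hosted_share = 0.95        # share of Solana nodes hosted in data centers
sol_top3_of_hosted = 0.72      # share of those hosted nodes on the same 3 clouds

eth_top3_total = eth_hosted_share * eth_top3_of_hosted
sol_top3_total = sol_hosted_share * sol_top3_of_hosted

print(f"Top 3 clouds host ~{eth_top3_total:.0%} of all Ethereum nodes")  # ~45%
print(f"Top 3 clouds host ~{sol_top3_total:.0%} of all Solana nodes")    # ~68%
```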
The current situation of three cloud providers hosting two-thirds of Ethereum nodes indicates that crypto needs to decentralize. Presently, there are many companies favoring blockchain centralization. Digital companies need to clearly understand that decentralization is crucial to protecting Ethereum nodes from central points of failure.
Even though geographical centralization is a massive problem, no steps have been taken to address it. The United States and Germany account for 60% of Ethereum nodes distributed globally.
With the increasing node and geographical centralization, you can expect more focus on the decentralization of blockchain networks. As blockchain networks like Ethereum and Solana face many risks, other crypto analytics platforms may share strong messages like this to solve these problems.
It is crucial to stop complete centralization, as it exposes Ethereum nodes to central points of failure. We will have to wait and watch to see what steps will be taken to reduce centralization.
See more here:
Two Thirds Ethereum Nodes Are With AWS, Hetzner & OVH Cloud Servers - Infostor magazine
Why There’s Renewed Interest In The Cloud for Healthcare – HIT Consultant
Wes Wright, Chief Technology Officer at Imprivata
From the development of the EMR to the growth of telehealth, the digital environment for healthcare has evolved tremendously over the last few years. So, it's no surprise that IT spending is set to increase by 12.3% for cloud computing, 9.7% for digital transformation, and 9.7% for security software this year. Though healthcare organizations have historically been slower to adopt cloud, we're now seeing renewed interest.
Has the pandemic caused a reaction among IT leaders to follow this trend? Or is cloud just the start of a digital identity revolution for care providers? Let's dive into the main factors behind this trend.
But first: why has healthcare lagged behind other industries in cloud adoption?
Cloud isn't anything new. In fact, almost all businesses use it in some form, and the market for public cloud alone will be worth over $800 billion by 2025. However, the healthcare sector can be slow to adopt new technologies like cloud.
Any medical data stored in the cloud is accessible from multiple locations. So, anything ranging from a patient's personal health information (PHI) to a clinician's or doctor's digital identity is a lot more vulnerable to cyberattacks. With industry leaders focused on compliance, regulation, and security, this understandably makes healthcare leaders hesitant to adopt these technologies.
But while this threat appears in the cloud, a greater one lies in human error and negligence where a lost device or stolen password can lead to a breach just as easily as a phishing attempt.
The good news is that there are many solutions on the market today that HDOs can implement to mitigate security risks by protecting the digital identities of those accessing sensitive patient information.
Why is it important to protect a user's digital identity?
Digital identity refers to the unique identifiers an individual uses to interact online. For example, a doctor uses their digital identity to log into a patient's medical record. Of course, protecting these identifying credentials is crucial to reducing the risk of a cyberattack or breach of sensitive information.
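One common way to protect those log-in credentials is to pair the password with a second factor. The snippet below is a generic illustration using the open-source pyotp library; it is not a depiction of Imprivata's products or of any specific EMR integration.

```python
# Generic two-factor check: a time-based one-time password (TOTP) alongside
# the usual username/password. Illustrative only; real clinical systems
# layer this into single sign-on and audit workflows.
import pyotp

# Enrolment: generate and store a per-user secret (shown to the user as a QR code).
secret = pyotp.random_base32()
totp = pyotp.TOTP(secret)
print("Provisioning URI for an authenticator app:",
      totp.provisioning_uri(name="dr.smith@example-hospital.org", issuer_name="EMR Portal"))

# Login: after the password check, require the current 6-digit code.
submitted_code = input("Enter the 6-digit code from your authenticator: ")
if totp.verify(submitted_code, valid_window=1):   # allow one 30-second step of clock drift
    print("Second factor accepted - access granted to the patient record.")
else:
    print("Second factor rejected - access denied.")
```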
Fortunately, digital identity solutions have seen more widespread adoption due to the need to manage and secure a multitude of users, locations, and devices many of which are cloud technologies. HDOs that have implemented these solutions have seen their security posture strengthen, allowing IT leaders to put more trust in cloud.
These solutions also provide HIPAA compliance, a growing topic of importance as telehealth and virtual care open even more access points to a user's digital identity. In the U.S., HIPAA encourages the use of electronic medical records and includes standards for protecting PHI. These standards affect any organization that handles patient data, including cloud service providers.
Still, finding a HIPAA-compliant digital identity solution is only one component of reaching a secure digital environment. HDOs must also configure their infrastructure, monitor it continuously, and address any issues that arise. As such, many healthcare leaders are fearful of making a compliance misstep when they migrate to the cloud, with the potential for hefty fines. Luckily, more digital identity solutions have taken steps to address these fears through the cloud.
So, how will the cloud change healthcare IT?
Within and beyond hospital walls, care as we know it has changed drastically since the onset of the pandemic. Keeping up with the rising number of COVID-19 patients while continuing care for the public has left clinicians and IT leaders with a lot more to keep track of (not to mention some serious burnout.) Fortunately, cloud offerings provide appealing capabilities that could lead to more innovative patient care while combatting ongoing IT talent shortages, and the pressure of growing costs.
Improve patient care through the cloud
Over the last couple of decades, we've seen the electronic medical record (EMR) transform data storage for HDOs. How and where that data is accessed is different too, with many clinicians using smartphones and tablets to send information and access patient records. But as the digital health landscape evolves and cyberattacks rise, more safeguards have been put in place to protect this sensitive information, though at what cost to the patient experience?
Using any mobile device often requires clinicians to connect to a VPN and authenticate at least twice, which is not ideal for those working to provide care without worrying about technology.
How can cloud help? By providing ease for HDOs while empowering patients.
We've already seen many healthcare organizations adopt cloud technologies with the rise of telemedicine, a trend that shows no signs of slowing. Many are using cloud to implement services that make it easier to share health data, treatment plans, lab test reports, medical records, and even doctors' notes, without sacrificing security.
For clinicians using mobile devices, cloud access is a secure and seamless option. There's no need to access the VPN with multiple logins because of the encrypted connection. This alone has given patients better visibility, access, and options regarding their own health, while HDOs maintain HIPAA compliance.
Maintain IT security amid resource constraints
As the amount of data in hospital systems continues growing, the responsibilities of IT staff have expanded as well, quite a bit. Just as we've seen in other industries throughout the pandemic, this burnout has resulted in healthcare talent shortages and resource constraints.
But every talent shortage has a silver lining. In this case, it's increased cloud adoption. For short-staffed healthcare organizations struggling to find qualified workers, cloud is a much more attractive option than on-site storage. HDOs can outsource all maintenance and support to cloud service providers. Those managing cloud servers are often IT experts trained at securing massive amounts of data, further increasing the security of cloud.
Many analytical tools are used through cloud to handle database management, business analytics, artificial intelligence, and beyond. This adds a layer of security while converting data into meaningful information.
Growing cost pressures
Participants spent around 50-70% less when storing PHI in the cloud, compared to on-premises, according to a Black Book survey on the state of healthcare IT. At a time when IT budgets are being squeezed in all directions by increasing insurance rates, rising salaries, growing user demands, and higher overall operating costs, this represents an important saving.
Why is cloud cheaper than on-premises storage? First, because it requires little to no upfront spend on hardware and licensing. Second, because it enables users to pay as they go for availability and storage, as well as offering limitless scalability. And third, it supports remote access for clinicians.
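As a toy model of that pay-as-you-go argument, the sketch below compares an upfront on-premises purchase with a monthly consumption-based fee over five years. Every number is a made-up placeholder intended only to show the shape of the comparison, not any vendor's actual pricing.

```python
# Toy comparison of upfront on-prem spend vs pay-as-you-go cloud storage.
# All figures are hypothetical placeholders, not real vendor pricing.

YEARS = 5
MONTHS = YEARS * 12

# Hypothetical on-premises costs
onprem_hardware_and_licences = 250_000      # upfront purchase
onprem_power_cooling_per_month = 2_000      # energy and facilities
onprem_admin_per_month = 4_000              # staff time for maintenance

# Hypothetical cloud costs (consumption-based, scales with use)
cloud_storage_per_month = 3_500
cloud_egress_and_services_per_month = 1_000

onprem_total = (onprem_hardware_and_licences
                + MONTHS * (onprem_power_cooling_per_month + onprem_admin_per_month))
cloud_total = MONTHS * (cloud_storage_per_month + cloud_egress_and_services_per_month)

print(f"{YEARS}-year on-prem total: ${onprem_total:,}")
print(f"{YEARS}-year cloud total:   ${cloud_total:,}")
print(f"Cloud saving vs on-prem: {1 - cloud_total / onprem_total:.0%}")
```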
For some healthcare organizations, the unpredictable price of energy might also play a role. Powering and cooling an on-premises data center is extremely energy intensive, and therefore costly. Migrating to the cloud protects organizations from this at least in part.
Finally, it's worth noting the growing popularity of hybrid cloud solutions, the market for which is forecast to nearly triple in value from 2020 to 2026, rising from $52 billion to $145 billion. Hybrid cloud refers to any solution that combines private storage infrastructure with public cloud.
Hybrid cloud also enables organizations to select storage thats customized to the performance and cost requirements of their specific workload. For example, healthcare organizations can store dynamic, short-term workloads in the public cloud. Meanwhile, more static long-term workloads can be stored on-premises for less, leading to long-term cost savings.
What's next for the cloud in healthcare?
Healthcare capabilities will continue to evolve with the cloud, enhancing security and patient care. IT constraints, the lack of security expertise, and financial limitations are just a few of the burdens that cloud will help HDOs overcome and manage.
Hybrid cloud will likely be the shift for many healthcare organizations, opening the door for identity and access management providers to develop and integrate sleek and sophisticated cloud solutions that allow clinicians to focus more on care and less on technology at every touchpoint.
We've all adjusted to a new normal over the last few years, and now we'll see healthcare IT do the same.
About Wes Wright
Wes Wright is the Chief Technology Officer at Imprivata. Wes brings more than 20 years of experience with healthcare providers, IT leadership, and security. Prior to joining Imprivata, Wes was the CTO at Sutter Health, where he was responsible for technical services strategies and operational activities for the 26-hospital system. Wes has been the CIO at Seattle Children's Hospital and has served as the Chief of Staff for a three-star general in the US Air Force.
Excerpt from:
Why There's Renewed Interest In The Cloud for Healthcare - HIT Consultant
Audi putting a centralized server solution into test operation in cycle-dependent production; edge solution gets rid of individual PCs – Green Car…
Audi is putting a centralized server solution into test operation in cycle-dependent production; it says it is the first manufacturer to do so. With this Edge Cloud 4 Production local server solution, Audi is initiating a paradigm shift in automation technology.
With Edge Cloud 4 Production, a few centralized, local servers will take on the work of countless expensive industrial PCs. The server solution makes it possible to level out spikes in demand over the total number of virtualized clients, a more efficient use of resources. Production will save time and effort, particularly where software rollouts, operating system changes, and IT-related expenses are concerned.
"What we're doing here is a revolution. We used to have to buy hardware when we wanted to introduce new functions. With Edge Cloud 4 Production, we only buy applications in the form of software. That is the crucial step toward IT-based production."
Gerd Walker, Member of the Board of Management of AUDI AG Production and Logistics
After successful testing in the Audi Production Lab (P-Lab), three local servers will take over directing workers in the Böllinger Höfe. If the server infrastructure continues to operate reliably, Audi wants to roll out this automation technology for serial production throughout the entire Volkswagen Group.
In the Böllinger Höfe near Neckarsulm, the Audi e-tron GT quattro and the R8 share an assembly line. The small-scale series produced there are particularly well suited for testing projects from the P-Lab and trying things out for large-scale series.
The crucial advantage of Edge Cloud 4 Production is that countless industrial PCs can be replaced along with their input and output devices and no longer need to be individually maintained. Process safety is also greatly improved. In the event of a disruption, the load can be shifted to other servers. In contrast, a broken industrial PC would have to be replaced. That takes time. On top of that, the solution reduces the workload for employees.
In the future, thin clients capable of what is known as power-over-Ethernet will set the pace. These terminal devices get their electrical power via Ethernet cables and most of their computational power through local servers. They have USB ports for output devices. That enables managers directing workers to look at a monitor and see what needs to be mounted onto which vehicle. In the future, an oversized PC with processing and storage capacity will not be necessary for these tasks.
"Software-based infrastructures have proven themselves in data processing centers. We're convinced they will also work well in production."
Henning Löser, head of Audi's Production Lab
Together with the experts from the P-Lab, the IT managers around Christoph Hagmüller, the Head of IT Services at Audi in Neckarsulm and co-manager for Production IT in the Böllinger Höfe, are rolling out the new solution. With its comparatively low unit and cycle numbers, the Böllinger Höfe is ideally suited to functioning as a real-world lab for testing the new concept in series production.
Edge Cloud 4 Production has a hyper-converged infrastructure (HCI). This software-defined system combines all the elements of a small data processing center: storage, computing, networking, and management. The software defines functionalities such as web servers, databases, and managing systems. The cloud solution can also be quickly scaled at will to adapt to changing production requirements.
However, a public cloud link is out of the question due to production's stringent security requirements. Additionally, local servers make the necessary, very short latencies possible.
"These are the reasons why we install the servers near us. That's also why we call the solution Edge Cloud: because it's close to our shop floor environment."
Henning Löser
The new IT concept also improves ease of maintenance and IT security. With industrial PCs, the patch cycles (the intervals between necessary updates) are usually longer. On top of that, updates can only be installed during pauses in production. With the cloud-based infrastructure, IT experts can roll out patches in all phases within a few minutes via the central servers. Moreover, IT colleagues install functionality updates in all virtual clients at the same time, such as a new operating system.
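To illustrate why that centralisation shortens patch cycles, here is a schematic sketch of pushing one update to many virtualized clients in parallel from a central point. The client inventory and the apply_patch routine are hypothetical stand-ins, not Audi's or VMware's actual tooling.

```python
# Schematic central rollout: patch many virtual clients in parallel instead of
# visiting each industrial PC. apply_patch() and the inventory are hypothetical.
from concurrent.futures import ThreadPoolExecutor, as_completed

virtual_clients = [f"vclient-{n:03d}" for n in range(1, 201)]  # hypothetical inventory

def apply_patch(client: str, patch_id: str) -> str:
    # Placeholder for the real mechanism (e.g. an API call to the edge servers).
    return f"{client}: {patch_id} applied"

def roll_out(patch_id: str) -> None:
    with ThreadPoolExecutor(max_workers=20) as pool:
        futures = {pool.submit(apply_patch, c, patch_id): c for c in virtual_clients}
        for future in as_completed(futures):
            print(future.result())

if __name__ == "__main__":
    roll_out("os-update-2022-09")
```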
Hagmüller says that the need for additional functionality will get increasingly elaborate and expensive in the future. He estimates that the cost of an update, for example from Windows 10 to Windows 11, can be reduced by about one-third.
"Additionally, with the server solution, we aren't dependent on loose timeframes in Production anymore. It gives us tremendous flexibility to ensure our software and operating systems are always completely up to date."
Christoph Hagmüller
Both data processing centers in the Neckarsulm plant are slated for use in subsequent mass production. A fiber optic cable connects them with the Böllinger Höfe. According to Henning Löser, 5G will be relevant in the second stage. Thus far, a separate computer has been installed in every automated guided vehicle (AGV). Here too, experts must install costly security updates and new operating systems. It is conceivable that they could acquire new functionalities, but these are seldom transferable to their computers.
"We need a fast, high-availability network for that. In our testing environment in the P-Lab, we have taken another step forward concerning 5G."
Henning Löser
Read the original post:
Audi putting a centralized server solution into test operation in cycle-dependent production; edge solution gets rid of individual PCs - Green Car...