Category Archives: Cloud Servers

What is quantum computing? Everything you need to know about the strange world of quantum computers – Texasnewstoday.com

Quantum computing exploits the puzzling behavior that scientists have been observing for decades in nature's smallest particles: think atoms, photons or electrons. At this scale, the classical laws of physics cease to apply, and instead we shift to quantum rules.

While researchers don't understand everything about the quantum world, what they do know is that quantum particles hold immense potential, in particular to hold and process large amounts of information. Successfully bringing those particles under control in a quantum computer could trigger an explosion of compute power that would phenomenally advance innovation in many fields that require complex calculations, like drug discovery, climate modelling, financial optimization or logistics.

As Bob Sutor, chief quantum exponent at IBM, puts it: "Quantum computing is our way of emulating nature to solve extraordinarily difficult problems and make them tractable," he tells ZDNet.

Quantum computers come in various shapes and forms, but they are all built on the same principle: they host a quantum processor where quantum particles can be isolated for engineers to manipulate.

The nature of those quantum particles, as well as the method employed to control them, varies from one quantum computing approach to another. Some methods require the processor to be cooled down to freezing temperatures, while others manipulate quantum particles using lasers, but all share the goal of finding out how best to exploit the value of quantum physics.

The systems we have been using since the 1940s in various shapes and forms (laptops, smartphones, cloud servers, supercomputers) are known as classical computers. Those are based on bits, a unit of information that powers every computation that happens in the device.

In a classical computer, each bit can take on either a value of one or zero to represent and transmit the information that is used to carry out computations. Using bits, developers can write programs, which are sets of instructions that are read and executed by the computer.

Classical computers have been indispensable tools in the last few decades, but the inflexibility of bits is limiting. As an analogy, if tasked with looking for a needle in a haystack, a classical computer would have to be programmed to look through every single piece of hay straw until it reached the needle.

There are still many large problems, therefore, that classical devices can't solve. "There are calculations that could be done on a classical system, but they might take millions of years or use more computer memory than exists in total on Earth," says Sutor. "These problems are intractable today."

At the heart of any quantum computer are qubits, also known as quantum bits, which can loosely be compared to the bits that process information in classical computers.

Qubits, however, have very different properties to bits, because they are made of the quantum particles found in nature: those same particles that have been obsessing scientists for many years.

One of the properties of quantum particles that is most useful for quantum computing is known as superposition, which allows quantum particles to exist in several states at the same time. The best way to imagine superposition is to compare it to tossing a coin: instead of being heads or tails, quantum particles are the coin while it is still spinning.

By controlling quantum particles, researchers can load them with data to create qubits, and thanks to superposition, a single qubit doesn't have to be either a one or a zero, but can be both at the same time. In other words, while a classical bit can only be heads or tails, a qubit can be, at once, heads and tails.

This means that, when asked to solve a problem, a quantum computer can use qubits to run several calculations at once to find an answer, exploring many different avenues in parallel.

So in the needle-in-a-haystack scenario above, unlike a classical machine, a quantum computer could in principle browse through all the hay straws at the same time, finding the needle in a matter of seconds rather than looking for years, even centuries, before it found what it was searching for.

What's more: qubits can be physically linked together thanks to another quantum property called entanglement, meaning that with every qubit that is added to a system, the device's capabilities increase exponentially, where adding more bits only generates a linear improvement.

Every time we use another qubit in a quantum computer, we double the amount of information and processing ability available for solving problems. So by the time we get to 275 qubits, we can compute with more pieces of information than there are atoms in the observable universe. And the compression of computing time that this could generate could have big implications in many use cases.
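
As a quick sanity check on that claim, the arithmetic is easy to reproduce. A minimal sketch in Python follows, using the commonly cited rough estimate of 10^80 atoms in the observable universe:

```python
# 275 qubits can, in principle, hold 2**275 amplitudes at once.
states = 2 ** 275
atoms_in_observable_universe = 10 ** 80  # commonly cited rough estimate

print(states)                                  # about 6.1e82
print(states > atoms_in_observable_universe)   # True
```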

"There are a number of cases where time is money. Being able to do things more quickly will have a material impact in business," Scott Buchholz, managing director at Deloitte Consulting, tells ZDNet.

The gains in time that researchers are anticipating as a result of quantum computing are not of the order of hours or even days. We're talking, rather, about potentially being capable of calculating, in just a few minutes, the answer to problems that today's most powerful supercomputers couldn't resolve in thousands of years, ranging from modelling hurricanes all the way to cracking the cryptography keys protecting the most sensitive government secrets.

And businesses have a lot to gain, too. According to recent research by Boston Consulting Group (BCG), the advances that quantum computing will enable could create value of up to $850 billion in the next 15 to 30 years, $5 to $10 billion of which will be generated in the next five years if key vendors deliver on the technology as they have promised.

Programmers write problems in the form of algorithms for classical computers to resolve, and similarly, quantum computers will carry out calculations based on quantum algorithms. Researchers have already identified some quantum algorithms that would be particularly suited to the enhanced capabilities of quantum computers.

For example, quantum systems could tackle optimization algorithms, which help identify the best solution among many feasible options, and could be applied in a wide range of scenarios ranging from supply chain administration to traffic management. ExxonMobil and IBM, for instance, are working together to find quantum algorithms that could one day manage the 50,000 merchant ships crossing the oceans each day to deliver goods, to reduce the distance and time traveled by fleets.

Quantum simulation algorithms are also expected to deliver unprecedented results, as qubits enable researchers to handle the simulation and prediction of complex interactions between molecules in larger systems, which could lead to faster breakthroughs in fields like materials science and drug discovery.

With quantum computers capable of handling and processing much larger datasets, AI and machine learning applications are set to benefit hugely, with faster training times and more capable algorithms. And researchers have also demonstrated that quantum algorithms have the potential to crack traditional cryptography keys, which for now are too mathematically difficult for classical computers to break.

To create qubits, which are the building blocks of quantum computers, scientists have to find and manipulate the smallest particles of nature: tiny parts of the universe that can be harnessed through different mediums. This is why there are currently many types of quantum processors being developed by a range of companies.

One of the most advanced approaches consists of using superconducting qubits, which are made of electrons, and come in the form of the familiar chandelier-like quantum computers. Both IBM and Google have developed superconducting processors.

Another approach that is gaining momentum is trapped ions, which Honeywell and IonQ are leading the way on, and in which qubits are housed in arrays of ions that are trapped in electric fields and then controlled with lasers.

Other companies, like Xanadu and PsiQuantum, are investing in yet another method that relies on quantum particles of light, called photons, to encode data and create qubits. Qubits can also be created out of silicon spin qubits (which Intel is focusing on), cold atoms or even diamonds.

Quantum annealing, an approach that was chosen by D-Wave, is a different category of computing altogether. It doesn't rely on the same paradigm as other quantum processors, known as the gate model. Quantum annealing processors are much easier to control and operate, which is why D-Wave has already developed devices that can manipulate thousands of qubits, where virtually every other quantum hardware company is working with about 100 qubits or fewer. On the other hand, the annealing approach is only suitable for a specific set of optimization problems, which limits its capabilities.

What can you do with a quantum computer today?

Right now, with a mere 100 qubits being the state of the art, there is very little that can actually be done with quantum computers. For qubits to start carrying out meaningful calculations, they will have to be counted in the thousands, and even millions.

"While there is a tremendous amount of promise and excitement about what quantum computers can do one day, I think what they can do today is relatively underwhelming," says Buchholz.

Increasing the qubit count in gate-model processors, however, is incredibly challenging. This is because keeping the particles that make up qubits in their quantum state is difficult: a little bit like trying to keep a coin spinning without letting it fall on one side or the other, except much harder.

Keeping qubits spinning requires isolating them from any environmental disturbance that might cause them to lose their quantum state. Google and IBM, for example, do this by placing their superconducting processors in temperatures that are colder than outer space, which in turn requires sophisticated cryogenic technologies that are currently near-impossible to scale up.

In addition, the instability of qubits means that they are unreliable, and still likely to cause computation errors. This has given rise to a branch of quantum computing dedicated to developing error-correction methods.

Although research is advancing at pace, therefore, quantum computers are for now stuck in what is known as the NISQ era: noisy, intermediate-scale quantum computing. The end goal, however, is to build a fault-tolerant, universal quantum computer.

As Buchholz explains, it is hard to tell when this is likely to happen. "I would guess we are a handful of years from production use cases, but the real challenge is that this is a little like trying to predict research breakthroughs," he says. "It's hard to put a timeline on genius."

In 2019, Google claimed that its 54-qubit superconducting processor called Sycamore had achieved quantum supremacy: the point at which a quantum computer can solve a computational task that is impossible to run on a classical device in any realistic amount of time.

Google said that Sycamore had calculated, in only 200 seconds, the answer to a problem that would have taken the world's biggest supercomputers 10,000 years to complete.

More recently, researchers from the University of Science and Technology of China claimed a similar breakthrough, saying that their quantum processor had taken 200 seconds to achieve a task that would have taken 600 million years to complete with classical devices.

This is far from saying that either of those quantum computers is now capable of outstripping any classical computer at any task. In both cases, the devices were programmed to run very specific problems, with little usefulness aside from proving that they could compute the task significantly faster than classical systems.

Without a higher qubit count and better error correction, proving quantum supremacy for useful problems is still some way off.

Organizations that are investing in quantum resources see this as the preparation stage: their scientists are doing the groundwork to be ready for the day that a universal and fault-tolerant quantum computer is ready.

In practice, this means that they are trying to discover the quantum algorithms that are most likely to show an advantage over classical algorithms once they can be run on large-scale quantum systems. To do so, researchers typically try to prove that quantum algorithms perform comparably to classical ones on very small use cases, and theorize that as quantum hardware improves, and the size of the problem can be grown, the quantum approach will inevitably show some significant speed-ups.

For example, scientists at Japanese steel manufacturer Nippon Steel recently came up with a quantum optimization algorithm that could compete against its classical counterpart for a small problem that was run on a 10-qubit quantum computer. In principle, this means that the same algorithm equipped with thousands or millions of error-corrected qubits could eventually optimize the company's entire supply chain, complete with the management of dozens of raw materials, processes and tight deadlines, generating huge cost savings.

The work that quantum scientists are carrying out for businesses is therefore highly experimental, and so far there are fewer than 100 quantum algorithms that have been shown to compete against their classical equivalents which only points to how emergent the field still is.

With most use cases requiring a fully error-corrected quantum computer, just who will deliver one first is the question on everyone's lips in the quantum industry, and it is impossible to know the exact answer.

All quantum hardware companies are keen to stress that their approach will be the first one to crack the quantum revolution, making it even harder to discern noise from reality. "The challenge at the moment is that it's like looking at a group of toddlers in a playground and trying to figure out which one of them is going to win the Nobel Prize," says Buchholz.

"I have seen the smartest people in the field say they're not really sure which one of these is the right answer. There are more than half a dozen different competing technologies and it's still not clear which one will wind up being the best, or if there will be a best one," he continues.

In general, experts agree that the technology will not reach its full potential until after 2030. The next five years, however, may start bringing some early use cases as error correction improves and qubit counts start reaching numbers that allow for small problems to be programmed.

IBM is one of the rare companies that has committed to a specific quantum roadmap, which defines the ultimate objective of realizing a million-qubit quantum computer. In the nearer term, Big Blue anticipates that it will release a 1,121-qubit system in 2023, which might mark the start of the first experimentations with real-world use cases.

Developing quantum hardware is a huge part of the challenge, and arguably the most significant bottleneck in the ecosystem. But even a universal fault-tolerant quantum computer would be of little use without the matching quantum software.

"Of course, none of these online facilities are much use without knowing how to speak quantum," Andrew Fearnside, senior associate specializing in quantum technologies at intellectual property firm Mewburn Ellis, tells ZDNet.

Creating quantum algorithms is not as easy as taking a classical algorithm and adapting it to the quantum world. Quantum computing, rather, requires a brand-new programming paradigm that can only be run on a brand-new software stack.

Of course, some hardware providers also develop software tools, the most established of which is IBM's open-source quantum software development kit Qiskit. But on top of that, the quantum ecosystem is expanding to include companies dedicated exclusively to creating quantum software. Familiar names include Zapata, QC Ware and 1QBit, which all specialize in providing businesses with the tools to understand the language of quantum.
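
To give a flavour of what "speaking quantum" looks like in practice, here is a minimal sketch using Qiskit that builds a one-qubit circuit, puts the qubit into superposition with a Hadamard gate and measures it. It assumes Qiskit is installed, and exact simulator APIs vary between versions:

```python
# A minimal superposition experiment with Qiskit (pip install qiskit).
from qiskit import QuantumCircuit

qc = QuantumCircuit(1, 1)
qc.h(0)           # Hadamard gate: puts the qubit into superposition
qc.measure(0, 0)  # measurement collapses it to 0 or 1

print(qc.draw())  # ASCII diagram of the one-qubit circuit

# Running this circuit on a simulator (for example qiskit_aer.AerSimulator)
# or on real IBM hardware returns roughly 50% '0' and 50% '1' over many shots.
```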

And increasingly, promising partnerships are forming to bring together different parts of the ecosystem. For example, the recent alliance between Honeywell, which is building trapped-ion quantum computers, and quantum software company Cambridge Quantum Computing (CQC), has got analysts predicting that a new player could be taking a lead in the quantum race.

The complexity of building a quantum computer (think ultra-high vacuum chambers, cryogenic control systems and other exotic quantum instruments) means that the vast majority of quantum systems are currently firmly sitting in lab environments, rather than being sent out to customers' data centers.

To let users access the devices to start running their experiments, therefore, quantum companies have launched commercial quantum computing cloud services, making the technology accessible to a wider range of customers.

The four largest providers of public cloud computing services currently offer access to quantum computers on their platform. IBM and Google have both put their own quantum processors on the cloud, while Microsoft's Azure Quantum and AWS's Braket service let customers access computers from third-party quantum hardware providers.

The jury remains out on which technology will win the race, if any at all, but one thing is for certain: the quantum computing industry is developing fast, and investors are generously funding the ecosystem. Equity investments in quantum computing nearly tripled in 2020, and according to BCG, they are set to rise even more in 2021 to reach $800 million.

Government investment is even more significant: the US has unlocked $1.2 billion for quantum information science over the next five years, while the EU announced a €1 billion ($1.20 billion) quantum flagship. The UK also recently reached the £1 billion ($1.37 billion) budget milestone for quantum technologies, and while official numbers are not known in China, the government has made no secret of its desire to aggressively compete in the quantum race.

This has caused the quantum ecosystem to flourish over the past years, with new start-ups increasing from a handful in 2013 to nearly 200 in 2020. The appeal of quantum computing is also increasing among potential customers: according to analyst firm Gartner, while only 1% of companies were budgeting for quantum in 2018, 20% are expected to do so by 2023.

Although not all businesses need to be preparing themselves to keep up with quantum-ready competitors, there are some industries where quantum algorithms are expected to generate huge value, and where leading companies are already getting ready.

Goldman Sachs and JP Morgan are two examples of financial behemoths investing in quantum computing. That's because in banking, quantum optimization algorithms could give a boost to portfolio optimization, by better picking which stocks to buy and sell for maximum return.

In pharmaceuticals, where the drug discovery process is on average a $2 billion, ten-year-long deal that largely relies on trial and error, quantum simulation algorithms are also expected to make waves. This is also the case in materials science: companies like OTI Lumionics, for example, are exploring the use of quantum computers to design more efficient OLED displays.

Leading automotive companies including Volkswagen and BMW are also keeping a close eye on the technology, which could impact the sector in various ways, ranging from designing more efficient batteries to optimizing the supply chain, through to better management of traffic and mobility. Volkswagen, for example, pioneered the use of a quantum algorithm that optimized bus routes in real time by dodging traffic bottlenecks.

As the technology matures, however, it is unlikely that quantum computing will be limited to a select few. Rather, analysts anticipate that virtually all industries have the potential to benefit from the computational speedup that qubits will unlock.

Quantum computers are expected to be phenomenal at solving a certain class of problems, but that doesn't mean that they will be a better tool than classical computers for every single application. In particular, quantum systems aren't a good fit for fundamental computations like arithmetic, or for executing commands.

"Quantum computers are great constraint optimizers, but that's not what you need to run Microsoft Excel or Office," says Buchholz. "That's what classical technology is for: for doing lots of maths, calculations and sequential operations."

In other words, there will always be a place for the way that we compute today. It is unlikely, for example, that you will be streaming a Netflix series on a quantum computer anytime soon. Rather, the two technologies will be used in conjunction, with quantum computers being called for only where they can dramatically accelerate a specific calculation.

Buchholz predicts that, as classical and quantum computing start working alongside each other, access will look like a configuration option. Data scientists currently have a choice of using CPUs or GPUs when running their workloads, and it might be that quantum processing units (QPUs) join the list at some point. It will be up to researchers to decide which configuration to choose, based on the nature of their computation.

Although the precise way that users will access quantum computing in the future remains to be defined, one thing is certain: they are unlikely to be required to understand the fundamental laws of quantum computing in order to use the technology.

"People get confused because the way we lead into quantum computing is by talking about technical details," says Buchholz. "But you don't need to understand how your cellphone works to use it."

People sometimes forget that when you log into a server somewhere, you have no idea what physical location the server is in or even if it exists physically at all anymore. The important question really becomes what it is going to look like to access it.

And as fascinating as qubits, superposition, entanglement and other quantum phenomena might be, for most of us this will come as welcome news.

Read the rest here:
What is quantum computing? Everything you need to know about the strange world of quantum computers - Texasnewstoday.com

So you want to migrate to the cloud? – ITWeb

It seems nearly impossible to avoid the cloud as a business these days, and for many companies, the benefits cloud computing offers are just too great to ignore for much longer. Because of this, you've already taken the first step and made the decision: you want to migrate to the cloud. But now what?

Luckily, with the plethora of tools created by cloud providers and software vendors alike, kicking off your migration to the cloud has never been easier, whether you're looking to move on-site workloads or build cloud-native solutions from the start.

As cloud experts with experience advising, migrating, architecting, managing and optimising workloads in the cloud, BBD understands the nitty-gritty of what you need to consider before you take the plunge. There is, of course, quite a long list of things we can add here, but we know you also have work to do, so we will keep the rest of this to the point.

Although the right partner on this journey definitely makes your move to the cloud much more streamlined, there are multiple steps in the process. Over the next couple of weeks, this migration-focused series will unpack these steps and the processes you need to run through to ensure you ultimately deploy a secure, compliant, cost-effective and resilient environment.

Two of the most important aspects to consider from the start are security and compliance, because they often help establish whether your initial migration plan is viable or not, and if so, in which direction.

Security

Understanding your security goals and how you should be handling data will create a good foundation for you to know what services to use when architecting your environment.

Jaco Venter, head of BBD's managed cloud services team (MServ), says security should always be top of mind when planning your migration. There are the "how do I keep my customers' information secure?" and the "how do I ensure my applications do not get compromised?" conversations. These are both important to unpack with your cloud solution partner.

Both these topics can be addressed by planning for and implementing an architecture that includes best practices. BBD has done well-architected reviews on customer environments and often finds that the basics are covered, and that's a great start, but when looking at security, just the basics won't do, especially if it could lead to your environment being compromised.

As an example, AWS has created an Architecture Center on its website that provides reference architecture diagrams, vetted architecture solutions, well-architected best practices, patterns, icons and more. This easily accessible guidance was contributed to by AWS cloud architecture experts, including solutions architects, professional services consultants and partners.

For AWS migrations, Venter explains there is a shared responsibility model that pretty much goes like this: AWS is responsible for the security of the cloud. AWS will look after all things physical, from the security guards standing in front of their various data centres' doors, all the way through to the security and management of the infrastructure your services will be running on. You (or your cloud enablement partner), on the other hand, will be responsible for security in the cloud. This means you will still have to ensure your data is being protected and backed up.

There are, however, some AWS services that are fully managed, like RDS (relational database service), where AWS will manage and secure everything for you up until the DB table level.

Compliance

Understanding the compliance frameworks your organisation has to comply with is a recommended starting point, as it will influence a lot of the architecture you'll need to devise before your cloud migration. An example of this would be when the customers you service are in a country with data residency restrictions/laws (such as GDPR, POPIA, PCI, ISO, etc). Here you need to plan for how you will handle and process those customers' data versus the data of your customers in other countries without those restrictions/laws.

When looking at data residency again as an example, AWS has a couple of tools, such as Control Tower, that allow you to manage how data is transferred between regions, or whether it can even be transferred to another region.
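
As a rough illustration of how such a data-residency guardrail can be expressed, the sketch below uses boto3 to attach an AWS Organizations service control policy that denies requests outside approved regions. The policy content, names and target ID are placeholders, and Control Tower offers a managed "region deny" control that achieves a similar effect:

```python
# Illustrative only: attach an SCP that blocks API calls outside approved regions.
import json
import boto3

org = boto3.client("organizations")

region_deny_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "DenyOutsideApprovedRegions",
        "Effect": "Deny",
        "NotAction": ["iam:*", "organizations:*", "sts:*"],  # keep global services usable
        "Resource": "*",
        "Condition": {"StringNotEquals": {"aws:RequestedRegion": ["eu-west-1", "af-south-1"]}},
    }],
}

policy = org.create_policy(
    Name="data-residency-region-deny",               # placeholder name
    Description="Deny use of non-approved regions",
    Type="SERVICE_CONTROL_POLICY",
    Content=json.dumps(region_deny_policy),
)

# Attach the policy to an organizational unit (placeholder ID) so it applies
# to every account underneath it.
org.attach_policy(
    PolicyId=policy["Policy"]["PolicySummary"]["Id"],
    TargetId="ou-xxxx-xxxxxxxx",
)
```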

On the whole, compliance will often dictate where you can or cannot deploy your workloads, and which services you can or cannot use. "The great thing about AWS having obtained various compliance framework certifications for their infrastructure is that it makes it so much easier for you to be compliant," says Venter. "But think about this in the same way as the shared responsibility model: AWS will make sure the infrastructure is compliant; you will need to make sure your applications also meet the compliance framework requirements."

Ultimately, its worth understanding that the services you plan to leverage as part of your architecture can sometimes make it a bit easier to comply with the relevant compliance frameworks.

What else needs to be considered before finalising a cloud migration strategy?

It is always best to look at what migration tools the cloud provider you are migrating to has made available to you, often at no additional charge.

Venter explains this is exactly the case when looking at the tools made available by AWS. AWS has made more than six tools available at no cost, and some of these tools are perfect for the planning phase, while others make the migration of your servers, applications and databases just so much easier.

One example of such a tool is the AWS Server Migration Service, which is an agentless service applicable when migrating virtual-only workloads from on-premises infrastructure, or from Microsoft Azure, to AWS. It allows you to automate, schedule and track incremental replications of live server volumes, making it easier to coordinate large-scale server migrations.
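
For a sense of what interacting with the service looks like programmatically, here is a small, hedged sketch using boto3. It assumes the SMS connector has already been deployed and has discovered your on-premises servers:

```python
# Illustrative only: list servers discovered by AWS Server Migration Service
# and the state of any replication jobs already configured.
import boto3

sms = boto3.client("sms")

servers = sms.get_servers()
print(f"Servers in catalog: {len(servers.get('serverList', []))}")

jobs = sms.get_replication_jobs()
for job in jobs.get("replicationJobList", []):
    print(job["replicationJobId"], job.get("state"))
```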

There is a long list of other factors you'll need to consider before kicking off your migration, some more important than others, but each could have an impact on your final architecture and how you manage that environment in the long run. These will be discussed in more detail as this series unfolds.

BBD has helped various clients reach their security and compliance goals in preparation for their coming migration to the cloud, and understands the importance of tool selection to aid in an efficient migration to the cloud.

If you've made the decision and are looking for a cloud enablement partner to guide you through devising a relevant strategy, implementing the migration and optimising as your business grows, reach out to BBD at http://www.bbdsoftware.com.

Go here to see the original:
So you want to migrate to the cloud? - ITWeb

AMD 3rd Gen Epyc CPUs Put Intel Xeon SPs On Ice In The Datacenter – The Next Platform

SPONSORED Sometimes, bad things turn into excellent opportunities that can utterly transform markets. Many years hence, when someone writes the history of the datacenter compute business, they will judge AMD tapping Taiwan Semiconductor Manufacturing Corp to etch the cores in its second and third generation Epyc server processors to be extremely fortuitous. This allowed AMD to leapfrog Intel a generation ago and set itself up for a sustainable process lead while AMD had a parallel architectural advantage over its server CPU arch-rival.

We have not seen Intel knocked down so hard in the datacenter since AMD's 64-bit Opterons, with their integrated memory controllers, multicore architecture, HyperTransport interconnect, and other advanced features, made the 32-bit Xeon server chips look ridiculous in the early 2000s. It wasn't until Intel cloned many of the elements of the Opteron designs with its Nehalem Xeon 5500 processors in 2009 that it could field a server CPU that was technically and economically competitive with the Opteron alternatives.

History is repeating itself with the third generation Epyc 7003 series processors (formerly codenamed Milan), which came out in March of this year. (Our initial analysis of the SKU stacks is at this link and our deep dive into the Epyc 7003 architecture is here.) While Intel's Ice Lake Xeon SP server processors, also the third generation of its most recent family, are a big improvement over their predecessors, they do not even come close to matching the Epyc 7003 series processors when it comes to single-core or total socket throughput performance. And when it comes to price/performance and compatibility with existing server designs, AMD is winning this matchup against Intel in datacenter compute hands down. As we have said, Intel has improved considerably with its Ice Lake chips compared to the Skylake and Cascade Lake predecessors in the Xeon SP line. But AMD is cleaning its clocks. And caches. And vector units. And so on.

And now, we are finally getting the data to do competitive analysis pitting the AMD 3rd Gen Epyc chips against the Intel Ice Lake chips, and given how AMD is running a clean sweep, it is no surprise that Intel has brought back Pat Gelsinger to try to reinvigorate the Xeon SP lineup and save the server CPU business. AMD has broken through the 10 percent server shipment share after seven years of research, development, and product rollouts and seems poised to double that share and maybe more because the company will have a sustainable architecture and manufacturing process advantage. (Our best guess is that about a year from now, AMD will have 25 percent server shipment share, with some big error bars around that number to take into account macroeconomic factors and Intel's pricing and bundling reactions.)

"We are very excited about the momentum we are seeing across our customer base," Ram Peddibhotla, corporate vice president of product management for datacenter products at AMD, tells The Next Platform. "And if you look at the kind of total cost of ownership savings possible from 3rd Gen Epyc versus Ice Lake, you can plough that into your core business and you are able to bring efficiencies to the business across the board. I have said this before, and I will say it again. The risk actually lies in not adopting Epyc. And if you don't adopt Epyc, I think you are actually at a severe competitive disadvantage."

It is hard to argue that point at the server CPU level, particularly after you look at the performance comparisons we are going to do. And then let's add in the fact that AMD is working with technology partners to bring Epyc chips to bear on particular software stacks and solutions that are relevant to the enterprise. This will significantly reduce friction in deals and drive enterprise adoption like we have already seen with HPC centers, public cloud builders, and hyperscalers.

First, let's look at some relevant performance matchups, and we will start with the SPEC CPU benchmarks that gauge integer and floating point performance. These are table stakes in the server CPU business; if you can't deliver decent SPEC numbers, you won't get hyperscalers, cloud builders, and OEMs to answer the phone when you call. If you look at the SPECspeed2017 and SPECrate2017 tests (which come in one-socket and two-socket versions, with both integer and floating point performance ratings), AMD's Epyc processors have the number one ranking in all 16 possible categories. (SPECspeed2017 measures the time for workloads to complete while SPECrate2017 measures throughput per unit of time, so they are slightly different in this regard.) And on power efficiency tests, AMD has swept the SPECpower2008 benchmarks and has the top ranking on all but one of the SPEC CPU 2017 energy efficiency benchmarks. This is unprecedented, but could be the new normal for the next several generations of X86 server CPUs and maybe even across all classes of server CPUs. In many cases, the second generation Epyc 7002 series processors can beat Intel's third generation Ice Lake Xeon SPs, and then the Epyc 7003s open an even larger gap. And here is the stunning thing that must have Intel fuming: AMD has now delivered better per-core performance as well as better throughput up and down the SKU stack.

Here is how the top-bin parts compare, with Ice Lake Xeon SPs on the left, Epyc 7002s in the center, and Epyc 7003s on the right, on the SPECrate2017 integer, floating point, and Java benchmarks for two-socket systems:

The gap between Ice Lake and Epyc 7002 is bad enough for these top-bin systems, but the gap between Ice Lake and Epyc 7003 is large. On the integer test, the advantage to AMD is 47.2 percent, on the floating point test it is 36.5 percent, and on the SPECjbb2015 test it is 49.8 percent.

So how does it look at a constant number of cores, say perhaps 32 cores? Still not good for Intel. Here are the SPECrate2017 tests for 32-core Epyc 7002, 32-core Ice Lake, and 32-core Epyc 7003 parts:

The Ice Lake core has a tiny bit more oomph than the Epyc 7002 core it was intended to compete against, but Intel didn't make it into the field in time to do that, and the Epyc 7003 core, based on the Zen 3 design, has quite a bit more performance. Therefore, a 32-core Epyc 7003 chip can do 34.2 percent more integer work and 30.6 percent more floating point work than the 32-core Ice Lake chip.

Even if you scale down the Intel Ice Lake and AMD Epyc 7003 chips, the situation is still not great for Intel, as you can see here in this comparison showing integer performance on the SPECrate2017 test:

The message here is that if Intel wants to maintain shipments of its Xeon SPs, it will have to cut CPU prices and bundle in motherboards, NICs, FPGAs, and anything else it can in the deal to try to keep the revenue stream flowing. And even if it does this, Intel's Data Center Group margins will take a big hit, as they did in the first quarter of 2021. This is just the beginning of a potential price war and sustained technology campaign in the X86 server CPU market.

Here is a chart that shows how the Epyc 7002 and Epyc 7003 SKU stacks compare against the most common SKUs in the Intel Ice Lake Xeon SP stack, which makes it easier to see the competitive positioning.

"AMD purposely designed the Epyc server platform to have longevity while steadily increasing the value delivered in each generation of the Epyc family of processors," explains Peddibhotla. "Many servers in the market will continue to support the second generation Epyc and the new third generation Epyc to co-exist together as the latest generation enhances performance per core even further and adds other core-count options to meet varying workload needs. The entry market with 8 to 16 cores will deliver great value with Epyc 7002 series with TCO-optimized volume. Per-core or high-density performance needs can be filled with the Epyc 7003. And the second generation Epyc is a great price/performance value at all available core counts."

Intel, by contrast, is making customers move from the Purley platform for Skylake and Cascade Lake Xeon SPs to the Whitley platform for Ice Lake and then the Eagle Stream platform for the future Sapphire Rapids fourth generation Xeon SPs.

Although raw performance on the SPEC tests is an important thing that all enterprises consider, what they want to know is how much more oomph they can get if they are upgrading servers that are several generations back, perhaps four years old. There is always a consolidation factor, but this one is playing out in favor of AMD:

As is usually the case, it will take far fewer servers to meet the same capacity or much more capacity will be available in the same number of physical servers. In this case, for just under 4,000 aggregate SPECrate2017 integer units of performance, you can replace 20 two-socket Broadwell Xeon E5 v4 servers with five Epyc 7763 servers to get the same performance, or install 20 servers and get 4X the performance. Assuming that the Intel Ice Lake and AMD Epyc 7003 servers shown above cost about the same, for the same number of servers, you will get around 50 percent more performance, which means you can cut about a third of the server count to get the same performance and spend a third less money, too.
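
The arithmetic behind that consolidation claim is straightforward; the small sketch below simply reproduces it from the approximate figures quoted above:

```python
# Rough consolidation math using the numbers cited in the text.
target_capacity = 4000                         # aggregate SPECrate2017_int units needed
broadwell_per_server = target_capacity / 20    # ~200 units per two-socket Broadwell box
epyc_7763_per_server = target_capacity / 5     # ~800 units per two-socket Epyc 7763 box

consolidation_ratio = epyc_7763_per_server / broadwell_per_server
print(consolidation_ratio)  # ~4.0: one new server replaces roughly four old ones
```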

You can dice and slice this a lot of different ways, of course.

Here is a deep TCO analysis over three years that shows how this might play out for 10,000 SPECrate2017 integer units of performance, showing the cost of acquiring the machines, administering them, and paying for datacenter space, power, and cooling. It bears out what we just said above:

AMD has fought a long time to get back to this position. And datacenters the world over should be grateful. We really needed some competition here.

Sponsored by AMD

Go here to see the original:
AMD 3rd Gen Epyc CPUs Put Intel Xeon SPs On Ice In The Datacenter - The Next Platform

Cloud security in 2021: A business guide to essential tools and best practices – ZDNet

Cloud computing services have become a vital tool for most businesses. It's a trend that has accelerated recently, with cloud-based services such as Zoom, Microsoft 365, Google Workspace and many others becoming the collaboration and productivity tools of choice for teams working remotely.

While cloud quickly became an essential tool, allowing businesses and employees to continue operating from home, embracing the cloud can also bring additional cybersecurity risks, something that is now increasingly clear.

Previously, most people connecting to the corporate network would be doing so from their place of work, and thus accessing their accounts, files and company servers from inside the four walls of the office building, protected by enterprise-grade firewalls and other security tools. The expanded use of cloud applications meant that suddenly this wasn't the case, with users able to access corporate applications, documents and services from anywhere. That has brought the need for new security tools.

Cloud computing security threats

While positive for remote workers, because it allows them to continue with some semblance of normality, working remotely also presents an opportunity for cyber criminals, who have quickly taken advantage of the switch to remote working to attempt to break into the networks of organisations that have poorly configured cloud security.

Corporate VPNs and cloud-based application suites have become prime targets for hackers. If not properly secured, all of these can provide cyber criminals with a simple means of accessing corporate networks. All attackers need to do is get hold of a username and password, by stealing them via a phishing email or using brute force attacks to breach simple passwords, and they're in.

Because the intruder is using the legitimate login credentials of someone who is already working remotely, it's harder to detect unauthorised access, especially considering how the shift to remote working has resulted in some people working different hours to what might be considered core business hours.

Attacks against cloud applications can be extremely damaging for victims as cyber criminals could be on the network for weeks or months. Sometimes they steal large amounts of sensitive corporate information; sometimes they might use cloud services as an initial entry point to lay the foundations for a ransomware attack that can lead to them both stealing data and deploying ransomware. That's why it's important for businesses using cloud applications to have the correct tools and practices in place to make sure that users can safely use cloud services no matter where they're working from, while also being able to use them efficiently.

Use multi-factor authentication controls on user accounts

One obvious preventative step is to put strong security controls around how users log in to the cloud services in the first place. Whether that's a virtual private network (VPN), remote desktop protocol (RDP) service or an office application suite, staff should need more than their username and password to use the services.

"One of the things that's most important about cloud is identity is king. Identity becomes almost your proxy to absolutely everything. All of a sudden, the identity and its role and how you assign that has all of the power," says Christian Arndt, cybersecurity director at PwC.

Whether it's software-based, requiring a user to tap an alert on their smartphone, or hardware-based, requiring the user to use a secure USB key on their computer, multi-factor authentication (MFA) provides an effective line of defence against unauthorised attempts at accessing accounts. According to Microsoft, MFA protects against 99.9% of fraudulent sign-in attempts.

Not only does it block unauthorised users from automatically gaining entry to accounts, the notification sent out by the service, which asks the user if they attempted to log in, can act as an alert that someone is trying to gain access to the account. This can be used to warn the company that they could be the target of malicious hackers.
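
As an illustration of the software-based variant, the sketch below uses the third-party pyotp library to show the time-based one-time password (TOTP) flow that many authenticator apps implement. The secret handling and login flow are simplified placeholders, not a production design:

```python
# Minimal TOTP sketch (pip install pyotp).
import pyotp

# Enrolment: the service generates a shared secret and shows it to the user,
# usually as a QR code scanned by an authenticator app.
secret = pyotp.random_base32()
totp = pyotp.TOTP(secret)

# Login: the user submits the six-digit code currently shown by their app,
# and the service verifies it against the shared secret and the current time.
submitted_code = totp.now()           # stand-in for the code the user types in
print(totp.verify(submitted_code))    # True if the code is valid right now
```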

Use encryption

The ability to easily store or transfer data is one of the key benefits of using cloud applications, but for organisations that want to ensure the security of their data, the process shouldn't involve simply uploading data to the cloud and forgetting about it. There's an extra step that businesses can take to protect any data uploaded to cloud services: encryption.

Just as when it's stored on regular PCs and servers, encrypting the data renders it unreadable, concealing it from unauthorised or malicious users. Some cloud providers automatically provide this service, employing end-to-end protection of data to and from the cloud, as well as inside it, preventing it from being manipulated or stolen.
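
As a simple illustration of encrypting data on the client side before it ever reaches a cloud service, the sketch below uses the Fernet scheme from the Python cryptography package. Key management is deliberately simplified; in practice the key would live in a key management service or HSM, not alongside the data:

```python
# Client-side encryption sketch (pip install cryptography).
from cryptography.fernet import Fernet

key = Fernet.generate_key()   # store this securely, e.g. in a KMS, never with the data
fernet = Fernet(key)

plaintext = b"quarterly payroll figures"
ciphertext = fernet.encrypt(plaintext)   # this is what gets uploaded to the cloud

# Later, after downloading the object back from cloud storage:
assert fernet.decrypt(ciphertext) == plaintext
```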

Apply security patches as swiftly as possible

Like other applications, cloud applications can receive software updates as vendors develop and apply fixes to make their products work better. These updates can also contain patches for security vulnerabilities, as just because an application is hosted by a cloud provider, it doesn't make it invulnerable to security vulnerabilities and cyberattacks.

Critical security patches for VPN and RDP applications have been released by vendors in order to fix security vulnerabilities that put organisations at risk of cyberattacks. If these aren't applied quickly enough, there's the potential for cyber criminals to abuse these services as an entry point to the network that can be exploited for further cyberattacks.

Use tools to know what's on your network

Companies are using more and more cloud services and keeping track of every cloud app or cloud server ever spun up is hard work. But there are many, many instances of corporate data left exposed by poor use of cloud security. A cloud service can be left open and exposed without an organisation even knowing about it. Exposed public cloud storage resources can be discovered by attackers and that can put the whole organisation at risk.

In these circumstances, it could be useful to employ cloud security posture management (CSPM) tools. These can help organisations identify and remediate potential security issues around misconfiguration and compliance in the cloud, providing a means of reducing the attack surface available to hackers to examine, and helping to keep the cloud infrastructure secure against potential attacks and data breaches.

"Cloud security posture management is a technology that evaluates configuration drift in a changing environment, and will alert you if things are somehow out of sync with what your baseline is and that may indicate that there's something in the system that means more can be exploited for compromise purposes," says Merritt Maxim, VP and research director at Forrester.

CSPM is an automated procedure and the use of automated management tools can help security teams stay on top of alerts and developments. Cloud infrastructure can be vast and having to manually comb through the services to find errors and abnormalities would be too much for a human, especially if there are dozens of different cloud services on the network. Automating those processes can, therefore, help keep the cloud environment secure.
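
To make the idea concrete, here is a tiny, illustrative CSPM-style check written with boto3 that flags S3 buckets whose public access blocks are missing or partially disabled. Real CSPM products cover far more services, rules and clouds than this sketch:

```python
# Illustrative mini-check: find S3 buckets without full public access blocks.
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")

for bucket in s3.list_buckets()["Buckets"]:
    name = bucket["Name"]
    try:
        config = s3.get_public_access_block(Bucket=name)["PublicAccessBlockConfiguration"]
        if not all(config.values()):
            print(f"WARNING: {name} has public access blocks partially disabled")
    except ClientError:
        # No public access block configured at all: treat it as a finding.
        print(f"WARNING: {name} has no public access block configuration")
```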

"You don't have enough people to manage 100 different tools in the environment that changes everyday, so I would say try to consolidate on platforms that solve a big problem and apply automation," says TJ Gonen, head of cloud security at Check Point Software, a cybersecurity company.

Ensure the separation of administrator and user accounts

Cloud services can be complex and some members of the IT team will have highly privileged access to the service to help manage the cloud. A compromise of a high-level administrator account could give an attacker extensive control over the network and the ability to perform any action the administrator privileges allow, which could be extremely damaging for the company using cloud services.

It's, therefore, imperative that administrator accounts are secured with tools such as multi-factor authentication and that admin-level privileges are only provided to employees who need them to do their jobs. According to the NCSC, admin-level devices should not be able to directly browse the web or read emails, as these could put the account at risk of being compromised.

It's also important to ensure that regular users who don't need administrative privileges don't have them, because in the event of account compromise an attacker could quickly exploit this access to gain control of cloud services.

Use backups as a contingency plan

But while cloud services can and have provided organisations around the world with benefits, it's important not to rely on cloud for security entirely. While tools like two-factor authentication and automated alerts can help secure networks, no network is impossible to breach and that's especially true if extra security measures haven't been applied.

That's why a good cloud security strategy should also involve storing backups of data and keeping them offline, so in the event of an incident that makes cloud services unavailable, there's something there for the company to work with.

Use cloud applications that are simple for your employees to use

There's something else that organisations can do to ensure the security of the cloud, and that's providing their employees with the correct tools in the first place. Cloud application suites can make collaboration easier for everyone, but they also need to be accessible and intuitive to use, or organisations run the risk of employees not wanting to use them.

A business could set up the most secure enterprise cloud suite possible, but if it's too difficult to use, employees, frustrated with not being able to do their jobs, could turn to public cloud tools instead.

This issue could lead to corporate data being stored in personal accounts, creating greater risk of theft, especially if a user doesn't have two-factor authentication or other controls in place to protect their personal account.

Information being stolen from a personal account could potentially lead to an extensive data breach or wider compromise of the organisation as a whole.

Therefore, for a business to ensure it has a secure cloud security strategy, not only should it be using tools like multi-factor authentication, encryption and offline backups to protect data as much as possible, the business must also make sure that all these tools are simple to use to encourage employees to use them correctly and follow best practices for cloud security.

More here:
Cloud security in 2021: A business guide to essential tools and best practices - ZDNet

Supporting staff with cloud-based asset tracking – Modern Healthcare

For health systems and hospitals, Cloud-Based RTLS provides a cost-effective, fast-to-deploy solution. A key aspect for health system management is the low investment cost for installing a cloud-based platform. Because the Plug-In BLE Sensors only need a wall outlet to operate, there is no need for design or construction projects. Facilities avoid noisy disruptions, and installation is as quick as plugging in sensors and activating with a smart phone. Full deployment is complete in a matter of days.

This simplified set up is not a burden on IT departments, since the cloud-based platform does not require dedicated servers or the footprint to house them. Further, cloud-based tracking is easily scalable, as it can be used across facilities and throughout multiple buildings. Regulatory compliance controls and encryption help ensure data confidentiality.

From the interviews we conduct with health system and hospital personnel, we understand the frustration they go through. We heard their stories of unsustainable waste. In one year, one hospital lost $5 million in IV pumps and telemetry packs alone.3 We also heard their concerns about how this waste and over-buying impacts their operations and budgets. This can include over-purchasing assets, repeat purchasing, labor costs to maintain extra equipment, and extra costs for software agreements, consumables, installations and user training. This waste is a constant issue that burdens day-to-day operations. Midmark RTLS works to support hospitals and health systems because budgets should be reserved for increasing the quality of care, not repeat-buying items that were already in the building. Better care starts when all staff are well equipped, prepared and focusing on the patient, not searching for lost, missing or stolen medical devices and equipment.

For more information, contact Midmark to see other ways that cloud-based asset tracking supports clinical staff and biomedical teams.

Continue reading here:
Supporting staff with cloud-based asset tracking - Modern Healthcare

Data center accelerator market was valued at USD 13.7 billion in 2021 and is anticipated to – GlobeNewswire

New York, July 22, 2021 (GLOBE NEWSWIRE) -- Reportlinker.com announces the release of the report "Data Center Accelerator Market by Processor Type, Type, Application And Geography - Global Forecast to 2026" - https://www.reportlinker.com/p05494008/?utm_source=GNW

A growing number of tech giants and startups have begun offering machine learning as a cloud service due to the burgeoning demand for AI-based computation.

Most companies and startups do not develop their own specialized hardware or software to apply deep learning to their specific business needs. Cloud-based solutions are ideal for small and midsized businesses that find on-premises solutions costlier.

Thus, the increasing adoption of cloud-based technology is necessitating the need for deep learning.

Artificial intelligence to drive the growth of cloud data centers

The cloud data center segment is dominating the data center accelerator market owing to the rise in demand for AI-based solutions. The growth of AI is leading to changes in cloud server configuration.

The cloud computing market has witnessed significant growth owing to the surge in the volume of data being transferred to the cloud from consumers. The surge in AI-centric data has led to the growth of co-processors (accelerators) embedded in the servers.

The accelerators optimize data processing at the servers by reducing the latency. According to Intel, currently, ~7% of the servers are used in deep learning activities. There are ~12 million server units around the globe as of 2021.

In the AI-capable servers for deep learning training, the typical CPU-to-GPU attach rate is 1:4; in some cases, it is around 1:8. Deep learning is expected to account for the majority of cloud workload during the forecast period, which, in turn, is likely to propel the demand for accelerators for cloud servers.

More than one-third of servers to be shipped in 2026 are likely to run either deep learning training algorithms or deep learning inference algorithms. Accelerators are likely to be deployed in the cloud servers for both public and enterprise cloud inference applications.

However, training applications are expected to account for the majority of the server applications by the end of 2026.

Asia Pacific is the fastest-growing region in the data center accelerator market

The data center accelerator market in APAC is anticipated to register the highest CAGR of 42.7% between 2021 and 2026. Organizations in APAC show a stronger preference for deploying a hybrid cloud. They are adopting a mix of on-premises, third-party, co-location, private cloud, hosted cloud, and public cloud, depending on the nature of workloads, legacy decisions made by the team, budgets, and technology maturity within the organization.

The two major players in the data center accelerator market are NVIDIA Corporation (US) and Intel Corporation (US). Intel mainly focuses on its Xeon Phi processors and FPGA co-processors; however, NVIDIA has nearly reached a monopoly in the data center accelerator market with its GPU accelerators.

Apart from NVIDIA and Intel, several start-ups are working on ASIC and FPGA accelerator architectures.

The breakup of primaries conducted during the study is depicted below:
By Company Type: Tier 1 - 45%, Tier 2 - 32%, and Tier 3 - 23%
By Designation: C-Level Executives - 30%, Directors - 45%, and Others - 25%
By Region: North America - 26%, Europe - 40%, APAC - 22%, and RoW - 12%

Research Coverage

The report segments the data center accelerator market and forecasts its size, by volume and value, based on Region (North America, Europe, Asia Pacific, and RoW), Processor Type (CPU, GPU, FPGA, ASIC), Type (HPC Accelerator, Cloud Accelerator), and Application (Deep Learning Training, Public Cloud Inference, Enterprise Inference). The report also provides a comprehensive review of market drivers, restraints, opportunities, and challenges in the data center accelerator market.

The report also covers qualitative aspects in addition to the quantitative aspects of these markets.

Key Benefits of Buying This Report

This report includes market statistics pertaining to the processor, type, application, and region. An in-depth value chain analysis has been done to provide deep insight into the data center accelerator market. Major market drivers, restraints, challenges, and opportunities have been detailed in this report. Illustrative segmentation, analyses, and forecasts for the market based on processor, type, application, and region have been conducted to provide an overall view of the data center accelerator market. The report includes an in-depth analysis and ranking of key players.

Read the full report: https://www.reportlinker.com/p05494008/?utm_source=GNW

About Reportlinker

ReportLinker is an award-winning market research solution. Reportlinker finds and organizes the latest industry data so you get all the market research you need - instantly, in one place.

__________________________

Read the original:
Data center accelerator market was valued at USD 13.7 billion in 2021 and is anticipated to - GlobeNewswire

The Benefits of Cloud Object Storage for Higher Education – EdTech Magazine: Focus on Higher Education

The pandemic has accelerated the push to go off-premises. As InfoWorld notes, the growth of cloud is unlikely to slow over the next few years, especially as universities and colleges prioritize quality hybrid learning experiences for students.

But storage remains a challenge for many higher education institutions that continue to support blended or online learning. While traditional storage options such as storage area networking (SAN) and network-attached storage (NAS) still make sense for structured data storage, these options are not ideal for unstructured assets. With unstructured data now accounting for 80 percent of organizational information, postsecondary schools need a new way to handle the explosion of data.

The most familiar file storage systems are hierarchical. Files are stored in layered directories that are logically segmented and rigidly defined. Block storage systems emerged as cloud offerings gained ground. These solutions store data as evenly sized blocks of information, each with its own unique identifier.

According to Jon Toor, chief marketing officer of Cloudian, cloud object storage takes a different approach. Cloud object storage is a flat-file system, he says. It has an addressing structure that lets you directly address a lot of data. And the amount of data we can address is essentially limitless, up to multiple exabytes. In practice, object-based storage sees data assets defined as unique objects that aren't uniform in size and include metadata descriptors. As a result, these systems are ideal for unstructured data that doesn't conform to rigid file formats.

Toor offers a postsecondary example: Say you're storing a genome in a research institution. With cloud object storage, you can do analysis and take out key facts and store them in the metadata. Now, you have a searchable database with metadata that you can search, just like Google.
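
As a rough sketch of the workflow Toor describes, the snippet below attaches descriptive metadata to an object and reads it back without downloading the object itself. It assumes the boto3 library and uses a hypothetical S3-compatible endpoint, bucket, and key; Cloudian exposes the standard S3 API, so the same calls apply.

```python
import boto3

# Hypothetical S3-compatible endpoint and credentials; Cloudian speaks the
# standard S3 API, so boto3 can talk to it the same way it talks to AWS S3.
s3 = boto3.client(
    "s3",
    endpoint_url="https://object-store.example.edu",   # hypothetical
    aws_access_key_id="EXAMPLE_KEY",
    aws_secret_access_key="EXAMPLE_SECRET",
)

# Store a large, unstructured asset (e.g. a genome file) with descriptive
# metadata attached directly to the object.
s3.put_object(
    Bucket="research-data",                            # hypothetical bucket
    Key="genomes/sample-0001.fastq",
    Body=open("sample-0001.fastq", "rb"),
    Metadata={"organism": "homo-sapiens", "coverage": "30x", "lab": "genomics"},
)

# Later, read back just the metadata without pulling down the object itself.
head = s3.head_object(Bucket="research-data", Key="genomes/sample-0001.fastq")
print(head["Metadata"])
```

In practice that metadata still needs to be indexed by a search layer before it can be queried like Google, but the point of the object model is that the descriptors travel with the data instead of living in a rigid directory tree.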

Solutions such as SAN and NAS offer the benefit of onsite data storage, which is best suited for hierarchical file systems. Typical cloud-based systems, meanwhile, shift the heavy lifting away from data centers and into public or hybrid cloud frameworks.

But cloud object storage offers the best of both worlds. The easiest way to think of it: It's cloud tech that resides in your own data center behind your own firewall, says Toor. It's close by and easy to use. And it's part of your own infrastructure. Thanks to their modular nature, object-based solutions can be easily expanded as storage requirements grow, offering capacity on demand with zero downtime.

Compared with other solutions, this approach offers better data security. You get the same visibility around who's accessing your data as with traditional cloud technology, says Toor. You can spot if it's being accessed improperly and set up alerts to notify you.

Toor also highlights the persistent nature of this storage solution. This is an evergreen environment, he says. We sell a complete box, and you create a cluster by tying three boxes together. You add new boxes over time. As they age, they need to be swapped out. After five years, you simply add new boxes, migrate the data transparently in the background and decommission the service with no disruption and no downtime.

MORE ON EDTECH: Here are 4 ways to manage multicloud environments in higher ed.

Toor points to three substantive benefits for postsecondary schools transitioning to cloud object storage:

Consider the University of Leicester's recent adoption of Cloudian S3-compatible object storage. Not only did the school improve accessibility by eliminating its single-point-of-failure backup system, it also reduced storage space requirements by 50 percent, saving 25 percent in storage costs.

When it comes to adopting object-based storage for postsecondary schools, Toor puts it simply: You can build it across multiple locations and can scale to any size. That's why people move to cloud object storage.

Read this article:
The Benefits of Cloud Object Storage for Higher Education - EdTech Magazine: Focus on Higher Education

Leaked memo shows Oracle's flagship cloud unit told employees to ramp up for 24/7 work on projects that insiders say have fallen behind schedule – Times…

Oracle cofounder and CTO Larry Ellison

Reuters/Robert Galbraith

Oracle Cloud Infrastructure, the database giant's flagship cloud unit and its answer to the dominant Amazon Web Services, has instructed its employees to focus on an updated set of priorities for the next several quarters, according to a memo viewed by Insider.

Other feature and development work is paused to assist in this effort, said the memo, sent last week to OCI's more than 10,000 employees.

We know this change impacts many of the teams directly and indirectly. We appreciate your ability to Expect and Embrace Change, and your ability to continue to iterate and deliver a world-class platform, the memo said, referring to the unit's leadership principles. Oracle declined to comment for this story.

The email lists those priorities in order of importance, starting with security (e.g. patching), followed by operations & support, region build for big customers (Telesis, NRI) and gov work [sic], and finally, region build for all other regions.

Two Oracle insiders say that several of the region build projects, with deadlines before the end of the year, are running behind schedule. In industry parlance, a region generally refers to the cloud servers or data centers intended to serve a particular area or country, and a region build is the process of setting those up. New regions would help expand Oracles reach and make it more appealing for large customers and government deals.

The memo says that its highest-priority region after those built for customers and the government right now is in Israel, followed by other dedicated regions including Oman, then rest of commercial. Oracle's dedicated cloud regions only serve particular areas or countries.

Region bootstrap, across regions, will need to happen on a 24/7 basis in order to hit our delivery dates. All teams will need to resource appropriately to accommodate this expectation. This means, in some cases, temporarily reallocating personnel from other projects, teams, or orgs, the memo says, further calling on teams to facilitate war rooms on a 24/7 basis to troubleshoot issues and create 24-hour-a-day on-call rotations.

Notably, however, three Oracle employees told Insider that company leadership has since reduced the schedules to 14/7 after widespread discontent that one person characterized as a backlash.

Two people close to OCI speculate the updated priorities could be, at least in part, related to the fact that Oracle wants to make itself a stronger competitor for the Pentagon's Joint Warfighter Cloud Capability contract, the successor to the now-scrapped $10 billion JEDI deal. Oracle lost out on the JEDI contract, which was awarded to Microsoft but ultimately canceled amid a lengthy legal challenge from Amazon.

The email suggests security is OCI's new top priority. The company recently reorganized the cloud security organization, which has a few hundred employees, and replaced its leader after only about a year, according to company insiders and an internal email viewed by Insider.

The changes come as OCI's workplace culture comes under scrutiny. More than a dozen current and former Oracle employees and executives recently told Insider that OCI is led by what one person described as a culture of fear, telling Insider that OCI boss Clay Magouyrk is known for trying to get results by beating down employees emotionally.

Magouyrk's leadership style was cited in a pair of lawsuits filed by former vice presidents against the company and an executive. One of the former VPs who sued Oracle died by suicide in April. An attorney for the VPs said the cases are headed for arbitration.

Do you work at Oracle? Contact reporter Ashley Stewart via encrypted messaging app Signal (+1-425-344-8242) or email ([emailprotected]).

See the rest here:
Leaked memo shows Oracle's flagship cloud unit told employees to ramp up for 24/7 work on projects that insiders say have fallen behind schedule - Times...

Insurance Applications in the Hundreds of Thousands Exposed – Lexology

Insurance technology startup BackNine exposed hundreds of thousands of insurance applications after cloud servers run by one of its web hosts were left unprotected on the internet.

TechCrunch reports that the California-based startup develops back-office software that helps larger insurance companies sell and maintain life and disability insurance policies. Chances are good that BackNine processed your personal information if you applied for insurance in the past several years.

The startup partners with some of America's largest insurance carriers. Many of the insurance applications found in the exposed bucket were for Prudential, TransAmerica, John Hancock, Lincoln Financial Group, and AIG.

In addition to this work, BackNine also provides a white-labeled web form for smaller or independent financial planners who sell insurance plans on their own websites.

BackNine Servers Hosted by Amazon

BackNine's storage servers are hosted on Amazon's cloud. One of those servers was misconfigured to permit members of the public access to the more than 711,000 files inside. This data includes completed insurance applications containing applicants' extremely sensitive personal and medical information, as well as images of individuals' signatures and other internal BackNine files.

Editors at TechCrunch reviewed some of the materials and found contact information, such as full names, addresses, and phone numbers, along with Social Security numbers, medical diagnoses, medications taken, and detailed completed questionnaires about applicants' health, past and present.

Other insurance application files included lab and test results, such as blood work and electrocardiograms, and some applications contained driver's license numbers. The exposed documents range from 2015 to as recently as this month.

Permissions Changed on Amazon Storage Bucket

Amazon calls its storage servers buckets, and they are private by default. In BackNine's case, however, someone with control of the bucket appears to have changed the permissions on its insurance applications to public. Sadly, none of the data was encrypted.
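
For context only, and not as a description of BackNine's setup, the sketch below shows how a bucket owner can use the boto3 library to reassert the private-by-default posture and switch on default server-side encryption; the bucket name is hypothetical.

```python
import boto3

s3 = boto3.client("s3")
bucket = "example-insurance-forms"   # hypothetical bucket name

# Re-assert the private-by-default posture: block any public ACLs or
# public bucket policies, present or future.
s3.put_public_access_block(
    Bucket=bucket,
    PublicAccessBlockConfiguration={
        "BlockPublicAcls": True,
        "IgnorePublicAcls": True,
        "BlockPublicPolicy": True,
        "RestrictPublicBuckets": True,
    },
)

# Encrypt new objects at rest by default (AES-256 server-side encryption).
s3.put_bucket_encryption(
    Bucket=bucket,
    ServerSideEncryptionConfiguration={
        "Rules": [
            {"ApplyServerSideEncryptionByDefault": {"SSEAlgorithm": "AES256"}}
        ]
    },
)
```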

Amazon Web Services (AWS) is a widely adopted cloud platform that offers more than 200 fully featured services from global data centers. Millions of customers, including fast-growing startups, use AWS.

Its website says that AWS aims to be the most flexible and secure cloud computing environment available today. The company has designed its core infrastructure to satisfy the security requirements of the military, global banks, and other high-sensitivity organizations. AWS says:

[T]his is backed by a deep set of cloud security tools, with 230 security, compliance, and governance services and features. AWS supports 90 security standards and compliance certifications, and all 117 AWS services that store customer data offer the ability to encrypt that data.

Vice President Alerted and Locks Down Insurance Applications Data

TechCrunch contacted BackNine vice president Reid Tattersall but received no response. However, within minutes of providing Tattersall with the name of the exposed bucket, the data was locked down. The news outlet asked Tattersall whether the startup had alerted local authorities under state data breach notification laws, and whether the company had any plans to notify the affected individuals whose data was exposed. It didn't get an answer.

Companies can face stiff financial and civil penalties for failing to disclose a cybersecurity incident such as the exposure of insurance applications. BackNine is based in California, a state with some of the most aggressive data protection laws in the country. The California Consumer Privacy Act provides for the imposition of penalties for violations: the California Attorney General's Office is authorized to seek civil penalties of $2,500 for each violation or $7,500 for each intentional violation.
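
Purely as a hypothetical illustration of scale, and assuming for the sake of arithmetic only that each of the roughly 711,000 exposed files were counted as a separate violation (a legal question this article does not settle), those per-violation figures multiply out as follows.

```python
# Hypothetical worst-case arithmetic only: treats each exposed file as one
# violation, which is an assumption for illustration, not a legal conclusion.
exposed_files = 711_000
penalty_per_violation = 2_500          # CCPA civil penalty
penalty_per_intentional = 7_500        # CCPA penalty for intentional violations

print(f"Unintentional: ${exposed_files * penalty_per_violation:,}")
print(f"Intentional:   ${exposed_files * penalty_per_intentional:,}")
# Unintentional: $1,777,500,000
# Intentional:   $5,332,500,000
```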

The CCPA applies to for-profit organizations that operate in California and satisfy one of these criteria:

These criteria could render BackNine liable for exposing client insurance applications.

See more here:
Insurance Applications in the Hundreds of Thousands Exposed - Lexology

Multiple encryption flaws uncovered in Telegram messaging protocol – The Daily Swig

Vulnerabilities highlight risks of knit-your-own crypto

UPDATED An analysis of the popular Telegram secure messaging protocol has identified four cryptographic vulnerabilities.

Although none of the flaws are particularly serious or easy to exploit, security researchers have nonetheless warned that the software falls short on some essential data security guarantees.

Computer scientists from ETH Zurich and Royal Holloway, University of London, uncovered the vulnerabilities after examining the open source code used to provide encryption services to the Telegram app. The audit excluded any attempt to attack Telegram's live systems.

The researchers found that Telegram's proprietary system falls short of the security guarantees enjoyed by other, widely deployed cryptographic protocols such as Transport Layer Security (TLS).

ETH Zurich professor Kenny Paterson commented that encryption services could be done better, more securely, and in a more trustworthy manner with a standard approach to cryptography.

Catch up with the latest encryption-related news and analysis

The most significant vulnerability among the quartet makes it possible for an attacker to manipulate the sequencing of messages coming from a client to one of the cloud servers operated by Telegram.
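
The full details are in the researchers' write-up, but the textbook defence against reordering is to bind a monotonically increasing counter into each message's authentication tag, so that an out-of-order message fails verification. The sketch below is a generic illustration of that idea, not Telegram's protocol or its fix.

```python
import hmac, hashlib, struct

KEY = b"shared-session-key (illustrative only)"

def seal(seq: int, payload: bytes) -> bytes:
    """Authenticate the payload together with its sequence number."""
    header = struct.pack(">Q", seq)                    # 8-byte big-endian counter
    tag = hmac.new(KEY, header + payload, hashlib.sha256).digest()
    return header + tag + payload

def open_checked(expected_seq: int, blob: bytes) -> bytes:
    """Reject messages whose counter or tag does not match."""
    header, tag, payload = blob[:8], blob[8:40], blob[40:]
    (seq,) = struct.unpack(">Q", header)
    expected_tag = hmac.new(KEY, header + payload, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected_tag) or seq != expected_seq:
        raise ValueError("message reordered, replayed, or tampered with")
    return payload

# A receiver expecting message 0 accepts it, but would reject message 1
# delivered first (or message 0 replayed out of order).
m0, m1 = seal(0, b"first"), seal(1, b"second")
print(open_checked(0, m0))        # b'first'
# open_checked(0, m1) would raise ValueError
```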

A second flaw made it possible for an attacker on the network to detect which of two messages was encrypted by a client or a server, an issue more of interest to cryptographers than to hostile parties, the researchers suggest.

The third security issue involves a potential manipulator-in-the-middle attack targeting the initial key negotiation between the client and the server. This attack could only succeed after billions of messages had been sent.

A fourth security weakness made it possible (at least in theory) for an attacker to recover some plaintext from encrypted messages: a timing-based side-channel attack that would require the attacker to send millions of messages and observe how long the responses take to arrive. The researchers admit the attack is impractical, while Telegram goes further and categorises it as a non-threat.
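
Timing side channels of this general class typically arise when a code path's running time depends on secret data; the classic case is a byte-by-byte comparison that bails out at the first mismatch. The sketch below contrasts that leaky pattern with a constant-time check; it illustrates the category of flaw, not Telegram's actual implementation.

```python
import hmac

def leaky_equal(expected: bytes, received: bytes) -> bool:
    # Returns as soon as a byte differs, so the response time leaks how
    # many leading bytes of the attacker's guess were correct.
    if len(expected) != len(received):
        return False
    for a, b in zip(expected, received):
        if a != b:
            return False
    return True

def constant_time_equal(expected: bytes, received: bytes) -> bool:
    # hmac.compare_digest examines every byte regardless of where the first
    # mismatch occurs, so timing reveals nothing about the secret value.
    return hmac.compare_digest(expected, received)
```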

"The researchers did not discover a way to decipher messages," a representative of Telegram told The Daily Swig.

In a statement, the firm welcomed the research:

The traits of MTProto pointed out by the group of researchers from the University of London and ETH Zurich were not critical, as they didn't allow anyone to decipher Telegram messages. That said, we welcome any research that helps make our protocol even more secure.

These particular findings helped further improve the theoretical security of the protocol: the latest versions of official Telegram apps already contain the changes that make the four observations made by the researchers no longer relevant.

The researchers notified Telegram about their research in April. Telegram has since patched all four flaws, clearing the way for researchers to go public with their findings through a detailed technical blog post.

Royal Holloway professor Martin Albrecht told The Daily Swig that the researchers offered lessons for other developers of secure messaging apps for example, industry standard TLS encryption should be a preferred design choice.

The mode of Telegram we looked at was when messages are encrypted between the client and the server only, Albrecht explained.

This is no different from running Facebook Messenger or IRC [Internet Relay Chat] over TLS. Here it makes little sense to not use TLS (or its UDP variants). It is well studied, including its implementations, it does not need special assumptions, it is less brittle than [for example] MTProto.

MTProto is the encryption scheme used by Telegram.
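
For comparison with a bespoke scheme such as MTProto, the standard library in most languages already gives a client an authenticated, encrypted channel in a handful of lines. The Python sketch below uses a hypothetical host name and is only meant to show how little custom cryptography a TLS-based design requires.

```python
import socket
import ssl

# Modern TLS defaults with certificate verification and hostname checks on.
context = ssl.create_default_context()

with socket.create_connection(("chat.example.com", 443)) as raw:          # hypothetical host
    with context.wrap_socket(raw, server_hostname="chat.example.com") as tls:
        print("Negotiated:", tls.version(), tls.cipher()[0])
        tls.sendall(b"hello over an authenticated, encrypted channel\n")
```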

READ Kaspersky Password Manager lambasted for multiple cryptographic flaws

Telegram already relies on TLS for its security for messages from the server to Android clients, but it relies on proprietary approaches elsewhere.

Whether apps are built using TLS as a foundation or not, an audit by cryptographers is highly advisable.

Albrecht commented: When we talk about secure messaging apps specifically, i.e. messages are encrypted between the parties and not just on the transport layer between client and server, they should have cryptographers on staff who formally reason about the design. In the future this should get easier with the MLS standard.

The research into Telegram was motivated by the use of the technology by participants in large-scale protests, such as those seen in Hong Kong in 2019 and 2020.

We found that protesters critically relied on Telegram to coordinate their activities, but that Telegram had not received a security check from cryptographers, according to Albrecht.

Albrecht was part of a team that researched what makes the Telegram platform attractive to high-risk users involved in mass protests, who are likely to be targeted by surveillance.

Telegram does seem to have the advantage of staying up in light of government crackdown in contrast to other social networks and seemingly not complying all that much with government requests, according to Albrecht.

YOU MAY LIKE Threema, the European rival to Signal, wins pivotal privacy battle in Swiss Court

Although mobile messaging apps such as Signal are often recommended and used by the security-savvy, features and utility are more important for mainstream users and go some way to explaining use of Telegram among protesters in Hong Kong and beyond.

It might be better to compare Telegram to Facebook or Twitter (in terms of features and appeal) than to, say, Signal, he added.

Telegram may be preferred to Facebook even if the latter is likely better, or at least stricter, when it comes to data governance, Albrecht concluded.

On the flip side, it is not clear what security policies, processes and safeguards Telegram have in place to, e.g., continuously vet their (server and client) code for software vulnerabilities, or to prevent their own staff from snooping.

This story was updated to add comment from Telegram that welcomed the work of the researchers but disputed the impact of one of the admitted vulnerabilities.

RELATED Encryption issues account for minority of flaws in encryption libraries research

Excerpt from:
Multiple encryption flaws uncovered in Telegram messaging protocol - The Daily Swig