Category Archives: Cloud Servers

To protect data and code in the age of hybrid cloud, you can always turn to Intel SGX – The Register

Sponsored Data and code are the lifeblood of digital organisations, and increasingly they are shared with others in order to achieve specific business goals. As such, data and code must be protected no matter where workloads run, be it in on-premises data centers, on remote cloud servers, or at the edge of the network.

Take medical images processed in the cloud, for example: they must remain encrypted during processing for security and privacy. Banks need to share insights into financial data without sharing the underlying confidential data itself. Other organisations may want to process data using artificial intelligence and machine learning, but keep secret the learning algorithms that turn data into useful analysis.

While encrypting data at rest or in transit is commonplace, encrypting sensitive data while it is actively in use in memory is the latest, and possibly most challenging, step on the way to a fully encrypted data lifecycle.

One new security model that is growing increasingly popular as a way of protecting data in use is confidential computing. This model uses hardware protections to isolate sensitive data.

Confidential computing changes how code and data are processed at the hardware level and changes the structure of applications. Using the confidential computing model, encrypted data can be processed in the hardware without being exposed to the rest of the system.

A crucial part of that is Intel Software Guard Extensions (Intel SGX). Introduced for client platforms in 2015 and brought to the data center in 2017, it was developed as a means of protecting the confidentiality and integrity of code. It does this by creating encrypted enclaves that help safeguard information and code whilst in use. This year, Intel submitted the SGX software development kit (SDK) to the Linux Foundation's new Confidential Computing Consortium to help secure data in applications and the cloud.

To protect data in use, applications can employ something called Trusted Execution Environments (TEEs) running inside a processor. The fundamental principle here is hardware isolation between the TEE, where only trusted code is executed on selected data, and the host device's operating environment. Within a TEE, data is safely decrypted, processed, and re-encrypted. TEEs also provide for the secure execution of authorised software, known as trusted applications or TAs, and protect the execution of authenticated code.

To keep data safe, TEEs use a secure area of memory and the processor that is isolated from the rest of a system's software stack. Only trusted TAs are allowed to run inside this environment, a restriction that is cryptographically enforced. Applications using a TEE can be divided into a trusted part (the TA) and an untrusted part (the rest of the application, which runs as normal), giving the developer fine-grained control over exactly which portions of data need advanced protection.

The goal of the Confidential Computing Consortium is to establish common, open-source standards and tools for the development of TEEs and TAs.

This is where Intel has stepped in with Intel SGX. It offers hardware-based memory encryption that isolates specific application code and data in memory. It works by allowing developers to create TEEs in hardware. This application-layer TEE can be used to help protect the confidentiality and integrity of customer data and code while it's processed in the public cloud, encrypt enterprise blockchain payloads, enable machine learning across data sources, significantly scale key management solutions, and much more.

This technology helps minimise the attack surface of applications by setting aside parts of the hardware that are private and that are reserved exclusively for the code and data. This protects against direct assaults on the executing code or the data that are stored in memory.

To achieve this, Intel SGX can put application code and data into hardened enclaves, or trusted execution modules: encrypted memory areas inside an application's address space. Code in the enclave is trusted because it cannot be altered by other apps or malware.

Intel SGX provides a group of security-related instructions built into the company's Intel Core and Xeon processors. Intel provides a software development kit as a foundation for low-level access to the feature set, with higher-level libraries that open it up to other cloud-optimized development languages.

Any number of enclaves can be created to support distributed architectures. Some or all parts of the application can be run inside an enclave.

Code and data are designed to remain encrypted even if the operating system, other applications, or the cloud stack have been compromised by hackers. This data remains safe even if an attacker has full execution control over the platform outside the enclave.

Should an enclave somehow be modified by malicious software, the CPU will detect it and won't load the application. Any attempt to access the enclave memory is denied by the processor, even attempts made by privileged users. This detection stops encrypted code and data in the enclave from being exposed.

Where might enterprise developers use Intel SGX? A couple of specific scenarios spring to mind. Key management is one, with enclaves used in the process of managing cryptographic keys and providing HSM-like functionality. Developers can enhance the privacy of analytics workloads, as Intel SGX will let you isolate the multi-party joint computation of sensitive data. Finally, there are digital wallets, with secure enclaves able to help protect financial payments and transactions. There are more areas, but this is just a sampler.

Intel SGX enables applications to be significantly more secure in today's world of distributed computing because it provides a higher level of isolation and attestation for program code, data and IP. That's going to be important for a range of applications from machine learning to media streaming, and it means stronger protection for financial data, healthcare information, and user smartphone privacy, whether it's running on-prem, in hybrid cloud, or on the periphery of the IoT world.

Sponsored by Intel


The silver lining in cloud for financial institutions – ITProPortal

A recent report from analyst research firm Aite estimates that the majority of global tier-one financial institutions have less than ten per cent of their total technology stack hosted in a public cloud environment. That is a startling statistic considering that a large institution such as Bank of America, which has been running in the cloud since 2013, has saved $2.1 billion in infrastructure costs. The financial sector has been slow to move to a cloud-based environment. Understandable concerns around data security are often cited as a key reason, but those financial institutions that do take the step towards the cloud can expect to realise a number of huge business benefits.

Cloud migration offers demonstrable business benefits for banks and financial institutions. A Chief Technology Officer at a tier-one global bank explains: "The transparency that the cloud offers around costs - getting that understood with folks who are responsible for finance and forecasting is an important piece." The CTO added that cloud-enabled IT departments are able to deliver solutions in a faster timeframe, and that they can also work on DevSecOps models, giving them far more automation around their software development, data quality and test quality.

The technology that allows systems to run in a cloud environment has been around for a long time. The bigger challenge for CTOs and CIOs is often getting the business approvals to be able to do what they want to do with cloud technology. Organisations should not underestimate the length of time it can take to implement a cloud infrastructure. Each system that's being moved into the cloud needs to be judged on its own merits; there might be legacy systems or the organisation might already have the existing infrastructure it needs and it makes sense to actually continue to use it for a period of time.

Risk and security understandably are among the biggest challenges faced by banks and financial institutions when migrating their organisation towards a cloud environment. Firms need to be assured that, in the cloud, both the institution's and its customers' data are well protected. One of the main challenges financial institutions face is actually moving data to the public cloud. Security teams and regulators have strict requirements that need to be met, such as data location and encryption. These need to be understood and addressed upfront, rather than waiting for the project to be implemented.

"Ensuring that we had the right risk and security controls that are approved and agreed on from an enterprise perspective was essential," said a CTO from a tier-one bank. With the cloud, as with any environment, it's essential to manage the associated security and risk, which should be a foundational piece of your strategy.

Cloud technology offers all financial institutions and banks the opportunity to have greater control over their workloads and, even with out-of-the-box cloud solutions, they can enjoy greater insights into those workloads and budgets associated with them. In a world of shrinking budgets for both people and resources, the cloud offers a lot of additional tooling at very competitive prices, whether that comes via a third-party or as part of the native toolset.

Other benefits of the cloud include stability and scalability as well as disaster recovery. In the cloud, stability means reliability. Financial firms can rest assured that, in the event of a problem in one location, they have business continuity: other servers in the cloud will scale and act as backup, provided the correct deployment architectures are in place. CTOs and CIOs can also add new functionality without disruption. Financial institutions' reputations rely on 24/7/365 reliability. With the cloud there is no downtime: if one site is down, another will pick up the slack, so the business can continue to function in the case of a disaster in one location.

Adopting cloud technologies is a key strategic business decision, and firms looking to start their cloud programs can engage with experts early in order to help them formulate and implement strategies. This is where managed service providers can really add value as well as help save time, money and resources over the long term. Working with external managed service providers can help accelerate migration, so that IT departments can start to realise the benefits quickly while focusing on other business-critical tasks.

So, in general, the tide is turning - senior executives are starting to champion the move to the cloud. But the reality is that it's an education exercise. There is no one single cloud infrastructure model for financial institutions - every organisation is different. The key thing is that firms adopt a cloud infrastructure that works for them and gives them scope to scale as required.

Colin Sweeney, VP Client Operations, Fenergo


Listen to the boomers – or cloud could make you go bust – ITProPortal

Much like a millennial showing their baby boomer parent how to use social media, cloud-native companies are often perceived as having the edge over their legacy rivals due to their familiarity with the cloud. This isn't without reason, as the benefits that being born in the cloud has given these companies are clear, but their reliance on the platform could also be their undoing.

This familiarity can lead to overconfidence and, in turn, uncontrollable cloud costs. With more traditional rivals now well up to speed on how to use the cloud to their advantage, cloud-natives need to avoid saying "okay boomer" and falling back purely on the cloud.

There is no doubt that leveraging the cloud has driven advantages for digital businesses. This is borne out by the litany of digital-first firms who have disrupted traditional players in their industries. However, this cutting-edge technology doesn't always support effective cost management. These young upstarts must take decisive action to maintain their edge, or they are in danger of becoming victims of their own success.

The reason many cloud-natives are facing challenges is multifaceted. With unswerving faith in the cloud, it's highly likely that this group will make continued investments in the technology without effective analysis. Paired with the "move fast and break things" mantra that many young companies have, this can lead to cloud usage and costs escalating out of control, wiping out the business value derived from cloud-based applications.

In some organisations, a cloud-first mentality can result in a lack of accountability and an overall reckless approach to managing the cloud: whoever requires it first gets it first, with zero concern paid to costs. This unintentional spending on cloud is called cloud sprawl.

All too often this situation is fostered because cloud-native players lack both visibility around resources and effective inter-department communication. This means it's impossible to link the business strategy to cloud usage. While digital players must capitalise on the benefits of the cloud, they must also understand its impact on budgets and business goals.

Strategic business targets and the balance sheet have suffered because the goal for many cloud-native businesses has been to scale quickly, unencumbered by the liabilities of their legacy competitors, rather than to be profitable. This isn't inherently a problem, but the rules of business dictate that at some point a firm must turn a profit, which can be challenging for cloud-natives when they have ploughed everything into growth via the cloud and struggle to claw spending back.

Spiralling cloud costs are a threat. Cloud-native businesses need to consider how they remove that threat: take a step back, consider the wider business's objectives and how the power of the cloud can be bridled to meet them.

In order to tackle a hubristic approach to the cloud, CIOs and CFOs of cloud-native players need to start with a holistic snapshot of cloud spending. To achieve this, they must leverage data, drive transparent inter-department communication and continually optimise their platforms to eliminate cloud sprawl. In this way they can build an informed strategy anchored in realistic spending.

Running workloads in the cloud can be expensive if not managed properly. This makes it necessary to also understand how using more cloud will impinge on networking, storage, risk management and security expenditure. Analysing the costs in this way helps businesses to decipher the total value being driven by cloud usage and then link it back to the strategic requirements of the business.

With cloud being carved up among business units, from marketing to IT, a cloud-native player can't develop a total picture of usage or cost. Departments leverage multiple clouds for various requirements, and, over time, this results in increased usage. This happens even when there is no actual demand, on a "just in case" basis.

The regular communication between IT and the other business units still isn't happening: 41 per cent of IT decision makers say that decisions on cloud are made in siloed departments, with either IT or a business unit deciding without consulting the other. Each department has different needs, and it's down to IT to collaborate with them to make informed decisions.

To manage this issue, cloud-native players should adopt a Single Source of Truth (SSOT). An SSOT involves structuring information models and associated data so every data element can be edited in one place. With this centralised system of record, all cloud data and costs can be viewed transparently and communicated to any part of the company.

Without an SSOT, cloud usage can become split between business units or devolved to different applications and software or compute power and storage. Again, this creates a situation where its almost impossible to see what is being used, paid for and what cloud capacity is required.

One of the main triggers of cloud sprawl is the assumption that the cloud is the solution to every business requirement. Not every business unit should go all in on the cloud. In some instances, on-premise is a better option, because it enables more direct control over workloads. Nearly half (41 per cent) of IT leaders say that on-premise offers more agility in workload control than the cloud.

Moreover, on-premise offers greater control over rewriting and refactoring costs. This can be crucial for guaranteeing more efficient operations, especially when benchmarked against the cost of complete cloud migration.

A hybrid approach is seen by many in the industry as striking the right balance. One survey found that a 70/30 split of cloud to on-premise was the ideal balance, enabling specific mission-critical applications to stay on-premise while most of the compute power moves into the public cloud. However, a shift towards hybrid cloud needs to be accompanied by a culture shift throughout the business which enables communication around, and understanding of, hybrid IT.

Once these considerations have been decided on, it's important for cloud-native players to continually optimise. This can be as straightforward as analysing whether an instance would be more efficient on a pay-as-you-use cloud model versus a reserved spend, or calculating the value that can be gained from migrating depreciated servers to the cloud. Optimisation supports improved decision making as well as better-managed cloud usage and expense.
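The pay-as-you-use versus reserved-spend comparison above comes down to a simple break-even calculation. The hourly rates below are made-up assumptions for illustration, not real provider prices:

```python
# Back-of-the-envelope comparison of on-demand vs reserved cloud pricing.
ON_DEMAND_RATE = 0.10   # $/hour, billed only while the instance runs
RESERVED_RATE = 0.06    # $/hour effective, billed around the clock
HOURS_PER_MONTH = 730

def monthly_cost(utilisation):
    """Monthly cost of one instance at a given utilisation (0.0 to 1.0)."""
    on_demand = ON_DEMAND_RATE * HOURS_PER_MONTH * utilisation
    reserved = RESERVED_RATE * HOURS_PER_MONTH   # paid whether used or not
    return on_demand, reserved

# Reserved capacity only pays off above the break-even utilisation:
break_even = RESERVED_RATE / ON_DEMAND_RATE      # 0.6, i.e. 60% busy
```

At these assumed rates, an instance busy less than 60 per cent of the time is cheaper on-demand; above that, the reservation wins.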

The rocketing cloud usage and costs of the "okay boomer" attitude that some cloud-natives have embodied is leading them into difficult territory, often resulting in cloud sprawl, careless investment and missed business goals. However, this situation can be remedied with a more considered approach and a realisation that the silver lining does not always lie in every cloud.

Henrik Nilsson, Vice President EMEA, Apptio


Server Microprocessor Market Projected to Grow at an Impressive CAGR of XX% Between 2017 and 2027 – Bulletin Line

According to a new market study, the Server Microprocessor Market is projected to reach a value of ~US$XX in 2019 and grow at a CAGR of ~XX% over the forecast period 2017–2027. The study considers the micro- and macro-economic factors that are likely to influence the growth prospects of the Server Microprocessor Market over the assessment period.

The market report throws light on the current trends, market drivers, growth opportunities, and restraints that are likely to influence the dynamics of the Server Microprocessor Market on a global scale. The Five Forces and SWOT analyses included in the report provide a fair idea of how the different players in the Server Microprocessor Market are adapting to the evolving market landscape.


Analytical insights enclosed in the report:

The report splits the Server Microprocessor Market into different market segments, including region, end-use, and application.

The report provides an in-depth analysis of the current trends that are expected to impact the business strategies of key market players operating in the market. Further, the report offers valuable insights related to the promotional, marketing, pricing, and sales strategies of the established companies in the Server Microprocessor Market. The market share, growth prospects, and product portfolio of each market player are evaluated in the report along with relevant tables and figures.

The study aims to address the following questions related to the Server Microprocessor Market:


Development of workload-specific server microprocessor designs by key players is a growing trend in the global server microprocessor market.

Server Microprocessor Market: Market Dynamics

Expanding cloud infrastructure, coupled with the increasing adoption of cloud-based solutions by organizations across various industries, is the prominent factor driving the growth of the global server microprocessor market. Increasing interest in hyper cloud solutions due to organizations' dynamic workloads, emerging 5G networks, and expanding internet of things (IoT) applications accelerates the growth of the market. A rising focus by top internet giants such as Facebook, Google, and Amazon on exploring a wide range of chip technologies, with the objective of enhancing their artificial intelligence capabilities, also fuels this growth. An increasing focus on reducing data centre volume, growing investment in commercializing quantum computing, and the complexity of upgrading server processors are the factors identified as restraints likely to deter the progression of the global server microprocessor market.

Server Microprocessor Market: Market Segmentation

The global server microprocessor market is segmented on the basis of number of cores, operating frequency, and by region.

On the basis of number of cores, the global server microprocessor market is segmented into

Six-core & less

Above six-core

On the basis of operating frequency, the global server microprocessor market is segmented into

1.5GHz–1.99GHz

2.0GHz–2.49GHz

2.5GHz–2.99GHz

3.0GHz and higher

Regionally, the global server microprocessor market is segmented into

In terms of revenue, the above-six-core segment is expected to dominate the global server microprocessor market, due to expanding cloud infrastructure.

Server Microprocessor Market: Regional Outlook

Among all regions, North America is expected to dominate the server microprocessor market, due to increasing enterprise cloud data volumes. In terms of revenue, Asia Pacific is identified as the fastest-growing server microprocessor market, due to the adoption of software-as-a-service (SaaS) based business models.

Server Microprocessor Market: Competition Landscape

In July 2017, Intel Corporation, a U.S.-based multinational technology company, launched Xeon Scalable, an energy-efficient server processor, with the objective of expanding its portfolio.

In June 2017, Advanced Micro Devices, Inc., a U.S.-based multinational semiconductor company, launched the EPYC 7000 series, a high-performance processor for the datacentre, with the objective of catering to the increasing demand for lower-energy, higher-efficiency server processors.

Prominent players in the global server microprocessor market include Intel Corporation, Advanced Micro Devices (AMD), Inc., Cavium, Qualcomm Technologies, Inc., Applied Micro Circuits Corporation, and Marvell.

The report covers exhaustive analysis on:

Server Microprocessor Market segments

Server Microprocessor Market dynamics

Historical Actual Market Size, 2015–2016

Server Microprocessor Market size & forecast 2017 to 2027

Ecosystem analysis

Server Microprocessor Market current trends/issues/challenges

Competition & Companies involved technology

Value Chain

Server Microprocessor Market drivers and restraints

Regional analysis for Server Microprocessor Market includes

The report is a compilation of first-hand information, qualitative and quantitative assessment by industry analysts, inputs from industry experts and industry participants across the value chain. The report provides in-depth analysis of parent market trends, macro-economic indicators and governing factors along with market attractiveness as per segments. The report also maps the qualitative impact of various market factors on market segments and geographies.

Report Highlights:

Detailed overview of parent market

Changing market dynamics in the industry

In-depth market segmentation

Historical, current and projected market size in terms of volume and value

Recent industry trends and developments

Competitive landscape

Strategies of key players and products offered

Potential and niche segments, geographical regions exhibiting promising growth

A neutral perspective on market performance

Must-have information for market players to sustain and enhance their market footprint.

NOTE All statements of fact, opinion, or analysis expressed in reports are those of the respective analysts. They do not necessarily reflect formal positions or views of Future Market Insights.


Why Opt for FMI?

About Us

Future Market Insights (FMI) is a leading market intelligence and consulting firm. We deliver syndicated research reports, custom research reports and consulting services which are personalized in nature. FMI delivers a complete packaged solution, which combines current market intelligence, statistical anecdotes, technology inputs, valuable growth insights and an aerial view of the competitive framework and future market trends.

Contact Us

Future Market Insights

616 Corporate Way, Suite 2-9018,

Valley Cottage, NY 10989,

United States

T: +1-347-918-3531

F: +1-845-579-5705

T (UK): + 44 (0) 20 7692 8790


Dr. Max Welling on Federated Learning and Bayesian Thinking – Synced

Introduced by Google in 2017, Federated Learning (FL) enables mobile phones to collaboratively learn a shared prediction model while keeping all the training data on the device, decoupling the ability to do machine learning from the need to store the data in the cloud. Two years have passed, and several new research papers have proposed novel systems to boost FL performance. This March, for example, a team of researchers from Google proposed a scalable production system for FL designed to handle increasing workload and output through the addition of resources such as compute, storage, and bandwidth.
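The server-side step of this scheme can be sketched in a few lines. Below is a toy illustration of the federated-averaging idea, not Google's production code: clients send model weights (never raw data), and the server averages them weighted by each client's local dataset size, with plain Python lists standing in for real model tensors.

```python
def federated_average(client_weights, client_sizes):
    """Combine per-device model weights into the shared global model,
    weighting each client by the amount of data it trained on."""
    total = sum(client_sizes)
    dim = len(client_weights[0])
    return [
        sum(w[i] * n for w, n in zip(client_weights, client_sizes)) / total
        for i in range(dim)
    ]

# Two phones; the first holds three times as much training data,
# so its local model dominates the shared update.
global_model = federated_average([[1.0, 0.0], [0.0, 1.0]], [30, 10])
```

In a real deployment each client would also perform several local gradient steps between rounds, and the update would typically be compressed and privatized before upload.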

Earlier this month, NeurIPS 2019 in Vancouver hosted the workshop "Federated Learning for Data Privacy and Confidentiality", where academic researchers and industry practitioners discussed recent and innovative work in FL, open problems and relevant approaches.

Professor Dr. Max Welling is the research chair in Machine Learning at the University of Amsterdam and VP Technologies at Qualcomm. Welling is known for his research in Bayesian Inference, Generative modeling, Deep Learning, Variational autoencoders, Graph Convolutional Networks.

Below are excerpts from the workshop talk Dr. Welling gave on "Ingredients for Bayesian, Privacy Preserving, Distributed Learning", in which the professor shares his views on FL, the importance of distributed learning, and the Bayesian aspects of the domain.

The question can be separated into two parts. Why do we need distributed or federated inferencing? Maybe that is easier to answer. We need it because of reliability. If you're in a self-driving car, you clearly don't want to rely on a bad connection to the cloud in order to figure out whether you should brake. Latency: if you have your virtual reality glasses on and you have just a little bit of latency, you're not going to have a very good user experience. And then there's, of course, privacy; you don't want your data to get off your device. Also compute, maybe, because it's close to where you are, and personalization, because you want models to be suited for you.

It took a little bit more thinking why distributed learning is so important, especially within a company: how are you going to sell something like that? Privacy is the biggest factor here; there are many companies and factories that simply don't want their data to go off site, they don't want to have it go to the cloud. And so you want to do your training in-house. But there's also bandwidth. You know, moving data around is actually very expensive, and there's a lot of it. So it's much better to keep the data where it is and move the computation to the data. And also, personalization plays a role.

There are many challenges when you want to do this. The data could be extremely heterogeneous, so you could have a completely different distribution on one device than you have on another device. Also, the data sizes could be very different: one device could contain 10 times more data than another device. And the compute could be heterogeneous; you could have small devices with only a little bit of compute that you can't always use because the battery is down, alongside bigger servers that you also want to have in your distribution of compute devices.

The bandwidth is limited, so you don't want to send huge amounts of even parameters. Let's say we don't move data, but we move parameters; even then, you don't want to move loads and loads of parameters over the channel. So you may want to quantize them, and here I believe Bayesian thinking is going to be very helpful. And again, the data needs to be private, so you wouldn't want to send parameters that contain a lot of information about the data.

So first of all, of course, we're going to move model parameters, we're not going to move data. We have data stored in places, and we're going to move the algorithm to that data. So basically you get your learning update, maybe privatized, and then you move it back to your central place where you're going to update it. And of course, bandwidth is another challenge that you have to solve.

We have these heterogeneous data sources, and we have great variability in the speed at which we can sync these updates. Here I think the Bayesian paradigm is going to come in handy because, for instance, if you have been running an update on a very large dataset, you can shrink your posterior over the parameters to a very small, peaked posterior, whereas on another device you might have much less data, and you might have a very wide posterior distribution for those parameters. Now, how do you combine them? You shouldn't just average them; that's silly. You should do a proper posterior update, where the one that has a small, peaked posterior gets a lot more weight than the one with a very wide posterior. Uncertainty estimates are also important in that aspect.
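The posterior update Welling describes can be sketched concretely for the Gaussian case: combining per-device posteriors by precision (inverse variance) rather than naive averaging, so the device with the peaked posterior carries more weight. This is an illustrative sketch of the general principle, not code from the talk:

```python
def combine_gaussians(means, variances):
    """Product of independent Gaussian posteriors over one parameter:
    precision-weighted mean, summed precisions."""
    precisions = [1.0 / v for v in variances]
    total = sum(precisions)
    mean = sum(m * p for m, p in zip(means, precisions)) / total
    return mean, 1.0 / total

# Device A: lots of data, tight posterior around 0.0.
# Device B: little data, wide posterior around 10.0.
mean, var = combine_gaussians([0.0, 10.0], [0.1, 10.0])
# The combined mean lands near device A's estimate, not at the naive average 5.0.
```

A plain average would report 5.0; the precision-weighted update correctly lets the data-rich device dominate.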

The other thing is that with a Bayesian update, if you have a very wide posterior distribution, then you know that parameter is not going to be very important for making predictions. And so if you're going to send that parameter over a channel, you will have to quantize it, especially to save bandwidth. The ones that are very uncertain anyway you can quantize at a very coarse level, and the ones which have a very peaked posterior need to be encoded very precisely, so you need much higher resolution for them. So there too, the Bayesian paradigm is going to be helpful.
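The idea of allocating resolution by posterior width can be sketched as follows. The bit widths and threshold below are illustrative assumptions, not values from the talk:

```python
def bits_for(posterior_std, fine=16, coarse=4, threshold=0.1):
    """Pick a per-parameter bit budget from its posterior width:
    wide (uncertain) posteriors get the coarse budget."""
    return coarse if posterior_std > threshold else fine

def quantize(value, bits, lo=-1.0, hi=1.0):
    """Uniform quantizer over [lo, hi] with 2**bits - 1 steps."""
    step = (hi - lo) / (2 ** bits - 1)
    clipped = min(max(value, lo), hi)
    return lo + round((clipped - lo) / step) * step
```

Uncertain parameters thus cost 4 bits on the channel instead of 16, with a reconstruction error the posterior says we can afford.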

In terms of privacy, there is this interesting result that if you have an uncertain parameter and you draw a sample from that posterior, then that single sample is more private than providing the whole distribution. There are results showing that you can get a certain level of differential privacy just by drawing a single sample from that posterior distribution. So effectively you're adding noise to your parameter, making it more private. Again, Bayesian thinking is synergistic with this sort of Bayesian federated learning scenario.
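Mechanically, the release step is tiny: instead of sending the posterior mean, each device sends one draw from its posterior, so the posterior's own spread acts as the masking noise. A minimal sketch for a Gaussian posterior (illustrative, not the formal differential-privacy construction):

```python
import random

def privatized_parameter(post_mean, post_std, rng):
    """Release a single posterior sample over the channel, not the mean."""
    return rng.gauss(post_mean, post_std)
```

The wider the posterior, the more noise masks the exact value, which is also exactly the case where the parameter matters least for prediction.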

We can do MCMC (Markov chain Monte Carlo) and variational-based distributed learning. And there are advantages to doing that, because it makes the updates more principled and lets you combine updates where one might be based on a lot more data than another.

Then we have the intersection of private and Bayesian: privatizing the updates of a variational Bayesian model. Many people have worked on many other of these intersections, so we have deep learning models which have been privatized, and we have quantization, which is important if you want to send your parameters over a noisy channel. And it's nice because the more you quantize, the more private things become. You can compute the level of quantization from your Bayesian posterior, so all these things are very nicely tied together.

People have looked at the relation between quantized models and Bayesian models: how can you use Bayesian estimates to quantize better? People have looked at quantized versus deep: to make your deep neural network run faster on a mobile phone, you want to quantize it. People have looked at distributed versus deep: distributed deep learning. So many of these intersections have actually been researched, but it hasn't been put together. This is what I want to call for. We can try to put these things together, and at the core of all of this is Bayesian thinking; we can use it to execute better on this program.

Journalist: Fangyu Cai | Editor: Michael Sarazen


Read this article:
Dr. Max Welling on Federated Learning and Bayesian Thinking - Synced

7 crackpot technologies that might transform IT – CIO East Africa

Innovation is the cornerstone of technology. In IT, if you're not experimenting with a steady stream of emerging technologies, you risk disruption. Moreover, you can find yourself challenged when it comes to luring top talent and keeping ahead of competitors.

But knowing which bets to place when it comes to adopting emerging technologies can seem impossible. After all, most fizzle out, and even those that do prove worthwhile often fall a little short of their hyped potential. Plus, much of what was most recently considered cutting-edge, such as artificial intelligence and machine learning, is already finding its way into production systems. You have to look far ahead sometimes to anticipate the next wave coming. And the farther out you look, the riskier the bets become.

Still, sometimes a great leap forward is worth considering. In that light, here are seven next-horizon ideas emerging along the fringe that might prove to be crackpot notions or savvy plays for business value. It all depends on your perspective. William Gibson used to say that the future is already here, it's just not evenly distributed yet. These ideas may be too insane for your team to try, or they may be just the right thing for moving forward.

Of all the out-there technologies, nothing gets more press than quantum computers and nothing is spookier. The work is done by a mixture of physicists and computer scientists fiddling with strange devices at super-cold temperatures. If it requires liquid nitrogen and lab coats, well, it's got to be innovation.

The potential is huge, at least in theory. The machines can work through bazillions of combinations in an instant, delivering exactly the right answer to a mathematical version of Tetris. It would take millions of years of cloud computing time to find the same combination.

Cynics, though, point out that 99 percent of the work that we need to do can be accomplished by standard databases with good indices. There are few real needs to look for strange combinations, and if there are, we can often find perfectly acceptable approximations in a reasonable amount of time.

The cynics, though, are still looking at life through old glasses. We've only tackled the problems that we can solve with old tools. If you've got something that your programmers say is impossible, perhaps trying out IBM's Q Experience quantum cloud service may be just the right move. Microsoft has also launched Azure Quantum for experimentation. AWS is following suit with Braket as well.

Potential first adopters: Domains where the answer lies in the search for an exponentially growing combination of hundreds of different options.

Chance of happening in the next five years: Low. Google and IBM are warring with press releases. Your team will spend many millions just to get to the press release stage.

Many of the headlines continue to focus on the dramatic rise and fall in value of bitcoin, but in the background developers have created dozens of different approaches to creating blockchains for immortalizing complex transactions and digital contracts. Folding this functionality into your data preservation hierarchy can bring much-needed assurance and certainty to the process.

The biggest challenge may be making decisions about the various philosophical approaches. Do you want to rely on proof of work or some looser consensus that evolves from a trusted circle? Do you want to fret over elaborate Turing-complete digital contracts or just record transactions in a shared, trustworthy ledger? Sometimes a simple API that offers timely updates is enough to keep partners synchronized. A few digital signatures that guarantee database transactions may just be enough. There are many options.
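As a sketch of the lightweight end of that spectrum, a shared, trustworthy ledger rather than Turing-complete contracts, here is a minimal hash-chained ledger. The field names are invented, and a real deployment would add digital signatures and some consensus rule on top:

```python
import hashlib, json

# Sketch: each entry embeds the hash of its predecessor, so tampering with
# any record breaks every later link in the chain. Illustrative only.
def append(ledger, record):
    prev = ledger[-1]["hash"] if ledger else "0" * 64
    body = json.dumps({"prev": prev, "record": record}, sort_keys=True)
    ledger.append({"prev": prev, "record": record,
                   "hash": hashlib.sha256(body.encode()).hexdigest()})

def verify(ledger):
    prev = "0" * 64
    for entry in ledger:
        body = json.dumps({"prev": entry["prev"], "record": entry["record"]},
                          sort_keys=True)
        if entry["prev"] != prev or \
           hashlib.sha256(body.encode()).hexdigest() != entry["hash"]:
            return False
        prev = entry["hash"]
    return True
```

Partners holding copies of the same chain can each run `verify` to detect any retroactive edit, which is often all the "blockchain" a synchronized business process actually needs.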

Potential first adopters: Industries with tight, synchronized operations between businesses that don't want to trust each other but must. These frenemies can use a shared blockchain database to eliminate some of the disputes before they happen.

Potential for success in five years: High. There are dozens of active prototypes already running and early adopters can dive in.

For the past few decades, the internet has been the answer to most communications problems. Just hand the bits to the internet and they'll get there. It's a good solution that works most of the time, but sometimes it can be fragile and, when cellular networks are involved, fairly expensive.

Some hackers have been moving off the grid by creating their own ad hoc networks using the radio electronics that are already in most laptops and phones. The Bluetooth code will link up with other devices nearby and move data without asking "mother, may I" of some central network.

Enthusiasts dream of creating elaborate local mesh networks built out of nodes that pass along packets of bits until they reach the right corner of the network. Ham radio hobbyists have been doing it for years.
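The packet-passing idea can be sketched as a shortest-hop search over an ad hoc topology; the node names and links below are hypothetical:

```python
from collections import deque

# Sketch: breadth-first search over a mesh adjacency map yields the hop
# sequence a packet would take from source to destination.
def route(topology, src, dst):
    queue, seen = deque([[src]]), {src}
    while queue:
        path = queue.popleft()
        if path[-1] == dst:
            return path
        for peer in topology.get(path[-1], ()):
            if peer not in seen:
                seen.add(peer)
                queue.append(path + [peer])
    return None  # destination unreachable

# Four phones in a line: A can reach D only by relaying through B and C.
mesh = {"A": ["B"], "B": ["A", "C"], "C": ["B", "D"], "D": ["C"]}
```

Real mesh protocols add route discovery, link-quality metrics, and retransmission, but the relay-until-delivered core is this simple.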

Potential early adopters: Highly localized applications that group people near each other. Music festivals, conferences, and sporting events are just some of the obvious choices.

Potential for success in five years: High. There are several good projects and many open source experiments already running.

If the buzzwords "green" and "artificial intelligence" are good on their own, why not join the two and double the fun? The reality is a bit simpler than doubling the hype might suggest. AI algorithms require computational power, and at some point computational power is proportional to electrical power. The ratio keeps getting better, but AIs can be expensive to run. And generating that electrical power produces tons of carbon dioxide.

There are two strategies for solving this. One is to buy power from renewable energy sources, a solution that works in some parts of the world with easy access to hydro-electric dams, solar farms or wind turbines.

The other approach is to just use less electricity, a strategy that can work if questions arise about the green power. (Are the windmills killing birds? Are the dams killing fish?) Instead of asking the algorithm designers to find the most awesome algorithms, just ask them to find the simplest functions that come close enough. Then ask them to optimize this approximation to put the smallest load on the most basic computers. In other words, stop dreaming of mixing together a million-layered algorithm trained on a dataset with billions of examples and start constructing solutions that use less electricity.
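One concrete version of "come close enough with less compute" is low-precision arithmetic: storing model weights as 8-bit integers instead of floats. A toy sketch with made-up weights:

```python
# Sketch: uniform 8-bit quantization of a (hypothetical) weight vector.
# The approximation error stays within half a quantization step, the kind
# of "close enough" trade-off that cuts memory traffic and energy use.
def quantize8(weights):
    lo, hi = min(weights), max(weights)
    scale = (hi - lo) / 255 or 1.0          # avoid a zero step for flat input
    q = [round((w - lo) / scale) for w in weights]   # ints in 0..255
    dequant = [lo + v * scale for v in q]            # approximate originals
    return q, dequant
```

Moving a quarter of the bytes (and doing integer math instead of floating point) is one of the simpler ways a model's electricity bill shrinks.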

The real secret force behind this drive is alignment between the bean counters and the environmentalists. Simpler computations cost less money and use less electricity, which means less stress on the environment.

Potential early adopters: Casual AI applications that may not support expensive algorithms.

Potential for success in five years: High. Saving money is an easy incentive to understand.

The world has been stuck on old QWERTY keyboards since someone designed them to keep typewriters from jamming. We don't need to worry about those issues anymore. Some people have imagined rearranging the keys, putting the most common letters in the most convenient and fastest locations. The Dvorak keyboard is just one example, and it has some fans who will teach you how to use it.

A more elaborate option is to combine multiple keys to spell out entire words or common combinations. This is what court reporters use to keep accurate transcripts; just to pass the qualifying exam, new reporters must be able to transcribe more than 200 words per minute. Good transcriptionists are said to be able to handle 300 words per minute.
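The key-combination idea can be sketched as a chord dictionary: a set of simultaneously pressed keys maps to a whole word. The chords below are invented for illustration and are not real steno theory:

```python
# Toy sketch of chorded input: press several keys at once, emit a word.
# These chord-to-word mappings are made up, not an actual steno layout.
CHORDS = {
    frozenset("TH"): "the",
    frozenset("AND"): "and",
    frozenset("KAT"): "cat",
}

def stroke(keys):
    # Unknown chords fall back to the letters themselves, in sorted order.
    return CHORDS.get(frozenset(keys), "".join(sorted(set(keys))))
```

One stroke per word, rather than one per letter, is where the 200-plus words per minute comes from.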

One project, Plover, is building tools that let regular computers work like stenotypes. If it catches on, there could be an explosion in creative expression. Don't focus on the proliferation of inter-office memos and fine print.

Potential first adopters: Novelists, writers, and social media addicts.

Potential for success in five years: Medium. Two-finger typing is a challenge for many.

Wait, weren't we supposed to be rushing to move everything to the cloud? When did the pendulum change direction? When some businesses started looking at monthly bills filled with thousands of line entries. All of those pennies per hour add up.

The cloud is an ideal option for sharing resources, especially for work that is intermittent. If your load varies dramatically, turning to the public cloud for big bursts in computation makes plenty of sense. But if your load is fairly consistent, bringing the resources back under your roof can reduce costs and remove any worries about what happens to your data when it's floating around out in the ether.
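A back-of-the-envelope comparison makes the point; every price, capex figure, and lifetime below is invented for illustration, not quoted from any provider:

```python
# Sketch: steady 24/7 load tends to favor owned hardware; bursty load favors
# the cloud. All rates and costs here are hypothetical.
def cloud_cost(hours, rate_per_hour=0.15):
    return hours * rate_per_hour

def onprem_cost(months, capex=2000.0, lifetime_months=36, opex_per_month=20.0):
    # Amortize the purchase over its lifetime, plus monthly power/space/admin.
    return months * (capex / lifetime_months + opex_per_month)

steady_cloud = cloud_cost(24 * 365)   # one VM, all year, in the cloud
steady_onprem = onprem_cost(12)       # amortized owned box, one year
bursty_cloud = cloud_cost(200)        # 200 burst hours, no capex at all
```

With these made-up numbers the owned box wins on a year of constant load while the cloud wins decisively for the occasional burst, which is the trade-off driving the repatriation trend.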

The major clouds are embracing solutions that offer hybrid options for moving data back on premises. Some desktop boxes come configured as private cloud servers ready to start up virtual machines and containers. And AWS recently announced Outposts, fully managed compute and storage racks that are built with the same hardware Amazon uses in its datacenters, run the same workloads, and are managed with the same APIs.

Potential first adopters: Shops with predictable loads and special needs for security.

Potential for success in five years: High. Some are already shifting load back on premises.

The weak spot in the world of encryption has been using the data. Keeping information locked up with a pretty secure encryption algorithm has been simple. The standard algorithms (AES, SHA, DH) have withstood sustained assault from mathematicians and hackers for some years. The trouble is that if you want to do something with the data, you need to unscramble it, and that leaves it sitting in memory where it's prey to anyone who can sneak through any garden-variety hole.

The idea with homomorphic encryption is to redesign computational algorithms so they work with encrypted values. If the data isn't unscrambled, it can't leak out. There's plenty of active research that's produced algorithms with varying degrees of utility. Some basic algorithms can accomplish simple tasks such as looking up records in a table. More complicated general arithmetic is trickier, and the fully general schemes remain so slow that elaborate computations can run orders of magnitude slower than the same work on plaintext. If your computation is simple, though, you might find that it's safer and simpler to work with encrypted data.
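The flavor of computing on ciphertexts can be shown with textbook RSA, which happens to be multiplicatively homomorphic: the product of two ciphertexts decrypts to the product of the plaintexts. The tiny primes below are for illustration only and are wildly insecure:

```python
# Toy demonstration of multiplicative homomorphism in unpadded RSA.
p, q = 61, 53
n = p * q                      # modulus, 3233
phi = (p - 1) * (q - 1)
e = 17                         # public exponent
d = pow(e, -1, phi)            # private exponent (Python 3.8+ modular inverse)

def encrypt(m): return pow(m, e, n)
def decrypt(c): return pow(c, d, n)

m1, m2 = 7, 9
c_product = (encrypt(m1) * encrypt(m2)) % n   # multiply ciphertexts only
# decrypt(c_product) recovers m1 * m2 = 63 without ever exposing m1 or m2
```

Fully homomorphic schemes extend this trick to both addition and multiplication, which is what makes arbitrary computation on encrypted data possible, and also what makes it so expensive.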

Potential first adopters: Medical researchers, financial institutions, data-rich industries that must guard privacy.

Potential for success in five years: Varies. Some basic algorithms are used commonly to shield data. Elaborate computations are still too slow.

View post:
7 crackpot technologies that might transform IT - CIO East Africa

Over 750,000 Applications for Copies of US Birth Certificates Left Exposed Online – Security Boulevard

Quick question: were you born in the United States? Have you recently applied for a new copy of your birth certificate? Well, you could be one of the unfortunate people whose birth certificate application was left exposed online.

It has been reported that more than 750,000 applications for copies of U.S. birth certificates have been left exposed, without any access control, on a misconfigured cloud server: an Amazon Web Services (AWS) storage bucket.

It is understood that a British security company discovered the data container with no password protection, leaving the door wide open for cybercriminals to steal the information for fraudulent purposes. What's worrying is that the cache is seemingly being updated on a weekly basis, with more applications being added.

The data was being collated by a third-party partner of the U.S. government that provides a service to U.S. citizens who wish to obtain copies of their birth and death certificates from state governments.

The company at fault has not been named, as it is believed the critical data is still online and currently exposed. The leak exposed traditional sensitive information like names, dates of birth, home addresses, email addresses, and phone numbers; however, more historical information has also been revealed. For example, the server also contained past names of family members, old addresses linked to the applicant, and even the reason the individual was seeking the certificate, which could be as trivial as applying for a new passport or researching their family's history.

Sadly, this is not the first time an unprotected AWS server has resulted in a high-profile data leak: in June 2019, Netflix, Ford, and many other brands all had data exposed in an open Amazon AWS bucket, amounting to 1TB of information left unprotected.

With these incidents occurring so frequently, it begs the question of why these online cloud servers are being left unprotected. Identity theft and fraud are widespread, and these leaks do not give people confidence that companies, governments, and other organizations are doing enough to secure their critical data.

Service providers and processors need to wake up to the reality that data needs to be protected in a data-centric fashion to eliminate the risk of a lapse in due diligence. Adopting a data-centric protection model ensures that data is protected anywhere it is stored, moved, shared, or used, and is the only true firebreak against identity theft.

Here is the original post:
Over 750,000 Applications for Copies of US Birth Certificates Left Exposed Online - Security Boulevard

Amping Up The Arm Server Roadmap – The Next Platform

Competition in and of itself does not directly drive innovation; customer needs that might be met by some other product are really what makes suppliers hop to and get the lead out. No matter what you do in this world, there is always a chance that someone else will do it better and quicker, or both.

The nascent Arm server chip market is littered with companies that attempted to break into the server space and compete against the hegemony of the X86 architecture, or that thought they might take a run at it and then, either early on or just before announcing products, decided against it.

Breathe in.

Calxeda launched in 2011 and then famously flamed out for complicated reasons: it didn't have 64-bit processors and it could not force the hardware stack into datacenters before the software stack was ready. It also ran out of money trying to do tech support for partners. Nvidia launched its Project Denver Arm server effort and then quietly killed it off. Samsung never did make its plans known, and killed them off before admitting anything. AMD jumped in with the K12 Arm server project and its low-end Seattle APUs to try to save face in systems, but then pulled back to concentrate on the Epyc X86 server chips, without a doubt the right thing to do. If the world wants or needs high volume Arm servers at some future date, AMD will be able to create one fairly quickly by global-replacing Epyc cores with Arm cores, which is basically what the K12 project was about anyway. Qualcomm was gung-ho with its Centriq line, and then decided, after putting a prototype and a production chip into the field, that this was not going to work out well financially, and spiked it. Phytium made a fuss three years ago with its Mars Arm server chip and was never heard from again, probably because Huawei Technology's HiSilicon Kunpeng 920 looks to be the Arm choice for China. Broadcom put together the very good Vulcan Arm server chip and mapped out a plan to take on Intel in the datacenter, and then, in the middle of trying to buy Qualcomm, decided to jettison the Vulcan effort, which Cavium picked up and turned into its variant of the ThunderX2 chip. In the interim, Marvell bought Cavium and also picked up the Arm server design team from Qualcomm, so Marvell has benefited a few times over from the failures of others, particularly given that it is really the only vendor of Arm server chips that has anything close to production installations. (We just did an in-depth review of the ThunderX roadmap here.)
Fujitsu has done a fine job with the HPC-centric A64FX processor, aimed at traditional supercomputing as well as AI workloads, which we have covered at length. And Amazon Web Services has cooked up its own Graviton family of Arm server chips, which it is putting up for sale by the hour on its EC2 compute service. The Graviton chips have the potential to be a higher volume product than the ThunderX line if they take off on the AWS cloud.

Breathe out.

That leaves one more credible maker of Arm server processors: Ampere Computing, the company that was created out of the ashes of the Applied Micro X-Gene Arm server chip business nearly two years ago. It is notable because it has Renee James, former president of Intel, as its chief executive officer, as well as a slew of former Intel chip people on staff and equity backing from The Carlyle Group to boot. Jeff Wittich, senior vice president of products at Ampere, had a chat with The Next Platform about what is coming next for the company as it builds out its roadmap and tries to bring Arm servers into the datacenter among the hyperscaler and public cloud elite.

Wittich is no stranger to these customers, which is why Ampere tapped him for his role in June. Wittich got his bachelor's in electrical engineering at the University of Notre Dame and then went on to get a master's in electrical engineering at the University of California Santa Barbara, where he also worked for two years as a graduate student researcher before joining Intel. He was a process engineer working on etching equipment for a year, and then a senior device engineer in the foundries working on the 45 nanometer high-k metal gate processes that debuted in Intel Xeon server chips in the late 2000s. After that, for a five-year stint, Wittich was a product reliability engineer for Intel's 22 nanometer products, and in 2014 he became senior director of cloud business and platform strategy at the chip giant. By the time he joined Ampere, that Xeon cloud business had grown by 6X in five years, significantly more than the company had expected.

Suffice it to say, James and Wittich know these hyperscaler and cloud builder customers intimately. And that is perhaps more dangerous to Intel than an instruction set and a clever arrangement of transistors on a wafer of silicon.

Here was our starting thesis in the conversation. If you looked at the past eight to ten years from outside the IT sector and you didn't know much about it, you might think that somebody was intentionally benefiting from the end of Dennard scaling and the slowdown in Moore's Law advances in transistors. All of the CPU vendors started stumbling around, in a bit of a daze and not getting important work done on time, and this coincided with the rise of the hyperscalers and cloud builders and Intel being able to maintain 50 percent gross margins with its datacenter products because, even with increasing theoretical competition, those alternative Arm chip suppliers from days gone by could not deliver the right chip at high volume at a predictable cadence. It is one thing to make a few hundred or a few thousand samples; it is quite another to build a few tens of thousands or hundreds of thousands per quarter.

"I completely agree," Wittich tells The Next Platform. "That's one thing that I think is really important: the fact that our whole executive team at Ampere, we've all done this before. So, 500,000 units doesn't sound like much at all to me. I did that for 15 years, and our head architect and our head of engineering, they've all done this for ten or more generations of high volume, server-class CPUs. I think we know what it takes to deliver at scale and at high volume. That's why I think we are particularly well suited to succeed in this space."

Take a look at the executive team at Ampere and you will see that Wittich is not kidding. These are very seasoned people from Intel. And they all believe that the time is right to create that alternative, and that an Arm architecture is the way to go.

"If somebody can come in and establish that they have a reliable cadence of product delivery, can meet the volume requirements, can get through a qual cycle in an efficient and reliable manner, and provide the customer support that the hyperscalers expect, then there's a big opportunity there," says Wittich. "We are not just trying to go and compete on matching and exceeding the exact same performance metrics or TCO metrics that the broad datacenter market has looked at for the last decade. We are specifically delivering the type of performance that you need in a multitenant cloud, with the type of performance consistency and the type of security that you need. So it goes beyond just basic performance and basic TCO. It's also about the type of power efficiency that hyperscalers need. It's the type of scalability they're looking for, and it's that foundation of cloud-architected features that provide quality of service, manageability, and security. There's an opportunity to come in and reshape the landscape by doing something that's truly different and truly innovative right now."

One could argue that the eMAG 1 chip, which was based on the Skylark X-Gene 3 chip created by Applied Micro with some tweaks by Ampere, was not that product, although it was a perfectly respectable server chip. The 32-core Skylark chip started shipping in volume in September 2018 and stacked up pretty well against the 16-core Skylake Xeon SP processors from Intel, providing the same number of threads (when Intel turned on HyperThreading). All Ampere chips use real cores without threads to scale up compute, and this is a conscious choice, as it simplifies the pipelines somewhat and, moreover, some workloads do worse rather than better with simultaneous multithreading turned on. And finally, adding threads provides another way that security can be compromised, because threading means virtualizing and sharing resources like registers and L1 caches, and virtual CPUs (vCPUs) on the public clouds usually have a thread (not a core) as their finest level of granularity. This is, Wittich says, inherently less secure.

With the next-generation Arm server processor, which is code-named Quicksilver (but which will not be called the eMAG 2, by the way; the official brand has not been unveiled as yet), Ampere will be scaling up and out on a bunch of different vectors. It is still focusing on multitenant cloud and edge use cases, and is not really pursuing legacy enterprise platforms or the HPC sector as other Arm suppliers are trying to do. Ampere will be using the Ares core created by Arm Holdings for its Neoverse N1 platform, with modifications by Ampere to optimize performance and to make use of Ampere's own mesh interconnect for on-die communication. The next-generation Ampere chip will scale up to 80 cores on a monolithic die, and will be etched in the 7 nanometer processes created by fab partner Taiwan Semiconductor Manufacturing Co. The 32-core eMAG 1 did alright for a first stab at the market by Ampere, but Wittich says that it did not have enough single-core performance, and this will be fixed in the next-generation chip.

The Ampere Quicksilver chip will have eight memory channels, just like the eMAG 1 did, and Wittich says it will have more memory bandwidth than the eMAG 1 provided; further, it is getting its DRAM memory controllers from a third party, as many Arm server chip makers do. The Quicksilver chip will support the CCIX interface for linking to accelerators like GPUs, and will support two-socket NUMA configurations as well as single-socket implementations, with CCIX as the transport for those NUMA links. This stands to reason, since Applied Micro did not really have a hardware-based NUMA technology of its own and was resorting to software-based NUMA over PCI-Express for the X-Gene 2. (Other Arm chip makers are using CCIX for NUMA links as well.) The future CPU will also have PCI-Express 4.0 peripheral controllers, but the number of lanes is not yet clear.

The clock speeds on that 7 nanometer Quicksilver chip have not been revealed, but it is hard to imagine that, even with the process shrink from 16 nanometers for the eMAG 1, Ampere can maintain a sustained turbo speed of 3.3 GHz on the cores while boosting the core count to 80. What we do know is that, owing to its edge, hyperscaler, and cloud target markets, the Quicksilver chip will have a much wider compute and thermal range than the Skylark chip had. The Skylark chip had SKUs that ranged from 75 watts to 125 watts, but Quicksilver will range from a low of 45 watts all the way up to 200 watts or more. That implies SKUs ranging from maybe 10 cores all the way up to 80 cores, depending on how much juice the uncore region of the Quicksilver chip burns.

The Quicksilver chip came back from the foundry this week and samples will be shipped out to key partners before the end of the year, according to Wittich. The plan is to ramp up volumes towards the middle of 2020.

As you can see, Ampere has two more chips on the roadmap that it is showing publicly, with a 7 nanometer follow-on in development now and a 5 nanometer kicker to that in the definition stage. This hews more or less to the Arm Holdings Neoverse roadmap, and we expect Ampere to stay more or less in sync with it, picking and choosing technologies from Arm as it sees fit, as it has done with Quicksilver. That implies a more or less annual cadence of chip rollouts.

Read this article:
Amping Up The Arm Server Roadmap - The Next Platform

For Cloud-native App Security, Few Companies Have Embraced DevSecOps – Security Boulevard

A recent study turned up some startling news about the state of security for cloud-native apps. The Security for DevOps Enterprise Survey Report, conducted by the research firm Enterprise Strategy Group on behalf of Data Theorem, found that only 8% of companies are securing 75% or more of their cloud-native applications with DevSecOps practices today.

The survey did provide some hope: the share of companies securing 75% or more of their cloud-native applications with DevSecOps practices should jump from that paltry 8% to 68% within two years.

The study results also revealed that API-related vulnerabilities are the top threat respondents are concerned about, at 63%, when it comes to serverless usage within organizations. Additional study findings include:

The survey found that workloads not only continue to move to public cloud platforms, but that more organizations are also embracing serverless capabilities. The general shift toward everything being consumed as-a-service continues, and the report predicts that production workloads will keep shifting to public cloud platforms, as organizations report that more than 40% of their production applications already run on public cloud infrastructure.

Given this affinity for and commitment to public cloud infrastructure, it follows that there is already appreciable use of serverless functions, especially in the enterprise. Specifically, more than half of respondents indicate that their organizations' software developers are already using serverless functions to some extent, with another 44% either evaluating or planning to start using serverless within the next two years. "Those who are planning or evaluating will need to understand the associated threat model and means of mitigating risks," the report states.

While enterprises are increasingly turning to public infrastructure and serverless, the future will most likely be a mix of workload types. "Containers and serverless are marginally cannibalizing virtual machines and bare metal servers and are expected to coexist with these server types as the underpinnings of both cloud-native and legacy applications," the report states.

"However, while the server type mix for the typical organization is skewed toward VMs and bare metal today, this is expected to shift noticeably in the next 24 months, with containers and serverless platforms supporting, on average, 46% of production applications," it continues.

"This study reveals that while organizations have started, there is more work to be done when it comes to securing their cloud-native apps with the benefits DevSecOps offers," says Doug Cahill, senior analyst and group practice director of cybersecurity for ESG. "Fundamental changes to application architectures and the infrastructure platforms that host them are antiquating existing cybersecurity technologies and challenging traditional approaches to protecting business-critical workloads," he continues.

The report advises organizations to consider newer approaches to securing their cloud-native apps, especially technologies that mitigate the risks associated with API-related vulnerabilities, which the survey found were top of mind for respondents.

If organizations are going to ultimately get a handle on cloud, serverless, and, more broadly, API risks, they can't have separate security teams for cloud-native apps and other systems, as 82% of organizations claim to do today. It's a good sign that 50% of respondents plan to merge those security efforts.

It's problematic that 32% currently have no such plans.

This study, Security for DevOps Enterprise Survey Report, is based on responses from 371 IT and cybersecurity professionals at organizations in North America responsible for evaluating, purchasing, and managing cloud security technology products and services.

Visit link:
For Cloud-native App Security, Few Companies Have Embraced DevSecOps - Security Boulevard
