
OVH makes foray into APAC cloud market – ComputerWeekly.com

French infrastructure-as-a-service (IaaS) supplier OVH has expanded its footprint in Asia-Pacific (APAC) with new datacentres in Sydney and Singapore, along with a regional headquarters in Melbourne, Australia.

What are your peers in the Nordics region looking to spend their budget on in 2017? Unsurprisingly, cloud computing is one of the biggest draws and more than half of CIOs in the region will spend more on cloud technologies this year than they did in 2016.


The recent investments are part of the company's efforts to tap the booming APAC public cloud market, especially in mature economies, where public cloud spending is expected to hit $10bn by 2017.

Although the market for IaaS in the region is currently dominated by the likes of Microsoft, Amazon Web Services and Alicloud, as well as smaller regional players, OVH is confident of delivering a differentiated IaaS offering powered by its hyperscale infrastructure comprising 270,000 physical servers hosted in 20 datacentres around the globe.

"We deliver a hosted dedicated cloud infrastructure with the commercial attributes of the public cloud: very fast provisioning, full elasticity up and down, and zero minimum commitment," said Laurent Allard, vice-chairman of OVH's board of directors. "We do this with bare metal cloud servers, as well as in the VMware space with SDDC [software-defined datacentre] on demand."

Noting that cloud adoption is still nascent in the APAC region, Clement Teo, principal analyst at Ovum, said there are growth opportunities for new market entrants. Cloud suppliers such as OVH, in particular, could support the infrastructure needs of large European enterprises that are expanding into the region, he told Computer Weekly.

Besides targeting large enterprises, OVH, which was started 17 years ago in the garage of its current CEO, Octave Klaba, is also eyeing startups, which Allard said will be the big companies of tomorrow.

To reach more startups, OVH has introduced its Digital Launch Pad (DLP) programme in Singapore and Southeast Asia. Through the DLP, which has already enrolled 700 startups globally, the company will support local startups at each stage of their development and offer free cloud computing resources ranging from $1,000 to $100,000 per company.

Teo said going after startups is a sound strategy, especially if the startups are gaming companies that need to scale up quickly. OVH could also look into providing a marketplace for developers to pick and choose the services they want to deploy, he added.

OVH's expansion plans do not stop at two regional datacentres. The company will hire 80 full-time staff for its Melbourne regional HQ within three years, including highly skilled employees covering technical pre-sales, technical sales, customer support and marketing.

"Later in 2017, we will review the need for additional footprint across the region based on customer feedback and requirements," said Allard.

The company is also looking to extend its certifications for its Singapore datacentre in the next few months, including alignment with cloud usage guidelines set out by the Monetary Authority of Singapore and local security standards such as Multi-Tier Cloud Security.


Targeting Small To Midsize Practices, Cloud Hosting Leader Infinitely Virtual Enters Market For Law Firms, Legal … – HostReview.com (press release)

Targeting Small To Midsize Practices, Cloud Hosting Leader Infinitely Virtual Enters Market For Law Firms, Legal Services; IaaS Provider Brings Award-Winning Technology and Service to Expanding Sector

LOS ANGELES (June 2, 2017) -- Leading Infrastructure-as-a-Service (IaaS) provider Infinitely Virtual today announced its entry into the cloud hosting market for small and mid-size law firms, a burgeoning segment of the $300 billion legal services industry.

"Every day, law firms confront critical IT issues across the board, and there's no margin for error," said Adam Stern, founder and CEO, Infinitely Virtual. "The variables in play for legal practices and their clients run the gamut: data security, data integrity, system performance, uptime, disaster recovery, untold regulatory requirements. Increasingly, law firms are hopping off the IT rollercoaster of buy, deploy, depreciate, and instead embracing a model that fosters growth and enables partners to sleep at night."

"Cloud-hosted solutions for law firms must be up and available 100 percent of the time, and a hosting vendor must deliver absolute reliability, total security and zero data loss," Stern said. "For nearly a decade, we have been focused on cloud-based application delivery. Every Infinitely Virtual hosting plan is designed to provide what law firms and legal professionals need. The message for legal practices is clear -- what goes into our cloud is whatever the practice chooses to put in it. Whatever the application(s) of choice, it will operate seamlessly within our Infrastructure as a Service model, across every category of legal and business software." Among those categories:

Infinitely Virtual offers the highest level of client data protection in the industry, including such technologies as clustered firewalls and intrusion detection and prevention software (IDPS), for free. IDPS detects threats to sensitive client data that even the strongest firewall won't catch. As cyber threats become ever more insidious, the company has implemented systems that go well beyond basic malware and antivirus solutions. Every Infinitely Virtual user gets a server and a dedicated firewall.

The company deploys application-consistent backup as the most secure way to restore data, offered for free, both locally and offsite, with 28-day retention. Infinitely Virtual routinely takes point-in-time snapshots of sensitive data, enabling fast, clean restoration without tape in minutes. Unlimited backup retention is available as well. Every part of the IV environment is redundant; hardware failures do not lead to outages.

Infinitely Virtual is a perennial leader in HostReview's monthly ratings, and is ranked among the world's Top 100 Cloud Service Providers, according to Penton's Talkin' Cloud 100. The company earned the highest rating of "Enterprise-Ready" in Skyhigh Networks' CloudTrust Program for four of its offerings: Cloud Server Hosting, InfiniteVault, InfiniteProtect and Virtual Terminal Server. Skyhigh (www.skyhighnetworks.com) provides an objective and detailed risk assessment of more than 9,000 cloud services across 50 attributes developed in conjunction with the Cloud Security Alliance (CSA). For additional information, visit http://www.infinitelyvirtual.com.

About Infinitely Virtual: The World's Most Advanced Hosting Environment. Infinitely Virtual is a leading provider of high quality and affordable Cloud Server technology, capable of delivering services to any type of business, via terminal servers, SharePoint servers and SQL servers, all based on Cloud Servers. Named to the Talkin' Cloud 100 as one of the industry's premier hosting providers, Infinitely Virtual has earned the highest rating of "Enterprise-Ready" in Skyhigh Networks' CloudTrust Program for four of its offerings -- Cloud Server Hosting, InfiniteVault, InfiniteProtect and Virtual Terminal Server. The company recently took the #1 spot in HostReview's Ranking of VPS hosting providers. Infinitely Virtual was established as a subsidiary of Altay Corporation, and through this partnership, the company provides customers with expert 24/7 technical support. More information about Infinitely Virtual can be found at: http://www.infinitelyvirtual.com, @iv_cloudhosting, or call 866-257-8455.

Media Contact: Ken Greenberg, Edge Communications, Inc., 323-469-3397, ken@edgecommunicationsinc.com


Private cloud 3x cheaper than public cloud; you’re kidding, right? – TechRepublic

Image: iStockphoto/Dmitrii_Guzhanin

Do we really have to go over this again? Based on a new study from ServerPronto University (yes, really), private cloud (read: legacy data centers dressed up in cloudy clothes) is 3x cheaper than Amazon Web Services (AWS). Dell founder Michael Dell took the bait, touting EMC-Dell-based VxRail as 2-4x cheaper than AWS.

It's a nice thought, but reminds me of Trautman's declaration to Rambo: "It's over Johnny. It's over!" to which Rambo replies: "Nothing is over! Nothing! You just don't turn it off!"

Meaning, the silly cloud price war is over and, really, it never began. The cloud has never been a question of cost, but rather one of convenience.

Even so, it's worth digging into the cost claims, if briefly. ServerPronto is (wait for it!) a dedicated server hosting company. That means it gets paid to push servers on enterprises, even as the world increasingly thinks about serverless computing (as the natural extension of cloud computing, wherein the server disappears entirely and only services/functions matter). One other post it published recently suggests a disconnect from reality: The Simple Reason Companies are Abandoning the Cloud.

Because, um, that's happening?...

If you look at ServerPronto's numbers, it's a wonder that anyone would ever consider running anything in the public cloud. After all, the company finds that AWS costs $2,762.81 a month for a comparable configuration, while the private cloud offering costs a mere $899 a month (even the pricing is optimized for optics: it's not $900 per month, it's $899).

SEE: AWS isn't the cheapskate's cloud, and Amazon doesn't care (TechRepublic)

Things enter bizarro-land, however, when ServerPronto spells out the reasons private server (I mean, "cloud") hosting manages to be so much cheaper:

Well, yes. But that "impact" is completely in the public cloud's favor. It's a truism that private servers sitting in an average enterprise data center get used just 5-10% of the time. ServerPronto's $899/month covers just a fraction of what you'd need to pay to get remotely close to the public cloud's levels of utilization.

The company might respond: "But that's not the point! Even with our profound waste of money, energy, and materials to bulk up a data center, we're still cheaper!" To which I'd say: "Doubtful at best, but irrelevant, anyway."

Irrelevant, because enterprises aren't simply buying raw storage and compute from the public clouds. They're buying into Amazon Aurora, Google BigTable, Microsoft Azure Machine Learning, etc. They're buying services and convenience.

SEE: Why public cloud R&D is making lock-in worse, not better (TechRepublic)

Is that cheap? No, but it's demonstrably cheaper, for example, to run big data workloads on public clouds than on dedicated servers. Why? Because the very nature of data science requires elastic infrastructure, as AWS product chief Matt Wood told me:

Even if all an enterprise buys from the public cloud is storage and compute, it's going to be cost-advantageous compared to bulking up on under-utilized, quickly obsolete servers. But, as mentioned, enterprises are increasingly looking to the cloud for the next stage of convenience, including powerful services they can rent by the hour (or minute) instead of signing long-term contracts for dumb infrastructure that requires the extra cost (and expertise) of server-side software.

As former VMware executive Mathew Lodge tweeted, the ServerPronto University study "Displays a staggering lack of understanding of the drivers for public cloud." That, or a profound need to keep peddling servers in a world that increasingly doesn't care. To be clear, there are very good reasons to run private clouds, as I've written before, but saving 3x on infrastructure is not one of them.


Tektronix AWG Pulls Test into Era of Quantum Computing – Electronic Design

When a company calls and says they have the best widget ever, you have to be skeptical. However, you also can't help but be curious. When they talked about how it would advance the state of the art in radar, electronic warfare, and quantum-computing test, and make an engineer's workspace tidier, I was smitten.

I met up with the Tektronix team, led by Product Market Manager Kip Pettigrew, and wasn't disappointed: The new AWG5200 arbitrary waveform generator is a work of art and function. Physically, it's both commanding and imposing. It measures 18.13 × 6.05 in. from the front, but it's 23.76 inches deep, so while it'll sit nicely within a test stack and help reduce clutter, the stack had better have a deep shelf (Figs. 1 and 2).

It's what's within those dimensions, and what you have to pay to get it, though, that gives the AWG5200 a certain level of gravitas. For sure, it's hard to ignore a price point of $82,000, but it's not surprising when you understand what you're getting in return.

1. The AWG5200 measures 18.13 × 6.05 in. and comes with a 6.5-inch touchscreen, a removable hard drive (upper right), and two, four, or eight channels (bottom right). (Source: Tektronix)

Aimed squarely at military/government and advanced research applications, the system emphasizes signal fidelity, scalability, and flexibility. It can accurately reproduce complex, real-world signals across an ever-expanding array of applications without having to physically expand a test area. It's also supported by Tektronix's SourceXpress software, which lets you create waveforms and control the AWGs remotely, and has a growing library of waveform-creation plugins.

2. The AWG5200 is designed to be compact so that it can stack easily with other equipment to reduce overall space requirements, though it is 23.76 inches deep. A synchronization feature allows it to scale up beyond eight channels by adding more AWG5200s. (Source: Tektronix)

Let the Specs Tell the Story

Digging into the specs uncovers what the AWG5200 is all about. Words like "powerful," "precision," and "solid engineering" come to mind. The system can sample at 5 Gsamples/s (10 Gsamples/s with interpolation) with 16-bit vertical resolution across two, four, or eight channels per unit. Channel-to-channel skew (typical) is <25 ps, with a range of ±2 ns and a resolution of 0.5 ps. The analog bandwidth is 2 GHz (at -3 dB) or 4 GHz (at -6 dB), and the amplitude range is 100 mV to 0.75 V p-p, with an accuracy of ±2% of setting.

The AWG5200's multi-unit synchronization feature helps scale up beyond eight channels. Note that each channel is independent, so the classic tradeoff of sample memory for bandwidth doesn't apply here. Each channel gets 2 Gsamples of waveform memory.

The precision is embodied within its ability to generate RF signals with a spurious-free dynamic range (SFDR) of 70 dBc. Combined with a software suite and support, this is critical as new waveforms and digital-modulation techniques are explored in a time of rapid wireless evolution in military and government applications, as well as 5G and even quantum-computer test. Signal fidelity isn't something you want to worry about, and the expanding library and customizable features help kickstart and then fine-tune your research and development waveforms.

How'd They Do That?

Achieving higher or improved specifications is almost always a labor of love: The test company's engineers' constant urge to make things better combines with customer feedback and an analysis of where to focus energy and development to have the most impact. However, at a fundamental level, the AWG5200's advances go back to the digital-to-analog converter (DAC) technology at the heart of the system.

Advances in DAC technologies, particularly with respect to signal processing and functional integration, allow them to directly generate detailed and complex RF and electronic-warfare (EW) signals. This is an area worth digging into in more detail, so Christopher Skach and Sahandi Noorizadeh developed a feature specially for Electronic Design on DAC technology advances and how they're changing signal generation for test. It's worth a look.

Rapidly Evolving Applications

Pettigrew also provided a quick run-through of the newer and more interesting applications, as well as the key market trends that the system is solving for. In general electronic test, "go wide" technologies like MIMO need test systems that can scale, as they need multiple, independent, wide-bandwidth RF streams (Fig. 3).

3. Rapid expansion in the use of techniques such as MIMO requires more advanced and flexible waveform generators to generate multiple high-fidelity, RF signals with complex modulation schemes. (Source: Tektronix)

This translates over to mil/gov, too, where systems must be tested for their ability to detect and respond to adaptive threats. Signals of interest can be generated on two channels, while the others can be used to generate expected noise, Wi-Fi interferers, and other MIMO channels.

However, just being able to reproduce the signals isn't enough: The AWG must be capable of enabling stress and margin testing, as well as verification and characterization [1].

On the research front, it turns out that quantum computing needs advanced AWGs, too, said Pettigrew, as existing instruments fall short on fidelity, latency, and scalability. In quantum computers, the qubits are often controlled using precision-pulsed microwave signals, each requiring multiple independent RF channels. This is only going to get more interesting and challenging as companies like IBM and Google, along with many independent physicists and engineers, work to scale up quantum-computing technology and applications.

For all three of these applications, cost remains a factor. So, instead of developing multiple custom solutions, the AWG5200 may be a good commercial off-the-shelf (COTS) option.

References:

1. How New DAC Technologies are Changing Signal Generation for Test


Purdue, Microsoft Partner On Quantum Computing Research | WBAA – WBAA

Purdue researchers are partnering with Microsoft and scientists at three other universities around the globe to determine whether they've found a way to create a stable form of what's known as quantum computing.

A new five-year agreement aims to build a type of system that could perform computations that are currently impossible in a short timespan, even for supercomputers.

Purdue physics and astronomy professor Michael Manfra is heading up the West Lafayette team, which will work with Microsoft scientists and university colleagues in Australia, the Netherlands and Denmark to construct, manipulate and strengthen tiny building blocks of information called "topological qubits."

"The real win that topological quantum computing suggests is that if you devise your system in which you store your information cleverly enough, you can make the qubit insensitive, basically deaf, to the noise that's all around it in the environment," Manfra says.

He says that deafness is important because of what's held quantum computing back: the ease with which it's disturbed.

"It can interact with photons, electromagnetic fields. It can interact with vibrations of the lattice. And those interactions, what they can do is cause a decoherence of that qubit, basically cause it to lose the stored information."

Manfra says it's an open question whether quantum computing will ever overtake the current zeroes-and-ones system of information storing, but he says he's interested in either proving or disproving the concept.


Toward mass-producible quantum computers | MIT News – MIT News

Quantum computers are experimental devices that offer large speedups on some computational problems. One promising approach to building them involves harnessing nanometer-scale atomic defects in diamond materials.

But practical, diamond-based quantum computing devices will require the ability to position those defects at precise locations in complex diamond structures, where the defects can function as qubits, the basic units of information in quantum computing. In today's issue of Nature Communications, a team of researchers from MIT, Harvard University, and Sandia National Laboratories reports a new technique for creating targeted defects, which is simpler and more precise than its predecessors.

In experiments, the defects produced by the technique were, on average, within 50 nanometers of their ideal locations.

"The dream scenario in quantum information processing is to make an optical circuit to shuttle photonic qubits and then position a quantum memory wherever you need it," says Dirk Englund, an associate professor of electrical engineering and computer science who led the MIT team. "We're almost there with this. These emitters are almost perfect."

The new paper has 15 co-authors. Seven are from MIT, including Englund and first author Tim Schröder, who was a postdoc in Englund's lab when the work was done and is now an assistant professor at the University of Copenhagen's Niels Bohr Institute. Edward Bielejec led the Sandia team, and physics professor Mikhail Lukin led the Harvard team.

Appealing defects

Quantum computers, which are still largely hypothetical, exploit the phenomenon of quantum superposition, or the counterintuitive ability of small particles to inhabit contradictory physical states at the same time. An electron, for instance, can be said to be in more than one location simultaneously, or to have both of two opposed magnetic orientations.

Where a bit in a conventional computer can represent zero or one, a qubit, or quantum bit, can represent zero, one, or both at the same time. It's the ability of strings of qubits to, in some sense, simultaneously explore multiple solutions to a problem that promises computational speedups.

Diamond-defect qubits result from the combination of vacancies, which are locations in the diamond's crystal lattice where there should be a carbon atom but there isn't one, and dopants, which are atoms of materials other than carbon that have found their way into the lattice. Together, the dopant and the vacancy create a dopant-vacancy center, which has free electrons associated with it. The electrons' magnetic orientation, or spin, which can be in superposition, constitutes the qubit.

A perennial problem in the design of quantum computers is how to read information out of qubits. Diamond defects present a simple solution, because they are natural light emitters. In fact, the light particles emitted by diamond defects can preserve the superposition of the qubits, so they could move quantum information between quantum computing devices.

Silicon switch

The most-studied diamond defect is the nitrogen-vacancy center, which can maintain superposition longer than any other candidate qubit. But it emits light in a relatively broad spectrum of frequencies, which can lead to inaccuracies in the measurements on which quantum computing relies.

In their new paper, the MIT, Harvard, and Sandia researchers instead use silicon-vacancy centers, which emit light in a very narrow band of frequencies. They don't naturally maintain superposition as well, but theory suggests that cooling them down to temperatures in the millikelvin range, fractions of a degree above absolute zero, could solve that problem. (Nitrogen-vacancy-center qubits require cooling to a relatively balmy 4 kelvins.)

To be readable, however, the signals from light-emitting qubits have to be amplified, and it has to be possible to direct them and recombine them to perform computations. That's why the ability to precisely locate defects is important: It's easier to etch optical circuits into a diamond and then insert the defects in the right places than to create defects at random and then try to construct optical circuits around them.

In the process described in the new paper, the MIT and Harvard researchers first planed a synthetic diamond down until it was only 200 nanometers thick. Then they etched optical cavities into the diamonds surface. These increase the brightness of the light emitted by the defects (while shortening the emission times).

Then they sent the diamond to the Sandia team, who have customized a commercial device called the Nano-Implanter to eject streams of silicon ions. The Sandia researchers fired 20 to 30 silicon ions into each of the optical cavities in the diamond and sent it back to Cambridge.

Mobile vacancies

At this point, only about 2 percent of the cavities had associated silicon-vacancy centers. But the MIT and Harvard researchers have also developed processes for blasting the diamond with beams of electrons to produce more vacancies, and then heating the diamond to about 1,000 degrees Celsius, which causes the vacancies to move around the crystal lattice so they can bond with silicon atoms.

After the researchers had subjected the diamond to these two processes, the yield had increased tenfold, to 20 percent. In principle, repetitions of the processes should increase the yield of silicon vacancy centers still further.

When the researchers analyzed the locations of the silicon-vacancy centers, they found that they were within about 50 nanometers of their optimal positions at the edge of the cavity. That translated to emitted light that was about 85 to 90 percent as bright as it could be, which is still very good.

"It's an excellent result," says Jelena Vuckovic, a professor of electrical engineering at Stanford University who studies nanophotonics and quantum optics. "I hope the technique can be improved beyond 50 nanometers, because 50-nanometer misalignment would degrade the strength of the light-matter interaction. But this is an important step in that direction. And 50-nanometer precision is certainly better than not controlling position at all, which is what we are normally doing in these experiments, where we start with randomly positioned emitters and then make resonators."


D-Wave partners with U of T to move quantum computing along – Financial Post

Not even the greatest geniuses in the world could explain quantum computing.

In the early 1930s, Einstein, in fact, called quantum mechanics, the basis for quantum computing, "spooky action at a distance."

Then there's a famous phrase from the late Nobel Laureate in physics, Richard Feynman: "If you think you understand quantum mechanics, then you don't understand quantum mechanics."

That may be so, but the mystery behind quantum has not stopped D-Wave Systems Inc. from making its mark in the field. "In the 1980s it was thought maybe quantum mechanics could be used to build a computer. So people started coming up with ideas on how to build one," says Bo Ewald, president of D-Wave in Burnaby, B.C.

Two of those people were UBC PhD physics grads Eric Ladizinsky and Geordie Rose, who had happened to take an entrepreneur course before founding D-Wave in 1999. Since there weren't a lot of businesses in the field, they created and collected patents around quantum, Ewald says.


While most who were exploring the concept were looking in the direction of what is called the universal gate model, D-Wave decided to work on a different architecture, called annealing. The two do not necessarily compete, but perform different functions.

In quantum annealing, algorithms quickly search over a space to find a minimum (or solution). The technology is best suited for speeding research, modelling or traffic optimization for example.

Universal gate quantum computing can put basic quantum circuit operations together to create any sequence to run increasingly complex algorithms. (There's a third model, called topological quantum computing, but it could be decades before it can be commercialized.)

When D-Wave sold its first commercial product to Lockheed Martin about six years ago, it marked the first commercial sale of a quantum computer, Ewald says. Google was the second to partner with D-Wave for a system that is also being run by NASA Ames Research Center. Each gets half of the machine, Ewald says. They believed quantum computing had an important future in machine learning.

Most recently D-Wave has been working with Volkswagen to study traffic congestion in Beijing. They wanted to see if quantum computing would have applicability to their business, where there are lots of optimization problems. Another recent coup is a deal with the Los Alamos National Laboratory.

There's no question that any quantum computing investment is a long-term prospect, but that has not hindered their funding efforts. To date, the company has acquired more than 10 rounds of funding from the likes of PSP, Goldman Sachs, Bezos Expeditions, DFJ, In-Q-Tel, BDC Capital, GrowthWorks, Harris & Harris Group, International Investment and Underwriting, and Kensington Partners Ltd.

"What we have with D-Wave is the mother of all ships: that is the hardware capability to unlock the future of AI," says Jérôme Nycz, executive vice-president, BDC Capital. "We believe D-Wave's quantum capabilities have put Canada on the map."

Now, Ewald says, the key for the company moving forward is getting more smart people working on apps and on software tools in the areas of AI, machine learning and deep learning.

To that end, D-Wave recently not only open-sourced its Qbsolv software tool, it launched an initiative with Creative Destruction Lab at the University of Toronto's Rotman School of Management to create a new track focused on quantum machine learning. The intensive one-year program will go through an introductory boot camp led by Dr. Peter Wittek, author of Quantum Machine Learning: What Quantum Computing Means to Data Mining, with instruction and technical support from D-Wave experts, and access to D-Wave technology.

While it is still early days in terms of deployment for quantum computing, Ewald believes D-Wave's early start gives them a leg up if and when quantum hits the mainstream. So far customers tend to be government and/or research related. Google is the notable exception. But once apps come along that are applicable for other industries, it will all make sense.

The early start has given D-Wave the experience to be able to adopt other architectures as they evolve. It may be a decade before a universal gate model machine becomes a marketable product. "If that turns out to be true, we will have a 10-year lead in getting actual machines into the field and having customers working on and developing apps."

Ewald is the first to admit that as an early entrant, D-Wave faces criticism around its architecture. "There are a lot of spears and things that we tend to get in the chest. But we see them coming and can deal with it. If we can survive all that, we will have a better view of the market, real customers and relationships with accelerators like Creative Destruction Lab. At the end of the day we will have the ability to adapt when we need to."


What’s the Safest Laptop For Internet Security? – HuffPost

How secure is a Chromebook vs. a PC and a MacBook? originally appeared on Quora: the place to gain and share knowledge, empowering people to learn from others and better understand the world.

Answer by Stan Hanks, CTO of Columbia Ventures Corp, on Quora:

How secure is a Chromebook vs. a PC and a MacBook?

ChromeOS is super-limited, designed primarily to just let you run Chrome.

It's not general purpose. There's no support for running other applications (apart from widgets that let you diddle OS parameters like joining WiFi networks, etc.). There's no local file storage. There's no way to hand off to other executions because there's nothing else to execute.

Windows is not that. It's designed to allow any user to run any application that they want any time they want it, whether it's good for them or not. You can write it yourself, you can download it from the Internet, you can buy it shrinkwrapped in a store, the OS doesn't care. If it's got the right bytecode, it'll run.

And handoffs between applications are trivial. So I can have bad-actor code in JavaScript on a web page make a call to download and then run bad-actor code, with elevated privilege, so all bets on "secure" are off.

Thankfully, there are ways to tighten down the controls on that to prevent users from screwing themselves over too much, and in enterprise environments, there are ways to lock things down to "you can only run stuff that we say you can run," but that's not the default. The default is "here's a gun, here's some ammo, there's your foot, good luck."

macOS is wound a bit more tightly than that. Having roots in UNIX, the default security model is much less permissive, and the OS defaults which have grown around that base over the years are pretty conservative. Yes, you can build or download and run code. But for it to do any of a wide variety of things that would compromise the security of the system, you have to give authorization - and in a very obvious "no, seriously, do you want this to happen, for reals?" kind of way.

(That's actually how the OSX/Dok malware worked; it solicited your administrator password and exfiltrated it, showing that you can exploit that sort of thing, but differently than many had thought.)

So, if you want to browse the web and not worry about your system being infected by random malware, the safest thing to do is get a Chromebook. There's nearly zero chance of it getting infected because the attack surface is really, really small. There's a very low chance of targeted malware evolving because the OS design means theres no native local data to exploit.

Your second choice: a macOS. It's much more secure from the start. The theory was that Mac users were safer because of sheer numbers: hundreds of millions of Windows systems make a more attractive target than a much smaller number of Mac users. However, since Apple owns the global market for laptops over $1000, those users are much, much more interesting from an exploit perspective, so we who use Macs have a giant, shiny target painted on us; expect exploits to arrive, in greater numbers, in the coming years.

Last choice: Windows. Way too easy to run code that you don't really want to run, way too difficult to use if you crank it down all the way.

Final note: no matter how secure the platform is, nothing can protect you from falling for phishing attacks, choosing to enter your credentials on a bad-actor-operated web site, or infrastructure attacks like man-in-the-middle. You can minimize the collateral damage, but it's still dangerous out there, people. Be careful.




Enterprise Encryption Solutions – Data at Rest and Data in …

To reduce the risk posed by hackers, insider threats, and other malicious attacks, your organization must utilize encryption to protect sensitive data wherever it is found across your on-premises, virtual, public cloud, and hybrid environments. This includes data at rest in application and web servers, file servers, databases, and network attached storage, as well as data in motion across your network.

As your corporate data assets grow, data-at-rest encryption is a critical last line of defense. Encryption applies security and access controls directly to your sensitive structured and unstructured data - wherever it resides.

In addition to protecting data at rest, enterprises must also address threats to sensitive data as it traverses networks. Data-in-motion encryption ensures your data, video, voice and even metadata is protected from eavesdropping, surveillance, and overt and covert interception. With Gemalto's comprehensive portfolio of SafeNet data-at-rest and data-in-motion encryption solutions, you can secure all types of sensitive data across today's distributed enterprise.

Gemalto's portfolio of data-at-rest encryption solutions delivers transparent, efficient, and unmatched data protection at all levels of the enterprise data stack, including the application, database (column or file), file system, full disk (virtual machine), and network attached storage levels. In addition to working across on-premises, virtual, and cloud environments, these solutions are deployed with the SafeNet KeySecure enterprise key manager for centralized key and policy management.

Learn More About Data-at-Rest Encryption

A powerful safeguard for data in motion, SafeNet High Speed Encryptors deliver proven and certified Layer 2 encryption capabilities that meet secure network performance demands for real-time, low-latency and near-zero overhead to provide security without compromise.

Learn More About SafeNet High Speed Encryptors

Without a comprehensive data protection platform that includes strong encryption to secure and control access to your high-value information, and centralized enterprise key management to secure, manage, and prove ownership of your keys, your sensitive data is at risk. Gemalto's encryption solutions enable your organization to meet your immediate data protection and business needs now, while investing in a platform that provides robust security, a growing ecosystem, and the scalability you need to build a trusted framework for the future.

With Gemalto's encryption solutions, you can meet a wide variety of use cases, including:


How to Search on Securely Encrypted Database Fields – SitePoint

This post was originally published on the ParagonIE blog and reposted here with their permission.

We [ParagonIE] get asked the same question a lot (or some remix of it).

This question shows up from time to time in open source encryption libraries' bug trackers. This was one of the weird problems covered in my talk at B-Sides Orlando (titled "Building Defensible Solutions to Weird Problems"), and we've previously dedicated a small section to it in one of our white papers.

You know how to search database fields, but the question is, How do we securely encrypt database fields but still use these fields in search queries?

Our secure solution is rather straightforward, but the path between most teams asking that question and discovering our straightforward solution is fraught with peril: bad designs, academic research projects, misleading marketing, and poor threat modeling.

If you're in a hurry, feel free to skip ahead to the solution.

Let's start with a simple scenario (which might be particularly relevant for a lot of local government or health care applications):

Let's first explore the flaws with the obvious answers to this problem.

The most obvious answer to most teams (particularly teams that don't have security or cryptography experts) would be to do something like this:
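Something along these lines, sketched in PHP with OpenSSL (the helper name and the table/column names are illustrative; this is the insecure pattern being criticized, not a recommendation):

```php
<?php
// INSECURE, for illustration only: deterministic AES-ECB so the same plaintext
// always yields the same ciphertext, letting it be matched directly in SQL.
function insecureSearchableEncrypt(string $plaintext, string $key): string
{
    // ECB mode: no IV or nonce, so identical inputs produce identical outputs,
    // and each 16-byte block is encrypted independently of the others.
    return bin2hex(
        openssl_encrypt($plaintext, 'aes-128-ecb', $key, OPENSSL_RAW_DATA)
    );
}

// The (flawed) search idea: encrypt the query value the same way and compare
// ciphertexts in a WHERE clause.
// $stmt = $pdo->prepare('SELECT * FROM humans WHERE ssn = ?');
// $stmt->execute([insecureSearchableEncrypt($ssn, $key)]);
```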

In the above snippet, the same plaintext always produces the same ciphertext when encrypted with the same key. But more concerning with ECB mode is that every 16-byte chunk is encrypted separately, which can have some extremely unfortunate consequences.

Formally, these constructions are not semantically secure: If you encrypt a large message, you will see blocks repeat in the ciphertext.

In order to be secure, encryption must be indistinguishable from random noise to anyone that does not hold the decryption key. Insecure modes include ECB mode and CBC mode with a static (or empty) IV.

You want non-deterministic encryption, which means each message uses a unique nonce or initialization vector that never repeats for a given key.

There is a lot of academic research going into such topics as homomorphic, order-revealing, and order-preserving encryption techniques.

As interesting as this work is, the current designs are nowhere near secure enough to use in a production environment.

For example, order-revealing encryption leaks enough data to infer the plaintext.

Homomorphic encryption schemes are often repackaging vulnerabilities (practical chosen-ciphertext attacks) as features.

As we've covered in a previous blog post, when it comes to real-world cryptography, confidentiality without integrity is the same as no confidentiality. What happens if an attacker gains access to the database, alters ciphertexts, and studies the behavior of the application upon decryption?

There's potential for ongoing cryptography research to one day produce an innovative encryption design that doesn't undo decades of research into safe cryptography primitives and cryptographic protocol designs. However, we're not there yet, and you don't need to invest in a needlessly complicated research prototype to solve the problem.

I don't expect most engineers to arrive at this solution without a trace of sarcasm. The bad idea here is, because you need secure encryption (see below), your only recourse is to query every ciphertext in the database and then iterate through them, decrypting them one-by-one and performing your search operation in the application code.

If you go down this route, you will open your application to denial of service attacks. It will be slow for your legitimate users. This is a cynic's answer, and you can do much better than that, as we'll demonstrate below.

Let's start by avoiding all the problems outlined in the insecure/ill-advised section in one fell swoop: All ciphertexts will be the result of an authenticated encryption scheme, preferably with large nonces (generated from a secure random number generator).

With an authenticated encryption scheme, ciphertexts are non-deterministic (same message and key, but different nonce, yields a different ciphertext) and protected by an authentication tag. Some suitable options include: XSalsa20-Poly1305, XChacha20-Poly1305, and (assuming it's not broken before CAESAR concludes) NORX64-4-1. If you're using NaCl or libsodium, you can just use crypto_secretbox here.
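As a rough sketch of that recommendation in PHP with the libsodium extension (the helper names are illustrative, and keys should come from real key management rather than being hard-coded):

```php
<?php
// Minimal libsodium sketch: authenticated, non-deterministic encryption.
function encryptField(string $plaintext, string $key): string
{
    // A fresh random 24-byte nonce per message: the same plaintext and key
    // still produce a different ciphertext every time.
    $nonce = random_bytes(SODIUM_CRYPTO_SECRETBOX_NONCEBYTES);
    return sodium_bin2hex($nonce . sodium_crypto_secretbox($plaintext, $nonce, $key));
}

function decryptField(string $stored, string $key): string
{
    $raw = sodium_hex2bin($stored);
    $nonce = mb_substr($raw, 0, SODIUM_CRYPTO_SECRETBOX_NONCEBYTES, '8bit');
    $ciphertext = mb_substr($raw, SODIUM_CRYPTO_SECRETBOX_NONCEBYTES, null, '8bit');
    $plaintext = sodium_crypto_secretbox_open($ciphertext, $nonce, $key);
    if ($plaintext === false) {
        // Authentication failed: tampered ciphertext or wrong key.
        throw new RuntimeException('Decryption failed');
    }
    return $plaintext;
}

// $key = sodium_crypto_secretbox_keygen(); // 32-byte secret key
```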

Consequently, our ciphertexts are indistinguishable from random noise, and protected against chosen-ciphertext attacks. That's how secure, boring encryption ought to be.

However, this presents an immediate challenge: We can't just encrypt arbitrary messages and query the database for matching ciphertexts. Fortunately, there is a clever workaround.

Before you begin, make sure that encryption is actually making your data safer. It is important to emphasize that encrypted storage isn't the solution to securing a CRUD app that's vulnerable to SQL injection. Solving the actual problem (i.e. preventing the SQL injection) is the only way to go.

If encryption is a suitable security control to implement, this implies that the cryptographic keys used to encrypt/decrypt data are not accessible to the database software. In most cases, it makes sense to keep the application server and database server on separate hardware.

Possible use-case: Storing social security numbers, but still being able to query them.

In order to store encrypted information and still use the plaintext in SELECT queries, we're going to follow a strategy we call blind indexing. The general idea is to store a keyed hash (e.g. HMAC) of the plaintext in a separate column. It is important that the blind index key be distinct from the encryption key and unknown to the database server.

For very sensitive information, instead of a simple HMAC, you will want to use a key-stretching algorithm (PBKDF2-SHA256, scrypt, Argon2) with the key acting as a static salt, to slow down attempts at enumeration. We aren't worried about offline brute-force attacks in either case, unless an attacker can obtain the key (which must not be stored in the database).

So if your table schema looks like this (in PostgreSQL flavor):
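One plausible shape for that table, sketched here via PDO (the connection details and the columns beyond ssn and ssn_bidx are assumptions for illustration):

```php
<?php
// Hypothetical schema; the ssn / ssn_bidx column names match the text below.
$pdo = new PDO('pgsql:host=localhost;dbname=app', 'appuser', 'password');
$pdo->exec("
    CREATE TABLE humans (
        humanid    BIGSERIAL PRIMARY KEY,
        first_name TEXT,
        last_name  TEXT,
        ssn        TEXT, -- ciphertext of the Social Security Number
        ssn_bidx   TEXT  -- blind index (keyed hash) of the plaintext SSN
    );
    CREATE INDEX ON humans (ssn_bidx);
");
```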

You would store the encrypted value in humans.ssn. A blind index of the plaintext SSN would go into humans.ssn_bidx. A naive implementation might look like this:
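A sketch of what such an implementation could look like, reusing the encryptField()/decryptField() helpers from earlier; the blind index key is separate from the encryption key, and variables like $pdo, the keys, and the input values are assumed to be set up elsewhere:

```php
<?php
// Blind index: a keyed hash of the plaintext, computed with a key the
// database server never sees.
function getBlindIndex(string $plaintext, string $indexKey): string
{
    return hash_hmac('sha256', $plaintext, $indexKey);
}

// Writing a row: store the ciphertext and its blind index side by side.
$stmt = $pdo->prepare(
    'INSERT INTO humans (first_name, last_name, ssn, ssn_bidx) VALUES (?, ?, ?, ?)'
);
$stmt->execute([
    $firstName,
    $lastName,
    encryptField($ssn, $encryptionKey),   // non-deterministic ciphertext
    getBlindIndex($ssn, $blindIndexKey),  // searchable keyed hash
]);

// Searching: hash the user-supplied SSN with the same index key and match on it.
$stmt = $pdo->prepare('SELECT * FROM humans WHERE ssn_bidx = ?');
$stmt->execute([getBlindIndex($userSuppliedSsn, $blindIndexKey)]);
foreach ($stmt->fetchAll(PDO::FETCH_ASSOC) as $row) {
    $plainSsn = decryptField($row['ssn'], $encryptionKey);
    // ... use the decrypted value
}
```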

A more comprehensive proof-of-concept is included in the supplemental material for my B-Sides Orlando 2017 talk. It's released under the Creative Commons CC0 license, which for most people means the same thing as public domain.

Depending on your exact threat model, this solution leaves two questions that must be answered before it can be adopted:

Given our example above, assuming your encryption key and your blind index key are separate, both keys are stored on the web server, and the database server doesn't have any way of obtaining these keys, then any attacker that only compromises the database server (but not the web server) will only be able to learn if several rows share a social security number, but not what the shared SSN is. This duplicate entry leak is necessary in order for indexing to be possible, which in turn allows fast SELECT queries from a user-provided value.

Furthermore, if an attacker is capable of both observing/changing plaintexts as a normal user of the application while observing the blind indices stored in the database, they can leverage this into a chosen-plaintext attack, where they iterate every possible value as a user and then correlate with the resultant blind index value. This is more practical in the HMAC scenario than in the e.g. Argon2 scenario. For high-entropy or low-sensitivity values (not SSNs), the physics of brute force can be on our side.

A much more practical attack for such a criminal would be to substitute values from one row to another then access the application normally, which will reveal the plaintext unless a distinct per-row key was employed (e.g. hash_hmac('sha256', $rowID, $masterKey, true) could even be an effective mitigation here, although others would be preferable). The best defense here is to use an AEAD mode (passing the primary key as additional associated data) so that the ciphertexts are tied to a particular database row. (This will not prevent attackers from deleting data, which is a much bigger challenge.)
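A sketch of that AEAD idea with libsodium's XChaCha20-Poly1305, passing the row's primary key as the additional associated data so a ciphertext copied into another row fails to decrypt (helper names are illustrative):

```php
<?php
// Bind each ciphertext to its row: the row ID is authenticated as associated
// data, but not encrypted.
function encryptForRow(string $plaintext, string $rowId, string $key): string
{
    $nonce = random_bytes(SODIUM_CRYPTO_AEAD_XCHACHA20POLY1305_IETF_NPUBBYTES);
    $ciphertext = sodium_crypto_aead_xchacha20poly1305_ietf_encrypt(
        $plaintext,
        $rowId,   // additional data
        $nonce,
        $key
    );
    return sodium_bin2hex($nonce . $ciphertext);
}

function decryptForRow(string $stored, string $rowId, string $key): string
{
    $raw = sodium_hex2bin($stored);
    $nonce = mb_substr($raw, 0, SODIUM_CRYPTO_AEAD_XCHACHA20POLY1305_IETF_NPUBBYTES, '8bit');
    $ciphertext = mb_substr($raw, SODIUM_CRYPTO_AEAD_XCHACHA20POLY1305_IETF_NPUBBYTES, null, '8bit');
    $plaintext = sodium_crypto_aead_xchacha20poly1305_ietf_decrypt(
        $ciphertext,
        $rowId,   // must match the row ID used at encryption time
        $nonce,
        $key
    );
    if ($plaintext === false) {
        // Wrong row, tampering, or wrong key.
        throw new RuntimeException('Decryption failed');
    }
    return $plaintext;
}
```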

Compared to the amount of information leaked by other solutions, most applications' threat models should find this to be an acceptable trade-off. As long as you're using authenticated encryption for encryption, and either HMAC (for blind indexing non-sensitive data) or a password hashing algorithm (for blind indexing sensitive data), it's easy to reason about the security of your application.

However, it does have one very serious limitation: It only works for exact matches. If two strings differ in a meaningless way, they will still produce different cryptographic hashes, so searching for one will never yield the other. If you need to do more advanced queries, but still want to keep your decryption keys and plaintext values out of the hands of the database server, we're going to have to get creative.

It is also worth noting that, while HMAC/Argon2 can prevent attackers that do not possess the key from learning the plaintext values of what is stored in the database, it might reveal metadata (e.g. two seemingly-unrelated people share a street address) about the real world.

Possible use-case: Encrypting people's legal names, and being able to search with only partial matches.

Let's build on the previous section, where we built a blind index that allows you to query the database for exact matches.

This time, instead of adding columns to the existing table, we're going to store extra index values in a join table.

The reason for this change is to normalize our data structures. You can get by with just adding columns to the existing table, but it's likely to get messy.

The next change is that we're going to store a separate, distinct blind index per column for every different kind of query we need (each with its own key). For example:
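One illustrative pair of indexes on an encrypted name field, one for exact last-name matches and one for "first initial plus last name" matches, each keyed separately and written to a hypothetical humans_filters join table:

```php
<?php
// Per-query blind indexes, each with its own key; $keys, $pdo, $humanId,
// $firstName and $lastName are assumed to be set up elsewhere.
$indexes = [
    // Exact match on last name
    'last_name' => hash_hmac('sha256', strtolower($lastName), $keys['last_name']),
    // Partial match: first initial plus last name
    'first_initial_last_name' => hash_hmac(
        'sha256',
        strtolower($firstName[0] . $lastName),
        $keys['first_initial_last_name']
    ),
];

$stmt = $pdo->prepare(
    'INSERT INTO humans_filters (humanid, filter_label, filter_value) VALUES (?, ?, ?)'
);
foreach ($indexes as $label => $value) {
    $stmt->execute([$humanId, $label, $value]);
}
```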

Every index needs to have a distinct key, and great pains should be taken to prevent blind indices of subsets of the plaintext from leaking real plaintext values to a criminal with a knack for crossword puzzles. Only create indexes for serious business needs, and log access to these parts of your application aggressively.

Thus far, all of the design propositions have been in favor of allowing developers to write carefully considered SELECT queries, while minimizing the number of times the decryption subroutine is invoked. Generally, that is where the train stops and most people's goals have been met.

However, there are situations where a mild performance hit in search queries is acceptable if it means saving a lot of disk space.

The trick here is simple: Truncate your blind indexes to e.g. 16, 32, or 64 bits, and treat them as a Bloom filter:
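A sketch of that truncation, assuming the same HMAC-based blind index as before; the helper name and the 16-bit default are illustrative:

```php
<?php
// Truncated blind index: keep only the first few bytes of the keyed hash.
// Several plaintexts will share a value (false positives), so candidate rows
// are decrypted and filtered in application code after the SELECT.
function truncatedBlindIndex(string $plaintext, string $indexKey, int $bits = 16): string
{
    $hash = hash_hmac('sha256', $plaintext, $indexKey, true);
    $bytes = intdiv($bits + 7, 8); // bit-level masking omitted for simplicity
    return bin2hex(substr($hash, 0, $bytes));
}

// $stmt = $pdo->prepare('SELECT * FROM humans WHERE ssn_bidx = ?');
// $stmt->execute([truncatedBlindIndex($ssn, $blindIndexKey)]);
// ...decrypt each candidate row and keep only the true matches.
```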

It may also be worth converting these values from a string to an integer, if your database server will end up storing it more efficiently.

I hope I've adequately demonstrated that it is not only possible to build a system that uses secure encryption while allowing fast queries (with minimal information leakage against very privileged attackers), but that it's possible to build such a system simply, out of the components provided by modern cryptography libraries with very little glue.

If you're interested in implementing encrypted database storage in your software, we'd love to provide you and your company with our consulting services. Contact ParagonIE if you're interested.
