
The Next And More Profitable 10 Percent Of Server Share For AMD – The Next Platform

When this is all said and done, Intel will deserve some kind of award for keeping its 14 nanometer processes moving along enough as it gets its 10 nanometer and 7 nanometer processes knocked together to still, somehow, manage to retain dominant market share in the server space.

Or, maybe Intel just got lucky that AMD's supply can't even come close to meeting the potential demand it might otherwise have if there were no limits on the fab capacity at Taiwan Semiconductor Manufacturing Co., which etches the core complexes in its Epyc server chips. (Globalfoundries, the spinout of AMD's own foundry mashed up with IBM Microelectronics and Chartered Semiconductor, still makes the memory and I/O hub portions of the Rome Epyc 7002 and Milan Epyc 7003 processors from AMD.)

This time last year, when we took a look at how AMD's share of processor shipments and revenues had grown since its re-entry into the server space back in 2017, the company had just broken through the 10 percent shipment barrier and looked to be on an Opteron-class fast path to 20 percent or maybe even 25 percent. And then the 10 nanometer Ice Lake Xeon SPs were launched, and say what you will about how the Rome chips beat them and the Milan chips hammer them, you go to the datacenter with the server chip that you have, paraphrasing former Secretary of Defense Donald Rumsfeld, who, when under fire for the sorry state of US Army equipment during the Iraq War, quipped with a certain amount of pique that you go to war with the army that you have.

If you look at the data from Mercury Research, which does a fabulous job of watching the competition between AMD and Intel, you will see that AMD has a big jump in server CPU shipment share and then it levels off a bit or even declines some and then once it gets its footing, it blasts up a few points to a new level and repeats this entallening sawtooth shape again and again.

And so, as the fourth quarter came to a close, according to Mercury Research, AMD shipped 1.13 million server CPUs, an increase of 82.9 percent over the year-ago period, which is great. But the overall market for server CPUs pushed into the channel (not consumed by the customers on the other side of the channel) rose by 29.9 percent to 8.85 million units. And thus, Intel was still able to grow at almost the same pace as the market at large, rising 24.6 percent to 7.71 million units. That gave Intel a record quarter for server CPU shipments, which drove server revenues to new highs as well despite a 5 percent decline in sales to hyperscalers and cloud builders and thanks in large part to a return to spending by enterprises and telco and communications service providers. We suspect that there has been a little channel stuffing on the part of Intel, and a whole lot of component bundling (which is price cutting that doesn't affect the Data Center Group revenue line but does cut into the revenues and profits of the Data Center Group adjacencies such as switch ASICs, network interface cards, silicon photonics (mostly Ethernet optical transceivers), and Optane storage). Such pricing tactics will all come home to roost unless server demand keeps on growing and Intel does a good job with the forthcoming 10 nanometer Sapphire Rapids Xeon SPs due by the second quarter.

To stick with the war metaphor, Intel's 14 nanometer infantry with some 10 nanometer tanks was able to hold the line against AMD's tactical assault teams and sharpshooters. And there are some 10 nanometer gunships on their way as AMD brings a new class of weapons to bear with the Genoa Epyc 7004s. If AMD had more capacity, it would be eating more share. There is no question about that. But we think that both Intel and AMD are happy to manage capacity close enough to demand to be able to still have shortages that cause prices to hold or even increase, particularly with high end SKUs in their CPU lines. And they will stretch out product launches to the breaking point, and can always let the hyperscalers and cloud builders have these parts on the sly and charge a premium for that, too.

In other words, server buyers, none of you are in the driver's seat. TSMC and Intel Foundry Services are, and they are calling the tune on pricing and setting the pace on shipments, and if you need a server, you are going to the datacenter with the CPUs that you have.

Anyway, back to statistics. When you do the math, AMD had 12.9 percent shipment share in the fourth quarter of 2021, according to Mercury Research, and that is barely three-tenths of a point higher than the share it had in Q3, mimicking the same tepid growth in share it had in the prior year at the same time.
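For readers who want to check that math, here is a minimal sketch using the rounded unit figures quoted above; Mercury Research works from unrounded data, so the result lands a shade below the reported 12.9 percent.

```python
# Back-of-the-envelope share check using the rounded shipment figures quoted above.
# Mercury Research computes its shares from unrounded data, so expect a small
# discrepancy (roughly 12.8 percent here versus the reported 12.9 percent).
amd_units = 1.13e6      # AMD server CPU shipments, Q4 2021
total_units = 8.85e6    # total X86 server CPUs pushed into the channel, Q4 2021

amd_share = 100 * amd_units / total_units
print(f"AMD shipment share: {amd_share:.1f}%")   # ~12.8%
```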

It is hard to say what will happen as Genoa Epyc 7004s hit the market and possibly more revenue is recognized for the Trento custom Epyc processors used in the Frontier supercomputer at Oak Ridge National Laboratory (we think a lot of this was done in Q2, Q3, and Q4 of 2021, but AMD has not said how this works).

What we will observe is that AMD's revenue share has been outpacing its shipment share since Q2 2021, which is when that revenue recognition for Frontier might have started. And given that the major supercomputing centers usually pay less than half of list price for CPUs and GPUs, based on estimates we have done, then this revenue recognition would actually have hurt AMD's revenue share. And so if AMD's share of X86 server chip market revenues is higher, that means Intel's average revenue per chip is trending up more slowly than AMD's is. (Both are trending up as customers increasingly buy up the stack.) We will be watching this revenue expansion rate carefully. In the fourth quarter, for instance, AMD captured 14.4 percent share of the $7.48 billion in X86 server CPU sales, and it had the same share of the $6.65 billion in X86 server CPU sales in Q3 2021, according to Mercury Research.
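As a rough, hedged sketch of what those figures imply for average revenue per chip, the numbers quoted above work out as follows; this assumes the X86 server CPU market is effectively just AMD plus Intel and inherits the rounding in the reported figures.

```python
# Implied average selling prices from the rounded Q4 2021 figures quoted above.
# Assumes the X86 server CPU market is effectively AMD plus Intel.
total_revenue = 7.48e9       # X86 server CPU sales, Q4 2021
amd_revenue_share = 0.144
amd_units = 1.13e6
intel_units = 7.71e6

amd_revenue = total_revenue * amd_revenue_share
intel_revenue = total_revenue - amd_revenue

print(f"AMD implied ASP:   ${amd_revenue / amd_units:,.0f}")      # roughly $950
print(f"Intel implied ASP: ${intel_revenue / intel_units:,.0f}")  # roughly $830
```

The gap between those two implied average selling prices is what the revenue-share-versus-shipment-share comparison is really measuring.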

It is even harder to say what will happen out towards the end of 2022 and through 2023 and into 2024. By that time, the server market could be consuming somewhere close to 10 million X86 server CPUs a quarter, and maybe even 500,000 or more Arm server CPUs (why not?), with the X86 servers generating somewhere close to $9 billion in revenues per quarter. We have a long way to go to get to that point, but when it is done, if current trends persist, it is not hard to see AMD having somewhere north of 20 percent shipment share and close to 25 percent revenue share of the X86 market, which is going to continue to grow even if at a much slower pace than it is doing now.

This will be fun to watch unfold, quarter by quarter, shot by shot.

Read the original here:
The Next And More Profitable 10 Percent Of Server Share For AMD - The Next Platform


Kubernetes on Bare Metal vs. VMs: It’s Not Just Performance The New Stack – thenewstack.io

Too often, the debate about running Kubernetes on bare metal versus virtual machines is overly simplistic. There's more to it than a trade-off between the relative ease of management you get with VMs and the performance advantage of bare metal. (The latter, in fact, isn't huge nowadays, as I'll explain below.)

I'm going to attempt to walk through the considerations at play. As you will see, while I tend to believe that Kubernetes on bare metal is the way to go for most use cases, there's no simple answer.

Off the bat, let's address the performance vs. ease-of-use question.

Andy Holtzmann

Andy is a site reliability engineer at Equinix and has been running Kubernetes on bare metal since v1.9.3 (2018). He has run production environments with up to 55 bare-metal clusters, orchestrated Kubernetes installs on Ubuntu, CentOS and Flatcar Linux, and recently helped accelerate the bring-up of Equinix Metal's Kubernetes platform to under one hour per new greenfield facility. Andy joined Equinix after working in senior software engineer roles at Twilio and SendGrid.

Yes, VMs are easier to provision and manage, at least in some ways. You don't need to be concerned with details of the underlying server hardware when you can set up nodes as VMs and orchestrate them using the VM vendor's orchestration tooling. You also get to leverage things like golden images to simplify VM provisioning.

On the other hand, if you take a hypervisor out of the picture, you don't spend hardware resources running virtualization software or guest operating systems. All of your physical CPU and memory can be allocated to business workloads.

But it's important not to overstate this performance advantage. Modern hypervisors are pretty efficient. VMware reports hypervisor overhead rates of just 2 percent compared to bare metal, for example. You have to add the overhead cost of running guest operating systems on top of that number, but still, the raw performance difference between VMs and bare metal can be negligible, at least when you're not trying to squeeze every last bit of compute power from your infrastructure. (There are cases where that 2% difference is meaningful.)

When it's all said and done, virtualization is going to reduce total resource availability for your pods by about 10% to 20%.
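To see how you get from a roughly 2 percent hypervisor figure to a 10% to 20% total, here is a minimal sketch; the guest-OS overhead number is an assumption for illustration, not a benchmark.

```python
# Rough sketch of how hypervisor plus guest-OS overhead eats into pod-allocatable
# resources. The guest-OS fraction is an illustrative assumption, not a measurement.
physical_cores = 64
physical_memory_gib = 256

hypervisor_overhead = 0.02   # ~2 percent hypervisor overhead cited above
guest_os_overhead = 0.10     # assumed cost of running guest OSes across the VMs

usable = (1 - hypervisor_overhead) * (1 - guest_os_overhead)
print(f"Cores available to pods:  {physical_cores * usable:.1f}")
print(f"Memory available to pods: {physical_memory_gib * usable:.1f} GiB")
print(f"Total reduction:          {(1 - usable) * 100:.0f}%")  # ~12%, within the 10-20% band
```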

Now, let's get into all the other considerations for running Kubernetes on bare metal versus Kubernetes on VMs. First, the orchestration element. When you run your nodes as VMs, you need to orchestrate those VMs in addition to orchestrating your containers. As a result, a VM-based Kubernetes cluster has two independent orchestration layers to manage.

Obviously, each layer is orchestrating a different thing, so, in theory, this shouldn't cause problems. In practice, it often does. For example, imagine you have a failed node and both the VM-level orchestrator and the Kubernetes orchestrator are trying to recover from the failure at the same time. This can lead to your orchestrators working at cross purposes because the VM orchestrator is trying to stand up a server that crashed, while Kubernetes is trying to move pods to different nodes.

Similarly, if Kubernetes reports that a node has failed but that node is a VM, you have to figure out whether the VM actually failed or the VM orchestrator simply removed it for some reason. This adds operational complexity, as you have more variables to work through.

You don't have these issues with Kubernetes on bare metal server nodes. Your nodes are either fully up or they're not, and there are no orchestrators competing for the node's attention.

Another key advantage of running Kubernetes on bare metal is that you always know exactly what you're getting in a node. You have full visibility into the physical state of the hardware. For example, you can use diagnostics tools like SMART to assess the health of hard disks.
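As a small illustration of that hardware visibility, here is a sketch that shells out to smartctl from the smartmontools package; the device path is just an example, the tool has to be installed on the node, and the command typically needs root privileges.

```python
# Minimal sketch: query SMART health for a disk on a bare-metal node.
# Assumes smartmontools is installed; /dev/sda is only an example device path.
import subprocess

def disk_is_healthy(device: str = "/dev/sda") -> bool:
    result = subprocess.run(
        ["smartctl", "-H", device],   # -H prints the overall health assessment
        capture_output=True,
        text=True,
    )
    return "PASSED" in result.stdout

if __name__ == "__main__":
    print("healthy" if disk_is_healthy() else "check this disk")
```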

VMs don't give you much insight about the physical infrastructure upon which your Kubernetes clusters depend. You have no idea how old the disk drives are, or even how much physical memory or how many CPU cores exist on the physical servers. You're only aware of the VMs' virtual resources. This makes it harder to troubleshoot issues, contributing again to operational complexity.

For related reasons, bare metal takes the cake when it comes to capacity planning and rightsizing.

There are a fair number of nuances to consider on this front. Bare metal and virtualized infrastructure support capacity planning differently, and there are various tools and strategies for rightsizing everything.

But at the end of the day, it's easier to get things exactly right when planning bare metal capacity. The reason is simple enough: With bare metal, you can manage resource allocation at the pod level using cgroups in a hyper-efficient, hyper-reliable way. Using tools like the Kubernetes vertical autoscaler, you can divvy up resources down to the millicore based on the total available resources of each physical server.

That's a luxury you don't get with VMs. Instead, you get a much cruder level of capacity planning because the resources that can be allocated to pods are contingent on the resource allocations you make to the VMs. You can still use cgroups, of course, but you'll be doing it within a VM that doesn't know what resources exist on the underlying server. It only knows what it has been allocated.

You end up having to oversize your VMs to account for unpredictable changes in workload demand. As a result, your pods don't use resources as efficiently, and a fair amount of the resources on your physical server will likely end up sitting idle much of the time.
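To make the millicore point concrete, here is a hedged sketch of carving up a bare-metal node's CPU into pod requests; the node size, reserved capacity and pod count are illustrative assumptions, and the dict at the end simply mirrors the shape of a pod spec's resources block.

```python
# Sketch: carve a bare-metal node's CPU into millicore-level pod requests.
# Node size, reserved capacity and pod count are illustrative assumptions.
node_cores = 32
system_reserved_millicores = 500          # e.g. kubelet and OS daemons
allocatable = node_cores * 1000 - system_reserved_millicores

pods_per_node = 25
per_pod_request = allocatable // pods_per_node
print(f"Allocatable: {allocatable}m, per-pod request: {per_pod_request}m")

# The resulting request, in the shape it would take in a pod spec's resources block:
resources = {"requests": {"cpu": f"{per_pod_request}m", "memory": "512Mi"}}
print(resources)
```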

Another factor that should influence your decision to run Kubernetes on bare metal versus VMs is network performance. It's a complex topic, but essentially, bare metal means less abstraction of the network, which leads to better network performance.

To dig a level deeper, consider that with virtual nodes you have two separate kernel networking stacks per node: one for the VMs and another for the physical host. There are various techniques for negotiating traffic between the two stacks (packet encapsulation, NAT and so on), and some are more efficient than others (hint: NAT is not efficient at all). But at the end of the day, they each exact some kind of performance hit. They also add a great deal of complexity to network management and observability.

Running on bare metal, where you have just one networking stack to worry about, you don't waste resources moving packets between physical and virtual machines, and there are fewer variables to sort through when managing or optimizing the network.

Granted, managing the various networks that exist within Kubernetes (which partially depends on the container network interface, or CNI, you use) does add some overhead. But it's minor compared to the overhead that comes with full-on virtualization.

As I've already implied, the decision between Kubernetes on bare metal and Kubernetes on VMs affects the engineers who manage your clusters.

Put simply, bare metal makes operations, and hence your engineers' lives, simpler in most ways. Beyond the fact that there are fewer layers and moving parts to worry about, a bare-metal environment reduces the constraints under which your team works. They don't have to remember that VMs only support X, Y and Z configurations or puzzle over whether a particular version of libvirt supports a feature they need.

Instead, they simply deploy the operating system and packages and get to work. It's easier to set up a cluster, and it's much easier to manage operations for it over the long term when you're dealing solely with bare metal.

Let me make clear that I do believe there are situations where running Kubernetes on VMs makes sense.

One scenario is when you're setting up small-scale staging environments, where performance optimization is not super important. Getting the most from every millicore is not usually a priority for this type of use case.

Another situation is when you work in an organization that is already very heavily wedded to virtualized infrastructure or particular virtualization vendors. In this case, running nodes as VMs simply poses less of a bureaucratic headache. Or maybe there are logistical challenges with acquiring and setting up bare metal servers. If you can self-service some VMs in a few minutes, versus taking months to get physical servers, just use the VMs if it suits your timeline better. Your organization may also be wedded to a managed Kubernetes platform offered by a cloud provider that only runs containers on VMs. That said, Anthos, Google Cloud's managed hybrid multicloud Kubernetes offering, supports bare-metal deployments, and so does Red Hat's OpenShift. AWS's EKS Anywhere bare metal support is coming later this year.

In general, you should never let a dependency on VMs stop you from using Kubernetes. It's better to take advantage of cloud native technology than to be stuck in the past because you can't have the optimal infrastructure.

VMs clearly have a place in many Kubernetes clusters, and that will probably never change. But when it comes to questions like performance optimization, streamlining capacity management or reducing operational complexity, Kubernetes on bare metal comes out ahead.


Read the original post:
Kubernetes on Bare Metal vs. VMs: It's Not Just Performance The New Stack - thenewstack.io


Expect sales reps’ calls if IT wants to ditch Oracle – The Register

Oracle executives brief clients against plans to move away from Big Red's technology platforms, it is alleged.

A recent webinar by Palisade Compliance heard that it took "guts" for enterprise customers to make the decision to move away from Oracle technology as its senior salespeople would call customer CEOs and board members and brief against IT management proposing such a move.

Craig Guarente, an advisor in Oracle licensing and compliance issues, said: "You need the courage to do something different, because you're going to have 20 Oracle reps telling you why it's a mistake, and they're going to call your CEO. They're going to call your board and do whatever they need to make you [change course]."

Guarente, CEO of Palisade Compliance and former Oracle veep, said customers would often complain that no matter how they try to reduce their reliance on Big Red's technology, "the Oracle calculator only has a plus button."

"Sometimes companies get in distress and they're shrinking and Oracle says, 'Yeah, but you still have to pay me, I know you only have half the users and half the capacity but you still have to pay me and we're going to raise prices because of inflation.' That really frustrates companies," he said.

The webinar was also hosted by Ed Boyajian, CEO of EDB, a software company backing the open-source database PostgreSQL. He said large customers had moved away from Oracle to PostgreSQL but that it often required top-level support.

"Our biggest customers very large-scale enterprise-wide Postgres users report needing a strategic drive to change. That intersected the C-suite: there is a common theme that it takes a strong commitment at that level. Because people are always afraid of the risk of the unknown."

We have asked Oracle to comment.

Big Red has argued that its approach to the cloud has offered a way of integrating with the on-prem world. In 2020, it launched an on-premises cloud product, Oracle Dedicated Region Cloud, completely managed by Oracle, using the same architecture, cloud services, APIs, and SLAs as its equivalent regional public and private clouds.

"Customers can think of it as their own private cloud running inside their data centre, or they will also see it as a hybrid cloud, given that this the exact same thing we offer in a public cloud," said Regis Louis, exec veep of product management for Oracle Cloud Platform in EMEA.

Meanwhile, Oracle also claims to innovate with tight integration between hardware and software supporting the performance of its Exadata products. Big Red claims its beefed-up Exadata X9M, launched last year, provides online transaction processing (OLTP) with more than 70 per cent higher input/output operations per second (IOPS) than its earlier release.

But some customers have trodden the path away from the dominant application and database vendor. EDB claims to offer tools that smooth the migration to PostgreSQL, plus the option of moving applications without rewriting them.

Speaking to The Register in 2020, Ganadeva Bandyopadhay, associate vice president of IT at TransUnion CIBIL, described the migration from Oracle to Postgres EDB.

The company was looking to revamp older applications based on "rapidly outgoing concepts like heavy database servers with a lot of business logic within the database code," Bandyopadhay said.

The credit information company operating in India found its Oracle licences were being underused, but the rigidity in the rules made it difficult to move them onto different virtual instances and convert from the processor-based to the "Named User Plus" licensing.

Starting from 2015, Bandyopadhay and his team wanted to remove the business logic from the main database, improving performance and flexibility in the architecture, something he said would have been difficult to do with Oracle.

"It was nothing against Oracle, but our logic was to address Oracle features which are built within the database," he said. "There is a cost to that which we had accepted for a long time, but with the changing expectations [from the business], we had to really revamp and flatten out the databases and put the business logic into somewhere else in the middle tier," he said.

After completing the migration in 2017, Bandyopadhay's team found the Postgres EDB-based system achieved higher throughput at lower licensing costs than Oracle, but not before reskilling its internal IT team.

Read more:
Expect sales reps' calls if IT wants to ditch Oracle - The Register


Threat Actors Organize: Welcome to Ransomware Inc. – Virtualization Review


"Many people still think of a ransomware actor as the proverbial 400-pound hacker in his mom's basement -- nothing could be further from the truth," says in-the-trenches security expert Allan Liska. "There are a number of cottage industries that have sprung up in support of ransomware."

In fact, the intelligence analyst at Recorded Future outlined a businesslike environment and workflow he has discerned from his more than 20 years in the IT security field, most recently focused on ransomware.

"In fact, the leader of a ransomware group is often nothing more than a 'marketing' person whose sole purpose is to get more affiliates for the group," said Liska, who is known as the "Ransomware Sommelier."

He shared his thoughts with Virtualization & Cloud Review following his presentation in a recent multi-part online event titled "Modern Ransomware Defense & Remediation Summit," now available for on-demand viewing.

It's no surprise Liska started off discussing initial access brokers, as he has become somewhat of a specialist in that area. For example, last year he took to Twitter to lead a crowdsourcing effort to create a one-stop shop for a list of initial access vulnerabilities used by ransomware attackers, as we explained in the article "'Ransomware Sommelier' Crowdsources Initial Access Vulnerability List."

Of course, organized ransomware has been a known thing for a while now, with even nation-state actors getting in on the action, but Liska and other security experts indicate the bad guys are getting more sophisticated.

"Outsourcing the initial access to an external entity lets attackers focus on the execution phase of an attack without having to worry about how to find entry points into the victim's network," said an article last summer in Infosecurity Magazine titled "The Rise of Initial Access Brokers," which noted the flourishing market often sees compromised VPN or RDP accounts as network inroads, along with other exposed remote services like SSH.

Digital Shadows also charted "The Rise of Initial Access Brokers" a year ago, complete with a chart showing popular access types and their average prices (note that prices have likely gone up with the recent inflation spike).

Liska detailed the initial access scene in his opening presentation, titled "The Current Ransomware Threat Landscape & What IT Pros Must Know."

"So one of the things that you have to understand with ransomware is it's generally not the ransomware actor that's gaining that initial access," he explained. "There are other criminals that are called initial access brokers, and they're the ones who generally gain that access. And then they turn around and they sell it to the ransomware actors themselves, whether it's to the operator of the ransomware-as-a-service offering, or whether it's one of their affiliates that people that sign up to be able to deploy their ransomware.


"So when you're talking about an attack like this, you're generally talking about two different types of actors: one to get the initial access and one that turns around and sells it. Think of it like flipping houses, except you're flipping networks. You're turning that network over to a ransomware actor who's then going to deploy the ransomware. And they generally sell that initial access from anywhere from a couple thousand to 10, 15, even 100,000, depending on the type of access they're able to get -- so if you have administrator access -- and the size of the network. But you know, the thing is, if you're a ransomware actor, it's still a good investment. Because if you're confident you can deploy the ransomware you're gonna make way more than what you're paying for that initial access."

Liska explained he and other security experts are seeing four primary initial access vectors: credential stuffing/reuse; phishing; third-party access; and exploitation.

Phishing was the most popular vector throughout 2019 and 2020, Liska said, but RDP (Remote Desktop Protocol) -- "low hanging fruit" -- is gaining traction. Here are Liska's thoughts on RDP, third-party attacks and exploitation:

RDP: "Ransomware Deployment Protocol"?"What we're starting to see in 2021 -- and we expect this to continue into 2022 -- is that credential stuffing and credential reuse attacks are becoming much more common," Liska said. "In fact, we kind of have a joke in the industry that RDP actually stands for ransomware deployment protocol, instead of what it actually means, only because RDP is one of the most common entry methods. Because it's so easy for these initial access brokers to just fire up an old laptop and start scanning, looking for open RDP connections, and then trying credential stuffing/credential reuse attacks. You have to keep in mind, there are literally billions of credentials that are being sold on underground markets.

"So while it seems like a credential use attack would be a challenge, it really isn't. You connect to the RDP server, you see what network it belongs to, you search on Genesis market or one of the other markets for usernames and passwords that match it. And then you try those -- you get 100 of them -- you try them and unfortunately, most the time, they will find a match, and they'll be able to gain access. That's why Multi-Factor Authentication is so important for any system that's exposed to the internet."

Third-Party Attacks"These are increasingly common," Liska said. "We really saw this take off in 2021. So a ransomware actor, or the initial access broker, gains access to a managed service provider, or a vendor of some kind. And rather than [deploy] the ransomware on that vendor, what they do is they use that access to jump to those partners. They find it's really easy, because you get to start right in the gooey center, and work your way out. So we're seeing a big increase in that. And again, that goes with the sophistication and increasing sophistication of the ransomware access."

Exploitation"And then exploitation is also growing in popularity," Liska continued. "So, you know, in the last year, we catalogued more than 40 different exploits that were used by ransomware groups or the initial access focus in order to gain that first access. So it's really, really important that you're patching. Again, anything that's public facing, especially anything that has proof of concept code, or anything like that release, has to be patched immediately."

RaaS: Ransomware-as-a-Service

One striking fact that speaks to the businesslike organization of ransomware is the number of RaaS operations that have sprung up around the globe, as a chart in Liska's presentation showed.

Cybersecurity specialist Rubrik, in a ransomware compendium, says of RaaS: "Criminals don't have to create their viruses anymore. Developers will create ransomware for a fee or share of the profits, creating a whole new industry that caters to ransomware." Also, the company noted a growing ecosystem of dedicated infrastructure has formed to support ransomware, including "bulletproof" hosts who will refuse to take criminal users offline, along with dedicated networks to help criminals avoid anti-virus software and move and hide virtual currency payments.

See the original post:
Threat Actors Organize: Welcome to Ransomware Inc. - Virtualization Review


Duke University and IonQ Develop New Quantum Computing Gate – HPCwire

DURHAM, N.C. & COLLEGE PARK, Md., Feb. 10, 2022 Today, the Duke Quantum Center (DQC) at Duke University and IonQ announced the invention of a new quantum computing operation with the potential to accelerate several key quantum computing techniques and contribute to scaling quantum algorithms. The new quantum gate is a novel way to operate on many connected qubits at once and leverages the multi-qubit communication bus available only on IonQ and DQC quantum computers. Full details of the gate technique can be found on the preprint archive arXiv at arXiv:2202.04230.

The new gate family includes the N-qubit Toffoli gate, which flips a select qubit if and only if all the other qubits are in a particular state. Unlike standard two-qubit quantum computing gates, the N-qubit Toffoli gate acts on many qubits at once, leading to more efficient operations. The gate appears naturally in many common quantum algorithms.
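To illustrate what the gate does, here is a generic textbook construction of the N-qubit Toffoli unitary in NumPy; this is only a mathematical illustration, not the trapped-ion implementation described in the paper.

```python
# Generic N-qubit Toffoli: flip the target qubit if and only if every control is |1>.
# A textbook matrix construction for illustration, not IonQ's trapped-ion gate.
import numpy as np

def n_qubit_toffoli(n: int) -> np.ndarray:
    dim = 2 ** n
    gate = np.eye(dim)
    # Only the basis states |1...10> and |1...11> are affected; swap them.
    gate[[dim - 2, dim - 1]] = gate[[dim - 1, dim - 2]]
    return gate

toffoli = n_qubit_toffoli(4)                 # three controls, one target
state = np.zeros(16)
state[0b1110] = 1.0                          # controls all |1>, target |0>
print(np.argmax(toffoli @ state) == 0b1111)  # True: the target qubit was flipped
```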

IonQ and Duke's discovery may lead to significant efficiency gains in solving fundamental quantum algorithms, such as Grover's search algorithm, variational quantum eigensolvers (VQEs), and arithmetic operations like addition and multiplication. These use cases are ubiquitous across quantum computing applications, and are core to IonQ's work in quantum chemistry, quantum finance, and quantum machine learning. They are also key components of commonly accepted industry benchmarks for quantum computers, which have already shown IonQ's computers to be industry leaders.

"This discovery is an example of us continuing to build on the leading technical architecture we've established. It adds to the unique and powerful capabilities we are developing for quantum computing applications," said Peter Chapman, CEO at IonQ.

This research, conducted at Duke by Dr. Or Katz, Prof. Marko Cetina, and IonQ co-founder and Chief Scientist Prof. Christopher Monroe, will be integrated into IonQ's quantum computing operating system for the general public to use. Monroe notes that no other available quantum computing architectures, not even other ion-based quantum computers, are able to utilize this new family of N-qubit gates. This is because IonQ's quantum computers uniquely feature full connectivity and a wide communication bus that allows all qubits to talk to each other simultaneously.

This discovery follows a series of announcements around IonQ's research efforts and preparations for scale. In December, IonQ announced that it plans to use barium ions as qubits in its systems, bringing about a wave of advantages it believes will enable advanced quantum computing architectures. Last year, the team also debuted the industry's first Reconfigurable Multicore Quantum Architecture and Evaporated Glass Trap technology, both of which are expected to contribute to scaling the number of qubits in IonQ's quantum computers.

About IonQ

IonQ, Inc. is a leader in quantum computing, with a proven track record of innovation and deployment. IonQ's next-generation quantum computer is the world's most powerful trapped-ion quantum computer, and IonQ has defined what it believes is the best path forward to scale.

IonQ is the only company with its quantum systems available through the cloud on Amazon Braket, Microsoft Azure, and Google Cloud, as well as through direct API access. IonQ was founded in 2015 by Christopher Monroe and Jungsang Kim based on 25 years of pioneering research. To learn more, visit www.ionq.com.

Source: IonQ

Continue reading here:
Duke University and IonQ Develop New Quantum Computing Gate - HPCwire


Quantum computing venture backed by Jeff Bezos will leap into public trading with $1.2B valuation – GeekWire

A team member at D-Wave Systems, based in Burnaby, B.C., works on the dilution refrigerator system that cools the processors in the company's quantum computer. (D-Wave Systems Photo / Larry Goldstein)

Burnaby, B.C.-based D-Wave Systems, the quantum computing company that counts Jeff Bezos among its investors and NASA among its customers, has struck a deal to go public with a $1.2 billion valuation.

The deal involves a combination with DPCM Capital, a publicly traded special-purpose acquisition company, or SPAC. It's expected to bring in $300 million in gross proceeds from DPCM's trust account, plus $40 million in gross proceeds from investors participating in a PIPE arrangement. (PIPE stands for private investment in public equity.)

Quantum computing takes advantage of phenomena at the quantum level, processing qubits that can represent multiple values simultaneously as opposed to the one-or-zero paradigm of classical computing. The approach is theoretically capable of solving some types of problems much faster than classical computers.

Founded in 1999, D-Wave has focused on a type of technology called quantum annealing, which uses quantum computing principles and hardware to tackle tasks relating to network optimization and probabilistic sampling.

Physicists have debated whether D-Wave's Advantage system should be considered an honest-to-goodness quantum computer, but the company says that question has been settled by research that, among other things, turned up signatures of quantum entanglement. D-Wave is included among the quantum resources offered by Amazon and Microsoft, and it also has its own cloud-based platform, known as Leap.

The SPAC deal has already been cleared by the boards of directors for D-Wave and DPCM Capital. If the transaction proceeds as expected, with approval by DPCM's stockholders, it should close by midyear. The result would be a combined company called D-Wave Quantum Inc. that would remain headquartered in Burnaby, a suburb of Vancouver, B.C., and trade on the New York Stock Exchange under the QBTS stock symbol.

"Today marks an inflection point signaling that quantum computing has moved beyond just theory and government-funded research to deliver commercial quantum solutions for business," D-Wave CEO Alan Baratz said in a news release.

Among the investors involved in the PIPE transaction are PSP Investments, NEC Corp., Goldman Sachs, Yorkville Advisors and Aegis Group Partners. Other longtime D-Wave investors include Bezos Expeditions as well as In-Q-Tel, a venture capital fund backed by the CIA and other intelligence agencies.

In what was described as an innovative move, the SPAC deal sets aside a bonus pool of 5 million shares for DPCMs non-redeeming public stockholders.

D-Wave says it will use the fresh funding to accelerate its delivery of in-production quantum applications for its customers, and to build on a foundation of more than 200 U.S. patents. The company is aiming to widen its offerings beyond quantum annealing by developing more versatile gate-model quantum computers.

Emil Michael, DPCM Capital's CEO, said the total addressable market for quantum computing services could amount to more than $1 billion in the near term, and rise to $150 billion as applications mature.

"While quantum computing is complex, its value and benefits are quite simple: finding solutions to problems that couldn't be previously solved, or solving problems faster with more optimal results," Michael said. "D-Wave is at the forefront of developing this market, already delivering the significant benefits of quantum computing to major companies across the globe."

Continue reading here:
Quantum computing venture backed by Jeff Bezos will leap into public trading with $1.2B valuation - GeekWire


Global $1.6 Billion Quantum Computing Technologies and Markets to 2026 – PRNewswire

DUBLIN, Feb. 10, 2022 /PRNewswire/ -- The "Quantum Computing: Technologies and Global Markets to 2026" report has been added to ResearchAndMarkets.com's offering.

The global quantum computing technologies market should reach $1.6 billion by 2026 from $390.7 million in 2021 at a compound annual growth rate (CAGR) of 33.2% for the forecast period of 2021 to 2026.
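As a quick sanity check of those figures (the $1.6 billion end point is rounded in the report summary, so the computed rate lands a little below the stated 33.2 percent CAGR):

```python
# CAGR sanity check for the market figures above. The rounded $1.6 billion end point
# makes the result land slightly below the reported 33.2 percent.
start, end, years = 390.7e6, 1.6e9, 5   # 2021 -> 2026

cagr = (end / start) ** (1 / years) - 1
print(f"Implied CAGR: {cagr * 100:.1f}%")   # ~32.6%
```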

Report Scope

This report provides an overview of the global market for quantum computing and analyzes market trends. Using 2020 as the base year, the report provides estimated market data for the forecast period 2021 through 2026. Revenue forecasts for this period are segmented based on offering, deployment, technology, application, end-user industry and region.

Quantum computing is the gateway to the future. It can revolutionize computation by making certain types of classically stubborn problems solvable. Currently, no quantum computer is mature enough to perform calculations that traditional computers cannot, but great progress has been made in the last few years. Several large and small start-ups are using non-error-corrected quantum computers made up of dozens of qubits, some of which are even publicly accessible via the cloud. Quantum computing helps scientists accelerate their discoveries in related areas, such as machine learning and artificial intelligence.

Early adoption of quantum computers in the banking and financial industries, increased investment in quantum computing technology, and the rise of numerous strategic partnerships and collaborations are the main drivers behind the market growth.

The trend towards strategic approaches such as partnerships and collaborations is expected to continue. As quantum computer vendors move to quantum development, the consumer industries will seek to adopt current and new quantum technologies to gain a competitive advantage. The technological hurdles in the implementation of the quantum systems, as well as the lack of quantum skills, can limit the market growth. However, increasing adoption of quantum technology in healthcare, increasing demand for computing power, and the introduction of cloud-based quantum computing services are expected to open up new market opportunities during the forecast period.

Between 2021 and 2026, many companies with optimization problems may adopt a hybrid approach where some of the problems are handled by classical computing and the rest by quantum computers. The demand for quantum computers is expected to grow from multiple end-user industries, from finance to pharmaceuticals, automobiles to aerospace. Many industries, such as banks, are now using cloud-based quantum services.

There is no doubt that quantum computers will be expensive machines to develop and will be operated by a small number of key players. Companies like Google and IBM plan to double the performance of quantum computers each year. In addition, a small but important cohort of promising start-ups is steadily increasing the number of qubits a computer can process. This creates an immersive opportunity for the global quantum computing market growth in the coming years.

This report has divided the global quantum computing market based on offering, technology, deployment, application, end-user industry, and region. Based on offering, the market is segmented into systems and services. The services segment held the largest market share, and it is expected to register the highest CAGR during the forecast period. The services segment includes quantum computing as a service (QCaaS) and consulting services.

The report also focuses on the major trends and challenges that affect the market and the competitive landscape. It explains the current market trends and provides detailed profiles of the major players and the strategies they adopt to enhance their market presence. The report estimates the size of the global quantum computing market in 2020 and provides projections of the expected market size through 2026.

Competitive Landscape

Company profiles of the key industry players include

Patent Analysis

For more information about this report visit https://www.researchandmarkets.com/r/o1td8j

Media Contact:

Research and Markets Laura Wood, Senior Manager [emailprotected]

For E.S.T Office Hours Call +1-917-300-0470 For U.S./CAN Toll Free Call +1-800-526-8630 For GMT Office Hours Call +353-1-416-8900

U.S. Fax: 646-607-1907 Fax (outside U.S.): +353-1-481-1716

SOURCE Research and Markets

See original here:
Global $1.6 Billion Quantum Computing Technologies and Markets to 2026 - PRNewswire


Postdoctoral Research Associate in Quantum Algorithms for Fluid Simulations job with DURHAM UNIVERSITY | 281136 – Times Higher Education (THE)

Department of Physics

Grade 7: £34,304 - £36,382
Fixed Term - Full Time
Contract Duration: 24 months
Contracted Hours per Week: 35
Closing Date: 11-Mar-2022, 7:59:00 AM

The Department

The Department of Physics at Durham University is one of the leading UK Physics departments with an outstanding reputation for excellence in teaching, research and employability of our students.

The Department is committed to advancing equality and we aim to ensure that our culture is inclusive, and that our systems support flexible and family-friendly working, as recognized by our Juno Champion and Athena SWAN Silver awards. We recognise and value the benefits of diversity throughout our staff and students.

The Role

Applications are invited for a postdoctoral position to develop quantum algorithms for fluid simulations, to work as part of the EPSRC funded ExCALIBUR project on Quantum Enhanced and Verified Exascale Computing (QEVEC). The QEVEC project addresses the potential of quantum computing as a disruptor in exascale computing. Even at an early stage of development, if quantum computing can be deployed as a co-processor to tackle bottlenecks in existing and future exascale codes, it has the potential to provide a huge boost to the overall computational power.

There will be four PDRAs in the QEVEC team, each with different expertise, working together to develop quantum computing for the main ExCALIBUR use cases (fluids simulations and materials simulations), and methods to validate the hybrid quantum-classical algorithms. This post specifically considers the important application area of quantum algorithms in computational fluid dynamics. The candidate will evaluate new and existing quantum algorithms for their suitability as quantum subroutines for exascale codes, including for lattice Boltzmann, smooth particle hydrodynamics, and other computational fluid dynamics (CFD) methods.

We are seeking an enthusiastic computational researcher in fluid dynamics who is interested in developing quantum computing skills, or a quantum computing researcher who is keen to investigate potential applications in fluid dynamics simulations. You need to be a good team worker and communicator to work closely with computational scientists across discipline boundaries.

The post is based in Durham, but the candidate is expected to collaborate closely with other members of the QEVEC team based in Strathclyde, UCL, Warwick and London Southbank. Where appropriate, the candidate will also engage with other ExCALIBUR projects, Collaborative Computational Projects (CCPs) and High End Consortiums (HECs), the National Quantum Computing Centre and the Quantum Computing and Simulation Hub.

The post is for 24 months, to commence in June 2022 or as soon as possible thereafter. We also welcome part time applications to this role.

Informal enquiries are welcome and should please be directed to Prof Halim Kusumaatmaja (halim.kusumaatmaja@durham.ac.uk) and/or Dr Alastair Basden (a.g.basden@durham.ac.uk). Further details on the QEVEC project can be found on the ExCALIBUR website (https://excalibur.ac.uk/projects/qevec/).

Responsibilities:

These posts are fixed term for 24 months.

The post-holder is employed to work on research/a research project which will be led by another colleague. Whilst this means that the post-holder will not be carrying out independent research in his/her own right, the expectation is that they will contribute to the advancement of the project, through the development of their own research ideas/adaptation and development of research protocols.

Successful applicants will ideally be in post by 1st June 2022.

The Requirements

Essential:

Desirable:

How to Apply

For informal enquiries please contact Prof Halim Kusumaatmaja at halim.kusumaatmaja@durham.ac.uk and/or Dr Alastair Basden at a.g.basden@durham.ac.uk. All enquiries will be treated in the strictest confidence.

We prefer to receive applications online via the Durham University Vacancies Site. https://www.dur.ac.uk/jobs/. As part of the application process, you should provide details of 3 (preferably academic/research) referees and the details of your current line manager so that we may seek an employment reference.

Applications are particularly welcome from women and black and minority ethnic candidates, who are under-represented in academic posts in the University.

What to Submit

All applicants are asked to submit:

A CV and covering letter which details your experience, strengths and potential in the requirements set out above; and clearly describes where you meet the essential and desirable criteria (for example, as a bullet

DBS Requirement: Not Applicable.

Read the original here:
Postdoctoral Research Associate in Quantum Algorithms for Fluid Simulations job with DURHAM UNIVERSITY | 281136 - Times Higher Education (THE)


Why It’s Time to Think Differently About Honeywell – Motley Fool

As the headline says, it is time to start thinking differently about Honeywell International (NYSE:HON). The company is known as being one of the last great diversified industrial giants, and that definition still applies. However, what many investors might be missing is that Honeywell is an aggressive investor in cutting-edge technologies, and those businesses are going to significantly add to the value of the company in a few years.

Pause for a second and consider investing in a small company backed by substantive investors that's on track to grow its quantum computing-based revenue from $20 million in 2022 to around $2 billion in 2026.


At the same time, the small company has a sustainable technology business (green fuels, feedstocks for recycled plastics, etc.) set to generate $700 million in revenue by 2025. Moreover, this is not any old start-up company with wide-eyed dreams; it's backed by a tried and tested management team with deep pockets.

Such a company would be valued at multiples equivalent to several times its sales. That's the sort of value that investors should start to price into Honeywell stock. The reason is that CEO Darius Adamczyk told investors to expect those revenue figures for two of Honeywell's highest-growth businesses in the coming years.

Of course, transitioning to this kind of thinking won't come easily, and the investments necessary to get there will hold back earnings and free cash flow (FCF) in the near term. However, that's the flip side of the coin, and investors thinking about Honeywell as a diversified industrial might stress over the lost earnings and cash flow.

To put some figures on the matter, management noted that its full-year 2022 earnings before interest, tax, depreciation and amortization (EBITDA) would be negatively affected by $150 million due to investment in its quantum computing business, Quantinuum. Moreover, capital investments made to support growth in Quantinuum will eat into FCF to the tune of $200 million to $300 million in 2022.


To put that figure into context, Honeywell is currently valued at $132 billion, a figure equivalent to 23.2 times its trailing FCF. If investors price out the stock based on FCF, then the "lost" $300 million could cost the market cap around 23.2 multiplied by $300 million -- $6.95 billion in market cap.
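Here is that back-of-the-envelope market-cap math as a sketch; it simply restates the multiple-times-FCF heuristic above and is not a valuation model.

```python
# The article's multiple-times-FCF heuristic, restated. Not a valuation model.
fcf_multiple = 23.2      # trailing price-to-free-cash-flow multiple cited above
fcf_headwind = 300e6     # high end of the Quantinuum capital spending drag on FCF

implied_hit = fcf_multiple * fcf_headwind
print(f"Implied market-cap impact: ${implied_hit / 1e9:.2f} billion")
# ~$6.96 billion; the article rounds this to $6.95 billion.
```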

If you are taking a glass-half-full approach and getting optimistic about the growth investments, then the news about the investments is a good thing. After all, investors put money in stocks because they feel confident that management can generate better returns on the money than they (investors) can.

On the other hand, the glass-half-empty approach laments that Honeywell's earnings and cash flow are being held back and downgrades the stock accordingly. This approach shaves off some of the market cap as outlined above.

Unfortunately, the bull and bear debate over the stock won't stop here. Not least because Honeywell, in line with many other industrial peers, is expecting a formula of a first half affected by supply chain pressures and cost increases, followed by a better second half in 2022.

As such, investors looking at the stock as a diversified industrial will have to tolerate a mix of earnings and margin headwinds from the increased investments and the uncertainty from waiting until the second half for an acceleration in growth at Honeywell.


Moreover, management prepared investors for a challenging first quarter, with organic growth forecast to be in the range of a 2% decline to an increase of 1%. Meanwhile, adjusted EPS is forecast to be in the range of $1.80 to $1.90, implying a decline of 6% to a decline of 1%.

In addition, CFO Greg Lewis told investors that "with the supply chain impacts that we have been facing, those will continue to drive higher inventory levels, dampening our cash generation in the short term."

It all adds up to a first quarter that's likely to look a little weak on a headline basis.

If you are looking at the stock purely as a diversified industrial, then the answer is probably "no." Despite the fall in the share price, Honeywell is still a highly rated stock, and the slightly disappointing 2022 guidance means the stock isn't quite at a level enticing enough for investors not taking the long view.

However, for investors looking for a back-door way to play quantum computing and sustainable technology trends, Honeywell may well represent a great way to do so without considering the nosebleed valuations and blue sky assumptions that usually come with such investments.

Whichever way you look at it, Honeywell's growth investments are changing the investment proposition over the stock.

This article represents the opinion of the writer, who may disagree with the official recommendation position of a Motley Fool premium advisory service. We're motley! Questioning an investing thesis -- even one of our own -- helps us all think critically about investing and make decisions that help us become smarter, happier, and richer.

Go here to see the original:
Why It's Time to Think Differently About Honeywell - Motley Fool


Benefits of and Best Practices for Protecting Artificial Intelligence and Machine Learning Inventions as Trade Secrets – JD Supra

We previously discussed which portions of an artificial intelligence/machine-learning (AI/ML) platform can be patented. Under what circumstances, however, is it best to keep at least a portion of the platform a trade secret? And what are some best practices for protecting trade secrets? In this post, we explore important considerations and essential business practices to keep in mind when working to protect the value of trade secrets specific to AI/ML platforms, as well as the pros and cons of trade secret versus patent protection.

Protecting AI/ML Platforms via Trade Secrets

What qualifies as a trade secret can be extraordinarily broad, depending on the relevant jurisdiction, as, generally speaking, a trade secret is information that is kept confidential and derives value from being kept confidential. This can potentially include anything from customer lists to algorithms. In order to remain a trade secret, however, the owner of the information must follow specific business practices to ensure the information remains secret. If businesses do not follow the prescribed practices, then the ability to protect the trade secret is waived and its associated value is irretrievably lost. The business practices required are not onerous or complex, and we will discuss these below, but many businesses are unaware of what is required for their specific type of IP and only discover their error when attempting to monetize their inventions or sell their business. To avoid this devastating outcome, we work to arm our clients with the requisite practices and procedures tailored to their specific inventions and relevant markets.

In the context of AI/ML platforms, trade secrets can include the structure of the AI/ML model, formulas used in the model, proprietary training data, a particular method of using the AI/ML model, any output calculated by the AI/ML model that is subsequently converted into an end product for a customer, and similar aspects of the platform. There are myriad ways in which the value of the trade secret may be compromised.

For example, if an AI/ML model is sold as a platform and the platform provides the raw output of the model and a set of training data to the customer, then the raw output and the set of training data would no longer qualify for trade secret protection. Businesses can easily avoid this pitfall by having legally binding agreements in place between the parties to protect the confidentiality and ownership interests involved. Another area in which we frequently see companies waive trade secret protection is where the confidential information can be independently discovered (such as through reverse-engineering a product). Again, there are practices that businesses can follow to avoid waiving trade secret protection due to reverse-engineering. Owners, therefore, must also be careful in ensuring that the information they seek to protect cannot be discovered through use or examination of the product itself and, where that cannot be avoided, ensure that such access is governed by agreements that prohibit such activities, thereby maintaining the right to assert trade secret misappropriation and recover the value of the invention.

To determine if an invention may be protected as a trade secret, courts will typically examine whether the business has followed best practices or reasonable efforts for the type of IP and relevant industries. See e.g. Intertek Testing Services, N.A., Inc. v. Frank Pennisi et al., 443 F. Supp. 3d 303, 323 n.19 (E.D.N.Y. Mar. 9, 2020). What constitutes best practices for a particular type of IP can vary greatly. For example, a court may examine whether those trade secrets were adequately protected. The court may also look to whether the owner created adequate data policies to prevent employees from mishandling trade secrets. See Yellowfin Yachts, Inc. v. Barker Boatworks, LLC, 898 F.3d 1279 (11th Cir. Aug. 7, 2018)(where the court held that requiring password protection to access trade secrets was insufficient without adequate measures to protect information stored on employee devices). If the court decides that the business has not employed best practices, the owner can lose trade secret protection entirely.

Most often, a failure to ensure all parties who may be exposed to trade secrets are bound by a legally-sufficient confidentiality or non-disclosure agreement forces the owner to forfeit their right to trade secret protection for that exposed information. Owners should have experienced legal counsel draft these agreements to ensure that the agreements are sufficient to protect the trade secret and withstand judicial scrutiny; many plaintiffs have learned the hard way that improperly-drafted agreements can affect the trade secret protection afforded to their inventions. See, e.g., BladeRoom Group Ltd. v. Emerson Electric Co., 11 F.4th 1010, 1021 (9th Cir. Aug. 30, 2021)(holding that NDAs with expiration dates also created expiration dates for trade secret protection); Foster Cable Servs., Inc. v. Deville, 368 F. Supp. 3d 1265 (W.D. Ark. 2019)(holding that an overbroad confidentiality agreement was unenforceable); Temurian v. Piccolo, No. 18-cv-62737, 2019 WL 1763022 (S.D. Fla. Apr. 22, 2019)(holding that efforts to protect data through password protection and other means were negated by not requiring employees to sign a confidentiality agreement).

There are many precautions owners can take to protect their trade secrets, which we discuss below:

Confidentiality and Non-Disclosure Agreements: One of the most common methods of protecting trade secrets is to execute robust confidentiality agreements and non-disclosure agreements with everyone who may be exposed to trade secrets, to ensure they have a legal obligation to keep those secrets confidential. Experienced legal counsel who can ensure the agreements are enforceable and fully protect the owner and their trade secrets are essential as there are significant pitfalls in these types of agreements and many jurisdictions have contradicting requirements.

Marketing and Product Development: The AI/ML platform itself should be constructed and marketed in a way that prevents customers from easily discovering the trade secrets, whether by viewing marketing materials, through ordinary use of the platform, or by reverse-engineering it. For example, if an AI/ML platform uses a neural network to classify medical images, and the number of layers and the weights the network uses to calculate its output are commercially valuable, the owner should be careful to exclude any details about those layers and weights from marketing materials. Further, the owner may want to develop the platform so that the neural network is housed internally (protected by appropriate security measures) and is therefore not directly accessible to a customer seeking to reverse-engineer the product.
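As an illustration of that last point only, below is a minimal, hypothetical sketch of an inference service that keeps the model entirely server-side and returns just the final classification, never the model file, layer configuration, or weight values. The endpoint name, labels, and stand-in inference function are assumptions made for the example, not details of any particular platform.

```python
# Minimal sketch (hypothetical names throughout): the model stays on the server,
# and customers receive only the final classification over an HTTP endpoint.
from fastapi import FastAPI, File, HTTPException, UploadFile

app = FastAPI()

LABELS = ["normal", "abnormal"]  # illustrative output classes


def run_inference(image_bytes: bytes) -> int:
    # Stand-in for the proprietary network. In a real deployment this would
    # load weights from server-side storage and run the forward pass; none of
    # those internals are ever serialized into the response.
    return 0 if len(image_bytes) % 2 == 0 else 1


@app.post("/classify")
async def classify(image: UploadFile = File(...)):
    # The customer submits an image and gets back only a label -- no logits,
    # layer counts, or weight values that could aid reverse-engineering.
    data = await image.read()
    if not data:
        raise HTTPException(status_code=400, detail="empty upload")
    return {"label": LABELS[run_inference(data)]}
```

Whether this kind of API boundary is sufficient will depend on the facts and the applicable agreements; the design point is simply that the commercially valuable internals never leave the owner's infrastructure.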

Employee Training: Owners should also ensure that employees and contractors who may be exposed to trade secrets are trained in how to handle them, including how to securely work on or discuss trade secrets, how to handle such data on personal devices (or whether trade secret information may be used on personal devices at all), and other relevant policies.

Data Security: Owners should implement security precautions to reduce the risk of unintended disclosure of trade secrets, including limiting who can access them, requiring passwords and other security procedures for access, restricting where data can be downloaded and stored, and implementing mechanisms to protect against hacking attempts. Legal counsel can help assess whether existing measures are sufficient to protect confidential information under the various trade secret laws. A brief sketch of the access-limiting point appears below.
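By way of illustration only, the following hypothetical helper gates access to a model artifact by role and records every access attempt for later review. The roles, file paths, and logging setup are assumptions chosen for the example; they are not requirements of any statute or standard.

```python
# Hypothetical sketch of role-gated access to a trade-secret artifact,
# with an audit log of every attempt. Names and roles are illustrative.
import logging
from pathlib import Path

logging.basicConfig(level=logging.INFO)
audit_log = logging.getLogger("trade_secret_access")

AUTHORIZED_ROLES = {"ml_engineer", "model_auditor"}  # example roles


def read_model_artifact(user: str, role: str, artifact: Path) -> bytes:
    """Return the artifact bytes only for authorized roles; log every attempt."""
    if role not in AUTHORIZED_ROLES:
        audit_log.warning("DENIED %s (role=%s) -> %s", user, role, artifact)
        raise PermissionError(f"{user} is not authorized to read {artifact.name}")
    audit_log.info("GRANTED %s (role=%s) -> %s", user, role, artifact)
    return artifact.read_bytes()


if __name__ == "__main__":
    weights = Path("models/classifier_weights.bin")  # kept on controlled infrastructure
    try:
        read_model_artifact("j.doe", "sales_rep", weights)  # denied and logged
    except PermissionError as exc:
        print(exc)
```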

Pros and Cons of Trade Secret Protection over Patent Protection

Trade secret protection and patent protection are obtained and maintained in different ways. There are many reasons why trade secret protection may be preferable to patent protection for various aspects of an AI/ML platform, or vice versa. Below we discuss some criteria to consider before deciding how to protect one's platform.

Protection Eligibility: As noted in our previous blog post, patent protection may be sought for many components of an AI/ML platform. There are, however, some aspects of an AI/ML platform that may not be patent-eligible. For example, while the architecture of an ML model may be patentable, specific mathematical components of the model (such as the weight values, the mathematical formulas used to calculate those weights, or curated training data) may not, on their own, be eligible for patent protection. If the novelty of a particular AI/ML platform lies not in how the model is structured or utilized, but rather in non-patentable features of the model, trade secret protection can be used to protect this information.

Cost: There are filing fees, prosecution costs, issue fees, and maintenance fees required to obtain and keep patent protection on AI/ML models. Even for an entity that qualifies as a micro-entity under the USPTO's fee schedule, the lifetime cost of a patent can run to several thousand dollars in fees, plus several thousand dollars in attorneys' fees to draft and prosecute the application. The cost of trade secret protection, by contrast, is the cost of implementing the measures described above to keep critical portions of the AI/ML model secret. In many instances, relying on trade secret protection may be less expensive than obtaining patent protection.

Development Timeline: AI/ML models, or software that implements them, may undergo several iterations through the course of developing a product. As it may be difficult to determine which, if any, iterations are worth long-term protection until development is complete, it may be ideal to protect each iteration until the value of each has been determined. However, obtaining patent protection on each iteration may, in some circumstances, be infeasible. For example, once a patent application has been filed, the specification and drawings cannot be amended to cover new, unanticipated iterations of the AI/ML model; a new application that includes the new material would need to be filed, incurring further costs. Additionally, not all iterations will necessarily include changes that can be patented, or it may be unknown until after development how valuable a particular modification is to technology in the industry, making it difficult to obtain patent protection for all iterations of a model or software using the model. In these circumstances, it may be best to use a blend of trade secret and patent protection. For example, iterations of a model or software can be protected via trade secret; the final product, and any critical iterations in between, can subsequently be protected by one or more patents. This allows for a platform to be protected without added costs per iteration, and regardless of the nature of the changes made in each iteration.

Duration of Protection: Patent owners can enjoy protection of their claimed invention for approximately twenty years from the date of filing a patent application. Trade secret protection, on the other hand, lasts as long as an entity keeps the protected features a secret from others. For many entities, the twenty-year lifetime of a patent is sufficient to protect an AI/ML platform, especially if the patent owner anticipates substantially modifying the platform (e.g., to adapt to future needs or technological advances) by the end of the patent term. To the extent any components of the AI/ML platform are unlikely to change within twenty years (for example, if methods used to curate training data are unlikely to change even with future technological advances), it may be more prudent to protect these features as trade secrets.

Risk of Reverse-Engineering: As noted above, trade secrets do not protect inventions that competitors are able to discover by reverse-engineering an AI/ML product. While an entity may be able to prevent reverse-engineering of some aspects of the invention through agreements with the parties permitted to access the product, or through creative packaging of the product, some aspects (such as the training data that must be supplied to the platform or the platform's end product) may need to remain visible to the customer, depending on the intended use of the platform. Such features, when patent-eligible, may benefit more from patent protection than from trade secret protection, because a patent protects the claimed invention even if it can be reverse-engineered.

Exclusivity: A patent gives its owner the right to exclude others from making, using, or selling the claimed invention, in exchange for disclosing how the invention operates. Trade secrets provide no such benefit: to the extent competitors are able to independently construct an AI/ML platform, they are free to do so even if an entity has already sold a similar platform protected by trade secret. Thus, where an exclusive right to the AI/ML model or platform is necessary for the commercial viability of the platform or its use, patent protection may be more desirable than trade secret protection.

Conclusion

Trade secret law allows broad protection of information that can be kept secret, provided certain criteria are met to ensure the information is adequately protected from disclosure. Many aspects of an AI/ML platform can be protected under either trade secret law or patent law, and many aspects may be protectable only under trade secret law. It is therefore vital to consider trade secret protection alongside patent protection, to ensure that each component of the platform is protected efficiently and effectively.
