
Cryptocurrency BNB’s Price Increased More Than 5% Within 24 hours – Benzinga – Benzinga

Over the past 24 hours, BNB's BNB/USD price has risen 5.82% to $273.07. This continues its positive trend over the past week, during which it has experienced an 18.0% gain, moving from $227.31 to its current price. As it stands right now, the coin's all-time high is $686.31.

The chart below compares the price movement and volatility for BNB over the past 24 hours (left) to its price movement over the past week (right). The gray bands are Bollinger Bands, measuring the volatility for both the daily and weekly price movements. The wider the bands are, or the larger the gray area is at any given moment, the larger the volatility.
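
For readers who want to reproduce the bands, a minimal sketch follows. It assumes the common defaults of a 20-period simple moving average with bands at plus or minus two standard deviations; Benzinga does not state which window or multiplier its charts use, so treat these as illustrative parameters.

```python
from statistics import mean, pstdev

def bollinger_bands(prices, window=20, k=2.0):
    """Return (middle, upper, lower) series for a list of closing prices.

    Middle band = simple moving average over `window` points; upper/lower
    bands sit k standard deviations above/below it. Wider bands at a given
    moment mean higher recent volatility, which is what the gray area shows.
    """
    middle, upper, lower = [], [], []
    for i in range(window - 1, len(prices)):
        chunk = prices[i - window + 1 : i + 1]
        m, s = mean(chunk), pstdev(chunk)
        middle.append(m)
        upper.append(m + k * s)
        lower.append(m - k * s)
    return middle, upper, lower

# Illustrative made-up closes only; real charts would use hourly or daily data.
closes = [258 + 0.4 * i + (1 if i % 5 == 0 else -1) for i in range(40)]
mid, up, low = bollinger_bands(closes)
print(f"latest band width: {up[-1] - low[-1]:.2f}")
```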

BNB's trading volume has climbed 65.0% over the past week, moving in the same direction as the coin's overall circulating supply, which has increased 0.06%. This brings the circulating supply to 163.28 million, an estimated 98.89% of its max supply of 165.12 million. According to our data, the current market cap ranking for BNB is #5, at $44.10 billion.
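
As a rough cross-check of how these figures fit together, market capitalization is simply circulating supply multiplied by price. The snippet below uses the numbers quoted above; the implied value (~$44.6 billion) differs slightly from the quoted $44.10 billion because the price, supply, and ranking figures are snapshotted at slightly different times.

```python
circulating_supply = 163.28e6   # coins, as quoted above
price_usd = 273.07              # current price, as quoted above
max_supply = 165.12e6

implied_market_cap = circulating_supply * price_usd
print(f"implied market cap: ${implied_market_cap / 1e9:.2f}B")        # ~$44.59B
print(f"share of max supply: {circulating_supply / max_supply:.2%}")  # ~98.89%
```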

Powered by CoinGecko API

This article was generated by Benzinga's automated content engine and reviewed by an editor.

Original post:
Cryptocurrency BNB's Price Increased More Than 5% Within 24 hours - Benzinga - Benzinga

Read More..

Cryptocurrency Bitcoin Cash Up More Than 5% In 24 hours – Benzinga – Benzinga

Over the past 24 hours, Bitcoin Cash's BCH/USD price rose 5.28% to $123.87. This continues its positive trend over the past week, during which it has experienced a 24.0% gain, moving from $100.28 to its current price. As it stands right now, the coin's all-time high is $3,785.82.

The chart below compares the price movement and volatility for Bitcoin Cash over the past 24 hours (left) to its price movement over the past week (right). The gray bands are Bollinger Bands, measuring the volatility for both the daily and weekly price movements. The wider the bands are, or the larger the gray area is at any given moment, the larger the volatility.

The trading volume for the coin has risen 37.0% over the past week, diverging from the circulating supply, which has decreased 0.0% (essentially unchanged). This brings the circulating supply to 19.12 million, an estimated 91.06% of its max supply of 21.00 million. According to our data, the current market cap ranking for BCH is #32, at $2.36 billion.

Powered by CoinGecko API

This article was generated by Benzinga's automated content engine and reviewed by an editor.

See the original post here:
Cryptocurrency Bitcoin Cash Up More Than 5% In 24 hours - Benzinga - Benzinga

Read More..

Quantum Computing Use Cases: How Viable Is It, Really? – thenewstack.io

Use cases for quantum computing are still at an experimental stage, but we're getting closer to meaningful commercialization of the technology. In a new research report, IonQ and GE Research (General Electric's innovation division) announced encouraging results for the use of quantum computing in risk management, which potentially has wide applicability in industries like finance, manufacturing, and supply chain management.

I interviewed Sonika Johri, Lead Quantum Applications Research Scientist at IonQ, and Annarita Giani, Complex System Scientist at GE Research, to learn more about the current use cases for quantum computing.

In a blog post about the research, IonQ stated that "we trained quantum circuits with real-world data on historical data indexes in order to predict future performance." This was done using hybrid quantum computing, in which some components of a problem are handled by a quantum computer while others are done by a classical computer. The results indicated that in some cases the hybrid quantum predictions outperformed classical computing workloads.

you need to model probability distributions and model complex correlations. And both of these things are things that quantum computers do really well.

Sonika Johri, IonQ

Johri explained that the analysis was done on stock market indices; the team set out to model the correlations between them in order to make better predictions.

"In order to solve a problem like this," she continued, "basically, you need to model probability distributions and model complex correlations. And both of these things are things that quantum computers do really well."

"When you measure quantum states, they're just probability distributions," she said, "and quantum entanglement is what allows you to generate or have access to complex correlations."

Giani, from the conglomerate General Electric, added that it's not just about the stock market when it comes to potential applications for these research findings. "It's much more than that," she said. "Imagine supply chain optimization. Imagine if you build an engine, how many suppliers have to come together. What is the risk of each supplier?" Or indeed, measuring the risk of failure for each machine, or machine part, from a supplier.

I asked how the hybrid model worked for the stock market calculations: which parts of the software program were quantum and which classical?

"How it works is that you set up what's called an optimization loop," Johri replied. "You use a quantum computer to calculate the value of some function, which is very hard for a classical computer to calculate [...] and then you use the classical computer to run an optimizer that sends parameters to the quantum computer for the function it's supposed to calculate, and then the quantum computer sends an answer back to the classical computer." This process is repeated on other values, hence the term optimization loop.

"Essentially, the classical computer is outsourcing the hardest part of the calculation to the quantum computer," said Johri.
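
To make the division of labor concrete, here is a minimal sketch of such an optimization loop in Python. The objective function and the parameter-update rule are placeholders rather than IonQ's actual circuits or optimizer; the point is only the back-and-forth Johri describes, with the quantum side evaluating a hard-to-compute function and the classical side deciding which parameters to try next.

```python
import random

def quantum_evaluate(params):
    """Stand-in for the quantum step: in the real setup, a parameterized circuit
    runs on the QPU and its measured output distribution is scored against the
    target data (e.g., historical index behavior). Here we just fake a loss."""
    return sum((p - 0.5) ** 2 for p in params) + random.gauss(0, 0.01)

def propose(params, step=0.05):
    """Stand-in for the classical optimizer: nudge one parameter at random."""
    new = list(params)
    i = random.randrange(len(new))
    new[i] += random.choice([-step, step])
    return new

params = [random.random() for _ in range(4)]
best = quantum_evaluate(params)
for _ in range(200):                    # the "optimization loop"
    candidate = propose(params)         # classical side picks new parameters
    loss = quantum_evaluate(candidate)  # quantum side returns an answer
    if loss < best:                     # classical side keeps the better point
        params, best = candidate, loss
print(f"final loss: {best:.4f}")
```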

Both Johri and Giani think that quantum computing will remain a hybrid solution for some time to come.

"Quantum computers are not good for doing one plus one, but they're really good at sampling from probability distributions," is how Johri put it.

IBM and others in the industry have cited 1,000 qubits as the level at which quantum approaches can surpass classical computing (the so-called quantum advantage). So I asked Johri and Giani how long it will be before the type of risk management calculations demonstrated in their research is available to commercial companies like, for instance, GE.

We are looking at the industrial advantage [...] what is it that could push our processes, the way we build things, the way we maintain things.

Annarita Giani, GE Research

"From the industrial point of view," said Giani, "we are not looking at quantum advantage. We are looking at the industrial advantage, right. So what is it that could push our processes, the way we build things, the way we maintain things. So at this point, we are very focused on use cases."

Giani added that doing research like this helps GE prepare for a time when quantum computing software and hardware is ready for commercialization. At that point, she said, GE will be ready to put it into production and scale solutions such as the risk management algorithms tested in this research.

On the hardware side, Johri thinks they will need to get to about 50 high-fidelity qubits before the solution is competitive with classical computing approaches. As of February this year, IonQ had achieved 20 algorithmic qubits (#AQ), so she said there is still a lot of work to do.

But wait: IBM already has a 127-qubit quantum processor, well above the 50 qubits Johri mentioned. I asked her how IBM's measurement compares to IonQ's AQ.

"Generally, not all qubits are made equal, and simply counting qubits does not give one the whole story regarding qubit quality or utility," she replied. "There are many ways to compare the value of a system's qubits, but the best all start with running real algorithms on the system in question, because that's the thing we actually care about." She referred readers to this article on the IonQ blog for a more technical explanation.

It's also worth noting that IonQ has a different way of generating qubits than IBM. Whereas IBM uses superconducting qubits, IonQ uses ions trapped in electric fields.

I asked Giani what other use cases GE is looking into for quantum computing.

"We are interested in optimization as a big use case that touches all of our businesses," she replied. "It can go from optimizing how a machine works, to optimizing a schedule inside the machine shop or global maintenance operations. Chemistry is another set of many different applications. New materials, for example energy storage for PV solar panels. And one particular domain, that touches many applications, that I'm personally interested in (we are pushing it inside GE) is the fight [against] climate change."

We are interested in optimization as a big use case that touches all of our businesses.

Annarita Giani, GE Research

As an example of how quantum computing could help fight climate change, Giani mentioned forecasting. "If we could forecast the climate," she said, "given all the possible parameters, that will be a great advantage to help make a decision." Forecasting the supply of wind, solar and hydro resources will become more important for ensuring a stabilized grid, as more renewable resources are brought online.

Giani later forwarded me two research articles (1, 2) about climate change and quantum computing, and mentioned an upcoming workshop at IEEE Quantum Week in September on the topic (for any readers who would like to investigate further).

So how will companies in the near future access quantum computing? Johri expects it will be done via cloud computing platforms, much like many of today's classical computing applications. IonQ will be providing the hardware for some of those platforms; as of today, IonQ's machines are accessible via Microsoft Azure Quantum, Google Cloud and Amazon Braket.
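
For a feel of what that cloud access looks like, here is a hedged sketch using the Amazon Braket Python SDK, one of the platforms mentioned above. It runs a trivial circuit on the local simulator; swapping in an AwsDevice with the appropriate IonQ device ARN (a placeholder here) would target the real hardware, with account setup and per-shot costs omitted.

```python
# Sketch using the Amazon Braket SDK (pip install amazon-braket-sdk).
# LocalSimulator runs on your machine; AwsDevice would submit to managed
# hardware such as an IonQ QPU (the ARN below is a placeholder, and QPU
# tasks are billed per shot).
from braket.circuits import Circuit
from braket.devices import LocalSimulator
# from braket.aws import AwsDevice  # device = AwsDevice("<ionq-device-arn>")

bell = Circuit().h(0).cnot(0, 1)        # a minimal entangling circuit

device = LocalSimulator()
result = device.run(bell, shots=1000).result()
print(result.measurement_counts)        # expect roughly half '00' and half '11'
```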

In the next couple of years, or five years, we'll see the emergence of higher-level quantum programming abstraction software.

Sonika Johri, IonQ

Finally, I asked the pair how easy it is for software developers to get into quantum computing.

Giani replied that GE Research did have quantum physicists on staff when it first began to explore quantum computing four or five years ago. However, she said that nowadays more software developers are getting involved.

"If you are a software engineer with a great level of curiosity [and] an open mind, I think it's possible," she said. "You don't need to be a quantum physicist."

Johri's outlook is a bit more cautious.

"Programming quantum computers, it's at a level that's even lower than machine code right now," she said. However, she does expect this to improve. "In the next couple of years, or five years, we'll see the emergence of higher-level quantum programming abstraction software."

IonQ itself is focused on the hardware side, but Johri says that any quantum software that has any traction at all is integrated with IonQ systems, and is being used.

So there you have it: use cases like risk management and even climate change forecasting are starting to become viable for quantum computers. However, it'll take several more years at least for this to be commercialized by GE and others.

Feature image via Shutterstock.

Read the original:
Quantum Computing Use Cases: How Viable Is It, Really? - thenewstack.io

Read More..

Old computer technology points the way to future of quantum computing – Alberta Prime Times

VANCOUVER - Researchers have made a breakthrough in quantum technology development that has the potential to leave today's supercomputers in the dust, opening the door to advances in fields including medicine, chemistry, cybersecurity and others that have been out of reach.

In a study published in the journal Nature on Wednesday, researchers from Simon Fraser University in British Columbia said they found a way to create quantum computing processors in silicon chips.

Principal investigator Stephanie Simmons said they illuminated tiny imperfections on the silicon chips with intense beams of light. The defects in the silicon chips act as a carrier of information, she said. While the rest of the chip transmits the light, the tiny defect reflects it back and turns into a messenger, she said.

There are many naturally occurring imperfections in silicon. Some of these imperfections can act as quantum bits, or qubits. Scientists call those kinds of imperfections spin qubits. Past research has shown that silicon can produce some of the most stable and long-lived qubits in the industry.

"These results unlock immediate opportunities to construct silicon-integrated, telecommunications-band quantum information networks," said the study.

Simmons, who is the university's Canada Research Chair in silicon quantum technologies, said the main challenge with quantum computing was being able to send information to and from qubits.

"People have worked with spin qubits, or defects, in silicon before," Simmons said. "And people have worked with photon qubits in silicon before. But nobody's brought them together like this."

Lead author Daniel Higginbottom called the breakthrough "immediately promising" because researchers achieved what was considered impossible by combining two known but parallel fields.

Silicon defects were extensively studied from the 1970s through the '90s while quantum physics has been researched for decades, said Higginbottom, who is a post-doctoral fellow at the university's physics department.

"For the longest time people didn't see any potential for optical technology in silicon defects. But we've really pioneered revisiting these and have found something with applications in quantum technology that's certainly remarkable."

Although quantum computing is at an embryonic stage, Simmons said it is the rock 'n' roll future of computers, capable of solving anything from simple algebra problems to complex pharmaceutical equations or formulas that unlock deep mysteries of space.

"We're going to be limited by our imaginations at this stage. What's really going to take off is really far outside our predictive capabilities as humans."

The advantage of using silicon chips is that they are widely available, understood and have a giant manufacturing base, she said.

"We can really get it working and we should be able to move more quickly and hopefully bring that capability mainstream much faster."

Some physicists predict quantum computers will become mainstream in about two decades, although Simmons said she thinks it will be much sooner.

In the 1950s, people thought the technology behind transistors was mainly going to be used for hearing aids, she said. No one then predicted that the physics behind a transistor could be applied to Facebook or Google, she added.

"So, we'll have to see how quantum technology plays out over decades in terms of what applications really do resonate with the public," she said. "But there is going to be a lot because people are creative, and these are fundamentally very powerful tools that we're unlocking."

This report by The Canadian Press was first published July 14, 2022.

Hina Alam, The Canadian Press

The rest is here:
Old computer technology points the way to future of quantum computing - Alberta Prime Times

Read More..

Beyond The Cloud – Global Finance

The Cloud has become a must for traditional financial institutions, which increasingly view it as a key tool for innovating and achieving business goals.

While legacy core banking systems once provided the backbone on which the world's financial infrastructure was built, the processing demands of digitalization and heightened customer expectations require more agility. Banks are therefore turning to cloud-native technologies to provide a next-generation customer experience while enjoying lower costs, easier maintenance, flexibility, speed of deployment and security.

At 33%, Amazon Web Services (AWS) has the largest share of the cloud market and is, according to a 2017 report by the World Economic Forum, forming the backbone of the financial services ecosystem. From capital markets and insurance organizations to global investment banks, payments, retail, corporate banks, fintech and startups, AWS helps customers to unlock a wide variety of benefits enabling them to scale business models and transform product offerings.

Rowan Taylor, head of Financial Services Industry Business Development, EMEA, at Amazon Web Services, says financial services institutions are using AWS to optimize all aspects of their business, from customer-service delivery models to risk management, in order to build a foundation for long-term growth and product differentiation.

"AWS does this by helping customers to modernize and transform their organizations through access to technologies like compute, storage, and databases, through to machine learning (ML) and artificial intelligence (AI), application programming interfaces (APIs), microservices, data lakes, Internet of Things (IoT), and fully managed data management and analytics services like Amazon FinSpace, purpose-built for the financial services industry to facilitate the storage, cataloguing, and preparation of financial industry data at scale," states Taylor. "This makes it faster, easier, and more cost effective for customers to move their applications to the cloud, to accelerate innovation, and build nearly anything imaginable to transform their businesses."

Enhancing Cloud Security

Concerns over security, data residency, and privacy left financial institutions behind other sectors in moving to the cloud. A survey of 100 global banks by Accenture in 2021 found that just 8% of workloads run in the cloud. This figure is expected to double within two years, however, as banks weigh speed of execution against security and resiliency.

To assuage such fears, cloud service providers (CSPs) have invested heavily in best-in-class security, privacy, and compliance to meet the very strict levels required by financial institutions. "AWS is architected to be the most flexible and secure cloud computing environment available today," Taylor says. "Our core infrastructure is built to satisfy the security requirements for the military, global banks, and other high-sensitivity organizations."

To fine-tune its financial services offering, AWS has poached talent from banks. Prestigious banking hires include former JPMorgan executive director in trade surveillance John Kain, now leading AWS Worldwide Business & Market Development for Banking and Capital Markets. A bevy of hires from Goldman Sachs includes machine-learning wizard Roger Li Zheng; Jeff Savio; software engineer Ishan Guru; Ranjeet Dayama, a former vice president of data engineering at Marcus [Goldman's digital consumer bank]; and principal solutions architect John Butler.

AWS is not alone in viewing banks as fertile recruitment grounds: Microsoft VP of Worldwide Financial Services Bill Borden joined from Bank of America Merrill Lynch, while Howard Boville, senior VP of IBM Hybrid Cloud, was formerly Bank of America's CTO.

Advanced Analytics

Last November, AWS and Goldman Sachs announced the launch of the Goldman Sachs Financial Cloud for Data. This new suite of cloud-based data and analytics solutions for financial institutions "redefines how customers can discover, organize, and analyze data in the cloud, allowing them to gain rapid insights and drive more informed investment decisions," Taylor says, explaining that the collaboration with Goldman Sachs reduces the need for investment firms to develop and maintain foundational data-integration technology and lowers the barriers to entry for accessing advanced quantitative analytics across global markets. This means Goldman Sachs' institutional clients will be able to accelerate time to market for financial applications, optimize their resources to focus on portfolio returns, and innovate faster.

To help financial institutions and other data-heavy clients achieve performance gains and cost savings, AWS launched Graviton3 processors in December 2021. These provide up to 25% better compute performance compared to the previous generation and use 60% less power for the same performance than comparable Amazon Elastic Compute Cloud (Amazon EC2) instances, the company says. "This means it is more energy and cost efficient to operate cloud services, which can help customers to be more sustainable," explains Taylor.

Looking ahead, Taylor believes open banking has the potential to transform the competitive landscape and consumer experience of the banking industry by providing third-party financial services providers with open access to consumer banking, transaction, and other financial data from banks and non-bank financial institutions using APIs. Open banking is slowly becoming a major source of innovation, allowing financial services customers to build unified APIs across multiple microservices that can interact with third parties faster.

Similarly, AWS expects open banking to have an impact on corporate banking because it integrates transactional services into Enterprise Resource Planning (ERP) systems. "Treasurers can make executing transactions out of the ERP system easier, reducing the complexity of linking and exchanging data between siloed systems," Taylor says.

He predicts more financial organizations will leverage their data and use ML and AI to optimize nearly every aspect of the financial value chain, from front-of-house customer service to back-of-house processes like risk and fraud mitigation.

For example, NuData Security, a Mastercard company, leverages billions of anonymous data points and ML to identify and block account takeover attacks. NuData helps customers fight fraud and protect consumers online, and uses the ML services to improve detection of fraudulent attacks, and AWS servers to provide real-time device intelligence.

In March 2022, NatWest Group said it was seeking to leverage machine learning to become a data-driven bank. "By working with AWS and applying our ML and data analytics services, NatWest Group will have the ability to derive new insights," Taylor says.

For financial institutions of all sizes, the cloud has gone way beyond commodity IT and cost savings. It now provides transformational tools to modernize quickly and stay abreast of the innovations brought by challenger banks.

See the original post:
Beyond The Cloud - Global Finance

Read More..

Is Cloud DX Poised To Innovate Its Way Into The $250 Billion Telehealth Opportunity? – Benzinga – Benzinga

With the outbreak of the COVID-19 pandemic in 2020, telehealth usage reportedly soared to unprecedented levels, as both consumers and health care providers sought ways to safely access and deliver healthcare. Regulatory changes enacted by the government during this period also played a role by enabling increased access to telehealth and greater ease of reimbursements.

Taking stock of the potential, McKinsey estimated in May 2020 that up to $250 billion of U.S. healthcare dollars could potentially be shifted to telehealth care.

Investment in virtual health has continued to accelerate post-pandemic, per Rock Health's 2021 digital healthcare funding report. The total venture capital investment into the digital healthcare sector in the first half of 2021 totaled $14.7 billion, which is more than all of the investment in 2020 ($14.6 billion) and nearly twice the investment in 2019 ($7.7 billion).

In addition, the total revenue of the top 60 virtual healthcare players also reportedly increased to $5.5 billion in 2020, from around $3 billion the year before.

With the continued interest in telehealth, a favorable regulatory environment and strong investment in this sector, it is expected that telehealth will continue to remain a robust option for healthcare in the future.

Companies like Teladoc Health Inc. TDOC, Goodrx Holdings Inc. GDRX and Dialogue Health Technologies Inc. CARE aim to disrupt the healthcare industry and take advantage of the immense potential opportunity presented by the telehealth segment.

But despite the relevance demonstrated by telehealth services during the pandemic, it is believed that there is a need for innovative product designs and digital solutions products with seamless capabilities to meet consumer preferences.

Cloud DX Inc. CDX, a Kitchener, Canada-based virtual care platform provider with headquarters in Brooklyn, New York, claims it has carved a unique niche for itself in the highly regulated digital healthcare industry by providing sophisticated hardware and software solutions to advanced healthcare providers for remote patient monitoring.

The company asserts that innovation is the cornerstone of its operation, one which sets it apart from other players in the telehealth space. Speaking with Benzinga, Cloud DX CEO and founder Robert Kaul said that Cloud DX could be best defined as a software platform that, through its innovative technologies, enables its proprietary or third-party hardware devices used by patients to make their virtual care experience better. Dedicated to innovation, the company also makes its own hardware in cases where its primary focus is to collect more data for its system to use to come up with better outcomes for patients. Typically, at-home medical tools or hardware do not provide clinical-level data, which is often why physicians prefer Cloud DX's proprietary devices.

The company boasts core competencies in biomedical hardware engineering, cloud-based medical device architecture, and algorithm-based result generation. The company claims it is pushing the boundaries of medical device technology with smart sensors, ease of use, cloud diagnostics, artificial intelligence (AI), and state-of-the-art design.

For example, its Pulsewave, a unique pulse acquisition device, records up to 4,000 data points from a patient's radial artery pulse and securely transmits the raw pulse signal to cloud diagnostics servers, which display nearly instant results on heart rate, blood pressure, pulse variability, average breathing rate and a proprietary total anomaly score that can have significant potential for identifying cardiac diseases, according to the company.

Its smartphone app AcuScreen is capable of detecting numerous respiratory illnesses, including tuberculosis from the sound of a person coughing, and its VITALITI continuous vital sign monitor (currently undergoing clinical trial evaluation), a highly advanced wearable, will measure ECG, heart rate, oxygen saturation, respiration, core body temperature, blood pressure, movement, steps and posture.

According to Cloud DX, these competencies coupled with its positive regulatory approval experience and internationally ISO-certified quality management enable it to create medically accurate, consumer-vital platforms that position it to be a front runner in clinical-grade data collection.

An added advantage of its Connected Health platform, according to the company, is the ability to integrate with many Electronic Medical Record (EMR) systems, improving efficiency and return on investment (ROI).

Cloud DX maintains that, through innovation, collaboration and integration, its platform has the ability to unify the clinical and home monitoring experience, delivering futuristic, connected healthcare solutions.

In April this year, Cloud DX announced its partnership with Sheridan College on a project involving the company's eXtended Reality division, Cloud XR, to further develop its Clinic of the Future, an augmented reality (AR) platform.

With exciting new medical metaverse products in the pipeline, a strong patent portfolio, solid partners like Medtronic Plc. MDT, and a sales strategy that's driving rapid adoption among global healthcare providers, Cloud DX believes it is well positioned for success in the highly competitive digital healthcare arena.

Get the latest on Cloud DX here.

This post contains sponsored advertising content. This content is for informational purposes only and is not intended to be investing advice.

Featured Photo by National Cancer Institute on Unsplash

Visit link:
Is Cloud DX Poised To Innovate Its Way Into The $250 Billion Telehealth Opportunity? - Benzinga - Benzinga

Read More..

GoTab Unveils EasyTab, A New Feature that Helps Servers and Bartenders Seamlessly Bridge Mobile Order and Pay-at-Table with Traditional Service – PR…

Restaurant commerce platform makes tab management - from order to close - fast, easy and hassle-free

ARLINGTON, Va., July 18, 2022 /PRNewswire/ -- Already at the forefront of contactless ordering and payment technology, restaurant commerce platform GoTab has announced today it is launching a new distinctive feature: EasyTab. This feature, which leverages guest mobile devices, bridges traditional service and empowers guests by making opening, closing, reordering, and transferring tabs from bar to table fast, easy and hassle-free.

Designed to make the dine-in ordering experience even more convenient for servers and guests, EasyTab is one more feature that makes GoTab the most flexible tab-based ordering solution available on the market. Bridging traditional service models with guest-led ordering, EasyTab helps GoTab operators introduce contactless ordering to their guests, guiding them over the hurdle of using their mobile device to place their food and drink orders. EasyTab is changing the way technology supports restaurant operations, creating more positive, profitable experiences, specifically in restaurants with multiple dining areas and those that cater to large groups or events.

"Operators are struggling with the double-edged sword of an influx of guests that are eager to return to in-venue dining and entertainment, while having to operate with less staff. At the same time, they can be reluctant to introduce mobile order and pay due to the resistance of guests and staff to learning new operating models. With EasyTab, we effectively solve the problem. The result is an elegant transition from traditional ordering through a server, to mobile ordering and payment, and back again, depending on the guest's individual preference," says Tim McLaughlin, GoTab CEO and Co-Founder.

How EasyTab Works

Guests can open a tab the traditional way through a server or bartender. When prompted to keep the tab open, the server (or guest) simply dips or swipes the card provided for payment, and enters the guest's mobile number. A link to the open tab is provided via text, and the guest accesses the tab on their device. Now they can order, re-order, and pay throughout their visit, from anywhere in the venue. And because their payment method is automatically applied to their open tab, they can split or settle their check without having to wait for a physical check or head to the counter to pay. Click here to learn more: https://vimeo.com/727438052/1c24be5764

Works Seamlessly with the GoTab All-in-One POS

Unlike other POS systems that make guests close their tab when they move from the bar to the table, GoTab tabs stay open no matter where guests venture. Where on-premise dining used to rule the scene, restaurants have adapted to business models that also include online ordering and 3rd-party marketplace platforms, increasing their revenue streams more than ever before. The EasyTab feature from the GoTab POS takes the guesswork out of managing complex omni-channel front-of-house operations while integrating back-of-house operations.

GoTab's all-in-one, cloud-based POS makes the systems restaurants use work more effectively, and reduces the top pain points restaurants face from staffing shortages, managing 3rd-party platforms, and time spent on multiple transactions. The company's customizable, scalable POS moves restaurants towards a more frictionless experience. The POS can run on nearly any existing hardware (iOS, Android, or PC), removing cost and time barriers restaurants face when looking to switch to innovative, customizable solutions. From quick-service to mid-size and fine dining restaurants, GoTab's POS anticipates the needs of restaurant staff and guests, so restaurants can focus more time anticipating the needs of their guests, leading to enhanced guest experience and higher profitability.

EasyTab + All-in-One POS supports Back of House and Payment Processing

Using GoTab's Kitchen Display System and integrated printers, servers always know where to locate guests and deliver the food and beverage orders. Meanwhile, guests are free to move about, and reorder at their convenience without ever having to flag a server, close or reopen their tab. When friends join the party, guests can share their tab using GoTab's native features for tab sharing (via text or QR code specific to their tab). When everyone is ready to leave, they can expedite the payment process by closing out and splitting the tab on their own on their mobile device. GoTab makes the payment process easy and seamless by clearly displaying all charges, fees and tip recommendations.

EasyTab is now available for all GoTab customers. Click here to learn more and request a demo.

About GoTab, Inc.

GoTab, Inc., a Restaurant Commerce Platform (RCP), is helping large- and mid-sized restaurants, breweries, bars, hotels and other venues run lean, profitable operations while making guests even more satisfied. It integrates with popular point-of-sale (POS) systems and allows patrons to order and pay through a server, order and pay directly from their own mobile phones, or blend the two experiences all on one tab, through its easy-to-use mobile POS, contactless ordering, payment features, and kitchen management systems (KMS). The guest never has to download a mobile app or create a password. Operators get flexible features that can be rapidly applied to access new revenue streams via dine-in, take-out and delivery, ghost kitchens, retail groceries, and more. Founded in 2016, GoTab processes over $250M in transactions per year with operations across 35 U.S. states and growing. For more information, consult our media kit, request a demo here or learn more at https://gotab.io/en

Media Contact: Madison McGillicuddy, [emailprotected], (203) 268-8269

SOURCE GoTab

View original post here:
GoTab Unveils EasyTab, A New Feature that Helps Servers and Bartenders Seamlessly Bridge Mobile Order and Pay-at-Table with Traditional Service - PR...

Read More..

KAIST Shows Off DirectCXL Disaggregated Memory Prototype – The Next Platform

The hyperscalers and cloud builders are not the only ones having fun with the CXL protocol and its ability to create tiered, disaggregated, and composable main memory for systems. HPC centers are getting in on the action, too, and in this case, we are specifically talking about the Korea Advanced Institute of Science and Technology.

Researchers at KAIST's CAMELab have joined the ranks of Meta Platforms (Facebook), with its Transparent Page Placement protocol and Chameleon memory tracking, and Microsoft, with its zNUMA memory project, in creating actual hardware and software to do memory disaggregation and composition using the CXL 2.0 protocol atop the PCI-Express bus and a PCI-Express switching complex, in what amounts to a memory server that it calls DirectCXL. The DirectCXL proof of concept was talked about in a paper that was presented at the USENIX Annual Technical Conference last week, in a brochure that you can browse through here, and in a short video you can watch here. (This sure looks like startup potential to us.)

We expect to see many more such prototypes and POCs in the coming weeks and months, and it is exciting to see people experimenting with the possibilities of CXL memory pooling. Back in March, we reported on the research into CXL memory that Pacific Northwest National Laboratory and memory maker Micron Technology are undertaking to accelerate HPC and AI workloads, and Intel and Marvell are both keen on seeing CXL memory break open the memory hierarchy in systems and across clusters to drive up memory utilization and therefore drive down aggregate memory costs in systems. There is a lot of stranded memory out there, and Microsoft did a great job quantifying what we all know to be true instinctively with its zNUMA research, which was done in conjunction with Carnegie Mellon University. Facebook is working with the University of Michigan, as it often does on memory and storage research.

Given the HPC roots of KAIST, the researchers who put together the DirectCXL prototype focused on comparing the CXL memory pooling to direct memory access across systems using the Remote Direct Memory Access (RDMA) protocol. They used a pretty vintage Mellanox SwitchX FDR InfiniBand and ConnectX-3 interconnect running at 56 Gb/sec as a benchmark against the CXL effort, and the latencies did get lower for InfiniBand. But they have certainly stopped getting lower in the past several generations and the expectation is that PCI-Express latencies have the potential to go lower and, we think, even surpass RDMA over InfiniBand or Ethernet in the long run. The more protocol you can eliminate, the better.

RDMA, of course, is best known as the means by which InfiniBand networking originally got its legendary low latency, allowing machines to directly put data into each others main memory over the network without going through operating system kernels and drivers. RDMA has been part of the InfiniBand protocol for so long that it was virtually synonymous with InfiniBand until the protocol was ported to Ethernet with the RDMA over Converged Ethernet (RoCE) protocol. Interesting fact: RDMA actually is based on work done in 1995 by researchers at Cornell University (including Verner Vogels, long-time chief technology officer at Amazon Web Services) and Thorsten von Eicken (best known to our readers as the founder and chief technology officer at RightScale) that predates the creation of InfiniBand by about four years.

Here is what the DirectCXL memory cluster looks like:

On the right hand side, and shown in greater detail in the feature image at the top of this story, are four memory boards, which have FPGAs creating the PCI-Express links and running the CXL.memory protocol for load/store memory addressing between the memory server and hosts attached to it over PCI-Express links. In the middle of the system are four server hosts and on the far right is a PCI-Express switch that links the four CXL memory servers to these hosts.

To put the DirectCXL memory to the test, KAIST employed Facebook's Deep Learning Recommendation Model (DLRM) for personalization on the server nodes, using just RDMA over InfiniBand and then using the DirectCXL memory as extra capacity to store and share data over the PCI-Express bus. On this test, the CXL memory approach was quite a bit faster than RDMA, as you can see:

On this baby cluster, the tensor initialization phase of the DLRM application was 2.71X faster on the DirectCXL memory than using RDMA over the FDR InfiniBand interconnect, the inference phase where the recommender actually comes up with recommendations based on user profiles ran 2.83X faster, and the overall performance of the recommender from first to last was 3.32X faster.

This chart shows how local DRAM, DirectCXL, and RDMA over InfiniBand stack up, and the performance of CXL versus RDMA for various workloads:

Here's the neat bit about the KAIST work at CAMELab. No operating systems currently support CXL memory addressing (and by no operating systems, we mean neither commercial-grade Linux nor Windows Server does), so KAIST created the DirectCXL software stack to allow hosts to reach out and directly address the remote CXL memory using load/store operations. There is no moving data to the hosts for processing; data is processed from that remote location, just as would happen in a multi-socket system with the NUMA protocol. And there is a whole lot less complexity to this DirectCXL driver than Intel created with its Optane persistent memory.

"Direct access of CXL devices, which is a similar concept to the memory-mapped file management of the Persistent Memory Development Toolkit (PMDK)," the KAIST researchers write in the paper. "However, it is much simpler and more flexible for namespace management than PMDK. For example, PMDK's namespace is very much the same idea as NVMe namespace, managed by file systems or DAX with a fixed size. In contrast, our cxl-namespace is more similar to the conventional memory segment, which is directly exposed to the application without a file system employment."
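
To illustrate what load/store access means in practice, the sketch below memory-maps a device node and reads and writes it like ordinary memory, with no RDMA verbs or DMA setup. The /dev path is hypothetical; the actual interface exposed by the DirectCXL driver (or a DAX-style kernel driver) may differ.

```python
import mmap
import os
import struct

# Hypothetical device node; the real path depends on the kernel driver that
# exposes the CXL memory region (DirectCXL's own stack, or a DAX-style driver).
DEV = "/dev/dax0.0"
LENGTH = 4096

fd = os.open(DEV, os.O_RDWR)
buf = mmap.mmap(fd, LENGTH)                   # map remote CXL memory into our address space

struct.pack_into("<Q", buf, 0, 0xDEADBEEF)    # an ordinary store, no DMA setup or verbs
(value,) = struct.unpack_from("<Q", buf, 0)   # an ordinary load served over CXL.mem
print(hex(value))

buf.close()
os.close(fd)
```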

We are not sure what is happening with research papers these days, but people are cramming a lot of small charts across two columns, and it makes it tough to read. But this set of charts, which we have enlarged, shows some salient differences between the DirectCXL and RDMA approaches:

The top left chart is the interesting one as far as we are concerned. To read 64 bytes of data, RDMA needs to do two direct memory operations, which means it has twice the PCI-Express transfer and memory latency, and then the InfiniBand protocol takes up 2,129 cycles of a total of 2,705 processor cycles during the RDMA. The DirectCXL read of the 64 bytes of data takes only 328 cycles, and one reason it can do this is that the DirectCXL protocol converts load/store requests from the last level cache in the processor to CXL flits, while RDMA has to use the DMA protocol to read and write data to memory.


See more here:
KAIST Shows Off DirectCXL Disaggregated Memory Prototype - The Next Platform

Read More..

What Is Containerization and How It Can Help Your Applications Get to the Market Faster. – TechGenix

Pack it up, wrap it up, ship your container! (source: Unsplash).

Containerization has been a significant development within the IT sphere since Docker, the first commercially successful containerization tool, was released in 2013. We always hear people in IT saying, "Well, it works on my machine!" But this doesn't guarantee the app will work on other people's computers. And this is where containerization comes into play. Everyone in IT is always talking about containers and containerization because it can save time, money, and effort.

But what does it mean exactly? What does it do and what are its benefits? I'll answer all these questions in this article. I'll also show you how it works and the different types of containerization and services. Let's get started!

Containerization is where an application runs in a container isolated from the hardware. This container houses a separate environment specific to the application inside it. Everything the application needs to run is encapsulated and isolated inside of its container. For example, binaries, libraries, configuration files, and dependencies all live in containers. This means you dont need to manually configure them on your machine to run the application.

Additionally, you don't need to configure a container multiple times. When you run a containerized application on your local machine, it'll run as expected. This makes your applications easily portable. As a result, you won't worry whether the apps will run on other people's machines.

But how exactly does this happen? Let's jump into how containerization technology works.

Let's break down the container's layers.

Think of a containerized application as the top layer of a multi-tier cake. Now, we'll work our way up from the bottom.

To sum up, each container is an executable software package. This package also runs on top of a host OS. A host may even support many containers concurrently.

Sometimes, you may need thousands of containers. For example, this happens in the case of a complex microservices architecture. Generally, this architecture uses numerous containerized application delivery controllers (ADCs). This configuration works so well because the containers run fewer resource-isolated processes. And you can't access these processes outside the container.

But why should you use containerization technology? Let's look at its benefits.

Containerization technology offers many benefits. One of them, which I mentioned earlier, is portability. You don't need to worry that the application won't run because the environment isn't the same. You also can deliver containerized apps easily to users in a virtual workspace. But let's take a look at 4 other benefits of containerization:

Containerization cuts down on overhead costs. For example, organizations can reduce some of their server and licensing costs. Containers enable greater server efficiency and cost-effectiveness. In other words, you don't need to buy and maintain physical hardware. Instead, you can run containers in the cloud or on VMs.

Containerization offers more agility to the software development life cycle. It also enables DevOps teams to quickly spin up and spin down containers. In turn, this increases developer productivity and efficiency.

Encapsulation is another benefit. How so? Suppose one container fails or gets infected with a virus. These problems won't spread to the kernel or to the other containers. That's because each container is encapsulated. You can simply delete that instance and create a new one.

Containers let you orchestrate them with Kubernetes. It's possible to automate rollouts and rollbacks, orchestrate storage systems, perform load balancing, and restart any failing containers. Kubernetes is also compatible with other container tools. That's why it's so popular! But you also can use other container orchestration tools, such as OpenShift, Docker Swarm, and Rancher.
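
For a flavor of what that orchestration looks like day to day, the sketch below wraps a few standard kubectl commands (in Python only to keep this article's examples in one language); the deployment name is made up, and it assumes kubectl is already pointed at a cluster.

```python
import subprocess

def kubectl(*args):
    """Thin wrapper so the orchestration steps read as one script."""
    subprocess.run(["kubectl", *args], check=True)

# "web" is a hypothetical deployment in the current cluster/context.
kubectl("scale", "deployment", "web", "--replicas=5")   # scale out under load
kubectl("rollout", "status", "deployment/web")          # wait for the rollout to finish
kubectl("rollout", "undo", "deployment/web")            # roll back a bad release
```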

Clearly then, containerization technology offers many, many benefits. But how does it differ from virtualization? Let's find out!

Both VMs and containers provide an execution environment, but they're still different. To simplify matters, I've created this table below.

Now you know the difference between virtualization and containerization. But where can you use containers? And why should you use them? Let's see.

Besides applications, you also can containerize some services. This can facilitate their delivery in their containers. Let's take a look at all the services that you can containerize.

This is a big one and perhaps the most used. Previously, software development used a monolithic code base. This meant including everything in one repo. But this method was hard to manage. Instead, it's more efficient to break services (features or any data sent via third-party APIs) down into separate parts. After that, we can inject them into the application. Generally, separate development teams own these microservices, and they communicate with the main app via APIs.

Databases can be containerized to provide applications with a dedicated database. As a result, you won't need to connect all apps to a monolithic database. This makes the connection to the database dedicated and easier to manage, all from within the container.

Web servers are quickly configurable and deployable with just a few commands on the CLI. It's also better for development to separate the server from the host. And you can achieve that with the container. It'll encapsulate the server.
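
As a quick illustration, the sketch below uses the Docker SDK for Python to stand up a stock nginx web server in a container, mapped to port 8080 on the host; the image tag and container name are just examples.

```python
import docker  # pip install docker

client = docker.from_env()
web = client.containers.run(
    "nginx:alpine",            # stock web server image
    detach=True,
    ports={"80/tcp": 8080},    # host port 8080 -> container port 80
    name="demo-web",           # example name
)
web.reload()
print(web.short_id, web.status)
# Later: web.stop(); web.remove()
```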

You also can run containers within a VM (virtual machine). This helps maximize security, talk to selected services, or max out the physical hardware. It's almost like a picture within a picture within another picture. Containerizing VMs lets you use the hardware to its maximum.

An application delivery controller manages an application's performance and security. If you containerize ADCs, layer 4-7 services will become more available in DevOps environments. These services supply data storage, manipulation, and communication. This contributes to the overall efficiency of development.

Next, let's take a look at some of the top containerization providers.

If you want to use containerization technology, you'll need the help of a third-party solution. To this end, I've compiled this list of the top 4 vendors on the market. (Note: I classified these in alphabetical order, not from best to worst).

ECR is an Amazon Web Services product that stores, manages, and deploys Docker images. These are managed clusters of Amazon EC2 (compute) instances. Amazon ECR also hosts images with high availability and scalable architecture. In turn, your team can easily deploy containers for your applications.

The pricing for AWS tools varies based on the number of tools you use and the subscription rates. Consult AWS for actual prices.

Mesos is an open-source cluster manager. Like Kubernetes, it manages the running containers. You also can integrate your own monitoring systems to keep an eye on your containers. Additionally, Mesos excels at running numerous clusters in large environments.

Mesos is an open-source tool.

AKS is Microsoft Azure's container orchestration service based on the open-source Kubernetes system. If your organization is using Azure, then you definitely need to use AKS. In fact, it easily integrates Kubernetes into Azure. Your development team can use AKS to deploy, scale, and manage Docker containers and container-based applications across a cluster of container hosts.

Azure services are also subscription-based. Consult Azure for the latest pricing for these services.

This Google container orchestration tool creates a managed, production-ready environment to deploy your applications. It facilitates the deployment, updating, and management of apps and services. This also gives you quick app development and iteration.

Google Cloud Services are also subscription-based. Consult Google for updated pricing.

These are some of the top vendors for containerization. But did you notice we've been talking a lot about Kubernetes and Docker? Let's talk more about these tools and see why they go together like PB and J!

The Docker Engine is perhaps the most well-known container tool worldwide. It's the main component of container architecture. Additionally, Docker is a Linux kernel-based open-source tool. It's responsible for creating containers inside an OS.

Kubernetes usually works together with Docker. It specifically orchestrates all the Docker containers running in various nodes and clusters. This provides maximum application availability.

Developers generally create containers from Docker images. These images are read-only, so Docker creates a container by adding a read-write file system on top. It then creates a network interface to allow communication between the container and the local host, adds an IP address, and executes the indicated process.
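
A small, hedged illustration of that sequence using the Docker SDK for Python: run an image, then inspect the network settings and the process Docker started. The attribute keys shown apply to the default bridge network.

```python
import docker

client = docker.from_env()
c = client.containers.run("redis:7", detach=True)   # image -> container with a RW layer on top
c.reload()                                          # refresh cached attributes

net = c.attrs["NetworkSettings"]
ip = net["IPAddress"] or net["Networks"]["bridge"]["IPAddress"]
print("assigned IP:", ip)                           # address Docker attached to the container
print("running process:", c.top()["Processes"][0])  # the process Docker executed

c.stop()
c.remove()
```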

Finally, let's take a look at some of the options you have to get started using containers.

You need to keep these 7 points in mind when shopping around for a containerization platform.

These are some of the things to consider before selecting a vendor. Let's wrap up!

To sum up, containerization offers many benefits. Primarily, it saves your IT operations money and time. It also makes IT jobs a lot easier. However, you should consider many things before picking the right tools.

Often, the best combination is Docker and Kubernetes. But depending on your environment, you might want to opt for AWS, Azure, Google, or open-source tools. I don't recommend that only one person make this decision. Your development and DevOps teams need to come together and choose the best solution.

Do you have more questions? Are you looking for more information? Check out the FAQ and Resources sections below!

Docker is a containerization tool released in 2013. It revolutionized how applications and services are handled virtually. Docker has also made it much easier for developers to port their applications to different systems and machines. That's because Docker creates images of the application and environment. Then, it places them inside a container that can be on any machine.

Kubernetes is an open-source container orchestration tool. It helps manage and deploy applications on premises or in the cloud. Kubernetes also operates at the container level, not on the hardware level. It offers features such as deployment, scaling, and load balancing. It also allows you to integrate your own logging and monitoring.

Virtualization lets you create a virtual machine or isolated environment. This helps you use environments to run more than one project on one machine. Isolating environments even stops variable conflicts between dependencies. It also allows for a cleaner, less buggy development process.

Containerization is a form of virtualization. But instead of running a VM, you can create many containers and run many applications in their own environments. Containerization lets you transfer programs between teams and developers. It also allows you to take advantage of your hardware by hosting many applications from one server. Additionally, you can run all kinds of environments on one server without conflicts.

Network orchestration creates an abstraction between the administrator and cloud-based network solutions. Administrators can easily provision resources and infrastructure dynamically across multiple networks. Orchestration tools are very useful if you have multiple applications running in containers. The more containers you have, the harder it is to manage without the proper orchestration software.

Learn about the differences and similarities between IaaS, Virtualization, and Containerization.

Learn about Docker and Kubernetes in this comparison guide.

Learn how Docker brought containerization to the forefront of software development.

Learn how Azure makes it easier to handle containers and the benefits it brings.

Learn about all the Kubernetes networking trends coming down the road in 2022.

Read the original post:
What Is Containerization and How It Can Help Your Applications Get to the Market Faster. - TechGenix

Read More..

DeepMind details AI work with YouTube on video compression and AutoChapters – 9to5Google

Besides research, Alphabet's artificial intelligence lab is tasked with applying its various innovations to help improve Google products. DeepMind today detailed three specific areas where AI research helped enhance the YouTube experience.

Since 2018, DeepMind has worked with YouTube on a label quality model (LQM) that more accurately identifies which videos meet advertiser-friendly guidelines and can display ads.

Since launching to production on a portion of YouTube's live traffic, we've demonstrated an average 4% bitrate reduction across a large, diverse set of videos.

Calling YouTube one of its key partners, DeepMind starts with how its MuZero AI model helps optimize video compression in the open source VP9 codec. More details and examples can be found here.

By learning the dynamics of video encoding and determining how best to allocate bits, our MuZero Rate-Controller (MuZero-RC) is able to reduce bitrate without quality degradation.

Most recently, DeepMind is behind AutoChapters, which is available for 8 million videos today. The plan is to scale this feature to more than 80 million auto-generated chapters over the next year.

Collaborating with the YouTube Search team, we developed AutoChapters. First we use a transformer model that generates the chapter segments and timestamps in a two-step process. Then, a multimodal model capable of processing text, visual, and audio data helps generate the chapter titles.

DeepMind has previously worked on improving Google Maps ETA predictions, Play Store recommendations, and data center cooling.

Original post:
DeepMind details AI work with YouTube on video compression and AutoChapters - 9to5Google

Read More..