Category Archives: Machine Learning
AI Can Write in English. Now It’s Learning Other Languages – WIRED
"What's surprising about these large language models is how much they know about how the world works simply from reading all the stuff that they can find," says Chris Manning, a professor at Stanford who specializes in AI and language.
But GPT and its ilk are essentially very talented statistical parrots. They learn how to re-create the patterns of words and grammar that are found in language. That means they can blurt out nonsense, wildly inaccurate facts, and hateful language scraped from the darker corners of the web.
Amnon Shashua, a professor of computer science at the Hebrew University of Jerusalem, is the cofounder of another startup building an AI model based on this approach. He knows a thing or two about commercializing AI, having sold his last company, Mobileye, which pioneered using AI to help cars spot things on the road, to Intel in 2017 for $15.3 billion.
Shashua's new company, AI21 Labs, which came out of stealth last week, has developed an AI algorithm, called Jurassic-1, that demonstrates striking language skills in both English and Hebrew.
In demos, Jurassic-1 can generate paragraphs of text on a given subject, dream up catchy headlines for blog posts, write simple bits of computer code, and more. Shashua says the model is more sophisticated than GPT-3, and he believes that future versions of Jurassic may be able to build a kind of common-sense understanding of the world from the information they gather.
Other efforts to re-create GPT-3 reflect the world's (and the internet's) diversity of languages. In April, researchers at Huawei, the Chinese tech giant, published details of a GPT-like Chinese language model called PanGu-alpha (written as PanGu-α). In May, Naver, a South Korean search giant, said it had developed its own language model, called HyperCLOVA, that speaks Korean.
Jie Tang, a professor at Tsinghua University, leads a team at the Beijing Academy of Artificial Intelligence that developed another Chinese language model called Wudao (meaning "enlightenment") with help from government and industry.
The Wudao model is considerably larger than any other, meaning that its simulated neural network is spread across more cloud computers. Increasing the size of the neural network was key to making GPT-2 and -3 more capable. Wudao can also work with both images and text, and Tang has founded a company to commercialize it. "We believe that this can be a cornerstone of all AI," Tang says.
Such enthusiasm seems warranted by the capabilities of these new AI programs, but the race to commercialize such language models may also move more quickly than efforts to add guardrails or limit misuses.
Perhaps the most pressing worry about AI language models is how they might be misused. Because the models can churn out convincing text on a subject, some people worry that they could easily be used to generate bogus reviews, spam, or fake news.
"I would be surprised if disinformation operators don't at least invest serious energy experimenting with these models," says Micah Musser, a research analyst at Georgetown University who has studied the potential for language models to spread misinformation.
Musser says research suggests that it won't be possible to use AI to catch disinformation generated by AI. There's unlikely to be enough information in a tweet for a machine to judge whether it was written by a machine.
More problematic kinds of bias may be lurking inside these gigantic language models, too. Research has shown that language models trained on Chinese internet content will reflect the censorship that shaped that content. The programs also inevitably capture and reproduce subtle and overt biases around race, gender, and age in the language they consume, including hateful statements and ideas.
Similarly, these big language models may fail in surprising or unexpected ways, adds Percy Liang, another computer science professor at Stanford and the lead researcher at a new center dedicated to studying the potential of powerful, general-purpose AI models like GPT-3.
Edge AI: The Future of Artificial Intelligence and Edge Computing | ITBE – IT Business Edge
Edge computing is attracting significant interest around new use cases, especially after the introduction of 5G. The 2021 State of the Edge report by the Linux Foundation predicts that the global market capitalization of edge computing infrastructure will be worth more than $800 billion by 2028. At the same time, enterprises are also heavily investing in artificial intelligence (AI). McKinsey's survey from last year shows that 50% of the respondents have implemented AI in at least one business function.
While most companies are making these tech investments as part of their digital transformation journey, forward-looking organizations and cloud companies see new opportunities in fusing edge computing and AI, or Edge AI. Let's take a closer look at the developments around Edge AI and the impact this technology is having on modern digital enterprises.
AI relies heavily on data transmission and the computation of complex machine learning algorithms. Edge computing establishes a new computing paradigm that moves AI and machine learning to where the data generation and computation actually take place: the network's edge. The amalgamation of edge computing and AI gave birth to a new frontier: Edge AI.
Edge AI allows faster computing and insights, better data security, and more efficient control over continuous operation. As a result, it can enhance the performance of AI-enabled applications and keep operating costs down. Edge AI can also help overcome some of the technological challenges associated with cloud-centric AI deployments.
Edge AI facilitates machine learning, the autonomous application of deep learning models, and advanced algorithms on Internet of Things (IoT) devices themselves, away from cloud services.
An efficient Edge AI model has an optimized infrastructure for edge computing that can handle bulkier AI workloads on the edge and near the edge. Edge AI paired with storage solutions can provide industry-leading performance and limitless scalability that enables businesses to use their data efficiently.
Many global businesses are already reaping the benefits of Edge AI. From improving production monitoring on an assembly line to driving autonomous vehicles, Edge AI can benefit various industries. Moreover, the recent rollout of 5G technology in many countries gives an extra boost to Edge AI as more industrial applications for the technology continue to emerge.
Edge computing powered by AI offers enterprises a number of benefits.
Implementation of Edge AI is a wise business decision, as Insight estimates an average 5.7% return on investment (ROI) from industrial Edge AI deployments over the next three years.
Machine learning is the artificial simulation of the human learning process with the use of data and algorithms. Machine learning with the aid of Edge AI can lend a helping hand, particularly to businesses that rely heavily on IoT devices.
Some of the advantages of machine learning on the edge are outlined below.
Privacy: With information and data among the most valuable assets today, consumers are cautious about where their data resides. Companies that deliver AI-enabled personalized features in their applications can show users how their data is collected and stored, which enhances customers' brand loyalty.
Reduced Latency: Most data processes are carried out at both the network and device levels. Edge AI eliminates the need to send huge amounts of data across networks and devices, thus improving the user experience.
Minimal Bandwidth: Every single day, an enterprise with thousands of IoT devices has to transmit huge amounts of data to the cloud, carry out the analytics in the cloud, and retransmit the results back to the devices. Without wide network bandwidth and ample cloud storage, this complex process would be impossible, to say nothing of the risk of exposing sensitive information along the way.
However, Edge AI implements cloudlet technology: small-scale cloud storage located at the network's edge. Cloudlet technology enhances mobility and reduces the load of data transmission. Consequently, it can bring down the cost of data services and enhance data flow speed and reliability.
Low-Cost Digital Infrastructure: According to Amazon, 90% of digital infrastructure costs come from inference, a vital data-generation process in machine learning. Sixty percent of organizations surveyed in a recent study conducted by RightScale agree that the holy grail of cost savings lies in cloud computing initiatives. Edge AI, in contrast, eliminates the exorbitant expense of running AI and machine learning processes in cloud-based data centers.
Developments in fields such as data science, machine learning, and IoT have a significant role to play in the sphere of Edge AI. The real challenge, however, lies in keeping pace with developments in computer science, in particular next-generation AI-enabled applications and devices that can fit within the AI and machine learning ecosystem.
Fortunately, the arena of edge computing is witnessing promising hardware development that will alleviate the present constraints of Edge AI. Start-ups like Sima.ai, Esperanto Technologies, and AIStorm are among the few organizations developing microchips that can handle heavy AI workloads.
In August 2017, Intel acquired Mobileye, a Tel Aviv-based vision-safety technology company, for $15.3 billion. Recently, Baidu, a Chinese multinational technology behemoth, initiated the mass-production of second-generation Kunlun AI chips, an ultrafast microchip for edge computing.
In addition to microchips, Google's Edge TPU and Nvidia's Jetson Nano, along with offerings from Amazon, Microsoft, Intel, and Asus, have joined the motherboard development bandwagon to enhance edge computing's prowess. Amazon's AWS DeepLens, the world's first deep-learning-enabled video camera, is a major development in this direction.
Poor Data Quality: Poor data quality from major internet service providers worldwide stands as a major hindrance to research and development in Edge AI. A recent Alation report reveals that 87% of the respondents, mostly employees of Information Technology (IT) firms, cite poor data quality as the reason their organizations fail to implement Edge AI infrastructure.
Vulnerable Security Features: Some digital experts claim that the decentralized nature of edge computing improves its security. In reality, locally pooled data demands security for more locations, and these additional physical data points make an Edge AI infrastructure vulnerable to various cyberattacks.
Limited Machine Learning Power: Machine learning requires considerable computational power, and on edge hardware platforms that power is limited to what the edge or IoT device itself can deliver. In most cases, large, complex models have to be simplified (for example, through pruning or quantization) before deployment to Edge AI hardware so that they remain accurate and efficient, as sketched below.
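As an illustration of that kind of model simplification, here is a minimal sketch of post-training quantization using TensorFlow Lite. The tooling choice and the toy model are assumptions made for the example, not something specified in the article; other frameworks offer comparable pruning and quantization workflows.

```python
import tensorflow as tf

# A small Keras model standing in for a trained Edge AI model.
model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(8, 3, activation="relu", input_shape=(96, 96, 3)),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(10, activation="softmax"),
])

# Post-training dynamic-range quantization: shrinks weights to 8-bit integers,
# reducing model size and often speeding up inference on edge hardware.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
tflite_model = converter.convert()

with open("edge_model.tflite", "wb") as f:
    f.write(tflite_model)
print(f"quantized model size: {len(tflite_model) / 1024:.1f} KiB")
```

In a real deployment the quantized model would then be validated against a held-out dataset, since aggressive simplification can trade away some accuracy for speed and size.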
Virtual assistants like Amazon's Alexa and Apple's Siri are major beneficiaries of developments in Edge AI, which enable their machine learning algorithms to learn rapidly from data stored on the device rather than depending on data stored in the cloud.
Automated optical inspection plays a major role in manufacturing lines. It enables the detection of faulty parts in assembled components on a production line with the help of automated Edge AI visual analysis. Automated optical inspection allows highly accurate, ultrafast data analysis without relying on huge amounts of cloud-based data transmission.
The quick, accurate decision-making capability of Edge AI-enabled autonomous vehicles results in better identification of road traffic elements and easier navigation of travel routes than humans can manage. It results in faster and safer transportation without manual interference.
Apart from the use cases discussed above, Edge AI can also play a crucial role in facial recognition technologies, industrial IoT security, and emergency medical care. The list of use cases for Edge AI keeps growing with every passing day. In the near future, by catering to personal and business needs alike, Edge AI is set to become an everyday technology.
In the Face of Rising Security Issues and Hacks, GBC.AI Makes the Case for Machine Learning and AI Integration – TechBullion
Trustless. The word gets thrown around a lot in the blockchain space. It is one of the principles that decentralized finance is based upon. DeFi, as it was initially envisioned, is supposed to provide users with the ability to interact directly with each other thanks to decentralized technology that eliminates the need for third-party control. Faith can be placed in the blockchain systems that are secure and enable users to engage in financial transactions as they wish without having to trust all-too-corruptible humans.
That is at least how it is supposed to go. In reality, things are quite different. This is not to disparage many of the remarkable developments that have occurred thanks to blockchain technology. Individuals willing to take the plunge into the industry have at their fingertips more possibilities than are dreamt of in traditional banking philosophies, which by and large profit off the individual and throw them back bread crumbs as a reward.
There is no disputing the great promise of the industry. What is uncertain, and what needs to be addressed, is whether blockchain technology is making good on that promise, specifically when it comes to security and trustlessness.
Chances are you have heard about the Poly Network hack that happened a couple of weeks ago. Poly Network is a decentralized platform that facilitates peer-to-peer cryptocurrency transactions between users across different blockchains. The hacker took the equivalent of over $600 million worth of different cryptocurrencies. That makes it the biggest hack in cryptocurrency history. And yet, in what some have taken as a sign of the industry's strength, the hack hasn't been treated with the same significance that earlier hacks, for lesser sums, have.
This is partially due to the particulars of the case. The hacker responsible has reportedly returned all of the assets in question. Embedded in one of the final transactions is a note in which the hacker claims that their intentions were not to make a profit but rather to expose vulnerabilities and thereby make the network stronger:
MONEY MEANS LITTLE TO ME, SOME PEOPLE ARE PAID TO HACK, I WOULD RATHER PAY FOR THE FUN. I AM CONSIDERING TAKING THE BOUNTY AS A BOUNUS FOR PUBLIC HACKERS IF THEY CAN HACK THE POLY NETWORK IF THE POLY DONT GIVE THE IMAGINARY BOUNTY, AS EVERYBODY EXPECTS, I HAVE WELL ENOUGH BUDGET TO LET THE SHOW GO ON.
I TRUST SOME OF THEIR CODE, I WOULD PRAISE THE OVERALL DESIGN OF THE PROJECT, BUT I NEVER TRUST THE WHOLE POLY TEAM.
Bizarre, to say the least. If the hacker didn't intend to take the money for themself, then why take so much? Perhaps they wanted to draw as much attention to the situation as possible, knowing that if it was the biggest hack in crypto history, it would surely make headlines. But the repercussions of hacks of that magnitude in the past have led to downturns in the crypto market that have caused assets across the board to depreciate in value. It would be a very risky way to try to strengthen DeFi.
One also has to consider that the hacker's hands were tied. Many of the stolen funds had been identified on their respective blockchains, and exchanges like Binance had promised to freeze any of them that came within their purview. In addition to that, a significant portion of the stolen funds were in USDT. When Tether got wind of what had happened, they announced that they would be freezing the approximately $33 million of USDT that had been stolen, preventing the hacker from making any transfers with them.
Regardless of the hacker's intentions and the fact that the funds ended up getting returned, there are two issues here that bring us back to where we started. The first is the issue of security. About $1 billion has been stolen from DeFi projects this year alone. That is a staggering figure, and it indicates that there are serious issues with the security of blockchain platforms. Hand in hand with that is the issue of trust.
In this case Tether froze the stolen funds, preventing the hacker from transacting with them. While the intentions and outcome here were both good, this kind of power completely flies in the face of the decentralized ethos. There should not be a third party that can act like a traditional bank with complete control over assets that are being exchanged among users. The whole idea behind decentralized finance was to do away with institutions like that.
The problem is that the systems in place are not yet trustworthy enough. The industry is still young, so from a certain perspective growing pains and vulnerabilities are to be expected. But $1 billion in stolen assets is much more than can be explained away by an industry that is still coming into its own. The unpleasant truth is that, by and large, the DeFi industry is falling short of what it promises.
This is where projects like GBC.AI come into the picture. GBC.AI is a company that has been working to apply the benefits of AI and machine learning to the blockchain sector. The project has developed what it calls "blockchain guardians," AI technology that optimizes blockchain operations while also taking a preemptive approach to chain security.
Once a blockchain is launched, it is very difficult to go back and alter it or improve it. While there are benefits to immutability, there are also downsides. Take the Poly Network case. Once a flaw in the network has been detected and exploited, given that the blockchain is already in operation, there is a substantial risk that the entire chain could collapse under the pressure of further attacks.
What GBC.AI is striving to do is to make blockchains adaptable and dynamic. With a blockchain guardian connected to a network, the AI can assess potential risks before they appear and greatly reduce any threat of dropped transactions. Rather than dealing with a static network, attackers will have to deal with blockchains that are constantly reacting and adapting to internal and external circumstances. As GBC.AI has proven with their work on the Solana blockchain, this kind of arrangement not only bolsters security, but it significantly improves chain functionality.
What is key about this is that it complies with the decentralized, trustless philosophy. By introducing AI and machine learning into the equation, users will not have to place their trust in third parties, like the teams that create and operate exchanges and pseudo-banks like Tether, when they want to participate in decentralized finance.
While projects like GBC.AI and others working to bring AI and machine learning to the blockchain space are still relatively new, given the gravity of the security issues it should only be a matter of time before this becomes a major feature of blockchain development. For a long time people have wondered how the two most significant sectors of technology, AI and blockchain, could operate in conjunction. Circumstances have come together in such a way as to make DeFi the space in which that combination is necessary. The future of the industry could very well depend on it.
Here’s How Companies are Using AI, Machine Learning – Dice Insights
Companies widely expect that artificial intelligence (A.I.) and machine learning will fundamentally change their operations in coming years. To hear executives talk about it, apps will grow smarter, tech stacks will automatically adapt to vulnerabilities, and processes throughout organizations will become entirely automated.
Given the buzz around A.I., it's easy for predictions to slip into the realm of the fantastical ("In less than six months, we'll have cars that drive themselves! And apps that predict what a user wants before they want it!"). It's worth taking a moment to see what companies are actually doing with A.I. at this juncture.
To that end, CompTIA recently asked 400 companies about their most common use cases for A.I. Here's what they said:
"The pandemic has accelerated digital transformation and changed how we work," Khali Henderson, Senior Partner at BuzzTheory and vice chair of CompTIA's Emerging Technology Community, wrote in a statement accompanying the data. "We learned, somewhat painfully, that traditional tech infrastructure doesn't provide the agility, scalability and resilience we now require. Going forward, organizations will invest in technologies and services that power digital work, automation and human-machine collaboration. Emerging technologies like AI and IoT will be a big part of that investment, which IDC pegs at $656 billion globally this year."
That predictive sales/lead scoring would top this list makes a lot of sense: if companies are going to invest in A.I., they're likely to start with a process that can provide a rapid return on investment (and generate a lot of cash). According to CompTIA, A.I. helps with more effective prioritization of sales prospects via lead scoring and provides detailed, real-time analytics. It's a similar story with CRM/service delivery optimization, where A.I. can help salespeople and technologists better identify potential customers and cross-selling opportunities.
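For readers curious what lead scoring looks like in practice, here is a minimal sketch using scikit-learn. The features, labels, and model choice are invented for illustration and are not drawn from the CompTIA survey or any vendor's product.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Toy lead data: [pages_viewed, emails_opened, demo_requested, company_size_log]
rng = np.random.default_rng(1)
X = rng.normal(size=(1000, 4))
# synthetic "converted" label loosely tied to engagement features
y = (0.8 * X[:, 0] + 1.5 * X[:, 2] + rng.normal(scale=0.5, size=1000) > 0.5).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=1)
model = LogisticRegression().fit(X_tr, y_tr)

# Score new leads by predicted conversion probability and rank the pipeline.
scores = model.predict_proba(X_te)[:, 1]
top_leads = np.argsort(scores)[::-1][:5]
print("highest-priority leads:", top_leads, scores[top_leads].round(2))
```

Real systems add far richer firmographic and behavioral features, but the ranking-by-probability idea is the same.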
Companies have spent years working on chatbots and digital assistants, hoping that automated systems can replace massive, human-powered call centers. So far, they've had mixed results; the early generations of chatbots were capable of conducting simple interactions with customers, but had a hard time with complex requests and the nuances of language. The emergence of more sophisticated systems like Google Duplex promises a future in which machines effectively chat with customers on a range of issues, provided customers can trust interacting with software in place of a human being.
As A.I. and machine learning gradually evolve, opportunities to work with the technology will increase. While many technologists tend to equate artificial intelligence with cutting-edge projects such as self-driving cars, this CompTIA data makes it clear that companies' first use of A.I. and machine learning will probably involve sales and customer service. Be prepared.
Machine Learning Operationalization Software Market Summary, Trends, Sizing Analysis and Forecast To 2027 – The AltcoinBeacon
Market Study Report has added the Global Machine Learning Operationalization Software Market Research report to its online database. The report provides information on industry trends, demand, top manufacturers, countries, materials, and applications.
The Machine Learning Operationalization Software market report offers a competitive advantage to companies operating in this business sphere through a holistic analysis of the present as well as future growth prospects. Informed by the views of industry experts and analysts, the report is compiled in an easily understandable manner that answers all doubts and queries of the client.
The document highlights important facets, including key growth stimulants and opportunities that will facilitate revenue flow during the study period. Moreover, it lists the challenges faced by the industry alongside solutions to overcome them. Insights pertaining to the market share and growth rate estimates of the industry segments are included as well.
Apart from this, the report delves into the business scenario across various regional markets and profiles the established players in these geographies. Furthermore, it explicates the prevalent tactics employed by leading companies while simultaneously suggesting strategies for adapting to the industry changes amid the COVID-19 pandemic.
The key questions answered in the report:
Key pointers from the TOC of the Machine Learning Operationalization Software market report:
Product gamut
Application scope
Regional outlook
Competitive landscape
In conclusion, the Machine Learning Operationalization Software market report systematically scrutinizes the industry through a multitude of segments to provide a broad view of this business domain. Additionally, it delineates the supply chain in terms of consumers, distributors, raw material suppliers, and equipment traders in this industry.
Column: Simplifying live broadcast operations using AI and machine learning – NewscastStudio
Artificial intelligence and machine learning are seen as pillars of the next generation of technological advancement in broadcast media for a variety of reasons, including the ability to sift through mountains of data while identifying anomalies, spotting trends, and alerting users to potential problems before they occur, without the need for human intervention. The more data these models ingest, the more they improve over time, meaning that the more ML models are utilized across a variety of applications, the faster and more complex the insights derived from these tools become.
But to truly understand why machine learning provides enormous value for broadcasters, let's break it down into use cases and components within broadcast media where AI and ML can have the greatest impact.
Imagine a live sporting event stops streaming, or that frames start dropping for no apparent reason. Viewers are noticing quality problems and starting to complain. Technicians are baffled and customers may have just missed the play of the year. Revenue therefore takes a hit and executives want to know what is to blame.
These are situations every broadcaster wants to avoid, and in these tense moments there is no time to lose: viewers are flipping to other services and ad revenue is being lost by the second. What went wrong? Who or what is to blame, and how can we get this back up and running immediately while mitigating the risk in the future? Modern broadcasters need to know before problems happen, not be caught in a crisis trying to pick up the pieces after an incident.
The promise of our interconnected world means video workflows are interacting, intertwining, and integrating in new ways every day, simultaneously increasing information sharing, agility, and connectivity while producing increasingly complex challenges and issues to diagnose. As more on-prem and cloud resources are connected with equipment from different vendors, sources, and partner organizations, distributing to new device types, an enormous, ever-expanding volume of log and telemetry data is produced.
As a result, broadcast engineers have more information than they can effectively process. They routinely silence frequent alerts and alarms because, with so much data overload, it can be impossible to tell what is important and what is not. This inevitably leaves teams overwhelmed and lacking insights.
Advanced analytics and ML can help with these problems by making sense of overwhelming quantities of data, allowing human operators to cut through insignificant clutter and understand where issues are likely to occur before failures are noticed. Advanced analytics give media companies the unprecedented opportunity to leverage sophisticated event correlation, data aggregation, deep learning, and virtually limitless applications to improve broadcast workflows. The benefit is being able to do more with less: to innovate faster than the competition and prepare for the future, both by increasing the knowledge base and by opening the potential for cost reduction and time savings, homing in on the crucial details behind the data that matters most to users and the organization.
One of the biggest challenges facing broadcast operations engineers is recognizing when things are not working before the viewer's experience is affected. In a perfect world, operators and engineers want to predict outages and identify potential issues ahead of time. Machine learning models can be orchestrated to recognize normal ranges based on hundreds to thousands of measurements, beyond the ability of a human operator, and to alert the operator in real time when a stream anomaly occurs. While this process normally requires monitoring logs on dozens of machines and keeping track of the performance of network links between multiple locations and partners, using ML allows the system to identify patterns in large data sets and helps operators focus only on workflow anomalies, dramatically reducing workload.
Anomaly detection works by building a predictive model of what the next measurements related to a stream will be (for example, the round-trip time of packets on the network or the raw bitrate of the stream) and then determining how different the expected value is from the next measurement. As a tool to sort normal from abnormal streams, this can be essential, especially when managing hundreds or thousands of concurrent channels. One benefit of identifying anomalous behavior would be enabling an operator to switch to a backup that uses a different network link before a failure occurs.
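As a concrete illustration of the forecast-and-compare idea described above, here is a minimal sketch of a per-stream anomaly scorer based on an exponentially weighted moving forecast. The metric, smoothing factor, and threshold are illustrative assumptions, not a description of any vendor's actual system.

```python
import numpy as np

def ewma_anomaly_flags(measurements, alpha=0.3, threshold=4.0, warmup=10):
    """Flag measurements that deviate strongly from an EWMA forecast.

    measurements: sequence of per-stream readings, e.g. round-trip times in ms.
    """
    forecast = measurements[0]
    var = 1.0                      # running estimate of squared residuals
    flags = [False]                # the first point has no forecast to compare against
    for i, x in enumerate(measurements[1:], start=1):
        resid = x - forecast
        score = abs(resid) / np.sqrt(var)
        flags.append(i > warmup and score > threshold)
        # update the forecast and residual variance with the new observation
        forecast = alpha * x + (1 - alpha) * forecast
        var = alpha * resid ** 2 + (1 - alpha) * var
    return flags

# toy example: steady ~40 ms round-trip times with one sudden spike
rtt = np.concatenate([np.random.default_rng(0).normal(40, 1, 200), [75],
                      np.random.default_rng(1).normal(40, 1, 50)])
flags = ewma_anomaly_flags(rtt)
print("anomalies at indices:", [i for i, f in enumerate(flags) if f])
```

Production systems track many such signals per stream and combine them, but the underlying "forecast, compare, update" loop is the same.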
Anomaly detection can also be a vital component of reducing needless false alarms and wasted time. Functionality such as customizable alerting preferences and aggregated health scores, generated from threat-gauging data points, helps operators sift through and assimilate data trends so they can focus where they really need to. In addition, predictive and proactive alerting can be orders of magnitude less expensive and allows broadcasters to identify the root causes of instability and failure faster and more easily.
A major challenge for any analytics system is data collection. When a video workflow comprises machines in disparate data centers running different operating systems and tools, it can be difficult to assimilate and standardize reliable, relevant data that can be used in any AI/ML system. While there are natural data aggregation points in most broadcast architectures, for example if you are using a cloud operations and remote management platform or a common protocol stack, this is certainly not a given.
Although standards exist for how video data should be formatted and transmitted, few actually describe how machine data, network measurements, and other telemetry should be collected, transmitted and stored. Therefore it is essential to select a technology partner that sends data to a common aggregation point where it is parsed, normalized and put into a database while supporting multiple protocols to support a robust AI/ML solution.
Once you have a method for collecting real-time measurements from your video workflow, you can feed this data into a ML engine to detect patterns. From there you can train the system not only to understand normal operating behavior for anomaly detection, but also to recognize specific patterns leading up to video degradation events. With these patterns determined you can also identify common metadata related to degradation events across systems, allowing you to identify that the degradation event is related to a particular shared network segment.
For example, if a particular ISP in a particular region continues to experience latency or blackout issues, the system learns to pick up on warning signs ahead of time and notifies the engineer before an outage, preventing issues proactively while simultaneously improving root cause identification across the entire ecosystem. Developers can also see that errors are more often observed with particular encoder or network hardware settings. Unexpected changes in the structure of the video stream or the encoding quality might also be important signals of impending problems. By observing correlations, ML gives operators key insights into the causes of problems and how to solve them.
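To make the pattern-recognition step more concrete, here is a hypothetical sketch of training a classifier on windowed telemetry features to predict whether a degradation event follows. The features, labels, and model are synthetic stand-ins chosen for the example; they are not drawn from the article or any specific monitoring product.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

# toy telemetry: rows = 5-minute windows, columns = illustrative features
rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 4))   # [rtt_ms, jitter_ms, bitrate_kbps, packet_loss_pct]
# label a window 1 if a degradation event followed it (synthetic rule for the demo)
y = ((X[:, 0] + X[:, 3]) > 1.5).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = GradientBoostingClassifier().fit(X_tr, y_tr)
print("held-out accuracy:", clf.score(X_te, y_te))

# feature importances hint at which telemetry correlates with impending problems
print(dict(zip(["rtt_ms", "jitter_ms", "bitrate_kbps", "packet_loss_pct"],
               clf.feature_importances_.round(3))))
```

The same pattern, labeled historical windows plus interpretable feature attributions, is what lets operators connect predicted failures back to a likely root cause.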
Predictive analytics, alerts and correlations are useful for automated failure prediction and alerting, but when all else fails, ML models can also be used to help operators concentrate on areas of concern following an outage, making retrospective analysis much easier and faster via root cause analysis.
With workflows that consist of dozens of machines and network segments, it is inherently difficult to know where to look for problems. However, ML models, as we have seen, provide trend identification and help visualize issues using data aggregation. Even relatively straightforward visualizations of how a stream deviates from the norm are incredibly valuable, whether in the form of historical charts, customizable reports or questions as simple as how a particular stream compares to a similar recent stream.
Leveraging AI and ML to improve operational efficiency and quality provides a powerful advantage while preparing broadcasters for the future of live content delivery over IP. Selecting the right vendor for system monitoring and orchestration that integrates AI and ML capabilities can help your organization make sense of the vast amounts of data being sent across the media supply chain and be a powerful differentiator.
Just as experiments to test hypotheses are essential to the traditional learning process, the same goes for ML models. Building, training, deploying, and updating ML models are inherently complex tasks, meaning providers, in cooperation with their users, must continue to iterate, compare results, and adjust accordingly to understand the "why" behind the data, improving root cause analysis and the customer experience.
Machine learning presents an unprecedented opportunity for sophisticated event correlation, data aggregation, deep learning, and virtually unlimited applications across broadcast media operations as it evolves exponentially year to year. As models become more informed and interconnected, problem solving and resolution technology based on deep learning and AI will become increasingly essential tools. Broadcast organizations looking to prepare themselves for such a future would be wise to prepare for this eventuality by choosing the right vendor to integrate AI and ML enabled tools into their workflows.
Andrew leads Zixi's Intelligent Data Platform initiative, bringing AI and ML to live broadcast operations. Before Zixi he led the video platform product team at Brightcove, where he spent six years working with some of the largest broadcasters and media companies. Particular areas of interest include live streaming, analytics, ad integration, and video players. Andrew has an MBA from Babson College and a BA from Oberlin College.
Apple’s Machine Learning Research Team have Published a Paper on using Specialized Health Sensors in Future AirPods – Patently Apple
Apple began discussing integrating health sensors into future sports-oriented headphones in a patent application that was published back in April 2009 and filed in 2008. Apple's engineers noted at the time that "The sensor can also be other than (or in addition to) an activity sensor, such as a psychological or biometric sensors which could measure temperature, heartbeat, etc. of a user of the monitoring system." Fast forward to 2018, when Apple decided to update its AirPods trademark by adding "wellness sensors" to its description, a telltale sign something was in the works. Then a series of patents surfaced in the 2020-21 timeframe covering health sensors for future AirPods (01, 02 & 03). To top it all off, in June of this year, Apple's VP of Technology talked about health sensors on Apple Watch and possibly AirPods.
The latest development on this front came from Apple's Machine Learning (ML) Research team earlier this month in the form of a research paper. Apple notes, "In this paper, we take the first step towards developing a breathlessness measurement tool by estimating respiratory rate (RR) on exertion in a healthy population using audio from wearable headphones. Given this focus, such a capability also offers a cost-effective method to track cardiorespiratory fitness over time. While sensors such as thermistors, respiratory gauge transducers, and acoustic sensors provide the most accurate estimation of a person's breathing patterns, they are intrusive and may not be comfortable for everyday use. In contrast, wearable headphones are relatively economical, accessible, comfortable, and aesthetically acceptable."
Further into the paper, Apple clarifies: "All data was recorded using microphone-enabled, near-range headphones, specifically Apple's AirPods. These particular wearables were selected because they are owned by millions and utilized in a wide array of contexts, from speaking on the phone to listening to music during exercise."
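The paper's actual models are more sophisticated, but the basic signal-processing intuition, that breathing slowly modulates the energy picked up by a near-field microphone, can be sketched in a few lines. The following toy estimator is an assumption-laden illustration, not Apple's method: it simply counts peaks in a smoothed short-time energy envelope of synthetic audio.

```python
import numpy as np
from scipy.signal import find_peaks

def estimate_rr_bpm(audio, sr=16000, frame_s=0.05):
    """Rough respiratory-rate estimate from near-field microphone audio."""
    # Short-time energy: breathing modulates overall audio energy slowly.
    frame = int(sr * frame_s)
    n = len(audio) // frame
    energy = (audio[: n * frame].reshape(n, frame) ** 2).mean(axis=1)
    # Smooth the energy curve so only breath-scale fluctuations remain.
    kernel = np.ones(20) / 20          # ~1 s moving average at 50 ms frames
    smooth = np.convolve(energy, kernel, mode="same")
    # Each energy peak is counted as one breath; breaths assumed >= 1.5 s apart.
    peaks, _ = find_peaks(smooth, distance=int(1.5 / frame_s))
    duration_min = n * frame_s / 60.0
    return len(peaks) / duration_min

# synthetic 60-second clip: 15 breaths/min amplitude modulation over noise
sr = 16000
t = np.arange(60 * sr) / sr
audio = (0.6 + 0.4 * np.sin(2 * np.pi * 0.25 * t)) * np.random.randn(len(t)) * 0.1
print(round(estimate_rr_bpm(audio, sr), 1), "breaths per minute (expected ~15)")
```

Real-world audio from exercise is far messier, which is why the published work relies on learned models rather than simple peak counting.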
Below is a full copy of the research paper published by Apple's Machine Learning Research team in the form of a Scribd document, courtesy of Patently Apple.
Machine Learning Team Paper on Respiratory Rates in Wearable Microphones by Jack Purcher on Scribd
While the paper doesn't discuss when these specialized sensors using machine learning techniques will be implemented in AirPods, it's clearly a positive development that Apple is well into the process of proving the value of adding such sensors to future AirPods.
Leveraging 5G to Support Inference at the Edge – HPCwire
The hardware and infrastructure that enables AI, machine learning, and inference has consistently improved year after year. As high-performance processing, GPU-acceleration, and data storage have advanced, the performance available in the data center has become powerful enough to support machine learning and training workloads. Still, data bottleneck challenges continue to exist over WANs when teams look to implement AI workloads in real world, production environments.
With the advent of 5G wireless networks, deploying AI at the edge and managing the movement of crucial data between the edge and the data center is becoming more practical. When you consider the high-bandwidth and low-latency advantages of 5G paired with the improvements in processing power, high-speed storage, and embedded accelerators within edge devices, you see a pathway towards powerful, real-world inference applications.
Inference on the edge is a major opportunity for businesses to gain a competitive advantage and improve operational efficiencies. Examples of use cases for edge computing include autonomous vehicles, natural language processing, and computer vision.
While 5G is a critical advancement that makes these edge deployments possible, it also means there must be substantial changes to edge devices and data center infrastructure. Most inference deployments will need advanced computing power, expanded storage, and improved connectivity to handle demanding workloads, larger amounts of data, and faster transmission of data to and from the data center.
Even with improved speed and efficiency from 5G networking, businesses cannot rely on these networks to always operate at peak efficiency. As adoption continues and more edge devices are deployed, there may be variance in network strength, bandwidth, and load. Instead, they will require localized, low-latency computing resources for edge data processing and storage to meet their goals. This will limit the amount of data that must be transmitted to cloud or on-premises data centers for intensive compute tasks to improve performance and limit the risk of exceeding network bandwidth.
Real-World Artificial Intelligence at the Edge
Autonomous vehicles, just-in-time maintenance, and real-time image processing. These are some of the ways in which organizations hope to deploy AI technologies such as machine learning and deep learning at the edge.
Machine learning and deep learning rely on massive amounts of data that must be stored and processed. Leveraging these technologies at the edge requires a tiered processing system in which data is analyzed and processed to a point at the edge, then uploaded to a data center for further processing and training of algorithms and artificial neural networks (ANNs). Until 5G, WANs had not been powerful enough to support an effective multi-tiered processing system.
In a tiered system, edge devices can carry some of the burden of data processing. However, AI workflows will require support from more powerful compute resources to train algorithms, enable human oversight, and analyze data. To support this, the data center will require significant changes.
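To illustrate the tiered idea in the simplest possible terms, here is a hedged sketch in which an edge device handles confident predictions locally and escalates only uncertain samples to the data center. The model, threshold, and upload function are placeholders invented for the example, not part of any described deployment.

```python
import numpy as np

CONFIDENCE_THRESHOLD = 0.8  # illustrative cutoff for escalating to the data center

def edge_classify(sample):
    """Stand-in for a small on-device model; returns (label, confidence)."""
    score = float(np.tanh(0.5 * sample.sum()))   # toy scoring function
    return int(score > 0), abs(score)

def upload_for_retraining(sample, label):
    """Placeholder for shipping hard examples to the data center over 5G."""
    print(f"queued sample for central training, provisional label={label}")

def process_stream(samples):
    for sample in samples:
        label, confidence = edge_classify(sample)
        if confidence >= CONFIDENCE_THRESHOLD:
            # handled entirely at the edge: low latency, no bandwidth consumed
            continue
        # uncertain case: escalate to the data center for heavier models/training
        upload_for_retraining(sample, label)

process_stream(np.random.default_rng(0).normal(size=(10, 8)))
```

The design choice is the point: only the hard cases cross the network, which keeps latency low at the edge while still feeding the central training loop.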
If your organization has been developing ANNs in the cloud or on a local cluster, moving to production inference on the edge has one notable red flag. You must consider the effect of this shift on the networking capabilities of your data center environment, both from the edge to the data center and between compute and storage within the system. This can have a ripple effect on things like power and cooling, which needs to be accounted for in system design.
Another consideration is the potential need for flexibility in workflows within the data center. With hundreds or thousands of edge devices producing and consuming data, supporting various applications and workloads from a single data center environment is key.
Some situations may require innovative technologies such as composable infrastructure to make it cost-effective. Composable infrastructure abstracts hardware resources from their physical location and manages them via software over the network fabric, so you can apply those assets where needed at any given time.
The data center is not the only area that requires significant consideration. As you plan to deploy inference devices on the edge, compute, storage, acceleration, and connectivity capabilities will play a major part in your success.
On the edge, supporting 5G connectivity in industrial mobile computing devices means rethinking many of the core pieces of their design, including RF antennas; power requirements; new hardware and firmware; new safety and regulatory testing; and cybersecurity tools.
New edge infrastructures will also need advanced security solutions to protect against the inherent risks of expanding your environment to thousands of decentralized devices. This means finding tools that eliminate redundant copies of data or resource silos, encrypt data in-flight and at rest, and consider the physical access risks of unmonitored nodes and embedded systems throughout the world.
Regardless of how we each approach adopting inference at the edge, it will inevitably become a central technology in the enterprise businesses of tomorrow.
It is critical for organizations considering edge computing to find technology partners that know how to work with 5G. A company that has experience deploying cutting-edge AI, HPC, and data analytics workloads is ideal for their understanding of emerging technologies, high-speed networking, and complex data management systems.
Deep machine learning study finds that body shape is associated with income – PsyPost
A new study published in PLOS One has found a relationship between a person's body shape and their family income. The findings provide more evidence for the "beauty premium," a phenomenon in which people who are physically attractive tend to earn more than their less attractive counterparts.
Researchers have consistently found evidence for the beauty premium. But Suyong Song, an associate professor at The University of Iowa, and his colleagues observed that the measurements typically used to gauge physical appearance suffer from some important limitations.
"I have been curious of whether or not there is physical attractiveness premium in labor market outcomes. One of the challenges is how researchers overcome reporting errors in body measures such as height or weight, as most previous studies often defined physical appearance from subjective opinions based on surveys," Song explained.
"The other challenge is how to define body shapes from these body measures, as these measures are too simple to provide a complete description of body shapes. In this study, collaborated with one of my coauthors (Stephen Baek at University of Virginia), we use novel data which contains three-dimensional whole-body scans. Using a state-of-the-art machine learning technique, called graphical autoencoder, we addressed these concerns."
The researchers used deep machine learning methods to identify important physical features in whole-body scans of 2,383 individuals from North America.
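The study's model is a graphical autoencoder that operates on 3D scan meshes; as a loose, simplified illustration of the general idea, learning a compact "body shape" code that can later be related to income, here is a minimal dense autoencoder sketch in PyTorch. The dimensions and data are invented placeholders, not the CAESAR data or the paper's architecture.

```python
import torch
from torch import nn

# Illustrative dimensions only: each scan reduced to a flat feature vector.
N_POINTS = 512            # hypothetical number of surface points kept per scan
LATENT_DIM = 8            # low-dimensional "body shape" code

class BodyShapeAutoencoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(N_POINTS * 3, 256), nn.ReLU(),
            nn.Linear(256, LATENT_DIM),
        )
        self.decoder = nn.Sequential(
            nn.Linear(LATENT_DIM, 256), nn.ReLU(),
            nn.Linear(256, N_POINTS * 3),
        )

    def forward(self, x):
        z = self.encoder(x)          # compact body-shape features
        return self.decoder(z), z

model = BodyShapeAutoencoder()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

scans = torch.randn(64, N_POINTS * 3)        # stand-in for flattened 3D scans
for _ in range(5):                           # a few toy training steps
    recon, _ = model(scans)
    loss = loss_fn(recon, scans)             # reconstruction error drives learning
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

# The latent codes z could then be regressed against family income.
_, codes = model(scans)
print(codes.shape)   # torch.Size([64, 8])
```

The appeal of this approach is that the shape features are learned from the scans themselves rather than chosen by hand, avoiding crude summaries like BMI.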
The data came from the Civilian American and European Surface Anthropometry Resource (CAESAR) project, a study conducted primarily by the U.S. Air Force from 1998 to 2000. The dataset included detailed demographic information, tape measure and caliper body measurements, and digital three-dimensional whole-body scans of participants.
"The findings showed that there is a statistically significant relationship between physical appearance and family income and that these associations differ across genders," Song told PsyPost. "In particular, the male's stature has a positive impact on family income, whereas the female's obesity has a negative impact on family income."
The researchers estimated that a one-centimeter increase in stature (height) is associated with an approximately $998 increase in family income for a male who earns the median family income of $70,000. For women, the researchers estimated that a one-unit decrease in obesity (BMI) is associated with an approximately $934 increase in family income for a female who earns $70,000 of family income.
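Read as linear effects, those figures translate into back-of-the-envelope calculations like the following. This is purely an illustrative restatement of the numbers above, not an additional result from the study.

```python
# Implied linear effects reported in the study, applied to the $70,000 baseline.
median_income = 70_000

income_per_cm_male = 998        # +$998 family income per extra cm of stature
income_per_bmi_female = 934     # +$934 family income per one-unit BMI decrease

# e.g. a male 5 cm taller than the baseline, and a female with BMI 2 units lower:
print(median_income + 5 * income_per_cm_male)     # 74990
print(median_income + 2 * income_per_bmi_female)  # 71868
```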
"The results show that the physical attractiveness premium continues to exist, and the relationship between body shapes and family income is heterogeneous across genders," Song said.
"Our findings also highlight the importance of correctly measuring body shapes to provide adequate public policies for improving healthcare and mitigating discrimination and bias in the labor market. We suggest that (1) efforts to promote the awareness of such discrimination must occur through workplace ethics/non-discrimination training; and (2) mechanisms to minimize the invasion of bias throughout hiring and promotion processes, such as blind interviews, should be encouraged."
The new study avoids a major limitation of previous research that relied on self-reported attractiveness and body-mass index calculations, which do not distinguish between fat, muscle, or bone mass. But the new study has an important limitation of its own.
"One major caveat is that the data set only includes family income as opposed to individual income. This opens up additional channels through which physical appearance could affect family income," Song explained. "In this study, we identified the combined association between body shapes and family income through the labor market and marriage market. Thus, further investigations with a new survey on individual income would be an interesting direction for the future research."
The study, "Body shape matters: Evidence from machine learning on body shape-income relationship," was published July 30, 2021.
Google and Lumen Bioscience Apply Machine Learning to the Manufacture of Spirulina-Based Biologic Drugs – BioPharm International
The companies announce the results of a research collaboration that applied machine learning to significantly advance the scalability of spirulina-based biologic drugs.
Lumen Bioscience, a clinical-stage biopharmaceutical company, announced in an August 11, 2021 press release the results of a research collaboration with Google that applied machine learning (ML) to significantly advance the scalability of spirulina-based biologic drugs. Lumen's platform builds on discoveries in engineering spirulina and the subsequent development of a low-cost system to manufacture the resulting biologics at large scale under biopharmaceutical-grade current good manufacturing practice controls.
In a biomanufacturing system like Lumen's, where the growth media include water and mineral salts, the number of potentially interacting variables is too vast to explore with one-factor-at-a-time experimentation, according to Lumen's press release. The ML application helps accelerate the kind of productivity improvements that took decades for older biomanufacturing platforms like yeast, E. coli, and CHO.
The paper details the application of ML to increase spirulina productivity, using Bayesian black-box optimization to rapidly explore a 17-dimensional space of environmental variables, including pH, temperature, light spectrum, and light intensity. The research, titled "Machine Learning Optimization of Photosynthetic Microbe Cultivation and Recombinant Protein Production," is pending peer review.
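Bayesian black-box optimization of this kind can be sketched with general-purpose tools. The following toy example uses scikit-optimize's Gaussian-process optimizer over just three of the variables named above, with a synthetic yield function standing in for wet-lab measurements; the library, parameter ranges, and objective are all assumptions made for illustration and do not reflect the tooling or values used in the paper.

```python
import numpy as np
from skopt import gp_minimize
from skopt.space import Real

# Stand-in objective: in the real setting this would be a wet-lab measurement
# of protein yield for a given set of cultivation conditions (negated, since
# gp_minimize minimizes).
def negative_yield(params):
    ph, temp_c, light_umol = params
    # toy response surface with an optimum near pH 9.5, 32 C, 300 umol/m2/s
    return -(np.exp(-((ph - 9.5) ** 2) / 2)
             * np.exp(-((temp_c - 32) ** 2) / 50)
             * np.exp(-((light_umol - 300) ** 2) / 20000))

space = [
    Real(8.0, 11.0, name="ph"),
    Real(20.0, 40.0, name="temp_c"),
    Real(50.0, 600.0, name="light_umol"),
]

# Gaussian-process Bayesian optimization: propose conditions, observe yield, repeat.
result = gp_minimize(negative_yield, space, n_calls=30, random_state=0)
print("best conditions:", [round(x, 2) for x in result.x])
print("best (negated) yield:", round(result.fun, 4))
```

The advantage over one-factor-at-a-time experimentation is that each proposed run accounts for everything learned so far, which matters when every evaluation is a costly cultivation experiment.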
"The combination of two pioneering innovations, the machine learning of Google and our spirulina-based therapeutics production, brings us even closer to a fully optimized approach that could have a major impact on devastating diseases globally," said Jim Roberts, co-founder and chief scientific officer of Lumen Bioscience, in a press release. "We believe this paper is the first to describe the application of AI techniques to biologics manufacturing. We look forward to the future implementation of these practices, as supported with funding from the Department of Energy, to provide mucosally and topically delivered biologics for highly prevalent diseases that, until now, have been infeasible due to the cost and scaling challenges of traditional biomanufacturing platforms."
The research was led by Caitlin Gamble of Lumen and Drew Bryant of Google Accelerated Science and was funded in part by the Bill & Melinda Gates Foundation. Lumen Bioscience also received $2 million in additional grant funding from the Department of Energy for further development of these research findings.
Source: Lumen Bioscience