Category Archives: Machine Learning
5 ways machine learning must evolve in a difficult 2023 – VentureBeat
With 2022 well behind us, taking stock of how machine learning (ML) has evolved as a discipline, technology and industry is critical. With AI and ML spend expected to continue to grow, companies are seeking ways to optimize rising investments and ensure value, especially in the face of a challenging macroeconomic environment.
With that in mind, how will organizations invest more efficiently while maximizing ML's impact? How will big tech's austerity pivot influence how ML is practiced, deployed, and executed moving forward? Here are five ML trends to expect in 2023.
Although we saw plenty of top technology companies announce layoffs in the latter half of 2022, it's likely none of these companies are laying off their most talented ML personnel. However, to fill the void of fewer people on deeply technical teams, companies will have to lean even further into automation to keep productivity up and ensure projects reach completion. We also expect to see companies that use ML technology implement more systems to monitor and govern performance and make more data-driven decisions on managing ML or data science teams. With clearly defined goals, technical teams will have to be more KPI-centric so that leadership can have a more in-depth understanding of ML's ROI. Gone are the days of ambiguous benchmarks for ML.
Recent layoffs, specifically for those working with ML, are likely hitting the most recent hires rather than the longer-term staff who have been working with ML for years. Since ML and AI have become more common in the last decade, many big tech companies have begun hiring these types of workers because they could handle the financial cost and keep them away from competitors, not necessarily because they were needed. From this perspective, it's not surprising to see so many ML workers being laid off, considering the surplus within larger companies. However, as the era of ML talent hoarding ends, it could usher in a new wave of innovation and opportunity. With so much talent now looking for work, we will likely see many folks trickle out of big tech and into small and medium-sized businesses or startups.
Looking at ML projects in progress, teams will have to be far more efficient given the recent layoffs and look toward automation to help projects move forward. Other teams will need to develop more structure and set deadlines to ensure projects are completed effectively. Different business units will have to begin communicating more, improving collaboration and sharing knowledge, so that smaller teams can act as one cohesive unit.
In addition, teams will also have to prioritize which types of projects to work on to make the most impact in a short period of time. I see ML projects boiled down to two types: sellable features that leadership believes will increase sales and win against the competition, and revenue optimization projects that directly impact revenue. Sellable feature projects will likely be postponed, as they're hard to get out quickly. Instead, now-smaller ML teams will focus more on revenue optimization, as it can drive real revenue. Performance, in this moment, is essential for all business units, and ML isn't immune to that.
It's clear that next year, MLOps teams, which focus specifically on ML operations, management and governance, will have to do more with less. Because of this, businesses will adopt more off-the-shelf solutions because they are less expensive to produce, require less research time, and can be customized to fit most needs.
MLOps teams will also need to consider open-source infrastructure instead of getting locked into long-term contracts with cloud providers. While organizations using ML at hyperscale can certainly benefit from integrating with their cloud providers, it forces these companies to work the way the provider wants them to work. At the end of the day, you might not be able to do what you want, the way you want, and I can't think of anyone who actually relishes that predicament.
Also, you are at the mercy of the cloud provider for cost increases and upgrades, and you will suffer if you are running experiments on local machines. On the other hand, open source delivers flexible customization, cost savings, and efficiency, and you can even modify open-source code yourself to ensure that it works exactly the way you want. Especially with teams shrinking across tech, this is becoming a much more viable option.
One of the factors slowing down MLOps adoption is the plethora of point solutions. That's not to say that they don't work, but they might not integrate well together and can leave gaps in a workflow. Because of that, I firmly believe that 2023 will be the year the industry moves toward unified, end-to-end platforms built from modules that can be used individually and also integrate seamlessly with each other (as well as integrate easily with other products).
This kind of platform approach, with the flexibility of individual components, delivers the kind of agile experience that today's specialists are looking for. It's easier than purchasing point products and patching them together; it's faster than building your own infrastructure from scratch (when you should be using that time to build models). Therefore, it saves both time and labor, not to mention that this approach can be far more cost-effective. There's no need to suffer with point products when unified solutions exist.
In a potentially challenging 2023, the ML category is due for continued change. It will get smarter and more efficient. As organizations talk about austerity, expect to see the above trends take center stage and influence the direction of the industry in the new year.
Moses Guttmann is CEO and cofounder of ClearML.
Link:
5 ways machine learning must evolve in a difficult 2023 - VentureBeat
HUMBL Launches Artificial Intelligence and Automated Machine Learning Initiatives Across Consumer, Commercial and Latin America – Yahoo Finance
HUMBL, Inc.
San Diego, California, March 28, 2023 (GLOBE NEWSWIRE) -- HUMBL, Inc. (OTCQB: HMBL) HUMBL announced today the launch of its Artificial Intelligence (AI) and Automated Machine Learning initiatives across its consumer, commercial and Latin America business units.
On the commercial side, HUMBL kicked off its AI / Automated Machine Learning initiatives with the announcement of its first commercial sales contract in its HUMBL Latin America subsidiary: the sale of AI / Automated Machine Learning services to a leading IT / telecommunications provider in the Latin America region. The deal comprises a $60,000 (USD) contract for initial deliverables and a total contract value of $195,000 (USD) over three years, pending the achievement of milestones by HUMBL Latin America.
"Artificial Intelligence is an accelerant to the principles of web3," said Brian Foote, CEO of HUMBL. "The use of public data sets to create more autonomous, intelligent outcomes for consumers, as well as the corporations and governments that serve them, is an excellent use of automated machine learning technologies," continued Foote. "The use of AI can help our clients model for more predictive outcomes around things like credit scoring, default rates, churn rates, healthcare patterns and more; driving more tailored experiences for consumers, while driving revenues and improved efficiencies for corporations and governments."
HUMBL has also moved into internal testing on its consumer AI initiatives and its planned Hey BLUE virtual assistant, which builds on the company's signature mascot, a Bored Ape Yacht Club NFT of the same name (BLUE). The company intends to scale up its consumer AI product lines across the HUMBL Platform - in particular around its planned HUMBL Pro subscription services - which will be available across key touch points throughout the HUMBL ecosystem.
About HUMBL
HUMBL is a Web 3 platform with product lines including the HUMBL Wallet, HUMBL Search Engine, HUMBL Social, HUMBL Tickets, HUMBL Marketplace and HUMBL Authentics. The company also has a commercial blockchain services unit called HUMBL Blockchain Services (HBS) for private and public sector clients.
Safe Harbor Statement
This release contains forward-looking statements within the meaning of the Private Securities Litigation Reform Act of 1995. You can identify these statements by the use of the words "may," "will," "should," "plans," "expects," "anticipates," "continue," "estimates," "projects," "intends," and similar expressions. Forward-looking statements involve risks and uncertainties that could cause results to differ materially from those projected or anticipated. These risks and uncertainties include, but are not limited to, the Company's ability to successfully execute its expanded business strategy, including by entering into definitive agreements with suppliers, commercial partners and customers; general economic and business conditions, effects of continued geopolitical unrest and regional conflicts, competition, changes in technology and methods of marketing, delays in completing various engineering and manufacturing programs, changes in customer order patterns, changes in product mix, continued success in technical advances and delivering technological innovations, shortages in components, production delays due to performance quality issues with outsourced components, regulatory requirements and the ability to meet them, government agency rules and changes, and various other factors beyond the Company's control. Except as may be required by law, HUMBL undertakes no obligation, and does not intend, to update these forward-looking statements after the date of this release.
Contact
HUMBL, Inc.
PR@HUMBL.com
Source: HUMBL, Inc.
See the rest here:
HUMBL Launches Artificial Intelligence and Automated Machine Learning Initiatives Across Consumer, Commercial and Latin America - Yahoo Finance
CBS News travels to Toronto to explore the future of AI – University of Toronto
Calling Toronto one of the world's leading AI research hubs, CBS News recently spoke to several of the field's luminaries, all connected to the University of Toronto, about the potential impact of AI-powered chatbot technologies such as ChatGPT.
Geoffrey Hinton, a University Professor Emeritus in the department of computer science in the Faculty of Arts & Science, spoke to CBS News reporter Brook Silva-Braga about the past, present and future of AI, saying the technology is "comparable in scale to the industrial revolution or electricity or maybe the wheel."
Hinton, who is also affiliated with Google, has mentored many students, including U of T alumni Nick Frosst, co-founder of AI language processing company Cohere, and Ilya Sutskever, co-founder of OpenAI, the company that developed ChatGPT.
"In the context of large language models, we get a huge amount of text and then we show it a few words and we get it to predict the next word," Frosst told CBS. "This simple technique turns out to give you something very useful and very powerful."
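To make the next-word idea concrete, here is a toy bigram predictor in R; it is only an illustration of "predict the next word" on a tiny invented corpus, not the transformer models Frosst is describing.

# Tiny invented corpus.
corpus <- c("we get a huge amount of text",
            "we show it a few words",
            "we get it to predict the next word")

tokens  <- unlist(strsplit(tolower(paste(corpus, collapse = " ")), "\\s+"))
bigrams <- data.frame(prev = head(tokens, -1), nxt = tail(tokens, -1))

# Predict the most frequent word that followed `word` in the corpus.
predict_next <- function(word) {
  followers <- bigrams$nxt[bigrams$prev == word]
  if (length(followers) == 0) return(NA)
  names(sort(table(followers), decreasing = TRUE))[1]
}

predict_next("the")  # returns "next"

Large language models do the same job with billions of parameters and far longer contexts, but the training objective, predicting the next token, is the same in spirit.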
Recognized for making key contributions to the tech and AI ecosystem, U of T is also home to the Schwartz Reisman Institute for Technology and Society, which was created to help guide the development and implementation of AI and other transformational technologies by taking into account social, ethical and other considerations.
Just how these powerful technologies are managed is an issue people should be thinking about now, Hinton said, pointing out that the development of artificial general intelligence, a machine that could exhibit human levels of intelligence, is progressing faster than we think.
"Until quite recently I used to think it might be 20 to 50 years before we have general purpose AI," he said. "Now I think it might be 20 years or less."
See more here:
CBS News travels to Toronto to explore the future of AI - University of Toronto
Machine learning expert: Health information revolution is underway – Bryant University
In medicine, notes Tingting Zhao, Ph.D., there's no such thing as too much information. "When it comes to making the best decisions for their patients, doctors always want to know more," she says. An assistant professor of Information Systems and Analytics and a faculty fellow with Bryant's Center of Health and Behavioral Sciences, Zhao is an emerging leader in healthcare informatics, a field that provides new insights into keeping us healthier and better informed.
Zhao, an accomplished researcher, studies a range of topics at the intersection of data, technology, and medical knowledge. That fusion, she says, is an important and exciting frontier with limitless potential. "By combining all three of them, we can make better decisions in terms of public policy, disease control, and disease prevention," says Zhao, who will be teaching in Bryant's new Healthcare Informatics graduate program this fall.
As technology evolves, it produces more information than ever before. Some estimates suggest that the healthcare industry generates as much as 30 percent of the world's data, a number that could rise to 36 percent by 2025, according to RBC Capital Markets. Healthcare informatics, which integrates healthcare sciences, computer science, information science, and cognitive science, helps practitioners figure out new ways to use data to enhance delivery of care, improve patient education, and inform public health policy.
"It is now more crucial than ever that professionals have the skills and knowledge to understand, use, and innovate with data," says Zhao, pointing to the rise of precision medicine, an emerging approach for disease treatment and prevention that takes into account individual variability in genes, environment, and lifestyle. "It's a competitive and exciting area," she says, one fueled by a desire to create new ways to treat patients. "We can develop more efficient systems to not just record that data but to use it to make informed decisions as well."
"The whole world is realizing we should put more resources into developing new technologies to help us provide better healthcare and better understand what makes us sick."
Personal health devices such as Apple Watches and Fitbits are other examples of the health information revolution, states Zhao. "We are connecting with and developing new pieces of component technology and new techniques that we can use as analytical tools to analyze patient behaviors," she says.
The COVID-19 pandemic, Zhao says, drastically raised awareness of the value of the field and led to a massive influx of funding for data-related healthcare projects at the national level, taking it from roughly $23 billion to nearly $96 billion. "The whole world is realizing we should put more resources into developing new technologies to help us provide better healthcare and better understand what makes us sick," she states.
An expert in machine learning, a branch of artificial intelligence and computer science that focuses on the use of data and algorithms to imitate the way humans learn, Zhao developed her passion for combining data science with healthcare as a doctoral student, when she discovered the thrill of knowing that something she worked on could one day help real-life patients. "It was exciting to know this was the future," she says.
Zhao has since contributed to a variety of projects, including a National Institutes of Health-funded collaboration with Harvard Medical School and Northeastern Group to develop algorithms that can identify Chronic Obstructive Pulmonary Disease (COPD) and make predictions regarding its progress. Her most recent study involved developing and honing algorithms to identify which genes respond to specific perturbation stressors, work that can provide a better understanding of the underlying mechanisms of disease and advance the identification of new drug targets; it was published in the prestigious journal Briefings in Bioinformatics.
"This is an opportunity to do meaningful work that benefits the entire community."
Students in the university's Healthcare Informatics graduate program, Zhao says, will gain exposure to research conducted by their professors, including opportunities to join them in scholarly work. "When students come here, they can learn what truly goes on in the field," says Zhao, who cites working with junior partners on research projects as one of her favorite parts of being a professor.
The program aligns well with Bryant's other new graduate programs in Data Science and Business Analytics, notes Zhao, who is teaching courses in both. This provides opportunities for ideas to cross-pollinate and lead to new insights and innovations. The healthcare informatics field, specifically, is ideal for curious people who want to explore and to find ways to use their talents and their education to help others.
"This is an opportunity to do meaningful work that benefits the entire community," says Zhao.
Continue reading here:
Machine learning expert: Health information revolution is underway - Bryant University
‘The new Excel’: MBA students flock to machine learning course – University of Toronto
With recent instability in some U.S. banks and the crypto winter that began last year, experts say it's more important than ever for finance professionals to understand the innovations and challenges in the sector.
"The world is changing quickly, and so too are the skills needed to thrive," says John Hull, a University Professor of finance at the University of Toronto's Rotman School of Management.
Hull is the academic director of the Rotman Financial Innovation Hub (FinHub), which is designed to help fintech practitioners, students and faculty to share insights and equip students with best-in-class knowledge of financial innovation. He created the hub five years ago with Andreas Park, professor of finance at U of T Mississauga, and the late Peter Christoffersen, who was a professor of finance at Rotman.
"We recognized there were lots of things happening in the financial sector that are transformative and different, and we wanted to develop the knowledge base and pass it on to the students so they can compete in this space," says Park, who has a cross-appointment to Rotman.
Each year, students can take courses taught by FinHub-affiliated faculty. That includes Hull and Park, who offer courses on machine learning, blockchain, decentralized finance and financial market trading.
One of the most in-demand MBA electives is machine learning and financial innovation, which introduces students to the tools of machine learning. A similar course is compulsory for students in the master of financial risk management and master of finance programs.
Students are required to learn Python in the course, with Hull calling the programming language "the new Excel" as it becomes a common requirement for many jobs in finance.
"I've met traders in their 40s who go and learn Python because it simplifies their workflow," says Park. "It's all about inferring data and making sense of it, and then predicting future data using machine learning tools. And to do that, you need to learn Python."
The machine learning course is offered to full-time MBA students in March and April of their first year. Its also available as an elective in their second year.
"Many MBA students get involved in machine learning as part of their summer internship, so it's important to give them an opportunity to familiarize themselves with machine learning and Python applications prior to that time," says Hull.
MBA student Cameron Thompson took the course prior to an internship at Boston Consulting Group (BCG) and says the hands-on practice in class was invaluable, with or without an extensive background in computer programming.
"Being familiar with common machine learning terminology from day one on the job was quite useful," says Thompson, who will be returning to BCG full-time following graduation. "The course builds a solid foundation for using data in a strategic way and then adds the machine learning content; it's hard to go anywhere without seeing an application."
In his second year, Thompson pursued an independent FinHub study project sponsored by the Bank of Canada that involved working with researchers from Rotman and the Faculty of Applied Science & Engineering on a natural language processing model.
MBA grad Fengmin Weng, who took the elective course with Hull, says the insights from class prepared her to lead a machine learning project at TD.
"Machine learning is definitely the trend in the financial industry, particularly in the risk management area," says Weng, who came from an accounting background when she pursued the master of financial risk management program.
"It definitely helps us to make better decisions around our strategy," she says. "If you want to develop your career in the risk area, machine learning is your weapon."
Richard Liu, who received his MBA from Rotman three years ago, says the machine learning course was one of the most eye-opening parts of his MBA experience. Today, he says he uses many of the concepts from the course in his work as a financial planner.
"I'm able to recognize when it's more effective to train computers to enhance our work, how to coexist with robo-advisors and how to automate some of our financial planning processes," says Liu.
Students involved with FinHub courses are equipped with the tools to think critically about the implications and benefits of emerging technologies in the financial sector, says Park, adding that they're able to enter an organization and use these tools to help improve processes and strategies.
Hull, meanwhile, says students who take the course gain insight into the direction the finance world is heading, namely that machine learning is becoming "more and more important in business."
Read more here:
'The new Excel': MBA students flock to machine learning course - University of Toronto
Machine Learning Models Offer Effective Approach for Analyzing … – The Ritz Herald
Data breaches have become a major concern for companies in recent years, as they can result in significant financial and reputational damage. A study by IBM found that the average cost of a data breach is $3.86 million, highlighting the importance of developing effective strategies to prevent them. Dr. Aashis Luitel's research provides a comprehensive approach to analyzing data breach risks using machine learning models. The study emphasizes the need to conduct a detailed analysis of publicly available data breach records to identify trends in data breach characteristics and sources of geographical heterogeneity. Dr. Luitel is a Technical Program Manager at Microsoft's Cloud and Artificial Intelligence and a cybersecurity professorial lecturer at various US universities. He earned a doctorate from the George Washington University.
Dr. Luitel's research involves developing a series of supervised machine-learning models to predict the probability of data breach incidence, size, and timing. The methodology uses tree-based supervised machine learning methods adapted to high-dimensional sparse panel data, as well as nonparametric and parametric survival analysis techniques. The study results indicate that the proposed modeling framework provides a promising toolbox that directly addresses the timing of repeat data breaches. Analyzing feature importance, partial dependence, and hazard ratios revealed early warning signals of data breach incidence, size, and timing for US organizations.
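As a rough illustration of the survival-analysis component described here, and not Dr. Luitel's actual model or data, a Cox proportional-hazards fit in R exposes the kind of hazard ratios mentioned above; the data frame and column names below are hypothetical.

library(survival)

# Hypothetical panel of organizations: months until a (repeat) breach, an event
# indicator, and two illustrative contextual features.
breaches <- data.frame(
  months_to_breach = c(14, 32, 7, 25, 11, 40, 19, 28),
  breached         = c(1, 0, 1, 1, 1, 0, 1, 0),
  employees_k      = c(12, 3, 45, 8, 30, 2, 16, 5),
  prior_breaches   = c(1, 0, 2, 0, 1, 0, 1, 0)
)

fit <- coxph(Surv(months_to_breach, breached) ~ employees_k + prior_breaches,
             data = breaches)

exp(coef(fit))  # hazard ratios: multiplicative effect of each feature on breach risk

A hazard ratio above 1 for a feature would suggest that organizations with higher values of that feature tend to experience breaches sooner, which is the kind of early warning signal the study analyzes.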
Dr. Luitel notes that his research has important implications for security engineers and developers of data security systems. By assessing an organization's susceptibility to data breach risks based on various contextual features, stakeholders can make informed decisions about protecting their organizations from data breaches. Moreover, the methodology proposed in the study can help organizations gain executive management support for implementing security systems, thereby minimizing a data breach's financial and reputational impact.
Dr. Luitel's research is particularly timely given the recent surge in remote work due to the COVID-19 pandemic. The pandemic has led to an increase in cyber-attacks and data breaches, as many organizations have had to quickly shift to remote work without adequate security measures in place. Remote work has opened up new vulnerabilities and risks for organizations, such as unsecured Wi-Fi networks and personal devices used for work purposes. As a result, it is more critical than ever to have effective strategies for preventing and managing data breaches.
In addition to the risks posed by remote work, organizations face a constantly evolving threat landscape, with cybercriminals using increasingly sophisticated techniques to breach networks and steal sensitive data. This makes it challenging for security professionals to keep up and identify potential threats before they cause damage.
Dr. Luitel's research provides a promising solution to this challenge by using machine learning models to automate the process of identifying potential data breach risks. By analyzing large amounts of data, the models can detect patterns and trends that may be difficult for humans to discern. This can help organizations gain a more comprehensive understanding of their vulnerabilities and develop more effective security strategies.
Furthermore, the methodology proposed by Dr. Luitel can benefit organizations across a wide range of industries, including healthcare, finance, and retail. Healthcare, in particular, is vulnerable to data breaches due to the sensitive nature of patient information. With the increasing use of electronic health records and other digital tools, healthcare providers must ensure robust security measures to protect patient data.
In the finance industry, data breaches can have significant financial consequences, potentially damaging consumer trust and resulting in regulatory fines. By using machine learning models to predict the likelihood of a data breach and identify areas of vulnerability, financial institutions can develop more targeted security strategies and minimize the impact of any breaches that do occur.
In retail, data breaches can result in losing valuable customer data, including payment information and personal details. This can damage the retailer's reputation and result in a loss of consumer trust. Using Dr. Luitel's machine learning models, retailers can identify potential risks and develop more effective security measures to protect their customers' data.
Dr. Luitel's research offers a valuable contribution to the field of data security, providing a comprehensive and automated approach to identifying and mitigating data breach risks. With the ever-increasing importance of digital data and the rise of remote work, effective data security measures have become more critical than ever. By using machine learning models to analyze data breach risks, organizations can develop targeted security strategies that minimize the risk of data breaches and protect their reputation and bottom line.
Dr. Luitel's research also highlights the importance of adopting a proactive approach to data security. Rather than waiting for a breach to occur, organizations can use machine learning models to predict potential breaches and implement strategies to prevent them. By analyzing patterns and trends in historical data breaches, organizations can identify potential vulnerabilities and take action to address them before cybercriminals exploit them. Moreover, the methodology proposed by Dr. Luitel's research can help organizations comply with data protection regulations. The General Data Protection Regulation (GDPR) in the European Union and the California Consumer Privacy Act (CCPA) in the United States require organizations to implement appropriate measures to protect personal data. Failure to comply with these regulations can result in significant fines and reputational damage.
By using machine learning models to analyze data breach risks, organizations can demonstrate their compliance with these regulations and ensure the protection of their customers' personal data. In addition, Dr. Luitel's research can aid in developing cyber insurance policies. Insurance companies can use the models to assess an organization's data breach risk and develop customized policies that provide appropriate coverage. By using the models to identify potential risks and vulnerabilities, insurance companies can develop policies that provide more comprehensive coverage, thereby reducing their financial risk.
According to security researchers, Dr. Luitel's research contributes to data security. Organizations can develop effective strategies to prevent breaches and minimize their impact by using machine learning models to analyze data breach risks. With the increasing importance of digital data and the growing threat landscape, the need for robust data security measures has never been more critical. The models proposed in Dr. Luitel's research provide a promising approach to addressing these challenges and safeguarding organizations' sensitive data.
See original here:
Machine Learning Models Offer Effective Approach for Analyzing ... - The Ritz Herald
Voice Data: New Machine Learning Smarts are Powering Fast and Feature-Rich Analysis – UC Today
In business, what is said can speak volumes.
Not just the words that are uttered: when, how and by whom are also insights of high value.
When analyzed and understood, they have the power to drive customer satisfaction levels, aid staff training and ensure legal compliance.
Indeed, the capture and retrospective curation of voice data has long been a thing, but software implementation has, to date, taken months and delivered only the most rudimentary intelligence.
However, today's smarter enterprises are now benefitting from altogether more sophisticated solutions that are fast and feature-rich.
Amazon Chime, the all-in-one-place meet, chat and call platform for business, has just transformed voice communications with its Amazon Chime SDK Call Analytics feature, which makes it simpler for communication builders and developers to add that functionality into their app and website workloads.
Users benefit from real-time transcription, call categorization, post-call summary, speaker search, and tone-based sentiment via pre-built integrations with Amazon Transcribe and Amazon Transcribe Call Analytics, and natively through the Amazon Chime SDK voice analytics capability.
Insights can be consumed in both real-time and following completion of a call by accessing a data lake. Users can then use pre-built dashboards in Amazon QuickSight or the data visualization tool of their choice to help interpret information and implement learnings.
"Voice remains a hugely important part of any organization's suite of communication channels and is capable of so much more than simply facilitating a conversation," says Sid Rao, GM of Amazon Chime SDK.
"It generates valuable data which, when processed by call analytics, can contribute greatly to the effectiveness and efficiency of enterprises' processes and workflows."
Machine learning-based call analytics are particularly helpful for companies processing large volumes of call data to monitor customer satisfaction, improve staff training or stay compliant, but implementing such solutions can often take months.
The new call analytics features from Amazon Chime SDK reduce deployment time to a few days.
The insights and call recordings can be used across a variety of use cases such as financial services, insurance, mortgage advisory, expert consultation, and remote troubleshooting for products.
Customers can use the launched feature to improve customer experience, increase efficiency of experts such as wealth management advisors, and reduce compliance costs.
For example, banks can use Amazon Chime SDK call analytics to record and transcribe trader conversations for compliance purposes, generate real-time transcription, and perform speaker attribution using the speaker search feature.
Amazon Chime SDK customer IPC is a leading provider of secure, compliant communications and multi-cloud connectivity solutions for the global financial markets.
Tim Carmody, IPC CTO, said: "In our industry, transcribing and recording trader calls is required for regulatory compliance. With all that recorded call data, machine learning is ideal to monitor calls for compliance and acquire better insights about the trades that are occurring.
"Optional integration of Amazon Chime SDK's call analytics feature into call flows helps our customers' compliance teams to securely monitor and automatically flag trades for non-compliance in real time, as well as gather new trader insights from call data. Working with AWS, IPC was able to execute this quickly: where 12 months prior it would have taken over a week to implement a machine-learning-powered solution like this, Amazon Chime SDK's call analytics was deployed in just a couple of days."
Businesses can also apply voice tone analysis to customer conversations to assess sentiment around products, services, or experiences.
The Chime SDK Insights console can manage integrations with AWS Machine Learning services such as Amazon Transcribe, Amazon Transcribe Call Analytics and Chime SDK voice insights, including speaker search and voice tone analysis.
Speaker search uses machine learning to take a 10-second voice sample from call audio and return a set of closest matches from a database of voiceprints.
Voice tone analysis uses Machine Learning to extract sentiment from a speech signal based on a joint analysis of linguistic information (what was said) as well as tonal information (how it was said).
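As a rough sketch of the matching idea behind speaker search, nearest-match lookup over voiceprint embeddings, and not the Chime SDK implementation itself, the comparison step could look something like this in R; the vectors are hypothetical stand-ins for real voice embeddings.

# Hypothetical voiceprint database: one embedding vector per enrolled speaker.
voiceprints <- list(
  alice = c(0.12, 0.88, 0.45, 0.10),
  bob   = c(0.75, 0.20, 0.15, 0.60),
  carol = c(0.11, 0.80, 0.50, 0.05)
)

cosine <- function(a, b) sum(a * b) / (sqrt(sum(a^2)) * sqrt(sum(b^2)))

# Embedding extracted from a ~10 second call sample (invented for illustration).
sample_embedding <- c(0.10, 0.85, 0.48, 0.08)

scores <- sapply(voiceprints, cosine, b = sample_embedding)
sort(scores, decreasing = TRUE)  # closest matches first

Real systems derive the embeddings with deep speaker models and search much larger databases, but the "return the closest voiceprints" step is conceptually this kind of similarity ranking.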
Real-time alerts can be triggered by events such as poor caller sentiment, or key words spoken during a call.
All in all, it's a powerful tool capable of raising the value of voice data to great new heights.
Now THAT'S what we're talking about!
To learn more about how Amazon Chime SDK can help your business digitize and thrive, visit Amazon Chime SDK.
Read more here:
Voice Data: New Machine Learning Smarts are Powering Fast and Feature-Rich Analysis - UC Today
Machine Learning Prediction of S&P 500 Movements using QDA in R – DataDrivenInvestor
Quadratic Discriminant Analysis is a classification method in statistics and machine learning. It is similar to Linear Discriminant Analysis (LDA), but it assumes that the classes have different covariance matrices, whereas LDA assumes that the classes have the same covariance matrix. If you want to learn more about LDA, here is my previous article where I talk about it.
In QDA, the goal is to find a quadratic decision boundary that separates the classes in a given dataset. This boundary is based on the estimated means and covariance matrices of the classes. Moreover, QDA can be used for both binary and multiclass classification problems. It is often used in situations where the classes have nonlinear boundaries or where the classes have different variances.
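For reference, the standard QDA rule (textbook notation, not specific to this article's example) assigns an observation $x$ to the class $k$ that maximizes the quadratic discriminant score

$$\delta_k(x) = -\tfrac{1}{2}\log\lvert\Sigma_k\rvert \;-\; \tfrac{1}{2}(x-\mu_k)^{\top}\Sigma_k^{-1}(x-\mu_k) \;+\; \log\pi_k,$$

where $\mu_k$, $\Sigma_k$ and $\pi_k$ are the estimated mean vector, covariance matrix and prior probability of class $k$. The class-specific $\Sigma_k$ is what makes the boundary quadratic in $x$; LDA's shared covariance matrix collapses it to a linear one.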
In R, QDA can be performed using the qda() function in the MASS package. We will use it on the Smarket data, part of the ISLR2 library. The syntax is identical to that of lda(). In the context of the Smarket data, the QDA model is being used to predict whether the stock market will go up or down (represented by the Direction variable) based on the percentage returns for the previous two days (represented by the Lag1 and Lag2 variables). The QDA model estimates the covariance matrices for the up and down classes separately and uses them to calculate the probability of each observation belonging to each class. The observation is then assigned to the class with the highest probability.
library(MASS)
library(ISLR2)

train <- (Smarket$Year < 2005)
Smarket.2005 <- Smarket[!train, ]
Direction.2005 <- Smarket$Direction[!train]
We first load the libraries and then split the data into test and training subsets in order to avoid overfitting the model.
Then we fit a QDA model to the training data (subset = train), using the qda function.
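The fitting call itself is not reproduced in this excerpt; assuming the standard qda() syntax the text describes (MASS package, Lag1 and Lag2 as predictors, subset = train), it would look roughly like this:

# Fit QDA on the training years only, using the two lag predictors (sketch).
qda.fit <- qda(Direction ~ Lag1 + Lag2, data = Smarket, subset = train)
qda.fit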
# OUTPUT:
Prior probabilities of groups:
    Down       Up
0.491984 0.508016

Group means:
            Lag1        Lag2
Down  0.04279022  0.03389409
Up   -0.03954635 -0.03132544
We only use the Lag1 and Lag2 variables because they are the ones that seem to have the highest explicative power (we discovered it in a previous article about logistic regression: basically, they are the ones with the smallest p-value). Here is the article if you want to delve deeper into the topic:
The output contains the group means. But it does not contain the coefficients of the linear discriminants, because the QDA classifier involves a quadratic, rather than a linear, function of the predictors.
Next, we make predictions on the test data using the predict function and calculate the confusion matrix and the classification accuracy.
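The prediction and confusion-matrix calls are not shown in this excerpt; based on the description, a sketch of that step, reusing the objects defined earlier, would be:

# Predict on the held-out 2005 data and tabulate predictions against reality (sketch).
qda.pred <- predict(qda.fit, Smarket.2005)
table(qda.pred$class, Direction.2005)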
mean(qda.pred$class == Direction.2005)
# OUTPUT:
# [1] 0.599
The output of the table function shows the confusion matrix, and the output of the mean function shows the classification accuracy.
Interestingly, the QDA predictions are accurate almost 60% of the time, even though the 2005 data was not used to fit the model. This level of accuracy is quite impressive for stock market data, which is known to be quite hard to model accurately. This suggests that the quadratic form assumed by QDA may capture the true relationship more accurately than the linear forms assumed by LDA and logistic regression. However, I would definitely recommend evaluating this method's performance on a larger test set before betting that this approach will consistently beat the market!
We can create a scatterplot with contours to visualize the decision boundaries for the Linear Discriminant Analysis (LDA) and Quadratic Discriminant Analysis (QDA) models on the Smarket data.
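The scatterplot code itself is not reproduced in this excerpt; a minimal sketch of that step, assuming the same Smarket variables and arbitrary color and point choices, might look like:

# Color training points by market direction and plot Lag2 against Lag1 (sketch).
col.direction <- ifelse(Smarket$Direction[train] == "Up", "blue", "red")
plot(Smarket$Lag1[train], Smarket$Lag2[train], col = col.direction,
     pch = 20, xlab = "Lag1", ylab = "Lag2")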
len1 <- 80; len2 <- 80; delta <- 0.1
grid.X1 <- seq(from = min(Smarket$Lag1) - delta, to = max(Smarket$Lag1) + delta, length = len1)
grid.X2 <- seq(from = min(Smarket$Lag2) - delta, to = max(Smarket$Lag2) + delta, length = len2)
dataT <- expand.grid(Lag1 = grid.X1, Lag2 = grid.X2)
lda.pred <- predict(lda.fit, dataT)
zp <- lda.pred$posterior[, 2] - lda.pred$posterior[, 1]
contour(grid.X1, grid.X2, matrix(zp, nrow = len1), levels = 0, las = 1,
        drawlabels = FALSE, lwd = 1.5, add = TRUE, col = "violet")

qda.pred <- predict(qda.fit, dataT)
zp <- qda.pred$posterior[, 2] - qda.pred$posterior[, 1]
contour(grid.X1, grid.X2, matrix(zp, nrow = len1), levels = 0, las = 1,
        drawlabels = FALSE, lwd = 1.5, add = TRUE, col = "brown")
The first two lines of code create a color indicator variable for the Direction variable based on whether it is Up or Down in the training data. The plot function is then used to create a scatterplot of the Lag2 variable against the Lag1 variable, with points colored according to the color indicator variable.
The next four lines of code define a grid of points to be used for generating the contours. The expand.grid function creates a data frame with all possible combinations of Lag1 and Lag2 values within the specified grid range.
The subsequent chunks of code use the predict function to generate the predicted class probabilities for each point in the grid, for both the LDA and QDA models. The contour function is then used to create a contour plot of the decision boundaries for each model, with the levels set to 0 to show the decision boundary between the two classes. The LDA contours are colored violet, while the QDA contours are colored brown.
Thank you for reading the article. If you enjoyed it, please consider following me.
See the rest here:
Machine Learning Prediction of S&P 500 Movements using QDA in R - DataDrivenInvestor
Machine learning model helps forecasters improve confidence in storm prediction – Phys.org
When severe weather is brewing and life-threatening hazards like heavy rain, hail or tornadoes are possible, advance warning and accurate predictions are of utmost importance. Colorado State University weather researchers have given storm forecasters a powerful new tool to improve confidence in their forecasts and potentially save lives.
Over the last several years, Russ Schumacher, professor in the Department of Atmospheric Science and Colorado State Climatologist, has led a team developing a sophisticated machine learning model for advancing skillful prediction of hazardous weather across the continental United States. First trained on historical records of excessive rainfall, the model is now smart enough to make accurate predictions of events like tornadoes and hail four to eight days in advance, the crucial sweet spot for forecasters to get information out to the public so they can prepare. The model is called CSU-MLP, or Colorado State University-Machine Learning Probabilities.
Led by research scientist Aaron Hill, who has worked on refining the model for the last two-plus years, the team recently published their medium-range (four to eight days) forecasting ability in the American Meteorological Society journal Weather and Forecasting.
The researchers have now teamed with forecasters at the national Storm Prediction Center in Norman, Oklahoma, to test the model and refine it based on practical considerations from actual weather forecasters. The tool is not a stand-in for the invaluable skill of human forecasters, but rather provides an agnostic, confidence-boosting measure to help forecasters decide whether to issue public warnings about potential weather.
"Our statistical models can benefit operational forecasters as a guidance product, not as a replacement," Hill said.
Israel Jirak is science and operations officer at the Storm Prediction Center and co-author of the paper. He called the collaboration with the CSU team "a very successful research-to-operations project."
[Photo caption: CSU Ph.D. student Allie Mazurek discusses the CSU-MLP with forecaster Andrew Moore. Credit: Provided/Allie Mazurek]
"They have developed probabilistic machine learning-based severe weather guidance that is statistically reliable and skillful while also being practically useful for forecasters," Jirak said. The forecasters in Oklahoma are using the CSU guidance product daily, particularly when they need to issue medium-range severe weather outlooks.
The model is trained on a very large dataset containing about nine years of detailed historical weather observations over the continental U.S. These data are combined with meteorological retrospective forecasts, which are model "re-forecasts" created from outcomes of past weather events. The CSU researchers pulled the environmental factors from those model forecasts and associated them with past events of severe weather like tornadoes and hail. The result is a model that can run in real time with current weather events and produce a probability of those types of hazards with a four- to eight-day lead time, based on current environmental factors like temperature and wind.
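As a toy illustration of the general idea, a tree-based model mapping environmental predictors to hazard probabilities, and emphatically not the CSU-MLP itself, something like the following could be sketched in R; the data and column names are invented.

library(randomForest)

set.seed(1)
# Invented historical records: environmental factors and whether a severe hazard occurred.
wx <- data.frame(
  temperature = runif(500, 0, 35),
  wind_speed  = runif(500, 0, 30),
  humidity    = runif(500, 20, 100)
)
wx$hazard <- factor(ifelse(wx$temperature > 25 & wx$humidity > 70 & runif(500) > 0.3,
                           "yes", "no"))

# Train an ensemble of decision trees on the historical records.
rf <- randomForest(hazard ~ temperature + wind_speed + humidity, data = wx, ntree = 200)

# Probability of a hazard given a new set of current conditions.
today <- data.frame(temperature = 29, wind_speed = 12, humidity = 85)
predict(rf, today, type = "prob")

The actual CSU-MLP works with far richer reforecast fields and produces calibrated probabilities over a four- to eight-day window, but the core pattern, learn from labeled historical events and then score current conditions, is the same.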
Ph.D. student Allie Mazurek is working on the project and is seeking to understand which atmospheric data inputs are the most important to the model's predictive capabilities. "If we can better decompose how the model is making its predictions, we can hopefully better diagnose why the model's predictions are good or bad during certain weather setups," she said.
Hill and Mazurek are working to make the model not only more accurate, but also more understandable and transparent for the forecasters using it.
For Hill, it's most gratifying to know that years of work refining the machine learning tool are now making a difference in a public, operational setting.
"I love fundamental research. I love understanding new things about our atmosphere. But having a system that is providing improved warnings and improved messaging around the threat of severe weather is extremely rewarding," Hill said.
More information: Aaron J. Hill et al, A New Paradigm for Medium-Range Severe Weather Forecasts: Probabilistic Random Forest-Based Predictions, Weather and Forecasting (2022). DOI: 10.1175/WAF-D-22-0143.1
Go here to read the rest:
Machine learning model helps forecasters improve confidence in storm prediction - Phys.org
Machine Learning Executive Talks Rise, Future of Generative AI – Georgetown University The Hoya
Keegan Hines, a former Georgetown adjunct professor and the current vice president of machine learning at Arthur AI, discussed the rapid rise in generative Artificial Intelligence (AI) programs and Georgetown's potential in adapting to software like ChatGPT.
The Master of Science in Data Science and Analytics program in the Graduate School of Arts & Sciences hosted the talk on March 17. The discussion centered on the rapid development of generative AI over the past six months.
Hines said generative AI has the capacity to radically change people's daily lives, including how students are taught and how entertainment is consumed.
"I definitely think we're going to see a lot of personal tutoring technologies coming up for both little kids and college students," Hines said at the event. "I have a feeling that in the next year, someone will try to make an entirely AI-generated TV show. It's not that hard to imagine an AI-generated script, animation and voice actors."
"Imagine what Netflix becomes. Netflix is no longer 'recommend Keegan the best content'; Netflix is now 'create something from scratch which is the perfect show Keegan's ever wanted to see,'" Hines added.
Hines then discussed algorithms that generate text. He said the principal goal of these algorithms is to create deep learning systems that can understand complex patterns over longer time scales.
Hines said one challenge AI faces is that it can provide users with incorrect information.
"These models say things and sometimes they're just flatly wrong," Hines said. "Google got really panned when they made a product announcement about Bard and then people pointed out Bard had made a mistake."
Bard, Google's AI chatbot, incorrectly answered a question about the James Webb Space Telescope in a video from the program's launch on Feb. 6, raising concerns about Google's rushed rollout of Bard and the possibility for generative AIs to spread misinformation.
Hines said the potential for bias and toxicity in AI is present, as seen with Microsoft's ChatGPT-powered Bing search engine, which manufactured a conspiracy theory relating Tom Hanks to the Watergate scandal.
"There's been a lot of research in AI alignment," Hines said. "How do we make these systems communicate the values we have?"
Teaching and learning at all levels of education will need to adapt to changes in technology, according to Hines.
"One example is a high school history teacher who told students to have ChatGPT write a paper and then correct it themselves," Hines said. "I think this is just the next iteration of open book, internet, ChatGPT. How do you get creative testing someone's critical thinking on the material?"
Hines said OpenAI, the company behind ChatGPT, noticed that larger, more complex language models were more accurate than smaller models due to lower levels of test loss, or errors made during training.
"A small model has a high test loss whereas a really big model has a much more impressive test loss," Hines said. "The big model also requires less data to reach an equivalent amount of test loss."
OpenAI's hypothesis was that the secret to unlocking rapid advancement in artificial intelligence lies in creating the largest model possible, according to Hines.
"There didn't seem to be an end to this trend," Hines said. "Their big hypothesis was, let's just go crazy and train the biggest model we can think of and keep going. Their big bet paid off and these strange, emergent, semi-intelligent behaviors are happening along the way."
Hines said he is optimistic about the field's future, and he predicted AI will be able to produce even more complex results, such as creating a TV show. "It was really only about ten years ago that deep learning was proven to be viable," Hines said. "If we're going to avoid the dystopian path and go down the optimistic path, generative AI will be an assistant. It will get you 80% of the way and you do the next 20%."
See more here:
Machine Learning Executive Talks Rise, Future of Generative AI - Georgetown University The Hoya