Category Archives: Machine Learning
The Role of AI and Machine Learning in Cybersecurity – Analytics Insight
AI and machine learning are the kind of buzzwords that generate a lot of interest; hence, they get thrown around all the time. But what do they actually mean? And are they as instrumental to the future of cybersecurity as many believe?
When a large set of data is involved, having to analyze it all by hand seems like a nightmare. It's the kind of work that one would describe as boring and tedious. Not to mention that it would take a lot of staring at the screen to find what you've set out to discover.
The great thing about machines and technology is that, unlike humans, they never get tired. They are also better suited to noticing patterns. Machine learning is what you get when you teach your tools to spot those patterns. AI helps interpret the results and makes the solution self-sufficient.
Cybersecurity solutions (antivirus scanners in particular) are all about spotting a pattern and planning the right response. These scanners rely on heuristic modeling, which gives them the ability to recognize a piece of code as malicious even if no one has flagged it as such before. In essence, it comes down to teaching the software to recognize when something is out of the ordinary and alert you.
As soon as something oversteps the threshold of tolerance, it triggers an alarm. From there, the response can be left to the user or automated: for instance, the antivirus software can move the infected file to quarantine with or without human intervention.
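To make the idea concrete, here is a minimal sketch of threshold-based heuristic scoring in Python. The traits, weights and threshold are invented for illustration and do not reflect any particular vendor's engine:

```python
# Minimal sketch of heuristic, threshold-based scoring.
# The features, weights and threshold are illustrative only.
SUSPICIOUS_TRAITS = {
    "writes_to_system_folder": 0.4,
    "disables_antivirus": 0.8,
    "contacts_unknown_server": 0.5,
    "encrypts_user_files": 0.9,
}
ALERT_THRESHOLD = 1.0  # the "threshold of tolerance"; above this we raise an alarm

def score_file(observed_traits: set) -> float:
    """Sum the weights of every suspicious trait observed in the file."""
    return sum(SUSPICIOUS_TRAITS.get(trait, 0.0) for trait in observed_traits)

def handle_file(path: str, observed_traits: set, auto_quarantine: bool = False) -> str:
    score = score_file(observed_traits)
    if score < ALERT_THRESHOLD:
        return f"{path}: OK (score {score:.2f})"
    if auto_quarantine:
        return f"{path}: QUARANTINED automatically (score {score:.2f})"
    return f"{path}: ALERT raised, awaiting user decision (score {score:.2f})"

print(handle_file("invoice.exe", {"encrypts_user_files", "contacts_unknown_server"}))
```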
Applying AI to cybersecurity solutions takes things up a notch. Without it, the software would not be able to learn on its own by observing.
Imagine having an entity working in the background that knows you so well it can predict your every move. It might pick up on slight nuances: the way you move your mouse, the parts of the web you browse frequently, even the order of the applications you launch after logging in.
Without having to introduce yourself, the AI would get to know you and your habits pretty well. Thus, it would form a digital fingerprint of you. It sounds scary, but it could come in handy. For instance, it could raise the alarm if an unauthorized individual ever gets access to your PC.
Of course, observing your behavior is not all that AI and machine learning can do. Why not do the same thing for computer processes?
Imagine having to monitor the programs running in the background yourself, tracking how many resources they consume all day, every day, by hand. It doesn't sound enjoyable, does it? But it's exactly the work AI excels at.
Without lifting a finger, you'd have a powerful watchdog that starts barking as soon as something is out of the ordinary. For instance, it could alert you about malicious operating system behavior, so you would know right away about crypto-mining malware or other threats affecting your computer.
Smart malware designers make sure your system's CPU usage goes off the charts only when you're not using the PC. There's no way to spot that while you're away from the keyboard, unless you have an AI-powered cybersecurity solution tracking it all for you 24/7.
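A rough sketch of such a watchdog is shown below, assuming the third-party psutil package for CPU readings; the is_user_active() check is a placeholder you would wire to real input or session monitoring:

```python
# Sketch of a watchdog that flags high CPU usage while the user is away.
# Assumes the third-party psutil package; is_user_active() is a placeholder.
import psutil

CPU_ALERT_PERCENT = 80.0   # illustrative threshold
SAMPLES_BEFORE_ALERT = 5   # require a sustained spike, not a single blip

def is_user_active() -> bool:
    """Placeholder: replace with real keyboard/mouse or session activity checks."""
    return False

def watch(poll_seconds: float = 10.0) -> None:
    consecutive_high = 0
    while True:
        usage = psutil.cpu_percent(interval=poll_seconds)  # blocks for poll_seconds
        if not is_user_active() and usage >= CPU_ALERT_PERCENT:
            consecutive_high += 1
            if consecutive_high >= SAMPLES_BEFORE_ALERT:
                print(f"ALERT: sustained {usage:.0f}% CPU while idle - possible cryptominer")
                consecutive_high = 0
        else:
            consecutive_high = 0

if __name__ == "__main__":
    watch()
```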
Webmasters keep trying to fend off bot traffic and automated scripts, which are used for data scraping and similar activities. For instance, someone could write a script to harvest every bit of contact detail on a website and then send unsolicited offers to all those contacts. Even when bots don't scrape contacts, no one wants their traffic, because it consumes valuable server resources and slows everything down for legitimate visitors, harming the user experience.
The simple solution is to block a range of IP addresses. But by using a VPN server or a proxy, a script can get around that obstacle. Now let's introduce some AI into the equation. By observing every browser's activity, it can recognize repetitive behavior, associate it with the IP address currently browsing, and flag it. Sure, a script may discard an IP address and try a new one, but the fingerprint left by its activity remains, since the detection is pattern-based. In the end, the new IP gets flagged much faster thanks to automated observation.
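As a hedged illustration of pattern-based fingerprinting that survives IP rotation, the sketch below keys request history on behavioural traits rather than the address; the request-record fields and the rate limit are hypothetical:

```python
# Sketch: fingerprint scraping behaviour by request pattern, not by IP address.
# The request-record fields and the rate limit are hypothetical.
from collections import defaultdict
from hashlib import sha256

REQUESTS_PER_MINUTE_LIMIT = 60

def behaviour_fingerprint(req: dict) -> str:
    """Hash traits that survive an IP change: user agent, path shape, timing bucket."""
    traits = (
        req["user_agent"],
        req["path"].rstrip("0123456789"),                   # /contacts/123 -> /contacts/
        str(int(req["seconds_since_last_request"]) // 5),   # 5-second timing bucket
    )
    return sha256("|".join(traits).encode()).hexdigest()

request_counts = defaultdict(int)
flagged = set()

def observe(req: dict) -> None:
    fp = behaviour_fingerprint(req)
    request_counts[fp] += 1
    if request_counts[fp] > REQUESTS_PER_MINUTE_LIMIT:
        flagged.add(fp)
        print(f"Flagging behaviour {fp[:12]}... currently using IP {req['ip']}")
```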
Since their inception, AI and machine learning have changed the world of cybersecurity forever. As time goes on, they will keep getting more refined. It is only a question of when they will become your personal cybersecurity watchdog, tailored to your needs.
Why Unsupervised Machine Learning is the Future of Cybersecurity – TechNative
Not all Artificial Intelligence is created equal
As we move towards a future where we lean on cybersecurity much more in our daily lives, it's important to be aware of the differences between the types of AI being used for network security.
Over the last decade, Machine Learning has made huge progress in technology with Supervised and Reinforcement learning, in everything from photo recognition to self-driving cars.
However, Supervised Learning is limited in network security tasks such as threat detection because it only looks for specific patterns it has already seen and labeled, whereas Unsupervised Learning constantly searches the network for anomalies.
Machine Learning comes in a few forms: Supervised, Reinforcement, Unsupervised and Semi-Supervised (also known as Active Learning).
Supervised Learning relies on a process of labeling in order to understand information.
The machine learns from large amounts of labeled data and can recognize something only after someone, most likely a security professional, has already labeled it; it cannot do so on its own.
This is beneficial only when you know exactly what you're looking for, which is rarely the case in cybersecurity. Most often, hackers use a method of attack that the security program has not seen before, in which case a supervised system would be useless.
This is where Unsupervised Learning comes in. Unsupervised Learning draws inferences from datasets without labels. It is best used when you want to find patterns but don't know exactly what you're looking for.
This makes it useful in cybersecurity, where attackers are always changing their methods. It is not looking for a specific label; rather, any pattern that deviates from the norm is flagged as potentially dangerous, which is a much better approach when the attacker keeps changing form.
Unsupervised Learning will first create a baseline for your network that shows what everything should look like on a regular day. This way, if some file transfer breaks the pattern of regular behavior by being too large or sent at an odd time, it will be flagged as possibly dangerous by the Unsupervised system.
A Supervised Learning program will miss an attack it has never seen before because that activity has not yet been labeled as dangerous, whereas with Unsupervised Learning the program only has to know that the action is abnormal in order to flag it as a potential threat.
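Here is a minimal sketch of this baseline-then-flag approach, using scikit-learn's IsolationForest on invented network-flow features; real products use far richer models and data:

```python
# Minimal sketch of unsupervised anomaly detection on network-flow features.
# Feature values are invented; IsolationForest stands in for a production model.
import numpy as np
from sklearn.ensemble import IsolationForest

# Columns: bytes transferred, hour of day, distinct destinations contacted.
baseline_traffic = np.array([
    [1_200,  9, 3], [2_500, 10, 4], [1_800, 11, 3],
    [2_100, 14, 5], [1_600, 15, 4], [2_900, 16, 6],
])

model = IsolationForest(contamination=0.05, random_state=0).fit(baseline_traffic)

new_events = np.array([
    [2_000, 10, 4],     # looks like a normal workday transfer
    [500_000, 3, 40],   # huge transfer at 3 a.m. to many destinations
])
for event, label in zip(new_events, model.predict(new_events)):
    status = "ANOMALY - flag for review" if label == -1 else "normal"
    print(event, status)
```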
There are two types of Unsupervised Learning models: discriminative and generative. A discriminative model can only tell you that, given input X, the outcome is Y, whereas a generative model can tell you the total probability of seeing X and Y together.
So the difference is as follows: the discriminative model assigns labels to inputs it has already seen and has no real predictive capability. Give it a different X that it has never seen before and it can't tell you what Y is going to be, because it simply hasn't learned that. With a generative model, once you set it up and establish the baseline, you can give it any input and ask for an answer. It therefore has predictive ability; for example, it can generate a possible network behavior that has never been seen before.
So let's say a person sends a 30-megabyte file at noon: what is the probability that they would do that? If you asked a discriminative model whether this is normal, it would check whether the person had ever sent such a file at noon before, and only at noon. A generative model would look at the context of the situation, checking whether they had ever sent a similar file at 11:59 a.m. or 12:30 p.m. as well, and base its conclusion on the surrounding circumstances to make a more accurate prediction.
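The sketch below illustrates the generative idea with a kernel density estimate over (hour of day, file size); the historical transfers are invented and this is not MixMode's actual model:

```python
# Sketch of the generative idea: estimate the joint density of
# (hour of day, file size in MB) from past transfers, then score a new one.
# The historical data here is invented.
import numpy as np
from sklearn.neighbors import KernelDensity

past_transfers = np.array([
    [9.0, 2.0], [9.5, 5.0], [11.9, 28.0], [12.5, 31.0],
    [14.0, 3.0], [16.0, 10.0], [17.5, 1.0],
])

kde = KernelDensity(bandwidth=1.5).fit(past_transfers)

candidate = np.array([[12.0, 30.0]])  # 30 MB sent at noon
log_density = kde.score_samples(candidate)[0]
print(f"log-density of a 30 MB transfer at noon: {log_density:.2f}")
# A discriminative lookup would ask only "has exactly this happened before?";
# the density model also gives credit for similar transfers at 11:59 or 12:30.
```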
The artificial intelligence we are using at MixMode falls into the class of generative models in Unsupervised Learning, which is what gives it this predictive ability. It collects data to form a baseline of the network and can predict what will happen over time because it knows what each day of the week looks like for the network.
If anything strays from this baseline, the platform alerts the security team overseeing it that an irregularity has been detected in network behavior that should be adhering to the baseline.
For example, it collects data as it goes and, in effect, says: "I know what's going to happen on Monday at 9: people will come in and network volume will grow; at noon they'll go to lunch and the network level will drop a bit; then they'll keep working until six, go home, and the network level will fall to where it sits overnight."
Because of its predictive power, the generative Unsupervised Learning model is capable of helping prevent zero-day attacks, which makes it the strongest security method out there, with the fastest response time to any breach.
Semi-Supervised or Active Learning takes the best of both unsupervised and supervised learning and puts them together in order to make predictions on how a network should behave.
Active learning starts with unsupervised learning, looking for any patterns on a network that deviate from the norm; once it finds one, it can label it as a threat, which is the supervised learning portion.
An active learning platform will be extremely useful because not only is it constantly scanning for any deviations on the network, but it is also constantly labeling and adding metadata to the abnormalities it does find which makes it a very strong detection and response system.
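A toy sketch of that loop follows, with an unsupervised detector surfacing candidates and a stand-in "analyst" supplying labels for a supervised classifier; the data and the labelling rule are placeholders:

```python
# Toy sketch of the semi-supervised (active learning) loop described above:
# an unsupervised detector surfaces unusual events, an analyst labels them,
# and a supervised classifier learns from the accumulated labels.
# The data and the ask_analyst() rule are placeholders.
import numpy as np
from sklearn.ensemble import IsolationForest, RandomForestClassifier

detector = IsolationForest(contamination=0.25, random_state=0)
classifier = RandomForestClassifier(random_state=0)
labelled_events, labels = [], []

def ask_analyst(event: np.ndarray) -> int:
    """Placeholder for a human decision: 1 = threat, 0 = benign."""
    return int(event[0] > 100_000)  # toy rule standing in for the analyst

def process_batch(events: np.ndarray) -> None:
    detector.fit(events)
    for event, verdict in zip(events, detector.predict(events)):
        if verdict == -1:                      # unsupervised step: "this looks odd"
            labelled_events.append(event)
            labels.append(ask_analyst(event))  # supervised step: record the label
    if len(set(labels)) > 1:                   # train once both classes are present
        classifier.fit(np.array(labelled_events), np.array(labels))

# Columns: bytes transferred, destinations contacted (invented values)
process_batch(np.array([[1_200, 3], [2_000, 4], [500_000, 40], [900, 2]]))
print(f"{len(labelled_events)} flagged event(s) labelled so far")
```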
JP Morgan expands dive into machine learning with new London research centre – The TRADE News
JP Morgan is expanding its foray into machine learning and artificial intelligence (AI) with the launch of a new London-based research centre, as it explores how it can use the technology for new trading solutions.
The US investment bank has recently launched a Machine Learning Centre of Excellence (ML CoE) in London and has hired Chak Wong who will be responsible for overseeing a new team of machine learning engineers, technologists, data engineers and product managers.
Wong was most recently a professor at the Hong Kong University of Science and Technology, where he taught Masters and PhD level courses on AI and derivatives. He was also a senior quant trader at Morgan Stanley and Goldman Sachs in London.
According to JP Morgan's website, the ML CoE teams "partner across the firm to create and share Machine Learning Solutions for our most challenging business problems." The bank hopes the expansion of the machine learning centre to Europe will accelerate the deployment of the technology in regions outside of the US.
JP Morgan will look to build on the success of a similar New York-based centre it launched in 2018 under the leadership of Samik Chandarana, head of corporate and investment banking applied AI and machine learning.
These ventures include the application of the technology to provide an optimised execution tool in FX algo trading, and the development of Robotrader as a tool to automate pricing and hedging of vanilla equity options, using machine learning.
In November last year, JP Morgan also made a strategic investment in FinTech firm Limeglass, which deploys AI, machine learning and natural language processing (NLP) to analyse institutional research.
AI and machine learning technology has been touted to revolutionise quantitative and algorithmic trading techniques. Many believe its ability to quantify and analyse huge amounts of data will enable them to make more informed investment decisions. In addition, as data sets become more complex, trading strategies are increasingly being built around new machine and deep learning tools.
Speaking at the Gaining the Edge Hedge Fund Leadership conference in New York last year, representatives from the hedge fund and allocator industry discussed the significant impact the technology will have on investment strategies and processes.
"AI and machine learning is going to raise the bar across everything. Those that are not paying attention to it now will fall behind," said one panellist from a $6 billion alternative investment manager, speaking under the Chatham House Rule.
Google shows off far-flung A.I. research projects as calls for regulation mount – CNBC
Artificial intelligence and machine learning are crucial to Google and its parent company Alphabet. Recently promoted Alphabet CEO Sundar Pichai has been talking about an "AI-first world" since 2016, and the company uses the technology across many of its businesses, from search advertising to self-driving cars.
But regulators are expressing concern about the growing power and lack of understanding about how AI works and what it can do. The European Union is exploring new AI regulation, including a possible temporary ban on the use of facial recognition in public, and New York Rep. Carolyn Maloney, who chairs the House Oversight and Reform Committee, recently suggested that AI regulation could be on the way in the U.S., too. Pichai recently called for "clear-eyed" AI regulation amid a rise in fake videos and abuse of facial recognition technology.
Against this backdrop, the company held an event Tuesday to showcase the positive side of AI by showing some of the long-term projects the company is working on.
"Right now, one of the problems in machine learning is we tend to tackle each problem separately," said Jeff Dean, head of Google AI, at Google's San Francisco offices Tuesday. "These long arcs of research are really important to pick fundamental important problems and continue to make progress on them."
While most of Google's projects are still years out from broad use, Dean said they are important in moving Google products along.
Here's a sampling of some of the company's more speculative and long-term AI projects:
Google's D'Kitty is a four-legged robot that the company says learned to walk on its own by studying locomotion and using machine learning techniques. Dean said he hopes Google's research and development findings will contribute to machines learning how physical hardware can function in "the real world."
Using braided electronics in soft materials, Google's artificial intelligence technology can connect gestures with media controls. One prototype showed sweatshirt drawstrings that could be twisted to adjust music volume. The user could pinch the drawstrings to play or pause connected music.
A new transcription feature in Google Translate will convert speech to written transcript and will be available on Android phones at some point in the future. Natural language processing, which is a subset of artificial intelligence, is "of particular interest" to the company, Dean said.
Google Translate currently supports 59 languages.
Google Health announced new research Tuesday, showing that when the company's AI is applied to retinal scans, it can help determine if a patient is anemic. It can also detect diabetic eye diseases and glaucoma, Dean said. The company hopes to analyze other diseases in the future.
Google is using sensing tools to track underwater sea life. Using sound detection and artificial intelligence, the company said it can now detect orcas in real time and send messages to harbor managers to help them protect the endangered species.
Google announced Tuesday that it's teaming up with Fisheries and Oceans Canada (DFO) and Rainforest Connection to track critically endangered Southern Resident killer whales in Canada. The company is also in the early stages of working with the Monterey Bay Aquarium to help detect species in the nearby ocean.
Google's working on a project called MediaPipe, which analyzes video of bodily movements including hand tracking. Dean said the company hopes to read and analyze sign language.
"Video is the next logical frontier for a lot of this work" Dean said.
This tech firm used AI & machine learning to predict Coronavirus outbreak; warned people about danger zones – Economic Times
A couple of weeks after the coronavirus outbreak began, the disease has grown into a full-blown epidemic. According to official Chinese statistics, more than 130 people have died from the mysterious virus.
Contagious diseases may be diagnosed by men and women in face masks and lab coats, but the warning signs of an epidemic can be detected by computer programmers sitting thousands of miles away. Around the tenth of January, news of a flu outbreak in China's Hubei province started making its way into mainstream media. It then spread to other parts of the country and, subsequently, overseas.
But the first to warn of an impending biohazard was BlueDot, a Canadian firm that specializes in infectious disease surveillance. It predicted an outbreak of coronavirus on December 31, using an artificial intelligence-powered system that combs through animal and plant disease networks, news reports on vernacular websites, government documents, and other online sources to warn its clients against traveling to danger zones like Wuhan, well before foreign governments started issuing travel advisories.
BlueDot further used global airline ticketing data to correctly predict that the virus would spread to Seoul, Bangkok, Taipei, and Tokyo. Machine learning and natural language processing techniques were employed to create models that process large amounts of data in real time, including airline ticketing data, news reports in 65 languages, and animal and plant disease networks.
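As a very loose illustration of the idea, not BlueDot's actual system, the toy sketch below combines a news-mention signal for an outbreak city with airline ticketing volumes to rank likely destinations; every number is invented:

```python
# Toy sketch: combine a news-mention signal for an outbreak city with airline
# connectivity to rank where it may spread next. All numbers are invented;
# this is not BlueDot's actual model.
news_mentions = {"Wuhan": 120}          # disease-related reports scraped per city
outbound_passengers = {                  # ticketing volume from the outbreak city
    ("Wuhan", "Bangkok"): 50_000,
    ("Wuhan", "Seoul"):   35_000,
    ("Wuhan", "Taipei"):  30_000,
    ("Wuhan", "Tokyo"):   28_000,
    ("Wuhan", "Oslo"):     1_000,
}

def spread_risk(origin: str) -> list:
    signal = news_mentions.get(origin, 0)
    total = sum(v for (o, _), v in outbound_passengers.items() if o == origin)
    ranked = [
        (dest, signal * volume / total)
        for (o, dest), volume in outbound_passengers.items() if o == origin
    ]
    return sorted(ranked, key=lambda pair: pair[1], reverse=True)

for city, score in spread_risk("Wuhan"):
    print(f"{city}: risk score {score:.1f}")
```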
"We know that governments may not be relied upon to provide information in a timely fashion. We can pick up news of possible outbreaks, little murmurs or forums or blogs of indications of some kind of unusual events going on," Kamran Khan, founder and CEO of BlueDot, told a news magazine.
The death toll from the Coronavirus rose to 81 in China, with thousands of new cases registered each day. The government has extended the Lunar New Year holiday by three days to restrict the movement of people across the country, and thereby lower the chances of more people contracting the respiratory disease.
However, a lockdown of the affected area could itself be detrimental to public health, putting the local population at risk as medical supplies dwindle and causing considerable anger and resentment.
UoB uses machine learning and drone technology in wildlife conservation – Education Technology
The university's new innovations could transform wildlife conservation projects around the globe
The University of Bristol (UoB) has partnered with Bristol Zoological Society (BZS) to develop a trailblazing approach to wildlife conservation, harnessing the power of machine learning and drone technology to transform wildlife conservation around the world.
Backed by the Cabot Institute for the Environment, BZS and EPSRC's CASCADE grant, a team of researchers travelled to Cameroon in December last year to test a number of drones, sensor technologies and deployment techniques to monitor the critically endangered Kordofan giraffe populations in Bénoué National Park.
"There has been a significant and drastic decline recently of larger mammals in the park and it is vital that accurate measurements of populations can be established to guide our conservation actions," said Dr Gráinne McCabe, head of field conservation and science at BZS.
"Bénoué National Park is very difficult to patrol on foot and large parts are virtually inaccessible, presenting a huge challenge for wildlife monitoring. What's more, the giraffe are very well camouflaged and often found in small, transient groups," said Dr Caspian Johnson, conservation science lecturer at BZS.
Striving to uncover the best method for airborne wildlife monitoring, BZS reached out to Dr Matt Watson from the UoB's School of Earth Sciences and Dr Tom Richardson from the University's Aerospace Department, as well as a member of the Bristol Robotics Laboratory (BRL). The team drew on earlier successful collaborations using drones to monitor and measure volcanic emissions to create a system for wildlife monitoring.
"A machine learning based system that we develop for the Kordofan giraffe will be applicable to a range of large mammals. Combine that with low-cost aircraft systems capable of automated deployment, without the need for large open spaces to launch and land, and we will be able to make a real difference to conservation projects worldwide," said Dr Watson.
AI Is Coming to a Grocery Store Near You – Built In
For the consumer packaged goods industry (CPG for short), the Super Bowl presents both an opportunity and a challenge. The National Retail Federation estimates that almost 194 million Americans will watch Super Bowl LIV. The report claims that each one will spend an average of $88.65 on food, drinks, merchandise and party supplies. Really.
To secure valuable shopping cart space, big food and beverage brands like PepsiCo, Anheuser-Busch InBev and Tyson Foods pull out all the stops, offering promotions on soda, beer and hot dogs designed to be so tempting that they stop consumers in their tracks. Once a promo is set, brands need to ensure they have the right amount of product in the right places. It's a process known as demand forecasting, in which historical sales data helps estimate consumer demand. Getting that forecast right can make or break the success of a campaign.
Demand forecasts play an important role in a CPG brand's day-to-day operations, but they take on special significance during events like the Super Bowl, where billions of dollars are at stake. If a forecast underestimates demand, brands cede sales to competitors with readily available products. Companies that overestimate demand run the risk of overstocking the wrong store shelves or watching inventory expire in distribution centers.
Increasingly, the brands that come out on top are victorious because of technology. At Kraft Heinz, for example, machine learning models do much of the heavy lifting to generate accurate demand forecasts for major events like the Super Bowl.
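Here is a minimal sketch of what event-aware demand forecasting can look like, using gradient boosting on invented sales features; this is not Kraft Heinz's model or data:

```python
# Minimal sketch of event-aware demand forecasting with gradient boosting.
# The feature columns and sales figures are invented.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

# Columns: week of year, on promotion (0/1), Super Bowl week (0/1), last year's units
X_train = np.array([
    [3, 0, 0, 10_000], [4, 1, 0, 10_500], [5, 1, 1, 11_000],
    [6, 0, 0, 10_200], [5, 0, 1, 10_800], [4, 0, 0,  9_900],
])
y_train = np.array([10_400, 13_200, 19_500, 10_100, 15_800, 10_000])  # units sold

model = GradientBoostingRegressor(random_state=0).fit(X_train, y_train)

super_bowl_week = np.array([[5, 1, 1, 11_300]])
print(f"Forecast demand: {model.predict(super_bowl_week)[0]:,.0f} units")
```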
"What you got probably five, seven years ago were a lot of the consulting firms pitching you on what AI can do," said Brian Pivar, senior director of data and analytics at Kraft Heinz. "Now, you're seeing companies build these things out internally; they see the value."
For the world's biggest food and beverage brands, growth means mergers and acquisitions, with big brands often buying smaller competitors that have cornered the market on emerging trends. Acquiring a startup food brand isn't easy, but it's much less complex than merging two multinationals that manage critical sales, supply chain and manufacturing processes using customized software platforms.
That's the world Pivar stepped into when he arrived at Kraft Heinz in late 2018, three years after the merger of Kraft Foods and H.J. Heinz created the world's fifth-largest CPG brand. In the years since, the company has doubled down on artificial intelligence technologies, including machine learning. But Kraft Heinz, like many other companies in its space, is still playing catch-up.
Even when companies like Kraft Heinz want to move full-steam ahead and incorporate the latest tech into their operations, they still face challenges. Chief among them is the ability to implement technical builds successfully.
"CPG companies don't always have strong data foundations," said Pivar. "So what you see sometimes is a data scientist spending most of their time getting and properly structuring data. Let's say 80 percent of their time is spent doing that and 20 percent is spent building ML or AI tools, instead of the reverse, which is what you want to see."
When Pivar came to Kraft Heinz, his first order of business was to develop a five-year strategy that gave leadership visibility into both his team's goals and their roadmap. Instead of hiring a crew of data scientists right off the bat, Pivar brought on data engineers to ensure that his team had the necessary foundation to build advanced analytics. The company also spent four months evaluating and testing cloud partners to find the perfect fit.
In the two years since Pivar joined Kraft Heinz, his team has built machine learning models that generate more accurate demand forecasts around major events with distinctive promotions, like the Super Bowl. Prior to Pivar's arrival, these forecasts were generated manually in spreadsheets like Excel.
His team also built a tool that relies on recent sales data and inventory numbers at stores and distribution centers to predict when supermarkets will need to be resupplied and with what products, along with insights about the cause of low stock.
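One simple way to express that resupply logic is a days-of-cover rule, sketched below with illustrative store numbers; the real tool is more sophisticated:

```python
# Sketch of a simple resupply signal: days of cover = inventory / forecast
# daily sales; reorder when cover drops below lead time plus a safety buffer.
# Store names and numbers are illustrative.
def needs_resupply(on_hand_units: float, forecast_daily_sales: float,
                   lead_time_days: float, safety_days: float = 2.0) -> bool:
    days_of_cover = on_hand_units / max(forecast_daily_sales, 1e-9)
    return days_of_cover < lead_time_days + safety_days

stores = {
    "Store 114": {"on_hand": 480, "daily_sales": 95, "lead_time": 3},
    "Store 207": {"on_hand": 900, "daily_sales": 60, "lead_time": 3},
}
for name, s in stores.items():
    flag = needs_resupply(s["on_hand"], s["daily_sales"], s["lead_time"])
    print(name, "-> resupply" if flag else "-> OK")
```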
"We're looking across the business, from sales to our operations and supply chain teams," said Pivar. "Within all of those spaces, there's a lot of opportunity to leverage AI to help us make better and smarter decisions."
Kraft Heinz isn't the only big player in the CPG space that's incorporating AI across its business. Frito-Lay, a PepsiCo subsidiary, is working on a project that uses computer vision and a custom algorithm to optimize the potato-peeling process. Beer giant AB InBev uses machine learning to ensure compliance and fight fraud. And Tyson Foods is considering the viability of using AI-powered drones to monitor animal health and safety.
Even grocery stores are getting in on the action. Walmart has built a 50,000-square-foot store in Levittown, New York, filled with artificial intelligence technology.
Walmart's Intelligent Retail Lab, or IRL, is both a technology testbed and a fully functioning store covering 50,000 square feet. The store is filled with sensors and cameras and can automatically alert store associates when a product is out of stock, shopping carts need collecting or more registers are needed to quell long lines. There's enough cable in IRL to scale Mt. Everest five times, and the store has enough computing power to download 27,000 hours of music per second.
CPG brands are still figuring out how best to leverage artificial intelligence, which means that, at least in the short term, the shopping experience might not change drastically. But that doesn't mean consumers won't be driving change, at least according to Shastri Mahadeo, founder and CEO of Unioncrate.
Unioncrate is a New York-based startup whose AI-powered supply chain-planning platform generates demand forecasts based on consumer activity and the factors that impact purchasing decisions. For Mahadeo, AI has the potential to both save brands money and reduce waste by aligning production decisions with consumer demand.
"If a brand can accurately predict what a retailer is going to order based on what consumers are going to buy, then they'll produce what's needed so they don't have money tied up in working capital," said Mahadeo. "Similarly, if a retailer can accurately predict what consumers will buy, then they can stock accordingly."
In addition to streamlining back-end processes related to manufacturing and supply chain management, Pivar said that within 10 years, AI could be used to create a more personalized shopping experience, one where brands customize promotions to consumers with the same focus seen on platforms like Instagram or Facebook.
"What does that mean to CPG?" asked Pivar. "We're still figuring that out, but that's where I see things going."
Blue Prism Adds Conversational AI, Automated Machine Learning and Integration with Citrix to its Digital Workforce – what’s up
LONDON and AUSTIN, Texas, Jan. 29, 2020 /PRNewswire/ -- Looking to empower enterprises with the latest and most innovative intelligent automation solutions, Blue Prism (AIM: PRSM) today announced the addition of DataRobot, ServisBOT and Ultima to its Technology Alliance Program (TAP) as affiliate partners. These partners extend Blue Prism's reach by making their software accessible to customers via Blue Prism's Digital Exchange (DX), an intelligent automation "app store" and online community.
Blue Prism's DX is unique in that every week new intelligent automation capabilities get added to the forum, which has resulted in tens of thousands of assets being downloaded, making it the ideal online community for augmenting and extending traditional RPA deployments. The latest capabilities on the DX include dealing with conversational AI (working with chatbots), adding automated machine learning, as well as new integrations with Citrix. With just a few clicks, users can drag and drop these new capabilities into Blue Prism's Digital Workforce, no coding required.
"Blue Prism's vision of providing a Digital Workforce for Every Enterprise is extended with our DX community, which continues to push the boundaries of intelligent automation," says Linda Dotts, SVP Global Partner Strategy and Programs for Blue Prism. "Our DX ecosystem is the catalyst and cornerstone for driving broader innovations with our Digital Workforce. It provides everyone with an a la carte menu of automation options that are drag and drop easy to use."
Below is a quick summary of the new capabilities being brought to market by these TAP affiliate partners:
DataRobot: The integration of DataRobot with Blue Prism provides enterprises with the intelligent automation needed to transform business processes at scale. By combining RPA with AI, the integration automates data-driven predictions and decisions to improve the customer experience, as well as process efficiencies and accuracy. The resulting business process improvements help move the bottom line for businesses by removing repetitive, replicable, and routine tasks for knowledge workers so they can focus on more strategic work.
"The powerful combination of RPA with AI what we call intelligent process automation unlocks tremendous value for enterprises who are looking to operationalize AI projects and solve real business problems," says Michael Setticasi, VP of Alliances at DataRobot. "Our partnership with Blue Prism will extend our ability to deliver intelligent process automation to more customers and drive additional value to global enterprises."
ServisBOT: ServisBOT offers the integration of an insurance-focused chatbot solution with Blue Prism's Robotic Process Automation (RPA), enabling customers to file an insurance claim with their provider using the convenience and 24/7 availability of a chatbot. This integration with ServisBOT's natural language technology adds a claims chatbot skill to the Blue Prism platform, helping insurance companies increase efficiencies and reduce costs across the complete claims management journey and within a Blue Prism defined workflow.
"Together we are providing greater efficiencies in managing insurance claims through chatbots combined with AI-powered automation," says Cathal McGloin, CEO of ServisBOT. "This drives down operational costs while elevating a positive customer experience through faster claims resolution times and reduced friction across all customer interactions."
Ultima: The integration of Ultima IA-Connect with Blue Prism enables fast, secure automation of business processes over Citrix Cloud and Citrix virtual apps and desktops sessions (formerly known as XenApp and XenDesktop). The new IA-Connect tool allows users to automate processes across Citrix ICA or Microsoft RDP virtual channels, without needing to resort to screen scraping or surface automation.
"We know customers who decided not to automate because they were nervous about using cloud-based RPA or because running automations over Citrix was simply too painful," says Scott Dodds, CEO of Ultima. "We've addressed these concerns, with IA-Connect now available on the DX. It gives users the ability to automate their business processes faster while helping reduce overall maintenance and support costs."
Joining the TAP is easier than ever with a new self-serve function on the Digital Exchange itself. To find out more, please visit: https://digitalexchange.blueprism.com/site/global/partner/index.gsp
About Blue Prism
Blue Prism's vision is to provide a Digital Workforce for Every Enterprise. The company's purpose is to unleash the collaborative potential of humans, operating in harmony with a Digital Workforce, so every enterprise can exceed their business goals and drive meaningful growth, with unmatched speed and agility.
Fortune 500 and public-sector organizations, among customers across 70 commercial sectors, trust Blue Prism's enterprise-grade connected-RPA platform, which has users in more than 170 countries. By strategically applying intelligent automation, these organizations are creating new opportunities and services, while unlocking massive efficiencies that return millions of hours of work back into their business.
Available on-premises, in the cloud, hybrid, or as an integrated SaaS solution, Blue Prism's Digital Workforce automates ever more complex, end-to-end processes that drive a true digital transformation, collaboratively, at scale and across the entire enterprise.
Visit http://www.blueprism.com to learn more or follow Blue Prism on Twitter @blue_prism and on LinkedIn.
© 2020 Blue Prism Limited. "Blue Prism", "Thoughtonomy", the "Blue Prism" logo and Prism device are either trademarks or registered trademarks of Blue Prism Limited and its affiliates. All Rights Reserved.
O'Reilly and Formulatedby Unveil the Smart Cities & Mobility Ecosystems Conference – Yahoo Finance
Conference to showcase the practical, real-life enterprise use of data science, machine learning, AI, IoT, and open data in cities and mobility industries
O'Reilly, the premier source for insight-driven learning on technology and business, and Formulatedby today announced a new conference focused on how machine learning is transforming the future of urban communities and mobility industries around the world. The inaugural Smart Cities & Mobility Ecosystems (SCME) conference will take place in Phoenix, AZ from April 15-16, 2020, followed by a second event in Miami, FL from June 3-4, 2020.
Rapid technological advancements are challenging cities and the mobility industry with new business models, methodologies in development and manufacturing, unprecedented levels of automation, and the need for new infrastructure. From predictive analytics to policy, the Smart Cities & Mobility Ecosystems conference examines the role of governments, enterprises, and individuals in driving positive change as communities become increasingly connected.
"How we plan, build, and improve our cities has fundamentally changed, driven by powerful new technologies that can make life better for all the constituencies cities hope to serve," said Roger Magoulas, VP of Radar at OReilly and chair of the Smart Cities & Mobility Ecosystems conference. "This conference helps take the pulse of what we expect to change and what is possible for communities and mobility over the coming years."
The focused event brings together enterprise practitioners, technical experts, and executives to discuss how data, artificial intelligence (AI), machine learning, and cutting-edge technologies impact the future of our communities. Attendees can also workshop real-world applications of deep learning, sensor fusion, data processing and AI, automotive camera technology and computer vision algorithms, and reinforcement learning.
"The conversation around AI and ML has moved mainstream in applications like Smart Cities and Mobility Ecosystems," said Anna Anisin, founder and CEO at Formulatedby. "We're excited to collaborate with OReilly to connect our audience of ML practitioners and executives with the policymakers and stakeholders who will participate in taking this technology to the next level to improve lives at scale."
Key speakers at the Smart Cities & Mobility Ecosystems conference in Phoenix include:
Key speakers at the Smart Cities & Mobility Ecosystems conference in Miami include:
Registration for the upcoming Smart Cities and Mobility Ecosystems conference is now open for Phoenix and Miami. A limited number of media passes are also available for qualified journalists and analysts. Please contact info@formulated.by for media or analyst registration. Follow #SCME on Twitter for the latest news and updates.
About Formulatedby
Formulatedby is a marketing agency specializing in building data science, machine learning and AI communities. Female-owned and formulated in Miami, it's best known for the Data Science Salon, a vertically focused conference series around AI and ML, and for working throughout the technology landscape in B2B enterprise marketing and experiential marketing. For more information, visit formulated.by.
About O'Reilly
For 40 years, O'Reilly has provided technology and business training, knowledge, and insight to help companies succeed. Our unique network of experts and innovators share their knowledge and expertise at O'Reilly conferences and through the company's SaaS-based training and learning solution, O'Reilly online learning. O'Reilly delivers highly topical and comprehensive technology and business learning solutions to millions of users across enterprise, consumer, and university channels. For more information, visit http://www.oreilly.com.
Machine Learning Could Aid Diagnosis of Barrett’s Esophagus, Avoid Invasive Testing – Medical Bag
A risk prediction model consisting of 8 independent diagnostic variables, including age, sex, waist circumference, stomach pain frequency, cigarette smoking, duration of heartburn and acidic taste, and current use of antireflux medication, can provide insight into a patient's risk for Barrett's esophagus before endoscopy, according to a study published in The Lancet Digital Health.
The study assessed data from 2 prior case-control studies: BEST2 (ISRCTN Registry identifier: 12730505) and BOOST (ISRCTN Registry identifier: 58235785). Questionnaire data were assessed from the BEST2 study, which included responses from 1299 patients, of whom 67.7% (n=880) had Barrett's esophagus, defined as endoscopically visible columnar-lined oesophagus (Prague classification C1 or M3) with histopathological evidence of intestinal metaplasia on at least one biopsy sample. An algorithm was used to randomly divide (6:4) the cohort into a training data set (n=776) and a testing data set (n=523). A total of 398 patients from the BOOST study, 198 with Barrett's esophagus and 200 control individuals, were included as an external validation cohort.
Researchers used a univariate approach called information gain, as well as correlation-based feature selection. These 2 machine learning filter techniques were used to identify independent diagnostic features of Barrett's esophagus. Multiple classification tools were assessed to create a multivariable risk prediction model. The BEST2 testing data set was used for internal validation of the model, whereas the BOOST data set was used for external validation.
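As a rough illustration of the univariate information-gain step, the sketch below scores synthetic features with scikit-learn's mutual information estimator; it does not use the BEST2 data:

```python
# Rough sketch of a univariate "information gain" filter, using scikit-learn's
# mutual information estimator on synthetic data (not the BEST2 questionnaires).
import numpy as np
from sklearn.feature_selection import mutual_info_classif

rng = np.random.default_rng(0)
n = 500
age = rng.integers(30, 80, n)
waist_cm = rng.normal(95, 12, n)
smoker = rng.integers(0, 2, n)
noise = rng.normal(0, 1, n)              # an uninformative feature

# Synthetic outcome loosely driven by the informative features
risk = 0.03 * age + 0.02 * waist_cm + 0.8 * smoker
y = (risk + rng.normal(0, 1, n) > risk.mean()).astype(int)

X = np.column_stack([age, waist_cm, smoker, noise])
scores = mutual_info_classif(X, y, random_state=0)
for name, score in zip(["age", "waist_cm", "smoker", "noise"], scores):
    print(f"{name}: information gain ~ {score:.3f}")
```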
In the BEST2 study, the investigators identified a total of 40 candidate diagnostic features of Barrett's esophagus. Although 19 of these features added information gain, only 8 demonstrated independent diagnostic value after correlation-based feature selection. The 8 features associated with an increased risk for Barrett's esophagus were age, sex, cigarette smoking, waist circumference, frequency of stomach pain, duration of heartburn and acidic taste, and receiving antireflux medication.
The upper estimate of the predictive value of the model, which included these 8 features, had an area under the curve (AUC) of 0.87 (95% CI, 0.84-0.90; sensitivity set at 90%; specificity, 68%). The testing data set yielded an AUC of 0.86 (95% CI, 0.83-0.89; sensitivity set at 90%; specificity, 65%), and the external validation data set an AUC of 0.81 (95% CI, 0.74-0.84; sensitivity set at 90%; specificity, 58%).
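The evaluation approach can be sketched as follows on synthetic data: fit a classifier, compute the AUC, and read off specificity at a fixed 90% sensitivity. This does not reproduce the paper's model or cohort:

```python
# Sketch of the evaluation approach: fit a classifier, compute the AUC, and
# read off specificity at a fixed 90% sensitivity. Synthetic data only.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score, roc_curve
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 1_000
X = rng.normal(size=(n, 8))              # stand-ins for the 8 predictors
y = (X[:, :3].sum(axis=1) + rng.normal(0, 1.5, n) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.4, random_state=0)
probs = LogisticRegression(max_iter=1_000).fit(X_tr, y_tr).predict_proba(X_te)[:, 1]

print(f"AUC: {roc_auc_score(y_te, probs):.2f}")
fpr, tpr, _ = roc_curve(y_te, probs)
idx = np.argmax(tpr >= 0.90)             # first threshold reaching >= 90% sensitivity
print(f"Specificity at 90% sensitivity: {1 - fpr[idx]:.2f}")
```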
The study was limited by the fact that it collected data solely from at-risk patients, which enriched the overall cohorts for patients with Barrett's esophagus.
The researchers concluded that the risk prediction panels generated from this study would be easy to implement into medical practice, allowing patients to enter their symptoms into a smartphone app and receive an immediate risk factor analysis. After receiving results, the authors suggest, these data could then be uploaded to a central database (eg, in the cloud) that would be updated after that person sees their medical professional.
Reference
Rosenfeld A, Graham DG, Jevons S, et al; BEST2 study group. Development and validation of a risk prediction model to diagnose Barrett's oesophagus (MARK-BE): a case-control machine learning approach [published online December 5, 2019]. Lancet Digit Health. doi:10.1016/S2589-7500(19)30216-X