
Reliable Health Assistants Yielded by Artificial Intelligence – Analytics Insight

The implementation of artificial intelligence in healthcare is among the most valuable applications of the technology. Reading and comprehending diagnostic reports, issuing medical advisories and making efficient diagnoses are some of the crucial capabilities enabled by artificial intelligence algorithms. Together, artificial intelligence and human oversight have helped prevent avoidable deaths. The ability to analyze big data makes artificial intelligence smarter and all the more useful.

Buoy Health is a US-based application that uses machine learning to assess a patient's condition. It narrows down a probable disease by asking questions, each based on the answers to the preceding ones, and then guides users on how to proceed.
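The question flow described above behaves like a decision tree: each answer selects the next question until the app reaches a recommendation. A minimal sketch of that mechanism follows; the questions, branches and advice are illustrative only, not Buoy Health's actual medical logic.

```python
# Adaptive symptom triage as a decision tree: each answer picks the next
# branch until a leaf (a recommendation string) is reached.
# All questions and advice below are invented for illustration.

TRIAGE_TREE = {
    "question": "Do you have a fever?",
    "yes": {
        "question": "Is your temperature above 39C?",
        "yes": "See a doctor today.",
        "no": "Rest, hydrate, and monitor your temperature.",
    },
    "no": "Your symptoms do not suggest an urgent condition.",
}

def triage(answers):
    """Walk the tree using a list of 'yes'/'no' answers; return advice."""
    node = TRIAGE_TREE
    for answer in answers:
        if isinstance(node, str):  # already reached a recommendation
            break
        node = node[answer]        # each answer selects the next branch
    return node

print(triage(["yes", "no"]))  # -> Rest, hydrate, and monitor your temperature.
```

The key property is that the next question depends on every prior answer, so two users with different answers follow different paths through the tree.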

Another medication-tracking application measures the consumption of medicines in one's body and analyzes the potential risks of excessive or deficient intake. It also recognizes which combinations of medicines, and the food habits associated with them, are safe or unsafe. Lastly, it rewards users for taking their medicines on time.

Asthmapolis is a smart inhaler fitted with a sensor that responds automatically when synced with your smartphone. It provides feedback and asthma-related information, including warnings about certain environments. The phone can also detect the inhaler whenever it is nearby, making it easier to find.

Another entry is an initiative by the Government of India to contain the spread of the novel coronavirus by detecting exposure and identifying carriers. It warns users and informs them about any bodily developments that match the symptoms of the virus. It has gained considerable popularity and appreciation, mainly among Indian users, because of its proven efficacy.

Babylon is a UK-based health assistant that compares symptoms against a large database of common illnesses to identify the disease affecting the user's body. It then advises further steps depending on the user's medical history.

Nursing care is extended through virtual health assistants like Sense.ly. This virtual nurse reminds users about their daily medicines and monitors changes in their health in the intervals between doctor visits. It has a crucial role to play in the expansion of health tech in 2021.

AiCure is a popular health assistant among patients with serious illnesses who tend to skip the medicines and practices prescribed by their doctor. The application uses the phone's camera to track whether the patient's behavior accords with the doctor's advice. It exhibits the vast extent of progress that artificial intelligence can bring.

This is a wearable health assistant that records an individual's physical activity to display their fitness. The amount of exercise and the calories burnt are measured to reflect their agility, and doctors could use the data to gauge the user's capabilities. Calculating an individual's movement is powered by the robust algorithms underpinning artificial intelligence.

Binah.ai is an efficient health assistant application that uses a smartphone camera to diagnose health problems by scanning the user's face from side to side and reporting the health conditions they are undergoing. With it, minor health issues can receive immediate and effective consultation without the user having to commute to a doctor and wait in a long queue.

Youper is a health assistant designed to address mental health illness and instability; psychological counseling is a central feature of the application. It works by carrying out compassionate conversations with the user and suggesting practices that help them overcome inner conflicts in a subtle way that users find appealing. Clinical assistance for psychological issues is in high demand, and this healthcare application has gone a considerable way toward meeting it.




MIT Centre for Future Skills Excellence Concludes the 5-Week, Short Term Course on Artificial Intelligence for Teachers – PR Newswire India

AI is already being used in education, notably in the form of skill development tools and testing systems. As AI educational solutions improve, it is hoped that AI will help bridge gaps in learning and teaching, allowing schools and teachers to accomplish more than ever before. AI can improve efficiency, personalisation and administrative work, giving teachers more time and freedom to focus on understanding and adaptability. The objective for AI in education is for machines and teachers to work together, combining the best features of each for the best outcome for students. This was the motivation behind curating this exclusive course for teachers.

Teachers from all over India actively participated in the training, which covered topics from the basics to advanced concepts in Artificial Intelligence. Eminent industry veterans, including Dr. Raja N. Moorthy, Member, Advisory Board, Kirusa Inc; Mr. Arpit Yadav, Senior Data Scientist, INSOFE; Ms. Rupa Singh, CEO and Founder, AI-Beehive; Dr. Dharmendra Singh Rajput, Associate Professor, VIT Vellore; Ms. Pradnya Paithankar, Senior Trainer and Consultant; and Mr. Tushar Kute, Researcher and Senior Trainer, MITU Research, conducted the training sessions and led practical sessions using industrial case studies over the five weeks.

Prof. Kannan Moudgalya, Distinguished Professor at the Indian Institute of Technology Bombay, Mumbai, and Erach and Meheroo Mehta Advanced Education Technology Chair, graced the occasion as the Chief Guest of the valedictory function. He shared his initiatives in upskilling students from humble backgrounds. His iconic project, 'Spoken Tutorial', is an educational multimedia platform that has won numerous awards; it is funded by the National Mission on Education through Information and Communication Technology (ICT) and was launched by the Ministry of Human Resource Development (MHRD), Government of India. On the platform, one can self-teach various Free and Open-Source Software. The self-paced, multilingual courses allow anybody with a computer and a desire to learn to do so from anywhere, at any time, and in their preferred language. He believes that everyone deserves an equal opportunity to reach the top of the pyramid of success. He congratulated the teachers on their enthusiasm and eagerness to learn emerging technologies, and commended the MIT Centre for Future Skills Excellence for its upskilling and reskilling initiatives.

Mr. Sushant Gadankush, Founder & MD, InnoWise India, also graced the occasion. He presented an overview of RiYSA Labs, a unique online platform that can provide students with a rich, engaging experience and make it easy for teachers to monitor their progress using live virtual machine views. He encouraged the education fraternity to understand the balance between educational organisations and technology, and appreciated the teachers' efforts in skilling up for a better teaching-learning experience in the new hi-tech classrooms. Dr. Vinnie Jauhari, Director of Education Advocacy at Microsoft Corporation India Limited, sent her best wishes to the participants. She believes in institutional excellence and in setting global benchmarks in higher education, executive education and learning, and she congratulated the teachers who have started their journey of upskilling in emerging technologies.

Shri Tushar Kute shared his experience of training teachers who had joined from all corners of India. He expressed his gratitude to all the teachers for being participative, sincere and attentive throughout the training sessions, and said that this training would always be special to him because he had the good fortune of training people already working on the noble cause of nation-building. Dr. Asawari Bhave-Gudipudi, Dean, Faculty of Humanities & Social Sciences, MIT Art, Design and Technology University, advised the education fraternity to explore Artificial Intelligence, as it is an integral part of the lives and careers of current and future generations. A few teacher participants also shared their remarkable journeys from total newbies in AI to fairly knowledgeable AI enthusiasts. For them, it was a complete departure from the regular classrooms of mathematics, science or languages to challenging and exciting technology gyan (knowledge). From anxiety at the beginning to the accomplishment of becoming reasonably tech-savvy, their stories of an exciting journey said it all.

Prof. Suraj Bhoyar, Project Director, MIT Centre for Future Skills Excellence, shared the intent and summary of the 5-week, 2-credit course, which was initiated exclusively for teachers looking to upskill themselves in Artificial Intelligence. MIT Centre for Future Skills Excellence (MIT FuSE) firmly believes in the potential of teachers to build the future of the nation, and such training is an attempt to help teachers keep up with the National Education Policy (NEP) 2020 and the CBSE guidelines mandating AI training in schools and colleges to spread awareness of emerging technologies among students right from the classroom. He promised more such short courses for tech enthusiasts in the future. He also reiterated that AI is not a futuristic vision but something that is here today, being integrated with education and deployed for better student-teacher interactions, and that it goes hand in hand with exponential technologies like IoT, Data Analytics, Robotics, Cyber Security, Cloud Computing, Blockchain, etc.

The short-term course on Artificial Intelligence was inaugurated on Sept. 27, 2021 with insights and blessings from top global Artificial Intelligence influencers and industry veterans: Mr. Utpal Chakraborty, TEDx speaker and former Head of Artificial Intelligence, YES Bank Ltd.; Dr. Anoop V.S., Senior Scientist (Research & Training), IIITMK Kerala; and Mr. Arpit Yadav, Senior Data Scientist, INSOFE. The valedictory ceremony concluded with a pledge from the teacher participants and educators to build a smart and enterprising India. Prof. Vilas Khedekar, Prof. Ajita Deshmukh and Dr. Priya Singh took efforts to curate the short-term course on AI in Education for teachers. Ms. Smruti Shelke from MIT Centre for Future Skills Excellence (MIT FuSE) compered the ceremony along with Prof. Komal Gagare from MIT School of Education & Research.

MIT FuSE has curated exclusive courses keeping up with the current job market's requirements for experts in Artificial Intelligence & Machine Learning, Enterprise Resource Planning, Robotic Process Automation, Cloud Computing, Cyber Security and Blockchain Technology. Budding tech enthusiasts can check the MIT FuSE website for more information on career opportunities in various emerging technologies and guidance on pursuing them.

About MIT-ADT University

MAEER's Trust, known for setting a strong precedent for the privatization of engineering education in Maharashtra, took a first mover's advantage by establishing the Maharashtra Institute of Technology (MIT-Pune) in 1983, which continues to be the flagship institute of the group.

MIT Art, Design and Technology University, Pune was established under the MIT Art, Design and Technology University Act, 2015 (Maharashtra Act No. XXXIX of 2015). The University commenced operations successfully on 27th June 2016. The University is a self-financed institution empowered to award degrees under Section 22 of the University Grants Commission Act, 1956. It has a unique blend of Art, Design and Technology at the core of its academics.

Recently, MIT Art, Design and Technology University, Pune has received the following accolades:

1. Ranked 26th in ARIIA 2020 by the Ministry of Education, Govt. of India
2. Received a 5-star rating for exemplary performance from the Ministry of Education's Innovation Council, Govt. of India
3. Conferred the Best University Campus Award by ASSOCHAM, New Delhi
4. Granted an Atal Incubation Centre under the Atal Innovation Mission, NITI Aayog, Govt. of India

MIT Art, Design and Technology University has been taking a holistic approach towards imparting education wherein the students are being motivated to build a complete winning personality which is "physically fit, intellectually sharp, mentally alert and spiritually elevated". The students are being encouraged to participate in yoga, meditation, physical training, spiritual elevation, communication skills, and other personality development programmes. Currently, we have 7500+ students studying in various schools of higher education under the University viz. Engineering and Technology, Food Technology, Bioengineering, Arts, Design, Marine Engineering, Journalism and Broadcasting, Film and Television, Music (Hindustani Classical Vocal and Instrumental), Teacher Education, and Vedic Sciences.

Prof. Suraj Bhoyar Project Director MIT-FuSE, MIT-ADT University, Pune Mobile No.: 9028483286

Photo - https://mma.prnewswire.com/media/1674545/MIT_Artificial_Intelligence_Course_for_TEACHERS.jpg
Logo - https://mma.prnewswire.com/media/1479539/MIT_ADTU_Logo.jpg

SOURCE MIT Art, Design & Technology University Pune


UNESCO Conducts a Training on Artificial Intelligence for Disaster Response in Tanzania – Africanews English

Over the past several decades, climate change has led to major disasters in East African countries, including Tanzania. Floods, chronic droughts, landslides, strong winds and earthquakes, along with their secondary impacts of disease and epidemics, are some of the recent disasters plaguing the country. These disasters lead to death and displacement, loss of property and livelihoods, and disruption of social networks and services such as water, food and healthcare, leaving communities more vulnerable and susceptible to the next extreme event. A lack of disaster preparedness and awareness makes the situation worse, as communities remain helpless in the event of a disaster and face its full impact. Combining citizen science and modern technological innovation provides an opportunity to build community resilience and reduce risk.

To address this, the UNESCO Offices in Tanzania and Nairobi conducted a two-week training (16th to 31st August 2021) on the application of artificial intelligence (AI) using a chatbot for disaster preparedness and response. The training is part of the regional project on Strengthening Disaster Prevention Approaches in Eastern Africa (STEPDEA), funded by the Ministry of Foreign Affairs of the Government of Japan and led by the UNESCO Office in Nairobi in collaboration with national governments, UNESCO National Commissions and Japanese institutions (LINE, Weathernews Inc., etc.). Participants were introduced to the benefits of artificial intelligence and taken through the installation, use and management of the AI chatbot for disaster response and management. The AI chatbot is designed to enable citizens and local authorities to access information before the onset of a disaster (warning); citizens to report disasters as they occur; and local authorities to respond immediately by identifying vulnerable areas and parties at risk and informing the public where to find distribution points for assistance. A Collaborating, Learning and Adapting (CLA) framework was adopted during the two-week training. CLA aims to improve results and facilitate country-led development by enhancing knowledge sharing and collaboration among partners while adapting to new and changing situations. The training brought together 150 people (50% of them women), drawn from national-level public institutions, higher education institutions and non-state actors in both mainland Tanzania and Zanzibar.

Distributed by APO Group on behalf of United Nations Educational, Scientific and Cultural Organization (UNESCO).


Africanews provides content from APO Group as a service to its readers, but does not edit the articles it publishes.


The Coming Convergence of NFTs and Artificial Intelligence – Yahoo Finance

Non-fungible tokens (NFTs) are becoming one of the most important trends in the crypto ecosystem. The first generation of NFTs has focused on key properties such as ownership representation, transfer and automation, as well as on the core building blocks of the NFT market infrastructure.

The hype in the NFT market makes it relatively hard to distinguish signal from noise when even the most simplistic forms of NFTs are able to capture incredible value. But as the space evolves, the value proposition of NFTs should move from static images or text to more dynamic and intelligent collectibles. Artificial intelligence (AI) is likely to shape the next wave of NFTs.

Jesus Rodriguez is the CEO of IntoTheBlock, a market intelligence platform for crypto assets. He has held leadership roles at major technology companies and hedge funds. He is an active investor, speaker, author and guest lecturer at Columbia University in New York.

We are already seeing manifestations of NFT-AI convergence in the form of generative art. However, the potential is much bigger. Injecting AI capabilities into the lifecycle of NFTs opens the door to forms of intelligent ownership that we haven't seen before.

Today, NFTs remain mostly digital manifestations of the offline world in areas such as art or collectibles. While compelling, that vision is quite limited. A more intriguing way to think about NFTs is as digital ownership primitives. Ownership representations have much wider applications than collectibles. While in the physical world ownership is mostly represented as static records, in the digital on-chain world ownership can be programmable, composable and, of course, intelligent.
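As a rough illustration of ownership as a programmable primitive, here is a toy registry in the spirit of ERC-721-style tokens: token IDs map to owners, and the transfer rule is just code that could be extended or composed. The class and method names are invented for this sketch and are not any specific standard's API.

```python
# A toy on-chain-style ownership registry: token IDs map to owners, and
# transfer is a programmable rule rather than a static record.
# Names here are illustrative, not a real standard's interface.

class OwnershipRegistry:
    def __init__(self):
        self._owner = {}  # token_id -> owner address

    def mint(self, token_id, owner):
        if token_id in self._owner:
            raise ValueError("token already exists")
        self._owner[token_id] = owner

    def owner_of(self, token_id):
        return self._owner[token_id]

    def transfer(self, token_id, sender, recipient):
        # Programmable rule: only the current owner may transfer.
        # A richer primitive could add royalties, time locks, etc. here.
        if self._owner.get(token_id) != sender:
            raise PermissionError("sender does not own this token")
        self._owner[token_id] = recipient

registry = OwnershipRegistry()
registry.mint(1, "alice")
registry.transfer(1, "alice", "bob")
print(registry.owner_of(1))  # -> bob
```

The point of the sketch is the `transfer` method: because the ownership rule is code, it can be made composable or, as the article argues, intelligent.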

With intelligent digital ownership the possibilities are endless. Let's illustrate this in the context of collectibles, which remain one of the best-known applications of NFTs.

Read more: Jesus Rodriguez - When DeFi Becomes Intelligent

Imagine digital-art NFTs that could converse in natural language, answering questions to explain the inspiration behind their creation and adapting those answers to a specific conversational context. We could also envision NFTs that adapt to your feelings and mood and provide an experience that is constantly fulfilling. What about intelligent NFT wallets that, as they interact with a website, could decide which ownership rights to present in order to improve the experience for a given user?


Echoing William Gibson's famous quote, "The future is already here, it's just not very evenly distributed," we should think of intelligent digital ownership as something that is possible with today's AI and NFT technologies. NFTs are likely to evolve as a digital ownership primitive, and intelligence should definitely be part of it.

To understand how intelligent NFTs can be enabled with today's technologies, we should understand which AI disciplines have intersection points with the current generation of NFTs. The digital representation of NFTs relies on formats such as images, video, text and audio, and these representations map brilliantly to different AI sub-disciplines.

Podcast: How Erick Calderon Turned NFT Squiggles Into a $6M Funding Round

Deep learning is the area of AI that relies on deep neural networks as a way to generalize knowledge from datasets. Although the ideas behind deep learning have been around since the 1970s, they have seen an explosion in the last decade with a number of frameworks and platforms that have catalyzed its mainstream adoption. There are some key areas of deep learning that can be incredibly influential to enable intelligence capabilities in NFTs:

Computer vision: NFTs today are mostly about images and videos and are, therefore, a perfect fit to leverage the advancements in computer vision. In recent years, techniques such as convolutional neural networks (CNNs), generative adversarial networks (GANs) and, more recently, transformers have pushed the boundaries of computer vision. Image generation, object recognition and scene understanding are some of the computer vision techniques that can be applied in the next wave of NFT technologies. Generative art seems like a clear domain in which to combine computer vision and NFTs.

Natural language understanding: Language is a fundamental medium for expressing cognition, and that includes forms of ownership. Natural language understanding (NLU) has been at the center of some of the most important breakthroughs in deep learning in the last decade. Techniques such as the transformers powering models like GPT-3 have reached new milestones in NLU. Areas such as question answering, summarization and sentiment analysis could be relevant to new forms of NFTs. Layering language understanding onto existing forms of NFTs seems like a straightforward mechanism to enrich their interactivity and user experience.
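To make the question-answering idea concrete, here is a sketch of the interface such a conversational NFT might expose. A real system would call a transformer-based QA model; the keyword-matching stub below is a stand-in so the example stays self-contained, and all the metadata is invented.

```python
# Sketch of an NFT that answers questions about itself.
# A production version would route `answer` through an NLU model;
# this keyword stub only illustrates the interface. Metadata is invented.

NFT_METADATA = {
    "inspiration": "a walk through a foggy forest at dawn",
    "technique": "a GAN trained on landscape photography",
    "creator": "an anonymous generative artist",
}

def answer(question):
    """Match the question against known topics; fall back gracefully."""
    q = question.lower()
    for topic, text in NFT_METADATA.items():
        if topic in q:
            return f"This piece's {topic} was {text}."
    return "I don't have an answer for that yet."

print(answer("What was the inspiration behind this piece?"))
```

Swapping the stub for a real QA model changes only the body of `answer`; the NFT-facing interface, a question in and a grounded answer out, stays the same.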

Read more: Jesus Rodriguez - 3 Factors That Make Quant Trading in Crypto Unique

Speech recognition: Speech intelligence can be considered the third area of deep learning that can have an immediate impact in NFTs. Techniques such as CNNs and recurrent neural networks (RNN) have advanced the speech intelligence space in the last few years. Capabilities such as speech recognition or tone analysis could power interesting forms of NFTs. Not surprisingly, audio-NFTs seem like the perfect scenario for speech intelligence methods.

The advancements in language, vision and speech intelligence expand the horizon of NFTs. The value unlocked at the intersection of AI and NFTs will impact not one but many dimensions of the NFT ecosystem. In todays NFT ecosystem, there are three fundamental categories that can be immediately reimagined by incorporating AI capabilities:

AI-generated NFTs: This seems to be the most obvious dimension of the NFT ecosystem to benefit from recent advancements in AI technologies. Leveraging deep learning methods in areas such as computer vision, language and speech can enrich the experience for NFT creators to levels we haven't seen before. Today, we can see manifestations of this trend in areas such as generative art, but they remain relatively constrained both in the AI methods used and in the use cases they tackle.

In the near future, we should see the value of AI-generated NFTs expand beyond generative art into more generic NFT utility categories, providing a natural vehicle for leveraging the latest deep learning techniques. An example of this value proposition can be seen in digital artists like Refik Anadol, who are already experimenting with cutting-edge deep learning methods for the creation of NFTs. Anadol's studio has been a pioneer in using techniques such as GANs, and even dabbling in quantum computing, training models on hundreds of millions of images and audio clips to create astonishing visuals. NFTs have been one of the recent delivery mechanisms explored by Anadol.

Read more: Designer Eric Hu on Generative Butterflies and the Politics of NFTs

NFTs with embedded AI: We can use AI to generate NFTs, but that doesn't mean they will be intelligent. What if they could be? Natively embedding AI capabilities into NFTs is another market dimension that can be unlocked by the intersection of these two fascinating technology trends. Imagine NFTs that incorporate language and speech capabilities to establish a dialog with users, answer questions about their meaning or interact with a specific environment. Platforms such as Alethea AI or Fetch.ai are starting to scratch the surface here.

AI-first NFT infrastructures: The value of deep learning methods for NFTs wont only be reflected at the individual NFT level but across the entire ecosystem. Incorporating AI capabilities in building blocks such as NFT marketplaces, oracles or NFT data platforms can prepare the foundation to gradually enable intelligence across the entire lifecycle of NFTs. Imagine NFT data APIs or oracles that provide intelligent indicators extracted from on-chain datasets or NFT marketplaces that use computer vision methods to make smart recommendations to users. Data and intelligence APIs are going to become an important component of the NFT market.

AI is changing the landscape of all software, and NFTs are no exception. By incorporating AI capabilities, NFTs can evolve from basic ownership primitives to intelligent, self-evolving forms of ownership that enable richer digital experiences and higher utility for NFT creators and consumers. The era of intelligent NFTs does not require any futuristic technical breakthroughs. The recent advancements in computer vision, natural language understanding and speech analysis, combined with the flexibility of NFT technologies, already offer a great landscape for experimentation in bringing intelligence to the NFT ecosystem.


Heard on the Street 10/28/2021 – insideBIGDATA

Welcome to insideBIGDATA's Heard on the Street round-up column! In this new regular feature, we highlight thought-leadership commentaries from members of the big data ecosystem. Each edition covers the trends of the day with compelling perspectives that can provide important insights to give you a competitive advantage in the marketplace. We invite submissions with a focus on our favored technology topic areas: big data, data science, machine learning, AI and deep learning. Enjoy!

Paradigm shifts are needed to establish greater harmony between artificial intelligence (AI) and human intelligence (HI). Commentary by Kevin Scott, CTO, Microsoft.

Comparing AI to HI involves a long history of people assuming that things that are easy for them will be easy for machines and vice versa, but it's more the opposite. Humans find becoming a Grand Master of chess, or performing very complicated and repetitive data work, difficult, whereas machines are easily able to do those things. But on things we take for granted, like common-sense reasoning, machines still have a long way to go. AI is a tool to help humans do cognitive work. It's not about whether AI is becoming the exact equivalent of HI; that's not even a goal I'm working toward.

Importance of utilizing the objectivity of data to discover areas of opportunity within organizations. Commentary by Eric Mader, Principal Consultant, AHEAD.

During this era of accelerated tech adoption and digital transformation, more and more companies are turning to data collection and storage to fuel business decisions. This is, in part, because 2020 lacked the stability to serve as a functional baseline for business decisions in 2021. With this surge of organizations leaning on data and analytics, it's essential to have a clear understanding of how this information can be helpful, but also harmful. These companies may inherently understand the importance of analyzing their data, but their biggest problem lies in their approach to preventing data bias. Accepting the natural appetite within organizations to lead their data to some degree is an important first step in capitalizing on the true objectivity of data. Luckily, there is one piece of advice that organizations must remember to avoid falling prey to confirmation bias in data analytics: be careful how you communicate your findings. It's a simple practice that can make all the difference. While the researchers analyzing the data should be aware of common statistical mistakes and their own biases toward a specific answer, careful attention should also be paid to how the data is presented. Data teams have to avoid communicating their findings in ways that might be misleading or misinterpreted. By upholding a level of meticulousness within your data strategy, organizations can ensure that their data approach is working with them and not against them.

Delivering Better Patient Experiences with AI. Commentary by Joe Hagan, Chief Product Officer at LumenVox.

Call management is critical in the healthcare industry to support patient needs such as scheduling, care questions and prescription refills. However, data suggests that more than 50% of contact agents' time is spent resetting passwords for patient applications and portals, with each password taking three or more minutes to reset. How can healthcare call centers overcome this time sink and better serve patients? With AI-enabled technologies such as speech recognition and voice biometrics. According to a survey from LumenVox and SpinSci, nearly 40% of healthcare providers want to invest in AI for their contact centers in the next one to three years. The great digital disruption in healthcare that prioritizes the patient experience is here. As healthcare contact centers take advantage of technologies such as AI, they will be better equipped to deliver high-quality service to patients.

AI is the Future of Video Surveillance. Commentary by Rick Bentley, CEO of Cloudastructure.

Not long ago, if you wanted a computer to recognize a car, it was up to you to explain what a car looks like: it's got these black round things called wheels at the bottom, the windows kind of go around the top half-ish part, they're kind of rounded on top and shiny, the lights on the back are sometimes red, the ones up front are sometimes white, the ones on the side are sometimes yellow... As you can imagine, it doesn't work very well. Today you just take 10,000 pictures of cars and 50,000 pictures of close-but-not cars (motorcycles, jet skis, airplanes) and your AI/ML platform will do a better job of detecting cars than you ever could. Intelligent AI- and ML-powered video solutions are the way of the future, and if businesses don't keep up, they could be putting themselves and their employees at risk. Two years ago, more than 90% of enterprises were still stuck on outdated on-premises security methods, many with limited if any AI intelligence. The industry is undergoing a rapid transformation as companies take advantage of this technology and move their systems to the cloud. Enhanced AI functionality and cloud adoption in video surveillance have allowed business owners and IT departments to monitor the security of their businesses from the safety of their homes. The AI surveillance solution can be accessed from any authorized mobile device, and the video is stored safely off premises so that it cannot be hacked and is safe from environmental hazards. Additionally, powerful AI analytics allow intelligent surveillance systems to sort through large volumes of footage to identify interesting activity more than 10x faster and more accurately than manual, on-premises solutions. In the coming years, AI functionality will continue to get more advanced, allowing businesses to generate real-time insight and enabling rapid response to incidents, resulting in a more efficient and safer society.
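The "show it examples instead of writing rules" idea from the commentary can be sketched with a toy classifier: compute the average (centroid) of labelled feature vectors and classify new points by proximity. The 2-D features here are invented stand-ins for the image embeddings a real deep learning system would learn from those thousands of pictures.

```python
# Toy example-driven classification: no hand-written rules about wheels
# or windows, just labelled examples. The 2-D "features" are stand-ins
# for learned image embeddings; a real system trains a deep network.

def centroid(points):
    """Mean position of a list of equal-length feature vectors."""
    n = len(points)
    return tuple(sum(p[i] for p in points) / n for i in range(len(points[0])))

def train(cars, not_cars):
    return {"car": centroid(cars), "not_car": centroid(not_cars)}

def classify(model, x):
    """Assign x to the label whose centroid is closest (squared distance)."""
    def dist(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    return min(model, key=lambda label: dist(model[label], x))

# Toy "embeddings": cars cluster near (1, 1), non-cars near (0, 0).
model = train(cars=[(0.9, 1.1), (1.0, 0.8), (1.2, 1.0)],
              not_cars=[(0.1, 0.0), (0.0, 0.2), (0.2, 0.1)])
print(classify(model, (0.95, 0.9)))  # -> car
```

The classifier never sees a rule about what a car looks like; the labelled examples alone define the decision boundary, which is the shift the quote describes.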

How Businesses Can Properly Apply and Maximize AI's Impact. Commentary by Bren Briggs, VP of DevSpecOps at Hypergiant.

Businesses that do not utilize AI and ML solutions will become increasingly irrelevant. This is not because AI or ML are magic bullets, but rather because utilizing them is a hallmark of innovative, resilience-first thinking. For businesses to maximize the impact of AI, they must first pose critical business questions and then seek solutions that streamline data management as well as improve and strengthen business processes. AI, when utilized well, helps companies predict problems and then act swiftly to respond to those challenges. I always encourage companies to focus on the basics: hire the experts, set up the models for success, and pick the AI solutions that will most benefit the organization. In doing so, companies should ramp up their data science and MLOps teams, which can help assess which AI problems are most likely to succeed and have a strong ROI. A cost/benefit analysis will help determine whether an AI integration is actually the best use of your company's resources at any given time.

AI as a Software Developer in the Context of Product Development. Commentary by Jonathan Grandperrin, CEO, Mindee.

As artificial intelligence continues to take great strides and more machine learning-based products for engineering teams emerge, a challenge arises: a pressing need for software developers to skill up, understand how AI functions, and learn how to leverage it appropriately. AI's capability to automate tasks and optimize many, if not all, processes has revolutionized the world. To reach the promised efficiency, AI must be integrated into everyday products such as websites, mobile applications, and even devices like smart TVs. However, a problem arises for developers in this context: AI does not rely on the same principles as software development. Unlike software development, which mostly relies on deterministic functions, AI is based on statistical approximation, which changes the whole paradigm from a software developer's point of view. It is also the reason behind the rise of data science positions in software companies. To succeed, developers must be able to create AI models from scratch and provide technical teams with ML features they can understand and utilize. Fortunately, it is becoming increasingly common to see ML libraries for developers. In fact, those looking to ramp up their ML skills can take introductory courses that readily extend their skillset.
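The paradigm gap can be made concrete with a toy contrast. The invoice-classifier stub below is hypothetical and only mimics the shape of a trained model's output; it is not any real library's API.

```python
# A deterministic function: identical inputs always give identical outputs,
# and correctness can be pinned down with exact tests.
def total_price(quantity: int, unit_price: float) -> float:
    return quantity * unit_price

# A statistical component (hypothetical stub standing in for a trained model):
# it returns an estimate plus a confidence score, and callers must decide what
# to do when the score is uncertain. Retraining can change these numbers.
def is_invoice(document_text: str) -> tuple[bool, float]:
    score = 0.93 if "invoice" in document_text.lower() else 0.08
    return score > 0.5, score

print(total_price(3, 10.0))                      # 30.0
print(is_invoice("Invoice #123, due Nov 2021"))  # (True, 0.93)
```

The second function's callers must handle thresholds and uncertainty, which is exactly the mindset shift the commentary describes.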

How to Solve ML's Long Tail Problem. Commentary by Russell Kaplan, Head of Nucleus at Scale AI.

The most common problem in machine learning today is uncommon data. With ML deployed in ever more production environments, challenging edge cases have become the norm instead of the exception. Best practices for ML model development are shifting as a result. In the old world, ML engineers would collect a dataset, make a train/test split, and then go to town on training experiments: tuning hyperparameters, model architectures, data augmentation, and more. When the test set accuracy was high enough, it was time to ship. Training experiments still help, but they are no longer the highest-leverage activity. Increasingly, teams hold their models fixed while iterating on their datasets, rather than the other way around. Not only does this lead to larger improvements, it is also the only way to make targeted fixes to specific ML issues seen in production. In ML, you cannot if-statement your way out of a failing edge case. To solve long-tail challenges in this way, it is not enough to make dataset changes: you have to make the right dataset changes. This means knowing where in the data distribution your model is failing. The concept of one test set no longer fits. Instead, high-performing ML teams curate collections of many hard test sets covering a diversity of settings, and measure accuracy on each. Beyond informing what data to collect and label next to drive the greatest model improvement, the many-test-sets approach also helps catch regressions. If aggregate accuracy goes up but performance on critical edge cases goes down, your model may need another iteration before you ship.
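The many-test-sets practice can be sketched as follows; the slice names, the toy model, and the regression tolerance are all invented for illustration:

```python
def accuracy(model, examples):
    # Fraction of (input, label) pairs the model gets right.
    return sum(model(x) == y for x, y in examples) / len(examples)

def evaluate_slices(model, slices):
    # One score per curated test set, instead of a single aggregate number.
    return {name: accuracy(model, examples) for name, examples in slices.items()}

def regressions(old_scores, new_scores, tolerance=0.01):
    # Slices whose accuracy dropped, even if aggregate accuracy improved.
    return [s for s in old_scores if new_scores[s] < old_scores[s] - tolerance]

# Toy model: predicts whether an integer is even.
model = lambda x: int(x % 2 == 0)
slices = {
    "common_cases": [(2, 1), (4, 1), (3, 0), (7, 0)],
    "edge_case_negatives": [(-2, 1), (-3, 0)],
    "edge_case_zero": [(0, 1)],
}
print(evaluate_slices(model, slices))
```

A release gate built on `regressions(...)` blocks a ship when any critical slice drops, which is the "catch regressions" behavior described above.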

Top Data Privacy Considerations for M&As. Commentary by Matthew Carroll, CEO, Immuta.

M&A deals continue to soar globally, with the technology, financial services, and insurance industries leading the pack. However, with the increase in deals comes an increase in deals that fall through. Research shows that one of the primary reasons M&A deals fall through at a surprisingly high rate (between 70% and 90%) is data privacy and regulatory concerns as more companies move their data to the cloud. M&A transactions lead to an instantaneous growth in the number of data users, but the scope of data used is often complex and risky, especially when it involves highly sensitive personal, financial, or health-related data. With two companies combining their separate, vast data sets, it is imperative to find an efficient way to ensure that data protection methods and standards are consistent, that only authorized users can access data for approved uses, and that privacy regulations are adhered to across jurisdictions and around the globe. Merging data is just the beginning. Once mergers are completed, the joint entities must be able to provide comprehensive auditing to prove compliance. Without a strong data governance framework, stakeholder buy-in, and automated tools that work across both companies' data ecosystems, this can lead to unmanageable and risk-prone processes that inhibit the value of the combined data and could create data vulnerabilities.

Why Training Your AI Systems Is More Complex Than You Think. Commentary by Doug Gilbert, CIO and Chief Digital Officer, Sutherland.

Few enterprises, if any, are ready to deploy AI systems or ML models that are completely free from any form of human intervention or oversight. When training algorithms, it is important to first understand the inherent risks of bias in the training environment, in the selection of training data and algorithms based on human expertise in a particular field, and in the application of AI to the very specific problem it was trained to solve. Ignoring any of these can lead to unpredictable or negative outcomes. Human oversight using methods such as Human-in-the-Loop (HitL), reinforcement learning, bias detection, and continuous regression testing helps ensure AI systems are trained adequately and effectively to deal with real-life interactions, work, and use cases, and to create positive outcomes.

Scientific vs. Data Science Use Cases. Commentary by Graham A. McGibbon, Director of Partnerships, ACD/Labs.

Current scientific informatics systems support electronic representations of data obtained from experiments and tests, which are often confirmatory analyses with interpretations. Data science is more often exploratory, and its supporting systems typically rely on data pipelines and the large amounts of clean, comprehensive data required for appropriate statistical treatment. Data science systems are founded on large volumes of data being well identified via metadata, which is needed for the critical capability of machines to interpret these large datasets on their own and subsequently derive correlations and predictions that are otherwise not obvious. Ultimately, some of these systems could cycle continuously and autonomously given sufficient coupling with automated data-generation technologies. However, scientists want the ability to judge the output of their analyses and to view and explore unanticipated features in their data along with any machine-derived interpretations. Consequently, these scientific consumers need representations of results that they can easily evaluate. Compared to contemporary or historical scientific systems, the current output capabilities of data science systems lack some of the semiotics that domain-specific scientists expect. As such, there remains a need to bridge data science and domain-specific science, particularly if changes are desired in the latter to make it machine-interpretable for further adoption. It is important to understand that data science and domain-specific science will likely each have to make adjustments to accommodate the other in order to ultimately reap the full benefit of generating human-interpretable knowledge outputs.

Why Predictive Analytics are Increasingly Important for Smart, Sustainable Operations. Commentary by Steve Carlini, Vice President of Innovation and Data Center at Schneider Electric.

In the data center world, predictive analytics are used mainly for critical components in the power and cooling architecture to prevent unplanned downtime. For example, a DCIM solution can look at UPS system batteries and collect data on the number of discharges, temperatures, and overall age to come up with recommendations for battery replacement. These recommendations are based on different risk tolerances; for example, the software will say something like, "Battery System 4 has a 20% chance of failure next week and a 50% chance of failure within 2 months." Facility operators can then manage risk and make informed decisions regarding battery replacement. When using analytics in larger data centers, it is important that facility-level systems are included because they are the backbone of IT rooms. Power system software must cover the medium-voltage switchgear, the busway, the low-voltage switchgear, all the transformers, all the power panels, and the power distribution units. Cooling system software must cover the cooling towers, the chillers, the pumps, the variable-speed drives, and the Computer Room Air Conditioners (CRACs). Due to the scale and amount of machinery in larger data centers, all systems must be included for comprehensive predictive analytics. As edge data centers become a critical part of the data center architecture and are deployed at scale, DCIM software benefits from unlimited cloud storage and data lakes. Predictive analytics become highly valuable because almost none of these sites are staffed with service technicians.
The DCIM system can leverage predictive analytics, with a certain degree of linkage and automation, to dispatch service personnel and replacement parts. As more data is collected, the accuracy of these analytics, leveraging machine learning models, will earn operators' trust. This is already in process: even today, operators of mission-critical facilities can plan or design systems with less physical redundancy and rely on the software for advance notifications regarding battery health.
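The kind of recommendation quoted above ("a 20% chance of failure next week") comes from a risk model over battery telemetry. A loose sketch follows, with entirely invented weights rather than any vendor's actual model:

```python
import math

def battery_failure_risk(age_years: float, discharge_count: int,
                         avg_temp_c: float) -> float:
    # Logistic risk score: older, heavily cycled, hotter batteries score higher.
    # The weights below are illustrative assumptions, not calibrated values.
    z = -4.0 + 0.6 * age_years + 0.02 * discharge_count + 0.08 * (avg_temp_c - 25.0)
    return 1.0 / (1.0 + math.exp(-z))  # probability between 0 and 1

young = battery_failure_risk(age_years=1.0, discharge_count=10, avg_temp_c=22.0)
old = battery_failure_risk(age_years=4.5, discharge_count=120, avg_temp_c=30.0)
print(f"young battery: {young:.2f}, aged battery: {old:.2f}")
```

A real DCIM product would fit such a model (or something richer) on fleet-wide telemetry and map the score to replacement recommendations at different risk tolerances.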


Heard on the Street 10/28/2021 - insideBIGDATA


Machine Learning as a Service Market Size Is Projected to Reach USD 22.10 Bn in 2027, Growing at a 39.2% CAGR, Says Brandessence Market Research

LONDON, Oct. 26, 2021 /PRNewswire/ -- Brandessence Market Research has published a new report titled, "The machine learning as a service market reached USD 2.27 billion in 2020. The market is likely to grow at 39.2%, reaching USD 22.10 billion in 2027". The Covid-19 pandemic has pushed many firms out of their comfort zones and strongly towards digitalization, increased data analysis, and subsequent data automation. Machine learning as a service presents a major opportunity as migration to the cloud emerges as a key trend, thanks to increased interest in the arena by key tech players, falling costs of data storage, and tremendous growth in data insights.

Machine Learning as a Service Market: Expert Analysis

"The fast learning pace of machine learning models has accelerated a shift towards a subscription-based model, which has enabled cost-effective pricing for end consumers. The model increasingly offers core services like natural language processing, general machine learning, and computer vision algorithms. The increasing adoption of IoT, automation, and cloud-based services will continue to drive adoption of cost-effective data servicing centers. The growth of big data generators, supercomputers, and highly effective systems for gaining new insights is key to growth for the machine learning as a service market," report leading analysts at Brandessence Market Research.

Request a Sample Report: https://brandessenceresearch.com/requestSample/PostId/1669

Machine Learning as a Service Market: Segment Analysis

The machine learning as a service market report is divided by application into Network Analytics and Automated Traffic Management. Among these, the growth of augmented reality, risk analytics, predictive maintenance, fraud detection, and other applications continues to drive strong growth for the network analytics segment. Machine learning is essential for the growth of automated traffic management and network analytics. Furthermore, with the advent of 5G technology, the outlook for machine learning remains promising as large datasets continue to drive growth in cloud, gaming, analytics, and more.

Automated traffic management for automated electric vehicles also remains a promising prospect. The horizon of smart cities, the growth of smart infrastructure, and growing demand for interconnected automated vehicles present further growth opportunities for players in the machine learning as a service market.

The machine learning as a service market report categorizes end-user organizations as small, medium, and large. The growth of subscription-based models remains a promising prospect for small organizations, as increasing cloud deployment and the growing importance of digitalization and analytics continue to drive growth of the machine learning as a service market. Machine learning as a service also remains appealing to medium-sized and large organizations alike. The model is likely to see promising demand from all end-users, with large organizations likely to hold the largest share of total revenues, thanks to their sizable demand.

Machine Learning as a Service Market: Competitive Analysis

The machine learning as a service market is a fragmented landscape, with small players increasingly leading innovation, thanks to technologies like TinyML. The landscape continues to grow through innovation, as open-ended operating systems drive down costs and create new opportunities for players in the market. Some key players in the machine learning as a service market are IBM Corporation, FICO, Microsoft Corporation, Amazon Web Services, Google Inc., Hewlett Packard Enterprise, AT&T, BigML Inc., Ersatz Labs, SAS Institute Inc., Yottamine Analytics, and Prediction Labs Ltd.

In December 2020, HPE started offering HPE GreenLake, a cloud platform with a pay-per-use model to cater to the most demanding and data-intensive workloads. The platform runs on the power of AI and ML to create new products and experiences through a hybrid cloud model.

Eurobank, a major Greek bank, has expanded its use of FICO compliance solutions following a new regulation from the European Union. The bank operates in six countries, and its new machine learning solutions offer it new capabilities, such as performing KYC and AML checks in real time, and processes like digital onboarding.

Request for Methodology of this report:https://brandessenceresearch.com/requestMethodology/PostId/1669

Machine Learning Trends

Increased advancement in machine learning has moved the traditional model from AI-based analytics towards advanced image processing. This advancement has made machine learning a key tool for healthcare imaging diagnostics. Furthermore, machine learning remains a frontrunner in new technologies like OpenAI's models, which can generate new visual designs based on text input. The growing advancement of such models remains a promising prospect for global manufacturing, distribution, packaging, and sales alike.

The advancement of machine learning in healthcare remains a promising prospect. Machine learning, with the help of data analytics and visual lab results, promises a new era for understanding genetic sequencing. AI algorithms are trained to look at data using multi-modal techniques like optical character recognition and machine vision to improve medical diagnosis.

The growth of new innovations like TinyML, which runs models on hardware-constrained devices powering refrigerators, cars, and utility meters, also holds new promise for players in the machine learning as a service market. These tiny algorithms promise to capture gestures and common sounds, like a baby crying or gunshots, for applications like tracking environmental conditions, asset location and orientation, and even vital medical signs. The increasing storage, battery life, and functionality of TinyML devices remain a major promise for a wide variety of commercial applications.

Machine learning continues to drive demand for data labeling, a subset of its core functionality. The increasing demand for human-led data labeling has driven low-cost data labeling services, mostly arising in the Asia Pacific region. Semi-supervised and novice data labeling services present a challenge in applications like commercial functions led by voice assistants. The growth of these applications has also driven demand for automated data labeling. Improvement in automated data labeling continues to make way for new processes like PlatformOps, MLOps, and DataOps, which are now collectively referred to as 'XOps'.

Machine learning and AI also promise to team up with employees to take the burden of mundane responsibilities off their shoulders. For example, sales employees often need to look up information like a customer's name, address, and other important details. Today, online assistants working through customer management systems deliver this information to employees so they can begin a smooth conversation about consumer concerns. Furthermore, machine learning systems are also advancing troubleshooting and sales through text-based and voice-based online assistants. This not only saves companies significant amounts of money but also reduces downtime, prepares them for digital adoption, and more. Young, tech-savvy consumers increasingly prefer to contact customer service through online portals, saving companies substantial customer service costs, and they increasingly report higher satisfaction.

Key Benefits of the Global Machine Learning as a Service Industry Report

Get Complete Access of Research Report: https://brandessenceresearch.com/technology-and-media/machine-learning-as-a-service-market

Related Reports:

Robotic Process Automation Industry Projected to Reach $18,339.95 Mn in 2027

Platform as a Service Market Forecast 2021-2027

Mobility as a Service Market Forecast 2021-2027

UCaaS Market Size Will Reach USD 32.28 Bn by 2027

Brandessence Market Research & Consulting Pvt. Ltd.

Brandessence Market Research publishes market research reports and business insights produced by highly qualified and experienced industry analysts. Our research reports are available in a wide range of industry verticals, including aviation, food & beverage, healthcare, ICT, construction, chemicals, and a lot more. Brandessence Market Research reports are a best fit for senior executives, business development managers, marketing managers, consultants, CEOs, CIOs, COOs, directors, governments, agencies, organizations, and Ph.D. students. We have a delivery center in Pune, India, and our sales office is in London.

Website: https://brandessenceresearch.com

Blog: Digital Map Companies

Contacts: Mr. Vishal Sawant Email: [emailprotected] Email: [emailprotected] Corporate Sales: +44-2038074155 Asia Office: +917447409162

SOURCE Brandessence Market Research And Consulting Private Limited


Machine Learning Reveals Aggression Symptoms in Childhood ADHD – Technology Networks

Child psychiatric disorders, such as oppositional defiant disorder and attention-deficit/hyperactivity disorder (ADHD), can feature outbursts of anger and physical aggression. A better understanding of what drives these symptoms could help inform treatment strategies. Yale researchers have now used a machine learning-based approach to uncover disruptions of brain connectivity in children displaying aggression.

While previous research has focused on specific brain regions, the new study identifies patterns of neural connections across the entire brain that are linked to aggressive behavior in children. The findings, published in the journal Molecular Psychiatry, build on a novel model of brain functioning called the connectome, which describes this pattern of brain-wide connections. "Maladaptive aggression can result in harm to self or others. This challenging behavior is one of the main reasons for referrals to child mental health services," said Denis Sukhodolsky, senior author and associate professor in the Yale Child Study Center. "Connectome-based modeling offers a new account of brain networks involved in aggressive behavior."

For the study, which is the first of its kind, researchers collected fMRI (functional magnetic resonance imaging) data while children performed an emotional face perception task in which they observed faces making calm or fearful expressions. Seeing faces that express emotion can engage brain states relevant to emotion generation and regulation, both of which have been linked to aggressive behavior, researchers said. The scientists then applied machine learning analyses to identify neural connections that distinguished children with and without histories of aggressive behavior.

They found that patterns in brain networks involved in social and emotional processes, such as feeling frustrated with homework or understanding why a friend is upset, predicted aggressive behavior. To confirm these findings, the researchers then tested them in a separate dataset and found that the same brain networks predicted aggression. In particular, abnormal connectivity to the dorsolateral prefrontal cortex, a key region involved in the regulation of emotions and higher cognitive functions like attention and decision-making, emerged as a consistent predictor of aggression when tested in subgroups of children with aggressive behavior and disorders such as anxiety, ADHD, and autism.
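The overall logic of connectome-based predictive modeling, selecting predictive connections in one dataset and confirming them in a separate one, can be loosely sketched on synthetic data. This is an illustrative stand-in, not the Yale group's code, and every number in it is invented.

```python
import numpy as np

# Loose sketch of connectome-style prediction: pick connectivity "edges" that
# correlate with the behavior in training data, build a summary score, then
# confirm the score separates groups in a held-out "separate dataset."
rng = np.random.default_rng(42)
n_edges = 500
signal_edges = rng.choice(n_edges, size=20, replace=False)

def make_dataset(n):
    y = rng.integers(0, 2, size=n)       # 1 = history of aggressive behavior
    X = rng.normal(size=(n, n_edges))    # flattened connectivity matrix
    X[:, signal_edges] += y[:, None]     # group difference on some edges
    return X, y

X_train, y_train = make_dataset(300)
X_test, y_test = make_dataset(150)

# Edge selection must use training data only, or the validation is circular.
corr = np.array([abs(np.corrcoef(X_train[:, j], y_train)[0, 1])
                 for j in range(n_edges)])
selected = np.argsort(corr)[-30:]

score_train = X_train[:, selected].mean(axis=1)
threshold = score_train.mean()
pred_test = (X_test[:, selected].mean(axis=1) > threshold).astype(int)
test_accuracy = (pred_test == y_test).mean()
```

The key methodological point mirrored here is the one the article emphasizes: the model is validated on data it never saw during feature selection or fitting.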

These neural connections to the dorsolateral prefrontal cortex could represent a marker of aggression that is common across several childhood psychiatric disorders. "This study suggests that the robustness of these large-scale brain networks and their connectivity with the prefrontal cortex may represent a neural marker of aggression that can be leveraged in clinical studies," said Karim Ibrahim, associate research scientist at the Yale Child Study Center and first author of the paper. "The human functional connectome describes the vast interconnectedness of the brain. Understanding the connectome is on the frontier of neuroscience because it can provide us with valuable information for developing brain biomarkers of psychiatric disorders."

Added Sukhodolsky: "This connectome model of aggression could also help us develop clinical interventions that can improve the coordination among these brain networks and hubs like the prefrontal cortex. Such interventions could include teaching the emotion regulation skills necessary for modulating negative emotions such as frustration and anger."

Reference: Ibrahim K, Noble S, He G, et al. Large-scale functional brain networks of maladaptive childhood aggression identified by connectome-based predictive modeling. Mol Psychiatry. Published online October 25, 2021:1-15. doi:10.1038/s41380-021-01317-5

This article has been republished from the following materials. Note: material may have been edited for length and content. For further information, please contact the cited source.


Insights on the Data Science Platform Global Market to 2027 – Featuring Microsoft, IBM and Google Among Others – Yahoo Finance

Dublin, Oct. 25, 2021 (GLOBE NEWSWIRE) -- The "Global Data Science Platform Market (2021-2027) by Component, Deployment, Organization Size, Function, Industry Vertical, and Geography, Competitive Analysis, Impact of Covid-19, Ansoff Analysis" report has been added to ResearchAndMarkets.com's offering.

The Global Data Science Platform Market is estimated to be USD 43.3 Bn in 2021 and is expected to reach USD 81.43 Bn by 2027, growing at a CAGR of 11.1%.

Key factors such as a massive increase in data volume due to increasing digitalization and automation of processes have been crucial drivers in the growth of data science platforms. Enterprises are increasingly focusing on analytical tools for deriving insights into consumer behavior and purchasing patterns, which in turn shape their business decisions and strategies to compete in the market. The adoption of data science platforms has also found its way into various industry verticals such as manufacturing, IT, BFSI, and retail. All these factors have contributed to the growth of the data science platform market.

However, the costs attached to deploying these platforms, along with a limited workforce with domain expertise and threats to data privacy, have hindered the growth of the market.

Market Dynamics

Drivers

High Generation of Data Volumes

Rising Focus On Data-Driven Decisions

Increasing Adoption of Data Science Platforms Across Diversified Industry Verticals

Restraints

Opportunities

Increasing Adoption of Data-Driven Technologies by Enterprises

Increasing Demand for Public Cloud

Investments and Funding in Development of Big Data and Related Technologies by Public and Private Sectors

Challenges

Segments Covered

By Component, the market is classified into platforms and services. Of the two, the Platforms segment holds the highest market share. With the rise of digitalization and automation in various processes, data has been at the forefront, and with massive data being churned out by enterprises, data science platforms are proving beneficial for providing real-time insights and streamlining business processes accordingly. Enterprises are therefore adopting these platforms to bring process uniformity and business efficiency, which has accelerated demand for the platform segment.

By Deployment, the Cloud-based segment is estimated to hold the highest market share. Cloud-based platforms are comparatively cost-effective and scalable, with ease of deployment. Since they can be accessed with minimal capital requirements, they are considered a resourceful deployment option in varied industry sectors.

By Organization Size, Large Enterprises hold the highest market share. A data science platform has essential tools, such as predictive analytics, that can help an organization derive insights and produce meaningful business outcomes. Large-scale organizations have the financial backing to invest in such solutions and provide an enhanced customer experience, and real-time insights can also help these enterprises improve their business processes. Such enterprises therefore account for higher demand for data science platforms.

By Function, the Marketing and Sales segment is estimated to hold a high market share. Data science platforms in marketing and sales help decipher insights about buyer behaviour patterns and marketing spending, and help enterprises generate more ROI. Enterprises also depend on these platforms for their reliability of service and for reducing financial risk, thereby generating higher revenues. Besides, these platforms are capable of providing an enhanced customer experience. This has led to a high adoption rate in the marketing and sales segment, resulting in market segment growth.

By Industry Vertical, the BFSI sector readily implements such platforms to proactively engage in fraud detection and provide customers with needed security. A data science platform can help manage customer data, reduce operational complexity, and provide insightful data for risk modeling by investment bankers. Banks are also often engaged in providing their customers personalized services, storing massive amounts of data, and these platforms can be helpful in this regard, supporting BFSI market segment growth. Beyond BFSI, the healthcare segment has also been drawing lucrative opportunities from these platforms. One of the prominent applications has been in medical imaging, where these platforms are used in diagnostics to improve accuracy and efficiency.

By Geography, North America is projected to lead the market. The factors attributed to this growth are the presence of capital-intensive industries seeking to deploy data science platforms integrated with their current IT infrastructure to gain a competitive edge. The region adopts newer technological solutions comparatively faster thanks to a solid technological infrastructure. This has further led to a rise in data science platform vendors offering new solutions to enterprises. All these factors have aided the growth of the data science platform market in this region.


Company Profiles

Some of the companies covered in this report are Microsoft Corporation, IBM Corporation, SAS Institute, Inc., SAP SE, RapidMiner, Inc., Dataiku SAS, Alteryx, Inc., Fair Isaac Corporation, MathWorks, Inc., and Teradata, Inc.

Competitive Quadrant

The report includes Competitive Quadrant, a proprietary tool to analyze and evaluate the position of companies based on their Industry Position score and Market Performance score. The tool uses various factors for categorizing the players into four categories. Some of these factors considered for analysis are financial performance over the last 3 years, growth strategies, innovation score, new product launches, investments, growth in market share, etc.

Why buy this report?

The report offers a comprehensive evaluation of the Global Data Science Platform Market. The report includes in-depth qualitative analysis, verifiable data from authentic sources, and projections about market size. The projections are calculated using proven research methodologies.

The report has been compiled through extensive primary and secondary research. The primary research is done through interviews, surveys, and observation of renowned personnel in the industry.

The report includes in-depth market analysis using Porter's 5 force model and the Ansoff Matrix. The impact of Covid-19 on the market is also featured in the report.

The report also contains the competitive analysis using Competitive Quadrant, Infogence's Proprietary competitive positioning tool.

Report Highlights:

A complete analysis of the market including parent industry

Important market dynamics and trends

Market segmentation

Historical, current, and projected size of the market based on value and volume

Market shares and strategies of key players

Recommendations to companies for strengthening their foothold in the market

Key Topics Covered:

1 Report Description

2 Research Methodology

3 Executive Summary

4 Market Overview
4.1 Introduction
4.2 Market Dynamics
4.2.1 Drivers
4.2.2 Restraints
4.2.3 Opportunities
4.2.4 Challenges
4.3 Trends

5 Market Analysis
5.1 Porter's Five Forces Analysis
5.2 Impact of COVID-19
5.3 Ansoff Matrix Analysis

6 Global Data Science Platform Market, By Component
6.1 Introduction
6.2 Platform
6.3 Services
6.3.1 Managed Services
6.3.2 Professional Services
6.3.2.1 Training and Consulting
6.3.2.2 Integration and Deployment
6.3.2.3 Support and Maintenance

7 Global Data Science Platform Market, By Deployment
7.1 Introduction
7.2 Cloud
7.3 On-premises

8 Global Data Science Platform Market, By Organization Size
8.1 Introduction
8.2 Large Enterprises
8.3 Small and Medium-sized Enterprises

9 Global Data Science Platform Market, By Function
9.1 Introduction
9.2 Marketing
9.3 Sales
9.4 Logistics
9.5 Finance and Accounting
9.6 Customer Support
9.7 Others

10 Global Data Science Platform Market, By Industry Verticals
10.1 Introduction
10.2 Banking, Financial Services, and Insurance (BFSI)
10.3 Telecom and IT
10.4 Retail and E-Commerce
10.5 Healthcare and Life Sciences
10.6 Manufacturing
10.7 Energy and Utilities
10.8 Media and Entertainment
10.9 Transportation and Logistics
10.10 Government
10.11 Others

11 Global Data Science Platform Market, By Geography

12 Competitive Landscape
12.1 Competitive Quadrant
12.2 Market Share Analysis
12.3 Competitive Scenario
12.3.1 Mergers & Acquisitions
12.3.2 Agreements, Collaborations, & Partnerships
12.3.3 New Product Launches & Enhancements
12.3.4 Investments & Fundings

13 Company Profiles
13.1 Microsoft Corporation
13.2 IBM Corporation
13.3 Google, Inc.
13.4 Wolfram
13.5 DataRobot Inc.
13.6 Sense Inc.
13.7 RapidMiner Inc.
13.8 Domino Data Lab
13.9 Dataiku SAS
13.10 Alteryx, Inc.
13.11 Oracle
13.12 Tibco Software Inc.
13.13 SAS Institute Inc.
13.14 SAP SE
13.15 The MathWorks, Inc.
13.16 Cloudera, Inc.
13.17 H2O.ai
13.18 Fair Isaac Corporation
13.19 Teradata, Inc.
13.20 Kaggle Inc.
13.21 Micropole S.A.
13.22 Continuum Analytics, Inc.
13.23 C&F Insight Technology Solutions
13.24 Civis Analytics, Inc.
13.25 VMware Inc.
13.26 Alpine Data Labs
13.27 ThoughtWorks Inc.
13.28 Mu Sigma
13.29 Tableau Software LLC

14 Appendix

For more information about this report visit https://www.researchandmarkets.com/r/w83wb2

Go here to see the original:

Insights on the Data Science Platform Global Market to 2027 - Featuring Microsoft, IBM and Google Among Others - Yahoo Finance


AT&T and H2O.ai Launch Co-Developed Artificial Intelligence Feature Store with Industry-First Capabilities – Yahoo Finance

H2O AI Feature Store, currently in production use at AT&T, is a repository for collaborating on, sharing, reusing and discovering machine learning features that speeds AI project deployments and improves ROI. It is now available to any company or organization.

DALLAS and MOUNTAIN VIEW, Calif., Oct. 28, 2021 /PRNewswire/ --

What's the news? AT&T and H2O.ai jointly built an artificial intelligence (AI) feature store to manage and reuse data and machine learning engineering capabilities. The AI Feature Store houses and distributes the features data scientists, developers and engineers need to build AI models. The AI Feature Store is in production at AT&T, delivering the high levels of performance, reliability and scalability required by AT&T's demand. Today, AT&T and H2O.ai are announcing that the same solution in production at AT&T, including all its industry-first capabilities, will now be available as the "H2O AI Feature Store" to any company or organization.

What is a feature store? Data scientists and AI experts use data engineering tools to create "features," which are a combination of relevant data and derived data that predict an outcome (e.g., churn, likely to buy, demand forecasting). Building features is time-consuming work, and typically data scientists build features from scratch every time they start a new project. Data scientists and AI experts spend up to 80% of their time on feature engineering, and because teams do not have a way to share this work, the same work is repeated by teams throughout the organization. It is also important that features are available for both training and real-time inference to avoid training-serving skew, which causes model performance problems and contributes to project failure. Feature stores allow data scientists to build more accurate features and deploy these features in production in hours instead of months. Until now, there weren't places to store and access features from previous projects. As data and AI are and will continue to be important to every business, demand is growing to make these features reusable. Feature stores are seen as a critical component of the infrastructure stack for machine learning because they solve the hardest problem with operationalizing machine learning: building and serving machine learning data to production.
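The core contract described above can be sketched in a few lines of Python. This toy example (all names and structure are illustrative, not the H2O AI Feature Store API) shows the key idea: a feature is registered once, and the same code then computes it for both training and inference, which is what prevents training-serving skew.

```python
from dataclasses import dataclass, field
from typing import Callable, Dict, List

@dataclass
class FeatureStore:
    """Toy in-memory feature store: register a feature once, reuse it everywhere."""
    _registry: Dict[str, Callable[[dict], float]] = field(default_factory=dict)

    def register(self, name: str, fn: Callable[[dict], float]) -> None:
        self._registry[name] = fn

    def compute(self, names: List[str], record: dict) -> dict:
        # The same feature code serves both model training and live inference.
        return {n: self._registry[n](record) for n in names}

store = FeatureStore()
store.register("minutes_per_call", lambda r: r["total_minutes"] / max(r["calls"], 1))
store.register("is_heavy_user", lambda r: float(r["total_minutes"] > 500))

row = {"total_minutes": 600, "calls": 40}
print(store.compute(["minutes_per_call", "is_heavy_user"], row))
# {'minutes_per_call': 15.0, 'is_heavy_user': 1.0}
```

A production feature store adds what this sketch omits: persistence, discovery and sharing across teams, access control, and low-latency serving.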


How is AT&T using its feature store? AT&T carries more than 465 petabytes of data traffic across its global network on an average day. When you add in the data generated internally from our different applications, in our stores, among our field technicians, and across other parts of our business, turning data into actionable intelligence as quickly as possible is vital to our success. AT&T's implementation of the AI Feature Store has been instrumental in helping turn this massive trove of data into actionable intelligence.

Who will use the H2O AI Feature Store? We know other organizations feel the same way about making their own data actionable. H2O.ai, the leading AI cloud platform provider, has co-developed the feature store with us, and now together we are offering the production-tested feature store as a software platform for other companies and organizations to use with their own data. From financial services to health organizations and pharmaceutical makers, retail, software developers and more, we know the demand for reliable, easy-to-use, and secure feature stores is booming. Any organization currently using AI or planning to use AI will want to consider the value of a feature store. We expect customers to use the H2O AI Feature Store for forecasting, personalization and recommendation engines, dynamic pricing optimization, supply chain optimization, logistics and transportation optimization, and more. We are using the feature store at AT&T for network optimization, fraud prevention, tax calculations and predictive maintenance.

The H2O AI Feature Store includes industry-first capabilities, including integration with multiple data and machine learning pipelines, which can be applied to an on-premises data lake or by leveraging cloud and SaaS providers.

The H2O AI Feature Store also includes Automatic Feature Recommendations, an industry first, which let data scientists select the features they want to update and improve and receive recommendations to do so. The H2O AI Feature Store recommends new features and feature updates to improve the AI model performance. The data scientists review the suggested updates and accept the recommendations they want to include.
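The press release does not describe how the recommendations are produced, so purely as an illustration of the workflow, a recommendation step can be approximated by ranking candidate features on a simple importance score; here, absolute correlation with the target stands in for whatever richer measure a real product uses. Everything below is hypothetical.

```python
import statistics

def recommend_features(candidates, target, top_k=2):
    """Rank candidate feature columns by absolute correlation with the target.
    A stand-in for an automatic recommendation step, not the product's method."""
    def corr(xs, ys):
        mx, my = statistics.fmean(xs), statistics.fmean(ys)
        cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
        sx = sum((x - mx) ** 2 for x in xs) ** 0.5
        sy = sum((y - my) ** 2 for y in ys) ** 0.5
        return cov / (sx * sy)
    scored = {name: abs(corr(col, target)) for name, col in candidates.items()}
    return sorted(scored, key=scored.get, reverse=True)[:top_k]

candidates = {
    "tenure_months": [1, 2, 3, 4, 5],   # strongly predictive of the target
    "random_noise":  [7, 1, 4, 2, 9],   # weakly related
}
target = [10, 20, 30, 40, 50]
print(recommend_features(candidates, target, top_k=1))  # ['tenure_months']
```

The data scientist would then review such a ranked list and accept or reject each suggestion, as the paragraph above describes.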

What are people saying?

"Feature stores are one of the hottest areas of AI development right now, because being able to reuse and repurpose data engineering tools is critical as those tools become increasingly complex and expensive to build," said Andy Markus, Chief Data Officer, AT&T. "These storehouses are vital not only to our own work, but to other businesses, as well. With our expertise in managing and analyzing huge data flows, combined with H2O.ai's deep AI expertise, we understand what business customers are looking for in this space and our Feature Store offering meets this need."

"Data is a team sport and collaboration with domain experts is key to discovering and sharing features. Feature Stores are the digital 'water coolers' for data science," said Sri Ambati, CEO and founder of H2O.ai. "We are building AI right into the Feature Store and have taken an open, modular and scalable approach to tightly integrate into the diverse feature engineering pipelines while preserving sub-millisecond latencies needed to react to fast-changing business conditions. AI-powered feature stores focus on discoverability and reuse by automatically recommending highly predictive features to our customers using FeatureRank. AT&T has built a world-class data and AI team and we are privileged to collaborate with them on their AI journey."

To learn more about H2O AI Feature Store please visit http://www.h2o.ai/feature-store and sign up to join our preview program or for a demo.

Please join AT&T and H2O.ai on October 28th at 2:00 p.m. CT at AT&T Business Summit for a discussion on the future of AI as a Service. Register at https://register-bizsummit.att.com

About AT&T Communications
We help family, friends and neighbors connect in meaningful ways every day. From the first phone call 140+ years ago to mobile video streaming, we @ATT innovate to improve lives. AT&T Communications is part of AT&T Inc. (NYSE:T). For more information, please visit us at att.com.

About H2O.ai
H2O.ai is the leading AI cloud company, on a mission to democratize AI for everyone. Customers use the H2O AI Hybrid Cloud platform to rapidly solve complex business problems and accelerate the discovery of new ideas. H2O.ai is the trusted AI provider to more than 20,000 global organizations, including AT&T, Allergan, Bon Secours Mercy Health, Capital One, Commonwealth Bank of Australia, GlaxoSmithKline, Hitachi, Kaiser Permanente, Procter & Gamble, PayPal, PwC, Reckitt, Unilever and Walgreens, over half of the Fortune 500 and one million data scientists. Goldman Sachs, NVIDIA and Wells Fargo are not only customers and partners, but strategic investors in the company. H2O.ai's customers have honored the company with a Net Promoter Score (NPS) of 78, the highest in the industry, based on breadth of technology and deep employee expertise. The world's top 20 Kaggle Grandmasters (the community of best-in-the-world machine learning practitioners and data scientists) are employees of H2O.ai. A strong AI for Good ethos to make the world a better place and Responsible AI drive the company's purpose. Please join our movement at http://www.h2O.ai.


View original content to download multimedia:https://www.prnewswire.com/news-releases/att-and-h2oai-launch-co-developed-artificial-intelligence-feature-store-with-industry-first-capabilities-301410998.html

SOURCE AT&T Communications

View post:
AT&T and H2O.ai Launch Co-Developed Artificial Intelligence Feature Store with Industry-First Capabilities - Yahoo Finance


Before Machines Can Be Autonomous, Humans Must Work to Ensure Their Safety – University of Virginia

As the self-driving car travels down a dark, rural road, a deer lingering among the trees up ahead looks poised to dart into the car's path. Will the vehicle know exactly what to do to keep everyone safe?

Some computer scientists and engineers aren't so sure. But researchers at the University of Virginia School of Engineering and Applied Science are hard at work developing methods they hope will bring greater confidence to the machine-learning world, not only for self-driving cars, but for planes that can land on their own and drones that can deliver your groceries.

The heart of the problem stems from the fact that key functions of the software that guides self-driving cars and other machines through their autonomous motions are not written by humans; instead, those functions are the product of machine learning. Machine-learned functions are expressed in a form that makes it essentially impossible for humans to understand the rules and logic they encode, thereby making it very difficult to evaluate whether the software is safe and in the best interest of humanity.

Researchers in UVA's Leading Engineering for Safe Software Lab (the LESS Lab, as it's commonly known) are working to develop the methods necessary to provide society with the confidence to trust emerging autonomous systems.

The teams researchers are Matthew B. Dwyer, Robert Thomson Distinguished Professor; Sebastian Elbaum, Anita Jones Faculty Fellow and Professor; Lu Feng, assistant professor; Yonghwi Kwon, John Knight Career Enhancement Assistant Professor; Mary Lou Soffa, Owens R. Cheatham Professor of Sciences; and Kevin Sullivan, associate professor. All hold appointments in the UVA Engineering Department of Computer Science. Feng holds a joint appointment in the Department of Engineering Systems and Environment.

Since its creation in 2018, the LESS Lab has rapidly grown to support more than 20 graduate students, produce more than 50 publications and obtain competitive external awards totaling more than $10 million in research funding. The awards have come from such agencies as the National Science Foundation, the Defense Advanced Research Projects Agency, the Air Force Office of Scientific Research and the Army Research Office.

The lab's growth trajectory matches the scope and urgency of the problem these researchers are trying to solve.

An inflection point in the rapid rise of machine learning happened just a decade ago when computer vision researchers won the ImageNet Large Scale Visual Recognition Challenge to identify objects in photos using a machine-learning solution. Google took notice and quickly moved to capitalize on the use of data-driven algorithms.

Other tech companies followed suit, and public demand for machine-learning applications snowballed. Last year, Forbes estimated that the global machine-learning market grew at a compound annual growth rate of 44% and is on track to become a $21 billion market by 2024.
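As a quick sanity check on the cited figures, a compound annual growth rate can be rolled backward from the projected 2024 value to see what market sizes it implies in earlier years:

```python
# A market reaching ~$21B in 2024 after sustained ~44% compound annual
# growth implies a base of roughly 21 / 1.44**n billion dollars n years earlier.
end_value_bn = 21.0
cagr = 0.44

for years_back in range(1, 5):
    base = end_value_bn / (1 + cagr) ** years_back
    print(f"{years_back} year(s) earlier: ~${base:.1f}B")
# 1 year(s) earlier: ~$14.6B
# 2 year(s) earlier: ~$10.1B
# 3 year(s) earlier: ~$7.0B
# 4 year(s) earlier: ~$4.9B
```

So the Forbes projection is consistent with a market in the high single-digit billions around 2021, roughly quadrupling over four years at that rate.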

But as the technology ramped up, computer scientists started sounding the alarm that mathematical methods to validate and verify the software were lagging.

Government agencies like the Defense Advanced Research Projects Agency responded to the concerns. In 2017, DARPA launched the Assured Autonomy program to develop mathematically verifiable approaches for assuring an acceptable level of safety with data-driven, machine-learning algorithms.

UVA Engineering's Department of Computer Science also took action, extending its expertise with a strategic cluster of hires in software engineering, programming languages and cyber-physical systems. Experts combined efforts in cross-cutting research collaborations, including the LESS Lab, to focus on solving a global software problem in need of an urgent solution.

By this time, the self-driving car industry was firmly in the spotlight and the problems with machine learning were becoming more evident. A fatal Uber crash in 2018 was recorded as the first human death from an autonomous vehicle.

"The progress toward practical autonomous driving was really fast and dramatic, shaped like a hockey stick curve, and much faster than the rate of growth for the techniques that can ensure their safety," Elbaum said.

This August, tech blog Engadget reported that one self-driving car company, Waymo, had put in 20 million miles of on-the-road testing. Yet, devastating failures continue to occur; as the software has gotten more and more complex, no amount of real-world testing would be enough to find all the bugs.

And the software failures that have occurred have made many wary of on-the-road testing.

"Even if a simulation works 100 times, would you jump into an autonomous car and let it drive you around? Probably not," Dwyer said. "Probably you want some significant on-the-street testing to further increase your confidence. And at the same time, you don't want to be the one in the car when that test goes on."

And then there is the need to anticipate and test every single obstacle that might come up in the 4-million-mile public roads network.

"Think about the complexities of a particular scenario, like driving on the highway with hauler trucks on either side so that an autonomous vehicle needs to navigate crosswinds at high speeds and around curves," Dwyer said. "Getting a physical setup that matches that challenging scenario would be difficult to do in real-world testing. But in simulation that can get that much easier."

And that's one area where LESS Lab researchers are making real headway. They are developing sophisticated virtual reality simulations that can accurately create such difficult scenarios that might otherwise be impossible to test on the road.

"There is a huge gap between testing in the real world versus testing in simulation," Elbaum said. "We want to close that gap and create methods in which we can expose autonomous systems to the many complex scenarios they will face. This will substantially reduce testing time and cost, and it is a safer way to test."
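The kind of scenario coverage Elbaum describes can be pictured as a parameter sweep over conditions that would be hard or dangerous to stage on a real road. This sketch is purely illustrative: the simulator is a stub, and all parameter names are hypothetical.

```python
import itertools

# Environment dimensions a simulation campaign might sweep over.
weather = ["clear", "rain", "night"]
traffic = ["empty", "hauler_trucks_both_sides"]
obstacles = ["none", "deer_at_roadside"]

def simulate(scenario):
    # Stand-in for a full physics/VR simulation of the autonomous driving stack.
    return {"scenario": scenario, "collision": False}

results = [simulate(s) for s in itertools.product(weather, traffic, obstacles)]
failures = [r for r in results if r["collision"]]
print(f"{len(results)} scenarios simulated, {len(failures)} failures")
# 12 scenarios simulated, 0 failures
```

Even this tiny grid shows the combinatorics: three dimensions with a handful of values each already yield a dozen scenarios, and realistic parameter spaces explode far beyond what on-road testing could ever cover.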

Having simulations so accurate they can stand in for real-world driving would be the equivalent of rocket fuel for the huge amount of sophisticated testing it will take, cutting both the timeline and the human cost. But it is just half of the solution.

The other half is developing mathematical guarantees that can prove the software will do what it is supposed to do all the time. And that will require a whole new set of mathematical frameworks.

Before machine learning, engineers would write explicit, step-by-step instructions for the computer to follow. The logic was deterministic and absolute, so humans could use formal mathematical rules that existed to test the code and guarantee it worked.

"So if you really wanted a property that the autonomous system only makes sharp turns when there is an obstacle in front of it, before machine learning, mechanical engineers would say, 'I want this to be true,' and software engineers would build that rule into the software," Dwyer said.

Today, the computer is fed examples to learn from and continuously improves the probability that an algorithm will produce an expected outcome. The process is no longer based on absolutes, following rules a human can understand.

"With machine learning, you can give an autonomous system examples of sharp turns only with obstacles in front of it," Dwyer said. "But that doesn't mean that the computer will learn and write a program that guarantees that is always true. Previously, humans checked the rules they codified into systems. Now we need a new way to check that a program the machine wrote observes the rules."
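One concrete way to check a black-box learned controller against such a rule is a runtime property monitor: rather than reading the learned function's internals, the rule is checked against its outputs. The sketch below is illustrative only; the controller is a stub and the 30-degree "sharp turn" threshold is an assumption.

```python
def learned_controller(obstacle_ahead: bool) -> float:
    # Stub for an opaque machine-learned function returning a steering angle
    # in degrees; in reality this would be a trained model we cannot inspect.
    return 35.0 if obstacle_ahead else 2.0

def violates_property(obstacle_ahead: bool, steering_angle: float) -> bool:
    """The safety rule: sharp turns are allowed only when an obstacle is ahead."""
    SHARP_TURN_DEG = 30.0
    return abs(steering_angle) > SHARP_TURN_DEG and not obstacle_ahead

for obstacle in (True, False):
    angle = learned_controller(obstacle)
    assert not violates_property(obstacle, angle), "safety property violated"
print("property held on all checked inputs")
```

Checking like this only shows the property held on the inputs tried; the mathematical guarantees the article calls for must hold over all inputs, which is exactly what the new verification frameworks aim to provide.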

Elbaum stresses that there are still a lot of open-ended questions that need to be answered in the quest for concrete, tangible methods. "We are playing catch-up to ensure these systems do what they are supposed to do," he said.

That is why the combined strength of the LESS Lab's faculty and students is critically important for speeding up the discovery. Just as important is the commitment of the LESS Lab to working collectively and in concert with other UVA experts on machine learning and cyber-physical systems to enable a future where people can trust autonomous systems.

The lab's mission could not be more relevant if we are ever to realize the promise of self-driving cars, much less eliminate the fears of a deeply skeptical public.

"If your autonomous car is driving on a sunny day down the road with no other cars, it should work," Dwyer said. "If it's rainy and it's at night and it's crowded and there is a deer on the side of the road, it should also work, and in every other scenario.

"We want to make it work all the time, for everybody, and in every context."

Read more here:
Before Machines Can Be Autonomous, Humans Must Work to Ensure Their Safety - University of Virginia
