Category Archives: Machine Learning

How AI Is Poised to Help Humanity – Entrepreneur

Opinions expressed by Entrepreneur contributors are their own.

Recent advances in artificial intelligence have caused a surge of public and business interest in this remarkable technology. Though most of its applications are still in their infancy, professionals across a wide range of industries have begun using AI-infused assistants to accomplish various tasks. This accelerated pace of innovation and data usage has, however, led to increased uncertainty about just where machine learning is headed and the impact it will have on society. In terms of positive effects, I certainly see some standouts.


AI can consider multiple scenarios and make educated decisions using both previously gathered information and real-time data. As a result, errors are reduced and the chances of greater accuracy and precision are much higher. Example: the forecasting of weather and other natural disasters such as earthquakes and tsunamis.

Robots applying machine learning can risk dangers that might otherwise be lethal for humans. Think of exploring space or the deepest parts of the oceans, defusing a bomb or inspecting unstable structures. Example: in the still-lethal areas of the Chernobyl nuclear disaster site in Ukraine, robots are conducting radiation surveillance, removing debris from the destroyed reactor, taking samples of radiological materials and even burying radioactive materials.

Related: How AI Can Deliver Clean Water to Billions

Conducting a repetitive task is by definition tedious, and often time-consuming. Using AI for monotonous and routine actions can help direct focus to other components of a to-do list and free us to be more creative. Example: Apple's Siri, Google Assistant and Amazon's Alexa, along with a newer arrival, C9 Companion, are intelligent assistants that can carry on a meaningful conversation and help manage and organize daily life (such as answering emails, texting friends and other tasks).

AI can make decisions and carry out actions far faster than humans, and the latest generations of machine learning can consider facts and statistics as well as learn aspects of human emotion, then weigh both in their calculations. Example: in health care, AI can help doctors and researchers diagnose cancer more accurately and efficiently, making treatment more effective.

Using AI, we can make robots that function all day, every day, without a break. Plus, unlike we humans, they do not become bored or disinterested in repetitive tasks. Example: chatbots for customer service or hotlines. Offering 24/7 customer service is essential for global companies, whose customers reside in different time zones all over the world. AI solutions offer a winning way for them to stay connected with customers.

Related: 3 Ways Machine Learning Can Help Entrepreneurs

AI is powering inventions in a variety of sectors that will help humans solve complex problems, expanding our creativity and ingenuity in the process. Examples include medical devices, drug synthesizers, weapons and even kitchen appliances. In part this is because machine learning can create unpredictable, innovative outcomes autonomously, rather than merely following instructions, and can do so without human bias.

See the article here:
How AI Is Poised to Help Humanity - Entrepreneur

Women Innovators And Researchers Who Made A Difference In AI In 2021 – Analytics India Magazine

There is a troubling and persistent absence of women in the field of artificial intelligence and data science. Women constitute a mere 22 per cent, less than a quarter, of professionals in this field, according to the report Where are the women? Mapping the gender job gap in AI from The Turing Institute. Yet, despite low participation and obstacles, women are breaking through barriers and setting an example in the field of AI.

To honour their commitment and work done, we have listed some of the women innovators and researchers who have worked tirelessly and contributed significantly to the field of AI and data science. The list below is provided in no particular order.

Joy Buolamwini founded The Algorithmic Justice League (AJL), an organisation that combines art and research to illuminate the social implications and harms of artificial intelligence. With her pioneering work on algorithmic bias, Joy opened the world's eyes to the gender bias and racial prejudice embedded in facial recognition systems. As a result, Amazon, Microsoft, and IBM all halted their facial recognition services, admitting that the technology was not yet ready for widespread use. One can watch the famous documentary Coded Bias to understand her work. Her contributions will surely pave the way for a more inclusive and diversified AI community in the near future.

Many researchers and scholars focus on improving algorithms so that machines work more efficiently. Cynthia Rudin, the Duke University computer science professor and engineer, has instead worked tirelessly to use the power of AI to serve humanity and help society. As a result, she was awarded the 2021 Squirrel AI Award for Artificial Intelligence for the Benefit of Humanity. Her research focuses on machine learning tools that help humans make better decisions, mainly interpretable ML and its applications. In a conversation with us, Cynthia expressed the wish that AI could solve the refugee crisis, reverse climate change and help end extreme poverty.

Allie is currently the Global Head of Machine Learning Business Development, Startups and Venture Capital at Amazon Web Services, where she supports many big AI organisations. In her effort to increase representation, Allie co-founded Girls of the Future, an organisation that showcases girls aged 13 to 18 who are innovating in STEM. Moreover, she will be presenting the Data and Machine Learning keynote at AWS re:Invent with Swami Sivasubramanian, VP, Amazon AI. She formerly worked at IBM, where she oversaw large-scale product development involving computer vision, conversation, data, and regulation.

Dr Lucia is a Professor of Natural Language Processing at Imperial College London, where she built and leads the Language and Multimodal AI Lab, managing a team of 20 researchers. Her research examines many aspects of data-driven approaches to language processing, with a focus on multimodal and multilingual context models. Her work has benefited several fields, including machine translation, quality estimation, image captioning, and text adaptation. She is currently working on a number of machine translation projects, including multilingual video captioning and text adaptation, which will surely contribute to the advancement of AI-driven technologies in this field.

A prominent advocate of safe and reliable AI, Cassie is currently the Chief Decision Scientist at Google. Recently, she introduced the AI-for-everyone course Making Friends with Machine Learning. Her research interests include applied artificial intelligence and data science process architecture. At Google, Cassie co-founded the field of decision intelligence, combining social science, decision theory, and managerial science with data science to better understand how actions lead to results.

Although women are under-represented in the field of AI and technology, all is not lost: some incredibly inspirational female pioneers are shaping the world of AI.

Kumar Gandharv, PGD in English Journalism (IIMC, Delhi), is setting out on a journey as a tech journalist at AIM. A keen observer of national and IR-related news. He loves to hit the gym. Contact: [emailprotected]

Originally posted here:
Women Innovators And Researchers Who Made A Difference In AI In 2021 - Analytics India Magazine

Machine learning optimization of an electronic health record audit for heart failure in primary care – DocWire News

This article was originally published here

ESC Heart Fail. 2021 Nov 23. doi: 10.1002/ehf2.13724. Online ahead of print.

ABSTRACT

AIMS: The diagnosis of heart failure (HF) is an important problem in primary care. We previously demonstrated a 74% increase in registered HF diagnoses in primary care electronic health records (EHRs) following an extended audit procedure. What remains unclear is the accuracy of registered HF pre-audit and which EHR variables are most important in the extended audit strategy. This study aims to describe the diagnostic HF classification sequence at different stages, assess general practitioner (GP) HF misclassification, and test the predictive performance of an optimized audit.

METHODS AND RESULTS: This is a secondary analysis of the OSCAR-HF study, a prospective observational trial including 51 participating GPs. OSCAR used an extended audit based on typical HF risk factors, signs, symptoms, and medications in GPs' EHRs. This resulted in a list of possible HF patients, which participating GPs had to classify as HF or non-HF. We compared registered HF diagnoses before and after GPs' assessment. For our analysis of audit performance, we used GPs' assessment of HF as primary outcome and audit queries as dichotomous predictor variables for a gradient boosted machine (GBM) decision tree algorithm and logistic regression model. Of the 18 011 patients eligible for the audit intervention, 4678 (26.0%) were identified as possible HF patients and submitted for GPs' assessment in the audit stage. There were 310 patients with registered HF before GP assessment, of whom 146 (47.1%) were judged not to have HF by their GP (over-registration). There were 538 patients with registered HF after GP assessment, of whom 374 (69.5%) did not have registered HF before GP assessment (under-registration). The GBM and logistic regression model had a comparable predictive performance (area under the curve of 0.70 [95% confidence interval 0.65-0.77] and 0.69 [95% confidence interval 0.64-0.75], respectively). This was not significantly impacted by reducing the set of predictor variables to the 10 most important variables identified in the GBM model (free-text and coded cardiomyopathy, ischaemic heart disease and atrial fibrillation, digoxin, mineralocorticoid receptor antagonists, and combinations of renin-angiotensin system inhibitors and beta-blockers with diuretics). This optimized query set was enough to identify 86% (n = 461/538) of GPs' self-assessed HF population with a 33% reduction (n = 1537/4678) in screening caseload.

CONCLUSIONS: Diagnostic coding of HF in primary care health records is inaccurate with a high degree of under-registration and over-registration. An optimized query set enabled identification of more than 80% of GPs' self-assessed HF population.
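The study's head-to-head comparison, a gradient boosted tree model versus logistic regression over dichotomous audit-query flags, scored by AUC, can be sketched with scikit-learn. Everything below (the flags, coefficients, and prevalence) is synthetic and for illustration only; it is not the OSCAR-HF data:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 4678  # same order as the audited cohort; the data itself is invented

# 10 dichotomous audit-query flags (e.g. coded diagnoses, HF medications)
X = rng.integers(0, 2, size=(n, 10))

# Invented relationship between a few flags and the GP's HF judgement
logit = -2.0 + 1.2 * X[:, 0] + 0.8 * X[:, 1] + 0.5 * X[:, 2]
y = rng.random(n) < 1 / (1 + np.exp(-logit))

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
gbm = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)
lr = LogisticRegression().fit(X_tr, y_tr)

auc_gbm = roc_auc_score(y_te, gbm.predict_proba(X_te)[:, 1])
auc_lr = roc_auc_score(y_te, lr.predict_proba(X_te)[:, 1])
```

On data generated by a purely additive model like this, the two classifiers land at similar AUCs, mirroring the paper's finding of comparable performance.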

PMID:34816632 | DOI:10.1002/ehf2.13724

Go here to read the rest:
Machine learning optimization of an electronic health record audit for heart failure in primary care - DocWire News

Your neighborhood matters: A machine-learning approach to the geospatial and social determinants of health in 9-1-1 activated chest pain – DocWire…

This article was originally published here

Res Nurs Health. 2021 Nov 24. doi: 10.1002/nur.22199. Online ahead of print.

ABSTRACT

Healthcare disparities in the initial management of patients with acute coronary syndrome (ACS) exist. Yet, the complexity of interactions between demographic, social, economic, and geospatial determinants of health hinders incorporating such predictors in existing risk stratification models. We sought to explore a machine-learning-based approach to study the complex interactions between the geospatial and social determinants of health to explain disparities in ACS likelihood in an urban community. This study identified consecutive patients transported by Pittsburgh emergency medical service for a chief complaint of chest pain or ACS-equivalent symptoms. We extracted demographics, clinical data, and location coordinates from electronic health records. Median income was based on US census data by zip code. A random forest (RF) classifier and a regularized logistic regression model were used to identify the most important predictors of ACS likelihood. Our final sample included 2400 patients (age 59 ± 17 years, 47% females, 41% Blacks, 15.8% adjudicated ACS). In our RF model (area under the receiver operating characteristic curve of 0.71 ± 0.03) age, prior revascularization, income, distance from hospital, and residential neighborhood were the most important predictors of ACS likelihood. In regularized regression (Akaike information criterion = 1843, Bayesian information criterion = 1912, χ² = 193, df = 10, p < 0.001), residential neighborhood remained a significant and independent predictor of ACS likelihood. Findings from our study suggest that residential neighborhood constitutes an upstream factor to explain the observed healthcare disparity in ACS risk prediction, independent from known demographic, social, and economic determinants of health, which can inform future work on ACS prevention, in-hospital care, and patient discharge.
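The modelling approach described in the abstract, a random forest ranking candidate predictors by feature importance, can be sketched with scikit-learn. The predictors, effect sizes, and data below are entirely synthetic stand-ins, not the Pittsburgh EMS records:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(1)
n = 2400  # same size as the study sample; the values are invented

age = rng.normal(59, 17, n)
income = rng.normal(45_000, 15_000, n)   # stand-in for zip-code median income
prior_revasc = rng.integers(0, 2, n)     # prior revascularization (0/1)
distance_km = rng.exponential(5, n)      # distance from hospital

# Invented outcome model: older age and prior revascularization raise ACS odds
logit = -2.5 + 0.04 * (age - 59) + 1.0 * prior_revasc
y = rng.random(n) < 1 / (1 + np.exp(-logit))

X = np.column_stack([age, income, prior_revasc, distance_km])
rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

names = ["age", "income", "prior_revasc", "distance_km"]
ranking = sorted(zip(names, rf.feature_importances_), key=lambda t: -t[1])
```

Inspecting `ranking` is the step that, in the study, surfaced age, prior revascularization, income, distance from hospital, and residential neighborhood as the top predictors.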

PMID:34820853 | DOI:10.1002/nur.22199

Originally posted here:
Your neighborhood matters: A machine-learning approach to the geospatial and social determinants of health in 9-1-1 activated chest pain - DocWire...

Design of AI may change with the open-source Apache TVM and a little help from startup OctoML – ZDNet

In recent years, artificial intelligence programs have been prompting changes in the design of computer chips, and novel computers have likewise made possible new kinds of neural networks in AI. A powerful feedback loop is at work.

At the center of that sits the software technology that converts neural net programs to run on novel hardware. And at the center of that sits a recent open-source project gaining momentum.

Apache TVM is a compiler that operates differently from other compilers. Instead of turning a program into typical chip instructions for a CPU or GPU, it studies the "graph" of compute operations in a neural net, in TensorFlow or Pytorch form, such as convolutions and other transformations, and figures out how best to map those operations to hardware based on dependencies between the operations.
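The dependency analysis described above can be illustrated, in a deliberately toy form, with Python's standard-library `graphlib`. This is a sketch of the idea (scheduling operations so each runs only after its inputs are ready), not TVM's actual API:

```python
from graphlib import TopologicalSorter

# Toy operator graph for a small network: op -> the ops it depends on.
# A real compiler like TVM works on a far richer graph annotated with
# shapes, dtypes, and hardware-specific cost models.
graph = {
    "conv1": set(),
    "relu1": {"conv1"},
    "conv2": {"relu1"},
    "add":   {"conv2", "relu1"},   # skip connection
}

# A valid execution schedule: every op appears after its dependencies,
# which is the precondition for mapping or fusing ops onto hardware.
schedule = list(TopologicalSorter(graph).static_order())
print(schedule)  # e.g. ['conv1', 'relu1', 'conv2', 'add']
```

TVM goes far beyond this ordering step, searching over how each scheduled op is laid out and fused for a given chip, but the dependency graph is where that search starts.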

At the heart of that operation sits a two-year-old startup, OctoML, which offers Apache TVM as a service. As explored in March by ZDNet's George Anadiotis, OctoML is in the field of MLOps, helping to operationalize AI. The company uses TVM to help companies optimize their neural nets for a wide variety of hardware.

Also: OctoML scores $28M to go to market with open source Apache TVM, a de facto standard for MLOps

In the latest development in the hardware and research feedback loop, TVM's process of optimization may already be shaping aspects of how AI is developed.

"Already in research, people are running model candidates through our platform, looking at the performance," said OctoML co-founder Luis Ceze, who serves as CEO, in an interview with ZDNet via Zoom. The detailed performance metrics mean that ML developers can "actually evaluate the models and pick the one that has the desired properties."

Today, TVM is used exclusively for inference, the part of AI where a fully-developed neural network is used to make predictions based on new data. But down the road, TVM will expand to training, the process of first developing the neural network.

"Already in research, people are running model candidates through our platform, looking at the performance," says Luis Ceze, co-founder and CEO of startup OctoML, which is commercializing the open-source Apache TVM compiler for machine learning, turning it into a cloud service. The detailed performance metrics mean that ML developers can "actually evaluate the models and pick the one that has the desired properties."

"Training and architecture search is in our roadmap," said Ceze, referring to the process of designing neural net architectures automatically, by letting neural nets search for the optimal network design. "That's a natural extension of our land-and-expand approach" to selling the commercial service of TVM, he said.

Will neural net developers then use TVM to influence how they train?

"If they aren't yet, I suspect they will start to," said Ceze. "Someone who comes to us with a training job, we can train the model for you" while taking into account how the trained model would perform on hardware.

That expanding role of TVM, and the OctoML service, is a consequence of the fact that the technology is a broader platform than what a compiler typically represents.

"You can think of TVM and OctoML by extension as a flexible, ML-based automation layer for acceleration that runs on top of all sorts of different hardware where machine learning models run: GPUs, CPUs, TPUs, accelerators in the cloud," Ceze told ZDNet.

"Each of these pieces of hardware, it doesn't matter which, has its own way of writing and executing code," he said. "Writing that code and figuring out how to best utilize this hardware is done today by hand across the ML developers and the hardware vendors."

Today, the compiler and the service replace that hand tuning at the inference level, once a model is ready for deployment; tomorrow, perhaps, they will do so in actual development and training.

Also: AI is changing the entire nature of compute

The crux of TVM's appeal is greater performance in terms of throughput and latency, and efficiency in terms of computer power consumption. That is becoming more and more important for neural nets that keep getting larger and more challenging to run.

"Some of these models use a crazy amount of compute," observed Ceze, especially natural language processing models such as OpenAI's GPT-3 that are scaling to a trillion neural weights, or parameters, and more.

As such models scale up, they come with "extreme cost," he said, "not just in the training time, but also the serving time" for inference. "That's the case for all the modern machine learning models."

As a consequence, without optimizing the models "by an order of magnitude," said Ceze, the most complicated models aren't really viable in production; they remain merely research curiosities.

But performing optimization with TVM involves its own complexity. "It's a ton of work to get results the way they need to be," observed Ceze.

OctoML simplifies things by making TVM more of a push-button affair.

"It's an optimization platform," is how Ceze characterizes the cloud service.

"From the end user's point of view, they upload the model, they compare the models, and optimize the values on a large set of hardware targets," is how Ceze described the service.

"The key is that this is automatic: no sweat and tears from low-level engineers writing code," said Ceze.

OctoML does the development work of making sure the models can be optimized for an increasing constellation of hardware.

"The key here is getting the best out of each piece of hardware." That means "specializing the machine code to the specific parameters of that specific machine learning model on a specific hardware target." Something like an individual convolution in a typical convolutional neural network may become optimized to suit a particular hardware block of a particular hardware accelerator.

The results are demonstrable. In benchmark tests published in September for the MLPerf test suite for neural net inference, OctoML had a top score for inference performance for the venerable ResNet image recognition algorithm in terms of images processed per second.

The OctoML service has been in a pre-release, early access state since December of last year.

To advance its platform strategy, OctoML earlier this month announced it had received $85 million in a Series C round of funding from hedge fund Tiger Global Management, along with existing investors Addition, Madrona Venture Group and Amplify Partners. The round of funding brings OctoML's total funding to $132 million.

The funding is part of OctoML's effort to spread the influence of Apache TVM to more and more AI hardware. Also this month, OctoML announced a partnership with ARM Ltd., the U.K. company that is in the process of being bought by AI chip powerhouse Nvidia. That follows partnerships announced previously with Advanced Micro Devices and Qualcomm. Nvidia is also working with OctoML.

The ARM partnership is expected to spread use of OctoML's service to the licensees of the ARM CPU core, which dominates mobile phones, networking and the Internet of Things.

The feedback loop will probably lead to other changes besides the design of neural nets. It may affect more broadly how ML is commercially deployed, which is, after all, the whole point of MLOps.

As optimization via TVM spreads, the technology could dramatically increase portability in ML serving, Ceze predicts.

Because the cloud offers all kinds of trade-offs with all kinds of hardware offerings, being able to optimize on the fly for different hardware targets ultimately means being able to move more nimbly from one target to another.

"Essentially, being able to squeeze more performance out of any hardware target in the cloud is useful because it gives more target flexibility," is how Ceze described it. "Being able to optimize automatically gives portability, and portability gives choice."

That includes running on any available hardware in a cloud configuration, but also choosing the hardware that happens to be cheaper for the same SLAs, such as latency and throughput, lowering cost in dollars.

With two machines that have equal latency on ResNet, for example, "you'll always take the highest throughput per dollar," the machine that's more economical. "As long as I hit the SLAs, I want to run it as cheaply as possible."
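Ceze's "highest throughput per dollar, subject to the SLA" rule is straightforward to express in code. Here is a toy sketch in Python; the machine names, latencies, and prices are entirely hypothetical:

```python
# Hypothetical machine catalogue: names and numbers are invented.
machines = [
    {"name": "gpu-a", "latency_ms": 12, "throughput": 900, "cost_per_hr": 3.10},
    {"name": "gpu-b", "latency_ms": 14, "throughput": 650, "cost_per_hr": 1.20},
    {"name": "cpu-c", "latency_ms": 35, "throughput": 200, "cost_per_hr": 0.40},
]

def best_value_meeting_sla(machines, max_latency_ms):
    """Among machines meeting the latency SLA, pick the highest throughput per dollar."""
    ok = [m for m in machines if m["latency_ms"] <= max_latency_ms]
    if not ok:
        return None
    return max(ok, key=lambda m: m["throughput"] / m["cost_per_hr"])

choice = best_value_meeting_sla(machines, max_latency_ms=15)
print(choice["name"])  # gpu-b: 650/1.20 ≈ 542 per dollar-hour beats gpu-a's ≈ 290
```

With a 15 ms SLA, both GPUs qualify, and the cheaper one wins on throughput per dollar, exactly the trade-off the portability argument describes.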

See more here:
Design of AI may change with the open-source Apache TVM and a little help from startup OctoML - ZDNet

Global Marketing Automation Market Report 2021: Market to Reach $6.3 Billion by 2026 – GlobeNewswire

Dublin, Nov. 24, 2021 (GLOBE NEWSWIRE) -- The "Marketing Automation - Global Market Trajectory & Analytics" report has been added to ResearchAndMarkets.com's offering.

Global Marketing Automation Market to Reach $6.3 Billion by 2026

Amid the COVID-19 crisis, the global market for Marketing Automation estimated at US$3.9 Billion in the year 2020, is projected to reach a revised size of US$6.3 Billion by 2026, growing at a CAGR of 8.6% over the analysis period. Cloud, one of the segments analyzed in the report, is projected to grow at a 9.6% CAGR to reach US$4.6 Billion by the end of the analysis period.
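As a sanity check, the implied growth rate follows from the standard CAGR formula. Computing it from the headline figures gives roughly 8.3%, close to the report's 8.6%; the small gap presumably comes from the exact base year and analysis period the analysts used:

```python
def cagr(start, end, years):
    """Compound annual growth rate: (end/start)**(1/years) - 1."""
    return (end / start) ** (1 / years) - 1

# US$3.9 billion in 2020 to US$6.3 billion in 2026 -> six years of growth
rate = cagr(3.9, 6.3, 6)
print(f"{rate:.1%}")  # 8.3%
```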

Growth in the global market is set to be driven by the rise of digital advertising, growing usage of the Internet and other technologies, and the surging popularity of social media networks. Companies are increasingly relying on digital media marketing techniques such as search engine marketing, social media marketing, online advertising and mobile advertising while continuing to engage in traditional channels to gain the benefits of both worlds.

Ensuring that a brand remains available, relevant and consistent on social media is difficult for many companies. In addition, organizations are required to regularly update blogs and information while tracking trends, measuring the effectiveness of social efforts and engaging with customers. These issues have paved the way for social media automation solutions that allow companies to harness the power of marketing automation together with social media to drive gains.

After a thorough analysis of the business implications of the pandemic and its induced economic crisis, growth in the On-Premise segment is readjusted to a revised 6.8% CAGR for the next 7-year period. This segment currently accounts for a 37.3% share of the global Marketing Automation market. Cloud-based tools allow marketers to gain more control over their marketing and business content. These tools allow for the proper implementation of strategies independently without the need to rely on other departments.

The U.S. Market is Estimated at $1.2 Billion in 2021, While China is Forecast to Reach $898.4 Million by 2026

The Marketing Automation market in the U.S. is estimated at US$1.2 Billion in the year 2021. The country currently accounts for a 29.31% share in the global market. China, the world's second largest economy, is forecast to reach an estimated market size of US$898.4 Million in the year 2026 trailing a CAGR of 10.6% through the analysis period.

Among the other noteworthy geographic markets are Japan and Canada, forecast to grow at 7.1% and 7.4% respectively over the analysis period. Within Europe, Germany is forecast to grow at approximately 8.3% CAGR, while the Rest of Europe market (as defined in the study) will reach US$989.3 Million by the end of the analysis period.

In the US, the COVID-19 pandemic onset led to a significant impact on digital advertising during the early part of 2020. However, in the second half of the year, the holiday season and ad spend by political parties aided in compensating for the losses registered earlier in the year. Digital ad spend therefore increased at a double-digit rate for the year.

The increase in online shopping, home deliveries, and connected TV helped maintain the market's growth. Thriving economies, growing employment opportunities, rising income levels, continuous development of cellular markets, rising 4G penetration, and increasing spending power in major countries are driving growth prospects in the Asia-Pacific region.

Key Topics Covered:

I. METHODOLOGY

II. EXECUTIVE SUMMARY

1. MARKET OVERVIEW

2. FOCUS ON SELECT PLAYERS (Total 252 Featured)

3. MARKET TRENDS & DRIVERS

4. GLOBAL MARKET PERSPECTIVE

III. REGIONAL MARKET ANALYSIS

IV. COMPETITION

For more information about this report visit https://www.researchandmarkets.com/r/snjh6

Go here to see the original:
Global Marketing Automation Market Report 2021: Market to Reach $6.3 Billion by 2026 - GlobeNewswire

ML Kit | Google Developers

Machine learning for mobile developers

ML Kit brings Google's machine learning expertise to mobile developers in a powerful and easy-to-use package. Make your iOS and Android apps more engaging, personalized, and helpful with solutions that are optimized to run on device.

Video and image analysis APIs to label images and detect barcodes, text, faces, and objects.

Scan and process barcodes. Supports most standard 1D and 2D formats.

Identify objects, locations, activities, animal species, products, and more. Use a general-purpose base model or tailor to your use case with a custom TensorFlow Lite model.

Recognizes handwritten text and hand-drawn shapes on a digital surface, such as a touch screen. Recognizes 300+ languages, emojis and basic shapes.

Separate the background from users within a scene and focus on what matters.

Natural language processing APIs to identify and translate between 58 languages and provide reply suggestions.

Determine the language of a string of text with only a few words.

Generate reply suggestions in text conversations.

Detect and locate entities (such as addresses, date/time, phone numbers, and more) and take action based on those entities. Works in 15 languages.
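ML Kit's language identification runs a trained on-device model, but the underlying idea, scoring a snippet against language-specific cues, can be sketched in a few lines of plain Python. The stopword lists below are tiny illustrative samples, nothing like a production model:

```python
# Tiny illustrative stopword lists; a real identifier uses trained models
# over character n-grams and covers dozens of languages.
STOPWORDS = {
    "en": {"the", "and", "is", "of", "to"},
    "de": {"der", "und", "ist", "das", "nicht"},
    "fr": {"le", "et", "est", "la", "ne"},
}

def guess_language(text):
    """Return the language whose stopwords overlap the text most, or None."""
    words = set(text.lower().split())
    scores = {lang: len(words & sw) for lang, sw in STOPWORDS.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else None

print(guess_language("the cat is on the mat"))   # en
print(guess_language("das ist nicht der Hund"))  # de
```

Even this crude overlap count works on short strings with common function words, which is why language ID from "only a few words" is feasible on device.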


Continued here:
ML Kit | Google Developers

IEEE: Most Important 2022 Tech Is AI/Machine Learning, Cloud and 5G – Virtualization Review

News

IEEE says the most important technologies in 2022 will be AI/machine learning, cloud computing and 5G wireless.

That comes in a new report published by the large technical professional organization titled "The Impact of Technology in 2022 and Beyond: an IEEE Global Study," based on an October survey of 350 chief information officers, chief technology officers and technology leaders from the U.S., U.K., China, India and Brazil who were asked about key technology trends, priorities and predictions for 2022 and beyond.

"Among total respondents, more than one in five (21 percent) say AI and machine learning, cloud computing (20 percent), and 5G (17 percent) will be the most important technologies next year," IEEE said in a Nov. 18 announcement. "Because of the global pandemic, technology leaders surveyed said in 2021 they accelerated adoption of cloud computing (60 percent), AI and machine learning (51 percent), and 5G (46 percent), among others."

The report includes respondent data for 12 questions, starting off with: "Which will be the most important technology in 2022?" Although "Other" was the top answer (25 percent of respondents), the three technologies listed above weren't far behind. Other answers were "Augmented and Virtual Reality (AR/VR)" at 9 percent and "Predictive AI" at 7 percent.

"AI is working all around us," the report quotes Shelly Gupta, IEEE graduate student member, as saying. "It has entered into almost every sector to enable its growth. The AI industry will continue to proliferate. In turn, it will continue to drive massive innovation that will fuel many existing industries."

The "big three" technologies listed above as being most important for 2022 are also the top three answers to the second question, "Which technologies did you accelerate adopting in 2021 due to the pandemic?" though in a different order: cloud computing (60 percent), AI/machine learning (51 percent) and 5G (46 percent).

"Cloud computing has had a huge boost due to remote work as well as accelerated trends in digital transformation," said Tom Coughlin, who holds the title of IEEE Life Fellow among several others.

The other 10 questions and their top answers are:

"Time and time again technology rises to meet the biggest challenges of society," the report said. "The innovators and technologists that bring new ideas to life serve as catalysts of positive global change."

About the Author

David Ramel is an editor and writer for Converge360.

Visit link:
IEEE: Most Important 2022 Tech Is AI/Machine Learning, Cloud and 5G - Virtualization Review

MCubed does web workshops: Join Mark Whitehorn's one-day introduction to machine learning next month – The Register

Event You want to know more about the ins and outs of machine learning, but can't figure out where to start? Our AI practitioners' conference MCubed and The Register regular Mark Whitehorn have got you covered.

Join us on December 9 for an interactive online workshop to learn all about ML types and algorithms, and find out about strengths and weaknesses of different approaches by using them yourself.

This limited one-day online workshop is geared towards anyone who wants to gain an understanding of machine learning, no matter your background. Mark will start with the basics, asking and answering what machine learning is, before diving deeper into the different types of systems you keep hearing about.

Once you're familiar with supervised, unsupervised, and reinforcement learning, things will get hands-on with practical exercises using common algorithms such as clustering and, of course, neural networks.
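For a flavour of the clustering exercises, here's the sort of thing you can already try at home with scikit-learn: k-means pulling two synthetic blobs of points apart. The data and parameters are our own invention, not the workshop's materials:

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(42)
# Two well-separated 2-D blobs of 50 points each
blob_a = rng.normal(loc=(0, 0), scale=0.5, size=(50, 2))
blob_b = rng.normal(loc=(5, 5), scale=0.5, size=(50, 2))
X = np.vstack([blob_a, blob_b])

km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
# Each blob ends up in its own cluster; the centres land near (0,0) and (5,5)
print(km.cluster_centers_.round(1))
```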

In the process, you'll also investigate the pros and cons of different approaches, which should help you assess what could work for a specific task and what isn't an option, and learn how the things you've just tried relate to what Big Biz are using. However, it's not all code and algorithms in the world of ML, which is why Mark will also give you a taste of what else there is to think about when realizing machine learning projects, such as data sourcing, model training, and evaluation.

Since Python has turned into the language of choice for many ML practitioners, exercises and experiments will mostly be performed in Python, so installing it along with an IDE will help you make the most of the workshop if you haven't already.

This doesn't mean the course is for Pythonistas only, however. If you're not familiar with the language, exercises will be turned into demonstrations giving you insight into the inner workings of the associated code, before we start altering some of the parameters together. That way, you'll find out how each parameter influences the learning that is performed, leaving you in top shape to continue in whatever language (or no-code ML system) you feel comfortable with.

Your trainer, Professor Mark Whitehorn, works as a consultant for national and international organisations, such as the Bank of England, Standard Life, and Sainsbury's, designing analytical systems and data science solutions. He is also the Emeritus Professor of Analytics at the University of Dundee, where he teaches a master's course in data science and conducts research into the development of analytical systems and proteomics. You can get a taster of his brilliant teaching skills here.

If this sounds interesting to you, head over to the MCubed website to secure your spot now. Tickets are very limited to make sure we can answer all your questions and everyone gets proper support throughout the day, so don't wait too long.

Excerpt from:
MCubed does web workshops: Join Mark Whitehorn's one-day introduction to machine learning next month - The Register

DEWC, AIML partner on AI and machine learning to enhance RF signal detection – Defence Connect

key enablers | 19 November 2021 | Reporter

DEWC Systems and the Australian Institute for Machine Learning (AIML) have agreed to partner on research to better detect radio signals in complex environments.

DEWC Systems and the University of Adelaide's Australian Institute for Machine Learning (AIML) have announced the commencement of a partnership to better understand how to apply artificial intelligence and machine learning to detect radio frequencies in difficult environments using MOESS and Wombat S3 technology.

Both organisations have already undertaken significant research on Phase 1 of the Miniaturised Orbital Electronic Sensor System (MOESS) project, with the collaboration expected to enhance that research further.

The original goal of the MOESS was to develop a platform able to perform an array of applications and to develop an automatic signal classification process. The Wombat S3 is a ground-based version of the MOESS.

Chief technology officer of DEWC Systems Dr Paul Gardner-Stephen will lead the project, which hopes to develop a framework for AI-enabled spectrum monitoring and automatic signal classification.

"Radio spectrum is very congested, with a wide range of signals and interference sources, which can make it very difficult to identify and correctly classify the signals present. This is why we are turning to AI and ML, to bring the algorithmic power necessary to solve this problem," Gardner-Stephen said.

"This will enable the creation of applications that work on DEWCs MOESS and Wombat S3 (Wombat Smart Sensor Suite) platforms to identify unexpected signals from among the forest of wireless communications, to help defence identify and respond to threats as they emerge.

According to Gardner-Stephen, both the MOESS and Wombat S3 platforms are highly capable software-defined radio (SDR) platforms with on-board artificial intelligence and machine learning processors.

"Since the project is oriented around creating an example framework, using two of DEWC Systems' software-defined radio (SDR) products, both DEWC Systems and AIML can create the kinds of improved situation awareness applications that use those features to generate the types of capabilities that will support defence in their mission," he explained.

"In addition to directly working towards the creation of an important capability, it will also act to catalyse awareness of some of the kinds of applications that are possible with these platforms."

Chief executive of DEWC Systems Ian Spencer noted that the company innovates with academic institutions to develop leading technology.

"Whilst we provide direction and guidance for the project, AIML will be bringing their deep understanding and cutting-edge technology of AI and machine learning. This is what DEWC Systems does. We collaborate with universities and other industry sectors to develop novel and effective solutions to support the ADO," Spencer said.

It is hoped that the technology developed throughout the partnership will support the machine learning and artificial intelligence needs of Defence.

[Related: Veteran-owned SMEs DEWC Systems and J3Seven aim to solve mission critical challenges]

See more here:
DEWC, AIML partner on AI and machine learning to enhance RF signal detection - Defence Connect