
MCubed does web workshops: Join Mark Whitehorn's one-day introduction to machine learning next month – The Register

Event You want to know more about the ins and outs of machine learning, but can't figure out where to start? Our AI practitioners' conference MCubed and The Register regular Mark Whitehorn have got you covered.

Join us on December 9 for an interactive online workshop to learn all about ML types and algorithms, and find out about strengths and weaknesses of different approaches by using them yourself.

This limited one-day online workshop is geared towards anyone who wants to gain an understanding of machine learning, no matter your background. Mark will start with the basics, asking, and answering, what machine learning actually is, before diving deeper into the different types of systems you keep hearing about.

Once you're familiar with supervised, unsupervised, and reinforcement learning, things will get hands-on with practical exercises using common algorithms such as clustering and, of course, neural networks.

In the process, you'll also investigate the pros and cons of different approaches, which should help you assess what could work for a specific task and what isn't an option, and learn how the things you've just tried relate to what Big Biz is using. However, it's not all code and algorithms in the world of ML, which is why Mark will also give you a taster of what else there is to think about when realizing machine learning projects, such as data sourcing, model training, and evaluation.

Since Python has turned into the language of choice for many ML practitioners, exercises and experiments will mostly be performed in Python, so installing it along with an IDE, if you haven't already, will help you make the most of the workshop.

This doesn't mean the course is for Pythonistas only, however. If you're not familiar with the language, exercises will be transformed into demonstrations giving you insight into the inner workings of the associated code, before we start altering some of the parameters together. That way, you get to find out how each parameter influences the learning that is performed, leaving you in top shape to continue in whatever language (or no-code ML system) you feel comfortable with.

Your trainer, Professor Mark Whitehorn, works as a consultant for national and international organisations, such as the Bank of England, Standard Life, and Sainsbury's, designing analytical systems and data science solutions. He is also Emeritus Professor of Analytics at the University of Dundee, where he teaches a master's course in data science and conducts research into the development of analytical systems and proteomics. You can get a taster of his brilliant teaching skills here.

If this sounds interesting to you, head over to the MCubed website to secure your spot now. Tickets are very limited to make sure we can answer all your questions and everyone gets proper support throughout the day, so don't wait too long.


DEWC, AIML partner on AI and machine learning to enhance RF signal detection – Defence Connect

19 November 2021 | By: Reporter

DEWC Systems and the Australian Institute for Machine Learning (AIML) have agreed to partner on research to better detect radio signals in complex environments.

DEWC Systems and the University of Adelaide's Australian Institute for Machine Learning (AIML) have announced the commencement of a partnership to better understand how to apply artificial intelligence and machine learning to detect radio frequencies in difficult environments using MOESS and Wombat S3 technology.

Both organisations have already undertaken significant research on Phase 1 of the Miniaturised Orbital Electronic Sensor System (MOESS) project, and the collaboration hopes to take that research further still.

The original goal of the MOESS was to develop a platform able to perform an array of applications and to develop an automatic signal classification process. The Wombat S3 is a ground-based version of the MOESS.

Chief technology officer of DEWC Systems Dr Paul Gardner-Stephen will lead the project, which hopes to develop a framework for AI-enabled spectrum monitoring and automatic signal classification.

"Radio spectrum is very congested, with a wide range of signals and interference sources, which can make it very difficult to identify and correctly classify the signals present. This is why we are turning to AI and ML, to bring the algorithmic power necessary to solve this problem," Gardner-Stephen said.

"This will enable the creation of applications that work on DEWCs MOESS and Wombat S3 (Wombat Smart Sensor Suite) platforms to identify unexpected signals from among the forest of wireless communications, to help defence identify and respond to threats as they emerge.

According to Gardner-Stephen, both the MOESS and Wombat S3 platforms are highly capable software-defined radio (SDR) platforms with on-board artificial intelligence and machine learning processors.

"Since the project is oriented around creating an example framework using two of DEWC Systems' software-defined radio (SDR) products, both DEWC Systems and AIML can create the kinds of improved situational awareness applications that use those features to generate the types of capabilities that will support defence in their mission," he explained.

"In addition to directly working towards the creation of an important capability, it will also act to catalyse awareness of some of the kinds of applications that are possible with these platforms."

Chief executive of DEWC Systems Ian Spencer noted that the company innovates with academic institutions to develop leading technology.

"Whilst we provide direction and guidance for the project, AIML will be bringing its deep understanding of AI and machine learning and its cutting-edge technology. This is what DEWC Systems does. We collaborate with universities and other industry sectors to develop novel and effective solutions to support the ADO," Spencer said.

It is hoped that the technology developed throughout the partnership will support the machine learning and artificial intelligence needs of Defence.

[Related: Veteran-owned SMEs DEWC Systems and J3Seven aim to solve mission critical challenges]


How Machine Learning is Used with Operations Research? – Analytics India Magazine

A solution given by a predictive model is more reliable if it is also optimized to be a proper solution to the problem. Machine learning offers approaches for building predictive models, whereas operations research offers approaches for finding optimal solutions. Combining the two yields solutions that are not only accurate but also optimal. In this article, we discuss the combination of machine learning and operations research, how it helps in solving problems where accurate and optimal solutions are needed, and a few notable use cases of this combination.

What is Operations Research?

Operations research is an analytical method that helps in solving problems and making decisions, to the benefit of an organization and its management. The basic approach starts with breaking the problem down into its basic components and ends with solving those components in defined steps using mathematical analysis.

The overall procedure of operations research can be summarized in the following steps: define the problem, construct a mathematical model of it, derive candidate solutions from the model, test the model and its solutions, and implement the chosen solution.

Concepts of operations research became very useful during World War II because of military planners. After the war, these concepts found use in societal, management, and business problems.

Characteristics of Operations Research

A basic operations research procedure is typically characterized by a systems-oriented view of the problem, an interdisciplinary team approach, and the use of scientific, model-based methods to support decision-making.

Uses of Operations Research

There are a variety of problem and decision-making domains where operations research can be helpful, including scheduling, inventory and supply chain management, routing and logistics, and resource allocation.

From the above, we can say that the operations research approach goes well beyond ordinary software and data analytics tools. An experienced operations research practitioner can help an organization work with more complete datasets and, by considering all possible outcomes, predict the best solution and estimate the risk.

The image above represented the operations research procedure with its main components. We can say that operations research is a science of optimization, through which substantial improvements can be obtained in many fields; some papers report improvements of 20-40% in their problem domains.

Machine Learning in Operations Research

The above section gave an overview of operations research: how to find an optimal solution to a problem and how to make decisions in simple steps. Machine learning algorithms, by contrast, work by learning from historical data and the information within it, with the aim of predicting values accurately enough to satisfy the user and perform the task the model is assigned.

We can say that OR and ML both work on finding a better solution to a problem, and machine learning models can also be used in making decisions. Even for an experienced operations research practitioner, things become difficult when the set of candidate solutions grows large: manually testing the solutions becomes hectic and time-consuming, and the practitioner must also estimate the risk before applying a solution or making any decision. Using machine learning, we can reduce the time taken by operations research and the manual iteration between tests. Hybridization of ML and OR can be considered the next advancement of operations research, where machine learning models help with various tasks that come under operations research.
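As a concrete sketch of this hybridization, the toy program below uses a deliberately naive ML stand-in (a mean forecast) to predict demand, and an OR stand-in (exhaustive enumeration) to pick the profit-maximizing order quantity. All function names and numbers are invented for illustration.

```python
# Toy "predict-then-optimize" sketch: an ML-style forecast feeds an
# OR-style decision. All numbers are made up for illustration.

def forecast_demand(history):
    """Naive ML stand-in: predict next demand as the mean of past demand."""
    return sum(history) / len(history)

def optimal_order(demand, unit_cost, unit_price, capacity):
    """OR stand-in: enumerate order quantities, keep the profit-maximizing one."""
    best_qty, best_profit = 0, float("-inf")
    for qty in range(capacity + 1):
        sold = min(qty, round(demand))          # cannot sell more than demand
        profit = sold * unit_price - qty * unit_cost
        if profit > best_profit:
            best_qty, best_profit = qty, profit
    return best_qty

history = [90, 110, 100, 95, 105]               # past demand observations
demand = forecast_demand(history)
qty = optimal_order(demand, unit_cost=2, unit_price=5, capacity=150)
print(demand, qty)                              # orders exactly the forecast demand
```

Here the prediction and the optimization stay decoupled, which is the simplest of the hybrid patterns: the forecast could be swapped for any trained model without touching the decision step.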

Ways to Hybridize ML and OR

We can perform the hybridization of ML and OR in four main ways, ranging from feeding ML predictions into an optimization model to using OR techniques to tune the ML models themselves.

Comparing Operations Research and Machine Learning

Let's go through an example. Suppose we are in a city, say Mumbai, and we want to travel around it optimally, covering the most locations in the shortest time and at the least cost. To do this with machine learning, we would need data on all the possible routes with their times and costs, so that a model can predict an optimal route by taking all these factors into account. Approaching the same problem with operations research, we would model the cost, the time, or the distance, find more than one candidate solution, and, after evaluating them all, pick the optimal route.
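The Mumbai routing example above is the classic travelling salesman problem. For a handful of locations it can be solved by brute-force enumeration, as in this sketch with an invented distance matrix; real OR solvers scale far beyond this.

```python
from itertools import permutations

# Distances (in km, invented) between four hypothetical Mumbai locations.
dist = [
    [0, 10, 15, 20],
    [10, 0, 35, 25],
    [15, 35, 0, 30],
    [20, 25, 30, 0],
]

def tour_length(order):
    """Total distance of a round trip starting and ending at location 0."""
    route = (0,) + order + (0,)
    return sum(dist[a][b] for a, b in zip(route, route[1:]))

# Enumerate every ordering of the remaining stops and keep the shortest tour.
best = min(permutations(range(1, 4)), key=tour_length)
print(best, tour_length(best))   # -> (1, 3, 2) 80
```

Brute force is exact but factorial in the number of stops, which is precisely why OR contributes heuristics and exact solvers (branch-and-bound, linear programming relaxations) once the problem grows.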

Comparing these procedures, the number of steps a machine learning algorithm takes to reach an answer can be smaller than the number of candidate solutions operations research must evaluate one by one. We can even say that many of the building blocks of machine learning models are taken from operations research; the optimization methods used to train models are a prime example.

Example of Combination of OR and ML

Let's go through one more example: a road construction company that has won a government tender to repair road defects. This can be done with a combination of machine learning and operations research: machine learning models help identify the type of road defect, such as damage over a small, medium, or large area, and operations research then finds the most beneficial policies for repairing or replacing the road. This is one workflow in which machine learning and operations research are used together for development. Similarly, there are various domains where both technologies are required to approach the solution to a problem in a better way.
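One hedged way to sketch the road-repair example: assume the ML model has already labelled each defect's severity, and let a 0/1 knapsack solved by dynamic programming stand in for the "beneficial policy" step of choosing which repairs to do within a fixed budget. The defect names, costs, and benefit scores below are all hypothetical.

```python
# Hypothetical follow-on to the road example: ML has labelled defect
# severity (the benefit score); OR picks repairs within a fixed budget.

defects = [                    # (name, repair cost, benefit score)
    ("pothole-A", 4, 7),
    ("crack-B", 3, 4),
    ("pothole-C", 5, 9),
    ("crack-D", 2, 3),
]
budget = 9

def best_repairs(items, budget):
    """0/1 knapsack by dynamic programming: maximize benefit within budget."""
    # dp[b] = (best total benefit, chosen names) spending at most b
    dp = [(0, [])] * (budget + 1)
    for name, cost, benefit in items:
        for b in range(budget, cost - 1, -1):   # descending: each item used once
            cand = (dp[b - cost][0] + benefit, dp[b - cost][1] + [name])
            if cand[0] > dp[b][0]:
                dp[b] = cand
    return dp[budget]

print(best_repairs(defects, budget))   # picks the two potholes
```

The same pattern, classification feeding a constrained selection, shows up whenever an ML signal has to be turned into an actual allocation decision.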

Solving Problems of ML Using OR

The paradigm of machine learning spans various domains, such as sentiment analysis, computer vision, and recommender systems, and applying OR alongside them can help in various respects. It can also help in solving problems that occur within machine learning itself. Let's talk about some problems of machine learning and how we can solve them using operations research.

Recommendation systems are becoming ever more important across business domains because of their success in providing fruitful recommendations to users, from which business owners can derive a lot of benefit. They are built using machine learning procedures.

Take the example of a restaurant that has enabled services like online booking, where machine learning algorithms help estimate various aspects, such as customers' eating times, habits, and bookings, and a recommendation system provides recommendations to users according to these attributes. The problem with such installations comes when customer traffic is very high and the online booking system starts struggling with table allotment.

In such a situation, operations research can help manage the increased traffic and the system's response time: the operations research procedure can optimize real-time bookings, the number of people eating at any moment, and the expected number of customers at a particular time. These optimizations make it possible to simulate bookings against customer behaviour, and this simulation can be built by combining OR and ML together.
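A minimal sketch of the table-allotment step, with an invented matrix of predicted waits: enumerate every assignment of customer groups to tables and keep the one minimizing total wait. A real system would use the Hungarian algorithm (e.g. `scipy.optimize.linear_sum_assignment`) rather than brute force.

```python
from itertools import permutations

# cost[i][j] = predicted wait (minutes, invented) if group i gets table j.
# The predictions would come from the ML side; the assignment is the OR side.
cost = [
    [9, 2, 7],   # group 0
    [6, 4, 3],   # group 1
    [5, 8, 1],   # group 2
]

def best_assignment(cost):
    """Try every one-to-one group-to-table assignment, keep the cheapest."""
    n = len(cost)
    return min(
        permutations(range(n)),
        key=lambda tables: sum(cost[g][t] for g, t in enumerate(tables)),
    )

tables = best_assignment(cost)
total = sum(cost[g][t] for g, t in enumerate(tables))
print(tables, total)   # -> (1, 0, 2) 9
```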

Computer vision algorithms work on visual data, and one of their main tasks is to classify or identify images from a given set. Suppose we have a computer vision system tracking food demand in a similar restaurant: a deep learning model connected to cameras estimates food wastage by recognizing food types and estimating food demand.

Since the classification depends chiefly on the pixels of the images, distance and object size can sometimes cause the deep learning model to fail. An operations research procedure can be coupled with the machine learning or deep learning algorithm, for example to select between different matching algorithms across image frames, and to optimize the estimates of the amount of food sold and the amount of food wasted.

The field of sentiment analysis has come a long way, and many systems now produce reliable results. One of the major problems in building these systems is that they require a lot of data, and such data is tough and costly to obtain. In this scenario, we can use operations research to optimize data collection so that the data is accurate, effective, and cost-effective for the model.

It frequently happens that the data we gather for modelling is biased by emotion, which can be estimated and tracked using operations research. An NLP system cannot autonomously change the emotional slant of its outputs, and offers little control over it; using operations research, we can impose that control by optimizing the system's behaviour and results.

Machine learning models depend on parameters that must be fitted so that, using the parameters and the data, the model can be trained to perform its assigned task; before feeding data into the model, we also need parameters that help the model work well with that data. Optimization of these parameters can be done with operations research since, as defined earlier, operations research is a science of optimization: better-fitting parameters can be obtained by optimizing over sets of parameters using operations research techniques.
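Parameter search can itself be cast as a small OR-style optimization, as in this sketch where a made-up validation-loss surface stands in for "train the model and score it on validation data":

```python
from itertools import product

# Enumerate a grid of hyperparameters and keep the one minimizing a
# (hypothetical) validation loss. In practice validation_loss would
# train and evaluate a real model.

def validation_loss(lr, reg):
    # Invented loss surface with its minimum at lr=0.1, reg=0.01.
    return (lr - 0.1) ** 2 + (reg - 0.01) ** 2

grid = {"lr": [0.01, 0.1, 1.0], "reg": [0.0, 0.01, 0.1]}
best = min(product(grid["lr"], grid["reg"]),
           key=lambda p: validation_loss(*p))
print(best)   # the grid point closest to the true minimum
```

Grid search is the simplest such optimizer; the OR toolbox also supplies smarter ones (random search, Bayesian optimization, evolutionary methods) for the same slot.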

Use Cases of Combining ML and OR

So far, we have seen various ways of using OR and ML together, and the benefits of doing so. In this section, we discuss some real-life use cases of this combination. Since the two fields are closely related, many giant companies, such as Google and Amazon, use the combination to obtain good results and provide customer satisfaction.

The real-life use cases above are major examples where the combination of ML and OR consistently yields improvement. There can be many more examples of this combination; the motive is always to use it to improve the strength, accuracy, and benefit of an organization's work.

Final Words

In this article, we covered the basics of operations research and how it can be combined with machine learning. The point to note is that machine learning models are concerned with a single prediction task, whereas operations research is concerned with a large collection of specialised methods for specific classes of problems. As the examples show, we can achieve higher accuracy and greater benefit by combining ML and OR.


Machine learning: Aleph Alpha works on transformative AI with Oracle and Nvidia – Market Research Telecast

As part of the International Supercomputing Conference (ISC) on November 16, 2021, the German AI company Aleph Alpha presented its new multimodal artificial intelligence (AI) model in a panel with Oracle and Nvidia. Unlike the pure language model GPT-3, it connects computer vision with NLP, and it carries GPT-3's flexibility across all possible types of interaction over to the multimodal domain. Specifically, according to CEO Jonas Andrulis, the model can generate arbitrary text and integrate images into a textual context. The Aleph Alpha model is apparently just as powerful as GPT on the text side, but images can be mixed in at any time. Unlike DALL-E, the new model is not limited to a single image plus caption. First tests show that it is apparently able to understand images and text with world knowledge.

Andrulis brought along examples that visibly impressed the audience and made tangible what his AI model can already do. Some showed unusual, surreal image content, such as a bear in a taxi, a couple camping underwater, or a fish with huge teeth and gaps between them, which the AI is able to describe correctly when prompted with text questions. One level more complex is the image of a note in an elevator, from which the AI can correctly infer the situation, separate the essential from the insignificant content of the message, and identify the institutional setting (a university), something only possible through causal inference. The answers in the output go beyond what is possible from the picture alone, showing that the AI model independently draws further connections.

On a handwritten treasure map, the model is not only able to decipher the writing but also to make accurate assessments of the character of the marked places (including where it is most dangerous). The correct analysis and description of technical drawings, using meta-level terms that cannot be derived from the prompt, has already succeeded in individual cases. A few examples can be seen in the series of pictures; Aleph Alpha has provided heise Developer with the image material.

According to its inventor, the model is the forerunner of a transformation that could change all branches of industry in the way electricity once did. The panel's title accordingly carried the claim that nothing less than a fourth industrial revolution is at stake ("How GPT-3 is Spearheading the Fourth Industrial Revolution"). The panellists talked about their companies and their research joining forces, creating an alternative to (and in some respects a step ahead of) hyperscalers and tech giants such as Microsoft, which recently secured exclusive rights to GPT-3 for one billion US dollars.

Hyperscaling the hardware for training large language models such as GPT-3 is a focus of the current edition of the conference, which is taking place in hybrid form and brings together experts from industry and research every year. One of the hot topics is that ever-larger models require correspondingly larger clusters for training and inference (application), posing major challenges for engineers and research teams, especially in cooling and in the high-speed interconnects between GPUs.

A key message of the panel was that, at the current state of technology, it is no longer enough to formulate a smart idea as a model: ultimately, the required upscaled infrastructure determines progress and success. Panel leader Kevin Jorissen from Oracle and the two panelists Joey Conway from Nvidia and Jonas Andrulis from Aleph Alpha impressively demonstrated to the specialist audience what it means to operate a model with around 200 billion parameters or more, and what GPU resources, and above all time, this now requires. The Aleph Alpha model discussed as an example would take around three months to train on 512 GPUs. One of the questions discussed with the audience was how to distribute the model over several GPUs and how to deal with instabilities, since with insufficient hardware even small problems can force a restart of a run that has lasted weeks or months, which means high costs on top of the lost time.

Aleph Alpha GmbH, founded in Heidelberg, is considered a beacon in Germany and Europe because, according to the technology index MAD 2021 (Machine Learning, AI and Data Landscape), it is the only European AI company researching, developing and designing artificial general intelligence (AGI). The Aleph Alpha founders Jonas Andrulis and Samuel Weinbach and their 30-strong team work closely with the research centre Hessian.AI, which is headed by Professor Kristian Kersting and anchored at TU Darmstadt. In addition, there is a scientific cooperation with the University of Heidelberg, and the AI company has Oracle and Hewlett Packard Enterprise (HPE) at its side as international partners for, among other things, the cloud infrastructure and the necessary hardware.

Co-founder and CEO Andrulis, who previously held a leading position in AI development at Apple, was awarded the German AI Prize in October 2021. This year the start-up has already received around 30 million euros in funding from European investors to advance its pioneering work on unsupervised learning. A dedicated data center with high-performance clusters is currently being set up. Those more interested in Aleph Alpha's work will find further information on its website and on the company's technology blog.

This year's edition of the International Supercomputing Conference (ISC), from November 14 to 19, runs under the motto Science and Beyond, and for the first time the organizers have arranged the conference in hybrid form. In addition to the on-site event in St. Louis, Missouri, participants from around the world could also join in virtually. Numerous sessions were held either on the conference platform or in breakout rooms via Zoom. Anyone interested in the program will find it on the conference website.

Even those who missed the starting shot can still come on board at the last minute: registration is possible during the ongoing conference until November 19, 2021. Depending on your interests, this could make sense, because registered participants can later access the recordings of those lectures that were recorded on the conference platform.



Edinburgh machine learning specialist to add 100 jobs thanks to investment co-venture – The Scotsman

Edinburgh-headquartered Brainnwave has agreed a Series A investment worth Can$10.2 million (£6m) with Hatch, one of the world's most prominent engineering, project management and professional services firms.

The two outfits have formed a co-venture focused on developing applications and products that combine Brainnwave's machine learning and artificial intelligence-powered analytics platform with Hatch's extensive knowledge of the metals and mining, energy and infrastructure sectors.

It will also provide the Scots group with access to clients on a global scale. The funding will unlock a plan to grow Brainnwave's headcount by 100 people in highly skilled roles, while in parallel upscaling the firm's Edinburgh and London locations.

Brainnwave's tech, which is already used by the likes of William Grant & Sons, Aggreko and Metropolitan Thames Valley, is said to combine data exploration and visualisation to rapidly improve decision-making capabilities.

Steve Coates, chief executive and co-founder of Brainnwave, said: "This partnership made sense because both organisations are like-minded in their entrepreneurial approach, willingness to do things differently and challenge the status quo."

Alim Somani, managing director of Hatch's digital practice, added: "Our partnership with Brainnwave helps us develop practical, innovative solutions for our clients' challenges and accelerates our ability to deliver them quickly so that our clients can begin to reap the benefits."

The co-venture will initially target two of what it sees as the world's most pressing issues: climate change and urbanisation.


Brivo Unveils Anomaly Detection, a Revolutionary Technology that Harnesses Access Data and Machine Learning to Strengthen Built World Security – Yahoo…

Patent-pending technology advances Brivo's efforts in revolutionizing enterprise PropTech through the power of data

BETHESDA, Md., Nov. 17, 2021 /PRNewswire/ -- Brivo, a global leader in cloud-based access control and smart building technologies, today announced the release of Anomaly Detection in its flagship access control solution, Brivo Access. Anomaly Detection is a patent-pending technology that uses advanced analytics and machine learning algorithms to compare massive amounts of user and event data, identify events that are out of the ordinary or look suspicious, and issue priority alerts for immediate follow-up. With Anomaly Detection, business leaders get a nuanced understanding of security vulnerabilities across their facility portfolio and can act on early indicators of suspicious user behaviors that might otherwise go unnoticed.

"With Anomaly Detection, Brivo is incorporating the latest data and machine learning technology in ways never before seen in physical security," said Steve Van Till, Founder and CEO of Brivo. "Along with our recently released Brivo Snapshot capability, Anomaly Detection uses AI to simplify access management by notifying customers about abnormal situations and prioritizing them for further investigation. After training, each customer's neural network will know more about traffic patterns in their space than the property managers themselves. This means that property managers can stop searching for the needle in the haystack. We identify it and flag it for them automatically."

Anomaly Detection's AI engine learns the unique behavioral patterns of each person in each property they use, developing a signature user and spatial profile that is continuously refined as behaviors evolve. This dynamic real-time picture of normal activity complements static security protocols, permissions, and schedules. In practice, when someone engages in activity that is a departure from their past behavior, Anomaly Detection creates a priority alert in Brivo Access Event Tracker indicating the severity of the aberration. This programmed protocol helps organizations prioritize what to investigate.
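The press release does not disclose Brivo's actual model, but the idea of a per-user behavioral profile can be sketched with a simple z-score: a user's historical entry times define the profile, and a new event far outside it is flagged. Everything below, names, threshold, and data, is a hypothetical illustration, not Brivo's implementation.

```python
from statistics import mean, stdev

# Minimal sketch of per-user behavioral anomaly scoring: how unusual is a
# new access event relative to that user's own history?

def anomaly_score(history_hours, event_hour):
    """Z-score of a new entry time against a user's historical entry times."""
    mu, sigma = mean(history_hours), stdev(history_hours)
    return abs(event_hour - mu) / sigma

history = [8.9, 9.1, 9.0, 8.8, 9.2]      # habitual ~9am entries
print(anomaly_score(history, 9.0) < 3)   # typical entry: not flagged
print(anomaly_score(history, 2.0) > 3)   # 2am entry: flagged as anomalous
```

A production system would learn far richer profiles (doors used, dwell times, co-occurrence with other users) and refine them continuously, but the flag-on-departure logic is the same.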

As more companies roll out hybrid work policies for employees, most businesses are poised to see a lot of variation in office schedules and movement. For human operators, learning these new patterns would take a tremendous amount of time, particularly analyzing out-of-the-ordinary behaviors that are technically still within the formal bounds of acceptable use. With Anomaly Detection in Brivo Access, security teams can gain better visibility and understanding as the underlying technology continuously learns users' behaviors and patterns as they transition over time.

The release of Anomaly Detection continues Brivo's significant investments in Brivo Access and AI over the last year to offer building owners and managers more comprehensible, actionable insights and save time-intensive legwork. With a comprehensive enterprise-grade UI, real-time data visualizations, and clear indicators of emerging trends across properties, organizations can secure and manage many spaces from a central hub.

Anomaly Detection is now available in the Enterprise Edition of Brivo Access. For more information, visit our All Access Blog.

About Brivo
Brivo, Inc. created the cloud-based access control and smart building technology category over 20 years ago and remains a global leader serving commercial real estate, multifamily residential and large distributed enterprises. The company's comprehensive product ecosystem and open API provide businesses with powerful digital tools to increase security automation, elevate employee and tenant experience, and improve the safety of all people and assets in the built environment. Brivo's building access platform is now the digital foundation for the largest collection of customer facilities in the world, trusted by more than 23 million users occupying over 300 million square feet across 42 countries. Learn more at http://www.Brivo.com.


SOURCE Brivo


Adelaide at the centre of next generation AI research – Newswise

Newswise A new research centre that focuses on next-generation artificial intelligence (AI) technology will develop the high-calibre expertise Australia needs to compete in the coming machine learning-enabled global economy.

Launched today, Friday, 19 November, by the Hon. Steven Marshall, Premier of South Australia, the Centre for Augmented Reasoning, funded with $20 million from the Australian Government, is based at the University of Adelaide.

The new centre is headquartered within the internationally regarded Australian Institute for Machine Learning (AIML) at the University of Adelaide, which was jointly established with the Government of South Australia at Adelaide's Lot Fourteen innovation precinct.

Augmented reasoning is a new and emerging field of AI which combines an advanced ability to learn patterns using traditional machine learning with an ability to reason.

The four-year investment by the Department of Education, Skills and Employment in people and research will train a new generation of experts in machine learning, the AI technology driving real economic impact today, and support the growth of new high-tech jobs at the University and Lot Fourteen.

A $3.5m innovation fund for AI commercialisation will provide seed funding to launch new start-ups, as well as support local collaboration opportunities, strategic development programs, and new business ventures.

The centre will lead the research and development of new augmented reasoning systems and improve machine learning technology across a range of applications.

Comments from the Hon. Steven Marshall, Premier of South Australia:

Centres like this cement Lot Fourteen as the innovation centre of the nation.

Nowhere else can you find a site which presents collaborative opportunities for so many high-tech and high-growth sectors, creating jobs and boosting the economy.

Comments from Senator Rex Patrick, Senator for South Australia:

I am pleased to have played a part in delivering this centre for South Australia. It will be a major drawcard for the smartest young minds in the state to stay here in SA.

AI is a critically important emerging technology that Australia must embrace. The jobs of the future will incorporate AI, not be replaced by it.

Governments should be working to greatly increase Australia's technological capabilities, all the more so as we work our way out of the COVID-19 disrupted economy, and this Centre should play a big part in this.

Comments from Professor Peter Høj AC, Vice-Chancellor and President of the University of Adelaide:

The Centre for Augmented Reasoning is a vital new hub within the University's Australian Institute for Machine Learning, for Australia's high-calibre machine learning expertise.

Building on the University's existing research strengths at AIML, the centre will support high-performance machine learning research, provide valuable scholarship opportunities, support AI commercialisation initiatives, and become a leading voice in Australia's AI landscape.

AI is already having an impact on every academic area of the University. Just as computers are now the standard tool in all workplaces, machine learning will soon become a new standard for every industry. It's a critical part of the future.

Comments from Professor Anton van den Hengel, Director of the Centre for Augmented Reasoning, University of Adelaide:

Artificial Intelligence is right now being used to improve the productivity of every industry sector. If Australia wants to participate in a future AI-enabled global economy, we need to be applying AI to improve our productivity. That's the way that we maintain Australian jobs.

In every industry, the jobs that AI supports aren't AI jobs. They're jobs in mining, agriculture, building and service industries. All of those industries will be impacted by the productivity gains from AI.

By using AI to improve their efficiency, productivity and quality, Australian businesses will remain competitive in an increasingly automated global economy.

If Australia is too slow in adopting new technology, then our industries will not be able to compete against regions that have already embraced the changes brought about by AI.

View post:
Adelaide at the centre of next generation AI research - Newswise

Read More..

DataX is funding new AI research projects at Princeton, across disciplines – Princeton University

Graphic courtesy of the Center for Statistics and Machine Learning

Ten interdisciplinary research projects have won funding from Princeton University's Schmidt DataX Fund, with the goal of spreading and deepening the use of artificial intelligence and machine learning across campus to accelerate discovery.

The 10 faculty projects, supported through a major gift from Schmidt Futures, involve 19 researchers and several departments and programs, from computer science to politics.

The projects explore a variety of subjects, including an analysis of how money and politics interact, discovering and developing new materials exhibiting quantum properties, and advancing natural language processing.

"We are excited by the wide range of projects that are being funded, which shows the importance and impact of data science across disciplines," said Peter Ramadge, Princeton's Gordon Y.S. Wu Professor of Engineering and the director of the Center for Statistics and Machine Learning (CSML). "These projects are using artificial intelligence and machine learning in multifaceted ways: to unearth hidden connections or patterns, model complex systems that are difficult to predict, and develop new modes of analysis and processing."

CSML is overseeing a range of efforts made possible by the Schmidt DataX Fund to extend the reach of data science across campus. These efforts include the hiring of data scientists and overseeing the awarding of DataX grants. This is the second round of DataX seed funding, with the first in 2019.

Discovering developmental algorithms: Bernard Chazelle, the Eugene Higgins Professor of Computer Science; Eszter Posfai, the James A. Elkins, Jr. '41 Preceptor in Molecular Biology and an assistant professor of molecular biology; Stanislav Y. Shvartsman, professor of molecular biology and the Lewis Sigler Institute for Integrative Genomics, and also a 1999 Ph.D. alumnus

Natural algorithms is a term used to describe dynamic, biological processes built over time via evolution. This project seeks to explore and understand, through data analysis, one type of natural algorithm: the process of transforming a fertilized egg into a multicellular organism.

MagNet: Transforming power magnetics design with machine learning tools and SPICE simulations: Minjie Chen, assistant professor of electrical and computer engineering and the Andlinger Center for Energy and the Environment; Niraj Jha, professor of electrical and computer engineering; Yuxin Chen, assistant professor of electrical and computer engineering

Magnetic components are typically the largest and least efficient components in power electronics. To address these issues, this project proposes the development of an open-source, machine learning-based magnetics design platform to transform the modeling and design of power magnetics.

Multi-modal knowledge base construction for commonsense reasoning: Jia Deng and Danqi Chen, assistant professors of computer science

To advance natural language processing, researchers have been developing large-scale, text-based commonsense knowledge bases, which help programs understand facts about the world. But these data sets are laborious to build and have issues with spatial relationships between objects. This project seeks to address these two limitations by using information from videos along with text in order to automatically build commonsense knowledge bases.

Generalized clustering algorithms to map the types of COVID-19 response: Jason Fleischer, professor of electrical and computer engineering

Clustering algorithms are made to group objects but fall short when the objects have multiple labels, the groups require detailed statistics, or the data sets grow or change. This project addresses these shortcomings by developing networks that make clustering algorithms more agile and sophisticated. Improved performance on medical data, especially patient response to COVID-19, will be demonstrated.
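For context, the kind of conventional clustering the project aims to generalize can be sketched with a minimal k-means, where every point is forced into exactly one cluster, the single-label limitation noted above. The data and implementation here are purely illustrative, not the project's code:

```python
import random
from math import dist  # Euclidean distance, Python 3.8+

def kmeans(points, k, iters=20, seed=0):
    """Plain k-means: each point receives exactly one hard cluster label."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    for _ in range(iters):
        # Assignment step: the nearest centre wins outright (one label per point).
        clusters = [[] for _ in range(k)]
        for p in points:
            clusters[min(range(k), key=lambda i: dist(p, centers[i]))].append(p)
        # Update step: move each centre to the mean of its members.
        for i, members in enumerate(clusters):
            if members:
                centers[i] = tuple(sum(x) / len(members) for x in zip(*members))
    return centers, clusters

# Two well-separated blobs of three points each.
points = [(0.1, 0.2), (0.0, 0.0), (0.2, 0.1),
          (5.0, 5.1), (5.2, 4.9), (4.9, 5.0)]
centers, clusters = kmeans(points, k=2)
print(sorted(len(c) for c in clusters))  # → [3, 3]
```

A generalized algorithm of the sort described would relax the hard assignment (soft or multiple memberships) and update incrementally as the data set grows or changes.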

New framework for data in semiconductor device modeling, characterization and optimization suitable for machine learning tools: Claire Gmachl, the Eugene Higgins Professor of Electrical Engineering

This project is focused on developing a new, machine learning-driven framework to model, characterize and optimize semiconductor devices.

Individual political contributions: Matias Iaryczower, professor of politics

To answer questions on the interplay of money and politics, this project proposes to use micro-level data on the individual characteristics of potential political contributors, characteristics and choices of political candidates, and political contributions made.

Building a browser-based data science platform: Jonathan Mayer, assistant professor of computer science and public affairs, Princeton School of Public and International Affairs

Many research problems at the intersection of technology and public policy involve personalized content, social media activity and other individualized online experiences. This project, which is a collaboration with Mozilla, is building a browser-based data science platform that will enable researchers to study how users interact with online services. The initial study on the platform will analyze how users are exposed to, consume, share, and act on political and COVID-19 information and misinformation.

Adaptive depth neural networks and physics hidden layers: Applications to multiphase flows: Michael Mueller, associate professor of mechanical and aerospace engineering; Sankaran Sundaresan, the Norman John Sollenberger Professor in Engineering and a professor of chemical and biological engineering

This project proposes to develop data-based models for complex multi-physics fluids flows using neural networks in which physics constraints are explicitly enforced.

Seeking to greatly accelerate the achievement of quantum many-body optimal control utilizing artificial neural networks: Herschel Rabitz, the Charles Phelps Smyth '16 *17 Professor of Chemistry; Tak-San Ho, research chemist

This project seeks to harness artificial neural networks to design, model, understand and control quantum dynamics phenomena between different particles, such as atoms and molecules.(Note: This project also received DataX funding in 2019.)

Discovery and design of the next generation of topological materials using machine learning: Leslie Schoop, assistant professor of chemistry; Bogdan Bernevig, professor of physics; Nicolas Regnault, visiting research scholar in physics

This project aims to use machine learning techniques to uncover and develop topological matter, a type of matter that exhibits quantum properties, whose future applications could impact energy efficiency and the rise of super quantum computers. Current applications of topological matter are severely limited because its desired properties only appear at extremely low temperatures or high magnetic fields.

Excerpt from:
DataX is funding new AI research projects at Princeton, across disciplines - Princeton University

Read More..

Research Team Probes History with Cutting-Edge Tech – Bethel University News

Zach Haala '23 and Professor of History Charlie Goldberg noticed an anomaly in their data. Using artificial intelligence (AI), the two had tracked the presence of smiles over nearly 80 years and thousands of Bethel photographs. As expected, smiles grew more prevalent in the photos over time, matching cultural shifts after World War II. But then in the 1960s, the number of smiles decreased. At first, they were stumped. Then Haala noticed that's when the yearbooks started featuring more sports photos. "It's a good example of how data spits stuff out, but data needs to be interpreted," Goldberg says. In the 1960s, male athletes rarely smiled in photos, and large teams like men's football and basketball affected the research results. To Goldberg, it shows the promise of using AI to explore history and also raises questions. "What do we then do with this stuff? How do we interpret it and use it to tell a human story, which is what historians do?" Goldberg asks.

Those are the kinds of questions Goldberg and Haala explored in the research project, "A Picture's Worth a Thousand Data Points? AI-driven Machine Learning in Digital Humanities Analyses." They were one of the 2021-22 student-faculty teams to receive an Edgren Scholarship to support their research.

To some, history may feel a long way from artificial intelligence and programming. But Goldberg also directs Bethel's digital humanities program, which explores cutting-edge, forward-looking methods to apply to history, literature, and philosophy. While teaching Advanced Digital Humanities last year, Goldberg got the idea to use AI to study history. The class explores advances in AI technology and how it's often a double-edged sword: it yields many opportunities with data and research, but it also leads to things like deepfakes, fake photos or video created using AI, often depicting world leaders or celebrities.

Goldberg wanted to go deeper. As a historian, he uses data, usually text or photos, to look for patterns. He was interested in using AI to isolate the same patterns historians explore but on a larger scale, and wanted to see how well AI could recognize the same patterns in photos that historians look for. He knew he needed a student who was highly skilled at coding and programming, and who was also willing to dive into the deep end and take risks. Enter Haala, who is triple majoring in computer science: software project management, software engineering, and digital humanities, and he had taken Advanced Digital Humanities.

Read this article:
Research Team Probes History with Cutting-Edge Tech - Bethel University News

Read More..

Alphabet is putting its prototype robots to work cleaning up around Googles offices – The Verge

What does Google's parent company Alphabet want with robots? Well, it would like them to clean up around the office, for a start.

The company announced today that its Everyday Robots Project, a team within its experimental X labs dedicated to creating a general-purpose learning robot, has moved some of its prototype machines out of the lab and into Google's Bay Area campuses to carry out some light custodial tasks.

"We are now operating a fleet of more than 100 robot prototypes that are autonomously performing a range of useful tasks around our offices," said Everyday Robots chief robot officer Hans Peter Brøndmo in a blog post. "The same robot that sorts trash can now be equipped with a squeegee to wipe tables and use the same gripper that grasps cups can learn to open doors."

The robots in question are essentially arms on wheels, with a multipurpose gripper on the end of a flexible arm attached to a central tower. There's a head on top of the tower with cameras and sensors for machine vision, and what looks like a spinning lidar unit on the side, presumably for navigation.

As Brøndmo indicates, these bots were first seen sorting out recycling when Alphabet debuted the Everyday Robots team in 2019. The big promise that's being made by the company (as well as by many other startups and rivals) is that machine learning will finally enable robots to operate in unstructured environments like homes and offices.

Right now, we're very good at building machines that can carry out repetitive jobs in a factory, but we're stumped when trying to get them to replicate simple tasks like cleaning up a kitchen or folding laundry.

Think about it: you may have seen robots from Boston Dynamics performing backflips and dancing to The Rolling Stones, but have you ever seen one take out the trash? It's because getting a machine to manipulate never-before-seen objects in a novel setting (something humans do every day) is extremely difficult. This is the problem Alphabet wants to solve.

Is it going to? Well, maybe one day, if company execs feel it's worth burning through millions of dollars in research to achieve this goal. Certainly, though, humans are going to be cheaper and more efficient than robots for these jobs in the foreseeable future. The update today from Everyday Robots is neat, but it's far from a leap forward. You can see from the GIFs that Alphabet shared of its robots that they're still slow and awkward, carrying out tasks inexpertly and at a glacial pace.

However, it's still definitely something that the robots are being tested in the wild rather than in the lab. Compare Alphabet's machines to Samsung's Bot Handy, for example: a similar-looking tower-and-arm bot that the company showed off at CES last year, apparently pouring wine and loading a dishwasher. At least, Bot Handy looks like it's performing these jobs, but really it was only carrying out a prearranged demo. Who knows how capable, if at all, this robot is in the real world? At least Alphabet is finding this out for itself.

Original post:
Alphabet is putting its prototype robots to work cleaning up around Googles offices - The Verge

Read More..