Category Archives: Machine Learning
Follow these Strategies to Win a Machine Learning Hackathon – Analytics Insight
The tech sphere brings exciting things for tech-savvy people practically every day. Whether you are a beginner or an expert, you can participate in many tech-related competitions, conferences, events, and seminars to sharpen your skills. One effective way to test your talent is the machine learning hackathon. Today, machine learning is emerging as a powerful technology behind almost every digital mechanism, and many people aspire to become well-versed in it. Participating in machine learning hackathons is one way to build practical skills. These hackathons are conducted specifically for programmers, coders, and others involved in software development, though professionals such as interface designers, project managers, domain experts, and graphic designers who work closely with software teams also try their hand at the competition. During a hackathon, participants are asked to create working software from the datasets and models provided, within a limited time. At the end of every machine learning hackathon, participants come away with valuable lessons. However, winning a hackathon is completely different from participating in one for the experience. If you are planning to be the star of the event, you should follow certain strategies to win a machine learning hackathon.
If you are new to machine learning and want to give it a try in a hackathon, short or online hackathons should be your first choice. Remember, victory comes with experience, so jumping directly into long hackathons won't secure you the winner's title. Start with small ones that run under 24 hours, then move on to longer hackathons. Make sure you are well-organized and prepared when you shift from short to long competitions.
As a beginner, you should try to enter a hackathon with someone who is experienced and has deep knowledge of machine learning. This will help you both learn throughout the process and, with luck, win the prize. Also, make sure you join hands with somebody whose skills complement your own. For example, if you are a developer, team up with a person who has business knowledge. With the combination of business and development skills, you can surely claim first place.
Diverse here refers not to ideology but to talent. If all the members of the team are developers with zero knowledge of other perspectives, the result will be completely one-sided. Therefore, ensure that everybody on your team has something new and different to offer. Also, try to work with clients who can support you throughout the challenge; ask them to check your progress regularly and give direction when necessary.
Success in a hackathon comes in two different ways: winning the competition, or impressing the client. Before beginning work, decide which one you are going to concentrate on. If you are planning to impress the judges and win the hackathon, go with polished software and an outstanding presentation. Impressing the client is quite the opposite: if you are planning to win over your sponsor, you should build useful software that can be utilized even after the competition is over.
Data, along with the code itself, is at the core of software development. Always make sure your data is well prepared before starting core operations. Scraping data is time-consuming and can be tricky when it comes to dynamically generated content, so try instead to use publicly available data, such as the IMDb datasets or a Kaggle dataset. Keep in mind that many winning teams save time by seeking out readily available data.
Are The Highly-Marketed Deep Learning and Machine Learning Processors Just Simple Matrix-Multiplication Accelerators? – BBN Times
Artificial intelligence (AI) accelerators are computer systems designed to enhance artificial intelligence and machine learning applications, including artificial neural networks (ANNs) and machine vision.
Most AI accelerators are just simple data matrix-multiplication accelerators. All the rest is commercial propaganda.
The main aim of this article is to understand the complexity of machine learning (ML) and deep learning (DL) processors and discover the truth about the so-called AI accelerators.
Unlike other computational devices that treat scalars or vectors as primitives, Google's Tensor Processing Unit (TPU) ASIC treats matrices as primitives. The TPU is designed to perform matrix multiplication at a massive scale.
At its core, you find something inspired by the heart, not the brain. It's called a systolic array, described in the 1982 paper "Why Systolic Architectures?"
This computational device contains 256 x 256 8-bit multiply-add computational units. The grand total of 65,536 processors is capable of 92 trillion operations per second.
It uses DDR3 memory with only 30 GB/s of bandwidth. Contrast that with the Nvidia Titan X, whose GDDR5X hits transfer speeds of 480 GB/s.
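As a sanity check on those figures, the peak-throughput arithmetic works out in a few lines. Note that the ~700 MHz clock rate is an assumption drawn from Google's published TPU v1 specifications; the article itself does not state it.

```python
# Rough check of the TPU v1 peak-throughput claim.
# Assumption: the ~700 MHz clock from Google's published TPU v1 specs.
units = 256 * 256        # multiply-add units in the systolic array
ops_per_cycle = 2        # each unit does one multiply and one add per cycle
clock_hz = 700e6         # assumed TPU v1 clock rate

peak_tops = units * ops_per_cycle * clock_hz / 1e12
print(units)             # 65536 units, matching the article
print(round(peak_tops))  # ~92 trillion operations per second
```

The memory-bandwidth contrast in the text is exactly why this peak is hard to reach in practice: at 30 GB/s, the DDR3 interface often cannot feed the array fast enough to keep all 65,536 units busy.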
Whatever the marketing says, it has nothing to do with real AI hardware.
A central main processor is commonly defined as a digital circuit that performs operations on some external data source, usually memory or some other data stream, taking the form of a microprocessor implemented on a single metal-oxide-semiconductor (MOS) integrated circuit chip.
It could be supplemented with a coprocessor, performing floating point arithmetic, graphics, signal processing, string processing, cryptography, or I/O interfacing with peripheral devices. Some application-specific hardware units include video cards for graphics, sound cards, graphics processing units and digital signal processors.
A central processing unit (CPU), also called a central processor, main processor or just processor, is the electronic circuitry that executes instructions comprising a computer program. The CPU performs basic arithmetic, logic, controlling, and input/output (I/O) operations specified by the instructions in the program.
There are a lot of processing units, as listed below:
A graphics processing unit (GPU) enables you to run high-definition graphics on your computer. A GPU has hundreds of cores aligned in a particular way, forming a single hardware unit. It has thousands of concurrent hardware threads, utilized for the data-parallel and computationally intensive portions of an algorithm. Data-parallel algorithms are well suited for such devices because the hardware can be classified as SIMT (Single Instruction Multiple Threads). GPUs outperform CPUs in terms of GFLOPS.
The TPU and NPU fall under a Narrow/Weak AI/ML/DL accelerator class of specialized hardware accelerators or computer systems designed to accelerate special AI/ML applications, including artificial neural networks and machine vision.
Big-Tech companies such as Google, Amazon, Apple, Facebook, AMD and Samsung are all designing their own AI ASICs.
Typical applications include algorithms for training and inference in computing devices such as self-driving cars, machine vision, NLP, robotics, the internet of things, and other data-intensive or sensor-driven tasks. They are often manycore designs and generally focus on low-precision arithmetic, novel dataflow architectures, or in-memory computing capability, with a typical AI integrated circuit chip containing billions of MOSFETs.
Focused on the training and inference of deep neural networks, TensorFlow uses a symbolic math library based on dataflow and differentiable programming.
Differentiable programming uses automatic differentiation (also called algorithmic differentiation, computational differentiation, or auto-diff) and gradient-based optimization, working by constructing a graph containing the control flow and data structures in the program.
Datastream/dataflow programming, in turn, is a programming paradigm that models a program as a directed graph of the data flowing between operations, thus implementing dataflow principles and architecture.
Things revolve around static or dynamic graphs, requesting the proper programming languages, such as C++, Python, R, or Julia, and ML libraries, such as TensorFlow or PyTorch.
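To make the dataflow idea concrete, here is a minimal hand-rolled sketch; the graph structure and node names are illustrative only, and not the API of TensorFlow, PyTorch, or any other library. The program is a directed graph, and evaluating the output node pulls values along the edges.

```python
# Each node is (operation, inputs); source nodes carry a constant value.
graph = {
    "a":   (None, 3),
    "b":   (None, 4),
    "mul": (lambda x, y: x * y, ("a", "b")),
    "add": (lambda x, y: x + y, ("mul", "b")),
}

def evaluate(node):
    """Recursively pull data through the graph to compute one node."""
    op, inputs = graph[node]
    if op is None:
        return inputs                      # source node: constant value
    return op(*(evaluate(i) for i in inputs))

print(evaluate("add"))                     # 16, i.e. (3 * 4) + 4
```

A static-graph framework builds a structure like `graph` once and then executes it many times, which is what lets compilers and accelerators optimize and parallelize the operations.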
What AI computing is still missing is a Causal Processing Unit, involving symmetrical causal data graphs, with the Causal Engine software simulating real-world phenomena in digital reality.
Such causal processing is highly likely embedded in the human brain, and in real-world AI.
Jürgen Schmidhuber Appointed as Director of AI Initiative at KAUST – HPCwire
THUWAL, Saudi Arabia, Sept. 2, 2021. King Abdullah University of Science and Technology (KAUST) announces the appointment of Professor Jürgen Schmidhuber as director of the University's Artificial Intelligence Initiative. Schmidhuber is a renowned computer scientist who is most noted for his pioneering work in the field of artificial intelligence, deep learning, and artificial neural networks. He will join KAUST on October 1, 2021.
Professor Schmidhuber earned his Ph.D. in Computer Science from the Technical University of Munich (TUM). He is a Co-Founder and the Chief Scientist of the company NNAISENSE and was most recently Scientific Director at the Swiss AI Lab, IDSIA, and Professor of AI at the University of Lugano. He is also the recipient of numerous awards, author of over 350 peer-reviewed papers, a frequent keynote speaker and an adviser to various governments on AI strategies.
His lab's deep learning neural networks have revolutionized machine learning and AI. By the mid-2010s, they were implemented on over 3 billion devices and used billions of times per day by customers of the world's most valuable public companies' products, e.g., for greatly improved speech recognition on all Android phones, greatly improved machine translation through Google Translate and Facebook (over 4 billion translations per day), Apple's Siri and Quicktype on all iPhones, the answers of Amazon's Alexa, and numerous other applications. In 2011, his team was the first to win official computer vision contests through deep neural nets with superhuman performance. In 2012, they had the first deep neural network to win a medical imaging contest (on cancer detection), attracting enormous interest from the industry. His research group also established the fields of artificial curiosity through generative adversarial neural networks, linear transformers and networks that learn to program other networks (since 1991), mathematically rigorous universal AI and recursive self-improvement in meta-learning machines that learn to learn (since 1987).
Professor Schmidhuber will join KAUST's already prominent AI faculty, recruit new faculty members and top student prospects from the Kingdom of Saudi Arabia and around the world, develop educational programs and entrepreneurial activities, and engage with key public and private sector organizations both within the Kingdom and globally. Researchers he joins include KAUST's new Provost, Lawrence Carin, a leading expert in artificial intelligence and machine learning; the Deputy Director of the AI Initiative, Bernard Ghanem; the founding Interim Director Wolfgang Heidrich; and many other highly cited faculty in Computer Science, Applied Mathematics, Statistics, the Biological Sciences, and the Earth Sciences.
AI and machine learning are becoming established as core methodologies throughout science and engineering, as they have been in commercial and social spheres. KAUST expects AI to aid in the analysis of huge amounts of data coming from the Kingdom's gigaprojects and the design of data gathering in these domains and its own laboratories and fields. AI is also expected to play a big role in the Kingdom's energy transitions in hydrogen, carbon capture, solar, and wind. In both basic research and incorporation into its daily operations, KAUST is aligned with the vision of the Kingdom for a digital transformation powered by artificial intelligence.
KAUST President Tony Chan, himself a highly cited computer scientist and applied mathematician, expressed, "I am delighted that we are able to recruit to KAUST a seminal leader in AI and machine learning in Dr. Schmidhuber. This signifies the commitment that KAUST, as well as Saudi Arabia, is making to lead in and contribute to this very important field."
About KAUST
King Abdullah University of Science and Technology (KAUST) advances science and technology through distinctive and collaborative research integrated with graduate education. Located on the Red Sea coast in Saudi Arabia, KAUST conducts curiosity-driven and goal-oriented research to address global challenges related to food, water, energy, and the environment.
Established in 2009, KAUST is a catalyst for innovation, economic development and social prosperity in Saudi Arabia and the world. The University currently educates and trains master's and doctoral students, supported by an academic community of faculty members, postdoctoral fellows and research scientists.
With over 100 nationalities working and living at KAUST, the University brings together people and ideas from all over the world. To learn more visit kaust.edu.sa.
Source: King Abdullah University of Science and Technology
Basic Concepts in Machine Learning
Last Updated on August 15, 2020
What are the basic concepts in machine learning?
I found that the best way to discover and get a handle on the basic concepts in machine learning is to review the introduction chapters of machine learning textbooks and to watch the videos from the first module in online courses.
Pedro Domingos is a lecturer and professor of machine learning at the University of Washington and author of a book titled "The Master Algorithm: How the Quest for the Ultimate Learning Machine Will Remake Our World."
Domingos has a free machine learning course online at Coursera, appropriately titled "Machine Learning." The videos for each module can be previewed on Coursera at any time.
In this post you will discover the basic concepts of machine learning, summarized from Week One of Domingos' Machine Learning course.
Photo by Travis Wise, some rights reserved.
The first half of the lecture is on the general topic of machine learning.
Why do we need to care about machine learning?
"A breakthrough in machine learning would be worth ten Microsofts."
Bill Gates, Former Chairman, Microsoft
Machine Learning is getting computers to program themselves. If programming is automation, then machine learning is automating the process of automation.
Writing software is the bottleneck; we don't have enough good developers. Let the data do the work instead of people. Machine learning is the way to make programming scalable.
Machine learning is like farming or gardening: seeds are the algorithms, nutrients are the data, the gardener is you, and plants are the programs.
Traditional Programming vs Machine Learning
Sample applications of machine learning:
What is your domain of interest and how could you use machine learning in that domain?
There are tens of thousands of machine learning algorithms and hundreds of new algorithms are developed every year.
Every machine learning algorithm has three components:
All machine learning algorithms are combinations of these three components, which provide a framework for understanding any algorithm.
There are four types of machine learning:
Supervised learning is the most mature, the most studied and the type of learning used by most machine learning algorithms. Learning with supervision is much easier than learning without supervision.
Inductive Learning is where we are given examples of a function in the form of data (x) and the output of the function (f(x)). The goal of inductive learning is to learn the function for new data (x).
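As a toy sketch of that definition, a learner can recover f from a handful of (x, f(x)) samples and apply it to a new input. The linear target function and the use of numpy's polyfit here are choices made for illustration, not anything taken from the course.

```python
import numpy as np

x = np.array([0.0, 1.0, 2.0, 3.0])
fx = 2.0 * x + 1.0                    # samples of the "unknown" f(x)

# Inductive step: estimate f from the samples (here, fit a line).
slope, intercept = np.polyfit(x, fx, deg=1)

# Generalize: apply the learned function to a new input.
new_x = 4.0
print(round(slope * new_x + intercept, 6))   # 9.0, the true f(4)
```

The point is the division of labor: the data supplies input/output pairs, and the algorithm supplies a hypothesis (a line, here) that extends beyond them.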
Machine learning algorithms are only a very small part of using machine learning in practice as a data analyst or data scientist. In practice, the process often looks like:
It is not a one-shot process, it is a cycle. You need to run the loop until you get a result that you can use in practice. Also, the data can change, requiring a new loop.
The second part of the lecture is on the topic of inductive learning. This is the general theory behind supervised learning.
From the perspective of inductive learning, we are given input samples (x) and output samples (f(x)) and the problem is to estimate the function (f). Specifically, the problem is to generalize from the samples and the mapping to be useful to estimate the output for new samples in the future.
In practice it is almost always too hard to estimate the function, so we are looking for very good approximations of the function.
Some practical examples of induction are:
There are problems where inductive learning is not a good idea. It is important to know when to use, and when not to use, supervised machine learning.
Four problems where inductive learning might be a good idea:
We can write a program that works perfectly for the data that we have. This function will be maximally overfit: we have no idea how well it will work on new data, and it will likely perform very badly because we may never see the same examples again.
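A minimal sketch of such a maximally overfit "program": memorize every training pair exactly. The dictionary lookup below is an illustration of the idea, not anything from the lecture; it is perfect on the data it has seen and useless on anything new.

```python
# Memorize every training pair exactly: zero training error by construction.
train = {0: 0, 1: 1, 2: 4, 3: 9}      # samples of f(x) = x**2

def memorizer(x):
    return train.get(x)               # no generalization whatsoever

print(all(memorizer(x) == y for x, y in train.items()))  # True on seen data
print(memorizer(4))                   # None: no idea about unseen inputs
```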
The data alone is not enough. Without assumptions, you could predict anything you like; assuming nothing about the problem would be naive.
In practice we are not naive. There is an underlying problem and we are interested in an accurate approximation of the function. There is a double-exponential number of possible classifiers in the number of input states, so finding a good approximation of the function is very difficult.
There are classes of hypotheses that we can try. That is the form that the solution may take, or the representation. We cannot know which is most suitable for our problem beforehand. We have to use experimentation to discover what works on the problem.
Two perspectives on inductive learning:
You could be wrong.
In practice we start with a small hypothesis class and slowly grow the hypothesis class until we get a good result.
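One way to picture that strategy, under illustrative assumptions (polynomial degree as the hypothesis class, and a quadratic target function), is to try the smallest class first and enlarge it only while the fit is still poor:

```python
import numpy as np

x = np.linspace(-2.0, 2.0, 9)
y = x ** 2                       # target function, unknown to the learner

# Grow the hypothesis class (polynomial degree) until the fit is good.
for degree in range(1, 5):
    coeffs = np.polyfit(x, y, degree)
    max_err = np.max(np.abs(np.polyval(coeffs, x) - y))
    if max_err < 1e-8:
        break

print(degree)                    # 2: the smallest class that contains f
```

In real problems the stopping test uses held-out data rather than training error, but the growth pattern is the same: prefer the simplest hypothesis class that gives a good result.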
Terminology used in machine learning:
Key issues in machine learning:
There are three concerns when choosing a hypothesis space:
There are three properties by which you could choose an algorithm:
In this post you discovered the basic concepts in machine learning.
In summary, these were:
These are the basic concepts that are covered in the introduction to most machine learning courses and in the opening chapters of any good textbook on the topic.
Although targeted at academics, as a practitioner it is useful to have a firm footing in these concepts in order to better understand how machine learning algorithms behave in the general sense.
Machine Learning Helps Clarify the Risk Connected to Age-Related Blood Condition – On Cancer – Memorial Sloan Kettering
Artificial intelligence (AI) and machine learning allow researchers to study databases that otherwise would be too large and complex. In a recent study, Sloan Kettering Institute computational biologist Quaid Morris and collaborators used models to study an aging-related blood condition called clonal hematopoiesis (CH).
Their research showed how evolution and natural selection influence CH and the effects that it may have on health outcomes. CH is relatively common in older people, affecting up to 10% of the population by age 80. The condition raises the risk of developing blood disorders including some blood cancers and cardiovascular disease.
"One of the issues that we face in studying something complicated like CH is the interplay of many different factors," says Dr. Morris, who is co-senior author of a paper on CH published August 13, 2021, in Nature Communications. "AI could eventually give us the tools to guide clinical decisions."
Hematopoietic stem cells are cells that will eventually develop into different types of blood cells. In people with CH, some of these hematopoietic stem cells instead form a group of cells that is genetically distinct from the rest of their counterparts. Some of these subsets of cells, or clones, may contain mutations linked to cancer. This process can eventually lead to problems.
The presence of these mutations in the blood doesn't mean that the person carrying them has or will definitely develop cancer, but studies have shown that people with CH are at higher risk of developing certain blood cancers, especially myelodysplastic syndrome and acute myeloid leukemia (AML). They are also at increased risk for cardiovascular disease, heart attacks, and strokes.
In January 2018, Memorial Sloan Kettering Cancer Center launched a clinic for cancer patients found to have CH. The clinic provides these patients with regular monitoring for signs of blood cancer and regular screening for cardiovascular disease risk. Early detection of cancer or heart disease allows doctors to step in right away with a treatment plan. The clinic also has an important forward-looking research component: trying to understand which patients with CH are at highest risk of future health problems.
In the current study, the researchers looked at how different CH-related mutations interact with each other to increase or decrease the chances that a cancer-causing clone will eventually rise to dominance and progress to cancer.
"This type of research requires complex statistical models," says Dr. Morris, a member of the Computational and Systems Biology Program. "Deep learning and neural network techniques are AI methods that can help us to make inferences about what's going on in this population of hematopoietic cells and study the interplay of different subsets of cells."
The researchers used blood samples collected as part of the European Prospective Investigation into Cancer and Nutrition, an ongoing, multicenter study that has medical information on about 65,000 people spanning almost three decades. The analysis of blood samples with CH included 92 samples from people who eventually developed AML and 385 controls (people who did not have AML).
This research was done in collaboration with scientists at the Ontario Institute for Cancer Research (OICR) and the University of Toronto, where Dr. Morris worked before coming to MSK. The co-senior author, Philip Awadalla of OICR, is an expert in population genetics, a field that focuses on how genes change in response to evolution and natural selection.
Dr. Morris says data collected through MSK's CH clinic will make this kind of analysis much more precise and potentially more useful going forward. "The data we used in the current study was retrospective and taken from a single snapshot in time," he explains. In contrast, he notes, the CH clinic is collecting multiple samples from the same patients over months or years. "This means that models we build with this data will be more informed and more effective at studying patterns over time and help us to make better predictions," he adds.
CH research is an important component of calculating and understanding cancer risk, a major goal of MSK's Precision Interception and Prevention Program. The objective of this approach is to either prevent cancer from occurring or stop it at the earliest stages, when it's easier to treat.
"The hope is that AI can help us make sense of patterns that are so complex that we'd never be able to see them on our own," Dr. Morris says.
Industry Voices: Why the COVID-19 pandemic was a watershed moment for machine learning – FierceHealthcare
Times of crisis spark innovation and creativity, as evidenced in the way organizations have come together to innovate for the greater good during the COVID-19 pandemic.
Liquor distilleries started producing hand sanitizer, 3D printing companies made face shields and nasal swabs to meet massive demands, and auto companies shifted gears to make ventilators.
Machine learning (ML), in which computer systems learn and adapt autonomously, using algorithms and statistical models to analyze and draw inferences from patterns in data to inform and automate processes, has also played an important role, supporting practically every aspect of healthcare. Amazon Web Services has supported customers as they enable remote patient care, develop predictive surge planning to help manage inpatient/ICU bed capacity, and tackle the unprecedented feat of developing a messenger ribonucleic acid (mRNA)-based COVID-19 vaccine in under a year.
We now have the opportunity to build on our lessons from the past year to apply ML to help address several underlying problems that plague the healthcare and life sciences communities.
Telehealth was on the rise before COVID-19, but it revealed its true potential during the pandemic. Telehealth is often viewed simply as patients and providers interacting online via video platforms but has proven capable of doing much more. Applying ML to telehealth provides a unique opportunity to innovate, scale and offer more personalized experiences for patients and ensure they have access to the resources and care they need, no matter where they're located.
ML-based telehealth tools such as patient service chatbots, call center interactions to better triage and direct patients to the information and care they require, and online self-service prescreenings are helping optimize patient experiences and streamline provider assessments and diagnostics.
For example, GovChat, South Africa's largest citizen engagement platform, launched a COVID-19 chatbot in less than two weeks using an artificial intelligence (AI) service for building conversational interfaces into any application using voice and text. The chatbot provides health advice and recommendations on whether to get a test for COVID-19, information on the nearest COVID-19 testing facility, the ability to receive test results, and the option for citizens to report COVID-19 symptoms for themselves, their family members, or other household members.
In addition, early in the COVID-19 crisis, New York City-based MetroPlusHealth identified approximately 85,000 at-risk individuals (e.g., comorbid heart or lung disease, or immunocompromised) who would require additional support services while sheltering in place. In order to engage and address the needs of this high-risk population, MetroPlusHealth developed ML-enabled solutions including an SMS-based chatbot that guides people through self-screening and registration processes, SMS notification campaigns to provide alerts and updated pandemic information, and a community-based organizations referral platform, called Now Pow, to connect each individual with the right resource to ensure their specific needs were met.
By providing an easy way for patients to access the care, recommendations, and support they need, ML has given providers the ability to innovate and scale their telehealth platforms to support diverse and continuously changing community needs. Agile, scalable, and accessible telehealth continues to be important as providers look for ways to reach and engage patients in hard-to-reach or rural areas and those with mobility issues. Organizations and policymakers globally need to make telehealth and easy access to care a priority now and going forward in order to close critical gaps in care.
Beyond the unprecedented shifts in the approach to engaging, supporting and treating patients, COVID-19 has dictated clear direction for the future of patient care: precision medicine.
Guidelines for patient care planning have shifted from statistically significant outcomes gathered from a general population to outcomes based on the individual. This gives clinicians the ability to understand what type of patient is most prone to have a disease, not just what sort of disease a specific patient has. Being able to predict the probability of contracting a disease far in advance of its onset is important to determining and initiating preventative, intervening, and corrective measures that can be tailored to each individual's characteristics.
One of the best examples of how ML is enabling precision medicine is biotech company Moderna's ability to accelerate every step of the process in developing an mRNA vaccine for COVID-19. Moderna began work on its vaccine the moment the novel coronavirus's genetic sequence was published. Within days, the company had finalized the sequence for its mRNA vaccine in partnership with the National Institutes of Health.
Moderna was able to begin manufacturing the first clinical-grade batch of the vaccine within two months of completing the sequencing, a process that historically has taken up to 10 years.
Personalized health isn't only about treating disease, it's about providing access to resources and information specific to a patient's needs. ML is playing a key role in curating content that can help to educate and support patients, caregivers and their families.
Breastcancer.org allows individuals with breast cancer to upload their pathology report to a private and secure personal account. The organization uses ML-based natural language processing to analyze and understand the report and create personalized information for the patient based on their specific pathology.
For the last decade, organizations have focused on digitizing healthcare. Today, making sense of the data being captured will provide the biggest opportunity to transform care. Successful transformation will depend on enabling data to flow where it needs to be at the right time while ensuring that all data exchange is secure.
Interoperability is by far one of the most important topics in this discussion. Today, most healthcare data is stored in disparate formats (e.g., medical histories, physician notes and medical imaging reports), which makes extracting information challenging. ML models trained to support healthcare and life sciences organizations help solve this problem by automatically normalizing, indexing, structuring and analyzing data.
ML has the potential to bring data together in a way that creates a more complete view of a patient's medical history, making it easier for providers to understand relationships in the data and compare specific data to the rest of the population. Better data management and analysis leads to better insights, which lead to smarter decisions. The net result is increased operational efficiency for improved care delivery and management, and most importantly, improved patient experiences and health outcomes.
Looking ahead, imagine a time when pernicious medical conditions like cancer and diabetes can be treated with tailored medicines and care plans enabled by AI and ML. The pandemic was a turning point for how ML can be applied to tackle some of the toughest challenges in the healthcare industry, though we've only just scratched the surface of what it can accomplish.
Taha Kass-Hout is the director of machine learning for Amazon Web Services.
How to upskill your team to tackle AI and machine learning – VentureBeat
Women in the AI field are making research breakthroughs, spearheading vital ethical discussions, and inspiring the next generation of AI professionals. We created the VentureBeat Women in AI Awards to emphasize the importance of their voices, work, and experience, and to shine a light on some of these leaders. In this series, publishing Fridays, we're diving deeper into conversations with this year's winners, whom we honored recently at Transform 2021. Check out last week's interview with a winner of our AI rising star award.
No one got more nominations for a VentureBeat AI award this year than Katia Walsh, a reflection of her career-long effort to mentor women in AI and data science across the globe.
For example, Mark Minevich, chair of AI Policy at the International Research Center of AI under UNESCO, said, "Katia is an impressive, values-driven leader [who has] been a diversity champion and mentor of women, LGBTQ, and youth at Levi Strauss & Co, Vodafone, Prudential, Fidelity, Forrester, and in academia over many years." And Inna Saboshchuk, a current colleague of Walsh's at Levi Strauss & Co, said, "A single conversation with her will show you how much she cares for the people around her, especially young professionals within AI."
In particular, these nominators and many others highlighted Walsh's efforts to upskill team members. Most recently, she launched a machine learning bootcamp that allowed people with no prior experience not only to learn the skills, but to apply them every day in their current roles.
VentureBeat is thrilled to present Walsh with this much-deserved AI mentorship award. We recently caught up with her to learn more about the early success of her latest bootcamp, the power of everyday mentorship, and the role it can play in humanizing AI.
This interview has been edited for brevity and clarity.
VentureBeat: You received a ton of nominations for this award, so clearly you're making a real impact. How would you describe your approach to AI mentorship?
Katia Walsh: My approach is not specific to AI mentorship, but rather overall leadership. I consider myself to be a servant leader, and I see my job as serving the people on my teams, my partners' teams, and at the companies that I have the privilege to work for. My job is to remove barriers to help them grow, learn, engage, and mobilize others to succeed. So that extends to AI, but it's not limited to that alone.
VentureBeat: Can you tell us about some of the specific initiatives you've launched? I know at Levi Strauss & Co, for example, you recently created a machine learning bootcamp to train more than 100 employees who had no prior machine learning experience, most of them women. That's amazing.
Walsh: Absolutely. So we are still in the process. We just started our first cohort between April and May, where we took people with absolutely no experience in coding or statistics from all areas of the company, including warehouses, distribution centers, and retail stores, and sought to make sure we gave people across geographies and across the company the opportunity to learn machine learning and practice it in their day job, regardless of what that day job was.
So we trained the first cohort of 43 people, 63% of whom were women, in 14 different locations around the world. And that's very important because diversity comes in so many different ways, including cultural and geographic diversity. And so that was very successful; every single one of those employees completed the bootcamp. And now we're about to start our second cohort of 60 people, which will begin in September and finish in November.
VentureBeat: I'm glad you mentioned those different aspects of diversity, because the industry is full of conversations around diversity, inclusion efforts, and ethical AI, some of them more genuine than others. So how does AI mentorship ladder up to all that?
Walsh: I see it as just another platform to make an impact. AI is such an exciting field, but it can also be seen as intimidating. Some people don't know if it's technology or business, but the answer is both. In fact, AI is part of our personal lives as well. One of my goals is to humanize the field of AI so that everyone understands the benefits and feels the freedom and the power to contribute to it. And by feeling that, they will in turn help make it even more diverse. At the end of the day, at this point at least, AI is the product of human beings, with all of human beings' mindsets, capabilities, and limitations. And so it's also imperative to ensure that when we create algorithms, use data, and deliver digital products, we do our very best to really reflect the world we live in.
VentureBeat: We talked about initiatives, but of course mentorship is also about everyday mentorship-like interactions, such as with one's manager or an industry connection. How important are these, not just for personal development, but also for running a business and being part of a team?
Walsh: That's actually probably the most important stage. Our daily lives revolve around what might be considered the mundane: meetings, tasks, assignments, deadlines. And that's actually where we can make the most impact. Mentorship is really not about doing something special and extra, but rather making sure that, as part of our daily lives and daily responsibilities and jobs, we think about whether we're being equitable, fair, and doing everything we can to bring diversity. But it can't be a box to check; it has to become part of how we think and act every hour of every single day.
VentureBeat: Are there any misconceptions about mentorship you think are important to clear up, or often overlooked aspects of mentorship you think everyone should know about?
Walsh: One thing that comes to mind is this idea that women can only be mentored by other women. That's actually not the case. And in my own experience, I've had the great privilege of working with men who have themselves taken a chance on me, given me opportunities, and given me responsibilities even before I felt ready. And I really appreciate that. So everyone can be a mentor to women and all genders, including fluid genders, regardless of their own gender, job, or role.
VentureBeat: And do you have any advice for everyone, but especially business leaders, about how they can be better mentors? Or what about advice for people looking to be mentored about how to make the most out of those relationships and everyday interactions?
Walsh: I'll address the mentee question first. I've really been impressed with people who, even at a very young age, have had the courage, incentive, and initiative to reach out and say, "I want to learn from you. Can you spend a few minutes with me?" I always take the call. So I really encourage people to feel that strength and to take the initiative to reach out to people they think they can learn from. And I encourage those who are mentors to also take that call and to proactively encourage others to stay connected with them. One of the things I did was actually give my cell phone number to everyone in my company. It's not commonly done, but I've put it in our town hall chat because I want people to feel that connection. I don't want anyone to feel intimidated by a title or where someone sits in a company. AI, data, and digital are truly transversal. They're horizontal and cut across everything in a company. So it's part of what I do in my function, but it's also part of really wanting to contribute to diversity and mentorship.
Read more:
How to upskill your team to tackle AI and machine learning - VentureBeat
Benefits of Pursuing a Career in Machine Learning In 2022 – Analytics Insight
Machine learning is one of the fastest-growing fields in the world right now, and reports show that machine learning engineers are in high demand. Nearly every industry now has a huge number of applications for artificial intelligence, which is the main reason demand for jobs in the field is so strong. If you are still weighing your many career options, now is a good time to consider a career in machine learning.
Along with AI, machine learning is the fuel required to power robots. With machine learning, you can build programs that are easily updated and modified to adapt to new conditions and tasks, getting things done quickly and efficiently.
Here are a few benefits of pursuing a career in machine learning:
Despite its remarkable growth, machine learning faces a skills shortage. If you can meet the needs of large organizations by acquiring the necessary machine learning skills, you will have a secure future in a technology that is on the rise.
Machine learning promises to solve issues businesses face every day. As a machine learning engineer, you will tackle many challenges and develop solutions that profoundly affect how organizations and individuals thrive. A job that lets you work on and solve varied challenges is deeply satisfying, and every day brings new opportunities to learn and grow. You will also observe trends firsthand, which helps you stay relevant in the marketplace and increases your value to your employer.
Machine learning is still in its early stages, and as the technology develops and advances, you will have the insight and skill to pursue a successful career and build a future for yourself. The average salary of a machine learning engineer is one of the top reasons why machine learning appears to be a worthwhile career to young minds.
Machine learning skills help you broaden your career path. With the relevant machine learning skills, you can also start a career as a data scientist, killing two birds with one stone. Become a valuable asset by gaining expertise in both fields and set out on a journey filled with challenges, endless opportunities, and knowledge.
What are the other jobs you can get if you pursue a career in machine learning?
Machine learning engineers develop applications and solutions that automate tasks. Most of these are repetitive tasks based on condition-and-action sets that machines can perform efficiently and without errors.
A few other jobs available in the field are ML data scientist, ML software engineer, and ML architect. A software engineer with a good grasp of Python and the core ML libraries can switch careers into machine learning. A machine learning professional has an edge in the field if they know areas such as probability and statistics, system design, ML algorithms and libraries, data modeling, and programming languages.
In conclusion, pursuing a career in machine learning is a great way to become part of the digital revolution happening in sectors like healthcare, hospitality, banking, logistics, manufacturing, and many more. Machine learning skills can make you a first pick in any sector, opening the door to various opportunities.
See the rest here:
Benefits of Pursuing a Career in Machine Learning In 2022 - Analytics Insight
Research Fellow in Adversarial Machine Learning for Transportation (EPSRC MACRO) job with CRANFIELD UNIVERSITY | 264376 – Times Higher Education (THE)
School/Department: School of Aerospace, Transport and Manufacturing
Based at: Cranfield Campus, Cranfield, Bedfordshire
Hours of work: 37 hours per week, normally worked Monday to Friday. Flexible working will be considered.
Contract type: Fixed term contract
Fixed Term Period: 15 Months
Salary: Full time starting salary is normally in the range of £33,809 to £37,684 per annum, with potential progression up to £47,105 per annum
Apply by: 03/10/2021
Role Description
Cranfield University's world-class expertise, large-scale facilities and unrivalled industry partnerships are creating leaders in technology and management globally. Learn more about Cranfield and our unique impact here.
We welcome applications from prospective Research Fellows in Adversarial Machine Learning for Transportation. This exciting role is part of a larger project funded by EPSRC.
About the School of Aerospace, Transport and Manufacturing
The School of Aerospace, Transport and Manufacturing (SATM) is a leading provider of postgraduate level engineering education, research and technology support to individuals and organisations. At the forefront of aerospace, manufacturing and transport systems technology and management for over 70 years, we deliver multi-disciplinary solutions to the complex challenges facing industry.
About the Role
Our reputation for leadership in the field of digital systems (sensor data, communications, machine learning, and reasoning) has been established through more than thirty years of research in this field. In this project we are primarily focused on secure AI/ML for transportation and mobility as a service (MaaS). Our work covers academic provision (MSc and PhD) and research, spanning from fundamental research and development to single-client contract research and development.
As Research Fellow you will contribute to the research activities of the Centre for Autonomous and Cyberphysical Systems, especially concerning the specific activities of: (1) machine learning for the transportation sector (especially mobility as a service), (2) adversarial attack modelling in AI/ML, and (3) co-designing secure AI systems in the mobility-as-a-service sector.
About You
You will be expected to collaborate with the existing staff working on the same EPSRC project and in the area, and to communicate and meet with our collaborators within the university, with industrial and government partners, and at other universities.
You will be educated to doctoral level in a relevant subject and have experience of management research using both qualitative and quantitative methods. With excellent communication skills, you will have expertise in social network analysis and a background in Health & Safety would be an advantage. In return, the successful applicant will have exciting opportunities for career development in this key position, and to be at the forefront of world leading research and education, joining a supportive team and environment.
Our Values
Our shared, stated values help to define who we are and underpin everything we do: Ambition; Impact; Respect; and Community. Find out more here. We aim to create and maintain a culture in which everyone can work and study together and realise their full potential.
Diversity and Inclusion
Our equal opportunities and diversity monitoring has shown that women are currently underrepresented within the university and so we actively encourage applications from eligible female candidates. To further demonstrate our commitment to progressing gender diversity in STEM, we are members of WES & Working Families, and sponsors of International Women in Engineering Day.
Flexible Working
We actively consider flexible working options such as part-time, compressed or flexible hours and/or an element of homeworking, and commit to exploring the possibilities for each role. Find out more here.
How to Apply
Please do not hesitate to contact us for further details on E: hr@cranfield.ac.uk. Please quote reference number 3744.
Closing date for receipt of applications: 3 October 2021
The rest is here:
Research Fellow in Adversarial Machine Learning for Transportation (EPSRC MACRO) job with CRANFIELD UNIVERSITY | 264376 - Times Higher Education (THE)
COVID-19 showed why the military must do more to accelerate machine learning for its toughest challenges – C4ISRNet
As recent events have shown, military decision-making is one of the highest-stakes challenges in the world: Diplomatic relations are at stake; billions of dollars of tax-funded budgets are in the balance; the safety and well-being of thousands of military and civilian personnel around the globe are on the line; and above all, the freedom and liberty of the United States and its more than 330 million citizens must be protected. But with such immense stakes comes an almost unfathomably large amount of related data that must be taken into account. Whether it is managing population health in an increasingly complex and connected world, or managing decisions on the network-centric battlefield, standalone humans are proving insufficient to harness the data, analyze it, and make timely and correct decisions.
Spanning six branches and upward of 1.3 million active-duty military personnel on all seven continents, how can all of the data points, from dictates from the commander-in-chief to handwritten notes on the deck of an aircraft carrier, be taken into account? In matters of national security, speed and reliability in decision-making, and avoiding technological surprises or being caught off guard by the nation's political rivals, require massive real-time analysis and first- and second-order thinking that includes the complexities of human behavior.
Consider all of the stakes and moving parts facing the leadership at a large domestic military base during the recent COVID-19 pandemic. Concerns of COVID-19 did not just need to consider the base personnel, but also the behavior of the civilians in the surrounding counties, as people from throughout the region, military and civilian contractors alike, were coming and going daily. The information necessary to consider starts with infection and hospitalization rates, but also includes behavior monitoring (and influencing) as well as staying up to date with steps being taken by local, regional and state officials to monitor the virus and limit its spread. With so many moving parts, it is very difficult to stay up to the minute on everything and to determine the right decision with any degree of certainty.
The answer to this guesswork and analysis paralysis lies in the capabilities of artificial intelligence and machine learning. If the military continues to spend too many human hours on effort and analysis that could be handled by machines, the cost could be danger and even death for military personnel or civilians. At the heart of complex systems such as the U.S. military, there is a critical tipping point where the systems become so complex that humans can no longer track them. But AI solutions are capable of delivering up-to-the-minute data modeling, considering all factors at play along with second- and third-order consequences, to present tangible, data-driven intelligence that goes far beyond the limitations of linear human minds. Perhaps the biggest benefit is the confidence to avoid negative publicity from the "podium moment," when asked to justify decisions. Decision-makers can confidently move beyond relying on hunches and instead draw on data based on sub-indexes, models from experts, and simulations specific to that day and to the circumstances of each facility.
When President Biden was recently called on the carpet to explain the rapid fall of Afghanistan in nine days, he should have had an AI that could at least explain the data, models, and weights that fed the analysis, conclusions, and decisions based on the belief that the 300,000-strong Afghan army would be able to hold off the 60,000 Taliban fighters long enough for an orderly withdrawal. Journalists would then be free to question the data sources, the models, or the weightings, but not the president, who would be relying on these systems for his judgment. More importantly, such a system would almost certainly have predicted this rapid fall in its Monte Carlo distribution of potential outcomes, and would have generated countermeasures and cautions.
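A Monte Carlo distribution of potential outcomes, as invoked above, is conceptually simple: run many simulations with uncertain inputs drawn at random and tabulate how often each outcome occurs. The sketch below is purely illustrative, with invented parameters (a random "cohesion" factor and daily attrition rate) standing in for real intelligence estimates:

```python
import random

# Illustrative Monte Carlo sketch: how long might a defending force
# hold out under uncertain daily attrition and unknown cohesion?
# All parameters and thresholds here are invented, not real estimates.
def simulate_holdout(trials: int = 10_000, seed: int = 0) -> dict:
    rng = random.Random(seed)
    outcomes = {"rapid_collapse": 0, "slow_collapse": 0, "holds": 0}
    for _ in range(trials):
        cohesion = rng.uniform(0.1, 1.0)  # unknown willingness to fight
        strength, days = 1.0, 0
        while strength > 0.5 and days < 180:
            # Lower cohesion amplifies the effect of daily attrition.
            strength -= rng.uniform(0.0, 0.02) / cohesion
            days += 1
        if days < 30:
            outcomes["rapid_collapse"] += 1
        elif days < 180:
            outcomes["slow_collapse"] += 1
        else:
            outcomes["holds"] += 1
    # Convert counts to an outcome probability distribution.
    return {k: v / trials for k, v in outcomes.items()}

print(simulate_holdout())
```

The point of such a simulation is exactly the one the author makes: a rapid collapse shows up as a non-trivial slice of the outcome distribution rather than as a surprise, and the inputs and weights can be inspected and challenged.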
Without a deeper commitment to AI, the military risks missing out on intelligence that transcends classified, siloed and otherwise restricted information without compromising security. One of the biggest challenges to high-stakes decision-making in the military is silos of classified information, making it difficult or impossible for every party to know every factor that is shaping the situation.
Using AI and machine learning solves this challenge safely. Rather than dumping disparate data from various military branches and clearance levels into one gigantic data lake, it is possible to leave all the data safely and securely where it is and train a machine to know, and to inform the human decision-makers, that the data exists. AI is capable of processing not only all of the information in the corpus but also of knowing which parties do and do not have clearance for each individual piece of data. In matters of classified information, it can tell different personnel that the information exists and direct them to the authority qualified to disclose it.
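The discovery pattern described above, revealing that a record exists without revealing its contents, can be sketched in a few lines. The clearance levels, records, and disclosure authorities below are hypothetical, invented only to illustrate the idea:

```python
from dataclasses import dataclass

# Hedged sketch of clearance-aware data discovery: the index can tell
# a user that a relevant record exists without exposing its contents.
# Levels, records, and authorities here are hypothetical.
LEVELS = {"unclassified": 0, "secret": 1, "top_secret": 2}

@dataclass
class Record:
    title: str
    level: str
    authority: str  # who can approve disclosure

def discover(records, user_level: str):
    results = []
    for r in records:
        if LEVELS[user_level] >= LEVELS[r.level]:
            results.append({"title": r.title, "access": "granted"})
        else:
            # Reveal existence only, and route to the disclosure authority.
            results.append({"title": "[restricted]",
                            "access": f"request via {r.authority}"})
    return results

catalog = [
    Record("Base logistics report", "unclassified", "J4 office"),
    Record("Threat assessment", "top_secret", "J2 office"),
]
print(discover(catalog, "secret"))
```

A user cleared to "secret" sees the logistics report in full but learns only that a restricted record exists and whom to ask about it, which mirrors the routing behavior the paragraph describes.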
Capabilities like these can be readily applied to large, complex military undertakings, featuring processes, decisions and volumes of information. For instance, when a new aircraft carrier is being built, management requires information in hand-written reports. It is difficult for the naked eye to tell if the project is on time or on budget because of the heavy reliance on human judgment. If any human assessment is just a fraction off, it can massively impact the whole project.
Recent challenges that factor in the vagaries of human behavior, illustrated starkly by COVID-19 and the withdrawal from Afghanistan, beg for the rapid analysis and creative input of machine learning systems. From digesting and quantifying countless data points, to absorbing and cataloging the knowledge of experts who will not always be around, to predictive modeling of circumstances with dozens of variables, this amplified intelligence is the key to better outcomes.
Richard Boyd is CEO at Tanjo, a machine learning company.
Originally posted here:
COVID-19 showed why the military must do more to accelerate machine learning for its toughest challenges - C4ISRNet