Category Archives: Machine Learning
Developing Machine-Learning Apps on the Raspberry Pi Pico – Design News
Starting on Monday, October 24, and running through October 28, Design News will present the free course, Developing Machine-Learning Applications on the Raspberry Pi Pico. Each class runs an hour, beginning at 2:00 Eastern. You can also earn IEEE Professional Development Hours for participating. If you cannot attend on that schedule, the course will be available on demand.
The Raspberry Pi Pico is a versatile, low-cost development board suited to many applications. Course instructor Jacob Beningo will explain how to get up and running with the Raspberry Pi Pico. He'll mainly focus on how to develop machine-learning applications and deploy them to the Pico, using gesture detection as an example application. Attendees will walk away understanding machine learning, the Pico, and best practices for working with both.
Related: Learn DC Motor Controls with the Raspberry Pi 2040 Pico
Here's a day-by-day breakdown of Developing Machine-Learning Applications on the Raspberry Pi Pico:
Day 1: Getting Started with the Raspberry Pi Pico and Machine Learning
Related: 3 Tips for Rapid Prototyping with the Raspberry Pi Pico
In this session, we will introduce the Raspberry Pi Pico development board, based on the low-cost, high-feature RP2040 microcontroller. We will explore the Pico board features and why the board is well suited for machine-learning applications. Attendees will walk away understanding the Pico board and the fundamentals of machine learning on microcontroller-based devices.
Day 2: Machine-Learning Tools and Process Flow
There are a wide variety of tools developers use to deploy machine-learning models to the Raspberry Pi Pico. In this session, we will explore the various tools embedded software developers might be interested in using. Attendees will also learn about the general machine-learning process flow and how it fits within the standard embedded software programming model.
Day 3: Collecting Sensor Data Using Edge Impulse
Before a developer creates a machine-learning model, they must first collect the data used by the model. This session will explore how to connect and collect sensor data using Edge Impulse. We'll discuss how much data to collect and the various options for doing so. Attendees will walk away understanding how to prepare their data for training and eventual deployment to the Pico.
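For a flavor of what that collection step can look like, here is a minimal sketch (MicroPython, which the Pico supports) that streams accelerometer readings over USB serial in the comma-separated format the Edge Impulse data forwarder expects; read_accel() is a placeholder for a real IMU driver, not something from the course itself:

```python
# Hedged sketch: stream comma-separated accelerometer samples over USB
# serial at a steady rate for the Edge Impulse data forwarder.
# read_accel() is a placeholder for your actual I2C/SPI IMU driver.
import time

SAMPLE_HZ = 100
PERIOD_MS = 1000 // SAMPLE_HZ

def read_accel():
    # Placeholder: return (x, y, z) acceleration from your sensor driver.
    return (0.0, 0.0, 1.0)

while True:
    t0 = time.ticks_ms()
    x, y, z = read_accel()
    print("%.4f,%.4f,%.4f" % (x, y, z))
    # Hold the sample rate steady so the forwarder infers the frequency.
    time.sleep_ms(max(0, PERIOD_MS - time.ticks_diff(time.ticks_ms(), t0)))
```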
Day 4: Designing and Testing a Machine-Learning Model
With sensor data collected, developers will want to use their data to train and test a machine-learning model. In this session, we will use the data we gathered in the previous session to train a model. Attendees will learn how to train a model and examine the training results in order to get the desired outcomes from their model.
Day 5: Deploying Machine-Learning Models and Next Steps
In this session, we will use the model we trained in the last session and deploy it to the Raspberry Pi Pico. We'll investigate several methods to deploy the model and test how well the model works on the Raspberry Pi Pico. Attendees will see how to deploy the model and learn about the next steps for using the Raspberry Pi Pico for machine-learning applications.
Meet your instructor: Jacob Beningo is an embedded software consultant who currently works with clients in more than a dozen countries to dramatically transform their businesses by improving product quality, cost, and time to market. He has published more than 300 articles on embedded software development techniques, has published several books, is a sought-after speaker and technical trainer, and holds three degrees, including a Master of Engineering from the University of Michigan.
Digi-Key Continuing Education Center, presented by Design News, will get you up to working speed quickly in a host of technologies you've been meaning to study but haven't had time for, all without leaving the comfort of your lab or office. Our faculty of expert tutors has divided the interdisciplinary world of design engineering into five dimensions: microcontrollers (basic and advanced), sensors, wireless, power, and lighting.
You can register for the free class here.
Adversaries may be poisoning your machine learning engine. Here’s how to undo the damage. – SC Media
Machine learning has many useful cybersecurity applications, provided the technology behind it isn't unduly influenced or tampered with by malicious actors and its data integrity sabotaged.
Donnie Wendt, principal security researcher at Mastercard, calls this the "Uncle Ronnie effect": "When my son was little, before he'd go over to visit my brother, his Uncle Ronnie, I'd say, 'Please, please, don't learn anything new from him.' Because I know he's going to teach my son something bad," Wendt told SC Media in an interview at the CyberRisk Alliance's 2022 InfoSec World Conference in Orlando, Florida.
Likewise, an adversary can compromise a machine learning system by teaching it bad habits that its operators must then undo, if they can even detect the breach in the first place.
On the cyber front, properly trained machine-learning systems can help with such tasks as classifying malware, identifying phishing attempts, intrusion detection, behavioral analytics, and predicting if and when a vulnerability will be exploited. But there are ways to skew the results.
"Our adversaries will try to figure out how to circumvent [machine learning] classification, oftentimes by injecting adversarial samples that will poison the training," explained Wendt, who presented on this very topic earlier this week at the InfoSec World conference. Alternatively, bad actors could launch an inference attack to gain unauthorized access to the data used to train the machine learning system.
To protect against attacks launched on machine learning models, Wendt recommended conducting proper data sanitization and also ensuring "that you have proper version control and access control around your data so that, if there is an attack, you can go back to prior versions of that data and rerun that model and look for drift." If you find evidence of wrongdoing, then you can at least undo whatever it is that troublemaking Uncle Ronnie taught your machine learning system.
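As a rough sketch of that advice, assuming versioned snapshots of the training data are available (load_version() and the 5% threshold below are illustrative assumptions, not from Wendt's talk), a drift check might look like this:

```python
# Sketch: retrain on a trusted prior data version, retrain on the current
# version, and compare holdout accuracy to spot drift from poisoning.
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score

def train_and_score(X_train, y_train, X_holdout, y_holdout):
    model = RandomForestClassifier(random_state=0).fit(X_train, y_train)
    return accuracy_score(y_holdout, model.predict(X_holdout))

def drift_check(load_version, X_holdout, y_holdout, threshold=0.05):
    X_old, y_old = load_version("v1")  # trusted snapshot
    X_new, y_new = load_version("v2")  # current, possibly poisoned
    old_acc = train_and_score(X_old, y_old, X_holdout, y_holdout)
    new_acc = train_and_score(X_new, y_new, X_holdout, y_holdout)
    if old_acc - new_acc > threshold:
        print("Possible poisoning: accuracy fell from %.3f to %.3f"
              % (old_acc, new_acc))
```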
For more insight on how machine learning can be maliciously influenced, watch the embedded video below.
Google turns to machine learning to advance translation of text out in the real world – TechCrunch
Google is giving its translation service an upgrade with a new machine learning-powered addition that will allow users to more easily translate text that appears in the real world, like on storefronts, menus, documents, business cards, and other items. Rather than covering up the original text with the translation, the new feature will smartly overlay the translated text on top of the image, while also rebuilding the pixels underneath with an AI-generated background to make reading the translation feel more natural.
"Often it's that combination of the word plus the context, like the background image, that really brings meaning to what you're seeing," explained Cathy Edwards, VP and GM of Google Search, in a briefing ahead of today's announcement. "You don't want the translated text to cover up that important context that can come through in the images," she said.
Image Credits: Google
To make this process work, Google is using a machine learning technology known as generative adversarial networks, otherwise known as GAN models, the same technology that powers the Magic Eraser feature for removing objects from photos taken on Google Pixel smartphones. This advancement will allow Google to blend the translated text into even very complex images, making the translation feel natural and seamless, the company says. It should seem as if you're looking at the item or object itself with translated text, not an overlay obscuring the image.
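Google has not published its implementation, but the erase-and-redraw idea can be sketched with classical OpenCV inpainting standing in for the GAN; the text box and translated string below are assumed inputs from upstream OCR and translation steps:

```python
# Sketch of the translate-and-overlay idea; cv2.inpaint is a classical
# stand-in for Google's GAN-based background reconstruction.
import cv2
import numpy as np

def overlay_translation(img, box, translated_text):
    x, y, w, h = box
    # Mask the region containing the original text.
    mask = np.zeros(img.shape[:2], dtype=np.uint8)
    mask[y:y + h, x:x + w] = 255
    # Rebuild the pixels underneath the masked text.
    clean = cv2.inpaint(img, mask, 3, cv2.INPAINT_TELEA)
    # Draw the translated text over the reconstructed background.
    cv2.putText(clean, translated_text, (x, y + h),
                cv2.FONT_HERSHEY_SIMPLEX, 1.0, (0, 0, 0), 2, cv2.LINE_AA)
    return clean
```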
The feature is another development that seems to point to Google's plans to further invest in the creation of new AR glasses, as an ability to translate text in the real world could be a key selling point for such a device. The company noted that every month, people use Google to translate text and images over a billion times in more than 100 languages. It also this year began testing AR prototypes in public settings with a handful of employees and trusted testers, it said.
While there's obvious demand for better translation, it's not clear if users will prefer to use their smartphone for translations rather than special eyewear. After all, Google's first entry into the smartglasses space, Google Glass, ultimately failed as a consumer product.
Google didn't speak to its long-term plans for the translation feature today, noting only that it would arrive sometime later this year.
A Machine Learning Model for Predicting the Risk of Readmission in Community-Acquired Pneumonia – Cureus
Background
Pneumonia is a common respiratory infection that affects all ages, with a higher rate anticipated as age increases. It is a disease that impacts patient health and the economy of the healthcare institution. Therefore, machine learning methods have been used to guide clinical judgment in disease conditions and can recognize patterns based on patient data. This study aims to develop a prediction model for the readmission risk within 30 days of patient discharge after the management of community-acquired pneumonia (CAP).
Univariate and multivariate logistic regression were used to identify the statistically significant factors that are associated with the readmission of patients with CAP. Multiple machine learning models were used to predict the readmission of CAP patients within 30 days by conducting a retrospective observational study on patient data. The dataset was obtained from the Hospital Information System of a tertiary healthcare organization with facilities across Saudi Arabia. The study included all patients diagnosed with CAP from 2016 until the end of 2018.
The collected data included 8,690 admission records related to CAP for 5,776 patients (2,965 males, 2,811 females). The results of the analysis showed that patient age, heart rate, respiratory rate, medication count, and the number of comorbidities were significantly associated with the odds of being readmitted. All other variables showed no significant effect. We ran four algorithms to create the model on our data. The decision tree gave an accuracy of 83%, while support vector machine (SVM), random forest (RF), and logistic regression achieved a better accuracy of 90%. However, because the dataset was unbalanced, the precision and recall for readmission were zero for all models except the decision tree, with 16% and 18%, respectively. After applying the Synthetic Minority Oversampling Technique (SMOTE) to balance the training dataset, the results did not change significantly; the highest precision achieved was 16% in the SVM model. RF achieved the highest recall with 45%, but without any advantage to this model because the accuracy was reduced to 65%.
Pneumonia is an infectious disease with major health and economic complications. We identified that less than 10% of patients were readmitted for CAP after discharge; in addition, we identified significant predictors. However, our study did not have enough data to develop a proper machine learning prediction model for the risk of readmission.
The Centers for Disease Control and Prevention (CDC) has defined pneumonia as a lung infection that can cause mild to severe illnesses in people of all ages. As a worldwide infectious disease, pneumonia is the leading cause of death in children less than five years of age [1]. In the United States, pneumonia results in around one million admissions yearly, and up to 20% of these admissions require intensive care, leading to more than 50,000 deaths. Pneumonia is also responsible for almost 140,000 hospital readmissions, costing up to 10 billion US dollars [2]. In a study that included patients discharged after being treated for pneumonia, almost 7.3% were readmitted in less than a month [3]. However, the average rate of readmission increased to 18.5% in 2012 [4]. With the increase in readmission rates, the Centers for Medicare and Medicaid Services (CMS) recognized the financial impact of hospital readmissions and started to assess the event by introducing readmission adjustment factors for hospital reimbursement [5]. The focus on improving the readmission rate was driven not only by financial incentives but also by the goal of improving the quality of patient care [4].
The impact of readmission has raised awareness of the need to investigate its causes and risk factors. Many studies have been conducted to investigate these factors. Hebert et al. found that the risk factors identified can vary from pneumonia-related (35.8%) to comorbidity-related reasons [3], while others recognized instability on discharge and treatment failure as possible causes [6]. A systematic review that studied social factors as possible readmission risk factors concluded that age (elderly), race (non-white), education (low), unemployment, and low income were associated with a higher risk of hospital readmission [7]. In another study investigating the history of readmitted patients, 50.2% of the readmitted patients had a medical discharge with no follow-up visits until readmission. It should be noted that the readmission length of stay was reported to be almost 0.6 days longer than the usual length of stay [8]. Epstein et al. also highlighted a proportional relationship between overall hospital admission and readmission rates [9].
Machine learning methods have been used to guide clinical judgment in gray areas as they can predict a pattern based on data and help decision-making. In 1959, machine learning was defined as a "field of study that gives computers the ability to learn without being explicitly programmed." Many prediction algorithms have been proposed and successfully used in healthcare, such as support vector machine (SVM), decision tree (DT), and logistic regression, and have been tested in many medical conditions, such as diabetes, among others [10].
Prediction models for readmission are important because they can help overcome all of the previously mentioned complications. Because readmission is difficult to predict, such a model can guide healthcare providers (HCPs) in improving and supporting clinical decision-making during healthcare episodes while considering the risk of readmission [4]. As a result, the CMS has started readmission reduction program initiatives to push healthcare systems to find new, innovative ways to predict and tackle these issues [11]. Multiple researchers have developed predictive models to identify patients who are more likely to be readmitted [12].
In 2016, a systematic review identified all used models and described their performances. The study concluded that only a few validated models predict the risk of readmission of pneumonia patients, that the published models were moderately acceptable, and that any future model should have additional parameters to improve prediction accuracy or, as suggested by O'Brien et al., include all available patient care information if possible [13,14].
Another study, aimed at predicting readmissions in general rather than specifically for pneumonia, at King Abdulaziz Medical City (KAMC) in Riyadh, Saudi Arabia, used neural network (NN), SVM, rules-based techniques, and C4.5 DT, and concluded that the models achieved higher accuracy levels when the prediction was designed for readmission within 15 days only. Furthermore, the study suggested that SVM was the best model. Of all the variables used in the training set, visit history, lab tests, patient age, and the length of stay before discharge carried more weight in the prediction than the other variables [15].
Furthermore, other researchers have developed logistic regression models with stepwise removal that yielded better predictions [4]. Other studies did not disclose the prediction models used, simply reporting logistic regression without further details [6]. Thus, this study aims to build a prediction model for the readmission risk within 30 days of discharge in patients discharged after community-acquired pneumonia (CAP) management. The main contributions of this study are testing previously used models while adding more parameters to improve the prediction model. The parameters are based on literature recommendations and data availability. In addition, we aimed to evaluate the quality of the Hospital Information System (HIS) datasets and use the model for feature selection from the dataset to identify risk factors and variations between subgroups.
Multiple machine learning models were used to predict readmission of CAP patients within 30 days of diagnosis by conducting a retrospective observational study on patient data. The dataset was obtained from multiple tertiary healthcare organizations in Saudi Arabia. All patients diagnosed with CAP from 2016 until the end of 2018 were included. We excluded pediatric patients (less than 18 years old) and patients who died within 30 days of discharge. Ethical approval was obtained from the Institutional Review Board (IRB) of King Abdullah International Medical Research Center because the acquired data were de-identified.
The raw data were segregated into multiple datasets. The datasets obtained contained different attributes, including the history of pneumonia admissions, vital signs, radiology, laboratory, medication list, and comorbidities. For modeling, different data transformation and cleaning processes were conducted on the dataset. The length of stay (LOS) attribute was set to 1 for patients discharged in less than a day. The age attribute was derived from the date of birth and the admission date. All comorbidities, medications, radiology examinations, and lab results were transformed into dummy attributes with the values 1 and 0 for each admission. Attributes with missing values were filled with the median. Lastly, a new attribute was derived to represent the class labels (readmission within 30 days of discharge and no readmission) as 0 and 1. The Python programming language and the scikit-learn machine learning package were used for dataset cleaning and modeling [16].
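A minimal sketch of that cleaning pipeline in pandas follows; the column names are illustrative assumptions, not the study's actual schema:

```python
# Sketch of the described cleaning steps; column names are assumptions.
import pandas as pd

df = pd.read_csv("admissions.csv", parse_dates=["birth_date", "admit_date"])

# Floor length of stay at one day for same-day discharges.
df.loc[df["los_days"] < 1, "los_days"] = 1

# Derive age from date of birth and admission date.
df["age"] = (df["admit_date"] - df["birth_date"]).dt.days // 365

# One-hot encode categorical attributes into 0/1 dummy columns.
df = pd.get_dummies(df, columns=["comorbidity", "medication"])

# Binary class label first: readmitted within 30 days of discharge.
df["readmit_30d"] = (df["days_to_readmission"] <= 30).astype(int)
df = df.drop(columns=["days_to_readmission"])

# Fill remaining missing values with the column median.
df = df.fillna(df.median(numeric_only=True))
```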
Univariate analyses were deployed using logistic regression to study the significance of the relationship between each factor and the readmission. All significant variables with a p-value of less than 0.05 were then passed to multivariate logistic regression, and the odds ratios were calculated to determine the magnitude of the relationship. Furthermore, various classic and modern machine learning models have been reported in the literature. In this study, to predict readmission within 30 days, we experimented with several algorithms using predictive analytics. The first model was developed using RF with a maximum depth of two levels and 100 estimators [17]. Second, a logistic regression (LR, aka logit, MaxEnt) classifier was developed that implemented regularized logistic regression using the liblinear solver [18]. The third was developed using DTs with a maximum depth of two levels [19]. The fourth was the SVM classifier [20].
A holdout validation technique was used, splitting the dataset into 70% for training and 30% for testing. To measure the performance of the models, precision, recall, f1-score, and accuracy metrics were calculated for both classes, readmitted and not readmitted, in Python using the scikit-learn library [16].
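Under those settings, the modeling and evaluation steps might look roughly like this in scikit-learn; the feature matrix X and label vector y are assumed to come from the cleaned dataset above:

```python
# Sketch of the four classifiers and the 70/30 holdout evaluation.
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.svm import SVC
from sklearn.metrics import classification_report

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=42)

models = {
    "RF": RandomForestClassifier(max_depth=2, n_estimators=100),
    "LR": LogisticRegression(solver="liblinear"),
    "DT": DecisionTreeClassifier(max_depth=2),
    "SVM": SVC(),
}

for name, model in models.items():
    model.fit(X_train, y_train)
    # Precision, recall, f1-score, and accuracy for both classes.
    print(name, classification_report(y_test, model.predict(X_test)))
```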
The collected data included 8,057 admission records related to pneumonia for 5,776 patients, including 2,965 males and 2,811 females. Patients were located at six different hospitals under the Ministry of National Guard Health Affairs. Of these admissions, only 791 were followed by a readmission within 30 days, or 9.1% of the total number of admissions included in the dataset. The minimum age was 18 and the maximum was 119, but more than 60% of the patients were older than 66, with a median age of 70 years (Table 1).
The first step in our analysis was univariate logistic regression with each of the independent variables. The results indicated that all of the variables significantly affected the readmission probability (Table 2).
Subsequently, a multivariate analysis that included all significant factors from the univariate analysis was conducted. The results indicated that keeping other variables constant, the odds of being readmitted were decreased by 1% for each unit increase in age. The odds of being readmitted were 1.01 times higher for every extra temperature unit. The odds of being readmitted decreased by 1% for each unit increase in heart rate. The odds of being readmitted decreased by 2% for each unit increase in the respiratory rate. The odds of being readmitted decreased by 7% for every extra prescribed medication. The odds of being readmitted were 1.08 times higher for every extra comorbidity that the patient had. All other variables showed no significant effect (Table 3).
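Odds ratios of this kind are the exponentiated coefficients of the fitted logistic regression. A statsmodels sketch, with assumed predictor names rather than the study's actual code:

```python
# Sketch: odds ratios are exp(coefficients) of a logistic regression.
# Predictor and dataframe names are assumptions.
import numpy as np
import statsmodels.api as sm

predictors = ["age", "temperature", "heart_rate", "respiratory_rate",
              "medication_count", "comorbidity_count"]
X = sm.add_constant(df[predictors])
result = sm.Logit(df["readmit_30d"], X).fit()

# e.g., an odds ratio of 0.99 for age means ~1% lower odds per extra year.
print(np.exp(result.params))
```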
For building the prediction model, four different machine learning algorithms were used. DT provided an accuracy of 83%, while SVM, RF, and logistic regression achieved a better accuracy of 90%. Because the dataset was unbalanced, the precision and recall for readmission were zero for all models except the DT, with 16% and 18%, respectively. On applying the Synthetic Minority Oversampling Technique (SMOTE) to balance the training dataset, the results did not change considerably, and the highest precision achieved was 16% in the SVM model [21]. The highest recall was achieved by RF at 45%, but without any advantage to this model because the accuracy was reduced to 65%. To understand the impact of the different attribute groups included in the dataset, and to test whether their deletion would have a positive or negative impact on the model, we ran each of the four algorithms on the dataset five more times, each time removing one of the following groups of attributes: comorbidities, labs, radiology results, medication, and vital signs. The DT worked best without the vital signs, whereas the SVM and the logistic regression did not perform better after removing any of them. The relatively best result was achieved by removing comorbidities from the RF model, which had 67% accuracy, 14% precision, and 47% recall (Table 4).
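SMOTE, applied to the training split only so the test set keeps its natural imbalance, can be sketched with the imbalanced-learn package:

```python
# Sketch: oversample the minority (readmitted) class in the training split.
from imblearn.over_sampling import SMOTE
from sklearn.ensemble import RandomForestClassifier

X_train_bal, y_train_bal = SMOTE(random_state=42).fit_resample(X_train, y_train)
model = RandomForestClassifier(max_depth=2, n_estimators=100)
model.fit(X_train_bal, y_train_bal)   # evaluate on the untouched test split
```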
Factors that affect patients' readmission have been studied extensively in the past two decades. In this study, we examined clinical factors that can be linked to the risk of readmission for patients with CAP. The results of the analysis showed that patient age, heart rate, respiratory rate, medication count, and the number of comorbidities that the patient had were significantly associated with the odds of being readmitted. All other variables showed no significant effect. The significant variables from our analysis are all factors that could indicate the severity of the patient's condition in general and during the initial admission.
However, when examining the magnitude of the relationships using odds ratios, we can observe two contradictory themes. The first is that sicker patients have higher odds of readmission than their counterparts, which can be drawn from the odds for temperature and the number of comorbidities. The second theme is that sicker patients have lower odds of being readmitted than their counterparts, which can be drawn from the odds for patient age, heart rate, respiratory rate, and the number of medications prescribed. These two themes can be explained by the behavior of clinicians when dealing with admitted patients.
The researchers' hypothesis is that factors such as age, heart rate, and respiratory rate are directly related to the condition of patients with CAP, whereas temperature and the number of comorbidities are not. This implies that patients who are worse off according to their age, heart rate, and respiratory rate are given more attention and prescribed more medications, which subsequently results in lower odds of being readmitted. This explanation is consistent with another study conducted at the same hospital on seven-day readmission for emergency department patients [22]. Nevertheless, this hypothesis needs further investigation, likely an observational study, to be confirmed or rejected.
To build a prediction model, multiple algorithms and techniques were available for use in this study. However, we selected those that can easily be interpreted by HCPs, where each decision can be translated into if/then rules. Moreover, we included other models that showed promising results in our literature review. After extensive testing of the final clean dataset and multiple trials of creating the prediction model, although the accuracy of the model reached 90%, we did not find any appropriate prediction for readmission. This can be explained by the nature of our dataset: the data for patients readmitted within 30 days of discharge made up only 9.1% of the dataset, which was not enough to create an appropriate prediction.
Looking at the results of the models, the SVM performed poorly, which contradicts the findings reported in the literature. In this study, SVM showed zero recall by predicting that no patients would require readmission. Moreover, as part of the model-training exercise, we tried adding and dropping some of the predictors while using precision and recall to observe the effect on the performance of the models. This exercise resulted in a slight improvement in the RF model after decreasing the number of dummy attributes. On the other hand, better performance of the DT model was noticed after removing the continuous vital-sign attributes. Even with the best version of each model, we conclude that none of these models would qualify as a reliable or valid model for predicting readmissions of patients with CAP.
This poor performance of the models is in part due to the low quality of the dataset that was available for this analysis. For example, the laboratory results were provided as free text instead of structured numeric values, which would have required sophisticated and advanced text-mining efforts to extract the relevant values for our study. Unfortunately, this led to including only the number of labs that were ordered for each patient instead of the lab results themselves. More extensive natural language processing could contribute substantially to creating a model with better performance than ours.
Furthermore, the comorbidity data contained several entries for the same comorbidity for each patient, so the frequency of those comorbidities was not easily derived from the data. In addition, the same comorbidity had multiple descriptions, which made it harder to count. For example, diabetes mellitus, type II diabetes, and type II diabetes mellitus can be found simultaneously and might have been documented differently for the same patient. Moreover, we used the International Statistical Classification of Diseases and Related Health Problems, Tenth Revision, Australian Modification codes for pneumonia to apply the inclusion criteria, which only include patients who had CAP as a primary diagnosis. However, we noticed that different pneumonia codes were used interchangeably in the HIS documentation, even during the same visit, which affected the accuracy of including only CAP patients.
Dealing with such data requires a careful process to ensure that cleaning is done appropriately and does not result in faulty data or data missing from the original dataset. Furthermore, decisions about how to transform the data should be discussed with an expert in the field before each action, not taken by the data scientist alone. For example, because we had multiple readings of vital signs, we needed to transform them into a single value for each visit; a data scientist might use the median value to give an overall view of the entire admission, but a physician would rather take the last reading before discharge because it better reveals the patient's state at discharge, which affects the readmission possibility. Another might prefer the worst reading during the admission to reflect how severe the patient's condition was, which would also affect the possibility of being readmitted.
The main limitation of this study was the structure of the original dataset, which limited the number of features that could be engineered and used for prediction. Another limitation was the lack of data from outside the hospital that could strongly impact the readmission of patients. For example, if the hospital collected how well patients adhere to their medications or their diet, we hypothesize that the prediction could have been more accurate. This kind of data could be collected in future studies from patients' smartphones through dedicated applications. Future nationwide studies could compare different populations based on societal factors, such as lifestyle and diet, to determine their impact on readmission.
Pneumonia is an infectious disease with major health and economic complications. Predicting CAP readmission using machine learning is an essential area of further research. Based on our study findings, the model accuracy reached 90%, but we did not find an appropriate prediction for readmission because the number of readmissions within 30 days was only 9.1% of the entire dataset received. We propose that machine learning can be used for predicting CAP readmission, but appropriate data sources and a suitable modeling technique are required to improve the accuracy of the model.
During the development of our machine learning and statistical analysis models, we identified factors that are associated with the readmission of patients with CAP. These factors should be used and closely monitored by HCPs to minimize the rate of readmissions of patients with CAP. In our study, we identified that less than 10% of the patients were readmitted for CAP after discharge within the dates included in our study. Furthermore, our results indicated that age, heart rate, respiratory rate, medication count, and the number of comorbidities that a patient had were significantly associated with the odds of being readmitted. However, the prediction performance of the models in our study was not adequate to predict the risk of readmission.
ART-ificial Intelligence: Leveraging the Creative Power of Machine Learning | LBBOnline – Little Black Book – LBBonline
Above: Chago's AI self-portrait, generated in Midjourney.
I have learnt to embrace and explore the creative possibilities of computer-generated imagery. It all started with the introduction of Photoshop thirty years ago, and more recently, I became interested in the AI software program Midjourney, a wonderful tool that allows creatives to explore ideas more efficiently than ever before. The best description for Midjourney that I've found is "an AI-driven tool for the exploration of creative ideas."
If I were talking to somebody who was unfamiliar with AI-generated art, I would show them some examples, as this feels like a great place to start. Midjourney is syntax-driven; users must break down the language and learn the key phrases and the specific ordering of words in order to take full advantage of the program. As well as using syntax, users can upload reference imagery to help bring their idea to life. An art director could upload a photo of Mars and use that as a reference to create new imagery. I think this is a fantastic tool.
I'm a producer with an extensive background as a production artist, mostly in retouching and leading post-production teams. I also have a background in CGI; I took some postgraduate classes at NYU for a couple of semesters, and I went to college for architecture, so I can draw a little bit, but I'm not going to pretend that I could ever do a CGI project. A lot of art directors and creative directors are in the same boat: they direct and creative-direct a lot of CGI projects, especially on the client side, but don't necessarily know CGI. Programs like Midjourney let people like us dip our toes into the creative waters by giving us access to an inventive and artistic toolset.
Last week, the Steelworks team was putting together a treatment deck for a possible new project. We had some great ideas to send to the client, but sourcing certain specific references felt like finding a needle in a haystack. If we are looking for a black rose with gold dust powder on the petals, it is hard to find exactly what we want. It's times like these when a program like Midjourney can boost the creative. By entering similar references into the software and developing a syntax that is as close to what you're looking for as possible, you are given imagery that provides more relevant references for a treatment deck. For this reason, in the future I see us utilising Midjourney more often for these tasks, as it can facilitate the creative ideation for treatments and briefs for clients.
I'm optimistic about Midjourney because, as technology evolves, humans in the creative industries continue to find ways to stay relevant. I was working as a retoucher during the time Photoshop first came out with the Healing Brush. Prior to that, all retouching was done manually by manipulating and blending pixels. All of a sudden, the introduction of the Healing Brush meant that with one swipe, three hours of work was removed. I remember we were sitting in our post production studio when someone showed it to us and we thought, Oh my God, we're gonna be out of a job. Twenty years later, retouching still has relevance, as do the creatives who are valued for their unique skill sets.
I don't do much retouching anymore, but I was on a photo shoot recently and I had to get my hands in the sauce and put comps together for people. Plenty of new selection tools have come out in Photoshop in the last three years, and I had no idea about most of them. I discovered that using these tools cut out roughly an hour's worth of work, which was great. As a result, it opened up time for me to talk to clients and be more present at work and home. It's less time in front of the computer at the end of the day.
While these advancements in technology may seem daunting at first, I try not to think of them as a threat to human creativity, but rather as a tool that grants us more time to immerse ourselves in the activities that boost our creative thinking. Using AI programs like Midjourney helps to speed up the creative process which, in turn, frees up more time to do things like sit outside and enjoy our lunch in the sun, or go to the beach or the park with your kids; things that feed our frontal cortex and inspire us creatively. It took me a long time to be comfortable taking my nose off the grindstone and relearning how to be inspired creatively.
Renesas Launches RZ/V2MA Microprocessor with Deep Learning and OpenCV Acceleration Engines – Hackster.io
Renesas has announced a new chip in its RZ/V family, aimed at accelerating OpenCV and other machine learning workloads for low-power computer vision at the edge: the RZ/V2MA microprocessor, built with the Apache TVM compiler stack in mind.
The new Renesas RZ/V2MA is built around two 64-bit Arm Cortex-A53 processor cores running at up to 1GHz, plus an artificial intelligence coprocessor dubbed the DRP-AI that offers one trillion operations per second (TOPS) of compute performance per watt of power. In real-world terms, that translates to 52 frames per second for the TinyYoloV3 network.
Renesas' latest chip aims to provide enough grunt for real-time edge-vision workloads in a low power envelope. (Image: Renesas)
In addition to the DRP-AI coprocessor, the chip also offers an accelerator specifically focused on OpenCV workloads, improving performance for rule-based image processing, which can run simultaneously alongside networks executing on the DRP-AI.
On the software side, Renesas has a DRP-AI Translator, which offers conversion for ONNX and PyTorch models to enable them to run on the DRP-AI core, with TensorFlow support to follow; the company has also announced the DRP-AI TVM, built atop the open-source Apache TVM deep learning compiler stack, which allows for programs to be compiled to run on both the DRP-AI accelerator and one or both CPU cores.
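Renesas has not detailed the DRP-AI TVM interface here, but the upstream Apache TVM flow it builds on looks roughly like this for an ONNX model; the model file, input shape, and generic "llvm" target below are placeholders:

```python
# Sketch of the stock Apache TVM compile flow for an ONNX model; the
# DRP-AI TVM layers a vendor backend on top of this. Names are placeholders.
import onnx
import tvm
from tvm import relay

model = onnx.load("tiny_yolov3.onnx")                 # assumed model file
shape_dict = {"input": (1, 3, 416, 416)}              # assumed input shape
mod, params = relay.frontend.from_onnx(model, shape_dict)

with tvm.transform.PassContext(opt_level=3):
    # A DRP-AI build would partition work between the accelerator and the
    # Cortex-A53 cores; "llvm" here just targets a plain CPU.
    lib = relay.build(mod, target="llvm", params=params)
```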
"One of the challenges for embedded systems developers who want to implement machine learning is to keep up with the latest AI models that are constantly evolving," claims Shigeki Kato, vice president of Renesas' enterprise infrastructure division. "With the new DRP-AI TVM tool, we are offering designers the option to expand AI frameworks and AI models that can be converted to executable formats, allowing them to bring the latest image recognition capabilities to embedded devices using new AI models.
The RZ/V2MA also offers support for H.264 and H.265 video codecs, LPDDR4 memory at up to 3.2Gbps bandwidth, USB 3.1 connectivity, and two lanes of PCI Express. To showcase its capabilities, Renesas has used the chip in the Vision AI Gateway Solution reference design, which includes Ethernet, Wi-Fi, Bluetooth Low Energy (BLE), and cellular LTE Cat-M1 connectivity.
More information on the RZ/V2MA is available on the Renesas website.
What 5 Benefits a Machine Learning Developer Can Bring to Your Business – Business Review
Machine learning (ML) is at the peak of popularity right now. This branch of artificial intelligence is attracting more and more investments, and its market value is growing at a rapid pace.
As a result, the machine learning market will likely grow from over $21 billion in 2022 to more than $209 billion in 2029, an annual growth rate of almost 39%.
Even though many businesses are actively adopting this innovation to keep up with the competition, some companies are still hesitant. Today we'll explain why business owners should step out of their comfort zone and hire a machine learning developer.
We'll discuss ML in general and provide the reasons and benefits of implementing this innovation into your business processes. Today's article will also explain what skills a decent machine learning engineer should have.
Machine learning is often confused with artificial intelligence or used interchangeably. These terms certainly relate to the same field, but let's figure out what ML means.
Machine learning is a branch of artificial intelligence and the most common application of AI. These are algorithms that enable software products to perform a given task more precisely or forecast outcomes more accurately. This is possible thanks to the technology's ability to process data: the more data available to ML, the more accurate the predictions and performance of tasks.
ML is the technology that makes artificial intelligence work: a way to teach machines to process and analyze information, and to draw conclusions based on it.
A recommendation engine is the most typical use case for ML. It's software that analyzes data and makes predictions. In addition, machine learning is used for detecting software threats and fraud, filtering spam, and automating business processes.
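As a toy illustration of the recommendation use case, with a made-up ratings matrix, item-to-item similarity takes only a few lines:

```python
# Toy item-to-item recommender: cosine similarity over made-up ratings.
import numpy as np
from sklearn.metrics.pairwise import cosine_similarity

ratings = np.array([   # rows: users, columns: items, 0 = unrated
    [5, 4, 0, 1],
    [4, 5, 1, 0],
    [0, 1, 5, 4],
])
item_sim = cosine_similarity(ratings.T)

# Recommend the item most similar to item 0, excluding itself.
best = max(range(1, item_sim.shape[0]), key=lambda j: item_sim[0, j])
print("Most similar to item 0:", best)
```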
Machine learning is gradually becoming the innovation that drives companies in various industries.
Organizations implement this technology to speed up routine tasks, help with data analysis, and search for more effective business solutions. So, if you hire a dedicated machine learning developer, you can improve the productivity and scalability of your company.
On the other hand, organizations that continue to ignore ML implementation for their processes are sliding backward. As a result, they perform more manual work, complete their tasks slower, and experience an increased risk of human error.
Instead, with machine learning, companies can avoid all these limitations. By leveraging this technology, you begin to understand your customers better, get tools to improve your existing products, create new ones, and boost the competitiveness of your business.
Yet, properly implementing ML in your processes requires an experienced IT specialist who is well versed in this innovation. Only such an expert will help you maximize the benefits of machine learning for your business.
To choose the most suitable specialist to handle your machine learning tasks, you must have a good understanding of the basic skills of such an expert. Here are the most fundamental of them:
Statistics
Skills in statistics are a must for an ML developer. In particular, this includes an understanding of analysis of variance and hypothesis testing. Such knowledge is critical because machine learning algorithms are based on statistical models (a minimal illustration follows this list).
Probability
Knowledge of mathematical probability is also significant for a machine learning engineer. Such a skill helps a specialist predict future results and train artificial intelligence to do so.
Data modeling
Knowledge of data models is critical for a developer. With a deep understanding of this process, a specialist can identify data structures, discover patterns between them, and fill in the gaps where data is missing.
Machine learning data labeling
Data labeling in machine learning is directly responsible for teaching machines and software. Your specialist should be able to process raw data such as images, video, and audio and give them meaningful labels. That is why machine learning labeling skills are necessary.
Programming skills
Since machine learning works through algorithms, your expert must have programming skills. In particular, this is knowledge of such languages as Python, R, C, or C++. With these tools, IT specialists can create algorithms, scenarios, etc.
Source: Mobilunity
ML libraries and algorithms knowledge
Knowledge of existing machine learning libraries and algorithms is helpful for your specialist. These are, for example, such tools as Microsoft Cognitive Toolkit, MLlib, or Google TensorFlow.
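To make the statistics point above concrete, here is a minimal, self-contained illustration with made-up numbers: a one-way ANOVA testing whether three samples share a mean:

```python
# Minimal hypothesis-testing illustration: one-way ANOVA on made-up data.
from scipy import stats

group_a = [23.1, 24.5, 22.8, 25.0]
group_b = [26.2, 27.1, 25.8, 26.9]
group_c = [22.0, 23.3, 21.7, 22.9]

f_stat, p_value = stats.f_oneway(group_a, group_b, group_c)
# A small p-value suggests at least one group mean differs.
print("F = %.2f, p = %.4f" % (f_stat, p_value))
```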
Now that you know what an ideal ML developer should be and what skills they should possess, let's see what this expert can offer your business.
We have collected five main advantages that you will get by hiring a machine learning engineer:
The success of any company depends on the ability to plan carefully and make balanced business decisions. An expert in machine learning will help leverage this technology to process large amounts of data. As a result of analyzing this information, you will be able to find efficient solutions, minimize risks, and receive accurate forecasts for your company.
A machine learning specialist can help you streamline your business processes. In the ML industry, this method is also called intelligent process automation. Your IT expert can not only transfer routine tasks to automatic mode but also automate more complex duties. For example, ML can automate even data entry.
A machine learning expert will help your business gain significantly more loyal customers. This is made possible by analyzing your client data and their behavior. Based on your audience research, you'll offer them exactly what they need.
Along with personalizing the customer experience, you get many more benefits. Specifically, these include increased revenue from growing your audience, improved overall customer satisfaction, and faster customer data analysis.
Based on the forecasts that machine learning algorithms will prepare for you, you will be able to evaluate the resources of your business more reasonably. As a result, you will always be ready for the changing demand for your products and know what customers expect from you.
Machine learning experts will also help you with inventory, save on materials, understand the exact scope of work, and reduce company waste.
As already mentioned, machine learning technologies help detect malicious attacks and fraudulent activities. By hiring an in-house or nearshore IT team in ML, you get the opportunity to implement advanced security standards into your business. Machine learning algorithms will gather data about cyber threats and immediately respond to suspicious activity.
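One common building block for that kind of monitoring, sketched here on synthetic data rather than real telemetry, is unsupervised anomaly detection with an isolation forest:

```python
# Sketch: flag suspicious activity as statistical outliers; the synthetic
# features stand in for real network or transaction telemetry.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
normal = rng.normal(loc=0.0, scale=1.0, size=(500, 3))    # baseline activity
suspicious = rng.normal(loc=6.0, scale=1.0, size=(5, 3))  # outliers

detector = IsolationForest(contamination=0.01, random_state=0).fit(normal)
print(detector.predict(suspicious))   # -1 marks suspected anomalies
```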
Now, machine learning technologies are the latest trend that more and more businesses are chasing. Companies are implementing ML to automate their processes, improve customer experience, optimize costs and resources, and enhance data security.
Moreover, by leveraging machine learning, you can set your business apart from the crowd and offer your customers something unique.
When you hire an experienced ML developer, all these benefits will be available. You can do it directly in your country or try to find experts abroad, for example, opting for a nearshore team in Portugal.
Regardless of your choice, a machine learning engineer is a valuable asset to your business. So don't neglect innovation.
Collaborative machine learning that preserves privacy | MIT News | Massachusetts Institute of Technology – MIT News
Training a machine-learning model to effectively perform a task, such as image classification, involves showing the model thousands, millions, or even billions of example images. Gathering such enormous datasets can be especially challenging when privacy is a concern, such as with medical images. Researchers from MIT and the MIT-born startup DynamoFL have now taken one popular solution to this problem, known as federated learning, and made it faster and more accurate.
Federated learning is a collaborative method for training a machine-learning model that keeps sensitive user data private. Hundreds or thousands of users each train their own model using their own data on their own device. Then users transfer their models to a central server, which combines them to come up with a better model that it sends back to all users.
A collection of hospitals located around the world, for example, could use this method to train a machine-learning model that identifies brain tumors in medical images, while keeping patient data secure on their local servers.
But federated learning has some drawbacks. Transferring a large machine-learning model to and from a central server involves moving a lot of data, which has high communication costs, especially since the model must be sent back and forth dozens or even hundreds of times. Plus, each user gathers their own data, so those data don't necessarily follow the same statistical patterns, which hampers the performance of the combined model. And because that combined model is made by taking an average, it is not personalized for each user.
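The averaging step in plain federated learning, which the MIT work builds on, reduces to a data-size-weighted mean of the users' model weights; a numpy sketch of that server-side combine:

```python
# Sketch of federated averaging (FedAvg): the server combines user models
# by a data-size-weighted mean of each weight tensor.
import numpy as np

def federated_average(user_weights, user_sizes):
    """user_weights: one list of numpy arrays (layers) per user."""
    total = sum(user_sizes)
    n_layers = len(user_weights[0])
    return [
        sum(w[i] * (n / total) for w, n in zip(user_weights, user_sizes))
        for i in range(n_layers)
    ]
```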
The researchers developed a technique that can simultaneously address these three problems of federated learning. Their method boosts the accuracy of the combined machine-learning model while significantly reducing its size, which speeds up communication between users and the central server. It also ensures that each user receives a model that is more personalized for their environment, which improves performance.
The researchers were able to reduce the model size by nearly an order of magnitude when compared to other techniques, which led to communication costs that were between four and six times lower for individual users. Their technique was also able to increase the model's overall accuracy by about 10 percent.
"A lot of papers have addressed one of the problems of federated learning, but the challenge was to put all of this together. Algorithms that focus just on personalization or communication efficiency don't provide a good enough solution. We wanted to be sure we were able to optimize for everything, so this technique could actually be used in the real world," says Vaikkunth Mugunthan PhD '22, lead author of a paper that introduces this technique.
Mugunthan wrote the paper with his advisor, senior author Lalana Kagal, a principal research scientist in the Computer Science and Artificial Intelligence Laboratory (CSAIL). The work will be presented at the European Conference on Computer Vision.
Cutting a model down to size
The system the researchers developed, called FedLTN, relies on an idea in machine learning known as the lottery ticket hypothesis. This hypothesis says that within very large neural network models there exist much smaller subnetworks that can achieve the same performance. Finding one of these subnetworks is akin to finding a winning lottery ticket. (LTN stands for lottery ticket network.)
Neural networks, loosely based on the human brain, are machine-learning models that learn to solve problems using interconnected layers of nodes, or neurons.
Finding a winning lottery ticket network is more complicated than a simple scratch-off. The researchers must use a process called iterative pruning. If the model's accuracy is above a set threshold, they remove nodes and the connections between them (just like pruning branches off a bush) and then test the leaner neural network to see if the accuracy remains above the threshold.
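Iterative magnitude pruning of that sort can be sketched in numpy; the evaluate() callback, 10% step, and accuracy threshold below are illustrative assumptions, not FedLTN's actual settings:

```python
# Sketch of iterative magnitude pruning: repeatedly zero the smallest
# remaining weights while a caller-supplied evaluate() stays above threshold.
import numpy as np

def prune_step(weights, fraction=0.10):
    # Threshold over currently nonzero weights so each step prunes more.
    nonzero = np.concatenate([w[w != 0].ravel() for w in weights])
    cutoff = np.quantile(np.abs(nonzero), fraction)
    return [np.where(np.abs(w) <= cutoff, 0.0, w) for w in weights]

def iterative_prune(weights, evaluate, acc_threshold=0.90, max_steps=20):
    for _ in range(max_steps):
        candidate = prune_step(weights)
        if evaluate(candidate) < acc_threshold:
            break                     # keep the last version above threshold
        weights = candidate
    return weights
```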
Other methods have used this pruning technique for federated learning to create smaller machine-learning models which could be transferred more efficiently. But while these methods may speed things up, model performance suffers.
Mugunthan and Kagal applied a few novel techniques to accelerate the pruning process while making the new, smaller models more accurate and personalized for each user.
They accelerated pruning by avoiding a step where the remaining parts of the pruned neural network are rewound to their original values. They also trained the model before pruning it, which makes it more accurate so it can be pruned at a faster rate, Mugunthan explains.
To make each model more personalized for the user's environment, they were careful not to prune away layers in the network that capture important statistical information about that user's specific data. In addition, when the models were all combined, they made use of information stored in the central server so it wasn't starting from scratch for each round of communication.
They also developed a technique to reduce the number of communication rounds for users with resource-constrained devices, like a smartphone on a slow network. These users start the federated learning process with a leaner model that has already been optimized by a subset of other users.
Winning big with lottery ticket networks
When they put FedLTN to the test in simulations, it led to better performance and reduced communication costs across the board. In one experiment, a traditional federated learning approach produced a model that was 45 megabytes in size, while their technique generated a model with the same accuracy that was only 5 megabytes. In another test, a state-of-the-art technique required 12,000 megabytes of communication between users and the server to train one model, whereas FedLTN only required 4,500 megabytes.
"With FedLTN, the worst-performing clients still saw a performance boost of more than 10 percent. And the overall model accuracy beat the state-of-the-art personalization algorithm by nearly 10 percent," Mugunthan adds.
Now that they have developed and fine-tuned FedLTN, Mugunthan is working to integrate the technique into DynamoFL, the federated learning startup he recently founded.
Moving forward, he hopes to continue enhancing this method. For instance, the researchers have demonstrated success using datasets that had labels, but a greater challenge would be applying the same techniques to unlabeled data, he says.
Mugunthan is hopeful this work inspires other researchers to rethink how they approach federated learning.
"This work shows the importance of thinking about these problems from a holistic aspect, and not just individual metrics that have to be improved. Sometimes, improving one metric can actually cause a downgrade in the other metrics. Instead, we should be focusing on how we can improve a bunch of things together, which is really important if it is to be deployed in the real world," he says.
Ilya Feige Joins Cerberus Technology Solutions as Global Head of Artificial Intelligence and Machine Learning – Business Wire
NEW YORK & LONDON--(BUSINESS WIRE)--Cerberus Capital Management, L.P. (together with its affiliates, Cerberus) today announced that Ilya Feige, Ph.D., has joined as Global Head of Artificial Intelligence and Machine Learning for Cerberus Technology Solutions (CTS).
Launched in 2018, CTS is an operating subsidiary of Cerberus focused exclusively on applying leading technologies and advanced analytics to drive business transformations. Today, CTS has more than 80 in-house and partner technologists organized across practice areas, including technology strategy, digital and e-commerce, solutions architecture, data management and operations, advanced analytics and business intelligence, and cyber security. Dr. Feige will lead the platform's artificial intelligence (AI) and machine learning (ML) practice to apply data-driven solutions across Cerberus' portfolio of investments as well as analyze value-creation opportunities during diligence processes.
"Our platform brings together top experts across the technology and data domains that are fundamental to an organization's operations and growth," said Ben Sylvester, Chief Executive Officer of CTS. "Beyond his expertise, Ilya has an impressive track record of harnessing data to apply innovative solutions. We are excited for the global impact he will have on our partners in helping to unlock value across their businesses."
Dr. Feige was an executive with Faculty, one of Europe's leading AI companies, most recently serving as Director of AI. During his tenure, he founded the company's AI research lab and subsequently built and led a team of 25 applied AI and ML practitioners. In this role, he spearheaded the expansion of Faculty's AI platform and go-to-market strategy. Dr. Feige graduated from McGill University with the highest honors and received a Ph.D. in Theoretical Physics from Harvard University, where he was awarded the Goldhaber Prize as the top Ph.D. student in physics. He has authored several peer-reviewed publications on AI safety, ML, and physics.
Dr. Feige commented: "The use of technology is only becoming more critical to companies across all industries, and I've seen firsthand how the right technical solutions can be transformative to an organization's performance. CTS is a world-class platform that is truly unique. Their integration and deployment of technology expertise at scale helps partners not only improve their businesses, but also become more competitive. I'm looking forward to joining this great team and the broader Cerberus family."
John Tang, Head of EMEA for CTS, added: "We are thrilled to welcome Ilya to our CTS team. This addition underscores our platform's commitment to integrating cutting-edge capabilities, including in next-generation AI/ML technologies."
About Cerberus
Founded in 1992, Cerberus is a global leader in alternative investing with approximately $60 billion in assets across complementary credit, private equity, and real estate strategies. We invest across the capital structure, where our integrated investment platforms and proprietary operating capabilities create an edge to improve performance and drive long-term value. Our tenured teams have experience working collaboratively across asset classes, sectors, and geographies to seek strong risk-adjusted returns for our investors. For more information about our people and platforms, visit us at http://www.cerberus.com.
The Worldwide Artificial Intelligence Industry is Expected to Reach $1811 Billion by 2030 – ResearchAndMarkets.com – Business Wire
DUBLIN--(BUSINESS WIRE)--The "Artificial Intelligence Market Size, Share & Trends Analysis Report by Solution, by Technology (Deep Learning, Machine Learning, Natural Language Processing, Machine Vision), by End Use, by Region, and Segment Forecasts, 2022-2030" report has been added to ResearchAndMarkets.com's offering.
The global artificial intelligence market size is expected to reach USD 1,811.8 billion by 2030. The market is anticipated to expand at a CAGR of 38.1% from 2022 to 2030.
Artificial Intelligence (AI) denotes the concept and development of computing systems capable of performing tasks customarily requiring human assistance, such as decision-making, speech recognition, visual perception, and language translation. AI uses algorithms to understand human speech, visually recognize objects, and process information. These algorithms are used for data processing, calculation, and automated reasoning.
Artificial intelligence researchers continuously improve algorithms in various respects, as conventional algorithms have drawbacks in accuracy and efficiency. These advancements have led manufacturers and technology developers to focus on developing standard algorithms. Recently, several developments have been carried out to enhance artificial intelligence algorithms. For instance, in May 2020, International Business Machines Corporation announced a wide range of new AI-powered services and capabilities, namely IBM Watson AIOps, for enterprise automation. These services are designed to help automate IT infrastructures, making them more resilient and reducing costs.
Various companies are implementing AI-based solutions, such as RPA (Robotic Process Automation), to enhance process workflows and automate repetitive tasks. AI-based solutions are also being coupled with the IoT (Internet of Things) to provide robust results for various business processes. For instance, Microsoft announced an investment of USD 1 billion in OpenAI, a San Francisco-based company. The two businesses teamed up to create AI supercomputing technology on Microsoft's Azure cloud.
The COVID-19 pandemic has emerged as an opportunity for AI-enabled computer systems to fight against the epidemic as several tech companies are working on preventing, mitigating, and containing the virus. For instance, LeewayHertz, a U.S.-based custom software development company, offers technology solutions using AI tools and techniques, including the Face Mask Detection System to identify individuals without a mask and the Human Presence System to monitor patients remotely. Besides, Voxel51 Inc., a U.S.-based artificial intelligence start-up, has developed Voxel51 PDI (Physical Distancing Index) to measure the impact of the global pandemic on social behavior across the world.
AI-powered computer platforms or solutions are being used to fight against COVID-19 in numerous applications, such as early alerts, tracking and prediction, data dashboards, diagnosis and prognosis, treatments and cures, and maintaining social control. Data dashboards that can visualize the pandemic have emerged with the need for coronavirus tracking and prediction. For instance, Microsoft's Bing AI tracker gives a global overview of the pandemic's current statistics.
Artificial Intelligence Market Report Highlights
Key Topics Covered:
Chapter 1 Methodology and Scope
Chapter 2 Executive Summary
Chapter 3 Market Variables, Trends & Scope
3.1 Market Trends & Outlook
3.2 Market Segmentation & Scope
3.3 Artificial Intelligence Size and Growth Prospects
3.4 Artificial Intelligence-Value Chain Analysis
3.5 Artificial Intelligence Market Dynamics
3.5.1 Market Drivers
3.5.1.1 Economical parallel processing set-up
3.5.1.2 Potential R&D in artificial intelligence systems
3.5.1.3 Big data fuelling AI and Machine Learning profoundly
3.5.1.4 Increasing Cross-Industry Partnerships and Collaborations
3.5.1.5 AI to counter unmet clinical demand
3.5.2 Market Restraint
3.5.2.1 Vast demonstrative data requirement
3.6 Penetration & Growth Prospect Mapping
3.7 Industry Analysis-Porter's
3.8 Company Market Share Analysis, 2021
3.9 Artificial Intelligence-PEST Analysis
3.10 Artificial Intelligence-COVID-19 Impact Analysis
Chapter 4 Artificial Intelligence Market: Solution Estimates & Trend Analysis
Chapter 5 Artificial Intelligence Market: Technology Estimates & Trend Analysis
Chapter 6 Artificial Intelligence Market: End-Use Estimates & Trend Analysis
Chapter 7 Artificial Intelligence Market: Regional Estimates & Trend Analysis
Chapter 8 Competitive Landscape
For more information about this report visit https://www.researchandmarkets.com/r/ykyt2m