
Developing Machine-Learning Apps on the Raspberry Pi Pico – Design News

Starting on Monday, October 24, and running through October 28, Design News will present the free course, Developing Machine-Learning Applications on the Raspberry Pi Pico. Each class runs an hour, beginning at 2:00 Eastern. You can also earn IEEE Professional Development Hours for participating. If you are unable to attend at the scheduled times, the course will be available on demand.

The Raspberry Pi Pico is a versatile, low-cost development board suited to many applications. Course instructor Jacob Beningo will explain how to get up and running with the Raspberry Pi Pico. He'll mainly focus on how to develop machine-learning applications and deploy them to the Pico, using gesture detection as an example application. Attendees will walk away understanding machine learning, the Pico, and best practices for working with both.

Related: Learn DC Motor Controls with the Raspberry Pi 2040 Pico

Here's a day-by-day breakdown of Developing Machine-Learning Applications on the Raspberry Pi Pico:

Day 1: Getting Started with the Raspberry Pi Pico and Machine Learning

Related: 3 Tips for Rapid Prototyping with the Raspberry Pi Pico

In this session, we will introduce the Raspberry Pi Pico development board, based on the low-cost, high-feature RP2040 microcontroller. We will explore the Pico board features and why the board is well suited for machine-learning applications. Attendees will walk away understanding the Pico board and the fundamentals of machine learning on microcontroller-based devices.

Day 2: Machine-Learning Tools and Process Flow

There are a wide variety of tools developers use to deploy machine-learning models to the Raspberry Pi Pico. In this session, we will explore the various tools embedded software developers might be interested in using. Attendees will also learn about the general machine-learning process flow and how it fits within the standard embedded software programming model.

Day 3: Collecting Sensor Data Using Edge Impulse

Before a developer creates a machine-learning model, they must first collect the data used by the model. This session will explore how to connect sensors and collect their data using Edge Impulse. We'll discuss how much data to collect and the various options for doing so. Attendees will walk away understanding how to prepare their data for training and eventual deployment to the Pico.
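As a rough illustration of what this kind of data collection can look like, here is a minimal MicroPython sketch for the Pico that streams accelerometer samples over USB serial in the comma-separated format the Edge Impulse data forwarder ingests. The sensor choice, pin assignments, and register addresses are assumptions (an MPU-6050-style accelerometer on I2C0), not details from the course.

```python
# Hedged sketch: stream accelerometer samples from a Pico over USB serial so the
# Edge Impulse data forwarder can capture them (assumes an MPU-6050-style sensor).
import time
from machine import I2C, Pin

i2c = I2C(0, sda=Pin(4), scl=Pin(5), freq=400_000)
MPU_ADDR = 0x68
i2c.writeto_mem(MPU_ADDR, 0x6B, b"\x00")  # wake the sensor from sleep

def read_accel():
    # Read six raw bytes: X, Y, Z as big-endian signed 16-bit values (±2 g range).
    raw = i2c.readfrom_mem(MPU_ADDR, 0x3B, 6)
    vals = []
    for i in (0, 2, 4):
        v = (raw[i] << 8) | raw[i + 1]
        if v & 0x8000:
            v -= 65536
        vals.append(v / 16384.0)  # convert raw counts to g
    return vals

SAMPLE_HZ = 100
while True:
    x, y, z = read_accel()
    # One comma-separated line per sample; the data forwarder infers the frequency.
    print("{:.4f},{:.4f},{:.4f}".format(x, y, z))
    time.sleep(1 / SAMPLE_HZ)
```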

Day 4: Designing and Testing a Machine-Learning Model

With sensor data collected, developers will want to use their data to train and test a machine-learning model. In this session, we will use the data gathered in the previous session to train a model. Attendees will learn how to train a model and examine the training results in order to get the desired outcomes from their model.

Day 5: Deploying Machine-Learning Models and Next Steps

In this session, we will take the model trained in the previous session and deploy it to the Raspberry Pi Pico. We'll investigate several methods to deploy the model and test how well it works on the Raspberry Pi Pico. Attendees will see how to deploy the model and learn about the next steps for using the Raspberry Pi Pico for machine-learning applications.
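The course's exact deployment flow isn't spelled out here, but one common route, shown below as a hedged sketch rather than the method the course necessarily teaches, is to quantize a trained TensorFlow model to TensorFlow Lite and emit it as a C array that TensorFlow Lite for Microcontrollers can run on the Pico's RP2040. The model and file names are hypothetical.

```python
# Hedged sketch: convert a trained Keras model to an int8-quantized TFLite flatbuffer
# and emit a C array for TFLite Micro on the Pico ("gesture_model" is hypothetical).
import numpy as np
import tensorflow as tf

model = tf.keras.models.load_model("gesture_model")        # trained gesture classifier
representative_samples = np.load("gesture_windows.npy")     # calibration input windows

def representative_dataset():
    # Yield a handful of real input windows so the converter can pick int8 scales.
    for window in representative_samples[:100]:
        yield [window.astype(np.float32)[np.newaxis, ...]]

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_dataset
tflite_model = converter.convert()

# Write the model out as a C array that can be compiled into the Pico firmware.
with open("gesture_model_data.h", "w") as f:
    f.write("const unsigned char g_gesture_model[] = {\n")
    f.write(",".join(str(b) for b in tflite_model))
    f.write("\n};\n")
    f.write("const unsigned int g_gesture_model_len = %d;\n" % len(tflite_model))
```

Alternatively, Edge Impulse can export a ready-made C++ library for the Pico, which avoids the manual conversion step entirely.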

Meet your instructor: Jacob Beningo is an embedded software consultant who currently works with clients in more than a dozen countries to dramatically transform their businesses by improving product quality, cost, and time to market. He has published more than 300 articles on embedded software development techniques, has published several books, is a sought-after speaker and technical trainer, and holds three degrees, including a Master of Engineering from the University of Michigan.

Digi-Key Continuing Education Center, presented by Design News, will get you up to working speed quickly in a host of technologies you've been meaning to study but haven't had the time for, all without leaving the comfort of your lab or office. Our faculty of expert tutors has divided the interdisciplinary world of design engineering into five dimensions: microcontrollers (basic and advanced), sensors, wireless, power, and lighting.

You can register for the free class here.

View original post here:
Developing Machine-Learning Apps on the Raspberry Pi Pico - Design News


Adversaries may be poisoning your machine learning engine. Here’s how to undo the damage. – SC Media

Machine learning has many useful cybersecurity applications, provided the technology behind it isn't unduly influenced or tampered with by malicious actors such that its data integrity is sabotaged.

Donnie Wendt, principal security researcher at Mastercard, calls this the "Uncle Ronnie effect": "When my son was little, before he'd go over to visit my brother, his Uncle Ronnie, I'd say, 'Please, please, don't learn anything new from him.' Because I know he's going to teach my son something bad," Wendt told SC Media in an interview at the CyberRisk Alliance's 2022 InfoSec World Conference in Orlando, Florida.

Likewise, an adversary can compromise a machine learning system, teaching it bad habits that its operators must then undo, if they can even detect the breach in the first place.

On the cyber front, properly trained machine-learning systems can help with such tasks as classifying malware, identifying phishing attempts, intrusion detection, behavioral analytics, and predicting if and when a vulnerability will be exploited. But there are ways to skew the results.

"Our adversaries will try to figure out how to circumvent [machine learning] classification, oftentimes by injecting adversarial samples that will poison the training," explained Wendt, who presented on this very topic earlier this week at the InfoSec World conference. Alternatively, bad actors could launch an inference attack to gain unauthorized access to the data used to train the machine learning system.

To protect against attacks launched on machine learning models, Wendt recommended conducting proper data sanitization and also ensuring that you have proper version control and access control around your data, so that if there is an attack you can go back to prior versions of that data, rerun the model, and look for drift. If you find evidence of wrongdoing, then you can at least undo whatever it is that troublemaking Uncle Ronnie taught your machine learning system.
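As a deliberately simplified illustration of that advice, the hedged sketch below assumes you keep versioned snapshots of your training data: it retrains the same classifier on a prior, trusted snapshot and compares its predictions with the current model's on a fixed holdout set, flagging large disagreement as possible poisoning-induced drift. The file names and threshold are assumptions, not anything prescribed by Wendt or Mastercard.

```python
# Hedged sketch: compare a model trained on current data against one trained on a
# prior, trusted snapshot, and flag prediction drift on a fixed holdout set.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier

def train(csv_path):
    df = pd.read_csv(csv_path)                  # one versioned training snapshot
    X, y = df.drop(columns=["label"]), df["label"]
    return RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

current_model = train("training_data_v7.csv")    # possibly poisoned version
baseline_model = train("training_data_v5.csv")   # known-good prior version

holdout = pd.read_csv("holdout.csv").drop(columns=["label"])
disagreement = (current_model.predict(holdout) != baseline_model.predict(holdout)).mean()

# A large jump in disagreement on the same holdout set is a signal worth investigating.
DRIFT_THRESHOLD = 0.05
if disagreement > DRIFT_THRESHOLD:
    print("Possible poisoning: {:.1%} of holdout predictions changed".format(disagreement))
```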

For more insight on how machine learning can be maliciously influenced, watch the video embedded in the original SC Media article.

Excerpt from:
Adversaries may be poisoning your machine learning engine. Here's how to undo the damage. - SC Media


Google turns to machine learning to advance translation of text out in the real world – TechCrunch

Google is giving its translation service an upgrade with a new machine learning-powered addition that will allow users to more easily translate text that appears in the real world, like on storefronts, menus, documents, business cards, and other items. Instead of covering up the original text with the translation, the new feature will smartly overlay the translated text on top of the image, while also rebuilding the pixels underneath with an AI-generated background to make the process of reading the translation feel more natural.

"Often it's that combination of the word plus the context, like the background image, that really brings meaning to what you're seeing," explained Cathy Edwards, VP and GM of Google Search, in a briefing ahead of today's announcement. "You don't want to translate a text to cover up that important context that can come through in the images," she said.


To make this process work, Google is using a machine learning technology known as generative adversarial networks, otherwise known as GAN models: the same technology that powers the Magic Eraser feature to remove objects from photos taken on Google Pixel smartphones. This advancement will allow Google to blend the translated text into even very complex images, making the translation feel natural and seamless, the company says. It should seem as if you're looking at the item or object itself with translated text, not an overlay obscuring the image.

The feature is another development that seems to point to Google's plans to further invest in the creation of new AR glasses, as the ability to translate text in the real world could be a key selling point for such a device. The company noted that every month, people use Google to translate text and images over a billion times in more than 100 languages. It also began testing AR prototypes in public settings this year with a handful of employees and trusted testers, it said.

While there's obvious demand for better translation, it's not clear if users will prefer to use their smartphone for translations rather than special eyewear. After all, Google's first entry into the smartglasses space, Google Glass, ultimately failed as a consumer product.

Google didn't speak to its long-term plans for the translation feature today, noting only that it would arrive sometime later this year.

Link:
Google turns to machine learning to advance translation of text out in the real world - TechCrunch


A Machine Learning Model for Predicting the Risk of Readmission in Community-Acquired Pneumonia – Cureus

Background

Pneumonia is a common respiratory infection that affects all ages, with higher rates anticipated as age increases. It is a disease that impacts both patient health and the finances of healthcare institutions. Machine learning methods can recognize patterns in patient data and have therefore been used to guide clinical judgment in disease conditions. This study aims to develop a prediction model for the risk of readmission within 30 days of patient discharge after the management of community-acquired pneumonia (CAP).

Univariate and multivariate logistic regression were used to identify the statistically significant factors associated with the readmission of patients with CAP. Multiple machine learning models were then used to predict the readmission of CAP patients within 30 days in a retrospective observational study of patient data. The dataset was obtained from the Hospital Information System of a tertiary healthcare organization with facilities across Saudi Arabia. The study included all patients diagnosed with CAP from 2016 until the end of 2018.

The collected data included 8,690 admission records related to CAP for 5,776 patients (2,965 males, 2,811 females). The analysis showed that patient age, heart rate, respiratory rate, medication count, and the number of comorbidities were significantly associated with the odds of being readmitted. All other variables showed no significant effect. We ran four algorithms to create the model on our data. The decision tree gave a high accuracy of 83%, while support vector machine (SVM), random forest (RF), and logistic regression provided better accuracy of 90%. However, because the dataset was unbalanced, the precision and recall for readmission were zero for all models except the decision tree, with 16% and 18%, respectively. After applying the Synthetic Minority Oversampling Technique (SMOTE) to balance the training dataset, the results did not change significantly; the highest precision achieved was 16% in the SVM model. RF achieved the highest recall at 45%, but without any advantage to this model because the accuracy was reduced to 65%.

Pneumonia is an infectious disease with major health and economic complications. We identified that less than 10% of patients were readmitted for CAP after discharge; in addition, we identified significant predictors. However, our study did not have enough data to develop a proper machine learning prediction model for the risk of readmission.

The Centers for Disease Control and Prevention (CDC) has defined pneumonia as a lung infection that can cause mild to severe illness in people of all ages. As a worldwide infectious disease, pneumonia is the leading cause of death in children less than five years of age [1]. In the United States, pneumonia results in around one million admissions yearly, and up to 20% of these admissions require intensive care, leading to more than 50,000 deaths. Pneumonia is also responsible for almost 140,000 hospital readmissions, costing up to 10 billion US dollars [2]. In a study that included patients discharged after being treated for pneumonia, almost 7.3% were readmitted in less than a month [3]. However, the average rate of readmission increased to 18.5% in 2012 [4]. With the increase in readmission rates, the Centers for Medicare and Medicaid Services (CMS) recognized the financial impact of hospital readmissions and started to assess the event by introducing readmission adjustment factors for hospital reimbursement [5]. The focus on improving readmission rates was driven not only by financial incentives but also by the goal of improving the quality of patient care [4].

The impact of readmission has raised awareness of the need to investigate its causes and risk factors, and many studies have been conducted to do so. Hebert et al. found that the identified risk factors can vary from pneumonia-related (35.8%) to comorbidity-related reasons [3], while others recognized instability at discharge and treatment failure as possible causes [6]. A systematic review that studied social factors as possible readmission risk factors concluded that age (elderly), race (non-white), education (low), unemployment, and low income were associated with a higher risk of hospital readmission [7]. In another study investigating the history of readmitted patients, 50.2% of the readmitted patients had a medical discharge with no follow-up visits until readmission. It should be noted that the readmission length of stay was reported to be almost 0.6 days longer than the usual length of stay [8]. Epstein et al. also highlighted a proportional relationship between overall hospital admission and readmission rates [9].

Machine learning methods have been used to guide clinical judgment in gray areas because they can learn patterns from data and support decision-making. In 1959, machine learning was defined as a "field of study that gives computers the ability to learn without being explicitly programmed." Many prediction algorithms have been proposed and successfully used in healthcare, such as support vector machine (SVM), decision tree (DT), and logistic regression, and have been tested in many medical conditions, such as diabetes [10].

Prediction models for readmission are important because they can help overcome the previously mentioned complications. Given how difficult readmission is to predict, such a model can guide healthcare providers (HCPs) in improving and supporting clinical decision-making during healthcare episodes while taking the risk of readmission into account [4]. As a result, the CMS has started readmission reduction program initiatives to push healthcare systems to find new, innovative ways to predict and tackle these issues [11]. Multiple researchers have developed predictive models to identify patients who are more likely to be readmitted [12].

In 2016, a systematic review identified all the models in use and described their performance. The study concluded that only a few validated models predict the risk of readmission of pneumonia patients, that the published models were moderately acceptable, and that any future model should include additional parameters to improve prediction accuracy or, as suggested by O'Brien et al., include all available patient care information if possible [13,14].

Another study, aimed at predicting readmissions in general rather than pneumonia specifically, was conducted at King Abdulaziz Medical City (KAMC) in Riyadh, Saudi Arabia. It used neural network (NN), SVM, rules-based techniques, and C4.5 DT models and concluded that the models achieved higher accuracy when the prediction was designed for readmission within 15 days only. Furthermore, the study suggested that SVM was the best model. Of all the variables used in the training set, visit history, lab tests, patient age, and the length of stay before discharge carried the most weight in the prediction [15].

Furthermore, other researchers have developed logistic regression models with stepwise removal that yielded better predictions [4]. Other studies did not disclose the prediction models used and simply reported using logistic regression without further details [6]. Thus, this study aims to build a prediction model for the risk of readmission within 30 days of discharge in patients discharged after community-acquired pneumonia (CAP) management. The main contributions of this study are testing previously used models while adding more parameters to improve the prediction model. The parameters are based on literature recommendations and data availability. In addition, we aimed to evaluate the quality of the Hospital Information System (HIS) datasets and to use the model for feature selection from the dataset to identify risk factors and variations between subgroups.

Multiple machine learning models were used to predict readmission of CAP patients within 30 days of diagnosis by conducting a retrospective observational study on patient data. The dataset was obtained from multiple tertiary healthcare organizations in Saudi Arabia. All patients diagnosed with CAP from 2016 until the end of 2018 were included. We excluded pediatric patients who were less than 18 years old and patients who died within 30 days of discharge. Ethical approval was obtained from the Institutional Review Board (IRB) of King Abdullah International Medical Research Center because the acquired data were de-identified.

The raw data were segregated into multiple datasets. The datasets obtained contained different attributes, including the history of pneumonia admissions, vital signs, radiology, laboratory results, medication lists, and comorbidities. For modeling, different data transformation and cleaning processes were conducted on the dataset. For patients discharged in less than a day, the length of stay (LOS) attribute was set to 1. The age attribute was derived from the date of birth and the admission date. All comorbidities, medications, radiology examinations, and lab results were transformed into dummy attributes with the values 1 and 0 for each admission. Attributes with missing values were filled with the median. Lastly, a new attribute was derived to represent the class labels (readmission within 30 days of discharge and no readmission) as 0 and 1. The Python programming language and the Scikit-learn machine learning package were used for dataset cleaning and modeling [16].
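A rough pandas sketch of the transformations described above follows; the file and column names are hypothetical stand-ins for the actual HIS fields, not the authors' code.

```python
# Hedged sketch of the cleaning steps described above (all names are hypothetical).
import pandas as pd

admissions = pd.read_csv(
    "cap_admissions.csv",
    parse_dates=["birth_date", "admit_date", "discharge_date"],
)

# Length of stay: patients discharged in under a day are set to 1.
admissions["los"] = (admissions["discharge_date"] - admissions["admit_date"]).dt.days.clip(lower=1)

# Age (approximate, in years) derived from date of birth and admission date.
admissions["age"] = (admissions["admit_date"] - admissions["birth_date"]).dt.days // 365

# Comorbidities become 0/1 dummy columns per admission (same idea applies to
# medications, radiology examinations, and labs).
comorbidities = pd.read_csv("comorbidities.csv")  # one row per (admission_id, comorbidity)
cm_dummies = (pd.get_dummies(comorbidities["comorbidity"], prefix="cm")
                .groupby(comorbidities["admission_id"]).max())
admissions = admissions.merge(cm_dummies, left_on="admission_id", right_index=True, how="left")

# Missing numeric values filled with the median.
numeric_cols = admissions.select_dtypes("number").columns
admissions[numeric_cols] = admissions[numeric_cols].fillna(admissions[numeric_cols].median())

# Class label: readmitted within 30 days of discharge or not.
admissions = admissions.sort_values(["patient_id", "admit_date"])
next_admit = admissions.groupby("patient_id")["admit_date"].shift(-1)
gap_days = (next_admit - admissions["discharge_date"]).dt.days
admissions["readmit_30d"] = (gap_days <= 30).astype(int)
```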

Univariate analyses were performed using logistic regression to assess the significance of the relationship between each factor and readmission. All significant variables with a p-value of less than 0.05 were then passed to a multivariate logistic regression, and odds ratios were calculated to determine the magnitude of each relationship. Furthermore, various classic and modern machine learning models have been reported in the literature. In this study, to predict readmission within 30 days, we experimented with several algorithms using predictive analytics. The first model was developed using RF with a maximum depth of two levels and 100 estimators [17]. The second was a logistic regression (LR, aka logit, MaxEnt) classifier that implemented regularized logistic regression using the liblinear solver [18]. The third was developed using DTs with a maximum depth of two levels [19]. The fourth was the SVM classifier [20].

A cross-validation technique was used by splitting the dataset into 70% for training and 30% for testing. To measure the performance of the models, precision, recall, F1-score, and accuracy were computed for both classes, readmitted and not readmitted. These metrics were calculated in Python using the Scikit-learn library [16].
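A condensed scikit-learn sketch of this setup is shown below. The hyperparameters follow the text, but the dataset file and column names are hypothetical and the sketch is illustrative rather than the authors' code.

```python
# Hedged sketch of the modeling setup described above, using scikit-learn.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.svm import SVC
from sklearn.metrics import classification_report

admissions = pd.read_csv("cap_admissions_clean.csv")  # hypothetical cleaned, numeric dataset
X = admissions.drop(columns=["readmit_30d"])
y = admissions["readmit_30d"]

# 70% training / 30% testing split.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.30, random_state=42)

models = {
    "random_forest": RandomForestClassifier(max_depth=2, n_estimators=100),
    "logistic_regression": LogisticRegression(solver="liblinear"),
    "decision_tree": DecisionTreeClassifier(max_depth=2),
    "svm": SVC(),
}

for name, model in models.items():
    model.fit(X_train, y_train)
    # Precision, recall, F1-score, and accuracy for both classes.
    print(name)
    print(classification_report(y_test, model.predict(X_test), zero_division=0))
```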

The collected data included 8,057 admission records related to pneumonia for 5,776 patients, including 2,965 males and 2,811 females. Patients were located at six different hospitals under the Ministry of National Guard Health Affairs. Of these admissions, only 791 were followed by a readmission within 30 days, representing 9.1% of the total number of admissions in the dataset. The minimum age was 18 and the maximum age was 119, but more than 60% of the patients were older than 66 years, with a median age of 70 years (Table 1).

The first step in our analysis was univariate logistic regression with each of the independent variables. The results indicated that all the variables significantly affected the readmission probability (Table 2).

Subsequently, a multivariate analysis that included all significant factors from the univariate analysis was conducted. The results indicated that, keeping other variables constant, the odds of being readmitted decreased by 1% for each unit increase in age. The odds of being readmitted were 1.01 times higher for every extra temperature unit. The odds of being readmitted decreased by 1% for each unit increase in heart rate, by 2% for each unit increase in respiratory rate, and by 7% for every extra prescribed medication. The odds of being readmitted were 1.08 times higher for every extra comorbidity that the patient had. All other variables showed no significant effect (Table 3).
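For readers who want to see how such odds ratios are typically derived, a hedged statsmodels sketch is shown below. The variable names are hypothetical, and the figures reported in Table 3 come from the authors' own analysis, not this code.

```python
# Hedged sketch: multivariate logistic regression with odds ratios, via statsmodels.
import numpy as np
import pandas as pd
import statsmodels.api as sm

admissions = pd.read_csv("cap_admissions_clean.csv")  # hypothetical cleaned dataset
predictors = ["age", "temperature", "heart_rate", "respiratory_rate",
              "medication_count", "comorbidity_count"]

X = sm.add_constant(admissions[predictors])
result = sm.Logit(admissions["readmit_30d"], X).fit(disp=0)

# exp(coefficient) gives the odds ratio per one-unit increase in each predictor.
odds_ratios = pd.DataFrame({
    "odds_ratio": np.exp(result.params),
    "p_value": result.pvalues,
})
print(odds_ratios)
```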

For building the prediction model, four different machine learning algorithms were used. DT provided a high accuracy of 83%, while SVM, RF, and logistic regression achieved better accuracy of 90%. Because the dataset was unbalanced, the precision and recall for readmission were zero for all models except the DT, with 16% and 18%, respectively. On applying the Synthetic Minority Oversampling Technique (SMOTE) to balance the training dataset, the results did not change considerably, and the highest precision achieved was 16% in the SVM model [21]. The highest recall was achieved by RF at 45%, but without any advantage to this model because the accuracy was reduced to 65%. To understand the impact of the different attribute groups included in the dataset, and to test whether removing them would have a positive or negative effect on the models, we ran each of the four algorithms on the dataset five more times, each time removing one of the following groups of attributes: comorbidities, labs, radiology results, medications, and vital signs. The DT worked best without the vital signs, whereas the SVM and logistic regression did not perform better after removing any of them. The best relative result was achieved by removing comorbidities from the RF model, which gave 67% accuracy, 14% precision, and 47% recall (Table 4).
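The SMOTE step described here is normally applied to the training split only, as in the hedged imbalanced-learn sketch below; the names are hypothetical and the results above are the authors', not produced by this code.

```python
# Hedged sketch: oversample the minority (readmitted) class in the training split only.
import pandas as pd
from imblearn.over_sampling import SMOTE
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.metrics import classification_report

admissions = pd.read_csv("cap_admissions_clean.csv")  # hypothetical cleaned dataset
X, y = admissions.drop(columns=["readmit_30d"]), admissions["readmit_30d"]
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.30, random_state=42)

# Balance the training data; the test split is left untouched so evaluation stays honest.
X_train_bal, y_train_bal = SMOTE(random_state=42).fit_resample(X_train, y_train)

svm = SVC().fit(X_train_bal, y_train_bal)
print(classification_report(y_test, svm.predict(X_test), zero_division=0))
```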

Factors that affect patient readmission have been studied extensively in the past two decades. In this study, we examined clinical factors that can be linked to the risk of readmission for patients with CAP. The results of the analysis showed that patient age, heart rate, respiratory rate, medication count, and the number of comorbidities that the patient had were significantly associated with the odds of being readmitted. All other variables showed no significant effect. The significant variables from our analysis are all factors that could indicate the severity of the patient's condition in general and during the initial admission.

However, when examining the magnitude of the relationships using odds ratios, we can observe two conflicting themes. The first is that patients in worse condition have higher odds of readmission than their counterparts, which can be drawn from the odds for temperature and the number of comorbidities. The second theme is that patients in worse condition have lower odds of being readmitted than their counterparts, which can be drawn from the odds for patient age, heart rate, respiratory rate, and the number of medications prescribed. These two themes can be explained by the behavior of clinicians when dealing with admitted patients.

The researchers' hypothesis is that factors such as age, heart rate, and respiratory rate are directly related to the condition of patients with CAP, whereas temperature and the number of comorbidities are not. This implies that patients who are worse off according to their age, heart rate, and respiratory rate are given more attention and prescribed more medications, which subsequently results in lower odds of being readmitted. This explanation is consistent with another study conducted at the same hospital on seven-day readmission for emergency department patients [22]. Nevertheless, this hypothesis needs further investigation, and likely an observational study, to be confirmed or rejected.

To build a prediction model, multiple algorithms and techniques were available for use in this study. However, we selected the ones that can easily be interpreted by HCPs, where each decision can be translated into if/then rules. Moreover, we included other models that showed promising results in our literature review. After extensive testing of the final clean dataset and multiple trials of creating the prediction model, although the accuracy of the model reached 90%, we did not find any appropriate prediction for readmission. This can be explained by the nature of our dataset: records of patients readmitted within 30 days of discharge made up only 9.1% of the dataset, which was not enough to create an appropriate prediction.

Looking at the results of the models, the SVM performed poorly, which contradicts the findings reported in the literature. In this study, SVM showed zero recall by predicting that no patients would require readmission. Moreover, as part of the models' training exercise, we tried adding and dropping some of the predictors while using precision and recall to observe the effect on the performance of the models. This exercise resulted in a slight improvement in the RF model after decreasing the number of dummy attributes. On the other hand, better performance of the DT model was noticed after removing the continuous vital-sign attributes. Even with the best version of each model, we conclude that none of these models would qualify as a reliable or valid model for predicting readmissions of patients with CAP.

This poor performance of the models is in part due to the low quality of the dataset that was available for this analysis. For example, the laboratory results were provided as free text instead of structured numeric values, which would have required sophisticated and advanced text-mining efforts to extract the relevant values for our study. Unfortunately, this led to including only the number of labs ordered for each patient instead of the lab results themselves. Applying more natural language processing could make a real contribution to creating a model with better performance than ours.

Furthermore, for comorbidities, there were several entries for the same comorbidity for each patient, so the frequency of those comorbidities was not easily derived from the data. In addition, the same comorbidity had multiple descriptions, which made it harder to count. For example, diabetes mellitus, type II diabetes, and type II diabetes mellitus can appear simultaneously and might have been documented differently for the same patient. Moreover, we used the International Statistical Classification of Diseases and Related Health Problems, Tenth Revision, Australian Modification codes for pneumonia to apply the inclusion criteria, which only include patients who had CAP as a primary diagnosis. However, we noticed that different pneumonia codes were used interchangeably in the HIS documentation, even during the same visit, which affected the accuracy of including only CAP patients.

Dealing with such data requires a careful process to ensure that the cleaning is done appropriately and does not introduce faulty data or drop data from the original dataset. Furthermore, decisions about how to transform the data should be discussed with a domain expert before each action, not taken by the data scientist alone. For example, because we had multiple readings of vital signs, we needed to transform them to get only one value for each visit. The data scientist might use the median value to give an overall view of the entire admission, but one physician might rather take the last reading before discharge, because it better reveals the state in which the patient was discharged, which affects the possibility of readmission; another might rather use the worst reading during the admission to reflect how bad the patient's condition was, which would also affect the possibility of being readmitted.

The main limitation of this study was the structure of the original dataset, which limited the number of features that could be engineered and used for prediction. Another limitation was the lack of data from outside the hospital that could strongly impact the readmission of patients. For example, if the hospital had collected data on how well patients adhere to their medications or their diet, we hypothesize that the prediction could have been more accurate. This kind of data could be collected as part of future studies from patients' smartphones through dedicated applications. Future nationwide studies can be conducted to compare different populations based on societal factors, such as lifestyle and diet, to determine their impact on readmission.

Pneumonia is an infectious disease with major health and economic complications. Predicting CAP readmission using machine learning is an essential area of further research. Based on our study findings, the model accuracy reached 90%, but we did not find an appropriate prediction for readmission because the number of readmissions within 30 days was only 9.1% of the entire dataset received. We propose that machine learning can be used for predicting CAP readmission, but appropriate data sources and a suitable modeling technique are required to improve the accuracy of the model.

During the development of our machine learning and statistical analysis models, we identified factors that are associated with the readmission of patients with CAP. These factors should be used and closely monitored by HCPs to minimize the rate of readmissions of patients with CAP. In our study, we identified that less than 10% of the patients were readmitted for CAP after discharge within the dates included in our study. Furthermore, our results indicated that age, heart rate, respiratory rate, medication count, and the number of comorbidities that a patient had were significantly associated with the odds of being readmitted. However, the prediction performance of the models in our study was not adequate to predict the risk of readmission.

Here is the original post:
A Machine Learning Model for Predicting the Risk of Readmission in Community-Acquired Pneumonia - Cureus


ART-ificial Intelligence: Leveraging the Creative Power of Machine Learning | LBBOnline – Little Black Book – LBBonline

Above: Chago's AI self-portrait, generated in Midjourney.

I have learnt to embrace and explore the creative possibilities of computer-generated imagery. It all started with the introduction of Photoshop thirty years ago, and more recently, I became interested in the AI software program Midjourney, a wonderful tool that allows creatives to explore ideas more efficiently than ever before. The best description for Midjourney that I've found is "an AI-driven tool for the exploration of creative ideas."

If I was talking to somebody who was unfamiliar with AI-generated art, I would show them some examples, as this feels like a great place to start. Midjourney is syntax-driven; users must break down the language and learn the key phrases and the special order of the words in order to take full advantage of the program. As well as using syntax, users can upload reference imagery to help bring their idea to life. An art director could upload a photo of Mars and use that as a reference to create new imagery; I think this is a fantastic tool.

I'm a producer, with an extensive background as a production artist, mostly in retouching and leading post production teams. I also have a background in CGI; I took some postgraduate classes at NYU for a couple of semesters, and I went to college for architecture, so I can draw a little bit, but I'm not going to pretend that I could ever do a CGI project. A lot of art directors and creative directors are in the same boat: they direct and creative direct a lot of CGI projects - especially on the client side - but don't necessarily know CGI. Programs like Midjourney let people like us dip our toes into the creative waters by giving us access to an inventive and artistic toolset.

Last week, the Steelworks team was putting together a treatment deck for a possible new project. We had some great ideas to send to the client, but sourcing certain specific references felt like finding a needle in a haystack. If we are looking for a black rose with gold dust powder on the petals, it is hard to find exactly what we want. It's times like these when a program like Midjourney can boost the creative. By entering similar references into the software and developing a syntax that is as close to what you're looking for as possible, you are given imagery that provides more relevant references for a treatment deck. For this reason, in the future I see us utilising Midjourney more often for these tasks, as it can facilitate the creative ideation for treatments and briefs for clients.

I'm optimistic about Midjourney because, as technology evolves, humans in the creative industries continue to find ways to stay relevant. I was working as a retoucher when Photoshop first came out with the Healing Brush. Prior to that, all retouching was done manually by manipulating and blending pixels. All of a sudden, the introduction of the Healing Brush meant that with one swipe, three hours of work was removed. I remember we were sitting in our post production studio when someone showed it to us and we thought, "Oh my God, we're gonna be out of a job." Twenty years later, retouching still has relevance, as do the creatives who are valued for their unique skill sets.

I don't do much retouching anymore, but I was on a photo shoot recently and I had to get my hands in the sauce and put comps together for people. There were plenty of new selection tools in Photoshop that have come out in the last three years and I had no idea about most of them. I discovered that using these tools cut out roughly an hour's worth of work, which was great. As a result, it opened up time for me to talk to clients, and be more present at work and home. It's less time in front of the computer at the end of the day.

While these advancements in technology may seem daunting at first, I try not to think of them as a threat to human creativity, but rather as tools which grant us more time to immerse ourselves in the activities that boost our creative thinking. Using AI programs like Midjourney helps to speed up the creative process which, in turn, frees up more time to do things like sit outside and enjoy our lunch in the sun, or go to the beach or to the park with your kids: things that feed our frontal cortex and inspire us creatively. It took me a long time to be comfortable with taking my nose off the grindstone and relearning how to be inspired creatively.

Read more here:
ART-ificial Intelligence: Leveraging the Creative Power of Machine Learning | LBBOnline - Little Black Book - LBBonline


Renesas Launches RZ/V2MA Microprocessor with Deep Learning and OpenCV Acceleration Engines – Hackster.io

Renesas has announced a new chip in its RZ/V family, aimed at accelerating OpenCV and other machine learning workloads for low-power computer vision at the edge: the RZ/V2MA microprocessor, built with the Apache TVM compiler stack in mind.

The new Renesas RZ/V2MA is built around two 64-bit Arm Cortex-A53 processor cores running at up to 1GHz plus an artificial intelligence coprocessor, dubbed the DRP-AI, offering one trillion operations per second (TOPS) of compute performance per watt of power. In real-world terms, that translates to 52 frames per second for the TinyYoloV3 network.

Renesas' latest chip aims to provide enough grunt for real-time edge-vision workloads in a low power envelope. (Image: Renesas)

In addition to the DRP-AI coprocessor, the chip also offers an accelerator specifically focused on OpenCV workloads, improving performance for rule-based image processing, which can run simultaneously alongside networks running on the DRP-AI.

On the software side, Renesas has a DRP-AI Translator, which offers conversion for ONNX and PyTorch models to enable them to run on the DRP-AI core, with TensorFlow support to follow; the company has also announced the DRP-AI TVM, built atop the open-source Apache TVM deep learning compiler stack, which allows for programs to be compiled to run on both the DRP-AI accelerator and one or both CPU cores.
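Renesas hasn't published the DRP-AI TVM workflow in this article, but since the tool is built on Apache TVM, the general shape of compiling an ONNX model with stock TVM looks roughly like the hedged sketch below. The model file, input name, and shape are assumptions, and the actual DRP-AI TVM tool adds its own backend and operator partitioning on top of this flow.

```python
# Hedged sketch of a generic Apache TVM compile flow for an ONNX model.
# This is stock TVM, not Renesas' DRP-AI TVM backend specifically.
import onnx
import tvm
from tvm import relay

onnx_model = onnx.load("tiny_yolo_v3.onnx")       # hypothetical model file
shape_dict = {"input": (1, 3, 416, 416)}           # assumed input name and shape

mod, params = relay.frontend.from_onnx(onnx_model, shape_dict)

# Compile for the host CPU here; for the board you would pick an AArch64 target,
# and a vendor backend would offload supported operators to its accelerator.
with tvm.transform.PassContext(opt_level=3):
    lib = relay.build(mod, target="llvm", params=params)

lib.export_library("tiny_yolo_v3.so")              # deployable compiled module
```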

"One of the challenges for embedded systems developers who want to implement machine learning is to keep up with the latest AI models that are constantly evolving," claims Shigeki Kato, vice president of Renesas' enterprise infrastructure division. "With the new DRP-AI TVM tool, we are offering designers the option to expand AI frameworks and AI models that can be converted to executable formats, allowing them to bring the latest image recognition capabilities to embedded devices using new AI models.

The RZ/V2MA also offers support for H.264 and H.265 video codecs, LPDDR4 memory at up to 3.2Gbps bandwidth, USB 3.1 connectivity, and two lanes of PCI Express. To showcase its capabilities, Renesas has used the chip in the Vision AI Gateway Solution reference design, which includes Ethernet, Wi-Fi, Bluetooth Low Energy (BLE), and cellular LTE Cat-M1 connectivity.

More information on the RZ/V2MA is available on the Renesas website.

Read more:
Renesas Launches RZ/V2MA Microprocessor with Deep Learning and OpenCV Acceleration Engines - Hackster.io


Advantages of virtualization in cloud computing – TechRepublic

Virtualization supports cloud computing and enables differentiation between digital assets. Here are some of the benefits and use cases.

Cloud computing continues to become a viable option for businesses and organizations around the world. This ever-growing system of digital management allows companies the freedom to scale operations or efficiently manage clients, data and services. But cloud computing comes in many forms and can be complicated for businesses to maintain long-term. Fortunately, utilizing virtualization can be a great solution.

Virtualization, in fact, is already present in various aspects of the cloud model. From security to ease of access, virtualization is a way of organizing cloud systems and successfully conducting business. According to Introduction to Virtualization in Cloud Computing, a recently published chapter in Machine Learning and Optimization Models for Optimization in Cloud, the effects of virtualization on cloud systems are invaluable.

Understanding virtualization in cloud computing can help businesses maintain a digital presence without fear of deficiency. With new tools and the expanding network of digital computing, virtualization has become easier to grasp and implement than ever before.

Virtualization has been defined in a dozen different ways. At its most basic, it can be considered the creation of virtual hardware and software services that can be used and flexibly managed by organizations as needed. This definition sheds light on how virtualization already exists within cloud models: virtualization is an integral part of cloud technologies.

Indeed, it is not something that can be eliminated from the cloud process. Rather, virtualization is what makes cloud and multicloud technologies viable for numerous businesses.

The most important part of a virtual model is a bit of software called a hypervisor. Hypervisors can be imagined as a buffer between hardware and clients using operating systems or services to run their businesses. This allows one piece of hardware to form multiple virtual computers for guest use.

SEE: Megaport Virtual Edge vs Equinix Network Edge: Compare top edge computing platforms (TechRepublic)

Virtualization contributes to increased security for organizations operating in a digital space. Instead of relying on hardware to provide security, virtual systems store information collectively and stop data from being shared between applications. Virtualization also allows technicians to grant access to only specific aspects of a network and lets professionals exercise greater control over systems.

In truth, the most powerful security feature of virtualization is related to its ability to separate all digital computers on a network, meaning that data is not transferred from system to system. This helps keep cloud computing secure and maintains a safe atmosphere for businesses that handle private information or worry about data leaks.

Without hardware to consider, virtualization paves the way for new methods of conducting business digitally. Perhaps the most obvious benefit of flexible servers is the cost. Companies are no longer forced to pay for hardware when they adopt virtual solutions. In addition, virtualization allows for the differentiation of operating systems and networks. This is called desktop virtualization.

Indeed, there are multiple different types of virtualization that exist within the cloud space, including desktop, network, server, application and more. These different forms of virtual computing contribute to flexibility as well by making it possible for businesses to specialize in the technology that they require most. While this process can be confusing, it certainly has long-term benefits that grant more control over operations and digital commerce.

Considering the link between cloud computing and virtualization, it should come as no surprise that the same strengths related to productivity can also be applied to virtual technologies. Cloud systems, while possible without virtualization, would be much more complex and expensive. Coupled with virtual computing, the two processes ease the stress of businesses. Virtualization, for example, increases the efficiency of hardware by facilitating different operations, systems, desktops and storage functions at the same time.

Virtualization also allows for multiple operating systems to run at the same time. And, should there ever be a maintenance issue, restructuring a virtual system is much simpler and less expensive than addressing hardware faults. All of these processes and more come together to form a network that aids organizations by creating a smoother and more reliable cloud system.

Virtualization, and cloud computing itself, is projected to grow in the coming years. According to Allied Market Research, the virtualization security market alone is expected to jump from $1.6 billion to $6.2 billion by 2030. This is because virtualization has become increasingly important to businesses around the world, and not always the most obvious ones.

Virtualization is an integral part of the cloud streaming process. It allows organizations to separate content between users and platforms securely. The technology is also used to store information for clients and companies that handle sensitive content. This could mean medical records, financial information or any other private data. In fact, virtualization is expected to benefit globally as more organizations begin to move toward a digital environment.

Here is the original post:
Advantages of virtualization in cloud computing - TechRepublic


The cloud computing revolution – Stuff

Ben Kepes is a Canterbury-based entrepreneur and professional board member. He's all about the cloud.

OPINION: I'm an aged sort of a chap. As such I have somewhat traditional views about how to achieve things.

Take for example the building of financial freedom for an individual. My perspective is that this is done over time - by building a good foundation, making calculated choices, smoothing the inevitable peaks and troughs and the like.

This is counter to the get-rich-quick approach, which sees people jump on the next big thing (bitcoin, anyone?).

READ MORE:
* The value of a board for a non-Elon Musk company
* Patagonia - the business that measures success through impacts rather than profits
* Are our tech companies making any money?

Bill Gates, co-founder of Microsoft, is credited with saying that most people overestimate what they can do in one year and underestimate what they can do in 10 years.

It's a wise saying and seems to hold true no matter what context it is used within.

If we're talking about creating a high-performance team, changing the culture of an organisation or changing the way a society thinks, while individual inflection points are certainly a thing, change over a longer timescale is where the magic really happens.

In a short space of time people have become totally relaxed about using the cloud, says Ben Kepes.

Gates is a smart guy, and often I find myself in situations where I become frustrated by a lack of progress, only to take a step back and look at progress through a longer timeframe and wonder at how far we have, in fact, come.

I was thinking about this longer term perspective recently while I was sitting in a board meeting. The organisation in question, like most organisations out there, is grappling with an existing technology paradigm.

As in all organisations, technology (specifically software) powers the back-end processes of this one, and the modernisation of that software is part of the key to unlocking growth, progress, customer centricity and everything a good business wants to achieve.

I listened in at the board meeting while the leadership team and the board discussed technology priorities. One of these priorities included moving from an on-premises model to SaaS model of software delivery.

My heart skipped a beat and I had to pinch myself as it became apparent just how far we have come in 15 or so years.

You see, back around 2006, when Salesforce.com and Amazon Web Services were simply tiny, nascent technology companies, and Xero was the merest glimmer in the eye of co-founder Rod Drury, I decided on a bit of a career change.

That saw me become a technology industry analyst, a vague sort of a role that sees an individual spend their working life observing what is happening with both vendors and customers of technology. My particular focus was cloud computing.

Back in those days, and this is no exaggeration, only a tiny number of people actually had any kind of idea of what cloud computing was. Notions like this were utterly foreign within the boardroom, generally misunderstood in technology departments and totally off the radar for consumers.

But like that Bill Gates quote says, fast-forward a tad more than a decade and people are totally relaxed about sharing photos in the cloud, collaborating on documents in the cloud, and leveraging software delivered from the cloud to the browser on their desktop, laptop or mobile device. We've truly come far.


Change over a longer timescale is where the magic really happens, says Ben Kepes.

Now I would not for a nanosecond suggest that I was, back then, prescient. I simply, as is my style, lurched from one random career down another pathway in order to investigate something interesting. There was no calculus to my choice; it was simply, to quote Robert Frost's words, a case of: "Two roads diverged in a wood, and I / I took the one less travelled by, / And that has made all the difference."

Anyway, the reason for this article isn't to reflect on anything I've done, but rather to express amazement at what a revolution cloud computing has been for the world and to offer up some kudos to those who foresaw this change. People like Amazon Web Services creator Andy Jassy, or Salesforce.com founder Marc Benioff - they're the prescient ones.

To recognise how far we've come, I thought I'd dig out a video I made a decade or so ago, back when cloud was still a little-known, let alone used, term.

It's almost quaint to think that once upon a time some random Kiwi had to make a video to explain what cloud computing was. Enjoy: https://www.youtube.com/watch?v=pcQQ2U_VBWI

Go here to read the rest:
The cloud computing revolution - Stuff


This Top Cloud Computing Stock Is Starting to Look Like a Great Deal – The Motley Fool

The bear market of 2022 has been brutal to software stocks, and some of them are rightfully being punished. Fast growth is good, but profitability matters. And many upstart cloud software companies have been found to be deficient in this department.

That's not the case for Dynatrace (DT). The company is dedicated to growing profitably, and the financial results show it. Nevertheless, headed into the fourth quarter of 2022, shares sold off some 45% this year as the stock gets hammered along with the rest of the tech market. Here's why Dynatrace is starting to look like a good deal.

One reason the market is being clobbered relates to fears of a recession. As economic downturn risk increases, many organizations tighten up their budgets. But the top brass at many cloud companies say they're still growing because "digital transformation" remains a top priority. That makes sense, as investing in digital processes helps a business get more efficient and saves resources in the long run.

That's the story Dynatrace's top team preached this year as well. As giant corporations (Dynatrace's focus) migrate more of their operations to the cloud, they need a new set of tools to ensure these new IT capabilities and cloud-based apps operate properly. That's where cloud observability comes in.

Dynatrace's platform covers everything from application performance monitoring to security to tech infrastructure monitoring. And unlike a lot of legacy software, this toolset doesn't just inform an IT department if something is amiss. It also suggests and helps automate a fix. Data volume and complexity are booming as cloud adoption ramps up, so this kind of automation is mission critical for mega-corporations.

But what if cloud industry growth slows down? At a recent tech conference, CEO Rick McConnell pointed out that the cloud hyperscalers Amazon AWS, Microsoft Azure, and Alphabet's Google Cloud collectively hauled in about $160 billion in revenue in Q2 2022, growing at a 36% year-over-year pace. Even if that rate of increase slows a bit, the cloud universe is doing just fine.

And since Dynatrace generally follows the route of cloud industry expansion, it's doing just fine as well. Its revenue increased 34% year-over-year in its last quarter, even as many of its big customers started reducing spending as recession worries mounted.

For its 2023 fiscal year (the 12-month period that will end in March 2023), Dynatrace expects revenue to be up at least 21% (or up at least 26% when excluding foreign currency exchange rates) to about $1.13 billion. Along the way, free cash flow profit margin should be about 28%, a very healthy rate for a growth company.

Granted, this is a slowdown from the recent past. Economic uncertainty and the U.S. dollar's historic run-up are weighing on revenue and profits. McConnell and company also decided earlier this year to invest a little of the company's cash into expanding its sales force. Over the next year or two, management sees those free cash flow margins edging back up toward 30% as that investment is digested.

And thanks to its steady generation of fresh cash, Dynatrace's balance sheet has also rapidly improved since its IPO in 2019. Once saddled with liabilities, this company is now cash- and short-term-investment positive net of debt.


This debt payoff hasn't hindered Dynatrace's development of more tools, though. McConnell has said the infrastructure monitoring module is pulling in about $100 million a year in sales now, but growing at a much faster rate than revenue overall. The more recently released app security module is well on its way to reaching $100 million in annualized revenue. And a new tool that has been in the works for a few years now, data log management (Splunk's primary software capability), is almost ready to be unveiled. Based on Dynatrace's conversations with customers, log management is also expected to ramp up very quickly to $100 million a year in sales.

After getting dragged down by the market overall, Dynatrace stock now trades at an enterprise value of 32 times free cash flow. It's a rare software company that is reporting fast growth and robust free cash flow generation. I think it could be time to nibble here if you are looking for quality cloud computing stocks to hold for the next few years.

Suzanne Frey, an executive at Alphabet, is a member of The Motley Fool's board of directors. John Mackey, CEO of Whole Foods Market, an Amazon subsidiary, is a member of The Motley Fool's board of directors. Nicholas Rossolillo and his clients have positions in Alphabet (C shares), Amazon, and Dynatrace, Inc. The Motley Fool has positions in and recommends Alphabet (A shares), Alphabet (C shares), Amazon, Microsoft, and Splunk. The Motley Fool has a disclosure policy.

Go here to read the rest:
This Top Cloud Computing Stock Is Starting to Look Like a Great Deal - The Motley Fool


Google to build its first cloud region in Greece – Reuters

ATHENS, Sept 29 (Reuters) - Alphabet Inc's Google will set up its first cloud region in Greece, the company said on Thursday, giving a boost to the country's efforts to become a world cloud computing hub.

The deal is estimated to contribute some 2.2 billion euros ($2.13 billion) to Greece's economic output and create some 20,000 jobs by 2030, Prime Minister Kyriakos Mitsotakis said.

Since taking office in 2019, Mitsotakis's conservative government has stepped up moves to diversify the economy and attract foreign investment and high-tech companies to the country, which emerged from a decade-long financial crisis in 2018.


"Today, we are very pleased to be announcing our first cloud region in Greece which will provide storage and cloud services for Google customers," said Adaire Fox-Martin, president of Google Cloud International, announcing the investment at an event in Athens.

The investment would enable organisations to better use their data, help deliver low latency and ensure users' security in the face of cybersecurity threats, she said.

A cloud region usually is based around a cluster of data centres.

Google's investment comes two years after Microsoft Corp. decided to build a data centre hub in the country.

Amazon Inc's cloud computing division also opened its first office in Greece last year to support what it said was a growing number of companies and public sector agencies using its cloud services.

($1 = 1.0333 euros)


Reporting by Angeliki Koutantou. Editing by Jane Merriman

Our Standards: The Thomson Reuters Trust Principles.

Link:
Google to build its first cloud region in Greece - Reuters
