
Machine learning identifies trauma patients benefiting from tranexamic acid treatment – News-Medical.Net

Researchers from Osaka University use machine learning to identify patients more likely to survive traumatic injury if treated with tranexamic acid.

Worldwide, approximately 4.5 million people die of traumatic injury every year. Many of these patients die from blood loss.

Early treatment with a drug called tranexamic acid stops excessive bleeding by reducing the body's ability to break down blood clots. However, tranexamic acid can cause unnecessary side effects in patients who do not need it, so objective criteria are needed to select the patients who will truly benefit.

Now, in a study published in Critical Care, researchers from Osaka University have addressed this treatment challenge by identifying subgroups of trauma patients who are more likely to survive if treated with tranexamic acid. The team found these subgroups by examining trauma patients who shared similar traits (also known as phenotypes).

"We identified eight different trauma phenotypes, and then we evaluated the benefits of tranexamic acid treatment based on these phenotypes. We found subgroups of patients with significantly lower in-hospital mortality when they received tranexamic acid. We also found subgroups of patients who received no benefit from treatment."

Jotaro Tachino, lead author

The team used a machine learning model to help categorize trauma patients into these subgroups. Using this technique, the researchers processed information from over 50,000 patients in the Japan Trauma Data Bank and then analyzed patterns associated with trauma, treatment, and survival.

The team found an association between trauma phenotypes and in-hospital mortality, indicating that treatment with tranexamic acid (TXA) could potentially influence this relationship.

The researchers say, "Trauma patients are a heterogeneous population with injuries that vary greatly in type and severity. This makes it difficult to predict how effective a treatment will be in an individual patient. We hope our results will help individual trauma patients receive more personalized care as well as improve the quality of care for all trauma patients."

Given the high death toll from traumatic injury, strategies that improve survival are essential for patients and their families. This research is a key step in optimizing tranexamic acid use in trauma patients.

Source:

Journal reference:

Tachino, J., et al. (2024) Association between tranexamic acid administration and mortality based on the trauma phenotype: a retrospective analysis of a nationwide trauma registry in Japan. Critical Care. doi.org/10.1186/s13054-024-04871-w.


Application of machine learning for identification of heterotic groups in sunflower through combined approach of … – Nature.com

Experiment 1

For accurate identification of the heterotic grouping pattern, a multi-pronged strategy was adopted: morphological, biochemical, and molecular datasets of sunflower genotypes were analyzed with three clustering algorithms, i.e., hierarchical, K-means, and a hierarchical + K-means hybrid classification algorithm. The efficacy of these three machine learning algorithms was tested on the sunflower genotypes, and the algorithm that best explained and most accurately classified the genotypes was used for the final parental selection for further hybrid development.

Figure 2 represents the dendrogram obtained using the hierarchical classification algorithm. For hierarchical clustering, the Ward.D2 method was applied to the combined dataset of morphological + biochemical + molecular characterization. The cluster diagram (Fig. 2) showed two distinct classes of genotypes: cluster 1 contains all the restorer lines, while cluster 2 holds the CMS + B-lines and self-pollinated lines. Cluster 1 includes 31 sunflower genotypes, while the remaining 78 genotypes grouped into cluster 2. Further, at a genetic distance of 18, these clusters can be sub-divided into six smaller groups each. Sub-group 1-A has six genotypes, while sub-groups 1-B, 1-C, 1-D, 1-E, and 1-F have 3, 8, 6, 2, and 6 genotypes, respectively. Likewise, cluster 2 can be divided into six sub-groups at the genetic distance of 18. Sub-group 2-A held 8 genotypes, while sub-group 2-B had 11. Similarly, sub-groups 2-C, 2-D, 2-E, and 2-F held 7, 20, 20, and 12 genotypes, respectively.

Hierarchical clustering of 109 sunflower genotypes through Ward.D2 method.
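As a rough sketch of this step (assuming NumPy and SciPy are available, and using a synthetic 109 × 20 matrix as a stand-in for the real combined morphological + biochemical + molecular data), Ward agglomeration with the two cuts described above looks like:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(42)
# Hypothetical stand-in for the combined trait matrix: 109 genotypes x
# 20 standardized traits, with two artificial groups of 31 and 78 rows
# mimicking the restorer vs. CMS/B/SFP split reported in the study.
data = np.vstack([
    rng.normal(0.0, 1.0, size=(31, 20)),
    rng.normal(4.0, 1.0, size=(78, 20)),
])

# Ward-criterion agglomeration on Euclidean distances (SciPy's 'ward'
# corresponds to R's Ward.D2 when applied to the raw feature matrix).
Z = linkage(data, method="ward")

# Cut the dendrogram into the two major clusters...
major = fcluster(Z, t=2, criterion="maxclust")
# ...and again at a fixed distance threshold for the finer sub-groups.
sub = fcluster(Z, t=18, criterion="distance")

print(sorted(np.bincount(major)[1:].tolist()))  # sizes of the two major clusters
```

On this synthetic data the two-cluster cut recovers the 31/78 split exactly; on real data the distance threshold (18 in the paper) determines how many sub-groups appear.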

The K-means clustering algorithm is an unsupervised machine learning approach that groups similar data points into one cluster, away from dissimilar data points. More precisely, the algorithm minimizes the within-cluster sum of squares and consequently maximizes the between-cluster sum of squares. In the present study, K-means clustering applied to the 109 sunflower genotypes grouped them into 2 major clusters (Fig. 3). Cluster 1 holds 31 genotypes, while cluster 2 classified 78. Cluster 1 predominantly contains restorer lines, while cluster 2 contains the self-pollinated (SFP) lines, i.e., the A-lines and B-lines of the sunflower genetic pool under study. Although K-means precisely grouped the genotypes into two major clusters, this algorithm could not resolve them into smaller groups with more precision, as many SFP lines lie close to the A-lines or B-lines, making it hard to distinguish between them.

K-means clustering of 109 sunflower genotypes.
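The minimize-within/maximize-between sum-of-squares behavior described above can be sketched with scikit-learn on the same kind of synthetic stand-in data (the matrix below is illustrative, not the study's real trait data):

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Hypothetical 109 x 20 trait matrix with two separated groups
# (31 vs. 78 rows), standing in for the real sunflower dataset.
X = np.vstack([rng.normal(0, 1, (31, 20)), rng.normal(4, 1, (78, 20))])

# K-means minimizes the within-cluster sum of squares; for a fixed total
# sum of squares this simultaneously maximizes the between-cluster part.
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)

sizes = np.bincount(km.labels_)
print(sorted(sizes.tolist()))
```

The `inertia_` attribute of the fitted model is exactly the within-cluster sum of squares the algorithm minimizes.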

Finally, a hybrid algorithm combining hierarchical and K-means clustering was applied to the sunflower genotypes to examine whether the precision of the harvested heterotic groups could be improved further. Setting the number of k(s) to 12 produced two major clusters, which were further divided into 12 smaller clusters (Fig. 4). Cluster 1 contains 12 genotypes (2 B-lines and 10 restorer lines); cluster 2 contains 8 genotypes (4 CMS + 4 B-lines). Cluster 3 had 4 genotypes (1 B-line + 3 SFP lines), and 12 genotypes (6 CMS lines, 5 B-lines, and 1 SFP line) were grouped into cluster 4. Cluster 5 gathered 15 genotypes, all restorer lines, and 11 genotypes were grouped in cluster 6 (5 CMS lines, 4 SFP lines, 1 restorer line, and 1 B-line). Likewise, cluster 7 had 6 genotypes (5 SFP lines + 1 CMS line) and cluster 8 had 11 genotypes (6 SFP lines, 4 restorer lines, and 1 CMS line). Six genotypes (3 CMS lines, 2 SFP lines, and 1 restorer line) were grouped in cluster 9, while cluster 10 held 8 genotypes (3 CMS lines, 3 restorer lines, and 2 B-lines). Cluster 11 had 8 genotypes (3 SFP lines, 2 CMS lines, 2 B-lines, and 1 restorer line), and 8 genotypes grouped into cluster 12 (3 restorer lines, 2 CMS lines, 2 B-lines, and 1 SFP line).

Clustering of 109 sunflower genotypes through hybrid (hierarchical+K-means) machine learning.
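One common way to realize such a hierarchical + K-means hybrid (the exact pipeline used in the study is not detailed here, so this is a sketch under that assumption, again on synthetic stand-in data) is to cut the Ward tree into k groups and use their centroids as deterministic seeds for K-means:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
# Hypothetical 109 x 20 trait matrix (illustrative only).
X = np.vstack([rng.normal(0, 1, (31, 20)), rng.normal(4, 1, (78, 20))])

k = 12
# Step 1: Ward hierarchical clustering, cut into k groups.
labels_h = fcluster(linkage(X, method="ward"), t=k, criterion="maxclust")

# Step 2: use the hierarchical group centroids as deterministic seeds
# for K-means, which then refines the partition iteratively.
seeds = np.vstack([X[labels_h == c].mean(axis=0) for c in range(1, k + 1)])
km = KMeans(n_clusters=k, init=seeds, n_init=1).fit(X)

print(len(np.unique(km.labels_)))  # number of refined clusters
```

Seeding K-means this way removes its sensitivity to random initialization, which is the usual motivation for combining the two algorithms.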

The grouping observed with the hybrid (hierarchical + K-means) algorithm was useful to some extent, since it can group close genotypes together; however, it also placed genotypes with distinct characteristics, such as restorer lines and CMS lines, close together, which is confusing, so this algorithm was likewise not a good fit for the current study. Because the grouping produced by the hierarchical clustering algorithm is clearer and more definitive, the selection of potential parents for the development of sunflower hybrids was based on the groups observed through the hierarchical clustering approach.

As 12 sub-groups were observed through the hierarchical clustering method, one genotype from each of the 12 groups was selected for further use in the sunflower hybrid breeding program. From each of the 12 groups (cut at the genetic distance of 18), the genotype exhibiting the highest seed yield potential was selected. Moreover, since all the restorer lines clustered separately from the CMS lines, a Line × Tester mating design was followed for sunflower F1 hybrid development.

To assess the practical efficiency of the identified heterotic groups, the selected parental lines were crossed in a Line × Tester mating design and 36 sunflower F1 hybrids were generated. Heterosis (mid-parent heterosis, better-parent heterosis) and combining ability analyses (general combining ability and specific combining ability) were conducted to evaluate the potential of the methodology used for identifying the heterotic grouping pattern and, thereby, the selection of potential parental lines for commercial hybrid development.
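The two heterosis measures used throughout the results have standard definitions: the F1 mean relative to the mid-parent mean, and relative to the better parent. A small sketch (the parental seed-yield means are from Table 1; the F1 value below is purely illustrative):

```python
def mid_parent_heterosis(f1, p1, p2):
    """Heterosis: F1 gain over the mid-parent value, in percent."""
    mp = (p1 + p2) / 2.0
    return 100.0 * (f1 - mp) / mp

def better_parent_heterosis(f1, p1, p2):
    """Heterobeltiosis: F1 gain over the better parent, in percent."""
    bp = max(p1, p2)
    return 100.0 * (f1 - bp) / bp

# Worked example with seed yield per plant (grams): parents
# CMS-HAP-112 (68.19 g) and RHP-68 (27.28 g); the F1 value of 90 g
# is an assumption for illustration only.
f1_yield = 90.0
print(round(mid_parent_heterosis(f1_yield, 68.19, 27.28), 2))     # 88.54
print(round(better_parent_heterosis(f1_yield, 68.19, 27.28), 2))  # 31.98
```

A negative heterobeltiosis value, common in the earliness traits below, simply means the F1 fell short of (or, for days-to-flowering, matured earlier than) its better parent.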

Table 1 presents the mean performance of the 12 sunflower lines planted at NARC, Islamabad. The study focused on nine agro-morphological traits. Among the lines, CMS-HAP-112 exhibited the shortest duration to flower initiation, taking only 46.5 days, while RHP-41 had the longest duration of 56.5 days. CMS-HAP-111 completed 100% flowering the earliest, within 55 days, followed by CMS-HAP-112 at 55.5 days. On the other hand, RHP-41 took the maximum number of days to complete flowering, with a duration of 67.5 days. Regarding plant height, the 12 parental sunflower lines ranged from 200.14 cm (CMS-HAP-54) to 134.6 cm (CMS-HAP-111). In terms of leaf area, CMS-HAP-56 had the highest recorded value of 257.48 cm², while RHP-38 had the lowest average leaf area of 141.5 cm². The largest head diameter of 19.3 cm was observed in CMS-HAP-99, whereas the smallest head diameter of 10.45 cm was found in RHP-38. For stem curvature, the lowest value recorded was 6.95 cm for RHP-71, while CMS-HAP-111 and CMS-HAP-12 exhibited the highest stem curvatures of 48 cm and 45.7 cm, respectively. The number of leaves varied among the parental lines, with CMS-HAP-111 having the fewest leaves (23.35) and CMS-HAP-112 the most (33.1), followed by CMS-HAP-99 (33). The 100-seed weight of the parental lines ranged from 3.48 g (RHP-69) to 6.61 g (CMS-HAP-99). CMS-HAP-112 displayed the highest mean seed yield per plant at 68.19 g, while the lowest seed yields per plant were observed in RHP-68 (27.28 g) and RHP-41 (27.9 g) (Table 1).

Table 2 shows the mean performance of the 36 sunflower hybrids grown at NARC, Islamabad, for the same nine agro-morphological traits. Hybrids RHP-68 × CMS-HAP-112 and RHP-38 × CMS-HAP-112 had the shortest time to flower initiation, at only 44 days, while RHP-71 × CMS-HAP-56 had the longest, at 56.5 days. RHP-68 × CMS-HAP-112 and RHP-38 × CMS-HAP-54 required the minimum number of days (50) to complete 100% flowering, whereas RHP-71 × CMS-HAP-111 took 66.5 days. Regarding mean leaf area approaching physiological maturity, RHP-71 × CMS-HAP-56 showed the highest value of 176.53 cm², while RHP-69 × CMS-HAP had the lowest mean leaf area. The largest head diameter, 23.95 cm, was recorded for RHP-71 × CMS-HAP-99, followed by RHP-53 × CMS-HAP-111 with 22.77 cm. Conversely, RHP-68 × CMS-HAP-112 had the smallest head diameter of 17.11 cm, followed by RHP-68 × CMS-HAP-54 with 17.53 cm. The tallest hybrid was RHP-71 × CMS-HAP-112, with an average height of 175.17 cm, while the shortest hybrids were RHP-53 × CMS-HAP-111 (131 cm) and RHP-41 × CMS-HAP-56 (132 cm).

Regarding stem curvature, the lowest recorded value was 42.77 cm for RHP-68 × CMS-HAP-54, followed by RHP-53 × CMS-HAP-54 with a stem curvature of 48.83 cm, while CMS-HAP-99 and RHP-38 × CMS-HAP-112 exhibited the maximum stem curvatures of 77.5 cm and 74.83 cm, respectively. RHP-53 × CMS-HAP-111 had the lowest number of leaves (26), while RHP-71 × CMS-HAP-56 had the highest (36.67), followed by RHP-71 × CMS-HAP-99 (36.17). The 100-seed weights of the hybrids ranged from 4.41 g (RHP-71 × CMS-HAP-111) to 7.34 g (RHP-38 × CMS-HAP-12). The minimum seed yield per plant, 49.3 g, was recorded for hybrid RHP-53 × CMS-HAP-111, whereas RHP-71 × CMS-HAP-54 showed the highest average seed yield of 103.36 g per plant, followed by RHP-41 × CMS-HAP-111 at 99.45 g.

Results of heterosis and heterobeltiosis for nine morphological characteristics of the sunflower hybrids are presented in Tables 3 and 4. The range of heterosis for days to flower initiation in the present study was from −10.14**% (CMS-HAP-111 × RHP-71) to 13.04% (CMS-HAP-56 × RHP-68). The heterotic effects of six hybrids were in the positive direction, while six cross combinations showed non-significant heterosis effects; all remaining cross combinations showed highly significant heterosis for days to flower initiation. Heterobeltiotic effects recorded for the 36 sunflower hybrids ranged from −20.35% (CMS-HAP-112 × RHP-41) to 3.65*% (CMS-HAP-111 × RHP-71); most heterobeltiotic effects were in the negative direction.

CMS-HAP-54 × RHP-38 showed the maximum heterotic effect in the negative direction for days taken to 100% flowering (−18.37**%), followed by CMS-HAP-56 × RHP-41 (−17.0**%) and CMS-HAP-56 × RHP-38 (−16.73**%). The hybrid CMS-HAP-111 × RHP-71 depicted the highest positive heterotic effect for this trait (13.68**%), followed by CMS-HAP-12 × RHP-71 (8.94**%). The heterotic effect was significant for all hybrids except CMS-HAP-111 × RHP-53. The range of heterobeltiosis was from −23.7**% (CMS-HAP-112 × RHP-41) to 7.26**% (CMS-HAP-111 × RHP-71). The heterobeltiotic effect was statistically highly significant for days to complete flowering for all hybrid combinations except four, viz., CMS-HAP-112 × RHP-71, CMS-HAP-12 × RHP-71, CMS-HAP-54 × RHP-71, and CMS-HAP-99 × RHP-71.

Heterosis and heterobeltiosis effects for leaf area showed that heterosis over the mid-parent ranged from 3.63ns% to −44.26**%. The highest positive heterotic effect was noted for CMS-HAP-12 × RHP-38 (3.63ns%), while the largest heterotic effect in the negative direction was recorded for the F1 hybrid CMS-HAP-56 × RHP-41 (−44.26**%). The largest heterobeltiosis effect observed in the negative direction was −48.28**% for CMS-HAP-56 × RHP-41, followed by CMS-HAP-56 × RHP-68 (−46.11**%). The heterobeltiotic effects of 29 hybrids were statistically significant.

Maximum heterosis for head diameter was observed for CMS-HAP-12 × RHP-38 (59.49**%), whereas the lowest magnitude of mid-parent heterosis was shown by CMS-HAP-112 × RHP-68 (4.65ns%) (Table 3). All hybrids exhibited positive mid-parent heterosis for this trait. Maximum heterobeltiosis was observed for CMS-HAP-12 × RHP-71 (31.71**%), while minimum heterobeltiosis was recorded for CMS-HAP-99 × RHP-69 (−6.68ns%); only six sunflower hybrids showed a negative heterobeltiotic effect for head diameter. The maximum mid-parent heterosis recorded for plant height was 31.4**% (CMS-HAP-54 × RHP-53), while the minimum of −13.92*% was observed for CMS-HAP-111 × RHP-38. As many as thirty hybrids exhibited a negative magnitude of mid-parent heterosis for plant height in the present study. The range of heterobeltiosis observed was from −35.34% (CMS-HAP-54 × RHP-68) to −5.17*% (CMS-HAP-111 × RHP-71); the heterobeltiosis results of 34 hybrids were negative with respect to the better parent.

The range of heterotic effects for stem curvature among the 36 sunflower hybrids was from 65.87**% (CMS-HAP-111 × RHP-69) to 317.24**% (CMS-HAP-54 × RHP-71); all F1 hybrid combinations expressed highly significant positive heterotic effects for this trait. Heterobeltiosis was statistically significant for 24 hybrids, and all 36 F1 hybrids showed positive heterotic effects over the better parent. Maximum heterobeltiosis was observed for CMS-HAP-99 × RHP-68 (194.68**%), while the minimum was recorded for CMS-HAP-111 × RHP-69 (10.06ns%). For number of leaves per plant, the maximum positive heterosis was recorded for CMS-HAP-111 × RHP-71 (45.58**%), followed by CMS-HAP-56 × RHP-71 (31.89**%). The largest negative heterotic effect was noted for CMS-HAP-112 × RHP-53 (−9.25ns%), followed by CMS-HAP-99 × RHP-69 (−8.66ns%). Of the 36 hybrid combinations under study, 22 expressed positive heterosis for the average number of leaves per plant. The largest heterobeltiotic effect in the negative direction was recorded for CMS-HAP-111 × RHP-53 (−20.37**%), while the maximum positive better-parent heterosis was noted for CMS-HAP-111 × RHP-71 (36.02**%), followed by CMS-HAP-56 × RHP-71 (24.29**%).

Among all the hybrids tested, the 100-seed weight results of 25 hybrids were statistically significant (Table 4). The maximum heterotic effect noted for this trait was 57.72**% (CMS-HAP-56 × RHP-69), while the minimum mid-parent heterosis observed was 3.45ns% (CMS-HAP-111 × RHP-71); only two hybrid combinations expressed negative heterosis for 100-seed weight. Heterosis over the better parent for 100-seed weight ranged from −15.49*% (CMS-HAP-111 × RHP-38) to 37.18**% (CMS-HAP-56 × RHP-53); the results of 10 hybrid combinations were statistically significant, and the heterobeltiotic effects of 24 hybrids were positive (Table 4). Of the 36 hybrids tested, 35 expressed positive mid-parent heterosis for seed yield per plant. The maximum heterotic effect noted for this trait was 134.69**% (CMS-HAP-111 × RHP-41), followed by 125.18**% (CMS-HAP-12 × RHP-71), and the minimum mid-parent heterosis observed was 1.79ns% (CMS-HAP-112 × RHP-53). Maximum heterobeltiosis recorded was 74.93**% (CMS-HAP-11 × RHP-41), while minimum heterobeltiosis noted was −27.58ns% (CMS-HAP-112 × RHP-53). The heterobeltiotic effects of only nine hybrids were negative, while the remaining 27 hybrids expressed a positive gain over their better parent for seed yield per plant (Table 4).

The Line × Tester mating design can evaluate a greater number of hybrids than the diallel and partial diallel mating designs. This technique of hybrid evaluation is quite successful where hybrids must be developed from restorer and completely male-sterile lines. Results for the general combining ability (GCA) of the 12 parental lines are presented in Table 5.
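The standard Line × Tester decomposition behind Tables 5 and 6 estimates a parent's GCA as its marginal mean minus the grand mean, and a cross's SCA as what remains of the cell mean after both GCA effects are removed. A sketch with a hypothetical 3 × 2 table of cross means (the real study used a larger set of 36 F1s):

```python
import numpy as np

# Hypothetical seed-yield means (g/plant) for 3 CMS lines (rows)
# crossed with 2 restorer testers (columns) -- illustrative values only.
Y = np.array([
    [60.0, 55.0],
    [70.0, 66.0],
    [50.0, 58.0],
])

grand = Y.mean()
# GCA of a parent: its marginal mean minus the grand mean.
gca_lines = Y.mean(axis=1) - grand
gca_testers = Y.mean(axis=0) - grand
# SCA of a cross: cell mean minus grand mean and both GCA effects.
sca = Y - grand - gca_lines[:, None] - gca_testers[None, :]

print(np.round(gca_lines, 2), np.round(gca_testers, 2))
```

By construction the GCA effects within lines (and within testers) sum to zero, as do the SCA effects, which is a quick sanity check on any Line × Tester computation; significance testing of these effects requires replicated data and an ANOVA, which is omitted here.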

Perusal of the GCA estimates of all 12 parents for DFI showed that only two parents, one CMS line, CMS-HAP-12 (7.65**), and one R-line, RHP-68 (1.07**), had positive and significant GCA effects. Similarly, the same two parents had the highest positive and significant GCA effects for DFC, indicating that their hybrids are late maturing. For leaf area, the GCA estimate of CMS-HAP-12 (14.73**) was highly significant and positive among all 12 parental lines under examination, while CMS-HAP-99 showed the lowest GCA magnitude of −13.99**. The GCA effects for average leaf area of all six male lines were non-significant. The range of GCA estimates for head diameter was from 2.57** (CMS-HAP-12) to −1.17** (CMS-HAP-54), while among the male lines RHP-68 was found to be a good general combiner for head diameter, with a GCA effect of 1.02*. The best general combining ability for plant height was recorded for CMS-HAP-12 (13.22**), while the lowest GCA estimate of −10.3** was shown by CMS-HAP-111. The stem curvature GCA estimates of all 12 parents under study were statistically non-significant. GCA effects for number of leaves per plant were highly significant for two CMS lines, viz., CMS-HAP-111 (−1.94**) and CMS-HAP-12 (4.53**); RHP-71 (0.64ns) showed the maximum GCA among the tester lines. For 100-seed weight, only two parental lines, CMS-HAP-112 (0.45*) and RHP-69 (0.41*), showed good general combining ability for this important yield-related characteristic. CMS-HAP-12 exhibited the highest GCA effect for seed yield per plant among the female lines (20.43**), while no male line exhibited a significant positive GCA effect for seed yield.

Results for the specific combining ability (SCA) of the thirty-six sunflower hybrids developed from the 12 parental lines following the Line × Tester mating design for nine agro-morphological traits are presented in Table 6. The SCA effect of CMS-HAP-12 × RHP-68 (3.18**) was the highest for DFI, while the SCA estimate of −2.9** shown by CMS-HAP-112 × RHP-41 was the lowest in magnitude. The SCA estimate for days taken to flower completion was highest for CMS-HAP-12 × RHP-68 (3.60**), while the CMS-HAP-112 × RHP-68 cross combination recorded the maximum negative SCA effect for DFC, showing that this cross combination flowers earliest among the hybrids studied. Significant SCA estimates were recorded for all 36 hybrids for leaf area, with the maximum SCA effect of 20.87** observed for CMS-HAP-54 × RHP-38. Only three hybrids showed a positive and significant SCA magnitude for head diameter, with a maximum value of 2.46* (CMS-HAP-12 × RHP-38); 21 hybrid combinations showed negative SCA estimates for head diameter, indicating that the head diameters of those hybrids were less than those of their respective parents. The highest SCA magnitude for plant height was shown by CMS-HAP-112 × RHP-71 (15.6*). SCA estimates for stem curvature were positive for 34 cross combinations. The range of SCA effects for number of leaves per plant was from −3.47* (CMS-HAP-99 × RHP-41) to 3.53* (CMS-HAP-11 × RHP-53). Only one cross combination was significant for the head diameter SCA effect in the negative direction, i.e., CMS-HAP-111 × RHP-38 (−1.30**). Positive SCA effects for 100-seed weight were observed for 17 hybrids. For seed yield per plant, the SCA magnitude was positive for 19 cross combinations, and the maximum positive SCA magnitude was shown by CMS-HAP-111 × RHP-53 (3.60**), followed by CMS-HAP-112 × RHP-53 (2.93**).


3 Machine Learning Stocks That Could Be Multibaggers in the Making: March Edition – InvestorPlace

Machine learning could be a $528.1 billion market by the time 2030 rolls around, according to Statista. From there, Precedence Research says it could be worth more than $771.32 billion by 2032, all of which creates big opportunities for machine learning stocks.

Companies are flocking to the technology, which involves showing data to a machine so it can learn and even make predictions, much like a human, in areas such as facial recognition, product recommendations, financial accuracy, predictive analytics, medical diagnoses, and speech recognition, just to name a few.

Look at healthcare, for example.

According to BuiltIn.com, healthcare professionals use wearable technology to compile real-time data, which machine learning can quickly process and learn from. That's why the United States Food and Drug Administration has been working to integrate ML and AI into medical device software. Machine learning is also helping to speed up the drug discovery process, organize patient data, and even personalize treatments.

From there, the sky's the limit. "As these technologies continue to advance and mature, they are expected to have a transformative impact on various industries, shaping the way businesses operate, make decisions, and deliver value to customers," added Grand View Research.

That being said, investors may want to consider investing in some of the top machine learning stocks, including:


The last time I mentioned Nvidia (NASDAQ:NVDA), it traded at $700 a share on Feb. 22.

I noted that I strongly believed it was headed to at least $1,000, even $1,500, this year, all thanks to its dominance in artificial intelligence and machine learning with its graphics processing units.

While it's not up to $1,000 just yet, it did hit a high of $967.66. That's not a bad return in about a month. From here, though, it could easily see $1,000.

Helping, the company recently launched its most powerful chips, the Grace Blackwell 200 Superchips, which will continue to strengthen NVDA's dominance in machine learning. We also have to consider that the company's H100 GPUs have been the very backbone of cloud AI programs. Even its DRIVE platform uses machine learning to deliver autonomous vehicle navigation.

Even better, analysts at UBS just raised their price target on NVDA to $1,100. The firm noted NVDA "sits on the cusp of an entirely new wave of demand from global enterprises and Sovereigns," as noted by Business Insider.


We can also look at machine learning stocks like Palantir Technologies (NYSE:PLTR), which designs programs that rely on machine learning to make decisions.

Most recently, the company won a $178 million TITAN contract with the U.S. Army. TITAN, the Tactical Intelligence Targeting Access Node, is "the Army's next generation deep-sensing capability enabled with artificial intelligence and machine learning," as noted in a PLTR press release.

Helping, analysts at Wedbush raised their price target to $35 from $30, with an outperform rating. "With the AI Revolution now quickly heading towards the key use case and deployment stage, Palantir with its flagship AIP platform and myriad of customer boot camps is in the sweet spot to monetize a tidal wave of enterprise spend now quickly hitting the shores of the tech sector in our opinion," said the firm, as quoted by Seeking Alpha.

Earnings haven't been too shabby either. In its most recent quarter, the company beat expectations with EPS of eight cents on revenue of $608.35 million, against estimates of eight cents on revenue of $602.88 million. U.S. commercial revenue jumped 70% to $131 million, while its customer count grew by 55% to 221.


Or, if you'd rather diversify across 43 companies involved with artificial intelligence and machine learning, there's the Global X Robotics & Artificial Intelligence ETF (NASDAQ:BOTZ).

With an expense ratio of 0.69%, the BOTZ ETF invests in companies that "potentially stand to benefit from increased adoption and utilization of robotics and artificial intelligence (AI), including those involved with industrial robotics and automation, non-industrial robots, and autonomous vehicles," as noted by GlobalXETFs.com.

Some of its top holdings include Nvidia, Intuitive Surgical (NASDAQ:ISRG), ABB Ltd. (OTCMKTS:ABBNY), SMC Corp. (OTCMKTS:SMCAY), and UiPath Inc. (NYSE:PATH), to name just a few.

While the BOTZ ETF has already run from a recent low of $22.63 to a high of $31.94, there's still further upside remaining. In fact, with the AI and machine learning boom showing no clear signs of slowing, the BOTZ ETF could easily see $40 near term. Also, what's nice about the BOTZ ETF is that we can gain exposure to massive companies, like NVDA, for less than $32 a share.

On the date of publication, Ian Cooper did not hold (either directly or indirectly) any positions in the securities mentioned. The opinions expressed in this article are those of the writer, subject to the InvestorPlace.com Publishing Guidelines.

Ian Cooper, a contributor to InvestorPlace.com, has been analyzing stocks and options for web-based advisories since 1999.


Achieve DevOps maturity with BMC AMI zAdviser Enterprise and Amazon Bedrock | Amazon Web Services – AWS Blog

In software engineering, there is a direct correlation between team performance and building robust, stable applications. The data community aims to adopt the rigorous engineering principles commonly used in software development into its own practices, including systematic approaches to design, development, testing, and maintenance. This requires carefully combining applications and metrics to provide complete awareness, accuracy, and control. It means evaluating all aspects of a team's performance, with a focus on continuous improvement, and it applies just as much to the mainframe as it does to distributed and cloud environments, maybe more.

This is achieved through practices like infrastructure as code (IaC) for deployments, automated testing, application observability, and complete application lifecycle ownership. Through years of research, the DevOps Research and Assessment (DORA) team has identified four key metrics that indicate the performance of a software development team:

- Deployment frequency
- Lead time for changes
- Change failure rate
- Mean time to restore (MTTR) service

These metrics provide a quantitative way to measure the effectiveness and efficiency of DevOps practices. Although much of the focus around analysis of DevOps is on distributed and cloud technologies, the mainframe still maintains a unique and powerful position, and it can use the DORA 4 metrics to further its reputation as the engine of commerce.
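The four DORA metrics above can be computed from a deployment log with nothing but the standard library; the event records, field names, and observation window below are hypothetical stand-ins for data a zAdviser-style pipeline would aggregate from SCM commits, CI/CD events, and incident tickets:

```python
from datetime import datetime, timedelta

# Toy deployment log: when each change was committed and deployed,
# and whether the deployment caused a failure in production.
deploys = [
    {"committed": datetime(2024, 3, 1, 9), "deployed": datetime(2024, 3, 2, 9), "failed": False},
    {"committed": datetime(2024, 3, 3, 9), "deployed": datetime(2024, 3, 5, 9), "failed": True},
    {"committed": datetime(2024, 3, 6, 9), "deployed": datetime(2024, 3, 7, 9), "failed": False},
]
restores = [timedelta(hours=4)]  # restore durations for failed deploys

days_observed = 7
# Deployment frequency: deploys per unit of observed time.
deployment_frequency = len(deploys) / days_observed
# Lead time for changes: average commit-to-deploy duration.
lead_time = sum((d["deployed"] - d["committed"] for d in deploys), timedelta()) / len(deploys)
# Change failure rate: share of deploys that caused a failure.
change_failure_rate = sum(d["failed"] for d in deploys) / len(deploys)
# MTTR: average time to restore service after a failure.
mttr = sum(restores, timedelta()) / len(restores)

print(deployment_frequency, lead_time, change_failure_rate, mttr)
```

On this toy log, lead time works out to 32 hours and the change failure rate to one in three; the hard part in practice is not the arithmetic but reliably capturing the underlying events, as the next paragraphs discuss.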

This blog post discusses how BMC Software added AWS generative AI capabilities to its product BMC AMI zAdviser Enterprise. zAdviser uses Amazon Bedrock to provide summarization, analysis, and recommendations for improvement based on the DORA metrics data.

Tracking DORA 4 metrics means putting the numbers together and placing them on a dashboard. However, measuring productivity is essentially measuring the performance of individuals, which can make them feel scrutinized. This situation might necessitate a shift in organizational culture to focus on collective achievements and emphasize that automation tools enhance the developer experience.

It's also vital to avoid focusing on irrelevant metrics or excessively tracking data. The essence of DORA metrics is to distill information into a core set of key performance indicators (KPIs) for evaluation. Mean time to restore (MTTR) is often the simplest KPI to track; most organizations use tools like BMC Helix ITSM or others that record events and issue tracking.

Capturing lead time for changes and change failure rate can be more challenging, especially on mainframes. These KPIs aggregate data from code commits, log files, and automated test results, and using a Git-based SCM pulls these insights together seamlessly. Mainframe teams using BMC's Git-based DevOps platform, AMI DevX, can collect this data as easily as distributed teams can.

Amazon Bedrock is a fully managed service that offers a choice of high-performing foundation models (FMs) from leading AI companies like AI21 Labs, Anthropic, Cohere, Meta, Stability AI, and Amazon via a single API, along with a broad set of capabilities you need to build generative AI applications with security, privacy, and responsible AI.
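To make the single-API idea concrete, here is a sketch of how the request body for a Bedrock `invoke_model` call might be assembled. The model choice, the Anthropic-on-Bedrock payload shape, and the KPI field names are assumptions for illustration, not BMC's actual implementation; note that only numeric KPI values and instructions are included:

```python
import json

def build_bedrock_request(kpis: dict) -> str:
    """Build a JSON body for an Amazon Bedrock invoke_model call.

    Only numeric KPI values and a prompt are included -- no PII,
    matching the data-handling design described in this post.
    """
    return json.dumps({
        "anthropic_version": "bedrock-2023-05-31",
        "max_tokens": 512,
        "messages": [{
            "role": "user",
            "content": "Summarize these DORA metrics and recommend improvements: "
                       + json.dumps(kpis),
        }],
    })

# Hypothetical KPI snapshot for one team.
body = build_bedrock_request({
    "deploy_frequency_per_day": 0.43,
    "lead_time_hours": 32,
    "change_failure_rate": 0.33,
    "mttr_hours": 4,
})

# The body would then be sent via the AWS SDK, e.g.:
#   boto3.client("bedrock-runtime").invoke_model(modelId=model_id, body=body)
```

Keeping the payload construction in a pure function like this also makes it easy to audit and test exactly what leaves the organization's boundary.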

BMC AMI zAdviser Enterprise provides a wide range of DevOps KPIs to optimize mainframe development and enable teams to proactively identify and resolve issues. Using machine learning, AMI zAdviser monitors mainframe build, test, and deploy functions across DevOps toolchains and then offers AI-led recommendations for continuous improvement. In addition to capturing and reporting on development KPIs, zAdviser captures data on how the BMC DevX products are adopted and used, including the number of programs that were debugged, the outcomes of testing efforts that used the DevX testing tools, and many other data points. These additional data points can provide deeper insight into the development KPIs, including the DORA metrics, and may be used in future generative AI efforts with Amazon Bedrock.

The following architecture diagram shows the final implementation of zAdviser Enterprise utilizing generative AI to provide summarization, analysis, and recommendations for improvement based on the DORA metrics KPI data.

The solution workflow includes the following steps:

The following screenshot shows the LLM summarization of DORA metrics generated using Amazon Bedrock and sent as an email to the customer, with a PDF attachment that contains the DORA metrics KPI dashboard report by zAdviser.

With this solution, you don't need to worry about your data being exposed on the internet when it is sent to an AI client. The API call to Amazon Bedrock doesn't contain any personally identifiable information (PII) or any data that could identify a customer. The only data transmitted consists of numerical values in the form of the DORA metric KPIs and instructions for the generative AI's operations. Importantly, the generative AI client does not retain, learn from, or cache this data.

The zAdviser engineering team successfully implemented this feature within a short time span. The rapid progress was facilitated by zAdviser's substantial investment in AWS services and, importantly, the ease of using Amazon Bedrock via API calls. This underscores the transformative power of generative AI embodied in the Amazon Bedrock API: equipped with the industry-specific knowledge repository of zAdviser Enterprise and customized with continuously collected, organization-specific DevOps metrics, it demonstrates the potential of AI in this field.

Generative AI has the potential to lower the barrier to entry to build AI-driven organizations. Large language models (LLMs) in particular can bring tremendous value to enterprises seeking to explore and use unstructured data. Beyond chatbots, LLMs can be used in a variety of tasks, such as classification, editing, and summarization.

This post discussed the transformational impact of generative AI technology in the form of Amazon Bedrock APIs equipped with the industry-specific knowledge that BMC zAdviser possesses, tailored with organization-specific DevOps metrics collected on an ongoing basis.

Check out the BMC website to learn more and set up a demo.

Sunil Bemarkar is a Sr. Partner Solutions Architect at Amazon Web Services. He works with various independent software vendors (ISVs) and strategic customers across industries to accelerate their digital transformation journey and cloud adoption.

Vij Balakrishna is a Senior Partner Development manager at Amazon Web Services. She helps independent software vendors (ISVs) across industries to accelerate their digital transformation journey.

Spencer Hallman is the Lead Product Manager for the BMC AMI zAdviser Enterprise. Previously, he was the Product Manager for BMC AMI Strobe and BMC AMI Ops Automation for Batch Thruput. Prior to Product Management, Spencer was the Subject Matter Expert for Mainframe Performance. His diverse experience over the years has also included programming on multiple platforms and languages as well as working in the Operations Research field. He has a Master of Business Administration with a concentration in Operations Research from Temple University and a Bachelor of Science in Computer Science from the University of Vermont. He lives in Devon, PA and when he's not attending virtual meetings, enjoys walking his dogs, riding his bike and spending time with his family.

Read the original post:
Achieve DevOps maturity with BMC AMI zAdviser Enterprise and Amazon Bedrock | Amazon Web Services - AWS Blog


Reinforcement learning is the path forward for AI integration into cybersecurity – Help Net Security

AI's algorithms and machine learning can cull through immense volumes of data efficiently and in a relatively short amount of time. This is instrumental in helping network defenders sift through a never-ending supply of alerts and identify those that pose a genuine threat (instead of false positives). Reinforcement learning underpins the benefit of AI to the cybersecurity ecosystem and is closest to how humans learn: through experience and trial and error.

Unlike supervised learning, reinforcement learning focuses on how agents can learn from their own actions and feedback in an environment. The idea is that a reinforcement learning system maximizes its capabilities over time by using rewards and punishments to reinforce positive behavior and discourage negative behavior, collecting enough information to make better decisions in the future.
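The reward-and-punishment loop described above can be sketched with tabular Q-learning on a toy alert-triage task. Everything here (the states, actions, and reward values) is an illustrative assumption, not any vendor's production system:

```python
import random

random.seed(0)

# Toy environment: an alert is either benign noise or a real threat.
STATES = ["benign", "threat"]
ACTIONS = ["dismiss", "escalate"]

def reward(state: str, action: str) -> int:
    """+1 for the correct triage decision, -1 for the wrong one."""
    correct = "escalate" if state == "threat" else "dismiss"
    return 1 if action == correct else -1

# Q-table mapping (state, action) -> learned value.
Q = {(s, a): 0.0 for s in STATES for a in ACTIONS}
alpha, epsilon = 0.1, 0.2  # learning rate, exploration rate

for _ in range(5000):
    state = random.choice(STATES)
    # Epsilon-greedy: mostly exploit what was learned, sometimes explore.
    if random.random() < epsilon:
        action = random.choice(ACTIONS)
    else:
        action = max(ACTIONS, key=lambda a: Q[(state, a)])
    # One-step update (no successor state in this bandit-style toy).
    Q[(state, action)] += alpha * (reward(state, action) - Q[(state, action)])

# After training, the greedy policy escalates threats and dismisses noise.
policy = {s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in STATES}
print(policy)
```

Real deployments replace the two-state table with learned representations of traffic and alert features, but the feedback cycle is the same.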

Alert fatigue for security operations center (SOC) analysts has become a legitimate business concern for chief information security officers, who are concerned about analyst burnout and employee turnover as a result. Any solution able to handle most of the alert noise so that analysts can prioritize actual threats will be saving the organization both time and money.

AI capabilities help mitigate the threat posed by large social engineering, phishing, and spam campaigns by understanding and recognizing the kill chain of such attacks before they succeed. This is important given the security resource constraints most organizations experience, regardless of their size and budget.

More sophisticated dynamic attacks are a bigger challenge and, depending on the threat actor, may only be used a limited number of times before the attackers adjust or alter part of the attack sequence. Here is where reinforcement learning can study attack cycles and identify applicable patterns from previous attacks that have both failed and succeeded. The more it is exposed to sophisticated attacks and their varied iterations, the better positioned reinforcement learning is to identify them in real time.

Granted, there will be a learning curve at the onset, especially if attackers frequently change how they pull off their attacks. But some part of the attack chain will remain, becoming a pertinent data point to drive the process.

Detection is only one part of monitoring threats. AI reinforcement learning may have applicability in prediction to prevent attacks as well, learning from past experiences and low signals and using patterns to predict what might happen next time.

Preventing cyber threats is a natural advancement from passive detection and a necessary progression toward making cybersecurity proactive rather than reactive. Reinforcement learning can enhance a cybersecurity product's capability to make the best decision based on the threat. This will not only streamline responses, but also maximize available resources via optimal allocation, coordination with other cybersecurity systems in the environment, and countermeasure deployment. The continuous feedback and reward-punishment cycle will make prevention increasingly robust and effective the longer it is utilized.

One use case of reinforcement learning is network monitoring, where an agent can detect network intrusions by observing traffic patterns and applying lessons learned to raise an alert. Reinforcement learning can take it one step further by executing countermeasures: blocking or redirecting the traffic. This can be especially effective against botnets where reinforcement learning can study communication patterns and devices in the network and disrupt them based on the best course of response action.

AI reinforcement learning can also be applied in a virtual sandbox environment, where it can analyze how malware operates, which can aid vulnerability management and patch management cycles.

One immediate concern is the number of devices continually being added to networks, creating more endpoints to protect. This situation is exacerbated by remote work and by personal devices being allowed in professional environments. The constant addition of devices will make it increasingly difficult for machine learning to account for all potential entry points for attack. While a zero-trust approach alone could bring intractable challenges, synergizing it with AI reinforcement learning can achieve strong and flexible IT security.

Another challenge will be access to enough data to detect patterns and enact countermeasures. In the beginning, there may be an insufficient amount of available data to consume and process, which may skew learning cycles or even provide flawed courses of defensive action.

This could have ramifications when addressing adversaries that are purposefully manipulating data to trick learning cycles and impact the ground truth of the information at the onset. This must be considered as more AI reinforcement learning algorithms are integrated into cybersecurity technologies. Threat actors are nothing if not innovative and willing to think outside the box.

Contributing author: Emilio Iasiello, Global Cyber Threat Intelligence Manager, Dentons

Read more from the original source:
Reinforcement learning is the path forward for AI integration into cybersecurity - Help Net Security


Ask AT&T: Revolutionizing Efficiency and Creativity with AI – AT&T Newsroom

Participants showcased their talent in machine learning, code generation, and problem-solving, guided by Ask AT&T. The team used Ask AT&T to research industry trends, draft business plans, conduct SWOT analysis, and design PowerPoint templates.

By the end of the competition, Ask AT&T had emerged as an indispensable tool for everyday work. Although AI tools like Ask AT&T still have room for improvement, participants recognized their immense potential. As AI continues to develop, it will revolutionize our work processes, increasing efficiency and allowing more time for complex tasks. This aligns with our focus on improving internal processes at AT&T.

The TDP's AI Learning & Problem-Solving Challenge was an inclusive event, involving around 700 employees from the corporate systems organization. The competition comprised 16 teams and over 70 participants, from new hires to veterans.

The most innovative teams proposed diverse learning and training tools. Several leaders evaluated the final four contenders, with PLEdge of Progress emerging as the winners. Some of the winning solutions are in the backlog for development.

Participants expressed that AI tools like Ask AT&T, when used effectively, can significantly enhance efficiency and productivity.

Follow this link:
Ask AT&T: Revolutionizing Efficiency and Creativity with AI - AT&T Newsroom


Ethical AI: Tackling Bias And Ensuring Fairness In Machine Learning Algorithms – Dataconomy

One of the most recognizable trends of the early 21st century has been the spread of AI (artificial intelligence) across many professional areas. AI's data analysis, pattern recognition, and decision-making capabilities have produced remarkable efficiencies and insights. However, as these artificial intelligence systems, including machine learning algorithms, penetrate our daily lives, ethical concerns have come to dominate the discussion. 2024 marks a significant year in the journey toward addressing these issues, promoting equity in AI systems, and preventing them from perpetuating or worsening societal disparities.

The term bias in AI refers to systematic discrimination against, or advantage afforded to, some individuals or groups over others. It can manifest in different ways, including racial, gender, socio-economic, and age biases, among others. Such prejudices usually derive from the data used to train machine learning models: if the training data does not represent the world's varied population, or contains historical biases, the resulting AI systems are likely to absorb those partialities, producing unfair and disproportionate outputs. Many online AI tutorials and data science courses demonstrate in practice how this bias arises in machine learning algorithms.

Justice is the reason to create artificial intelligence systems that are fair. These technologies play a growing role in critical fields such as health care, law enforcement, employment, and financial services, where the effects of biased decisions can be life-changing for individuals. Guaranteeing fairness in AI has more than one aim: it's about making systems that mirror our shared values and promote a more equitable way of life.

One of the leading tactics for fighting bias in artificial intelligence is to ensure that the data sets used to train machine learning models are diverse and representative of the global population. This means demographic diversity, but also a range of experiences, perspectives, and environments. Efforts to audit datasets and cleanse them of historical biases are important too.

Transparency means that humans can understand and investigate how an AI system was created and how it operates. This is closely related to the idea of explainable AI, in which models are built to provide reasons for their decisions in language understandable to human beings. Stakeholders can then grasp how and why particular choices were made, helping to identify and mitigate biases.

It is important to continuously check AI systems for bias. Such checks include both pre-deployment and post-deployment processes that ensure continued fairness as systems encounter new data and scenarios.

Ensuring AI fairness requires developing and implementing ethical AI frameworks and governance arrangements at both the societal and organizational levels. These frameworks are complex, and understanding how they structure fairness takes study. Establishing guidelines, principles, or standards for developing and using ethical artificial intelligence, alongside accountability mechanisms and recourse for those harmed by bad AI decisions, is fundamental in this regard.

Tackling bias in AI is a complex challenge that requires collaboration across disciplines, including computer science, social sciences, ethics, and law. Such collaboration can bring diverse perspectives and expertise to the forefront, facilitating more holistic and effective solutions.

Ethical AI is a dynamic and constantly evolving field, and it will remain important as we move forward. Advances in technology and methodology, combined with a growing public understanding of ethical considerations, are driving the movement toward more equitable AI systems. The concern is both to stop harm from occurring and to harness AI's potential for societal benefit and human well-being.

In conclusion, bias in AI and fairness issues rank top among various pressing ethical challenges facing the AI community now. In addition, diversity and ethics, continuous vigilance, transparency, accountability, and oversight of research operations involved in its development will foster not only innovative but also just outcomes for all people from different backgrounds.

Featured image credit: Steve Johnson/Unsplash

Original post:
Ethical AI: Tackling Bias And Ensuring Fairness In Machine Learning Algorithms - Dataconomy


What is a model card in machine learning and what is its purpose? – TechTarget

What is a model card in machine learning?

A model card is a type of documentation that is created for, and provided with, a machine learning model. A model card functions as a kind of data sheet, similar in principle to a consumer safety label, food nutrition label, material safety data sheet, or product spec sheet.

There has been a dramatic rise in the development and adoption of machine learning (ML) and artificial intelligence (AI) during recent years. Further advances in generative AI employ large language models (LLMs) as a core component. However, the many models used in those platforms are increasingly complex and difficult to understand. Even model developers sometimes struggle to fully understand and describe the ways a given model behaves. This complexity has created serious questions about core business values such as transparency, ethics and accountability. Common questions include the following:

First proposed by Google in 2018, the model card is a means of documenting vital elements of an ML model so users -- including AI designers, business leaders and ML end users -- can readily understand the intended use cases, characteristics, behaviors, ethical considerations, and the biases and limitations of a particular ML model.

As of 2024, there are no legislative or regulatory requirements to produce or provide model card documentation with ML models, nor are there established standards for model card format or content. However, major ML developers have spearheaded the adoption of model card documentation as a way of demonstrating responsible AI development, and adopters can find model cards for major platforms such as Meta Llama, Google face detection and OpenAI GPT-3.

The rise of ML and AI is driving the need for transparency and responsible governance. Businesses must understand what ML models are for, how they work, how they compare to other competitive models, how they're trained and their suitability for intended tasks.

Model cards are a tool that can address such concerns, which directly impact governance and regulatory issues for the business. Model cards can provide a range of important benefits to ML and AI projects, including the following:

Labels and other informational summaries are generally most effective when they allow comparing similar products side by side using comparable content and formats. However, the information presented on an ML model card can vary. Unlike highly regulated informational displays -- such as food nutritional labeling -- there are no current standards to govern the information or formatting included on ML model cards.

ML models can vary dramatically in scope, purpose and capabilities, which makes them hard to regulate. For example, an ML model developed to aid medical diagnosis is distinctly different from one created to run analytics on retail sales operations, or from a complex LLM used in an AI construct. Consequently, ML model developers largely use their own discretion to determine what information to include and how that information should be presented. Yet, as leading technology firms develop ML/AI platforms and document those offerings through model cards, some de facto documentation standards are taking shape. Model cards should include the following:

This first section of a model card is typically the introduction to the model, outlining essential details: the model's name, version and revision history; a brief general description of the model; business or developer details and contact information; and licensing details or limits.

This section describes the intended uses, use cases and users for the model. For example, a section on use cases may describe uses in object detection, facial detection or medical diagnoses. This section may also include caveats, use limitations or uses deemed out of scope. For example, a model intended for object detection may detail input from photos or video; output including detection of a specified number of object classes; and other output data such as object bounding box coordinates, knowledge graph ID, object description and confidence score.

This section describes the overall design of the model and any underlying hardware back end that runs the model and hosts related data. Readers can refer to the model card to understand the design elements or underlying technologies that make the model work. For the object detection model example, the model card may describe an architecture including a single image detector model with a Resnet 101 backbone and a feature pyramid network feature map.

This section outlines, describes or summarizes the data used in model training; where and when the data was obtained; and any statistical distribution of key factors in the data which may allow for inadvertent bias. Since training data may be proprietary to the model's developers, training details may be deliberately limited or protected by a separate confidentiality agreement. Training details may also describe training methodologies employed with the model.

This section outlines details related to the model's performance measured against a test data set, not a training data set, as well as details about the test data set itself. For the object detection model example, performance metrics included on the model card may note the use of both Google's internal image data set as well as an open source image set as test data and the number of object classes the model can detect in each data set. Additionally, performance details may outline reported metrics including the precision and accuracy of the object detection. More sophisticated models may utilize other detailed metrics to measure performance.

A key segment of any model card is the section describing limitations, possible biases or variable factors that might affect the model's performance or output. For the object detection model example, known limitations may include factors such as object size, clutter, lighting, blur, resolution and object type since the model can't recognize everything.

This final segment of a model card is often dedicated to business-related details including information about the model's developers, detailed contact, support and licensing information, fairness/privacy and usage information, suggestions for model monitoring, any relevant assessment of impacts to individuals or society, and other ethical or potential legal concerns related to the model's usage.
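The de facto section layout described above can be collected into a simple structured document. The sketch below follows those sections for a hypothetical object-detection model; every name, number, and contact detail is invented for illustration:

```python
# A minimal model card as structured data, following the section layout
# described in the article. All values are hypothetical.
model_card = {
    "model_details": {
        "name": "toy-object-detector",
        "version": "1.2.0",
        "developers": "Example Corp",
        "license": "Apache-2.0",
    },
    "intended_use": {
        "use_cases": ["object detection in photos and video"],
        "out_of_scope": ["facial recognition", "medical diagnosis"],
    },
    "architecture": "single-image detector, ResNet-101 backbone with FPN",
    "training_data": {
        "source": "proprietary image corpus",
        "notes": "details limited under a confidentiality agreement",
    },
    "performance": {
        "test_set": "open source image set (not the training set)",
        "precision": 0.87,  # illustrative numbers only
        "recall": 0.81,
    },
    "limitations": [
        "small or blurred objects",
        "cluttered scenes and poor lighting",
    ],
    "ethics_and_contact": {
        "contact": "ml-team@example.com",
        "monitoring": "re-evaluate quarterly on fresh test data",
    },
}

def render(card: dict) -> str:
    """Render the card as a plain-text data sheet, one section at a time."""
    lines = []
    for section, content in card.items():
        lines.append(section.replace("_", " ").title())
        lines.append(str(content))
    return "\n".join(lines)

print(render(model_card))
```

Real model cards are usually prose documents rather than dictionaries, but keeping the fields machine-readable makes them easy to validate and compare side by side.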

As leading technology organizations build ML and AI platforms, their work on model cards and other documentation has provided a standard for other ML firms to follow. Today there are many examples of ML model cards to review including the following major examples:

There are also more standardized tools for model card creation, as well as model card repositories, such as these examples:

Both GitHub and Hugging Face provide a repository of model cards which are available for review and study, offering model card examples across many different model types, purposes and industry segments.

Original post:
What is a model card in machine learning and what is its purpose? - TechTarget


Artificial Intelligence in Nutrition: Definition, Benefits, and Algorithms – ThomasNet News

Artificial intelligence (AI) is transforming the way we perceive and manage nutrition. There are applications for diet tracking, which offer personalized guidance and meal plans, solutions that pinpoint ingredients with specific health benefits, and tools for analyzing medical data to inform customized nutrition interventions.

These technologies serve to optimize medical outcomes, improve public health nutrition advice, encourage healthy eating, support chronic disease management, prevent health decline, aid disease prevention, and improve overall well-being.

The use of AI and machine learning (ML) in nutrition has benefits in several areas, including:

A one-size-fits-all approach to public health nutrition guidance fails to account for different dietary preferences, health goals, lifestyles, nutritional requirements, intolerances, allergies, and other health conditions.

A young and active vegan with a nut allergy, for example, has hugely different dietary needs to an elderly carnivore living with diabetes.

AI-powered technology can quickly analyze vast amounts of nutrition data and cross-reference it with an individual's measurements and requirements to produce personalized and optimal nutrition plans for all.

Clinical nutrition can be defined as a discipline that deals with the prevention, diagnosis, and management of nutritional and metabolic changes related to acute and chronic disease and conditions caused by a lack or excess of energy and nutrients.

AI has several applications in this field, from analyzing complex medical data and medical images to informing the decisions of medical practitioners and producing personalized nutrition plans for patients. Because AI solutions can identify previously overlooked associations between diet and medical outcomes, they can improve chronic disease management, optimize patient recovery, and improve patient wellbeing.

A tailored nutrition plan for a diabetic person, for example, will evaluate their gut microbiome and blood glucose levels, while a person with cardiovascular problems may require a diet that takes into consideration their cholesterol levels and blood pressure.

There has been a rise in AI-powered apps that assist users in tracking their nutritional intake while offering personalized guidance on making healthier choices.

The challenge with self-reported food diaries is that they depend on the memory and honesty of individuals, which often leads to under- and over-reporting and other inaccuracies. When certain snacks and meals are forgotten, portion sizes are miscalculated, or food choices that are perceived to be less healthy are deliberately omitted, it is more difficult for nutrition-focused apps and healthcare professionals to provide informed and effective nutritional advice.

With AI-powered computer vision technology, food tracking apps can identify food items, estimate portion sizes, and calculate nutritional values with increasing accuracy. Coupled with wearable devices, which track a users activity, this technology is empowering people to make optimal nutritional choices. Some nutrition apps offer additional personalization.

For example, they might partner with health organizations to obtain their users' electronic health records or feature a nutrition chatbot to quickly respond to queries or perform a dietary assessment.
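Once foods and portion sizes have been recognized, computing nutritional values reduces to simple arithmetic against a nutrient database. A minimal sketch of that step, assuming rough made-up per-100 g figures rather than an authoritative database:

```python
# Rough per-100 g values for illustration only (kcal, protein g, carbs g).
NUTRIENTS_PER_100G = {
    "banana": {"kcal": 89, "protein": 1.1, "carbs": 23},
    "chicken breast": {"kcal": 165, "protein": 31, "carbs": 0},
    "rice": {"kcal": 130, "protein": 2.7, "carbs": 28},
}

def meal_totals(detected: list[tuple[str, float]]) -> dict:
    """Sum nutrients for (food, grams) pairs, as a tracking app might do
    after computer vision identifies items and estimates portions."""
    totals = {"kcal": 0.0, "protein": 0.0, "carbs": 0.0}
    for food, grams in detected:
        per100 = NUTRIENTS_PER_100G[food]
        for key in totals:
            totals[key] += per100[key] * grams / 100
    return {k: round(v, 1) for k, v in totals.items()}

# Hypothetical output of a meal-scanning model: (food, estimated grams).
meal = [("chicken breast", 150), ("rice", 200), ("banana", 120)]
print(meal_totals(meal))  # {'kcal': 614.3, 'protein': 53.2, 'carbs': 83.6}
```

The hard part, as the article notes, is the recognition itself; a swapped ingredient never reaches this calculation.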

Nutraceuticals are products derived from food sources that promise additional health benefits to their basic nutritional value. Some examples include glucosamine, which is used in the treatment of arthritis; omega-3 fatty acids, which are used to treat inflammatory conditions; and many nutrient-rich foods, including soybeans, ginger, garlic, and citrus fruits.

Various nutraceutical companies have come under fire for marketing products as health solutions without meaningful scientific evidence to back their claims. But AI looks set to transform the industry's image by finding genuine health solutions fast.

The speed and accuracy with which an AI solution can identify bioactive compounds in foods and then predict the actions they will have in the body is of particular interest to nutraceutical companies. At present, it often takes several years to identify, develop, test, and launch a new ingredient.

In the future, ML solutions are likely to support the development of targeted nutraceutical solutions.

Across 48 countries, 238 million people are facing high levels of acute food insecurity. Meanwhile, one-third of the food produced for human consumption is lost or wasted, which equates to 1.3 billion tons every year.

AI is aiding the global effort to address food insecurity and reduce waste generation.

It can predict demand for certain crops to enable farmers to optimize their planting plans, detect crop and livestock disease at an early stage to contain damage and limit loss, and identify trends in consumer behavior to help retailers forecast demand and better manage their inventories. In addition, AI systems can track food from farm to plate, helping to ensure it is harvested, shipped, and consumed on time.

In the aftermath of a natural disaster or conflict, AI can quickly analyze data to inform humanitarian responses.

The challenges associated with AI in nutrition include:

To improve accuracy and efficiency, ML solutions are fed vast amounts of training data. In nutrition, such data is especially sensitive, including personal information and medical records.

Once a product, such as a food tracking app, is live, additional data is collected, as users are required to disclose personal information, including measurements, medications, food intake, and existing health conditions.

Rigorous safeguarding must be implemented to ensure that all personal data is safely collected and stored and that users understand how it is being used.

AI solutions are known to perpetuate societal stereotypes and biases. Amazon deployed a recruitment system that discriminated against women; the Correctional Offender Management Profiling for Alternative Sanctions (COMPAS) tool, used to predict the likelihood of criminals reoffending, misclassified almost twice as many Black defendants; and a Twitter algorithm was shown to favor white faces over Black faces when cropping images.

If nutrition-centered AI solutions are not carefully developed, these tools could reinforce outdated and oversimplified concepts of nutritional health and wellness or reflect biases in the healthcare system.

The use of diverse training data can prevent unfair or inaccurate outcomes, and these tools must be continuously monitored and updated to echo the latest healthcare guidance.

Meal scanning technology enables food-tracking app users to log their intake by simply snapping a photograph of their meals via their cell phone cameras. These tools are exceptionally fast and can be highly accurate, but there are some major limitations to consider.

For example, the technology will struggle to detect a basic ingredient swap in familiar recipes. When scanning a slice of cake, it would record items such as butter and eggs, even if those ingredients had been replaced with avocado and yogurt. Similarly, the app won't register when a creamy pasta sauce is replaced with a milk-based alternative.

Fortunately, these shortcomings can be addressed with some manual effort on the users part.

Nutrition is a complex and nuanced field, which will continue to benefit from the inputs and expertise of qualified healthcare professionals.

Take the management of chronic illnesses as an example. While an AI-powered app can produce highly customized dietary plans for individuals living with diabetes, celiac disease, or Crohn's disease, additional medical support and monitoring is likely to be required.

Complexities also arise when poor or unusual eating habits are linked to mental health conditions, such as eating disorders. In these scenarios, food-tracking apps are likely to cause more harm than good.

The market for personalized nutrition is fast-expanding, driven largely by rapid developments in AI.

Some exciting industry players include:

Nutrition labels are designed to prevent false advertising and promote food safety. But perhaps one of the most arduous tasks involved in launching new food products, medicines, and supplements is ensuring adherence to labeling regulations and standards, which are not only complex but can also vary enormously from country to country. Manual reviews in the food industry are repetitive, slow, and prone to human error, which, at best, results in delayed product launches and, at worst, poses a threat to human health.

Verifying the accuracy and compliance of labels is made easy with AI algorithms. Manufacturers simply upload their recipes and packaging design to an AI-powered tool, which analyzes the ingredients and identifies any issues. This drives operational efficiencies, reduces product waste, ensures customer safety, and enables more cost-effective international trade.
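At its core, this kind of label checking is rule matching over a recipe. The sketch below uses a simplified, invented allergen rule as a stand-in for real, jurisdiction-specific regulations, which are far more complex:

```python
# Simplified stand-in for jurisdiction-specific labeling rules: these
# allergens must be declared on the label if present in the recipe.
ALLERGENS_REQUIRING_DECLARATION = {"milk", "eggs", "peanuts", "soy", "wheat"}

def check_label(ingredients: list[str], declared_allergens: set[str]) -> list[str]:
    """Flag allergens present in the recipe but missing from the label."""
    issues = []
    for ingredient in ingredients:
        if (ingredient in ALLERGENS_REQUIRING_DECLARATION
                and ingredient not in declared_allergens):
            issues.append(f"allergen '{ingredient}' is not declared on the label")
    return issues

# Hypothetical uploaded recipe and the allergens its label declares.
recipe = ["flour", "milk", "eggs", "sugar", "soy"]
label = {"milk", "eggs"}
print(check_label(recipe, label))
```

A production system would also resolve compound ingredients (flour implies wheat), handle synonyms, and swap in the correct rule set per target country, which is where the AI-driven analysis earns its keep.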

An increasing number of companies are using smart labels to provide consumers with additional nutritional information. This enables people to make more informed decisions about the food and supplements they consume while ensuring food safety.

The growing demand for personalized nutrition has led to the rapid adoption of fitness-tracking apps like MyFitnessPal and MyPlate. Indeed, almost two-thirds of American adults are mobile health app users, according to a 2023 survey.

Amid widespread criticism that these apps are promoting unhealthy diets, extreme exercise regimes, and rapid weight loss, users must understand the technology's limitations.

Here are some important things to consider:

AI-powered apps are more likely to be beneficial when healthcare professionals, including doctors and dietitians, work closely with their patients and clients to recommend appropriate products and monitor usage.

The applications of AI in nutrition are far-reaching, enabling personalized diet plans, enhanced clinical nutrition, the development of targeted nutraceuticals, and more effective methods for addressing food insecurity.

As adoption increases, these solutions will require increasingly robust regulation, particularly in relation to data handling and security, algorithm bias, and consumer education.

With the revenue from health apps forecast to grow to $35.7 billion by 2030, healthcare professionals must be aware of the information that is being communicated to consumers so they can guide their patients toward truly health-promoting options.

As for the developers of AI-powered nutrition technology, inputs from experts in diverse fields, including healthcare, nutrition, technology, and ethics, will ensure solutions are safe and effective.

Read the original:
Artificial Intelligence in Nutrition: Definition, Benefits, and Algorithms - ThomasNet News


The impact of AI and machine learning technology in revolutionizing manufacturing practices – Asia Business Outlook

In an exclusive interview with Asia Business Outlook, BN Shukla, Operations Director at Jabil, India, shares his views on the challenges of interpreting and understanding AI decision-making processes, strategies to optimize the implementation of AI and ML, robust measures to protect sensitive production data, how manufacturers ensure compliance with industry standards and regulations, and more. BN Shukla has a career spanning more than 28 years, with a specialization in operational excellence and business management.

Considering the complexity of AI and machine learning algorithms, especially within critical manufacturing processes, how can manufacturers navigate the challenge of interpreting and understanding the decision-making processes of these systems?

Some of the common challenges in increasing the adoption of digital technology in manufacturing include:

Capital investment: Costs vary widely, from IoT sensors retrofitted onto existing machines, to large machinery purchased with integrated machine learning solutions, to enterprise-wide infrastructure adaptations, particularly in large-scale projects.

Effective change management: AI/ML is changing the way we do things as we merge the physical and digital together. Strategies must be accompanied by a support structure for employees, empowering them with the right tools and skills, thus creating a culture ripe for a successful transition.

Technical skill gaps: Fuelled by digitalization, the roles and expectations of the workforce on and off the shop floor are evolving. Talent with digital dexterity that is ready to adapt, innovate in manufacturing processes, and adopt the digital tools that support those processes will successfully implement new technology and maintain operations.

Data growth, sensitivity, and security: The physical and digital systems in smart factories make real-time interoperability possible. While large volumes of data are generated, challenges remain in data quality and management for decision-making, looming concerns over data and IP privacy, ownership, governance, and an increased risk of an expanded attack surface as numerous machines and devices are connected to networks.

To ensure the quality and reliability of the data used to train and optimize these systems, we have put in place several guard rails:

Datafication: We progress from digitization to digitalization to datafication, where we investigate business processes and transform them into quantifiable data to track, monitor, and analyze. To do this effectively, we have set up an enterprise-wide Data & AI council, which involves senior members from all functions to help identify key processes critical to the business and have the process owners work on critical data definitions, data lineage, and data sources. Although this is not technology-specific, it sets a good foundation for the organization moving forward. Teams across Jabil are learning how to use data effectively, enabling "data to speak, data to act."

AI/ML: Collected data that is not used effectively is a waste of resources. We leverage AI/ML/deep learning to extract value from the swarm of data we collect each day from our factories and work processes, helping to deliver business insights, automate tasks, and advance system capabilities. Our AI/ML strategy ranges from using AI algorithms to improve our inspection process in the factories to using advanced data analytics to derive algorithms or new business models, giving us new insights and intelligence for our business. We're also developing our knowledge database and combining it with Generative AI technology to merge the insights from our self-healing manufacturing line (ready in April 2024) with the know-how of our workforce, continuously training our AI models and guiding our technicians to take action.
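As a hedged illustration of extracting value from factory data, a minimal anomaly detector can flag sensor readings that deviate sharply from the norm. Production systems would use far richer models (deep learning, as described above); the z-score rule and the temperature series here are invented for the example:

```python
# Minimal sketch: flag sensor readings whose z-score exceeds a
# threshold. Illustrative only; the data is made up.
from statistics import mean, stdev

def flag_anomalies(readings, threshold=3.0):
    """Return indices of readings more than `threshold` standard
    deviations from the mean."""
    mu, sigma = mean(readings), stdev(readings)
    return [i for i, x in enumerate(readings)
            if sigma and abs(x - mu) / sigma > threshold]

# Hypothetical machine-temperature trace with one spike:
temps = [70.1, 70.3, 69.8, 70.0, 95.5, 70.2, 69.9, 70.1]
print(flag_anomalies(temps, threshold=2.0))  # flags index 4 (the 95.5 spike)
```

Even this trivial rule turns a raw data stream into an actionable signal, which is the pattern the richer AI/ML pipeline scales up.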

SAP S4: When Jabil migrated from SAP ECC to SAP S4 Hana in January 2022, we were able to offload the technical debt that came from 20-plus years of over-customization and subsystems that were peripheral to the legacy SAP system. The Hana database also brought about greater speed in data processing and a simplified data structure. Nevertheless, the benefits of SAP S4 migration should not be just about solving technical issues but about bringing new value for the users through new ways of report creation, enhanced user experience, increased productivity, and the ability to leverage new functionality to transform the processes. We are still in the continuous improvement process to better leverage these functionalities and are bringing the users along in the transformation journey.

Automation: Process automation through robotic process automation (RPA) tools has helped us automate many back-office and repetitive tasks across various functions. Many of the functional teams that use the RPA bots also try to humanize them and treat them as part of the (digital) workforce, measuring the bots' performance to ensure we obtain the maximum ROI.
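Measuring a bot's performance against its cost can be as simple as a back-of-envelope ROI calculation. The `bot_roi` helper and all figures below are made-up illustrations, not Jabil's actual numbers:

```python
# Hypothetical ROI calculation for an RPA bot; all figures are
# illustrative assumptions.

def bot_roi(runs_per_month, minutes_saved_per_run,
            hourly_labor_cost, monthly_bot_cost):
    """Monthly ROI = (labor cost avoided - bot cost) / bot cost."""
    savings = runs_per_month * minutes_saved_per_run / 60 * hourly_labor_cost
    return (savings - monthly_bot_cost) / monthly_bot_cost

# A bot that runs 400 times a month, saving 15 minutes per run,
# against $30/hour labor and a $1,000/month bot cost:
print(round(bot_roi(400, 15, 30, 1000), 2))  # 2.0, i.e. 200% monthly ROI
```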

To navigate the challenge of interpreting the complexity in the decision-making process, it is crucial that organizations first have a clear strategy for how they plan to leverage AI/ML in the company. At Jabil, we took a customer-first approach and deliberated from the onset: how we leverage AI/ML is about solving a business problem or providing deep insights to realize a step-change improvement in safety, quality, delivery, and cost.

In interpreting and understanding such systems, communication and engagement with stakeholders are critical to ensure we are working on what matters most to the business.

In light of the challenge of ensuring scalability and adaptability across diverse manufacturing operations, what strategies can manufacturers employ to optimize the implementation of AI and machine learning solutions?

Once we knew what our North Star looked like, we were able to clearly break down obstacles within the People, Process, and Systems categories and develop solutions to take us to the next level. Some of these were:

Focus on people at the heart of transformations by taking an employee-first digitalization approach. Many people are familiar with the saying, "AI will not replace people, but the people who can use it will." With that in mind, the human factor is a major lever for transitioning and tapping into the opportunities that come with AI/ML.

Our industry-certified internal courses, in partnership with industry experts and local universities, have allowed us to grow our pool of subject matter experts by ensuring that technological know-how is retained and expanded through customized application-based upskilling. Additionally, as engineers and technicians take business-related modules, they promote diversity in the workplace in the form of business differentiation and innovative decision-making.

Enhance industry ecosystem through public-private initiatives: In many of our locations, we partner with leading equipment providers and government agencies to build a strong manufacturing ecosystem. We must continue to actively partner with academia to create the next generation of talented professionals.

Amidst concerns regarding workforce reskilling and upskilling, how can manufacturers effectively foster collaboration between human workers and intelligent machines in manufacturing processes?

There are seismic shifts underway with the convergence of technologies across operations, information technology (IT), and supply chains, creating a data-driven environment that enables us to deliver the future of Jabil's manufacturing.

We need to embrace a work environment that is expected to blend advanced technology and digital skills with uniquely human skills to yield the highest level of productivity. The rise of advanced technology can replace the manual or repetitive tasks many jobs entail. This frees up space for skills that are uniquely and essentially human, or so-called soft skills, including critical thinking, people management, creativity, and effective communication. Companies need workers who can exhibit these skills, as well as digital skills, to work alongside robots and technologies.

The broader aim of digital transformation is not just to eliminate tasks and cut costs but to create value, safer workplaces, and meaningful work for people. Industry leaders need to put humans in the loop when preparing their workforce through rethinking work structure, retraining and reskilling talents, and structuring the organization to leverage technology and transform its business.

There's no one-size-fits-all answer to this, other than to say that digitalization, digital transformation, and the digital readiness of one's operations and workforce are imperative to remaining relevant and competitive in the long term.

We believe that the best way to make our organization more data-centric and digital is to invest in those who are adaptable, curious, and flexible. We look to our existing and future talents with the logic that digital transformation is changing everyones role, from the factory floor to our executives.

This marriage of multi-generational talent not only propels the industry forward but also heralds change within the industry itself, making manufacturing a destination for innovative jobs and a continued change-maker in India's socio-economic landscape.

Given the cybersecurity risks associated with the adoption of AI and machine learning technologies in manufacturing, how can manufacturers implement robust measures to protect sensitive production data and mitigate potential cyber threats and data breaches?

At Jabil, we take cybersecurity seriously. Through our industry expertise and enterprise education and awareness efforts, we are building a data protection culture at Jabil, where our employees are empowered and our customers and partners are confident in our ability to conduct business safely in today's evolving digital world.

Jabil delivers a three-pronged risk management methodology as part of our Defense-in-Depth approach.

This layered system provides several levels of protection for data, does not rely on any single tool or policy, and enables redundancy in our systems and processes.

We manage digital security guided by the National Institute of Standards and Technology (NIST) Cybersecurity Framework (CSF), coupled with best-in-class solutions for data protection, threat detection, and continuous monitoring within Jabil's Security Operations Center (SOC).

Backed by policies and procedures to ensure enterprise resiliency, we provide robust and holistic risk-based guidance and high-quality shared cybersecurity services and solutions. To ensure that our capabilities are always relevant and updated, we use a leading third-party assessor to rate our security program as a whole and conduct penetration tests (PenTests) to monitor compliance with policy and identify gaps for remediation.

We have also built an effective security program that centers around technical controls and empowering our people to be our best line of defense by equipping them with top-tier cybersecurity education and awareness programs that provide them with the information they need to stay safe online in their personal and professional lives.

A great example is educating our employees before they click, and providing best practices and tools to analyze site URLs after an individual clicks. We ensure our readiness to act if cyberattacks breach our defenses through continuous improvement programs such as tabletop exercises, a ransomware playbook, and incident response.

See the rest here:
The impact of AI and machine learning technology in revolutionizing manufacturing practices - Asia Business Outlook
