Category Archives: Machine Learning

Optimization of wear parameters for ECAP-processed ZK30 alloy using response surface and machine learning … – Nature.com

Experimental results

Microstructure evolution

The inverse pole figure (IPF) coloring maps and associated band contrast (BC) maps of the ZK30 alloy in the AA and ECAPed conditions are shown in Fig. 2. High-angle grain boundaries (HAGBs) are colored black, while low-angle grain boundaries (LAGBs) are colored white for the AA condition and red for the 1P and 4Bc conditions, as shown in Fig. 2. The grain size and misorientation angle distributions of the AA and ECAPed ZK30 samples are shown in Fig. 3. From Fig. 2a it is clear that the AA condition exhibited a bimodal structure in which almost equiaxed refined grains coexist with coarse grains; the grain size ranged from 3.4 to 76.7 μm (Fig. 3a) with an average of 26.69 μm. The AA condition also showed a low fraction of LAGBs, as depicted in Fig. 3b. Accordingly, the GB map (Fig. 2b) showed minimal LAGBs owing to the recrystallization that occurred during annealing.

ECAP processing through 1P produced elongated grains alongside refined grains, with grain sizes ranging from 1.13 to 38.1 μm and an average of 3.24 μm, indicating that 1P resulted in partial recrystallization (Fig. 2c,d). As indicated in Fig. 2b, 1P processing refined the average grain size by 87.8% compared with the AA condition. In addition, Fig. 2b shows that ECAP processing via 1P produced a significant increase in the grain aspect ratio due to the incomplete recrystallization. In terms of the LAGB distribution, the GB maps of the 1P condition revealed a significant increase in the LAGB fraction (Fig. 2d): the LAGB density increased by 225% after 1P compared with the AA sample (Fig. 2c). Accordingly, the UFG structure resulting from ECAP processing through 1P increased the fraction of LAGBs, in agreement with previous studies35,36. Shana et al.35 reported that during the early ECAP passes dislocations are generated and multiplied and then become entangled, forming LAGBs; hence the LAGB density increases after processing through 1P.

Accumulating the plastic strain up to 4Bc produced an almost fully UFG structure, indicating that 4Bc led to complete dynamic recrystallization (DRX) (Fig. 2e). The grain size ranged from 0.23 to 11.7 μm with an average of 1.94 μm (a 92.7% decrease relative to the AA condition). On the other hand, 4Bc showed a 25.4% decrease in LAGB density compared with the 1P condition due to dynamic recovery. This decrease in LAGB density after 4Bc was coupled with a 4.4% increase in HAGBs compared with the 1P condition (Figs. 2f, 3b). Accordingly, the rise in HAGBs after multiple passes can be attributed to the transformation of LAGBs into HAGBs during the DRX process.

IPF coloring maps and their corresponding BC maps, superimposed, for the ZK30 billets in the AA condition (a,b) and after ECAP processing through (c,d) 1P and (e,f) 4Bc (HAGBs shown as black lines; LAGBs as white lines for AA and red lines for 1P and 4Bc).

Relative frequency of (a) grain size and (b) misorientation angle of all ZK30 samples.

Similar findings were reported in previous studies. Dumitru et al.36 reported that ECAP processing resulted in the accumulation and re-arrangement of dislocations, forming subgrains and equiaxed grains with a UFG structure, and that a fully homogeneous, equiaxed grain structure for the ZK30 alloy was attained after the third pass. Furthermore, they reported that LAGBs are transformed into HAGBs during the multiple passes, which reduces the LAGB density. Figueiredo et al.37 reported that the grains evolve during the early ECAP passes into a bimodal structure, while further processing passes achieve a homogeneous UFG structure. Zhou et al.38 reported that increasing the number of processing passes generates new grain boundaries, which increases the misorientation to accommodate the deformation; the geometrically necessary dislocations (GNDs) constitute a part of the total dislocations associated with HAGBs and thus develop misorientations between neighboring grains. Tong et al.39 reported that the fraction of LAGBs decreases during multiple passes for a Mg–Zn–Ca alloy.

Figure 4a displays the X-ray diffraction (XRD) patterns of the AA ZK30 alloy and the 1P and 4Bc processed samples, revealing peaks corresponding to the primary α-Mg phase and the Mg7Zn3 and MgZn2 phases in all conditions, with no diffraction peaks corresponding to oxide inclusions. Following 1P ECAP, the α-Mg peak intensity exhibits an initial increase, followed by a decrease and fluctuations, signaling texture alterations along the Bc route. The identification of the MgZn2 phase is supported by the equilibrium Mg–Zn binary phase diagram40. However, the weakened peak intensity detected for the MgZn2 phase after the 4Bc ECAP process indicates that a significant portion of the MgZn2 dissolved into the Mg matrix, attributable to its poor thermal stability. Furthermore, the atomic ratio of Mg/Zn for the second phase is approximately 2.33, leading to the deduction that it is the Mg7Zn3 compound. This finding aligns with recent research on Mg–Zn alloys41. Additionally, the diffraction patterns of the ECAP-processed samples exhibit peak broadening and shifting, indicative of microstructural adjustments during plastic deformation. These alterations were analyzed for crystallite size and micro-strain using the modified Williamson–Hall (W–H) method42, as illustrated in Fig. 4b. After a single ECAP pass, there is a reduction in crystallite size and an increase in induced micro-strain. After four passes of route Bc, further reductions in crystallite size and a higher micro-strain (36 nm and 1.94 × 10⁻³, respectively) are observed. Divergent shearing patterns among the processing routes, stemming from differences in sample rotation, result in distinct evolutions of the subgrain boundaries. Route Bc, characterized by the most extensive angular range of slip, generates subgrain bands on two shearing directions, expediting the transition of subgrain boundaries into high-angle grain boundaries43,44. Consequently, the dislocation density and induced micro-strain reach their maximum in route Bc, potentially influenced by texture modifications linked to orientation differences between processing routes. Hence, as the number of ECAP passes increases, a higher level of deformation is observed, promoting dynamic recrystallization and grain refinement, particularly after four passes. This enhanced deformation effectively impedes grain growth. Consequently, the number of ECAP passes is intricately linked to the equivalent strain, inducing grain-boundary pinning and resulting in finer grains. The grain refinement process can be conceptualized as a repetitive sequence of dynamic recovery and recrystallization in each pass. In the 4Bc ECAP condition, dynamic recrystallization dominates, leading to a highly uniform grain reduction and causing the grain boundaries to become less distinct45. Figure 4b indicates that the microstructural features vary with ECAP processing route, aligning well with the grain size and mechanical properties.

(a) XRD patterns for the AA ZK30 alloy and after 1P and 4Bc ECAP processing; (b) variations of crystallite size and lattice strain as a function of processing condition using the Williamson–Hall method.
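For readers who wish to reproduce this kind of analysis, the classical Williamson–Hall construction can be sketched in a few lines: the instrument-corrected peak breadth β is plotted as β·cosθ against 4·sinθ, and a straight-line fit gives the crystallite size from the intercept and the micro-strain from the slope. The snippet below uses made-up peak data for illustration only and is a plain (not modified) W–H fit, not the authors' exact procedure.

```python
import numpy as np

# Hypothetical, instrument-corrected peak data (illustration only):
# 2-theta positions (degrees) and integral breadths beta (radians).
two_theta_deg = np.array([32.2, 34.4, 36.6, 47.8, 57.4, 63.1])
beta_rad = np.array([0.0042, 0.0045, 0.0047, 0.0055, 0.0062, 0.0068])

wavelength = 0.15406  # nm, Cu K-alpha (assumed)
theta = np.radians(two_theta_deg / 2.0)

# Classical Williamson-Hall: beta*cos(theta) = K*lambda/D + 4*eps*sin(theta)
y = beta_rad * np.cos(theta)
x = 4.0 * np.sin(theta)

# Least-squares straight line: slope -> micro-strain, intercept -> size term
slope, intercept = np.polyfit(x, y, 1)

K = 0.9  # Scherrer shape factor (assumed)
crystallite_size_nm = K * wavelength / intercept
micro_strain = slope

print(f"crystallite size ~ {crystallite_size_nm:.1f} nm")
print(f"micro-strain     ~ {micro_strain:.2e}")
```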

Figure 5 shows the volume loss (VL) and average coefficient of friction (COF) for the AA and ECAPed ZK30 alloy. The AA billets exhibited the highest VL at all wear parameters compared with the ECAPed billets, as shown in Fig. 5. Figure 5a reveals that performing the wear test at an applied load of 1 N produced a higher VL than the other applied loads; increasing the applied load to 3 N gave a lower VL than the 1 N counterpart at all wear speeds, and a further increase to 5 N produced a notable decrease in the VL. Similar behavior was obtained for the ECAP-processed billets through 1P (Fig. 5c) and 4Bc (Fig. 5e). The VL improved with increasing applied load for all samples, as shown in Fig. 5, indicating an enhancement in wear resistance. Increasing the applied load increases the strain hardening of the contacting ZK30 surfaces, as reported by Yasmin et al.46 and Kori et al.47. Accordingly, increasing the applied load increases the friction force, which in turn hinders dislocation motion and produces higher deformation, so that ZK30 experiences strain hardening; hence the resistance to abrasion increases, improving the wear resistance48. Furthermore, increasing the applied load increases the surface area in contact with the wear ball and hence the gripping action of asperities, which helps to reduce the wear rate of the ZK30 alloy, as reported by Thuong et al.48. On the contrary, increasing the wear speed increased the VL of the AA billets at all wear loads. For the ECAPed billet processed through 1P, the wear speed of 125 mm/s gave the lowest VL while the wear speed of 250 mm/s gave the highest VL (Fig. 5c), and similar behaviour was recorded for the 4Bc condition. In addition, Fig. 5c shows that the 1P condition exhibited a higher VL than 4Bc (Fig. 5e) at all wear parameters, indicating that processing via multiple passes results in significant grain refinement (Fig. 2); hence, higher hardness and better wear behavior were attained, in agreement with a previous study7. It is also clear from Fig. 5 that increasing the wear speed increased the VL. For the AA billets tested at a 1 N load, the VL was 1.52 × 10⁻⁶ m³. ECAP processing via 1P significantly improved the wear behavior, as the VL was reduced by 85% compared with the AA condition, while straining through 4Bc improved the VL by 99.8% compared with the AA condition, which is accounted for by the considerable refinement that 4Bc provides. A similar trend was observed for the ECAPed ZK30 samples tested at loads of 3 and 5 N (Fig. 5). Accordingly, the significant grain refinement after ECAP processing (Fig. 2) increased the grain boundary area; hence, a thicker protective oxide layer can form, improving the wear resistance of the ECAPed samples. It is worth mentioning here that the grain refinement, coupled with the refinement and redistribution of secondary-phase particles resulting from ECAP processing through multiple passes, improves the hardness, wear behavior and mechanical properties according to the Hall–Petch equation7,13,49. Similar findings were noted for the ZK30 billets tested at a 3 N load, where processing through 1P and 4Bc decreased the VL by 85% and 99.85%, respectively, compared with the AA counterpart, and for the ZK30 billets tested at a 5 N load.

Volume loss of ZK30 alloy (a,c,e) and the average coefficient of friction (b,d,f) in its (a,b) AA, (c,d) 1P and (e,f) 4Bc conditions as a function of different wear parameters.

From Fig. 5, it can be noticed that the COF curves exhibited notable fluctuations even after least-squares smoothing of the data, confirming that friction during the testing of the ECAPed ZK30 alloy was not steady over time. The remarkable variation in the COF can be attributed to the small loads applied to the surface of the ZK30 samples. Furthermore, the results in Fig. 5 show that ECAP processing reduced the COF, and hence better wear behavior was attained. For all ZK30 samples, the highest applied load (5 N) coupled with the lowest wear time (110 s) exhibited the best COF and the best wear behavior. These findings agree with Farhat et al.50, who reported that decreasing the grain size improves the COF and hence the wear behavior. They also reported that plastic deformation occurs due to friction between the contacting surfaces and is resisted by the grain boundaries and fine secondary phases. In addition, the strain hardening resulting from ECAP processing decreases the COF and improves the VL50. Sankuru et al.43 reported that ECAP processing of pure Mg resulted in substantial grain refinement, which was reflected in improvements in both the microhardness and the wear rate of the ECAPed billets. Furthermore, they found that increasing the number of passes up to 4Bc reduced the wear rate by 50% compared with the AA condition. Based on the applied load, wear velocity and distance, the wear mechanism can be classified into mild and severe wear regimes49. The wear test parameters in the present study (loads up to 5 N and speeds up to 250 mm/s) fall in the mild wear regime, where the delamination wear and oxidation wear mechanisms would predominantly take place43,51.
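The least-squares smoothing mentioned above can be reproduced, for illustration, with a local polynomial (Savitzky–Golay) filter, which fits a low-order polynomial to a sliding window of the raw COF signal by least squares. The sketch below runs on synthetic data and is not the authors' exact processing; the window length and polynomial order are arbitrary choices.

```python
import numpy as np
from scipy.signal import savgol_filter

# Synthetic, noisy COF trace standing in for raw tribometer output (illustrative only)
t = np.linspace(0, 110, 1100)  # wear time, s
rng = np.random.default_rng(0)
cof_raw = 0.35 + 0.05 * np.sin(0.3 * t) + 0.03 * rng.standard_normal(t.size)

# Savitzky-Golay: local least-squares fit of a 2nd-order polynomial over a 51-point window
cof_smooth = savgol_filter(cof_raw, window_length=51, polyorder=2)

print("mean COF (raw)     :", cof_raw.mean().round(3))
print("mean COF (smoothed):", cof_smooth.mean().round(3))
```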

The worn surface morphologies of the ZK30 AA billet and the ECAPed billet processed through 4Bc are shown in Fig. 6. Figure 6 reveals that scores of wear grooves aligned parallel to the sliding direction have developed on the worn surface in both the AA (Fig. 6a) and 4Bc (Fig. 6b) conditions. Accordingly, the worn surface comprises a combination of adhesion regions and plastic deformation bands along the wear direction. Furthermore, wear debris can be observed adhering to the ZK30 worn surface, indicating that an abrasion wear mechanism had occurred52. Lim et al.53 reported that hard particles between contacting surfaces scratch the samples and remove small fragments, and hence the wear process occurs. In addition, Fig. 6a,b shows that the wear grooves on the AA billet were much wider than those on the 4Bc sample, which confirms the effectiveness of ECAP processing in improving the wear behavior of the ZK30 alloy. Based on the aforementioned findings, it can be concluded that the ECAP-processed billets exhibited enhanced wear behavior, which can be attributed to the obtained UFG structure52.

SEM micrographs of the worn surface after the wear test: (a) AA alloy; (b) ECAP-processed through 4Bc.

Several regression transformation approaches and associations among the independent variables were investigated in order to model the wear output responses. The association between the supplied parameters and the resulting responses was modeled using quadratic regression. The models created in the course of the experiment are considered statistically significant and can be used to forecast the response parameters in relation to the input control parameters when the coefficient of determination of prediction (R2) is as close as possible to 1. The regression Eqs. (9)–(14) represent the predicted non-linear models of volume loss (VL) and coefficient of friction (COF) at the different numbers of passes as a function of velocity (V) and applied load (P), with their associated determination and adjusted coefficients. The adjusted R2 and correlation coefficient R2 values of the current study ranged between 95.67 and 99.97%, which is extremely close to unity.

$$\text{AA}\;\left\{\begin{aligned}
VL ={}& 1.52067\times 10^{-6} - 1.89340\times 10^{-9}\,P - 4.81212\times 10^{-11}\,V + 8.37361\times 10^{-12}\,P\,V \\
& - 2.91667\times 10^{-10}\,P^{2} - 2.39989\times 10^{-14}\,V^{2} \quad (9)\\
\frac{1}{\text{COF}} ={}& 2.72098 + 0.278289\,P - 0.029873\,V - 0.000208\,P\,V + 0.047980\,P^{2} \\
& + 0.000111\,V^{2} - 0.000622\,P^{2}\,V + 6.39031\times 10^{-6}\,P\,V^{2} \quad (10)
\end{aligned}\right.$$

$$1\text{ Pass}\;\left\{\begin{aligned}
VL ={}& 2.27635\times 10^{-7} + 7.22884\times 10^{-10}\,P - 2.46145\times 10^{-11}\,V - 1.03868\times 10^{-11}\,P\,V \\
& - 1.82621\times 10^{-10}\,P^{2} + 6.10694\times 10^{-14}\,V^{2} + 8.76819\times 10^{-13}\,P^{2}\,V + 2.48691\times 10^{-14}\,P\,V^{2} \quad (11)\\
\frac{1}{\text{COF}} ={}& -0.383965 + 1.53600\,P + 0.013973\,V - 0.002899\,P\,V - 0.104246\,P^{2} - 0.000028\,V^{2} \quad (12)
\end{aligned}\right.$$

$$4\text{ Pass}\;\left\{\begin{aligned}
VL ={}& 2.29909\times 10^{-8} - 2.29012\times 10^{-10}\,P + 2.46146\times 10^{-11}\,V - 6.98269\times 10^{-12}\,P\,V \\
& - 1.98249\times 10^{-11}\,P^{2} - 7.08320\times 10^{-14}\,V^{2} + 3.23037\times 10^{-13}\,P^{2}\,V + 1.70252\times 10^{-14}\,P\,V^{2} \quad (13)\\
\frac{1}{\text{COF}} ={}& 2.77408 - 0.010065\,P - 0.020097\,V - 0.003659\,P\,V + 0.146561\,P^{2} + 0.000099\,V^{2} \quad (14)
\end{aligned}\right.$$
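As a worked illustration, Eqs. (9) and (10) can be transcribed directly into code and evaluated at any (P, V) inside the tested ranges; the same pattern applies to Eqs. (11)–(14). This is only a sketch of how the published coefficients are used, not additional modelling, and the helper names are ours.

```python
def vl_aa(P, V):
    """Volume loss (m^3) for the as-annealed (AA) condition, Eq. (9)."""
    return (1.52067e-6
            - 1.89340e-9 * P
            - 4.81212e-11 * V
            + 8.37361e-12 * P * V
            - 2.91667e-10 * P**2
            - 2.39989e-14 * V**2)

def cof_aa(P, V):
    """Coefficient of friction for the AA condition, via the 1/COF model of Eq. (10)."""
    inv_cof = (2.72098 + 0.278289 * P - 0.029873 * V - 0.000208 * P * V
               + 0.047980 * P**2 + 0.000111 * V**2
               - 0.000622 * P**2 * V + 6.39031e-6 * P * V**2)
    return 1.0 / inv_cof

# Example: predicted responses at P = 5 N, V = 250 mm/s
print(vl_aa(5, 250))   # ~1.5e-6 m^3
print(cof_aa(5, 250))  # ~0.38
```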

The experimental data are plotted in Fig.7 as a function of the corresponding predicted values for VL and COF for zero pass, one pass, and four passes. The minimal output value is indicated by blue dots, which gradually change to the maximum output value indicated by red points. The effectiveness of the produced regression models was supported by the analysis of these maps, which showed that the practical and projected values matched remarkably well and that the majority of their intersection locations were rather close to the median line.

Comparison between VL and COF of experimental and predicted values of ZK30 at AA, 1P, and 4Bc.

As a consequence of the wear parameters (P and V), Fig. 8 displays 3D response plots created using the regression models to assess changes in VL and COF at the various ECAP passes. For VL, the volume loss and applied load exhibit an inverse proportionality at the various ECAP passes, as is apparent in Fig. 8a–c: increasing the applied load in the wear process minimizes the VL, so the optimal VL was obtained at an applied load of 5 N. There is likewise an inverse relation between the wear speed V and the VL at the different ECAP passes, and the wear speed clearly has to be adjusted for billets with different numbers of passes; a higher number of passes requires a lower wear speed to minimize the VL. The minimum VL at zero passes is 1.50085 × 10⁻⁶ m³, obtained at 5 N and 250 mm/s. At a single pass, the optimal VL is 2.2266028 × 10⁻⁷ m³, obtained at 5 N and 148 mm/s. Finally, the minimum VL at four passes is 2.07783 × 10⁻⁸ m³ at 5 N and 64.5 mm/s.

Three-dimensional plot of VL (ac) and COF (df) of ZK30 at AA, 1P, and 4Bc.

Figure 8d–f presents the effect of the wear parameters P and V on the COF for the ECAPed ZK30 billets at zero, one, and four passes. The applied load in the wear process and the coefficient of friction are inversely proportional; as a result, the minimum COF of the ZK30 billet at the different numbers of passes was obtained at 5 N. On the other hand, the optimal wear speed decreased with the number of passes: the corresponding wear test speeds for billets at zero, one, and four passes are 250, 64.5, and 64.5 mm/s, respectively. The minimum COF at zero passes is 0.380134639, obtained at 5 N and 250 mm/s. At 5 N and 64.5 mm/s, the lowest COF at one pass is 0.220277466. Finally, the minimum COF at four passes is 0.23130154 at 5 N and 64.5 mm/s.
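The minima quoted above can be located numerically by evaluating the fitted models over a dense grid of the tested load and speed ranges, which is essentially what the 3D response plots visualize. A minimal sketch follows, reusing the vl_aa/cof_aa helpers defined after Eqs. (9)–(14); any of Eqs. (11)–(14) could be substituted, and the grid resolution is an arbitrary choice.

```python
import numpy as np

# Grid over the tested wear domain: 1-5 N and 64.5-250 mm/s
P = np.linspace(1.0, 5.0, 201)
V = np.linspace(64.5, 250.0, 201)
PP, VV = np.meshgrid(P, V)

vl = vl_aa(PP, VV)    # response surface for volume loss (AA condition)
cof = cof_aa(PP, VV)  # response surface for coefficient of friction (AA condition)

i, j = np.unravel_index(np.argmin(vl), vl.shape)
print(f"min VL  = {vl[i, j]:.5e} m^3 at P = {PP[i, j]:.2f} N, V = {VV[i, j]:.1f} mm/s")

i, j = np.unravel_index(np.argmin(cof), cof.shape)
print(f"min COF = {cof[i, j]:.4f} at P = {PP[i, j]:.2f} N, V = {VV[i, j]:.1f} mm/s")
```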

The previously mentioned modern ML algorithms have been used here to provide a solid foundation for analyzing the obtained data and gaining significant insights. The following section presents the results acquired by employing these approaches and thoroughly discusses the findings.

The correlation plots and correlation coefficients (Fig. 9) between the input variables (force and speed) and the six output variables (VL_P0, VL_P1, VL_P4, COF_P0, COF_P1, and COF_P4), produced during data preprocessing for the ML models, give valuable insight into the interactions between these variables. Correlation charts help to investigate the strength and direction of the linear relationship between model input and output variables. By inspecting the scatterplots we can initially observe whether there is a positive, negative, or no correlation between each pair of variables; this aids in comprehending how changes in one variable accompany changes in the other. The correlation coefficient, in contrast, offers a numerical assessment of the strength and direction of the linear relationship. It ranges from −1 to 1, with values near −1 indicating a strong negative correlation, values close to 1 indicating a strong positive correlation, and values close to 0 indicating no or only a weak association. It is critical to examine the size and significance of the correlation coefficients when examining the correlation between the force and speed inputs and the six output variables. A high positive correlation coefficient implies that a rise in one variable is connected with an increase in the other, whereas a high negative correlation coefficient indicates that an increase in one variable is associated with a decrease in the other. From Fig. 9 it is clear that, for all ZK30 billets, both the VL and the COF were inversely proportional to the applied load (in the range of 1 up to 5 N). Regarding the wear speed (in the range of 64.5 up to 250 mm/s), the VL of the AA and 1P conditions was inversely proportional to the wear speed while that of 4Bc was directly proportional to it, whereas the COF of all samples was inversely proportional to the wear speed. The VL of the AA condition (P0) showed a strong negative correlation coefficient of −0.82 with the applied load and an intermediate negative coefficient of −0.49 with the wear speed. For the 1P condition, the VL showed a strong negative correlation of −0.74 with the applied load and a very weak negative correlation of −0.13 with the speed. Furthermore, the VL of the 4Bc condition displayed a strong negative correlation of −0.99 with the applied load and a weak positive correlation coefficient of 0.08 with the speed. A similar trend was observed for the COF: the AA, 1P and 4Bc samples displayed negative coefficients of −0.047, −0.65 and −0.61, respectively, with the applied load, and weak negative coefficients of −0.4, −0.05 and −0.22, respectively, with the wear speed.

Correlation plots of input and output variables showcasing the strength and direction of relationships between each inputoutput variable using correlation coefficients.
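A correlation screen of this kind can be reproduced with pandas: assemble the wear inputs and the six responses into a DataFrame and compute pairwise Pearson coefficients. The sketch below assumes a CSV file and column names matching those in the figure; both the file name and the layout are illustrative, not the authors' actual dataset.

```python
import pandas as pd

# Hypothetical file layout: one row per wear test, columns for inputs and responses
df = pd.read_csv("zk30_wear_dataset.csv")   # illustrative file name

inputs = ["Force", "Speed"]
outputs = ["VL_P0", "VL_P1", "VL_P4", "COF_P0", "COF_P1", "COF_P4"]

# Pearson correlation between each input and each output
corr = df[inputs + outputs].corr(method="pearson")
print(corr.loc[inputs, outputs].round(2))

# Optional: scatter matrix for visual inspection of the pairwise relationships
# pd.plotting.scatter_matrix(df[inputs + outputs], figsize=(10, 10))
```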

Figure10 shows the predicted train and test VL values compared to the original data, indicating that the VL prediction model performed well utilizing the LR (Linear Regression) technique. The R2-score is a popular statistic for assessing the goodness of fit of a regression model. It runs from 0 to 1, with higher values indicating better performance. In this scenario, the R2-scores for both the training and test datasets range from 0.55 to 0.99, indicating that the ML model has established a significant correlation between the projected VL values and the actual data. This shows that the model can account for a considerable percentage of the variability in VL values.

Predicted train and predicted test VL versus actual data computed for different applied loads and number of passes of (a) 0P (AA), (b) 1P, and (c) 4Bc: evaluating the performance of the VL prediction best model achieved using LR algorithm.

The R2-scores for training and testing of the three distinct ML models for the output variables VL_P0, VL_P1, and VL_P4 are summarized in Fig. 11. The R2-score, also known as the coefficient of determination, is a number ranging from 0 to 1 that indicates how well the model fits the data. For VL_P0, the R2 for testing is 0.69 and that for training is 0.96, indicating that the ML model predicts the VL_P0 variable with reasonable accuracy on unseen data, while the training value of 0.96 suggests that the model fits the training data rather well. In summary, the performance of the ML models varies depending on the output variable. With R2 values of 0.98 for both training and testing, the model predicts 'VL_P4' with great accuracy. The model's performance for 'VL_P0' is reasonable, with an R2 score of 0.69 for testing and a high R2 score of 0.96 for training, whereas its performance for 'VL_P1' is relatively poor, with R2 values of 0.55 for testing and 0.57 for training. Additional assessment measures must be considered to understand the models' prediction capabilities well. Therefore, as presented in the following section, we performed non-linear polynomial fitting with extracted equations that accurately link the output and input variables.

Result summary of ML train and test sets displaying R2-score for each model.
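The train/test R2 comparison reported here follows a standard scikit-learn workflow; the sketch below shows the pattern for one response using a plain linear regressor, with assumed file and column names, and is not the exact pipeline used by the authors.

```python
import pandas as pd
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

df = pd.read_csv("zk30_wear_dataset.csv")          # illustrative file name
X, y = df[["Force", "Speed"]], df["VL_P0"]         # assumed column names

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=42)

model = LinearRegression().fit(X_tr, y_tr)

print("train R2:", round(r2_score(y_tr, model.predict(X_tr)), 2))
print("test  R2:", round(r2_score(y_te, model.predict(X_te)), 2))
```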

Furthermore, the data were subjected to polynomial fitting with first- and second-degree models (Fig. 12). The fitting accuracy was assessed using the R2-score, which ranged from 0.92 to 0.98, indicating a good fit. The following equations (Eqs. 15–17) were extracted by fitting the experimental volume-loss dataset at the different conditions as functions of the applied load (P) and the speed (V):

$$VL_{\text{P0}} = 1.519\times 10^{-6} - 2.417\times 10^{-9}\,P - 3.077\times 10^{-11}\,V$$

(15)

$$VL_{\text{P1}} = 2.299\times 10^{-7} - 5.446\times 10^{-10}\,P - 5.431\times 10^{-11}\,V - 5.417\times 10^{-11}\,P^{2} + 2.921\times 10^{-12}\,P\,V + 1.357\times 10^{-13}\,V^{2}$$

(16)

$$VL_{\text{P4}} = 2.433\times 10^{-8} - 6.200\times 10^{-10}\,P + 1.042\times 10^{-12}\,V$$

(17)

Predicted versus actual (a) VL_P0 fitted to Eq.15 with R2-score of 0.92, (b) VL_P1 fitted to Eq.16 with R2-score of 0.96, (c) VL_P4 fitted to Eq.17 with R2-score of 0.98.

Figure 13 depicts the predicted train and test coefficient of friction (COF) values plotted against the actual data. The figure seeks to assess the performance of the best models obtained using the SVM (Support Vector Machine) and GPR (Gaussian Process Regression) algorithms for the various applied loads and numbers of passes (0, 1P, and 4Bc). By showing the predicted train and test COF values alongside the actual data and comparing the projected and actual points, we can see how closely the models match the true values and hence assess the accuracy and efficacy of the COF prediction models. The ML models trained and evaluated on the output variables 'COF_P0', 'COF_P1', and 'COF_P4' using the SVM and GPR algorithms show high accuracy and performance, as summarized in Fig. 13. The R2 scores for testing vary from 0.97 to 0.99, showing that the models efficiently capture the variability of the predicted variables. Furthermore, the training R2 scores are consistently high at 0.99, demonstrating a solid fit to the training data. These findings imply that the ML models can accurately predict the values of 'COF_P0', 'COF_P1', and 'COF_P4' and generalize well to new unseen data.

Predicted train and predicted test COF versus actual data computed for different applied loads and number of passes of (a) 0P (AA), (b) 1P, and (c) 4Bc: evaluating the performance of the COF prediction best model achieved using SVM and GPR algorithms.
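A hedged sketch of fitting a COF response with support vector regression and Gaussian process regression in scikit-learn is shown below; the kernel and hyperparameter choices are illustrative defaults, and the file and column names are assumptions rather than the tuned models behind Fig. 13.

```python
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

df = pd.read_csv("zk30_wear_dataset.csv")            # illustrative file name
X, y = df[["Force", "Speed"]], df["COF_P4"]          # assumed column names

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=42)

# Support vector regression with scaled inputs (RBF kernel)
svr = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=0.01))
svr.fit(X_tr, y_tr)

# Gaussian process regression with an RBF + noise kernel
gpr = make_pipeline(StandardScaler(),
                    GaussianProcessRegressor(kernel=RBF() + WhiteKernel(), normalize_y=True))
gpr.fit(X_tr, y_tr)

print("SVR test R2:", round(svr.score(X_te, y_te), 3))
print("GPR test R2:", round(gpr.score(X_te, y_te), 3))
```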

Figure14 presents a summary of the results obtained through machine learning modeling. The R2 values achieved for COF modeling using SVM and GPR are 0.99 for the training set and range from 0.97 to 0.99 for the testing dataset. These values indicate that the models have successfully captured and accurately represented the trends in the dataset.

Result summary of ML train and test sets displaying R2-score for each model.

The results of the RSM optimization carried out on the volume loss and coefficient of friction at zero passes (AA), along with the relevant variables, are shown in Appendix A-1. The red and blue dots represent the wear conditions (P and V) and the responses (VL and COF) for each of the ensuing optimization findings. The optimization objectives for the volume loss and coefficient of friction were set to "in range", with "minimize" as the solution target, and the desirability function was of the smaller-is-better type. The optimal conditions for volume loss were (A) P = 5 N and (B) V = 250 mm/s; Appendix A-1(a) shows that this resulted in the lowest attainable volume loss of 1.50127 × 10⁻⁶ m³. The optimal conditions for the coefficient of friction were (A) P = 2.911 N and (B) V = 250 mm/s, which led to the lowest possible coefficient of friction of 0.324575, as shown in Appendix A-1(b).

Appendix A-2 displays the outcomes of the RSM optimization performed on the volume loss and coefficient of friction at one pass, together with the appropriate variables. The optimization objectives were again designed to be "in range", with "minimize" as the solution objective and a desirability function of the smaller-is-better type. The ideal conditions for volume loss were (A) P = 4.95 N and (B) V = 136.381 mm/s, yielding the lowest feasible volume loss of 2.22725 × 10⁻⁷ m³, as seen in Appendix A-2(a). The optimal P and V values for the coefficient of friction were (A) P = 5 N and (B) V = 64.5 mm/s; as demonstrated in Appendix A-2(b), this resulted in the lowest achievable coefficient of friction of 0.220198.

Similarly, Appendix A-3 displays the outcomes of the RSM optimization performed on the volume loss and coefficient of friction at four passes, together with the appropriate variables. The optimization objectives were again designed to be "in range", with "minimize" as the solution objective and a smaller-is-better desirability function. The optimal conditions for volume loss were (A) P = 5 N and (B) V = 77.6915 mm/s, yielding the lowest feasible volume loss of 2.12638 × 10⁻⁸ m³, as seen in Appendix A-3(a). The optimal P and V values for the coefficient of friction were (A) P = 4.95612 N and (B) V = 64.9861 mm/s; as seen in Appendix A-3(b), this resulted in the lowest achievable coefficient of friction of 0.235109.

The most appropriate combination of independent wear factors contributing to the minimum feasible volume loss and coefficient of friction was determined using a genetic algorithm (GA). In the genetic algorithm approach, the objective function for each response was obtained by taking Eqs. (9)–(14) and subjecting them to the wear boundary conditions on P and V. The recommended objective functions can be expressed as: minimize (VL, COF), subject to the ranges of wear conditions 1 ≤ P ≤ 5 (N) and 64.5 ≤ V ≤ 250 (mm/s).
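As a sketch of this bounded minimization, an off-the-shelf evolutionary optimizer such as SciPy's differential evolution (a GA-like population method, not the MATLAB GA toolbox used in this work) can minimize each response within the stated bounds, here using the AA-condition helpers defined after Eqs. (9)–(14):

```python
from scipy.optimize import differential_evolution

bounds = [(1.0, 5.0), (64.5, 250.0)]   # P in N, V in mm/s

# Minimize volume loss for the AA condition (Eq. 9 helper defined earlier)
res_vl = differential_evolution(lambda x: vl_aa(x[0], x[1]), bounds, seed=1)

# Minimize coefficient of friction for the AA condition (Eq. 10 helper)
res_cof = differential_evolution(lambda x: cof_aa(x[0], x[1]), bounds, seed=1)

print("min VL :", res_vl.fun, "at P, V =", res_vl.x)
print("min COF:", res_cof.fun, "at P, V =", res_cof.x)
```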

Figures 15 and 16 show the performance of the GA optimization technique in terms of fitness value and the running solver view, derived from MATLAB, together with the related wear conditions for the lowest VL and COF at zero passes. VL and COF were minimized through Eqs. (9) and (10), which were used as the fitness functions and subjected to the wear boundary limits. According to Fig. 15a, the lowest VL that the GA could find was 1.50085 × 10⁻⁶ m³ at P = 5 N and V = 249.993 mm/s. Furthermore, the GA yielded a minimum COF of 0.322531 at P = 2.91 N and V = 250 mm/s (Fig. 15b).

Optimum VL (a) and COF (b) by GA at AA condition.

Optimum VL (a) and COF (b) by hybrid DOE-GA at AA condition.

The hybrid DOE–GA analysis was carried out to enhance the GA outcomes. The optimal wear conditions of VL and COF at zero passes were used to determine the initial populations of the hybrid DOE–GA. The hybrid DOE–GA yielded a minimum VL of 1.50085 × 10⁻⁶ m³ at a speed of 249.993 mm/s and a load of 5 N (Fig. 16a). Similarly, at a load of 2.91 N and a speed of 250 mm/s, the hybrid DOE–GA yielded a minimum COF of 0.322531 (Fig. 16b).

The fitness function, as defined by Eqs. (11) and (12), was the minimization of VL and COF at 1P, subject to the wear boundary conditions. Figure 17a,b displays the optimal values of VL and COF obtained by GA, which were 2.2266 × 10⁻⁷ m³ and 0.220278, respectively. The lowest VL was measured at 147.313 mm/s and 5 N, while 5 N and 64.5 mm/s were the optimum wear conditions for COF as determined by GA. The hybrid DOE–GA results for the minimum VL and COF at a single pass were 2.2266 × 10⁻⁷ m³ and 0.220278, respectively, obtained at 147.313 mm/s and 5 N for VL, as shown in Fig. 18a, and at 5 N and 64.5 mm/s for COF, as shown in Fig. 18b.

Optimum VL (a) and COF (b) by GA at 1P condition.

Optimum VL (a) and COF (b) by hybrid DOE-GA at 1P condition.

Subject to the wear boundary conditions, the fitness function was the minimization of VL and COF at four passes, as defined by Eqs. (13) and (14). The optimum values of VL and COF obtained via GA, shown in Fig. 19a,b, were 2.12638 × 10⁻⁸ m³ and 0.231302, respectively. The lowest VL was recorded at 5 N and 77.762 mm/s, while the GA found the optimal wear conditions for COF to be 5 N and 64.5 mm/s. In Fig. 20a,b, the hybrid DOE–GA findings for the minimum VL and COF at four passes were 2.12638 × 10⁻⁸ m³ and 0.231302, respectively, achieved at 77.762 mm/s and 5 N for VL and at 5 N and 64.5 mm/s for COF.

Optimum VL (a) and COF (b) by GA at 4Bc condition.

Optimum VL (a) and COF (b) by hybrid DOE-GA at 4Bc condition.

A mathematical model whose input process parameters influence the quality of the output responses was solved using the multi-objective genetic algorithm (MOGA) technique54. In the current study, multi-objective optimization by genetic algorithm, with the regression models as objective functions, was implemented using the GA Toolbox in MATLAB 2020; the P and V input wear parameter ranges served as the upper and lower bounds, and the number of parameters was set to three. The following MOGA settings were then selected: an initial population of fifty individuals, 300 generations, a migration interval of 20, a migration fraction of 0.2, and a Pareto fraction of 0.35. Constraint-dependent mutation and intermediate crossover with a crossover probability of 0.8 were used for the optimization. The outcome of MOGA is the Pareto optimum, also known as the non-dominated solution set: a group of solutions that consider all of the objectives without sacrificing any of them55.
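For illustration, the Pareto front produced by MOGA can be approximated by evaluating both objectives on a dense grid of candidate (P, V) settings and keeping only the non-dominated points. The sketch below reuses the AA-condition helpers from Eqs. (9) and (10) and is a simple grid-based filter, not the MATLAB MOGA run itself.

```python
import numpy as np

# Candidate solutions over the wear domain
P = np.linspace(1.0, 5.0, 80)
V = np.linspace(64.5, 250.0, 80)
PP, VV = np.meshgrid(P, V)
objs = np.column_stack([vl_aa(PP, VV).ravel(), cof_aa(PP, VV).ravel()])  # (VL, COF)

def pareto_mask(points):
    """Boolean mask of non-dominated rows (both objectives to be minimized)."""
    n = points.shape[0]
    keep = np.ones(n, dtype=bool)
    for i in range(n):
        if not keep[i]:
            continue
        # Remove every point that point i strictly dominates
        dominated = np.all(points >= points[i], axis=1) & np.any(points > points[i], axis=1)
        keep &= ~dominated
    return keep

front = objs[pareto_mask(objs)]
front = front[np.argsort(front[:, 0])]   # sort the trade-off curve by volume loss
print(front[:5])                          # best-VL end of the Pareto front
```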

Treating the volume loss and coefficient of friction at zero passes as a multi-objective function, MOGA was utilized to identify their lowest possible values. Equations (9) and (10) were the fitness functions for the volume loss and coefficient of friction at zero passes for ZK30. The Pareto front values determined by MOGA are listed in Table 2, and the Pareto chart points for the volume loss (Objective 1) and coefficient of friction (Objective 2) at zero passes are shown in Fig. 21. A reduction in the coefficient of friction was observed to come at the expense of a higher volume loss; conversely, the volume loss can only be reduced by giving up part of the reduction in the coefficient of friction. For zero passes, the best volume loss was 1.50096 × 10⁻⁶ m³ at a sacrificed coefficient of friction of 0.402941, whereas the worst volume loss was 1.50541 × 10⁻⁶ m³ with the best coefficient of friction of 0.341073.

The genetic algorithm was likewise used for the multi-objective functions of minimal volume loss and coefficient of friction at one pass, with Eqs. (11) and (12) as the respective fitness functions. Table 3 displays the Pareto front points of volume loss and coefficient of friction at one pass, and Fig. 22 presents the corresponding Pareto chart points for the volume loss (Objective 1) and coefficient of friction (Objective 2). The coefficient of friction was found to decrease as the volume loss increases; consequently, the volume loss can be reduced only at the expense of a higher coefficient of friction. The best volume loss for a single pass was 2.22699 × 10⁻⁷ m³, with a worst (maximum) coefficient of friction of 0.242371, while the best minimum coefficient of friction was 0.224776 at a volume loss of 2.23405 × 10⁻⁷ m³.

The multi-objective functions of minimal volume loss and coefficient of friction at four passes were handled by Eqs. (13) and (14), respectively, which served as the fitness functions. The Pareto front points are shown in Table 4, and the Pareto chart points for the volume loss (Objective 1) and coefficient of friction (Objective 2) at four passes are shown in Fig. 23. Again, as the volume loss increases, the coefficient of friction decreases, so the volume loss can be reduced only at the expense of an increased coefficient of friction. The best minimum coefficient of friction was 0.2313046 at a volume loss of 2.12663 × 10⁻⁸ m³, and the best minimum volume loss was 2.126397 × 10⁻⁸ m³ at a coefficient of friction of 0.245145 for four passes. In addition, Table 5 compares the wear response values obtained by DOE, RSM, GA, hybrid RSM–GA, and MOGA.

This section proposes the optimal wear parameters for the different responses, namely the VL and COF of ZK30. The presented optimal wear parameters, P and V, are based on previous studies of ZK30 that recommended applied loads from 1 to 30 N and speeds from 64.5 to 1000 mm/s. Table 6 presents the optimal conditions of the wear process for the different responses obtained by the genetic algorithm (GA).

Table 7 displays the validity of the wear regression models for VL under several conditions. The wear models were validated under various load and speed conditions. Based on the validation data, the volume loss response models had the lowest error percentage between the experimental and regression values and were the most accurate. Table 7 indicates that the predictive modeling performance has been validated, as shown by the reasonably high accuracy obtained, ranging from 69.7 to 99.9%.

Equations (15) to (17) provide insight into the relationship linking the volume loss with the applied load and speed, allowing us to understand how changes in these factors affect the volume loss in the given system. The validity of this modeling was further examined using a new, unseen dataset from which the prediction error and accuracy were calculated, as shown in Table 8. Table 8 demonstrates that the predictive modeling performance has been validated, as evidenced by the obtained accuracy ranging from 69.7 to 99.9%, which is reasonably high.

Enhancing Emotion Recognition in Users with Cochlear Implant Through Machine Learning and EEG Analysis – Physician’s Weekly

The following is a summary of "Improving emotion perception in cochlear implant users: insights from machine learning analysis of EEG signals," published in the April 2024 issue of Neurology by Paquette et al.

Cochlear implants provide some hearing restoration, but limited emotional perception in sound hinders social interaction, making it essential to study remaining emotion perception abilities for future rehabilitation programs.

Researchers conducted a retrospective study to investigate the remaining emotion perception abilities in cochlear implant users, aiming to improve rehabilitation programs by understanding how well they can still perceive emotions in sound.

They explored the neural basis of these remaining abilities by examining whether machine learning methods could detect emotion-related brain patterns in 22 cochlear implant users. Employing a random forest classifier on available EEG data, they aimed to predict auditory emotions (vocal and musical) from participants' brain responses.
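As a generic illustration of this kind of decoding pipeline (a random forest classifier predicting emotion categories from EEG-derived features), a minimal scikit-learn sketch might look like the following; the feature matrix, label coding, and trial counts are invented placeholders, not the study's data or exact analysis.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Hypothetical feature matrix: one row per trial of EEG-derived features
rng = np.random.default_rng(0)
X = rng.standard_normal((220, 64))        # e.g., 22 users x 10 trials, 64 features each
y = rng.integers(0, 3, size=220)          # emotion labels (e.g., 0/1/2 categories)

clf = RandomForestClassifier(n_estimators=500, random_state=0)
scores = cross_val_score(clf, X, y, cv=5)  # cross-validated decoding accuracy
print("mean accuracy:", scores.mean().round(2))
```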

The results showed consistent emotion-specific biomarkers in cochlear implant users, which could potentially be utilized in developing effective rehabilitation programs integrating emotion perception training.

Investigators concluded that the study demonstrated the promise of machine learning for enhancing cochlear implant user outcomes, especially regarding emotion perception.

Source: bmcneurol.biomedcentral.com/articles/10.1186/s12883-024-03616-0

An AI Ethics Researcher’s Take On The Future Of Machine Learning In The Art World – SlashGear

Nothing is built to last, not even the stuff we create to last as long as possible. Everything eventually degrades, especially art, and many people make careers and hobbies out of restoring timeworn items. AI could provide a useful second pair of eyes during the process.

Was Rahman pointed out that machine learning has served a vital role in art restoration by figuring out the most likely missing pieces that need replacing. Consider the exorcism scene in "Invincible;" Machine learning cuts down on the time-consuming, mind-numbing work human restorers have to carry out. To be fair, machine learning is technically different from AI, but it is also a subset of AI, so since we can use machine learning in art restoration, it stands to reason we could use AI, too.

Rahman also stated that machine learning helps guide art restorers and is generally more accurate than prior techniques. More importantly, Rahman believes AI programs assigned to art restoration could prevent botched attempts that are the product of human error or of someone's pride exceeding their talent. Rahman cited the disastrous event when a furniture restorer forever disfigured Bartolomé Esteban Murillo's Immaculate Conception, but that is far from the only case where an AI could come in handy. After all, someone once tried restoring Elías García Martínez's Ecce Homo fresco and accidentally birthed what is colloquially known as "Monkey Christ."

While a steady hand and preternatural skill are necessary to rekindle the glory of an old painting or sculpture, Rahman believes AI could provide a guiding hand that improves the result's quality, provided the restorer already knows what they're doing.

Automated Analysis of Nuclear Parameters in Oral Exfoliative Cytology Using Machine Learning – Cureus

Essentiality, protein–protein interactions and evolutionary properties are key predictors for identifying cancer … – Nature.com

Cancer-associated genes and essentiality scores

We first determined whether cancer-related genes are likely to have high essentiality scores. We aggregated several essentiality scores calculated by multiple metrics5 for the list of genes identified in the COSMIC Census database (Oct 2018) and for all other human protein-coding genes. Two different approaches to scoring gene essentiality are available. The first group of methods calculates essentiality scores by measuring the degree of loss of function caused by a change (represented by variation detection) in the gene. It uses the following methods: residual variation intolerance score (RVIS), LoFtool, Missense-Z, the probability of loss-of-function intolerance (pLI) and the probability of haplo-insufficiency (Phi). The second group (Wang, Blomen and Hart-EvoTol) studies the impact of variation on cell viability. For all the essentiality measures above, a higher score indicates a higher degree of essentiality. Each method is described in detail in5.

We find that, on average, the cancer genes exhibit a higher degree of essentiality than the average scores calculated for all protein-coding human genes across all metrics (Fig. 1). Genes associated with cancer have higher essentiality scores on average in both categories (intolerance to variants and cell-line viability) than the average scores across all human genes, with P values consistently < 0.00001 (Table 1).

We also investigated whether tumor suppressor genes (TSGs) or oncogenes, as distinct groups, show different degrees of essentiality. (If a gene is known to be both an oncogene and a TSG, its essentiality score is present in both the oncogene and the TSG groups.) We found no significant differences in the average degree of essentiality for either group compared with the set of all cancer genes (Table 1; Fig. 1).

The results are of particular interest in the context of cancer, as essential genes have been shown to evolve more slowly than non-essential genes20,21,22, although some discrepancies have been reported22. A slower evolutionary rate implies a lower probability of evolving resistance to a cancer drug. This is particularly important in the case of anticancer drugs, as these drugs have been reported to change the selection pressure when administered, leading to increased drug resistance23.

This association between cancer-related genes and essentiality scores prompted us to develop methods to identify cancer-related genes using this information. We used a machine-learning approach: a range of open-source algorithms were applied and tested to produce the most accurate classifier. We focused on properties related to protein–protein interaction networks, as essential genes are likely to encode hub proteins, i.e., those with the highest degree values in the network21,24.

A total of nine different modelling approaches (or configurations) were run on the data to ensure selection of the best-performing approach (the list, along with performance metrics, can be found in Supplementary Information Table 2). The performance metric used to rank the models was Logarithmic Loss (LogLoss), an appropriate and well-known measure for binary-classification models: it measures the confidence of the predictions and estimates how incorrect classifications are penalised. The selection mechanism for the performance metric takes the type of model (binary classification in this case) and the distribution of values into consideration when recommending the metric; however, other performance metrics were also calculated (Supplementary Information Table 2). The performance metrics were calculated for all validation and test (holdout) sets to ensure that the model is not over-fitting. The model with the best performance (LogLoss) in this case was the eXtreme Gradient Boosted Trees Classifier with Early Stopping. The model shows very close LogLoss values for the training/validation and holdout data sets (Table 2), demonstrating no over-fitting.

The model development workflow (i.e., the model blueprint) is shown in Fig. 2. This shows the pre-processing steps and the algorithm used in our final model, and illustrates the steps involved in transforming the input into a model. In this diagram, ordinal encoding of categorical variables converts categorical variables to an ordinal scale, while the Missing Values Imputed node imputes missing values. Numeric variables with missing values were imputed with an arbitrary value (default −9999). This is effective for tree-based models, as they can learn a split between the arbitrary value (−9999) and the rest of the data (which is far away from this value).

Model development stages.
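The blueprint in Fig. 2 (ordinal encoding, constant-value imputation, then a gradient-boosted tree classifier with early stopping) maps naturally onto open-source tooling. The sketch below uses scikit-learn and xgboost with an assumed file name and label column; it is an analogous open-source pipeline, not the exact AutoML configuration used in the study.

```python
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.impute import SimpleImputer
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import OrdinalEncoder
from xgboost import XGBClassifier

df = pd.read_csv("gene_features.csv")        # illustrative file name
y = df.pop("is_cancer_gene")                 # assumed binary label column

cat_cols = df.select_dtypes(include="object").columns.tolist()
num_cols = df.select_dtypes(exclude="object").columns.tolist()

# Ordinal-encode categoricals; impute missing numerics with a far-away constant (-9999)
# that tree models can isolate with a single split.
pre = ColumnTransformer([
    ("cat", OrdinalEncoder(handle_unknown="use_encoded_value", unknown_value=-1), cat_cols),
    ("num", SimpleImputer(strategy="constant", fill_value=-9999), num_cols),
])

X_tr, X_val, y_tr, y_val = train_test_split(df, y, test_size=0.2, stratify=y, random_state=0)
X_tr_t, X_val_t = pre.fit_transform(X_tr), pre.transform(X_val)

# Gradient-boosted trees with early stopping on validation log-loss
clf = XGBClassifier(n_estimators=2000, learning_rate=0.05,
                    eval_metric="logloss", early_stopping_rounds=50)
clf.fit(X_tr_t, y_tr, eval_set=[(X_val_t, y_val)], verbose=False)

print("best iteration:", clf.best_iteration)
```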

To demonstrate the effectiveness of our model, a chart was constructed (Fig. 3) that shows, across the entire validation dataset (divided into 10 segments or bins ordered by the average outcome prediction value), the average actual outcome (whether a gene has been identified as a cancer gene or not) and the average predicted outcome for each segment of the data, ordered from the lowest average to the highest per segment. The left side of the curve indicates where the model predicted a low score for one section of the population, while the right side indicates where the model predicted a high score. The "Predicted" blue line displays the average prediction score for the rows in each bin, and the "Actual" red line displays the actual percentage for the rows in that bin. By showing the actual outcomes alongside the predicted values, we can see how close the predictions are to the actual known outcome for each segment of the dataset. We can also determine whether the accuracy diverges in cases where the outcome is confirmed as cancer or not, as the segments are ordered by their average outcome scores.

The Lift Chart illustrating the accuracy of the model.

In general, the steeper the actual line and the more closely the predicted line matches the actual line, the better the model. A close relationship between these two lines is indicative of the predictive accuracy of the model, and a consistently increasing line is another good indicator of satisfactory model performance. The graph for our model (Fig. 3) thus indicates the high accuracy of our prediction model.

In addition, the confusion matrix (Table 3) and the summary statistics (Table 4) show the actual versus predicted values for both true/false categories for our training dataset (80% of the total dataset). The model statistics show that the model reached just over 89% specificity and 60% sensitivity in predicting cancer genes. This means that we are able to detect over half of the cancer genes successfully while only misclassifying around 10% of non-cancer genes within the training/validation datasets. The summary statistics (Table 4) also show the F1 score (the harmonic mean of precision and recall) and the Matthews Correlation Coefficient (MCC, the geometric mean of the regression coefficients) for the model. The low F1 score reflects our choice to maximise the true negative rate (preventing significant misclassification of non-cancer genes).
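The summary statistics quoted above (specificity, sensitivity, F1 and MCC) all derive from the confusion matrix; a minimal sketch is shown below, reusing the fitted classifier and validation split from the earlier XGBoost example, with an illustrative 0.5 probability threshold that is not necessarily the cut-off used in the study.

```python
from sklearn.metrics import confusion_matrix, f1_score, matthews_corrcoef

y_prob = clf.predict_proba(X_val_t)[:, 1]
y_pred = (y_prob >= 0.5).astype(int)        # illustrative threshold

tn, fp, fn, tp = confusion_matrix(y_val, y_pred).ravel()

sensitivity = tp / (tp + fn)                # true positive rate (recall)
specificity = tn / (tn + fp)                # true negative rate

print(f"sensitivity = {sensitivity:.2f}, specificity = {specificity:.2f}")
print(f"F1  = {f1_score(y_val, y_pred):.2f}")
print(f"MCC = {matthews_corrcoef(y_val, y_pred):.2f}")
```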

To further confirm the model's ability to predict cancer genes, we applied it to 190 new cancer genes that had been added to the COSMIC Cancer Census between October 2018 and April 2020. Applying the model, we were able to predict 56 of the newly added 190 genes as cancer genes, all of which were among the false positives previously detected by the model. This indicates that the model is indeed suitable for predicting novel candidate cancer genes that could be experimentally confirmed later. A full ranked list of candidate genes predicted to be cancer-associated by our model is available in Supplementary Information Table 3.

Another way to visualise the model performance, and to determine the optimal score to use as a threshold between cancer and non-cancer genes, is the prediction distribution graph (Fig. 4), which illustrates the distribution of outcomes. The distribution in purple shows the outcomes where a gene is not classified as a cancer gene, while the distribution in green shows the outcomes where a gene is classified as a cancer gene. The dividing line represents the selected threshold at which the binary decision creates a desirable balance between true negatives and true positives. Figure 4 shows how well our model discriminates between the prediction classes (cancer gene or non-cancer gene) and shows the selected score (threshold) that could be used to make a binary (true/false) prediction for a gene to be classified as a candidate cancer gene. Every prediction to the left of the dividing line is classified as non-cancer-associated and every prediction to the right is classified as cancer-associated.

The prediction distribution graph showing how well the model discriminates between cancer and non-cancer genes.

The prediction distribution graph can be interpreted as follows: purple to the left of the threshold line marks instances where genes were correctly classified as non-cancer (true negatives); green to the left of the threshold line marks instances incorrectly classified as non-cancer (false negatives); purple to the right of the threshold line marks instances incorrectly classified as cancer genes (false positives); and green to the right of the threshold line marks instances correctly classified as cancer genes (true positives). The graph again confirms that the model was able to accurately distinguish cancer and non-cancer genes.

Using the receiver operating characteristic (ROC) curve produced for our model (Fig. 5), we were able to evaluate the accuracy of prediction. The AUC (area under the curve) is a metric for binary classification that considers all possible thresholds and summarizes performance in a single value; the larger the area under the curve, the more accurate the model. An AUC of 0.5 shows that predictions based on the model are no better than a random guess, while an AUC of 1.0 shows that the predictions are perfect (this is highly uncommon and likely flawed, indicating that some features that should not be known in advance are being used in model training and thus revealing the outcome). As the area under the curve is 0.86, we conclude that the model is accurate. The circle intersecting the ROC curve represents the threshold chosen for classification of genes; it is used to transform the probability scores assigned to each gene into binary classification decisions, where each gene is classified as a potential cancer gene or not.

The receiver operator characteristic (ROC) curve indicating model performance.
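The ROC/AUC analysis and the threshold choice can be reproduced with scikit-learn, again continuing the earlier sketch (reusing `y_val` and the fitted `clf`). The Youden-index rule used below to pick an operating point is one common heuristic and is only an assumption, not necessarily the criterion used by the authors.

```python
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

y_prob = clf.predict_proba(X_val_t)[:, 1]
fpr, tpr, thresholds = roc_curve(y_val, y_prob)
print("AUC =", round(roc_auc_score(y_val, y_prob), 2))

# One common way to pick an operating point: maximize Youden's J = TPR - FPR
best = np.argmax(tpr - fpr)
print("chosen threshold:", round(thresholds[best], 3),
      "-> TPR =", round(tpr[best], 2), ", FPR =", round(fpr[best], 2))
```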

Feature impact measures how much worse a model's error score would be if the model made predictions after randomly shuffling the values of one input field (while leaving the other values unchanged), and thus shows how useful each feature is for the prediction. The scores were normalised so that the value of the most important feature column is 100% and the other features are scaled relative to it. This helps identify those properties that are particularly important for predicting cancer genes and aids in furthering our understanding of the biological aspects that might underlie the propensity of a gene to be a cancer gene.
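This shuffle-one-column notion of feature impact corresponds to permutation importance; a minimal sketch using scikit-learn's implementation on the validation split from the earlier example is shown below (the feature names come from the fitted preprocessor and are assumptions, and the normalisation mirrors the 100%-scaling described above).

```python
from sklearn.inspection import permutation_importance

# Shuffle each column of the validation set in turn and measure the drop in score
result = permutation_importance(clf, X_val_t, y_val,
                                scoring="neg_log_loss", n_repeats=10, random_state=0)

# Normalise so that the most impactful feature reads 100%
impact = 100 * result.importances_mean / result.importances_mean.max()
for name, score in sorted(zip(pre.get_feature_names_out(), impact),
                          key=lambda t: -t[1])[:10]:
    print(f"{name:30s} {score:6.1f}%")
```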

Closeness and degree are ranked as the properties with the highest feature impact (Fig. 6). Both are protein–protein interaction network properties, indicating a central role of the protein product within the network, and we find that both correlate with the likelihood of cancer association. Other important properties, such as the Phi essentiality score (probability of haploinsufficiency compared with the baseline neutral expectation) and Tajima's D (regulatory), which measure genetic variation at the intra-species level and the proportion of rare variants, show that increased essentiality accompanied by the occurrence of rare variants increases the likelihood of pathological impact and of the gene being linked to cancer initiation or progression. We also note that a greater gene or transcript length increases the likelihood of a somatic mutation occurring within that gene, thus increasing the likelihood of it being a cancer gene.

The top properties ranked by their relative importance used to make the predictions by the model.

To confirm that the selected model's performance is optimal for the input data used, we created a new blended model combining the 2nd- and 3rd-best modelling approaches from all approaches tested within our project and compared its performance metric (AUC) with that of our selected model. We found that the improvement is small (0.008) despite the added complexity: the blended model achieved an AUC of 0.866, while our single selected model achieved an AUC of 0.858.

We also retrained our model using a dataset that excludes general gene properties and found that the reduction in model performance was evident but very small: the model trained on this dataset achieved an AUC of 0.835 and a sensitivity of 55% at a specificity of 89%. This small reduction in predictive ability indicates that essentiality and protein–protein interaction network properties are the most important features for predicting cancer genes, and that the information carried by general gene properties can for the most part be represented by these properties. This can be rationalised, as longer genes (median transcript length = 3737) tend to have the highest number of protein–protein interactions25.

According to a recent comprehensive review of cancer driver gene prediction models, the best-performing machine learning model is currently driverMAPS, with an AUC of 0.94, followed by HotNet2 with an AUC of 0.814. When comparing our model's AUC to the other 12 reviewed cancer driver gene prediction models, our model would come second with an AUC of 0.86. Our predictive model achieved a better AUC than the best model that used a similar network-based approach (HotNet2, AUC = 0.81) and than the best function-based prediction model (MutPanning, AUC = 0.62). The strong performance of our model indicates the importance of combining different and distinctive gene properties when building prediction models, while avoiding reliance on the frequency-based approach, which could mask important driver genes detected in fewer samples. Despite the apparent success and high AUC score of our model, this result should be treated with some caution. The AUC value is based on the ROC curve, which is constructed by varying the threshold and plotting the resulting sensitivities against the corresponding false positive rates. Several statistical methods are available to compare two AUC results and determine whether the difference is significant26,27,28. These methods require the ranking of the variables in their calculations (e.g., to calculate the variance or covariance of the AUC). The ranking of predicted cancer-associated genes was not available for all of the other 12 cancer driver gene prediction methods, so we were not able to determine whether the differences between the AUC score of our method and the AUC scores of these methods are significant.

The driverMAPS (Model-based Analysis of Positive Selection) method, the only method with a higher AUC than our model, identifies candidate cancer genes under the assumption that these genes exhibit elevated mutation rates at functionally important sites29; driverMAPS thus combines frequency- and function-based principles. Unlike our model, which uses certain cohorts of gene properties, the parameters used in driverMAPS are mainly derived and estimated from factors influencing positive selection on somatic mutations. However, there are a few features in common between the two models, such as dN/dS.

Although driverMAPS had the best overall performance, network-based methods (like ours) showed much higher sensitivity than driverMAPS, potentially making them more suited to distinguishing cancer driver from non-driver genes. The driverMAPS paper29 provides a list of novel driver genes, and we found that 35% of these novel candidate genes were also predicted by our model. Differences in the genes identified as cancer-related by the two approaches could be attributed to the different nature of the features they use. There is evidence30 pointing to genes with low mutation rates that nevertheless have important roles in driving the initiation and progression of tumours, and genes with high mutation rates have also been shown to be less vital than expected in driving tumour initiation31. This variability in how mutation rate correlates with identified driver genes might explain some genes that our model does not identify as cancer-related where driverMAPS does. Our model uses properties that are available for most protein-coding genes, while driverMAPS applies to genes already identified in tumour samples and predicts their likelihood of being driver cancer genes; thus, the candidate list of genes provided by driverMAPS is substantially smaller than ours. Using an ensemble method that evaluates both the driverMAPS score and our model's score for each gene may produce a more reliable outcome, but this would require further validation.

Enriching the model's training dataset with additional properties that correlate with oncogenes could enhance its predictive ability and further elevate its accuracy. One potential feature is whether a gene is an ohnolog.

Paralogs retained from the whole genome duplication (WGD) events that occurred in all vertebrates some 500 Myr ago are called ohnologs, after Susumu Ohno32. Ohnologs have been shown to be prone to dominant deleterious mutations and are frequently implicated in cancer and genetic diseases32. We investigated the enrichment of ohnologs within cancer-associated genes. Ohnolog genes can be divided into three sets, strict, intermediate and relaxed, constructed using statistical confidence criteria32. We found that 44% of the cancer-associated genes reported in the COSMIC census belong to an ohnolog family (using the strict and intermediate thresholds). Considering that 20% of all known human genes are ohnologs (strict and intermediate) and that cancer-associated genes comprise less than 4% of all human genes, the enrichment of ohnologs among cancer-related genes is about two times higher than expected. If only ohnologs passing the strict threshold are considered, the fraction of cancer-related genes that are ohnologs is still high at 34%.
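As a back-of-the-envelope check of the enrichment figure, using only the percentages quoted above (no gene counts are assumed):

frac_cancer_genes_that_are_ohnologs = 0.44   # COSMIC census, strict + intermediate
frac_all_genes_that_are_ohnologs = 0.20      # all known human genes

enrichment = frac_cancer_genes_that_are_ohnologs / frac_all_genes_that_are_ohnologs
print(f"fold enrichment ~ {enrichment:.1f}x")   # ~2.2x, i.e. roughly two-fold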

When performing pathway analysis (carried out using PANTHER gene ontology release 17.0), we found that cancer-associated ohnologs show statistically significant enrichment (>tenfold) in many pathways, particularly within signalling pathways known to be cancer associated such as Jak/STAT, RAS and P53 (Supplementary Information Table 4). On the other hand, ohnologs that are not cancer associated are present in fewer signalling pathways and at enrichment (

The rest is here:
Essentiality, proteinprotein interactions and evolutionary properties are key predictors for identifying cancer ... - Nature.com

CSRWire – Island Conservation Harnesses Machine Learning Solutions From Lenovo and NVIDIA To Restore Island … – CSRwire.com

Published 04-18-24

Submitted by Lenovo

Optimizing and accelerating image processing with AI helps conservation experts safeguard seabird nesting sites on Robinson Crusoe Island.

Around the world, biodiversity is under threat. We are now in what many scientists call the sixth mass extinction, and over the last century, hundreds of species of plants and animals have been lost forever.

Island ecosystems can be particularly vulnerable to human activity. On Robinson Crusoe Island in the South Pacific Ocean, native seabirds such as the pink-footed shearwater are easy prey for an invasive species: the South American coati. Introduced to the island by humans almost a century ago, coatis are housecat-sized mammals in the same family as raccoons, which hunt for shearwaters in their nesting sites throughout the island.

Protecting island ecosystems

Leading the fight against invasive species on Robinson Crusoe Island is Island Conservation: an international non-profit organization that restores island ecosystems to benefit wildlife, oceans, and communities. For many years, Island Conservation has been working side by side with island residents to help protect threatened and endangered species.

For Island Conservation, physically removing invasive coatis from shearwater nesting sites is only part of the challenge. To track coati activity, the organization also carefully monitors shearwater nesting sites using more than 70 remote camera traps.

Processing thousands of images a month

The organization's camera traps generate a massive amount of data, around 140,000 images every month, which must be collected and analyzed for signs of coati activity. In the past, the Island Conservation team relied heavily on manual processes to perform this task. To classify 10,000 images would take a trained expert roughly eight hours of non-stop work.

What's more, manual processing diverted valuable resources away from Island Conservation's vital work in the field. The organization knew that there had to be a better way.

Realizing the potential of machine learning

David Will, Head of Innovation at Island Conservation, recalls the challenge: We started experimenting with machine learning [ML] models to accelerate image processing. We were convinced that automation was the way to go, but one of the big challenges was connectivity. Many of the ML solutions we looked at required us to move all of our photos to the cloud for processing. But on Robinson Crusoe Island, we just didn't have a reliable enough internet connection to do that.

As a temporary workaround, Island Conservation saved its camera trap images to SD cards and airmailed them to Santiago de Chile, where they could be uploaded to the cloud for processing. While airmail was the fastest and most frequent link between the island and the mainland, the service only ran once every two weeks, and there was a lag of up to three months between a camera trap capturing an image and Island Conservation receiving the analysis.

David Will comments: The time between when we detected an invasive species on a camera and when we were able to respond meant we didn't have enough time to make the kind of decisions we needed to make to prevent extinctions on the island.

Tackling infrastructure challenges

That's when Lenovo entered the frame. Funded by the Lenovo Work for Humankind initiative with a mission to use technology for good, a global team of 16 volunteers traveled to the island. Using Lenovo's smarter technology, from devices to software and IT services to servers, the volunteers were able to do their own day jobs while helping to upgrade the island's networking infrastructure, boosting its bandwidth from 1 Mbps to 200 Mbps.

Robinson Crusoe Island is plagued with harsh marine conditions and limited access. Island Conservation needed a sturdy system that brings compute to the data and allows remote management. The solution was Lenovo's ThinkEdge SE450 with NVIDIA A40 GPUs. The AI-optimized edge server provided a rugged design capable of withstanding extreme conditions while running quietly, allowing it to live comfortably in the new remote workspace. Lenovo worked with Island Conservation to tailor the server to its needs, adding additional graphics cards to increase the AI processing capability per node. We took the supercomputer capability they had in Santiago and brought that into a form factor that is much smaller, says Charles Ferland, Vice President and General Manager of Edge Computing at Lenovo.

The ThinkEdge SE450 eliminated the need for on-site technicians. Unlike a data center, which needs staff on-site, the ThinkEdge server could be monitored and serviced remotely by Lenovo team members. It proved to be the perfect solution: the ThinkEdge server allows for full remote access and management of the device, speeding up decisions from a matter of months to days.

David Will comments: Lenovo helped us run both the A40s at the same time, immensely speeding up processing, something we previously couldn't do. It has worked tremendously well, and almost all of our processing to date has been done on the ThinkEdge SE450.

Unleashing the power of automation

To automate both the detection and classification of coatis, Lenovo data scientists from the AI Center of Excellence built a custom AI script to detect and separate out the results for coatis and other species from MegaDetector, an open-source object detection model that identifies animals, people, and vehicles in camera trap images. Next, Lenovo data scientists trained an ML model on a custom dataset to give a multi-class classification result for nine species local to Robinson Crusoe Island, including shearwaters and coatis.

This two-step GPU-enabled detector-and-classifier pipeline can provide results for 24,000 camera trap images in just one minute. Previously, this would have taken a trained expert twenty hours of labor, an astonishing 99.9% time saving. The model achieved 97.5% accuracy on a test dataset while processing approximately 400 classifications per second. Harnessing the power of NVIDIA's CUDA-enabled GPUs allowed us to achieve a 160x speedup on MegaDetector compared to the previous implementation.
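To make the two-step pipeline concrete, here is a rough Python sketch of the general detect-then-classify pattern. It is not Lenovo's script: torchvision's Faster R-CNN stands in for MegaDetector, an ImageNet ResNet stands in for the custom nine-species classifier, and the image path is hypothetical.

import torch
from torchvision.models.detection import fasterrcnn_resnet50_fpn, FasterRCNN_ResNet50_FPN_Weights
from torchvision.models import resnet50, ResNet50_Weights
from torchvision.io import read_image
from torchvision.transforms.functional import resized_crop, convert_image_dtype

det_weights = FasterRCNN_ResNet50_FPN_Weights.DEFAULT
detector = fasterrcnn_resnet50_fpn(weights=det_weights).eval()   # stand-in for MegaDetector
cls_weights = ResNet50_Weights.DEFAULT
classifier = resnet50(weights=cls_weights).eval()                # stand-in for the species classifier

img = read_image("camera_trap/IMG_0001.jpg")          # hypothetical camera-trap image
with torch.no_grad():
    # Step 1: detect candidate animals and keep confident boxes.
    dets = detector([convert_image_dtype(img, torch.float)])[0]
    keep = dets["scores"] > 0.5
    for box in dets["boxes"][keep]:
        x0, y0, x1, y1 = box.int().tolist()
        # Step 2: crop each detection and classify the species.
        crop = resized_crop(img, y0, x0, y1 - y0, x1 - x0, [224, 224])
        logits = classifier(cls_weights.transforms()(crop).unsqueeze(0))
        print("predicted class id:", logits.argmax(1).item())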

Sachin Gopal Wani, AI Data Scientist at Lenovo, comments: Delivering a solution that is easily interpretable by the user is a crucial part of our AI leadership. I made a custom script that generates outputs compatible with TimeLapse, a software application the conservationists use worldwide to visualize their results. This enabled much faster visualization for a non-technical end user without storing additional images. Our solution allows the results to load with the original images overlaid with classification results, saving terabytes of disk space.

With these ML capabilities, Island Conservation can filter out images that do not contain invasive species with a high degree of certainty. Using its newly upgraded internet connection, the organization can upload images of coati activity to the cloud, where volunteers on the mainland evaluate the images and send recommendations to the island rapidly.

Using ML, we can expedite image processing, get results in minutes, and cut strategic decision time from three months to a matter of weeks, says David Will. This shorter response time means more birds protected from direct predation and faster population recovery.

Looking to the future

Looking ahead, Island Conservation plans to continue its collaboration with the Lenovo AI Center of Excellence to develop Gen AI to detect other types of invasive species, including another big threat to native fauna: rodents.

With Lenovo's support, we're now seeing how much easier it is to train our models to detect other invasive species on Robinson Crusoe Island, says David Will. Recently, I set up a test environment to detect a new species. After training the model for just seven hours, we recorded 98% detection accuracy, an outstanding result.

As the project scope expands, Island Conservation plans to use more Lenovo ThinkEdge SE450 devices with NVIDIA A40 GPUs for new projects across other islands. Lenovo's ThinkEdge portfolio has been optimized for Edge AI inferencing, offering outstanding performance and ruggedization to securely process the data where it's created.

Backed by Lenovo and NVIDIA technology, Island Conservation is in a stronger position than ever to protect native species from invasive threats.

David Will says: In many of our projects, we see that more than 30% of the total project cost is spent trying to remove the last 1% of invasives and confirm their absence. With Lenovo, we can make decisions based on hard data, not gut feeling, which means Island Conservation takes on new projects sooner.

Healing our oceans

Island Conservation's work with Lenovo on Robinson Crusoe Island will serve as a blueprint for future activities. The team plans to repurpose the AI application to detect different invasive species on islands around the world, from the Caribbean to the South and West Pacific, the Central Indian Ocean, and the Eastern Tropical Pacific, with the aim of saving endangered species, increasing biodiversity, and increasing climate resilience.

In fact, Island Conservation, Re:wild, and Scripps Institution of Oceanography recently launched the Island-Ocean Connection Challenge to bring NGOs, governments, funders, island communities, and individuals together to begin holistically restoring 40 globally significant island-ocean ecosystems by 2030.

Everything is interconnected in what is known as the land-and-sea cycle, says David Will. Healthy oceans depend on healthy islands. Island and marine ecosystem elements cycle into one another, sharing nutrients vital to the plants and animals within them. Indigenous cultures have managed resources this way for centuries. Climate change, ocean degradation, invasive species, and biodiversity loss are causing entire land-sea ecosystems to collapse, and island communities are disproportionately impacted.

The Island-Ocean Connection Challenge marks the dawn of a new era of conservation that breaks down artificial silos and is focused on holistic restoration.

David Will concludes: Our collective effort, supported by Lenovo and NVIDIA, is helping to bridge the digital divide on island communities, so they can harness cutting-edge technology to help restore, rewild, and protect their ecosystems, and don't get further left behind by AI advances.

Get involved today at http://www.jointheiocc.org.


Lenovo is a US$62 billion revenue global technology powerhouse, ranked #217 in the Fortune Global 500, employing 77,000 people around the world, and serving millions of customers every day in 180 markets. Focused on a bold vision to deliver Smarter Technology for All, Lenovo has built on its success as the world's largest PC company by further expanding into growth areas that fuel the advancement of New IT technologies (client, edge, cloud, network, and intelligence) including server, storage, mobile, software, solutions, and services. This transformation, together with Lenovo's world-changing innovation, is building a more inclusive, trustworthy, and smarter future for everyone, everywhere. Lenovo is listed on the Hong Kong stock exchange under Lenovo Group Limited (HKSE: 992) (ADR: LNVGY). To find out more visit https://www.lenovo.com, and read about the latest news via our StoryHub.

More from Lenovo

Read more:
CSRWire - Island Conservation Harnesses Machine Learning Solutions From Lenovo and NVIDIA To Restore Island ... - CSRwire.com

Simplifying deep learning to enhance accessibility of large-scale 3D brain imaging analysis – Nature.com

Publisher's note: Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

This is a summary of: Kaltenecker, D. et al. Virtual reality-empowered deep-learning analysis of brain cells. Nat. Methods https://doi.org/10.1038/s41592-024-02245-2 (2024).

More:
Simplifying deep learning to enhance accessibility of large-scale 3D brain imaging analysis - Nature.com

The Future of ML Development Services: Trends and Predictions – FinSMEs

Enter the world of ML development services, a field in constant change driven by technological advances and data-driven innovation.

In recent years, ML has become a groundbreaking technology that has revolutionized various sectors such as healthcare, finance and transportation, among others. The demand for ML development services has been growing at an extremely fast pace due to the ongoing digitization of companies, and it does not seem likely to slow any time soon. But what is the future of machine learning in this fast-growing field? In this post, we will analyze the newest trends and make some forecasts about how ML development companies may change our world in the coming years. Prepare for a journey into today's technologies and their future possibilities.

First, let us briefly consider why machine learning is gaining popularity in today's digital reality before turning to trends and forecasts. Its usefulness can be credited to its unmatched capacity to process vast amounts of data and make inferences or decisions without explicitly programmed rules. The advent of big data brought enormous opportunities and challenges, and machine learning (ML) sits high on that list. It has already disrupted sectors such as healthcare and finance, especially where artificial intelligence is applied, and its potential applications extend to almost every other area, displaying the broad influence of this transformative technology.

Recently, there has been a significant increase in cloud-based machine learning capabilities. Most vendors, enterprises and individuals will find these platforms to be a cost-effective means of deploying ML-based applications. Cloud-based ML development solutions have three main benefits: scalability, availability and automation. They let developers apply complex ML models without being distracted by infrastructure details. In addition, ML cloud platforms contain many tools, APIs and pre-built models that speed up development. Industry-wide adoption of ML-oriented products has driven the development of cloud-based platforms on which machine learning solutions can be built, and as the technology matures we can expect these platforms to become more sophisticated and to give developers richer options for building AI.

Alongside these great leaps in machine learning, there has been increasing conversation about one field in particular: interpretability. Producing outputs is not enough for AI; developers and users must understand how those results were arrived at and which factors were involved. This is especially important in areas such as healthcare and finance, where decisions made by AI models can have significant consequences. As a result, there is an elevated need for models that are transparent and interpretable, which is key to making artificial intelligence reliable and accountable.

As technology continues to evolve at an exponential rate, businesses increasingly need ML to integrate with other growing technologies. AI solutions already support scalable deployments across machines in remote locations, as seen in the growing popularity of the Industrial Internet in manufacturing and distribution. Integrating these technologies makes it possible to develop new competencies, improve decision making and enhance customer service. In the modern market, these emerging technologies can no longer be treated as standalone elements; they are constituents of the larger technology stack within which they operate. A sound integration strategy, whether built in-house or adopted from existing software, ultimately benefits the business by making operations much easier.

https://www.thewatchtower.com/blogs_on/supervised-machine-learning-its-advantages

Increased demand for personalized and customized ML solutions: With more companies embracing machine learning to gain an upper hand, the demand for specially tailored solutions will grow. This will require machine learning development services like N-ix.com to customize their solutions according to the specific needs and preferences of each client.

Advancements in natural language processing (NLP): NLP has certainly come a long way, and it continues to make machines increasingly effective at handling language. With further advancements ahead, NLP will evolve to even higher levels, offering more advanced conversational AI and text analysis in the future.

Continued focus on ethics: As AI technologies continue to blend into different sectors of human life, there will be increased interest in the ethical principles governing the development and deployment of these systems. Companies providing these services will be expected to model their operations on strict ethical practices and emerging standards and guidelines in order to establish trust with clients and regulators.

In conclusion, the possibilities for machine learning development services are vast. Technological progress and wider adoption of AI solutions will keep the field actively progressing, turning ML into a sphere with few boundaries for growth and innovation. Machine learning is transforming the world right under our noses, which is thrilling for business owners and developers alike.

The landscape of ML development services has changed tremendously. With the emergence of big data and increasing demand for intelligent software, developers have had to change direction quickly. ML algorithms are now developed for application in sectors such as healthcare, finance and many other areas. As firms increasingly embrace approaches geared towards supporting the full production value chain and improving client relations, this trend is bound to stay with us. It is also clear that, as demand for ML development services rises, an increasing number of innovative solutions will emerge to offer businesses a competitive edge. While much about ML remains unknown, there is no denying that these technologies have the potential to reshape our lives and business operations.

Go here to see the original:
The Future of ML Development Services: Trends and Predictions - FinSMEs

Investigation of the effectiveness of a classification method based on improved DAE feature extraction for hepatitis C … – Nature.com

In this subsection, we evaluate the feature extraction effect of the IDAE by conducting experiments on the Hepatitis C dataset with different configurations to test its generalization ability. We would like to investigate the following three questions:

How effective is IDAE in classifying the characteristics of hepatitis C?

If the depth of the neural network is increased, can IDAE mitigate the gradient explosion or gradient vanishing problem while improving the classification of hepatitis C disease?

Does an IDAE of the same depth tend to converge more easily than other encoders on the hepatitis C dataset?

Firstly, from a public health perspective, hepatitis C (HCV) is a global problem: chronic infection may lead to serious consequences such as cirrhosis and liver cancer, and the disease is highly insidious, leaving a large number of cases undiagnosed. It is worth noting that despite the wide application of traditional machine learning and deep learning algorithms in healthcare, especially in research on acute conditions such as cancer, there is a significant lack of in-depth exploration of chronic infectious diseases such as hepatitis C. In addition, the complex biological attributes of the hepatitis C virus and the significant individual differences among patients together give rise to multilevel nonlinear correlations among features. Therefore, applying deep learning methods to the hepatitis C dataset is not only an important way to validate the efficacy of such algorithms, but also an urgent research direction that needs to be pursued to fill existing research gaps.

The Helmholtz Center for Infection Research, the Institute of Clinical Chemistry at the Medical University of Hannover, and other research organizations provided data on people with hepatitis C, which was used to compile the information in this article. The collection includes demographic data, such as age, as well as test results for blood donors and hepatitis C patients. Examining the dataset shows that the primary features are the quantities of different blood components and liver function values, and that the only categorical feature in the dataset is gender. Table 1 shows the precise definition of these fields.

This article investigates the classification problem. Table 2 lists the description and sample size of the five main classification labels. To address the effect of class imbalance on classification performance, the training data are first oversampled with SMOTE32 and the model is then trained on the SMOTE-sampled data, with a sample size of 400 for each class.
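A minimal sketch of this balancing step, assuming the imbalanced-learn SMOTE implementation and hypothetical feature/label arrays (the per-class target of 400 follows the text):

from collections import Counter
from imblearn.over_sampling import SMOTE
from sklearn.datasets import make_classification

# Hypothetical stand-in for the hepatitis C features and five-class labels.
X, y = make_classification(n_samples=500, n_classes=5, n_informative=8,
                           weights=[0.6, 0.2, 0.1, 0.06, 0.04], random_state=0)

smote = SMOTE(sampling_strategy={c: 400 for c in set(y)}, random_state=0)
X_bal, y_bal = smote.fit_resample(X, y)
print(Counter(y_bal))   # every class now has 400 samples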

The aim of this paper is to investigate whether IDAE can extract more representative and robust features, and we have chosen baseline models that include both traditional machine learning algorithms and various types of autoencoders, described in more detail below:

SVM: support vector machines achieve optimal classification by constructing maximum-margin hyperplanes and use kernel functions to handle nonlinear problems, seeking decision boundaries that maximize the margin on the training data.

KNN: the K-nearest neighbors algorithm determines the class (or predicted value) of a new sample from its K nearest neighbors, found by calculating its distance to each sample in the training set.

RF: random forests utilize random feature selection and Bootstrap sampling techniques to construct and combine the prediction results of multiple decision trees to effectively handle classification and regression problems.

AE: the autoencoder is a neural network consisting of an encoder and a decoder that learns a compact, low-dimensional feature representation of the data by reconstructing the training data, and is mainly used for dimensionality reduction, feature extraction and generative learning tasks.

DAE: the denoising autoencoder is an autoencoder variant that excels at extracting features from noisy inputs; by reconstructing noise-corrupted inputs it reveals the underlying structure of the data, learns higher-level features and improves network robustness, and these robust features benefit downstream tasks and improve the model's generalization ability.

SDAE: stacked denoising autoencoder is a multilayer neural network structure consisting of multiple noise-reducing autoencoder layers connected in series, each of which applies noise to the input data during training and learns to reconstruct the undisturbed original features from the noisy data, thus extracting a more abstract and robust feature representation layer by layer.

DIUDA: the main feature of Dual Input Unsupervised Denoising Autoencoder is that it receives two different types of input data at the same time, and further enhances the generalization ability of the model and the understanding of the intrinsic structure of the data by fusing the two types of inputs for the joint learning and extraction of the feature representation.

In this paper, 80% of the hepatitis C dataset is used for model training and the remaining 20% is used to test the model. Since the samples are unbalanced, negative samples are resampled to ensure balance. For all autoencoder-based methods, the learning rate is initialized to 0.001; the encoder and decoder each have 3 layers, with 10, 8 and 5 neurons in the encoder and 5, 8 and 10 neurons in the decoder; the MLP has 3 layers with 10, 8 and 5 neurons, respectively. All models are trained until convergence, with a maximum of 200 training epochs. The machine learning methods all use the sklearn library with the default hyperparameters of the corresponding algorithms.
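One plausible reading of these layer sizes, written as a PyTorch sketch (not the authors' code); the input width of 10 features is an assumption chosen so that the decoder's final 10-neuron layer reconstructs the input, and the 5-neuron MLP output matches the five class labels.

import torch
import torch.nn as nn

n_features = 10          # assumed input width
encoder = nn.Sequential(nn.Linear(n_features, 10), nn.ReLU(),
                        nn.Linear(10, 8), nn.ReLU(),
                        nn.Linear(8, 5), nn.ReLU())
decoder = nn.Sequential(nn.Linear(5, 5), nn.ReLU(),
                        nn.Linear(5, 8), nn.ReLU(),
                        nn.Linear(8, 10))
mlp_head = nn.Sequential(nn.Linear(5, 10), nn.ReLU(),
                         nn.Linear(10, 8), nn.ReLU(),
                         nn.Linear(8, 5))      # 5 output classes

x = torch.randn(4, n_features)                 # hypothetical mini-batch
noisy = x + 0.1 * torch.randn_like(x)          # denoising-AE style corruption
recon = decoder(encoder(noisy))                # reconstruction target: clean x
logits = mlp_head(encoder(x))                  # downstream classification head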

To answer the first question, we classified the hepatitis C data after feature extraction using the improved denoising autoencoder (IDAE) and compared it against traditional machine learning algorithms (SVM, KNN and random forest) and against AE, DAE, SDAE and DIUDA as baseline models. Each experiment was conducted 3 times to mitigate randomness. The average results for each metric are shown in Table 3. From the table, we can make the following observations.

The left figure shows the 3D visualisation of t-SNE with features extracted by DAE, and the right figure shows the 3D visualisation of t-SNE with features extracted by IDAE.

Firstly, the IDAE shows significant improvement on the hepatitis C classification task compared to the machine learning algorithms, outperforming almost all machine learning baseline models on all evaluation metrics. These results validate the effectiveness of our proposed improved denoising autoencoder on the hepatitis C dataset. Secondly, IDAE achieves higher accuracy than the traditional autoencoders (AE, DAE, SDAE and DIUDA), with improvements of 0.011, 0.013, 0.010 and 0.007, respectively; for the other metrics, the AUC-ROC values improve by 0.11, 0.10, 0.06 and 0.04, and the F1 scores by 0.13, 0.11, 0.042 and 0.032. From Fig. 5, it can be seen that the IDAE shows better clustering and class boundary separation in the 3D feature representation. Both the experimental results and the visual analysis verify the advantages of the improved model in classification performance.

Finally, SVM and RF outperform KNN on the hepatitis C dataset because SVM can handle complex nonlinear relationships through radial basis function (RBF) kernels, and the ensemble algorithm can combine multiple weak learners to indirectly achieve nonlinear classification. KNN, on the other hand, builds decision boundaries from simple distance measures such as Euclidean distance, which cannot effectively capture the structure of complex nonlinear data distributions, leading to poorer classification results.
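A quick, self-contained way to compare the three model families discussed here on a toy nonlinear dataset (illustrative only; this is not the paper's experiment):

from sklearn.datasets import make_moons
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

X, y = make_moons(n_samples=500, noise=0.35, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

for name, clf in [("SVM (RBF)", SVC(kernel="rbf")),
                  ("Random forest", RandomForestClassifier(random_state=0)),
                  ("KNN", KNeighborsClassifier())]:
    print(name, clf.fit(X_tr, y_tr).score(X_te, y_te))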

In summary, these results demonstrate the superiority of the improved denoising autoencoder in extracting features from hepatitis C data. The machine learning results also indirectly confirm that the hepatitis C features may indeed have complex nonlinear relationships.

To answer the second question, we analyze in this subsection the performance variation of different autoencoder algorithms at different depths. To perform the experiments in a constrained setting, we used a fixed learning rate of 0.001. The number of neurons in the encoder and decoder was kept constant and the number of layers in the encoder and decoder was set to {1, 2, 3, 4, 5, 6}. Each experiment was performed 3 times and the average results are shown in Fig. 6. We make the following observations:

Effects of various types of autoencoders at different depths.

Under different layer configurations, the IDAE proposed in this study shows significant advantages over the traditional AE, DAE, SDAE and DIUDA in terms of both feature extraction and classification performance. The experimental data show that the deeper the network, the greater the performance improvement: when the encoder reaches 6 layers, the accuracy improvement of IDAE over these baselines is 0.112, 0.103, 0.041 and 0.021, the AUC-ROC improvement is 0.062, 0.042, 0.034 and 0.034, and the F1 improvement is 0.054, 0.051, 0.034 and 0.028, in the same order.

It is worth noting that conventional autoencoders often encounter overfitting and vanishing gradients when the network is deepened, resulting in performance on the hepatitis C classification task that plateaus or even declines slightly. This is largely attributable to the excessive complexity and vanishing-gradient problems caused by an overly deep network structure, which prevent the model from finding the optimal solution. The improved DAE introduces residual connections, which optimise the information flow between layers and address the vanishing-gradient problem by adding directly connected paths, and it balances model complexity and generalisation ability by flexibly expanding the depth and width of the network. Experimental results show that the improved DAE further improves classification performance as network depth is increased appropriately and alleviates overfitting at the same depth, while also outperforming the other autoencoders on the various metrics.
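The following PyTorch sketch illustrates the residual idea described above in its simplest form, a skip connection around each block so gradients can reach earlier layers directly; it is a generic illustration, not the authors' IDAE architecture.

import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(),
                                  nn.Linear(dim, dim))
    def forward(self, x):
        return torch.relu(x + self.body(x))   # skip connection: x + F(x)

# A deep encoder built from residual blocks (width and depth are illustrative).
encoder = nn.Sequential(nn.Linear(10, 8), *[ResidualBlock(8) for _ in range(6)])
x = torch.randn(4, 10)
print(encoder(x).shape)    # torch.Size([4, 8])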

To answer the third question, in this subsection we analyse the convergence speed of the different autoencoder algorithms. The experiments set the number of layers in the encoder and decoder to {3, 6}, with the same number of neurons in each layer, and each experiment was performed three times; the average results are shown in Fig. 7. We observe the following: the convergence speed of the IDAE is again better than that of the other autoencoders at different depths, and the contrast is more obvious at deeper layers. In conventional autoencoders, the chain rule leads to vanishing gradients and overfitting, so their convergence slows as depth increases; the IDAE, by contrast, adds direct paths between layers through techniques such as residual connections, allowing the signal to bypass the nonlinear transforms of some layers and propagate directly to later layers. This design effectively mitigates vanishing gradients as network depth increases, allowing the network to maintain strong gradient flow during training and a fast convergence speed even at greater depths. In summary, when dealing with complex, high-dimensional data such as hepatitis C data, the IDAE is able to learn and extract features better as depth increases, which improves training efficiency and overall performance.

Comparison of model convergence speed for different layers of autoencoders.

Link:
Investigation of the effectiveness of a classification method based on improved DAE feature extraction for hepatitis C ... - Nature.com

Artificial Intelligence in Regtech Global Market Report 2024 – Market to reach $6.64 billion by 2028 | bobsguide – Bobsguide

Read more here:
Artificial Intelligence in Regtech Global Market Report 2024 - Market to reach $6.64 billion by 2028 | bobsguide - Bobsguide