Category Archives: Machine Learning

How to Use Machine Learning to Scale HR Processes – The HR Director Magazine

Technology is constantly changing how industries operate. Adding AI frees up professionals' time for other essential work. Here is how machine learning and artificial intelligence in human resource management are scaling processes.

Machine learning (ML) is a part of AI that uses algorithms and data to help a computer learn and predict outcomes. ML scans large quantities of data, becoming more intelligent and making predictions with better accuracy as it learns.

It can bring many advantages to different industries, allowing them to operate more effectively. However, as with any technology or tool, proper research is necessary to ensure the organization has weighed all the considerations, such as managing features and training appropriately.

When HR teams add ML and AI to daily operations, they can assign more time to tasks that carry significant weight, thus improving HR processes and efficiency. In addition, they are better equipped to help the organization reach its company goals. Here are a few areas where machine learning and artificial intelligence could streamline HR tasks as adoption grows.

AI technology could help find and hire new employees throughout the recruiting process. Talent acquisition can take a lot of time and use valuable company resources. Utilizing machine learning can free up some of the time HR spends recruiting and allow them to prioritize other vital tasks.

This technology can help create job listings and descriptions that attract high-performing candidates. It can also scan all the applicants for a role to determine the best fit for the business. This allows for a more efficient approach to finding the right applicant.

In addition to using machine learning in HR to shortlist the best candidates, chatbots could help with the hiring process. They can answer applicants' questions about the job and schedule interviews. These HR chatbots assist the candidate throughout the hiring process, making it easier for HR teams.

Machine learning and AI technology provide a smoother and more holistic onboarding process. Many time-consuming tasks are automated when ML is in the mix. For example, chatbots could automatically request that the candidate fill in specific documents and help HR teams complete tax forms or other paperwork. This reduces the chance of mistakes due to human error and helps the process run more efficiently, saving time.

One of the best things machine learning can do is create a more personalized experience for each employee. By analyzing the new hire's previous experience, ML can create employee onboarding programs designed explicitly for them. This helps the new worker adjust to their position more quickly.

Machine learning can also analyze the employee's performance to provide feedback, helping them grow in their role more effectively. Based on the inputs of new employees, AI can create all the necessary accounts and profiles they will require for their position.

Chatbots can answer questions and assist if the employee has work-related difficulties. For example, AI can aid the worker with getting set up on the company's network if they are having trouble.

HR teams can encourage hires to provide feedback to AI systems that can help them create more efficient processes. Workers can say what they are struggling with and what they feel needs improvement. Based on these inputs, machine learning can aid with adjusting the process to make onboarding even more straightforward for future hires. It's worth mentioning that organizations with a robust onboarding process increase new-hire retention by 82% and improve productivity by 70%.

In terms of training employees, ML and AI can play a huge role in improving the process. Machine learning can scan details about the worker's role and performance, and provide feedback on areas of improvement. This allows the employee to have a more personal experience while also knowing what critical points they should focus on next.

Another advantage of machine learning in HR is that it allows management to identify other employees who can benefit from training courses. For example, ML can analyze training statistics to determine if any staff members might have gaps in their knowledge. This way, they can get up to speed, making employees feel more comfortable and proficient in their roles.

ML scanning through training analytics provides other advantages as well. This AI technology can also identify if workers are more suited to different positions, allowing HR to make the necessary adjustments.

In other words, ML analysis of employee skills, performance, experience and training analytics opens up more opportunities for the worker. This means staff members have a more clearly laid-out career path to follow.

Artificial intelligence in human resource management has many associated benefits that aid with improving HR processes. Here are the three top benefits of artificial intelligence and machine learning in daily human resource management operations.

Automating repetitive and time-consuming tasks is one of the most significant advantages of AI in the workplace. With this technology, operations such as scheduling interviews, tracking employee attendance, filling out worker-related paperwork, administering benefits and processing payroll happen automatically.

With many of these tasks no longer on the daily to-do list for management, HR can focus their time on other valuable operations. In addition, AI allows HR teams to make more data-driven decisions that help push the organization forward.

One issue that plagues many organizations is employee retention, a struggle that looks like it will be around for a while. In 2021, more than 47 million people left their jobs, and 2022 was even worse, with over 50 million workers quitting.

However, ML can help with lowering the employee turnover rate. This happens by providing employees with a smooth onboarding process, mapping their career paths with new opportunities and creating a more personal experience.

In addition, machine learning can identify the employees with the highest attrition risk (the workers most likely to leave their jobs), which allows businesses to prepare accordingly. AI technology can calculate the staff attrition rate by analyzing worker data, quitting predictors and employee behavior.
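As a purely illustrative sketch of how such an attrition-risk model might be built, the Python example below fits a logistic regression to invented employee data with scikit-learn; every column name and number here is hypothetical.

```python
# Hypothetical sketch: scoring attrition risk with scikit-learn.
# Column names and data are invented for illustration only.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

df = pd.DataFrame({
    "tenure_years":   [0.5, 1, 2, 3, 4, 6, 8, 10, 1, 2],
    "engagement":     [2, 3, 4, 2, 5, 4, 5, 5, 1, 2],
    "overtime_hours": [20, 15, 5, 18, 2, 4, 0, 1, 25, 22],
    "quit":           [1, 1, 0, 1, 0, 0, 0, 0, 1, 1],  # 1 = left the company
})

X_train, X_test, y_train, y_test = train_test_split(
    df.drop(columns="quit"), df["quit"], test_size=0.3, random_state=0)

model = LogisticRegression().fit(X_train, y_train)
# Probability that each employee in the hold-out set will leave
print(model.predict_proba(X_test)[:, 1])
```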

Machine learning can create job listings that specifically target the highest-performing candidates. Also, it can determine which applicant will fit best with the company and schedule interviews. Chatbots also assist in answering important candidate questions.

All of this occurs automatically, which saves time for management teams. Machine learning in HR assists teams in guiding the candidate through hiring procedures. With many of the tasks automated in the hiring process, there is an overall increase in efficiency and effectiveness.

When looking at all these advantages, it is clear machine learning plays a vital role in improving HR processes. When organizations incorporate this technology, management teams are better equipped for daily operations. All this combined with a holistic approach allows HR employees to make more effective decisions that propel the organization forward.

Zac Amos is a tech writer with a special interest in HR technology, automation, and cybersecurity. He is the Features Editor at ReHack and a regular contributor at RecruitingDaily, ISAGCA, and DZone.

Read the rest here:
How to Use Machine Learning to Scale HR Processes - The HR Director Magazine

Best of artificial intelligence, machine learning will be deployed at Air India: Chandrasekaran – The Economic Times

Tata Sons Chairman N Chandrasekaran on Thursday said the best of artificial intelligence and machine learning will be deployed at Air India and emphasised that the airline is not just another business for the group but a passion and a national mission. As Tata Group steers the transformation of loss-making Air India since taking control in January last year, Chandrasekaran said that most of the time he receives "caring criticism" about the airline, which further strengthens the commitment. Speaking at an event in the national capital where Air India's new brand identity and aircraft livery were unveiled, he said the focus is on upgrading all human resources aspects of the airline. According to him, a lot of hard work is needed but the path is clear for the airline, and the best of artificial intelligence and machine learning will be deployed there. "We are focusing on upgrading all human resources aspects of the airline. Our fleet requires a lot of work. While we have ordered one of the largest fleet orders, it will take time.

"In the meantime, we have to refurbish our current fleet at an acceptable level. Our aim is to have the best of machine learning and the best of AI in Air India than any other airline," he said.

The new logo, drawing on the historically used window motif, the peak of the golden window, signifies limitless possibilities, progressiveness and confidence, he emphasised.

Tata Group took control of Air India in January 2022.

Jehangir Ratanji Dadabhoy (JRD) Tata founded the airline in 1932 and named it Tata Airlines. In 1946, the aviation division of Tata Sons was listed as Air India, and in 1948, Air India International was launched with flights to Europe.

In 1953, Air India was nationalised and last year, the airline was taken over by the Tata Group from the government.

Read more from the original source:
Best of artificial intelligence, machine learning will be deployed at Air India: Chandrasekaran - The Economic Times

The Future of Autonomous Vehicles: Advancements in AI and … – Fagen wasanni

The field of autonomous driving research is at a pivotal stage, with groundbreaking advancements in artificial intelligence (AI) and machine learning taking it into uncharted territories. Researchers have made significant progress in overcoming challenges related to safety, decision-making, and environmental adaptability.

Collaborations between academia, industry, and regulatory bodies are shaping the future of transportation. These partnerships promise a revolutionary era of self-driving vehicles that will redefine commuting and how we interact with our environment.

Deploying cutting-edge technologies on devices like autonomous vehicles is no easy task. Most state-of-the-art technology, such as ChatGPT, is built on top of the Transformer architecture in machine learning. However, slow inference times with machine learning transformers have been a challenge in the fields of computer vision (CV) and natural language processing (NLP).

Transformers, especially large-scale models like BERT, GPT-3, and their variants, have achieved remarkable performance in various NLP tasks. Their computational complexity and memory requirements, however, limit their real-world applicability in time-sensitive or interactive applications.

Addressing the challenge of slow inference time is crucial for making transformer-based models more practical and accessible for real-world applications, including autonomous vehicles and social robots.
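Common ways to attack this latency problem include quantization, pruning, and distillation. As a rough, hypothetical illustration (not the approach of the research described below), this PyTorch sketch applies post-training dynamic quantization to a small Transformer encoder and compares single-run timings:

```python
# Hypothetical latency-reduction sketch: post-training dynamic
# quantization in PyTorch. Timings are rough, single-run numbers.
import time
import torch
import torch.nn as nn

encoder = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model=256, nhead=8, batch_first=True),
    num_layers=4,
).eval()

# Replace nn.Linear weights with int8; activations remain float.
quantized = torch.quantization.quantize_dynamic(
    encoder, {nn.Linear}, dtype=torch.qint8)

x = torch.randn(1, 128, 256)  # (batch, sequence length, embedding dim)
with torch.no_grad():
    for name, model in [("fp32", encoder), ("int8", quantized)]:
        start = time.perf_counter()
        model(x)
        print(f"{name}: {time.perf_counter() - start:.4f} s")
```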

Recent research, led by Mr. Apoorv Singh, a Machine Learning Scientist at an autonomous vehicle company, has enabled the deployment of computation-hungry transformer models in real-time on autonomous cars. The research focuses on scaling down the inference time of Transformers-based computer vision models while maintaining detection performance.

These advancements give confidence that autonomous vehicles are rapidly progressing and will soon become a reality, bringing forth a transformative era of self-driving transportation. The future for autonomous vehicles seems brighter than ever, promising to revolutionize transportation and reshape our interaction with the world.

Read the original here:
The Future of Autonomous Vehicles: Advancements in AI and ... - Fagen wasanni

AI and the law: the challenges of making sure machine learning … – Scottish Business News

By Sinead Machin, Senior Associate at Complete Clarity Solicitors and Simplicity Legal

EVERY advance in the dissemination of human knowledge, from the printing press to newspapers, television and the internet, has initially been seen as much as a threat as an opportunity. But few new systems have been greeted with such suspicion as AI.

Largely because of fears of machine superiority and loss of human jobs and functions to Artificial Intelligence, debate about its impact on current and future society has verged on the dramatic and, in some cases, the hysterical.

But one thing is beyond dispute: AI is here, and it is here to stay. And the only rational response is to learn to live with it, understand its capabilities and its limitations, and think very clearly about checks and balances to ensure a net benefit rather than an irreversible harm.

The impressive power of the technology, and particularly tools such as ChatGPT, has been exercising the minds of the legal profession around the world as it gets to grips with the practical, economic and ethical implications of AI.

There is no doubt that AI will become, if it has not already, an indispensable tool for coping with the immense amount of data which lawyers have to handle in complex cases, and some of the mundane processes which underpin the legal infrastructure.

Certainly, in high-volume practices, machine learning and data analytics can be hugely beneficial in identifying and increasing the number of leads and prospects, and SEO teams are seeing significant opportunities for business growth.

AI comes into its own in the field of case management, with its limitless capacity for examining massive volumes of data, finding patterns, and making predictions or choices using algorithms and statistical models.

This is creating much quicker and more streamlined case management, which clients are already coming to expect. In fact, it may soon become a recognised basis for complaint if the speed and efficiencies which are now possible are not achieved.

More troubling discussion is taking place around whether AI could carry out some of the tasks traditionally performed by lawyers, such as researching, preparing and presenting cases.

The pitfalls of this line of thinking were amply illustrated recently by the story of New York attorney Steven Schwartz, who used ChatGPT to write a legal brief. The chatbot not only completely fabricated the case law which he cited in court but reassured him repeatedly that the information was accurate. The judge in the case was singularly unimpressed.

Lawyers must be aware of the risks of using AI bots in terms of client confidentiality. If they fed client-specific information into a bot such as ChatGPT, it would become the property of OpenAI, the bot developer, and could be disclosed in other cases.

Scots law, of course, has its own unique characteristics, of which AI bots at this stage would likely be unaware, leading them to rely on English and Welsh cases and precedents which would have limited relevance.

However, it is learning fast. GPT-3.5 scored in the lowest 10% on the US bar exam, but the next version, GPT-4, scored in the top 10%. It is conceivable that law-specific bots will be developed to concentrate solely on particular areas of expertise.

Master of the Rolls and Head of Civil Justice in England and Wales Sir Geoffrey Vos said recently (June 2023) that public trust may limit the use of AI in legal decisions, pointing to the emotional and human elements involved in areas such as family and criminal law.

He warned that while AI has the potential to be a valuable tool for predicting case outcomes and making informed decisions, it was not infallible and should be used in conjunction with human judgement and expertise.

He pointed out that ChatGPT itself said: "Ultimately, legal decision-making involves a range of factors beyond just predicting the outcome of a case, including strategic and ethical considerations and client goals."

See the article here:
AI and the law: the challenges of making sure machine learning ... - Scottish Business News

Using machine learning to predict surgical site infection | IDR – Dove Medical Press

Introduction

Surgical site infection (SSI)1 frequently develops postoperatively; this condition can be devastating for patients and poses a serious challenge for surgeons. Many factors are responsible for the infection of surgical incisions, including smoking status, diabetes, advanced age, hypoproteinemia, and internal fixation.2,3 In spinal surgery,4 SSI is associated with prominent morbidity, healthcare expenses owing to readmission and reoperation, and poor prognosis.5,6 Artificial intelligence is widely used in medical research, and the predictive effectiveness of machine learning is widely recognized. After achieving great success in various prediction tasks, machine learning has attracted the attention of clinicians and medical researchers.7,8 In our previous studies, we constructed machine learning prediction models that demonstrated good prediction ability.9,10

In this study, a machine learning model and a web-based prediction tool were developed to predict SSI in patients undergoing lumbar spinal surgery. Various machine learning algorithms were compared to identify the most effective approach.11,12 As a powerful data processing and calculation method, machine learning has considerable reliability in screening variables.13,14 However, current machine learning-based prediction models mostly compare the effectiveness of different algorithms to select the best one.

Therefore, we aimed to select the ideal clinical variables using various machine learning algorithms and their intersection to build an ideal prediction model and perform internal verification. This prediction model might guide clinical diagnosis and prevention.

We obtained ethical approval from the Institutional Review Board of our institute (Approval No. 2022-E398-01). This retrospective study adheres to the principles outlined in the Declaration of Helsinki. A total of 4019 patients who underwent lumbar internal fixation surgery at our institute from June 2012 to February 2021 were included in the study. Clinical data such as age, sex, diabetes, Modic changes, anesthesia score, operation status, and serological and imaging indexes of patients were collected for statistical analysis. Operation status included the following parameters: use of antibiotics during the operation, operation time, anesthesia time, vertebral body number spanned, screw number, and intraoperative blood transfusion. The serological parameters were glucose, WBC, hemoglobin (Hb), PLT, ESR, and albumin. Imaging indexes were skin-to-lamina thickness and sebum thickness (sebum thickness was measured by CT at three distinct locations of the lumbar surgical incision, and the average of these measurements was used). Patients with incomplete information or those who did not meet the diagnostic criteria15 were excluded. Finally, 54 and 1273 patients were grouped into the SSI and normal lumbar fixation groups, respectively (Figure 1). Through random grouping, the data were classified into the test and verification groups (Table 1).

Table 1 The Distribution of Each Variable That Meets the Screening Condition

Figure 1 Data filtering and grouping.

R software (version 4.2.1; https://www.R-project.org) was used for statistical analyses. First, the filtered data were randomized into the test and verification groups. Second, in the test group, specific variables were screened via logistic regression analysis, Lasso regression analysis, support vector machine (SVM), and random forest. Specific variables acquired using these four methods were intersected, and a dynamic model was constructed. ROC and calibration curves were constructed to assess model performance. Finally, using the verification group, model performance was verified internally using ROC and calibration curves.

Single-factor logistic regression analysis was performed to select variables with p < 0.05. Then, multi-factor logistic regression analysis was performed; p < 0.05 was set as the threshold to select the predictive variables of this method.
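For a concrete picture of this screening step, here is a minimal Python analogue using statsmodels on synthetic placeholder data; the study itself used R, and the variable names below are illustrative, not the study's actual data:

```python
# Python analogue of p-value screening (the study used R).
# Variables with p < 0.05 in single-factor models are carried into a
# multi-factor model; the toy data below is a placeholder.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
df = pd.DataFrame(rng.normal(size=(200, 3)), columns=["glucose", "hb", "age"])
df["ssi"] = (rng.random(200) < 1 / (1 + np.exp(-df["glucose"]))).astype(int)

# Single-factor screening: keep predictors with p < 0.05.
kept = []
for col in ["glucose", "hb", "age"]:
    m = sm.Logit(df["ssi"], sm.add_constant(df[col])).fit(disp=0)
    if m.pvalues[col] < 0.05:
        kept.append(col)

# Multi-factor model on the screened variables.
multi = sm.Logit(df["ssi"], sm.add_constant(df[kept])).fit(disp=0)
print(multi.summary2().tables[1])  # coefficients and p-values
```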

Lasso regression analysis, a shrinkage approach, was performed to select risk factors and optimal predictive features from the candidate variables based on the SSI case data. LASSO regression and visualized analyses were conducted using the R glmnet package.
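A rough scikit-learn equivalent of this selection step (the paper used R's glmnet; this sketch continues the toy data frame from the previous example) might look like:

```python
# L1-penalized logistic regression as a stand-in for Lasso selection;
# reuses the toy `df` from the statsmodels sketch above.
from sklearn.linear_model import LogisticRegressionCV

X, y = df[["glucose", "hb", "age"]], df["ssi"]
lasso = LogisticRegressionCV(
    penalty="l1", solver="liblinear", Cs=10, cv=10).fit(X, y)

# Predictors whose coefficients survive the L1 shrinkage.
print(X.columns[lasso.coef_[0] != 0].tolist())
```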

SVM recursive feature elimination (SVM-RFE) has been developed as an efficient approach under machine learning. To predict SSI, we developed an SVM-RFE model using the rms package. Data were analyzed via tenfold cross-validation, followed by the acquisition of an output vector feature index and variable sorting in descending order of usefulness.
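An analogous SVM-RFE step with tenfold cross-validation can be sketched with scikit-learn (the paper's own implementation was in R; the toy data is continued):

```python
# Recursive feature elimination wrapped around a linear SVM.
from sklearn.feature_selection import RFECV
from sklearn.svm import SVC

svm_rfe = RFECV(SVC(kernel="linear"), cv=10, scoring="accuracy").fit(X, y)
print(X.columns[svm_rfe.support_].tolist())  # retained predictors
```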

To construct the random forest model, the R randomForest package was used to select variables, perform calculations, and visualize relative variable importance. %IncMSE indicates the increase in mean squared error: random values were assigned to each variable to assess the importance of predicting variables, and the model's prediction error increased more when a predicting variable of greater importance had its value randomly replaced. Consequently, a higher value indicated a higher level of variable importance. IncNodePurity indicates the increase in node purity and can be calculated as the sum of squares of residual errors; it indicates how one variable affects observed value heterogeneity in every node within the classification tree, with a higher value indicating higher variable importance.

We selected IncNodePurity as the indicator to judge whether a predictive variable was important. We identified the variables with the highest importance as the optimal predictive variables via tenfold cross-validation with five iterations.
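In scikit-learn terms, impurity-based importances are the rough analogue of IncNodePurity, while permutation importance mirrors the %IncMSE idea of randomly replacing a variable's values; a sketch on the same toy data:

```python
# Random-forest variable importance, two ways.
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

rf = RandomForestClassifier(n_estimators=500, random_state=0).fit(X, y)
print(dict(zip(X.columns, rf.feature_importances_)))  # node-purity style

perm = permutation_importance(rf, X, y, n_repeats=5, random_state=0)
print(dict(zip(X.columns, perm.importances_mean)))    # %IncMSE style
```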

The abovementioned methods were used to screen the predictive variables; the variables common to all of them were identified using a Venn diagram. After constructing a dynamic prediction model with the common variables, ROC and calibration curves were constructed to evaluate model prediction performance; its effectiveness was verified using the verification group.
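The intersection itself is plain set arithmetic. In the sketch below, the logistic set matches the multivariate results reported in the Results section; the non-overlapping members of the other three sets are placeholders, since the paper reports only their counts:

```python
# Venn-style intersection of the variables kept by each method.
# Only the logistic set is taken from the paper; the extra members of
# the other sets are invented placeholders for illustration.
logit_vars  = {"transfusion", "glucose", "modic", "hb", "vertebrae", "sebum"}
lasso_vars  = {"glucose", "modic", "hb", "sebum", "age", "esr"}
svm_vars    = {"glucose", "modic", "hb", "sebum", "wbc"}
forest_vars = {"glucose", "modic", "hb", "sebum", "albumin"}

common = logit_vars & lasso_vars & svm_vars & forest_vars
print(sorted(common))  # ['glucose', 'hb', 'modic', 'sebum']
```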

In total, the data of 1327 patients meeting the inclusion criteria were collected: age, sex, diabetes, Modic changes, anesthesia score, antibiotic use during the operation, operation time, anesthesia time, vertebral body number spanned, screw number, intraoperative blood transfusion, WBC, glucose, PLT, Hb, ESR, albumin, skin-to-lamina thickness, and sebum thickness. These variables were randomly divided into the test and verification groups. The distribution characteristics between the two groups are presented in Table 1. Additionally, Supplementary Figure 1 illustrates the correlations among the different variables in the test group.

Univariate logistic regression analysis identified the following variables as statistically significant (p < 0.05): age, diabetes, Modic changes, anesthesia time, vertebral body number spanned, screw number, blood transfusion, WBC, glucose, albumin, ESR, Hb, and sebum thickness. Multivariate logistic regression analysis identified the following as statistically significant (p < 0.05): blood transfusion, glucose, Modic changes, Hb, vertebral body number spanned, and sebum thickness. Table 2 displays the results of the logistic regression analysis.

Table 2 Results After Logistic Regression Analysis

Results for Lasso regression analysis of dependent variables are shown in Supplementary Figure 2A; 12 significant variables in patients with SSI were compared with those in patients without SSI (Supplementary Figure 2B).

Following SVM-RFE analysis, Supplementary Figure 3A illustrates that ten variables with the lowest error rate were selected as predictive factors. Each of these factors was found to be statistically significant. Variables with the highest importance were determined using the random forest algorithm IncNodePurity. Supplementary Figure 3B shows that the best regression effect was obtained by leaving the 10 variables with the highest importance after tenfold cross-validation.

Table 3 displays the variables with the highest importance selected via Lasso regression analysis, SVM-RFE, and random forest. The intersection of the results obtained using the four methods was determined using a Venn diagram (Figure 2). Four predictors were obtained: Hb, glucose, Modic change, and sebum thickness. We used these four predictors to build a prediction model (Figure 3).

Table 3 Risk Factors Screened by Three Machine Learning Algorithms

Figure 2 The intersection of variables screened by using logistic regression analysis, LASSO, random forest, and SVM-RFE.

Figure 3 Four independent risk factors were identified (Modic change, sebum thickness, Hb, and glucose), and a dynamic model was constructed. Categorical variables were visually represented using block plots, while the distribution of continuous variables was depicted through violin plots. Larger plots accommodated more variables for comprehensive visualization. The red marker on the graph indicates that the probability of postoperative surgical site infection (SSI) was found to be 85.4% when all four independent risk factors were at their respective values.

To verify model efficiency, ROC (Figure 4B) and calibration (Figure 4A) curves were constructed using the test group; the area under the ROC curve (AUC) was 0.988. Calibration curve analysis revealed favorable consistency of the nomogram-predicted values compared with real measurements. In addition, the C-index of the model was 0.9861 (95% CI 0.981–0.994). Finally, we used the validation group for internal validation; the ROC and calibration curves are shown in Figures 4D and C, respectively. The AUC was 0.987, and calibration curve analysis revealed favorable consistency of the nomogram-predicted values compared with real measurements. The C-index was 0.982 (95% CI 0.974–0.999).

Figure 4 (A and B) represent the calibration curve and ROC curve of the training group, respectively, where the area under the curve (AUC) is 0.988. (C and D) represent the calibration curve and ROC curve of the validation group, respectively, where the area under the curve (AUC) is 0.987.

In this study, we used machine learning algorithms and related data to develop an SSI prediction model. Three machine learning models were employed to filter variables, and their validity was assessed using the verification group. This strategy based on artificial intelligence has been adopted to help clinicians select early diagnostic approaches.13,16,17 The relationship between machine learning and medicine is extensive, involving the diagnosis and treatment of cancer, surgery, and internal diseases.18–20 Machine learning is applied to imaging, metabolomics, proteomics, and other data types;21,22 random forest, SVM, CNN, GBX, and other algorithms are a very small part of machine learning.23 We can diagnose and predict various diseases, including tumors, specific diseases, and inflammatory diseases, via machine learning.9,10,24

Many studies have assessed SSI risk factors after spinal surgery,25–28 including the establishment of predictive models based on machine learning.12,29 In our study, we utilized a combination of logistic regression analysis and machine learning to identify common risk factors and develop a prediction model, which has not been accomplished in previous studies. Further, we identified four risk factors that were closely related to the occurrence of SSI: Modic change, sebum thickness, Hb, and glucose. The constructed prediction model has good predictive efficacy and visualization, further simplifying the clinician's judgment and intervention on SSI.

Modic changes appear as degeneration of the lumbar spine on imaging and are probably involved in the body's immune response.30,31 Pradip et al found that Modic changes were chronic subclinical infection foci rather than degeneration markers alone.32 Ohtori et al reported that endplate abnormality is associated with TNF-induced axonal development and inflammation. This conclusion is drawn from the observation that patients with Modic Type 1 or 2 endplate changes on MRI exhibited a significantly higher presence of TNF-immunoreactive cells and PGP 9.5-immunoreactive nerve fibers in the affected vertebral endplates compared to patients without any endplate abnormalities on MRI.33 In our study, we also determined Modic changes as a risk factor for SSI following lumbar surgery. Therefore, Modic changes are not only a manifestation of lumbar disc degeneration but also of chronic inflammation and should hence receive added attention from clinicians.

Studies have shown that obesity is positively correlated with postoperative SSI occurrence.34,35 We found that sebum thickness, a critical factor for predicting the risk of postoperative SSI, was positively correlated with SSI occurrence. We obtained this result despite insufficient direct pathophysiological evidence linking sebum thickness and SSI. As sebum thickness and obesity are often positively correlated, we believe the pathophysiological mechanism linking sebum thickness and SSI is analogous to the relationship between obesity and SSI.36,37 Preoperative fat reduction is instructive for SSI prevention.38

Hb content is often negatively correlated with SSI occurrence;39 we also confirmed this finding. Tissue growth at the incision site after surgery is inseparable from energy perfusion. Insufficient tissue blood perfusion is not conducive to tissue recovery and can even lead to tissue necrosis.40,41 Anemia is closely associated with SSI development; however, it is worth noting that perioperative blood transfusion may also be an independent factor for predicting postoperative SSI.42 Moreover, glucose has been a focal point in research related to SSIs.43–45 We found that preoperative blood glucose levels were positively correlated with SSI occurrence. Liu et al identified high preoperative serum glucose as an independent factor predicting SSI risk following posterior lumbar spinal surgery.46 Thus, spinal surgeons should pay attention to patients' preoperative blood glucose levels and intervene in time to prevent SSI.

Given the strong predictive efficacy of the model developed in our study, spine doctors can anticipate the potential occurrence of SSIs in patients prior to surgery by considering factors such as Modic changes, sebum thickness, hemoglobin levels, and preoperative blood glucose. In cases where a high risk is identified, appropriate intervention measures can be implemented before surgery, such as stabilizing blood glucose, administering blood transfusions, and prophylactic antibiotic use. The goal is to mitigate the risk of postoperative SSIs, facilitate patients' speedy recovery, and alleviate unnecessary financial burdens. Additionally, we identified intraoperative blood transfusion as a risk factor for outcomes using logistic regression analysis, Lasso regression analysis, and random forest techniques. This finding is noteworthy and warrants attention from healthcare providers and patients alike.

Although we used various screening methods and constructed a prediction model with good performance, our study has some limitations. First, there might be selection and subjective bias owing to the retrospective nature of the study. Second, we constructed the machine learning algorithm model based on data from a single center; as a result, this model might not be applicable to other centers and requires external verification. Third, additional data are warranted, which might improve the diagnostic effectiveness of our model.

In our study, we employed logistic regression analysis and machine learning to create a dynamic model with strong predictive capabilities for SSIs. This dynamic model can be a valuable tool for healthcare professionals and patients in clinical practice.

SSI, surgical site infection; CI, confidence interval; AUC, area under the curve; BMI, body mass index; ASA, American Society of Anesthesiologists; OP-time, operation time; AT, anesthesia time; WBC, white blood cell; Hb, hemoglobin; PLT, platelet; ESR, erythrocyte sedimentation rate.

The datasets used and/or analyzed during the current study are available from the corresponding author upon reasonable request.

We confirm that all subjects and/or their legal guardians provided written informed consent for participation in this study. Prior approval of the study was obtained from the institutional ethical review board of The First Affiliated Hospital of Guangxi Medical University (Approval No. 2022-E398-01). The study complies with the Declaration of Helsinki.

We would like to thank Dr. Xinli Zhan and Dr. Chong Liu for their efforts in this work.

All authors made a significant contribution to the work reported, whether that is in the conception, study design, execution, acquisition of data, analysis, and interpretation, or in all these areas; took part in drafting, revising or critically reviewing the article; gave final approval of the version to be published; have agreed on the journal to which the article has been submitted; and agree to be accountable for all aspects of the work.

The authors declare that they have no competing interests.

1. Seidelman J, Anderson DJ. Surgical site infections. Infect Dis Clin North Am. 2021;35(4):901–929.

2. Ma T, Lu K, Song L, et al. Modifiable factors as current smoking, hypoalbumin, and elevated fasting blood glucose level increased the SSI risk following elderly hip fracture surgery. J Invest Surg. 2020;33(8):750–758.

3. Skeie E, Koch AM, Harthug S, et al. A positive association between nutritional risk and the incidence of surgical site infections: a hospital-based register study. PLoS One. 2018;13(5):e0197344.

4. Zhou J, Wang R, Huo X, Xiong W, Kang L, Xue Y. Incidence of surgical site infection after spine surgery: a systematic review and meta-analysis. Spine. 2020;45(3):208–216.

5. Strobel RM, Leonhardt M, Förster F, et al. The impact of surgical site infection: a cost analysis. Langenbecks Arch Surg. 2022;407(2):819–828.

6. McFarland A, Reilly J, Manoukian S, Mason H. The economic benefits of surgical site infection prevention in adults: a systematic review. J Hosp Infect. 2020;106(1):76–101.

7. Sultan AS, Elgharib MA, Tavares T, Jessri M, Basile JR. The use of artificial intelligence, machine learning and deep learning in oncologic histopathology. J Oral Pathol Med. 2020;49(9):849–856.

8. Sidey-Gibbons JAM, Sidey-Gibbons CJ. Machine learning in medicine: a practical introduction. BMC Med Res Methodol. 2019;19(1):64.

9. Zhu J, Lu Q, Liang T, et al. Development and validation of a machine learning-based nomogram for prediction of ankylosing spondylitis. Rheumatol Ther. 2022;9(5):1377–1397.

10. Zhou C, Huang S, Liang T, et al. Machine learning-based clustering in cervical spondylotic myelopathy patients to identify heterogeneous clinical characteristics. Front Surg. 2022;9:935656.

11. Liu WC, Ying H, Liao WJ, et al. Using preoperative and intraoperative factors to predict the risk of surgical site infections after lumbar spinal surgery: a machine learning-based study. World Neurosurg. 2022;162:e553–e560.

12. Wang H, Fan T, Yang B, Lin Q, Li W, Yang M. Development and internal validation of supervised machine learning algorithms for predicting the risk of surgical site infection following minimally invasive transforaminal lumbar interbody fusion. Front Med. 2021;8:771608.

13. Handelman GS, Kok HK, Chandra RV, Razavi AH, Lee MJ, Asadi H. eDoctor: machine learning and the future of medicine. J Intern Med. 2018;284(6):603–619.

14. Cote MP, Lubowitz JH, Brand JC, Rossi MJ. Artificial intelligence, machine learning, and medicine: a little background goes a long way toward understanding. Arthroscopy. 2021;37(6):1699–1702.

15. Borchardt RA, Tzizik D. Update on surgical site infections: the new CDC guidelines. JAAPA. 2018;31(4):52–54.

16. Lee D, Yoon SN. Application of artificial intelligence-based technologies in the healthcare industry: opportunities and challenges. Int J Environ Res Public Health. 2021;18(1):567.

17. Forsting M. Machine learning will change medicine. J Nucl Med. 2017;58(3):357–358.

18. Hashimoto DA, Witkowski E, Gao L, Meireles O, Rosman G. Artificial intelligence in anesthesiology: current techniques, clinical applications, and limitations. Anesthesiology. 2020;132(2):379–394.

19. Kong J, Ha D, Lee J, et al. Network-based machine learning approach to predict immunotherapy response in cancer patients. Nat Commun. 2022;13(1):3703.

20. Groot OQ, Ogink PT, Lans A, et al. Machine learning prediction models in orthopedic surgery: a systematic review in transparent reporting. J Orthop Res. 2022;40(2):475–483.

21. Harrison JH, Gilbertson JR, Hanna MG, et al. Introduction to artificial intelligence and machine learning for pathology. Arch Pathol Lab Med. 2021;145(10):1228–1254.

22. Staziaki PV, Wu D, Rayan JC, et al. Machine learning combining CT findings and clinical parameters improves prediction of length of stay and ICU admission in torso trauma. Eur Radiol. 2021;31(7):5434–5441.

23. Nayarisseri A, Khandelwal R, Tanwar P, et al. Artificial intelligence, big data and machine learning approaches in precision medicine & drug discovery. Curr Drug Targets. 2021;22(6):631–655.

24. Tran KA, Kondrashova O, Bradley A, Williams ED, Pearson JV, Waddell N. Deep learning in cancer diagnosis, prognosis and treatment selection. Genome Med. 2021;13(1):152.

25. Chen L, Gan Z, Huang S, et al. Blood transfusion risk prediction in spinal tuberculosis surgery: development and assessment of a novel predictive nomogram. BMC Musculoskelet Disord. 2022;23(1):182.

26. Namba T, Ueno M, Inoue G, et al. Prediction tool for high risk of surgical site infection in spinal surgery. Infect Control Hosp Epidemiol. 2020;41(7):799–804.

27. Haddad S, Millhouse PW, Maltenfort M, Restrepo C, Kepler CK, Vaccaro AR. Diagnosis and neurologic status as predictors of surgical site infection in primary cervical spinal surgery. Spine J. 2016;16(5):632–642.

28. Bohl DD, Shen MR, Mayo BC, et al. Malnutrition predicts infectious and wound complications following posterior lumbar spinal fusion. Spine. 2016;41(21):1693–1699.

29. Hopkins BS, Mazmudar A, Driscoll C, et al. Using artificial intelligence (AI) to predict postoperative surgical site infection: a retrospective cohort of 4046 posterior spinal fusions. Clin Neurol Neurosurg. 2020;192:105718.

30. Dudli S, Fields AJ, Samartzis D, Karppinen J, Lotz JC. Pathobiology of Modic changes. Eur Spine J. 2016;25(11):3723–3734.

31. Vigeland MD, Flåm ST, Vigeland MD, et al. Correlation between gene expression and MRI STIR signals in patients with chronic low back pain and Modic changes indicates immune involvement. Sci Rep. 2022;12(1):215.

32. Pradip IA, Dilip Chand Raja S, Rajasekaran S, et al. Presence of preoperative Modic changes and severity of endplate damage score are independent risk factors for developing postoperative surgical site infection: a retrospective case-control study of 1124 patients. Eur Spine J. 2021;30(6):1732–1743.

33. Ohtori S, Inoue G, Ito T, et al. Tumor necrosis factor-immunoreactive cells and PGP 9.5-immunoreactive nerve fibers in vertebral endplates of patients with discogenic low back pain and Modic Type 1 or Type 2 changes on MRI. Spine. 2006;31(9):1026–1031.

34. Yuan K, Chen HL. Obesity and surgical site infections risk in orthopedics: a meta-analysis. Int J Surg. 2013;11(5):383–388.

35. Lynch RJ, Ranney DN, Shijie C, Lee DS, Samala N, Englesbe MJ. Obesity, surgical site infection, and outcome following renal transplantation. Ann Surg. 2009;250(6):1014–1020.

36. Onyekwelu I, Glassman SD, Asher AL, Shaffrey CI, Mummaneni PV, Carreon LY. Impact of obesity on complications and outcomes: a comparison of fusion and nonfusion lumbar spine surgery. J Neurosurg Spine. 2017;26(2):158–162.

37. Lee JS, Terjimanian MN, Tishberg LM, et al. Surgical site infection and analytic morphometric assessment of body composition in patients undergoing midline laparotomy. J Am Coll Surg. 2011;213(2):236–244.

38. Inacio MC, Kritz-Silverstein D, Raman R, et al. The risk of surgical site infection and re-admission in obese patients undergoing total joint replacement who lose weight before surgery and keep it off postoperatively. Bone Joint J. 2014;96-B(5):629–635.

39. Kim BD, Smith TR, Lim S, Cybulski GR, Kim JY. Predictors of unplanned readmission in patients undergoing lumbar decompression: multi-institutional analysis of 7016 patients. J Neurosurg Spine. 2014;20(6):606–616.

40. Rammell J, Perre D, Boylan L, et al. The adverse impact of preoperative anaemia on survival following major lower limb amputation. Vascular. 2022;17085381211065622.

41. Lasocki S, Krauspe R, von Heymann C, Mezzacasa A, Chainey S, Spahn DR. PREPARE: the prevalence of perioperative anaemia and need for patient blood management in elective orthopaedic surgery: a multicentre, observational study. Eur J Anaesthesiol. 2015;32(3):160–167.

42. Higgins RM, Helm MC, Kindel TL, Gould JC. Perioperative blood transfusion increases risk of surgical site infection after bariatric surgery. Surg Obes Relat Dis. 2019;15(4):582–587.

43. Berríos-Torres SI, Umscheid CA, Bratzler DW, et al. Centers for Disease Control and Prevention guideline for the prevention of surgical site infection, 2017. JAMA Surg. 2017;152(8):784–791.

44. Hagedorn JM, Bendel MA, Hoelzer BC, Aiyer R, Caraway D. Preoperative hemoglobin A1c and perioperative blood glucose in patients with diabetes mellitus undergoing spinal cord stimulation surgery: a literature review of surgical site infection risk. Pain Pract. 2022:76.

45. Pennington Z, Lubelski D, Westbroek EM, Ahmed AK, Passias PG, Sciubba DM. Persistent postoperative hyperglycemia as a risk factor for operative treatment of deep wound infection after spine surgery. Neurosurgery. 2020;87(2):211–219.

46. Liu JM, Deng HL, Chen XY, et al. Risk factors for surgical site infection after posterior lumbar spinal surgery. Spine. 2018;43(10):732–737.

Read more from the original source:
Using machine learning to predict surgical site infection | IDR - Dove Medical Press

Protect AI Acquires huntr; Launches World's First Artificial Intelligence and Machine Learning Bug Bounty Platform – Yahoo Finance

huntr provides a platform to help security researchers discover, disclose, remediate, and be rewarded for AI and ML security threats

LAS VEGAS, August 08, 2023--(BUSINESS WIRE)--Protect AI, the leading artificial intelligence (AI) and machine learning (ML) security company, today announced the launch of huntr, a groundbreaking AI/ML bug bounty platform focused exclusively on protecting AI/ML open-source software (OSS), foundational models, and ML Systems. The company is a silver sponsor at Black Hat USA, Booth 2610.

The launch of the huntr AI/ML bug bounty platform comes as a result of the acquisition of huntr.dev by Protect AI. Originally founded in 2020 by 418Sec Founder, Adam Nygate, huntr.dev quickly rose to become the world's 5th largest CVE Numbering Authority (CNA) for Common Vulnerabilities and Exposures (CVEs) in 2022. With a vast network of over ten thousand security researchers specializing in open-source software (OSS), huntr has been at the forefront of OSS security research and development. This success provides an opportunity for Protect AI to focus this platform on a critical and emerging need for AI/ML threat research.

In today's AI-powered world, nearly 80% of code in Big Data, AI, BI, and ML codebases relies on open-source components, according to Synopsys, with more than 40% of these codebases harboring high-risk vulnerabilities. In one example, Protect AI researchers found a critical Local File Inclusion/Remote File Inclusion vulnerability in MLflow, a widely used system for managing machine learning life cycles, which could enable attackers to gain full access to a cloud account, steal proprietary data, and expose critical IP in the form of ML models.

Furthermore, there is a critical shortage of security researchers with the AI/ML skills and expertise needed to find these AI security threats. This has led to an urgent need for comprehensive AI/ML security research focused on uncovering potential security flaws and safeguarding sensitive data and AI application integrity for enterprises.


"The vast artificial intelligence and machine learning supply chain is a leading area of risk for enterprises deploying AI capabilities. Yet, the intersection of security and AI remains underinvested. With huntr, we will foster an active community of security researchers, to meet the demand for discovering vulnerabilities within these models and systems," said Ian Swanson, CEO of Protect AI.

"With this acquisition by Protect AI, huntr's mission now exclusively centers on discovering and addressing OSS AI/ML vulnerabilities, promoting trust, data security, and responsible AI/ML deployment. We're thrilled to expand our reward system for researchers and hackers within our community and beyond," said Adam Nygate, founder and CEO of huntr.dev.

The New huntr Platform

huntr offers security researchers a comprehensive AI/ML bug hunting environment with intuitive navigation, targeted bug bounties with streamlined reporting, monthly contests, collaboration tools, vulnerability reviews, and the highest-paying AI/ML bounties available to the hacking community. The first contest focuses on Hugging Face Transformers, offering an impressive $50,000 reward.

huntr also bridges the critical knowledge gap in AI/ML security research and operates as an integral part of Protect AI's Machine Learning Security Operations (MLSecOps) community. By actively participating in huntr's AI/ML open-source-focused bug bounty platform, security researchers can build new expertise in AI/ML security, create new professional opportunities, and receive well-deserved financial rewards.

"AI and ML rely on open source software, but security research in these systems is often overlooked. huntr's launch for AI/ML security research is an exciting moment to unite and empower hackers in safeguarding the future of AI and ML from emerging threats," said Phil Wylie, a renowned Pentester.

Chloé Messdaghi, Head of Threat Research at Protect AI, emphasized the platform's ethos, stating, "We believe in transparency and fair compensation. Our mission is to cut through the noise and provide huntrs with a platform that recognizes their contributions, rewards their expertise, and fosters a community of collaboration and knowledge sharing."

Protect AI is a Skynet sponsor at DEF CON's AI Village, where Ms. Messdaghi will chair a panel entitled "Unveiling the Secrets: Breaking into AI/ML Security Bug Bounty Hunting" on Friday, August 11, at 4:00pm. The company is also a silver sponsor at Black Hat USA. These events will provide the opportunity for Protect AI's threat research team to connect in person with the security research community. To find out more, and become an AI/ML huntr, join the community at huntr.mlsecops.com. For information on participating in Protect AI's sessions at Black Hat and DEF CON visit us on LinkedIn and Twitter.

About Protect AI

Protect AI enables safer AI applications by providing organizations the ability to see, know and manage their ML environments. The company's AI Radar platform provides visibility into the ML attack surface by creating a ML Bill of Materials (MLBOM), remediates security vulnerabilities and detects threats to prevent data and secrets leakages. Founded by AI leaders from Amazon and Oracle, Protect AI is funded by Acrew Capital, boldstart ventures, Evolution Equity Partners, Knollwood Capital, Pelion Ventures and Salesforce Ventures. The company is headquartered in Seattle, with offices in Dallas and Raleigh. For more information visit us on the web, and follow us on LinkedIn and X/Twitter.

View source version on businesswire.com: https://www.businesswire.com/news/home/20230808746694/en/

Contacts

Media: Marc Gendron, Marc Gendron PR for Protect AI, marc@mgpr.net, 617-877-7480

More:
Protect AI Acquires huntr; Launches World's First Artificial Intelligence and Machine Learning Bug Bounty Platform - Yahoo Finance

The Technological Triad: 5G, Machine Learning, and Cloud … – Fagen wasanni

Exploring the Technological Triad: 5G, Machine Learning, and Cloud Computing in the Modern World

In the modern world, the technological triad of 5G, machine learning, and cloud computing is shaping the future of digital transformation. These three technologies are not only revolutionizing the way we live and work, but they are also driving the next wave of technological innovation.

5G, the fifth generation of wireless technology, is at the forefront of this technological triad. With its high-speed data transmission and low latency, 5G is set to revolutionize the way we communicate and interact with technology. It promises to enable a new era of smart cities, autonomous vehicles, and Internet of Things (IoT) devices, all of which require real-time data transmission and processing. Moreover, 5G is expected to provide the necessary infrastructure for the other two components of the technological triad, machine learning and cloud computing, to reach their full potential.

Machine learning, a subset of artificial intelligence (AI), is another key player in this technological triad. It involves the use of algorithms and statistical models to enable computers to perform tasks without explicit programming. In other words, machine learning allows computers to learn from data and make predictions or decisions without being explicitly programmed to do so. This technology is already being used in a wide range of applications, from recommendation systems and voice recognition to fraud detection and autonomous vehicles. With the advent of 5G, machine learning is expected to become even more prevalent as it will be able to process and analyze data in real-time, leading to more accurate and timely predictions and decisions.
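To make that idea concrete, here is a minimal, self-contained example (purely hypothetical, not tied to any system mentioned here) in which a scikit-learn classifier learns its decision rules from data rather than having them programmed by hand:

```python
# Minimal illustration of "learning from data without explicit
# programming": the decision rules below are induced from examples.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clf = DecisionTreeClassifier().fit(X_train, y_train)  # rules are learned
print(f"held-out accuracy: {clf.score(X_test, y_test):.2f}")
```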

The third component of the technological triad is cloud computing. This technology involves the delivery of computing services, including servers, storage, databases, networking, software, analytics, and intelligence, over the Internet (the cloud). Cloud computing offers several benefits, including cost savings, increased productivity, speed and efficiency, performance, and security. It also provides the necessary infrastructure for machine learning and 5G to function effectively. With cloud computing, businesses can store and process large amounts of data, run applications, and deliver services on a global scale. Moreover, with the advent of 5G, cloud computing is expected to become even more powerful as it will be able to process and analyze data at unprecedented speeds.

In conclusion, the technological triad of 5G, machine learning, and cloud computing is set to revolutionize the modern world. These three technologies are not only driving the next wave of technological innovation, but they are also shaping the future of digital transformation. With 5G, we can expect to see a new era of smart cities, autonomous vehicles, and IoT devices. With machine learning, we can expect to see more accurate and timely predictions and decisions. And with cloud computing, we can expect to see businesses delivering services on a global scale. As we move forward, it will be interesting to see how these three technologies continue to evolve and shape our world.

Follow this link:
The Technological Triad: 5G, Machine Learning, and Cloud ... - Fagen wasanni

Professor in Artificial Intelligence and Machine Learning job with … – Times Higher Education

Job description

Edinburgh Napier University is the #1 Modern University in Scotland. An innovative, learner-centric university with a modern and fresh outlook, Edinburgh Napier is ambitious, inclusive in its ethos and applied in its approach.

Edinburgh Napier University's phenomenal results from the Research Excellence Framework (2021) are testament to our growing strength and capability as a research institution. These results, alongside our consistently positive National Student Survey results and sustained high levels of graduate employability, demonstrate the increasing impact of Edinburgh Napier's collective work, quality and commitment.

REF 2021 assessed 68% of our research as either world-leading or internationally excellent, up 15% since 2014. Additionally, the University's research power metric rocketed from 250 to 718, making Edinburgh Napier the top-ranking Scottish modern university for both research power and research impact.

The University's improved power rating will now see our research funding increase as we take significant strides to grow our reputation as a research-focused institution as well as a teaching one. Through continuous investment in staff and our research environment, we are confident that we are well on our way to establishing ourselves as one of the UK's world-leading universities in research.

The School of Computing, Engineering & the Built Environment has over 200 academics and around 3,100 campus-based students, and delivers programmes with professional accreditations from the British Computer Society, Institution of Engineering and Technology, The Chartered Institute of Building and other accreditation bodies. We have excellent computing, engineering and construction lab facilities. The School has embarked on a major development in the area of Industry 4.0, bringing together computer science, engineering, mathematics and construction technology. We are one of the UK's largest computer science academic units with key strengths in AI, cyber security and creative and social informatics. We house leading UK research centres in transport policy and sustainable construction. The School is based in the lively and exciting Merchiston area at the heart of Edinburgh, Scotland's inspiring capital.

The latest UK national research assessment, REF 2021, places our Computer Science research in the top-30 in the whole UK and 3rd best in Scotland (both in power ranking). In terms of research impact 100% of our work achieved the highest rating (4*), a performance achieved only by six other universities in the whole UK. Our research is underpinned by significant amounts of funding from prestigious sources including both EPSRC and Horizon 2020.

This is a great opportunity for an experienced academic with expertise in Artificial Intelligence and Machine Learning or related fields to contribute to the work of the experienced team exploring Search-based Optimisation, Evolutionary Robotics, Natural Language Generation, and Multi-Modal Healthcare Data Analytics. As a professor you will be expected to contribute to the leadership of the research group as well, especially in terms of driving the research agenda and leading the exploration of new foundational research areas. Areas of desirable expertise include, but are not limited to: machine learning with applications to robotics, machine learning theoretical foundations, machine learning applied to biomedical data and healthcare, search-based optimisation, deep learning systems, adversarial machine learning, generative models in machine learning, natural language processing with machine learning, explainable machine learning, and neuromorphic machine learning systems. With an 80% time allocation for research, this role will allow you to explore novel and emerging areas of artificial intelligence and machine learning, deliver excellent quality research papers and secure substantial external research funding.

The Professor in Artificial Intelligence and Machine Learning will contribute to and build programmes and modules to support the expansion of the School's teaching portfolio, which explores the changing nature of IT infrastructure, AI and ML applications to big data, business intelligence, and the impact of technology on business. You will actively contribute to our existing portfolio of computer science based degree programmes.

We are looking for someone who can demonstrate enthusiasm for working in a cross-disciplinary manner in fundamental and applied research and in the development of research-informed teaching to enhance employability of our graduates. You will have the opportunity to expand your industry connections through our existing networks.

Further information about Edinburgh Napier University can be found here.

The Role

As a professor, you will be a member of our Artificial Intelligence research group with:

Applicants must demonstrate:

Applicants will preferably also demonstrate:

If you would like to know more about this exciting opportunity, please click here to view our Grade 8-10 (Level 1-3) role profiles.

How will we reward you?

Salary: £65,000 - £95,000 per annum (Grade 8-10; Level 1-3)

As the #1 Modern University in Scotland, Edinburgh Napier is here to make a difference. This is only possible because of the people who work here: it's our people that make us great. And with our people at the heart of what we do, it's important that you are supported and rewarded.

We are committed to providing a wide range of benefits including:

Further information about our benefits can be found here.

Additional Information

Informal enquiries about the role can be made to Professor Peter Andras (p.andras@napier.ac.uk) or Professor Ben Paechter (b.paechter@napier.ac.uk).

Applications for the role must be submitted via the Edinburgh Napier University job applications website; emailed applications will not be accepted.

Application closing date: Tuesday 15 August @ 11:59pm

Edinburgh Napier is committed to creating an environment where everyone feels proud, confident, challenged and supported, and we are holders of Disability Confident, Carer Positive and Stonewall Diversity Champion status. More details can be found here.

The rest is here:
Professor in Artificial Intelligence and Machine Learning job with ... - Times Higher Education

INT Simplifies Machine Learning and Processing and Augments Analytics Capabilities with Latest Release of – Benzinga

August 10, 2023 10:15 AM


The latest release of IVAAP by INT introduces an array of exciting new features and enhancements, providing users with unparalleled capabilities to extract deeper insights from their subsurface data.

HOUSTON, Aug. 10, 2023 /PRNewswire-PRWeb/ -- INT announced today the launch of IVAAP 2.11, the latest version of our Universal Cloud Data Visualization Platform. With powerful features and enhanced capabilities, IVAAP 2.11 takes subsurface data exploration and visualization to new heights, empowering users to make critical decisions with confidence and efficiency.

Some of the key highlights include:

"IVAAP 2.11 represents a significant milestone in our journey toward providing the oil and gas industry with the most advanced and comprehensive data visualization platform. With the introduction of external workflow support for machine learning and data processing and full compatibility with the OSDU Data Platform, IVAAP continues to empower geoscientists and engineers to explore, visualize, and automate their data like never before," said Hugues Thevoux, VP of Cloud Solutions at INT. "This release underscores our commitment to delivering cutting-edge solutions that drive efficiency, foster innovation, and enable our clients to make smarter decisions with confidence."

IVAAP 2.11 is now available for all existing users. To experience the power of IVAAP or to schedule a personalized demo, visit int.com/demo-gallery/ivaap/ or contact our sales team at intinfo@int.com.

To learn more about IVAAP 2.11, please visit int.com/ivaap/.

ABOUT IVAAP:

IVAAP is a Universal Cloud Data Visualization Platform where users can explore domain data, visualize 2D/3D G&G data (wells, seismic, horizons, surface), and perform data automation by integrating with external processing workflows and machine learning.
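
The release itself includes no code, and IVAAP's API is not shown here. Purely as a generic illustration of the well-log style of 2D subsurface display described above (not IVAAP's actual interface), the following Python sketch plots a synthetic gamma-ray curve against depth with matplotlib; all data and values are made up for demonstration.

import numpy as np
import matplotlib.pyplot as plt

# Synthetic well-log data: depth in metres, gamma-ray response in API units.
depth = np.linspace(1000, 2000, 500)
rng = np.random.default_rng(0)
gamma = 75 + 20 * np.sin(depth / 40) + rng.normal(0, 5, depth.size)

# Well logs are conventionally drawn as a narrow track with depth
# increasing downward, hence the inverted y-axis.
fig, ax = plt.subplots(figsize=(3, 8))
ax.plot(gamma, depth, linewidth=0.8)
ax.invert_yaxis()
ax.set_xlabel("Gamma ray (API units)")
ax.set_ylabel("Depth (m)")
ax.set_title("Synthetic well log")
plt.tight_layout()
plt.show()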

ABOUT INT:

INT software empowers the largest energy and services companies in the world to visualize their complex subsurface data (seismic, well log, reservoir, and schematics in 2D/3D). INT offers a visualization platform (IVAAP) and libraries (GeoToolkit) developers can use with their data ecosystem to deliver subsurface solutions (Exploration, Drilling, Production). INT's powerful HTML5/JavaScript technology can be used for data aggregation, API services, and high-performance visualization of G&G and petrophysical data in a browser. INT simplifies complex subsurface data visualization.

For more information about IVAAP or INT's other data visualization products, please visit https://www.int.com.

INT, the INT logo, and IVAAP are trademarks of Interactive Network Technologies, Inc., in the United States and/or other countries.

Media Contact

Claudia Juarez, INT, +1 713-975-7434, marketing@int.com, http://www.int.com

SOURCE INT

© 2023 Benzinga.com. Benzinga does not provide investment advice. All rights reserved.

Visit link:
INT Simplifies Machine Learning and Processing and Augments Analytics Capabilities with Latest Release of - Benzinga

Gavel to Gavel: Protecting your IP in the age of AI – Journal Record

Drew Palmer

The recent advent of commercially available machine learning systems and other forms of artificial intelligence has presented businesses with countless possibilities to integrate these capabilities into their operations, unlocking efficiency and gaining a competitive edge. But as with any innovation, concerns arise over how other members of the supply chain may utilize business or customer data and intellectual property in unforeseen ways.

Recently, Zoom posted a blog article that shed light on this issue, discussing a change in its terms of service. The post, published five months after the updated terms were released, aimed to enhance transparency and clarify how Zoom utilizes customer data to train its AI models. While Zoom assured its customers that their data would only be used with consent, the post also disclosed that such consent would be obtained through a pop-up window presented at the moment a user chose to use any of Zoom's AI features, giving users little time to read or consider the impact of providing that consent. This situation underscores the importance for organizations to understand how their suppliers leverage their data and intellectual assets, particularly with respect to AI systems that can repurpose assets in novel ways.

Organizations also must collaborate with their technology suppliers to establish the controls needed to ensure contractual compliance. This starts with integrating contractual clauses into legal agreements that regulate the use of data and IP assets by requiring discrete controls on that use. By including these types of safeguards, companies can better control how their data is used.

Every business should carefully evaluate its procurement and supply contracts to address these intellectual property concerns. By crafting specific terms that prohibit unauthorized future use of customer data and intellectual assets, businesses can better protect their rights as technology continues to evolve. To enforce these protections, businesses can require the use of strong encryption, conduct regular audits, and verify compliance through other relevant data protection safeguards.
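
The article stays at the contractual level, but as a concrete sketch of the encryption safeguard it mentions, the Python snippet below encrypts a customer record before it leaves your systems, so a supplier only ever receives ciphertext. It uses the open-source cryptography package; the record fields are hypothetical, and a real deployment would hold the key in a key-management service rather than generating it inline.

import json
from cryptography.fernet import Fernet

# Generate a symmetric key (in production, fetch this from a KMS,
# never hard-code or generate it alongside the data).
key = Fernet.generate_key()
cipher = Fernet(key)

# A hypothetical customer record to be shared with a supplier.
record = {"customer_id": "C-1042", "email": "jane@example.com"}
token = cipher.encrypt(json.dumps(record).encode("utf-8"))

# Only parties holding the key can recover the original record.
restored = json.loads(cipher.decrypt(token).decode("utf-8"))
assert restored == record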

As businesses embrace the benefits of AI and machine learning, it is vital that they take proactive measures to safeguard their data and intellectual assets. By understanding how technologies utilize their information and establishing robust contractual agreements, organizations can mitigate potential risks and confidently embrace the power of AI for a competitive advantage in the market.

Drew Palmer is an attorney with Crowe & Dunlevy, crowedunlevy.com, and a member of the Intellectual Property Practice Group.

See the article here:
Gavel to Gavel: Protecting your IP in the age of AI - Journal Record