Category Archives: Machine Learning

Lemmatization in NLP and Machine Learning – Built In

Lemmatization is one of the most common text pre-processing techniques used in natural language processing (NLP) and machine learning in general. Lemmatization is not that different from the stemming of words in NLP. In both stemming and lemmatization, we try to reduce a given word to its root word. The root word is called a stem in the stemming process, and it's called a lemma in the lemmatization process. But there are a few more differences between the two than that. Let's see what those are.

Lemmatization is a text pre-processing technique used in natural language processing (NLP) models to break a word down to its root meaning to identify similarities. For example, a lemmatization algorithm would reduce the word better to its root word, or lemma, good.

In stemming, a part of the word is just chopped off at the tail end to arrive at the stem of the word. There are different algorithms used to find out how many characters have to be chopped off, but the algorithms don't actually know the meaning of the word in the language it belongs to. In lemmatization, the algorithms do have this knowledge. In fact, you can even say that these algorithms refer to a dictionary to understand the meaning of the word before reducing it to its root word, or lemma.

So, a lemmatization algorithm would know that the word better is derived from the word good, and hence, the lemma is good. But a stemming algorithm wouldn't be able to do the same. There could be over-stemming or under-stemming, and the word better could be reduced to either bet, or bett, or just retained as better. But there is no way for stemming to reduce better to its root word good. This is the difference between stemming and lemmatization.
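A small illustration of that difference, using NLTK (one common choice; spaCy and other libraries behave similarly). Note that the WordNet lemmatizer needs a part-of-speech hint to map better to good:

import nltk
from nltk.stem import PorterStemmer, WordNetLemmatizer

nltk.download("wordnet", quiet=True)   # lexical database the lemmatizer consults

stemmer = PorterStemmer()
lemmatizer = WordNetLemmatizer()

print(stemmer.stem("better"))                    # 'better' (no dictionary knowledge)
print(lemmatizer.lemmatize("better", pos="a"))   # 'good' (adjective lemma from WordNet)
print(stemmer.stem("running"))                   # 'run'
print(lemmatizer.lemmatize("running", pos="v"))  # 'run'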

More on Machine Learning: An Introduction to Classification in Machine Learning

As you can probably tell by now, the obvious advantage of lemmatization is that it is more accurate than stemming. So, if you're dealing with an NLP application such as a chatbot or a virtual assistant, where understanding the meaning of the dialogue is crucial, lemmatization would be useful. But this accuracy comes at a cost.

Because lemmatization involves deriving the meaning of a word from something like a dictionary, it's very time-consuming. So most lemmatization algorithms are slower than their stemming counterparts. There is also a computational overhead for lemmatization; however, in most machine learning problems, computational resources are rarely a cause for concern.

More on Machine Learning: Multiclass Classification With an Imbalanced Data Set

So should you use stemming or lemmatization? I can't give a blanket answer to that question. Lemmatization and stemming are both much more complex than what I've made them appear here. There are a lot more things to consider about both approaches before making a decision. But I've rarely seen any significant improvement in the efficiency and accuracy of a product that uses lemmatization over stemming. In most cases, at least in my experience, the overhead that lemmatization demands is not justified. So, it depends on the project in question. But I want to put out a disclaimer here: most of the work I have done in NLP is for text classification, and that's where I haven't seen a significant difference. There are applications where the overhead of lemmatization is perfectly justified, and in fact, lemmatization would be a necessity.

Read the original here:
Lemmatization in NLP and Machine Learning - Built In

Startup partners to bring machine learning, facial recognition to IoT … – Biometric Update

A partnership has been formed between Useful Sensors, which supports the addition of AI interfaces to consumer electronics, and OKdo, a subsidiary of RS Group that makes and sells single-board computers, to increase the availability of machine learning capabilities such as biometrics, presence detection, hand gesture recognition and voice interaction.

The partners plan to bring small, low-cost machine learning hardware modules to market through the global manufacturing and distribution deal.

Useful Sensors makes a coin-sized module with a camera and a microcontroller that can perform facial recognition, according to the announcement. Other applications for the company's person sensor technology include person detection, QR code scanning, and speech recognition.

"We're excited to be working with OKdo," says Pete Warden, CEO and founder of Useful Sensors. "The company's deep expertise in building products that solve real problems for their customers is going to improve the solutions we're creating and help us come up with entirely new ways of helping system builders."

Useful Sensors was formed a year ago by Warden and CTO Manjunath Kudlur, who are both founding members of Google's TensorFlow open-source machine learning framework.

The companies are carrying out live demonstrations of their joint technology at Embedded World in Nuremberg, Germany this week.


See the original post:
Startup partners to bring machine learning, facial recognition to IoT ... - Biometric Update

Machine Learning Program Sorts Plastic With High Success Rate – waste360

Studies show that only about 5 percent of all plastic that people want to recycle makes it through the process and is renewed. This low number can be attributed to contaminated materials, water requirements and discarded waste, along with a drastic 263 percent increase in Americans' plastic waste since 1980.

A new machine learning model has been developed by a team at University College London that is able to isolate compostable and biodegradable plastics and help to improve recycling efficiency and accuracy.

Using data based on a classification system using hyperspectral imaging, a machine learning program has been able to observe and sort individual pieces of plastic waste.

Depending on the size of the material, the program can achieve perfect accuracy in sorting: plastic pieces larger than 10mm by 10mm were sorted with 100 percent accuracy, while accuracy for smaller pieces was lower.


Read more from the original source:
Machine Learning Program Sorts Plastic With High Success Rate - waste360

‘Meta-Semi’ machine learning approach outperforms state-of-the-art algorithms in deep learning tasks – Tech Xplore

by Tsinghua University Press

Meta-Semi trains deep networks using pseudo-labeled samples whose gradient directions are similar to labeled samples. Algorithm 1 shows the Meta-Semi pseudocode. The Meta-Semi algorithm outperforms state-of-the-art semi-supervised learning algorithms. Credit: CAAI Artificial Intelligence Research, Tsinghua University Press

Deep learning based semi-supervised learning algorithms have shown promising results in recent years. However, they are not yet practical in real semi-supervised learning scenarios, such as medical image processing, hyper-spectral image classification, network traffic recognition, and document recognition.

In these scenarios, labeled data is too scarce for extensive hyper-parameter search, yet existing algorithms introduce multiple tunable hyper-parameters. A research team has proposed a novel meta-learning-based semi-supervised learning algorithm called Meta-Semi, which requires tuning only one additional hyper-parameter. Their Meta-Semi approach outperforms state-of-the-art semi-supervised learning algorithms.

The team published their work in the journal CAAI Artificial Intelligence Research.

Deep learning, a machine learning technique where computers learn by example, is showing success in supervised tasks. However, the process of data labeling, where the raw data is identified and labeled, is time-consuming and costly. Deep learning in supervised tasks can be successful when there is plenty of annotated training data available. Yet in many real-world applications, only a small subset of all the available training data are associated with labels.

"The recent success of deep learning in supervised tasks is fueled by abundant annotated training data," said Gao Huang, associate professor with the Department of Automation at Tsinghua University. However, the time-consuming, costly collection of precise labels is a challenge researchers have to overcome. "Meta-semi, as a state-of-the-art semi-supervised learning approach, can effectively train deep models with a small number of labeled samples," said Huang.

The research team's Meta-Semi classification algorithm efficiently exploits the labeled data while requiring only one additional hyper-parameter to achieve impressive performance under various conditions. In machine learning, a hyper-parameter is a parameter whose value can be used to direct the learning process.

"Most deep learning based semi-supervised learning algorithms introduce multiple tunable hyper-parameters, making them less practical in real semi-supervised learning scenarios where the labeled data is scarce for extensive hyper-parameter search," said Huang.

The team developed their algorithm working from the assumption that the network could be trained effectively with the correctly pseudo-labeled unannotated samples. First they generated soft pseudo labels for the unlabeled data online during the training process based on the network predictions.

Then they filtered out the samples whose pseudo labels were incorrect or unreliable and trained the model using the remaining data with relatively reliable pseudo labels. Their process naturally yielded a meta-learning formulation where the correctly pseudo-labeled data had a similar distribution to the labeled data. In their process, if the network is trained with the pseudo-labeled data, the final loss on the labeled data should be minimized as well.
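The paper's released code is not reproduced here, but the filtering idea described above can be sketched in PyTorch. Everything below is illustrative: the function name, the hard cosine-similarity threshold and the single combined update are simplifications of the meta-learning formulation, and soft-target cross_entropy requires PyTorch 1.10 or later.

import torch
import torch.nn.functional as F

def meta_semi_step(model, optimizer, x_lab, y_lab, x_unlab, threshold=0.0):
    params = [p for p in model.parameters() if p.requires_grad]

    # 1) Soft pseudo-labels for the unlabeled batch from the network's own predictions.
    with torch.no_grad():
        pseudo = F.softmax(model(x_unlab), dim=1)

    # 2) Reference gradient direction computed on the labeled batch.
    lab_loss = F.cross_entropy(model(x_lab), y_lab)
    lab_grad = torch.autograd.grad(lab_loss, params)
    lab_vec = torch.cat([g.flatten() for g in lab_grad])

    # 3) Keep only pseudo-labeled samples whose gradient points the same way as the
    #    labeled-data gradient (one backward pass per sample: slow but easy to follow).
    keep = []
    for i in range(x_unlab.size(0)):
        loss_i = F.cross_entropy(model(x_unlab[i:i + 1]), pseudo[i:i + 1])
        grad_i = torch.autograd.grad(loss_i, params)
        vec_i = torch.cat([g.flatten() for g in grad_i])
        if F.cosine_similarity(vec_i, lab_vec, dim=0) > threshold:
            keep.append(i)

    # 4) Update on the labeled batch plus the retained pseudo-labeled samples.
    optimizer.zero_grad()
    loss = F.cross_entropy(model(x_lab), y_lab)
    if keep:
        idx = torch.tensor(keep)
        loss = loss + F.cross_entropy(model(x_unlab[idx]), pseudo[idx])
    loss.backward()
    optimizer.step()
    return loss.item(), len(keep)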

The team's Meta-Semi algorithm achieved competitive performance under various conditions of semi-supervised learning. "Empirically, Meta-Semi outperforms state-of-the-art semi-supervised learning algorithms significantly on the challenging semi-supervised CIFAR-100 and STL-10 tasks, and achieves competitive performance on CIFAR-10 and SVHN," said Huang.

CIFAR-10, CIFAR-100, STL-10, and SVHN are datasets, or collections of images, that are frequently used in training machine learning algorithms. "We also show theoretically that Meta-Semi converges to the stationary point of the loss function on labeled data under mild conditions," said Huang. Compared to existing deep semi-supervised learning algorithms, Meta-Semi requires much less effort for tuning hyper-parameters, but achieves state-of-the-art performance on the four competitive datasets.

Looking ahead to future work, the research team's aim is to develop an effective, practical and robust semi-supervised learning algorithm. "The algorithm should require a minimal number of data annotations, minimal efforts of hyper-parameter tuning, and a minimized training time. To attain this goal, our future work may focus on reducing the training cost of Meta-Semi," said Huang.

More information: Yulin Wang et al, Meta-Semi: A Meta-Learning Approach for Semi-Supervised Learning, CAAI Artificial Intelligence Research (2023). DOI: 10.26599/AIR.2022.9150011

Provided by Tsinghua University Press

See the original post:
'Meta-Semi' machine learning approach outperforms state-of-the-art algorithms in deep learning tasks - Tech Xplore

How machine learning is used in portfolio management? – Rebellion Research

How machine learning is used in portfolio management?

Artificial Intelligence & Machine Learning

Introduction

Modern Portfolio Theory was first introduced by Harry Markowitz in 1952. The groundbreaking investment theory demonstrated that the performance of an individual stock is not as important as the performance of an entire portfolio. The interpretations of his article on Portfolio Selection [1], in The Journal of Finance, taught investors that risk, and not the best price, should be the cornerstone of a portfolio [2]. Once an investor's risk tolerance has been established, building a portfolio around it is streamlined.

The goal of portfolio management is to choose the best strategy to maximize returns on investment, ensure portfolio flexibility or optimize risk. Attaining such objectives involves the optimal allocation of investment funds, optimization of risk factors, consideration of financial ratios (such as the Sharpe ratio) and much more. [3]

Machine learning models help portfolio managers with trade execution, data parsing, idea generation and pattern recognition, alpha factor design, asset allocation, position sizing and strategy testing, and they help eliminate human biases.

The goal of this project is to identify an efficient machine learning architecture that will replicate the stock basket's trend almost perfectly. The selected ML model will have superior prediction capabilities and will adapt accurately to directional changes in prices. Thus, the ML model will help us predict expected stock returns. Additionally, using past data we can calculate the standard deviations and gauge the risk.

Now, these stocks will be parsed through various portfolio optimization techniques. To identify the correct optimization technique, in-depth research on various techniques will be carried out. This will produce an optimized portfolio with a weight for each stock.

Literature Review & Model Implementation

1) Mean-variance portfolio optimization using machine learning-based stock price prediction (Wei Chen, Haoyu Zhang, Mukesh Kumar Mehlawat, Lifen Jia) [5]

The paper presents a hybrid model based on machine learning for stock prediction and a mean-variance (MV) model for portfolio selection. The model follows two stages:

Stock Prediction: Combines eXtreme Gradient Boosting (XGBoost) with an improved firefly algorithm (IFA). XGBoost is an improved gradient boosted decision tree and is composed of multiple decision trees. It is, therefore, a suitable classifier for the financial markets. The accuracy of prediction is calculated using mean absolute percentage error (MAPE), mean square error (MSE), mean absolute error (MAE), and root mean square error (RMSE).

Portfolio Selection: Stocks with higher potential returns are selected according to each stock's predicted prices, and the MV model is employed for allocating the investment proportions of the portfolio. The portfolio is evaluated on its annualized mean return, annualized standard deviation, annualized Sharpe ratio, and annualized Sortino ratio (a variation of the Sharpe ratio that considers the standard deviation of only negative portfolio returns instead of the total standard deviation).
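For reference, the annualized evaluation metrics named above can be computed from a series of daily portfolio returns along these lines (a sketch; it assumes 252 trading days per year and a zero risk-free rate by default):

import numpy as np

def annualized_metrics(daily_returns, risk_free=0.0, periods=252):
    r = np.asarray(daily_returns, dtype=float)
    ann_return = r.mean() * periods
    ann_std = r.std(ddof=1) * np.sqrt(periods)
    sharpe = (ann_return - risk_free) / ann_std
    # Sortino replaces the total standard deviation with downside deviation only.
    downside_std = r[r < 0].std(ddof=1) * np.sqrt(periods)
    sortino = (ann_return - risk_free) / downside_std
    return ann_return, ann_std, sharpe, sortino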

The paper concludes that ML portfolio management pricing methods outperform the benchmarks concerning accuracy, potency, and efficiency.

Methodology

We predicted the stock price of three companies using ARIMA, LSTM-RNN, and Transformer architectures. The accuracy of prediction is calculated using root mean square error (RMSE), mean absolute error (MAE), mean absolute percentage error (MAPE), and Profit & Loss (PNL) rubrics.
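The three error metrics are standard and can be computed directly from the actual and predicted closing prices, as in the sketch below (the PNL rubric depends on the trading rule applied to the predictions, so it is not shown):

import numpy as np

def prediction_errors(actual, predicted):
    actual, predicted = np.asarray(actual, dtype=float), np.asarray(predicted, dtype=float)
    err = actual - predicted
    rmse = np.sqrt(np.mean(err ** 2))             # root mean square error
    mae = np.mean(np.abs(err))                    # mean absolute error
    mape = np.mean(np.abs(err / actual)) * 100    # mean absolute percentage error (%)
    return rmse, mae, mape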

a) Terminology

ARIMA[6][7]: AutoRegressive Integrated Moving Average is a time series forecasting model that incorporates autocorrelation measures to model non-stationary data and predict future values. The autocorrelation part of the model measures the dependency of a signal on a delayed copy of itself, as a function of the delay. ARIMA models are capable of capturing temporal structures in time-series data. This model uses three parameters: p (the order of the autoregressive part), d (the degree of differencing), and q (the order of the moving-average part).

LSTM-RNN[8]: Long Short-Term Memory based Recurrent Neural Network. An RNN is a generalization of a feedforward neural network that has an internal memory. As its name suggests, it is recurrent in nature, i.e., it performs the same function for every input while the output for the current input depends on the previous computation. Once the output is generated, it is replicated and sent back into the RNN, so decisions are based on the current input and the output learnt at the previous step. LSTM is a modified version of the RNN that makes it easier to retain past data in memory.

Transformer[9]: a neural network architecture that uses a self-attention mechanism, allowing the model to focus on the relevant parts of the time series to improve prediction quality. It consists of single-head attention and multi-head attention. For each input, the self-attention mechanism scans the input sequence and adds the result to the output sequence. These processes are parallelized, which accelerates the learning process.

b) Data

We have incorporated the daily OHLCV (Open, High, Low, Close, Volume) data of three companies (Dr Reddy's Laboratories Ltd, Wipro Limited, and Reliance Steel & Aluminum Co) for a time frame of 252 days, from 16-Sept-20 to 15-Sept-21. The data has been extracted from the NSE website and stored locally. The data set is divided in a ratio of 7:3, where 70% of it is used as training data and the remainder for testing. For this project, we ignore the top and bottom 5% of values, as they may be outliers.

c) Predicting Stocks

ARIMA: We initially scaled our features to avoid intensive computation. We imported the ARIMA module from statsmodels.tsa.arima.model to train our model. The ARIMA model uses the first 180 rows of the dataset as training data, with parameters p=2, d=1, q=0. The learnt results are then used to predict the close price for the remaining rows of the dataset (row 180 onwards).
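A condensed version of that workflow might look as follows (a sketch rather than the project's actual script; the file name and column name are placeholders inferred from the description above, and the scaling step is omitted):

import pandas as pd
from statsmodels.tsa.arima.model import ARIMA

df = pd.read_csv("ohlcv.csv")            # assumed local OHLCV file downloaded from NSE
train, test = df["Close"][:180], df["Close"][180:]

model = ARIMA(train, order=(2, 1, 0))    # p=2, d=1, q=0 as described above
fitted = model.fit()

forecast = fitted.forecast(steps=len(test))   # one prediction per remaining row
print(forecast.head())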

LSTM-RNN: We are developing a neural network regressor for continuous value prediction using LSTM, and we begin by initializing it. We then add the first LSTM layer together with a Dropout layer. The LSTM layer has 250 neurons that will capture the model trend. Since we need to add another layer after it, return_sequences is set to True.

The input_shape corresponds to the number of time steps and indicators. For the Dropout layer, a value of 0.2 means that 20% of the 250 neurons will be ignored. This is repeated for layers 2, 3, and 4. Finally, a one-dimensional output layer is created; it is one-dimensional because we only predict one price at a time. We then compile the model using the Adam optimizer with mean squared error as the loss, and fit it.
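Read together, the two paragraphs above correspond roughly to the following Keras sketch (the window length and number of indicators are placeholders, since they come from the project's own preprocessing):

from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, Dropout, Dense

timesteps, n_features = 60, 1   # placeholders; set from the prepared training windows

model = Sequential()
model.add(LSTM(250, return_sequences=True, input_shape=(timesteps, n_features)))
model.add(Dropout(0.2))                 # ignore 20% of the 250 units on each update
model.add(LSTM(250, return_sequences=True))
model.add(Dropout(0.2))
model.add(LSTM(250, return_sequences=True))
model.add(Dropout(0.2))
model.add(LSTM(250))                    # final LSTM layer returns a single vector
model.add(Dropout(0.2))
model.add(Dense(1))                     # one-dimensional output: one price per window
model.compile(optimizer="adam", loss="mean_squared_error")
# model.fit(X_train, y_train, epochs=..., batch_size=...)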

Fig 4: LSTM-RNN Implementation (File Name: rnn.py)

Transformer: To encode the notion of time into the model, time encoders have been incorporated in the code (class Time2Vector). The time encoder comprises two main ideas: a periodic component (ReLU function) and a non-periodic component (linear function). The Time2Vector class encompasses two functions: build (initializes the weight matrices) and call (calculates the periodic and linear time features). We then initialize the three Transformer encoder components: single-head attention, multi-head attention, and the Transformer encoder layer.
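A hedged sketch of such a time-embedding layer is shown below, with one linear and one periodic feature per time step. The sketch uses a sine for the periodic part, as in the original Time2Vec formulation; the report's own code, which mentions a ReLU above, may differ in detail, and the weight names here are illustrative.

import tensorflow as tf
from tensorflow.keras.layers import Layer

class Time2Vector(Layer):
    # Illustrative time-embedding layer: one linear and one periodic feature per step.
    def __init__(self, seq_len, **kwargs):
        super().__init__(**kwargs)
        self.seq_len = seq_len

    def build(self, input_shape):
        # Trainable weights for the non-periodic (linear) and periodic components.
        self.w_lin = self.add_weight(name="w_lin", shape=(self.seq_len,),
                                     initializer="uniform", trainable=True)
        self.b_lin = self.add_weight(name="b_lin", shape=(self.seq_len,),
                                     initializer="uniform", trainable=True)
        self.w_per = self.add_weight(name="w_per", shape=(self.seq_len,),
                                     initializer="uniform", trainable=True)
        self.b_per = self.add_weight(name="b_per", shape=(self.seq_len,),
                                     initializer="uniform", trainable=True)

    def call(self, x):
        # Summarize the first four (OHLC) features of each time step with their mean.
        x = tf.math.reduce_mean(x[:, :, :4], axis=-1)             # (batch, seq_len)
        linear = self.w_lin * x + self.b_lin                       # non-periodic feature
        periodic = tf.math.sin(self.w_per * x + self.b_per)        # periodic feature
        return tf.concat([tf.expand_dims(linear, -1),
                          tf.expand_dims(periodic, -1)], axis=-1)  # (batch, seq_len, 2)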

(Note: A snippet of this is not included in the report. Kindly refer to the attached Python files to view it.) After these initializations, the training process begins, followed by testing on the held-out data. The parameters used are batch size: 12, sequence length: 3, and size of model input: 5.

The stock_prediction.py imports all these models and returns the final outputs depending on the stock ticker symbol entered.

2) Alternative Approaches to Mean-Variance Optimization (Maria Debora Braga) [10] This paper illustrates two examples of risk-based asset allocation strategies: the global minimum-variance strategy and the optimal risk parity strategy.

Global Minimum-Variance Strategy: the asset manager recommends a portfolio already known to the reader. It is the portfolio at the leftmost end of the efficient frontier which, given that location, is the portfolio with the smallest attainable ex ante standard deviation. It is not necessary to implement mean-variance optimization to identify the global minimum-variance portfolio (GMVP). It can be replaced with a simpler optimization that takes the formula for portfolio variance as the objective function to be minimized and includes the traditional long-only and budget constraints. Therefore, the optimization problem to be solved to identify the GMVP can be written formally as follows:
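The formula itself does not survive in this excerpt; the standard formulation it refers to is

\min_{w} \; w^{\top}\Sigma\, w \quad \text{subject to} \quad \sum_{i=1}^{N} w_i = 1, \quad w_i \ge 0 \;\; \forall i,

where \Sigma is the covariance matrix of asset returns and w is the vector of portfolio weights.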

Optimal Risk Parity Strategy: aims to overcome the problem of risk concentration. The alternative designation of optimal risk parity as an equally weighted risk contribution strategy (or portfolio) suggests the criterion used to structure the portfolio coherently with this goal: to give each asset class a weight such that the amount of risk it contributes to overall portfolio risk is equal to the amount contributed by any other asset class in the portfolio. This condition can be written formally as equality among component risks, where a component risk is defined as the product of an asset class's weight in the portfolio and its marginal contribution to risk.

The identification of optimal asset class weights for a risk parity portfolio involves solving an optimization problem which is different from mean-variance optimization. More precisely, it includes a new objective function to be minimized together with the traditional constraints on portfolio weights.
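One common way to write that objective (a standard formulation, not necessarily the exact one used in the cited chapter) is to minimize the squared differences between the assets' risk contributions:

\min_{w} \sum_{i=1}^{N}\sum_{j=1}^{N}\big(w_i(\Sigma w)_i - w_j(\Sigma w)_j\big)^2 \quad \text{subject to} \quad \sum_{i=1}^{N} w_i = 1, \quad w_i \ge 0,

where w_i(\Sigma w)_i is asset i's contribution to total portfolio variance; the minimum of zero is attained when all contributions are equal.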

Mean-variance analysis is the part of modern portfolio theory that assumes that investors will make rational decisions about investments if they have complete information. A major assumption here is that investors seek low risk and high reward. The two main components of mean-variance analysis are as follows:

Variance: represents how spread out the numbers in a set are.

Expected return: a probability expressing the estimated return of the investment in the security.

If two different securities have the same expected return, but one has lower variance, the one with lower variance is the better pick. Similarly, if two different securities have approximately the same variance, the one with the higher return is the better pick.

Results & Conclusion

From our results for the LSTM-RNN and the Transformer, compared to the ARIMA benchmark, our conclusion is that the LSTM is better at almost replicating the original trend, but it does not react to directional changes as well as the Transformer, which in turn does not score as well on closeness to the original stock trend.

No stock prediction model can provide greater accuracy without increasing the complexity of its prediction operations exponentially. The key to optimizing a stock forecasting system is balancing accuracy and computational expense.

The rationale behind the use of ML is not just about novel architectures that revolutionize forecasting and time-series prediction, but also about evaluating each use case in terms of its own domain. For this specific use case, metrics are certainly an evaluation factor, but it is the PNL that gives deeper insight into the model's ability to predict the changes, trends and patterns that matter most when looking at the big picture.

Identifying the correct ML architecture is cardinal in active portfolio management because the predicted expected stock returns form the basis for stock selection in portfolio optimization.

Also, from our research, we conclude that risk-based optimization techniques are superior to the traditional mean-variance technique. When risk-based strategies are considered, the portfolio selection phase changes because the strategy produces a single risky portfolio proposal. Therefore, an investor does not have to choose an optimal point on the efficient frontier according to their risk tolerance/aversion and investment objective.

The decision-making process is therefore analogous to the Capital Asset Pricing Model. In CAPM, the same risky portfolio is offered to all the investors. A point is then selected on the Capital Market line. This is the line, in the risk-return space, that originates from the risk-free rate and is tangent to the efficient frontier. The tangency portfolio is also the portfolio with the maximum Sharpe ratio.

Written by Varun Chandra Gupta

References & Citations

[1] Portfolio Selection, Wiley Online Library [Online]

Available: https://onlinelibrary.wiley.com/doi/full/10.1111/j.1540-6261.1952.tb01525.x [Accessed Nov. 03, 2022]

[2] Portfolio Management, Investopedia

Available: https://www.investopedia.com/articles/07/portfolio-history.asp [Accessed Nov. 03, 2022]

[3] Portfolio Management, groww.in. [Online].

Available: https://groww.in/p/portfolio-management [Accessed Oct. 07, 2022]

[4] Derek Snow, Machine Learning in Asset Management, Part 1: Portfolio Construction - Trading Strategies, The Journal of Financial Data Science, Winter 2020, [Online Document], Available: https://jfds.pm-research.com/content/2/1/10 [Accessed Nov. 03, 2022]

[5] Wei Chen, Haoyu Zhang, Mukesh Kumar Mehlawat and Lifen Jia, Mean-variance portfolio optimization using machine learning-based stock price prediction, ScienceDirect, [Online Document], Available: https://reader.elsevier.com/reader/sd/pii/S1568494620308814?token=072C0F96416E333E05C4C88C0F3EF956F66CB2B591F5865147EF13701FFC6CEB96D70EA6ADA889440EE8F47A7D90D1E6&originRegion=us-east-1&originCreation=20221007063440 [Accessed Oct. 07, 2022]

[6] ARIMA Model, ProjectPro [Online]

Available: https://www.projectpro.io/article/how-to-build-arima-model-in-python/544 [Accessed Nov. 03, 2022]

[7] ARIMA Model, TowardsDataScience [Online]

Available: https://towardsdatascience.com/time-series-forecasting-predicting-stock-prices-using-an-arima-model-2e3b3080bd70 [Accessed Nov. 03, 2022]

[8] LSTM Model, TowardsDataScience [Online]

Available: https://towardsdatascience.com/lstm-for-google-stock-price-prediction-e35f5cc84165 [Accessed Nov. 03, 2022]

[9] Stock Predictions with state-of-the-art Transformers and Time Embeddings, TowardsDataScience [Online]

Available: https://towardsdatascience.com/stock-predictions-with-state-of-the-art-transformer-and-time-embeddings-3a4485237de6 [Accessed Nov. 03, 2022]

[10] Maria Debora Braga, Alternative Approaches to Traditional Mean-Variance Optimisation, SpringerLink, [Online Document], Available: https://link.springer.com/chapter/10.1007/978-3-319-32796-9_6 [Accessed Nov. 03, 2022]

[11] Michael Pinelis, David Ruppert, Machine learning portfolio allocation, ScienceDirect, [Online Document], Available: https://reader.elsevier.com/reader/sd/pii/S2405918821000155?token=E142E1AA4431A9B55ECF1C2ABA5CC7E52CA69D5E1C728796D64FEC02BCFB0E3ABEEFAA56E0AF1E431BAE0CE7C041CE5E&originRegion=us-east-1&originCreation=20221007052506 [Accessed Oct. 07, 2022]

[12] Jörn Sass, Anna-Katharina Thös, Risk reduction and portfolio optimization using clustering methods, ScienceDirect, [Online Document], Available: https://reader.elsevier.com/reader/sd/pii/S2452306221001416?token=5E952856364B9133BB345373B3373F4A2A5D62EAC2886F34758D28024D01C053253082DAC3408677BB6E425ECB264C6F&originRegion=us-east-1&originCreation=20221007063710 [Accessed Oct. 07, 2022]


Follow this link:
How machine learning is used in portfolio management? - Rebellion Research

Genomic Analysis, Machine Learning Leads to Chemotherapy … – GenomeWeb

A Hong Kong University of Science and Technology- and Samsung Medical Center-led team reporting in Genome Medicine outlines molecular features linked to chemotherapy resistance in forms of glioblastoma (GBM) containing wild-type isocitrate dehydrogenase (IDH-wt) genes. Using exome sequencing, GliomaSCAN targeted panel sequencing, and/or RNA sequencing combined with machine learning (ML), the researchers characterized molecular features found in patient-derived glioma stem-like cells (GSCs) originating from 29 temozolomide chemotherapy-resistant IDH-wild type GBM cases and 40 IDH-wild type GBM cases that were susceptible to the adjuvant chemotherapy treatment. In the process, they came up with a combined tumor signature coinciding with temozolomide chemotherapy response, highlighting a handful of genes with enhanced expression in the temozolomide-resistant cells, along with other chemotherapy resistance- or sensitivity-related features. With multisector temozolomide screening, meanwhile, the authors show that chemotherapy response can vary within tumor samples from the same individual. "We identified molecular characteristics associated with [temozolomide] sensitivity, and illustrate the potential clinical value of a ML model trained from pharmacogenomic profiling of patient-derived GSC against IDH-wt GBMs," the authors write.

View original post here:
Genomic Analysis, Machine Learning Leads to Chemotherapy ... - GenomeWeb

Deep learning-based systems aid radiographers in prediction … – Omnia Health Insights

Artificial intelligence, and deep learning in particular, has been used extensively for image classification and segmentation, including on medical images for diagnosis and prognosis prediction. However, the use of deep learning in radiotherapy prognostic modelling is still limited.

Deep learning is a subset of machine learning and artificial intelligence that uses deep neural networks, structured loosely like the human neural system, trained on big data. Deep learning narrows the gap between data acquisition and meaningful interpretation without explicit programming. It has so far outperformed most classification and regression methods and can automatically learn data representations for specific tasks.

The application areas of deep learning in radiation oncology include image segmentation and detection, image phenotyping and radiomic signature discovery, clinical outcome prediction, image dose quantification, dose-response modeling, radiation adaption, and image generation.

An article published in Clinical Oncology analyses 10 studies on the subject noting that researchers suffer from the same issues that plagued early normal tissue complication probability modelling, including small, single-institutional patient cohorts, lack of external validation, poor data and model reporting, use of late toxicity data without taking time-to-event into account, and nearly exclusive focus on clinician-reported complications.

It adds that the studies, however, demonstrate how radiation dose, imaging and clinical data may be technically integrated in convolutional neural networks-based models; and some studies explore how deep learning may help better understand spatial variation in radiosensitivity. In general, there are several issues specific to the intersection of radiotherapy outcome modelling and deep learning, for example, the translation of model developments into treatment plan optimisation that will require an additional combined effort from the radiation oncology and artificial intelligence communities.

Hence, the use of machine learning and other sophisticated models to aid in prediction and decision-making has become widely popular across a breadth of disciplines. Within the greater diagnostic radiology, radiation oncology, and medical physics communities promising work is being performed in tissue classification and cancer staging, outcome prediction, automated segmentation, treatment planning, and quality assurance as well as other areas.

Dr. Ali Vahedi, Consultant Radiologist at Mubadala Healthcare, explains that AI has an important role to play in the present and future of radiology. At present, this is primarily in the form of helping clinicians improve efficiency and diagnostic capacity, which is essential given the exponential year-on-year increase in diagnostic tests conducted. He adds that AI has the potential to rapidly evaluate a vast quantity of imaging data, helping to prioritise worklists and diagnoses, which will aid in reducing reporting time, improve the accuracy of reports and limit discrepancies. In addition, it will give radiologists more time for direct patient care and vital research.


A study published in the Progress in Medical Physics Journal highlights that high-quality simulated three-dimensional (3D) CT images are essential when creating radiation treatment plans because the electron density and anatomical information of tumours and OARs are required to calculate and optimise dose distributions.

Radiotherapy plays an increasingly dominant role in the comprehensive multidisciplinary management of cancer. As radiation therapy machines and treatment techniques become more advanced, the role of medical physicists who ensure patient safety becomes more prominent. With the advancement of deep learning, its powerful optimisation capability has shown remarkable applicability in various fields. Its utility in radiation oncology and other medical physics areas has been discussed and verified in several research papers. These research fields range from radiation therapy processes to QA, super-resolution medical imaging, material decomposition, and 2D dose distribution deconvolution.

According to Dr. Vahedi, the global imaging market size is expected to grow, driven by numerous factors such as growth in the number of hospitals and clinics and rising demand for minimally invasive surgeries.

He adds that rapid advancements in medical therapy also necessitate regular multimodality imaging. In addition, technological advancements in medical imaging equipment are also contributing to the growth, with manufacturers introducing new products that are more compact, more cost-effective, and produce less ionising radiation than their predecessors. This improved affordability will invariably improve patient access to imaging. It can be concluded that over the past few years there has been a significant increase both in interest in and in the performance of deep learning techniques in this field.

Promising results have been obtained that demonstrate how deep learning-based systems can aid clinicians in their daily work, be it by reducing the time required, or the variability in segmentation, or by helping to predict treatment outcomes and toxicities. It remains to be seen when these techniques will be employed in routine clinical practice, but it seems warranted to assume that we will see AI contribute to improving radiotherapy soon. In conclusion, the application of deep learning has great potential in radiation oncology.

This article appears in the latest issue of Omnia Health Magazine. Read the full issue online today.


Excerpt from:
Deep learning-based systems aid radiographers in prediction ... - Omnia Health Insights

Machine learning helps drive REIT investments – Connected Real Estate Magazine

The commercial real estate industry has warmed to using machine learning technology as a tool that can handle various administrative tasks such as creating contracts and automating marketing campaigns, according to real estate data company ATTOM.

CRE's leveraging of tech looks like it's only going to increase going forward. A little more than $13.1 billion was invested in property technology (proptech) companies during the first two quarters of 2022, according to the Center for Real Estate Technology & Innovation (CRETI). The total was a 5.6 percent increase over 2021.

Real estate investment trusts (REITs) are using artificial intelligence (AI) and machine learning, and leading the AI revolution, according to a Deloitte survey. Today, 41 percent of REIT executives are fully on board with algorithms and machine learning models.

Some of the other admin tasks REITs are using AI for include sending scheduled marketing emails and automated rent reminders, according to ATTOM.

There are machine learning investment models that can analyze REIT-related data and develop investment strategies. This financial modeling is based on accurately estimating a property's current value from the future cash revenue it would generate.

The models are based on historical data as well as unknown variables that can shift the markets and economic climate in the future. Machine learning can help influence investment decisions and serve as a screening tool to help analysts improve at picking stocks and investment opportunities. Additionally, machine learning solutions can help property investors make quicker and more informed decisions, as they can estimate potential returns just from knowing a property's address.

Property investors are using AI and machine learning to create financial documents for CRE buildings, according to ATTOM. Data scraping technology can pull close to real-time revenues from market data services or community websites. The tech solutions estimate expenses by using existing portfolios or digital data providers information. Then, machine learning algorithms can find competitive property sets and decide if a CRE asset is doing better or worse than the competitive sets in real time.

Publicly traded REITs are often more closely correlated with equities than with real estate assets, the ATTOM team said. Analysts can apply data and machine learning to track the value of the underlying assets relative to the stock price and inform the analyst whether a public REIT is a buy or a sell.

The combination of big data, machine learning, macro-level analysis and human expertise can help make the best business decisions possible, according to ATTOM. That kind of well-rounded analysis helps investors sidestep assumptions. The analysis also factors in data that impacts real estate investments more accurately, such as competitor activity and climate.

Additionally, automation can improve how investors, clients and other stakeholders communicate. It might not be the most personable approach, but apps and chatbots can answer questions 24/7 and give all parties involved real-time updates on a projects progress. With mobile apps, investors can manage their portfolios regardless of their location. This allows for more successful strategic operations and better asset management.

Go here to read the rest:
Machine learning helps drive REIT investments - Connected Real Estate Magazine

Stack Overflow Survey Finds Most-Proven Technologies: Open … – Slashdot

Stack Overflow explored the "hype cycle" by asking thousands of real developers whether nascent tech trends have really proven themselves, and how they feel about them. "With AI-assisted technologies in the news, this survey's aim was to get a baseline for perceived utility and impact" of various technologies, writes Stack Overflow's senior analyst for market research and insights.

The results? "Open source is clearly positioned as the north star to all other technologies, lighting the way to the chosen land of future technology prosperity."Technologies such as blockchain or AI may dominate tech media headlines, but are they truly trusted in the eyes of developers and technologists? On a scale of zero (Experimental) to 10 (Proven), the top proven technologies by mean score are open source with 6.9, cloud computing with 6.5, and machine learning with 5.9. The lowest scoring were quantum computing with 3.7, nanotechnology with 4.5, and low code/no code with 4.6....

[When asked for the next technology that everyone will use], AI comes in at the top of the list by a large margin, but our three top proven selections (open source, machine learning, cloud computing) follow after....

It's one thing to believe a technology has a prosperous future; it's another to believe a technology deserves a prosperous future. Alongside the emergent sentiment, respondents also scored the same technologies on a zero (Negative Impact) to 10 (Positive Impact) scale for impact on the world. The top positive mean scoring technologies were open source with 7.2, sustainable technologies with 6.6 and machine learning with 6.5; the top negative mean scoring technologies were low code/no code, InnerSource, and blockchain, all with 5.3. Seeing low code/no code and blockchain score so low here makes sense because both could be associated with questionable job security in certain developer careers; however, it's surprising that AI is not there with them on the negative end of the spectrum. AI-assisted technology had an above average mean score for positive impact (6.2), and its percent positive score is not that far off from those of machine learning and cloud computing (28% vs. 33% or 32%).

Possibly what we are seeing here as far as why developers would not rate AI more negatively than technologies like low code/no code or blockchain but do give it a higher emergent score is that they understand the technology better than a typical journalist or think tank analyst. AI-assisted tech is the second highest chosen technology on the list for wanting more hands-on training among respondents, just below machine learning. Developers understand the distinction between media buzz around AI replacing humans in well-paying jobs and the possibility of humans in better quality jobs when AI and machine learning technologies mature. Low code/no code for the same reason probably doesn't deserve to be rated so low, but it's clear that developers are not interested in learning more about it.

Open source software is the overall choice for most positive and most proven scores in sentiment compared to the set of technologies we polled our users about. One quadrant of their graph shows three proven technologies which developers still had negative feelings about: biometrics, serverless computing, and rapid prototyping tools. (With "Internet of Things" straddling the line between positive and negative feelings.)

And there were two technologies which 10% of respondents thought would never be widely used in the future: low code/no code and blockchain. "Post-FTX scandal, it's clear that most developers do not feel blockchain is positive or proven," the analyst writes.

"However there is still desire to learn as more respondents want training with blockchain than cloud computing. There's a reason to believe in the direct positive impact of a given technology when it pays the bills."

Continued here:
Stack Overflow Survey Finds Most-Proven Technologies: Open ... - Slashdot

Artmajeur.com, one of the world’s largest online galleries, just banned Crawlers from conducting AI Machine – EIN News

IA (2022) Painting by Laure Bollinger

Artmajeur Gallery Artworks

Artmajeur.com, one of the world's largest online galleries, just banned crawlers from conducting AI machine learning in order to protect artists' creative rights

Samuel Charmetant - Artmajeur CEO

Being a platform dedicated to showcasing creators' work, Artmajeur recognizes the significance of preserving their intellectual property and creative rights. As a result, we have decided to discontinue allowing crawlers to do machine learning on our website. The ultimate goal of this decision is to protect our artists' work and ensure that their labor is not exploited for the benefit of others.

Artmajeur Gallery actively advocates the proper evolution of artificial intelligence while protecting artists' rights. It was a difficult decision to no longer allow crawlers to perform machine learning on our website. We recognize that there are huge benefits to machine learning in the digital world, as well as compelling reasons to crawl websites. But, the potential harm is simply too great to ignore.

Samuel Charmetant - Artmajeur CEO: "I am told the use of AI technologies comes with great promises to make the world a better place. However, we at Artmajeur are committed to protecting our artists and their unique works. I am determined to fight against the theft of their artwork by crawlers who steal their content."

Artists devote a significant amount of time, effort, and emotion to their work. Every piece is a manifestation of their unique point of view and artistic vision. When this work is shared on Artmajeur, it is done with the expectation that it will be seen and appreciated by others. But, without the artist's permission, it is not intended to be taken and used for other purposes.

Crawlers that scan our website for data may steal images, writing, and other submitted work from artists. The content can then be used for a variety of purposes, including generating new content for commercial gain. Artists are frequently unaware that their work has been used in this manner, which can result in missed opportunities and lost profits.

We are actively preserving our artists' intellectual property rights by no longer allowing crawlers to perform machine learning on our website. Our decision is consistent with our commitment to provide artists with a safe and supportive environment in which to display their work and interact with other members of the art world. We acknowledge that this decision may have an impact on people who employ machine learning for legitimate purposes. Yet, we believe that protecting our artists' creative rights outweighs any potential downsides by a large margin. We invite anyone who is interested in using our website for study or other purposes to contact us so that we can discuss their needs.

We look forward to continuing to provide safe and responsible support to the artistic community.

Samuel Charmetant
ARTMAJEUR
+33 9 50 95 99 66
press@artmajeur.com
Visit us on social media: Facebook | Twitter | LinkedIn


Visit link:
Artmajeur.com, one of the world's largest online galleries, just banned Crawlers from conducting AI Machine - EIN News