Category Archives: Data Mining

Faculty Position, Computer and Network Engineering job with UNITED ARAB EMIRATES UNIVERSITY | 305734 – Times Higher Education

Job Description

The College of Information Technology (CIT) has engaged in an ambitious reorganization effort aiming to harness the prevalence of computing and the rise of artificial intelligence to advance science and technology innovation, for the benefit of society. Under its new structure, the College will serve as the nexus of computing and informatics at the United Arab Emirates University (UAEU). CIT will build on the strength of its current research programs to create new multidisciplinary research initiatives and partnerships, across and beyond the university campus, critical to its long-term stability and growth. CIT will also expand its education portfolio with new multidisciplinary degree programs, including a BSc. in Artificial Intelligence, a BSc. in Data Science, a BSc. in Computational Linguistics (jointly with the College of Humanities and Social Sciences), an MSc. in IoT, and a Ph.D. in Informatics and Computing. Also planned is a suite of online Microcredentials in emerging fields of study, including IoT, Cybercrime Law and Digital Forensics, Blockchains, and Cloud Computing.

About the Position:

We seek faculty candidates with a strong research record in all areas of Artificial Intelligence and Data Science, with a special emphasis on emerging areas of Artificial Intelligence and Machine Learning and on the theoretical foundations and applications of Data Science and AI/ML in a wide range of fields and domain applications, including Smart IoT, Smart Environments, and Autonomous and Intelligent Systems. The successful candidates are expected to complement and enhance the current strength of the departments in AI and Data Science related areas, including Deep Learning, Natural Language Processing, Big Data, and Data Mining, and to contribute to the teaching and research in these areas.

Candidate Qualifications:

Candidates must hold a Ph.D. degree in computer science, information science or closely related areas from a recognized university.

Preferred qualifications include:

Faculty rank is commensurate with qualifications and experience. The positions will remain open until filled. The UAEU and CIT are committed to fostering a diverse, inclusive, and equitable environment and culture for students, staff, and faculty.

Application Instructions:

Applications must be submitted online at https://jobs.uaeu.ac.ae/search.jsp (postings under CIT). The instructions to complete an application are available on the website.

A completed application must include:

About the UAEU:

The United Arab Emirates University (UAEU) is the first academic institution in the United Arab Emirates. Founded by the late Sheikh Zayed bin Sultan Al Nahyan in 1976, UAEU is committed to innovation and excellence in research and education. As the country's flagship university, UAEU aims to create and disseminate fundamental knowledge through cutting-edge research in areas of strategic significance to the nation and the region, promote the spirit of discovery and entrepreneurship, and educate indigenous leaders of the highest caliber.


Special Instructions to Applicant

The review process will continue until the position is filled. A completed application must be submitted electronically at: https://jobs.uaeu.ac.ae/

Division: College of Information Tech. (CIT)
Department: Computer & Network Engineering (CIT)
Job Close Date: Open until filled
Job Category: Academic - Faculty

Read more here:

Faculty Position, Computer and Network Engineering job with UNITED ARAB EMIRATES UNIVERSITY | 305734 - Times Higher Education

CNN-XGBoost fusion-based affective state recognition using EEG spectrogram image analysis | Scientific Reports – Nature.com

Figure 1 illustrates the proposed method, which is generally divided into two segments. On the left, we take a feature fusion-based approach, emphasizing signal processing on the acquired dataset by denoising it with a band-pass filter and extracting the alpha, beta, and theta bands for further processing. Numerous features have been extracted from these bands. The feature extraction methods include the Fast Fourier Transform, Discrete Cosine Transform, Poincare analysis, Power Spectral Density, Hjorth parameters, and some statistical features. The Chi-square and Recursive Feature Elimination procedures were used to choose the discriminative features among them. Finally, we utilized classification methods such as Support Vector Machine and Extreme Gradient Boosting to classify all the dimensions of emotion and obtain accuracy scores. On the other hand, we take a spectrogram image-based 2DCNN-XGBoost fusion approach, where we utilize a band-pass filter to denoise the data in the region of interest for different cognitive states. Following that, we performed the Short-Time Fourier Transform and obtained spectrogram images. To train the model on the retrieved images, we use a two-dimensional Convolutional Neural Network (CNN) and a dense neural network layer to obtain the retrieved features from the CNN's trained layer. After that, we utilized Extreme Gradient Boosting to classify all of the dimensions of emotion based on the retrieved features. Finally, we compared the outcomes from both approaches.

An overview of the proposed method.

In the proposed method (i.e., Fig. 1), we have used the DREAMER3 dataset. Audio and video stimuli were used to elicit the emotional responses of the participants in this dataset. The dataset consists of 18 stimuli tested on participants, selected and analyzed by Gabert-Quillen et al.16 to induce emotional sensation. The clips came from several films showing a wide variety of feelings. Two films each centered on one emotion: amusement, excitement, happiness, calmness, anger, disgust, fear, sadness, and surprise. All of the clips are between 65 and 393 seconds long, giving participants plenty of time to convey their feelings17,18. However, just the last 60 s of the video recordings were considered for the next steps of the study. The clips were shown to the participants on a 45-inch television monitor with an attached speaker so that they could hear the soundtrack. The EEG signals were captured with the EMOTIV EPOC, a 16-channel wireless headset; data were acquired from sixteen distinct scalp locations using these channels. The wireless SHIMMER ECG sensor provided additional data. This study, however, focused solely on the EEG signals from the DREAMER dataset.

Initially, the data collection was performed for 25 participants, but due to some technical problems, data collection from 2 of them was incomplete. As a result, the data from 23 participants were included in the final dataset. The dataset consists of trial and pre-trial signals; the pre-trial signals were collected as a baseline for each stimulus test. The data dimensions of the EEG signals from the DREAMER dataset are shown in Table 2.

EEG signals usually contain a lot of noise. In particular, the great majority of ocular artifacts occur below 4 Hz, muscular motions occur above 30 Hz, and power line noise occurs between 50 and 60 Hz3. For a better analysis, the noise must be reduced or eliminated. Additionally, to work on a specific area, we must concentrate on the frequency range that provides us with the stimuli-induced signals. The information linked to the emotion recognition task is contained in a frequency band ranging from 4 to 30 Hz3. We utilized band-pass filtering to acquire sample values ranging from 4 to 30 Hz, removing the noise from the signals and isolating the band of interest.

The band-pass filter is a technique or procedure that accepts frequencies within a specified range of frequency bands while rejecting any frequencies outside the range of interest. The band-pass filter uses a combination of a low-pass and a high-pass filter to eliminate frequencies that aren't required. The fundamental goal of such a filter is to limit the signal's bandwidth, allowing us to acquire the signal we need from the frequency range we require while also reducing unwanted noise by blocking frequency regions we won't be using anyway. In both sections of our proposed method, we used a band-pass filter. In the feature fusion-based approach, we used this filtering technique to filter the frequency band between 4 and 30 Hz, which contains the crucial information we require. This helps in the elimination of unwanted noise. We've decided to divide the signals of interest into three more bands: theta, alpha, and beta. These bands were chosen because they are the most commonly used bands for EEG signal analysis. The definition of band borders is somewhat subjective; the ranges that we use in our case are theta between 4 and 8 Hz, alpha between 8 and 13 Hz, and beta between 13 and 20 Hz. For the 2DCNN-XGBoost fusion-based approach, using this filtering technique, we filtered the frequency range between 4 and 30 Hz, which contains the relevant signals, and generated spectrum images. Here the spectrograms were extracted from the signals using the STFT and transformed into RGB pictures.
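The filtering step described above can be sketched with SciPy as follows. This is a minimal illustration, not the paper's code: the filter order and the synthetic test signal are our own assumptions, and only the 128 Hz sampling rate comes from the dataset description.

```python
import numpy as np
from scipy.signal import butter, filtfilt

FS = 128  # DREAMER EEG sampling rate (Hz)

def bandpass(signal, low, high, fs=FS, order=4):
    """Zero-phase Butterworth band-pass filter between low and high (Hz)."""
    nyq = fs / 2.0
    b, a = butter(order, [low / nyq, high / nyq], btype="band")
    return filtfilt(b, a, signal)

# Synthetic 10-second signal: 6 Hz (theta), 10 Hz (alpha), 50 Hz line noise
t = np.arange(0, 10, 1 / FS)
x = np.sin(2 * np.pi * 6 * t) + np.sin(2 * np.pi * 10 * t) \
    + 0.5 * np.sin(2 * np.pi * 50 * t)

filtered = bandpass(x, 4, 30)   # band of interest (line noise removed)
theta = bandpass(x, 4, 8)       # theta band
alpha = bandpass(x, 8, 13)      # alpha band
beta = bandpass(x, 13, 20)      # beta band
```

Filtering forward and backward with `filtfilt` avoids the phase distortion a single pass would introduce, which matters when band-limited features are compared across channels.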

After pre-processing, we have used several feature extraction techniques for our feature fusion-based and 2DCNN-XGBoost fusion-based approaches, which are discussed below:

The Fast Fourier Transform (FFT) is among the most useful methods for processing various signals19,20,21,22,23. We used the FFT algorithm to compute the Discrete Fourier Transform of a sequence. The FFT is valuable because it makes operations in the frequency domain as computationally feasible as those in the time or space domain. The FFT computes the result in O(N log N) time, where N is the length of the vector. It works by splitting an N-point time-domain signal into N single-point time-domain signals in one stage. The second stage estimates the N frequency spectra corresponding to these N time-domain signals. Lastly, the N spectra are synthesized into a single frequency spectrum.

The equations of FFT are shown below (1), (2):

$$\begin{aligned} H(p) = \sum _{t=0}^{N-1} r(t)\, W_{N}^{pt}, \end{aligned}$$

(1)

$$\begin{aligned} r(t) = \frac{1}{N} \sum _{p=0}^{N-1} H(p)\, W_{N}^{-pt}. \end{aligned}$$

(2)

Here \(H(p)\) represents the Fourier coefficients of \(r(t)\).

(a) A baseline EEG signal in time domain, (b) A baseline EEG signal in frequency domain using FFT, (c) A stimuli EEG signal in time domain, (d) A stimuli EEG signal in frequency domain using FFT.

We have implemented this FFT to get the coefficients shown in Fig. 2. The mean and maximum features for each band were then computed. Therefore, we get 6 features for each channel across 3 bands, for a total of 84 features distributed across 14 channels.
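The per-band mean/max feature computation described above can be sketched as follows. The band limits follow the ranges given earlier in the text; the 14-channel epoch is random toy data standing in for a DREAMER recording, so only the feature count (2 features x 3 bands x 14 channels = 84) is meaningful here.

```python
import numpy as np

FS = 128  # sampling rate (Hz)
BANDS = {"theta": (4, 8), "alpha": (8, 13), "beta": (13, 20)}

def fft_band_features(channel, fs=FS):
    """Mean and max FFT magnitude per band -> 2 x 3 = 6 features per channel."""
    mags = np.abs(np.fft.rfft(channel))
    freqs = np.fft.rfftfreq(len(channel), 1 / fs)
    feats = []
    for lo, hi in BANDS.values():
        band = mags[(freqs >= lo) & (freqs < hi)]
        feats.extend([band.mean(), band.max()])
    return np.array(feats)

rng = np.random.default_rng(0)
epoch = rng.standard_normal((14, 60 * FS))  # 14 channels, 60 s of data
features = np.concatenate([fft_band_features(ch) for ch in epoch])
```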

This method represents a finite set of data points as a sum of cosine functions at varying frequencies, and it has been used in research24,25,26,27,28. The Discrete Cosine Transform (DCT) is usually applied to the coefficients of a periodically and symmetrically extended sequence in the Fourier series. In signal processing, the DCT is among the most commonly used transformation methods. In the time domain the imaginary part of the signal is zero; in the frequency domain the real part of the spectrum is even and the imaginary part is odd. With the following Eq. (3), we can compute the DCT coefficients:

$$\begin{aligned} X_{P}=\sum _{n=0}^{N-1} x_{n} \cos \left[ \frac{\pi }{N}\left( n+\frac{1}{2}\right) P\right] , \end{aligned}$$

(3)

where \(x_n\) is the sequence of N real data values and \(X_P\) is the set of N DCT coefficients.

(a) A baseline EEG signal in time domain, (b) A baseline EEG signal in frequency domain using DCT, (c) A stimuli EEG signal in time domain, (d) A stimuli EEG signal in frequency domain using DCT.

We have implemented DCT to get the coefficients shown in Fig. 3. The mean and maximum features for each band were then computed. Therefore, we get 6 features for each channel across 3 bands, for a total of 84 features distributed across 14 channels.
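The analogous DCT feature computation can be sketched with SciPy's `dct`. As before, the band-filtered inputs are random toy data, and only the feature layout (2 features per band, 3 bands, 14 channels) reflects the text.

```python
import numpy as np
from scipy.fft import dct

def dct_band_features(band_signal):
    """Mean and max of DCT-II coefficient magnitudes -> 2 features per band."""
    coeffs = np.abs(dct(band_signal, type=2, norm="ortho"))
    return np.array([coeffs.mean(), coeffs.max()])

# Toy input: 14 channels, each with 3 band-filtered signals of 512 samples
rng = np.random.default_rng(1)
bands = rng.standard_normal((14, 3, 512))  # (channel, band, samples)
features = np.concatenate([dct_band_features(b) for ch in bands for b in ch])
```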

The Hjorth parameters indicate a signal's statistical properties in the time domain and consist of three parameters: Activity, Mobility, and Complexity. These parameters have been calculated in many studies29,30,31,32.

Activity: This parameter describes the power of the signal, i.e., the variance of a time function. It can indicate the surface of the power spectrum in the frequency domain. The notation for activity is given below (4),

$$\begin{aligned} \mathrm{var}(y(t)). \end{aligned}$$

(4)

Mobility: This parameter represents the mean frequency, or the proportion of the standard deviation of the power spectrum. It is defined as the square root of the variance of the first derivative of the signal y(t) divided by the variance of y(t). The notation for mobility is given below (5),

$$\begin{aligned} \sqrt{\frac{\mathrm{var}(y'(t))}{\mathrm{var}(y(t))}}. \end{aligned}$$

(5)

Complexity: This parameter reflects the frequency shift. It contrasts the signal's resemblance with a pure sinusoidal wave, and its value converges to 1 the more similar the signal is. The notation for complexity is given below (6),

$$\begin{aligned} \frac{\mathrm{mobility}(y'(t))}{\mathrm{mobility}(y(t))}. \end{aligned}$$

(6)

For our analysis, we calculated Hjorth's activity, mobility, and complexity parameters as features. Therefore, we get 9 features for each channel across 3 bands, for a total of 126 features distributed across 14 channels.
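Equations (4)-(6) translate directly into a few lines of NumPy, approximating the derivatives with first differences. This is an illustrative sketch; the sinusoid below simply checks the property stated in the text that complexity approaches 1 for a pure sine wave.

```python
import numpy as np

def hjorth_parameters(y):
    """Return (activity, mobility, complexity) per Eqs. (4)-(6)."""
    dy = np.diff(y)    # discrete first derivative
    ddy = np.diff(dy)  # discrete second derivative
    activity = np.var(y)
    mobility = np.sqrt(np.var(dy) / np.var(y))
    mobility_dy = np.sqrt(np.var(ddy) / np.var(dy))
    complexity = mobility_dy / mobility
    return activity, mobility, complexity

# For a finely sampled pure sinusoid, complexity should be close to 1
t = np.arange(0, 10, 1e-3)
activity, mobility, complexity = hjorth_parameters(np.sin(2 * np.pi * t))
```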

Statistics is the application of mathematics to the scientific processing of data. We use statistical features to summarize information-based data, focusing on its mathematical properties. Understanding how statistics organize our data helps us apply other data science methods optimally to achieve more accurate and structured solutions. There are multiple studies33,34,35 on emotion analysis where statistical features were used. The statistical features that we have extracted are median, mean, max, skewness, and variance. As a result, we get 5 features for each channel, for a total of 70 features distributed across 14 channels.
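The five statistical features listed above can be computed per channel as sketched below (skewness is written out explicitly rather than imported, to keep the sketch self-contained). The toy 14-channel epoch is random data; only the 5 x 14 = 70 feature layout reflects the text.

```python
import numpy as np

def statistical_features(channel):
    """median, mean, max, skewness, variance -> 5 features per channel."""
    x = np.asarray(channel, dtype=float)
    mu, sigma = x.mean(), x.std()
    skewness = np.mean((x - mu) ** 3) / sigma ** 3  # biased sample skewness
    return np.array([np.median(x), mu, x.max(), skewness, x.var()])

rng = np.random.default_rng(2)
epoch = rng.standard_normal((14, 7680))  # 14 channels, 60 s at 128 Hz
features = np.concatenate([statistical_features(ch) for ch in epoch])
```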

The Poincare plot, which takes a series of intervals and plots each interval against the following interval, is an emerging analysis technique. In clinical settings, the geometry of this plot has been shown to differentiate between healthy and unhealthy subjects. It is also used to visualize and quantify the association between two consecutive data points in a time series. Since long-term correlation and memory are demonstrated in the dynamics of variations in physiological rhythms, this analysis extends the Poincare plot by steps, capturing the association between sequential data points in a time series rather than only between two consecutive points. We used two parameters in our paper:

SD1: Represents the standard deviation of the distances of the points from axis 1 and defines the width of the ellipse (short-term variability). Descriptor SD1 can be defined as (7):

$$\begin{aligned} SD1 = \frac{\sqrt{2}}{2}\,SD(P_n - P_{n+1}). \end{aligned}$$

(7)

SD2: Represents the standard deviation of the distances of the points from axis 2 and defines the length of the ellipse (long-term variability). Descriptor SD2 can be defined as (8):

$$\begin{aligned} SD2 = \sqrt{2\,SD(P_n)^2 - \frac{1}{2}\,SD(P_n - P_{n+1})^2}. \end{aligned}$$

(8)

We have extracted 2 features which are SD1 and SD2 from each band (theta, alpha, beta). Therefore, we get 6 features for each channel across 3 bands, for a total of 84 features distributed across 14 channels.
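Equations (7) and (8) can be implemented directly; the short alternating series below is a hypothetical input chosen only because its SD1 can be checked by hand (it works out to exactly 2/3).

```python
import numpy as np

def poincare_sd(p):
    """SD1 (short-term) and SD2 (long-term) variability per Eqs. (7)-(8)."""
    d = p[:-1] - p[1:]  # successive differences P_n - P_{n+1}
    sd1 = (np.sqrt(2) / 2) * np.std(d)
    sd2 = np.sqrt(max(2 * np.std(p) ** 2 - 0.5 * np.std(d) ** 2, 0.0))
    return sd1, sd2

sd1, sd2 = poincare_sd(np.array([1.0, 2.0, 1.0, 2.0]))
```

Applied to each of the theta, alpha, and beta signals of a channel, this yields the 2 x 3 = 6 features per channel described above.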

The Welch method is a modified segmentation scheme used to estimate the average periodogram, and it has been used in papers3,23,36. The Welch method is applied to a time series and is concerned with decreasing the variance of the spectral density estimate. The Power Spectral Density (PSD) tells us in which frequency ranges the variations are strong, which can be very helpful for further study. The Welch estimate of the PSD can be described by the following equations (9), (10) of the power spectra.

$$\begin{aligned} P(f) = \frac{1}{MU}\left| \sum _{n=0}^{M-1} x_{i}(n)\, w(n)\, e^{-j 2 \pi f n}\right| ^{2}, \end{aligned}$$

(9)

$$\begin{aligned} P_{\text{welch}}(f) = \frac{1}{L} \sum _{i=0}^{L-1} P_{i}(f). \end{aligned}$$

(10)

Here, the periodogram of one segment is defined first; the Welch power spectrum is then the average of the periodograms over all segments. We have implemented this Welch method to get the PSD of the signal. From that, the mean power has been extracted from each band. As a result, we get 3 features for each channel across 3 bands, for a total of 42 features distributed across 14 channels.
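SciPy's `welch` implements exactly this segment-averaging scheme, so the mean band power feature can be sketched as follows. The 2-second segment length is our own assumption; the 10 Hz test tone just confirms that alpha-band power dominates for an alpha-band signal.

```python
import numpy as np
from scipy.signal import welch

FS = 128  # sampling rate (Hz)

def mean_band_power(channel, lo, hi, fs=FS):
    """Mean Welch PSD within [lo, hi) Hz -> one feature per band."""
    freqs, psd = welch(channel, fs=fs, nperseg=2 * fs)
    return psd[(freqs >= lo) & (freqs < hi)].mean()

t = np.arange(0, 10, 1 / FS)
x = np.sin(2 * np.pi * 10 * t)          # 10 Hz tone, i.e. alpha band
alpha_power = mean_band_power(x, 8, 13)
beta_power = mean_band_power(x, 13, 20)
```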

A Convolutional Neural Network (CNN) is primarily used to process images, so the time series is converted into a time-frequency diagram using the Short-Time Fourier Transform (STFT). The CNN extracts the required information from input images using multilayer convolution and pooling, and then classifies the image using fully connected layers. We have calculated the STFT of the filtered signal, which ranges between 4 and 30 Hz, and transformed it into RGB images. Some of the generated images are shown in Fig. 4.

EEG signal spectrograms using STFT with classification (a) high arousal, high valence, and low dominance, (b) low arousal, high valence, and high dominance, (c) high arousal, low valence, and low dominance.

To convert time-series EEG signals into picture representations, Wavelet algorithms and Fourier Transforms are commonly utilized. But in order to preserve the integrity of the original data, EEG conversion should be done solely in the time-frequency domain. As a result, the STFT is the best method for preserving the EEG signal's most complete characteristics, and it is what we have used in our second process. The spectrograms were extracted from the signal using the STFT, and Eq. (11) is given below:

$$\begin{aligned} Z_{n}^{e^{j\hat{\omega }}}=e^{-j\hat{\omega }n}\left[ \left( W(n)\,e^{j\hat{\omega }n}\right) \times x(n)\right] , \end{aligned}$$

(11)

where \(e^{-j\hat{\omega }n}\) is the complex bandpass filter output modulated by the signal. From the above equation we have calculated the STFT of the filtered signals.
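The spectrogram-image step can be sketched with SciPy's `stft`. The 1-second window length and the log/min-max scaling are our own assumptions, not parameters from the paper; the final mapping of the grayscale array to RGB (e.g., via a matplotlib colormap) is omitted here.

```python
import numpy as np
from scipy.signal import stft

FS = 128  # sampling rate (Hz)

def spectrogram_image(channel, fs=FS):
    """Log-magnitude STFT scaled to 0-255, ready to be colormapped to RGB."""
    _, _, Z = stft(channel, fs=fs, nperseg=fs)  # 1-second windows
    mag = np.log1p(np.abs(Z))
    img = 255 * (mag - mag.min()) / (mag.max() - mag.min())
    return img.astype(np.uint8)

t = np.arange(0, 60, 1 / FS)
img = spectrogram_image(np.sin(2 * np.pi * 10 * t))  # 60 s of a 10 Hz tone
```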

For our feature fusion-based approach, as we have pre-trial signals, we have used 4 s of pre-trial signals as baseline signals, resulting in 512 samples each at a 128 Hz sampling rate. Then, similar to the features extracted for the stimuli, the features from the baseline signals were also extracted. The stimuli features were then divided by the baseline features, in order to isolate only the differences induced by the stimulus test, as is also done in the paper3.

After extracting all the features and calculating the ratio between stimuli features and baseline features, we have added the self-assessment ratings of arousal, valence, and dominance. Now the data set for the feature fusion-based approach has 414 data points with 630 features for each data point. We scaled the data using MinMax scaling to remove the large variation in our data set. The estimator in MinMax scaling scales and translates each value individually so that it falls between 0 and 1, within the defined range.

The formula for MinMax scale is (12),

$$\begin{aligned} X_{new}=\frac{X_{i}-{\text {Min}}(X)}{{\text {Max}}(X)-{\text {Min}}(X)}. \end{aligned}$$

(12)
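Equation (12), applied column-wise to the feature matrix, is a few lines of NumPy (and is equivalent to scikit-learn's `MinMaxScaler` with its default range). The tiny matrix below is a hypothetical example for checking the mapping by hand.

```python
import numpy as np

def minmax_scale(X):
    """Column-wise MinMax scaling per Eq. (12): each feature mapped to [0, 1]."""
    X = np.asarray(X, dtype=float)
    mn, mx = X.min(axis=0), X.max(axis=0)
    return (X - mn) / (mx - mn)

X = np.array([[1.0, 10.0],
              [2.0, 30.0],
              [3.0, 20.0]])
X_scaled = minmax_scale(X)  # first column -> [0.0, 0.5, 1.0]
```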

There are various feature selection techniques used by many researchers to drop features that are not needed and keep only the important features that can play a big role in the prediction. So in our paper we used two feature selection methods: one is Recursive Feature Elimination (i.e., Fig. 5) and the other is the Chi-square test (i.e., Fig. 6).

Procedure of recursive feature elimination (RFE).

Procedure of feature selection using Chi-square.

RFE (i.e., Fig. 5) is a wrapper-type feature selection technique over the vast span of features. Here the term recursive reflects the looping behavior of this method, which traverses backward over loops to identify the best-fitted features, giving each predictor an importance score and then eliminating the predictor with the lowest score. Additionally, cross-validation is used to find the optimal number of features, rank the various feature subsets, and pick the best selection of features for scoring. In this method one attribute is taken along with the target attribute, and the procedure keeps going forward, combining attributes and merging them with the target attribute to produce a new model. Thus different subsets of features in different combinations generate models through training. All these models are then strained out to find the model with the maximum accuracy and its corresponding features. In short, we remove those features whose removal leaves the accuracy higher or at least equal, and we restore a feature if the accuracy drops after its elimination. Here we have used a step size of 1 to eliminate one feature at a time at each level, which helps remove the worst features early, keeping the best features in order to improve the already calculated accuracy of the overall model.
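The RFE procedure with step size 1 can be sketched with scikit-learn. The synthetic data below is a stand-in for the 414 x 630 feature matrix, and the linear-kernel SVM estimator and target of 10 features are illustrative assumptions.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import RFE
from sklearn.svm import SVC

# Synthetic stand-in for the 414 x 630 feature matrix
X, y = make_classification(n_samples=200, n_features=50,
                           n_informative=8, random_state=0)

# step=1: drop the single lowest-ranked feature at each iteration
rfe = RFE(estimator=SVC(kernel="linear"),
          n_features_to_select=10, step=1).fit(X, y)
X_reduced = X[:, rfe.support_]
```

`RFECV` can replace `RFE` when cross-validation should pick the number of features automatically, as the text describes.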

The Chi-square test (i.e., Fig. 6) is a filter method that assesses features by comparing the predicted data with the observed data based on their importance. It determines whether a feature is effective on nominally categorized data by comparing the observed and expected values. In this method one predicted data set is considered as a base point, and the expected data is calculated from the observed values with respect to that base point.

The Chi-square value is computed by (13):

$$\begin{aligned} \chi ^{2}=\sum _{i=1}^{m} \sum _{j=1}^{k} \frac{\left( A_{ij}-\frac{R_{i} C_{j}}{N}\right) ^{2}}{\frac{R_{i} C_{j}}{N}}, \end{aligned}$$

(13)

where m is the number of intervals, k is the number of classes, \(R_i\) is the number of patterns in the ith interval, \(C_j\) is the number of patterns in the jth class, and \(A_{ij}\) is the number of patterns in both the ith interval and the jth class.

After applying RFE and Chi-square, from the achieved accuracies we have observed that Chi-square does not incorporate a machine learning (ML) model, while RFE uses a machine learning model and trains it to decide whether a feature is relevant or not. Moreover, in our research, the Chi-square method failed to choose the best subset of features that could provide better results, whereas, because of its exhaustive nature, the RFE method mostly gave the best subset of features. Therefore we chose RFE over Chi-square for feature elimination.
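For comparison, the Chi-square filter selection can be sketched with scikit-learn's `SelectKBest`. Note that `chi2` requires non-negative inputs, which the MinMax scaling applied earlier already guarantees; the synthetic data and k=10 below are illustrative assumptions.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, chi2
from sklearn.preprocessing import MinMaxScaler

X, y = make_classification(n_samples=200, n_features=50, random_state=0)
X = MinMaxScaler().fit_transform(X)  # chi2 needs non-negative values

selector = SelectKBest(chi2, k=10).fit(X, y)
X_sel = selector.transform(X)
```

Unlike RFE, no model is trained here: features are ranked purely by their Chi-square statistic against the class labels.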

In research3 on this data set, the authors calculated the mean and standard deviation of the self-assessment ratings. Then they divided each dimension into two classes, high or low. The boundary between high and low was the midpoint of the 0-5 scale, which is 2.5. But we have adjusted this boundary in our secondary process based on some of our observations. We have also calculated the mean and standard deviation of the self-assessment ratings, shown in Table 3, to separate each dimension of emotion into two classes, high (1) and low (0), representing two emotional categories for each dimension.

Arousal: For our 2DCNN-XGBoost fusion-based approach, ratings > 2.5 are considered in the class excited/alert (1) and ratings < 2.5 are considered uninterested/bored (0). Here, from the 5796 data points, 4200 were in the excited/alert class and 1596 were in the uninterested/bored class. For the feature fusion-based approach, we have focused on the average ratings for excitement, which correspond to stimuli numbers 5 and 16, having 3.70 ± 0.70 and 3.35 ± 1.07 respectively. Additionally, for calmness, we can take stimuli 1 and 11 into consideration, where the average ratings are 2.26 ± 0.75 and 1.96 ± 0.82 respectively. Therefore, ratings > 2 can be considered in the class excited/alert and ratings < 2 can be considered uninterested/bored. Here, from the 414 data points, 393 were in the excited/alert class and 21 were in the uninterested/bored class. We have also shown the parallel coordinate plot for arousal in Fig. 8a to show the impact of different features on arousal level.

Valence: For our 2DCNN-XGBoost fusion-based approach, ratings > 2.5 are considered in the class happy/elated and ratings < 2.5 are considered unpleasant/stressed. Here, from the 5796 data points, 2254 were in the unpleasant/stressed class and 3542 were in the happy/elated class. To store these values in the new data set, unpleasant/stressed is considered as 0 and happy/elated is considered as 1. For the feature fusion-based approach, we first concentrated on the average happiness ratings, which correspond to stimuli 7 and 13, having 4.52 ± 0.59 and 4.39 ± 0.66 respectively. Additionally, stimuli (4, 15) and (6, 10) for fear and disgust were considered, where the average ratings are 2.04 ± 1.02, 2.48 ± 0.85, 2.70 ± 1.55, and 2.17 ± 1.15 respectively. Here, it is clear that ratings > 4 can be considered in the class happy/elated and ratings < 4 can be considered unpleasant/stressed. Here, from the 414 data points, 359 were in the unpleasant/stressed class and 55 were in the happy/elated class. To store these values in the new data set, unpleasant/stressed is considered as 0 and happy/elated is considered as 1. We have also shown the parallel coordinate plot for valence in Fig. 8b to show the impact of different features on valence level.

Dominance: For our 2DCNN-XGBoost fusion-based approach, the same low/high scheme is followed. Here, ratings > 2.5 are in the class helpless/without control and ratings < 2.5 are considered for the class empowered. From the 5796 data points, 1330 were in the helpless/without control class and 4466 were in the empowered class. To store these values in the new data set, helpless/without control is considered as 0 and empowered is considered as 1. For the feature fusion-based approach, we have targeted stimuli numbers 4, 6, and 8, which target the emotions of fear, disgust, and anger, having mean ratings of 4.13 ± 0.87, 4.04 ± 0.98, and 4.35 ± 0.65 respectively. So ratings > 4 fall in the class helpless/without control and the rest in the class empowered. Here, from the 414 data points, 65 were in the helpless/without control class and 349 were in the empowered class. To store these values in the new data set, helpless/without control is considered as 0 and empowered is considered as 1. We have also shown the parallel coordinate plot for dominance in Fig. 8c to show the impact of different features on dominance level.

The overall class distribution for arousal, valence and dominance is shown in the Fig. 7.

Overall class distribution after conversion to a two-class rating score for arousal, valence and dominance.

Impact factor of features on (a) arousal, (b) valence and (c) dominance using parallel co-ordinate plot.

A Convolutional Neural Network (CNN) is a type of deep neural network used to analyze visual imagery in deep learning. Figure 9 represents the overall two-dimensional Convolutional Neural Network model used in our proposed method (i.e., Fig. 1), which is also our 2DCNN-XGBoost fusion approach. We generated spectrum images before using this CNN architecture by filtering the frequency band containing significant signals between 4 and 30 Hz. Following that, we compute the Short-Time Fourier Transform of the EEG signals and convert them to spectrogram images before extracting features with a 2D Convolutional Neural Network. We train the model with 2D convolutional layers using the obtained spectrogram images, and then retrieve the trained features from the training layer with the help of another dense layer. We have implemented a test bed to evaluate the performance of our proposed method. The proposed model is trained using the Convolutional Neural Network (CNN) described below.

The architecture of the implemented CNN model.

Basic features such as horizontal and diagonal edges are usually extracted by the first layer. This information is passed on to the next layer, which is responsible for detecting more complicated characteristics such as corners and combinational edges. As we progress deeper into the network, it becomes capable of recognizing ever more complex features such as objects, faces, and so on. The classification layer generates a series of confidence ratings (numbers between 0 and 1) from the final convolution layer, indicating how likely the image is to belong to a class. In our proposed method, we have used three layers of Conv2D and identified the classes.

The pooling layer is in charge of shrinking the spatial size of the convolved features. By lowering the size, the computing power required to process the data is reduced. Pooling can be divided into two types: average pooling and max pooling. We have used max pooling because it gives a better result than average pooling. With max pooling, we take the maximum pixel value from the region of the image covered by the kernel. It removes noisy activations and performs de-noising as well as dimensionality reduction. In general, any pooling function can be represented by the following formula (14):

$$\begin{aligned} q_{j}^{(l+1)} = \mathrm{Pool}\left( q_{1}^{(l)}, \ldots , q_{i}^{(l)}, \ldots , q_{n}^{(l)}\right) , \quad q_{i}\in R_{j}^{(l)}, \end{aligned}$$

(14)

where \(R_{j}^{(l)}\) is the jth pooled region at layer l and Pool() is the pooling function over the pooled region.
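For max pooling, the Pool() function in Eq. (14) is simply the maximum over each region. A minimal NumPy sketch of non-overlapping 2x2 max pooling, with a small matrix whose pooled output can be checked by hand:

```python
import numpy as np

def max_pool2d(x, size=2):
    """Non-overlapping max pooling: Pool() in Eq. (14) is max over each region."""
    h, w = x.shape[0] // size, x.shape[1] // size
    return x[:h * size, :w * size].reshape(h, size, w, size).max(axis=(1, 3))

x = np.array([[1, 2, 5, 6],
              [3, 4, 7, 8],
              [0, 0, 1, 1],
              [0, 9, 1, 2]])
pooled = max_pool2d(x)  # -> [[4, 8], [9, 2]]
```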

We added a dropout layer after the pooling layer to reduce overfitting. With a suitable dropout rate, the accuracy continuously improves while the loss rate decreases. Some of the max-pooling outputs are randomly picked and completely ignored; they aren't transferred to the following layer.

After a set of 2D convolutions, it is always necessary to perform a flatten operation. Flattening is the process of turning the data into a one-dimensional array for further processing. To make a single lengthy feature vector, we flatten the output of the convolutional layers. It is also linked to the overall classification scheme.

A Dense layer gives the neural network a fully connected layer. All of the preceding layer's outputs are fed to all of its neurons, with each neuron delivering one output to the following layer.

In our proposed method, with this CNN architecture, diverse kernels are employed in the convolution layer to extract high-level features, resulting in different feature maps. At the end of the CNN model, there is a fully connected layer. The predicted class labels of emotions are generated by the output of the fully connected layer. According to our proposed method, we have added a dense layer with 630 units after the training layer to extract this number of features.

Extreme Gradient Boosting (XGBoost) is a machine learning algorithm that uses a supervised learning strategy to accurately predict a target variable by combining the predictions of several weaker models. It is a common data mining tool with good speed and performance; the XGBoost model computes up to 10 times faster than the Random Forest model. The XGBoost model is built using the additive tree method, which adds a new tree at each step to complement the trees that have already been built. As additional trees are built, the accuracy generally improves. In our proposed model, we have used XGBoost after applying the CNN. We extracted features from the CNN's trained layer, and then, based on the retrieved features, we used Extreme Gradient Boosting to classify all of the dimensions of emotion. The following Eqs. (15) and (16) are used in Extreme Gradient Boosting.

$$\begin{aligned} f(m) \approx f(k)+f^{\prime}(k)(m-k)+\frac{1}{2} f^{\prime\prime}(k)(m-k)^{2}, \end{aligned}$$

(15)

$$\mathcal{L}^{(t)} \simeq \sum_{i=1}^{n}\left[ l\left( q_{i}, \hat{q}_{i}^{(t-1)}\right) +r_{i} f_{t}\left( m_{i}\right) +\frac{1}{2} s_{i} f_{t}^{2}\left( m_{i}\right) \right] +\Omega \left( f_{t}\right) +C,$$

(16)

where C is a constant, and r_i and s_i are defined as

$$\begin{aligned} r_{i} = \partial_{\hat{z}_{i}^{(b-1)}}\, l\left( z_{i}, \hat{z}_{i}^{(b-1)}\right), \end{aligned}$$

(17)

$$\begin{aligned} s_{i} = \partial^{2}_{\hat{z}_{i}^{(b-1)}}\, l\left( z_{i}, \hat{z}_{i}^{(b-1)}\right). \end{aligned}$$

(18)

After removing all the constants, the specific objective at step b becomes,

$$\begin{aligned} \sum_{i=1}^{n}\left[ r_{i} f_{t}\left( m_{i}\right) +\frac{1}{2} s_{i} f_{t}^{2}\left( m_{i}\right) \right] +\Omega \left( f_{t}\right), \end{aligned}$$

(19)
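To make Eqs. (15)-(19) concrete, the sketch below performs a single XGBoost-style boosting step on toy regression data. With a squared loss, r_i and s_i from Eqs. (17) and (18) have simple closed forms; the data, split point, and regularisation weight are all invented for illustration:

```python
import numpy as np

# Toy data (the paper classifies emotions from CNN features;
# here we only illustrate one additive-tree step).
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
z = np.array([1.0, 1.2, 1.1, 3.0, 3.2, 2.9])  # targets
z_hat = np.zeros_like(z)                       # prediction from step b-1

lam = 1.0  # L2 regularisation weight from the Omega(f_t) penalty

# For squared loss l = (z_hat - z)^2 / 2, Eqs. (17)-(18) give:
r = z_hat - z          # first derivative of the loss w.r.t. z_hat
s = np.ones_like(z)    # second derivative

# One tree with a single split at x <= 3.5. Minimising the quadratic
# objective of Eq. (19) per leaf gives weight -sum(r) / (sum(s) + lam).
left = x <= 3.5
w_left = -r[left].sum() / (s[left].sum() + lam)
w_right = -r[~left].sum() / (s[~left].sum() + lam)

f_t = np.where(left, w_left, w_right)  # the new tree's predictions
z_hat_new = z_hat + f_t                # additive update for step b

print(w_left, w_right)
```

Each subsequent tree repeats this step on the updated predictions, which is why accuracy generally improves as trees are added.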

Go here to see the original:

CNN-XGBoost fusion-based affective state recognition using EEG spectrogram image analysis | Scientific Reports - Nature.com

Gulf region flips bullish on crypto mining, but can it be green? – Al-Monitor

Crypto mining is an electricity-intensive process that requires running computer servers to solve a complex set of algorithms. In other words, mining crypto converts electricity into digital coins that are then sold at market value. For that reason, access to cheap power is a trump card, and the energy-rich Gulf region is a suitable candidate: it is home to some of the world's largest fossil fuel resources and boasts the world's lowest solar tariffs.

After a decade of hesitation, Gulf states have started to warm up to cryptocurrencies. The United Arab Emirates (UAE) and Bahrain, in particular, are looking to attract centralized crypto exchanges (which processed more than $14 trillion worth of crypto assets in 2021), and their interest in mining crypto is rising. "There is a push from the UAE government to make greater use of power generation capacities," said Abdulla Al Ameri, an Emirati crypto mining entrepreneur who has been mining for about five years, including in Kazakhstan and Russia. "I expect the UAE crypto mining market to take off in the next two years," he told Al-Monitor. The question is, how green will this be?

Simultaneously, Gulf states have warmed up to renewables, solar in particular, opening the doors for solar-powered crypto mining. "We are working on a hybrid crypto farm in Abu Dhabi powered by solar at day, grid at night," CryptoMiners CEO Nasser El Agha told Al-Monitor. The Dubai-headquartered crypto mining service provider is cooperating with an undisclosed British company to launch the Gulf's first company-scale solar-crypto farm by December 2022. It is a proof of concept intended to be ultimately taken to the market, specifically to agricultural farms wishing to generate extra income through crypto mining.

Original post:

Gulf region flips bullish on crypto mining, but can it be green? - Al-Monitor

Automotive AI Market Projected to Hit USD 1498.3 Million by 2030 at a CAGR of 30.1% – Report by Market Research Future (MRFR) – GlobeNewswire

New York, US, Aug. 17, 2022 (GLOBE NEWSWIRE) -- According to a comprehensive research report by Market Research Future (MRFR), "Automotive AI Market Analysis by Technology, by Process, by Application and by Regions - Global Forecast To 2030", the market valuation is poised to reach USD 1498.3 Million by 2030, registering a 30.1% CAGR throughout the forecast period (2022-2030).

Automotive AI Market Overview

A developing business standard is growing in the modern-day digital world as artificial intelligence (AI) becomes more ubiquitous. Artificial intelligence for the automotive industry is flourishing in the modern age, allowing companies to observe their operations better, deliver better results in the virtual environment, develop autonomous and semi-autonomous cars, enhance the in-car customer experience, and improve business planning.

Automotive AI Market Report Scope:

Get Free Sample PDF Brochure

https://www.marketresearchfuture.com/sample_request/4258

Artificial intelligence in the automotive industry has recorded massive growth in the last few years. The market's growth is credited mainly to the growing automobile industry. Furthermore, factors such as growing investments, the growing trend of autonomous vehicles, and industry-wide standards like navigation systems are also projected to catalyze market demand over the coming years.

Automotive AI Market USP Covered

Automotive Artificial Intelligence Market Drivers

The global market for automotive artificial intelligence has registered massive growth in recent times. The market's growth is credited to factors such as rising demand for better user experiences, an increasing preference for top-quality vehicles, rising concern over confidentiality and protection, and an increasing trend toward automated driving.

Automotive AI Market Restraints

On the other hand, the growing concerns regarding data security are likely to impede the market's growth.

Automotive Artificial Intelligence Market Segments

Among all the technologies, the deep learning segment is anticipated to account for the largest market share across the global market for automotive artificial intelligence over the assessment timeframe. The significant investments made by OEMs are the primary aspect causing an upsurge in the segment's growth. Growing research & development activity on self-driving cars that use deep learning for sound recognition, data analysis, and image processing is another prime aspect boosting the segment's growth.

Browse In-depth Market Research Report (111 Pages) on Automotive AI Market:

https://www.marketresearchfuture.com/reports/automotive-artificial-intelligence-market-4258

Among all the processes, the data mining segment is anticipated to dominate the global market for automotive artificial intelligence over the coming years. Various types of sensors in automobiles are used to accumulate information which is further used to train the automobile to detect and identify obstacles and various barriers. The massive amount of data generated is the primary aspect causing an upsurge in the segment's growth.

Among all the end-users, the semi-autonomous segment is anticipated to dominate the global market for automotive artificial intelligence over the review timeframe. The growing implementation of gesture and voice recognition systems is the main reason causing an upsurge in the segment's growth.

Automotive AI Market Regional Analysis

The global market for automotive artificial intelligence is analyzed across five major regions: Latin America, the Middle East & Africa, Asia-Pacific, Europe, and North America.

According to the analysis reports by MRFR, the North American region is anticipated to dominate the global market for automotive artificial intelligence over the coming years. The primary reason causing an upsurge in the regional market's growth is the presence of significant manufacturers in this area. Moreover, in comparison with other areas, the region has substantially more access to the advanced technology needed to build artificial intelligence systems, which is anticipated to boost the growth of the regional market over the assessment timeframe. Furthermore, the growing expectation of autonomous cars across the United States has significantly contributed to the nation's growth. In addition, favorable government regulations, coupled with the fact that prominent automotive leaders such as Fiat Chrysler Automobiles, Ford Motor Company, and General Motors are taking part in the development of artificial intelligence in automobiles by constantly improving their products, will give the region better potential in the global market.

Ask To Expert:

https://www.marketresearchfuture.com/ask_for_schedule_call/4258

COVID-19 Impact

The global COVID-19 pandemic has had an enormous impact on the majority of market sectors across the globe. The rapid spread of the disease across most countries worldwide led to the implementation of partial or complete lockdowns. The travel restrictions and social distancing norms imposed across much of the world caused significant disruptions in supply chain networks for most industry areas. Some major sectors affected by the pandemic include hospitality, automobile, and construction. Like any other sector, the global market for automotive artificial intelligence has faced a significant impact since the arrival of the pandemic. The global health crisis affected public health and severely impacted financial activities across several industry sectors. Recently, the adoption of artificial intelligence across various end-use applications in various sectors has become the latest trend worldwide, and during the pandemic AI-based tools have been utilized widely. With the sudden fall in the global demand for automobiles, the global market for automotive artificial intelligence suffered significant losses in terms of labor and revenues.

On the other hand, with the pandemic fading across the globe, the global economy and industrial activities have been picking up pace in the last few months. The growing investments in research & development activities to launch innovative solutions will likely help the market get back on track over the assessment timeframe. In addition, with the rapid vaccination rates across the majority of the world, the global market is likely to experience favorable growth over the coming years.

Check for Discount:

https://www.marketresearchfuture.com/check-discount/4258

Automotive Artificial Intelligence Market Competitive Analysis

Dominant Key Players on Automotive AI Market Covered are:

Related Reports:

Off the Road Tire Market Analysis Research Report: Information By Vehicle Type, Construction Type, Distribution Channel and Region - Forecast till 2030

Industrial Vehicles Market Growth Research Report: Information by Product Type, Drive Type, Application, and Region Forecast till 2030

Powersports Market Trends Research Report: Information By Type, By Application, By Model - Forecast till 2030

About Market Research Future:

Market Research Future (MRFR) is a global market research company that takes pride in its services, offering a complete and accurate analysis regarding diverse markets and consumers worldwide. Market Research Future has the distinguished objective of providing the optimal quality research and granular research to clients. Our market research studies by products, services, technologies, applications, end users, and market players for global, regional, and country level market segments, enable our clients to see more, know more, and do more, which help answer your most important questions.

Follow Us: LinkedIn | Twitter

Continue reading here:

Automotive AI Market Projected to Hit USD 1498.3 Million by 2030 at a CAGR of 30.1% - Report by Market Research Future (MRFR) - GlobeNewswire

Factor Investing: the road ahead – The Financial Express

By Bijon Pani, Chief Investment Officer, NJ Asset Management Private Limited

Factor investing is a method of choosing stocks (or other asset classes) using a predefined set of rules or parameters. The science of how to choose these parameters is what determines how successful the factor will be in future. When choosing a factor, one needs to make sure it is robustly constructed, works across multiple countries, and has a sensible rationale for why it works.

Factors offer a way of segregating a diversified portfolio's returns (such as those of a fund manager you might like) into its various factor components; what then remains unexplained is the contribution of the manager.

The first example of using factors to explain returns comes from the CAPM model, which expressed risk and return in terms of market exposure. But the CAPM left a lot of the return unexplained. Multiple influential academic papers put forth other factors, such as value and size, to explain returns.

It was Fama and French who, in 1993, conceived a simple framework for thinking about returns in terms of factors. They added two powerful factors, value and size, to the existing market return factor. The framework was further enhanced by Carhart to include momentum.

The four-factor model became the bedrock of performance and risk analysis in fund management for decades. Over time, as computing power became faster and more accessible, academic research into factors exploded as crunching data became easier. The latest innovations use machine learning, natural language processing, and alternative datasets. There are now hundreds of documented parameters, even though most of them fall into one of four factor styles: value, quality, low volatility, and momentum.
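As a sketch of how such a factor decomposition works in practice, the snippet below regresses a synthetic fund's monthly returns on four invented factor series; the factor data, loadings, and alpha are all made up purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 240  # 20 years of synthetic monthly returns

# Invented factor return series: market, value, size, momentum.
factors = rng.standard_normal((n, 4)) * 0.03

# A hypothetical fund with known factor loadings plus idiosyncratic noise.
true_beta = np.array([1.0, 0.4, 0.2, 0.3])
true_alpha = 0.001
fund = true_alpha + factors @ true_beta + rng.standard_normal(n) * 0.002

# Four-factor regression: the slopes are the factor exposures, and the
# intercept (alpha) is the part of the return the factors leave unexplained.
X = np.column_stack([np.ones(n), factors])
coef, *_ = np.linalg.lstsq(X, fund, rcond=None)
alpha, betas = coef[0], coef[1:]

print(alpha, betas)
```

With enough data the estimated loadings recover the true exposures, and the residual intercept is the manager's unexplained contribution.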

John Cochrane, a leading academic who studies factors, rightly calls this the "factor zoo." The job of a practitioner has been made hard as newer parameters keep getting reported that promise better returns than the older ones. It is especially nuanced and harder when it comes to factor investing in India, because India suffers from two big issues.

Firstly, liquidity is a big problem beyond a certain number of stocks, and when constructing factors, one needs a large enough universe to measure the factor premiums and construct portfolios. This job gets harder if the universe has illiquid stocks, as one may end up including stocks in portfolios that cannot be easily invested in.

Secondly, factor construction requires long, clean data histories, and we often suffer from inadequate and patchy data. This requires the skills of an experienced professional who can craft factors specific to the Indian markets without losing their essence. It is, after all, very easy to fall into the data-mining trap and construct parameters that worked in the past but may not in the future.

Once factors became common in the developed world, they made the life of traditional fund managers even more challenging. It wasn't easy beating the market index, but with added factors, the overwhelming majority (more than 90 per cent by some research) couldn't beat a portfolio constructed using factors.

Morningstar India research found that only 26 per cent of large-cap discretionary funds could beat the benchmark over the past 10 years. The average alpha was a mere 1.15 per cent, and it may have been negligible had we added other factors to the regression. The future of fund management in India will be very similar to that of the West: we will see active factor-based funds giving strong competition to discretionary fund managers in providing better risk-adjusted returns. The charts show the performance of various factors and an equal-weight multifactor portfolio (constructed from the four factor indices published by NSE) compared to the market over the last 10 years. Readers will notice that the performance of the various factors and the multifactor model was better than the index during this time. On a risk-adjusted basis, the numbers are even better.

Retail participation in mutual funds is very low; over the near future, as more money flows into funds, quant funds will not just grow along with traditional funds but also increase their market share. A rule-based approach allows a parameter to be backtested to see how it has performed over various business cycles. This provides added confidence in the risk and return structure of the portfolio.

The only word of caution we would like to add is that even though rule-based investment strategies will grow rapidly in future, the growth will mostly be in the active approach, where effort is invested in studying factors in the Indian context. Simply copying parameters from developed markets into India may not work very well, so rule-based strategies need to be constructed keeping in mind the idiosyncrasies of Indian markets, not just replicating an index.

Factor investing has moved from being a fundamental concept of academic finance to the next disruptor in fund management.

(Disclaimer: The views expressed above are the author's own views.)

Read the original post:

Factor Investing: the road ahead - The Financial Express

Investors five-year losses continue as Hochschild Mining (LON:HOC) dips a further 12% this week, earnings continue to decline – Simply Wall St

Long term investing works well, but it doesn't always work for each individual stock. It hits us in the gut when we see fellow investors suffer a loss. Anyone who held Hochschild Mining plc (LON:HOC) for five years would be nursing their metaphorical wounds since the share price dropped 73% in that time. And some of the more recent buyers are probably worried, too, with the stock falling 51% in the last year. Furthermore, it's down 31% in about a quarter. That's not much fun for holders. This could be related to the recent financial results - you can catch up on the most recent data by reading our company report.

Given the past week has been tough on shareholders, let's investigate the fundamentals and see what we can learn.

Check out our latest analysis for Hochschild Mining

To quote Buffett, 'Ships will sail around the world but the Flat Earth Society will flourish. There will continue to be wide discrepancies between price and value in the marketplace...' One imperfect but simple way to consider how the market perception of a company has shifted is to compare the change in the earnings per share (EPS) with the share price movement.

During the five years over which the share price declined, Hochschild Mining's earnings per share (EPS) dropped by 1.1% each year. This reduction in EPS is less than the 23% annual reduction in the share price. So it seems the market was too confident about the business in the past. The less favorable sentiment is reflected in its current P/E ratio of 11.46.

The company's earnings per share (over time) is depicted in the image below (click to see the exact numbers).

It is of course excellent to see how Hochschild Mining has grown profits over the years, but the future is more important for shareholders. Take a more thorough look at Hochschild Mining's financial health with this free report on its balance sheet.

As well as measuring the share price return, investors should also consider the total shareholder return (TSR). The TSR is a return calculation that accounts for the value of cash dividends (assuming that any dividend received was reinvested) and the calculated value of any discounted capital raisings and spin-offs. It's fair to say that the TSR gives a more complete picture for stocks that pay a dividend. In the case of Hochschild Mining, it has a TSR of -70% for the last 5 years. That exceeds its share price return that we previously mentioned. The dividends paid by the company have thus boosted the total shareholder return.
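A toy illustration of why TSR can exceed the raw price return when dividends are reinvested; the prices and dividend below are invented, not Hochschild's actual figures:

```python
# Year-end prices for a share bought at 100 (illustrative numbers only).
prices = [100.0, 80.0, 60.0, 50.0, 40.0, 30.0]
dividend = 2.0  # cash dividend per share, paid each year

shares = 1.0
for p in prices[1:]:
    shares += shares * dividend / p  # reinvest the dividend at that price

price_return = prices[-1] / prices[0] - 1  # ignores dividends entirely
tsr = shares * prices[-1] / prices[0] - 1  # total shareholder return

print(round(price_return, 3), round(tsr, 3))
```

Here the price return is -70%, but the reinvested dividends accumulate extra shares, so the TSR is a less painful -63%.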

We regret to report that Hochschild Mining shareholders are down 49% for the year (even including dividends). Unfortunately, that's worse than the broader market decline of 3.6%. However, it could simply be that the share price has been impacted by broader market jitters. It might be worth keeping an eye on the fundamentals, in case there's a good opportunity. Unfortunately, last year's performance may indicate unresolved challenges, given that it was worse than the annualised loss of 11% over the last half decade. We realise that Baron Rothschild has said investors should "buy when there is blood on the streets", but we caution that investors should first be sure they are buying a high quality business. It's always interesting to track share price performance over the longer term. But to understand Hochschild Mining better, we need to consider many other factors. To that end, you should be aware of the 3 warning signs we've spotted with Hochschild Mining.

We will like Hochschild Mining better if we see some big insider buys. While we wait, check out this free list of growing companies with considerable, recent, insider buying.

Please note, the market returns quoted in this article reflect the market weighted average returns of stocks that currently trade on GB exchanges.

Have feedback on this article? Concerned about the content? Get in touch with us directly. Alternatively, email editorial-team (at) simplywallst.com.

This article by Simply Wall St is general in nature. We provide commentary based on historical data and analyst forecasts only using an unbiased methodology and our articles are not intended to be financial advice. It does not constitute a recommendation to buy or sell any stock, and does not take account of your objectives, or your financial situation. We aim to bring you long-term focused analysis driven by fundamental data. Note that our analysis may not factor in the latest price-sensitive company announcements or qualitative material. Simply Wall St has no position in any stocks mentioned.

Simply Wall St does a detailed discounted cash flow calculation every 6 hours for every stock on the market, so if you want to find the intrinsic value of any company just search here. It's FREE.

View post:

Investors five-year losses continue as Hochschild Mining (LON:HOC) dips a further 12% this week, earnings continue to decline - Simply Wall St

Optimizing data mining from EHRs to improve the patient experience – MedCity News

In a recent webinar, Carta Healthcare CEO Matt Hollingsworth shared how his health tech business is using AI for data mining to mobilize healthcare data and enable organizations to transform the patient experience. The company's goal is to reduce the burden on healthcare organizations of pulling together both structured and unstructured data in a standardized format, so all their data can be used efficiently and consistently across the organization to ultimately improve patient care.

Rachel Ford Hutman, founder of Ford Hutman Media, moderated the discussion.

Hollingsworth highlighted how patients with complex conditions, such as his mother, need to bring binders of their healthcare data to healthcare appointments because their healthcare history is not easily accessible by their physicians. His company seeks to transform the status quo in healthcare data usability with automation using natural language processing.

On the flip side, Hollingsworth noted that although natural language processing and automation are important tools that healthcare institutions are leveraging, each institution differs in the way it stores and processes data. This means that clinical expertise from healthcare clinicians is needed to balance the limitations of AI in discerning where the appropriate information is stored. Similarly, AI can supplement humans' ability to process large amounts of information in a short time.

"Because the data is messy and complicated and the same data can live in multiple places and in different levels of completion, you don't necessarily get all the data when humans alone are mining the data, because people don't have an infinite amount of time to read through infinite documentation," Hollingsworth said. "It's always possible to miss things."

"On the other hand, lacking clinical knowledge means that you can end up with noisy data. For instance, problem lists are notoriously inaccurate. They were often accurate when they were first captured, but there's no one that goes along and sets resolution dates for things and figures out how long the condition was there. So you can't necessarily rely on that. But that information is present in things like HMPs and progress notes. So again, being able to deal with the fact that the data lives in multiple places, you have to have someone be able to teach the system where to find the high-reliability sources of information. If you don't do that, you're going to end up with very noisy data that is inaccurate."

The webinar also offers insights on:

To listen to the webinar, please fill out the form below.

Photo: ipopba, Getty Images

Here is the original post:

Optimizing data mining from EHRs to improve the patient experience - MedCity News

The One Practice That Is Separating The AI Successes From The Failures – Forbes

Anyone who has been following the news on AI in 2022 knows of the high rate of AI project failures. Somewhere between 60% and 80% of AI projects are failing, according to various news sources, analysts, experts, and pundits. However, hidden among all that doom and gloom are the organizations that are succeeding. What are those 20%+ of organizations doing that sets them apart from the failures and leads their projects to success?

Surprisingly, it has nothing to do with the people they hire or the technology or products they use. Indeed, many of the successful AI companies are using the same products and services, from the same vendors, as the companies with AI project failures. Likewise, the organizations with high rates of AI success don't have some magical team of data science or machine learning unicorns that somehow possess mysterious skills. Many of these successful AI organizations have the same skill sets that the average organization has. So if it's not team and technology, what could it be?

Stop Treating your AI Projects like App Dev Projects

One of the biggest insights from these AI successes is that they don't see AI projects as application development or functionality-driven projects. Rather, they see them as data projects, or sometimes even data products. A data project doesn't start with an idea of what the functionality needs to be, but rather focuses on what insights or actions need to be gleaned from the data in whatever shape it's currently in.

It might seem somewhat obvious to many that AI projects are data projects, but perhaps the AI failures need to understand this at a greater level of detail. What makes an AI system work isn't specific code, but rather the data. The same algorithms with the same code can be used to generate text, recognize images, or hold conversations; the functionality is determined by the training data and the configuration of the system. Therefore, achieving the desired outcome of an AI project requires a focus on data iteration and data-centric methods versus coding-centric methods.

More specifically, the code for a facial recognition application doesn't actually do facial recognition; rather, the code just sets up the data to train the model and then executes the model once it's trained. The data determines the functionality when it comes to AI and ML projects. So, if you are supposed to run AI projects as data projects, why are people still making the mistake of throwing developer-focused methods and approaches at what clearly doesn't have much to do with development?

Agile is Dead. Long live Agile.

The most popular methodology for application development is Agile, which focuses on short, iterative sprints tied to the immediate needs of the business user versus long development cycles. However, Agile falls flat when dealing with AI because it doesn't tell you how to deal with data, the core asset of an AI system.

Another approach is the Cross-Industry Standard Process for Data Mining (CRISP-DM), a decades-old method that guides data mining efforts. While it is particularly focused on data projects, it lacks some critical elements needed for AI projects and hasn't been updated in over two decades. There have been other data-centric approaches, but they don't provide detail on how to run and manage data-centric projects, don't address the specific requirements of AI model training and iteration, and haven't been built for Agile. This leaves AI project managers struggling to find the right approach to running an AI project. No wonder so many AI organizations are making up their own approaches and failing so often.

CPMAI Methodology

CPMAI Methodology updates CRISP-DM with Agile and AI-specific details

The alternative to Agile is the waterfall methodology, which has been around for decades. Like a waterfall, you begin your project by designing your application, then build it, test it to make sure it meets the criteria you designed for, and finally deploy it. The problem with waterfall, especially for large and continuously changing projects, is that this process can take a very long time - sometimes upwards of 18-24 months. During this time the project requirements may change, new technology is created, or business needs may evolve past the original scope. This reality of waterfall is what led to the development of Agile as a more iterative approach. However, while Agile has been very successful for software development projects, if you try to use Agile alone on data projects you're going to run into problems.

Take an AI-enabled chatbot, for example. With each iteration the functionality doesn't change; it's still a chatbot. New iterations might change the number of words it can understand, add the ability to converse in new languages, or increase the accuracy of the model, but the functionality remains the same. Unlike software projects, where it may take twenty iterations to even reach the first functional version, AI projects have their functionality from the beginning. Therefore, you also need a data-centric methodology to apply.

Taking the right approach to AI project Management

So if Agile doesn't work well on its own, we can't apply waterfall, and CRISP-DM doesn't have what we need, what approaches are successful AI practitioners using? A hybrid of these approaches, of course! Agile and data-centric methodologies do not compete; rather, they run on different timelines and focus on different iterations. The data-centric methodology focuses on the data, the Agile methodology focuses on the functionality, and they run together.

Agile doesn't tell you how to do things like data preparation, how to understand the data you have or need, or how to build and retrain a model, among other critical functions of AI projects. This is why having a data-centric methodology, going through specific steps in the correct order, and asking these questions at the beginning is essential. Approaches such as the Cognitive Project Management for AI (CPMAI) methodology blend data-centric approaches with Agile methods to produce methods better optimized for the highly data-centric, variable nature of AI projects.

Other project management methods have been tested in the space, such as Microsoft's Team Data Science Process (TDSP) and IBM's iteration on CRISP-DM. However, many organizations have been reluctant to adopt vendor-originated methodologies and have turned to vendor-neutral approaches. Regardless of the approach used, whether it's CPMAI, CRISP-DM with Agile enhancements, TDSP, or others, what sets the successes apart from the failures is, as Louis Armstrong used to say, "it's not what you do, it's the way that you do it." Perhaps as these successes see more publicity, we'll see a resurgence of interest in methodology to drive AI projects forward with success.

More here:

The One Practice That Is Separating The AI Successes From The Failures - Forbes

e-Clinical Solutions Global Market Report 2022: Market is Expected to Grow to $11.60 Billion in 2026 at a CAGR of 13.5% – Long-term Forecast to 2031 -…

DUBLIN--(BUSINESS WIRE)--The "eClinical Solutions Global Market Report 2022" report has been added to ResearchAndMarkets.com's offering.

The global e-clinical solutions market is expected to grow from $5.97 billion in 2021 to $7.00 billion in 2022 at a compound annual growth rate (CAGR) of 17.2%. The e-clinical solutions market is expected to grow to $11.60 billion in 2026 at a CAGR of 13.5%.

The e-clinical solutions market consists of sales of e-clinical solutions and services by entities (organizations, sole traders, partnerships) that are used in the clinical development process by combining clinical technology expertise. e-Clinical solutions refer to the use of computerized solutions and related procedures to aid clinical trial operations by automating previously laborious tasks. e-Clinical has evolved to include a wide range of technologies that aim to help with one or more phases of clinical trials, from planning through submissions and data mining.

The main types of e-clinical solutions are electronic data capture (EDC) and clinical data management systems (CDMS), clinical trial management systems (CTMS), clinical analytics platforms, randomization and trial supply management (RTSM), clinical data integration platforms, electronic clinical outcome assessment (ECOA), safety solutions, and electronic trial master file (ETMF).

The electronic data capture (EDC) and clinical data management systems (CDMS) segment refers to software that allows a person to capture and collect clinical documents. Delivery modes include web-based, cloud-based, and enterprise-based, spanning development phases I, II, III, and IV. These systems are employed in several sectors, such as pharmaceutical and biopharmaceutical companies, contract research organizations, consulting service companies, medical device manufacturers, hospitals, and academic research institutions.

North America was the largest region in the e-clinical solutions market in 2021. The regions covered in the e-clinical solutions market report are Asia-Pacific, Western Europe, Eastern Europe, North America, South America, Middle East and Africa.

The e-clinical solutions market research report is one of a series of new reports that provide e-clinical solutions market statistics, including global market size, regional shares, competitors with an e-clinical solutions market share, detailed e-clinical solutions market segments, market trends and opportunities, and any further data you may need to thrive in the e-clinical solutions industry. This e-clinical solutions market research report delivers a complete perspective of everything you need, with an in-depth analysis of the current and future scenario of the industry.

Increasing research and development expenditure on drug development pipelines by pharma-biotech companies is expected to propel the growth of the e-Clinical solutions market going forward. Companies are undergoing research and development on e-clinical solutions for drug development to strengthen their position.

For instance, in 2020, Incyte, a US-based biopharmaceutical company, increased its R&D spending on drug development to $2.216 billion, up from $1.1 billion in 2019. The majority of the rise was due to increased spending on clinical research and outside services, which grew from $677 million in 2019 to $1.701 billion in 2020. This increase in R&D spending on drug development is driving the growth of e-clinical solutions.
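The claim that the majority of the rise came from clinical research and outside services can be checked with simple arithmetic on the quoted figures; a quick sketch in Python:

```python
# Incyte R&D figures quoted above, in $ billion.
rd_2019, rd_2020 = 1.100, 2.216
services_2019, services_2020 = 0.677, 1.701

total_rise = rd_2020 - rd_2019                 # total increase in R&D spend
services_rise = services_2020 - services_2019  # increase from clinical research / outside services
share_of_rise = services_rise / total_rise

print(f"Share of the rise from clinical research and outside services: {share_of_rise:.0%}")
```

The quoted figures imply that roughly nine-tenths of the increase came from that one category, consistent with the article's "majority" claim.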

Technological advancements have emerged as the key trend gaining popularity in the e-Clinical solutions market. Key players operating in the e-Clinical solutions sector are focusing on the use of advanced technologies to meet consumer demand.

The countries covered in the e-clinical solutions market report are Australia, Brazil, China, France, Germany, India, Indonesia, Japan, Russia, South Korea, UK, USA.

The report also profiles the major players in the e-clinical solutions market.

Key Topics Covered:

1. Executive Summary

2. e-Clinical Solutions Market Characteristics

3. e-Clinical Solutions Market Trends And Strategies

4. Impact Of COVID-19 On e-Clinical Solutions

5. e-Clinical Solutions Market Size And Growth

5.1. Global e-Clinical Solutions Historic Market, 2016-2021, $ Billion

5.1.1. Drivers Of The Market

5.1.2. Restraints On The Market

5.2. Global e-Clinical Solutions Forecast Market, 2021-2026F, 2031F, $ Billion

5.2.1. Drivers Of The Market

5.2.2. Restraints On the Market

6. e-Clinical Solutions Market Segmentation

6.1. Global e-Clinical Solutions Market, Segmentation By Product, Historic and Forecast, 2016-2021, 2021-2026F, 2031F, $ Billion

6.2. Global e-Clinical Solutions Market, Segmentation By Development Phase, Historic and Forecast, 2016-2021, 2021-2026F, 2031F, $ Billion

6.3. Global e-Clinical Solutions Market, Segmentation By Delivery Mode, Historic and Forecast, 2016-2021, 2021-2026F, 2031F, $ Billion

6.4. Global e-Clinical Solutions Market, Segmentation By End User, Historic and Forecast, 2016-2021, 2021-2026F, 2031F, $ Billion

7. e-Clinical Solutions Market Regional And Country Analysis

7.1. Global e-Clinical Solutions Market, Split By Region, Historic and Forecast, 2016-2021, 2021-2026F, 2031F, $ Billion

7.2. Global e-Clinical Solutions Market, Split By Country, Historic and Forecast, 2016-2021, 2021-2026F, 2031F, $ Billion

For more information about this report visit https://www.researchandmarkets.com/r/4w1dhp

Read more from the original source:

e-Clinical Solutions Global Market Report 2022: Market is Expected to Grow to $11.60 Billion in 2026 at a CAGR of 13.5% - Long-term Forecast to 2031 -...

Reliance Global Group Achieves 92% Increase in Revenue for the Second Quarter of 2022 – GlobeNewswire

RELI Exchange driving agency partner channel growth

Company to host conference call today at 2:00 PM

LAKEWOOD, NJ, Aug. 15, 2022 (GLOBE NEWSWIRE) -- via NewMediaWire -- Reliance Global Group, Inc. (Nasdaq: RELI; RELIW) ("Reliance," "we," or the "Company"), which combines artificial intelligence (AI) and cloud-based technologies with the personalized experience of a traditional insurance agency, today provided a business update and reported financial results for the second quarter ended June 30, 2022.

Ezra Beyman, CEO of Reliance Global Group, commented, "We are extremely pleased to report a 92% year-over-year increase in revenue for the second quarter of 2022. Our strong growth reflects the successful acquisitions of JP Kush & Associates, Medigap Health Insurance Company and Barra & Associates. Importantly, we are also experiencing solid organic growth, which illustrates the synergies of our portfolio. As an example, we recently relaunched Barra & Associates as RELI Exchange, our new business-to-business InsurTech platform and agency partner network, which builds on the artificial intelligence and data mining backbone of 5MinuteInsure.com. RELI Exchange combines the best of digital and the human element by providing agents and customers quotes from multiple carriers within minutes, while reducing back office expenses and driving operational efficiency. Due to the competitive advantages and compelling value proposition of our platform, we are aggressively adding new agency partners to RELI Exchange, as evidenced by an increase in agents of more than 30% in just three months. We are committed to achieving our goal of building RELI Exchange into the largest agency partner network in the U.S. Overall, we have built a highly scalable business model that we believe will drive significant shareholder value for years to come."

Financial results for the three months ended June 30, 2022

Financial results for the six months ended June 30, 2022

The complete financial results will be available in the Company's Form 10-Q, which is expected to be filed with the U.S. Securities and Exchange Commission later today.

Conference Call

Reliance Global Group will host a conference call today at 2:00 P.M. Eastern Time to discuss the Company's financial results for the second quarter ended June 30, 2022, as well as the Company's corporate progress and other developments.

The conference call will be available via telephone by dialing toll free 888-506-0062 for U.S. callers or +1 973-528-0011 for international callers and using entry code: 581329. A webcast of the call may be accessed at https://www.webcaster4.com/Webcast/Page/2381/46386 or on the investor relations section of the Company's website at https://relianceglobalgroup.com/investor-relations/.

A webcast replay will also be available on the Investors section of the Company's website (https://relianceglobalgroup.com/investor-relations/) through August 15, 2023. A telephone replay of the call will be available approximately one hour following the call, through August 29, 2022, and can be accessed by dialing 877-481-4010 for U.S. callers or +1 919-882-2331 for international callers and entering conference ID: 46386.

About Reliance Global Group, Inc.

Reliance Global Group, Inc. (NASDAQ: RELI, RELIW) is combining advanced technologies with the personalized experience of a traditional insurance agency model. Reliance Global Group's growth strategy is focused both on organic expansion, including 5minuteinsure.com and RELI Exchange, and on acquiring well-managed, undervalued, and cash-flow-positive insurance agencies. Additional information about the Company is available at https://www.relianceglobalgroup.com/.

Forward-Looking Statements

This press release contains forward-looking statements within the meaning of the "safe harbor" provisions of the Private Securities Litigation Reform Act of 1995. Statements other than statements of historical fact included in this press release may constitute forward-looking statements and are not guarantees of future performance, condition or results, and involve a number of risks and uncertainties. In some cases, forward-looking statements can be identified by terminology such as "may," "should," "potential," "continue," "expects," "anticipates," "intends," "plans," "believes," "estimates," and similar expressions, and include statements such as achieving the Company's goal of building RELI Exchange into the largest agency partner network in the U.S. and driving significant shareholder value for years to come. Actual results may differ materially from those in the forward-looking statements as a result of a number of factors, including those described from time to time in our filings with the Securities and Exchange Commission and elsewhere, and risks and uncertainties related to the Company's ability to generate the revenue anticipated and to build RELI Exchange into the largest agency partner network in the U.S., and the other factors described in the Company's Annual Report on Form 10-K for the fiscal year ended December 31, 2021. The foregoing review of important factors that could cause actual events to differ from expectations should not be construed as exhaustive and should be read in conjunction with statements that are included herein and elsewhere, including the risk factors included in the Company's Annual Report on Form 10-K for the fiscal year ended December 31, 2021, the Company's Quarterly Reports on Form 10-Q, the Company's recent Current Reports on Form 8-K, and subsequent filings with the Securities and Exchange Commission. The Company undertakes no duty to update any forward-looking statement made herein.
All forward-looking statements speak only as of the date of this press release.

Contact: Crescendo Communications, LLC
Tel: +1 (212) 671-1020
Email: RELI@crescendo-ir.com

See the original post here:

Reliance Global Group Achieves 92% Increase in Revenue for the Second Quarter of 2022 - GlobeNewswire