
Faculty Position, Computer and Network Engineering job with UNITED ARAB EMIRATES UNIVERSITY | 305734 – Times Higher Education

Job Description

The College of Information Technology (CIT) has engaged in an ambitious reorganization effort aiming to harness the prevalence of computing and the rise of artificial intelligence to advance science and technology innovation for the benefit of society. Under its new structure, the College will serve as the nexus of computing and informatics at the United Arab Emirates University (UAEU). CIT will build on the strength of its current research programs to create new multidisciplinary research initiatives and partnerships, across and beyond the university campus, critical to its long-term stability and growth. CIT will also expand its education portfolio with new multidisciplinary degree programs, including a BSc. in Artificial Intelligence, a BSc. in Data Science, a BSc. in Computational Linguistics (jointly with the College of Humanities and Social Sciences), an MSc. in IoT, and a Ph.D. in Informatics and Computing. Also planned is a suite of online microcredentials in emerging fields of study, including IoT, Cybercrime Law and Digital Forensics, Blockchains, and Cloud Computing.

About the Position:

We seek faculty candidates with a strong research record in all areas of Artificial Intelligence and Data Science, with a special emphasis on emerging areas of Artificial Intelligence and Machine Learning and on the theoretical foundations and applications of Data Science and AI/ML in a wide range of fields and domain applications, including Smart IoT, Smart Environments, and Autonomous and Intelligent Systems. The successful candidates are expected to complement and enhance the current strength of the departments in AI and Data Science related areas, including Deep Learning, Natural Language Processing, Big Data, and Data Mining, and to contribute to teaching and research in these areas.

Candidate Qualifications:

Candidates must hold a Ph.D. degree in computer science, information science or closely related areas from a recognized university.

Preferred qualifications include:

Faculty rank is commensurate with qualifications and experience. The positions will remain open until filled. The UAEU and CIT are committed to fostering a diverse, inclusive, and equitable environment and culture for students, staff, and faculty.

Application Instructions:

Applications must be submitted online at https://jobs.uaeu.ac.ae/search.jsp (postings under CIT). The instructions to complete an application are available on the website.

A completed application must include:

About the UAEU:

The United Arab Emirates University (UAEU) is the first academic institution in the United Arab Emirates. Founded by the late Sheikh Zayed bin Sultan Al Nahyan in 1976, UAEU is committed to innovation and excellence in research and education. As the country's flagship university, UAEU aims to create and disseminate fundamental knowledge through cutting-edge research in areas of strategic significance to the nation and the region, promote the spirit of discovery and entrepreneurship, and educate indigenous leaders of the highest caliber.

Minimum Qualification

Candidates must hold a Ph.D. degree in computer science, information science or closely related areas from a recognized university.

Preferred qualifications include:

Preferred Qualification

Strong research record in all areas of Artificial Intelligence and Data Science, with a special emphasis on emerging areas of Artificial Intelligence and Machine Learning and on the theoretical foundations and applications of Data Sciences and AI/ML in a wide range of fields and domain applications, including Smart IoT, Smart Environments and Autonomous and Intelligent Systems. The successful candidates are expected to complement and enhance the current strength of the departments in AI and Data Science related areas, including Deep Learning, Natural Language Processing, Big Data, and Data Mining, and to contribute to the teaching and research in these areas.

Expected Skills/Rank/Experience

Faculty rank is commensurate with qualifications and experience. The positions will remain open until filled.

Special Instructions to Applicant

The review process will continue until the position is filled. A completed application must be submitted electronically at: https://jobs.uaeu.ac.ae/

Division: College of Information Technology (CIT) | Department: Computer & Network Engineering (CIT) | Job Close Date: open until filled | Job Category: Academic - Faculty

Read more here:

Faculty Position, Computer and Network Engineering job with UNITED ARAB EMIRATES UNIVERSITY | 305734 - Times Higher Education

Read More..

CNN-XGBoost fusion-based affective state recognition using EEG spectrogram image analysis | Scientific Reports – Nature.com

Figure 1 illustrates the proposed method, which is generally divided into two segments. On the left, we take a feature fusion-based approach, emphasizing signal processing on the acquired dataset by denoising it with a band-pass filter and extracting the alpha, beta, and theta bands for further processing. Numerous features have been extracted from these bands. Feature extraction methods include the Fast Fourier Transform, Discrete Cosine Transform, Poincare analysis, Power Spectral Density, Hjorth parameters, and some statistical features. The Chi-square and Recursive Feature Elimination procedures were used to choose the discriminative features among them. Finally, we utilized classification methods such as Support Vector Machine and Extreme Gradient Boosting to classify all the dimensions of emotion and obtain accuracy scores. On the other hand, we take a spectrogram image-based 2DCNN-XGBoost fusion approach, where we utilize a band-pass filter to denoise the data in the region of interest for different cognitive states. Following that, we performed the Short-Time Fourier Transform and obtained spectrogram images. To train the model on the retrieved images, we use a two-dimensional Convolutional Neural Network (CNN) and a dense neural network layer to obtain the extracted features from the CNN's trained layer. After that, we utilized Extreme Gradient Boosting to classify all of the dimensions of emotion based on the retrieved features. Finally, we compared the outcomes of both approaches.

An overview of the proposed method.

In the proposed method (i.e., Fig. 1), we have used the DREAMER3 dataset. Audio and video stimuli were used to elicit the emotional responses of the participants in this dataset. The dataset consists of 18 stimuli tested on participants, which Gabert-Quillen et al.16 selected and analyzed to induce emotional sensation. The clips came from several films showing a wide variety of feelings, with two clips each centered on one emotion: amusement, excitement, happiness, calmness, anger, disgust, fear, sadness, and surprise. All of the clips are between 65 and 393 seconds long, giving participants plenty of time to convey their feelings17,18. However, just the last 60 s of the video recordings were considered for the next steps of the study. The clips were shown to the participants on a 45-inch television monitor with an attached speaker so that they could hear the soundtrack. The EEG signals were captured with the EMOTIV EPOC, a 16-channel wireless headset, acquiring data from sixteen distinct scalp locations. The wireless SHIMMER ECG sensor provided additional data. This study, however, focused solely on the EEG signals from the DREAMER dataset.

Initially, data collection was performed for 25 participants, but due to some technical problems, data collection from 2 of them was incomplete. As a result, the data from 23 participants were included in the final dataset. The dataset consists of trial and pre-trial signals, the latter collected as a baseline for each stimulus test. The data dimension of the EEG signals from the DREAMER dataset is shown in Table 2.

EEG signals usually contain a lot of noise. In general, the great majority of ocular artifacts occur below 4 Hz, muscular artifacts above 30 Hz, and power line noise between 50 and 60 Hz3. For a better analysis, this noise must be reduced or eliminated. Additionally, to work on a specific area, we must concentrate on the frequency range that provides us with the stimuli-induced signals. The information linked to the emotion recognition task is contained in a frequency band ranging from 4 to 30 Hz3. We utilized band-pass filtering to retain sample values in the 4-30 Hz range, removing the noise from the signals and isolating the band of interest.

A band-pass filter is a technique or procedure that accepts frequencies within a specified range while rejecting frequencies outside the range of interest. It uses a combination of a low-pass and a high-pass filter to eliminate frequencies that aren't required. The fundamental goal of such a filter is to limit the signal's bandwidth, allowing us to acquire the signal we need from the frequency range we require while also reducing unwanted noise by blocking frequency regions we won't be using anyway. In both sections of our proposed method, we used a band-pass filter. In the feature fusion-based approach, we used this filtering technique to isolate the frequency band between 4 and 30 Hz, which contains the crucial information we require; this helps eliminate unwanted noise. We decided to divide the signals of interest into three further bands: theta, alpha, and beta. These bands were chosen because they are the most commonly used bands for EEG signal analysis. The definition of band boundaries is somewhat subjective; the ranges that we use in our case are theta between 4 and 8 Hz, alpha between 8 and 13 Hz, and beta between 13 and 20 Hz. For the 2DCNN-XGBoost fusion-based approach, we used the same filtering technique to retain the frequency range between 4 and 30 Hz, which contains the relevant signals, and generated spectrogram images. Here the spectrograms were extracted from the signals using STFT and transformed into RGB pictures.
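As an illustration of this step, the sketch below applies a zero-phase Butterworth band-pass filter to one EEG channel and splits it into the three sub-bands; the filter order, the use of SciPy, and the placeholder signal are assumptions for illustration, not details taken from the paper.

```python
import numpy as np
from scipy.signal import butter, filtfilt

FS = 128  # DREAMER EEG sampling rate in Hz

def bandpass(signal, low, high, fs=FS, order=4):
    """Zero-phase Butterworth band-pass filter between low and high (Hz)."""
    nyq = 0.5 * fs
    b, a = butter(order, [low / nyq, high / nyq], btype="band")
    return filtfilt(b, a, signal)

# Keep the 4-30 Hz region of interest, then split it into the three sub-bands.
eeg = np.random.randn(60 * FS)      # placeholder for one 60 s channel of stimuli EEG
roi = bandpass(eeg, 4, 30)
theta = bandpass(eeg, 4, 8)
alpha = bandpass(eeg, 8, 13)
beta = bandpass(eeg, 13, 20)
```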

After pre-processing, we used several feature extraction techniques for our feature fusion-based and 2DCNN-XGBoost fusion-based approaches, which we discuss below:

The Fast Fourier Transform (FFT) is among the most useful methods for processing various signals19,20,21,22,23. We used the FFT algorithm to compute a sequence of Discrete Fourier Transform coefficients. The FFT is valuable because it makes moving between the time (or space) domain and the frequency domain equally computer-feasible, and it achieves an O(N log N) cost, where N is the length of the vector. It functions by splitting an N-point time-domain signal into N single-point time-domain signals in one stage, estimating the N corresponding frequency spectra in the second stage, and lastly synthesizing the N spectra into a single frequency spectrum.

The equations of FFT are shown below (1), (2):

$$\begin{aligned} H(p) = \sum_{t=0}^{N-1} r(t)\, W_{N}^{pt}, \end{aligned}$$

(1)

$$\begin{aligned} r(t) = \frac{1}{N} \sum_{p=0}^{N-1} H(p)\, W_{N}^{-pt}. \end{aligned}$$

(2)

Here \(H(p)\) represents the Fourier coefficients of \(r(t)\).

(a) A baseline EEG signal in time domain, (b) A baseline EEG signal in frequency domain using FFT, (c) A stimuli EEG signal in time domain, (d) A stimuli EEG signal in frequency domain using FFT.

We have implemented this FFT to get the coefficients shown in Fig. 2. The mean and maximum features for each band were then computed. Therefore, we get 6 features for each channel across 3 bands, for a total of 84 features distributed across 14 channels.
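To make the feature count concrete, a small sketch of the FFT feature computation for one band of one channel is shown below; the use of NumPy's rfft and the random placeholder signal are assumptions for illustration.

```python
import numpy as np

def fft_band_features(band_signal):
    """Mean and maximum of the FFT magnitude spectrum for one band of one channel."""
    coeffs = np.abs(np.fft.rfft(band_signal))
    return coeffs.mean(), coeffs.max()

# 2 features x 3 bands = 6 per channel; across 14 channels this yields the 84 FFT features.
band = np.random.randn(60 * 128)    # placeholder for one filtered band of one channel
mean_coeff, max_coeff = fft_band_features(band)
```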

The Discrete Cosine Transform (DCT) expresses a finite set of data points as a sum of cosine functions at varying frequencies and has been used in research24,25,26,27,28. The DCT is usually applied to the coefficients of a periodically and symmetrically extended sequence of the Fourier series. In signal processing, DCT is among the most commonly used transformation methods. In the time domain the imaginary part of the signal is zero, and in the frequency domain the real part of the spectrum is even (symmetric) while the imaginary part is odd. With the following Eq. (3), we can compute the transform coefficients:

$$\begin{aligned} X_{P} = \sum_{n=0}^{N-1} x_{n} \cos\left[ \frac{\pi}{N}\left( n + \frac{1}{2}\right) P\right], \end{aligned}$$

(3)

where \(x_n\) is the list of N real numbers and \(X_P\) is the set of transformed data values.

(a) A baseline EEG signal in time domain, (b) A baseline EEG signal in frequency domain using DCT, (c) A stimuli EEG signal in time domain, (d) A stimuli EEG signal in frequency domain using DCT.

We have implemented DCT to get the coefficients shown in Fig. 3. The mean and maximum features for each band were then computed. Therefore, we get 6 features for each channel across 3 bands, for a total of 84 features distributed across 14 channels.
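A comparable sketch for the DCT features, using SciPy's DCT-II with orthonormal scaling (an assumption about the implementation), would look like this:

```python
import numpy as np
from scipy.fft import dct

def dct_band_features(band_signal):
    """Mean and maximum of the DCT-II coefficients for one band of one channel."""
    coeffs = dct(band_signal, type=2, norm="ortho")
    return coeffs.mean(), coeffs.max()

band = np.random.randn(60 * 128)    # placeholder for one filtered band of one channel
mean_coeff, max_coeff = dct_band_features(band)
```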

The Hjorth parameters are one of the ways in which a signal's statistical properties are described in the time domain; there are three parameters: Activity, Mobility, and Complexity. These parameters have been calculated in many studies29,30,31,32.

Activity: This parameter describes the power of the signal, i.e., the variance of the time function. It can indicate the surface of the power spectrum in the frequency domain. The notation for activity is given below (4),

$$\begin{aligned} \mathrm{var}(y(t)). \end{aligned}$$

(4)

Mobility: This parameter represents the mean frequency, or the proportion of standard deviation of the power spectrum. It is defined as the square root of the variance of the first derivative of the signal y(t) divided by the variance of y(t). The notation for mobility is given below (5),

$$\begin{aligned} \sqrt{\frac{\mathrm{var}(y'(t))}{\mathrm{var}(y(t))}}. \end{aligned}$$

(5)

Complexity: This parameter reflects the change in frequency. It compares the signal's resemblance to a pure sinusoidal wave, and its value converges to 1 the more similar the signal is. The notation for complexity is given below (6),

$$\begin{aligned} \frac{\mathrm{mobility}(y'(t))}{\mathrm{mobility}(y(t))}. \end{aligned}$$

(6)

For our analysis, we calculated Hjorth's activity, mobility, and complexity parameters as features. Therefore, we get 9 features for each channel across 3 bands, for a total of 126 features distributed across 14 channels.
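A compact sketch of the three Hjorth parameters, approximating the derivatives with discrete differences (an assumption about the implementation), is given below.

```python
import numpy as np

def hjorth_parameters(y):
    """Hjorth activity, mobility and complexity of a 1-D signal (Eqs. 4-6)."""
    dy = np.diff(y)          # first derivative, approximated by finite differences
    ddy = np.diff(dy)        # second derivative
    activity = np.var(y)
    mobility = np.sqrt(np.var(dy) / np.var(y))
    complexity = np.sqrt(np.var(ddy) / np.var(dy)) / mobility   # mobility(y') / mobility(y)
    return activity, mobility, complexity

band = np.random.randn(60 * 128)     # placeholder for one filtered band of one channel
activity, mobility, complexity = hjorth_parameters(band)
```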

Statistics is the application of mathematics to applied or scientific data processing. We use statistical features to work on information-based data, focusing on the mathematical summaries of this information. Statistics helps us understand how our data are organized and how other data science methods can best be applied to obtain more accurate and structured solutions. There are multiple studies33,34,35 on emotion analysis where statistical features were used. The statistical features that we have extracted are median, mean, max, skewness, and variance. As a result, we get 5 features for each channel, for a total of 70 features distributed across 14 channels.
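For the statistical features, a direct translation into code might look like the following; using SciPy's skew for the skewness calculation is an assumption.

```python
import numpy as np
from scipy.stats import skew

def statistical_features(channel_signal):
    """Median, mean, max, skewness and variance for one channel (5 features)."""
    x = np.asarray(channel_signal)
    return np.median(x), x.mean(), x.max(), skew(x), x.var()

channel = np.random.randn(60 * 128)   # placeholder for one EEG channel
feats = statistical_features(channel)
```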

The Poincare plot, which takes a series of intervals and plots each interval against the following interval, is an emerging analysis technique. In clinical settings, the geometry of this plot has been shown to differentiate between healthy and unhealthy subjects. It is also used for visualizing and quantifying the association between two consecutive data points in a time series. Since long-term correlation and memory are demonstrated in the dynamics of variations in physiological rhythms, this analysis extends the Poincare plot to lagged steps, quantifying the association between sequential data points in a time sequence rather than only between two consecutive points. We used two parameters in our paper, which are:

SD1: Represents the standard deviation of the distances of points from axis 1 and defines the width of the ellipse (short-term variability). The descriptor SD1 can be defined as (7):

$$\begin{aligned} SD1 = \frac{\sqrt{2}}{2}\, SD(P_n - P_{n+1}). \end{aligned}$$

(7)

SD2: Represents the standard deviation of the distances of points from axis 2 and corresponds to the length of the ellipse (long-term variability). The descriptor SD2 can be defined as (8):

$$\begin{aligned} SD2 = \sqrt{2\, SD(P_n)^2 - \frac{1}{2}\, SD(P_n - P_{n+1})^2}. \end{aligned}$$

(8)

We have extracted 2 features which are SD1 and SD2 from each band (theta, alpha, beta). Therefore, we get 6 features for each channel across 3 bands, for a total of 84 features distributed across 14 channels.
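The two Poincare descriptors follow directly from Eqs. (7) and (8); the sketch below assumes the points P_n are simply the successive samples of a filtered band.

```python
import numpy as np

def poincare_sd(p):
    """SD1 (short-term) and SD2 (long-term) Poincare descriptors (Eqs. 7-8)."""
    p = np.asarray(p)
    diff = p[:-1] - p[1:]                      # P_n - P_{n+1}
    sd1 = (np.sqrt(2) / 2) * np.std(diff)
    sd2 = np.sqrt(2 * np.std(p) ** 2 - 0.5 * np.std(diff) ** 2)
    return sd1, sd2

band = np.random.randn(60 * 128)     # placeholder for one filtered band of one channel
sd1, sd2 = poincare_sd(band)
```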

The Welch method is a modified segmentation scheme used to estimate the average periodogram, and it has been used in papers3,23,36. The Welch method is applied to a time series and is concerned with decreasing the variance of the spectral density estimate. The Power Spectral Density (PSD) tells us which frequency ranges carry large variations and can be very helpful for further study. The Welch estimate of the PSD can be described by the following equations (9), (10) of the power spectra.

$$\begin{aligned} P(f) = \frac{1}{M U}\left| \sum_{n=0}^{M-1} x_{i}(n)\, w(n)\, e^{-j 2 \pi f n}\right|^{2}, \end{aligned}$$

(9)

$$\begin{aligned} P_{\text{welch}}(f) = \frac{1}{L} \sum_{i=0}^{L-1} P_{i}(f). \end{aligned}$$

(10)

Here, the power spectral density of each segment is defined first; the Welch power spectrum is then the average over all segments. We have implemented the Welch method to get the PSD of the signal, and from that, the mean power has been extracted from each band. As a result, we get 3 features for each channel across 3 bands, for a total of 42 features distributed across 14 channels.
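The mean band power can be obtained from SciPy's implementation of the Welch method; the two-second segment length below is an assumption.

```python
import numpy as np
from scipy.signal import welch

def mean_band_power(band_signal, fs=128):
    """Mean power of the Welch PSD for one band of one channel (1 feature)."""
    freqs, psd = welch(band_signal, fs=fs, nperseg=2 * fs)   # 2 s segments (assumed)
    return psd.mean()

band = np.random.randn(60 * 128)     # placeholder for one filtered band of one channel
power = mean_band_power(band)
```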

A Convolutional Neural Network (CNN) is primarily used to process images, so the time series is first converted into a time-frequency diagram using the Short-Time Fourier Transform (STFT). The CNN extracts the required information from input images using multilayer convolution and pooling, and then classifies the image using fully connected layers. We have calculated the STFT of the filtered signal, which ranges between 4 and 30 Hz, and transformed the result into RGB images. Some of the generated images are shown in Fig. 4.
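One way to produce such spectrogram images is sketched below; SciPy's STFT and a Matplotlib rendering to an RGB file are assumptions about the implementation, as are the window length and image size.

```python
import numpy as np
import matplotlib.pyplot as plt
from scipy.signal import stft

def spectrogram_image(filtered_signal, fs=128, out_path="spectrogram.png"):
    """Compute the STFT of a 4-30 Hz filtered EEG channel and save it as an RGB image."""
    f, t, Z = stft(filtered_signal, fs=fs, nperseg=fs)       # 1 s windows (assumed)
    plt.figure(figsize=(2.24, 2.24), dpi=100)                # image size is an assumption
    plt.pcolormesh(t, f, np.abs(Z), shading="gouraud")
    plt.axis("off")
    plt.savefig(out_path, bbox_inches="tight", pad_inches=0)
    plt.close()

spectrogram_image(np.random.randn(60 * 128))   # placeholder for one filtered channel
```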

EEG signal spectrograms using STFT with classification (a) high arousal, high valence, and low dominance, (b) low arousal, high valence, and high dominance, (c) high arousal, low valence, and low dominance.

To convert time-series EEG signals into picture representations, wavelet algorithms and Fourier transforms are commonly utilized. However, in order to preserve the integrity of the original data, EEG conversion should be done solely in the time-frequency domain. As a result, STFT is the best method for preserving the EEG signal's most complete characteristics, which is why we have used it in our second process. The spectrograms were extracted from the signal using STFT, and Eq. (11) is given below:

$$\begin{aligned} Z_{n}\left(e^{j\hat{\omega}}\right) = e^{-j\hat{\omega} n}\left[\left(W(n)\, e^{j\hat{\omega} n}\right) \times x(n)\right], \end{aligned}$$

(11)

where \(e^{-j\hat{\omega} n}\) is the complex band-pass filter output modulated by the signal. From the above equation we have calculated the STFT of the filtered signals.

For our feature fusion-based approach, as we have pre-trial signals, we used 4 s of pre-trial signal as the baseline, resulting in 512 samples each at a 128 Hz sampling rate. Then, similar to the features extracted for the stimuli, the features from the baseline signals were also extracted. The stimuli features were then divided by the baseline features in order to isolate the differences induced by the stimulus test alone, as is also done in paper3.

After extracting all the features and calculating the ratio between stimuli features and baseline features, we added the self-assessment ratings of arousal, valence, and dominance. The data set for the feature fusion-based approach now has 414 data points with 630 features for each data point. We scaled the data using MinMax scaling to remove the large variation in our data set. The MinMax estimator scales and translates each value individually so that it lies between 0 and 1, within the defined range.

The formula for MinMax scaling is (12),

$$\begin{aligned} X_{new} = \frac{X_{i} - \operatorname{Min}(X)}{\operatorname{Max}(X) - \operatorname{Min}(X)}. \end{aligned}$$

(12)
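In practice this scaling can be done with scikit-learn's MinMaxScaler; the random placeholder matrix below stands in for the 414 x 630 feature table described above.

```python
import numpy as np
from sklearn.preprocessing import MinMaxScaler

X = np.random.rand(414, 630)   # placeholder for the 414 x 630 feature matrix
X_scaled = MinMaxScaler(feature_range=(0, 1)).fit_transform(X)   # each column mapped to [0, 1]
```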

There are various feature selection techniques used by many researchers to discard features that are not needed and keep only the important features that can play a big role in prediction. In our paper we used two feature selection methods: Recursive Feature Elimination (i.e., Fig. 5) and the Chi-square test (i.e., Fig. 6).

Procedure of recursive feature elimination (RFE).

Procedure of feature selection using Chi-square.

RFE (i.e., Fig. 5) is a wrapper-type feature selection technique applied over the vast span of features. The term recursive refers to the way the method loops backward over the feature set to identify the best-fitting features: each predictor is given an importance score, and the predictor with the lowest score is eliminated. Additionally, cross-validation is used to find the optimal number of features, ranking the various feature subsets and picking the best selection of features by score. In this method one attribute is taken together with the target attribute, and the procedure keeps combining further attributes with the target attribute to produce new models. Thus, different subsets of features in different combinations generate models through training. All these models are then filtered to find the one with the maximum accuracy and its corresponding features. In short, we remove a feature if accuracy stays higher or at least equal after its removal, and restore it if accuracy drops after elimination. Here we used a step size of 1 to eliminate one feature at a time at each level, which helps remove the worst features early while keeping the best features, in order to improve the overall accuracy of the model.
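A cross-validated RFE run with step size 1, as described above, can be expressed with scikit-learn; the linear SVM estimator, fold count, scoring metric, and the label vector y (binary labels for one emotion dimension) are assumptions.

```python
from sklearn.feature_selection import RFECV
from sklearn.svm import SVC

# Eliminate one feature per iteration (step=1); cross-validation picks the subset size.
selector = RFECV(estimator=SVC(kernel="linear"), step=1, cv=5, scoring="accuracy")
X_rfe = selector.fit_transform(X_scaled, y)   # y: binary labels for one emotion dimension
```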

The Chi-square test (i.e., Fig. 6) is a filter method that assesses features by comparing the predicted data with the observed data based on their importance. It determines whether a feature has an effect on nominally categorized data by comparing observed and expected values. In this method one predicted data set is considered as a base point, and the expected data are calculated from the observed values with respect to that base point.

The Chi-square value is computed by (13):

$$\begin{aligned} \chi^{2} = \sum_{i=1}^{m} \sum_{j=1}^{k} \frac{\left( A_{ij} - \frac{R_{i} C_{j}}{N}\right)^{2}}{\frac{R_{i} C_{j}}{N}}, \end{aligned}$$

(13)

where m is the number of intervals, k is the number of classes, \(R_i\) is the number of patterns in the ith interval, \(C_j\) is the number of patterns in the jth class, and \(A_{ij}\) is the number of patterns in the ith interval and jth class.
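The Chi-square filter can likewise be applied with scikit-learn's SelectKBest; keeping 100 features is an arbitrary illustrative choice, and y again denotes the assumed binary labels for one dimension.

```python
from sklearn.feature_selection import SelectKBest, chi2

# chi2 requires non-negative inputs, which the earlier MinMax scaling already guarantees.
chi_selector = SelectKBest(score_func=chi2, k=100)
X_chi = chi_selector.fit_transform(X_scaled, y)
```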

After applying RFE and Chi-square, we observed from the achieved accuracy that Chi-square does not incorporate a machine learning (ML) model, while RFE uses a machine learning model and trains it to decide whether a feature is relevant or not. Moreover, in our research, the Chi-square method failed to choose the best subset of features that could provide better results, whereas, because of its exhaustive nature, the RFE method mostly gave the best subset of features. Therefore we chose RFE over Chi-square for feature elimination.

In research3, on this data set, the authors calculated the mean and standard deviation of the self-assessment ratings. They then divided each dimension into two classes, high or low, with the boundary between high and low at the midpoint of the 0-5 scale, which is 2.5. We have adjusted this boundary in our secondary process based on some of our observations. We have also calculated the mean and standard deviation of the self-assessment ratings, shown in Table 3, to separate each dimension of emotion into two classes, high (1) and low (0), representing two emotional categories for each dimension.

Arousal: For our 2DCNN-XGBoost fusion-based approach, ratings > 2.5 are considered in the Excited/Alert class (1) and ratings < 2.5 are considered Uninterested/Bored (0). Here, out of the 5796 data points, 4200 were in the excited/alert class and 1596 were in the uninterested/bored class. For the feature fusion-based approach, we focused on the average ratings for excitement, which correspond to stimuli numbers 5 and 16, with 3.70 ± 0.70 and 3.35 ± 1.07 respectively. Additionally, for calmness, we can take stimuli 1 and 11 into consideration, where the average ratings are 2.26 ± 0.75 and 1.96 ± 0.82 respectively. Therefore, ratings > 2 can be considered in the Excited/Alert class and ratings < 2 can be considered Uninterested/Bored. Here, out of the 414 data points, 393 were in the excited/alert class and 21 were in the uninterested/bored class. We have also shown the parallel coordinate plot for arousal in Fig. 8a to show the impact of different features on arousal level.

Valence: For our 2DCNN-XGBoost fusion-based approach, ratings > 2.5 are considered in the happy/elated class and ratings < 2.5 are considered unpleasant/stressed. Here, out of the 5796 data points, 2254 were in the unpleasant/stressed class and 3542 were in the happy/elated class. To store these values in the new data set, unpleasant/stressed is coded as 0 and happy/elated as 1. For the feature fusion-based approach, we first concentrated on the average happiness ratings, which correspond to stimuli 7 and 13, with 4.52 ± 0.59 and 4.39 ± 0.66 respectively. Additionally, stimuli (4, 15) and (6, 10) for fear and disgust were considered, where the average ratings are 2.04 ± 1.02, 2.48 ± 0.85, 2.70 ± 1.55, and 2.17 ± 1.15 respectively. Here, it is clear that ratings > 4 can be considered in the happy/elated class and ratings < 4 can be considered unpleasant/stressed. Out of the 414 data points, 359 were in the unpleasant/stressed class and 55 were in the happy/elated class. To store these values in the new data set, unpleasant/stressed is coded as 0 and happy/elated as 1. We have also shown the parallel coordinate plot for valence in Fig. 8b to show the impact of different features on valence level.

Dominance: For our 2DCNN-XGBoost fusion-based approach, the same high/low scheme is followed. Here, ratings > 2.5 fall in the helpless/without control class and ratings < 2.5 are considered for the empowered class. Out of the 5796 data points, 1330 were in the helpless/without control class and 4466 were in the empowered class. To store these values in the new data set, helpless/without control is coded as 0 and empowered as 1. For the feature fusion-based approach, we targeted stimuli numbers 4, 6, and 8, which target the emotions of fear, disgust, and anger, with mean ratings of 4.13 ± 0.87, 4.04 ± 0.98, and 4.35 ± 0.65 respectively. So ratings > 4 fall in the helpless/without control class and the rest in the empowered class. Out of the 414 data points, 65 were in the helpless/without control class and 349 were in the empowered class. To store these values in the new data set, helpless/without control is coded as 0 and empowered as 1. We have also shown the parallel coordinate plot for dominance in Fig. 8c to show the impact of different features on dominance level.
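The thresholding itself reduces to a simple comparison; the sketch below shows the idea with hypothetical ratings, using the thresholds quoted in the text (2.5 for the 2DCNN-XGBoost approach; 2 or 4 for the feature fusion-based approach, depending on the dimension).

```python
import numpy as np

def binarize(ratings, threshold):
    """Map self-assessment ratings to high (1) / low (0) classes for one dimension."""
    return (np.asarray(ratings) > threshold).astype(int)

ratings = np.array([3.7, 1.9, 4.4, 2.6])        # hypothetical self-assessment ratings
arousal_cnn_labels = binarize(ratings, 2.5)     # 2DCNN-XGBoost approach
arousal_fusion_labels = binarize(ratings, 2)    # feature fusion-based approach
```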

The overall class distribution for arousal, valence, and dominance is shown in Fig. 7.

Overall class distribution after conversion to a two-class rating score for arousal, valence and dominance.

Impact factor of features on (a) arousal, (b) valence and (c) dominance using parallel co-ordinate plot.

A Convolutional Neural Network (CNN) is a type of deep neural network used to analyze visual imagery in deep learning. Figure 9 represents the overall two-dimensional Convolutional Neural Network model used in our proposed method (i.e., Fig. 1), which forms our 2DCNN-XGBoost fusion approach. Before applying this CNN architecture, we generated spectrogram images by filtering the frequency band containing significant signals between 4 and 30 Hz. Following that, we computed the Short-Time Fourier Transform of the EEG signals and converted them to spectrogram images before extracting features with a 2D Convolutional Neural Network. We train the model with 2D convolutional layers on the obtained spectrogram images, and then retrieve the trained features from the training layer with the help of an additional dense layer. We have implemented a test bed to evaluate the performance of our proposed method. The proposed model is trained using the Convolutional Neural Network (CNN) described below,

The architecture of the implemented CNN model.

Basic features such as horizontal and diagonal edges are usually extracted by the first layer. This information is passed on to the next layer, which is responsible for detecting more complicated characteristics such as corners and combined edges. As we progress deeper into the network, it becomes capable of recognizing ever more complex features such as objects, faces, and so on. The classification layer generates a series of confidence ratings (numbers between 0 and 1) from the final convolution layer, indicating how likely the image is to belong to a class. In our proposed method, we have used three Conv2D layers and identified the classes.

The pooling layer is in charge of shrinking the spatial size of the convolved features. By lowering the size, the computational power required to process the data is reduced. Pooling can be divided into two types: average pooling and max pooling. We have used max pooling because it gives a better result than average pooling. With max pooling we take the maximum pixel value from the region of the image covered by the kernel. It removes noisy activations and performs de-noising as well as dimensionality reduction. In general, any pooling function can be represented by the following formula (14):

$$\begin{aligned} q_{j}^{(l+1)} = \mathrm{Pool}\left(q_{1}^{(l)}, \ldots, q_{i}^{(l)}, \ldots, q_{n}^{(l)}\right), \quad q_{i} \in R_{j}^{(l)}, \end{aligned}$$

(14)

where \(R_{j}^{(l)}\) is the jth pooled region at layer l and Pool() is the pooling function over the pooled region.

We added a dropout layer after the pooling layer to reduce overfitting. Accuracy improves continuously as the dropout rate decreases, while the loss rate also decreases. Some of the max pooling outputs are randomly picked and completely ignored; they aren't transferred to the following layer.

After a set of 2D convolutions, it is always necessary to perform a flatten operation. Flattening is the process of turning data into a one-dimensional array for further processing. To make a single long feature vector, we flatten the output of the convolutional layers. It is also connected to the final classification scheme.

A dense layer gives the neural network a fully connected layer. All of the preceding layer's outputs are fed to all of its neurons, with each neuron delivering one output to the following layer.

In our proposed method, with this CNN architecture, diverse kernels are employed in the convolution layers to extract high-level features, resulting in different feature maps. At the end of the CNN model there is a fully connected layer, whose output generates the predicted class labels of emotions. Following our proposed method, we have added a dense layer with 630 units after the training layer to extract that number of features.
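A possible Keras realisation of this architecture is sketched below. The three Conv2D layers, max pooling, dropout, flatten, and the 630-unit dense feature layer follow the description in the text; the filter counts, kernel sizes, dropout rate, input image size, and optimizer are assumptions.

```python
from tensorflow.keras import layers, models

def build_cnn(input_shape=(224, 224, 3)):
    """2D CNN with a 630-unit dense feature layer and a two-class output."""
    model = models.Sequential([
        layers.Conv2D(32, (3, 3), activation="relu", input_shape=input_shape),
        layers.MaxPooling2D((2, 2)),
        layers.Conv2D(64, (3, 3), activation="relu"),
        layers.MaxPooling2D((2, 2)),
        layers.Conv2D(128, (3, 3), activation="relu"),
        layers.MaxPooling2D((2, 2)),
        layers.Dropout(0.25),
        layers.Flatten(),
        layers.Dense(630, activation="relu", name="feature_layer"),  # 630 features, as in the text
        layers.Dense(2, activation="softmax"),   # high/low class for one emotion dimension
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model
```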

Extreme Gradient Boosting (XGBoost) is a machine learning algorithm that uses a supervised learning strategy to accurately predict a target variable by combining the predictions of several weaker models. It is a common data mining tool with good speed and performance; the XGBoost model computes about 10 times faster than the Random Forest model. The XGBoost model is generated using the additive tree method, which involves adding a new tree at each step to complement the trees that have already been built. As additional trees are built, the accuracy generally improves. In our proposed model, we used XGBoost after applying the CNN: we extracted a set of features from the CNN's trained layer and then, based on the retrieved features, used Extreme Gradient Boosting to classify all of the dimensions of emotion. The following Eqs. (15) and (16) are used in Extreme Gradient Boosting.

$$\begin{aligned} f(m) \approx f(k) + f^{\prime}(k)(m-k) + \frac{1}{2} f^{\prime\prime}(k)(m-k)^{2}, \end{aligned}$$

(15)

$$\mathcal{L}^{(t)} \simeq \sum_{i=1}^{n}\left[ l\left( q_{i}, q^{(t-1)}\right) + r_{i} f_{t}\left( m_{i}\right) + \frac{1}{2} s_{i} f_{t}^{2}\left( m_{i}\right) \right] + \Omega\left( f_{t}\right) + C,$$

(16)

where C is a constant, and \(r_i\) and \(s_i\) are defined as,

$$\begin{aligned} r_{i} = \partial_{\hat{z}_{i}^{(b-1)}}\, l\left( z_{i}, \hat{z}_{i}^{(b-1)}\right), \end{aligned}$$

(17)

$$\begin{aligned} s_{i} = \partial^{2}_{\hat{z}_{i}^{(b-1)}}\, l\left( z_{i}, \hat{z}_{i}^{(b-1)}\right). \end{aligned}$$

(18)

After removing all the constants, the specific objective at step b becomes,

$$\begin{aligned} \sum_{i=1}^{n}\left[ r_{i} f_{t}\left( m_{i}\right) + \frac{1}{2} s_{i} f_{t}^{2}\left( m_{i}\right) \right] + \Omega\left( f_{t}\right), \end{aligned}$$

(19)
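Putting the fusion together, the sketch below pulls the 630 activations of the dense feature layer from the trained CNN (build_cnn from the earlier sketch) and hands them to an XGBoost classifier; the hyperparameters and the train/test variable names are assumptions.

```python
import xgboost as xgb
from tensorflow.keras import models

cnn = build_cnn()
# cnn.fit(train_images, train_labels, epochs=..., batch_size=...)   # train on the spectrograms

# Feature extractor that returns the 630-unit dense layer activations.
extractor = models.Model(inputs=cnn.input, outputs=cnn.get_layer("feature_layer").output)
train_feats = extractor.predict(train_images)
test_feats = extractor.predict(test_images)

clf = xgb.XGBClassifier(n_estimators=300, learning_rate=0.1)   # hyperparameters assumed
clf.fit(train_feats, train_labels)
accuracy = (clf.predict(test_feats) == test_labels).mean()
```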

Go here to see the original:

CNN-XGBoost fusion-based affective state recognition using EEG spectrogram image analysis | Scientific Reports - Nature.com

Read More..

Gulf region flips bullish on crypto mining, but can it be green? – Al-Monitor

Crypto mining is an electricity-intensive process that requires running computer servers to solve a complex set of algorithms. In other words, mining crypto converts electricity into digital coins that are then sold at market value. For that reason, access to cheap power is a trump card. The energy-rich Gulf region is a suitable candidate: it is home to some of the world's largest fossil fuel resources and boasts the world's lowest solar tariffs.

After a decade of hesitation, Gulf states have started to warm up to cryptocurrencies. The United Arab Emirates (UAE) and Bahrain, in particular, are looking to attract centralized crypto exchanges, which processed more than $14 trillion worth of crypto assets in 2021, and their interest in mining crypto is rising. "There is a push from the UAE government to make greater use of power generation capacities," said Abdulla Al Ameri, an Emirati crypto mining entrepreneur who has been mining for about five years, including in Kazakhstan and Russia. "I expect the UAE crypto mining market to take off in the next two years," he told Al-Monitor. The question is, how green will this be?

Simultaneously, Gulf states have warmed up to renewables, solar in particular, opening the door for solar-powered crypto mining. "We are working on a hybrid crypto farm in Abu Dhabi powered by solar by day, grid at night," CryptoMiners CEO Nasser El Agha told Al-Monitor. The Dubai-headquartered crypto mining service provider is cooperating with an undisclosed British company to launch the Gulf's first company-scale solar-crypto farm by December 2022. It is a proof of concept intended ultimately to be taken to the market, specifically to agricultural farms wishing to generate extra income through crypto mining.

Original post:

Gulf region flips bullish on crypto mining, but can it be green? - Al-Monitor

Read More..

Automotive AI Market Projected to Hit USD 1498.3 Million by 2030 at a CAGR of 30.1% – Report by Market Research Future (MRFR) – GlobeNewswire

New York, US, Aug. 17, 2022 (GLOBE NEWSWIRE) -- According to a comprehensive research report by Market Research Future (MRFR), "Automotive AI Market Analysis by Technology, by Process, by Application and by Regions - Global Forecast To 2030", the market valuation is poised to reach USD 1,498.3 Million by 2030, registering a 30.1% CAGR throughout the forecast period (2022-2030).

Automotive AI Market Overview

A developing business standard is growing in the modern-day digital world as artificial intelligence (AI) becomes more ubiquitous. Artificial intelligence for the automotive industry is flourishing in the modern age, allowing companies to observe their operations better, deliver better results in the virtual environment, develop autonomous and semi-autonomous cars, enhance the in-car customer experience, and improve business plans.

Automotive AI Market Report Scope:

Get Free Sample PDF Brochure

https://www.marketresearchfuture.com/sample_request/4258

Artificial intelligence in the automotive industry has recorded massive growth in the last few years. The market's growth is credited primarily to the growing automobile industry. Furthermore, factors such as growing investments, the growing trend of autonomous vehicles, and industry-wide standards like navigation systems are also projected to catalyze market demand over the coming years.

Automotive AI Market USP Covered

Automotive Artificial Intelligence Market Drivers

The global market for automotive artificial intelligence has registered massive growth in recent times. The market's growth is credited to factors such as rising demand for better user experiences, increasing preference for top-quality vehicles, rising concern over confidentiality and protection, and an increasing trend toward automated driving.

Automotive AI Market Restraints

On the other hand, the growing concerns regarding data security are likely to impede the market's growth.

Automotive Artificial Intelligence Market Segments

Among all the technologies, the deep learning segment is anticipated to account for the largest market share across the global market for automotive artificial intelligence over the assessment timeframe. The significant investments made by OEMs are the primary aspect causing an upsurge in the segment's growth. The growing research & development activities on self-driving cars using deep learning for sound recognition, data analysis, and image processing are another prime aspect boosting the segment's growth.

Browse In-depth Market Research Report (111 Pages) on Automotive AI Market:

https://www.marketresearchfuture.com/reports/automotive-artificial-intelligence-market-4258

Among all the processes, the data mining segment is anticipated to dominate the global market for automotive artificial intelligence over the coming years. Various types of sensors in automobiles are used to accumulate information which is further used to train the automobile to detect and identify obstacles and various barriers. The massive amount of data generated is the primary aspect causing an upsurge in the segment's growth.

Among all the end-users, the semi-autonomous segment is anticipated to dominate the global market for automotive artificial intelligence over the review timeframe. The growing implementation of gesture and voice recognition systems is the main reason causing an upsurge in the segment's growth.

Automotive AI Market Regional Analysis

The global market for automotive artificial intelligence is analyzed across five major regions: Latin America, the Middle East & Africa, Asia-Pacific, Europe, and North America.

According to the analysis reports by MRFR, the North American region is anticipated to dominate the global market for automotive artificial intelligence over the coming years. The primary reason causing an upsurge in the regional market's growth is the presence of significant manufacturers in this area. Moreover, in comparison with other areas, the region has substantially more access to advanced technology to build artificial intelligence systems, which is anticipated to boost the growth of the regional market over the assessment timeframe. Furthermore, the growing expectation of autonomous cars across the United States has significantly contributed to the nation's growth. In addition, favorable government regulations, coupled with the fact that the automotive sector's prominent leaders such as Fiat Chrysler Automotive, Ford Motor Company, and General Motors, are taking part in the development of artificial intelligence in automobiles by constantly improving their products, will have a better potential in the global market.

Ask To Expert:

https://www.marketresearchfuture.com/ask_for_schedule_call/4258

COVID-19 Impact

The global COVID-19 pandemic has had an enormous impact on the majority of the market sectors across the globe. The rapid spread of the disease across the majority of countries worldwide has led to the implementation of partial or complete lockdowns. The travel restrictions and social distancing norms imposed across the majority of the world caused significant disruptions in the supply chain networks for most industry areas. Some major sectors affected by the pandemic include hospitality, automobile, construction, etc. Like any other sector across the global market, the global market for automotive artificial intelligence has also faced a significant impact since the arrival of the pandemic. The global health crisis impacted public health and severely impacted the financial activities across several industry sectors. Recently, the adoption of artificial intelligence across various end-use applications belonging to various sectors has become the latest trend worldwide. During pandemic times, AI-based tools are being utilized widely worldwide. With the sudden fall in the global demand for automobiles, the global market for automotive artificial intelligence suffered significant losses in terms of labor and revenues.

On the other hand, with the pandemic fading across the globe, the global economy and industrial activities have been picking up pace in the last few months. The growing investments in research & development activities to launch innovative solutions will likely help the market get back on track over the assessment timeframe. In addition, with the rapid vaccination rates across the majority of the world, the global market is likely to experience favorable growth over the coming years.

Check for Discount:

https://www.marketresearchfuture.com/check-discount/4258

Automotive Artificial Intelligence Market Competitive Analysis

Dominant Key Players on Automotive AI Market Covered are:

Related Reports:

Off the Road Tire Market Analysis Research Report: Information By Vehicle Type, Construction Type, Distribution Channel and Region - Forecast till 2030

Industrial Vehicles Market Growth Research Report: Information by Product Type, Drive Type, Application, and Region Forecast till 2030

Powersports Market Trends Research Report: Information By Type, By Application, By Model - Forecast till 2030

About Market Research Future:

Market Research Future (MRFR) is a global market research company that takes pride in its services, offering a complete and accurate analysis regarding diverse markets and consumers worldwide. Market Research Future has the distinguished objective of providing the optimal quality research and granular research to clients. Our market research studies by products, services, technologies, applications, end users, and market players for global, regional, and country level market segments, enable our clients to see more, know more, and do more, which help answer your most important questions.

Follow Us: LinkedIn | Twitter

Continue reading here:

Automotive AI Market Projected to Hit USD 1498.3 Million by 2030 at a CAGR of 30.1% - Report by Market Research Future (MRFR) - GlobeNewswire

Read More..

Factor Investing the road ahead – The Financial Express

By Bijon Pani, Chief Investment Officer, NJ Asset Management Private Limited

Factor investing is a method of choosing stocks (or other asset classes) using a predefined set of rules or parameters. The science of how to choose these parameters is what determines how successful the factor is in future. When choosing a factor, one needs to make sure they are robustly constructed, should work across multiple countries, and have a sensible rationale on why it works.

Factors offer a way of segregating a diversified portfolio's returns (such as those of a fund manager you might like) into its various factor components; what then remains unexplained is the contribution of the manager.

The first example of using factors to explain returns comes from the CAPM model which showed risk and return in terms of the market exposure. But the CAPM left a lot of the return unexplained. There were multiple influential academic papers which put forth other factors such as value and size to explain returns.

It was Fama and French in 1993 who conceived a simple framework to think about returns in terms of factors. They added two powerful factors, value and size, to the existing market return factor. It was further enhanced by Carhart to include momentum.

The four factor model became the bedrock of performance and risk analysis of fund management for many decades. Over time as the computing power became faster and more accessible, the academic research into factors exploded as crunching data became easier. The latest innovation uses machine learning, natural language processing and alternative datasets. There are now hundreds of documented parameters, even though most of them fall into one of the four factor styles: value, quality, low volatility and momentum.

John Cochrane, a leading academician who studies factors, rightly calls this the "factor zoo". The job of a practitioner has been made hard as newer parameters keep getting reported that promise better returns compared to the older ones. It is especially nuanced and harder when it comes to factor investing in India, because India suffers from two big issues.

Firstly, liquidity is a big problem beyond a certain number of stocks, and when constructing factors, you need a large enough universe to measure the factor premiums and construct portfolios. This job gets harder if the universe has illiquid stocks, as one may end up including stocks in portfolios that cannot be easily invested in.

Secondly, factor construction requires long clean data and we often suffer from inadequate and patchy data. This requires the skills of an experienced professional who can craft the factors specific to the Indian markets without losing the essence of it. It is, after all, very easy to fall into the data-mining trap and construct parameters that have worked in the past but may not in the future.

Once factors became common in the developed world, they made the life of traditional fund managers even more challenging. It wasn't easy beating the market index, but now with added factors, the overwhelming majority (more than 90 per cent by some research) couldn't beat a portfolio constructed using factors.

Morningstar India research found that only 26 per cent of large cap discretionary funds could beat the benchmark over the past 10 years. The average alpha was a mere 1.15 per cent. This alpha may have been negligible if we added other factors to the regression. The future of fund management in India will be very similar to that of the West: we will see active factor-based funds giving strong competition to discretionary fund managers in providing better risk-adjusted returns. The chart shows the performance of various factors and an equal weight multifactor portfolio (constructed from the 4 factor indices published by NSE) compared to the market over the last 10 years. Readers will notice that the performance of the various factors and the multifactor model was better than the index during this time. On a risk-adjusted basis, the numbers are even better.
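For readers who want to reproduce the idea, an equal-weight multifactor blend is simply the average of the individual factor index returns; the sketch below assumes a hypothetical CSV of monthly total-return series for the four NSE factor indices, with made-up file and column names.

```python
import pandas as pd

# Hypothetical file and column names; monthly total returns of the four factor indices.
factor_returns = pd.read_csv("nse_factor_indices.csv", index_col=0, parse_dates=True)
multifactor = factor_returns[["value", "quality", "low_volatility", "momentum"]].mean(axis=1)
cumulative = (1 + multifactor).cumprod() - 1   # cumulative return of the equal-weight blend
```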

Our retail participation in mutual funds is very low; over the near future, as more money flows into funds, quant funds will not just grow along with traditional funds but also increase their market share. A rule-based approach allows a parameter to be backtested to see how it has performed over various business cycles. This provides added confidence in the risk and return structure of the portfolio.

The only word of caution we would like to add is that even though rule-based investment strategies will grow rapidly in future, the growth will mostly be in the active approach, where effort is invested in studying factors in the Indian context. Simply copying parameters from developed markets into India may not work very well, so rule-based strategies need to be constructed keeping in mind the idiosyncrasies of Indian markets, not just replicating an index.

Factor investing has moved from being a fundamental concept of academic finance to the next disruptor in fund management.

(Disclaimer: The views expressed above are the authors own views.)

Read the original post:

Factor Investing the road ahead - The Financial Express

Read More..

Investors' five-year losses continue as Hochschild Mining (LON:HOC) dips a further 12% this week, earnings continue to decline – Simply Wall St

Long term investing works well, but it doesn't always work for each individual stock. It hits us in the gut when we see fellow investors suffer a loss. Anyone who held Hochschild Mining plc (LON:HOC) for five years would be nursing their metaphorical wounds since the share price dropped 73% in that time. And some of the more recent buyers are probably worried, too, with the stock falling 51% in the last year. Furthermore, it's down 31% in about a quarter. That's not much fun for holders. This could be related to the recent financial results - you can catch up on the most recent data by reading our company report.

Given the past week has been tough on shareholders, let's investigate the fundamentals and see what we can learn.

Check out our latest analysis for Hochschild Mining

To quote Buffett, 'Ships will sail around the world but the Flat Earth Society will flourish. There will continue to be wide discrepancies between price and value in the marketplace...' One imperfect but simple way to consider how the market perception of a company has shifted is to compare the change in the earnings per share (EPS) with the share price movement.

During the five years over which the share price declined, Hochschild Mining's earnings per share (EPS) dropped by 1.1% each year. This reduction in EPS is less than the 23% annual reduction in the share price. So it seems the market was too confident about the business in the past. The less favorable sentiment is reflected in its current P/E ratio of 11.46.

The company's earnings per share (over time) is depicted in the image below (click to see the exact numbers).

It is of course excellent to see how Hochschild Mining has grown profits over the years, but the future is more important for shareholders. Take a more thorough look at Hochschild Mining's financial health with this free report on its balance sheet.

As well as measuring the share price return, investors should also consider the total shareholder return (TSR). The TSR is a return calculation that accounts for the value of cash dividends (assuming that any dividend received was reinvested) and the calculated value of any discounted capital raisings and spin-offs. It's fair to say that the TSR gives a more complete picture for stocks that pay a dividend. In the case of Hochschild Mining, it has a TSR of -70% for the last 5 years. That exceeds its share price return that we previously mentioned. The dividends paid by the company have thus boosted the total shareholder return.
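As a rough illustration of the difference between price return and TSR, the sketch below reinvests each cash dividend at the price on its payment date; the numbers and the simplifications (no taxes, no capital raisings or spin-offs) are hypothetical, not figures from this article.

```python
def total_shareholder_return(prices, dividends):
    """Approximate TSR from aligned lists of prices and per-share dividends (0 if none paid)."""
    shares = 1.0
    for price, dividend in zip(prices[1:], dividends[1:]):
        shares += shares * dividend / price      # reinvest the dividend at that day's price
    return shares * prices[-1] / prices[0] - 1

# Hypothetical five-year series: the price falls 73%, but dividends soften the total loss.
tsr = total_shareholder_return([100, 80, 60, 40, 27], [0, 2, 2, 1, 1])
```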

We regret to report that Hochschild Mining shareholders are down 49% for the year (even including dividends). Unfortunately, that's worse than the broader market decline of 3.6%. However, it could simply be that the share price has been impacted by broader market jitters. It might be worth keeping an eye on the fundamentals, in case there's a good opportunity. Unfortunately, last year's performance may indicate unresolved challenges, given that it was worse than the annualised loss of 11% over the last half decade. We realise that Baron Rothschild has said investors should "buy when there is blood on the streets", but we caution that investors should first be sure they are buying a high quality business. It's always interesting to track share price performance over the longer term. But to understand Hochschild Mining better, we need to consider many other factors. To that end, you should be aware of the 3 warning signs we've spotted with Hochschild Mining .

We will like Hochschild Mining better if we see some big insider buys. While we wait, check out this free list of growing companies with considerable, recent, insider buying.

Please note, the market returns quoted in this article reflect the market weighted average returns of stocks that currently trade on GB exchanges.

Have feedback on this article? Concerned about the content? Get in touch with us directly. Alternatively, email editorial-team (at) simplywallst.com.

This article by Simply Wall St is general in nature. We provide commentary based on historical data and analyst forecasts only using an unbiased methodology and our articles are not intended to be financial advice. It does not constitute a recommendation to buy or sell any stock, and does not take account of your objectives, or your financial situation. We aim to bring you long-term focused analysis driven by fundamental data. Note that our analysis may not factor in the latest price-sensitive company announcements or qualitative material. Simply Wall St has no position in any stocks mentioned.

Simply Wall St does a detailed discounted cash flow calculation every 6 hours for every stock on the market, so if you want to find the intrinsic value of any company just search here. It's FREE.

View post:

Investors five-year losses continue as Hochschild Mining (LON:HOC) dips a further 12% this week, earnings continue to decline - Simply Wall St

Read More..

NICE Announces Top Tier Microsoft Azure IP Co-Sell Status with the Full Power of NICE CXone Now Available Natively on Azure – StreetInsider.com


NICE secures Microsoft's highest level partner designation with a co-sell partnership for CXone

HOBOKEN, N.J.--(BUSINESS WIRE)--NICE (Nasdaq: NICE) today announced the expansion of its partnership with Microsoft, delivering the full power of CXone on Azure to create frictionless, personalized digital customer experiences. NICE has received Top Tier status, Microsoft's highest level partner designation for Azure IP Co-sell, driving deeper collaboration and strong go-to-market momentum. This partnership leverages the power of CXone to help organizations globally transform their customers' experiences and build a digital-first customer service operation.

With a joint global go-to-market co-selling strategy, working together with key strategic accounts to enable rapid time to value, extreme agility and a faster path to the cloud, NICE and Microsoft will accelerate organizations' adoption of CXone.

CXone's advanced AI and full portfolio of voice and digital solutions, together with its integrations with Teams, Dynamics, Nuance, ACS (Azure Communication Services), and Customer Insights, allow organizations of all sizes to create proactive, brand-differentiating interactions that exceed the expectations of the digital-first customer and go beyond the boundaries of the contact center.

Paul Jarman, CEO, NICE CXone, said, "Consumers today expect fast, convenient digital and self-service options. Through the expanded partnership with Microsoft and with CXone now available on Azure, and with our co-sell partnership, we are taking another step in the frictionless revolution, allowing organizations to meet their customers wherever they choose to start their journey and create a cohesive digital experience. This better-together offering will foster customer experience interaction (CXi) modernization and provide a standard-setting choice for customers."

About NICE
With NICE (Nasdaq: NICE), it's never been easier for organizations of all sizes around the globe to create extraordinary customer experiences while meeting key business metrics. Featuring the world's #1 cloud native customer experience platform, CXone, NICE is a worldwide leader in AI-powered self-service and agent-assisted CX software for the contact center and beyond. Over 25,000 organizations in more than 150 countries, including over 85 of the Fortune 100 companies, partner with NICE to transform - and elevate - every customer interaction. http://www.nice.com

Trademark Note: NICE and the NICE logo are trademarks or registered trademarks of NICE Ltd. All other marks are trademarks of their respective owners. For a full list of NICE's marks, please see: http://www.nice.com/nice-trademarks.

Forward-Looking Statements

This press release contains forward-looking statements as that term is defined in the Private Securities Litigation Reform Act of 1995. Such forward-looking statements, including the statements by Mr. Jarman, are based on the current beliefs, expectations and assumptions of the management of NICE Ltd. (the "Company"). In some cases, such forward-looking statements can be identified by terms such as "believe", "expect", "seek", "may", "will", "intend", "should", "project", "anticipate", "plan", "estimate", or similar words. Forward-looking statements are subject to a number of risks and uncertainties that could cause the actual results or performance of the Company to differ materially from those described herein, including but not limited to the impact of changes in economic and business conditions, including as a result of the COVID-19 pandemic; competition; successful execution of the Company's growth strategy; success and growth of the Company's cloud Software-as-a-Service business; changes in technology and market requirements; decline in demand for the Company's products; inability to timely develop and introduce new technologies, products and applications; difficulties or delays in absorbing and integrating acquired operations, products, technologies and personnel; loss of market share; an inability to maintain certain marketing and distribution arrangements; the Company's dependency on third-party cloud computing platform providers, hosting facilities and service partners; cyber security attacks or other security breaches against the Company; the effect of newly enacted or modified laws, regulations or standards on the Company and our products; and various other factors and uncertainties discussed in our filings with the U.S. Securities and Exchange Commission (the "SEC"). For a more detailed description of the risk factors and uncertainties affecting the company, refer to the Company's reports filed from time to time with the SEC, including the Company's Annual Report on Form 20-F. The forward-looking statements contained in this press release are made as of the date of this press release, and the Company undertakes no obligation to update or revise them, except as required by law.

View source version on businesswire.com: https://www.businesswire.com/news/home/20220817005366/en/

Corporate Media Contact: Christopher Irwin-Dudek, +1 201 561 4442, ET, [email protected]

Investors: Marty Cohen, +1 551 256 5354, ET, [email protected]

Omri Arens, +972 3 763 0127, CET, [email protected]

Source: NICE


Fixing global payroll with cloud services – IT-Online

Paying people is a demanding task. It requires financial planning, input from different parts of the organisation, complying with legislation, and of course paying a valuable workforce on time. Then there are the ongoing demands of accurate reporting, improving payroll processes and protecting all the related data.

All of these issues apply to payroll for a single region or jurisdiction. Once the activity extends across several countries, it is exponentially more challenging.

"A number of issues become much more complex with global payroll," says Heinrich Swanepoel, head of sales at cloud payroll platform PaySpace. "The four biggest ones we encounter are compliance, security, integration and support. These factors are important for any payroll environment, but they become particularly tough when you cover multiple regions."

Cloud-based payroll systems successfully counter such challenges, and companies recognise the advantage. According to the Chartered Institute of Payroll Professionals, by 2019, 38% of companies used cloud-based payroll software, 37% used on-premise solutions, and 25% opted to license hosted single-tenant products.

In other words, the cloud is already leading the payroll world. But why, and what should organisations know about the advantages? Global payroll demonstrates why cloud payroll services are so successful.

The problem with paying people

Global payroll amplifies the technical challenges and shortcomings of a payroll system. Compliance is the most obvious example. Local and global laws create a minefield for administrators. On the local level, they must comply with legislation that varies from country to country and is always subject to changes. Globally, they must keep an eye on legal demands such as financial reporting standards.

"The big problem with legislation is that it can change, but you can miss something crucial and get hit with penalties if you don't have enough local exposure," says Swanepoel. "Most of the time, you'll only discover the problem when there is an employee complaint or an inspection."

Traditional payroll systems don't cover such nuances, or do so at very high cost. Alternatively, a company would need to use internal or outsourced staff to make manual updates. In contrast, global cloud payroll systems continually update legislative rules for the regions they cover. Cloud systems update universally, so all users benefit from changes: if you wake up on a Monday and there are new payroll laws, you can expect them to be reflected in your system.

Integration is the second significant barrier that on-premise and single-tenant systems struggle to overcome. Managing payroll requires information from different parts of the business, such as HR databases and department invoices. Administrators must wait for data from these areas, which can be messy (spreadsheets) or exposed to security risks (emails). If you're sitting at HQ in Nairobi and waiting for local payment data from Dar es Salaam, such issues compound very quickly. Again, cloud platforms provide an alternative.

"If you use an integrated payroll system, you receive data continuously and automatically," explains Sandra Crous, MD of PaySpace. "That helps stop the habit of 'drop everything' payroll windows. It can also radically improve reporting and payroll processes, and it makes payroll transactions more secure. Integration between different regional banks and currency systems saves an enormous amount of time while providing significant transparency.

"Every business essentially wants a centre of record for payroll: one stop which provides the data needed to process payments or create reports. Cloud platforms are very good at creating that."
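To make the "centre of record" idea concrete, here is a minimal Python sketch of the pattern Crous describes: pulling payroll inputs automatically from source systems rather than waiting for spreadsheets or emails. The endpoints, field names and PayrollRecord structure are hypothetical illustrations, not PaySpace's actual API.

# Hypothetical sketch: aggregating payroll inputs from source systems into one record.
# The source URLs and field names are invented for illustration only.
from dataclasses import dataclass
import requests

@dataclass
class PayrollRecord:
    employee_id: str
    base_salary: float
    overtime_hours: float
    country: str

def fetch_json(url: str, token: str) -> list[dict]:
    # Pull data from a source system over HTTPS instead of via emailed spreadsheets.
    response = requests.get(url, headers={"Authorization": f"Bearer {token}"}, timeout=30)
    response.raise_for_status()
    return response.json()

def build_payroll_records(hr_api: str, time_api: str, token: str) -> list[PayrollRecord]:
    # Join HR master data with time-and-attendance data into a single centre of record.
    employees = {e["id"]: e for e in fetch_json(hr_api, token)}
    timesheets = {t["employee_id"]: t for t in fetch_json(time_api, token)}
    return [
        PayrollRecord(
            employee_id=emp_id,
            base_salary=emp["base_salary"],
            overtime_hours=timesheets.get(emp_id, {}).get("overtime_hours", 0.0),
            country=emp["country"],
        )
        for emp_id, emp in employees.items()
    ]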

Keep payroll safe and sound

Security and support are particularly crucial when working across multiple regions. Payroll data is among the most sensitive, critical and legally protected information in a business. Sending payroll data manually across email and mobile messages is risky and could contravene data privacy laws.

On-premise payroll systems come from an era where such concerns barely registered. But the world has changed, and cloud payroll platforms have security in their DNA.

"Good cloud platforms have to put security at the core of their business," Swanepoel notes. "We host customer data, so we must invest in good security and data practices. That means several things: using reputable cloud hosts that also invest a lot in security, employing internal security engineers, and getting the right certification. For example, we are certified to ISO 27001, which is a rigorous standard ensuring we handle data correctly and securely."

Such characteristics are pillars of the best cloud hosting models and apply to every territory where the provider makes its services available. This philosophy also extends to support, says Swanepoel: "You have to have local support for your customers, not just support for the software elements but also support for the business teams using the services."

Crous concludes: "The impact of cloud technology on payroll systems is incredible. I have been in the payroll space for decades, yet I've never seen as dramatic a jump in what the software can do as today. If you're still running payroll on older systems, especially across multiple countries, you need to look at what cloud payroll systems do differently."



RECUR360 Ranks No. 1776 on the 2022 Inc. 5000 Annual List – GlobeNewswire

CAVE CREEK, Ariz., Aug. 16, 2022 (GLOBE NEWSWIRE) -- Today, Inc. revealed that RECUR360, with three-year revenue growth of 346.95 percent, is No. 1776 on its annual Inc. 5000 list, the most prestigious ranking of the fastest-growing private companies in America. The list represents a one-of-a-kind look at the most successful companies within the economy's most dynamic segment: its independent businesses. Facebook, Chobani, Under Armour, Microsoft, Patagonia, and many other well-known names gained their first national exposure as honorees on the Inc. 5000.

The companies on the 2022 Inc. 5000 have not only been successful, but have also demonstrated resilience amid supply chain woes, labor shortages, and the ongoing impact of Covid-19. Among the top 500, the median three-year revenue growth rate soared to 2,144 percent. Together, those companies added more than 68,394 jobs over the past three years.

"The accomplishment of building one of the fastest-growing companies in the U.S., in light of recent economic roadblocks, cannot be overstated," says Scott Omelianuk, editor-in-chief of Inc. "Inc. is thrilled to honor the companies that have established themselves through innovation, hard work, and rising to the challenges of today."

"I am honored and humbled to have RECUR360 listed as '1776' on the Inc5000 list for 2022.We could not have achieved this without our wonderful customer base. The achievement is an attestation to the devotion and loyalty of our staff to generate such revenue growth through the pandemic and latest economy." - Andrew B Abrams - CEO - RECUR360 TECHNOLOGIES LLC

RECUR360 TECHNOLOGIES LLC ("RECUR360") is a SaaS-based platform providing enterprise-level invoice generation, payment processing, sales tax and late fee calculation, and accounts receivable and collections automation for QuickBooks Desktop and Online users. The RECUR360 API enables SaaS platforms to connect into RECUR360 as a bridge to QuickBooks and automate subscription billing. The R360 Cloud Hosting division provides remote desktops for hosting QuickBooks Desktop and integrated applications. RECUR360 was recognized as one of the Top 10 new Apps for QuickBooks Online in the 2017 $100,000 Showdown, and among the Top Rated 25 Apps for QuickBooks Online by Maverick Merchant in 2021.
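For readers curious what "connecting into RECUR360 as a bridge to QuickBooks" might look like from a SaaS platform's side, here is a purely hypothetical sketch. RECUR360's actual endpoints, authentication scheme and payload fields are not documented in this article, so every name below is an assumption for illustration only.

# Purely hypothetical sketch of posting a recurring invoice to a billing API.
# The base URL, endpoint path and payload fields are invented; consult the real
# RECUR360 API documentation for the actual interface.
import requests

API_BASE = "https://api.example-billing.test"  # placeholder, not a real RECUR360 URL

def create_recurring_invoice(api_key: str, customer_id: str, amount: float, interval: str) -> dict:
    payload = {
        "customer_id": customer_id,
        "amount": amount,
        "interval": interval,          # e.g. "monthly"
        "sync_to_quickbooks": True,    # illustrative flag for the QuickBooks bridge idea
    }
    response = requests.post(
        f"{API_BASE}/v1/recurring-invoices",
        json=payload,
        headers={"Authorization": f"Bearer {api_key}"},
        timeout=30,
    )
    response.raise_for_status()
    return response.json()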

CONTACT:

Andrew B Abrams - CEO - accounting@recur360.com - (602) 388-8933

More about Inc. and the Inc. 5000

Methodology

Companies on the 2022 Inc. 5000 are ranked according to percentage revenue growth from 2018 to 2021. To qualify, companies must have been founded and generating revenue by March 31, 2018. They must be U.S.-based, privately held, for-profit, and independent (not subsidiaries or divisions of other companies) as of December 31, 2021. (Since then, some on the list may have gone public or been acquired.) The minimum revenue required for 2018 is $100,000; the minimum for 2021 is $2 million. As always, Inc. reserves the right to decline applicants for subjective reasons. Growth rates used to determine company rankings were calculated to four decimal places. The top 500 companies on the Inc. 5000 are featured in Inc. magazine's September issue, available on August 23. The entire Inc. 5000 can be found at http://www.inc.com/inc5000.
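The ranking arithmetic itself is simple: three-year growth is (revenue in 2021 divided by revenue in 2018, minus 1) times 100. The Python snippet below applies that formula to invented revenue figures chosen only so that they reproduce the 346.95 percent growth reported for RECUR360; the company's actual revenues are not disclosed in this article.

# Inc. 5000 ranks by percentage revenue growth from 2018 to 2021, to four decimal places.
# The revenue figures below are invented placeholders, not RECUR360's real numbers.
def three_year_growth(revenue_2018: float, revenue_2021: float) -> float:
    return round((revenue_2021 / revenue_2018 - 1) * 100, 4)

print(three_year_growth(1_000_000, 4_469_500))  # 346.95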

About Inc.

The world's most trusted business-media brand, Inc. offers entrepreneurs the knowledge, tools, connections, and community to build great companies. Its award-winning multiplatform content reaches more than 50 million people each month across a variety of channels including websites, newsletters, social media, podcasts, and print. Its prestigious Inc. 5000 list, produced every year since 1982, analyzes company data to recognize the fastest-growing privately held businesses in the United States. The global recognition that comes with inclusion in the 5000 gives the founders of the best businesses an opportunity to engage with an exclusive community of their peers, and the credibility that helps them drive sales and recruit talent. The associated Inc. 5000 Conference & Gala is part of a highly acclaimed portfolio of bespoke events produced by Inc. For more information, visit http://www.inc.com.

Related image - RECUR360: Recurring Invoices, Payments, Late Fees, and Collections for QuickBooks

This content was issued through the press release distribution service at Newswire.com.


Global Cryptocurrency Exchange bitcastle to Launch on August 17 with the Most Advanced Binary Options Platform and Mobile Apps – GlobeNewswire

Kingstown, Saint Vincent and the Grenadines, Aug. 17, 2022 (GLOBE NEWSWIRE) -- "Bear markets are for building" is a common expression heard around the cryptocurrency ecosystem during times like these, when a crypto winter has led to frosty market conditions and falling token prices.

One project that has been hard at work fine-tuning its development to make sure that it is ready for prime time is bitcastle, a no-fee cryptocurrency exchange that is preparing for its official release on August 17th.

In the midst of the crypto market turmoil of the past few months, developers for bitcastle have been arduously perfecting the exchange's code in beta mode and are now putting the final touches on this state-of-the-art trading platform.

Along with the full launch of the web-based bitcastle interface, the platform will also be releasing iOS and Android mobile apps that will ensure its users can access the markets any time, day or night, from anywhere with cell phone reception.

Following a series of high-profile hacks and protocol exploits, the developers behind bitcastle have gone above and beyond to ensure that they have created a safe and easy way for crypto fans to acquire tokens, no matter their level of experience.

For traders of all levels, the 0% trading fees offered by bitcastle on all major trading pairs are sure to help ease the burden of soaring inflation and allow holders to acquire even more of the tokens they desire.

The large-cap tokens that will be available at the launch of the exchange include Bitcoin (BTC), Ethereum (ETH), Bitcoin Cash (BCH), Litecoin (LTC), and XRP, along with an additional 20+ smaller cap coins that are popular around the world. As time progresses, the exchange intends to add to its list of supported tokens as the need arises.

For more experienced traders who are not opposed to taking on extra risk, bitcastle also offers its own unique binary trading option known as HIGH&LOW. This proprietary technology, designed by bitcastle, will offer the world's fastest options trading experience, with the ability to make price predictions as little as 5 seconds into the future.

The simplicity of bitcastle's HIGH&LOW offering means that even the most recent arrivals to the crypto trading scene will be able to partake in the action. All that is required is the ability to choose whether a given crypto will close higher or lower after a designated period of time that can stretch from seconds to hours.

For those looking for a longer time horizon, the High/Low mode allows them to work with a time frame that stretches from 15 minutes up to one day. For those with a shorter attention span, the Lightning mode allows them to operate in smaller increments that can clear in as little as five seconds.
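Mechanically, a high/low binary option of this kind settles on a single comparison at expiry. The Python sketch below illustrates that settlement logic with an assumed fixed payout ratio; bitcastle's actual payout rules, fees and tie-handling are not described in the article, so those details are placeholders.

# Illustrative settlement logic for a high/low binary option.
# The payout ratio is an assumed placeholder, not bitcastle's actual terms.
def settle_high_low(prediction: str, entry_price: float, close_price: float,
                    stake: float, payout_ratio: float = 1.9) -> float:
    # Returns the amount paid out: stake * payout_ratio on a correct call, 0 otherwise.
    # A tie (close equal to entry) is treated as "not higher" here for simplicity.
    went_higher = close_price > entry_price
    correct = (prediction == "HIGH" and went_higher) or (prediction == "LOW" and not went_higher)
    return stake * payout_ratio if correct else 0.0

# Example: predict BTC will be higher after a 5-second window.
print(settle_high_low("HIGH", entry_price=23950.0, close_price=23961.5, stake=10.0))  # 19.0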

Overall, bitcastle is designed to offer every crypto trader from novice to pro a top-notch trading experience that everyone can enjoy while paying as little in fees as possible. And in keeping with one of the most popular and long-running traditions in crypto, bitcastle also has plans to provide users with exclusive access to future airdrop campaigns and referral bonuses.

Those who are interested in getting started with the exchange can start their journey off on the right foot by signing up now and completing the identity verification process to earn $15 worth of Bitcoin. Don't miss this opportunity to get in early with the next up-and-coming crypto exchange and earn a little free crypto in the process.

For more information on this campaign, you can visit bitcastle's official website or Twitter page.

Mobile Apps: https://bitcastle.onelink.me/vSX0/b7fzb1n5

Media Contacts: support@bitcastle.io
