Category Archives: Machine Learning

Artificial Intelligence in Breast Ultrasound: The Emerging Future of Modern Medicine – Cureus

In women, breast cancer is the most common malignant neoplasm and the second leading cause of cancer-related death [1]. Breast cancer can be diagnosed using ultrasound and X-ray. Other significant techniques are mammography and magnetic resonance imaging (MRI), which successfully help make the appropriate diagnosis. Ultrasound is given first preference in imaging for the depiction and categorization of breast lesions because it is non-invasive, feasible, and cost-effective; it is also widely available and shows acceptable diagnostic performance. Those mentioned above are the basic techniques used as diagnostic tools. Besides these, some newer techniques are available, including color Doppler and contrast-enhanced ultrasound. Spectral Doppler, as well as elastography, also contributes to the diagnosis. These newer techniques help ultrasound practitioners obtain more precise information. However, the drawback is that ultrasound suffers from operator dependence [2]. Deep learning (DL) algorithms, which are a part of artificial intelligence (AI), have received considerable attention in the past few years due to their outstanding performance in imaging tasks. The technology built into AI enables better evaluation of imaging data [3]. AI in ultrasound focuses significantly on distinguishing between benign and malignant breast masses. Radiologists nowadays interpret, analyze, and detect breast images. With a heavy and long-term volume of work, radiologists are more likely to make errors in image interpretation due to exhaustion, which can result in misidentification or a failed diagnosis, which AI can help prevent. Humans make various errors during diagnosis. To reduce those errors, a technique known as computer-aided diagnosis (CAD) has been implemented, in which an algorithm performs image processing along with analysis [4]. Convolutional neural networks (CNNs), a subset of DL, are the most recent technology used in medical imaging [5,6]. AI has the potential to enhance breast imaging interpretation accuracy, speed, and quality. By standardizing and upgrading workflow, minimizing monotonous tasks, and identifying issues, AI has the potential to revolutionize breast imaging. This will most likely free up physician resources that could be used to improve communication with patients and integration with colleagues. Figure 1 shows the relation between the various subsets of AI in the article.

Traditional machine learning is the basis and area of focus of early AI. It deals with problems in a stepwise manner, involving a two-step procedure: object detection followed by object recognition. In the first step, object detection, a bounding box detection algorithm scans the image to locate the appropriate object area. The second step, object recognition, builds on the first: experts identify characteristic features and encode them into a data type during the identification process. The advantage of a machine is that it extracts these characteristic features, performs quantitative analysis, processes the information, and gives a final judgment. In this way, it assists radiologists in detecting and analyzing lesions [5]. Through this, both the efficiency and accuracy of diagnosis can be enhanced. Over the previous few decades, CAD has steadily grown in development and advancement. CAD combines machine learning methodologies with multidisciplinary understanding and techniques, which are used to analyze patient information; the results can assist clinicians in making an accurate diagnosis [7]. CAD can evaluate imaging data, provide the analyzed information directly to the clinician, and correlate the results with diseases using statistical modelling of previous cases in the population. It has many other applications, such as lesion detection, characterization, and cancer staging, including the enactment of a proper treatment plan and assessment of its response. Prediction of prognosis and recurrence are further applications. DL has transformed computer vision [8,9].

DL, an advanced form of machine learning, does not depend solely on features and regions of interest (ROIs) preset by humans, unlike traditional machine learning algorithms [9,10]; instead, it completes all the targeted processes independently. CNNs, a part of DL, are the evolving configuration in healthcare. This can be explained with an example. The model mainly consists of three types of layers: input layers, hidden layers, and output layers. The hidden layers are the most vital determinant in achieving recognition and encompass a significant number of convolutional layers along with a fully connected layer. The convolutional layers handle the various large-scale problems generated by the machine from the input, and they are connected to form a complex system that can readily output the results [11,12]. DL methods depend heavily on data and hardware; in spite of this, they have easily defeated other frameworks in computer vision competitions [13]. Furthermore, DL methods perform flawlessly not only in ultrasound but also in computed tomography (CT) [14,15]. Certain studies have also shown adequate performance of DL methods in MRI [16]. DL uses a deep neural network architecture to transform the input information into multiple layers of abstraction and thereby discover the data representations [17]. The deep neural network's multiple layers of weights are iteratively updated with a large dataset for effective functioning. This yields a complex mathematical model capable of extracting relevant features from input data with high selectivity. DL has made major advances in many tasks such as target identification and characterization, speech and text recognition, and face recognition, as well as in smart devices and robotics.
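To make the layer structure described above concrete, here is a minimal sketch of a small CNN classifier in Python with Keras; the image size, layer counts, and the two-class (benign vs. malignant) output are illustrative assumptions, not the architecture of any system discussed in the article.

```python
# A minimal sketch (not the article's model): input layer, hidden convolutional
# layers plus a fully connected layer, and an output layer for a two-class
# benign/malignant decision. Image size and layer widths are assumptions.
from tensorflow import keras
from tensorflow.keras import layers

model = keras.Sequential([
    keras.Input(shape=(128, 128, 1)),           # input layer: one-channel image (assumed size)
    layers.Conv2D(16, 3, activation="relu"),    # hidden convolutional layers extract local features
    layers.MaxPooling2D(),
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(64, activation="relu"),        # fully connected hidden layer
    layers.Dense(1, activation="sigmoid"),      # output layer: probability of malignancy
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.summary()
```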

An ultrasonic machine is used to upload the acquired images to the workstation, where they are reprocessed. The DL technique (S-detect), on the other hand, can directly pinpoint breast lesions on the ultrasound. It is also used in segmentation, feature analysis, and description, and the BI-RADS (Breast Imaging-Reporting and Data System) 2013 lexicon may also be used for the same. It can provide instantaneous results in the form of a frozen image on an ultrasound machine to detect any malignancy, with the ROI selected automatically or manually [18]. One study assessed the diagnostic performance of S-detect in confirming whether a breast lesion was benign or malignant. With the cutoff set at category 4a in BI-RADS, the accuracy, specificity, and PPV (positive predictive value) were higher for S-detect than for the radiologist (p = 0.05 for all). Ultrasonologists typically use macroscopic and microscopic features of breast images to recognize and segment potentially malignant lesions. Shape and edge, including orientation and the accurate location of calcification, can be detected, and certain features such as rear type, echo type, and hardness can also be identified. Following that, suspicious masses are classified using the BI-RADS scale to assess and estimate the level of suspicion of cancer in breast lesions. However, macroscopic and microscopic characteristics are critical in distinguishing whether the masses are malignant. As a result, ultrasound experts are in high demand for correctly obtaining these features.

Mammography is a non-invasive, high-resolution technique that is commonly used and shows good repeatability. Mammography detects masses that doctors fail to palpate in the breasts and can reliably distinguish whether lesions are benign or malignant. Mammograms are acquired with digital mammography (DM). DM systems can provide them in a processing (raw imaging data) as well as a presentation (a post-processed form of the raw data) image layout [19]. Breast calcifications appear on mammography as small white spots caused by tiny deposits of calcium salts in the breast tissue. Calcification is classified into two types: microcalcifications and macrocalcifications. Macrocalcifications are large and coarse; they are usually benign and depend on the age group. Microcalcifications, which range in size from 0.1 mm to 1 mm, can be found within or outside visible masses and may act as early warning signs of breast cancer [20]. Significant CAD systems are now progressing to detect calcifications in mammography.

DL is primarily utilized in MRI, as in DM, DBT (digital breast tomosynthesis), and USG (ultrasonography), to conduct or assist in the categorization and identification of breast lesions. The other modalities and MRI differ in their dimensions: MRI produces 3D scans, whereas 2D images are formed by other modalities such as DM, DBT, and USG. Furthermore, MRI observes the inflow along with the outflow of contrast agents (dynamic contrast-enhanced MRI), which extends its dimensions to 4D. Moreover, hurdles arise when applying DL models to 3D or 4D scans because the majority of models are designed to work on 2D pictures. To address these issues, various approaches have been proposed. The most frequent method is to convert 3D images to 2D images, accomplished by slicing, in which the 3D image is sliced into 2D sections, or by applying the maximum intensity projection (MIP) [21,22]. DL is also utilized to classify axillary lymph node metastases in addition to lesion categorization [23-25]. Instead of biopsy data, positron emission tomography (PET) is used as the gold standard. The reason is that, while a biopsy is conclusive as ground truth, it leaves artifacts such as needle marks along with biopsy clips, which may unintentionally bias the DL algorithm toward a malignant categorization [23,24].

PET and scintigraphy are nuclear medicine imaging techniques. They are considered less suitable than the other previously stated imaging modalities, namely DM, digital tomosynthesis, USG, and MRI, for evaluating early-stage cancerous lesions in the breast. The nuclear techniques, on the other hand, provide added utility for detecting and classifying axillary lymph nodes along with distant staging [26]. As a result, it is not surprising that DL is being used in this imaging field, albeit in a limited capacity. PET/CT assessment of whole-body metabolic tumor volume (MTV) could provide a measure of tumor burden. If a DL model could handle this operation, it would considerably minimize manual labor because, in practical application, all tumors must be identified to acquire the MTV [27,28]. Weber et al. investigated whether a CNN trained to detect and segment breast lesions on whole-body PET/CT scans of cancer patients could also detect and segment lesions in lymphoma and lung cancer patients. Moreover, DL is used alongside nuclear medicine techniques to improve tasks similar to those in other imaging approaches. Li et al. developed a 3D CNN model to help doctors detect axillary lymph node metastases on PET/CT scans [28]. With their network, clinicians' sensitivity grew by 7.8% on average, while their specificity remained unchanged (99.0%). However, both clinicians outscored the DL model on its own.

Worldwide, breast cancer shows a high incidence and fatality rate in women; hence, many countries have implemented screening centers for women of the appropriate age group for the detection of breast cancer. The ideology behind implementing screening centers is to distinguish between benign and malignant breast lesions. The primary classification system used for classifying lesions in breast ultrasound is BI-RADS. AI systems have been developed with features for classifying benign and malignant breast lesions to assist clinicians in making consistent and accurate decisions. Ciritsis et al. categorized breast ultrasound images into BI-RADS 2-3 and BI-RADS 4-5 using a deep convolutional neural network (dCNN) with an internal data set and an external data set. The dCNN had a classification accuracy of 93.1% (external 95.3%), whereas radiologists had a classification accuracy of 91.65% (external 94.12%). This indicates that deep convolutional neural networks (dCNNs) can be utilized to simulate human decision-making. Becker et al. analyzed 637 breast ultrasound images using DL software (84 malignant and 553 benign lesions). The software was trained on a randomly chosen subset of the images (n=445, 70%), with the remaining samples (n=192) used to validate the resulting model during the training process. The findings demonstrated that the neural network, which had only been trained on a few hundred examples, had the same accuracy as a radiologist's reading. The neural network outperformed a trained medical student with the same training data set [29-31]. This finding implies that AI-assisted classification and diagnosis of breast illnesses can significantly cut diagnostic time and improve diagnostic accuracy among novice doctors. Table 1 shows BI-RADS scoring.

AI is still struggling to advance to a higher level. Although it is progressing tremendously in the healthcare fraternity, it still has a long journey ahead before it properly blends into clinicians' work and is widely implemented around the world. Many limitations have been reported for CAD systems for breast cancer screening, including a global shortage of public datasets, a high reliance on ROI annotation, demanding image quality standards, regional discrepancies, and struggles with binary classification. Furthermore, AI is designed for single-task training and cannot focus on multiple tasks at once, which is one of the significant obstacles to the advancement of DL in breast imaging. CAD systems are progressively evolving in ultrasound elastography [32], and similar progress is being made in technology related to contrast-enhanced mammography as well as MRI [33,34]. AI in breast imaging can be used not only to detect but also to classify breast diseases and anticipate lymph node tumor progression [35]. Moreover, it can also predict disease recurrence. As AI technology advances, there will be higher accuracy, greater efficiency, and more precise treatment planning for breast ailments, enabling radiologists to achieve early detection and accurate diagnosis. The lack of a consistent strategy for segmentation (2D vs. 3D), feature extraction, and the selection and categorization of significant radiomic data is a common limitation shared by all imaging modalities. Future studies with greater datasets will allow for subgroup analysis by patient group and tumor type [36].

Without a crystal ball, it is impossible to predict whether further advances in AI will one day start replacing radiologists or other diagnostic functions reportedly performed by humans, but AI will undeniably play a major role in radiology, one that is currently unfolding rapidly. Compared to traditional clinical models, AI has the added benefit of being able to pinpoint distinctive features, textures, and details that radiologists may be unable to appreciate, as well as quantitatively define explicit image details, making its evaluation more objective. As a result, greater emphasis needs to be placed on higher-quality research studies that have the potential to influence treatment, patient outcomes, and social impact.

More:
Artificial Intelligence in Breast Ultrasound: The Emerging Future of Modern Medicine - Cureus

The Adoption of Artificial Intelligence And Machine Learning In The Music Streaming Market Is Gaining Popularity As Per The Business Research…

LONDON, Sept. 07, 2022 (GLOBE NEWSWIRE) -- According to The Business Research Company's research report on the music streaming market, artificial intelligence and machine learning in music streaming devices are the key trends in the music streaming market. Technologies like artificial intelligence and machine learning enhance the music streaming experience by increasing storage and improving search recommendations, improving the overall experience.

For instance, in January 2022, Gaana, an India-based music streaming app, introduced a new product feature using artificial intelligence to enhance the music listening experience for its listeners. The app will modify music preferences using artificial intelligence to suit a person's particular occasion or daily mood.

Request for a sample of the global music streaming market report

The global online music streaming market size is expected to grow from $24.09 billion in 2021 to $27.24 billion in 2022 at a compound annual growth rate (CAGR) of 13.08%. The global music streaming market size is expected to grow to $45.31 billion in 2026 at a compound annual growth rate (CAGR) of 13.57%.

The increasing adoption of smart devices is expected to propel the growth of the music streaming market. Smart devices such as smartphones and smart speakers have changed the way of listening to music. They include smart features like the ability to set alarms, play music on voice command, control smart devices in the home, and stream live music, as they are powered by a virtual assistant. For instance, according to statistics from Amazon Alexa 2020, nearly 53.6 million Amazon Echo speakers (smart speakers) were sold in 2020, which increased to 65 million in 2021. Therefore, the increasing adoption of smart devices will drive the music streaming market growth.

Major players in the music streaming market are Amazon, Apple, Spotify, Gaana, SoundCloud, YouTube Music, Tidal, Deezer, Pandora, Sirius XM Holdings, iHeartRadio, Aspiro, Tencent Music Entertainment, Google, Idagio, LiveXLive, QTRAX, Saavn, Samsung, Sony Corporation, TuneIn, JOOX, NetEase, Kakao and Times Internet.

The global music streaming market is segmented by service into on-demand streaming, live streaming; by content into audio, video; by platform into application-based, web-based; by revenue channels into non-subscription, subscription; by end-use into individual, commercial.

North America was the largest region in the music streaming market in 2021. The regions covered in the global music streaming industry analysis are Asia-Pacific, Western Europe, Eastern Europe, North America, South America, the Middle East, and Africa.

Music Streaming Global Market Report 2022 Market Size, Trends, And Global Forecast 2022-2026 is one of a series of new reports from The Business Research Company that provide music streaming market overviews, analyze and forecast market size and growth for the whole market, music streaming market segments and geographies, music streaming market trends, music streaming market drivers, music streaming market restraints, and leading competitors' revenues, profiles and market shares in over 1,000 industry reports, covering over 2,500 market segments and 60 geographies.

The report also gives an in-depth analysis of the impact of COVID-19 on the market. The reports draw on 150,000 datasets, extensive secondary research, and exclusive insights from interviews with industry leaders. A highly experienced and expert team of analysts and modelers provides market analysis and forecasts. The reports identify top countries and segments for opportunities and strategies based on market trends and leading competitors' approaches.

Not the market you are looking for? Check out some similar market intelligence reports:

Music Recording Global Market Report 2022 By Type (Record Production, Music Publishers, Record Distribution, Sound Recording Studios), By Application (Mechanical, Performance, Synchronization, Digital), By End-User (Individual, Commercial), By Genre (Rock, Hip Hop, Pop, Jazz) Market Size, Trends, And Global Forecast 2022-2026

Content Streaming Global Market Report 2022 By Platform (Smartphones, Laptops & Desktops, Smart TVs, Gaming Consoles), By Type (On-Demand Video Streaming, Live Video Streaming ), By Deployment (Cloud, On-Premise), By End User (Consumer, Enterprise) Market Size, Trends, And Global Forecast 2022-2026

Smart Home Devices Global Market Report 2022 By Technology (Wi-Fi Technology, Bluetooth Technology), By Application (Energy Management, Climate Control System, Healthcare System, Home Entertainment System, Lighting Control System, Security & Access Control System), By Sales Channel (Online, Offline) Market Size, Trends, And Global Forecast 2022-2026

Interested to know more about The Business Research Company?

The Business Research Company is a market intelligence firm that excels in company, market, and consumer research. Located globally, it has specialist consultants in a wide range of industries including manufacturing, healthcare, financial services, chemicals, and technology.

The World's Most Comprehensive Database

The Business Research Company's flagship product, Global Market Model, is a market intelligence platform covering various macroeconomic indicators and metrics across 60 geographies and 27 industries. The Global Market Model covers multi-layered datasets which help its users assess supply-demand gaps.

Blog: http://blog.tbrc.info/

Link:
The Adoption of Artificial Intelligence And Machine Learning In The Music Streaming Market Is Gaining Popularity As Per The Business Research...

Explainable artificial intelligence through graph theory by generalized social network analysis-based classifier | Scientific Reports – Nature.com

In this subsection, we present details on how we process the dataset, turn it into a network graph, and finally how we produce and process the features that belong to the graph. The topics to be covered are:

splitting the data,

preprocessing,

feature importance and selection,

computation of similarity between samples, and

generating the raw graph.

After preprocessing the data, the next step is to split the dataset into training and test samples for validation purposes. We selected cross-validation (CV) as the validation method since it is the de facto standard in ML research. For CV, the full dataset is split into k folds; the classifier model is trained using data from (k-1) folds and then tested on the remaining kth fold. Eventually, after k iterations, the average performance scores (like the F1 measure or ROC) over all folds are used to benchmark the classifier model.

A crucial step of CV is selecting the right proportion between the training and test subsamples, i.e., the number of folds. Determining the most appropriate number of folds k for a given dataset is still an open research question [17], although the de facto standard choices for k cluster around k=2, k=5, or k=10. To address the selection of the right fold size, we have identified two priorities:

Priority 1 (Class Balance): Every split of the dataset needs to be class-balanced. Since the number of class types has a restrictive effect on selecting enough similar samples, determining the effective number of folds depends heavily on this parameter. As a result, whenever we deal with a problem that has low-represented class(es), we selected k=2.

Priority 2 (High Representation): In our model, briefly, we build a network from the training subsamples. Efficient network analysis depends on the size (i.e., the number of nodes) of the network. Thus, maximizing the training subsample with enough representatives from each class (diversity) is our priority, as far as possible, when splitting the dataset. This way we can have more nodes. In brief, whenever we do not conflict with Priority 1, we selected k=5.

By balancing these two priorities, we select an efficient CV fold size by evaluating the characteristics of each dataset in terms of its sample size and the number of different classes. The selected fold value for each dataset is specified in the Experiments and results section. To fulfill the class-balancing priority, we employed stratified sampling: each CV fold contains approximately the same percentage of samples of each target class as the complete set.
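As a minimal sketch of the stratified k-fold procedure described above, the snippet below uses scikit-learn; the placeholder dataset, the fold count k=5, and the k-nearest-neighbour classifier are assumptions for illustration only.

```python
# Stratified k-fold CV: every fold keeps roughly the same class proportions
# as the full dataset, and the mean F1 over the folds benchmarks the model.
import numpy as np
from sklearn.datasets import load_iris
from sklearn.metrics import f1_score
from sklearn.model_selection import StratifiedKFold
from sklearn.neighbors import KNeighborsClassifier

X, y = load_iris(return_X_y=True)       # placeholder dataset (assumption)
k = 5                                   # fold count picked per the priorities above (assumption)
skf = StratifiedKFold(n_splits=k, shuffle=True, random_state=0)

scores = []
for train_idx, test_idx in skf.split(X, y):
    clf = KNeighborsClassifier().fit(X[train_idx], y[train_idx])   # stand-in classifier (assumption)
    scores.append(f1_score(y[test_idx], clf.predict(X[test_idx]), average="macro"))

print(f"mean F1 over {k} folds: {np.mean(scores):.3f}")
```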

Preprocessing starts with the handling of missing data. For this part, we preferred to omit all samples which have one or more missing feature(s). By doing this, we have focused merely on developing the model, skipping trivial concerns.

As stated earlier, GSNAc can work on datasets that have both numerical and categorical values. To ensure proper processing of those data types, as a first step, we separate numerical and categorical features [18]. In order to process them mathematically, categorical (string) features are transformed into a unique integer for each unique category by a technique called labelization. It is worth noting that, against the general approach, we do not use the one-hot-encoding technique for transforming categorical features, which is the method of creating dummy binary-valued features. Labelization does not generate extra features, whereas one-hot-encoding extends the number of features.
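The following is a minimal sketch of labelization versus one-hot encoding, using pandas and scikit-learn on a toy column; the "habitat" data is an invented example echoing the canary/shark illustration later in the text.

```python
# Labelization: map each unique category to a unique integer, adding no new
# columns; one-hot encoding is shown only to contrast the feature-count growth.
import pandas as pd
from sklearn.preprocessing import OrdinalEncoder

df = pd.DataFrame({"habitat": ["sky", "sea", "sky", "land"]})   # invented toy data

encoder = OrdinalEncoder()
df["habitat_label"] = encoder.fit_transform(df[["habitat"]]).ravel().astype(int)
print(df)                                   # still one feature, now integer-coded

one_hot = pd.get_dummies(df["habitat"])     # contrast: one column per category
print(one_hot.shape)                        # feature count has grown
```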

For the numerical part, a very important stage of preprocessing, scaling [19] of the features, follows. Scaling is beneficial since the features may have very different ranges and this might affect scale-dependent processes like distance computation. There are two generally accepted scaling techniques: normalization and standardization. Normalization transforms features linearly into a closed range like [0, 1], which does not affect the variation of values among features. On the other hand, standardization transforms the feature space into a distribution of values that are centered around the mean with a unit standard deviation; this way, the mean of the attribute becomes zero and the resultant distribution has a unit standard deviation. Since GSNAc is heavily dependent on vectorial distances, we do not want to lose the structure of the variation within features, and so our choice for scaling the features is normalization. It is worth mentioning that all the preprocessing is fitted on the training part of the data and then applied to the test data, ensuring that no data leakage occurs.
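A minimal sketch of the normalization step follows, assuming scikit-learn's MinMaxScaler and made-up numbers; the key point mirrored from the text is that the scaler is fitted on the training split only and merely applied to the test split.

```python
# Normalization to [0, 1], fitted on the training split only to avoid leakage.
import numpy as np
from sklearn.preprocessing import MinMaxScaler

X_train = np.array([[1.0, 200.0], [3.0, 400.0], [2.0, 300.0]])   # invented values
X_test = np.array([[2.5, 350.0]])

scaler = MinMaxScaler()
X_train_scaled = scaler.fit_transform(X_train)   # fit + transform on training data
X_test_scaled = scaler.transform(X_test)         # transform only on test data

print(X_train_scaled)
print(X_test_scaled)
```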

Feature Importance (FI) broadly refers to the scoring of features based on their usefulness in prediction. It is obvious that in any problem some features might be more definitive in terms of their predictive capability for the class. Moreover, a combination of features may have a higher combined effect than the sum of their individual capacities in this sense. FI models, in general, address this type of concern. Indeed, almost all ML classification algorithms use an FI algorithm under the hood, since this is required for the proper weighting of features before feeding data into the model; it is part of any ML classifier, including GSNAc. As a scale-sensitive model based on vectorial similarity, GSNAc benefits greatly from more distinctive features.

For computing feature importance, we preferred to use an off-the-shelf algorithm, a supervised k-best feature selection [18] method. The k-best feature selection algorithm simply ranks all features by evaluating an ANOVA analysis of each feature against the class labels. The ANOVA F-value analyzes the variance between each feature and its respective class and computes the F-value, which is the ratio of the variation between sample means over the variation within the samples. In this way, it assigns F-values as feature importances. Our general strategy is to keep all features for all the datasets, with an exception for genomic datasets, which contain thousands of features and for which we practiced omitting features. Accordingly, instead of selecting some features, we prefer to keep all of them and use the importance learned at this step as the weight vector in the similarity calculation.
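Below is a minimal sketch of ANOVA-based feature scoring with scikit-learn's f_classif; the placeholder dataset and the normalization of the F-values into a weight vector are illustrative assumptions.

```python
# ANOVA F-values as per-feature importance scores, kept for all features and
# normalized into a weight vector for the later similarity computation.
import numpy as np
from sklearn.datasets import load_iris
from sklearn.feature_selection import f_classif

X, y = load_iris(return_X_y=True)        # placeholder dataset (assumption)
f_values, p_values = f_classif(X, y)     # one F-value per feature vs. the class labels

weights = f_values / f_values.sum()      # normalizing into weights is an assumed convention
print(weights)
```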

In this step, we generate an undirected network graph G; its nodes are the samples and its edges are constructed using distance metrics [20] between the feature values of the samples. Distances are converted to similarity scores to generate the adjacency matrix of the raw graph. As a crucial note, we state that since we aim to predict test samples by using G, in each batch we only process the training samples.

In our study, for constructing a graph from a dataset, we defined edge weights as the inverse of the Euclidean distance between the sample vectors. Simply put, the Euclidean distance (also known as the L2 norm) gives the unitless straight-line (shortest) distance between two vectors in space. In formal terms, for f-dimensional vectors u and v, the Euclidean distance is defined as:

$$d\left(u,v\right)=\sqrt{\sum_{i=1}^{f}\left({u}_{i}-{v}_{i}\right)^{2}}$$

A slightly modified use of the Euclidean distance introduces weights for the dimensions. Recall from the discussion of feature importance in the former sections that some features may carry more information than others. We addressed this factor by computing a weighted form of the L2-norm-based distance, which is presented as:

$${dist\_L2}_{w}\left(u,v\right)=\sqrt{\sum_{i=1}^{f}{w}_{i}\left({u}_{i}-{v}_{i}\right)^{2}}$$

where w is the feature-importance weight vector and i iterates over the numerical dimensions.

The use of the Euclidean distance is not proper for categorical variables; i.e., it is ambiguous and not easy to determine how distant a canary's habitat 'sky' is from a shark's habitat 'sea'. Accordingly, whenever the data contains categorical features, we change the distance metric for them to the L0 norm. The L0 norm is 0 if the categories are the same and 1 whenever the categories are different; i.e., between 'sky' and 'sea' the L0 norm is 1, which is the maximum value. Following the discussion of weights for features, the L0 norm is also computed in a weighted form as \({dist\_L0}_{w}\left(u,v\right)=\sum_{j}{w}_{j}\left[{u}_{j}\ne {v}_{j}\right]\), where j iterates over the categorical dimensions.

After computing the weighted pairwise distances between all the training samples, we combine the numerical and categorical parts as \({{dist}_{w}\left(u,v\right)}^{2}={{dist\_L2}_{w}\left(u,v\right)}^{2}+{{dist\_L0}_{w}\left(u,v\right)}^{2}\). With pairwise distances for each pair of samples, we get an n x n square, symmetric distance matrix D, where n is the number of training samples. In matrix D, each element shows the distance between the corresponding vectors.

$$D=\left[\begin{array}{ccc}0 & \cdots & d(1,n)\\ \vdots & \ddots & \vdots \\ d(n,1) & \cdots & 0\end{array}\right]$$
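A minimal sketch of the combined weighted distance computation is shown below, assuming already-scaled numerical features, labelized categorical features, and a feature-importance weight vector; all arrays are invented for illustration.

```python
# Weighted pairwise distance: weighted L2 over numerical features and a
# weighted L0 mismatch count over categorical features, combined via
# dist_w^2 = dist_L2_w^2 + dist_L0_w^2, giving an n x n symmetric matrix D.
import numpy as np

X_num = np.array([[0.1, 0.9], [0.2, 0.8], [0.9, 0.1]])   # scaled numerical features (invented)
X_cat = np.array([[0], [0], [1]])                        # labelized categorical feature (invented)
w_num = np.array([0.5, 0.3])                             # importance weights, numerical part (invented)
w_cat = np.array([0.2])                                  # importance weight, categorical part (invented)

n = X_num.shape[0]
D = np.zeros((n, n))
for u in range(n):
    for v in range(n):
        d_l2_sq = np.sum(w_num * (X_num[u] - X_num[v]) ** 2)    # squared weighted L2 part
        d_l0 = np.sum(w_cat * (X_cat[u] != X_cat[v]))           # weighted L0 part
        D[u, v] = np.sqrt(d_l2_sq + d_l0 ** 2)                  # combined distance

print(D)
```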

We aim to get a weighted network, where edge weights represent the closeness of the connected nodes. To do so, we first need to convert distance scores to similarity scores. We simply convert distances to similarities by subtracting each element from the maximum distance in the distance series.

$$similarity\_s\left(u,v\right)=\mathrm{max\_value\_of}(D)-{dist}_{w}\left(u,v\right)$$

Finally, after removing self-loops (i.e., setting the diagonal elements of A to zero), we use the adjacency matrix A to generate an undirected network graph G. In this step, we also delete the lower triangular part (which is symmetric to the upper triangular part) to avoid redundancy. Note that, in the transition from the adjacency matrix to a graph, the existence of a (positive) similarity score between two samples u and v creates an edge between them, and the similarity score serves as the weight of this particular edge in graph G.

$$A=\left[\begin{array}{ccc}- & \cdots & s(1,n)\\ \vdots & \ddots & \vdots \\ - & \cdots & -\end{array}\right]$$

The raw graph generated in this step is a complete graph: that is, all nodes are connected to all other nodes via an edge having some weight. Complete graphs are very complex and sometimes impossible to analyze. For instance, it is impossible to produce some SNA metrics, such as betweenness centrality, in this kind of graph.
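As a minimal sketch of these final steps, the snippet below converts an illustrative distance matrix into similarities, drops self-loops, and builds an undirected weighted graph with networkx; the matrix values are invented, and networkx is an assumed tooling choice rather than the authors' stated implementation.

```python
# Convert distances to similarities (max distance minus each distance), zero
# the diagonal to drop self-loops, and build an undirected weighted graph.
import networkx as nx
import numpy as np

D = np.array([[0.0, 0.3, 0.8],
              [0.3, 0.0, 0.6],
              [0.8, 0.6, 0.0]])      # invented distance matrix standing in for the previous step

A = D.max() - D                      # similarity scores
np.fill_diagonal(A, 0.0)             # remove self-loops

G = nx.from_numpy_array(A)           # edges carry the similarity as their 'weight' attribute
print(G.edges(data=True))
```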

View post:
Explainable artificial intelligence through graph theory by generalized social network analysis-based classifier | Scientific Reports - Nature.com

Five Experts Address Trends in Artificial Intelligence and Machine Learning – PR Newswire UK

Trends and risks in a Golden Age of Artificial Intelligence and Machine Learning

By David Churbuck, Founder & former Editor-in-Chief of Forbes.com

NEW YORK, Sept. 5, 2022 /PRNewswire/ -- Over the summer of 2022, Wow AI, a global provider of high-quality AI training data based in New York City, invited a panel of experts from different industries and areas of expertise to share their insights into the current state of artificial intelligence and machine learning (AI/ML) and discuss the factors that have accelerated the recent adoption of AI in applications.

All the experts agreed that the past decade has been a Golden Age for AI, made possible by the affordable availability of AI services delivered from the cloud and the inexpensive power of graphics processing units designed to handle the types of transforms and calculations at the foundation of AI models.

However, they each have a unique perspective on different trends and issues as AI pervades society, continuously improving the human-machine interface and becoming more embedded in every aspect of our lives.

The Experts

The five experts will share more insights along with more than 20 other thought leaders in AI/ML recruited from Fortune 500 companies and organizations around the world such as Walt Disney, Deloitte, Microsoft, Oxford Brookes University, The US Department of Commerce and many others, during a two-day online discussion of contemporary AI and ML trends on September 29-30 hosted by Wow AI.

Welcome and thanks for joining. There are fears about an AI going out of control, such as Skynet or HAL 9000, or devices like Amazon Alexa, Google Assistant, or Apple Siri. 75 years later, there is legislation pending at the state and federal levels to regulate AI and review algorithms for signs of bias or the perpetuation of old models that could deny a person equal opportunity. What is the risk that the gains of the past ten years could be reversed or future developments hindered by fear, baseless conspiracy theories, or over-regulation?

Andreas: I think if we look at people and humanity as a whole, there has always been a fear of not being the pinnacle of evolution. You need to make sure that the people who are affected by the change are part of the process, that they are aware of why and how you want to introduce a piece of technology like AI, what the limitations are, and where it can help them become better and more effective.

Noelle: There are more threatening devices than Alexa. The average smartphone has 50 applications trying to get permission to access our camera, microphone, and contacts. I've consistently been opposed to AI being applied to anything demographically oriented. [...] Biases end up perpetuating bad behavior. Maybe the models need to be infused with some inclusivity.

In the 1980s AI seemed to have potential in decision support systems but then it seemed to stall. Then, almost overnight it was in our cars, our phones, and our living rooms, to the point where we're looking at autonomous vehicles, real-time meeting transcription, and in the case of Aravind's company, Uniphore, analyzing customer interactions for tone and emotion. What happened that helped AI get over the hype that surrounded it in the past while delivering significant results after so many years of being ignored?

David Von Dollen: I would say two factors brought AI out of its "winter". One was hardware computing power primarily in the form of GPUs which have had a tremendous impact. The other factor is ongoing refinements to the underlying algorithms.

Patrick Bangert: This renaissance of AI we are experiencing today is sometimes called the "Deep Learning Revolution." Yes, some of it comes down to processing speed and we have the graphics processing units we didn't have 30 years ago, but it's not just about speed. Speed is mainly interesting and beneficial in the sense that it allows us to train much bigger models in the same amount of time. The second benefit is scientific. A lot of headway is being realized in deep learning due to the mathematics of AI gaining novel algorithms and modeling methods that are better than what we had in the 1980s.

Aravind Ganapathiraju: The difference is accuracy. The first ASR system (automatic speech recognition) had a 40% error rate. On the same task today we are pushing a 5% error rate. I'm not saying it's a solved problem, but it is indicative of the evolution that has happened over the last two decades.

Let's talk about the role data has played in helping AI/ML deliver on its promises. Aside from strict laws governing the processing and storage of personal data and regulations to ensure data privacy, what should providers of AI-enabled products and services be thinking about when it comes to data?

Aravind: One of the latest products we have released at Uniphore is "Q for Sales", which analyzes conversations not just by examining the tonal information in a call such as this one, but also by other visual cues. The fusion of different data types, from audio to text, from video to facial expressions, provides a call center person with valuable insights and nudges to gain a better outcome from the call.

Andreas Welsch: A byproduct of the early 2000s' Big Data trends has been an influx of so many data points that it's not possible for one individual or even a team of five or ten data scientists to analyze at the speed, scale, and quality needed to make decisions in business today. With the application of AI on the task, we're able to detect these patterns in the data that allow you to automate certain parts of your business processes in a way that has never been possible before.

I also think, to Aravind's point, that there are just so many more data pools available and now we have the tools to analyze them on a much larger scale than ever before.

Patrick Bangert: At Samsung, we train all sorts of models. [...] The role data plays across the company is driving AI systems to forecast how many people will buy a particular Samsung device, at which stores, and how to get inventory to those stores upon launch. Our internal data is the fuel for those forecasting systems, data unique to our business and our success.

Noelle Silver: I think Web3 is really forcing people to rethink data and there's an ethical shift between the collectors of data and the sources of that data. Companies like Apple distinguish themselves by popping up a little dialogue that asks "Do you want to share your real address or would you like us to mask it for you?" That translates in my mind to more responsibility on the part of the companies to be responsible, ethical stewards of their users' data.

David Von Dollen: I focus a lot on what I call "narrow AI" an algorithm that's trained on a specific set of data to perform a specific task. That's what a lot of our applications do today. It's all pretty much pattern recognition but within narrowly defined constraints. I think those types of applications may turn out to be much more harmful than some sentient AI taking over in a Skynet situation.

To watch the full conversation with the experts who will be keynoting the Worldwide AI Webinar, please visit:

These five thought leaders and other experts from around the world will be taking your questions, and discussing the issues and opportunities in AI and ML applications, training models, and data sources, and other topics at Worldwide AI Webinar on September 29 and 30th.

Media Contact: David Churbuck- Founder & former editor-in-chief of Forbes, a prize-winning tech journalist - David@churbuck.com

Photo -https://mma.prnewswire.com/media/1889768/2__1.jpg

SOURCE Wow AI

Continued here:
Five Experts Address Trends in Artificial Intelligence and Machine Learning - PR Newswire UK

The ABCs of AI, algorithms and machine learning (re-air) – Marketplace

This episode originally aired on July 20, 2022.

Advanced computer programs influence, and can even dictate, meaningful parts of our lives. Think streaming services, credit scores, facial recognition software.

And as this technology becomes more sophisticated and more pervasive, it's important to understand some basic terminology.

On this Labor Day, we're revisiting an episode in which we explore the terms algorithm, machine learning and artificial intelligence. There's overlap, but they're not the same things.

We called up a few experts to help us get a firm grasp on these concepts, starting with a basic definition of algorithm. The following is an edited transcript of the episode.

Melanie Mitchell, Davis professor of complexity at the Santa Fe Institute, offered a simple explanation of a computer algorithm.

"An algorithm is a set of steps for solving a problem or accomplishing a goal," she said.

The next step up is machine learning, which uses algorithms.

"Rather than a person programming in the rules, the system itself has learned," Mitchell said.

For example, speech recognition software, which uses data to learn which sounds combine to become words and sentences. And this kind of machine learning is a key component of artificial intelligence.

"Artificial intelligence is basically capabilities of computers to mimic human cognitive functions," said Anjana Susarla, who teaches responsible AI at Michigan State University's Broad College of Business.

She said we should think of AI as an umbrella term.

"AI is much more broader, all-encompassing, compared to only machine learning or algorithms," Susarla said.

That's why you might hear AI as a loose description for a range of things that show some level of intelligence, from software that examines the photos on your phone to sort out the ones with cats, to advanced spelunking robots that explore caves.

Here's another way to think of the differences among these tools: cooking.

Bethany Edmunds, professor and director of computing programs at Northeastern University, compares it to cooking.

She says an algorithm is basically a recipe: step-by-step instructions on how to prepare something to solve the problem of being hungry.

If you took the machine learning approach, you would show a computer the ingredients you have and what you want for the end result. Let's say, a cake.

"So maybe it would take every combination of every type of food and put them all together to try and replicate the cake that was provided for it," she said.

AI would turn the whole problem of being hungry over to the computer program, determining or even buying ingredients, choosing a recipe or creating a new one. Just like a human would.

So why do these distinctions matter? Well, for one thing, these tools sometimes produce biased outcomes.

"It's really important to be able to articulate what those concerns are," Edmunds said, "so that you can really dissect where the problem is and how we go about solving it."

Because algorithms, machine learning and AI are pretty much baked into our lives at this point.

Columbia University's engineering school has a further explanation of artificial intelligence and machine learning, and it lists other tools besides machine learning that can be part of AI. Like deep learning, neural networks, computer vision and natural language processing.

Over at the Massachusetts Institute of Technology, they point out that machine learning and AI are often used interchangeably because, these days, most AI includes some amount of machine learning. A piece from MIT's Sloan School of Management also gets into the different subcategories of machine learning: supervised, unsupervised and reinforcement, which is like trial and error with kind of digital rewards. For example, teaching an autonomous vehicle to drive by letting the system know when it made the right decision, not hitting a pedestrian, for instance.

That piece also points to a 2020 survey from Deloitte, which found that 67% of companies were already using machine learning, and 97% were planning to in the future.

IBM has a helpful graphic to explain the relationship among AI, machine learning, neural networks and deep learning, presenting them as Russian nesting dolls with the broad category of AI as the biggest one.

And finally, with so many businesses using these tools, the Federal Trade Commission has a blog laying out some of the consumer risks associated with AI and the agency's expectations of how companies should deploy it.

Read more:
The ABCs of AI, algorithms and machine learning (re-air) - Marketplace

Canadian company uses machine learning to promote DEI in the hiring process – IT World Canada

Toronto-based software company Knockri has developed an AI-powered interview assessment tool to help companies reduce bias and bolster diversity, equity and inclusion (DEI) in the job hiring process.

Knockri's interview assessment tool uses Natural Language Processing (NLP) to evaluate only the transcript of an interview, overlooking non-verbal cues, including facial expressions, body language or audio tonality. In addition, race, gender, age, ethnicity, accent, appearance, or sexual preference, reportedly, do not impact the interviewee's score.

To achieve objective scoring, Faisal Ahmed, co-founder and chief technical officer (CTO) of Knockri, says that the company adopts a holistic and strategic approach to training its model, including constantly trying new and different data, training, and tests that cover a wide range of representation in terms of race, ethnicity, gender, and accent, as well as job roles and choices. After training the model, the company conducts quality checks and adverse-impact analyses to examine scoring patterns and ensure quality candidates do not fall through the cracks.

Though it works with clients with high-volume hiring such as IBM, Novartis, Deloitte, and the Canadian Department of National Defence, Ahmed says their model is not able to analyze for every job in the world. "Once we have new customers, new geographies, new job roles or even new experience levels that we're working with, we will wait to get an update on that, benchmark, retrain, and then push scores. We're very transparent about this with our customers."

To ensure that the data fed into the AI is not itself biased, Ahmed adds that the company avoids using data from past hiring practices, such as looking at resumes or successful hires from ten years ago, as they may have been recruiting using biased or discriminatory practices. Instead, Ahmed says, the AI model is driven by Industrial and Organizational (IO) psychology to focus purely on identifying the kind of behaviors or work activities needed for specific jobs. For example, if a customer service role requires empathy, the model will identify behaviors from the candidate's past experiences and words that reflect that specific trait, Ahmed says.

He recommends that customers use Knockri at the beginning of the interview process when there is a reasonably high volume of applications, and the same experience, scoring criteria, and opportunities can be deployed for all candidates.

Ahmed says their technology seeks to help businesses lay a foundation for a fair and equitable assessment of candidates, and is not meant to replace a human interviewer. Decisions made by Knockri are reviewed by a human being, and later stages of the interview process will inevitably involve human interviewers.

"We're not going to solve all your problems, but we're going to set you on the right path," concludes Ahmed.

Read the original here:
Canadian company uses machine learning to promote DEI in the hiring process - IT World Canada

How avatars and machine learning are helping this company to fast track digital transformation – ZDNet


Digital transformation is all about delivering change, so how do you do that in an industry that's traditionally associated with large-scale infrastructures and embedded operational processes?

Danny Gonzalez, chief digital and innovation officer (CDIO) at London North Eastern Railway (LNER), says the answer is to place technology at the heart of everything your business does.

"We firmly believe that digital is absolutely crucial," he says. "We must deliver the experiences that meet or exceed customers' expectations."

Delivering to that agenda is no easy task. Gonzalez says the rail journey is "absolutely full" of elements that can go wrong for a passenger, from buying a ticket, to getting to the train station, to experiencing delays on board, and on to struggling to get away from the station when they reach their destination.

SEE: Digital transformation: Trends and insights for success

LNER aims to fix pain points across customer journeys, but it must make those changes in a sector where legacy systems and processes still proliferate. Gonzalez says some of the technology being used is often more than 30 years old.

"There's still an incredible amount of paper and spreadsheets being used across vast parts of the rail industry," he says.

"Our work is about looking at how things like machine learning, automation and integrated systems can really transform what we do and what customers receive."

Gonzalez says that work involves a focus on the ways technology can be used to improve how the business operates and delivers services to its customers.

This manifests as an in-depth blueprint for digital transformation, which Gonzalez refers to as LNER's North Star: "That gives everyone a focus on the important things to do."

As CDIO, he's created a 38-strong digital directorate of skilled specialists who step out of traditional railway processes and governance and into innovation and the generation of creative solutions to intractable challenges.

"It's quite unusual for a railway company to give more permission for people to try things and fail," he says.

Since 2020, the digital directorate, in combination with its ecosystem of enterprise and startup partners, has launched more than 60 tools and trialled 15 proofs of concept.

One of these concepts is an in-station avatar that has been developed alongside German national railway company Deutsche Bahn AG.

LNER ran a trial in Newcastle that allowed customers to interact in free-flowing conversations with an avatar at a dedicated booth at the station. The avatar plugged into LNER's booking engine, so customers could receive up-to-date information on service availability. Following the successful trial, LNER is now looking to procure a final solution for wider rollout.

The company is also working on what Gonzalez refers to as a "door-to-door" mobility-as-a-service application, which will keep customers up to date on the travel situation and provide hooks into other providers, such as taxi firms or car- and bike-hire specialists.

"It's about making sure the whole journey is seamlessly integrated," he says. "As a customer, you feel in control and you know we're making sure that if anything is going wrong through the process that we're putting it right."

When it comes to behind-the-scenes operational activities, LNER is investing heavily in machine-learning technology. Gonzalez's team has run a couple of impactful concepts that are now moving into production.

SEE:What is digital transformation? Everything you need to know about how technology is reshaping business

One of these is a technology called Quantum, which processes huge amounts of historical data and helps LNER's employees to reroute train services in the event of a disruption and to minimise the impact on customers.

"Quantum uses machine learning to learn the lessons of the past. It looks at the decisions that have been made historically and the impact they have made on the train service," he says.

Gonzalez: "We firmly believe that digital is absolutely crucial."

"It computes hundreds of thousands of potential eventualities of what might happen when certain decisions are made. It's completely transforming the way that our service delivery teams manage trains when there's disruption to services."

To identify and exploit new technologies, Gonzalez's team embraces consultant McKinsey's three-horizon model, delivering transformation across three key areas and allowing LNER to assess potential opportunities for growth without neglecting performance in the present.

Horizon one focuses on "big, meaty products" that are essential to everyday operations, such as booking and reservations systems, while horizon two encompasses emerging opportunities that are currently being scoped out by the business.

Gonzalez says a lot of his team's activity is now focused on horizon three, which McKinsey suggests includes creative ideas for long-term profitable growth.

He says that process involves giving teams quite a lot of freedom to get on and try stuff, run proofs of concept, and actually understand where the technology works.

Crucial to this work is an accelerator called FutureLabs, where LNER works with the startup community to see if they can help push digital transformation in new and exciting directions.

"We go out with key problem statements across the business and ask the innovators to come and help us solve our challenges and that's led to some of the most impactful things that we've done as a business," says Gonzalez.

FutureLabs has already produced pioneering results. Both the Quantum machine-learning tool and the "door-to-door" mobility service have been developed alongside startup partners JNCTION and IOMOB respectively.

LNER continues to search for new inspiration and has just run the third cohort of its accelerator. Selected startups receive mentoring and funding opportunities to develop and scale up technology solutions.

Gonzalez says this targeted approach brings structure to LNER's interactions and investments in the startup community and that brings a competitive advantage.

"It's not like where I've seen in other places, where innovation initiatives tend to involve 'spray and pray'," he says. "The startups we work with are clear on the problems they're trying to solve, which leads to a much greater success rate."

SEE: Four ways to get noticed in the changing world of work

Gonzalez advises other professionals to be crystal clear on the problems they're trying to solve through digital transformation.

"Know what the priorities are and bring the business along with you. Its really important the business understands the opportunities digital can bring in terms of how you work as an organisation," he says.

"We're fortunate that we've got a board that understood that rail wasn't where it needed to be in terms of its digital proposition. But we've put a lot of work into creating an understanding of where issues existed and the solutions that we needed if we're going to compete in the future."

Go here to read the rest:
How avatars and machine learning are helping this company to fast track digital transformation - ZDNet

Machine learning hiring levels in the retail banking industry rose in August 2022 – Retail Banker International

The proportion of major bank, central bank, and non-bank competitor companies hiring for machine learning-related positions remained relatively steady in August 2022 compared with the equivalent month last year, with 35.1% of the companies included in our analysis recruiting for at least one such position.

This latest figure was higher than the 34.1% of companies who were hiring for machine learning related jobs a year ago and an increase compared to the figure of 34.1% in July 2022.

When it came to the rate of all job openings that were linked to machine learning, related job postings rose in August 2022 from July 2022, with 1.7% of newly posted job advertisements being linked to the topic.

This latest figure was the same as the 1.7% of newly advertised jobs that were linked to machine learning in the equivalent month a year ago.

Machine learning is one of the topics that GlobalData, from whom our data for this article is taken, has identified as a key disruptive force facing companies in the coming years. Companies that excel and invest in these areas now are thought to be better prepared for the future business landscape and better equipped to survive unforeseen challenges.

Our analysis of the data shows that major banks, central banks, and non-bank competitor companies are currently hiring for machine learning jobs at a rate higher than the average for all companies within GlobalData's job analytics database. The average among all companies stood at 1% in August 2022.

GlobalData's job analytics database tracks the daily hiring patterns of thousands of companies across the world, drawing in jobs as they're posted and tagging them with additional layers of data on everything from the seniority of each position to whether a job is linked to wider industry trends.
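As a rough illustration of how such a share might be computed, the sketch below tags a small, made-up list of postings with a keyword match and works out the machine learning-related percentage. The field names, keyword list and sample data are assumptions for illustration only and are not GlobalData's actual methodology.

```python
# Minimal sketch (not GlobalData's pipeline): tag job postings by keyword
# and compute the share linked to machine learning.

ML_KEYWORDS = {"machine learning", "deep learning", "ml engineer", "data scientist"}

def is_ml_related(posting: dict) -> bool:
    """Return True if the posting's title or description mentions an ML keyword."""
    text = f"{posting.get('title', '')} {posting.get('description', '')}".lower()
    return any(keyword in text for keyword in ML_KEYWORDS)

def ml_share(postings: list[dict]) -> float:
    """Percentage of postings tagged as machine learning-related."""
    if not postings:
        return 0.0
    tagged = sum(is_ml_related(p) for p in postings)
    return 100.0 * tagged / len(postings)

if __name__ == "__main__":
    # Toy sample only; a real month of postings would give shares like 1.7%.
    august_postings = [
        {"title": "Machine Learning Engineer", "description": "Build credit-risk models"},
        {"title": "Branch Manager", "description": "Lead a retail banking branch"},
        {"title": "Data Scientist", "description": "Deep learning for fraud detection"},
    ]
    print(f"ML-related share: {ml_share(august_postings):.1f}%")
```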

You can keep track of the latest data from this database as it emerges by visiting our live dashboard here.

Continue reading here:
Machine learning hiring levels in the retail banking industry rose in August 2022 - Retail Banker International

Cyber-attacks in future will be about machine learning and automation around attacking and discovery of vulner – Times Now

Since the pandemic there has been a race to go digital, among organisations and individuals alike. Remote working, remote learning, contactless payments, and online shopping have become the mainstay of our lives. While technology really helped us get through the pandemic, it came with a learning curve: the lack of awareness about do's and don'ts made many easy targets for hackers and hacking organisations. The rush to go digital exposed a lot of companies to cyber-attacks through ransomware, data breaches, and leaks. The Pegasus and Predator scandals also showed us that hacking is now a business, and politicians are using the services of these companies for their own benefit. As technology moves ahead, the risk of cyber-attacks increases as well. To explain a bit more about the current cybersecurity scenario, Siddharth Shankar from TimesNow speaks to Asaf Hecht of CyberArk. Asaf manages one of the research groups in CyberArk Labs. He focuses on researching and discovering the latest attack techniques and applying lessons learned to improve cyber defenses. Prior to CyberArk, Asaf served for eight years in the Israeli Army, as a skilled helicopter pilot and as team leader for the advanced cyber-hunting team, an elite force that protects military top-secret networks and reveals APTs.

Excerpts

Asaf - You are absolutely correct about the situation during COVID. We saw it across the entire world: people and organisations went digital and worked from home, like me today, everything moved to the internet, and there was also a paradigm shift. The legacy of the traditional perimeter is finished. When you have a company in a physical building, you can secure that building by controlling what goes in and out and where the entrances and exits are.

Nowadays, everything is everywhere, and this is the need of the hour. The trends that you mentioned are correct. Contactless payment is also gaining popularity in countries like Israel, from bigger organisations to private ones to individuals, from the youngest, who are eager to use new technology, to older people who have no other option because the bank or store in their neighbourhood closed and moved online. However, there are a few things that we need to remember regarding the outlook toward risk: it's different on the organisation side and on the individual side of things.

For individuals, I think awareness is important for everyone, at every age. Having said that, I think there are some principles that are very important.

For example, check your bank account and your credit card statement at least once per month, because there is a high chance of these kinds of things happening without your knowledge. If you get scammed or phished and someone steals your credit card and pays for whatever he wants, the most important thing for you is to detect it; then you can alert the credit card company and get a full refund from them. The first principle, again, is simple awareness. The second principle is to check what goes out from your accounts and submit it for a refund if something is wrong.

The third principle is that the general public is not the target; this is a big difference between attacks on organisations and attacks on individuals. While I believe threat actors can attack everyone, everything, and every device, it is only a matter of how much time it consumes, the budget, and what they gain out of it. I can imagine my own phone: I don't think access to it would be worth $10,000 to someone. I am sure it's not. There is nothing that important there.

Siddharth - Phishing and getting money out of people or crypto wallets are quite common, but lately we have been seeing a lot of state-sponsored threat actors, and we have also been seeing private companies that work only on finding exploits and zero-day hacks and then selling them to governments, making a big chunk of money. So this is probably becoming a business as well, and a lot of these companies actually stem from Israel. What are your thoughts on this? Is this quickly becoming a business prospect for hackers?

Asaf - I think it's an interesting trend and I agree with it. In the recent decade, there are a few private companies that say what they do is develop technology solutions for gaining access and intelligence. In Israel, cyber intelligence, getting cyber intelligence or cyber access to a device or target, has been around for over two decades, and what happened is that some of the people who were in army service completed their service and still wanted to do what they had been doing. Also, we need to remember that most of the usage of these technologies is for humanitarian reasons, for anti-terror fighting and for making sure there is no major terror attack. The fact is that for 20 years, we didn't have a devastating terror attack again (in Israel). And I think a major thing that helped, apart from other countermeasures, is this kind of spying technology, which sometimes also comes from the private sector.

The challenge and the problem emanate from how you make sure that these kinds of technologies are handled properly and sold to the right customer. This is a problem, but these are the two sides of things. The world, I think, needed this kind of spying company to make it a safer place, but the problem on the other side is how to monitor who these companies sell to. Even if they sell to a government, the government may say, yes, we are going to use it on a valid target, but I think these private companies can't really audit the usage by that third-party government.

Siddharth - Back in 2011, Bill Gates said that the next big challenge for the world would not be a nuclear war, it would be a virus. We had COVID and the whole world just stopped. Do you think that in the future a full-blown war like the Russia-Ukraine conflict will not happen, and it will be more cyber warfare?

Asaf - I think there are more challenges, but cybersecurity issues are gaining more weight because more assets are moving online and our daily things are online. Sometimes, even from an army's perspective, it's easier to do something from behind a keyboard than to blast anyone and risk your people going to war, so I think it's a future threat. Having said that, I think there is also a balance, a kind of nuclear balance: both sides have the power, so no one uses it. It might go that way. Maybe one country could devastate another country in a cyber war, but I think they understand that attacks of the same power might come back at them. It might be a threat hanging above our heads, but one that will not really be used at 100% power. I do think, and we already see, that lower-power cyber-attacks are already happening, right? Even in the Ukraine-Russia conflict, on the starting day Russia wiped out hundreds of machines and ran denial-of-service attacks on websites in Ukraine, and so on.

Siddharth - Asaf, lately we have seen Predator, ERMAC, Follina, and a lot of other malware or ransomware coming out. Why is this happening? We have the internet, the knowledge is there, and the news spreads. Yet all these things are happening so much more today than, say, 5 to 10 years ago.

Asaf - Yes. With the popularity of the internet and technology, phone devices and everything being computerized, the awareness and even the knowledge needed to attack are very easy for anyone to gain.

As an example, a Lapsus$-style or phishing attack will still probably work. It is really a problem, but again we should sleep well; there are cybersecurity vendors out there doing their best. For example, at CyberArk we try to help organizations across the world, so it will be harder for attackers to achieve their goal, and if they do attack a company, the damage will be reduced a lot. They will not be at a total loss. On the other side of your question, we do see more attacking groups and ransomware campaigns, because they have money and there are more options for people to build an organization and a business. As an example, there is Conti, an attacking group that does mainly ransomware. It's built like a regular company: there is human resources, there is R&D, and there is a kind of marketing to make the tool available to a paying audience. Yeah, this is kind of the new world.

Siddharth - You mentioned supply chain attacks. Now, if the supply chain is crippled for a big company like, say, Samsung or Apple, it is going to cause a lot of damage, and damage reduction will be the biggest concern once the attack has happened. What should they keep in mind before an attack happens, and after it happens, to minimize the damage to them and their consumers?

Asaf - Before the attack happens, we should make sure that our network is in the most secure state. There are many protocols, steps and standards. One of the main things is privileged access security and identity security. Nowadays, it's not only devices, laptops and phones, but more and more the identity that uses the computer, because there could be many identities on the same device, and one identity could be accessing multiple devices. We need to focus on securing identity: how it is authenticated and what it does. There are, again, many solutions that can help with this. I would focus on securing the identities and, of course, on making sure to check all the standards.

If we do the preparation right in stage one, the damage will be limited: one identity may be compromised and one network segment may be compromised, but the sensitive database is on a different network, there is segmentation, and so on. Even so, we should also prepare for the compromise itself, because it might happen at any time, and we should practice for it. I think most organisations will suffer one kind of compromise or another, but good preparation will limit the damage when it occurs.
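As a rough illustration of the blast-radius point Hecht makes, the sketch below models network segmentation as a simple allow-list of identities per segment, so a compromised identity in one segment still cannot reach the sensitive database segment. The segment and identity names are hypothetical, and this is not CyberArk's product or methodology.

```python
# Minimal sketch, not a production control: a policy table mapping network
# segments to the identities explicitly allowed to reach them.

SEGMENT_POLICY = {
    "office-lan": {"alice", "bob", "svc-printer"},
    "sensitive-db": {"svc-db-admin"},  # only a dedicated privileged identity
}

def can_access(identity: str, segment: str) -> bool:
    """Allow access only if the identity is explicitly granted for that segment."""
    return identity in SEGMENT_POLICY.get(segment, set())

# Even if "bob" is compromised, the segmentation policy still denies the
# sensitive database segment, limiting the damage of that one compromise.
assert can_access("bob", "office-lan") is True
assert can_access("bob", "sensitive-db") is False
```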

Siddharth - What would your forecast be in terms of the security trends we will be seeing in the future? Supply chain is one; what could be next?

Asaf - Interesting question. I think cloud will be major. Cloud is popular nowadays for its technology benefits, and now that cloud services are being used much more, attacks on these kinds of environments will become more popular, on specific services like databases in the cloud, SQL in the cloud, virtual machines in the cloud and things like this.

Another thing I might say is that attacks in the future will be about machine learning and automation, automation around attacking and around the discovery of vulnerabilities. Open source is also a popular vector, because technology is now so complex that we have several components in every product.

Siddharth - During WWDC, Apple announced something about a passwordless feature. Do you think this is an interesting concept that will increase security?

Asaf - Yes. There are several disadvantages to having a password. Of course, it's hard to remember, people tend to use the same password for two different services, and so on. The trend towards a passwordless future, I think, is good. Mainly it involves some other device or a multi-factor check with your phone. Passwordless is a good solution, but from our vulnerability research we saw that after authentication has been done successfully, there is still a token, a digital token or certificate, that is stored on the computer or on a device in the cloud. After the authentication phase, the token is not really authenticated any more. Nowadays there is a new trend of continuous authentication: you want to continuously authenticate the identity and what it does.
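To make the token-lifetime point concrete, here is a minimal sketch of a short-lived session token combined with a behaviour check, one simple way to approximate the continuous-authentication idea Hecht mentions. The class, field names and 300-second lifetime are illustrative assumptions, not a description of any vendor's implementation.

```python
# Minimal sketch: after login, a stored token keeps granting access unless it
# expires quickly or is re-checked against current behaviour.

import time
from dataclasses import dataclass

@dataclass
class SessionToken:
    identity: str
    issued_at: float
    ttl_seconds: float = 300.0  # short lifetime forces frequent re-authentication

    def is_valid(self, now: float | None = None) -> bool:
        now = time.time() if now is None else now
        return (now - self.issued_at) < self.ttl_seconds

def authorize(token: SessionToken, behaviour_ok: bool) -> bool:
    """'Continuous authentication': require both token freshness and normal behaviour."""
    return token.is_valid() and behaviour_ok

token = SessionToken(identity="alice", issued_at=time.time())
print(authorize(token, behaviour_ok=True))   # True while the token is fresh
print(authorize(token, behaviour_ok=False))  # False if current behaviour looks anomalous
```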

Continued here:
Cyber-attacks in future will be about machine learning and automation around attacking and discovery of vulner - Times Now

Machine learning at the edge: The AI chip company challenging Nvidia and Qualcomm – VentureBeat

Today's demand for real-time data analytics at the edge marks the dawn of a new era in machine learning (ML): edge intelligence. That need for time-sensitive data is, in turn, fueling a massive AI chip market, as companies look to provide ML models at the edge with lower latency and greater power efficiency.

Conventional edge ML platforms consume a lot of power, limiting the operational efficiency of smart devices, which live on the edge. Those devices are also hardware-centric, limiting their computational capability and making them incapable of handling varying AI workloads. They leverage power-inefficient GPU- or CPU-based architectures and are also not optimized for embedded edge applications that have latency requirements.

Even though industry behemoths like Nvidia and Qualcomm offer a wide range of solutions, they mostly use a combination of GPU- or data center-based architectures and scale them to the embedded edge as opposed to creating a purpose-built solution from scratch. Also, most of these solutions are set up for larger customers, making them extremely expensive for smaller companies.

In essence, the $1 trillion global embedded-edge market is reliant on legacy technology that limits the pace of innovation.

ML company Sima AI seeks to address these shortcomings with its machine learning system-on-chip (MLSoC) platform, which enables ML deployment and scaling at the edge. The California-based company, founded in 2018, announced today that it has begun shipping the MLSoC platform to customers, with an initial focus on helping solve computer vision challenges in smart vision, robotics, Industry 4.0, drones, autonomous vehicles, healthcare and the government sector.

The platform uses a software-hardware codesign approach that emphasizes software capabilities to create edge-ML solutions that consume minimal power and can handle varying ML workloads.

Built on 16nm technology, the MLSoC's processing system consists of computer vision processors for image pre- and post-processing, coupled with dedicated ML acceleration and high-performance application processors. Surrounding the real-time intelligent video processing are memory interfaces, communication interfaces, and system management, all connected via a network-on-chip (NoC). The MLSoC features low operating power and high ML processing capacity, making it ideal as a standalone edge-based system controller, or as an ML-offload accelerator for processors, ASICs and other devices.

The software-first approach includes carefully defined intermediate representations (including the TVM Relay IR), along with novel compiler-optimization techniques. This software architecture enables Sima AI to support a wide range of frameworks (e.g., TensorFlow, PyTorch, ONNX) and compile more than 120 networks.
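For readers unfamiliar with the TVM front-end approach mentioned above, the sketch below shows the general open-source Apache TVM pattern of importing an ONNX model into Relay IR and compiling it for a generic CPU target. It illustrates only the publicly documented TVM flow; the model file name and input name/shape are placeholders, and Sima AI's actual MLSoC back-end is not shown.

```python
# Minimal sketch of the TVM Relay front-end flow (open-source Apache TVM only).

import onnx
import tvm
from tvm import relay
from tvm.contrib import graph_executor

onnx_model = onnx.load("model.onnx")             # placeholder model file
shape_dict = {"input": (1, 3, 224, 224)}         # assumed input name and shape

# Convert the framework graph into Relay IR, the intermediate representation
# referenced in the article.
mod, params = relay.frontend.from_onnx(onnx_model, shape_dict)

# Compile for a generic CPU target; a vendor back-end would target its own
# accelerator at this step instead.
with tvm.transform.PassContext(opt_level=3):
    lib = relay.build(mod, target="llvm", params=params)

# Load the compiled module for execution on the chosen device.
dev = tvm.cpu(0)
module = graph_executor.GraphModule(lib["default"](dev))
```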

Many ML startups are focused on building only pure ML accelerators, not an SoC that has a computer-vision processor, application processors, CODECs, and external memory interfaces that enable the MLSoC to be used as a stand-alone solution that does not need to connect to a host processor. Other solutions usually lack network flexibility, performance per watt, and push-button efficiency, all of which are required to make ML effortless for the embedded edge.

Sima AI's MLSoC platform differs from existing solutions in that it addresses all these areas at the same time with its software-first approach.

The MLSoC platform is flexible enough to address any computer vision application, using any framework, model, network, and sensor with any resolution. "Our ML compiler leverages the open-source Tensor Virtual Machine (TVM) framework as the front-end, and thus supports the industry's widest range of ML models and ML frameworks for computer vision," Krishna Rangasayee, CEO and founder of Sima AI, told VentureBeat in an email interview.

From a performance point of view, Sima AI's MLSoC platform is claimed to deliver 10x better performance than alternatives in key figures of merit such as FPS/W and latency.
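For context on those figures of merit, the small sketch below shows how FPS/W and (single-stream) latency could be derived from a benchmark run; all numbers are made up for illustration and are not Sima AI's measurements.

```python
# Illustrative arithmetic only: deriving FPS, FPS/W and per-frame latency
# from hypothetical benchmark measurements.

frames_processed = 3_000          # frames run during the benchmark window
window_seconds = 60.0             # length of the benchmark window
average_power_watts = 5.0         # measured board power during the window

fps = frames_processed / window_seconds        # 50.0 frames per second
fps_per_watt = fps / average_power_watts       # 10.0 FPS/W
latency_ms = 1000.0 / fps                      # ~20 ms per frame, assuming serial single-stream processing

print(f"{fps:.1f} FPS, {fps_per_watt:.1f} FPS/W, {latency_ms:.1f} ms/frame")
```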

The company's hardware architecture optimizes data movement and maximizes hardware performance by precisely scheduling all computation and data movement ahead of time, including internal and external memory accesses, to minimize wait times.

Sima AI offers APIs to generate highly optimized MLSoC code blocks that are automatically scheduled on the heterogeneous compute subsystems. The company has created a suite of specialized and generalized optimization and scheduling algorithms for the back-end compiler that automatically convert the ML network into highly optimized assembly code that runs on the machine learning accelerator (MLA) block.

For Rangasayee, the next phase of Sima AI's growth is focused on revenue and on scaling its engineering and business teams globally. As things stand, Sima AI has raised $150 million in funding from top-tier VCs such as Fidelity and Dell Technologies Capital. With the goal of transforming the embedded-edge market, the company has also announced partnerships with key industry players like TSMC, Synopsys, Arm, Allegro, GUC and Arteris.

Read the original:
Machine learning at the edge: The AI chip company challenging Nvidia and Qualcomm - VentureBeat