
Artificial Intelligence in Breast Ultrasound: The Emerging Future of Modern Medicine – Cureus

Breast cancer is the most common malignant neoplasm in women and the second leading cause of cancer death [1]. It can be diagnosed using ultrasound and X-ray, and mammography and magnetic resonance imaging (MRI) also contribute significantly to an appropriate diagnosis. Ultrasound is the first-choice imaging modality for depicting and categorizing breast lesions because it is non-invasive, feasible, and cost-effective; it is also widely available and shows acceptable diagnostic performance. These are the basic diagnostic techniques. Some newer techniques are also available, including color Doppler, contrast-enhanced ultrasound, spectral Doppler, and elastography. These newer techniques help ultrasound practitioners obtain more precise information; the drawback, however, is that ultrasound remains operator dependent [2]. Deep learning (DL) algorithms, a subset of artificial intelligence (AI), have received considerable attention in the past few years because of their outstanding performance in imaging tasks. AI technology enables better evaluation of imaging data [3]. In ultrasound, AI focuses largely on distinguishing benign from malignant breast masses. Radiologists currently interpret, analyze, and detect breast images. Under a heavy, long-term workload, radiologists are more likely to make interpretation errors due to exhaustion, which can result in misidentification or missed diagnoses that AI can help prevent. To reduce human diagnostic errors, computer-aided diagnosis (CAD) has been implemented, in which an algorithm performs image processing and analysis [4]. Convolutional neural networks (CNNs), a subset of DL, are the most recent technology used in medical imaging [5,6]. AI has the potential to enhance the accuracy, speed, and quality of breast imaging interpretation. By standardizing and upgrading workflow, minimizing monotonous tasks, and identifying issues, AI has the potential to revolutionize breast imaging, which will most likely free up physician resources for better communication with patients and closer integration with colleagues. Figure 1 shows the relation between the various subsets of AI discussed in the article.

Traditional machine learning is the basis and focus of early AI. It deals with problems in a stepwise manner, using a two-step procedure: object detection followed by object recognition. In the first step, object detection, a bounding-box detection algorithm scans the image to locate the appropriate object area. The second step, object recognition, builds on the first: experts identify characteristic features and encode them into a data type during the identification process. The advantage of a machine is that it extracts these characteristic features, performs quantitative analysis, processes the information, and gives a final judgment. In this way, it assists radiologists in detecting and analyzing lesions [5], and both the efficiency and the accuracy of diagnosis can be improved. Over the previous few decades, CAD has continued to grow in development and advancement. CAD combines machine learning methodologies with multidisciplinary knowledge and techniques, which are used to analyze patient information; the results can assist clinicians in making an accurate diagnosis [7]. CAD can evaluate imaging data, provide the analyzed information directly to the clinician, and correlate the results with diseases using statistical modelling of previous cases in the population. It has many other applications, such as lesion detection, characterization, and cancer staging, including the enactment of a proper treatment plan and assessment of its response; prediction of prognosis and recurrence are further applications. DL has transformed computer vision [8,9].

DL, an advanced form of machine learning, does not depend solely on features and regions of interest (ROIs) preset by humans, unlike traditional machine learning algorithms [9,10]; instead, it completes all the targeted processes independently. CNNs, a part of DL, are the evolving architecture in healthcare. A typical model consists of three kinds of layers: an input layer, hidden layers, and an output layer. The hidden layers are the most vital determinant in achieving recognition; they encompass a significant number of convolutional layers along with a fully connected layer. The convolutional layers handle the massive problems generated by the machine from the input, and because they are connected into a complex system, the network can readily output results [11,12]. DL methods depend heavily on data and hardware; in spite of this, they have easily defeated other frameworks in computer vision competitions [13]. Furthermore, DL methods perform well not only in ultrasound but also in computed tomography (CT) [14,15], and certain studies have shown adequate performance of DL methods in MRI [16]. DL uses a deep neural network architecture to transform the input information into multiple layers of abstraction and thereby discover data representations [17]. The deep neural network's multiple layers of weights are iteratively updated with a large dataset for effective functioning, yielding a complex mathematical model capable of extracting relevant features from input data with high selectivity. DL has made major advances in many tasks such as target identification and characterization, speech and text recognition, and face recognition, as well as in smart devices and robotics.
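As a concrete illustration of the input layer / convolutional hidden layers / fully connected output layer structure described above, the following is a minimal sketch of such a network. It is not the model used in any of the cited studies; the single-channel 128x128 input size and layer widths are illustrative assumptions, and PyTorch is assumed to be available.

import torch
import torch.nn as nn

class TinyBreastCNN(nn.Module):
    """Toy CNN: convolutional hidden layers followed by a fully connected output layer."""
    def __init__(self):
        super().__init__()
        # Hidden layers: stacked convolutions extract image features.
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        # Fully connected layer maps the extracted features to two classes (benign/malignant).
        self.classifier = nn.Linear(32 * 32 * 32, 2)

    def forward(self, x):
        x = self.features(x)                  # (N, 32, 32, 32) after two 2x poolings of a 128x128 input
        return self.classifier(x.flatten(1))

logits = TinyBreastCNN()(torch.randn(4, 1, 128, 128))   # four dummy ultrasound patches
print(logits.shape)                                      # torch.Size([4, 2])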

An ultrasound machine uploads the acquired images to a workstation, where they are reprocessed. The DL technique (S-detect), on the other hand, can pinpoint breast lesions directly on the ultrasound machine and is also used for segmentation, feature analysis, and description, for which the BI-RADS (Breast Imaging-Reporting and Data System) 2013 lexicon may be used. It can provide instantaneous results on a frozen image on the ultrasound machine to detect any malignancy, with the ROI selected automatically or manually [18]. Researchers assessed the diagnostic performance of S-detect in confirming whether a breast lesion was benign or malignant. With the cutoff set at BI-RADS category 4a, the accuracy, specificity, and PPV (positive predictive value) of S-detect were higher than those of the radiologist (p = 0.05 for all). Ultrasonologists typically use macroscopic and microscopic features of breast images to recognize and segment potentially malignant lesions. Shape and edge, including orientation and the exact location of calcification, can be detected, and certain features such as posterior echo type and echo pattern, along with stiffness, can also be identified. Suspicious masses are then classified using the BI-RADS scale to assess and estimate the level of suspicion of cancer in breast lesions. Macroscopic and microscopic characteristics are critical in distinguishing whether masses are malignant; as a result, high demands are placed on ultrasound experts to obtain these features correctly.

Mammography is a commonly used, non-invasive technique with high resolution and good repeatability. It detects masses that doctors fail to palpate in the breast and can reliably distinguish benign from malignant lesions. Mammograms are retrieved from digital mammography (DM); DM systems can provide them in both for-processing (raw imaging data) and for-presentation (a post-processed form of the raw data) image formats [19]. Breast calcifications appear on mammography as small white spots caused by tiny deposits of calcium salts in the tissues of the breast. Calcification is classified into two types: microcalcifications and macrocalcifications. Macrocalcifications are large and coarse; they are usually benign and related to age. Microcalcifications, which range in size from 0.1 mm to 1 mm, can be found within or outside visible masses and may act as early warning signs of breast cancer [20]. Significant CAD systems are now being developed to detect calcifications on mammography.

DL is primarily utilized in MRI, as in DM, DBT (digital breast tomosynthesis), and USG (ultrasonography), to conduct or assist in the categorization and identification of breast lesions. MRI and the other modalities differ in their dimensionality: MRI produces 3D scans, whereas DM, DBT, and USG form 2D images. Furthermore, dynamic contrast-enhanced MRI observes the inflow and outflow of contrast agents, extending the data to 4D. This creates hurdles when applying DL models to 3D or 4D scans, because the majority of models are designed to work on 2D pictures. Various ways have been proposed to address these issues. The most frequent method is to convert 3D images to 2D, either by slicing the 3D image into 2D slices or by applying maximum intensity projection (MIP) [21,22]. In addition to lesion categorization, DL is utilized to classify axillary lymph node metastases [23-25]. Instead of biopsy data, positron emission tomography (PET) is used as the gold standard because, while a biopsy is conclusive as ground truth, it leaves artifacts such as needle marks and biopsy clips that may unintentionally bias the DL algorithm toward a malignant categorization [23,24].

PET and scintigraphy are nuclear medicine imaging techniques. They are considered less suitable than the previously stated imaging modalities, namely DM, digital breast tomosynthesis, USG, and MRI, for evaluating early-stage cancerous lesions in the breast. The nuclear techniques, on the other hand, provide added utility for detecting and classifying axillary lymph nodes and for distant staging [26]. As a result, it is not surprising that DL is being used in this imaging field, albeit in a limited capacity. PET/CT assessment of whole-body metabolic tumor volume (MTV) could provide a measure of tumor burden. If a DL model could handle this operation, it would considerably minimize manual labor because, in practice, all tumors must be identified to acquire MTV [27,28]. Weber et al. investigated whether a CNN trained to detect and segment breast lesions on whole-body PET/CT scans of cancer patients could also detect and segment lesions in lymphoma and lung cancer patients. Moreover, DL and nuclear medicine techniques are used in parallel to improve tasks similar to those in other imaging approaches. Li et al. developed a 3D CNN model to help doctors detect axillary lymph node metastases on PET/CT scans [28]. With their network, clinicians' sensitivity grew by 7.8% on average, while their specificity remained unchanged (99.0%). However, both clinicians outscored the DL model on its own.

Worldwide, women show a high incidence and fatality rate of breast cancer; hence, many countries have implemented screening centers for women of the appropriate age group to detect breast cancer. The ideology behind screening centers is to distinguish benign breast lesions from malignant ones. BI-RADS is the primary classification system used for lesions on breast ultrasound. AI systems have been developed with features for classifying benign and malignant breast lesions to assist clinicians in making consistent and accurate decisions. Ciritsis et al. categorized breast ultrasound images into BI-RADS 2-3 and BI-RADS 4-5 using a deep convolutional neural network (dCNN) with an internal data set and an external data set. The dCNN had a classification accuracy of 93.1% (external 95.3%), whereas radiologists had a classification accuracy of 91.65% (external 94.12%). This indicates that dCNNs can be utilized to simulate human decision-making. Becker et al. analyzed 637 breast ultrasound images using DL software (84 malignant and 553 benign lesions). The software was trained on a randomly chosen subset of the images (n = 445, 70%), with the remaining samples (n = 192) used to validate the resulting model during training. The findings demonstrated that the neural network, trained on only a few hundred examples, had the same accuracy as a radiologist's reading and outperformed a trained medical student given the same training data set [29-31]. This finding implies that AI-assisted classification and diagnosis of breast diseases can significantly cut diagnostic time and improve diagnostic accuracy among novice doctors. Table 1 shows BI-RADS scoring.

AI is still struggling to advance to a higher level. Although it is progressing tremendously in the healthcare fraternity, it still has a long way to go to blend properly into clinicians' work and be widely implemented around the world. Many limitations have been reported for CAD systems in breast cancer screening, including a global shortage of public datasets, a high reliance on ROI annotation, demanding image-quality standards, regional discrepancies, and difficulties in binary classification. Furthermore, AI is designed for single-task training and cannot focus on multiple tasks at once, which is one of the significant obstacles to the advancement of DL in breast imaging. CAD systems are progressively evolving in ultrasound elastography [32], as well as in contrast-enhanced mammography and MRI [33,34]. AI in breast imaging can be used not only to detect but also to classify breast diseases and to anticipate lymph node tumor progression [35]; moreover, it can also predict disease recurrence. As AI technology advances, there will be higher accuracy, greater efficiency, and more precise treatment planning for breast ailments, enabling radiologists to achieve early detection and accurate diagnosis. A common limitation shared by all imaging modalities is the lack of a consistent strategy for segmentation (2D vs. 3D), feature extraction, and the selection and categorization of significant radiomic data. Future studies with larger datasets will allow for subgroup analysis by patient group and tumor type [36].

Without a crystal ball, it is impossible to predict whether further advances in AI will one day replace radiologists or other diagnostic functions currently performed by humans, but AI will undeniably play a major role in radiology, one that is unfolding rapidly. Compared with traditional clinical models, AI has the added benefit of being able to pinpoint distinctive features, textures, and details that radiologists may be unable to appreciate, as well as to quantitatively define explicit image details, making evaluation more objective. Moreover, AI in breast imaging can be used not only to detect but also to classify breast diseases. As a result, greater emphasis needs to be placed on higher-quality research studies that have the potential to influence treatment, patient outcomes, and social impact.

More:
Artificial Intelligence in Breast Ultrasound: The Emerging Future of Modern Medicine - Cureus

Read More..

The Adoption of Artificial Intelligence And Machine Learning In The Music Streaming Market Is Gaining Popularity As Per The Business Research…

LONDON, Sept. 07, 2022 (GLOBE NEWSWIRE) -- According to The Business Research Company's research report on the music streaming market, artificial intelligence and machine learning in music streaming devices are the key trends in the music streaming market. Technologies like artificial intelligence and machine learning enhance the music streaming experience by increasing storage and improving search recommendations, improving the overall experience.

For instance, in January 2022, Gaana, an India-based music streaming app, introduced a new product feature using artificial intelligence to enhance the music listening experience for its listeners. The app will modify music preferences using artificial intelligence to suit a person's particular occasion or daily mood.

Request for a sample of the global music streaming market report

The global online music streaming market size is expected to grow from $24.09 billion in 2021 to $27.24 billion in 2022 at a compound annual growth rate (CAGR) of 13.08%. The global music streaming market size is expected to grow to $45.31 billion in 2026 at a compound annual growth rate (CAGR) of 13.57%.

The increasing adoption of smart devices is expected to propel the growth of the music streaming market. Smart devices such as smartphones and smart speakers have changed the way of listening to music. They include smart features like the ability to set alarms, play music on voice command, control smart home devices, and stream live music, as they are powered by a virtual assistant. For instance, according to statistics from Amazon Alexa 2020, nearly 53.6 million Amazon Echo smart speakers were sold in 2020, which increased to 65 million in 2021. Therefore, the increasing adoption of smart devices will drive the music streaming market growth.

Major players in the music streaming market are Amazon, Apple, Spotify, Gaana, SoundCloud, YouTube Music, Tidal, Deezer, Pandora, Sirius XM Holdings, iHeartRadio, Aspiro, Tencent Music Entertainment, Google, Idagio, LiveXLive, QTRAX, Saavn, Samsung, Sony Corporation, TuneIn, JOOX, NetEase, Kakao and Times Internet.

The global music streaming market is segmented by service into on-demand streaming, live streaming; by content into audio, video; by platform into application-based, web-based; by revenue channels into non-subscription, subscription; by end-use into individual, commercial.

North America was the largest region in the music streaming market in 2021. The regions covered in the global music streaming industry analysis are Asia-Pacific, Western Europe, Eastern Europe, North America, South America, the Middle East, and Africa.

Music Streaming Global Market Report 2022 - Market Size, Trends, And Global Forecast 2022-2026 is one of a series of new reports from The Business Research Company that provide music streaming market overviews, analyze and forecast market size and growth for the whole market as well as for music streaming market segments and geographies, and cover music streaming market trends, drivers, restraints, and leading competitors' revenues, profiles and market shares, across over 1,000 industry reports covering more than 2,500 market segments and 60 geographies.

The report also gives an in-depth analysis of the impact of COVID-19 on the market. The reports draw on 150,000 datasets, extensive secondary research, and exclusive insights from interviews with industry leaders. A highly experienced and expert team of analysts and modelers provides market analysis and forecasts. The reports identify top countries and segments for opportunities and strategies based on market trends and leading competitors' approaches.

Not the market you are looking for? Check out some similar market intelligence reports:

Music Recording Global Market Report 2022 By Type (Record Production, Music Publishers, Record Distribution, Sound Recording Studios), By Application (Mechanical, Performance, Synchronization, Digital), By End-User (Individual, Commercial), By Genre (Rock, Hip Hop, Pop, Jazz) Market Size, Trends, And Global Forecast 2022-2026

Content Streaming Global Market Report 2022 By Platform (Smartphones, Laptops & Desktops, Smart TVs, Gaming Consoles), By Type (On-Demand Video Streaming, Live Video Streaming ), By Deployment (Cloud, On-Premise), By End User (Consumer, Enterprise) Market Size, Trends, And Global Forecast 2022-2026

Smart Home Devices Global Market Report 2022 By Technology (Wi-Fi Technology, Bluetooth Technology), By Application (Energy Management, Climate Control System, Healthcare System, Home Entertainment System, Lighting Control System, Security & Access Control System), By Sales Channel (Online, Offline) Market Size, Trends, And Global Forecast 2022-2026

Interested to know more about The Business Research Company?

The Business Research Company is a market intelligence firm that excels in company, market, and consumer research. Located globally, it has specialist consultants in a wide range of industries including manufacturing, healthcare, financial services, chemicals, and technology.

The World's Most Comprehensive Database

The Business Research Company's flagship product, the Global Market Model, is a market intelligence platform covering various macroeconomic indicators and metrics across 60 geographies and 27 industries. The Global Market Model covers multi-layered datasets which help its users assess supply-demand gaps.

Blog: http://blog.tbrc.info/

Link:
The Adoption of Artificial Intelligence And Machine Learning In The Music Streaming Market Is Gaining Popularity As Per The Business Research...

Read More..

Explainable artificial intelligence through graph theory by generalized social network analysis-based classifier | Scientific Reports – Nature.com

In this subsection, we present details on how we process the dataset, turn it into a network graph, and, finally, how we produce and process the features that belong to the graph. The topics to be covered are:

splitting the data,

preprocessing,

feature importance and selection,

computation of similarity between samples, and

generating the raw graph.

After preprocessing the data, the next step is to split the dataset into training and test samples for validation purposes. We selected cross-validation (CV) as the validation method since it is the de facto standard in ML research. For CV, the full dataset is split into k folds; the classifier model is trained using data from (k-1) folds and then tested on the remaining kth fold. Eventually, after k iterations, the average performance scores (such as the F1 measure or ROC) across all folds are used to benchmark the classifier model.

A crucial step of CV is selecting the right proportion between the training and test subsamples, i.e., the number of folds. Determining the most appropriate number of folds k for a given dataset is still an open research question17, although the de facto standard for selecting k has accumulated around k=2, k=5, or k=10. To address the selection of the right fold size, we identified two priorities:

Priority 1 (Class Balance): Every split of the dataset needs to be class-balanced. Since the number of class types has a restrictive effect on selecting enough similar samples, detecting the effective number of folds depends heavily on this parameter. As a result, whenever we deal with a problem that has low-represented class(es), we select k=2.

Priority 2 (High Representation): In our model, briefly, we build a network from the training subsamples. Efficient network analysis depends on the size (i.e., number of nodes) of the network. Thus, maximizing the training subsamples, with enough representatives from each class (diversity), is our priority as far as possible when splitting the dataset; this way we can have more nodes. In brief, whenever priority 1 is not violated, we select k=5.

By balancing these two priorities, we select an efficient CV fold size by evaluating the characteristics of each dataset in terms of its sample size and number of different classes. The selected fold value for each dataset is specified in the Experiments and results section. To fulfill the class-balancing priority, we employ stratified sampling, so that each CV fold contains approximately the same percentage of samples of each target class as the complete set.
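As an illustration of the stratified CV splitting described above, the following sketch uses scikit-learn's StratifiedKFold on a stand-in dataset; the dataset and the value of k are placeholders, not those used in the paper.

from sklearn.datasets import load_iris
from sklearn.model_selection import StratifiedKFold

X, y = load_iris(return_X_y=True)          # stand-in dataset with class labels
k = 5                                       # k=2 would be used for poorly represented classes
skf = StratifiedKFold(n_splits=k, shuffle=True, random_state=0)

for fold, (train_idx, test_idx) in enumerate(skf.split(X, y)):
    # Each fold keeps roughly the same class proportions as the full dataset;
    # the training rows build the graph, the test rows are classified against it.
    print(f"fold {fold}: {len(train_idx)} train / {len(test_idx)} test")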

Preprocessing starts with the handling of missing data. For this part, we preferred to omit all samples that have one or more missing features. By doing this, we focus merely on developing the model, skipping trivial concerns.

As stated earlier, GSNAc can work on datasets that may have both numerical and categorical values. To ensure proper processing of those data types, as a first step, we separate numerical and categorical features18. In order to process them mathematically, categorical (string) features are transformed into unique integers for each unique category by a technique called labelization. It is worth noting that, contrary to the general approach, we do not use the one-hot-encoding technique for transforming categorical features, which is the method of creating dummy binary-valued features. Labelization does not generate extra features, whereas one-hot encoding extends the number of features.
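A small sketch of the difference, assuming pandas and scikit-learn and a hypothetical "habitat" feature: labelization adds a single integer column, whereas one-hot encoding would add one binary column per category.

import pandas as pd
from sklearn.preprocessing import OrdinalEncoder

df = pd.DataFrame({"habitat": ["sky", "sea", "sky", "land"]})

# Labelization: each unique category becomes a unique integer in one column.
df["habitat_label"] = OrdinalEncoder().fit_transform(df[["habitat"]]).astype(int)
print(df)

# One-hot encoding, shown only for contrast: one new binary column per category.
print(pd.get_dummies(df["habitat"]))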

For the numerical part, a very important stage of preprocessing, scaling19 of the features, follows. Scaling is beneficial since the features may have very different ranges, and this might affect scale-dependent processes like distance computation. There are two generally accepted scaling techniques: normalization and standardization. Normalization transforms features linearly into a closed range like [0, 1], which does not affect the variation of values among features. On the other hand, standardization transforms the feature space into a distribution of values centered around the mean with a unit standard deviation; this way, the mean of the attribute becomes zero and the resultant distribution has a unit standard deviation. Since GSNAc is heavily dependent on vectorial distances, we do not want to lose the structure of the variation within features, so our choice for scaling the features is normalization. It is worth mentioning that all the preprocessing is fitted on the training part of the data and then applied to transform the test data, ensuring no data leakage occurs.
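The following sketch shows min-max normalization fitted only on the training split and then applied to the test split, which is how leakage is avoided; scikit-learn and toy values are assumed.

import numpy as np
from sklearn.preprocessing import MinMaxScaler

X_train = np.array([[1.0, 200.0], [3.0, 400.0], [5.0, 1000.0]])   # toy training features
X_test  = np.array([[2.0, 600.0]])                                 # toy test features

scaler = MinMaxScaler()                    # linear mapping of each training feature into [0, 1]
X_train_norm = scaler.fit_transform(X_train)
X_test_norm  = scaler.transform(X_test)    # reuses the training min/max, so no leakage
print(X_train_norm)
print(X_test_norm)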

Feature Importance (FI) broadly refers to the scoring of features based on their usefulness in prediction. It is obvious that in any problem some features might be more definitive in terms of their predictive capability for the class. Moreover, a combination of some features may have a greater effect in total than the sum of their individual capacities in this sense. FI models, in general, address this type of concern. Indeed, almost all ML classification algorithms use an FI algorithm under the hood, since this is required for the proper weighting of features before feeding data into the model. It is part of any ML classifier, and of GSNAc as well. As a scale-sensitive model, GSNAc's vectorial similarity benefits greatly from more distinctive features.

For computing feature importance, we preferred to use an off-the-shelf algorithm, a supervised k-best feature selection18 method. The k-best feature selection algorithm simply ranks all features by evaluating their ANOVA F-values against the class labels. The ANOVA F-test analyzes the variance between each feature and its respective class and computes the F-value, which is the ratio of the variation between sample means over the variation within the samples. In this way, it assigns F-values as feature importances. Our general strategy is to keep all features for all datasets, with the exception of genomic datasets, which contain thousands of features, where we practiced omission. For this reason, instead of selecting some features, we prefer to keep all of them and use the importance learned at this step as the weight vector in the similarity calculation.
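A sketch of this step with scikit-learn's ANOVA F-test on a stand-in dataset; the F-values are normalized into a weight vector rather than used to discard features.

from sklearn.datasets import load_iris
from sklearn.feature_selection import f_classif

X, y = load_iris(return_X_y=True)       # stand-in dataset
F_values, _ = f_classif(X, y)           # one ANOVA F-value per feature

weights = F_values / F_values.sum()     # kept as the weight vector w for the similarity step
print(weights.round(3))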

In this step, we generate an undirected network graph G, whose nodes are the samples and whose edges are constructed using distance metrics20 between the feature values of the samples. Distances are converted to similarity scores to generate an adjacency matrix from the raw graph. As a crucial note, we state that since we aim to predict test samples by using G, in each batch we only process the training samples.

In our study, for constructing a graph from a dataset, we defined edge weights based on the Euclidean distance between the sample vectors. Simply, the Euclidean distance (also known as the L2-norm) gives the unitless straight-line (shortest) distance between two vectors in space. In formal terms, for f-dimensional vectors u and v, the Euclidean distance is defined as:

$$d\left(u,v\right)=\sqrt{\sum_{i=1}^{f}\left(u_{i}-v_{i}\right)^{2}}$$

A slight modification of the Euclidean distance is to introduce weights for the dimensions. Recall from the discussion of feature importance in the former sections that some features may carry more information than others. We address this factor by computing a weighted form of the L2-norm-based distance, which is presented as:

$$\mathrm{dist\_L2}_{w}\left(u,v\right)=\sqrt{\sum_{i} w_{i}\left(u_{i}-v_{i}\right)^{2}}$$

where w is the n-dimensional feature importance vector and i iterates over numerical dimensions.

The use of the Euclidean distance is not proper for categorical variables; i.e., it is ambiguous and not easy to determine how distant a canary's habitat "sky" is from a shark's habitat "sea". Accordingly, whenever the data contains categorical features, we change the distance metric to the L0 norm. The L0 norm is 0 if the categories are the same and 1 whenever the categories are different; i.e., between the sky and the sea the L0 norm is 1, which is the maximum value. Following the discussion of weights for features, the L0 norm is also computed in a weighted form as $\mathrm{dist\_L0}_{w}(u,v)=\sum_{j} w_{j}\,[\,u_{j}\ne v_{j}\,]$, where j iterates over the categorical dimensions and the bracket term equals 1 when the categories differ and 0 otherwise.

After computing the weighted pairwise distance between all the training samples, we combine the numerical and categorical parts as ${\mathrm{dist}_{w}(u,v)}^{2}={\mathrm{dist\_L2}_{w}(u,v)}^{2}+{\mathrm{dist\_L0}_{w}(u,v)}^{2}$. With pairwise distances for each pair of samples, we get an n x n square, symmetric distance matrix D, where n is the number of training samples. In matrix D, each element shows the distance between the corresponding vectors.

$$D=\left[\begin{array}{ccc} 0 & \cdots & d(1,n) \\ \vdots & \ddots & \vdots \\ d(n,1) & \cdots & 0 \end{array}\right]$$
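Under the formulas stated above, the pairwise distance for mixed-type samples can be sketched as follows; the sample vectors and weights are hypothetical, and NumPy is assumed.

import numpy as np

def weighted_l2(u_num, v_num, w_num):
    # Weighted Euclidean distance over the numerical dimensions.
    return np.sqrt(np.sum(w_num * (u_num - v_num) ** 2))

def weighted_l0(u_cat, v_cat, w_cat):
    # Weighted L0 distance: each categorical mismatch contributes its weight.
    return np.sum(w_cat * (np.asarray(u_cat) != np.asarray(v_cat)))

def combined_distance(u_num, v_num, u_cat, v_cat, w_num, w_cat):
    # dist_w(u,v)^2 = dist_L2_w(u,v)^2 + dist_L0_w(u,v)^2
    return np.sqrt(weighted_l2(u_num, v_num, w_num) ** 2 +
                   weighted_l0(u_cat, v_cat, w_cat) ** 2)

u_num, v_num = np.array([0.2, 0.8]), np.array([0.5, 0.1])   # hypothetical normalized numerical features
u_cat, v_cat = ["sky"], ["sea"]                             # hypothetical categorical feature
w_num, w_cat = np.array([0.4, 0.4]), np.array([0.2])        # hypothetical importance weights

print(combined_distance(u_num, v_num, u_cat, v_cat, w_num, w_cat))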

We aim to get a weighted network where edge weights represent the closeness of the connected nodes, so we first need to convert distance scores to similarity scores. We do this simply by subtracting each pairwise distance from the maximum distance in the distance series:

$$s(u,v)=\max(D)-\mathrm{dist}_{w}\left(u,v\right)$$

Finally, after removing self-loops (i.e., setting the diagonal elements of A to zero), we use the adjacency matrix A to generate an undirected network graph G. In this step, we also delete the lower triangular part (which is symmetric to the upper triangular part) to avoid redundancy. Note that, in the transition from the adjacency matrix to a graph, the existence of a (positive) similarity score between two samples u and v creates an edge between them, and the similarity score serves as the weight of this particular edge in graph G.

$$A=\left[\begin{array}{ccc} - & \cdots & s(1,n) \\ \vdots & \ddots & \vdots \\ - & \cdots & - \end{array}\right]$$

The raw graph generated in this step is a complete graph: all nodes are connected to all other nodes via an edge carrying some weight. Complete graphs are very complex and sometimes impossible to analyze. For instance, it is impossible to produce some SNA metrics, such as betweenness centrality, in this kind of graph.
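Putting the last steps together, the following sketch converts a toy distance matrix into similarities, removes self-loops, and builds the undirected weighted raw graph; NumPy and NetworkX are assumed, and the matrix values are hypothetical.

import numpy as np
import networkx as nx

D = np.array([[0.0, 1.0, 4.0],
              [1.0, 0.0, 2.0],
              [4.0, 2.0, 0.0]])     # toy pairwise distance matrix for three training samples

A = D.max() - D                      # similarity = maximum distance minus distance
np.fill_diagonal(A, 0.0)             # remove self-loops

G = nx.from_numpy_array(A)           # undirected graph; each positive similarity becomes a weighted edge
print(G.edges(data=True))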

View post:
Explainable artificial intelligence through graph theory by generalized social network analysis-based classifier | Scientific Reports - Nature.com

Read More..

Update your domain’s name servers | Cloud DNS | Google Cloud

After you create a managed zone, you must change the name servers that are associated with your domain registration to point to the Cloud DNS name servers. The process differs by domain registrar provider. Consult the documentation for your provider to determine how to make the name server change.

If you don't already have a domain name, you can create and register a new domain name at Google Domains or Cloud Domains, or you can use a third-party domain name registrar.

If you are using Cloud Domains, see Configure DNS for the domain in the Cloud Domains documentation.

If you are using Google Domains, follow these instructions to update your domain's name servers.

For Cloud DNS to work, you must determine the name servers that have been associated with your managed zone and verify that they match the name servers for your domain. Different managed zones have different name servers.

In the Google Cloud console, go to the Cloud DNS zones page.

Go to Cloud DNS zones

Under Zone name, select the name of your managed zone.

On the Zone details page, click Registrar setup at the top right of the page.

To return the list of name servers that are configured to serve DNS queries for your zone, run the dns managed-zones describe command:

Replace ZONE_NAME with the name of the managed zone for which you want to return a list of name servers.

The IP addresses of your Cloud DNS name servers change, and may be different for users in different geographic locations.

To find the IP addresses for the name servers in a name server shard, run the following command:

For private zones, you can't query name servers on the public internet. Therefore, it's not necessary to find their IP addresses.

To find all the IP address ranges used by Google Cloud, see Where can I find Compute Engine IP ranges?

Verify that the name servers for the domain match the name servers listed in the Cloud DNS zone.

To look up name servers that are currently in use, run the dig command:

Now that you have the list of Cloud DNS name servers hosting your managed zone, use your domain registrar to update the name servers for your domain. Your domain registrar might be Google Domains, Cloud Domains, or a third-party registrar.

Typically, you must provide at least two Cloud DNS name servers to the domain registrar. To benefit from Cloud DNS's high availability, you must use all the name servers.

After changing your domain registrar's name servers, it can take a while for resolver traffic to be directed to your new Cloud DNS name servers. Resolvers could continue to use your old name servers until the TTL on the old NS records expires.
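For a scripted check of the same verification, the sketch below compares the NS records currently served for a domain with the name servers listed for the Cloud DNS zone. It assumes the dnspython package is installed; the domain and name server values are placeholders that would be copied from the zone's Registrar setup panel.

import dns.resolver   # pip install dnspython

DOMAIN = "example.com"                      # placeholder domain
CLOUD_DNS_NAME_SERVERS = {                  # placeholder values from the zone's Registrar setup panel
    "ns-cloud-a1.googledomains.com.",
    "ns-cloud-a2.googledomains.com.",
    "ns-cloud-a3.googledomains.com.",
    "ns-cloud-a4.googledomains.com.",
}

# NS records that resolvers currently see for the domain.
current = {rr.target.to_text() for rr in dns.resolver.resolve(DOMAIN, "NS")}

if current == CLOUD_DNS_NAME_SERVERS:
    print("Name servers already point to Cloud DNS.")
else:
    print("Mismatch; update the registrar. Currently serving:", sorted(current))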

More here:
Update your domain's name servers | Cloud DNS | Google Cloud

Read More..

Cloud servers are proving to be an unfortunately common entry route for cyberattacks – TechRadar

Cloud servers are now the number one entry route for cyberattacks, new research has claimed, with 41% of companies reporting it as the first entry point.

The problem is only getting worse: the number of attacks using cloud servers as their initial point of entry rose 10% year-on-year, and cloud servers have also leapfrogged corporate servers as the main way for criminals to find their way into organizations.

The data, collected by cyber insurer Hiscox from a survey of 5,181 professionals from eight countries, found it's not just cloud servers that are letting hackers in, as 40% of businesses highlighted business emails as the main access point for cyberattacks.

Other common entry methods included remote access servers (RAS), which were cited by 31% of respondents, and employee-owned mobile devices, which were cited by 29% (a 6% rise from the year before).

Distributed denial of service (DDoS) attacks were also a popular method, cited by 26% of those surveyed.

The data also provided some insight into how cyberattacks are impacting different countries.

Businesses in the United Kingdom were found to be the least likely of all the countries surveyed to have experienced a cyberattack in the last year, at 42%, significantly beating out the Netherlands and France, which had figures of 57% and 52%, respectively.

However, on the flip side, the UK had the highest median cost for cyberattacks out of all the countries looked at, coming in at $28,000.

It's not just the smaller, underfunded firms that can fall victim to cloud server-based attacks.

Accenture, one of the world's largest IT consultancy firms, recently suffered an attack involving the LockBit ransomware strain which impacted a cloud server environment.

View original post here:
Cloud servers are proving to be an unfortunately common entry route for cyberattacks - TechRadar

Read More..

Recycling the Cloud: Singapore facility gives second life to mega servers Recycling International – Recycling International

Microsoft has opened a plant to tackle the growing stream of e-scrap from data centres. The Circular Center in Singapore provides services for the reuse of computer components in schools, for job training, and much more.

Microsoft aims to reuse 90% of its cloud computing hardware assets by 2025. The launch of this first facility in Asia is claimed to be an important step towards that goal, while also reducing Microsoft's carbon footprint and creating jobs.

Microsoft Cloud is powered by millions of servers in hundreds of data centres around the world, and demand for cloud services is growing exponentially. At these facilities, decommissioned servers and other types of hardware can be repurposed or disassembled by technicians before the components and equipment move on to another phase of life.

Microsoft's Intelligent Disposition and Routing System (IDARS) uses AI and machine learning to establish and execute a zero-waste plan for every piece of decommissioned hardware. IDARS also works to optimise routes for these hardware assets and provide Circular Center operators with instructions on how to dispose of each one.

Singapore, with strong government and private sector commitments and an agile policy environment, has already laid the foundations for an advanced recycling infrastructure to take advantage of those opportunities. A Microsoft Circular Center in Singapore is in line with this approach, says the tech multinational.

Microsoft's first Circular Center opened in Amsterdam in 2020. Since its inception, the company has reused or recycled 83% of all decommissioned assets. Plans are underway to expand the programme in Washington, Chicago, Sydney and other locations.

Would you like to share any interesting developments or article ideas with us? Don't hesitate to contact us.

Read this article:
Recycling the Cloud: Singapore facility gives second life to mega servers Recycling International - Recycling International

Read More..

Application Server Market to Hit Valuation of $40.96 Billion by 2028 | Increasing Number of Cyberattacks is growing Concerns among End-Users -…

Westford, USA, Sept. 08, 2022 (GLOBE NEWSWIRE) -- As the world continues to become more digital, businesses are increasingly looking for application servers that can facilitate large-scale web and mobile deployments. The growth of the global application server market is only expected to increase in the coming years, as market players figure out new ways to stay competitive.

There is a growing demand for application servers and companies are rushing to invest in these technologies in order to meet the needs of their customers. Application servers are now essential for any business that depends on web applications, as well as traditional desktop applications. This demand in the global application server market is due to the popularity of cloud-based services and the need for businesses to reduce IT costs. Many businesses are seeking solutions that allow them to use existing hardware and software infrastructure while offloading some of the processing burden to a third-party. This can be especially beneficial for companies that have limited resources or cannot afford to hire additional IT staff.

To meet this growing demand, vendors in the application server market are investing in new product lines and innovation. For example, IBM introduced its Bluemix platform in 2018, which makes it easier for developers to build cloud-based applications using IBM's hypervisor technology. Hewlett Packard Enterprise has also made considerable investments in its Applied Data Science Platform, which provides databases and analytics capabilities for application development.

Get sample copy of this report:

https://skyquestt.com/sample-request/application-server-market

Why Businesses are Rapidly Turning to Application Services?

There are a number of reasons why businesses are turning to the application server market. For one, these systems can help speed up web and mobile deployments by handling the heavy lifting required to run complex applications. Additionally, application servers offer reliability and security benefits that can be priceless for organizations that depend on their websites and apps for business success.

Today, web-based applications are increasingly being used to replace desktop applications. In addition, businesses are finding that application servers are a more efficient way to manage their software infrastructure than traditional hosting providers. This is because application servers offer higher performance and reliability than traditional hosting providers.

SkyQuest's research on the global application server market found that most businesses are using the product to configure and run multiple applications simultaneously without slowing down. This means that businesses can use application servers to run their business applications, as well as their own personal websites and applications.

Also, there's the increasing demand from cloud services providers for application servers. Cloud services providers want to use application servers so that they can provide customers with a scalable infrastructure. By using an application server, a cloud service provider can reduce the amount of time and effort it takes to set up a new service.

SkyQuest's report on the application server market offers insights into market dynamics, opportunities, trends, challenges, threats, pricing analysis, average spend per user, major spenders by company, consumer behavior, market impact, competitive landscape, and market share analysis.

Browse summary of the report and Complete Table of Contents (ToC):

https://skyquestt.com/report/application-server-market

High Risk of Ransomware in Application Server Infrastructure is Posing Challenge

Over the last few years, the global application server market witnessed around 129 major attacks on application server infrastructure. The increasing risk of cyberattacks on application servers is something businesses need to be aware of: these servers are a key part of many organizations, and when they are attacked, it can open up a lot of opportunities for criminals. Cyber risks to application servers have increased in recent years, as attackers have become increasingly skilled at targeting these systems. At the same time, companies are increasingly reliant on these systems to provide critical services, making them targets for hackers. In 2021, a ransomware attack on an application server cost around $17,000 on average.

SkyQuest recently conducted a survey of 150 large and 150 small enterprises in the application server market to understand the frequency of, and insights about, cyberattacks. It was observed that 13% of surveyed organizations had suffered at least one cyberattack in the past two years, and small enterprises were at least 200% more susceptible to cyberattacks. More than 26% of these organizations had suffered two or more attacks during that time period. Additionally, 44% of these same organizations reported that their cybersecurity capabilities were inadequate to respond to the attacks they experienced. As per our findings, 88% of all detected data breaches began with stolen or illegally accessed user credentials.

Top cyberattacks in application server market

SkyQuest has published a report on the global application server market. The report provides a detailed analysis of cyberattacks on application server consumers and their overall performance. The report also offers valuable insights about top players and their key advancements to avoid such attacks.

Speak to Analyst for your custom requirements:

https://skyquestt.com/speak-with-analyst/application-server-market

Top Players in Global Application Server Market

Related Reports in SkyQuests Library:

Global Electronic Data Interchange (EDI) Software Market

Global Human Resource (HR) Technology Market

Global Smart Label Market

Global Field Service Management (FSM) Market

Global Point Of Sale (POS) Software Market

About Us:

SkyQuest Technology is a leading growth consulting firm providing market intelligence, commercialization and technology services. It has 450+ happy clients globally.

Address:

1 Apache Way, Westford, Massachusetts 01886

Phone:

USA (+1) 617-230-0741

Email:sales@skyquestt.com


The rest is here:
Application Server Market to Hit Valuation of $40.96 Billion by 2028 | Increasing Number of Cyberattacks is growing Concerns among End-Users -...

Read More..

Improving Splunk and Kafka Platforms with Cloud-Native Technologies – InfoWorld

Intel Select Solutions for Splunk and Kafka on Kubernetes use containers and S3-compliant storage to increase application performance and infrastructure utilization while simplifying the management of hybrid cloud environments.

Executive Summary

Data architects and administrators of modern analytic and streaming platforms like Splunk and Kafka continually look for ways to simplify the management of hybrid or multi-cloud platforms, while also scaling these platforms to meet the needs of their organizations. They are challenged with increasing data volumes and the need for faster insights and responses. Unfortunately, scaling often results in server sprawl, underutilized infrastructure resources and operational inefficiencies.

The release of Splunk Operator for Kubernetes and Confluent for Kubernetes, combined with Splunk SmartStore and Confluent Tiered Storage, offers new options for architectures designed with containers and S3-compatible storage. These new cloud-native technologies, running on Intel architecture and Pure Storage FlashBlade, can help improve application performance, increase infrastructure utilization and simplify the management of hybrid and multi-cloud environments.

Intel and Pure Storage architects designed a new reference architecture called Intel Select Solutions for Splunk and Kafka on Kubernetes and conducted a proof of concept (PoC) to test the value of this reference architecture. Tests were run using Splunk Operator for Kubernetes and Confluent for Kubernetes with Intel IT's high-cardinality production data to demonstrate a real-world scenario.

In our PoC, a nine-node cluster reached a Splunk ingest rate of 886 MBps, while simultaneously completing 400 successful dense Splunk searches per minute, with an overall CPU utilization rate of 58%.1 We also tested Splunk super-sparse searches and Splunk ingest from Kafka data stored locally versus data in Confluent Tiered Storage on FlashBlade, which exhibited remarkable results. The outcomes of this PoC informed the Intel Select Solutions for Splunk and Kafka on Kubernetes.

Keep reading to find out how to build a similar Splunk and Kafka platform that can provide the performance and resource utilization your organization needs to meet the demands of today's data-intensive workloads.

Solution Brief

Business challenge

The ongoing digital transformation of virtually every industry means that modern enterprise workloads utilize massive amounts of structured and unstructured data. For applications like Splunk and Kafka, the explosion of data can be compounded by other issues. First, the traditional distributed scale-out model with direct-attached storage requires multiple copies of data to be stored, driving up storage needs even further. Second, many organizations are retaining their data for longer periods of time for security and/or compliance reasons. These trends create many challenges, including:

Beyond the challenges presented by legacy architectures, organizations often face other difficulties. Large organizations often run Splunk and Kafka platforms in both on-prem and multi-cloud environments. Managing the differences between these environments creates complexity for Splunk and Kafka administrators, architects and engineers.

Value of Intel Select Solutions for Splunk and Kafka on Kubernetes

Many organizations understand the value of Kubernetes, which offers portability and flexibility and works with almost any type of container runtime. It has become the standard across organizations for running cloud-native applications; 69% of respondents from a recent Cloud-Native Computing Foundation (CNCF) survey reported using Kubernetes in production.2 To support their customers' desire to deploy Kubernetes, Confluent developed Confluent for Kubernetes, and Splunk led the development of Splunk Operator for Kubernetes.

In addition, Splunk and Confluent have developed new storage capabilities: Splunk SmartStore and Confluent Tiered Storage, respectively. These capabilities use S3-compliant object storage to reduce the cost of massive data sets. In addition, organizations can maximize data availability by placing data in centralized S3 object storage, while reducing application storage requirements by storing a single copy of data that was moved to S3, relying on the S3 platform for data resiliency.

The cloud-native technologies underlying this reference architecture enable systems to quickly process the large amounts of data today's workloads demand; improve resource utilization and operational efficiency; and help simplify the deployment and management of Splunk and Kafka containers.

Solution architecture highlights

We designed our reference architecture to take advantage of the previously mentioned new Splunk and Kafka products and technologies. We ran tests with a proof of concept (PoC) designed to assess Kafka and Splunk performance running on Kubernetes with servers based on high-performance Intel architecture and S3-compliant storage supported by Pure Storage FlashBlade.

Figure 1 illustrates the solution architecture at a high level. The critical software and hardware products and technologies included in this reference architecture are listed below:

Additional information about some of these components is provided in the A Closer Look at Intel Select Solutions for Splunk and Kafka on Kubernetes section that follows.

Figure 1. The solution reference architecture uses high-performance hardware and cloud-native software to help increase performance and improve hardware utilization and operational efficiency.

A Closer Look at Intel Select Solutions for Splunk and Kafka on Kubernetes

The ability to run Splunk and Kafka on the same Kubernetes cluster connected to S3-compliant flash storage unleashes seamless scalability with an extraordinary amount of performance and resource utilization efficiency. The following sections describe some of the software innovations that make this possible.

Confluent for Kubernetes and Confluent TieredStorage

Confluent for Kubernetes provides a cloud-native, infrastructure-as-code approach to deploying Kafka on Kubernetes. It goes beyond the open-source version of Kubernetes to provide a complete, declarative API to build a private cloud Kafka service. It automates the deployment of Confluent Platform and uses Kubernetes to enhance the platform's elasticity, ease of operations and resiliency for enterprises operating at any scale.

Confluent Tiered Storage architecture augments Kafka brokers with the S3 object store via FlashBlade, storing data on the FlashBlade instead of the local storage. Therefore, Kafka brokers contain significantly less state locally, making them more lightweight and rebalancing operations orders of magnitude faster. Tiered Storage simplifies the operation and scaling of the Kafka cluster and enables the cluster to scale efficiently to petabytes of data. With FlashBlade as the backend, Tiered Storage has the performance to make all Kafka data accessible for both streaming consumers and historical queries.

Splunk Operator for Kubernetes and SplunkSmartStore

The Splunk Operator for Kubernetes simplifies the deployment of Splunk Enterprise in a cloud-native environment that uses containers. The Operator simplifies the scaling and management of Splunk Enterprise by automating administrative workflows using Kubernetes best practices.

Splunk SmartStore is an indexer capability that provides a way to use remote object stores to store indexed data. SmartStore makes it easier for organizations to retain data for a longer period of time. Using FlashBlade as the high-performance remote object store, SmartStore holds the single master copy of the warm/cold data. At the same time, a cache manager on the indexer maintains the recently accessed data. The cache manager manages data movement between the indexer and the remote storage tier. The data availability and fidelity functions are offloaded to FlashBlade, which offers N+2 redundancy.4

Remote Object Storage Capabilities

Pure Storage FlashBlade is a scale-out, all-flash file and object storage system that is designed to consolidate complete data silos while accelerating real-time insights from machine data using applications such as Splunk and Kafka. FlashBlade's ability to scale performance and capacity is based on five key innovations:

A complete FlashBlade system configuration consists of up to 10 self-contained rack-mounted servers. A single 4U chassis FlashBlade can host up to 15 blades, and a full FlashBlade system configuration can scale up to 10 chassis (150 blades), potentially representing years of data for even higher ingest systems. Each blade assembly is a self-contained compute module equipped with processors, communication interfaces and either 17 TB or 52 TB of flash memory for persistent data storage. Figure 2 shows how the reference architecture uses Splunk SmartStore and FlashBlade.

Figure 2. Splunk SmartStore using FlashBlade for the remote object store.

Proof of Concept Testing Process and Results

The following tests were performed in our PoC:

For all the tests, we used Intel IT's real-world high-cardinality production data from sources such as DNS, Endpoint Detection and Response (EDR) and Firewall, which were collected into Kafka and ingested into Splunk through Splunk Connect for Kafka.

Test #1: Application Performance and InfrastructureUtilization

In this test, we compared the performance of a bare-metal Splunk and Kafka deployment to a Kubernetes deployment. The test consisted of reading data from four Kafka topics and ingesting that data into Splunk, while dense searches were scheduled to run every minute.

Bare-Metal Performance

We started with a bare-metal test using nine physical servers. Five nodes served as Splunk indexers, three nodes as Kafka brokers and one node served as a Splunk search head. With this bare-metal cluster, the peak ingest rate was 301 MBps, while simultaneously finishing 90 successful Splunk dense searches per minute (60 in cache, 30 from FlashBlade), with an average CPU utilization of 12%. The average search runtime for the Splunk dense search was 22 seconds.

Addition of Kubernetes

Next, we deployed Splunk Operator for Kubernetes and Confluent for Kubernetes on the same nine-node cluster. Kubernetes spawned 62 containers: 35 indexers, 18 Kafka brokers, and nine search heads. With this setup, we reached a peak Splunk ingest rate of 886 MBps while simultaneously completing 400 successful Splunk dense searches per minute (300 in cache, 100 from FlashBlade), with an average CPU utilization of 58%. The average runtime for the Splunk dense search was 16 seconds, a 27% decrease from the average search time on the bare-metal cluster. Figure 3 illustrates the improved CPU utilization gained from containerization with Kubernetes. Figure 4 shows the high performance enabled by the reference architecture.

Figure 3. Deployment of the Splunk Operator for Kubernetes and Confluent for Kubernetes enabled 62 Splunk and Kafka containers on the nine physical servers in the PoC cluster.

Figure 4. Running Splunk Operator for Kubernetes and Confluent for Kubernetes enabled up to a 2.9x higher ingest rate, up to 4x more successful dense searches, and a 27% reduction in average Splunk search time compared to the bare-metal cluster.

Test #2: Data Ingest from Kafka Local Storage versus Confluent Tiered Storage

Kafka's two key roles in event streaming are the producer (ingest) and the consumer (search/read). In a classic Kafka setup, produced data is kept on the brokers' local storage; with Tiered Storage, Confluent offloads that data from local storage to the object store, enabling effectively infinite retention. If a consumer requests data that is no longer on local storage, the data is downloaded from the object store.
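This offload is transparent to clients: a consumer reads a tiered topic exactly as it reads a purely local one, and the broker fetches older segments from the object store on demand. A minimal consumer sketch using the confluent-kafka Python client is shown below; the bootstrap address, consumer group, and topic name are placeholders.

```python
from confluent_kafka import Consumer

def process(value: bytes) -> None:
    # Placeholder handler; a real pipeline would forward the event onward (e.g., to Splunk HEC).
    print(value[:80])

consumer = Consumer({
    "bootstrap.servers": "kafka-broker.example.com:9092",  # placeholder broker address
    "group.id": "replay-test",
    "auto.offset.reset": "earliest",  # start from the oldest retained offset, even if those segments are tiered
})
consumer.subscribe(["firewall"])

try:
    while True:
        msg = consumer.poll(1.0)      # same call path whether the segment is on local SSD or on FlashBlade
        if msg is None:
            continue
        if msg.error():
            print(f"consumer error: {msg.error()}")
            continue
        process(msg.value())
finally:
    consumer.close()
```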

To compare consumer/download performance, we started the Splunk Connect for Kafka workers after one hour of data ingestion into Kafka, with all data on local SSD storage. The Connect workers read the data from Kafka and forwarded it to the Splunk indexers, where we measured the ingest throughput and the elapsed time to load all unconsumed events. During this time, Kafka read the data from local SSD storage, and Splunk was also writing hot buckets to the local SSD storage that hosts the hot tier.

We repeated the same test with Tiered Storage enabled on the topic, again starting the Splunk Connect for Kafka workers, which initially read the data from FlashBlade and read only the last 15 minutes of data from local SSD storage. We then measured the ingest throughput and the elapsed time to load all unconsumed events.

As shown in Figure 5, there is no reduction in Kafka consumer performance when the broker data is hosted on Tiered Storage on FlashBlade. This confirms that offloading Kafka data to the object store, FlashBlade, delivers similar performance for consumers along with the added benefit of longer retention.

Figure 5. Using Confluent Tiered Storage with FlashBlade enables longer data retention while maintaining (or even improving) the ingest rate.

Test #3: Splunk Super-Sparse Searches in Splunk SmartStore

When data is in the cache, Splunk SmartStore searches are expected to perform similarly to non-SmartStore searches. When data is not in the cache, search times depend on the amount of data that must be downloaded from the remote object store into the cache. Hence, searches involving rarely accessed data, or data covering longer time periods, can have longer response times than with non-SmartStore indexes. However, FlashBlade accelerates the download considerably compared with other cheap-and-deep object storage available today.
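Cache behavior is tunable on each indexer. The sketch below shows the server.conf stanza where cache size, eviction policy, and protection of recently searched buckets are set; the cache budget shown is a placeholder, and the PoC's actual values are not documented here.

```
# server.conf on each indexer -- illustrative SmartStore cache manager settings
[cachemanager]
max_cache_size = 2000000        # placeholder cache budget, in MB, for buckets downloaded from FlashBlade
eviction_policy = lru           # evict the least recently used buckets first
hotlist_recency_secs = 86400    # protect buckets searched within the last day from eviction
```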

To demonstrate FlashBlade's ability to accelerate downloads, we tested the performance of a super-sparse search (the equivalent of finding a needle in a haystack); the response time of this type of search is generally tied to I/O performance. The search was first run against data in the Splunk cache to establish the resulting event count: it returned 64 events out of several billion. The entire cache was then evicted from all indexers, and the same super-sparse search was issued again, which downloaded all the required data from FlashBlade into the cache to perform the search. FlashBlade sustained a download of 376 GB in just 84 seconds, with a maximum download throughput of 19 GBps (see Table 1).
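For context, a super-sparse search is simply a query whose filter matches a tiny fraction of events across a long time range, so almost every bucket must be fetched and scanned. A hypothetical SPL example against the firewall data is sketched below; the index, sourcetype, field names, and IP value are illustrative, not the actual search used in the test.

```
index=firewall sourcetype=pan:traffic dest_ip=203.0.113.57 earliest=-30d@d latest=now
| stats count by src_ip, dest_port
```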

Table 1. Results from Super-Sparse Search

Downloaded Buckets: 376 GB
Elapsed Time: 84 seconds
Average Download Throughput: 4.45 GBps
Maximum Download Throughput: 19 GBps

A super-sparse search downloaded 376 GB in 84 seconds.

Configuration Summary

Introduction

The previous pages provided a high-level discussion of the business value provided by Intel Select Solutions for Splunk and Kafka on Kubernetes, the technologies used in the solution, and the performance and scalability that can be expected. This section provides more detail about the Intel technologies used in the reference design and the bill of materials for building the solution.

Intel Select Solutions for Splunk and Kafka on Kubernetes Design

The following tables describe the components required to build this solution. Customers must use firmware with the latest microcode. Tables 2, 3, and 4 detail the key components of our reference architecture and PoC. The selection of software, compute, network, and storage components was essential to achieving the performance gains observed.

Table 2. Key Server Components

CPU: 2x Intel Xeon Platinum 8360Y (36 cores, 2.4 GHz)
Memory: 16x 32 GB DDR4 @ 3200 MT/s
Storage (Cache Tier): 1x Intel Optane SSD P5800X (1.6 TB)
Storage (Capacity Tier): 1x SSD DC P4510 (4 TB)
Boot Drive: 1x SSD D3-S4610 (960 GB)
Network: Intel Ethernet Network Adapter E810-XXVDA2 (25 GbE)

Table 3. Software Components

Kubernetes: 1.23.0
Splunk Operator for Kubernetes: 1.0.1
Splunk Enterprise: 8.2.0
Splunk Connect for Kafka: 2.0.2
Confluent for Kubernetes: 2.2.0
Confluent Platform: 7.0.1 (using Apache Kafka 3.0.0)

Table 4. S3 Object Storage Components

Read more from the original source:
Improving Splunk and Kafka Platforms with Cloud-Native Technologies - InfoWorld


3 practical ways to fight recession by being cloud smart – IT Brief New Zealand

As COVID starts to feel almost like a distant memory, you'd think we'd all cop a break. But no, the threat of recession now darkens the horizon. This makes it an excellent time to get smart about how you use the cloud and ensure it delivers short- and long-term value to your organisation.

In this article, we suggest three specific ways to nail down some genuine savings or optimise the benefits (and savings) from your cloud and cloud applications.

1. Save more when you choose a cloud-native application

Depending on where you are on your roadmap to cloud adoption, you may want to look sideways at some of your legacy line-of-business applications and ask if they will serve you equally well in your transformed state.

If you have enough budget, practically any application can be retrospectively modernised to work in the cloud. And, unwilling to be left behind, some vendors have re-engineered their applications to run in the cloud with varying degrees of success. But it's important to realise that unless an application was built from the ground up to run in the cloud (i.e., cloud-native), it may not deliver an ROI or enable your business to keep up with the current pace of change.

Think of it this way: it's like adapting your petrol-fuelled car to run on an EV battery. While the innovation may prolong your beloved vehicle's life, it will never perform to the standard of a brand-spanking-new, state-of-the-art Tesla.

Cloud-native applications are built from birth to be inherently efficient, to perform to a much higher standard than applications with non-native features, and to cost less to run.

Let's break those benefits down a bit:

2. Check out that cloud sprawl

It's easy to rack up spikes on your cloud invoice when your organisation has gone cloud crazy. Cloud sprawl is when your cloud resources have proliferated out of control and you are paying for them, often unknowingly.

So, how does that happen? It usually comes about because of a failure to eliminate services that are no longer, or never were, part of your overall cloud strategy. It's like still paying a vehicle insurance policy on a Ferrari when you've made a sensible downgrade to a family-friendly Toyota.

Cloud sprawl can creep in through departments adding or trialling cloud applications and then not unsubscribing from them; through maintaining unneeded storage after deleting the associated cloud server instance; or through services you needed when making the original move to the cloud and never decommissioned.

Make your cloud strategy a living document, one that's shared and compared with the real-life status quo regularly, to ensure you're only paying for what you need and use. Implement policies to retire those random or one-off cloud application trials once they're done with. Talk to your technology partner about setting up automated provisioning to shut down old workloads that are no longer of value, or that could be run off-peak and therefore more cost-effectively.

And compare every invoice to identify whether you are paying for cloud services you no longer need or use. If it's all sounding a bit hard, a cloud sprawl health check by your managed services partner could provide a great ROI.

3. Get more value from your nowhere-near-dead legacy applications

While cloud-native applications may seem to offer it all, we all know that sometimes it's simply not practical to move on from your investment in a legacy solution. In that case, a lift and shift (think of it as uplifting your house as is, where is, from a slightly down-at-heel suburb to a more upmarket one with better facilities) may be the best option to breathe life into ageing technology without having to invest in renovations (or buy new servers).

When done well, lift and shift is a very cost-effective way to onramp your organisation onto the cloud. Just be aware that while you will save money by not modernising your application, you'll not realise the true cloud benefits of native constructs (i.e., cheaper storage, elasticity, or additional security).

Don't forget to count your savings

If you're wondering where else you can make immediate or long-term savings, don't forget that your original decision to move to the cloud has delivered your organisation a positive ROI since Day One.

And if you've chosen fully managed services, you've saved even more.

You've already walked away from the significant overheads of expensive servers stacked in a dust-free, temperature-controlled environment, the disruption caused by software upgrades or server downtime, and the need for IT resources to manage your environment and safeguard your data from cyberattacks. And you've said hello to a low-risk, secure, highly available environment, accessible from anywhere your people work, at any time.

If you'd like to discuss how to optimise your cloud benefits, and get some well-considered, practical answers, contact us here.

Continue reading here:
3 practical ways to fight recession by being cloud smart - IT Brief New Zealand


Security pros say the cloud has increased the number of identities at their organizations – SC Media

The Identity Defined Security Alliance (IDSA) on Wednesday reported that 98% of companies surveyed, the vast majority, confirmed that the number of identities in their organization has increased, with 52% saying it's because of the rapid adoption of cloud applications.

Other factors increasing identities at organizations are an increase in third-party relationships (46%) and in new machine identities (43%).

Given the growing number of identities in organizations as they migrate to the cloud, it makes sense that 84% of respondents report having had an identity-related attack in the past year.

The IDSA report said managing and monitoring permissions at such a high scale and in convoluted environments has become extremely difficult. Attackers are exploiting this challenge and continuously attempt to escalate their attack capabilities.

"Identity breaches are by far one of the most common breaches," said Alon Nachmany, Field CISO at AppViewX, who said he dealt with two breaches of this kind when he was a CISO. Nachmany said the industry slowly evolved to privileged identities and ensured that privileged accounts were a separate identity, but when organizations moved to the cloud, the lines blurred.

"The days of managing your own systems with your own systems were gone," Nachmany said. "As an example, with on-prem Microsoft Exchange Servers migrating to Microsoft O365, we no longer managed the authentication piece. Our local accounts were now accessible from everywhere. And a lot of security best practices were overlooked. Another issue is that as some companies blew up and more systems came onboard, they were quickly deployed with the thought that we will go back and clean it up later. With the cloud making these deployments incredibly easier and faster, the issues just evolved."

Darryl MacLeod, vCISO at LARES Consulting, said while it's effective to invest in IAM solutions, organizations need to go back to basics and educate their employees about the importance of security. MacLeod said employees need to understand the dangers of phishing emails and other social engineering attacks. They should also know how to properly manage their passwords and other sensitive information; by doing so, MacLeod said, organizations can significantly reduce their identity-related risks.

"With the growth of cloud computing, organizations are now entrusting their data to third-party service providers without thinking of the implications," MacLeod said. "This shift has led to a huge increase in the number of identities that organizations have to manage. As a result, it's made them much more vulnerable to attack. If an attacker can gain access to one of these cloud-based services, they can potentially access all of an organization's data. If an organization doesn't have the right security controls in place, they could be left scrambling to contain the damage."

Joseph Carson, chief security scientist and advisory CISO at Delinea, said the growth in mobility and the cloud greatly increases the complexity of securing identities. Carson pointed out that organizations still attempt to secure them with the existing security technologies they already have, which results in many security gaps and limitations.

"Some organizations even fall short by trying to checkbox security identities with simple password managers," Carson said. "However, this still means relying on business users to make good security decisions. To secure identities, you must first have a good strategy and plan in place. This means understanding the types of privileged identities that exist in the business and using security technology designed to discover and protect them. The good news is that many organizations understand the importance of protecting identities."

Originally posted here:
Security pros say the cloud has increased the number of identities at their organizations - SC Media
