
The Role of Machine Learning in Health Informatics – Healthcare Tech Outlook

With digital disruption affecting every industry, including healthcare, the capacity to collect, exchange, and deliver data has become critical.

FREMONT, CA: Through algorithmic procedures, machine learning applications can increase the accuracy of treatment protocols and health outcomes. For instance, deep learning, a subset of advanced machine learning that simulates how the human brain works, is increasingly employed in radiology and medical imaging. Deep learning applications can detect, recognize, and evaluate malignant tumors from images using neural networks that learn from data without supervision.

Increased processing speeds and cloud infrastructures enable machine learning programs to discover anomalies in images that are not visible to the human eye, assisting in diagnosing and treating disease.

Machine learning developments in healthcare will continue to reshape the industry. Machine learning applications already in use include a diagnostic tool for diabetic retinopathy and predictive analytics that forecast breast cancer recurrence from medical records and images.

Three areas in which machine learning in health informatics impacts healthcare are discussed in the following sections.

Recordkeeping: In health informatics, machine learning can help streamline recordkeeping, particularly electronic health records (EHRs). Using AI to optimize EHR management can improve patient care, reduce healthcare and administrative costs, and increase operational efficiency.

Natural language processing is one example. It enables clinicians to capture and record clinical notes without relying on manual transcription.

Additionally, machine learning algorithms can simplify physician usage of EHR management systems by offering clinical decision assistance, automating image analysis, and integrating telehealth technology.

Data Integrity: Gaps in healthcare data can result in erroneous predictions from machine learning algorithms, which can severely impact clinical decision-making.

Since healthcare data was originally collected for EHRs rather than for analysis, it must be prepared before machine learning algorithms can utilize it efficiently.

Professionals in health informatics are accountable for data integrity. Health informatics experts collect, analyze, classify, and cleanse data.

Predictive Analytics: Combining machine learning, health informatics, and predictive analytics enhances healthcare processes, transforms clinical decision support tools, and improves patient outcomes. The promise of machine learning in transforming healthcare lies in its ability to harness health informatics to forecast health outcomes via predictive analytics, resulting in more accurate diagnoses and treatments and improved clinician insights for tailored and cohort therapies.

Additionally, machine learning may bring value to predictive analytics by translating data for decision-makers, allowing them to identify process gaps and optimize overall healthcare business operations.


Link:
The Role of Machine Learning in Health Informatics - Healthcare Tech Outlook


Connecting the continuum: Machine learning and AI are the keys – Becker’s Hospital Review

In today's healthcare environment, health systems, ACOs and skilled nursing facilities have limited visibility of patients as they transition between care settings, such as from the hospital to a post-acute facility, or from post-acute to home. Siloed information systems prevent organizations from sharing the data and analytics that can enhance the quality of patient care and avoid unnecessary costs, including readmissions.

The good news is that technology is paving the way for greater integration and visibility into patients' clinical status. Becker's Hospital Review recently spoke with Anthony Laflen, director of solution design, acute and payer, with Collective Medical, a PointClickCare company, to discuss the value of data in the post-acute care arena and how software is transforming the patient experience.

Data visibility is the key to assessing and improving patient care

Historically, hospital software platforms haven't communicated with the IT systems used by skilled nursing facilities. Further downstream, skilled nursing home platforms typically don't communicate with home health agency IT systems.

"Some markets have adopted solutions that address these issues, but by and large, the system is broken," Mr. Laflen said. "Piecing different platforms together is difficult and expensive, and it's only recently become possible. This is problematic, because patient visibility is how you assess and impact patient care. If you can't understand the root cause of an issue with live data, it's impossible to intervene or educate your partners."

Fortunately, it's becoming more common for hospital systems to share their data. This is due in part to software enhancements, government-led efforts to encourage information sharing and improve interoperability, as well as acquisitions made by larger healthcare players. According to Mr. Laflen, "When hospital systems open a portal and push data directly into the post-acute electronic health record platform, it brings tremendous value. You'll see a massive reduction in medication errors, fewer keystroke errors and better handoffs when patients move from one setting to the next. It's an exciting time."

Healthcare software advancements facilitate data flow and connected care

In 2010, Mr. Laflen worked for Marquis, an operator of skilled nursing facilities. At that time, Marquis tracked and analyzed hospital readmissions for its patients using Excel spreadsheets. Hospitals were excited to see this information since most organizations were using outdated Medicare claims data to understand readmission trends.

"We had to explain that the Medicare data that hospitals and health plans were using to make assessments was 18 to 24 months old," Mr. Laflen said. "Bringing our spreadsheet to the table demonstrated our willingness to be transparent. We became the preferred providers in most markets by being open and transparent."

Around that same time, Mr. Laflen learned about Collective Medical's care coordination platform, which showed, in real time, when SNF patients were bouncing back to the ED or admitted to the hospital. (The platform allows care managers to track when patients are visiting any acute or post-acute facility that participates in Collective's national network.) Marquis was one of the first groups to sign on to the Collective platform.

"We wanted to be alerted and intervene if patients were readmitted. If it was clinically appropriate, we would tell emergency department physicians to send patients back to our skilled nursing facilities rather than admit them to the hospital. Thanks to the data sharing through Collective Medical, we drove our readmission rates at Marquis from the low 20 percent range to the single digits, and we did it in less than two months," Mr. Laflen said.

The journey to connected care continues with enhanced data sharing, machine learning and AI

PointClickCare's recent acquisition of Collective Medical provides a single pane of glass for care managers, showing what's happening in real time at skilled nursing facilities.

"When you take Collective Medical's network breadth and marry it with PointClickCare, which is the leading EHR provider in the skilled nursing setting, it's exciting," Mr. Laflen said.

Collective Medical has around 3,000 hospitals and over 6,200 other nodes in its network, as well as 100 percent of the national health plans. PointClickCare has over 22,000 customers, and around 97 percent of all U.S. hospital discharges to a skilled nursing facility are to a facility using PointClickCare's EHR.

Looking ahead, Laflen sees opportunities for optimizing patient length of stay in post-acute facilities. Many risk-bearing entities, such as ACOs, try to restrict the amount of time that patients spend in post-acute settings, in hopes of achieving an optimal length of stay that minimizes unnecessary costs. However, when ACOs attempt to manage length of stay without access to real-time clinical data, they risk discharging patients prematurely. If unstable individuals are discharged home or to the community, they may end up back in the hospital.

To address this challenge, PointClickCare and Collective Medical are leveraging machine learning and AI. "Our machine learning models will tell you based on live data what is happening with individuals," Mr. Laflen said. "They predict whether the probability of an incident has increased, and they can alert caregivers in both skilled nursing and hospital settings. It's groundbreaking."

This article was sponsored by Collective Medical.

More:
Connecting the continuum: Machine learning and AI are the keys - Becker's Hospital Review


Understanding the AUC-ROC Curve in Machine Learning Classification – Analytics India Magazine

A critical step after implementing a machine learning algorithm is to find out how effective the model is, based on metrics and datasets. Different performance metrics are used to evaluate different machine learning algorithms. For example, to distinguish between different objects we can use classification metrics such as log-loss, average accuracy, and AUC; if the model instead predicts a continuous value, the root mean squared error (RMSE) can be used to measure its performance.

In this article, we will discuss the performance metrics used in classification, focusing in particular on two of them: the ROC curve and the AUC.

The metrics one chooses to evaluate a machine learning model play an important role: the choice of metric influences how the performance of machine learning algorithms is measured and compared. Metrics differ slightly from loss functions. Loss functions are used to train a machine learning model, usually via an optimization method such as gradient descent, and are therefore usually differentiable in the model's parameters. Metrics, on the other hand, are used to monitor and evaluate the performance of a model during training and testing, and need not be differentiable. The metric chosen also determines which characteristics of the results are given the most weight.

One of the basic classification tools is the confusion matrix, a tabular visualization of the ground-truth labels versus the model's predictions. Each row of the confusion matrix represents the instances of a predicted class and each column represents the instances of an actual class (or vice versa, depending on the convention). The confusion matrix is not itself a performance metric, but it provides the basis from which other metrics are computed. It has four cells. The True Positive (TP) count signifies how many positive-class samples the model predicted correctly; the True Negative (TN) count signifies how many negative-class samples the model predicted correctly; the False Positive (FP) count signifies how many negative-class samples the model predicted incorrectly as positive; and the False Negative (FN) count signifies how many positive-class samples the model predicted incorrectly as negative.
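As a minimal sketch with scikit-learn (the toy labels here are made up for illustration), the four counts can be read off directly:

from sklearn.metrics import confusion_matrix

# Toy ground-truth labels and model predictions (1 = positive class)
y_true = [1, 0, 1, 1, 0, 0, 1, 0]
y_pred = [1, 0, 0, 1, 0, 1, 1, 0]

# scikit-learn's convention: rows are actual classes, columns are predicted
tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
print(f"TP={tp}, TN={tn}, FP={fp}, FN={fn}")  # TP=3, TN=3, FP=1, FN=1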

Precision, recall, and the F1 score are metrics whose values are obtained from a confusion matrix, as they are based on the true and false classifications. Recall is also termed the true positive rate or sensitivity, and precision is termed the positive predictive value in classification. In symbols:
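Precision (PPV) = TP / (TP + FP)

Recall (sensitivity, TPR) = TP / (TP + FN)

F1 = 2 × Precision × Recall / (Precision + Recall)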

Accuracy, in terms of performance metrics, is the measure of the classifier's correct predictions relative to all the data points it scored: the ratio of the number of correct predictions to the total number of predictions made by the classifier. These additional performance measures help derive more meaning from your model.

AUC-ROC is used to visualize the performance of a classification model based on its rates of correct and incorrect classifications. The rest of this article discusses the AUC-ROC in detail.

The ROC curve, or Receiver Operating Characteristic curve, is a tool used to measure the performance of a classification model. The ROC curve plots the rate of true positives against the rate of false positives, thereby highlighting the sensitivity of the classifier model. The ROC is also known as a relative operating characteristic curve, because it compares two operating characteristics, the true positive rate (TPR) and the false positive rate (FPR), as the decision criterion changes. An ideal classifier has a ROC curve that reaches a true positive rate of 100% with zero false positives. In general, we measure how many additional correct positive classifications are gained with each increment in the rate of false positives.

The ROC curve can be used to select a threshold for a classifier that maximizes the true positives while minimizing the false positives. ROC curves help determine the exact trade-off between the true positive rate and the false positive rate for a model across different probability thresholds. They are most appropriate when the observations are balanced between the classes. The method was first used in signal detection but is now applied in many other areas, such as medicine, radiology, and natural hazards, in addition to machine learning. A discrete classifier returns only the predicted class and therefore gives a single point in ROC space; for probabilistic classifiers, which output a probability or score reflecting the degree to which an instance belongs to one class rather than another, a curve is created by varying the threshold on that score.
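One common selection rule (a sketch, not the only possible criterion) is Youden's J statistic, which picks the threshold maximizing TPR minus FPR over the candidates returned by scikit-learn's roc_curve:

import numpy as np
from sklearn.metrics import roc_curve

def best_threshold(y_true, y_score):
    """Return the threshold maximizing Youden's J statistic (TPR - FPR)."""
    fpr, tpr, thresholds = roc_curve(y_true, y_score)
    return thresholds[np.argmax(tpr - fpr)]

# Example with toy scores: prints 0.8
print(best_threshold([0, 0, 1, 1], [0.1, 0.4, 0.35, 0.8]))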

The Area Under the Curve (AUC) is one of the most widely used metrics for model evaluation and is generally used for binary classification problems. AUC measures the entire two-dimensional area underneath the ROC curve. The AUC of a classifier is equal to the probability that the classifier will rank a randomly chosen positive example higher than a randomly chosen negative example. It summarizes the ROC curve in a single number describing the classifier's ability to distinguish between classes: the higher the AUC, the better the model is at separating the positive and negative classes.
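That probabilistic reading can be checked directly: the fraction of (positive, negative) pairs in which the positive example gets the higher score (counting ties as half) matches roc_auc_score. A small sketch with made-up scores:

import numpy as np
from sklearn.metrics import roc_auc_score

y_true = np.array([0, 0, 1, 1, 0, 1])
y_score = np.array([0.1, 0.4, 0.35, 0.8, 0.2, 0.7])

pos = y_score[y_true == 1]
neg = y_score[y_true == 0]
# Probability a random positive outranks a random negative (ties count half)
pairwise = (pos[:, None] > neg[None, :]).mean() + 0.5 * (pos[:, None] == neg[None, :]).mean()
print(pairwise, roc_auc_score(y_true, y_score))  # both ~0.889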

The area under the curve is one of the good ways to estimate the quality of the model. An excellent model has an AUC near 1, which indicates a good measure of separability. A poor model has an AUC near 0, which indicates the worst measure of separability; in fact, it means the model is reciprocating the labels, predicting 0s as 1s and 1s as 0s. When the AUC is 0.5, the model has no class-separation capacity whatsoever.

AUC-ROC is a valued metric for evaluating the performance of classification models, as it clearly tells us how capable a model is of distinguishing between the classes. The judging criterion is simple: the higher the AUC, the better the model. AUC-ROC curves are frequently used to depict graphically the connection and trade-off between sensitivity and specificity for every possible cut-off of a test, or combination of tests, being performed, and the area under the curve gives an idea of the benefit of using the test for the underlying question.

The AUC-ROC curve of a test can also be used as a criterion of the test's discriminative ability, telling us how good the test is in a given clinical situation: the closer the curve is to the upper left corner, the more efficient the test. To combine the false positive rate and the true positive rate into a single metric, we first compute both at many different thresholds (for example, of a logistic regression) and plot them on a single graph; the area under the resulting curve is the metric we call AUC-ROC.


AUC-ROC can be easily computed in Python using scikit-learn. The metric can be applied to different machine learning models to explore the differences between their scores. Here the same is demonstrated on two models, namely logistic regression and Gaussian naive Bayes.

Just as discussed above, the metric can be computed with a few lines of Python.
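A minimal sketch of that comparison, assuming a synthetic dataset from make_classification in place of the article's data:

from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB

# Synthetic binary-classification data standing in for a real dataset
X, y = make_classification(n_samples=1000, random_state=1)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.5, random_state=2)

for model in (LogisticRegression(max_iter=1000), GaussianNB()):
    model.fit(X_train, y_train)
    scores = model.predict_proba(X_test)[:, 1]  # probability of the positive class
    print(type(model).__name__, "ROC AUC:", roc_auc_score(y_test, scores))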


An AUC-ROC plot can also be created to gain a deeper understanding, drawing the curves for both logistic regression and Gaussian naive Bayes.
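A sketch of the plotting step, reusing the same synthetic split, might look like this:

import matplotlib.pyplot as plt
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_curve
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB

X, y = make_classification(n_samples=1000, random_state=1)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.5, random_state=2)

for model in (LogisticRegression(max_iter=1000), GaussianNB()):
    model.fit(X_train, y_train)
    fpr, tpr, _ = roc_curve(y_test, model.predict_proba(X_test)[:, 1])
    plt.plot(fpr, tpr, label=type(model).__name__)

plt.plot([0, 1], [0, 1], linestyle="--", label="No-skill baseline")  # diagonal: random guessing
plt.xlabel("False Positive Rate")
plt.ylabel("True Positive Rate")
plt.legend()
plt.show()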

The results may vary given the stochastic nature of the algorithms, the evaluation procedure used, or differences in numerical precision.

The AUC-ROC is an essential technique for determining and evaluating the performance of a classification model. Performing this evaluation increases the value and correctness of a model and, in turn, helps improve its accuracy. The method summarizes the actual trade-off between the true positive rate and the false positive rate for a predictive model across different probability thresholds, which is an important aspect of classification problems.

In this article, we understood what a performance metric actually is and explored a classification metric known as the AUC-ROC curve. We determined why it should be used and how it can be computed in Python through a simple example. I would like to encourage the reader to explore the topic further, as it is an important aspect of creating a classification model.

Happy Learning!

Understanding ROC and AUC

An introduction to ROC analysis

More here:
Understanding the AUC-ROC Curve in Machine Learning Classification - Analytics India Magazine


Machine learning tool 99% accurate at spotting early signs of Alzheimer's in the lab – ZME Science

Researchers at Kaunas universities in Lithuania have developed an algorithm that can predict the risk of someone developing Alzheimer's disease from brain images with over 99% accuracy.

Alzheimer's is the world's leading cause of dementia, according to the World Health Organization, causing or contributing to an estimated 70% of cases. As living standards improve and the average age of global populations increases, it is very likely that the number of dementia cases will grow greatly in the future, as the condition is highly correlated with age.

However, since the early stages of dementia have almost no clear, accepted symptoms, the condition is almost always identified in its later stages, where intervention options are limited. The team from Kaunas hopes that their work will help protect people from dementia by allowing doctors to identify those at risk much earlier.

"Medical professionals all over the world attempt to raise awareness of an early Alzheimer's diagnosis, which provides the affected with a better chance of benefiting from treatment. This was one of the most important issues for choosing a topic for Modupe Odusami, a Ph.D. student from Nigeria," says Rytis Maskelinas, a researcher at the Department of Multimedia Engineering, Faculty of Informatics, Kaunas University of Technology (KTU), and Odusami's Ph.D. supervisor.

One possible early sign of Alzheimer's is mild cognitive impairment (MCI), a middle ground between the decline we could reasonably expect to see naturally as we age, and dementia. Previous research has shown that functional magnetic resonance imaging (fMRI) can identify areas of the brain where MCI is ongoing, although not all cases can be detected this way. At the same time, finding physical features associated with MCI in the brain doesn't necessarily prove illness; it is better read as a strong indicator that something is not working well.

While it is possible to detect early-onset Alzheimer's this way, the authors explain that manually identifying MCI in these images is extremely time-consuming and requires highly specific knowledge, meaning any implementation would be prohibitively expensive and could handle only a tiny number of cases.

"Modern signal processing allows delegating the image processing to the machine, which can complete it faster and accurately enough. Of course, we don't dare to suggest that a medical professional should ever rely on any algorithm one-hundred-percent. Think of a machine as a robot capable of doing the most tedious task of sorting the data and searching for features. In this scenario, after the computer algorithm selects potentially affected cases, the specialist can look into them more closely, and at the end, everybody benefits as the diagnosis and the treatment reaches the patient much faster," says Maskelinas, who supervised the team working on the model.

The model was trained on fMRI images from 138 subjects in the Alzheimer's Disease Neuroimaging Initiative fMRI dataset. It was asked to separate these images into six categories, ranging across the spectrum from healthy through to full-onset Alzheimer's. Several tens of thousands of images were selected for training and validation purposes. The authors report that it was able to correctly identify MCI features in this dataset, achieving accuracies between 99.95% and 99.99% for different subsets of the data.

While this is not the first automated system meant to identify early onset of Alzheimers from this type of data, the accuracy of this system is nothing short of impressive. The team cautions that such high numbers are not indicators of true real-life performance, but the results are still encouraging, and they are working to improve their algorithm with more data.

Their end goal is to turn this algorithm into portable, easy-to-use software, perhaps even an app.

"Technologies can make medicine more accessible and cheaper. Although they will never (or at least not soon) truly replace the medical professional, technologies can encourage seeking timely diagnosis and help," says Maskelinas.

The paper "Analysis of Features of Alzheimer's Disease: Detection of Early Stage from Functional Brain Changes in Magnetic Resonance Images Using a Finetuned ResNet18 Network" has been published in the journal Diagnostics.

Go here to see the original:
Machine learning tool 99% accurate at spotting early signs of Alzheimers in the lab - ZME Science


Neonates with a low Apgar score after induction of labor | RMHP – Dove Medical Press

Background

Induction of labor (IOL) is the artificial stimulation of uterine contractions during pregnancy, prior to the onset of labor, in order to promote a vaginal birth.1 Recent advances in obstetric and fetal monitoring techniques have resulted in the majority of induced pregnancies having favorable outcomes; however, adverse health outcomes resulting in low Apgar scores in neonates continue to occur.2

The Apgar score, developed by Virginia Apgar, is a test administered to newborns shortly after birth. It analyzes the heart rate, muscle tone, and other vital signs of the baby to determine whether extra medical care or emergency care is required.3 The test is usually administered twice: once at 1 minute after birth and again at 5 minutes.4 Apgar scores obtained 5 minutes after birth have become widely used in the prediction of neonatal outcomes such as asphyxia, hypoxic-ischemic encephalopathy, and cerebral palsy.5 Additionally, recent research has established that Apgar values <7 at five minutes after birth are associated with impaired cognitive function, neurologic disability, and even subtle cognitive impairment as determined by scholastic achievement at the age of 16.6 Perinatal morbidity and death can be decreased by identifying and managing high-risk newborns effectively.7 Accurate detection of low Apgar scores at 5 minutes following labor induction is hence one way to ensure optimal health and survival of the newborn.8

Several studies based on statistical learning have shown the relationship and interplay of maternal and neonatal variables associated with low Apgar scores.9,10 However, no studies to date have focused exclusively on modeling neonatal Apgar scores following IOL intervention. As machine learning is applied to increasingly sensitive tasks and increasingly noisy data, it is critical that these algorithms are validated against neonatal healthcare data.11 In addition, myriad studies have reported the potential of ensemble learning algorithms in predictive tasks.12,13 In the current study, we assessed the performance metrics of three powerful ensemble learning algorithms. Due to the skewed or imbalanced distribution of the outcome of interest, we further assessed whether the synthetic minority oversampling technique (SMOTE), Borderline-SMOTE, and random undersampling (RUS) techniques would impact the learning process of the models.

We analyzed data from the Kilimanjaro Christian Medical Centre (KCMC) birth registry for women who gave birth to singleton infants between 2000 and 2015. This facility serves a population of around 11 million people from the region and neighboring areas. The registry collects data on the mother's health prior to and during pregnancy, as well as complications and the infant's status. All induced women who delivered singleton infants vaginally during the study period and had complete birth records were eligible for this study. Women with multiple gestations or stillbirths were excluded. These exclusions were necessary to offset the effect of possible overestimation of the prevalence of low Apgar scores (Figure 1). More information about the KCMC birth registry database can be found elsewhere.14 The final sample comprised 7716 induced deliveries.

Figure 1 Schematic diagram for sample size estimation.

Abbreviations: CS, cesarean section; IOL, induction of labor.

The response variable was the Apgar score at 5 minutes (coded 0 for normal and 1 for low), which was computed using five criteria. The first criterion was the strength and regularity of the newborn's heart rate: babies with 100 beats per minute or more scored 2 points, those with fewer than 100 scored 1 point, and those with no heartbeat scored 0 points. The second criterion assessed lung maturity or breathing effort, awarding 2 points to newborns with regular breathing, 1 point to those with irregular breathing of fewer than 30 breaths per minute, and 0 points to those with no breath at all. Muscle tone and mobility make up the third component, for which active neonates received 2 points, moderately active ones received 1 point, and those who were limp received no points. The fourth criterion was skin color and oxygenation, with infants of pink color receiving 2 points, those with bluish extremities receiving 1 point, and those of completely bluish color receiving 0 points. The final component assesses reflex responses to irritating stimuli, with crying receiving 2 points, whimpering receiving 1 point, and silence receiving 0 points. The investigator then added the scores for each finding and defined a total of less than seven (<7) as a low Apgar score and seven or more (≥7) as normal.

The current study examined the predictors of low Apgar scores previously reported in the literature, such as parity, maternal age, gestational age, number of prenatal visits, induction method used, and body mass index (BMI). The gestational age at birth was calculated using the last menstrual period date and expressed in whole weeks, with deliveries of less than 37 weeks classified as preterm, those between 37 and 41 weeks as term, and those of 41 weeks or more as postterm. Additional behavioral and neonatal risk factors were also examined, including child sex, smoking and alcohol consumption during pregnancy, and history of using any form of family planning method. These factors were categorized as yes or no, with yes indicating the occurrence of these outcomes. The categories of the covariates for some factor variables were selected following a preliminary examination of the data.

Boosting algorithms have received significant attention in recent years in data science and machine learning. Boosting algorithms combine several weak models to produce a strong or more accurate model.15,16 Boosting techniques such as AdaBoost, Gradient boosting, and extreme gradient boosting (XGBoost) are all examples of ensemble learning algorithms that are often employed, particularly in data science contests.17 AdaBoost is designed to boost the performance of weak learners. The algorithm constructs an ensemble of weak learners iteratively by modifying the weights of misclassified data in each iteration. It gives equal weight to each training set sample when training the initial weak learner.18 Weights are revised for each succeeding weak learner in such a way that samples misclassified by the current weak learner receive a larger weight. Additionally, the family of boosting algorithms are said to be advantageous for resolving class imbalance problems since they provide a greater weight to the minority class with each iteration, as data from this class is frequently misclassified in other ML algorithms.19 Gradient boosting (GB) constructs an additive model incrementally and it enables optimization of arbitrary differentiable loss functions. It makes use of the gradient descent algorithm to reduce the number of errors in sequential models.20 In contrast to conventional gradient boosting, XGBoost employs its own way of tree construction, with the similarity score and gain determining the optimal node splits. So, it is a decision-tree-based ensemble method that utilizes a gradient boosting framework.21 Figure 2 shows the basic mechanism of boosting-based algorithm in modelling process.

Figure 2 Basic mechanism for boosting-based algorithms.
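A rough sketch of how the three boosting learners could be set up side by side is shown below, using a synthetic stand-in for the registry data (the class weights mimic the 9.5% prevalence; the hyperparameters are illustrative defaults, not the study's tuned values, and xgboost is a separate package from scikit-learn):

from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier, GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from xgboost import XGBClassifier  # xgboost is a separate package

# Synthetic stand-in for the registry data: roughly 9.5% positive class
X, y = make_classification(n_samples=7716, weights=[0.905], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

models = {
    "AdaBoost": AdaBoostClassifier(n_estimators=100, random_state=0),
    "Gradient boosting": GradientBoostingClassifier(n_estimators=100, random_state=0),
    "XGBoost": XGBClassifier(n_estimators=100, eval_metric="logloss", random_state=0),
}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    print(name, "test accuracy:", model.score(X_te, y_te))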

Our dataset was imbalanced in terms of class frequency, as the positive class (low Apgar score newborns) had only 733 individuals (9.5%). If one of the target classes contains a small number of occurrences in comparison to the other classes, the dataset is said to be imbalanced.22,23 Numerous ways to deal with unbalanced datasets have been presented recently.24–26 This paper applies three approaches for balancing the dataset: the synthetic minority oversampling technique (SMOTE), Borderline-SMOTE, and the random undersampling (RUS) technique. In contrast to traditional boosting, which assigns equal weight to all misclassified cases, resampling methods (SMOTE or RUS) combined with boosting algorithms (AdaBoost, Gradient boosting, XGBoost) have been shown, on several highly and somewhat imbalanced datasets, to improve prediction on the minority class.27,28

Re-sampling is a preprocessing approach that balances the distribution of an unbalanced dataset before it is passed to any classifier;29 resampling methods change the composition of a training dataset for an imbalanced classification task. SMOTE begins by randomly selecting an instance of the minority class and determining its k nearest minority-class neighbors; a synthetic instance is then formed by selecting one of the k closest neighbors at random and picking a point on the line segment joining them in feature space.30 Borderline-SMOTE begins by classifying observations belonging to the minority class: it considers a minority observation to be noise if all of its neighbors are members of the majority class, and such observations are discarded while constructing synthetic data. It then resamples entirely from the points designated as border points, those with both majority- and minority-class instances among their neighbors. Undersampling (RUS) approaches instead eliminate samples belonging to the majority class from the training dataset in order to distribute the classes more evenly; the strategy reduces the dataset by removing examples from the majority class with the goal of balancing the number of examples in each class.31 Figure 3 indicates the basic mechanism of both the RUS and SMOTE techniques, and a code sketch of the resampling step follows the figure.

Figure 3 Mechanisms of the resampling techniques used: (A) RUS, random undersampling; (B) SMOTE, synthetic minority oversampling technique.
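With the imbalanced-learn library, the resampling step itself is typically a one-liner; the following sketch (an assumed setup, not the study's exact configuration) shows all three samplers rebalancing a synthetic dataset:

from collections import Counter

from imblearn.over_sampling import SMOTE, BorderlineSMOTE
from imblearn.under_sampling import RandomUnderSampler
from sklearn.datasets import make_classification

# Synthetic imbalanced data (~9.5% minority class, as in the registry sample)
X, y = make_classification(n_samples=7716, weights=[0.905], random_state=0)
print("original:", Counter(y))

# In practice, resample only the training split, never the test set
for sampler in (SMOTE(random_state=0),
                BorderlineSMOTE(random_state=0),
                RandomUnderSampler(random_state=0)):
    X_res, y_res = sampler.fit_resample(X, y)
    print(type(sampler).__name__, Counter(y_res))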

Descriptive statistics were obtained using STATA version 14. Data preprocessing and the main analyses were performed in Python (version 3.8.0). The predictive models for low Apgar scores were generated with training and test sets using the Python scikit-learn (version 0.24.0) package for machine learning. The parameters used to assess the predictive performance of the selected ensemble machine learning algorithms are given in equations (1) through (8). The dataset was first converted to a comma-separated values (CSV) file and imported into Python. We used open-source Python libraries including scikit-learn, NumPy and pandas. The Python code used to generate the results, along with the outputs, is attached herein (Supplementary File 1).

Accuracy = (TP + TN) / (TP + TN + FP + FN) (1)

Sensitivity = TP / (TP + FN) (2)

Specificity = TN / (TN + FP) (3)

PPV = TP / (TP + FP) (4)

NPV = TN / (TN + FN) (5)

FPrate = FP / (FP + TN) (6)

F-score = (2 × PPV × Sensitivity) / (PPV + Sensitivity) (7)

AUC = (1 + Sensitivity - FPrate) / 2 (8)

where TP, FP, TN, FN, FPrate, PPV and NPV represent true positive, false positive, true negative, false negative, false-positive rate, positive predictive value and negative predictive value, respectively.
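A direct translation of these definitions into code (a minimal sketch; scikit-learn's metrics module provides most of these as well) might look like:

def confusion_metrics(tp: int, fp: int, tn: int, fn: int) -> dict:
    """Performance metrics of equations (1)-(7) from confusion-matrix counts."""
    sensitivity = tp / (tp + fn)   # eq. (2): true positive rate / recall
    specificity = tn / (tn + fp)   # eq. (3)
    ppv = tp / (tp + fp)           # eq. (4): precision
    return {
        "accuracy": (tp + tn) / (tp + tn + fp + fn),             # eq. (1)
        "sensitivity": sensitivity,
        "specificity": specificity,
        "PPV": ppv,
        "NPV": tn / (tn + fn),                                   # eq. (5)
        "FPrate": fp / (fp + tn),                                # eq. (6)
        "F-score": 2 * ppv * sensitivity / (ppv + sensitivity),  # eq. (7)
    }

# Illustrative counts only, not the study's actual results
print(confusion_metrics(tp=68, fp=4, tn=1742, fn=5))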

The sociodemographic and obstetric characteristics of the participants are summarized in Table 1. A total of 7716 singleton births were analyzed. Of these, 55% of the deliveries were from nulliparous women, while the majority (88%) of study participants were aged <35 years and about 80% of the total deliveries were at term. The proportion of neonates with low Apgar scores (<7) was found to be 9.5%.

Table 1 Demographic Information of the Study Participants (N=7716)

Prior to the use of resampling techniques, all models performed nearly identically. Of all the resampling techniques considered in the current study, Borderline-SMOTE significantly improved the performance of all the models on every metric under observation (Table 2). RUS and SMOTE exhibited little or no improvement over baseline performance in all instances of their respective ensemble models. Performance in terms of the AUC metric for AdaBoost, GB, and XGBoost is shown in Figure 4.

Table 2 Predictive Performance of Low Apgar Score Following Labor Induction Using Ensemble Learning

Figure 4 Receiver operating characteristic (ROC) curve diagrams for boosting-based ensemble classifiers comparing the performance by resampling methods.

In this paper, we trained and evaluated the performance of three ensemble-based ML algorithms on a rare event (9.5% with a <7 Apgar score versus 90.5% with a ≥7 score). We then demonstrated how resampling techniques can affect the learning process of the selected models on the imbalanced data. Kubat et al proposed a heuristic under-sampling method for balancing the data set by removing noise and redundant instances of the majority class.32 Chawla et al oversampled the minority class using SMOTE (the synthetic minority oversampling technique), which generates new synthetic examples along the line between minority examples and their chosen nearest neighbors.33

In the current study, both plain sampling techniques (SMOTE and RUS) slightly improved the sensitivity on the minority class, with the largest improvement coming from the Borderline-SMOTE technique. Improved sensitivity means the ratio of correct positive predictions, that is, neonates with a <7 Apgar score, to the total positive examples is relatively high. In other words, with the improvement shown by XGBoost following Borderline-SMOTE resampling, the model was able to correctly identify 93% of the neonates with a low Apgar score (up from a 20% baseline), missing only 7%. On the other hand, all the models performed well (specificity = 99%) in correctly identifying neonates with a normal (≥7) Apgar score without the application of resampling methods. This could be because the number of neonates with a normal Apgar score was significantly greater than those with a low Apgar score in this database (n=6983 vs n=733), making the negative class more likely to be predicted. Notable is the positive predictive value (PPV) obtained with XGBoost using the Borderline-SMOTE resampling method, which indicates that 94% of neonates predicted to have a low Apgar score actually had one.

Numerous studies have demonstrated the critical importance of maximizing a model's sensitivity as well as its PPV, particularly when dealing with class-imbalanced datasets.34 Precision and sensitivity make it possible and desirable to evaluate a classifier's performance on the minority class through another metric, the F-score.35 The F-score is high when both sensitivity and precision are high.36 Again, the best F-score was obtained in all models when Borderline-SMOTE was used, and the best overall F-score was reached by Borderline-SMOTE applied to the XGBoost classifier. In terms of AUROC, Borderline-SMOTE demonstrated a considerable improvement in the ensemble learners' learning process; neither SMOTE nor RUS could improve the learning process on this occasion.

Numerous studies have identified reasons for the ineffectiveness of these resampling techniques, the most frequently cited being class overlap in feature space, which makes it harder for the classifier to learn the decision boundary. Studies have established that, if the classes overlap given the variables in the dataset, SMOTE will generate synthetic points that affect separability.37,38 In addition, studies have pointed out that Tomek links, pairs of opposing instances that are very close together, could be generated alongside other points prior to model building, thereby harming the classification.39,40

Researchers working on artificial intelligence, particularly on computer-assisted decision-making in healthcare, as well as developers interested in building predictive models for neonatal decision support systems, can draw on these results for clues about the efficiency of ensemble learners when the data is imbalanced, and about which resampling techniques are likely to improve such predictions, and hence make informed decisions. In short, based on historical registry data, these model predictions enable healthcare informaticians to make highly accurate guesses about the likely outcomes of the intervention.

As we examined data from a single tertiary institution, our findings may have good internal validity but limited generalizability or external validity. The study might show different results for datasets collected from other tertiary hospitals in northern Tanzania; thus, caution should be exercised when generalizing these specific findings. Furthermore, because we only looked at AUROC, F-score, precision, NPV, PPV, sensitivity and specificity as performance indicators for the boosting-based algorithms, our findings may be rather limited. Future research may shed light on other performance metrics, particularly those suited to unbalanced data, such as informedness, markedness, and the Matthews correlation coefficient (MCC). Additionally, the current study did not conduct variable selection or feature engineering, nor did it address confounding variables, which could have limited classifier performance by increasing the likelihood of model overfitting. It would have been interesting to investigate whether the impact of feature engineering and confounding effects would improve results for both the SMOTE and RUS methods.

We encourage further research into other strategies for improving the learning process in this neonatal outcome, such as the ADASYN (ADAptive SYNthetic) sampling approach and the use of other SMOTE variants such as Safe-Level-SMOTE, SVM-SMOTE and KMeans-SMOTE. The combination of hybrid methods, that is, executing SMOTE and RUS methods concurrently on these ensemble methods, is also worth trying.

Predicting neonatal low Apgar scores after labor induction using this database may be more effective and promising when borderline-SMOTE is executed along with the ensemble methods. Future research may focus on testing additional resampling techniques mentioned earlier, performing feature engineering or variable selection, and optimizing further the ensemble learning hyperparameters.

This study was approved by the Kilimanjaro Christian Medical University College (KCMU-College) research ethics committee (reference number 985). Because the interview was conducted shortly after the mother had given birth, consent was only obtained verbally before the interview and enrollment. Trained nurses provided the information to the participants about the birth registry project and the information that they would need from them. However, following the consent, the woman could still choose whether or not to respond to specific questions. The KCMC hospital provided administrative clearance to access the data, and the Kilimanjaro Christian Medical College Research Ethics and Review Committee (KCMU-CRERC) approved all consent procedures. The database used in the current study contained no personally identifiable information in order to protect the study participants confidentiality and privacy.

The Birth Registry, the Obstetrics & Gynecology Department, and Epidemiology & Applied Biostatistics Department of the Kilimanjaro Christian Medical University College provided invaluable assistance during this investigation. Thanks to the KCMC birth registry study participants and the Norwegian birth registry for supplying the limited dataset utilized in this investigation.

This work was supported by the Research on CDC-Hospital-Community Trinity Coordinated Prevention and Control System for Major Infectious Diseases, Zhengzhou University 2020 Key Project of Discipline Construction [XKZDQY202007], 2021 Postgraduate Education Reform and Quality Improvement Project of Henan Province [YJS2021KC07], and National Key R&D Program of China [2018YFC0114501].

The authors declare that they have no competing interest.

1. Rayburn WF, Zhang J. Rising rates of labor induction: present concerns and future strategies. Obstet Gynecol. 2002;100(1):164–167.

2. Grobman WA, Gilbert S, Landon MB, et al. Outcomes of induction of labor after one prior cesarean. Obstet Gynecol. 2007;109(2):262–269. doi:10.1097/01.AOG.0000254169.49346.e9

3. Casey BM, McIntire DD, Leveno KJ. The continuing value of the Apgar score for the assessment of newborn infants. New Eng J Med. 2001;344(7):467–471. doi:10.1056/NEJM200102153440701

4. Finster M, Wood M, Raja SN. The Apgar score has survived the test of time. J Am Soc Anesthesiol. 2005;102(4):855–857.

5. Leinonen E, Gissler M, Haataja L, et al. Low Apgar scores at both one and five minutes are associated with long-term neurological morbidity. Acta Paediatrica. 2018;107(6):942–951. doi:10.1111/apa.14234

6. Ehrenstein V, Pedersen L, Grijota M, Nielsen GL, Rothman KJ, Sørensen HT. Association of Apgar score at five minutes with long-term neurologic disability and cognitive function in a prevalence study of Danish conscripts. BMC Pregnancy Childbirth. 2009;9(1):1–7. doi:10.1186/1471-2393-9-14

7. Manning FA, Harman CR, Morrison I, Menticoglou SM, Lange IR, Johnson JM. Fetal assessment based on fetal biophysical profile scoring: IV. An analysis of perinatal morbidity and mortality. Am J Obstet Gynecol. 1990;162(3):703–709. doi:10.1016/0002-9378(90)90990-O

8. Yeshaneh A, Kassa A, Kassa ZY, et al. The determinants of 5th minute low Apgar score among newborns who delivered at public hospitals in Hawassa City, South Ethiopia. BMC Pediatr. 2021;21:266. doi:10.1186/s12887-021-02745-6

9. Lai S, Flatley C, Kumar S. Perinatal risk factors for low and moderate five-minute Apgar scores at term. Eur J Obstet Gynecol Reprod Biol. 2017;210:251–256. doi:10.1016/j.ejogrb.2017.01.008

10. Rogers JF, Graves WL. Risk factors associated with low Apgar scores in a low-income population. Paediatr Perinat Epidemiol. 1993;7(2):205–216. doi:10.1111/j.1365-3016.1993.tb00394.x

11. Ahmad MA, Eckert C, Teredesai A. Interpretable machine learning in healthcare. In: Proceedings of the 2018 ACM International Conference on Bioinformatics, Computational Biology, and Health Informatics; August 15, 2018. 559–560.

12. Mung PS, Phyu S. Effective analytics on healthcare big data using ensemble learning. In: 2020 IEEE Conference on Computer Applications (ICCA); February 27, 2020; IEEE. 1–4.

13. Liu N, Li X, Qi E, Xu M, Li L, Gao B. A novel ensemble learning paradigm for medical diagnosis with imbalanced data. IEEE Access. 2020;8:171263–171280. doi:10.1109/ACCESS.2020.3014362

14. Bergsjø P, Mlay J, Lie RT, Lie-Nielsen E, Shao JF. A medical birth registry at Kilimanjaro Christian Medical Centre. East Afr J Public Health. 2007;4(1):1–4.

15. Robinson JW. Regression tree boosting to adjust health care cost predictions for diagnostic mix. Health Serv Res. 2008;43(2):755–772. doi:10.1111/j.1475-6773.2007.00761.x

16. Park Y, Ho J. Tackling overfitting in boosting for noisy healthcare data. IEEE Trans Knowl Data Eng. December 16, 2019.

17. Joshi MV, Kumar V, Agarwal RC. Evaluating boosting algorithms to classify rare classes: comparison and improvements. In: Proceedings 2001 IEEE International Conference on Data Mining; November 29, 2001; IEEE. 257–264.

18. Ying C, Qi-Guang M, Jia-Chen L, Lin G. Advance and prospects of AdaBoost algorithm. Acta Autom Sin. 2013;39(6):745–758. doi:10.1016/S1874-1029(13)60052-X

19. Lee W, Jun CH, Lee JS. Instance categorization by support vector machines to adjust weights in AdaBoost for imbalanced data classification. Inf Sci (Ny). 2017;381:92–103. doi:10.1016/j.ins.2016.11.014

20. Lusa L. Gradient boosting for high-dimensional prediction of rare events. Comput Stat Data Anal. 2017;113:19–37. doi:10.1016/j.csda.2016.07.016

21. Wang H, Liu C, Deng L. Enhanced prediction of hot spots at protein-protein interfaces using extreme gradient boosting. Sci Rep. 2018;8(1):1–13.

22. Zhao Y, Wong ZS, Tsui KL. A framework of rebalancing imbalanced healthcare data for rare events classification: a case of look-alike sound-alike mix-up incident detection. J Healthc Eng. 2018;2018. doi:10.1155/2018/6275435

23. Li J, Liu LS, Fong S, et al. Adaptive swarm balancing algorithms for rare-event prediction in imbalanced healthcare data. PLoS One. 2017;12(7):e0180830. doi:10.1371/journal.pone.0180830

24. Zhu B, Baesens B, Vanden Broucke SK. An empirical comparison of techniques for the class imbalance problem in churn prediction. Inf Sci. 2017;408:84–99. doi:10.1016/j.ins.2017.04.015

25. Gosain A, Sardana S. Handling class imbalance problem using oversampling techniques: a review. In: 2017 International Conference on Advances in Computing, Communications and Informatics (ICACCI); September 13, 2017; IEEE. 79–85.

26. Amin A, Anwar S, Adnan A, et al. Comparing oversampling techniques to handle the class imbalance problem: a customer churn prediction case study. IEEE Access. 2016;4:7940–7957. doi:10.1109/ACCESS.2016.2619719

27. Elreedy D, Atiya AF. A comprehensive analysis of synthetic minority oversampling technique (SMOTE) for handling class imbalance. Inf Sci. 2019;505:32–64. doi:10.1016/j.ins.2019.07.070

28. Prusa J, Khoshgoftaar TM, Dittman DJ, Napolitano A. Using random undersampling to alleviate class imbalance on tweet sentiment data. In: 2015 IEEE International Conference on Information Reuse and Integration; August 13, 2015; IEEE. 197–202.

29. Chernick MR. Resampling methods. Wiley Interdiscip Rev Data Min Knowl Discov. 2012;2(3):255–262.

30. Cheng K, Zhang C, Yu H, Yang X, Zou H, Gao S. Grouped SMOTE with noise filtering mechanism for classifying imbalanced data. IEEE Access. 2019;7:170668–170681. doi:10.1109/ACCESS.2019.2955086

31. Triguero I, Galar M, Merino D, Maillo J, Bustince H, Herrera F. Evolutionary undersampling for extremely imbalanced big data classification under apache spark. In: 2016 IEEE Congress on Evolutionary Computation (CEC); July 24, 2016; IEEE. 640–647.

32. Kubat M, Matwin S. Addressing the curse of imbalanced training sets: one-sided selection. In: ICML. Vol. 97. Citeseer; 1997:179–186.

33. Chawla NV, Bowyer KW, Hall LO, Kegelmeyer WP. SMOTE: synthetic minority over-sampling technique. J Artif Intell Res. 2002;16:321–357. doi:10.1613/jair.953

34. Sokolova M, Japkowicz N, Szpakowicz S. Beyond accuracy, F-score and ROC: a family of discriminant measures for performance evaluation. In: Australasian Joint Conference on Artificial Intelligence; December 4, 2006; Springer, Berlin, Heidelberg. 1015–1021.

35. Goutte C, Gaussier E. A probabilistic interpretation of precision, recall and F-score, with implication for evaluation. In: European Conference on Information Retrieval; March 21, 2005; Springer, Berlin, Heidelberg. 345–359.

36. Guns R, Lioma C, Larsen B. The tipping point: F-score as a function of the number of retrieved items. Inf Process Manag. 2012;48(6):1171–1180. doi:10.1016/j.ipm.2012.02.009

37. Alahmari F. A comparison of resampling techniques for medical data using machine learning. J Inf Knowl Manag. 2020;19:1–13.

38. Vuttipittayamongkol P, Elyan E, Petrovski A. On the class overlap problem in imbalanced data classification. Knowledge-Based Systems. 2021;212. Available from: http://www.sciencedirect.com/science/article/pii/S0950705120307607. Accessed August 31, 2021.

39. Zeng M, Zou B, Wei F, Liu X, Wang L. Effective prediction of three common diseases by combining SMOTE with Tomek links technique for imbalanced medical data. In: 2016 IEEE International Conference of Online Analysis and Computing Science (ICOACS); May 28, 2016; IEEE. 225–228.

40. Ning Q, Zhao X, Ma Z. A novel method for identification of glutarylation sites combining Borderline-SMOTE with Tomek links technique in imbalanced data. IEEE/ACM Trans Comput Biol Bioinform. July 8, 2021.

Read more here:
Neonates with a low Apgar score after induction of labor | RMHP - Dove Medical Press


CryptoCodex: Crypto Price Extreme Greed As Bitcoin Gears Up For A Big Week And An NFT Bombshell – Forbes

The following is an excerpt from the daily CryptoCodex email newsletter. Sign up now for free here.

The cryptocurrency market has stormed into the new month, with expectations high that bitcoin could break its historically poor September performance. A look back at monthly price data since 2013 shows bitcoin has closed September in the green just twice in eight years (in 2015 and again in 2016), and even then with only small gains. Cointelegraph has a full write-up. With bitcoin looking healthy today, many are anticipating a pump thanks to the attention bitcoin's set to see from El Salvador's adoption of the cryptocurrency tomorrow.

The crypto market continues to climb despite sentiment switching to "extreme greed."

Elsewhere, cryptocurrencies across the board are continuing to climb. Chainlink, a 2020 crypto darling, has led the major market higher with a 15% rise over the last 24 hours, adding to gains of almost 40% this past week. Ethereum is holding onto its huge more-than-20% gains over the last seven days but is flat on this time yesterday. Among the crypto top ten, Ripple's XRP payment token is leading the pack, up 10% over the last 24 hours. Dogecoin is also outperforming other tokens with a 5% gain as an upgrade continues to improve confidence.

Now read this: Cryptocurrencies: developing countries provide fertile ground

Solana, one of the many ethereum rivals jostling for attention, has now climbed to the seventh spot among the world's top 10 largest cryptocurrencies after its price tripled in about three weeks, giving it a value of more than $42 billion, according to CoinMarketCap. The market could be getting dangerously hot, however, with the Crypto Fear & Greed Index, a measure of market sentiment, showing traders are back in the "extreme greed" mindset. With a score of 79/100, the gauge is just 16 points away from its historical top zone, an area that has sparked corrective moves in the past.

The New York Times' big weekend of crypto coverage:

- Crypto's rapid move into banking elicits alarm in Washington

- Crypto banking and decentralized finance, explained

- Bitcoin uses more electricity than many countries. How is that possible?

FTX CEO and crypto billionaire Sam Bankman-Fried

NFTs for all: FTX, the derivatives-focused bitcoin and crypto exchange that's seen its volumes explode this year, is ramping up its support for NFTs, the non-fungible tokens that digitize all manner of different assets and have become a collecting craze. This morning, FTX chief executive Sam Bankman-Fried tweeted that the exchange will offer the ability to mint NFTs directly on its platform.

Minted: FTX users, including on the U.S.-based FTX.US, will be able to create their own artwork, mint it as NFTs directly on FTX, and then sell it on the exchange's marketplace, with FTX hoping to win market share from dedicated NFT platforms such as OpenSea. Other crypto exchanges, including Binance and OKEx, also offer NFT marketplaces to varying degrees and on a mixture of blockchains. FTX NFTs will be cross-chain, across the ethereum and solana blockchains, according to Bankman-Fried. Ethereum remains by far the most popular NFT blockchain despite its eye-watering fees. Bankman-Fried and FTX have close ties to ethereum rival solana.

The big question: Is bitcoin losing its position as the crypto market's leader?

Crypto craze 2.0: The NFT market has roared back in recent weeks, defying suggestions the bottom had fallen out of the market after Beeple's $69 million sale in March. Last week, one day's sales volume of the CryptoPunks NFT collection alone touched $150 million, according to data from NFT tracking website CryptoSlam. Last month, Visa set the market alight when it announced it had bought CryptoPunk #7610 for 49.50 ethereum, around $150,000 at the time.

Testing patience: Bankman-Fried's scribble of the word "test" with the caption "I'm testing out a DIY NFT listing on FTX.US" has already reached a price of $1,100 with 19 bids. But FTX isn't giving out NFTs for free: it will charge 5% to the buyer and to the seller per sale, a 10% fee in total, as noted by The Block.

Now read this: Meet the self-hosters, taking back the internet one server at a time

Don't miss: A father and son who help clients find forgotten crypto passwords estimate billions of dollars worth of lost bitcoin is recoverable

Continue reading here:
CryptoCodex: Crypto Price Extreme Greed As Bitcoin Gears Up For A Big Week And An NFT Bombshell - Forbes


What August’s record breaking month for crypto flows means for bitcoin – Yahoo Finance

Over the past month, the crypto market has looked like a rising tide for all coins, but data suggest growth across the asset class hasn't been equal.

Last week, Bitcoin (BTC-USD) breached $50,000 for the second time in two weeks, extending a rally that put a grim sell-off that started in May further in the rear-view mirror. While notable for its volatility, gains in the largest cryptocurrency may have gotten lost in the swell of rising prices across the entire asset class.

With a majority of decentralized finance and non-fungible token (NFT) trading happening on the Ethereum (ETH-USD) blockchain, the second largest cryptocurrency by market capitalization rose by a third from $2,700 to $3,900, a growth rate 17 percent higher than BTC.

And other blockchain-based currencies have climbed too: the third highest valued cryptocurrency, Cardano (ADA-USD), has more than doubled, while a newer one, Solana (SOL-USD), has more than tripled in value over the past month. ADA and SOL have continued to notch almost daily all-time highs for the past two weeks.

Bitcoin IRA, an investment platform that helps retail investors gain crypto exposure in their retirement accounts, saw record-breaking inflows of new accounts over the previous month.

We broke our record in the first quarter right before Bitcoin ran from $45,000 to $65,000, the companys Chief Operating Officer, Chris Kline told Yahoo Finance. Were seeing the same pattern happen again. So this past month [August] felt a lot like April, but about twice as big.

Currently, Bitcoin IRA has close to 120,000 client accounts, with approximately $2 billion in assets on the platform. Although platform's heft doesnt move the market, the swell of retail investors opening new accounts especially for tax-advantaged IRA accounts is an indicator of how curious investors are as they seek more traditional ways to participate in this market.

By rough approximation across all accounts, Kline said his clients hold 43% of their portfolios in bitcoin, 27% in ethereum and the remaining 30% in a mix of other cryptocurrencies. The company offers 10 different cryptocurrencies in total and plans to more than double its crypto offerings in the fall.

Back in early May, when Ethereum started rising to its all-time high above $4,000, the company saw a large influx of swaps, or pairings, from BTC to ETH. It signaled that many of Kline's clients were shifting their portfolios from BTC to ETH.

In recent weeks, however? "Not so much this time," Kline told Yahoo Finance.

To be sure, there could be a lag. "Retail buyers are looking for percentage growth. While bitcoin reigns supreme, it has relatively stable growth, while there is exponential growth happening on ethereum. That's what really gets their attention," Kline explained.

Bitcoin's August peak at $50K served as a key technical and psychological level, according to Will Clemente, an analyst at crypto mining and hardware broker Blockware Solutions.

Clemente told Yahoo Finance that for the last seven days, bitcoin's price has remained in what he called a "volatility squeeze," the idea being that buyers and sellers have balanced each other out, reducing the asset's typically high volatility.

But the analyst suggested that could be about to change. A volatility squeeze for bitcoin usually takes a week to two weeks to resolve.

"That's not telling you the direction, it's just telling you that there's going to be a big move soon," said Clemente.
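Since "volatility squeeze" is doing real work in this argument, here is a minimal Python sketch of one common way to operationalize the idea. Clemente's exact definition isn't public, so the window, lookback and threshold below are all assumptions:

```python
import pandas as pd

def volatility_squeeze(prices: pd.Series, window: int = 30,
                       lookback: int = 365, pct: float = 0.10) -> bool:
    """Flag a 'volatility squeeze': realized volatility sitting in the
    bottom decile of its one-year range. Hypothetical parameters; one
    of several common ways to formalize the concept."""
    returns = prices.pct_change()
    vol = returns.rolling(window).std()       # rolling realized volatility
    threshold = vol.tail(lookback).quantile(pct)
    return vol.iloc[-1] <= threshold
```

A squeeze flagged this way says nothing about direction, only that compressed volatility tends to resolve in a large move, which matches Clemente's framing above.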

Analyzing price action alone remains the dominant, if contested, method for anticipating buyers and sellers of a cryptocurrency. But Clemente's specialization, on-chain analysis, has quickly become a crucial toolkit of metrics for investors hoping to glean some clarity about the nascent asset class.

Similar to technical analysis, the on-chain technique tries to forecast future moves based on supply and demand. However, it relies on a far larger quantity of data that is only available for assets operating on public blockchains.

While Clemente cannot predict Bitcoin's next price move, he pointed to a handful of supply shock ratios, such as the movement of coin supply from speculators to long-term holders and the exchange supply ratio, which shows the number of bitcoins available to buy on exchanges relative to the overall circulating supply.

Each of these metrics has continued to rise since Bitcoin crested above $50,000, according to Clemente. Historically, supply shocks begin before Bitcoin's price moves upward.
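For readers curious about the mechanics, here is a minimal Python sketch of the two metrics described above. The figures in the example are illustrative, not live data, and the RSI is the simple-moving-average variant rather than Wilder's smoothed version:

```python
import pandas as pd

def exchange_supply_ratio(exchange_balance_btc: float,
                          circulating_supply_btc: float) -> float:
    """Share of circulating BTC sitting on exchanges, i.e. available to buy.
    A falling ratio is read as a brewing supply shock: fewer coins for
    sale relative to total supply."""
    return exchange_balance_btc / circulating_supply_btc

def rsi(series: pd.Series, period: int = 14) -> pd.Series:
    """Relative Strength Index (simple-moving-average variant), applied
    here to illiquid supply rather than price, as in the chart below."""
    delta = series.diff()
    avg_gain = delta.clip(lower=0).rolling(period).mean()
    avg_loss = (-delta.clip(upper=0)).rolling(period).mean()
    return 100 - 100 / (1 + avg_gain / avg_loss)

# Illustrative figures only: ~2.45M BTC on exchanges out of ~18.8M
# circulating gives a ratio of roughly 13%.
print(round(exchange_supply_ratio(2.45e6, 18.8e6), 3))
```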

[Chart: Bitcoin Illiquid Supply (RSI)]

David Hollerith covers cryptocurrency for Yahoo Finance. Follow him @dshollers.

See the original post:
What August's record breaking month for crypto flows means for bitcoin - Yahoo Finance

Read More..

Yes, Bitcoin Is A Brand. Here's What That Means. – Bitcoin Magazine

Just as the Apple brand is to tech, McDonald's is to burgers, Nike is to sneakers and Coke is to cola, so is the Bitcoin brand to cryptocurrency.

"Whoa, hold the phone!" you say. "Bitcoin isn't a brand. It's a currency. Currencies aren't brands!"

Well, that's a traditional way of looking at it. And Bitcoin is anything but traditional.

Think of it this way: blockchain is a technology. Coins are an asset on the blockchain. Bitcoin is a brand that brings its unique selling proposition to blockchain.

To be clear, Bitcoin is not an ordinary brand. It's what I call a User-Generated Brand (UGB). Because Bitcoin lacks a centralized brand owner or chief marketing officer, it is molded by a large ecosystem of foundation members, technologists, investors, miners, commentators, thought leaders, innovators, journalists and more. Nevertheless, as a UGB, the cumulative effect of its brand assets stands for something.

Still skeptical? Consider this:

Moreover, what makes the Bitcoin brand so intriguing are some other, less cut-and-dried factors:

Okay, so you're on board: Bitcoin is a brand, a User-Generated Brand. The next question you may be asking is: So what? Consider these:

The Contract: A great brand, at the end of the day, is a promise (a contract) that the sum total of what it stands for and how it behaves will provide confidence and comfort that if you put your faith in it (by purchasing, investing, advocating, etc.), you'll be rewarded. For all the reasons stated above, Bitcoin is in an enviable pole position, particularly as it relates to wooing institutional investors and their allocation committees who, once fully bought in, would represent a true tipping point in the race to bitcoin adoption. To propel this forward, as a UGB, it is incumbent on the community's most vocal believers to talk not just amongst themselves (as they are apt to do) but to the masses, in ways that will elucidate that promise.

The Community: A great brand feeds off the passion of its most active users. In fact, passion is the fuel that ignites any brand's fire. So while Bitcoin, as a UGB, may not have a chief marketing officer, it does have an army of de facto marketing officers (many of whom read this magazine). Collectively, they believe that Bitcoin and its underlying technology are a true force for good in democratizing finance, and they have many forums for sharing that point of view. For them, it's important to spread the word: Bitcoin's primary purpose is not about making money, it's about making a change. And, as with any movement, the rubber meets the road when its story can be told in calm and simple terms, using analogies that everyone understands.

The Coattails Versus The Contrarian: Project founders, foundations and decentralized autonomous organizations of every size and shape have a decision to make: regardless of their technical or functional relationship, do they ride Bitcoin's coattails or cast it off as a fine but flawed product that is ripe for disruption? It will vary from case to case, but contrarians should be forewarned: brands with the community, contract, passion and purpose that Bitcoin has are formidable. While a small group of insurgents may rejoice at the thought of dethroning the king, most of these contrarians' efforts will be rejected entirely by the Bitcoin community, as previous hard forks have proven.

The Conventional Wisdom: Dominant brands are typically expected to act in conventional ways. In fact, it can be argued that it is the shackles of category conventions that box them in, allowing challengers to erode or overtake their position. So is Bitcoin a conventional player in an unconventional category? Hardly. Conventional behaviors come with time and with a sense of dominance that is considered an impenetrable moat; this leads to a risk-averse, defensive posture and possible stagnation. But Bitcoin is still in its infancy and sits at the center of a tsunami of innovation. To the UGB community that is pushing boundaries and challenging the status quo, I say, "Rock on!"

To summarize: to some, the very thought of traditional, centralized marketing in the Bitcoin space is antithetical to the category. As with any radical change in conventions and norms, this is to be understood. But even as a User-Generated Brand, the marketing of Bitcoin is influential in ways that will certainly evolve over time. As advertising veteran Regis McKenna famously said, "Marketing is everything, and everything is marketing."

Today, metrics such as Reddit subscribers, social comments per hour, Twitter followers, website traffic and community size dominate the discussion of brand health (and are closely watched by investors). But there will certainly be other, perhaps more influential metrics as the roles, bullhorns and motivations of key voices in the ecosystem, decentralized and centralized alike, evolve.

As it does, you can be sure that Bitcoin will be at the forefront of this evolution. Because if it looks like a brand, acts like a brand and works like a brand, then it is a brand. That it's a UGB simply means that you can't expect the same rules that governed branding and marketing over the past 25 years to hold.

This is a guest post by Rich Feldman. Opinions expressed are entirely their own and do not necessarily reflect those of BTC Inc or Bitcoin Magazine.

Read more here:
Yes, Bitcoin Is A Brand. Heres What That Means. - Bitcoin Magazine

Read More..

Bitcoin miners and oil and gas execs mingled at a secretive meetup in Houston – here's what they talked about – CNBC

Bitcoin enthusiasts, miners, and oil & gas execs gathered at a meetup in Houston to talk about the future of bitcoin mining.

HOUSTON – On a residential back street of Houston, in a 150,000-square-foot warehouse safeguarding high-end vintage cars, 200 oil and gas execs and bitcoin miners mingled, drank beer and talked shop on a recent Wednesday night in August.

These two groups of people may seem as though they are at opposite ends of the professional and social spectrums, but their worlds are colliding fast. As it turns out, the industries make for compatible bedfellows.

Just take Hayden Griffin Haby III, an oilman turned bitcoiner. The Texas native and father of three has spent 14 years in oil and gas, and he epitomizes what this monthly meetup is all about.

Haby started as a surface landman where he brokered land contracts, and later, ran his own oil company. But for the last nine months, he's exclusively been in the business of mining bitcoin.

As Haby describes it, he was "orange pilled" in November 2020, a term used to describe the process of convincing a fiat-minded person that they are missing out by not investing in bitcoin. A month later, he co-founded Limpia Creek Technologies, which powers bitcoin mining rigs with flared, vented and stranded natural gas assets.

"When I heard that you could make this much money per MCF (a metric used to measure natural gas), instead of just burning it up into the atmosphere, thanks to the whole 'bitcoin mining thing,' I couldn't look away," Haby said. "You can't unsee that."

When China kicked out all its crypto miners this spring, an exodus Haby calls the "Chexit," it poured kerosene on the flames. "This is an opportunity we didn't think was coming," he said.

Haby tells CNBC they are already seeing demand rushing to Texas, and he is convinced that the state is poised to capture most of the Chinese hashrate looking for a new home on friendlier shores.

Bitcoin miners care most about finding cheap sources of electricity, so Texas, with its crypto-friendly politicians, deregulated power grid and, crucially, abundance of inexpensive power sources, is a virtually perfect fit. The union becomes even more harmonious when miners connect their rigs to otherwise stranded energy, like the natural gas going to waste on oil fields across Texas.

"This is Texas, boys. We got what you need, so come on down," said Haby. "We are sitting on the energy capital of the world."

"I think Kevin Costner said it best: 'If you build it, they will come,'" said Haby.

An underground meetup of bitcoin miners and oil & gas execs was held at a 150,000 square-foot warehouse safeguarding high-end vintage cars.

Parker Lewis is one of Texas' de facto bitcoin ambassadors. Everyone knows him. Everyone likes him. And virtually any bitcoiner you ask refers to him as the future mayor of Austin.

Lewis is an executive at Unchained Capital, a bitcoin-native financial services firm. He isn't in politics yet, but he is hustling across the state of Texas to spread the good word on the world's biggest cryptocurrency. In May, the Houston Bitcoin Meetup consisted of only 20 people in a fluorescent-lit conference room in an office. Then Lewis decided to get involved.

"I just knew Houston would be prime to explode because of the energy connection to mining if we organized a good meetup," Lewis told CNBC. "It's also key to Texas being the bitcoin capital of the world."

His efforts are paying off. Wednesday's meetup drew more than 200 attendees from across the state of Texas, as well as California, Colorado, Louisiana, Pennsylvania, New York, Australia and the UK.

The buzz was electric on Wednesday night. You had to shout to be heard. And no one in the room mentioned any cryptocurrency besides bitcoin. There was also an unmistakable air of stealth and FOMO. The people who showed up to this event did so, at least in part, because they didn't want to get left behind.

Capturing excess and otherwise wasted natural gas from drilling sites and then using that energy to mine bitcoin is still firmly in the category of avant-garde tech.

Haby, who's affable and an open book on most things, clams up when it comes to sharing the location of his company's mining sites. "West Texas" is as much as Haby would give CNBC, though if the name "Limpia Creek" is any indication, that would place them 100 miles due north of Big Bend National Park.

His secrecy was par for the course that evening.

Oilmen turned bitcoin miners: Griffin Haby with Conner Murphree and Jordan Kuntz at one of their bitcoin mining sites in Texas.

Bitcoin miner Alejandro de la Torre was born in Spain, but he's spent years minting bitcoin all over the world, most recently in China. When Beijing cracked down on all things crypto, De La Torre got a call from his boss at 3 A.M. telling him he had to go to Texas. He was in Austin the next day.

Since then, he's been shipping his new-generation mining gear to the U.S. in bulk.

"It's all through ships and from the Pacific side," De La Torre told CNBC. "The port depends on the location of where the rigs will end up."

That was as much as De La Torre would divulge, because, as he explains it, any further details about the destination, or the gear itself, could give his competitors an edge.

Bitcoin believers care a lot about privacy, as do the oil and gas guys. Some cited non-disclosure agreements as a reason to speak to CNBC in vague platitudes about business deals. Others were only willing to share their thoughts on the condition of anonymity. And some attendees worried about their job security should their employer find out they were there.

These weren't tycoons; they were mostly up-and-coming young execs, hungry to get ahead and make a name by taking a gamble on bitcoin mining.

For years, oil and gas companies have struggled with the problem of what to do when they accidentally hit a natural gas formation while drilling for oil. Whereas oil can easily be trucked out to a remote destination, gas delivery requires a pipeline.

If a drilling site is right next door to a pipeline, they chuck the gas in and take whatever cash the buyer on the other end is willing to pay that day. "There's no choice. There's no middle finger. Whatever gas comes out that day has to be sold," explained Haby.

But if it's 20 miles from a pipeline, things start to get more complicated.

More often than not, the gas well won't be big enough to warrant the time and expense of building an entirely new pipeline. If a driller can't immediately find a way to sell the stash of natural gas, most look to dispose of it on site.

One method is to vent it, which releases methane directly into the air, a poor choice for the environment, as its greenhouse effects are shown to be much stronger than carbon dioxide's. A more environmentally friendly option is to flare it, which means actually lighting the gas on fire.

"Chemistry is amazing," explained Adam Ortolf, who heads up business development in the U.S. for Upstream Data, a company that manufactures and supplies portable mining solutions for oil and gas facilities.

"When CH4, or methane, combusts, the only exhaust is CO2 and H2O vapor. That's literally the same thing that comes out of my mouth when I exhale," continued Ortolf.

But, Ortolf points out, flares are only 75% to 90% efficient. "Even with a flare, some of the methane is being vented without being combusted," he said.

This is when on-site bitcoin mining can prove to be especially impactful.

When the methane is run into an engine or generator, 100% of the methane is combusted and none of it leaks or vents into the air, according to Ortolf.

"But nobody will run it through a generator unless they can make money, because generators cost money to acquire and maintain," he said. "So unless it's economically sustainable, producers won't internally combust the gas."

A panel of bitcoin miners and oil & gas execs share what it's like to mine bitcoin in Texas.

Bitcoin makes it economically sustainable for oil and gas companies to combust their methane rather than externally combust it with a flare.

"There is no such thing as stranded gas anymore," said Haby.

But it has taken Ortolf years to convince people that parking a trailer full of ASICs on an oil and gas field is a smart and financially sound idea.

"In 2018, I got laughed out of the room when I talked about mining bitcoin on flared gas," said Ortolf. "The concept of bringing hydrocarbons to market without a counterparty was laughable."

Fast forward three years, and business at Upstream, a company founded by lead engineer Steve Barbour, is booming. It now works with 140 bitcoin mines across North America.

"This is the best gift the oil and gas industry could've gotten," said Ortolf. "They were leaving a lot of hydrocarbons on the table, but now, they're no longer limited by geography to sell energy."

It is also helping to curtail the overall carbon footprint of some of these oil and gas sites. Recent production stats show that in the U.S. alone about 1.5 billion cubic feet of natural gas is wasted on a daily basis. And these are just the reported numbers, so the actual figures are likely higher.

Meanwhile, bitcoin miners get what they want most: cheap electricity.

For all these grand visions of bitcoin mining to stay the course, though, the industry needs some manpower on Capitol Hill to safeguard its plans to scale. And right now, politicians in Washington are scrambling to figure out what and how to regulate cryptocurrencies and all the ancillary services that make up the wider ecosystem for digital currencies.

That's why another big topic of conversation at the Houston Bitcoin Meetup was political activism.

"Who knows a staffer or a representative?" one member of the crowd posed to the group. At least half a dozen people raised their hands and one stepped up to confirm they would reach out to their contact in Senator Cruz's office.

There was a sense of momentum in the audience. Several people made the point that the bitcoin contingent across the country had stalled a $1 trillion bipartisan bill that had been all but rubber-stamped, no small feat for a voting bloc that hitherto hadn't been viewed as much of a threat on the Hill.

But it's not just about playing defense for these tens of millions of voters and bitcoin faithful. They're going on the offensive by working to install like-minded people into office so that they can do something "before they do it to us," as one member of the audience said to the group. They're also teaching veteran lawmakers about bitcoin, as many representatives don't understand it.

"We need to target anyone who is anti-bitcoin. There are 45 million of us in America, and we are not silent," said this same attendee.

Read the original here:
Bitcoin miners and oil and gas execs mingled at a secretive meetup in Houston here's what they talked about - CNBC

Read More..

These 3 altcoins mooned as Bitcoin price rallied to $52,000 – Cointelegraph

The wider cryptocurrency market is showing signs of strength on Sept. 6 as Bitcoin (BTC) bulls battle for control at the $51,500 level.

Altcoins have benefited from Bitcoin's strong showing, with many seeing gains in excess of 20%, and the Altseason Indicator from Cointelegraph Markets Pro continues to signal that market conditions are tilted toward further gains for altcoins.

Data from Cointelegraph Markets Pro and TradingView shows that the biggest gainers over the past 24 hours were Oasis Network's ROSE, Parsiq's PRQ and Travala's AVA.

Oasis Network is a blockchain protocol with privacy-enhancing features that create a secure platform for open finance and responsible data management.

VORTECS Score data from Cointelegraph Markets Pro began to detect a bullish outlook for ROSE on Sept. 1, prior to the recent price rise.

The VORTECS Score, exclusive to Cointelegraph, is an algorithmic comparison of historic and current market conditions derived from a combination of data points including market sentiment, trading volume, recent price movements and Twitter activity.
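The VORTECS algorithm itself is proprietary, so the following is only a minimal, hypothetical Python sketch of what a composite market-condition score of this general shape might look like; the inputs, weights and squashing function are all assumptions, not Cointelegraph's method:

```python
import numpy as np

def composite_score(sentiment, volume, momentum, tweet_activity,
                    weights=(0.25, 0.25, 0.25, 0.25)):
    """Hypothetical composite market-condition score on a 0-100 scale.

    Each input is assumed to be a z-score of the current reading against
    its own trailing history. This is NOT the VORTECS methodology, which
    is proprietary; it only illustrates the general shape of such a metric."""
    raw = np.dot(weights, [sentiment, volume, momentum, tweet_activity])
    return 100 / (1 + np.exp(-raw))  # logistic squash to the 0-100 range

# Example: strong sentiment and tweet activity, modest volume and momentum
print(round(composite_score(1.2, 0.4, 0.6, 1.5), 1))  # ~71.6 on this scale
```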

As seen in the chart above, the VORTECS Score for ROSE climbed into the green on Sept. 1 and reached a high of 76, around 82 hours before its price surged 135% over the next two days.

The sudden rise in the price of ROSE comes following the Sept. 3 announcement that the project has partnered with API3 and will be co-sponsoring a grant program for development teams wishing to build a Rust version of the protocol's Airnode service.

Parsiq, a blockchain-based analytics platform, saw the price of its PRQ token rally 51% over the past 24 hours.

VORTECS Score data from Cointelegraph Markets Pro began to detect a bullish outlook for PRQ on Sept. 5, prior to the recent price rise.

As seen in the chart above, the VORTECS Score for PRQ climbed into the green on Sept. 5 and reached a high of 72, roughly 12 hours before its price spiked 52% over the next day.

This growing momentum for Parsiq comes following the introduction of its new subscription model that makes the protocol the world's first decentralized software as a service (SaaS).

Related: US SEC releases fresh investor alert against crypto investment scams

Travala is a leading blockchain-based travel booking platform that offers flights and accommodation services to more than 90,000 destinations in 230 countries and territories around the world.

VORTECS Score data from Cointelegraph Markets Pro began to detect a bullish outlook for AVA on Sept. 1, prior to the recent price rise.

As seen in the chart above, the VORTECS Score for AVA reached a high of 70 on Sept. 5, roughly 120 hours after it first turned green, before the token's price spiked 54% over the next day.

The spike in price comes as the project makes its end-of-summer push to engage users; the platform also allows users to spend stablecoins such as USD Coin (USDC) or Dai to book their next vacation.

The overall cryptocurrency market capitalization now stands at $2.341 trillion, and Bitcoin's dominance rate is 41.4%.
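As a quick derived check (not a figure reported in the article), the dominance rate is simply Bitcoin's market capitalization as a share of the total, so the two numbers above imply:

```latex
\text{BTC market cap} \approx 0.414 \times \$2.341~\text{trillion} \approx \$0.97~\text{trillion}
```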

The views and opinions expressed here are solely those of the author and do not necessarily reflect the views of Cointelegraph. Every investment and trading move involves risk, and you should conduct your own research when making a decision.

Follow this link:
These 3 altcoins mooned as Bitcoin price rallied to $52,000 - Cointelegraph

Read More..