Category Archives: Data Mining

Commentary: Medicare is cleaning up the FDA's mess on Biogen's Alzheimer's drug – Boston Herald

Medicare has decided once and for all not to pay for Biogen's new Alzheimer's drug Aduhelm unless patients are enrolled in a clinical study.

The agency's final call was unsurprising, but blessedly rational. It corrects the Food and Drug Administration's mistake in letting Aduhelm onto the market. At the same time, it leaves room for future Alzheimer's drugs to be covered as long as studies show they are safe and effective.

This will encourage beneficial innovation in Alzheimer's drug development, and ensure that patients get medicines that can truly help them.

The decision by the Centers for Medicare & Medicaid Services marks a turning point in Aduhelm's long and contentious journey. In 2014, the drug raised hopes among Alzheimer's doctors and patients when, in a small phase 2 trial, it appeared to clear amyloid plaques in patients' brains and, in a first for the field, ease their cognitive decline. Biogen promptly began a large, expensive phase 3 study to confirm those results and, to prepare for the drug's eventual approval, invested $2.5 billion in manufacturing capacity.

In larger trials, however, the stunning early results couldn't be replicated. And that seemed to end all hope for the drug, until Biogen said it had found, buried in the data, a signal that the drug could still be effective. Then, according to an investigation by Stat News reporters, the company secretly lobbied the FDA for Aduhelm's approval.

In 2020, the FDA's scientific advisory committee harshly criticized the company's data mining and overwhelmingly recommended against approving Aduhelm. Then the agency stunned everyone by approving the drug anyway, based on its ability to clear amyloid plaques, with the proviso that Biogen would run another trial to prove that the plaque-clearing would slow cognitive decline.

Biogen audaciously priced the drug at $56,000 per year. And Medicare, faced with the possibility of paying for treatment for millions of qualified Americans, had to schedule a big rise in monthly premiums for Part B coverage. (After an outcry, Biogen eventually halved the price.)

Now that CMS has settled on a way to limit spending on the drug until its benefit is proved, Medicare will be able to dial back that premium increase. The decision also likely spells the end of Aduhelm, which doctors were already shunning. In 2021, it brought in only $3 million in sales.

Biogen, patient advocacy groups and even some members of Congress have suggested that CMS's refusal to cover Aduhelm could have a chilling effect on innovation in Alzheimer's. They have argued that drug companies will have no incentive to develop new medicines if insurers won't cover them.

But in a clear and sober explanation of its thinking on Aduhelm, CMS pointed out that the opposite is true: "The CMS final decision provides clarity on the criteria to receive coverage for any drug in this class (and thus what evidence is necessary to meet the standard for 'reasonable and necessary' for this particular treatment)."

A drug can be considered innovative only if it actually improves patients' lives. In a disease as devastating as Alzheimer's, even marginal improvements matter. But evidence from several large clinical studies indicates that Aduhelm fails to offer that.

Medicare has laid a path for other companies to understand where the bar for coverage is set: A drug must be safe and offer a meaningful benefit to patients, and it must do so over time. This is good news for Eli Lilly & Co. and Roche, both of which have Alzheimer's therapies that will soon be up for approval.

CMS, which is expected to foot the bill for Medicare patients' drugs, perhaps had greater incentive than the FDA to make sure the drug works. But the FDA is the agency that should have set the bar. The FDA's mandate is to follow the science. As it weighs other loaded decisions, particularly for neurodegenerative diseases, it should make sure that Medicare never again has to correct its mistakes.

Lisa Jarvis, the former executive editor of Chemical & Engineering News, writes about biotech, drug discovery and the pharmaceutical industry for Bloomberg Opinion.

Read the rest here:

Commentary: Medicare is cleaning up the FDA's mess on Biogen's Alzheimer's drug - Boston Herald

Introduction to Data Mining – University of Minnesota

Avoiding False Discoveries: A completely new addition in the second edition is a chapter on how to avoid false discoveries and produce valid results, which is novel among other contemporary textbooks on data mining. It supplements the discussions in the other chapters with a discussion of the statistical concepts (statistical significance, p-values, false discovery rate, permutation testing, etc.) relevant to avoiding spurious results, and then illustrates these concepts in the context of data mining techniques. This chapter addresses the increasing concern over the validity and reproducibility of results obtained from data analysis. The addition of this chapter is a recognition of the importance of this topic and an acknowledgment that a deeper understanding of this area is needed for those analyzing data.
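The permutation-testing idea the chapter covers can be illustrated in a few lines. The sketch below is not from the book; it uses synthetic data to estimate how often chance alone would produce a correlation as strong as a "discovered" one.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: a mined "pattern" is a correlation between two attributes.
x = rng.normal(size=200)
y = 0.1 * x + rng.normal(size=200)   # weak true relationship

observed = abs(np.corrcoef(x, y)[0, 1])

# Permutation test: break any real association by shuffling y,
# then see how often chance alone produces a correlation this large.
n_perm = 10_000
perm_stats = np.empty(n_perm)
for i in range(n_perm):
    perm_stats[i] = abs(np.corrcoef(x, rng.permutation(y))[0, 1])

p_value = (1 + np.sum(perm_stats >= observed)) / (1 + n_perm)
print(f"observed |r| = {observed:.3f}, permutation p-value = {p_value:.4f}")
```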

Classification: Some of the most significant improvements in the text have been in the two chapters on classification. The introductory chapter uses the decision tree classifier for illustration, but the discussion of many topics (those that apply across all classification approaches) has been greatly expanded and clarified, including topics such as overfitting, underfitting, the impact of training size, model complexity, model selection, and common pitfalls in model evaluation. Almost every section of the advanced classification chapter has been significantly updated. The material on Bayesian networks, support vector machines, and artificial neural networks has been significantly expanded. We have added a separate section on deep networks to address the current developments in this area. The discussion of evaluation, which occurs in the section on imbalanced classes, has also been updated and improved.
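As a rough illustration of the overfitting and model-complexity issues mentioned above (not an excerpt from the book), the following sketch sweeps the depth of a decision tree on synthetic data and compares training and test accuracy.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=1000, n_features=20, n_informative=5,
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# Sweep model complexity (tree depth): shallow trees underfit, while very
# deep trees overfit (training accuracy rises as test accuracy stalls or drops).
for depth in (1, 3, 5, 10, None):
    tree = DecisionTreeClassifier(max_depth=depth, random_state=0).fit(X_tr, y_tr)
    print(f"max_depth={depth}: train={tree.score(X_tr, y_tr):.2f}, "
          f"test={tree.score(X_te, y_te):.2f}")
```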

Anomaly Detection: Anomaly detection has been greatly revised and expanded. Existing approaches (statistical, nearest neighbor/density-based, and clustering-based) have been retained and updated, while new approaches have been added: reconstruction-based, one-class classification, and information-theoretic. The reconstruction-based approach is illustrated using autoencoder networks that are part of the deep learning paradigm.
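A minimal sketch of the reconstruction-based idea appears below. It substitutes PCA for the autoencoder the book uses, purely to keep the example short and dependency-free; the principle (flag points with large reconstruction error) is the same.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(1)

# Normal points lie near a low-dimensional subspace; a few anomalies do not.
normal = rng.normal(size=(300, 2)) @ rng.normal(size=(2, 10))
anomalies = rng.normal(scale=3.0, size=(5, 10))
X = np.vstack([normal, anomalies])

# Reconstruction-based detection: learn a compressed representation,
# reconstruct each point, and flag points with large reconstruction error.
pca = PCA(n_components=2).fit(X)
recon = pca.inverse_transform(pca.transform(X))
errors = np.linalg.norm(X - recon, axis=1)

threshold = np.percentile(errors, 98)
print("flagged indices:", np.where(errors > threshold)[0])
```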

Association Analysis: The changes in association analysis are more localized. We have completely reworked the section on the evaluation of association patterns (introductory chapter), as well as the sections on sequence and graph mining (advanced chapter).
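For readers unfamiliar with how association patterns are evaluated, here is a small, self-contained illustration (not taken from the book) that computes support, confidence, and lift for a single rule over toy transactions.

```python
transactions = [
    {"bread", "milk"},
    {"bread", "diapers", "beer", "eggs"},
    {"milk", "diapers", "beer", "cola"},
    {"bread", "milk", "diapers", "beer"},
    {"bread", "milk", "diapers", "cola"},
]
n = len(transactions)

def support(itemset):
    """Fraction of transactions that contain every item in the itemset."""
    return sum(itemset <= t for t in transactions) / n

# Evaluate the rule {diapers} -> {beer} with three standard measures.
antecedent, consequent = {"diapers"}, {"beer"}
s_joint = support(antecedent | consequent)
confidence = s_joint / support(antecedent)
lift = confidence / support(consequent)
print(f"support={s_joint:.2f}, confidence={confidence:.2f}, lift={lift:.2f}")
```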

Clustering: Changes to cluster analysis are also localized. The introductory chapter adds the K-means initialization technique and an updated discussion of cluster evaluation. The advanced clustering chapter adds a new section on spectral graph clustering.
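Assuming the initialization technique referred to is k-means++-style seeding, a quick comparison against random seeding might look like the sketch below; it is illustrative only.

```python
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs

X, _ = make_blobs(n_samples=500, centers=5, random_state=0)

# Compare random seeding with k-means++ style careful seeding; better
# initialization usually yields a lower final inertia (sum of squared errors).
for init in ("random", "k-means++"):
    km = KMeans(n_clusters=5, init=init, n_init=1, random_state=0).fit(X)
    print(f"init={init:10s} inertia={km.inertia_:.1f}")
```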

Data: The data chapter has been updated to include discussions of mutual information and kernel-based techniques.
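As a toy illustration of mutual information as a measure of association between attributes (not an excerpt from the book), the sketch below discretizes three attributes and compares a related pair with an unrelated one.

```python
import numpy as np
from sklearn.metrics import mutual_info_score

rng = np.random.default_rng(2)

# Two related attributes and one unrelated attribute, discretized into bins.
x = rng.normal(size=1000)
y = x + rng.normal(scale=0.5, size=1000)     # related to x
z = rng.normal(size=1000)                    # unrelated to x

x_bins = np.digitize(x, np.quantile(x, [0.25, 0.5, 0.75]))
y_bins = np.digitize(y, np.quantile(y, [0.25, 0.5, 0.75]))
z_bins = np.digitize(z, np.quantile(z, [0.25, 0.5, 0.75]))

print("MI(x, y):", round(mutual_info_score(x_bins, y_bins), 3))  # relatively high
print("MI(x, z):", round(mutual_info_score(x_bins, z_bins), 3))  # near zero
```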

Exploring Data: The data exploration chapter has been removed from the print edition of the book, but is available on the web.

Visit link:

Introduction to Data Mining - University of Minnesota

Advanced analytics can help achieve fuel optimization in open-pit mining – McKinsey

Metals mining contributes 3 to 4 percent of global CO2 emissions. For open-pit mining, about 45 percent of carbon emissions are typically Scope 1, of which about 35 percent comes from fuel consumed in hauling.

Fuel optimization achieved by harnessing existing data and machine learning can reduce carbon emissions immediately while alternative technologies to diesel for off-highway trucks are developed and scaled.

A proven machine-learning platform enables discovery of correlations and highlights drivers of fuel consumption based on a truck fleet's past performance by connecting fleet management, enterprise asset management, machine IoT, and other operational data (for example, tire pressure, road layout and quality sensors, and fuel quality). In addition, creation of a digital twin makes it possible to solve for fuel efficiency while maintaining productivity and integrating with both internal and external data sets.
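A rough sense of how such a platform ranks drivers of fuel consumption can be conveyed with a generic model on synthetic telemetry. The sketch below is not McKinsey's platform; every column name and coefficient is invented for illustration.

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(3)
n = 2000

# Hypothetical haul-truck telemetry; column names are illustrative only.
data = pd.DataFrame({
    "payload_t": rng.uniform(150, 250, n),
    "haul_distance_km": rng.uniform(2, 10, n),
    "road_grade_pct": rng.uniform(0, 10, n),
    "tire_pressure_kpa": rng.uniform(600, 800, n),
    "idle_time_min": rng.uniform(0, 30, n),
})
fuel_l = (0.4 * data["payload_t"] + 8 * data["haul_distance_km"]
          + 5 * data["road_grade_pct"] + 0.5 * data["idle_time_min"]
          + rng.normal(scale=10, size=n))

# Fit a model and rank features by their importance as fuel-burn drivers.
model = GradientBoostingRegressor(random_state=0).fit(data, fuel_l)
for name, imp in sorted(zip(data.columns, model.feature_importances_),
                        key=lambda p: -p[1]):
    print(f"{name:18s} {imp:.2f}")
```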

As ore grades decrease and pits become deeper, hauling (and its associated costs) becomes increasingly important to maintaining mine operating expenditures. McKinsey's experience shows that leveraging proven machine-learning-based solutions, along with a change in management strategy, can improve hauling-fuel efficiency relatively quickly and with limited investment.

Download the full commentary for more insight (PDF, 76.3 KB).

Continued here:

Advanced analytics can help achieve fuel optimization in open-pit mining - McKinsey

Data from friends and strangers show where you are – Futurity: Research News


Turning off your data tracking doesn't mean you're untraceable, a new study warns.

Data about our habits and movements are constantly collected via mobile phone apps, fitness trackers, credit card logs, websites visited, and other means. But even with tracking turned off, data collected from acquaintances and even strangers can predict your location.

"Switching off your location data is not going to entirely help," says Gourab Ghoshal, an associate professor of physics, mathematics, and computer science at the University of Rochester.

Ghoshal and colleagues applied techniques from information theory and network science to find out just how far-reaching a person's data might be. The researchers discovered that even if individual users turned off data tracking and didn't share their own information, their mobility patterns could still be predicted with surprising accuracy based on data collected from their acquaintances.

Worse, says Ghoshal, almost as much latent information can be extracted from perfect strangers that the individual tends to co-locate with.

The researchers analyzed four datasets: three location-based social network datasets composed of millions of check-ins on apps such as Brightkite, Facebook, and Foursquare, and one call-data record containing more than 22 million calls by nearly 36,000 anonymous users.

They developed a colocation network to distinguish between the mobility patterns of two sets of people: those socially tied to an individual, and strangers who simply tend to visit the same places at similar times.

By applying information theory and measures of entropy (the degree of randomness or structure in a sequence of location visits), the researchers learned that the movement patterns of people who are socially tied to an individual contain up to 95% of the information needed to predict that individual's mobility patterns.

However, even more surprisingly, they found that strangers not tied socially to an individual could also provide significant information, predicting up to 85% of an individual's movement.
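The entropy measure behind these findings can be illustrated on a toy visit sequence. The sketch below is a simplified stand-in for the study's methods (which compare an individual's sequence against acquaintances' and strangers' sequences); it only shows how entropy quantifies the structure in one person's visits.

```python
import math
from collections import Counter

# A toy sequence of location visits (e.g., check-in or cell-tower IDs).
visits = ["home", "work", "cafe", "work", "home", "home", "gym", "work", "home"]

counts = Counter(visits)
total = len(visits)

# Shannon entropy of the visit distribution: low values mean a person's
# whereabouts are highly structured and therefore easier to predict.
entropy = -sum((c / total) * math.log2(c / total) for c in counts.values())
print(f"entropy = {entropy:.2f} bits over {len(counts)} distinct places")
```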

The ability to predict the locations of individuals or groups can be beneficial in areas such as urban planning and pandemic control, where contact tracing based on mobility patterns is a key tool for stopping the spread of disease. In addition, many consumers appreciate the ability of data mining to offer tailored recommendations for restaurants, TV shows, and advertisements.

However, Ghoshal says, data mining is a slippery slope, especially because, as the research shows, individuals sharing data via mobile apps may be unwittingly providing information about others.

"We're offering a cautionary tale that people should be aware of how far-reaching their data can be," he says. "This research has a lot of implications for surveillance and privacy issues, especially with the rise of authoritarian impulses. We can't just tell people to switch off their phones or go off the grid. We need to have dialogues to put in place laws and guidelines that regulate how people collecting your data use it."

Additional coauthors of the paper are from the University of Exeter, the Federal University of Rio de Janeiro, Northeastern University, and the University of Vermont.

Source: University of Rochester

Read more:

Data from friends and strangers show where you are - Futurity: Research News

Central Michigan University professor receives Professor of the Year award – The Morning Sun

Central Michigan University Professor Carl Lee has been recognized as one of the state's three recipients of the Michigan Distinguished Professor of the Year award.

This award recognizes the outstanding contributions and dedication exhibited by the faculty from Michigan's 15 public universities to the education of undergraduate students.

The Academic Affairs Officers committee of the Michigan Association of State Universities will recognize the nominees and recipients of this annual award on April 15 during a luncheon at the Lansing Center. The other two winners are Michigan State University Professor Vashti Sawtelle and Wayne State University Professor Sandra Gonzales.

Supporting and preparing future educators

"Professor Lee is among the award nominees who continue to bring new scholarship and innovation in teaching and learning to Michigan's public universities," said Dr. Daniel J. Hurley, CEO of the Michigan Association of State Universities. "These professors have the highest dedication to their students, ensuring that they are well prepared to make a meaningful impact in their careers and in their communities."

Dr. Carl Lee is Founding Chair and Professor of the Department of Statistics, Actuarial and Data Sciences at Central Michigan University, where he has taught for more than 38 years, developed undergraduate and graduate programs, and taught courses ranging from the introductory to the Ph.D. level. Dr. Lee earned his B.S. in agronomy from National Taiwan University, his M.A. in mathematics at West Florida State University and his Ph.D. in statistics at Iowa State University.

He is a Fellow of the American Statistical Association (ASA) and a recipient of the Haimo Distinguished Teaching of Mathematics Award from the Mathematical Association of America (MAA), the Distinguished Teaching Award from the Michigan Section of the MAA, CMU's University Distinguished Service Award, and numerous other honors.

Dr. Lee's dedication to undergraduate education goes beyond his classroom rapport with students. He has consistently been innovative in his approach to pedagogy, emphasizing projects, hands-on activities, cooperative learning, and exercises, or what he calls a PACE model. He has authored an extensive array of scholarly papers on the topic of teaching and learning in the field of statistics. Because many students consider statistics to be among the most difficult subjects, Dr. Lee is motivated to research how students learn quantitative concepts, the misconceptions and difficulties they encounter, and the effect of technology on learning. Among his 150 publications, 42 papers are associated with teaching and student learning.


Dr. Lee has also made substantial contributions to the teaching of statistics at CMU, developing statistics programs at the undergraduate and graduate levels and nationally through his involvement with ASA's Undergraduate Statistics Education Initiatives and the National Consortium for the Advancement of Undergraduate Statistics Education. At CMU, he helped design the undergraduate statistics and actuarial science program and also initiated and developed a graduate certificate program in data mining, the MS in Applied Statistics and Analytics, and completely revised the doctoral curriculum. For the past two years, Dr. Lee has worked tirelessly with colleagues from across the university to create the most interdisciplinary program on campus in data science.

The monumental task has involved coordinating a new degree and major with colleagues from four colleges and nine academic departments, continuing to inspire fellow faculty and administrators at CMU.

"Throughout his career, Dr. Carl Lee has consistently demonstrated his devotion to students and high-quality undergraduate education through his engaging classroom approach, innovative use of pedagogy, high-quality scholarship in teaching and learning of statistics, and his contributions to program development and curricular reform at CMU and in his professional field," said Richard Rothaus, Central Michigan University Interim Executive Vice President and Provost.

View original post here:

Central Michigan University professor receives Professor of the Year award - The Morning Sun

Moon mining and satellite collisions make list of DoD concerns in space – Federal News Network

Outer space is becoming increasingly militarized as China, Russia, the United States and other countries continue to vie for dominance in the domain and even consider mining off-planet assets.

The Defense Department identified its two biggest competitors, along with increasing congestion in the area just outside the Earth's atmosphere, as some of the largest threats to the United States' space interests.

"China and Russia value superiority in space, and as a result, they'll seek ways to strengthen their space and counterspace programs and determine better ways to integrate them within their respective militaries," Kevin Ryder, Defense Intelligence Agency senior analyst for space and counterspace, said Tuesday at the Pentagon. "Both nations seek to broaden their space exploration initiatives together and individually with plans to explore the moon and Mars during the next 30 years. If successful, these efforts will likely lead to attempts by Beijing and Moscow to exploit the moon's natural resources."

In a new report from the DIA, the organization found that since 2019 competitor space operations have increased in pace and scope across nearly all major categories including communications, remote sensing, navigation, and science and technology demonstration.

Ryder said that China and Russia intend to undercut the United States and its allies in space.

The report states the two nations increased their number of satellites around the Earth by 70% in the last two years.

Other advancements include China landing a rover on Mars and a robotic spacecraft on the far side of the moon.

"What we've seen so far has been more civilian in nature," Ryder said. "However, China emphasizes in their writings civil-military integration and dual-use purpose space capabilities. While we do understand that right now it is civil in nature, we continue to monitor for any possibility of military activity."

It's not just competition that DIA is outlining as a threat to U.S. space efforts. The intelligence agency noted that the probability of collisions of massive derelict objects in low Earth orbit is growing and will continue to grow through at least 2030.

"As of January 2022, more than 25,000 objects of at least 10 centimeters in size were tracked and cataloged in Earth's orbit to include active satellites," the report states. "The primary risk to spacecraft in orbit is from uncataloged lethal nontrackable debris (LNT), which are objects between 5 millimeters and 10 centimeters in size. An estimated 600,000 to 900,000 pieces of uncataloged LNT are in low Earth orbit."

Looking to the future, the U.S. is now considering deep space operations and the challenges they will present for tracking and monitoring spacecraft.

DoD has outlined space as a crucial domain for the United States. The Pentagon is increasing its investments in space capabilities. The 2023 budget request asks for $27.6 billion for space capabilities, command and control and resilient architectures.

That's not to mention that Congress and the department set up the Space Force in the last couple of years, a military branch solely focused on operations beyond Earth.

The Biden administration is asking for the largest ever Space Force budget next year at $24.5 billion.

Read the original post:

Moon mining and satellite collisions make list of DoD concerns in space - Federal News Network

Overcoming the Challenges of Mining Big Data – Tahawul Tech

by Bill Scudder, Senior Vice President and General Manager, AIoT Solutions, AspenTech

Big Data remains of critical importance to every organisation operating today, no matter its size or location. Data volumes continue to rise: Big Data growth statistics from Statista reveal that data creation will exceed 180 zettabytes by 2025, about 118.8 zettabytes more than in 2020. According to IDC, industrial organisations are among those most significantly impacted by data growth, and in the Middle East, Turkey and Africa, IDC expects big data analytics spending to grow by 8% to be valued at $3bn.

Making the most of data

The impact is a mixture of opportunity and challenge. On the one hand, Big Data has an important role to play in arming organisations with the resources and information they need to enable data-driven decisions that can improve business-related outcomes. When analysed properly, the benefits of Big Data can include optimising production, real-time visibility, and enhanced decision-making, allowing teams to be more productive, effective, and innovative. These benefits can give an organisation the edge over its competitors.

Indeed, for capital-intensive industries such as manufacturing and industrial facilities, Big Data is essential to operations. It can help with predictive maintenance, so supervisors can schedule plant downtime to repair assets before unexpected costly breakdowns occur, provide anomaly detection to alert workers to small deviations from the norms of quality, and predict with greater certainty around international supply chain management challenges.

That's essentially the opportunity. However, the problem often lies in harnessing this data and then making use of it to advance the organisation's goals. With the Industrial Internet of Things (IIoT) and capital-intensive industries amassing more and more data, they're facing the increasing challenge of being unable to manage or leverage it effectively.

Starting to address the challenge

More data isn't always better data. With the influx of data that organisations have received through digitalisation efforts, many have found themselves in the middle of a data swamp, with every piece of possible data included.

We are seeing signs of progress. COVID-19 undoubtedly accelerated the digitalisation of organisations across the world, down to the way they store and access their data. However, this transformation also revealed the limitations of the traditional model of data management, where data is siloed by teams, sources, and locations. This kind of data gatekeeping significantly hinders visibility, as only certain people with unique access or domain expertise can understand or even access data sets that may be relevant to others across the enterprise.

Industrial organisations must switch focus from mass data accumulation to strategic industrial data management, specifically homing in on data integration, data mobility and data accessibility across the organisation. They need to bring together data stored in various silos, often at a range of different facilities worldwide, with the goal of using AI-enabled technologies to unlock hidden value in previously unoptimised and undiscovered sets of industrial data. In effect, they need to start moving to an approach based on big data mining.

Debunking the myths

Before embarking on this approach, however, capital-intensive organisations need to counter the myths that have hampered the approach in the past. When it comes to Big Data mining specifically, one of the biggest challenges organisations face is operating under the assumption that there is a one-size-fits-all solution.

This is not true: organisations must continuously re-evaluate their workflows and processes for collecting, storing, optimising, and presenting data to ensure they're reaping the greatest business value from it. This shows up in practice when thinking about auto discovery, for example. Many IT leaders believe that there are tools that will auto-discover relevant information across all data, when in fact there is an age limit on what these tools can work with; data that is past a certain time frame is usually undetectable, for example.

Another challenge comes from a generational gap in operational expertise. Many organisations are having difficulty finding the right people who know where data is stored and what format it's in. This all circles back to making sure they have the correct data integration strategy in place; it is infinitely easier on employees when a set plan has been made and executed.

Executing on a strategy

Once organisations are fully aware of the myths of data mining and prepared to counter the challenges, they can start on the process of making more of their Big Data stores.

To help facilitate this change, enterprise IT teams should put a clear strategy in place to ensure that they're implementing all the proper tools they need to get adequate information from all sources. A big part of this strategy should be to implement a data historian. These tools have evolved, moving beyond standardised aggregations of process data to become the anchor technology for industrial data management strategies. In today's world, the data historian serves as a democratising force, making it possible for data to be accessed and actioned on by anyone in the organisation.

Next, organisations need to start applying Industrial AI to make data more visible, accessible, and actionable across the enterprise. A cloud-ready industrial architecture that connects the latest AI technologies with the IIoT can not only collect and transmit data but also turn it into intelligence to drive smarter decisions.

This emerging confluence of AI and IIoT offers capital-intensive businesses a range of benefits because they can create a sustainable advantage when analysing large volumes of industrial data for real-time reporting, automation and decision making and then integrate it across assets, plants and sites. They can then build on the advantage by visualising data to identify trends, outliers, and patterns to drive mission-critical apps and actionable business workflows and collaborating across AI-powered apps to help achieve safety, sustainability and profitability goals.

The strategic data management approach that this combination of data historians and industrial AI delivers also helps to bridge a growing skills gap. As veteran employees with years of expertise retire and are replaced by younger employees with much less experience, an AI-powered, data-driven approach ensures that critical, historic knowledge is preserved and shared widely across the organisation, regardless of team, geographical location, or silo.

A positive future approach

Big Data will continue to play a mission-critical role in arming industrial organisations with the resources and insights needed for making data-driven decisions tied to concrete business value outcomes.

This could be about helping with predictive maintenance so supervisors can schedule plant downtime to repair assets before unexpected costly breakdowns occur, providing anomaly detection to alert workers to small deviations from the norms of quality, and predicting with greater certainty around supply chain management challenges.

It could also mean anything from optimising production lines to providing real-time process visibility, all to help teams become more productive, effective, and innovative. But to reap the most value from Big Data and apply it meaningfully to industrial applications, capital-intensive businesses must switch their focus from mass data accumulation to more thoughtful, strategic industrial data management homing in on data integration, mobility, and accessibility across the organisation. By deploying technologies like next-gen data historians and Industrial AI, these businesses can unlock new, hidden value from previously unoptimised and undiscovered sets of industrial data.

More here:

Overcoming the Challenges of Mining Big Data - Tahawul Tech


Read This Engaging Story Of How The Computer Has Been Eating The World For 75 Years – Forbes

A New History of Modern Computing by Thomas Haigh and Paul E. Ceruzzi is a must-read for investors, entrepreneurs, executives, and anyone interested in understanding the technology that is embedded in the lives of most of the world's population.

A New History of Modern Computing by Thomas Haigh and Paul E. Ceruzzi

Haigh and Ceruzzi tackled the challenge of writing a definitive, comprehensive history of an ever-changing technology, by breaking the seventy-five years (1945-2020) of modern computing into about fifteen distinct themes, each focusing on a specific group of users and applications around which the computer is redefined with the addition of new capabilities. Along the way, they trace the transformation of computing from scientific calculations to administrative assistance to personal appliances to a communications medium, a constant reinvention that continues today.

"The computer made an astounding variety of other technologies vanish into itself," write Haigh and Ceruzzi. "We conceptualize this convergence of tasks on a single platform as a dissolving of those technologies, and in many cases, their business models, by a device that comes ever closer to the status of a universal technological solvent."

In Silicon Valley parlance, "dissolving" is "disrupting." In the dominant tech zeitgeist (to some extent since the 1950s, without exception since the 1990s), each computer transformation is a "revolution." Which is why history, and understanding the actual (factual) details of past transformations, is typically of no interest to the denizens of the next big thing.

Haigh and Ceruzzi deftly demonstrate why it is important to understand the evolution of computing, why knowing where you came from is a foundation of success, why tradition is a key to innovation. "Architectural advances pioneered by Cray supercomputers now help your phone to play Netflix video more effectively" is one example highlighting the remarkable continuity of computing, as opposed to the make-believe disruptive innovations. "Whenever the computer became a new thing, it did not stop being everything it had been before," Haigh and Ceruzzi write, summing up the real business of innovating while standing on the shoulders of giants.

Possibly reacting to the endless pronouncements that this or that new computing innovation is changing the world, Haigh and Ceruzzi remind us that the computer's influence on our lives has so far been less fundamental than that of industrial age technologies such as electric light or power, automobiles or antibiotics. Armed with this useful historical perspective, they have tried to give a reasonably comprehensive answer to a more tractable question: How did the world change the computer?

Numerous inventors, engineers, programmers, entrepreneurs and users have been responsible for the rapid and reliable change in the scale and scope of computing, not any inherent laws or some kind of inevitable, deterministic technology trajectory. In the process, they have changed the computer industry, what we mean by industry, and what we perceive as the essence of computing.

Just like the technology around which it has grown by leaps and bounds, the computer industry has gone through multiple transformations. From a handful of vertically integrated companies (primarily IBM and DEC); to a number of companies focusing on horizontal industry segments such as semiconductors, storage, networking, operating systems, and databases (primarily Intel, EMC, Cisco, Microsoft, and Oracle); to companies catering mostly to individual consumers (primarily Apple, Google, Facebook, and Amazon). To this latter group we may add Tesla, which Haigh and Ceruzzi discuss as a prime example of the convergence of computing and transportation. Just like computing technology, the ever-changing computer industry has not stopped being what it was earlier when it moved into a new phase of its life, preserving at least some elements of previous stages in its evolution.

Still, the new stages eventually dissolved the business models of the past, leading to today's reliance by many large and small computer companies on new (to the industry) sources of revenue such as advertising. Eating other industries, especially media businesses, brought on huge profits and, eventually, serious indigestion.

While swallowing other industries, the computer industry has also made the very term "industry" quite obsolete. The digitization of all analog devices and channels for the creation, communication, and consumption of information, spurred by the invention of the Web, shattered the previously rigid boundaries of economic sectors such as publishing, film, music, radio, and television. In 2007, 94% of the world's storage capacity was digital, a complete reversal from 1986, when 99.2% of all storage capacity was analog.

I would argue that the data resulting from the digitization of everything is the essence of computing, of why and how high-speed electronic calculators were invented seventy-five years ago and of their transformation over the years into a ubiquitous technology, embedded, for better or worse, in everything we do. This has been a journey from data processing to big data.

As Haigh and Ceruzzi write, early computers wasted much of their incredibly expensive time waiting for data to arrive from peripherals. This problem of latency, of efficient access to data, played a crucial role in the computing transformations of subsequent years, but it has been overshadowed by the dynamics of an industry driven by rapid and reliable advances in processing speeds. Responding (in the 1980s) to computer vendors telling their customers to upgrade to a new, faster processor, computer storage professionals wryly noted that "they are all waiting [for data] at the same speed."

The rapidly declining cost of computer memory (driven by the scale economies of personal computers) helped address latency issues in the 1990s, just at the time business executives started to use the data captured by their computer systems for more than accounting and other internal administrative processes. They stopped deleting the data, instead storing it for longer periods of time, and started sharing it among different business functions and with their suppliers and customers. Most important, they started analyzing the data to improve various business activities, customer relations, and decision-making. Data mining became the new big thing of the 1990s, as the business challenge shifted from "how to get the data quickly?" to "how to make sense of the data?"

A bigger thing that decade, with much larger implications for data and its uses (and for the definition of computing), was the invention of the Web and the companies it begat. Having been born digital, living the online life, meant not only excelling in hardware and software development (and building their own clouds), but also innovating in the collection and analysis of the mountains of data produced by the online activities of millions of individuals and enterprises. Data has taken over from hardware and software as the center of everything computing, the lifeblood of tech companies. And increasingly, the lifeblood of any type of business.

In the last decade or so, the cutting edge of computing became big data and AI (more accurately labeled deep learning), the sophisticated statistical analysis of lots and lots of data, the merging of software development and data mining skills (data science).

As Haigh and Ceruzzi suggest, we can trace how the world has changed the computer rather than how computer technology has changed the world: for example, by tracing the changes in how we describe what we do with computers, the prestige-chasing transformations from data processing to information technology (IT), from computer engineering to computer science, and from statistical analysis to data science. The computer, and its data, have brought many changes to our lives but have not changed much about what drives us, what makes humans tick. Among many other things, it has not influenced at all, it could not have influenced at all, the all-consuming desire for prestige and status, whether as individuals or as nations.

See the article here:

Read This Engaging Story Of How The Computer Has Been Eating The World For 75 Years - Forbes

Payor audits on the rise. How private practices can get ready. – American Medical Association

Claims processing and payments from health insurers should flow easily under negotiated contracts, but that isn't always the case. Payment and claims audits are more common than ever, and physician private practices should know what to anticipate, according to experts from one of the nation's preeminent health care law firms.

Ross Burris and Sean Timmons of the Polsinelli law firm offered their expert advice and detailed the trends in payor audits and disputes in the first of a two-part AMA webinar. You can watch part two for even more audit-response strategies.

Medicare's auditing process

The Centers for Medicare & Medicaid Services has introduced an audit process called Targeted Probe and Educate that was designed to help health care claimants reduce denials and appeals and improve their administrative process.

But the process hasn't lived up to its goals, Burris said. The new system ties physicians and their practices to Medicare Administrative Contractors (MACs), who are tasked with helping claimants identify and correct errors.

"We are seeing them a lot, especially for physicians. It sounds really nice," Burris said. But the process is onerous, requiring a lot of time to prepare supporting documents for MAC review, which can then lead to broader and more complicated audits. The Polsinelli lawyers said the audits can take up to two years to resolve, and they have seen a recent increase in these types of audits. Learn more with the AMA about the MAC review process.

Commercial payor audits, meanwhile, can be even harder to navigate. Many contracts have not been updated in years, and commercial audits are based on contracts with unique opportunities for denial, rate changes and termination, Burris said.

"Payors may request itemized bills and medical records before payment and, as a result, we are seeing a lot of claims being held up," he said.

Contract termination is a "capital punishment" outcome for audits, and physicians should know that terminations can come at almost any point in an audit or negotiation, the Polsinelli experts said.

The AMA Payor Audit Checklist (PDF) helps practices respond effectively to payor records requests while minimizing the administrative burden associated with responding to such requests. A thorough and timely response could reduce the likelihood that a practice will have to return money to the payor, pay a penalty or lose access to the plan's beneficiaries.

Mining data to ID outliers

Payors are getting more aggressive in terminating physicians and practices that they identify as outliers, using data mining and claims analysis to flag physicians they deem to be performing and billing for procedures outside of what they see as normal in their coverage regions, the lawyers from Polsinelli said.
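The kind of outlier screening described here can be approximated with a simple z-score pass over billing rates. The sketch below is a generic illustration, not any payor's actual method; the data, column names, and threshold are invented.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(4)

# Hypothetical claims summary: procedures billed per 100 patients, by physician.
claims = pd.DataFrame({
    "physician_id": range(200),
    "procedures_per_100_patients": rng.normal(40, 5, 200),
})
claims.loc[7, "procedures_per_100_patients"] = 90     # synthetic outlier

# Flag physicians whose billing rate sits far from the regional norm.
mean = claims["procedures_per_100_patients"].mean()
std = claims["procedures_per_100_patients"].std()
claims["z_score"] = (claims["procedures_per_100_patients"] - mean) / std

outliers = claims[claims["z_score"].abs() > 3]
print(outliers[["physician_id", "procedures_per_100_patients", "z_score"]])
```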

"We had a case last year when a client had a pretty favorable rate they had negotiated, and [the insurer] came in and decided that day they wanted to amend the contract and change the rate," Burris said.

When the physician practice reminded the insurer that a change was not within the period allowed for rate changes, the payor announced it would terminate the contract completely, which was allowed by the contract.

Payors often make little or no effort to understand the reason for the billings that led to the audit. Physician practices should be mindful that termination can happen without warning, leaving their doctors out of network, the Polsinelli experts said.

"It's a draconian tactic, but it happens from time to time," Burris said.

State insurance regulators may have the power to intercede in insurance company behaviors such as contracts and audits, but they are unlikely to do so because they are more interested in protecting beneficiaries, Timmons said. "It is difficult to get their attention." That leaves insurers with a lot more options to pressure physician practices.

It takes astute clinical judgment as well as a commitment to collaboration and solving challenging problems to succeed in independent settings that are often fluid, and the AMA offers the resources and support physicians need to both start and sustain success in private practice.

As physicians strive to continue to provide care to patients and maintain their practices during the ongoing COVID-19 pandemic, the AMA is providing an updated guide to help doctors keep their practices open. Find out more about the AMA Private Practice Physicians Section, which seeks to preserve the freedom, independence and integrity of private practice.

See the rest here:

Payor audits on the rise. How private practices can get ready. - American Medical Association