Category Archives: Data Mining

Reduced Data Accuracy Helps Save Energy – Tampere University, Finland, Is Coordinating a Project That Trains Young Scientists From Around the World to…

Business Wire India

It is estimated that by 2040, computers will need more electricity than the world's energy resources can generate. The Approximate Computing for Power and Energy Optimisation (APROPOS) project will train 15 junior researchers around Europe to tackle the challenges of energy efficiency in future embedded and high-performance computing by using disruptive methodologies.

This press release features multimedia. View the full release here: https://www.businesswire.com/news/home/20211111005657/en/

Professor Jari Nurmi is coordinating the APROPOS project and co-supervising four of its researchers. His current research interests include approximate and reconfigurable computing, software-defined radio and networks, and wireless positioning hardware. Photo: Sari Laapotti

The energy consumption of mobile broadband networks is comparable to that of data centres. The Internet of Things paradigm will soon connect up to 50 billion devices through wireless networks. The APROPOS project aims at decreasing energy consumption in both distributed computing and communications for cloud-based cyber-physical systems.

"Luckily, in many parts of global data acquisition, transfer, computation, and storage systems, it is possible to reduce accuracy to allow reduced energy and time consumption. By introducing accuracy to design optimisation, energy efficiency can be improved by up to 50-fold," says Professor Jari Nurmi from the Electrical Engineering Unit of Tampere University.

Nurmi points out that, for example, numerous sensors measure noisy or inexact inputs, and the algorithms processing the acquired signals can be stochastic. Sensor swarms measuring natural environments produce a lot of noisy and inexact data that can be transferred and processed with less accuracy without losing the essential trends of the phenomena observed.

The applications using data may not need completely correct results; acceptable accuracy may be sufficient. This means that the system can be resilient against random errors and, for example, a coarse classification may be enough for a data mining system.

New solutions needed to tackle increasing energy consumption

The overall energy consumption of computing and communication systems is rapidly growing, despite the recent advances in semiconductor technology and energy-aware system design. The APROPOS project will train fifteen research fellows in a multisectoral, international environment to form the basis for enhanced features in products and energy-aware system design.

"The early-stage researchers of APROPOS will be trained in both entrepreneurial and academic directions. Thus, they will be able to develop the commercial potential of their research and come up with innovative product and service ideas," Nurmi adds.

APROPOS is a four-year project funded by the European Union's Horizon 2020 Marie Skłodowska-Curie Innovative Training Networks. It is coordinated by Tampere University, Finland. Many of the young researchers are coming from outside the EU to work in Finland, Sweden, the Netherlands, Austria, Italy, Spain, Switzerland, France, and the UK.

Read more at http://www.apropos-itn.eu

Tampere University

The multidisciplinary Tampere University is the second largest university in Finland. The spearheads of our research and learning are technology, health and society. The University is committed to addressing the greatest challenges that are facing our society and creating new opportunities. Almost all the internationally recognised fields of study are represented at the University. Together, Tampere University and Tampere University of Applied Sciences comprise the Tampere Universities community made up of more than 30,000 students and close to 5,000 employees. www.tuni.fi/en

View source version on businesswire.com: https://www.businesswire.com/news/home/20211111005657/en/

More:

Reduced Data Accuracy Helps Save Energy – Tampere University, Finland, Is Coordinating a Project That Trains Young Scientists From Around the World to...

Data Mining Tools Market increasing demand with Industry Professionals: IBM, SAS Institute, Oracle – The Host

The Global Data Mining Tools Market Report is an objective and in-depth study of the current state of the market, focused on the major drivers, market strategies, and growth of the key players. The study also covers the important achievements of the Data Mining Tools market, research & development, new product launches, product responses and the regional growth of the leading competitors operating in the market on a global and local scale. The structured analysis contains graphical as well as diagrammatic representation of the worldwide Data Mining Tools market with its specific geographical regions.

Due to the pandemic, we have included a special section on the pre- and post-COVID-19 impact on the market, which examines how COVID-19 is affecting the Data Mining Tools market.

Get a sample copy of the Data Mining Tools report: jcmarketresearch.com/report-details/1468438/sample

** The values marked with XX are confidential data. To know more about Data Mining Tools industry CAGR figures, fill in your information so that our JCMR business development executive can get in touch with you.

Global Data Mining Tools market volume (thousand units) and revenue (million USD), split by the following coverage:

By Type: On-premises, Cloud
By Application: Application I, Application II, Application III

The Data Mining Tools research study is segmented by application, such as Laboratory, Industrial Use, Public Services & Others, with historical and projected market share and compounded annual growth rate.

Global Data Mining Tools by Region (2021-2029)

Geographically, this Data Mining Tools report is segmented into several key regions, covering production, consumption, revenue (million USD), and market share and growth rate of Data Mining Tools in these regions, from 2013 to 2029 (forecast).

Additionally, the report covers the export and import policies that can have an immediate impact on the Data Mining Tools market. The study contains an EXIM-related chapter on the Data Mining Tools market and all its associated companies, with profiles that provide valuable data on their outlook in terms of finances, product portfolios, investment plans, and marketing and business strategies. The report on the Data Mining Tools market is an important document for every market enthusiast, policymaker, investor, and player.

Key questions answered in this Data Mining Tools industry Data Survey Report 2029:

What will the Data Mining Tools market size be in 2029, and what will the growth rate be?
What are the key Data Mining Tools market trends?
What is driving the Data Mining Tools market?
What are the challenges to Data Mining Tools market growth?
Who are the key vendors in the Data Mining Tools space?
What are the key market trends impacting the growth of the Data Mining Tools market?
What are the key outcomes of the five forces analysis of the Data Mining Tools market?

Get an interesting Data Mining Tools report discount with additional customization: jcmarketresearch.com/report-details/1468438/discount

There are 15 chapters to display the Data Mining Tools market.

Chapter 1, to describe Definition, Specifications and Classification of Data Mining Tools, Applications of Data Mining Tools, Market Segment by Regions;

Chapter 2, to analyze the Data Mining Tools Manufacturing Cost Structure, Raw Materials and Suppliers, Manufacturing Process, Industry Chain Structure;

Chapter 3, to display the Technical Data and Manufacturing Plants Analysis of Data Mining Tools, Capacity and Commercial Production Date, Manufacturing Plants Distribution, R&D Status and Technology Source, Raw Materials Sources Analysis;

Chapter 4, to show the Overall Data Mining Tools Market Analysis, Capacity Analysis (Company Segment), Sales Analysis (Company Segment), Sales Price Analysis (Company Segment);

Chapters 5 and 6, to show the Data Mining Tools Regional Market Analysis that includes North America, Europe, Asia-Pacific etc., and Segment Market Analysis by various segments;

Chapters 7 and 8, to analyze the Data Mining Tools Segment Market Analysis (by Application) and Major Manufacturers Analysis of Data Mining Tools;

Chapter 9, Data Mining Tools Market Trend Analysis, Regional Market Trend, Market Trend by Product Types, Market Trend by Applications;

Chapter 10, Data Mining Tools Regional Marketing Type Analysis, International Trade Type Analysis, Supply Chain Analysis;

Chapter 11, to analyze the Consumers Analysis of Data Mining Tools;

Chapter 12, to describe Data Mining Tools Research Findings and Conclusion, Appendix, methodology and data source;

Chapters 13, 14 and 15, to describe the Data Mining Tools sales channel, distributors, traders, dealers, Research Findings and Conclusion, appendix and data source.

Buy an instant copy of the full Data Mining Tools research report: jcmarketresearch.com/checkout/1468438

Find more research reports on the Data Mining Tools industry by JC Market Research.

Thanks for reading this article; you can also get individual chapter-wise sections or region-wise report versions, such as North America, Europe or Asia.

About Author: JCMR's global research and market intelligence consulting organization is uniquely positioned to not only identify growth opportunities but also to empower and inspire you to create visionary growth strategies for the future, enabled by our extraordinary depth and breadth of thought leadership, research, tools, events and experience that assist you in making your goals a reality. Our understanding of the interplay between industry convergence, mega trends, technologies and market trends provides our clients with new business models and expansion opportunities. We are focused on identifying accurate forecasts in every industry we cover so our clients can reap the benefits of being early market entrants and can accomplish their goals and objectives.

Contact Us: JC MARKET RESEARCH
Mark Baxter (Head of Business Development)
Phone: +1 (925) 478-7203
Email: [emailprotected]

Connect with us at LinkedIn

Read the original here:

Data Mining Tools Market increasing demand with Industry Professionals: IBM, SAS Institute, Oracle - The Host

COVID19 patient diagnosis and treatment data mining algorithm based on association rules – Wiley

3.1 Research objects and criteria

General information mainly included gender, age, underlying disease, contact history, and so forth. Clinical data mainly included first symptoms and signs, MuLBSTA score, critical time intervals during diagnosis and treatment (including time from onset to dyspnea, first diagnosis, admission, mechanical ventilation and death, and time from first diagnosis to admission, etc.), laboratory examinations, complications, main treatment conditions, and cause of death. The frequency of drug use was counted, and an association rule algorithm was used to analyse the effect of drug treatment. The results obtained can provide a reference for rational drug use in COVID-19 patients, reduce the cost of treatment and reduce the suffering of patients.

A retrospective analysis was conducted on 49 cases of COVID-19 deaths diagnosed between January 29, 2020 and March 6, 2020 in our hospital. Inclusion criteria: all enrolled patients met the diagnostic criteria for confirmed cases in the Novel Pneumonia Diagnosis and Treatment Protocol for Coronavirus Infection (Trial Seventh Edition) issued by the National Health Commission on March 3, 2020. Clinical manifestations: fever and/or respiratory symptoms. COVID-19 imaging features: multiple small patches and interstitial changes in the early stage, most obvious in the extrapulmonary zone; with further development, double-lung ground-glass and infiltrating shadows, and in serious cases lung consolidation. Exclusion criteria: successfully treated cases and cases without a definitive diagnosis were excluded. The study was non-interventional and did not require patients to sign informed consent.

According to the course records, the laboratory examination results on admission (D1+1), day 4+1 (D4+1), day 7+1 (D7+1) and day 14+2 (D14+2) were recorded, including routine blood tests, blood gas analysis, PCT, hypersensitive C-reactive protein (HSCRP), myocardial enzymes, liver enzymes, renal function, coagulation indexes, electrolytes and etiological data.

In this study, the data were obtained from a regional health information platform based on health records; in essence, it is a medical information system closely related to the real world. In order to improve the efficiency of data mining, it is necessary to pre-process these data. In the data table of personal basic information, in addition to past history records, there are other fields unrelated to the research, such as the person who created the file, the date of the file, the medical institution, and so forth. In this application, only the past history records are needed. Therefore, there is no need to pre-process these irrelevant fields; only the past history fields are processed.

Secondly, in this data mining application, the main objective is to extract association rules for COVID-19 complications, so the attributes to be mined should be the various diseases. Therefore, it is necessary to classify individual disease types. A person's stored disease history is often composed of multiple diseases, so it needs to be classified and labelled. For example, if the past history column for Zhang San in the database reads "hypertension, COVID-19", indicating that Zhang San had previously suffered from hypertension and COVID-19, then in Zhang San's record the hypertension column is marked as A and the COVID-19 column is marked as B.
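To make this labelling step concrete, here is a minimal sketch in Python using pandas and the mlxtend library; the patient names, column names and thresholds are hypothetical illustrations, not values from the paper:

```python
import pandas as pd
from mlxtend.frequent_patterns import apriori, association_rules

# Hypothetical past-history records, one comma-separated string per patient
history = pd.DataFrame({
    "patient": ["Zhang San", "Li Si", "Wang Wu"],
    "past_history": ["hypertension, COVID-19",
                     "diabetes, COVID-19",
                     "hypertension, diabetes, COVID-19"],
})

# Split each history string into individual diseases and one-hot encode them,
# so every disease becomes its own boolean column (the "A"/"B" marking above)
onehot = (history["past_history"]
          .str.split(",")
          .explode()
          .str.strip()
          .pipe(pd.get_dummies)
          .groupby(level=0).max()
          .astype(bool))

# Mine frequent disease combinations and derive association rules from them
frequent = apriori(onehot, min_support=0.5, use_colnames=True)
rules = association_rules(frequent, metric="confidence", min_threshold=0.7)
print(rules[["antecedents", "consequents", "support", "confidence"]])
```

With real records, the resulting rule table would be filtered by support and confidence to obtain the complication rules the study describes.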

The data cleaning process removes noise from the original data along with data that are not relevant to association rule mining, and also handles missing data. It mainly includes missing-data processing and error-data processing, and completes some data type conversion work.

Due to the large amount of data in electronic health records, which are generated in different places through complicated processes, data loss, duplication and even erroneous data are inevitable, so the data must be cleaned.

Fill the void values: some attributes in a record may be related to Novel Coronavirus to a certain degree, but their values are empty, so the void values need to be filled. Filling a void value can be handled in the following ways. Ignore the record: when some data rows lack the class label required for their classification, the row can be ignored and deleted; if the number of tuples missing a class label is very large, this approach becomes difficult to work with. Manually fill in missing values: this method is costly in time, especially if the data set is very large. Global constant padding: the missing attributes of a record are populated with a uniform constant; although this is easy to do, it is not safe. Mean padding: the average value of an attribute is calculated so that records with missing values in that attribute can be filled in with this average.
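As an illustration of the mean-padding and global-constant strategies described above, here is a minimal pandas sketch; the column names and the constant are hypothetical examples rather than values from the paper:

```python
import pandas as pd
import numpy as np

# Hypothetical lab-result table with missing (void) values
labs = pd.DataFrame({
    "patient_id": [1, 2, 3, 4],
    "crp":        [12.5, np.nan, 48.0, np.nan],    # numeric attribute
    "ward":       ["ICU", None, "ICU", "general"]  # categorical attribute
})

# Mean padding: fill missing numeric values with the column average
labs["crp"] = labs["crp"].fillna(labs["crp"].mean())

# Global constant padding: fill missing categorical values with a uniform constant
labs["ward"] = labs["ward"].fillna("unknown")

# Ignore-the-record strategy: drop any row that still lacks a required label
labs = labs.dropna(subset=["ward"])
print(labs)
```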

Modify error values: because a lot of the data in the medical information system are entered manually by medical workers, some values contain errors and need to be modified. Values of attributes that have a standard valid range can be checked and corrected against that range.

The original data, even after cleaning, cannot be used directly; some attributes still need to be converted into the required form. In the original data, an individual's age is not stored, only the date of birth. Therefore, age is determined from the date of birth and the date of filing. However, these two dates are not stored in the same format in all records, so for convenience all dates are first converted to a single year-month-day format; an individual's age is then calculated as the difference between the date of filing and the date of birth. The calculated age is a continuous attribute, which is not suitable for classification over discrete attributes, so it needs to be discretized. The transformation of the age attribute is shown in Table 1.
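A minimal sketch of this age derivation and discretization follows; the date columns, formats and age bins are hypothetical choices for illustration, while the paper's actual bins are given in its Table 1:

```python
import pandas as pd

# Hypothetical records with birth date and filing date in mixed separators
people = pd.DataFrame({
    "date_of_birth":  ["1955-03-02", "1948/11/20", "1972-07-15"],
    "date_of_filing": ["2020-02-10", "2020/02/12", "2020-02-15"],
})

# Normalize both columns to a single year-month-day format, then parse
for col in ["date_of_birth", "date_of_filing"]:
    people[col] = pd.to_datetime(people[col].str.replace("/", "-"),
                                 format="%Y-%m-%d")

# Age = difference between filing date and birth date, in whole years
people["age"] = (people["date_of_filing"] - people["date_of_birth"]).dt.days // 365

# Discretize the continuous age into categorical bins for classification
bins = [0, 40, 60, 80, 120]
labels = ["<40", "40-59", "60-79", "80+"]
people["age_group"] = pd.cut(people["age"], bins=bins, labels=labels, right=False)
print(people[["age", "age_group"]])
```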

Go here to read the rest:

COVID19 patient diagnosis and treatment data mining algorithm based on association rules - Wiley

Data Mining Software Market 2021 Detailed Analysis of top Ventures with Regional Outlook | Key Companies: IBM, RapidMiner, GMDH, SAS Institute,…

The latest research report on the Global Data Mining Software Market provides a cumulative study on the COVID-19 outbreak to provide the latest information on the key features of the Data Mining Software market. This intelligence report contains investigations based on current scenarios, historical records and future forecasts. The report contains various market forecasts related to market size, revenue, production, CAGR, consumption and gross margin in the form of charts, graphs, pie charts, tables and more. While emphasizing the main driving and restraining forces in this market, the report also offers a comprehensive study of future trends and developments in the market. It also examines the role of the major market players involved in the industry, including their business overview, financial summary and SWOT analysis. It provides a 360-degree overview of the industry's competitive landscape. The Data Mining Software market shows steady growth, and its CAGR is expected to improve during the forecast period.

The Global Data Mining Software Market Report gives you in-depth information, industry knowledge, market forecast and analysis. The global Data Mining Software industry report also clarifies financial risks and environmental compliance. The Global Data Mining Software Market Report helps industry enthusiasts including investors and decision makers to make reliable capital investments, develop strategies, optimize their business portfolio, succeed in innovation and work safely and sustainably.

Get FREE Sample copy of this Report with Graphs and Charts at: https://reportsglobe.com/download-sample/?rid=283372

The segmentation chapters enable readers to understand aspects of the market such as its products, available technology and applications. These chapters are written to describe their development over the years and the course they are likely to take in the coming years. The research report also provides detailed information on new trends that may define the development of these segments in the coming years.

Data Mining Software Market Segmentation:

Data Mining Software Market, By Application (2016-2027)

Data Mining Software Market, By Product (2016-2027)

Major Players Operating in the Data Mining Software Market:

Company Profiles: This is a very important section of the report that contains accurate and detailed profiles for the major players in the global Data Mining Software market. It provides information on the main business, markets, gross margin, revenue, price, production and other factors that define the market development of the players studied in the Data Mining Software market report.

Global Data Mining Software Market: Regional Segments

The dedicated section on regional segmentation gives the regional aspects of the worldwide Data Mining Software market. This chapter describes the regulatory structure that is likely to impact the complete market. It highlights the political landscape in the market and predicts its influence on the Data Mining Software market globally.

Get up to 50% discount on this report at: https://reportsglobe.com/ask-for-discount/?rid=283372

The Study Objectives are:

This report includes the estimation of market size for value (million USD) and volume (K Units). Both top-down and bottom-up approaches have been used to estimate and validate the market size of Data Mining Software market, to estimate the size of various other dependent submarkets in the overall market. Key players in the market have been identified through secondary research, and their market shares have been determined through primary and secondary research. All percentage shares, splits, and breakdowns have been determined using secondary sources and verified primary sources.

Some Major Points from Table of Contents:

Chapter 1. Research Methodology & Data Sources

Chapter 2. Executive Summary

Chapter 3. Data Mining Software Market: Industry Analysis

Chapter 4. Data Mining Software Market: Product Insights

Chapter 5. Data Mining Software Market: Application Insights

Chapter 6. Data Mining Software Market: Regional Insights

Chapter 7. Data Mining Software Market: Competitive Landscape

Ask your queries regarding customization at: https://reportsglobe.com/need-customization/?rid=283372

How Reports Globe is different from other Market Research Providers:

The inception of Reports Globe has been backed by providing clients with a holistic view of market conditions and future possibilities/opportunities to reap maximum profits out of their businesses and assist in decision making. Our team of in-house analysts and consultants works tirelessly to understand your needs and suggest the best possible solutions to fulfill your research requirements.

Our team at Reports Globe follows a rigorous process of data validation, which allows us to publish reports from publishers with minimum or no deviations. Reports Globe collects, segregates, and publishes more than 500 reports annually that cater to products and services across numerous domains.

Contact us:

Mr. Mark Willams

Account Manager

US: +1-970-672-0390

Email: [emailprotected]

Website: Reportsglobe.com

See more here:

Data Mining Software Market 2021 Detailed Analysis of top Ventures with Regional Outlook | Key Companies: IBM, RapidMiner, GMDH, SAS Institute,...

Top Roles and Skills needed to build a Data Science Team – APN News

Published on November 1, 2021

By Lakshmi Mittra, VP and Head, Clover Academy

Data is a major part of every organization. Organizations need professionals who can work with data to derive insights and help with business-critical decisions.

With the recent advancement in technology, data roles are changing along with increased adoption of AI & ML. Many of us are familiar with roles such as data scientist and data analyst but a true data science team consists of much more.

Why Should One Invest in Building a Data Science Team?

Data is an important asset of an organization and the management must plan and build a data science team to utilize the asset (data) and derive information and insights. The following are some reasons that organizations must invest in building a data science team:

To plan future actions based on insights

To identify growth opportunities

To help in better decision making

Organizations structure their teams based on cost, program objectives, and overall organizational structure. However, there are a few common data science team structures that are followed globally:

Key Data Science Roles

Regardless of the industry and scale of an organization, data science teams must have the ability to understand business, embrace technology, and deliver analytics. Big players typically have a mix of data science roles and have a large team working in cohesion. Small enterprises may start with one professional who can deliver the required insights and may scale up from there if required.

Here are some key roles to consider when building a data dream team.

In Conclusion

Data science teams can supplement different business units and operate within their specific fields of analytical interest. Depending on the scale of an organization and their analytics objectives, data science teams can be reshaped to boost operational speed and foster business critical decision-making.

Original post:

Top Roles and Skills needed to build a Data Science Team - APN News

Here are the 14 most important pieces of surveillance technology that make up the US ‘digital border wall,’ according to immigrant-rights groups -…

The US-Mexico border wall near Sasabe, Arizona. Ross D. Franklin/Associated Press

Mijente, Just Futures Law, and the No Border Wall Coalition highlight how CBP uses digital tools.

A new report says CBP relies on cameras, drones, and license-plate-reading tech to monitor migrants.

ICE also keeps biometric information like face, iris, fingerprint, and voice data, a report said.

US Customs and Border Protection and Immigration and Customs Enforcement use phone hacking, license-plate scanning, iris recognition, and a host of other surveillance technology to monitor the US border, according to a report released on Thursday by immigrant-rights groups.

"The Deadly Digital Border Wall," compiled by Mijente, Just Futures Law, and the No Border Wall Coalition, doesn't identify every piece of software and hardware the agencies use. One example is Raven, short for Repository for Analytics in a Virtualized Environment, a data-mining and analytics tool ICE is developing. But it does provide an overview of some of the key technologies that ICE and CBP rely on.

Cinthya Rodriguez, a Mijente organizer, said the US border was a "testing ground" for more widespread expansions of government surveillance.

"We remain steadfast in our #NoTechForICE demand and call on the Biden administration to follow up on promises made to immigrant communities with meaningful action," she added.

Here are the top 14 border-surveillance technologies, according to the report:

Automated license-plate recognition

This is a camera system that takes pictures of car license plates as they pass by and logs other relevant information like date, time, and place. Immigration agencies install these cameras at and near borders. CBP pays Motorola Solutions for it, while ICE pays Vigilant Solutions.

Integrated fixed towers

Along the US-Mexico border, CBP operates tall towers designed to spot people from almost 10 miles away. The agency has paid the Israeli military contractor Elbit Systems to install these structures.

Remote video surveillance

These surveillance towers are slightly smaller and movable. General Dynamics won a $177 million contract for installing these in 2013. The project is scheduled to be completed in 2023.


Mobile video-surveillance system

This consists of a telescope, a laser light, a thermal-imaging device, and a video system that's attached to the back of a truck, which itself is equipped with geospatial analytics. Since 2015, Tactical Micro has been providing CBP with dozens of the systems, and PureTech Systems has been providing the in-vehicle geospatial analytics.

Autonomous surveillance towers

CBP uses these towers to scan the landscape for people. Anduril has been providing them since 2018. They come equipped with software designed to distinguish people from animals and store images of human faces.

Drones

CBP has used large and small drones to surveil remote areas of land, and even the sea, for evidence of migrants. General Atomics, AeroVironment, FLIR Systems, Lockheed Martin, and Anduril have all provided CBP with surveillance drones.

E3 portal

This software sends files of fingerprint scans, pictures of faces, and iris images - captured by separate pieces of hardware - to both "ICE's case management system for Enforcement and Removal Operations (ERO) and to the DHS-wide IDENT database," the report said. According to the Department of Homeland Security, the automated biometric identification system stores "260 million unique identities and processes more than 350,000 biometric transactions per day."

HART biometric database

According to the report, the Homeland Advanced Recognition Technology (HART) System is a database of biometric data such as face images, DNA profiles, iris scans, fingerprint scans, and audio files, called "voice prints," that's hosted on Amazon Web Services and being developed by Northrop Grumman. It will eventually "replace the automated biometric identification system (IDENT) currently used by DHS," the report said.

Biometric facial comparison

This is a face-matching tool that CBP uses at the US border and entry ports like airports. A camera takes a picture of a traveler and compares it with a passport or other form of ID. CBP says it stores comparison photos of US citizens in IDENT for 15 years.

CBP One application

This is a mobile app on which Border Patrol agents can collect biometric and other personal information on migrants and asylum seekers. Data that can be logged in to the app includes phone numbers, employment and family information, marital status, people traveling together, permanent address abroad, and destination in the US, as well as a photograph to be run through CBP biometric records, the report said.

Mobile-phone hacking

Using services from the Israeli company Cellebrite, Canada's Magnet Forensics, and the US company Grayshift, CBP can search mobile phones and other devices for information about a person. Grayshift and Cellebrite can sometimes unlock phones. US border agents don't need warrants to search people's devices, according to a 2019 court ruling, and border agents searched more than 40,000 devices in 2019, Reuters reported.

Vehicle forensic kits

These are pieces of software designed to extract personal information stored in cars, including call logs and navigation histories. CBP has paid Berla Corp., which partnered with the Swedish mobile-forensics company MSAB, for this service.

Venntel location tracking

Venntel, a subsidiary of the data broker Gravy Analytics, sells licenses to CBP and other DHS entities that allow them to use consumer cellphone data to track migrants and asylum seekers. A DHS document leaked to BuzzFeed News said it was possible to "combine" the consumer location data "with other information and analysis to identify an individual user."

Intelligent computer-assisted detection

ICAD is a system of underground sensors and cameras that "detects the presence or movement of individuals" and sends data back to CBP. Border Patrol agents then seek out the people captured on ICAD, ask these people for their biographic data, and store this information, according to the report.

Read the original article on Business Insider

More here:

Here are the 14 most important pieces of surveillance technology that make up the US 'digital border wall,' according to immigrant-rights groups -...

Data Preprocessing in Data Mining – GeeksforGeeks

Preprocessing in Data Mining: Data preprocessing is a data mining technique used to transform raw data into a useful and efficient format.


Steps Involved in Data Preprocessing:

1. Data Cleaning: The data can have many irrelevant and missing parts. To handle this, data cleaning is done. It involves handling missing data, noisy data, etc.

2. Data Transformation: This step is taken in order to transform the data into forms appropriate for the mining process. This involves the following ways:

3. Data Reduction: Since data mining is a technique used to handle huge amounts of data, analysis becomes harder when working with such volumes. To address this, data reduction techniques are used. They aim to increase storage efficiency and reduce data storage and analysis costs.

The various steps to data reduction are:
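As a rough end-to-end illustration of these three steps, here is a minimal Python sketch; the DataFrame, column names and parameter choices are hypothetical and are not taken from the article:

```python
import pandas as pd
from sklearn.preprocessing import MinMaxScaler
from sklearn.decomposition import PCA

# Hypothetical raw dataset with missing values
df = pd.DataFrame({
    "age":    [25, None, 47, 52, 36],
    "income": [40_000, 52_000, None, 88_000, 61_000],
    "visits": [3, 5, 2, 8, 4],
})

# 1. Data cleaning: fill missing numeric values with the column mean
df = df.fillna(df.mean(numeric_only=True))

# 2. Data transformation: normalize all attributes to the [0, 1] range
scaled = MinMaxScaler().fit_transform(df)

# 3. Data reduction: project the scaled attributes onto fewer dimensions
reduced = PCA(n_components=2).fit_transform(scaled)
print(reduced.shape)  # (5, 2)
```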

Read the original here:

Data Preprocessing in Data Mining - GeeksforGeeks

Miners Are The Optimal Buyers: The Data Behind Bitcoin-Led Decarbonization In Texas – Bitcoin Magazine

Recently, Ars Technica published an article from staff writer and environmental science PhD Tim de Chant, aiming to rebut Texas Senator Ted Cruz's comments from the Texas Blockchain Summit earlier this month.

De Chant took issue with the following statement from Cruz:

Because of the ability of bitcoin mining to turn on or off instantaneously, if you have a moment where you have a power shortage or a power crisis, whether it's a freeze or some other natural disaster where power generation capacity goes down, that creates the capacity to instantaneously shift that energy to put it back on the grid.

De Chant offered a number of responses, but generally seems to misunderstand the substance of Senator Cruz's point. Additionally, he made a significant mathematical error (later retracted) that called into question his literacy on Bitcoin mining.

But first, it's worth quoting Cruz in full, as the intent of his claims is lost without the full context. We have included a transcript excerpt of Cruz's comments on mining from his conversation at the summit with Jimmy Song below, in which he referenced a recent winter storm that left many in Texas without access to power for days:

There were lots of things that went wrong [during the winter storm] that I think are worthy of study, but I do think that Bitcoin has the potential to address a lot of aspects of that. Number one, from the perspective of Bitcoin, Texas has abundant energy. You look at wind, we're the number one wind producer in the country, by far. Number two, I think there are massive opportunities when it comes [indistinct audio]. If you look at natural gas right now, in West Texas, the amount of natural gas that is being flared: 50% of the natural gas in this country that is flared is being flared in the Permian right now in West Texas. I think that is an enormous opportunity for Bitcoin, because that's right now energy that is just being wasted. It's being wasted because there is no transmission equipment to get that natural gas where it could be used the way natural gas would ordinarily be employed; it's just being burned.

And so some of the really exciting endeavors that people are looking at is: can we capture that gas instead of burning it? Use it to put in a generator right there on site. Use that power to mine Bitcoin. Part of the beauty of that is, the instant you're doing it, you're helping the environment enormously because rather than flaring that natural gas you're putting it to productive use. But secondly, because of the ability of Bitcoin mining to turn on or off instantaneously, if you have a moment where you have a power shortage or a power crisis, whether it's a freeze or some other natural disaster where power generation capacity goes down, that creates the capacity to instantaneously shift that energy to put it back on the grid. If you're connected to the grid, they become excess reserves that can strengthen the grid's resilience by providing a significant capacity of additional power to be available for critical services if and when it is needed. So I think that has enormous potential, and it's something that in five years I expect to see a dramatically different terrain, with Bitcoin mining playing a significant role as strengthening and hardening the resilience of the grid.

It's a weird point. A lot of the discussion around Bitcoin views Bitcoin as a consumer of energy. A lot of the criticism directed at it is the consumption of energy. The perspective I'm suggesting is very much the reverse, which is as a way to strengthen our energy infrastructure. And one of the exciting things about crypto also is the ability to unlock stranded renewables. So there are a lot of places on earth where the sun shines a lot and the wind blows a lot but there aren't any power lines. And so it's not economically feasible to use that energy. And the beauty of Bitcoin mining is that if you can connect to the internet, you can use that energy and derive value from those renewables in a way that would be impossible otherwise. And I think we're going to see in the next five years massive innovations in that regard as well.

De Chant made a number of points in reaction to Cruz's statements. We will tackle them in turn.

De Chant starts with the admission that it stands to reason that bitcoin mining could create enough demand that investors would be enticed to build new power plants. Those plants could theoretically be tasked with providing power to the grid in cases of emergency.

But this isn't really the point that Cruz and the Bitcoin community are making. Instead, we are pointing out that power providers will have improved economics from the existence of bitcoin mining as an additional source of offtake. These improved economics could induce extra construction. But we haven't come across the suggestion that mining would finance the construction of bitcoin-only plants that would be directed to the grid in emergency situations.

The other claim is that bitcoin miners represent a unique type of interruptible load, whose ability to dial back energy consumption can help safeguard the grid from instability.

De Chant continued by pointing out that the February blackout in Texas was caused by significant winter storms in conjunction with a poorly weatherized grid, although Cruz completely acknowledged this in his remarks. This doesn't score a point against Cruz; he's fully aware of why the grid failed: significant winter storms in conjunction with a poorly weatherized grid, alongside other contributing factors such as natural gas delivery. What happened was that, alongside power plant failures, the natural gas infrastructure was unable to deliver natural gas to power plants. Additionally, the Electric Reliability Council of Texas (ERCOT) underforecasted its high-case peak load scenario by around 10 gigawatts (GWs), which was a huge miss.

Cruz was not claiming that Bitcoin would prevent a black swan weather-driven grid meltdown. Ultimately, only better planning can do this.

De Chant continued by pointing out that Bitcoin miners wouldn't spend extra cash to winterize their operations. But this is a confusing point: De Chant appears to be conflating miners and energy producers. In practice, the two are distinct. General grid failures have nothing to do with Bitcoin, and no one is suggesting that Bitcoin will cause power plants to fully avoid two-sigma tail events.

The main line of argument from De Chant is simply his claim that the economics of mining don't support curtailment, even when prices are high. In his words: "Bitcoin miners would be unlikely to offer their generating capacity to the grid unless they were sufficiently compensated." In the first version of his article, he originally claimed that miners would need to be paid $31,700 per megawatt hour (MWh) during the February 2021 winter storm to turn off their machines, an estimate which he later revised to $600 per MWh. But both estimates are erroneous.

Even for the highest-end equipment (Antminer S19s), the turn-off point in February 2021 for miners would have been $480 per MWh. Older equipment has a lower turn-off threshold as it is more sensitive to electricity prices. When electricity prices reach a certain threshold, miners are no longer breaking even and turn off their machines whether or not they are enrolled in a formal grid program to compensate them for downtime.

Miners are acutely aware of their economics and can adjust to grid conditions in real time. De Chant was off by a factor of 66 in his initial estimate. In his revised estimate, he maintained, erroneously, that miners would turn off their rigs at $600 per MWh, which is still an overestimate. Put simply, Bitcoin miners are highly price sensitive and engage in economic dispatch, meaning that they react to prices and simply do not run their equipment if electricity prices get too high. This is independent of whether they are participating in a demand response program, which formally enlists power consumers to curtail their usage during periods of electricity scarcity.
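To see how such a turn-off threshold can be estimated, here is a minimal sketch; the hash price and machine specifications below are illustrative placeholders, not the figures behind the $480 per MWh estimate above:

```python
# Estimate the electricity price (USD/MWh) above which a mining rig loses money.
# All inputs are illustrative assumptions.

def breakeven_power_price(hashrate_th: float, power_kw: float,
                          usd_per_th_day: float) -> float:
    """Power price (USD/MWh) at which daily mining revenue equals daily power cost."""
    daily_revenue_usd = hashrate_th * usd_per_th_day    # USD earned per day
    daily_energy_mwh = power_kw * 24 / 1000             # MWh consumed per day
    return daily_revenue_usd / daily_energy_mwh

# Hypothetical S19-class machine: 110 TH/s at 3.25 kW, with an assumed
# "hash price" of 0.30 USD per TH/s per day.
print(round(breakeven_power_price(110, 3.25, 0.30)))    # ~423 USD/MWh
```

Above this price a rational operator shuts the machine down; below it, every hour of runtime is profitable, which is the economic dispatch behaviour described above.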

In the below chart, you can see that miners would have turned off their machines well before the $9,000 per MWh price cap was reached for electricity in ERCOT.

The precise threshold at which miners curtailed their usage depends on the types of machines employed: higher-end machines have a higher opportunity cost, and are hence kept online through more expensive periods of power pricing.

Electricity is generally cheap in ERCOT, which might imply relatively few instances in which miners would curtail their usage. But of course, the average doesn't tell the whole story. The nature of the spot-driven grid is that much of the time, energy is cheap or even free (depending on where it's being consumed), and a small fraction of the time it's very scarce and expensive (this is a feature: the high prices are a signal to incentivize new generation to be built).

It's during those right-tail events that Bitcoin miners can significantly benefit the grid by interrupting their load. Running the rest of the time means that energy is generally more abundant, because the presence of miners is an economic pressure that improves grid economics, making it more worthwhile to build new energy projects (which now, for the first time, have the option to sell their full generation capacity to the grid or to Bitcoin).

For example, Lancium is a Houston-based technology company that is creating software and intellectual property solutions that enable more renewable energy on the grid. In 2020, it was the first company ever to qualify a load as a controllable load resource (CLR) (more on these later).

As of today, the company owns and/or operates all load-only CLRs in ERCOT with approximately 100 MWs of Bitcoin mining load under control for CLR. These mining facilities are being optimized on both a daily and hourly basis to mine when it is economic to do so and to turn down when it is not.

It's worth diving into the distribution of power prices on a grid like ERCOT to fully understand how miners engage with the grid. Much of the time, energy is abundant and cheap. In West Texas, prices are routinely negative, as the supply of wind and solar periodically vastly outstrips demand, and there's a limited ability to export the supply to load centers elsewhere in Texas.

What the miners do is provide a load resource which eagerly gobbles up negatively priced or cheap power (everything on the left side of the chart), while interrupting itself during those right-tail events (you can see the winter storm to the right).

On the one hand, this improves the economics of energy producers, which for the first time have a new buyer to sell their electricity to, beyond just the inflexible grid. This promotes the construction of more renewable energy infrastructure and improves the prospects for existing installations. On the other hand, a highly interruptible load that can tolerate downtime means there's more power available for households and hospitals during periods of scarcity, when supply trips offline through weather or other interruptions.

From the miners' perspective, accepting interruptions to their service is actually an economically rational decision, for two reasons:

The below table shows the average yearly electricity price for consumers willing to tolerate various amounts of downtime. You can see that if you strategically avoided high-priced periods (as miners are motivated to do), you dramatically saved on power overall.

In 2021, with the right-tail event due to the winter storm causing prices to spike, if you reduced your uptime expectation from 100% to 95%, you were able to drive your overall power cost for the year from $178 per MWh to a mere $25 per MWh. So, the grid does not need to rely on the beneficence of miners to expect them to turn off their machines during times of grid stress: as profit-maximizing entities, they have a clear economic motive to do so.
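The arithmetic behind that kind of table can be sketched as follows; the synthetic price series below is only a stand-in for a year of actual ERCOT real-time settlement prices:

```python
import numpy as np

# Synthetic hourly wholesale prices (USD/MWh) with a fat right tail,
# standing in for a year of ERCOT real-time settlement data.
rng = np.random.default_rng(0)
prices = rng.lognormal(mean=3.0, sigma=0.8, size=8760)
prices[rng.choice(8760, size=80, replace=False)] = 9000   # scarcity hours at the cap

def average_cost(prices: np.ndarray, uptime: float) -> float:
    """Average price paid by a load that curtails during the most expensive hours."""
    keep = int(len(prices) * uptime)
    cheapest_hours = np.sort(prices)[:keep]   # run only during the cheapest hours
    return float(cheapest_hours.mean())

for uptime in (1.00, 0.95, 0.90):
    print(f"{uptime:.0%} uptime -> {average_cost(prices, uptime):7.2f} USD/MWh")
```

Dropping just the most expensive 5% of hours removes essentially all of the scarcity spikes, which is why the average cost collapses for a load willing to curtail.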

For a more holistic look at what prices in ERCOT have done during the last five years, we have included a chart showing the cumulative distribution by year below. Given that it hosted the winter storm, 2021 has the fattest right tail, with 5% of hours being priced over $100 per MWh.

You can see that wholesale spot prices are low much of the time, but are characterized by extreme spikiness as you get to the last 15% of the distribution. Neither tail is desirable: negative or low prices indicate an excess of supply causing a mismatch, and imply poor economics for energy producers; extremely high prices are indicative of blackouts and households not getting the energy they need. The presence of flexible load on the grid chops off both tails of the distribution. It is not a panacea and it cannot stop poorly-winterized equipment from failing during once-a-century storms, but the net effect is positive regardless.

Additionally, the existence of flexible load is so useful to grid operators that they have designed specific programs to pay these load centers for a type of grid insurance. Broadly, these programs are known as demand response (DR). This term covers a range of load responses that generally reduce load at the instruction of the grid operator. Virtually all independent system operators maintain demand response programs, but most of them have programs that require 10 to 30 minutes of response time on the load.

In fact, on a percentage of peak demand basis, ERCOT lags its peers like MISO (the Midcontinent Independent System Operator) when it comes to enrolling utilities in demand response.

As ERCOT is a single balancing authority interconnection that is not synchronously connected with any other interconnection, it is essentially an islanded electrical grid. This means that ERCOT cannot lean on its neighbors for help when faced with an expected energy shortfall and instead must balance on its own.

Texas leads all states in having the highest levels of installed wind generation capacity in the country and is expected to double its renewable capacity over the next three to five years. Being an islanded grid with a significant portion of energy supply coming from renewables requires ERCOT to procure and utilize more responsive DR products, with requirements to respond in seconds or even at the sub-second frequency in addition to the more traditional 10-to-30-minute response times.

What De Chant simply failed to mention but Ted Cruz hinted at is the remarkable ability of miners to act as these controllable load resources.

In ERCOT parlance, this is a type of power consumer that can dial down its consumption and back up again in response to grid operator commands on a second-by-second basis. Most data centers can't do this; in fact, the selling point for many data centers is precisely their high uptime and non-interruptibility.

The Bitcoin network is a much more forgiving client: it doesn't really care if you interrupt the action of mining, because each successive hash is statistically independent of the last (this is known as memorylessness). Aside from making slightly less revenue, nothing adverse happens if a Bitcoin mining data center only runs at 60% or even 0% capacity for a few minutes or hours. Compare that to a hospital, a smelter, a factory, or commercial real estate. These sources of load need constant uptime and cannot tolerate interruptions.
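The memorylessness claim can be illustrated with a tiny simulation; the per-attempt success probability below is an arbitrary toy value, vastly larger than in real mining:

```python
import numpy as np

# Each hash attempt succeeds independently with the same tiny probability,
# so the expected number of remaining attempts never depends on past failures.
rng = np.random.default_rng(42)
p = 1e-4                                    # toy per-attempt success probability
waits = rng.geometric(p, size=200_000)      # attempts needed to find a "block"

overall_mean = waits.mean()
# Condition on having already failed 5,000 times: the additional wait is unchanged.
additional = waits[waits > 5_000] - 5_000
print(round(overall_mean), round(additional.mean()))   # both close to 1/p = 10,000
```

Because the process has no memory, switching a rig off and on again costs nothing beyond the revenue foregone while it was off.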

Due to the statistical properties of mining and the physical tolerance of mining hardware to interruption, Bitcoin data centers can therefore dial their consumption up and down on a highly granular basis and on short notice.

For a grid operator, such a load type is a dream, because it gives them the ability to balance supply and demand from the demand side, rather than having to tweak supply (typically by spinning up and down natural gas turbines). There have historically been some semi-interruptible loads that grid operators relied on for similar programs, like arc furnaces, wood pulp production, cement mills, or aluminum electrolysis, but none could provide the flexibility or response times Bitcoin miners can.

The industries mentioned are industrial loads which cannot easily power up and down, and certainly not on extremely short notice, as is necessary for a modern CLR. For context, CLRs have to be able to curtail their targeted load reduction by 70% within 16 seconds. Before Bitcoin mining, no load type qualified in ERCOT.

You can think of a CLR as a power generator in reverse. Instead of adding expensive power to the grid during a period of scarcity, the CLR receives a real-time price signal from the grid operator, and if it is above its economic turn-off point, it will automatically dispatch down (curtail consumption) to make way for other, more critical loads. Therefore, instead of only having flexible (and CO2-emitting) thermal energy from coal or natural gas generation available to the grid operator during peak demand periods, the CLR capacity not reserved as grid insurance is offered into ERCOT's security-constrained economic dispatch (SCED) and will automatically dispatch down when the real-time price is higher than the turn-off point for the bitcoin mining load.

An added benefit to ERCOT in having bitcoin mining loads as a load resource is that during local shortages or system emergencies, ERCOT can directly turn down the load. This is a very big deal. For the data center, it's a great deal, because it can sell ancillary services (basically, a bundle of products that give the grid operator the right to curtail the data center's consumption should they need to), collect a premium for doing so, and mine the rest of the time. So, it collects a premium on an ongoing basis (even if not called upon to curtail its usage), effectively lowering its all-in power cost, while also providing a valuable service to the grid.
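In schematic terms, the dispatch behaviour described above reduces to a simple rule. The sketch below is only an illustration of the idea, not ERCOT's or any operator's actual control logic, and all numbers are hypothetical:

```python
def clr_setpoint_mw(real_time_price: float, turn_off_price: float,
                    full_load_mw: float, instructed_curtailment_mw: float = 0.0) -> float:
    """Target consumption for a bitcoin-mining controllable load resource (CLR).

    The site runs flat out while the real-time price is below its economic
    turn-off point, and also honours any direct curtailment instruction
    from the grid operator.
    """
    if real_time_price >= turn_off_price:
        return 0.0                                        # economic dispatch-down
    return max(full_load_mw - instructed_curtailment_mw, 0.0)

# Hypothetical 100 MW mining site with a 450 USD/MWh turn-off point.
for price, instructed in ((-5, 0), (30, 0), (30, 70), (800, 0)):
    print(f"{price:>5} USD/MWh, curtail {instructed:>2} MW -> "
          f"{clr_setpoint_mw(price, 450, 100, instructed):5.1f} MW")
```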

In contrast, a generation resource which sells ancillary services has a real opportunity cost: it has to run below its maximum in order to retain some slack in case it is called upon to increase its power.

So, when Cruz mentioned the possibility of Bitcoin mining playing a significant role as strengthening and hardening the resilience of the grid, he is likely referring to the strong benefits that interruptible load offers to a grid operator. The existence of qualifying CLRs means that policymakers can target structurally-higher renewable penetration and feel comfortable in the grid operator's ability to procure more insurance against adverse events. As grids become increasingly renewable and move from fossil-fuel-powered steady baseload to more volatile wind and solar power, these kinds of controllable loads will become increasingly critical.

Additionally, the ability of Bitcoin miners to colocate with renewable assets and act as an independent buyer when the grid has no demand provides a base level of monetization which was not available previously. This incentive means that intermittent energy sources like wind and solar (which are often curtailed, as they are frequently distant from load centers) have improved economics.

Indeed, an analysis from Dr. Joshua Rhodes and Dr. Thomas Deetjen with IdeaSmiths LLC demonstrated that flexible data centers would actually promote the stability of an increasingly-renewable grid and allow for more renewable penetration than the grid could otherwise support.

The analysis from Rhodes and Deetjen found that operating data centers in a flexible manner can result in a net reduction of carbon emissions and can increase the resilience of the grid by reducing demand during high-stress times (low reserves) on the grid.

Under the scenario where 5 GWs of flexible data center growth is added to the base case, with a range of uptimes between 85% and 87%, the flexible data centers consume about 35.5 million MWh but support the deployment of an additional 39.5 million MWh of wind and solar energy.

In simple terms, the incremental MWh output from solar and wind is greater than the incremental MWh consumption from the flexible data centers hence, carbon negative.

For a visualization of how the intermittency of wind and solar affects electricity pricing, we have assembled real data from earlier in October in West Texas in the chart below:

Source: Lancium, ERCOT

Between October 8 and October 10, wind and solar generation averaged over 20 GWs with the power lines that connect West Texas to the major load centers in the east being at maximum capacity. With so much wind and solar online, West Texas power prices averaged $3 for these three days with several hours settling negative. On October 11, wind generation was 20 GWs at around midnight, 2 GWs by noon and back up to 20 GWs by end of day. As wind dropped, power flowing across the West Texas power lines also dropped, which caused West Texas prices to be on par with the rest of ERCOT.

This drop in wind means natural gas must yet again pick up the slack. A grid with even higher wind and solar penetration would face these problems in abundance. Having a significant quantity of flexible loads on the grid to dial down consumption during rapid drawdowns in renewable generation would help attenuate spikes in power prices, without requiring as much support from less efficient combustion turbine peakers.

In our view, contemporary industrial Bitcoin mining has four key properties that make the industry an extremely suitable buyer of energy for increasingly renewable grids. These are interruptibility, unconstrained location agnosticism, scale independence, and attenuation:

Among industrial load centers, these qualities are unique. Prior to Bitcoin, there simply wasn't a load resource that satisfied these four qualities. Some industrial consumers of load have some of these features, but none with such fidelity. Aluminum smelting, for instance, has a degree of location agnosticism, as has been well documented with Alcoa smelters being colocated with abundant energy resources, effectively exporting energy via the refined metal.

Certain types of factories like aluminum arc furnaces and paper mills have a degree of interruptibility, but only for short periods of time, and only with significant latency. Certain power companies offer households and industrial consumers the ability to participate in demand-response programs, but only in a reduced capacity and never as a Controllable Load Resource. Non-Bitcoin data centers also exhibit location agnosticism to a certain degree, but require high throughput internet and cannot tolerate interruptibility.

By satisfying these qualities, Bitcoin represents a truly novel industrial load center, and offers superlative benefits to grid operators and policymakers aiming to decarbonize the grid. This permits Bitcoin miners to effectively sell high-quality insurance into the grid by participating in the ancillary service market in ERCOT, something that was mostly limited to the supply-side previously (through thermal generation resources).

Even without these more advanced CLR products, it's clear that miners have a strong economic motive to curtail their consumption during periods of high prices, allowing households to get more power during periods of scarcity.

A joint realization is currently underway. First, Bitcoin miners are learning that there are significant economic benefits to accepting reduced uptime and going from dumb load to smart load. Beyond the merit of economic dispatch, participating in formal demand response programs is an additional source of revenue and hardens the grid, too. Additionally, independent system operators are beginning to discover the possible importance of Bitcoin miners as a flexible load resource, one most unlike those they have historically considered for demand-response programs.

We hope and expect that many more grid operators will realize the significant benefits offered by flexible data centers and begin to design with Bitcoin mining in mind. Their ambitions for increasingly renewable grids may well depend on it.

This is a guest post by Nic Carter and Shaun Connell. Opinions expressed are entirely their own and do not necessarily reflect those of BTC Inc or Bitcoin Magazine.

The rest is here:

Miners Are The Optimal Buyers: The Data Behind Bitcoin-Led Decarbonization In Texas - Bitcoin Magazine

The Benefits of Data Analytics in Clinical Reporting – Diagnostic and Interventional Cardiology

Accurate reporting is essential to producing the reliable analytics needed to meet lab accreditation standards. This is a process Susan Wayne, RRT, RVT, RCDS, is quite familiar with.

Wayne is clinical coordinator for noninvasive cardiovascular services at Carolina Pines Regional Medical Center in Hartsville, S.C. As of this writing, she is working through the re-accreditation process for both the echocardiography and vascular labs, all while providing personal bedside care to cardiac patients each day. "Both our labs are accredited through the IAC, the Intersocietal Accreditation Commission, and we go through this process every three years," she said. "I'm in my window for both of these right now. I need to provide data on volume by sonographer and for each procedure. That all used to be a manual process before."

Wayne's team previously relied on manual log books. "There was always the possibility that a technologist would forget to log something," she recalled. "Each day, we'd have to do charge reconciliation by manually tabulating a report from the previous day to make sure the correct charge went to the right patient. We'd also need to check that each procedure was logged. We might miss billing as many as four or five procedures a month because they were never recorded in our log book."
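For readers unfamiliar with charge reconciliation, the hypothetical sketch below shows the kind of cross-check Wayne describes: yesterday's logged procedures are compared against the charges that were actually posted, and anything unbilled or unlogged is flagged. The records and field names are invented examples, not data from Carolina Pines' systems.

```python
# Hypothetical sketch of daily charge reconciliation: compare the procedures
# performed yesterday against the charges that were posted, and flag the
# mismatches. All records and identifiers are made up for illustration.

performed = {  # (patient_id, procedure) pairs from the procedure log
    ("P001", "echo_complete"),
    ("P002", "carotid_duplex"),
    ("P003", "echo_limited"),
}

billed = {  # (patient_id, procedure) pairs from the charge system
    ("P001", "echo_complete"),
    ("P003", "echo_limited"),
}

never_billed = performed - billed   # work done but no charge posted
never_logged = billed - performed   # charge posted but no log entry

print("Never billed:", sorted(never_billed))
print("Never logged:", sorted(never_logged))
```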

Wayne had heard about other facilities that were using Fujifilm's VidiStar structured reporting to produce data analytics. As Carolina Pines was looking to transition to a PACS, they invited Fujifilm to come onsite and demo its capabilities. It was an easy choice. "Once we were onboarded, the vendor started teaching me about the data mining the system was able to do, something I'd been doing manually before."

Carolina Pines' transition to Fujifilm Data Analytics was seamless. Because it's a cloud-based system, the analytics option was simply added to the existing account, making it accessible as soon as they logged in. Useful pre-built analytics come with the system, but VidiStar also allows custom queries to be saved as presets. "Once we had access, the Fujifilm representative helped me tailor the reports I need so that they run automatically each month," noted Wayne.

For the re-accreditation process, data must be provided showing that reports are logged within 24 hours for inpatients and within 48 hours for outpatients. In addition to the data on volume by sonographer and by procedure type, the accrediting body requires submission of case studies for each physician, which involves selecting individual exams that are as technically perfect as possible. "This used to be more difficult," Wayne said, "because that information needs to be collected and logged over a period of several years. It was a laborious process to keep track of." Now she can check a box on a case that looks like a good candidate and have it propagate into a report she can later review in order to select the exams that best meet the IAC's criteria.
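As a generic illustration of the two accreditation queries described above, and not of the VidiStar interface itself, the sketch below runs pandas over a hypothetical export of exam records: it checks turnaround-time compliance against the 24-hour inpatient and 48-hour outpatient windows and tallies study volume by sonographer and procedure type. All records are invented.

```python
# Generic pandas illustration (not the VidiStar interface) of the two
# accreditation queries described above, run on a hypothetical exam export.
import pandas as pd

exams = pd.DataFrame({
    "sonographer": ["Wayne", "Lee", "Wayne"],
    "procedure":   ["echo_complete", "carotid_duplex", "echo_limited"],
    "setting":     ["inpatient", "outpatient", "inpatient"],
    "performed":   pd.to_datetime(["2021-11-01 08:00", "2021-11-01 09:30",
                                   "2021-11-02 14:00"]),
    "reported":    pd.to_datetime(["2021-11-01 16:00", "2021-11-03 10:30",
                                   "2021-11-02 18:00"]),
})

# Turnaround time in hours, compared against the 24 h inpatient / 48 h
# outpatient reporting windows.
exams["turnaround_h"] = (exams["reported"] - exams["performed"]).dt.total_seconds() / 3600
exams["limit_h"] = exams["setting"].map({"inpatient": 24, "outpatient": 48})
exams["compliant"] = exams["turnaround_h"] <= exams["limit_h"]

print(exams[["sonographer", "setting", "turnaround_h", "compliant"]])

# Volume by sonographer and procedure type for the accreditation submission.
print(exams.groupby(["sonographer", "procedure"]).size())
```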

Wayne is still learning how to build new reports, but Fujifilm has remained a resource during the ramp-up process. "They're always available to guide me through," she said. "They have fantastic customer support. I can either call a toll-free number or send an online message to their help desk. Any time I get stuck building a new report, they'll respond within a couple of hours and help me get it done. They're very responsive."

Another advantage of Carolina Pines' analytics capability is how easy it is to respond to an ad hoc request for data. "I occasionally receive an inquiry about physician statistics from the credentialing department," said Wayne. "It's so easy to pull these with analytics."

She can also pull clean, professional-looking reports for the reading physicians' offices to ensure they are able to bill their professional fees.

The amount of manual labor required to produce reports used to be onerous. For Carolina Pines, that has definitely changed. "I don't know how labs function without data mining," said Wayne. The streamlining provided by data analytics has made her more efficient, better able to support facility administrators' and physicians' needs for information, and able to spend more time with patients.

Wayne started out as a respiratory therapist but was asked to move into the cardiovascular field 24 years ago. Her response at the time: "If the hospital needs me to do it, then I will." That can-do attitude has persisted; combined with the power of Fujifilm Data Analytics, she's more empowered than ever to meet the needs of Carolina Pines.

Editor's note: This blog is the fourth in a four-part series about the benefits of cardiovascular information systems. The first blog is here, the second blog is here, and the third blog is here.

To learn more, visit https://hca.fujifilm.com/it


Go here to see the original:

The Benefits of Data Analytics in Clinical Reporting - Diagnostic and Interventional Cardiology

Ex-WNBA player Alana Beard joins effort to bring expansion team to Oakland – ESPN


Mechelle Voepel, ESPN.com

Former WNBA player Alana Beard has joined the effort to bring a WNBA expansion team to Oakland, it was announced Thursday. The 14-year WNBA veteran will partner with the African American Sports and Entertainment Group (AASEG) in pursuit of a new franchise.

Beard, the No. 2 overall draft pick out of Duke in 2004, played six seasons for the Washington Mystics and then another eight with the Los Angeles Sparks, with whom she won the 2016 league championship.

"As a professional athlete who made the transition into the business world, I understand now more than at any point the importance of having a great team and strong partnerships," said the 39-year-old Beard, who moved to the Bay Area to work in venture capital after her retirement following the 2019 WNBA season. "I've always envisioned being an owner of a WNBA team. It made sense to come together to partner on this."

The WNBA, which just completed its 25th season, has 12 franchises. It has not had an expansion team since the Atlanta Dream in 2008. The Dream, who were sold in February, have another former WNBA player, Renee Montgomery, as part of their ownership group.

"That was something I truly admired," said Beard, a two-time winner of the WNBA's Defensive Player of the Year award and a four-time All-Star. "Kudos to Renee and other women who are pursuing this exact dream."

One of the league's original eight franchises in 1997 was in Sacramento, but that team folded after the 2009 season. The Bay Area has been considered a possibility for expansion since. The short-lived women's American Basketball League had a team in San Jose in 1996-98, and the reigning national champion Stanford Cardinal are perennially one of the best women's college programs in the country.

Beard appeared on a video call Thursday along with Oakland Vice Mayor Rebecca Kaplan and others who support bringing a franchise to Oakland, including Gina Johnson Lillard, the mother of Portland Trail Blazers guard Damian Lillard and the director for the western region of Mothers of Professional Basketball.

"The leadership of Black women here is incredible," said Alicia Garza, co-founder of the International Black Lives Matter Movement. "WNBA, we definitely need and want you in Oakland. This project aligns on every level in terms of my values and the things I prioritize."

The AASEG said the Oakland City Council, Alameda County Board of Supervisors and the Joint Powers Authority Commission have approved a path for a WNBA team to play at Oakland Arena, which was home of the Golden State Warriors from 1971 to 2019. The WNBA said it had no statement on Thursday's announcement from the Oakland group. Commissioner Cathy Engelbert told ESPN earlier this month that the league is hopeful of expansion within the next five years.

"We're doing all that data mining now," Engelbert said. "I suspect by next summer or this time next year, in our 26th season, we'll be talking about the number of teams and a list of where."

Follow this link:

Ex-WNBA player Alana Beard joins effort to bring expansion team to Oakland - ESPN