Category Archives: Data Mining

Advanced modeling of housing locations in the city of Tehran using machine learning and data mining techniques

To determine the research strategy, the theoretical and applied implications of grounded theory, choice theory, evaluation, random utility, and content analysis methods were adopted. Each of these perspectives and approaches directly shaped the description and analysis of this research. Data handled with five data-science libraries in the Python programming language (Online, 2022a-2022o) were used as a reference for data discovery and optimization. Although this method seems simple, it helped to measure, model, and present the findings better.

The following five models were also employed for the measurement and modeling:

Lasso regression: A type of linear regression that draws on shrinkage, meaning that data values are shrunk towards a central point (e.g., the mean). This model best suits data that exhibit strong multicollinearity (Online, 2022a-2022o).
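A minimal sketch of lasso shrinkage with scikit-learn; the synthetic features, coefficients, and the alpha value are illustrative choices, not values from the study:

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))            # e.g. area, year, a noise feature
y = 3.0 * X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.1, size=200)

model = Lasso(alpha=0.1)                 # alpha controls shrinkage strength
model.fit(X, y)
# The L1 penalty shrinks the coefficient of the irrelevant third feature
# towards (typically exactly) zero while keeping the informative ones.
print(model.coef_)
```

The built-in feature selection is why lasso is often preferred when predictors are correlated or partly redundant.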

Kernel regression: A non-parametric statistical model for estimating the conditional expectation of a random variable; it identifies a nonlinear relationship between two variables x and y (Online, 2022a-2022o).
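A self-contained sketch of kernel regression in its Nadaraya-Watson form, using only NumPy; the Gaussian kernel and the bandwidth value are illustrative assumptions:

```python
import numpy as np

def kernel_regression(x_train, y_train, x_query, h=0.3):
    """Estimate E[y | x] at each query point by Gaussian-weighted averaging."""
    # Squared distances between every query point and every training point
    d2 = (x_query[:, None] - x_train[None, :]) ** 2
    w = np.exp(-d2 / (2 * h ** 2))       # Gaussian kernel weights
    return (w * y_train).sum(axis=1) / w.sum(axis=1)

# Recover a nonlinear relationship (a sine curve) without fitting parameters
x = np.linspace(0, 2 * np.pi, 100)
y = np.sin(x)
y_hat = kernel_regression(x, y, np.array([np.pi / 2]))
```

The bandwidth h trades bias against variance: larger values smooth more aggressively, smaller values track the data more closely.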

Elastic net: A regularized model that linearly combines the L1 and L2 penalties of the lasso and ridge methods (Online, 2022a-2022o).
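A sketch of the elastic net with scikit-learn, where `l1_ratio` blends the two penalties (1.0 is pure lasso, 0.0 is pure ridge); the data and hyperparameters are illustrative:

```python
import numpy as np
from sklearn.linear_model import ElasticNet

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 4))
y = 2.0 * X[:, 0] + 2.0 * X[:, 1] + rng.normal(scale=0.1, size=200)

# Half L1, half L2: keeps correlated predictors together (ridge behavior)
# while still shrinking irrelevant coefficients towards zero (lasso behavior).
model = ElasticNet(alpha=0.05, l1_ratio=0.5)
model.fit(X, y)
```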

Gradient boosting regressor: A machine learning method that combines the results of weaker models (e.g., decision trees) to improve learning outcomes (Online, 2022a-2022o).

XGB Regressor: A more powerful implementation of the gradient boosting regressor (Online, 2022a-2022o).
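A sketch of gradient boosting with scikit-learn on a synthetic nonlinear target; the hyperparameters are illustrative, not the study's settings. xgboost's `XGBRegressor` exposes the same `fit`/`predict` interface, so it can be swapped in as a drop-in replacement:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(2)
X = rng.uniform(size=(300, 2))
y = X[:, 0] ** 2 + np.sin(3 * X[:, 1])   # nonlinear, tree-friendly target

# Each of the 200 shallow trees corrects the residual errors of the
# ensemble built so far, which is the core idea of boosting.
model = GradientBoostingRegressor(n_estimators=200, max_depth=3,
                                  learning_rate=0.1)
model.fit(X, y)
print(model.score(X, y))                  # R^2 on the training data
```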

This is an exploratory study that adopts a descriptive-analytical perspective. The research sampling is theoretical, that is, a purposive sampling method in which the researcher performs data mining and explores the phenomenon by drawing on the knowledge and opinions of the subjects (Kopai, 2015). Purposive sampling was also used to collect data, mainly extracted from official sources and statistics (Online, 2022a-2022o). The research data were also derived from a systematic review of documents and techniques over two years. Data analysis was conducted based on grounded theory and coding to discover the priority variables in housing locations. To convert nominal data to numerical data (the column related to the neighborhood), the One Hot Encoding method was applied in the Python programming language as a content and data mining step; converting nominal data to numerical data is a requirement of the learning models. The rationale for using data mining is to scale to the size of existing and future data. Although data mining, like other techniques, can only be conducted with human intervention, it enables analysts, who may not be experts in statistics or programming, to manage the knowledge extraction process effectively (Wickramasinghe, 2005). The study population consisted of 18,000 samples of villas and apartments. After extracting and deleting duplicate data, the data distribution on the map of Tehran was determined, and data analysis was carried out in three steps. First, after validation, 8,000 records from 22 districts and 317 neighborhoods of Tehran were selected and evaluated in terms of nine variables affecting housing prices: warehouse, elevator, parking lot, surface area, neighborhood, rent, mortgage, year, and total secure deposit.
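The One Hot Encoding step described above can be sketched with pandas; the neighborhood names below are made up for illustration:

```python
import pandas as pd

df = pd.DataFrame({
    "neighborhood": ["Saadat Abad", "Punak", "Saadat Abad"],
    "area": [85, 120, 60],
})
# Each neighborhood name becomes its own 0/1 indicator column,
# which is the numerical form the learning models require.
encoded = pd.get_dummies(df, columns=["neighborhood"])
print(encoded.columns.tolist())
```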
Then, the extent of positive or negative correlation among the selected indicators was measured using the Dython library in the Python programming language. Finally, the learning models were evaluated on the existing data using the cross-validation method.
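The cross-validation step can be sketched with scikit-learn; the synthetic data and the model below are illustrative, and the study's 8,000-record dataset is not reproduced here:

```python
import numpy as np
from sklearn.linear_model import Lasso
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(3)
X = rng.normal(size=(500, 5))
y = X @ np.array([1.0, 2.0, 0.0, 0.0, 0.5]) + rng.normal(scale=0.1, size=500)

# Five-fold cross-validation: each fold is held out once for scoring,
# so the mean R^2 estimates accuracy on unseen data, not training fit.
scores = cross_val_score(Lasso(alpha=0.01), X, y, cv=5)
print(scores.mean())
```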

Finally, to enhance research validity, five regression-based models were implemented on the research data, achieving up to 85% accuracy. Based on Table 1, the accuracy of these models was measured using cross-validation (Online, 2022a) in two stages: before and after deleting the outliers and the warehouse column (Table 1).

Negative values in Table 1 indicate very low model accuracy (Online, 2022e); the closer a model's precision is to 1 (assuming a maximum accuracy of 100%), the better the results, and vice versa. The significant improvement in the accuracy of the models arises because the skewness and kurtosis of each data column's value distribution were optimized by deleting the outliers, which was essential for the modeling. Optimizing skewness and kurtosis does not necessarily improve the accuracy of every model (Online, 2022); still, the models adopted in this paper benefited from this optimization in the best possible manner. Since data with a surface area of more than 200 m2 had an asymmetrical distribution, only settlements with a maximum area of 200 m2 were evaluated and measured. In this research, each record includes the house price quoted by the seller according to the determinants of residential housing prices. After selection, the research data were organized into a database, and the columns formed a matrix for valuation and encoding. The matrix contains nine variable columns, namely warehouse, elevator, parking lot, area, neighborhood, rent, mortgage, year, and total deposit, with one record's values in each row. Each of these variables plays a significant role in housing pricing and location. During the research process, the neighborhood name column was converted into columns with numerical variables using the One Hot Encoding (Online-retrieved, 2022) method. For clarity, Table 2 displays the matrix of variables and the data values for selling or buying housing in some Tehran neighborhoods and urban areas (Table 2).

According to the data analysis, some values in the total value column were zero because those properties had been put on sale for a negotiated price, so the equivalent rent and deposit were also zero. Rows containing these values were therefore deleted, because prices outside the natural range interrupt the learning process of the models and yield false predictions.
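The cleaning step above amounts to filtering out the zero-valued rows; the column names and prices below are illustrative, not from the study's database:

```python
import pandas as pd

df = pd.DataFrame({
    "area": [85, 120, 60, 200],
    "total_value": [4_500_000_000, 0, 2_100_000_000, 0],  # Iranian Toman
})
# Drop negotiated-price listings, whose recorded total value is zero
cleaned = df[df["total_value"] > 0].reset_index(drop=True)
```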

For example, Figs. 1 and 2 show the outliers for the columns related to area and total value after the data preprocessing step (Figs. 1 and 2).

This figure shows the density plot of the area column data after the removal of outliers. The x-axis represents the area in square meters, while the y-axis represents the density. The plot indicates the frequency distribution of area sizes within the specified range, highlighting the peak and spread of the data.

This figure illustrates the density plot of the total value column data after removing outliers. The x-axis represents the house total value in Iranian Toman, while the y-axis represents the density. The plot highlights the frequency distribution of house values, showing the range, peak, and overall distribution pattern of the data.

The bulk of the data has a relative value of zero compared to the other data, indicating values that are too large and have low frequency. In the research data section, by limiting the range of values, attempts were made to bring the distribution of values in these columns closer to the normal distribution. The probability functions of the area and total value columns were also plotted before removing the outliers caused by values that are too large or too infrequent. Figures 3 and 4 reveal the results after omitting the outliers (Figs. 3 and 4).

This figure presents a Q-Q (quantile-quantile) plot comparing the probability distribution of the total value column data, post-outlier removal, against a normal distribution. The x-axis represents the theoretical quantiles, while the y-axis represents the sample quantiles. The blue points indicate the observed values, and the red line represents the reference line for a normal distribution. The plot demonstrates how well the data conforms to a normal distribution, with deviations indicating departures from normality.

This figure shows a Q-Q (quantile-quantile) plot comparing the probability distribution of the area column data, after the removal of outliers, to a normal distribution. The x-axis represents the theoretical quantiles, while the y-axis represents the sample quantiles. The blue points indicate the observed values, and the red line represents the reference line for a normal distribution. The plot assesses how closely the area data follows a normal distribution, with deviations highlighting discrepancies from normality.

The statistical studies suggested that the column dedicated to the year of construction also contained abnormal data; hence, houses built earlier than 1995 were removed as outliers. In addition, the skewness and kurtosis of the distribution curve related to the area, year of construction, and Total value, before and after the omission of outliers, are presented in Table 3 (Table 3).

The skewness and kurtosis of each column's distribution curve exert a direct effect on the learning of the prediction models, diminishing or improving their accuracy. The closer the skewness and kurtosis are to their optimal values, the more accurate the models' predictions will be. The skewness and kurtosis of the other columns were not investigated due to the lack of continuous data. Skewness in the range of -0.5 to 0.5 means that the data are relatively symmetric, and kurtosis between -2 and 2 is acceptable (George, 2010). Therefore, the skewness values obtained after removing the outlying data help the models learn better.
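The effect of outlier trimming on skewness can be checked directly with pandas; the synthetic area data below are illustrative, while the 200 m2 cap follows the text:

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(4)
area = pd.Series(np.concatenate([
    rng.normal(100, 25, 950),        # typical listings around 100 m2
    rng.uniform(300, 1000, 50),      # rare, very large properties
]))

# Trimming at the 200 m2 cap removes the long right tail,
# pulling skewness back into the acceptable |skew| <= 0.5 band.
trimmed = area[area <= 200]
print(area.skew(), trimmed.skew())
```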


Greenpeace: Bitcoin mining companies are hiding energy data, Wall Street is responsible

In a new report filed by Greenpeace, the climate group called for Wall Street accountability in crypto mining, and it correlated bitcoin mining to excessive global energy usage.

Greenpeace claimed that Bitcoin (BTC) mining has evolved into a significant industry dominated by traditional financial companies that are buying up and operating large-scale facilities, using lots of energy.

In 2023, global Bitcoin mining used approximately 121 TWh of electricity, comparable to the entire gold mining industry or a country like Poland. This resulted in significant carbon emissions, the report contended, as these facilities consume as much electricity as a small city.

"Despite the guise of Bitcoin being independent from the mainstream financial system, the industry is deeply connected to traditional finance for Bitcoin mining companies to access capital and to enable trading and investing in Bitcoin," the report read.

The report highlighted traditional financial institutions' substantial role in supporting Bitcoin mining. These companies rely on capital from banks, asset managers, insurers, and venture capital firms to build and maintain their operations.

The report identified the top five financiers of carbon pollution from Bitcoin mining in 2022: Trinity Capital, Stone Ridge Holdings, BlackRock, Vanguard, and MassMutual. Together, they were responsible for over 1.7 million metric tons of CO2 emissions, equivalent to the annual electricity use of 335,000 American homes.

Bitcoin mining companies Marathon Digital, Hut 8, Bitfarms, Riot Platforms, and Core Scientific generated emissions comparable to 11 gas-fired power plants.

The report pointed out that Bitcoin's environmental impact compared to its market value is similar to beef production and gasoline from crude oil. It also mentioned that Bitcoin's environmental effects have worsened as the industry has expanded.

Bitcoin uses a lot of electricity due to its Proof-of-Work (PoW) consensus mechanism. Unlike traditional currencies, cryptocurrencies operate through a decentralized digital ledger. Bitcoin's PoW requires miners to solve computationally intensive puzzles that use significant electricity.
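A toy sketch of the proof-of-work idea: search for a nonce so that the block's hash starts with a required number of zero hex digits. Real Bitcoin mining performs the same brute-force search at vastly higher difficulty, which is where the energy cost comes from; the block data and difficulty here are purely illustrative:

```python
import hashlib

def mine(block_data: str, difficulty: int = 4) -> int:
    """Find a nonce whose SHA-256 digest starts with `difficulty` zero digits."""
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
        if digest.startswith("0" * difficulty):
            return nonce
        nonce += 1           # no shortcut exists: every candidate is hashed

nonce = mine("example block")
```

Doubling the difficulty by one hex digit multiplies the expected work by 16, which is why energy use scales with network difficulty.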

"Energy-hungry miners are straining electrical grids across the U.S. and the world, draining electricity when more is needed to power the electrification of housing, transportation, and manufacturing to meet global climate targets," the report read.

The report contended that Wall Street, traditional financiers, and banks are more responsible for the alleged energy disparity than Bitcoin miners themselves. Greenpeace contended that institutions encourage (through tax breaks and bank benefits) miners to use more energy.

The report contended that miners depend on backing from banks and asset managers, and Wall Street and the banking industry are responding favorably, seeking their portion of the rewards.

Greenpeace argued that financial institutions should be more transparent about their environmental incentives to reduce the negative impact of these incentives.

"Bitcoin miners need to disclose data about their energy use and carbon emissions," the report read. "Financial companies also need to report on the financed and facilitated emissions associated with their investments, loans, and underwriting services for Bitcoin mining companies."

They called for Bitcoin miners to pay a fair share for their electricity use, strain on electrical grids, greenhouse gas emissions, water consumption, and disruption to nearby communities. They suggested implementing a different consensus mechanism for Bitcoin to address the current energy-intensive proof-of-work model and ultimately resolve Bitcoin's environmental impact.


NJIT Researcher Michael Houle Proves Theory for Detecting Data Anomalies – NJIT News

In data analysis, it's the outlier information that is usually the most interesting, yet sometimes that information goes unrecognized by the most common evaluation methods because they make inaccurate assumptions.

But now Michael Houle, a senior university lecturer at New Jersey Institute of Technology's Ying Wu College of Computing, along with collaborators in Australia, Denmark and Serbia, has become an outlier himself for developing the math to prove that breaking those assumptions can work better than conventional methods.

"Outlier detection, one of the most fundamental tasks in data mining, aims to identify observations that deviate from the general distribution of the data. Such observations often deserve special attention as they may reveal phenomena of extreme importance, such as network intrusions, sensor failures or disease," they wrote in an award-winning paper about their new proof, "Dimensionality-Aware Outlier Detection," given at the recent SIAM International Conference on Data Mining (SDM24) in Houston.

"Dimensionality is the number of features that you use to describe your data. If you have a 100 pixel by 100 pixel image, which is three colors per pixel, that's 30,000 features," Houle explained.

Dimensionality-Aware Outlier Detection was tested on 800 datasets. It uses a mathematical concept called local intrinsic dimensionality.

"It is awareness of local variations in dimensionality that makes our method unique," Houle noted. "The intrinsic dimensionality can be interpreted as the number of main influences that best describe the distribution of the data in space. It does not depend directly on the number of data features or the dimension of the space itself."

"Intrinsic dimensionality is a popular concept in machine learning, particularly when data is regarded as lying within a sheet, or 'manifold'. The manifold can have a much smaller dimension than the data space, and knowing this number -- the intrinsic dimensionality -- can be an advantage in data modeling and data processing. Fitting a manifold can be very computationally expensive, though."

"Instead, we assess the effective number of dimensions directly, using a notion of intrinsic dimensionality that is entirely local to the data point being tested. Local intrinsic dimensionality infers the dimensional properties through the distribution of distances from the test point to its nearest neighbors within the dataset."

"Using the LID theory, we were able to derive an outlierness criterion that not only took dimensionality properly into account, but did so in a way that the conventional methods have ignored until now."

Traditionally, data miners would seek anomalies using techniques such as local outlier factor (LOF), simplified LOF and something called k-nearest neighbors. Those all ignore dimensionality.

"My main role was the theoretical contribution and coordinating with my colleagues on the experimental design and evaluation. The theory had been in place for quite some time before we found a way to show the effects in practice. And the effect turned out to be elusive, Houle stated.

Houle said he became interested in the subject because he was first interested in something even more arcane.

"Anomaly detection, from my point of view, is only the tip of the iceberg. It was not my first motivation for getting into this line of research - it was just a nice outcome of the theory. Anomaly detection is a classical problem in data mining that has not been fully settled in the two decades since LOF came out. In this context, we were able to nail down through experimental justification what many researchers have been assuming for a while now: intrinsic dimensionality can vary from one part of a dataset to another. And by taking variation in LID into account, we were able to comprehensively outperform anomaly detection methods that have been the state of the art for the past 25 years."

"Outlier detection turned out to be one of the potential application areas. So the way that I'm looking at it is, I'm working on certain fundamentals in data mining, deep learning, indexing, databases and a number of areas. My colleagues and I are still exploring where it can possibly fit, and what we're targeting these days more than anything else is to try to reinterpret machine learning and deep learning in the light of what this model, what this characterization of complexity, could reveal."


Bitcoin in the Permian? Data centers test Texas grid. – E&E News by POLITICO

The nation's most prolific oil region is becoming a hub for industries that could be a major strain on the Texas electrical grid: bitcoin mining and data centers.

The migration of those technologies into the Permian Basin is occurring as the oil and gas industry is also trying to electrify much of its equipment to meet net-zero goals, setting up a clash that could test the region's already overloaded power system.

"It's sort of stunning how much is coming online, and not from oil and gas," said Cyrus Reed, a member of a state committee studying electricity demand and conservation director of the Sierra Club's Lone Star chapter. "It's almost overwhelming."

The situation in Texas is emblematic not only of how surging electricity demand is changing the grid and creating potential connection backlogs nationally, but underscores how many regions long associated with oil and gas production are set to be transformed by new technologies.

The Electric Reliability Council of Texas, the state's main grid operator, estimates electricity demand from industries in the Permian region will more than double by 2030 compared to 2021, with companies consuming 23,959 megawatts at peak demand times, more power than the entire state of Tennessee generates during similar periods.

By 2030, electricity demand from emerging technologies could eclipse that of the oil and gas industry in the region. The majority of that demand, 58 percent, is expected to come from cryptomining operations, according to ERCOT. Currently, almost all of the industrial electricity demand in the Permian comes from oil and gas.

The shift is raising concerns about how the Permian region will be able to keep up with the electricity surge, especially in terms of adding new transmission lines.

Todd Staples, president of the Texas Oil and Gas Association, said a lack of transmission is already a problem.

"That infrastructure is not keeping up, and the need to electrify these operations has [been an] ongoing issue since I started with TXOGA almost 10 years ago," he said. "Growth is going to continue, and companies need reliability with their power supplies."

To address the looming grid crunch, Texas legislators in 2023 ordered a study of transmission and generation needs in the Permian. A draft of the study is expected to be published in June by ERCOT.

Among the issues being examined is how much electrified oil and gas infrastructure will be able to connect to the grid, especially electric fracking rigs that each use about as much power as a small town.

Members of the regional planning committee studying the Permian, who include oil and gas officials, technology companies, ERCOT forecasters, and others, are also examining transmission needs, but there are numerous challenges to building more infrastructure, including adequate funding.

ERCOT declined to make someone available for an interview, and instead referred to an April press release about planning for growth on the Texas grid as a whole.

"As a result of Texas' continued strong economic growth, new load is being added to the ERCOT system faster and in greater amounts than ever before," ERCOT President and CEO Pablo Vegas said in the release. "As we develop and implement the tools provided by the prior two legislatures, ERCOT is positioned to better plan for and meet the needs of our incredibly fast-growing state."

Utility giant Oncor, which is responsible for building transmission lines in the region, said in a statement it has been sharing independent demand growth studies with ERCOT, talking to industries about their power needs and starting to build power line projects.

"Oncor will continue to do its part to support efforts to identify growth needs, incorporate those into long-term infrastructure planning and ensure Texas' electric delivery infrastructure keeps pace with the needs of industry today and in the years to come," said Oncor spokesperson Connie Piloto.

Other industries expected to stress the Permian grid include green hydrogen, which is projected to constitute 22 percent of non-oil and gas electricity demand by 2030, according to ERCOT. Crypto mines are forecast to require even more power, reaching more than 6,957 MW of demand by decade's end.

Ryan Luther, director of energy transition research with Enverus, said his company has tracked a shift of cryptomining operations to West Texas from China, which banned the technology in 2021. The migration is being driven partly by a search for cheap natural gas for power.

"They're trying to find stranded gas that can't get to market," he said of crypto companies.

The reason many are heading to the Permian is because producers are looking for ways to cut emissions by no longer venting or flaring excess natural gas left over from oil production and instead offering the fuel for electricity. Oil and gas developers have been pushed to change their practices partly by EPA, which has finalized several rules in recent months to curb methane emissions.

Among those is a fee of $900 for every ton of methane that oil and gas operators vent or flare during nonemergencies. The fee is set to rise to $1,500 per ton in 2026.

Using excess gas for power production is one of the few alternatives to flaring, especially since limited pipelines in the Permian are available to take excess fuel to larger markets like the Gulf Coast.

"For the producers in the Permian, they would rather see in-basin demand for their gas grow than have to build more gas pipelines. Their primary objective is getting oil to market; gas is not as much in the value mix," said Luther.

Kinder Morgan Executive Chair Richard Kinder, for example, told analysts on the company's first-quarter earnings call that major technology companies with artificial intelligence and data centers are going to want to locate as close to fuel sources as possible, including natural gas.

"The power needed for AI and the massive data centers being built today and planned for the near future require affordable electricity that is available without interruption, 24 hours a day, 365 days a year," he said. "And I think it will tend to be located near reliable electric generation because if you're a Microsoft or a Google, you want that power as close to your facility as possible."

Still, using gas for power instead of venting or flaring it could be controversial among environmental groups, said Doug Lewin, founder and president of Stoic Energy consulting. "A lot of folks won't like it, but venting and flaring is much worse [for emissions] than using [gas] for power."

The Permian region is also a hub for wind, solar and batteries, helping make Texas the renewable capital of the United States. The region can produce 11,747 MW of power at peak times from wind and solar alone, enough to power New York City more than twice over, according to ERCOT.

Much of the available power is trapped in West Texas because of transmission constraints, providing another incentive for energy-hungry companies to move into the region.

"I think if you're an AI data center looking for mostly clean energy with some backup gas nearby, I can't imagine too many better places in the country to go than the Permian," Lewin said. "You can be 70 to 80 percent carbon-free and use gas the other times."

Lewin said he was more skeptical of how much demand from cryptomining could actually come online, a view echoed by Lee Bratcher, president of the Texas Blockchain Council, an industry group.

While there's nearly 7,000 MW of demand from crypto miners' applications in the Permian alone, some of those projects won't come to fruition, Bratcher said.

"A lot of those estimates are significant overestimates. I think ERCOT knows that, but they have to put it in there because there are applications in the queue," Bratcher said.

"There's also not enough investment dollars to build that much infrastructure, so we anticipate a pretty steady growth of about 300 MW per quarter of bitcoin mining that will level off to much lower than that after a year or two. We never anticipate more than 5,000 MW of bitcoin mining in Texas," Bratcher said. That level of bitcoin mining could keep oil and gas as the dominant electricity user in the region.

Bratcher added that bitcoin has growth limits because of its economics, which follow the principle that the more miners come online, the less profitable each unit becomes.

Even if bitcoin doesn't grow as much as expected, the Permian grid is facing major strain because of oil and gas companies electrifying their fleets to help meet emissions goals.

Exxon Mobil, for example, has pledged to achieve net-zero emissions from its Permian operations by 2030, largely by electrifying equipment that currently runs on fossil fuels. Chevron has pledged to reach net zero in its global oil and gas production by 2050.

Occidental Petroleum announced earlier this year it contracted with Axis Energy Services to deploy its first fully electric well service rig, which performs frequent maintenance on equipment. The company's EPIC rig needs a maximum of about 1.25 MW to run, enough electricity to power about 250 homes on a hot summer day.

Clay Holland, senior vice president of operations for Axis, said the advantages of going electric reach beyond oil and gas companies' environmental, social and governance commitments and marketing. The older well service rigs, which run by burning diesel, had ancient braking systems and were prone to equipment failures.

There are also cost savings. Buying and hauling 150 gallons of diesel a day to run the traditional well service rigs is costly, especially when multiplied across rigs in a huge geographic area.

"It does take more investment for EPIC rigs on a day-rate basis, but even with the higher day rate, the efficiency gained in less maintenance and less safety-related downtime has significant cost savings," Holland said. "You can follow the dollar and figure out why it's advantageous; what we have seen in the industry is if it doesn't make dollars and cents, people will run from it."

Other electrified machines use much greater quantities of power than service rigs. Electric fracking rigs, which blast fluid deep below the earth's surface to crack rocks in order to squeeze out more oil and gas, can use more than 25 MW, enough electricity to power more than 6,250 homes at peak demand times in Texas.

But service rigs and other large pieces of equipment in the Permian are not stationary, making it more difficult to site transmission projects.

Connecting equipment like electric frack rigs to the grid and tapping into West Texas' abundant renewable energy supply could prove almost impossible, according to Luther. That's largely because the rigs move from well to well frequently, once they finish fracking in a given spot.

"In the power space, you're not going to build transmission unless you see a 50-year lifespan on that line," Luther said. "Electric fracking fleets [are] not going to be connected because they're going to move around."

Bratcher of the Texas Blockchain Council said miners have been putting down collateral to have more transmission built in the Permian, adding that he doesn't see the electricity demand as a competition between crypto and oil and gas.

"The oil and gas industry has been great for Texas and has been for a long time," Bratcher said. "Bitcoin mining is a new industry that will create rural jobs, like oil and gas has done for decades, and in a unique way."

There isn't a road map for a massive infrastructure build-out of transmission and new power plants in the region in such a short period of time, however.

"All this is kind of new because not many people have done this yet," said Lewin.



Shiny Celebi Masterwork Research, 8th Anniversary Event, Community Day Texts, and more spotted by data miners – Pokémon GO Hub

Attention, Pokémon Trainers! The latest data mining reports are in, and we've got what looks like a new Shiny Celebi Masterwork Research, the texts for both Goomy and Cyndaquil, some texts suggesting we're getting an 8th anniversary research, and more!

Disclaimer: You know the drill by now. Please read through all of this with a grain of salt; we often post data mining reports that take months to release, and we don't want our readers disappointed. Be smart, read this as speculation, and be happy once it goes live.

All the information contained in this article has been provided publicly by the PokeMiners, and this article includes some of my commentary. Remember, while the data miners have provided this information, always take these updates with a grain of salt. Some of these features might take a while to go live or may never go live at all.

This is probably going to be the same as the paid Shiny Mew research that was only available to trainers who hadn't completed the previously offered Shiny Mew research. So if you already have a shiny Celebi, or have the shiny Celebi research but haven't finished it, you won't be able to obtain this.

Looks like we'll be getting this research on June 28th, and it will be obtainable until July 3rd.

Shiny Celebi maybe.

Not a bad haul.

Date and Time, already announced

Standard Community Day stuff

Field Research

The sprites for Slakoth wearing a visor were added, both regular and shiny versions.

It appears that we're getting 8th-anniversary Premium Timed Research. No other details on this yet.

Hard to say what all of this is, but it all points toward what I've been speculating is new AR functionality that will allow us to drop our Pokémon at PokéStops for other trainers to take pictures with. For some reason, Wailord specifically was updated? Perhaps Niantic is using the giant Pokémon for some testing.

Original post:

Shiny Celebi Masterwork Research, 8th Anniversary Event, Community Day Texts, and more spotted by data miners ... - Pokémon GO Hub

The diabetes mellitus multimorbidity network in hospitalized patients over 50 years of age in China: data mining of … – BMC Public Health


Read more:

The diabetes mellitus multimorbidity network in hospitalized patients over 50 years of age in China: data mining of ... - BMC Public Health

Datamining Report: Potential Global GO Fest Catch Cards, Three Ultra Space Wonders Event Catch Challenges, and … – Pokémon GO Hub

Attention, Pokémon Trainers! The latest data mining reports are in and we've got potential background catch cards for Global GO Fest 2024, more details about Pikachu's Indonesia Journey: Yogyakarta, three Ultra Space Wonders Event Catch Challenges, and more!


These look like background catch cards for Solgaleo and Lunala for Global GO Fest?

These could be variants of catch cards, or they'll be used for something else.

Texts for Solgaleo's and Lunala's signature moves.

Some text updates for the new Iris AR.

The special research for Pikachu's Indonesia Journey: Yogyakarta in August will have 5 stages.

Badges for the event

Looks like there will be collection challenges for both day 1 and day 2

Looks like there will also be snapshot encounters

The Ultra Space Wonders event will have 3 collection challenges: Catch, Raid, and Research

Yes, I know that says 4, but their research always starts with Step 1 technically being Step 0 in the code, and yes this is the only step they have pushed so far.

The announced Timed Special Research for the GBL International Championships will have 2 pages. Only the North American one has been announced, but there are texts for the European and Latin American ones as well. Some say 2025 in the code, and 2024 in the texts. Make of that what you will.

The Pokébox filter text was changed from "fuse" to "fusion".

Read this article:

Datamining Report: Potential Global GO Fest Catch Cards, Three Ultra Space Wonders Event Catch Challenges, and ... - Pokémon GO Hub

Data mining and safety analysis of avatrombopag: a retrospective pharmacovigilance study based on the US food and … –


From the first quarter of 2018 to the fourth quarter of 2023, this study obtained a total of 10,530,937 adverse event reports from the FAERS database. After 2254 duplicates were removed, 1211 of the 9,060,312 remaining case reports listed avatrombopag as the primary suspected drug. An overview of AEs reported in association with avatrombopag is provided in Table 1. Women (54.3%) accounted for a larger proportion of AEs than men. Patients aged ≥65 years accounted for the largest proportion (22.2%) of participants. The largest number of AEs was reported in the United States (88.2%), followed by Spain (2.1%), Italy (1.8%), China (1.7%), and Australia (0.8%). Serious outcomes included hospitalization, death, life-threatening conditions, disability, and other serious outcomes. Excluding the other serious outcomes, hospitalization (34.6%) was the most frequently reported serious outcome, followed by death (15.4%). Consumers, physicians, and health professionals reported the most AEs (42.3%, 26.0%, and 24.6%, respectively). AEs were reported in 2018 (n=55, 4.5%), 2019 (n=110, 9.1%), 2020 (n=262, 21.6%), 2021 (n=257, 21.2%), 2022 (n=245, 20.2%), and 2023 (n=282, 23.3%).

In Table 2, potential signals for avatrombopag are described in accordance with the SOC. Statistics show that 26 organ systems were affected by reported AEs. No potential signal satisfied our signal criteria when AE reports were classified at the SOC level. Significant potential signals for the nervous system disorders, general disorders and administration site conditions, vascular disorders, investigations, and hepatobiliary disorders SOCs were identified by at least one of the four disproportionality indices.

In total, disproportionality signals that simultaneously conformed to all four algorithms were identified for 44 PTs across 17 SOCs; these are shown in Table 3. In our statistical results, the most common AEs were platelet count decreased (20.2%, n=165, PT 10035528), headache (16.7%, n=136, PT 10019211), platelet count increased (11.9%, n=97, PT 10051608), platelet count abnormal (6.3%, n=51, PT 10035526), contusion (2.7%, n=22, PT 10050584), pulmonary embolism (2.3%, n=19, PT 10037377), and deep vein thrombosis (2.1%, n=17, PT 10051055). In this study, PTs that were reported at a high relative frequency but are unlabeled in the avatrombopag product labeling [7] were seasonal allergy (PT 10048908), rhinorrhea (PT 10039101), abnormal liver function (PT 10024690), antiphospholipid syndrome (PT 10002817), ear discomfort (PT 10052137), and photopsia (PT 10034962).
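The four disproportionality algorithms are not named in this excerpt; FAERS studies of this kind typically use ROR, PRR, BCPNN, and MGPS. As an illustration only, here is a minimal sketch of the two frequentist measures computed from a 2x2 drug-event contingency table; the counts below are invented, not taken from the study:

```python
import math

def disproportionality(a, b, c, d):
    """ROR and PRR with 95% CIs from a 2x2 drug-event contingency table.

    a: reports with the drug and the event of interest
    b: reports with the drug and any other event
    c: reports with any other drug and the event
    d: reports with any other drug and any other event
    """
    ror = (a / b) / (c / d)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    ror_ci = (ror * math.exp(-1.96 * se), ror * math.exp(1.96 * se))

    prr = (a / (a + b)) / (c / (c + d))
    se = math.sqrt(1 / a - 1 / (a + b) + 1 / c - 1 / (c + d))
    prr_ci = (prr * math.exp(-1.96 * se), prr * math.exp(1.96 * se))
    return ror, ror_ci, prr, prr_ci

# Invented counts for one PT (not the study's data):
ror, ror_ci, prr, prr_ci = disproportionality(a=165, b=1046, c=50_000, d=9_000_000)
# A common frequentist signal criterion: ROR 95% CI lower bound > 1 with a >= 3 reports.
print(f"ROR={ror:.1f} (95% CI {ror_ci[0]:.1f}-{ror_ci[1]:.1f}), PRR={prr:.1f}")
```

A signal is typically declared only when the chosen criterion holds for every algorithm in the set, which is what "conforming to the four algorithms simultaneously" refers to above.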

The onset times of AEs reported with avatrombopag were extracted from the database. Patients whose time-to-onset report fields in FAERS were blank or contained inaccurate information were excluded; onset times were reported for 499 AEs (41.2%), with a median of 60 days. In approximately 55.7% of cases (n=278), AEs occurred within the first month after initiation of avatrombopag (Fig. 2A). Additionally, the proportions of cases in which AEs occurred after 2 months (n=62, 12.4%) and 3 months (n=80, 16.0%) were significantly smaller than the number of AEs that occurred in the first month (P<0.01), and the proportion of occurrences gradually decreased after 3 months. Furthermore, within the first month, the highest numbers of AEs occurred on the first (n=57, 20.5%) and second (n=35, 12.6%) days after initiation of avatrombopag (Fig. 2B).

Time to onset of reported AEs. (A) Time to onset of reported AEs grouped by month. (B) Time to onset of reported AEs grouped by days with avatrombopag in the first month. AE adverse event.
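As a rough illustration of the time-to-onset analysis described above, onset days can be binned into 30-day windows and summarized. The onset values below are invented for the example, not the study's data:

```python
from collections import Counter
from statistics import median

# Invented days from drug initiation to AE onset (not the study's data)
onset_days = [1, 1, 2, 5, 14, 30, 45, 60, 60, 75, 90, 120, 200]

def month_bin(days):
    """0-based 30-day bin: 0 = first month, 1 = second month, and so on."""
    return (days - 1) // 30

counts = Counter(month_bin(d) for d in onset_days)
total = len(onset_days)
print(f"median onset: {median(onset_days)} days")
for m in sorted(counts):
    print(f"month {m + 1}: n={counts[m]} ({counts[m] / total:.1%})")
```

The same binning, applied per day instead of per month, yields the first-month breakdown shown in Fig. 2B.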

Continue reading here:

Data mining and safety analysis of avatrombopag: a retrospective pharmacovigilance study based on the US food and ... -

Quantzig Empowers Businesses with Advanced Marketing Data Mining Solutions – PR Newswire

NEW YORK, May 15, 2024 /PRNewswire/ -- In an era where data reigns supreme, businesses across industries are increasingly turning to advanced analytics to drive growth and stay ahead of the curve. Quantzig, a leading analytics and advisory firm, is at the forefront of this revolution, empowering organizations with cutting-edge marketing data mining solutions.

The modern business landscape is defined by data: its collection, analysis, and interpretation. In this context, marketing data mining emerges as a powerful tool, enabling businesses to extract valuable insights from vast datasets and translate them into actionable strategies. Quantzig's latest offering in this domain promises to revolutionize the way businesses harness the power of data to fuel growth and innovation.

Feel free to request a complimentary demo of our Marketing Analytics dashboard solutions.

Unleashing the Potential of Marketing Data Mining

Quantzig's marketing data mining solutions are designed to unlock the full potential of customer data, providing businesses with a comprehensive understanding of consumer behavior, preferences, and trends. By leveraging advanced analytics techniques such as predictive modeling, segmentation analysis, and pattern recognition, Quantzig helps businesses identify hidden patterns and correlations within their marketing data, paving the way for targeted marketing campaigns, personalized customer experiences, and enhanced ROI.
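Segmentation analysis of the kind mentioned above is commonly implemented as clustering on customer features. A minimal sketch using scikit-learn's KMeans follows; the features, counts, and cluster number are illustrative assumptions, not Quantzig's actual method:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# Illustrative customer features: [annual spend, orders per year, days since last purchase]
rng = np.random.default_rng(0)
customers = np.vstack([
    rng.normal([200, 2, 200], [50, 1, 40], size=(100, 3)),   # occasional buyers
    rng.normal([1500, 24, 10], [300, 5, 5], size=(100, 3)),  # frequent high spenders
])

# Scale features so that spend does not dominate the distance metric
X = StandardScaler().fit_transform(customers)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)

for k in range(2):
    seg = customers[labels == k]
    print(f"segment {k}: n={len(seg)}, mean spend={seg[:, 0].mean():.0f}")
```

In practice the segment profiles (mean spend, frequency, recency per cluster) are what feed the targeted campaigns described in the release.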

Transformative Impact on Business Performance

The impact of Quantzig's marketing data mining solutions on business performance is nothing short of transformative. By harnessing the power of data, businesses can optimize their marketing efforts, streamline operations, and drive revenue growth. Quantzig's solutions have been instrumental in helping clients across industries achieve their business objectives, from increasing customer acquisition and retention to maximizing marketing ROI and driving competitive advantage.

Key Features and Benefits

Customer Success Stories

Quantzig's marketing data mining solutions have delivered tangible results for clients across various industries. From retail and e-commerce to healthcare and finance, businesses have leveraged Quantzig's expertise to drive growth, improve operational efficiency, and gain a competitive edge in the market. Some notable success stories include:


Quantzig is a leading analytics and advisory firm with a proven track record of helping clients harness the power of data to drive business growth and innovation. With a team of experienced analysts and data scientists, Quantzig delivers cutting-edge analytics solutions that empower organizations to make informed decisions and stay ahead of the competition.

For more information about marketing data mining solutions and how they can help your business drive growth and success, visit Quantzig's website

Quantzig
US: +1 630 538 7144
Canada: +1 647 800 8550
UK: +44 208 629 1455
India: +91 806 191 4606
Website:

SOURCE Quantzig

Visit link:

Quantzig Empowers Businesses with Advanced Marketing Data Mining Solutions - PR Newswire

Asteroid Institute and Google Cloud Identify 27,500 New Asteroids with Cloud-Based Astrodynamics and Data Mining – Datanami

MILL VALLEY, Calif. and SUNNYVALE, Calif., April 30, 2024 Asteroid Institute, a program of B612 Foundation, and Google Cloud today announced the most significant results of their partnership to date: identifying 27,500 new, high-confidence asteroid discovery candidates. The work, which took place over several weeks, has the potential to enable the mapping of the solar system and protect the Earth from collisions, advancing the field of minor planet discovery.

The project was done without the benefit of new observations of the sky, but rather by leveraging Google Cloud technology to run sophisticated algorithms developed by Asteroid Institute and University of Washington researchers, and by mining historical datasets from the NOIRLab Source Catalog Data Release 2 (NSC DR2).

The majority of the new discoveries are Main Belt Asteroids that orbit the Sun between Mars and Jupiter, but Asteroid Institute also discovered more than 100 Near-Earth Asteroids whose orbits take them much closer to Earth.

In partnership with the University of Washington's DiRAC Institute, Asteroid Institute developed a novel algorithm called Tracklet-less Heliocentric Orbit Recovery (THOR), which runs on a cloud-based, open-source astrodynamics platform called Asteroid Discovery Analysis and Mapping (ADAM). THOR projects theoretical orbits across millions of observed moving points of light and links together those points that are consistent with real physical orbits. Google Cloud's Office of the CTO collaborated with Asteroid Institute to help it scale and tune its algorithms on ADAM using Google Cloud.

Asteroid Institute selected Google Cloud as its cloud provider of choice due to its scalability, ease of use, and state-of-the-art data and AI products, specifically focused on:

"What is exciting is that we are using electrons in data centers, in addition to the usual photons in telescopes, to make astronomical discoveries," said Dr. Ed Lu, Executive Director, Asteroid Institute.

Asteroid Institute is also exploring the use of Google's AI technologies to automatically vet and verify candidate images identified by the THOR algorithm. AI automation will be a critical step to scale this work further, because the initial verification of likely candidates is a major bottleneck, currently conducted manually by volunteer high school students, undergraduate and postgraduate students, scientists, and astronomers. If successful, human verification needs may be reduced significantly and the pipelines built for NOIRLab can be adapted to run on much larger datasets, such as ones from the Vera C. Rubin Observatory, boosting new discoveries even further.

"At Google, we always like hard computational challenges, and Asteroid Institute provided us with complex unstructured data that required heavy computational processing, large tracking requirements, and novel AI capabilities," said Massimo Mascaro, Technical Director, Google Cloud's Office of the CTO. "We're proud to partner with Asteroid Institute to help further scientific discovery and expand our world's awareness of the beautiful neighbors we have in our solar system."

The NSC DR2 catalog is the first of many Asteroid Institute plans to scan. The largest will likely become available in 2025, after the commissioning of the Vera C. Rubin Observatory. With THOR running on ADAM on Google Cloud, and with the help of AI, researchers can scale this work and scan new datasets more efficiently and effectively as they become available. Asteroid Institute's goal is to automate this process to the benefit of the astronomical community and space industry.

"Asteroid Institute results are more than exciting for the Vera C. Rubin Observatory: they may help us reoptimize our observing strategy and obtain gains for some science programs, such as cosmologically important supernovae explosions, equivalent to cloning another Rubin observatory," said Dr. Željko Ivezić, Rubin Observatory Construction Director.

Asteroid Institute and Google Cloud have partnered since 2017 with the goal of transforming asteroid discovery. Learn more about the technology that made this discovery possible here.

About B612 Foundation and Asteroid Institute

Asteroid Institute brings together scientists, researchers, and engineers to develop tools and technologies to understand, map, and navigate our solar system. A program of B612 Foundation, Asteroid Institute leverages advances in computer science, instrumentation, and astronomy to find and track asteroids. Since 2002, the foundation has supported research and technologies to enable the economic development of space and enhance our understanding of the evolution of our solar system in addition to supporting educational programs, including Asteroid Day and the new Schweickart Prize. Founding Circle and Asteroid Circle members, and individual donors from 46 countries provide financial support for the work. For more information, visit

About Google Cloud

Google Cloud is the new way to the cloud, providing AI, infrastructure, developer, data, security, and collaboration tools built for today and tomorrow. Google Cloud offers a powerful, fully integrated and optimized AI stack with its own planet-scale infrastructure, custom-built chips, generative AI models and development platform, as well as AI-powered applications, to help organizations transform. Customers in more than 200 countries and territories turn to Google Cloud as their trusted technology partner.

Source: Google Cloud

Here is the original post:

Asteroid Institute and Google Cloud Identify 27,500 New Asteroids with Cloud-Based Astrodynamics and Data Mining - Datanami