Category Archives: Data Mining

Humanity's fight against Covid: The promise of artificial intelligence – Times of India

Few know that the coronavirus and its associated disease, Covid-19, were first flagged by a data-mining program. HealthMap, a website run by Boston Children's Hospital, raised an alarm about multiple cases of pneumonia in Wuhan, China, rating its urgency at three on a scale of five. Soon after this discovery, the pandemic hit the world like a tsunami. As it progressed, governments struggled to deal with the unprecedented crisis on multiple fronts and were forced to look at innovative ways to augment their efforts, presenting an opportunity to leverage Artificial Intelligence (AI).

AI was used in varied settings, including drug discovery, testing, prevention and overcoming resource constraints, and its success opened a whole new door of possibilities. Here's a look at some of the most intuitive, innovative and advantageous uses of the technology during Covid-19, outlined under the four categories of diagnosis and prognosis, prediction and tracking, patient care, and drug development:

Diagnosis and prognosis of COVID-19 using AI

AI assistance in prediction and tracking of Covid-19

AI-backed, superior care for COVID-19 patients

In Xinchang County, China, drones delivered medical supplies to centers in need, and thermal-sensing drones identified people running a fever, potentially infected with the virus.

Drug development with AI

There are several ways in which AI accelerated research on Covid-19. The key thing to note is that much of this cutting-edge research is open source and thus available to the scientific and medical research community for further development or consumption.

Predictions for quicker vaccine development: Messenger RNA (mRNA) carries the instructions cells use to make proteins, and its secondary structure shapes how those instructions are translated. Understanding that structure and protein translation was key to the development of mRNA vaccines. However, mRNA has a short half-life and degrades rapidly, which complicates structural analysis of the virus. Quick access to viral structural analysis was significant in shortening the time it takes to design a potential mRNA vaccine with higher stability and better effectiveness, providing an opportunity to save thousands of lives.

Baidu's AI team deployed Linearfold, a model that predicts the secondary structure of the Covid-19 RNA sequence, reducing overall analysis time from 55 minutes to 27 seconds. Baidu also released the model for public use.

Challenges and the future of AI in health care

An AI solution must be tested under a wide range of conditions and edge scenarios before it is deemed fit for use in terms of fairness, reliability, accountability, privacy, transparency, and safety. It also requires continuous monitoring of its output vis-à-vis the ever-changing real world, so it can learn from it.

Finally, policy formulation must support adoption of technology but tread with caution. The FDA is actively working with stakeholders to define a comprehensive, lifecycle-based framework that addresses the use of these technologies in medical care. This evolving framework differs significantly from the FDA's own traditional regulatory control paradigm.

Artificial Intelligence has proved its value during the pandemic and holds much promise for mitigating future health care crises. However, this is just a start, and the possibilities for intelligent care are limitless. This makes AI in health care an area of great opportunity for talented technologists who are also passionate about making an impact on people and communities through their work. Policy makers, research institutes, businesses and technologists must incorporate the lessons learnt from the use of AI during the pandemic as they chart the way forward.

Views expressed above are the author's own.


Read more from the original source:

Humanity's fight against Covid: The promise of artificial intelligence - Times of India

Filings buzz in the mining industry: 30% increase in big data mentions in Q2 of 2022 – Mining Technology

Mentions of big data within the filings of companies in the mining industry rose 30% between the first and second quarters of 2022.

In total, the frequency of sentences related to big data between July 2021 and June 2022 was 279% higher than in 2016 when GlobalData, from whom our data for this article is taken, first began to track the key issues referred to in company filings.

When companies in the mining industry publish annual and quarterly reports, ESG reports and other filings, GlobalData analyses the text and identifies individual sentences that relate to disruptive forces facing companies in the coming years. Big data is one of these topics - companies that excel and invest in these areas are thought to be better prepared for the future business landscape and better equipped to survive unforeseen challenges.

To assess whether big data is featuring more in the summaries and strategies of companies in the mining industry, two measures were calculated. Firstly, we looked at the percentage of companies which have mentioned big data at least once in filings during the past twelve months - this was 57% compared to 24% in 2016. Secondly, we calculated the percentage of total analysed sentences that referred to big data.
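
As a rough illustration of how those two measures can be computed, here is a minimal Python sketch over invented filing data; it is not GlobalData's corpus or methodology, just the arithmetic described above.

```python
# Minimal sketch of the two measures described above, using invented data
# rather than GlobalData's actual corpus or methodology.
filings = {
    "CompanyA": ["We invest in big data analytics.", "Revenue grew 5%."],
    "CompanyB": ["Costs fell in Q2.", "Safety remains a priority."],
    "CompanyC": ["Big data underpins our mine planning.", "Output rose."],
}
keyword = "big data"

# Measure 1: share of companies mentioning the keyword at least once.
mentioning = [company for company, sentences in filings.items()
              if any(keyword in s.lower() for s in sentences)]
pct_companies = 100 * len(mentioning) / len(filings)

# Measure 2: share of all analysed sentences that refer to the keyword.
all_sentences = [s for sentences in filings.values() for s in sentences]
hits = sum(keyword in s.lower() for s in all_sentences)
pct_sentences = 100 * hits / len(all_sentences)

print(f"{pct_companies:.0f}% of companies, {pct_sentences:.1f}% of sentences")
```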

Of the 10 biggest employers in the mining industry, Caterpillar was the company which referred to big data the most between July 2021 and June 2022. GlobalData identified 25 big data-related sentences in the United States-based company's filings - 0.4% of all sentences. Sibanye-Stillwater mentioned big data the second most - the issue was referred to in 0.18% of sentences in the company's filings. Other top employers with high big data mentions included Honeywell, ThyssenKrupp and CIL.

Across all companies in the mining industry the filing published in the second quarter of 2022 which exhibited the greatest focus on big data came from Erdemir. Of the document's 2,780 sentences, 10 (0.4%) referred to big data.

This analysis provides an approximate indication of which companies are focusing on big data and how important the issue is considered within the mining industry, but it also has limitations and should be interpreted carefully. For example, a company mentioning big data more regularly is not necessarily proof that they are utilising new techniques or prioritising the issue, nor does it indicate whether the company's ventures into big data have been successes or failures.

GlobalData also categorises big data mentions by a series of subthemes. Of these subthemes, the most commonly referred to topic in the second quarter of 2022 was 'data analytics', which made up 72% of all big data subtheme mentions by companies in the mining industry.

See the original post here:

Filings buzz in the mining industry: 30% increase in big data mentions in Q2 of 2022 - Mining Technology

Data Scientist Training: Resources and Tips for What to Learn – Dice Insights

Data science is a complex field that requires its practitioners to think strategically. On a day-to-day basis, it requires aspects of database administration and data analysis, along with expertise in statistical modeling (and even machine learning algorithms). It also needs, as you might expect, a whole lot of training before you can plunge into a career as a data scientist.

There are a variety of training options out there for data scientists at all points in their careers, from those just starting out to those looking to master the most cutting-edge tools. Here are some platforms and training tips for all data scientists.

Kevin Young, senior data and analytics consultant at SPR, says that many data scientists treat Kaggle as a go-to learning resource. Kaggle is a Google-owned machine learning competition platform with a series of friendly courses to get beginners started on their data science journey.

Topics covered range from Python to deep learning and more. "Once a beginner gains a base knowledge of data science, they can jump into machine learning competitions in a collaborative community in which people are willing to share their work with the community," Young says.

In addition to Kaggle, there are lots of other online resources that data scientists (or aspiring data scientists) can use to boost their knowledge of the field. Here are some free resources:

And here are some that will cost (although you'll earn a certification or similar proof of completion at the end):

This is just a portion of what's out there, of course. Fortunately, the online education ecosystem for data science is large enough to accommodate all kinds of learning styles.

Seth Robinson, vice president of industry research at CompTIA, explains that individuals near the beginning of a data science career will need to build familiarity with data structures, database administration, and data analysis.

"Database administration is the most established job role within the field of data, and there are many resources teaching the basics of data management, the use of SQL for manipulating databases, and the techniques of ensuring data quality. Beyond traditional database administration, an individual could learn about newer techniques involving non-relational databases and unstructured data," he adds.

"Training for data analysis is newer, but resources such as CompTIA's Data+ certification can add skills in data mining, visualization, and data governance. From there, specific training around data science is even more rare, but resources exist for teaching or certifying advanced skills in statistical modeling or strategic data architecture," Robinson says.

Young cites two main segments of data science training: model creation and model implementation.

Model creation training is the more academic application of statistical models on an engineered dataset to create a predictive model: This is the training that most intro to data science courses would cover.

"This training provides the bedrock foundations for creating models that will provide predictive results," he says. "Model creation training is usually taught in Python, and covers the engineering of the dataset, creation of a model and evaluation of that model."
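
As a hedged sketch of what that model-creation workflow looks like in practice (scikit-learn on synthetic data; illustrative only, not material from any particular course):

```python
# Model-creation sketch: engineer a dataset, create a model, evaluate it.
# scikit-learn with synthetic data; illustrative only.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# "Engineering of the dataset": here, just a synthetic feature matrix and labels.
X, y = make_classification(n_samples=1_000, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25,
                                                    random_state=0)

# "Creation of a model": scale features, then fit a simple classifier.
model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1_000))
model.fit(X_train, y_train)

# "Evaluation of that model": score the held-out data.
auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
print(f"Hold-out ROC AUC: {auc:.3f}")
```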

"Model implementation training opportunities cover the step after the model is created, which is getting the model into production. This training is often vendor- or cloud-specific to get the model to make predictions on live incoming data. This type of training would be through cloud providers such as AWS giving in-person or virtual education on their machine learning services such as Sagemaker," Young explains.

"These cloud services provide the ability to take machine learning models produced on data scientists' laptops and persist the model in the cloud, allowing for continual analysis. This type of training is vital as the time and human capital are usually much larger in the model implementation phase than in the model creation phase," Young says.
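
Managed services such as SageMaker wrap this in cloud infrastructure, but the core of model implementation can be sketched in a vendor-neutral way: persist a trained model, then expose it behind an endpoint that scores live data. A minimal illustration using joblib and Flask rather than any particular provider's API:

```python
# Vendor-neutral sketch of "getting the model into production": persist a
# trained model, then serve predictions over HTTP. Illustrative only.
import joblib
import numpy as np
from flask import Flask, jsonify, request
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

# Train and persist a small stand-in model (in practice this artifact comes
# from the model-creation phase, not from the serving code).
X, y = make_classification(n_samples=500, n_features=10, random_state=0)
joblib.dump(LogisticRegression(max_iter=1_000).fit(X, y), "model.joblib")

app = Flask(__name__)
model = joblib.load("model.joblib")  # load once at startup

@app.route("/predict", methods=["POST"])
def predict():
    # Expects a JSON body like {"features": [[0.1, 0.2, ...], ...]}
    payload = request.get_json(force=True)
    scores = model.predict_proba(np.asarray(payload["features"]))[:, 1]
    return jsonify({"scores": scores.tolist()})

if __name__ == "__main__":
    app.run(port=8080)  # a managed cloud service would handle hosting instead
```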

This is because when models are created, they often use a smaller, cleaned dataset from which a single data scientist can build a model. When that model is put into production engineering teams, DevOps engineers, and/or cloud engineers are often needed to create the underlying compute resources and automation around the solution.

"The more training the data scientist has in these areas, the more likely the project will be successful," he says.

Young says one of the lessons learned during the pandemic is that professionals in technology roles can be productive remotely. "This blurs the lines a bit on the difference between boot camps compared to online courses, as many boot camps have moved to a remote model," he says. "This puts an emphasis on having the ability to ask questions of a subject matter expert irrespective of whether you are in a boot camp or online course."

He adds that certifications can improve organizations' standing with software and cloud vendors. "This means that candidates for hire move to the top of the resume stack if they have certifications that the business values," Young says.

For aspiring data scientists deciding between boot camps and online courses, he says the most important basis for comparison is probably the career resources offered. "A strong boot camp should have a resource dedicated to helping graduates find employment after the boot camp," he says.

Robinson adds it's important to note that data science is a relatively advanced field.

"All technology jobs are not created equal," he explains. "Someone considering a data science career should recognize that the learning journey is likely to be more involved than it would be for a role such as network administration or software development."

Young agrees, adding that data scientists need to work in a collaborative environment with other data scientists and subject matter experts reviewing their work. "Data science is a fast-developing field," he says. "Although fundamental techniques do not change, how those techniques are implemented does change as new libraries are written and integrated with the underlying software on which models are built."

From his perspective, a good data scientist is always learning, and any strongly positioned company should offer reimbursement for credible training resources.

Robinson notes in-house resources vary from employer to employer, but points to a macro trend of organizations recognizing that workforce training needs to be a higher priority. "With so many organizations competing for so few resources, companies are finding that direct training or indirect assistance for skill building can be a more reliable option for developing the exact skills needed, while improving the employee experience in a tight labor market," he says.


Excerpt from:

Data Scientist Training: Resources and Tips for What to Learn - Dice Insights

Pecan AI Leaps Over the Skills Gap to Enable Data Science On Demand – Datanami

As the big data analytics train keeps rolling on, there are still kinks to work out when implementing it in the business world. Building and maintaining a big data infrastructure capable of quickly turning large data sets into actionable insights requires data science expertise, a skillset in high demand but often in short supply. There is also a skills gap between data scientists, analysts, and business users, and while several low- or no-code platforms have aimed to resolve this, complexity remains for certain use cases.

One company looking to bridge the gap between business analytics and data science is Pecan AI. The company says its no-code predictive analytics platform is designed for business users across sales, marketing, and operations, as well as the data analytics teams that support them.

Pecan was built under the assumption that the demand for data science far exceeds the supply of data scientists. "We said from the get-go, we wanted to help non-data scientists, specifically BI analysts, to basically leap through the gap of data science knowledge with our platform," Pecan AI CEO Zohar Bronfman told Datanami in an interview.

The Pecan AI platform allows users to connect their various data sources through its no-code integration capabilities. A drag-and-drop, SQL-based user interface enables users to create machine learning-ready data sets. Pecan's proprietary AI algorithms can then build, optimize, and train predictive models using deep neural networks and other ML tools, depending on the needs of the specific use case. With less statistical knowledge required, along with automated data preparation and feature selection, the platform removes some of the technical barriers that BI analysts may face when leveraging data science.
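
Pecan's automation is proprietary, but as a rough sense of what automated feature selection can look like in general, here is a generic scikit-learn sketch on invented data (not Pecan's algorithms):

```python
# Illustrative only: Pecan's automation is proprietary. Automated feature
# selection in general can be as simple as scoring columns against the target
# and keeping the strongest ones.
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, f_classif

X, y = make_classification(n_samples=1_000, n_features=30, n_informative=5,
                           random_state=0)

selector = SelectKBest(score_func=f_classif, k=5).fit(X, y)
print("Selected feature indices:", selector.get_support(indices=True))
```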

"Interestingly enough, in most of the data science use cases, you would spend, as a data scientist, more time and effort on getting the data right, extracting it, cleansing it, collating it, structuring it, and many other things that basically define data science use cases. And that's what we've been able to automate, so that analysts who have never done this before will be able to do so," said Bronfman.

Additionally, the platform offers monitoring features to continually analyze data for more accurate predictions, prioritize features as their importance changes over time, and monitor model performance via a live dashboard.

"In data science, the changes that happen around us are very, very impactful and meaningful, and also potentially dangerous," said Bronfman, referencing how patterns of customer behavior can change as a reaction to factors such as inflation and supply chain disruptions, rendering current models obsolete. According to Bronfman, to continue delivering accurate predictions, the platform automatically looks for changes in patterns within data, and once it identifies a change, the models are retrained and updated by feeding new data into the algorithms to accommodate the more recent patterns.
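
Pecan has not published its retraining logic, but the underlying idea of detecting a shift in incoming data and triggering a retrain can be sketched generically, for example with a two-sample test on a feature's distribution:

```python
# Generic drift-check sketch (not Pecan's actual mechanism): compare the
# distribution of a feature in recent data against the training data and
# flag a retrain when the two diverge significantly.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)
training_feature = rng.normal(loc=0.0, scale=1.0, size=5_000)
recent_feature = rng.normal(loc=0.6, scale=1.0, size=1_000)  # pattern shifted

statistic, p_value = ks_2samp(training_feature, recent_feature)
if p_value < 0.01:
    print(f"Drift detected (KS={statistic:.2f}); retrain on fresh data.")
else:
    print("Distributions look stable; keep the current model.")
```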

An example Pecan AI dashboard showing a predicted churn rate. Source: Pecan AI

Bronfman and co-founder and CTO Noam Brezis started Pecan AI in 2016. The two met in graduate school while working toward PhDs in computational neuroscience, and their studies led them to research recent advancements in AI, including its capacity for automating data mining and statistical processes. Brezis became a data analyst with a focus on business analytics, and he was surprised to find that data science know-how was often relegated to highly specialized teams, isolated from the business analysts who could benefit the most from data science's predictive potential. Bronfman and Brezis saw an opportunity to build a SQL-oriented platform that could leverage the power of data science for a BI audience while eliminating much of the manual data science work.

Pecan AI serves a variety of use cases including sales analytics, conversion, and demand forecasting. Bronfman is especially enthusiastic about Pecan's predictive analytics capabilities for customer behavior, an area in which he sees three main pillars. The first pillar is acquisition, a stage when companies may be asking how to acquire and engage with new customers: "For the acquisition side of things, predicted lifetime value has been one of the key success stories for us," Bronfman said of Pecan's predictive lifetime value models. "Those models eventually give you a very good estimation, way before things actually happen, of how well your campaigns are going to do from the marketing side. Once you have a predicted lifetime value model in place, you can wait just a couple of days with the campaign and say, 'Oh, the ally is going to disinvest in a month or three months' time, so I should double down my spend on this campaign,' or, in other cases, 'I should refrain from investing more.'"

The second customer behavior pillar is monetization, a time when companies may be asking how they can offer the customer a better experience to encourage their continued engagement: "If you have the opportunity to offer an additional product, service, [or] brand, whatever that might be, you need to optimize both for what you are offering, and not less importantly, when you are offering [it]. So again, our predictions are able to tell you at the customer level, who should be offered what and when," said Bronfman.

Finally, the third pillar is retention, an area where Bronfman notes it is far more economically efficient to retain customers than to acquire new ones: "For the retention side of things, the classic use case, which has been extremely valuable and gotten us excited, is churn prediction. Churn is a very interesting data science domain because predicting churn has been notoriously challenging, and it's a classic case where if you're not doing it right, you might, unfortunately, get to a place where you are accurate with your predictions but you are ineffective."

Pecan AI co-founders: CEO Zohar Bronfman and CTO Noam Brezis.

When predicting churn, Bronfman says that time is of the essence: "When a customer has already made a final decision to churn, even if you're able to predict it before they've communicated it, you won't be able, in most cases, to change their mind. But if you're able to predict churn way in advance, which is what we specialize in, then you still have this narrow time window of opportunity to preemptively engage with the customer to give them a better experience, a better price, a better retargeting effort, whatever that might be, and increase your retention rates."

Investors and customers alike seem keen on what Pecan has to offer, and the company is seeing significant growth. So far, the company has raised a total of $116 million, including its latest Series C funding round of $66 million occurring in February, led by Insight Partners, with participation from GV and existing investors S-Capital, GGV Capital, Dell Technologies Capital, Mindset Ventures, and Vintage Investment Partners.

Pecan recently announced it has more than doubled its revenue in the first half of this year, with its annual recurring revenue increasing by 150%. Its customer count increased by 121%, with mobile gaming companies Genesis and Beach Bum and wellness brand Hydrant joining its roster which already includes Johnson & Johnson and CAA Club Group. The company also expanded its number of employees to 125 for a 60% increase.

Bronfman says Pecan's growth stems from a strong tailwind of two factors: "Analysts are loving the fact that they can evolve, upskill, and start being data scientists on demand. But also, we came to realize that business stakeholders love that they can drive quick and effective data science without necessarily requiring data science resources."

Related Items:

Pecan AI Announces One-Click Model Deployment and Integration with Common CRMs

Foundry Data & Analytics Study Reveals Investment, Challenges in Business Data Initiatives

Narrowing the AI-BI Gap with Exploratory Analysis

View original post here:

Pecan AI Leaps Over the Skills Gap to Enable Data Science On Demand - Datanami

Asia Pacific will lead the new wave of transformation in data innovation: Nium’s CTO Ramana Satyavarapu – ETCIO South East Asia

Ramana Satyavarapu, Chief Technology Officer, Nium

In a market such as Asia Pacific, the sheer volume of data and the various emerging types of data create innumerable complexities for businesses that still need to adopt data strategies from the ground up. Even organisations that have understood the importance of data are yet to instil stronger data management practices. According to research revealed by Accenture, while only 3 of the 10 most valuable enterprises were actively taking a data-driven approach in 2008, that number has risen to 7 out of 10 today. All of it points to the fact that designing data-driven business processes is the only effective way to achieve fast-paced results and goals for organisations across sectors.

To further decode the nuances of the data landscape, with a special focus on the Asia Pacific region, we conducted an exclusive interaction with Ramana Satyavarapu, the Chief Technology Officer of Nium. Ramana is an engineering leader with a strong track record of delivering great products, organising and staffing geographically and culturally diverse teams, and mentoring and developing people. Throughout his career, he has delivered highly successful software products and infrastructure at big tech companies such as Uber, Google and Microsoft. With a proven track record of result-oriented execution by bringing synergy within engineering teams to achieve common goals, Ramana has a strong passion for quality and strives to create customer delight through technological innovation.

In this feature, he shares his outlook on the most relevant data management practices, effective data functionalities, building headstrong data protection systems, and leveraging optimal data insights for furthering business value propositions.

Ramana, what according to you are the most effective functions of data in the evolution of tech and innovation in Asia Pacific?

Data is becoming ubiquitous. Especially in Asia Pacific, because of the sheer number of people going digital. The amount of data available is huge. I will streamline its functions into three main areas:

First, understand the use case. Second, build just enough systems for storing, harnessing, and mining this data. For that, don't build everything in-house. There's a lot of infrastructure out there. Data engineering has now turned into lego building; you don't have to build the legos from the ground up. Just build the design structure using the existing pieces such as S3, Redshift and Google Storage. You can leverage all of these things to harness data. Thirdly, make sure the data is always encrypted, secure, and that there are absolutely robust, rock-solid, and time-tested protections around the data, which has to be taken very seriously. Those would be my three main principles while dealing with data.

How would you describe the importance of data discovery and intelligence to address data privacy and data security challenges?

When you have a lot of data, reiterating my point about big datasets and their big responsibility, the number of security challenges and surface-area attacks will be significantly higher. In order to understand data privacy and security challenges, more than data discovery and intelligence, one has to think in terms of two aspects. First, where we are storing the data is a vault, and we need to make sure the pin of that vault is super secure. It's a systems engineering problem more than a data problem. The second is, you need to understand what kind of data this is. No single vault is rock solid. Instead, how do we make sure that an intelligent piece of data is secure? Just store it in different vaults so that individually, even if one is hacked or exposed, it doesn't hurt you entirely. The aggregation of the data will be protected. Therefore, it must be a twofold strategy. Understand the data, mine it intelligently, so that you can save it not just in a single vault, but in ten different vaults. In layman's terms, you don't put all your cash in a single bank or system. Therefore, the loss is mitigated and no one can aggregate and get hold of all the data at once. Also, just make sure that we have solid security engineering practices to ensure the systems are protected from all kinds of hacks and security vulnerabilities.
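
As a rough illustration of that twofold idea, encrypting a record and splitting it across separate stores so that no single "vault" holds the whole secret, here is a generic Python sketch (not Nium's actual architecture):

```python
# Illustration of the twofold idea above (not Nium's implementation): encrypt
# the record, then shard it across separate stores so that no single "vault"
# holds the whole secret.
from cryptography.fernet import Fernet

key = Fernet.generate_key()
cipher = Fernet(key)

record = b"account=1234567890;name=Jane Doe;balance=10500.00"
ciphertext = cipher.encrypt(record)

# Split the ciphertext across two independent stores; each shard on its own
# reveals nothing useful, and both are needed to reassemble the token.
midpoint = len(ciphertext) // 2
vault_a = {"shard": ciphertext[:midpoint]}
vault_b = {"shard": ciphertext[midpoint:]}

# Recovery requires both vaults plus the key, which lives in yet another system.
recovered = cipher.decrypt(vault_a["shard"] + vault_b["shard"])
assert recovered == record
```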

The interpretative value of data provides immense scope for evaluating business processes. What role does data analytics play in the evolution of business success?

There is a natural point where the functional value proposition that can be added or given to the customer will start diminishing. There will be a natural point where data will be the differentiator. I'll give a pragmatic example which everybody knows - the difference between Google search and Microsoft Bing search, both of which are comparably similar kinds of algorithms. But the results are significantly different! That's because one adopts fantastic data engineering practices. It's all about the insights and the difference that they can provide. At one point, the value addition from the algorithm diminishes, and the quality and insights that you can draw from the data will be the differentiator.

There are twofold advantages of data insights or analytics. One, providing value to the customer beyond functionality. Like in the context of, say, Nium, or payments, or anyone who's doing a global money movement, we've identified an xyz company doing a massive money movement on the first day of every month - say to the Philippines or Indonesia. Instead of doing it on the first day of every month, why don't you do it on the last day of the previous month? That has been historically proven to be a better interchange or FX rate. At the end of the day, it's all about supply and demand. Doing it one day before can save you a huge FX rate conversion, which will benefit the business in many ways by one quantifiable amount; that is very powerful. Those kinds of insights can be provided to the customers by Nium. Being a customer-centric company, it's our strong value proposition - we grow when the customer grows. Those insights, in addition to the business intelligence that can be drawn from it. Offering a new value proposition to the customer and just improving their processes is important.

For example, we are seeing that on average, these customers' transactions are taking x days or minutes, or this customer's acceptance rate is low, so we can improve the value, the reliability, and the availability of the system using analytics. We had a massive customer in the past, none other than McDonald's. We were looking at the data and we observed that there's a very specific pattern of transaction decline rate. Looked at independently, you'll notice that only a few transactions are being declined. But if you look at it on a global scale, that's a significant amount of money and customer loss. When we analysed it further, we identified that this is happening with a very specific type of point-of-sale device on the east coast at the peak hour. We sent a detailed report of it to McDonald's saying we are identifying this kind of a pattern. McDonald's then contacted the point-of-sale device manufacturer and said that at this peak, for these kinds of transactions, your devices are failing. That would have saved them hundreds of thousands of dollars.
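
The kind of pattern he describes can be surfaced with fairly simple aggregation. A hypothetical pandas sketch, with invented column names and data, grouping transactions by device type, region and hour and flagging unusually high decline rates:

```python
# Hypothetical illustration of the pattern described above: group transactions
# by device type, region and hour, then flag segments with unusually high
# decline rates. Column names and data are invented.
import pandas as pd

tx = pd.DataFrame({
    "device_type": ["POS-X", "POS-X", "POS-X", "POS-Y", "POS-Y", "POS-Y", "POS-X", "POS-Y"],
    "region":      ["east",  "east",  "east",  "east",  "west",  "west",  "west",  "east"],
    "hour":        [12,      12,      12,      12,      18,      18,      9,       9],
    "declined":    [1,       1,       1,       0,       0,       0,       0,       0],
})

segments = (tx.groupby(["device_type", "region", "hour"])["declined"]
              .agg(decline_rate="mean", volume="count")
              .reset_index())

# Flag segments whose decline rate is well above the overall average.
overall_rate = tx["declined"].mean()
flagged = segments[segments["decline_rate"] > 2 * overall_rate]
print(flagged)
```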

Saachi, the whole idea is having a clear strategy of how we are going to use the data, and we need to demystify this whole data problem space. There are data lakes, warehouses, machine learning, data mining, all of which are super complex terms. At the end of the day, break it down, and it's really not that complex if you keep it simple.

In a world continually dealing with new-age data, mention some best data management practices for tech leaders.

Again, there's no one set of practices that can determine that this will solve all your data problems. Then you'd have to call me the data guru or something! To keep it simple, for the three main aspects that I talked about - collection, aggregation, and insights - there are specific management practices for each of these strategies.

First, when it comes to data collection, focus on how to deal with heterogeneity. Data is inherently heterogeneous. From CSV files to text files to satellite images, there's no standard. Find a good orchestration layer and good, reliable retry logic, with enough availability of ETLs to make sure this heterogeneous data is consistently and reliably collected. That's number one. I'm a big believer that what cannot be measured is not done. Measure, measure, measure. In this case, have some validators, have some quality checks on consistency, reliability, freshness, timeliness - all the different parameters of whether the data is coming to us in an accurate way. That's the first step.
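
A minimal sketch of such validators, assuming the collected batch lands in a pandas DataFrame with an `id` column and a `loaded_at` timestamp (both names and the thresholds are invented for illustration):

```python
# Minimal "measure, measure, measure" validators over a collected batch.
# Column names (id, loaded_at) and thresholds are invented for illustration.
import pandas as pd

def validate(df: pd.DataFrame) -> dict:
    now = pd.Timestamp.now(tz="UTC")
    return {
        "has_rows": len(df) > 0,
        "completeness_ok": df.notna().mean().min() > 0.95,  # <5% nulls per column
        "ids_unique": df["id"].is_unique,
        "fresh_within_24h": (now - df["loaded_at"].max()) < pd.Timedelta(hours=24),
    }

batch = pd.DataFrame({
    "id": [1, 2, 3],
    "value": [10.0, None, 12.5],
    "loaded_at": pd.to_datetime(["2022-09-01", "2022-09-01", "2022-09-02"], utc=True),
})
print(validate(batch))  # failed checks point at data quality issues to fix
```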

Second is standardisation. Whether it's web-crawled data or Twitter information or traffic wave information or even satellite images - there was a dataset where we were measuring the number of sheep eating grass in New Zealand, so we were using image processing techniques to see the sheep. And why is that useful? Using that, you can observe the supply of merino wool sweaters in the world. If the sheep are reduced, the wool is less, and therefore the jacket will be costly. How do we store such data, though? Start with a time series and a standard identification. Every dataset, every data row, and every data cell has to be idempotent. Make sure that every piece of data, and the transformations of it, are traceable. Just have a time series with a unique identifier for each data value so that it can be consistently accessed. That's the second.
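
A sketch of that standardisation idea: every observation becomes a time-series row with a deterministic identifier, so re-ingesting the same data produces the same IDs (idempotency) and each value stays traceable to its source. The field names here are illustrative.

```python
# Standardisation sketch: each observation becomes a time-series row with a
# deterministic ID, so re-ingestion is idempotent and values stay traceable.
import hashlib
from datetime import datetime, timezone

def standardise(source: str, observed_at: datetime, metric: str, value: float) -> dict:
    raw = f"{source}|{observed_at.isoformat()}|{metric}"
    row_id = hashlib.sha256(raw.encode()).hexdigest()[:16]  # same inputs, same ID
    return {
        "id": row_id,
        "source": source,
        "observed_at": observed_at.isoformat(),
        "metric": metric,
        "value": value,
    }

row = standardise("satellite-imagery", datetime(2022, 6, 1, tzinfo=timezone.utc),
                  "sheep_count_nz", 1_250_000.0)
print(row)
```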

Third, start small. Everyone pitches machine learning or advanced data mining. Those are complex. Start with linear regressions and start identifying outliers. Start doing pattern matching. These are not rocket science to implement; start with them. Machine learning, in my opinion, is like a ten-pound hammer. It's very powerful. But you want to have the right surface area and the right nail to hit it. If you use a ten-pound hammer on a pushpin, the wall's going to break. You need to have the right surface area or problem space to apply it. Even with ML, start with something like supervised learning, then move onto semi-supervised learning, then unsupervised learning, and then go to clustering, in a very phased manner.
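
In that "start small" spirit, a linear regression plus residual-based outlier flagging covers a surprising amount of ground before any heavier machinery is needed. A sketch on synthetic data:

```python
# "Start small": a linear regression with residual-based outlier flagging,
# before reaching for heavier machine learning. Synthetic data.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
x = rng.uniform(0, 10, size=200).reshape(-1, 1)
y = 3.0 * x.ravel() + rng.normal(scale=1.0, size=200)
y[::50] += 15                      # inject a few anomalous points

model = LinearRegression().fit(x, y)
residuals = y - model.predict(x)
threshold = 3 * residuals.std()
outliers = np.flatnonzero(np.abs(residuals) > threshold)
print(f"Slope ~ {model.coef_[0]:.2f}; flagged outlier indices: {outliers}")
```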

That would be my approach: dividing it into collection, with good validators or quality checks to ensure reliability; standardisation in the form of a time series; and then pattern recognition or simple techniques, from which you can progress gradually to how you want to mine the data and provide the insights.

To summarise, keep the data problem simple. Make sure you have a clear understanding of it - what is the use case that we are aiming to solve before we attempt to build a huge data lake or data infrastructure? Being pragmatic about the usage of data is very important. Again, data is super powerful. With lots of data, come lots of responsibilities, take it very seriously. Customers and users are entrusting us with their personal data, and that comes with a lot of responsibility. I urge every leader, engineer, and technologist out there to take it very seriously. Thank you!

Continued here:

Asia Pacific will lead the new wave of transformation in data innovation: Nium's CTO Ramana Satyavarapu - ETCIO South East Asia

Hut 8 Mining Production and Operations Update for August 2022 – Yahoo Finance

375 Bitcoin mined, bringing reserves to 8,111

TORONTO, Sept. 6, 2022 /CNW/ - Hut 8 Mining Corp. (Nasdaq:HUT) (TSX:HUT), ("Hut 8" or the "Company"), one of North America's largest, innovation-focused digital asset mining pioneers and high performance computing infrastructure providers, increased its Bitcoin holdings by 375 in the period ending August 31, bringing its total self-mined holdings to 8,111 Bitcoin.


Production highlights forAugust2022:

375 Bitcoin were generated, resulting in an average production rate of approximately 12.1 Bitcoin per day.

Keeping with our longstanding HODL strategy, 100% of the self-mined Bitcoin in August were deposited into custody.

Total Bitcoin balance held in reserve is 8,111 as of August 31, 2022.

Installed ASIC hash rate capacity was 2.98 EH/s at the end of the month, which excludes certain legacy miners that the Company anticipates will be fully replaced by the end of the year.

Hut 8 produced 125.8 BTC/EH in August.

Additional updates:

In late August, Hut 8 installed 180 NVIDIA GPUs in its flagship data centre in Kelowna, B.C. Currently mining Ethereum, the multi-workload machines will be designed to pivot on demand to provide Artificial Intelligence, Machine Learning, or VFX rendering services to customers.

Hut 8 is partnering with Zenlayer to bring their on-demand high-performance computing to Canadian Web 3.0 and blockchain customers for the first time.

"Our team delivered very strong results across our mining and high performance infrastructure businesses in August, positioning us well for continued success," saidJaime Leverton, CEO. "We continue to receive and install our monthly shipments of new MicroBT miners on time, while actively adding to the suite of services we offer our data centre customers."

About Hut 8

Hut 8 is one of North America's largest innovation-focused digital asset miners, led by a team of business-building technologists, bullish on bitcoin, blockchain, Web 3.0, and bridging the nascent and traditional high performance computing worlds. With two digital asset mining sites located in Southern Alberta and a third site in North Bay, Ontario, all located in Canada, Hut 8 has one of the highest capacity rates in the industry and one of the highest inventories of self-mined Bitcoin of any crypto miner or publicly-traded company globally. With 36,000 square feet of geo-diverse data centre space and cloud capacity connected to electrical grids powered by significant renewables and emission-free resources, Hut 8 is revolutionizing conventional assets to create the first hybrid data centre model that serves both the traditional high performance compute (Web 2.0) and nascent digital asset computing sectors, blockchain gaming, and Web 3.0. Hut 8 was the first Canadian digital asset miner to list on the Nasdaq Global Select Market. Through innovation, imagination, and passion, Hut 8 is helping to define the digital asset revolution to create value and positive impacts for its shareholders and generations to come.


Cautionary Note Regarding ForwardLooking Information

This press release includes "forward-looking information" and "forward-looking statements" within the meaning of Canadian securities laws and United States securities laws, respectively (collectively, "forward-looking information"). All information, other than statements of historical facts, included in this press release that address activities, events or developments that the Company expects or anticipates will or may occur in the future, including such things as future business strategy, competitive strengths, goals, expansion and growth of the Company's businesses, operations, plans and other such matters is forward-looking information. Forward-looking information is often identified by the words "may", "would", "could", "should", "will", "intend", "plan", "anticipate", "allow", "believe", "estimate", "expect", "predict", "can", "might", "potential", "predict", "is designed to", "likely" or similar expressions. In addition, any statements in this press release that refer to expectations, projections or other characterizations of future events or circumstances contain forward-looking information and include, among others, statements regarding: Bitcoin and Ethereum network dynamics; the Company's ability to advance its longstanding HODL strategy; the Company's ability to produce additional Bitcoin and maintain existing rates of productivity at all sites; the Company's ability to deploy additional miners; the Company's ability to continue mining digital assets efficiently; the Company's expected recurring revenue and growth rate from its high performance computing business; and the Company's ability to successfully navigate the current market.

Statements containing forward-looking information are not historical facts, but instead represent management's expectations, estimates and projections regarding future events based on certain material factors and assumptions at the time the statement was made. While considered reasonable by Hut 8 as of the date of this press release, such statements are subject to known and unknown risks, uncertainties, assumptions and other factors that may cause the actual results, level of activity, performance or achievements to be materially different from those expressed or implied by such forward-looking information, including but not limited to, security and cybersecurity threats and hacks, malicious actors or botnet obtaining control of processing power on the Bitcoin or Ethereum network, further development and acceptance of Bitcoin and Ethereum networks, changes to Bitcoin or Ethereum mining difficulty, loss or destruction of private keys, increases in fees for recording transactions in the Blockchain, erroneous transactions, reliance on a limited number of key employees, reliance on third party mining pool service providers, regulatory changes, classification and tax changes, momentum pricing risk, fraud and failure related to cryptocurrency exchanges, difficulty in obtaining banking services and financing, difficulty in obtaining insurance, permits and licenses, internet and power disruptions, geopolitical events, uncertainty in the development of cryptographic and algorithmic protocols, uncertainty about the acceptance or widespread use of cryptocurrency, failure to anticipate technology innovations, the COVID-19 pandemic, climate change, currency risk, lending risk and recovery of potential losses, litigation risk, business integration risk, changes in market demand, changes in network and infrastructure, system interruption, changes in leasing arrangements, and other risks related to the cryptocurrency and data centre business. For a complete list of the factors that could affect the Company, please see the "Risk Factors" section of the Company's Annual Information Form dated March 17, 2022, and Hut 8's other continuous disclosure documents which are available on the Company's profile on the System for Electronic Document Analysis and Retrieval at http://www.sedar.com and on the EDGAR section of the U.S. Securities and Exchange Commission's website at http://www.sec.gov.

These factors are not intended to represent a complete list of the factors that could affect Hut 8; however, these factors should be considered carefully. There can be no assurance that such estimates and assumptions will prove to be correct. Should one or more of these risks or uncertainties materialize, or should assumptions underlying the forward-looking statements prove incorrect, actual results may vary materially from those described in this press release as intended, planned, anticipated, believed, sought, proposed, estimated, forecasted, expected, projected or targeted and such forward-looking statements included in this press release should not be unduly relied upon. The impact of any one assumption, risk, uncertainty, or other factor on a particular forward-looking statement cannot be determined with certainty because they are interdependent and Hut 8's future decisions and actions will depend on management's assessment of all information at the relevant time. The forward-looking statements contained in this press release are made as of the date of this press release, and Hut 8 expressly disclaims any obligation to update or alter statements containing any forward-looking information, or the factors or assumptions underlying them, whether as a result of new information, future events or otherwise, except as required by law.


View original content to download multimedia:https://www.prnewswire.com/news-releases/hut-8-mining-production-and-operations-update-for-august-2022-301617489.html

SOURCE Hut 8 Mining Corp


View original content to download multimedia: http://www.newswire.ca/en/releases/archive/September2022/06/c8714.html

Read this article:

Hut 8 Mining Production and Operations Update for August 2022 - Yahoo Finance

Ethereum Miners Eye Cloud, AI To Repurpose Equipment That The Merge Will Make Obsolete – Forbes


Crypto miners are turning to cloud computing and artificial intelligence as Ethereum begins the so-called Merge without incident.

The blockchain's switch to a proof-of-stake model from proof-of-work will make it much more energy efficient, an important catalyst for the struggling crypto industry but one that will leave the miners who validated transactions and created new coins holding a lot of specialized computer gear that will no longer be useful in creating the No. 2 cryptocurrency.

HIVE Blockchain (Nasdaq: HIVE) said Tuesday it has a pilot project for testing a portion of its mining equipment in cloud computing at a Tier 3 data center. Such centers have multiple sources of power and cooling systems and do not require a total shutdown during maintenance or equipment replacement.

The Vancouver, Canada-based company has been using a range of Nvidia's flagship graphics processing units (GPUs) to mine ether, Ethereum's native cryptocurrency, but these GPUs, capable of supporting large data sets, can also be used for purposes including AI acceleration and virtual simulations, according to Nvidia. HIVE said it does not own Nvidia's special-purpose CMP GPUs, which are limited to cryptocurrency mining. The miner said it produced 3,010 ETH in August, worth nearly $5 million, but sold its holdings of the currency to fund the expansion of its bitcoin mining operations.

Similarly, another Canadian crypto miner, Hut 8 Mining (Nasdaq: HUT), announced that in late August it had installed 180 NVIDIA GPUs in its data center in Kelowna, British Columbia to repurpose these machines for providing artificial intelligence, machine learning, or VFX rendering services to customers on demand.

Falling profits against the backdrop of plummeting crypto prices put publicly traded miners like HIVE and Hut 8 in a squeeze, pushing their shares down by more than 60% this year.

Stocks of publicly traded crypto miners have taken a big hit this year

The miners have also been beset by the lack of post-Merge alternatives. According to crypto intelligence firm Messari, Ethereum miners generated nearly $19 billion in revenue in 2021. To replace the lost revenue, some companies, including HIVE, said they would consider mining alternative proof-of-work coins such as Ethereum Classic; however, the market capitalization of these cryptocurrencies is less than 5% of Ethereum's $194 billion.

"The likely outcome of a successful Merge is that GPUs will flood the resale market as alternative proof-of-work coins will only remain profitable for a small number of miners with access to cheap energy," writes Messari analyst Sami Kassab. "Miners willing to invest the time and additional capital will be able to transition into high-performance data centers or node operators/providers for Web3 compute protocols, both rapidly growing markets."

Further reading:

Ethereum Miners Will Have Few Good Options After The Merge

First Phase Of Ethereum Merge, Biggest Thing In Crypto Since Bitcoin, Goes Live

Ether Prices Rally As Bellatrix Upgrade Moves Ethereum One Step Closer To The Merge

Link:

Ethereum Miners Eye Cloud, AI To Repurpose Equipment That The Merge Will Make Obsolete - Forbes

Tax digitalisation: Not the future, but the present – International Tax Review

Tax digitalisation should not only be linked to digital service tax, digital permanent establishments, the taxation of crypto assets, or even OECD pillars one and two discussions. In fact, the first consequence of widespread tax digitalisation is the imposition of new policy standards that allow the adaptation of legislation to the possibility of automation and machine reading instruments (ensuring the creation and validation of legal metadata).

Moreover, the design of a data strategy at international, national, or company level may lead us to a scenario where we can use specific modelling tools and select the most useful technologies for each tax problem (under EU or OECD guidelines, for example).

This relationship between tax and technology is not a novelty. However, the recent increase in data availability has allowed for a paradigm shift, considering that for the usage of technology, proper management of data is first required. In fact, what we need is qualitative, raw, and structured data. Data mining and quality issues will follow as part of a data awareness culture.

This data-centric rationale reflects the pre-eminent role and data monopoly that the tax authorities have these days.

Take the example of Portugal, where we have more than 60 ancillary obligations to be filed periodically, not counting tax returns themselves (and on top of automatic exchange of information, fulfilment of the Common Reporting Standard requirements, and financial information exchange). Until this massive data lake is made public, the imbalanced and unfair relationship between the tax authorities and taxpayers will remain untouched.

On the one hand, the tax community tends to think about digitalisation, robotic process automation, artificial intelligence (AI), and machine learning, among others, as if these soundbites were equivalent. On the other hand, we tend to believe in digitalisation as a science fiction topic that will, sooner rather than later, replace all our human work. After all, if the Deep Blue chess computer beat Garry Kasparov in 1997, what can a computer do in 2022?

In fact, machines (or bots, as they are now named) cannot do a lot. Set your brilliant tax minds at rest: we will not be replaced soon, but we do need to adapt.

What machines are already expert at is performing countless repetitions of binary tasks (Boolean-valued functions). Thus, it seems advisable to request them to act as supplementary tools in research, HR, filing returns, compliance, etc. Only then, after evolving to a development phase of data mining, may we arrive at a level of legal maturity that still falls well short of what science fiction commands in our imagination.

Nevertheless, a degree of opacity still associated with some AI models is one of the biggest obstacles to their growth. For instance, requiring visibility and auditability of the algorithms ruling tax IT software may become a new international standard.

Adding to the above, merely data-driven solutions may work fine as well, provided we ensure explainable design thinking (using decision trees) and transparent solutions as the modern preservation of taxpayers' rights. This is not the future but the present: the need for translators between IT and tax is creating a new role for tax advisers.
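
As a small illustration of that "explainable by design" point, a shallow decision tree can be trained and its rules printed for audit; the tax-flavoured features and labels below are entirely invented.

```python
# Sketch of an auditable, rule-based model: a small decision tree whose rules
# can be printed and reviewed, rather than an opaque model. The tax features
# and labels here are invented for illustration.
from sklearn.tree import DecisionTreeClassifier, export_text

# Toy features: [annual_turnover_keur, cross_border_sales_pct]
X = [[50, 0], [400, 10], [1200, 60], [90, 5], [2500, 80], [700, 30]]
y = [0, 0, 1, 0, 1, 1]   # 1 = flag for additional VAT review (made up)

tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)
print(export_text(tree, feature_names=["turnover_keur", "cross_border_pct"]))
```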

The future is already here

The second big boost in technology democratisation is the low code/no code basic idea, even having features available based on Microsoft Power Apps, with interfaces for commonly used software, and avoiding overengineering the systems (with affordable IT solutions). This allows, for instance, dashboarding from country-by-country reporting to profit and loss clustering, litigation screening, and so many others.

The combination of low code/no code tools with specialised sectorial tools as well as enterprise resource planning (ERP) system integration leads tech-driven companies to a different level of real-time controlling and proactive tax strategy and vision.

There are already quite a few tax/legal start-ups that are booming, such as Blue J Legal, Do Not Pay, Jurimetra, Codex Stanford, E-clear, Taxdoo, WTS Global, Summitto, and Luminance. They are already out there pushing taxation to the boundaries of the current technical limitations.

All these examples, from judicial analytics to decision tree implementation, to machine learning and some AI components governing transfer pricing matters, teach us that in each tax problem there is an opportunity to model a process, improve it, and automate it.

Although it is true that Tim Berners-Lee's initial idea of the Semantic Web, or Web 3.0, did not flourish, easy communication among different profiles and the agile use of the JSON-LD language (as an example) have allowed significant developments by the biggest players in the world (Google, for example) that will sooner or later extend to tax-related domains (while in Portugal, contact with the tax authorities is mainly covered by XML format files).

Furthermore, there are several use cases from a public sector perspective; take VAT and customs matters as examples. The tax authorities are effectively using machine learning technology for anomaly detection through mirror analysis (cross checking import declarations with export declarations) or real-time processing of VAT inputs to speed up refunds or pre-filled-in VAT returns. Not to mention the chatbots introduced across public authorities to respond to basic queries from taxpayers.
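
Mirror analysis of the kind described above can be sketched very simply: join each import declaration to the counterparty's export declaration and flag large gaps. The data and threshold below are invented.

```python
# Simplified sketch of "mirror analysis": cross-check declared import values
# against the counterparty's export declarations and flag large gaps.
# Data and threshold are invented.
import pandas as pd

imports = pd.DataFrame({
    "shipment_id": ["A1", "A2", "A3"],
    "declared_import_value": [10_000, 55_000, 7_500],
})
exports = pd.DataFrame({
    "shipment_id": ["A1", "A2", "A3"],
    "declared_export_value": [10_200, 90_000, 7_400],
})

mirror = imports.merge(exports, on="shipment_id")
mirror["gap_pct"] = (
    (mirror["declared_export_value"] - mirror["declared_import_value"]).abs()
    / mirror["declared_export_value"]
)
print(mirror[mirror["gap_pct"] > 0.20])   # flag anomalies above a 20% gap
```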

It all boils down to data governance and data awareness as an international standard. Imagine a brave new world where policy options need to be sustained by data, contributing to tax transparency as well as measurement of the economic impact of the options. Thus, technology is able to serve public policy options and tax collection, and is available to tax professionals and suitable to be adapted and enhanced by market needs.

And so, it is the case, my fellow tax professionals: ask not what technology can do for you, ask what you can do for technology!

Continue reading here:

Tax digitalisation: Not the future, but the present - International Tax Review

Why data and technology will be crucial for the UK's new leader – Global Banking And Finance Review

By Jason Foster, CEO and Founder, Cynozure

According to the European Commission, the EU and UK data economy could hit €1tn by 2025. For context, that's bigger than Tesla or Facebook's owner, Meta Platforms. Clearly, there is a huge opportunity for companies and governments to capitalise on the ever-increasing amount of data they hold.

As the world rebuilds from the pandemic, we have a unique opportunity to harness the power of data to drive economic growth. Given rising inflation and the Bank of England's warning that the UK will enter recession later this year, it's imperative that data becomes increasingly central to government decision making.

Learning from crisis

The pandemic offers a clear example of the benefits of a data-led response. Without the accurate use of data, the global response to the pandemic would have been far less aligned, slower, and less agile and, most importantly, less effective.

Without meaningful data insights, could governments have accurately tracked infection hotspots in real-time or rapidly introduced measures to protect communities and save lives? I think not, and certainly not to the same speed and accuracy. Of course, data was also crucial for developing and administering the vaccines. Put simply, the effective use of data was central to the global pandemic response, and it will be vital to economic recovery.

The pandemic also had a wider social impact in terms of digital transformation. Suddenly, the entire UK had to move to digital channels to work, shop, and socialise that was the case for individuals, businesses, and government.

Things have changed forever, and society has now fully embraced the digital transformation. Data management is central to this: with an increased online footprint, there's an ever-growing volume of data that can be utilised in countless ways to support businesses, boost the economy, and help the UK emerge from the pandemic in the strongest possible position.

Taking lessons from business

Whilst data usage in government decision-making processes has undeniably improved, there is always scope for further positive development.

In business, almost 40% of CEOs plan to invest in data over the next three years, with 70% expecting this investment to have a large impact on their bottom line. Whilst governments may not be profit-driven in the same way as corporates, investing in data boosts efficiency and maximises effectiveness.

The data industry can be hugely valuable to the UK economy more broadly, but the government must act quickly if it is to take full advantage of the rapidly growing market and cement the UK's position as a global leader.

Laying the groundwork for success

What tools or support are therefore needed to help promote the use of data and allow the wider data economy to thrive?

Embracing the power of data in a positive way can be a force for good for government and businesses of all sizes in all sectors. However, it requires a data-literate leader to drive this shift. Many leaders are now recognising that a business strategy isn't complete without a comprehensive data strategy.

This doesn't mean reinventing the wheel: policymakers can use proven standards and best practice when defining and delivering strategy, which will help them to get a running start, as well as giving them time to upskill or hire in new talent to manage data programmes when necessary.

There is also the question of trust. Many consumers still perceive the issue of data, be that sharing, or how their data is stored by governments and businesses, with scepticism. The future leader must face this head on and take measures to reassure consumers that their data is safe and won't be weaponised, as we have seen in the past with election scandals.

In terms of tangible steps to be taken, this may mean establishing clear guidelines about what is allowed and not when it comes to topics like data mining and AI deployment. Ethics are vital, and the full potential of data will only be realised when the public trust how their data is being used.

Steps in the right direction but is it enough?

Government initiatives are key: the Department for Digital, Culture, Media and Sport recently launched a competition with a £12m Digital Growth Grant to deliver a new digital and tech sector support programme aimed at scaling tech companies. Similar programmes are needed to support the data revolution.

The Data Protection and Digital Information Bill is a welcome step that aims to remove some regulatory barriers and support data-focused innovation. However, whether the bill goes far enough remains to be seen.

There is also the issue of how aligned the UK will be with the EU once legislation is enforced. Many are concerned that the reforms will diverge too far from the EU's GDPR standards and actually increase the regulatory burden placed upon the data economy, ultimately curtailing growth. This is a hugely important issue that government needs to resolve quickly.

As with any innovation, collaboration is vital. Again, we can learn lessons from the pandemic: with effective data-sharing beyond borders, the global community was able to work together to manage the risks posed by Covid and to minimise the risks of infection. This data-sharing approach is crucial if governments are going to effectively introduce data strategies.

Working in isolation is not an option when it comes to data. It requires collaboration between governments, citizens, businesses, and technology vendors to develop and road test policies, strategies, and plans about how data is captured, stored, and used.

If the UK is to have the best chance of success, it's imperative that our new leader is aware of this fact and is willing and open to leading positive change through data.

More:

Why data and technology will be crucial for the UK's new leader - Global Banking And Finance Review

Ethereum Mining and PoS Activities Are ‘Prohibited’ Says Data Provider – BeInCrypto

As Ethereum (ETH) approaches the Merge, the debate over its shift to a proof-of-stake (PoS) network appears to be growing. Many people have expressed their worries regarding the current centralization of Ethereum's validator nodes, claiming that moving to a PoS system would worsen matters.

This has led to rising concerns among crypto investors that the current proof-of-work (PoW) system might not be as secure as it should be, and that a switch to PoS would mean that a single entity controlling a majority of the stake could mount a 51% attack on the network.

For years, the switch to proof of stake for Ethereum has been delayed. "We thought it would take one year to put PoS in place, but it's taken approximately six years," Ethereum co-founder Vitalik Buterin stated.

According to a tweet by Maggie Love, the co-founder of Web Cloud, Ethereum cannot be decentralized if the stack is not decentralized. She points to the fact that 69% of ETH mainnet nodes run on hosted services, with over 50% of those on Amazon Web Services (AWS), over 15% on Hetzner, and 4.1% on OVH.

In anticipation of the upcoming Merge, various platforms that use the Ethereum blockchain have announced their contingency plans. Under the PoS system, crypto investors stake a specific quantity of their cryptocurrency to secure the network, rather than utilizing large amounts of electricity to generate more cryptocurrency.

Even if none of these issues come up, the future of stablecoins still represents a major challenge for the decentralized finance (DeFi) sector. With centralized stablecoins dominating decentralized protocols, many DeFi projects have been considering algorithmic stablecoins. But there are still potential regulations for stablecoins that could impact DeFi after the Ethereum merge.

The stablecoin market is huge, with more than a $100 billion market cap, and its use on public blockchains like Ethereum has grown significantly, making them integral to DeFi operations. But as the Ethereum network approaches the merge, the stability of these assets becomes more important than ever.

The Ethereum Merge, set for September 15th, could possibly affect the stability of digital assets that are pegged to real-world currencies. Most DeFi applications could be hosted on the Ethereum blockchain after it switches from its current PoW consensus mechanism to PoS. The upcoming Ethereum fork to replace the PoW consensus system with a PoS one is expected to speed up the development of Ethereum toward institutional-grade investment.


Disclaimer: All the information contained on our website is published in good faith and for general information purposes only. Any action the reader takes upon the information found on our website is strictly at their own risk.

Read the original here:

Ethereum Mining and PoS Activities Are 'Prohibited' Says Data Provider - BeInCrypto