
Iran’s government blames smog and massive blackouts on bitcoin mining operations in its country – Boing Boing

Iran is experiencing wide-scale blackouts and unhealthy levels of smog. The government says bitcoin mining operations, which consume a great amount of electricity, are to blame. The government has already shut down at least one licensed mining operation and is now going after rogue mining operations. People running the bitcoin mines in Iran say the government is simply blaming the miners for the problems and that the cause of the blackouts and smog lies elsewhere.

From The Washington Post:

"The miners have nothing to do with the blackouts," Ziya Sadr, a cryptocurrency researcher in Tehran, told The Washington Post. "Mining is a very small percentage of the overall electricity capacity in Iran."

He added, "It is a known fact that the mismanagement and the very terrible situation of the electricity grid in Iran and the outdated equipment of power plants in Iran can't support the grid."

The government itself has pointed to cheap electricity rates, enabled by government subsidies, as another major cause of the blackouts. A member of the board of the Iranian Blockchain Association told IRNA that the electricity used by cybercurrency miners in Iran was estimated to be about equal to the electricity lost by the network during distribution.

One quibble with The Washington Post piece. It states, "Decentralized cryptocurrencies rely on high-powered computers to verify that transactions are legitimate by solving complicated mathematical problems." That's incorrect on two counts. First, transactions are not verified by solving complicated mathematical problems. Miners compete for rewards by trying to be the first to guess a random number. There's nothing complicated about guessing it. And second, cryptocurrencies do not "rely on high-powered computers to verify that transactions are legitimate." The reason miners use high-powered computers is that the miners are competing with each other to make as many guesses as possible, and the more powerful their computers, the faster they can make guesses. But if 99.9% of all the computing power devoted to mining were shut down, the remaining miners would still be able to verify the transactions at the same rate they are verified now, because the bitcoin network automatically adjusts the difficulty of guessing the random number to match the computing power devoted to guessing it.
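The self-correction described above can be sketched in a few lines. The constants 2016 and 600 are Bitcoin's real retargeting parameters, and the clamp-to-4x rule mirrors its consensus behavior, but this is a simplified model rather than the actual implementation:

```python
# A minimal sketch (not Bitcoin's actual code) of proof-of-work difficulty
# retargeting. Bitcoin recomputes difficulty every 2016 blocks so that blocks
# keep arriving roughly every 10 minutes, whatever the total hash power.

TARGET_BLOCK_TIME_S = 600    # desired seconds per block
RETARGET_INTERVAL = 2016     # blocks between adjustments

def adjust_difficulty(old_difficulty: float, actual_span_s: float) -> float:
    """Scale difficulty by how fast the last 2016 blocks were mined.

    If blocks came twice as fast as intended, difficulty doubles; if most
    hash power disappears and blocks slow down, difficulty falls until the
    10-minute cadence is restored. Bitcoin clamps each step to a factor of 4.
    """
    expected_span_s = TARGET_BLOCK_TIME_S * RETARGET_INTERVAL
    ratio = expected_span_s / actual_span_s
    ratio = max(0.25, min(4.0, ratio))
    return old_difficulty * ratio

# Suppose most mining shut down and the last 2016 blocks took 4x too long:
print(adjust_difficulty(1000.0, 4 * 600 * 2016))  # prints 250.0
```

This is why the block-verification rate is independent of total hash power: the network keeps lowering (or raising) the guessing difficulty until the same cadence is restored.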

Continue reading here:

Iran's government blames smog and massive blackouts on bitcoin mining operations in its country - Boing Boing


How Data in India Went From Being a Tool of Economic Planning to Big Data Aggregation – The Wire

The following is an excerpt from Lives of Data: Essays on Computational Cultures from India, edited by Sandeep Mertia, foreword by Ravi Sundaram, Institute of Network Cultures (Amsterdam, 2020).

This book is published under the Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International (CC BY-NC-SA 4.0) license and is available here.

It is not difficult to see what is wrong with official statistics in India. There is a gap between theory and practice. There is a gap between the means and the end in the absence of any clearly perceived purpose.

~ P. C. Mahalanobis, Statistics as Key Technology, 1965

Data is its own means. It is an unlimited non-rivalrous resource. Yet, it isn't shared freely. What began as a differentiator is now the model itself.

~ Nandan Nilekani, Why India needs to be a Data Democracy, 2017

Data shadows our situation. Many believe it can determine our situation. There were enthusiastic claims that Big Data would lead to a fourth industrial revolution and the end of Theory, and that it will transform how we live, work, and think. Arguably, much of the early 2010s hype around the big data revolution has already been replugged into popular narratives of artificial intelligence (AI). The media infrastructures that enliven digital data and the fast-moving claims of data revolution are now evidently more globalized and capitalized than ever before. If we look a little under the hood, techniques such as data mining have moved from the margins of techno-scientific practice to normative centers of global computing in less than two decades. How did data become so powerful, pervasive, and relatable in the first place? To understand the global momentum of the data revolution, it is crucial to inquire into the many lineages, affinities, and relations of data in context-sensitive ways.

Lives of Data: Essays on Computational Cultures from India, ed. Sandeep Mertia, Institute of Network Cultures (Amsterdam, 2020)

Data Revolution(s) in Context

The contrast between the two epigraphs above is a good place to begin tracking lives of data. The first epigraph is from a lecture in 1965 at the 125th Annual Meeting of the American Statistical Association by P. C. Mahalanobis, founder of the Indian Statistical Institute (ISI) and a member of the Planning Commission, a powerful body at that time. In this lecture, he emphasized the need to establish a purposive view of statistics as a fully developed technology of a multi-discipline character. This was especially so in the underdeveloped countries where the principle of authority of the government reigned supreme over independent statistical analysis and interpretation. Mahalanobis made these observations at a time when the ISI and India's official statistics and economic planning system were receiving global recognition for pioneering work in research, training, sample-survey methods, and economic planning. He clearly placed statistical knowledge production in the service of postcolonial nation-building. The desire to perceive a clearly defined purpose when the ISI was already at the cutting edge of large-scale data collection and processing stands in puzzling contrast to contemporary modes of data-driven governance which claim data is its own means.

The second epigraph is from an opinion piece by Nandan Nilekani, co-founder of Infosys and founding Chairman of the Unique Identification Authority of India (UIDAI), the government body responsible for the world's largest biometric database, Aadhaar. In this article he argues for the value of big data and artificial intelligence for disrupting existing patterns of information management, and cautions against data colonization by state and global platforms. It is important to note that what we now know as Aadhaar actually began in 1999 as an identity card project for citizens living in border states. The Rangarajan Commission, set up in January 2000 to look into the growing concern regarding the quality of data in the entire statistical system, recommended the creation of a centralized database of citizens (population register) in which every citizen would have a unique identification number. Within a few years of the UIDAI being set up in 2009, Aadhaar became a primary key linking databases of bank accounts, mobile phones, income tax returns, payment apps, email IDs, and so on, even if such a linking is not mandated by the law. Aadhaar has afforded development of application programming interfaces (APIs), and web and mobile applications with payment interfaces demanding Aadhaar verification for government and private services across domains. Perhaps nobody in 2009 could have imagined connecting biometric data to mobile phone SIM cards. Anumeha Yadav (Chapter 7) draws on her detailed field reports to show how the project grew from select pilot implementation in 2011 to a national legal and policy imperative by 2017. She notes a growing public alertness to the importance of enrolling with Aadhaar to ensure the ratification of rights, irrespective of the unclear legal status and the widespread technological glitches in the everyday functioning of the project.
The story of Aadhaar raises questions about what counts as data, who can design its purposes, and how its means and ends are discovered. It is a story that is at once expansionist and contingent: in India, the evolution of Aadhaar indicates that we need to reflect on computational culture without prefiguring the object of computation and its potential relationship to taxonomies of social control.

To understand the shift that has taken place between the data in the mid-20th-century statistical regime of economic planning and big data aggregation and prediction in the contemporary, we need to re-examine the history of computing in India, which has been largely tethered to the IT revolution. We examine different techniques and affordances of computation in different media ecologies consisting of human computers and mass-media such as telecom in the decades before the emergence of the internet. In Chapter 1, I explore the role of the first computers of India, both human and electronic, from the 1930s to 1960s in generating official statistics. In Chapter 2, Karl Mendonca analyses the role of computerization in the 1980s at a major advertising company involved in the cinema business, and how the company later repurposed its cinema distribution network into a courier company. In different ways, both chapters challenge the notion of a clear and stable rationale for the evolution of computers and big data.

It was not until the early 2000s that database practitioners began to seriously look at data mining as a mode of knowledge production. New concepts of scale and computational processing power emerged and developed through trade-offs and reconfigurations of statistical accuracy, localized data storage and retrievability, hardware and software load balancing, and electricity consumption. Of particular importance was the shift from relational (structured design) to non-relational (distributed design) database management systems. Here, we must not forget the co-production of affordances, users, and publics. After all, a computer database is only one specific instance of a wider set of relationalities made durable by the thoroughly material and well-constructed craft of software engineering, even if it is widely imagined to be abstract and mystical. In the Indian context, while the IT industry has become symbolic of a new middle-class imaginary of technology and social mobility, the epistemic cultures of software engineering and their relations with global developments are yet to be adequately unpacked. We do not know how India's political and infrastructural conditions affect Aadhaar's database design or the development of high energy-consuming data centers for data sovereignty, to name but two examples.

In a post-colony like India, any critical engagement with data-driven knowledge production has to consider the persistent role of colonial biopolitics. It is well established that statistics, formerly termed political arithmetic, have played a key role in the production of people, identity, and nation-states. From the construction of enlightenment ideas such as the individual, national populations in Europe, and the citizen in the USA, the intended and unintended consequences of counting and categorizing people run far and wide. European colonies became sites for exotic and imperious enumerative and classificatory systems framed by orientalist pedagogies that displaced and serialized existing social orders. From the inventions of fingerprinting and the enumeration of complex traditions of faith and social difference into the fixities of religious identity and objectification of caste, such a biopolitics sought to make populations knowable and governable.

Post-independence India saw an expansion of bureaucracy, official statistics, and planning. Subsequently, government and transnational businesses used data modelling of the economy and populations to understand citizenship entitlements and consumer profiles. The intersections of state and market interests after economic liberalization in 1991 transformed the national political economy as well as the everyday cultural conditions of governance. In particular, the entry of private digital technology vendors and consultants in state and international development projects afforded new means and incentives for collecting and analyzing data. Supporters of the Aadhaar project often claim that the state is a much more benign collector of data than companies such as Google and Facebook. Putting questions of veracity aside, the implications of this distinction are suggestive. The purported commensurability between data imaginaries and practices of India's welfare state and those of big technology companies widens the scope of inquiry into the politics of data-driven governance and bureaucracy. From state-owned biometrics to state-promoted transnational mobile apps, the contemporary (surveillance-friendly) road between the ideology of the state and that of popular digital media is punctuated by diverse and distributed data-driven pathways.

Representative image of an Aadhaar card. Photo: PTI

At one level, the shift from colonial fingerprinting to contemporary biometric technologies shows some continuity in terms of tactics of governance and subjectification of bodies. If we look closely though, the machinic-readability of fingerprints opens new analytical challenges for theorizing governmentality. The contemporary modes of data-driven subjectification are deeply entangled with proliferation of digital technologies of identification in governance, finance, media, and consumer products across developmental and business models. How can we map this expansion and proliferation in sociotechnically specific ways? From navigating the nudge marketing of discount codes on mobile payment apps to facing new determinations of citizenship and identity through myriad paper-based and digital documents, among other things, the emergent mutations of power, subjectivity, and data demand a closer look into the design and material form of media. This is particularly challenging in conditions of fragmented digital infrastructures, where diverse intermedial forms emerge and coalesce in everyday practices for bypassing the lack of end-to-end connectivity and formal access.

Sandeep Mertia is a PhD Candidate at the Department of Media, Culture, and Communication, and Urban Doctoral Fellow at New York University. He is an ICT engineer by training, and former Research Associate at The Sarai Programme, Centre for the Study of Developing Societies.

Lives of Data emerged from research projects and workshops at the Sarai programme, Centre for the Study of Developing Societies. It seeks to better understand the status of data objects, relationalities, and difference in computational cultures. A critical focus on India necessitates pluralistic vantage points for examining the contemporary global discourse of data revolution in relation to the enduring legacies of colonialism and 20th-century modernisation programs. From state-supported technological boosterism of its digital superpower status to everyday lives of over a billion people in one of the most diverse and unequal societies in the world, India's sociotechnical conditions assemble deeply contrasting lives of data.

This collection of essays features a diverse group of interdisciplinary scholars and practitioners, engaging the emergence, limits, potentialities, politics, practices, and consequences of data-driven knowledge production and circulation. Encompassing history, anthropology, science and technology studies (STS), media studies, civic technology, data science, digital humanities, and journalism, the essays open up possibilities for a truly situated global and sociotechnically specific understanding of data, computing, and society.

See the original post:

How Data in India Went From Being a Tool of Economic Planning to Big Data Aggregation - The Wire


Theia 456 is a stretched-out stream of sibling stars – EarthSky

Illustration depicting star streams in the Milky Way (not the Theia streams) via NASA/ Northwestern University.

Astronomers have identified many more or less spherical clumps of stars born together and still traveling together through space. We call them open star clusters; many are well known as beautiful places in the sky to see through binoculars or a small telescope. On January 15, 2021, at a virtual session during this week's meeting of the American Astronomical Society, astronomers presented new research on a different sort of collection of sibling stars. Theia 456 isn't an open cluster. It's what's called a stellar stream, a group of stars stretched out linearly, in this case over some 500 light-years. The astronomers studying Theia 456 combined multiple datasets, including those captured by ESA's Gaia satellite, which is carefully tracking the positions (and hence movements) of over a billion stars over a five-year period. They found that, despite its stretched-out shape, all of Theia 456's 468 stars are indeed siblings, born at the same time and traveling in the same direction across the sky.

That means our understanding of how sibling stars can exist with one another, within the confines of our Milky Way galaxy, is evolving.


Jeff Andrews of Northwestern University is a member of the research team and the presenter of the new information about Theia 456 at the AAS meeting last week. He said in a statement:

Most stellar clusters are formed together. What's exciting about Theia 456 is that it's not a small clump of stars together. It's long and stretched out. There are relatively few streams that are nearby, young and so widely dispersed.

Here's an ordinary open star cluster, what we typically think of when speaking of sibling stars. This cluster is called the Pleiades. Its stars were born from a single cloud of gas; you can see here that they're still surrounded by a veil of nebulosity. Gazing at the sky with just your eye, you can easily glimpse the tiny, distinct dipper-like shape of the Pleiades. Star streams can't be seen in that same sense. They're found by mining data from spacecraft like Gaia. Astronomer Fred Espenak captured this photo in 2018. Read more about open star clusters like the Pleiades.

Researchers believe long strings of stars started out as tight open clusters. They believe these particular clusters were gradually ripped apart and stretched out into long streams by tidal forces within the Milky Way. Andrews said:

As we've started to become more advanced in our instrumentation, our technology and our ability to mine data, we've found that stars exist in more structures than clumps. They often form these streams across the sky. Although we've known about these for decades, we're starting to find hidden ones.

In other words, until recently, most known stellar streams didn't consist of young, sibling stars orbiting in the flat disk or plane of our Milky Way galaxy. A look at a short list of known stellar streams in the Milky Way on Wikipedia, for example, shows their origins as globular star clusters (old, symmetrical clusters found in our galaxy's halo) or dwarf galaxies orbiting outside the Milky Way's flat plane. These sorts of stellar streams were discovered by telescopes pointed above or below the Milky Way's plane.

Theia 456 does dwell within the Milky Way's flat plane or disk. These astronomers referred to it as hidden because it's easily lost within the galaxy's backdrop of 400 billion disk stars. Andrews commented:

We tend to focus our telescopes in other directions because it's easier to find things. Now we're starting to find these streams in the [disk of the] galaxy itself. It's like finding a needle in a haystack. Or, in this case, finding a ripple in an ocean.

Theia 456 isn't the only one of its kind, although it may be the only one so far that's been studied so extensively. In fact, these astronomers said:

The Milky Way houses 8,292 recently discovered stellar streams, all named Theia.

Here's a schematic of our Milky Way galaxy. Most known stellar streams lie outside the galaxy's flat disk. Theia 456 lies within the disk of the galaxy. Image via Cosmos.

These astronomers' statement explained:

Identifying and examining these structures is a data science challenge. Artificial intelligence algorithms combed huge datasets of stellar data in order to find these structures. Then Andrews developed algorithms to cross-reference those data with preexisting catalogs of documented stars' iron abundances.

Andrews and his team found that the 468 stars within Theia 456 had similar iron abundances, which means the stars likely formed together about 100 million years ago.

They also found that the stars in Theia 456 are moving together in the same direction. Andrews said:

If you know how the stars are moving, then you can backtrack to find where the stars came from. As we rolled the clock backwards, the stars became closer and closer together. So, we think all these stars were born together and have a common origin.
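The "rolled the clock backwards" idea can be illustrated with a toy linear traceback: rewind each star's present-day position by its velocity, x(t) = x_now − v·t, and a true sibling group converges toward a compact birth cloud. All numbers below are invented for illustration, not Theia 456 data, and 1 km/s is treated as roughly 1 pc/Myr for simplicity (the true factor is about 1.023):

```python
import numpy as np

# Simulate a stream: stars born in a compact cloud, then drifting apart
# for 100 Myr with a shared bulk motion plus a small internal dispersion.
rng = np.random.default_rng(0)
n_stars = 468                                      # Theia 456's star count
birth_cloud = rng.normal(0.0, 5.0, (n_stars, 3))   # pc: compact origin
common_drift = np.array([20.0, -5.0, 3.0])         # km/s: shared motion
peculiar = rng.normal(0.0, 1.0, (n_stars, 3))      # km/s: internal dispersion
velocities = common_drift + peculiar

age_myr = 100.0
positions_now = birth_cloud + velocities * age_myr  # the stretched-out stream

def rms_spread(positions, velocities, t_myr):
    """RMS distance from the group centroid after rewinding motion by t_myr."""
    past = positions - velocities * t_myr
    centered = past - past.mean(axis=0)
    return np.sqrt((centered ** 2).sum(axis=1).mean())

# Rewinding by the group's age recovers the compact birth configuration.
print(rms_spread(positions_now, velocities, 0.0))     # large: stretched out now
print(rms_spread(positions_now, velocities, 100.0))   # small: back together
```

Real tracebacks also account for the galaxy's gravitational potential rather than straight-line motion, but the convergence test is the same in spirit.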

In addition to data from the Gaia satellite, these astronomers used data from NASA's Transiting Exoplanet Survey Satellite (TESS) and from the Zwicky Transient Facility. Andrews said that combining datasets and data mining is essential to understanding the universe around us:

You can only get so far with one dataset. When you combine datasets, you get a much richer sense of what's out there in the sky.

Bottom line: Researchers have discovered that Theia 456, a stellar stream (a linear collection of stars moving together through space), contains 468 stars born at the same time. This tells us that stars born together can move together in clumps (open star clusters) or in streams like Theia 456.

Source: Theia 456: A New Stellar Association in the Galactic Disk, presented as part of a session on the modern Milky Way at the 237th meeting of the American Astronomical Society.

Read more about Gaia's 2nd data release: 1.7 billion stars!

Read more about Gaia's 3rd data release: Gazing toward the galactic anticenter

Via Northwestern

View original post here:

Theia 456 is a stretched-out stream of sibling stars - EarthSky


Ping An Uses Artificial Intelligence to Drive New ESG Investment Strategies – PRNewswire

HONG KONG and SHANGHAI, Jan. 14, 2021 /PRNewswire/ -- The Ping An Digital Economic Research Center (PADERC), a member of Ping An Insurance (Group) Company of China, Ltd. (HKEx:2318; SSE:601318), has created four new investment strategies for environmental, social and corporate governance (ESG) investing using Ping An's proprietary CN-ESG data for China A-shares, in light of surging demand in China for ESG ratings and data with wider coverage and a better fit for China's market.

Ping An ESG framework aligns with international standards and Chinese regulations

The investment strategies detailed in the report, "Applications of Ping An CN-ESG Data and Framework in Quantitative Investment Strategy", use the proprietary CN-ESG database and scoring framework developed by the Ping An Group. Ping An was the first asset owner in China to sign the United Nations Principles for Responsible Investment. The framework leverages Ping An's expertise in finance and technology and aligns with international standards as well as guidelines from Chinese regulators to incorporate material topics for Chinese companies.

With technologies such as web crawlers, data mining, machine learning, knowledge graphs, natural language processing (NLP) and satellite remote sensing, the CN-ESG system can verify ESG disclosure-based data as well as mine non-disclosure-based data to provide investors with richer multi-dimensional information.

PADERC's report provides an in-depth analysis on the data characteristics, effectiveness, and strategy back-testing results of the CN-ESG database and scoring framework, which covers more than 3,900 listed companies in the China A-share market with five years of historical data (2015-2019). The framework can provide quarterly results that are further adjusted based on news sentiment scores in real-time compared to annual or semi-annual updates from most ESG rating providers.

ESG factors independent of financial factors

PADERC found that Ping An's CN-ESG scores among A-share companies are close to normally distributed. The factor correlation test results show that the scores behave notably like quality factors. The overall correlation between CN-ESG factors and traditional financial factors is generally low, showing a high level of independence of ESG factors, which indicates they can provide new data and viewpoints for investment decisions.

The results of the factor layered test show that Ping An CN-ESG factors have a relatively strong positive screening effect on the China Securities Index (CSI) 300 and CSI 800 stock pools. The financial window-dressing factors, constructed by evaluating the quality and authenticity of companies' financial data, have yielded long-short gains of 11.61% since 2015.

ESG investment strategies that balance excess returns with ESG objectives

Based on CN-ESG data, PADERC constructed four types of ESG investment strategies that use artificial intelligence (AI) to balance excess investment returns and ESG investment targets:

1) Ping An AI-ESG Selected 100 Strategy: This positive screening strategy selects companies with the highest ESG scores. Based on the broader CSI 800 stock pool, it can better leverage additional information from ESG scores. This strategy achieved an annualized excess return of 4.44%. The annual weighted average ESG score quantile of the portfolio is 94.2% among the benchmark stock pool.

2) Ping An AI-ESG Enhancement Strategy: On the basis of ESG scores-based positive screening, PADERC added ESG factors to its Ping An Digital Economic Research Center 500+ No.1 AI Stock Selection Strategy and there was notable excess return. The AI stock selection strategy is based on linear and non-linear algorithms to capture complex market structures to predict the excess return of individual stocks. The Ping An AI-ESG Enhancement Strategy has an annualized excess return of 16.34%, and the annual weighted average ESG score quantile of the portfolio is 78.7% among the benchmark stock pool.

3) CSI 300 ESG Style Index Series: The CSI 300 ESG Growth Index explores the growth value of the CSI 300 stocks, while controlling its tail risks. The CSI 300 ESG Low Volatility Index reinforces the stability features of ESG investment in both the short and long term. The ESG growth index achieved annualized excess returns of 5.67% and the low volatility index achieved 8.61% relative to the benchmark. The annual weighted average ESG score quantiles of the portfolios are 75.1% (ESG growth index) and 73.1% (low volatility index) relative to the benchmark stock pool.

Further testing of excess returns shows that the above active management strategies have almost all achieved excess returns in adverse market conditions, including bond crises, annual bear market downturns, Sino-US trade war, and COVID-19, verifying the effectiveness of ESG factors in challenging environments.

4) AI-ESG MAX Strategy: ESG enhancement of mainstream ETFs enables investors to gradually incorporate ESG concepts into their investing process without changing their traditional investing habits. Based on the CSI 300, controlling for sector deviation, this strategy sets tracking errors to 1%, 3% and 5%. Under different tracking error assumptions, the strategy maximizes ESG scores while achieving annualized excess returns of 3.61%, 3.40% and 3.43% respectively against the benchmark. The back-testing results of the strategy over the past five years show good performance, and excess returns were stable. This type of index enhancement strategy based on ESG factors could help drive an increase in the scale of ESG investing.
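A tracking-error-budgeted ESG tilt of the kind strategy 4 describes can be sketched simply. The equal-weight benchmark, random scores, and diagonal risk model below are all invented for illustration; this is a generic sketch of the technique, not Ping An's data or method:

```python
import numpy as np

# Tilt benchmark weights toward high-ESG stocks, scaling the tilt so the
# ex-ante tracking error exactly matches a budget (e.g. 1%, 3%, or 5%).
rng = np.random.default_rng(1)
n = 300                                  # a CSI 300-sized toy universe
benchmark = np.full(n, 1.0 / n)          # equal-weight stand-in benchmark
esg = rng.uniform(0.0, 100.0, n)         # hypothetical ESG scores
vol = rng.uniform(0.15, 0.45, n)         # annualized vols (diagonal risk model)

def esg_tilt(benchmark, esg, vol, te_budget):
    """Weights tilted toward ESG, with ex-ante tracking error == te_budget."""
    active = esg - esg.mean()            # demeaned: active weights sum to zero
    raw_te = np.sqrt(np.sum((active * vol) ** 2))
    return benchmark + active * (te_budget / raw_te)

for te_budget in (0.01, 0.03, 0.05):
    w = esg_tilt(benchmark, esg, vol, te_budget)
    print(f"TE budget {te_budget:.0%}: portfolio ESG {esg @ w:.1f} "
          f"vs benchmark {esg @ benchmark:.1f}")
```

A production version would add long-only and sector-neutrality constraints via a proper optimizer and a full covariance matrix; the sketch only shows why a larger tracking-error budget buys a higher portfolio ESG score.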

Building a richer ESG strategy portfolio to meet investors' diverse needs

Ping An's CN-ESG framework will expand to include fixed income ESG data and climate risk-related AI-driven factors. It will enable more diverse investment options, such as ESG fixed income indices and climate risk-focused indices, to meet investors' diverse needs. Ping An also developed a series of AI-ESG products focusing on corporate management, risk monitoring and analytics solutions for ESG and climate risk analysis, including portfolio sustainability footprint analysis, a portfolio adjustment tool, a sustainable funds screening tool, and climate risk asset pricing models to support ESG investment.

PADERC is a professional institution specializing in macroeconomics and policy research, using big data and artificial intelligence to provide insights on macroeconomic trends, including developments in ESG disclosures and ratings.

For the full report, click here.

About Ping An Group

Ping An Insurance (Group) Company of China, Ltd. ("Ping An") is a world-leading technology-powered retail financial services group. With over 210 million retail customers and 560 million Internet users, Ping An is one of the largest financial services companies in the world.

Ping An has two over-arching strategies, "pan financial assets" and "pan health care", which focus on the provision of financial and health care services through our integrated financial services platform and our five ecosystems of financial services, health care, auto services, real estate services and smart city services. Our "finance + technology" and "finance + ecosystem" strategies aim to provide customers and internet users with innovative and simple products and services using technology. As China's first joint stock insurance company, Ping An is committed to upholding the highest standards of corporate reporting and corporate governance. The Group is listed on the stock exchanges in Hong Kong and Shanghai.

In 2020, Ping An ranked 7th in the Forbes Global 2000 list and ranked 21st in the Fortune Global 500 list. Ping An also ranked 38th in the 2020 WPP Kantar Millward Brown BrandZ™ Top 100 Most Valuable Global Brands list. For more information, please visit

About Ping An Digital Economic Research Center

Ping An Digital Economic Research Center utilizes more than 50 TB of high-frequency data, more than 30 years of historical data and more than 1.5 billion data points to drive research on the "AI + Macro Forecast" and provide precise insights and methods on macroeconomic trends.

SOURCE Ping An Insurance (Group) Company of China, Ltd.


Ping An Uses Artificial Intelligence to Drive New ESG Investment Strategies - PRNewswire


Patient Journey Analytics: Improving patient outreach and clinical outcomes through evidence-based solutions powered by innovative technology and…

LONDON--(BUSINESS WIRE)--Quantzig, one of the world's leading analytics solutions providers, announces the addition of new services to its advanced portfolio of patient journey analytics solutions. Quantzig's real world evidence analytics capabilities cover various aspects of the patient journey, including Patient Journey Mapping, Diagnostic Data Analytics, and solutions designed to enhance Patient Outreach programs.

Quantzig's patient journey analytics solutions aim to reduce the burden of normalizing, integrating, and analyzing data by drawing actionable insights that solve some of the unmet needs in healthcare. Contact an analytics expert for immediate, actionable solution insights.

With constant fluctuations in market dynamics, healthcare organizations are experiencing significant challenges. Most of these challenges revolve around managing patient engagement, enhancing patient experiences, and driving outcomes while reducing costs. At the same time, healthcare service providers face many challenges due to the influx of patient data, budget constraints, and limited resources. Due to these factors, streamlining workflows, increasing efficiencies, and enhancing the quality of care have become extremely important to maintaining positive patient experiences. Our comprehensive expertise and advanced real-world evidence analytics solutions support businesses across the healthcare continuum, including pharma and life sciences, medical device manufacturers, healthcare payers and providers, and other R&D organizations within healthcare.

Our scalable solutions are powered by sophisticated machine learning and natural language processing techniques that reduce the time associated with translating unstructured, fragmented, and disparate data sets into insights. Learn more.

How Can Quantzig's Patient Journey Analytics Solutions Help You?

Analyzing patient journeys in today's highly regulated business world is challenging. Quantzig offers holistic patient journey analytics solutions that act as a dominant force in driving patient outcomes and satisfaction rates. Take a look at the complete portfolio of patient journey analytics solutions here:

A unique mix of ML and NLP capabilities and technology expertise is what sets us apart, making us the most preferred patient journey analytics service provider globally. Our solutions also help businesses drive impactful outcomes.

Drawing on our expertise in the field of real world evidence analytics, we focus on integrating the best practices to drive continuous process improvements in healthcare. Schedule a FREE demo to get started.

Why Partner with Quantzig?

With a huge clientele, ranging from CEOs to BU heads to stakeholders of Fortune 500 companies, we have played an active part in improving business outcomes globally. Our expertise and domain knowledge are also reflected in the number of projects we've worked on and the results that have prompted businesses to engage with us on an ongoing basis, making us the most preferred analytics partner for leading businesses.

With more than 16 years of experience and a dedicated team of 550+ analysts and data science professionals, we have a proven track record of helping healthcare organizations across North America, Europe, EMEA, and APAC leverage analytics to drive better outcomes. Request a FREE proposal to gain detailed insights into our engagement policies.

About Quantzig

Quantzig is the world's foremost full-service advanced analytics and business intelligence solution provider, turning clients' complex, unstructured data into intelligent, actionable insights that enable them to solve complex business problems and inspire innovation, change, and growth.

Over the past 16 years, our insights have helped over 120 clients spanning across industries and sectors like Pharmaceutical and Life Sciences, Retail and CPG, Food and Beverage, and more. We have successfully delivered 1500 in-depth solutions in areas like Marketing Analytics, Customer Analytics, Supply Chain Analytics, and more. For more information on our engagement policies and pricing plans, visit:

Read the original here:

Patient Journey Analytics: Improving patient outreach and clinical outcomes through evidence-based solutions powered by innovative technology and...


Deep Learning Market Demand to Shoot with the Increasing Uses of Security & Surveillance Solutions Increasing Demand Forecast 2023 – NeighborWebSJ

Market Overview

The global deep learning market is growing at a rapid pace. Market growth attributes to the increasing adoption of cloud-based services and large scale generation of unstructured data. Besides, the increasing demand for deep learning technology for natural language processing and voice/speech recognition applications drive the growth of the market. Moreover, increasing applications of deep learning models for image/speech recognition, data mining, and language translations escalate market growth.

In its detailed analysis, Market Research Future (MRFR) asserts that the global deep learning market is poised to register a 30.87% CAGR throughout the forecast period (2017-2023). Growing development of humanoid robots, like Sophia, and the rising use of augmented reality (AR) and virtual reality (VR) displays in the automotive and 3D gaming sectors boost market growth. The use of deep learning technologies in medical image analysis is rising exponentially.

Resultantly, the burgeoning healthcare sectors worldwide are cited as a major contributing factor to market growth. Additionally, growing developments in deep learning technology and growth in the number of chatbots, alongside increasing R&D investments by companies, push the market's growth. The encouraging adoption of digital voice assistants and the increasing number of start-ups focused on augmented and virtual reality technologies also influence the market's growth.
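As a rough sanity check on the CAGR figure above: a constant 30.87% annual growth rate compounds to roughly a fivefold market expansion over the six yearly steps of the 2017-2023 forecast window. A minimal Python sketch of that arithmetic:

```python
def cagr_multiple(rate: float, years: int) -> float:
    """Total growth multiple implied by compounding a constant annual rate."""
    return (1 + rate) ** years

# 30.87% CAGR over the six yearly steps from 2017 to 2023
multiple = cagr_multiple(0.3087, 6)
print(f"{multiple:.2f}x")  # roughly a 5x expansion
```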


Global Market for Deep Learning Segmentation

The report is segmented into four dynamics:

By Component: Hardware (processor, memory, network, others), Software (solution, platform, others), and Services (installation, training, support & maintenance, others).

By Application: Image Recognition, Data Mining, Signal Recognition, and others.

By End-User: Manufacturing, Security, Retail, Media & Entertainment, Automotive, BFSI, Healthcare, Agriculture, and others.

By Region: Americas, Europe, Asia Pacific, and the Rest-of-the-World.

Global Deep Learning Market Regional Analysis

North America dominates the global deep learning market. The largest market share is attributed to the growing adoption of deep learning models for applications such as image recognition, voice recognition, data mining, signal recognition, and diagnostics. Besides, substantial investments made by key players in the development of AI technology platforms positively impact regional market growth.

Increasing R&D investments and the rising digital transformation of businesses push market demand in the region. Moreover, the rising use of deep learning technologies for image recognition, data mining, and signal recognition substantiates the market growth. Manufacturing and automotive sectors in the US, Canada, and Mexico boost market growth.

The Asia Pacific region stands second in the global deep learning market. The market is driven by the growing development and penetration of deep learning technology in the region. Additionally, the proliferation of digitization and the spurring rise of the image recognition segment drive the regional market's growth. Rising foreign investments and the implementation of deep learning models for applications such as voice, image, and signal recognition favor the regional market's growth.

The European deep learning market is growing rapidly. Factors such as the growing uptake of deep learning technology by European governments for surveillance, fraud detection, data mining, and healthcare diagnostics boost the regional market's growth. Furthermore, the presence of various notable players and large deployments across industries in the region act as major tailwinds for the market's growth.

Global Deep Learning Market Competitive Analysis

Highly competitive, the deep learning market appears fragmented due to the presence of several well-established players. Mergers & acquisitions, innovation, and brand reinforcement remain prevailing trends among the leading players, as these strategies support their growth and expansion plans. They deliver reliable, leading-edge solutions and services, for which they invest substantially in R&D to develop adept technologies and products.

Major Players:

Players leading the deep learning market include Intel Corporation (USA), Amazon Inc. (USA), Samsung Electronics Co Ltd (South Korea), Sensory Inc. (USA), Micron Technology (USA), Xilinx Inc. (USA), Mellanox Technologies (USA), Google LLC (USA), Adapteva, Inc. (USA), NVIDIA Corporation (USA), Qualcomm Technologies Inc. (USA), Baidu Inc (China), Advanced Micro Devices Inc. (USA), IBM Corporation (USA), Facebook (USA), Microsoft Corporation (USA), and Tenstorrent (Canada), among others.


Industry/Innovation/Related News:

November 21, 2020 - Abacus.AI (US), a leading AI research company, announced raising USD 22 million in Series B funding. The company has raised over USD 40 million since its inception.

November 20, 2020 - A team of researchers at Binghamton University, State University of New York, announced using deep-learning techniques to analyze the numbers and suggest ways to improve public safety by re-allocating resources. The team states that their model, DeepER, could be tweaked for other large cities, or for clusters of smaller cities with similar characteristics that would provide enough data to make predictions.

November 03, 2020 - A team of deep learning enthusiasts from IIT Roorkee (the Indian Institute of Technology, Roorkee) won the top prize at the recently concluded tech4heritage hackathon. The team, Ancient AI, used deep learning techniques to digitally restore damaged murals of the Ajanta caves, building their AI model on a dataset of reference paintings.


About Market Research Future:

At Market Research Future (MRFR), we enable our customers to unravel the complexity of various industries through our Cooked Research Report (CRR), Half-Cooked Research Reports (HCRR), Raw Research Reports (3R), Continuous-Feed Research (CFR), and Market Research & Consulting Services.

The MRFR team's supreme objective is to provide optimum-quality market research and intelligence services to our clients. Our market research studies, segmented by components, applications, logistics, and market players for global, regional, and country-level markets, enable our clients to see more, know more, and do more, helping them answer their most important questions.

Contact: Market Research Future, Office No. 528, Amanora Chambers, Magarpatta Road, Hadapsar, Pune 411028, Maharashtra, India. Phone: +1 646 845 9312. Email: [emailprotected]

Read more from the original source:

Deep Learning Market Demand to Shoot with the Increasing Uses of Security & Surveillance Solutions Increasing Demand Forecast 2023 - NeighborWebSJ


Top Crypto Analyst Betting on Select Group of Altcoins, Says DeFi Bull Run Will Make 2017 Look Like Child's Play – The Daily Hodl

A popular crypto trader says the decentralized finance (DeFi) sector of the crypto market is poised to go through a moon phase that will put 2017's extended bull run to shame.

The pseudonymous trader known in the industry as Altcoin Psycho tells his 82,000 Twitter followers that buying DeFi assets is similar to buying Ethereum back in 2016, when its price ranged from about $1.00 to a high of $19.

DeFi bubble is going to make the ICO bubble look like child's play when things really get going

Plenty of people don't even know what DeFi stands for, getting into DeFi right now is like buying ETH in 2016.

At the start of the month, the crypto trader revealed hes starting to scale into Reserve Rights Token (RSR), which is the utility token of the Reserve Rights stablecoin protocol.

Hes also investing in the decentralized cross-chain liquidity token THORchain (RUNE), the governance and liquidity token of the decentralized exchange SushiSwap (SUSHI) and the decentralized derivatives exchange Serum (SRM).

In addition, the analyst is naming the synthetic asset protocol Synthetix (SNX), smart contract platform Cardano (ADA), and pure proof-of-stake blockchain protocol Algorand (ALGO) as coins whose time has come.

As for the crypto king Bitcoin, the trader says investors should realize BTC remains virtually unchallenged as the number one store-of-value asset.

In the next alt season, there will be a lot of narratives for a new Ethereum killer. There will be 0 narratives for a Bitcoin killer. I'm not a maximalist, but keep this in mind.

Featured Image: Shutterstock/Sergey Nivens

See the original post here:
Top Crypto Analyst Betting on Select Group of Altcoins, Says DeFi Bull Run Will Make 2017 Look Like Child's Play - The Daily Hodl


Popular Trader Says Six Altcoins Built on Ethereum Are Ready To Explode – The Daily Hodl

Crypto trader Lark Davis is bullish on a slew of blue-chip altcoins built on Ethereum.

The analyst says he's highlighting projects that will be around for the long haul, with well-established track records and good cash flow.

These so-called decentralized finance (DeFi) bluechips could also be some of the best holds during the bull run in terms of actually giving those higher gains while also being relatively safer plays compared to a lot of the smaller cap cryptocurrencies in the market which might give higher gains but are also higher risk.

The analyst's first pick is the lending and borrowing protocol Aave (AAVE), which he says already has close to $3 billion of total value locked in the platform and is likely to get a lot bigger.

This is just the beginning for Aave, since the technology behind Aave is actually addressing a multi-trillion dollar market. However, Aave goes way beyond just simple lending and borrowing services; it is also the pioneer of technology like flash loans, it has recently introduced credit delegation, and they've got lots of other stuff in the pipeline coming. Aave is even working to crack into the mortgage market, with tokenized mortgages being able to be used as collateral for loans.

The second asset Davis is bullish on is Uniswap (UNI). Davis notes that the decentralized exchange has higher volume than Coinbase on some days and has been instrumental in allowing projects to launch their tokens without the gatekeeping involved in launching on centralized exchanges.

Next on Davis's list is the derivatives liquidity protocol Synthetix (SNX), which is taking aim at the multi-trillion dollar derivatives market. Davis notes that a number of new projects are leveraging the Synthetix protocol, adding that with the project's upcoming upgrades, he expects it to remain the leader in its field for the foreseeable future.

The fourth altcoin Davis mentions is Andre Cronje's decentralized finance platform yearn.finance (YFI), which is evolving into a multi-purpose lending and insurance platform that allows token holders to receive a portion of the fees garnered on the platform.

The YFI governance token is famous for being ridiculously expensive, but the token holders receive a percentage of fees made on the platform. It's too early to say how lucrative holding the YFI tokens will be in the long run, but the market is currently betting that they will be worth a lot and that the cash flow will be significant.

Davis's fifth pick is liquidity provider Kyber Network (KNC), which Davis says is helping connect the dots in DeFi. Davis notes Kyber holders receive a portion of fees made on the platform, though he warns that one must own a lot of KNC to make the feature worthwhile. However, Davis says the token has a diminishing supply, making for some great tokenomics.

Finally, Davis names Maker (MKR) as his sixth pick, labeling it the OG of DeFi protocols. Maker is the number one DeFi protocol in terms of total value locked, says Davis. With a lot of new features in the pipeline, Davis believes the asset could do well this cycle.


Featured Image: Shutterstock/Muhammad Shairazi

Go here to read the rest:
Popular Trader Says Six Altcoins Built on Ethereum Are Ready To Explode - The Daily Hodl


5 Middleware Altcoin Projects Worth Investing In CryptoMode – Crypto Mode

Making strategic investments in specific cryptocurrencies can often yield decent returns on investment. Finding the right markets to experiment with can be challenging, however. Middleware altcoin projects are seemingly a hot commodity lately, as they have all posted strong price performances.

Many services and platforms in the cryptocurrency space rely on Chainlink middleware. Their decentralized price oracles, for example, are commonly found in top DeFi projects. Moreover, the technology can prove beneficial for other purposes as well. Allowing smart contracts to interface with real-world data is a big step forward for the industry.

The LINK price has shown strong bullish momentum since January 1st, 2021. Its price has gone up by roughly 100% since then, confirming that demand for this middleware altcoin remains strong.

When The Graph initially launched, most people weren't sure what to make of it. Positioning itself as an indexing protocol and querying network, The Graph lets anyone build and publish open APIs to make data more accessible, a robust middleware business model that can certainly spark some interest globally.

In the price department, GRT is noting strong momentum despite a relatively big dip a week ago. Its price is up by nearly 100% this month, and the all-time high is well within reach.
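Since subgraphs published on The Graph are exposed as ordinary GraphQL endpoints, querying one needs nothing beyond an HTTP POST. The sketch below is illustrative only: the endpoint placeholder and the `tokens`/`symbol` fields are hypothetical, as real field names depend on the specific subgraph's schema.

```python
import json
import urllib.request

# Hypothetical query; actual fields depend on the subgraph's schema.
GRAPHQL_QUERY = """
{
  tokens(first: 5, orderBy: volumeUSD, orderDirection: desc) {
    id
    symbol
  }
}
"""

def build_request(endpoint: str) -> urllib.request.Request:
    """Wrap the GraphQL query in a JSON POST request for the given endpoint."""
    payload = json.dumps({"query": GRAPHQL_QUERY}).encode()
    return urllib.request.Request(
        endpoint,
        data=payload,
        headers={"Content-Type": "application/json"},
    )

# Usage (endpoint placeholder left unfilled):
# req = build_request("https://api.thegraph.com/subgraphs/name/<org>/<subgraph>")
# data = json.load(urllib.request.urlopen(req))
```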

Another middleware project that keeps making waves is Band Protocol. This scalable and blockchain-agnostic decentralized oracle can rival Chainlink and perhaps improve on it in some regards. Having some competition in this department will help ensure decentralization remains prevalent.

The BAND price is a bit volatile lately yet still notes a substantial gain since January 1st. Rising from $5.34 to $10.07 is no easy feat and shows there may be further price potential ahead. Returning to the all-time high of $17.51 will be difficult, though.
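Moves like BAND's can be checked with a simple percent-change calculation; rising from $5.34 to $10.07 works out to a gain of roughly 89%. A minimal sketch:

```python
def pct_change(start: float, end: float) -> float:
    """Percentage change from a starting price to an ending one."""
    return (end - start) / start * 100.0

# BAND: $5.34 on January 1st to $10.07
gain = pct_change(5.34, 10.07)
print(f"{gain:.1f}%")  # 88.6%
```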

Tellor positions itself as another decentralized oracle network. This particular business model seems quite popular among middleware providers, which is a good thing. Having multiple options to put real-world data on-chain allows for more complicated smart contracts and dApps.

The TRB price currently sits at $32.87, a near-100% increase from $17.50 on January 1st. Despite this high value, the project's market cap sits below $54 million.

While many people may overlook Keep3rV1 as a middleware project (it is still in beta), the decentralized keeper network for external DevOps is getting a lot of attention. It is one of the many projects built by Andre Cronje, which explains the growing KP3R price.

Speaking of which, KP3R has posted a price gain of roughly $100 compared to January 1st, a strong performance, particularly for a project that may remain in beta for a while.


Read the original here:
5 Middleware Altcoin Projects Worth Investing In CryptoMode - Crypto Mode


Crypto Strategist Unveils His Top Altcoin Picks for the Year – CryptoGlobe

The host of the Coin Bureau channel on YouTube has revealed his top five altcoin picks for this year in a video published for his 225,000 subscribers, and noted that he expects a smaller number of altcoins to benefit from the bull market.

In the video, first spotted by the Daily Hodl, the pseudonymous analyst noted that those who follow the cryptocurrency space know altcoins usually pump after the price of BTC surges, in what's called the "altseason," but warned that as the markets grow more sophisticated, it's unlikely that this rising tide will lift all boats.

He then outlined his top altcoin picks for the year. In first place came privacy-centric coin Monero (XMR) which the analyst claims enforcement agencies have tried to decode to no avail, and pointed out its trading volume tops that of all other privacy coins.

The developers working on XMR, he added, bolster the cryptocurrency's security and privacy, ensuring those who use it can remain anonymous. Exchanges have, however, been facing regulatory pressure to delist these currencies.

Crypto exchange Bittrex revealed earlier this year that it was delisting XMR, ZEC, and DASH. While the exchange did not specify why it was delisting the tokens, it's widely assumed the move was a bid to comply with increasingly strict regulations.

The second pick was Algorand (ALGO), which the analyst pointed out aims to develop a scalable, secure, and decentralized blockchain, and has a higher transaction throughput than the Ethereum network.

Recently, Centre's USDC stablecoin was launched on Algorand, and to the analyst, this means demand for ALGO will increase, as transaction fees are paid in the cryptoasset.

This means USDC users will be able to send the stablecoin on the Algorand network cheaper and faster than on Ethereum, for example.

He pointed out that Visa has recently partnered with Circle, one of the firms behind USDC, connecting its network of 60 million merchants to the stablecoin. When Circle graduates from Visa's Fast Track program, a credit card letting businesses send and receive USDC may be launched.

His third pick was Theta, a decentralized streaming and content delivery network that, in his words, competes with YouTube and Twitch, which have millions of users. The firm, he said, has grown from streaming just esports to adding poker, crypto events, and more to its portfolio.

In fourth came the Injective Protocol (INJ), a decentralized derivatives exchange built on top of the Cosmos network. He said:

They are trying to create a paradigm shift in the DEX space. Quite simply, it will allow users to trade spot, swaps, and futures in a completely permissionless way.

The analyst added the exchange will also structure a derivatives market for anything that has a price.

In fifth place came BarnBridge (BOND), which according to the YouTuber seeks to tokenize risk by pooling funds and allocating them to different decentralized protocols. He added:

Then once the funds have been pulled, they will tranche the yield such that it can be tokenized individually. This, therefore means that DeFi investors are able to invest in different risk tranches based on their yield and risk tolerance.

With the DeFi space seeing its total value locked growing, he concluded, BarnBridge is well-positioned to gain more users.

Featured image via Pixabay.

The views and opinions expressed here do not reflect those of CryptoGlobe and do not constitute financial advice. Always do your own research.

Read more from the original source:
Crypto Strategist Unveils His Top Altcoin Picks for the Year - CryptoGlobe
