Category Archives: Data Mining

Visionwebppc, a digital marketing agency, just announced their … – Digital Journal

PRESS RELEASE

Published April 3, 2023

Visionwebppc, a digital marketing agency, just announced its newest SEO updates. Designed with the user in mind, these new features will help businesses reach their target audience more quickly and efficiently.

The announcement includes a combination of technical and creative features that will make it easier for website developers to achieve success on search engine results pages. Additionally, Visionwebppc has supplemented its services with data mining capabilities so businesses can gain valuable insights into how their website is performing in relation to competitors.

"Visionwebppc wants its customers to succeed in the ever-changing world of digital marketing," said James Amato, CEO of Visionwebppc. "We stay up to date on the latest SEO tactics and trends to ensure that our customers have the tools they need to outpace their competition."

These updates focus on optimizing traditional SEO tactics, including meta tags, link building, keyword research, and content optimization. By leveraging data mining technology, Visionwebppc is also able to provide detailed reports about each business's online visibility across multiple channels, such as organic search results, social media sites, local listings, and industry-specific websites. This type of analysis offers greater insight into what changes need to be made for optimal visibility in each industry.

In addition to its core features, Visionwebppc also offers comprehensive support plans that range from managing website development projects all the way to comprehensive consultation services on content marketing strategy and any related topics. Its team of dedicated professionals is available when needed to answer questions or provide assistance with implementing best practices in any area of digital marketing.

With these new updates and customer support plans in place, Visionwebppc provides businesses with an edge against competitors when it comes to driving organic traffic effectively across multiple internet platforms. For a detailed look at all of the available service offerings from Visionwebppc, please visit the website today.

Socso hit with 683 false claims from years 2018-2022 – The Star Online

PETALING JAYA: In just five years, from 2018 to 2022, the Social Security Organisation (Socso) detected 683 cases of fraudulent claims amounting to RM43mil.

"In some cases, people even try using a dead person's number to make claims," said Socso chief executive Datuk Seri Dr Mohammed Azman Aziz Mohammed.

Of the 683 cases, 487 are being investigated further; of these, 318 cases (65.3%), totalling RM28.8mil, have been repudiated.

"With this action, Socso has managed to save about RM86mil in terms of future savings for the organisation," Mohammed Azman revealed to The Star.

Fraudulent claims are a bane to Socso, and protecting its funds from them involves tedious data mining that reveals discrepancies like sharing of addresses, localities and phone numbers by claimants.
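The discrepancy check described above – flagging claims that share an address or phone number – can be sketched as a simple grouping pass. The claim records, field names and cluster threshold below are hypothetical illustrations, not Socso's actual Anti-Fraud System:

```python
from collections import defaultdict

def flag_shared_contacts(claims, min_cluster=2):
    """Group claims by phone number and by address, then return the
    clusters that reach min_cluster, since unrelated claimants rarely
    share contact details."""
    by_key = defaultdict(list)
    for claim in claims:
        by_key[("phone", claim["phone"])].append(claim["id"])
        by_key[("address", claim["address"])].append(claim["id"])
    return {key: ids for key, ids in by_key.items() if len(ids) >= min_cluster}

claims = [
    {"id": "C1", "phone": "012-111", "address": "1 Jalan A"},
    {"id": "C2", "phone": "012-111", "address": "7 Jalan B"},  # shares a phone with C1
    {"id": "C3", "phone": "012-333", "address": "9 Jalan C"},
]
suspicious = flag_shared_contacts(claims)
# C1 and C2 share a phone number, so they form the only flagged cluster
```

In practice such a pass would run over millions of records and feed its clusters to human investigators rather than deciding outcomes itself.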

The organisation's Anti-Fraud System has accumulated millions of bytes of data since its inception in 2017.

However, Mohammed Azman said that as they continue to build the system with more data, it is difficult to ascertain the actual amount of losses caused by fraudulent claims.

When asked how detrimental fraudulent claims can be for Socso, Mohammed Azman said that even though the organisation has a reasonable amount of assets, they may not be enough to sustain it and keep it relevant in the long term.

This is due to its commitment of roughly RM5bil for all benefit payments given out to insured persons (workers covered by Socso) or their dependants, Mohammed Azman explained.

For example, if an insured person earns RM5,000 a month, his or her contribution under the scheme is RM49.50 every month, based on the employee's contribution rate of roughly 1% of their monthly salary.

If he is certified as an invalid by the Medical Board due to his illness and fulfils the invalidity claim requirements, he will receive an invalidity pension up to RM3,217 a month for life.

If the worker dies and he has a wife as well as children, Socso will support his family with a survivor's pension given to his dependants for life, and to his children until they are 21 years old or married, whichever is earlier.

If the worker's wife is 30 years old at the time of his death, that pension would amount to about RM1mil eventually.

"This is the kind of long-term liability we have to commit to under Act 4. That is why we need to monitor fraudulent claims closely," he said, referring to the Employees' Social Security Act 1969.
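The figures quoted above can be reproduced with straightforward arithmetic. The monthly pension amount and life-expectancy assumption below are illustrative guesses (the article does not state the actuarial basis behind the roughly RM1mil survivor's figure):

```python
# Monthly contribution at roughly 1% of a RM5,000 salary (article quotes RM49.50)
salary = 5_000
contribution = salary * 0.0099  # a 0.99% rate matches the quoted RM49.50

# Cumulative survivor's pension: an assumed RM1,600 a month paid to a
# 30-year-old widow until an assumed age of 82 (both figures hypothetical)
monthly_pension = 1_600
years_paid = 82 - 30
total = monthly_pension * 12 * years_paid  # ≈ RM1mil, the order of magnitude quoted
```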

Mohammed Azman also said that retrieving payments poses a difficulty due to claims lapsing over time; as such, stopping payments as a first step is the best option in most fraud cases that are detected.

"When an invalidity claim comes in, we will process it and bring the claimant before the JD (Medical Board) to determine the claimant's invalidity.

"If he is certified invalid, Socso will make the payment, but after the benefit payment is made, we will analyse the data again.

"In any case of suspected fraud, we will resubmit the case to the board for verification again. If the case is a straightforward one with an admission of guilt, Socso will immediately stop payments," he said.

Mohammed Azman pointed out that the organisation has a zero tolerance policy towards abuse of claims, and the goal is to minimise it as much as possible.

To achieve this, he said Socso has a supervision plan in place and also monitors transactions and carries out surveillance.

Socso has also adopted the Anti-Bribery Management System certification to deter attempts to bribe its officers.

"We also use data science and artificial intelligence and establish close cooperation with the police and the Malaysian Anti-Corruption Commission to prevent fraudulent claims and take action against the culprits," he said.

UC Irvine Earth system scientists uncover ice-age shift in Pacific … – UCI News

Irvine, Calif., March 29, 2023 – The overturning circulation of the Pacific Ocean flipped during the last ice age, altering the placement of ancient waters rich in carbon dioxide, according to Earth system scientists at the University of California, Irvine.

In a paper published in Science Advances, the researchers suggest that this shift in the 3D churning of such a large ocean basin must have enhanced the sequestration of CO2 in the deep sea, thereby lowering the amount of the greenhouse gas in ice-age Earth's atmosphere. They uncovered this transposition by analyzing traces of carbon-14, or radiocarbon, in thousands of fossil sediment samples from around the world, some dating back as far as 25,000 years.

"It's intuitive to think that the Pacific would play a major role in climate regulation during the last glacial period – it's huge, double the volume of the Atlantic – but we didn't have a lot of data to say that previously," said lead author Patrick Rafter, UCI assistant researcher in Earth system science. "Our study has established a benchmark of radiocarbon measurements of the major ocean basins, and having compiled and analyzed that data, we can confidently say that changing overturning circulation in the Pacific is consistent with the ocean being a significant driver of lower greenhouse gases during the last ice age."

He said carbon-14 is the isotope of choice for researchers hoping to reconstruct the relationship of the deep sea and the atmosphere over long time scales. Radiocarbon is produced in the atmosphere when cosmic ray neutrons hit nitrogen, and it becomes carbon dioxide after chemical reactions with oxygen. "After this, it enters the ocean exactly like regular CO2, because it is CO2," Rafter said. "That's what makes carbon-14 a powerful and useful tracer for how the ocean interacts with the atmosphere."
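Radiocarbon works as a clock because the carbon-14 an organism absorbs decays at a known rate once exchange with the atmosphere stops. A conventional radiocarbon age follows from the measured fraction of modern carbon-14 via the standard Libby mean life of 8,033 years – a textbook formula, not the UCI team's full calibration workflow, which also corrects for effects like ocean reservoir ages:

```python
import math

LIBBY_MEAN_LIFE = 8_033  # years; corresponds to the Libby half-life of 5,568 years

def radiocarbon_age(fraction_modern):
    """Conventional radiocarbon age from the measured carbon-14 content,
    expressed as a fraction of the modern standard."""
    return -LIBBY_MEAN_LIFE * math.log(fraction_modern)

# A sample retaining half its original carbon-14 dates to one Libby half-life
age = radiocarbon_age(0.5)  # ≈ 5,568 years
```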

For this project, he and his colleagues employed techniques perfected over decades in UCI's Department of Earth System Science and worked with cutting-edge machinery custom-designed to perform this type of carbon dating.

Beginning in the 1990s, professors Ellen Druffel and Sue Trumbore, founding faculty members in the department, were determined to make UCI a world-leading center for the use of carbon-14 in geosciences research. Key steps included obtaining funding for what ultimately became the W.M. Keck Carbon Cycle Accelerator Mass Spectrometer Facility and the hiring of John Southon, UCI researcher in Earth system science, to oversee lab operations.

"This kind of synthesis has been tried before but not on anything like the scale of our team's work, which involved a huge data mining effort as well as production of new results," said Southon, study co-author. "The payoff is that for the first time there are sufficient data to show clear evidence that the glacial ocean circulation was not just a slower version of today's but radically different."

The team collected marine fossils from all over the world, sand grain-sized bits that were identified by more than 20 Earth system science undergraduates staffing a bank of microscopes in Rafter's Croul Hall laboratory space. Then these calcium carbonate shells were converted into graphite, a pure form of carbon.

This material was introduced to the Earth system science department's accelerator mass spectrometer to yield precise measurements of radiocarbon values equal to the seawater the fossils lived in. With this data in hand, "the next step was a bit like assembling a puzzle in which we had to combine our research with previous studies," according to Rafter.

"There had been work done before on the North Atlantic, which made sense because that's an important region where the ocean breathes in the atmosphere, where a great deal of carbon dioxide enters the ocean," he said. "We added our own analysis of fossil radiocarbon from sediment cores in the Pacific and Southern oceans so we could interpret all the major ocean basins together for the past 25,000 years, which had not been done before."

Rafter said that this new knowledge about the relationship between the deep sea and the atmosphere going back into the last ice age can help oceanographers and Earth system scientists fully comprehend the role of the ocean in controlling climate warming and cooling now and into the future.

Joining Rafter and Southon on this project, which was funded in part by the National Science Foundation, were researchers from France's University of Paris-Saclay, Massachusetts' Woods Hole Oceanographic Institution, Scotland's University of St. Andrews, Germany's Kiel University, UC Santa Cruz, UC Santa Barbara, Oregon State University and the California Institute of Technology.

UCI's Brilliant Future campaign: Publicly launched on Oct. 4, 2019, the Brilliant Future campaign aims to raise awareness and support for the university. By engaging 75,000 alumni and garnering $2 billion in philanthropic investment, UCI seeks to reach new heights of excellence in student success, health and wellness, research and more. The School of Physical Sciences plays a vital role in the success of the campaign. Learn more at https://brilliantfuture.uci.edu/uci-school-of-physical-sciences.

About the University of California, Irvine: Founded in 1965, UCI is a member of the prestigious Association of American Universities and is ranked among the nation's top 10 public universities by U.S. News & World Report. The campus has produced five Nobel laureates and is known for its academic achievement, premier research, innovation and anteater mascot. Led by Chancellor Howard Gillman, UCI has more than 36,000 students and offers 224 degree programs. It's located in one of the world's safest and most economically vibrant communities and is Orange County's second-largest employer, contributing $7 billion annually to the local economy and $8 billion statewide. For more on UCI, visit www.uci.edu.

An IIT Grad & Stanford Prof Was the Mentor Who Inspired The Creation of Google – The Better India

Without the mentorship and guidance of the late professor Rajeev Motwani, legendary Silicon Valley entrepreneurs Sergey Brin and Larry Page may not have been able to build Google up to what it is today – the most significant internet search engine in human history.

A product of IIT-Kanpur and the University of California, Berkeley, Professor Motwani did pioneering work in data mining and algorithms at Stanford University; that work, along with a unique ability to solve deeply complex mathematical problems, played a key role in developing the search engine system that would go on to make Brin and Page billionaires.

As Brin noted in his blog paying tribute to Professor Motwani, "Today, whenever you use a piece of technology, there is a good chance a little bit of Rajeev Motwani is behind it."

However, his impact as a computer science professor at Stanford University, angel investor and mentor to many other technology companies stretches far beyond the confines of Google and Silicon Valley. Remembered as an exceptionally brilliant mind, he earned a string of accolades for his path-breaking research in data mining and algorithms, including the prestigious Gödel Prize (given for outstanding papers in the area of theoretical computer science) in 2001.

Beyond his research work, however, Motwani was deeply engaged in transforming academic ideas into commercial ventures. According to IIT Kanpur's Office of Resources and Alumni, he played an active role in the Business Association of Stanford Entrepreneurial Students (BASES).

"He was an avid angel investor and helped fund a number of startups to emerge from Stanford. He sat on the boards of several companies including Google, Kaboodle, Adchemy, Baynote, Vuclip, Tapulous and Stanford Student Enterprises," noted the IIT-Kanpur description.

Going further, he co-authored seminal research papers on the internet alongside Larry Page, Sergey Brin and Stanford academic Terry Winograd, including the PageRank algorithm paper and "What Can You Do With a Web in Your Pocket?". He would also go on to teach and advise many of Google's pioneering developers and researchers, including its first employee, Craig Silverstein.

Born on 24 March 1962 in Jammu, Professor Motwani grew up in a military household with his father serving in the Indian Army. Although the family moved around a lot thanks to his father's postings, he ended up graduating from high school at St Columba's boys' school in New Delhi.

From a very young age, he showed a genuine appreciation and aptitude for numbers. Growing up, he read a plethora of books about scientists and mathematicians lying around in his family's collection. Professor Motwani was particularly inspired after reading one about the legendary 19th-century German mathematician and physicist Carl Friedrich Gauss.

"This [desire to become a mathematician] was partly shaped by the books I had at home. My parents for some reason had a lot of these books – '10 great scientists' or 'five famous mathematicians' – their life stories and so on. As a child, whatever heroes you read about you want to become," said Rajeev in an interview with Alumni Connect (IIT-Kanpur).

But when the time came to choose what subject he wanted in college, his family encouraged him to choose computer science, a subject they saw as more stable and lucrative than pure mathematics, according to a 2009 profile by tech journalist Bobbie Johnson for The Guardian.

Despite his apprehensions, he enrolled at IIT-Kanpur, and soon discovered that computer science was a discipline that contained a high degree of mathematics.

"I truly wanted to be a mathematician, and my parents were hesitant because how do you make money as a mathematician, how do you support a family? I was basically forced into going into computer science even though I did not want to, but it turned out to be a wonderful surprise that computer science is actually quite mathematical as a field," he recalled in an interview.

As a student among the first cohort of undergraduate computer science students at IIT-Kanpur, Motwani stood out for not just his immaculate intelligence, but also his variety of interests. An avid reader of science fiction literature, he wasn't a student confined to his classroom and dormitory. He spent his time solving difficult crossword puzzles, playing volleyball and bridge, and was also known among his peers as a fun-loving, rock-n-rollin' party guy.

But Professor Kesav Nori, who taught Rajeev's first class on programming, TA 306: Principles of Programming, also had this to say: "Rajeev knew that [the] purpose of programming is not just coding; it is to formulate the problem. Rajeev's thinking was clear; his expression [was] direct. No unnecessary stuff. Rajeev had a knack for creating the most elegant and brief answers to the hardest of programming problems. It was a joy to read his papers."

Graduating from IIT-Kanpur in 1983, he would go on to earn his PhD (1988) at the University of California, Berkeley, under the supervision of Professor Richard M Karp. Within a couple of years, he became a professor at Stanford, an event which sparked a remarkable journey.

"At Stanford, he founded the Mining Data at Stanford (MIDAS) project, an umbrella organisation for several groups looking into new and innovative data management concepts. His research areas included databases, data mining, Web search and information retrieval," noted a profile of Professor Rajeev Motwani by the Office of Resources and Alumni, IIT-Kanpur.

Besides authoring two standout textbooks on theoretical computer science, he also served on editorial boards of many well-regarded scientific journals.

According to a short tribute published by Stanford University, "He made fundamental contributions to the foundations of computer science, search and information retrieval, streaming databases and data mining, and robotics. In these areas, he considered questions as philosophical as what makes problems inherently intractable, and as practical as finding similar images and documents from a database. His textbook, Randomized Algorithms, with Prabhakar Raghavan, epitomises this meeting of the abstract and the concrete, and has been a source of inspiration to countless students."

Employing his expertise in data mining and algorithms, professor Motwani understood the limitless possibilities of the world wide web. According to Bobbie Johnson for The Guardian, he helped start a number of classes and groups at Stanford aimed at investigating how to apply the mathematical principles he had worked on to the online world.

It was Brin who initially sought out professor Motwani for advice. Despite some initial scepticism about Brin's idea for a new web search engine in what was considered a crowded market, Motwani saw potential for something much bigger.

"However, he saw something different in their work and co-authored several papers that developed their strategy for finding information online – taking on the role of informal adviser to Google as a result. In return for his involvement, Motwani was rewarded with a stake in the company, a relationship that paid off when Google reached the stock market in 2004, making Page and Brin billionaires and reaping great rewards for himself," wrote Johnson.

In his blog, here's what Sergey Brin said: "Officially, Rajeev was not my advisor, and yet he played just as big a role in my research, education, and professional development."

"When my interest turned to data mining, Rajeev helped to coordinate a regular meeting group on the subject. Even though I was just one of hundreds of graduate students in the department, he always made the time and effort to help. Later, when Larry [Page] and I began to work together on the research that would lead to Google, Rajeev was there to support us and guide us through challenges, both technical and organisational," added Brin.

The creation of Google was a deeply collaborative effort. It began when Professor Jeffrey Ullman, a pioneering researcher in the field of computer science at Stanford University, Sergey Brin and Professor Motwani came together to form the research group MIDAS at Stanford.

In a 2002 interview with author and journalist Shivanand Kanavi, who was researching for his book Sand to Silicon: The amazing story of digital technology, Motwani recollected, "We did a lot of good work on data mining. Then there was this guy called Larry Page who wasn't really a part of the MIDAS group but was a friend of Sergey and would show up for these meetings. He was working on this very cool idea of doing random walks on the web."

"When I understood what the World Wide Web would look like, I knew I had to somehow force randomness into it. When Larry showed us what he was doing, it was like a complete epiphany – we thought it was absolutely the right thing to do. So Sergey got involved and it became a sub-group inside MIDAS. I was really a good sounding board for Sergey and Larry and I could relate to what they were doing through randomness. They then created a search engine called Backrub. It was running as a search engine from Stanford just like Yahoo ran till the traffic got big and the IT guys sent it off the campus," added Motwani.
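The "random walks on the web" idea Motwani describes is the intuition behind PageRank: a surfer follows links at random, occasionally jumping to a random page, and a page's rank is the long-run probability of finding the surfer there. A minimal power-iteration sketch on a toy four-page web (the link structure and the 0.85 damping factor are illustrative, not the original paper's data):

```python
def pagerank(links, damping=0.85, iters=50):
    """Power iteration for PageRank. links maps each page to the pages it
    links to; every page is assumed to have at least one outgoing link."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}
    for _ in range(iters):
        # Teleport mass spread evenly, plus link mass from each page
        new = {p: (1.0 - damping) / n for p in pages}
        for p, outs in links.items():
            share = damping * rank[p] / len(outs)
            for q in outs:
                new[q] += share
        rank = new
    return rank

toy_web = {
    "A": ["B", "C"],
    "B": ["C"],
    "C": ["A"],
    "D": ["C"],  # D links out, but nothing links to D
}
rank = pagerank(toy_web)
# C, with the most inbound links, gathers the most rank; D the least
```

The ranks form a probability distribution, so they always sum to one; adding a link to a page raises its rank without any page editing its own content, which is what made the signal hard to game with keywords alone.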

What started out as a fun research project eventually turned into something significant and serious. And calling the search engine Backrub wasn't going to cut it anymore.

"So somebody came up with the name Google. Google means 10 raised to the power of 100. It is actually spelt as GOOGOL, but somebody misspelt it and that's how the search engine got its name... The official story is we deliberately spelt it that way, but my guess is we misspelt it. So Google started and pretty soon everybody in the world was using Google," he recalled.

Meanwhile, this is how Larry Page remembered him: "Rajeev was a wise theoretician who had the rare knack and desire to turn theory into practical applications. With his always open door and clever insights, Rajeev was instrumental in the early work that led to Google."

Although professor Motwani did not reach the heights of mainstream popularity that his former students did, his experience with Google turned him towards helping other young innovators and entrepreneurs. He would go on to advise and invest in a myriad of other companies, including PayPal, the online payments service, and Sequoia Capital, a leading global venture capital firm. He developed a remarkable network of innovators, nascent entrepreneurs and potential investors, and became a go-to man of sorts in Silicon Valley.

He tragically passed away on 5 June 2009 at the age of just 47 in his swimming pool, with the official cause of death cited as accidental drowning. Although his life was cruelly cut short, he left behind an incredible legacy as a scientist, innovator and investor who never turned his back on those who sought out his help and expertise. In fact, one technology investor and friend, Ron Conway, called him "one of the smartest people who has ever existed in Silicon Valley".

(Edited by Divya Sethu; Images courtesy Wikimedia Commons, IIT-Kanpur & Twitter/Asha Motwani)

BSC Leads Multi-Partner Consortium in Innovative Data Mining … – HPCwire

March 22, 2023 The Barcelona Supercomputing Center (BSC) is the coordinator of the EU-funded EXTRACT project which began on January 1, 2023, bringing together a 10-partner consortium from Spain, France, Italy, Finland, Israel and Switzerland.

This three-year project will work to provide a distributed data-mining software platform for extreme data across the compute continuum. It pursues an innovative and holistic approach to data mining workflows across edge, cloud and high-performance computing (HPC) environments and will be validated through two use cases that require extreme data: crisis management in the City of Venice and an astrophysics use case.

Data has become one of the most valuable assets worldwide due to its ubiquity in the thriving technologies of Cyber-Physical Systems (CPS), Internet of Things (IoT) and Artificial Intelligence (AI). While these technologies provide vast data for a variety of applications, deriving value from this raw data requires the ability to extract relevant and secure knowledge that can be used to form advanced decision-making strategies.

The BSC researchers will play a critical role in the project by developing data-driven deployment and scheduling methods required to select the most appropriate computing resource. They will also develop a distributed monitoring architecture capable of securely observing the performance, security, and energy consumption of data-mining workflow execution. Moreover, the BSC will explore various strategies, including AI-based orchestration for deploying and scheduling workflows, to ensure that the various goals are optimized holistically while respecting the constraints imposed by extreme data characteristics.

Current practices and technologies are only able to cope with some data characteristics independently and uniformly. EXTRACT will create a complete edge-cloud-HPC continuum by integrating multiple computing technologies into a unified secure compute-continuum. It will do so by considering the entire data lifecycle, including the collection of data across sources, the mining of accurate and useful knowledge and its consumption.

The EXTRACT platform will be validated in two real-world use-cases, each having distinct extreme data and computing requirements.

This work will address the previously unaddressed challenge of orchestration across the edge-cloud-HPC continuum, including ensuring that orchestration technologies are explicitly aware of extreme data characteristics and workflow descriptions.

Eduardo Quiñones, established researcher at the Barcelona Supercomputing Center and EXTRACT coordinator, is confident that "by seamlessly integrating major open-source AI and Big Data frameworks, EXTRACT technology will contribute to providing the technological solutions Europe needs to effectively deal with extreme data. It will go beyond facilitating the wider and more effective use of data to reinforce Europe's ability to manage urgent societal challenges."

About EXTRACT

The EXTRACT project (A distributed data-mining software platform for extreme data across the compute continuum) is funded under Horizon Research and Innovation Action number 101093110. The project began on 1 January 2023 and will end 31 December 2025. The consortium, formed of 10 partners, is coordinated by the Barcelona Supercomputing Center (BSC). Consortium members include: Ikerlan (Spain), Universitat Rovira i Virgili (Spain), Observatoire de Paris (France), the Centre National de la Recherche Scientifique (France), Université Paris Cité (France), Logos Ricerca e Innovazione (Italy), City of Venice (Italy), Binare (Finland), Mathema srl (Italy), IBM Israel (Israel), SixSq (Switzerland).

Source: BSC-CNS

From Text Mining to Abstract Mining – Drew Today

Tags: Caspersen, data analytics, Homepage, Professors

March 2023 – What are text mining and abstract mining – and how are they useful tools in medical research?

These questions and many others were answered during a hybrid event featuring Drew University's Dr. Ellie Small, Norma Gilbert Junior Assistant Professor of Mathematics and Computer Science.

At the event, hosted by Drew's Caspersen School of Graduate Studies Data Analytics program, attendees had the privilege of learning about text mining, abstract mining, and new methods developed to identify novel ideas for medical research.

Data analytics has seen a surge in growth opportunities, largely due to the availability of data, the need to analyze the data in various ways, and the increased ability to store and analyze data.

"The cost of computer storage has decreased, while computation power has increased," said Small, who specializes in data science and has completed research papers in networks and text mining.

Small explained the difference between various types of data analysis: text mining is analyzing text in documents – from one document to thousands – and abstract mining is analyzing multiple words or short phrases across documents.

Utilizing PubMed, a biomedical literature database, Small developed logic to extract frequently occurring phrases from the housed papers and cluster them according to the frequency of the phrases within the papers.
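A minimal sketch of the kind of pipeline described above – pulling frequently occurring two-word phrases out of a set of abstracts and grouping papers that share them. The toy abstracts, stopword list and frequency threshold are illustrative; this is not Small's actual PubMed logic:

```python
from collections import Counter, defaultdict

STOPWORDS = {"the", "a", "of", "in", "and", "to", "is", "for", "with"}

def bigrams(text):
    """Two-word phrases from a text, with common stopwords dropped."""
    words = [w for w in text.lower().split() if w not in STOPWORDS]
    return list(zip(words, words[1:]))

def frequent_phrases(abstracts, min_count=2):
    """Count each phrase once per abstract and keep the common ones."""
    counts = Counter()
    for text in abstracts.values():
        counts.update(set(bigrams(text)))
    return {p for p, c in counts.items() if c >= min_count}

def cluster_by_phrase(abstracts, phrases):
    """Group paper IDs under each frequent phrase they contain."""
    clusters = defaultdict(list)
    for paper_id, text in abstracts.items():
        for phrase in set(bigrams(text)) & phrases:
            clusters[phrase].append(paper_id)
    return clusters

abstracts = {
    "P1": "gene expression in tumor cells",
    "P2": "tumor cells respond to gene expression changes",
    "P3": "protein folding dynamics",
}
phrases = frequent_phrases(abstracts)
clusters = cluster_by_phrase(abstracts, phrases)
# "gene expression" and "tumor cells" each occur in both P1 and P2
```

At PubMed scale the same idea would run over millions of abstracts, with the resulting phrase clusters pointing researchers to groups of related papers they might otherwise miss.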

This application of data greatly simplifies medical research for students and the medical community at large.

Alex Rudniy, assistant professor of data analytics, also offered an overview of the many uses of data analytics tools and the industries that utilize them – marketing, travel, health care, and beyond.

Insurer Zurich experiments with ChatGPT for claims and data mining – Financial Times

Bitcoin mining booms in Texas – Reuters

MCCAMEY, Texas, March 23 (Reuters) - Cryptocurrency bankruptcies and worries over electric power consumption have failed to dent the industry's growth in Texas, according to a top trade group, which cited the continued rise in miners' power demands.

Bitcoin miners consume about 2,100 megawatts of the state's power supplies, said Lee Bratcher, president of industry group Texas Blockchain Council. That power usage rose 75% last year and was nearly triple that of the prior 12 months, Bratcher said.

Those demands amount to about 3.7% of the state's lowest forecast peak load this year, according to data from grid operator Electric Reliability Council of Texas (ERCOT).

"There's been some challenges with the Bitcoin mining industry," Bratcher said, noting his group recently saw two prominent bankruptcies and other miners scaling back expansions.

The industry also faces new federal regulations, including a proposed 30% tax on electricity usage for digital mining and calls by the U.S. Treasury secretary and commodities regulator for a regulatory framework.

New York this year imposed a ban on some cryptocurrency mining that runs on fossil fuel-generated power. Other states are expected to follow suit.

But in Texas, some counties have offered tax incentives and miners continue to be drawn to its wind and solar power, which could supply about 39% of ERCOT's energy needs in 2023.

"Bitcoin mining is a very energy intensive business, which is why we tend to find places like West Texas to be full of Bitcoin miners," said Matt Prusak, chief commercial officer at cryptocurrency miner U.S. Bitcoin Corp, which has one of its mining operations in a 280-megawatt wind farm in Texas.

Its McCamey, Texas, site last month consumed 173,000 megawatt hours (MWh) of power, about 60% provided by the grid and nearly 40% by the nearby wind farm. The average American home uses about 10 MWh in a year, according to the Energy Information Administration.
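To put the reported figures in perspective, a rough back-of-the-envelope calculation (treating the article's "about 60%" and "nearly 40%" as exact for illustration) splits the site's monthly consumption between grid and wind and converts it into average-home equivalents:

```python
# Figures as reported in the article.
site_monthly_mwh = 173_000        # McCamey site consumption last month
grid_share, wind_share = 0.60, 0.40
home_annual_mwh = 10              # EIA average for a US home per year

grid_mwh = site_monthly_mwh * grid_share          # grid-supplied portion
wind_mwh = site_monthly_mwh * wind_share          # wind-farm portion
# Annualized site usage expressed as a count of average homes.
homes_equivalent = site_monthly_mwh * 12 / home_annual_mwh

print(grid_mwh, wind_mwh, homes_equivalent)
```

At that rate, a single mining site draws roughly as much power over a year as 200,000 average American homes.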

In Texas, where about 250 people died during a winter storm blackout that exposed the fragility of the state's grid, the prospect of higher crypto demand has raised alarms.

"There are a lot of Bitcoin mines that are trying to connect to the system," said Joshua Rhodes, a research scientist at the University of Texas at Austin. "If all of them were to connect in the timelines that they are looking to connect, then it probably would present an issue to the grid because that load would be growing way faster than it ever has before."

Reporting by Evan Garcia and Dan Fastenberg; writing by Laila Kearney; Editing by Chizu Nomiyama

Our Standards: The Thomson Reuters Trust Principles.

Read more here:

Bitcoin mining booms in Texas - Reuters

An In-Depth Examination of Business Intelligence and Data Integration – CIOReview

The purpose of data analytics is to collect, analyze, and visualize data so businesses can make data-driven decisions about their operations

Fremont, CA: As organizations have become more automated and big data-driven, data integration and business intelligence have become increasingly important. Businesses must consolidate and process vast amounts of data from various sources, including internal systems, cloud-based solutions, and third-party sources. A central repository for analyzing data can be created through data integration tools that help businesses bring data from several sources together.

In addition to automation, the need for accurate and timely data has also grown. Data integration is necessary for businesses to run their operations efficiently.

Data integration

The data integration process combines data from different sources into a unified view. Converting the data and loading it into a central repository or data warehouse makes it straightforward to access and analyze, and integrating accurate, timely data is critical to informed decision-making in any data-driven organization. Combining data from different sources is a complex process: in many cases, enterprises rely on ETL (Extract, Transform, Load) tools, which convert data from disparate sources into a consistent format and then load it into a centralized repository. In addition to improving data quality and reducing redundancy, data integration can streamline data management. Popular data integration solutions include Informatica PowerCenter, Talend Open Studio, and Microsoft SQL Server Integration Services (SSIS).
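The ETL pattern described above can be sketched with Python's standard library alone. This is an illustrative toy, not how PowerCenter, Talend, or SSIS work internally: it extracts records from two hypothetical sources with different schemas (a CSV feed and a JSON feed), transforms them into one consistent format, and loads them into a SQLite table standing in for the warehouse.

```python
import csv
import io
import json
import sqlite3

def extract(csv_text, json_text):
    """Extract: read raw records from two heterogeneous sources."""
    rows = list(csv.DictReader(io.StringIO(csv_text)))
    rows += json.loads(json_text)
    return rows

def transform(rows):
    """Transform: normalize differing field names and types into one schema."""
    out = []
    for r in rows:
        out.append({
            "customer": str(r.get("customer") or r.get("cust_name")).strip().title(),
            "amount": round(float(r.get("amount") or r.get("total")), 2),
        })
    return out

def load(rows, conn):
    """Load: write the unified records into a central warehouse table."""
    conn.execute("CREATE TABLE IF NOT EXISTS sales (customer TEXT, amount REAL)")
    conn.executemany("INSERT INTO sales VALUES (:customer, :amount)", rows)
    conn.commit()

csv_src = "customer,amount\nalice,19.5\n"
json_src = '[{"cust_name": "bob", "total": "7.25"}]'
conn = sqlite3.connect(":memory:")
load(transform(extract(csv_src, json_src)), conn)
print(conn.execute("SELECT customer, amount FROM sales ORDER BY customer").fetchall())
```

Production ETL tools add scheduling, error handling, and connectors for dozens of source systems, but each pipeline still decomposes into these same three stages.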

Business intelligence

A business intelligence system analyzes and interprets data to provide valuable insights that support decision-making.

Business intelligence uses data to gain insights into business operations, processes, and performance. The process involves several steps, including data warehousing, data mining, reporting, and analysis.

Benefits of data integration

Integrating data makes decisions better informed because the data is accurate, consistent, and up to date. Businesses can reduce costs and save time by combining data from multiple sources, and integration makes data easier to manage and faster to access and analyze. Bringing data together into a unified view also gives businesses deeper insight into their operations.

Go here to see the original:

An In-Depth Examination of Business Intelligence and Data Integration - CIOReview

PAAB Publishes Draft Guidance Document on Use of Real-World … – Fasken

The Pharmaceutical Advertising Advisory Board (PAAB) has published a draft guidance document on the use of real-world evidence (RWE) in advertising directed to healthcare professionals (HCPs). The draft guidance document is available upon request from PAAB. PAAB is inviting industry stakeholders to provide feedback on the draft guidance document until the consultation period closes on April 3, 2023.

According to the draft guidance document, PAAB recognizes that not all clinical data that is relevant to clinical decisions by HCPs may be supportable by controlled and well-designed clinical trials with demonstrated statistical significance (so-called "gold standard" data). PAAB's aim is to create a framework for the use of RWE in advertising to facilitate delivery of the best data currently available to HCPs even in the absence of gold standard data, provided that the RWE data is sufficiently robust to be relevant and valuable to clinical practice.

Under PAAB's proposed approach outlined in the draft guidance document, RWE may be used in advertising in addition to gold standard data, provided that the RWE data meets certain base requirements for validity and relevance and is presented in the advertisement in alignment with certain formatting principles, each of which is described below.

The draft guidance document provides the following nine criteria to be used to ascertain whether RWE meets basic requirements for validity and relevance:

The draft guidance document provides the following five principles for presentation of RWE data in advertising:

The draft guidance document provides examples of the application of these principles to visual advertisements, as well as adaptations of these principles for advertisements in video or audio formats.

Our life sciences team has significant expertise advising the pharmaceutical industry on advertising compliance and other matters and is available to consult on the draft guidance document.

Read the original post:

PAAB Publishes Draft Guidance Document on Use of Real-World ... - Fasken