
Top Machine Learning Tools Used By Experts In 2021 – Analytics Insight

The amount of data generated every day is so enormous that a new term, big data, was coined to describe it. Big data is usually raw and cannot be used directly to meet business objectives, so transforming it into a form that is easy to understand is important. This is exactly where machine learning comes into play. With machine learning in place, it is possible to understand customer demands, behavioural patterns and much more, enabling a business to meet its objectives. For this purpose, companies and experts rely on certain machine learning tools. Here are the top machine learning tools used by experts in 2021. Have a look!

Keras is a free and open-source Python library popularly used for machine learning. Designed by Google engineer François Chollet, Keras acts as an interface for the TensorFlow library. In addition to being user-friendly, this machine learning tool is quick and easy to use and runs on both CPU and GPU. Keras is written in Python and functions as an API for neural networks.
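To give a sense of what that API looks like in practice, here is a minimal sketch of a small Keras classifier; the layer sizes, optimiser and random toy data are placeholder choices for illustration, not anything prescribed by the article.

```python
# Minimal Keras sketch: a small feed-forward binary classifier.
# Layer sizes, optimizer and the random toy data are illustrative placeholders.
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

X = np.random.rand(1000, 20)               # 1,000 samples with 20 features
y = np.random.randint(0, 2, size=(1000,))  # binary labels

model = keras.Sequential([
    layers.Dense(32, activation="relu", input_shape=(20,)),
    layers.Dense(16, activation="relu"),
    layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(X, y, epochs=5, batch_size=32, verbose=0)  # runs on CPU or GPU transparently
print(model.evaluate(X, y, verbose=0))               # [loss, accuracy] on the toy data
```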

Yet another widely used machine learning tool across the globe is KNIME. It is easy to learn, free and ideal for data reporting, analytics, and integration platforms. One of the many remarkable features of this machine learning tool is that it can integrate codes of programming languages like Java, JavaScript, R, Python, C, and C++.

WEKA, designed at the University of Waikato in New Zealand, is a tried-and-tested open-source machine learning solution. This machine learning tool is considered ideal for research, teaching machine learning models, and creating powerful applications. It is written in Java and supports platforms like Linux, Mac OS and Windows. It is extensively used for teaching and research purposes, and also for industrial applications, for the sole reason that the algorithms employed are easy to understand.

Shogun, an open-source and free-to-use software library for machine learning, is easily accessible for businesses of all backgrounds and sizes. Shogun's core is written entirely in C++, but it can be accessed from other development languages, including R, Python, Ruby, Scala, and more. From regression and classification to hidden Markov models, this machine learning tool has you covered.

If you are a beginner, there cannot be a better machine learning tool to start with than Rapid Miner, because it doesn't require any programming skills in the first place. This machine learning tool is considered ideal for text mining, data preparation, and predictive analytics. Designed for business leaders, data scientists, and forward-thinking organisations, Rapid Miner has grabbed attention for all the right reasons.

TensorFlow is yet another machine learning tool that has gained immense popularity in no time. This open-source framework blends neural network models with other machine learning strategies. With its ability to run on both CPU and GPU, TensorFlow has managed to make it to the list of favourite machine learning tools.
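As a rough illustration of the CPU/GPU point, the snippet below (an assumed, minimal example, not taken from the article) lists the devices TensorFlow can see and computes a simple gradient; the framework places the work on a GPU automatically when one is available.

```python
# TensorFlow sketch: report available devices and compute a gradient.
# Values and shapes are arbitrary and purely illustrative.
import tensorflow as tf

print("GPUs visible:", tf.config.list_physical_devices("GPU"))

x = tf.Variable(3.0)
with tf.GradientTape() as tape:
    y = x ** 2 + 2.0 * x              # y = x^2 + 2x
print(float(tape.gradient(y, x)))     # dy/dx = 2x + 2 -> 8.0 at x = 3
```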



Go here to read the rest:
Top Machine Learning Tools Used By Experts In 2021 - Analytics Insight


New exhibition to investigate the history of AI & machine learning in art. – FAD magazine

Gazelli Art House to present Code of Arms, a group exhibition investigating the history of artificial intelligence (AI) and machine learning in art. The exploration of implementing code and AI in art in the 1970s and 80s comes at a time of rapid change in our understanding and appreciation of computer art.

The exhibition brings together pioneer artists in computer and generative art such as Georg Nees (b.1926), Frieder Nake (b.1938), Manfred Mohr (b.1938) and Vera Molnar (b.1924), and iconic artists employing AI in their practice such as Harold Cohen (b.1928), Lynn Hershman Leeson (b.1941), and Mario Klingemann (b.1970).

Code of Arms follows the evolution of the medium through the works of the exhibited artists. Harold Cohen's painting Aspect (1964), a work shown at the Whitechapel Gallery in 1965 (Harold Cohen: Paintings 1960-1965), marks the artist's earliest point of enquiry, unfolding his scientific and artistic genius. Cohen, who was most famous for creating the computer program AARON, a predecessor of contemporary AI technologies, implemented the program in his work from the 80s onwards, as seen in the drawings from this period in the exhibition.

Many of the early computer artworks explored geometric forms and structure, employing technology that was still in its infancy. Plotter drawings carried out by flatbed precision plotters, and early printouts, by Manfred Mohr, Georg Nees, Frieder Nake and Vera Molnar from the mid-1960s through the 1980s are an excellent representation of that period: the artists focused on the visual forms rather than addressing the underlying meaning and ethics of using computers in their art. The artists saw machines as an external force that would allow them to explore the visual aspect of the works and experiment with form in an objective manner. Coming from different backgrounds, they worked alongside each other and made an immense contribution to early computer art.

Initially working as an abstract expressionist artist, Manfred Mohr (b. 1938) was inspired by Max Bense's information aesthetics, which defined his approach to the creative process from the 1960s onwards. Encouraged by the computer music composer Pierre Barbaud, whom he met in 1967, Mohr programmed his first computer drawings in 1969. On display are Mohr's plotter drawings of the 70s and 80s alongside a generative software piece from 2015.

Georg Nees (1926-2016) was a German academic who showed one of the world's first computer graphics created with a digital computer in 1965. In 1970, at the 35th Venice Biennale, he presented his sculptures and architectural designs, which he continued to work on through the 1980s, as seen through his drawings in this exhibition.

Frieder Nake (b. 1938) was actively pursuing computer art in the 1960s. With over 300 works produced and shown at various exhibitions (including Cybernetic Serendipity at the ICA, London in 1968), Nake brought his background in computer science and mathematics into his art practice. At The Great Temptation exhibition at the ZKM in 2005, Nees said: "There it was, the great temptation for me, for once not to represent something technical with this machine but rather something useless: geometrical patterns." Alongside his iconic 60s plotter drawings, Nake's recent body of work (Sets of Straight Lines, 2018) will be on view as a reminder of the artist's ability to transform and move away from geometric abstraction.

Vera Molnar (b. 1924) is a Hungarian-French artist who is considered a pioneer of computer and generative art. Her first non-representational images (abstract geometric and systematic paintings) were produced in 1946, and she has created combinational images since 1959. Her plotter drawings from the 80s are displayed alongside her later canvas and work on paper (Double Signe Sans Signification, 2005; Deux Angles Droits, 2006), demonstrating the artist's consistency and dedication to the process over three decades.

The exhibition moves on to explore relationships between digital technologies and humans through works by Lynn Hershman Leeson (b.1941), an American artist and filmmaker working in moving image, collage, drawing and new media. The artist, who has recently been the focus of a solo exhibition at the New Museum, New York, will show a series of her rare drawings from the 60s and 70s, as well as her seminal work Agent Ruby (2001), commissioned by SFMoMA, an algorithmic work that interacts with online users through a website, shaping the AI's memory, knowledge and moods. Leeson is known for Lorna (1983), the first interactive piece using videodisc, and Deep Contact (1984), the first artwork to incorporate a touch screen.

Mario Klingemann brings neural networks, code and algorithms into the contemporary context. The artist investigates the systems of today's society, employing deep learning, generative and evolutionary art, glitch art and data classification. The exhibition features his recent digital artwork Memories of Passersby I (Solitaire Version), 2018, and the prints Morgan le Fay and Cobalamime from 2017.

Mark Westall

Mark Westall is the Founder and Editor of FAD magazine, Founder and co-publisher of Art of Conversation, and founder of the platform @worldoffad.


Read more here:
New exhibition to investigate the history of AI & machine learning in art. - FAD magazine


Machine Learning May Help Predict Success of Prescription Opioid Regulations | Columbia Public Health – Columbia University

Hundreds of laws aimed at reducing inappropriate prescription opioid dispensing have been implemented in the United States, yet due to the complexity of the overlapping programs, it has been difficult to evaluate their impact. A new study by researchers at Columbia University Mailman School of Public Health uses machine learning to evaluate the laws and their relation to prescription opioid dispensing patterns. They found that the presence of prescription drug monitoring programs (PDMPs) that give prescribers and dispensers access to patient data was linked to high-dispensing and high-dose dispensing counties. The findings are published in the journal Epidemiology.

"The aim of our study was to identify individual and prescription opioid-related law provision combinations that were most predictive of high opioid dispensing and high-dose opioid dispensing in U.S. counties," said Silvia Martins, MD, PhD, associate professor of epidemiology at Columbia Mailman School. "Our results showed that not all prescription drug monitoring program laws are created equal or influence effectiveness, and there is a critical need for better evidence on how law variations might affect opioid-related outcomes. We found that a machine learning approach could help to identify what determines a successful prescription opioid dispensing model."

Using 162 prescription opioid law provisions capturing prescription drug monitoring program access, reporting and administration features, pain management clinic provisions, and prescription opioid limits, the researchers examined various approaches and models to attempt to identify the laws most predictive of county-level high dispensing and high-dose dispensing in different overdose epidemic phases (the prescription opioid phase, 2006-2009; the heroin phase, 2010-2012; and the fentanyl phase, 2013-2016) to further explore pattern shifts over time.

PDMP patient data access provisions most consistently predicted high-dispensing and high-dose dispensing counties. Pain management clinic-related provisions did not generally predict dispensing measures in the prescription opioid phase but became more discriminant of high dispensing and high-dose dispensing counties over time, especially in the fentanyl period.

"While further research employing diverse study designs is needed to better understand how opioid laws, generally and specifically, can limit inappropriate opioid prescribing and dispensing to reduce opioid-related harms, we feel strongly that the results of our machine learning approach to identify salient law provisions and combinations associated with dispensing rates will be key for testing which law provisions and combinations of provisions work best in future research," noted Martins.

The researchers observe that there are at least two major challenges to evaluating the impacts of prescription opioid laws on opioid dispensing. First, U.S. states often adopt widely different versions of the same general type of law, making it particularly important to examine the specific provisions that make these laws more or less effective in regards to opioid-related harms. Second, states tend to enact multiple law types simultaneously, making it difficult to isolate the effect of any one law or specific provisions.

"Machine learning methods are increasingly being applied to similar high-dimensional data problems, and may offer a complementary approach to other forms of policy analysis, including as a screening tool to identify policies and law provision interactions that require further attention," said Martins.

Co-authors are Emilie Bruzelius, Jeanette Stingone, Hanane Akbarnejad, Christine Mauro, Megan Marzial, Kara Rudolph, Katherine Keyes, and Deborah Hasin, Columbia University Mailman School; Katherine Wheeler-Martin and Magdalena Cerdá, NYU Grossman School of Medicine; Stephen Crystal and Hillary Samples, Rutgers University; and Corey Davis, Network for Public Health Law.

The study was supported by the National Institute on Drug Abuse, grants DA048572, DA047347, DA048860 and DA049950; the Agency for Healthcare Quality and Research, grant R18 HS023258; and the National Center for Advancing Translational Sciences and the New Jersey Health Foundation, grant TR003017.

Read more from the original source:
Machine Learning May Help Predict Success of Prescription Opioid Regulations | Columbia Public Health - Columbia University


An Illustrative Guide to Extrapolation in Machine Learning – Analytics India Magazine

Humans excel at extrapolating in a variety of situations. For example, we can use arithmetic to solve problems with infinitely big numbers. One can question whether machine learning can do the same thing and generalize to cases that are arbitrarily far from the training data. Extrapolation is a statistical technique for estimating values that extend beyond a particular collection of data or observations. In this article, we shall explain its primary aspects, contrast it with interpolation, and attempt to connect it to machine learning.

Let's start the discussion by understanding extrapolation.

Extrapolation is a sort of estimation of a variable's value beyond the initial observation range, based on its relationship with another variable. Extrapolation is similar to interpolation, which generates estimates between known observations, but it is more uncertain and carries a higher risk of producing meaningless results.

Extrapolation can also refer to a method's expansion, presuming that similar methods are applicable. Extrapolation is a term that refers to the process of projecting, extending, or expanding known experience into an unknown or previously unexperienced area in order to arrive at a (typically speculative) understanding of the unknown.

Extrapolation is a method of estimating a value outside of a defined range. Let's take a general example. If you're a parent, you may recall your youngster calling any small four-legged critter a cat because their first classifier employed only a few traits. They were also able to correctly identify dogs after being trained to extrapolate and factor in additional attributes.

Even for humans, extrapolation is challenging. Our models are interpolation machines, no matter how clever they are. Even the most complicated neural networks may fail when asked to extrapolate beyond the limitations of their training data.

Machine learning has traditionally only been able to interpolate data, that is, generate predictions about a scenario that is between two other, known situations. Because machine learning only learns to model existing data locally as accurately as possible, it cannot extrapolate; that is, it cannot make predictions about scenarios outside of the known conditions. It takes time and resources to collect enough data for good interpolation, and it necessitates data from extreme or dangerous settings.

In regression problems, we use data to generalize a function that maps a set of input variables X to a set of output variables y. A y value can be predicted for any combination of input variables using this function mapping. When the input variables lie within the range of the training data, this procedure is referred to as interpolation; if the point of estimation lies outside this region, it is referred to as extrapolation.

The grey and white sections in the univariate example in the figure above show the extrapolation and interpolation regimes, respectively. The black lines reflect a selection of polynomial models that were used to make predictions within and outside of the training data set.

The models are well constrained in the interpolation regime, causing their predictions to collapse into a narrow band. Outside of the domain, however, the models diverge, producing radically disparate predictions. This large divergence of predictions (despite the models having the same form, slightly different hyperparameters, and the same training data) is caused by the absence of any information during training that would confine the models to predictions with smaller variance outside the training region.
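A small NumPy sketch of this effect (the data, noise level and polynomial degrees are arbitrary choices for illustration) shows fits that agree inside the training interval but scatter widely outside it:

```python
# Sketch of the divergence described above: several polynomial fits agree
# inside the training interval but scatter widely outside it.
import numpy as np

rng = np.random.default_rng(0)
x_train = np.linspace(0.0, 1.0, 20)                        # training (interpolation) domain
y_train = np.sin(2 * np.pi * x_train) + 0.1 * rng.normal(size=20)

x_test = np.array([0.5, 1.5, 2.0])                         # 0.5 interpolates, the rest extrapolate
for degree in (3, 5, 7, 9):
    coeffs = np.polyfit(x_train, y_train, degree)
    preds = np.polyval(coeffs, x_test)
    print(f"degree {degree}: {np.round(preds, 2)}")
# Predictions at x=0.5 are all similar; at x=1.5 and x=2.0 they differ wildly.
```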

This is the risk of extrapolation: model predictions outside of the training domain are particularly sensitive to training data and model parameters, resulting in unpredictable behaviour unless the model formulation contains implicit or explicit assumptions.

In the absence of training data, most learners do not specify the behaviour of their final functions. They're usually designed to be universal approximators, or as close as possible, with few modelling constraints. As a result, in places where there is little or no data, there is very little prior control over the function. Consequently, in most machine learning scenarios we cannot regulate the behaviour of the prediction function at extrapolation points, and we cannot tell when this is a problem.

Extrapolation should not be a problem in theory: in a static system with a representative training sample, the chances of having to predict at a point of extrapolation are essentially zero. However, most training sets are not representative, and they are not derived from static systems, so extrapolation may be required.

Even empirical data derived from a product distribution can appear to have a strong correlation pattern when scaled up to high dimensions. Because functions are learned based on an empirical sample, they may be able to extrapolate effectively even in theoretically dense locations.

Extrapolation works to some extent with linear and other types of regression, but not with decision trees or random forests. In a decision tree or random forest, the input is sorted and filtered down into leaf nodes that have no direct relationship to other leaf nodes in the tree or forest. This means that, while the random forest is great at sorting data, its results can't be extrapolated, because it doesn't know how to handle data outside of the training domain.
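A quick scikit-learn comparison makes this concrete; the data and model settings below are made up for illustration, not taken from the article. A linear model keeps following the trend beyond the training range, while a random forest stays pinned near the values stored in its leaves:

```python
# Sketch: linear regression extrapolates a trend, a random forest cannot.
# Data and model settings are arbitrary, chosen only to illustrate the point.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.ensemble import RandomForestRegressor

X_train = np.arange(0, 50, dtype=float).reshape(-1, 1)
y_train = 2.0 * X_train.ravel() + 5.0                  # simple linear trend

X_outside = np.array([[60.0], [80.0], [100.0]])        # beyond the training range

linear = LinearRegression().fit(X_train, y_train)
forest = RandomForestRegressor(n_estimators=100, random_state=0).fit(X_train, y_train)

print("linear:", linear.predict(X_outside))   # keeps following the trend (~125, 165, 205)
print("forest:", forest.predict(X_outside))   # stuck near the largest training value (~103)
```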

A good decision on which extrapolation method to use is based on a prior understanding of the process that produced the existing data points. Some experts have recommended using causal factors to assess extrapolation approaches. We will look at a few methods below. These are purely mathematical methods that you should relate to your problem appropriately.

Linear extrapolation is the process of drawing a tangent line from the end of the known data and extending it beyond that point. It gives good results only when used to extend the graph of an approximately linear function, or when not extrapolating too far beyond the existing data. If the two data points closest to the point x* to be extrapolated are (x_{k-1}, y_{k-1}) and (x_k, y_k), linear extrapolation produces the estimate y(x*) = y_{k-1} + ((x* - x_{k-1}) / (x_k - x_{k-1})) * (y_k - y_{k-1}).

A polynomial curve can be built using all of the known data or just a small portion of it (two points for linear extrapolation, three points for quadratic extrapolation, etc.). The resulting curve can then be extended beyond the available data. The most common way of performing polynomial extrapolation is to use Lagrange interpolation, or Newton's method of finite differences to generate a Newton series that matches the data. The data can then be extrapolated using the obtained polynomial.
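As a hedged sketch of this approach, scipy.interpolate.lagrange can build the interpolating polynomial, which can then be evaluated outside the data; the sample points below are illustrative and chosen so the fitted polynomial is easy to check by hand.

```python
# Sketch of polynomial extrapolation via Lagrange interpolation.
# The three sample points lie on y = x^2, so the fitted quadratic is exact.
import numpy as np
from scipy.interpolate import lagrange

x = np.array([1.0, 2.0, 3.0])
y = np.array([1.0, 4.0, 9.0])

poly = lagrange(x, y)      # quadratic passing through the three points
print(poly(4.0))           # evaluated outside the data range -> 16.0
```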

Five points near the end of the given data can be used to fit a conic section. If the conic section is an ellipse or a circle, it will loop back and rejoin itself when extrapolated. A parabola or hyperbola that has been extrapolated will not rejoin itself, but it may curve back toward the X-axis. A conic-section template (on paper) or a computer could be used for this form of extrapolation.

Further, we will look at a simple Python implementation of linear extrapolation.

The technique is useful when the relationship is approximately linear. It's done by drawing a tangent at the end of the known data and extending it beyond that limit. Linear extrapolation delivers a decent result when the projected point is close to the rest of the points.
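A minimal sketch of such an implementation, using the two-point formula given earlier; the helper name and sample values are illustrative assumptions, not taken from the article.

```python
# Minimal sketch of linear extrapolation from the last two known points.
def linear_extrapolate(x_known, y_known, x_star):
    """Estimate y at x_star using the two known points nearest the end."""
    x_prev, x_last = x_known[-2], x_known[-1]
    y_prev, y_last = y_known[-2], y_known[-1]
    slope = (y_last - y_prev) / (x_last - x_prev)
    return y_prev + slope * (x_star - x_prev)

x_vals = [1.0, 2.0, 3.0, 4.0]
y_vals = [2.0, 4.0, 6.0, 8.0]                     # y = 2x
print(linear_extrapolate(x_vals, y_vals, 6.0))    # -> 12.0
```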

Extrapolation is a helpful technique, but it must be used in conjunction with an appropriate model for describing the data, and it has limitations once you leave the training region. Its applications include prediction in situations where you have continuous data, such as time or speed. Prediction is notoriously imprecise, and the accuracy falls as the distance from the learned region grows. In situations where extrapolation is required, the model should be updated and retrained to lower the margin of error. In this article, we have looked at extrapolation and interpolation mathematically, related them to machine learning, and seen their effect on ML systems. We have also seen where extrapolation fails, and the methods that can be used.

Visit link:
An Illustrative Guide to Extrapolation in Machine Learning - Analytics India Magazine


We mapped every large solar plant on the planet using satellites and machine learning – The Conversation UK

An astonishing 82% decrease in the cost of solar photovoltaic (PV) energy since 2010 has given the world a fighting chance to build a zero-emissions energy system which might be less costly than the fossil-fuelled system it replaces. The International Energy Agency projects that PV solar generating capacity must grow ten-fold by 2040 if we are to meet the dual tasks of alleviating global poverty and constraining warming to well below 2°C.

Critical challenges remain. Solar is intermittent, since sunshine varies during the day and across seasons, so energy must be stored for when the sun doesn't shine. Policy must also be designed to ensure solar energy reaches the furthest corners of the world and places where it is most needed. And there will be inevitable trade-offs between solar energy and other uses for the same land, including conservation and biodiversity, agriculture and food systems, and community and indigenous uses.

Colleagues and I have now published in the journal Nature the first global inventory of large solar energy generating facilities. Large in this case refers to facilities that generate at least 10 kilowatts when the sun is at its peak. (A typical small residential rooftop installation has a capacity of around 5 kilowatts).

We built a machine learning system to detect these facilities in satellite imagery and then deployed the system on over 550 terabytes of imagery using several human lifetimes of computing.

We searched almost half of Earth's land surface area, filtering out remote areas far from human populations. In total we detected 68,661 solar facilities. Using the area of these facilities, and controlling for the uncertainty in our machine learning system, we obtain a global estimate of 423 gigawatts of installed generating capacity at the end of 2018. This is very close to the International Renewable Energy Agency's (IRENA) estimate of 420 GW for the same period.

Our study shows solar PV generating capacity grew by a remarkable 81% between 2016 and 2018, the period for which we had timestamped imagery. Growth was led particularly by increases in India (184%), Turkey (143%), China (120%) and Japan (119%).

Facilities ranged in size from sprawling gigawatt-scale desert installations in Chile, South Africa, India and north-west China, through to commercial and industrial rooftop installations in California and Germany, rural patchwork installations in North Carolina and England, and urban patchwork installations in South Korea and Japan.

Country-level aggregates of our dataset are very close to IRENA's country-level statistics, which are collected from questionnaires, country officials, and industry associations. Compared to other facility-level datasets, we address some critical coverage gaps, particularly in developing countries, where the diffusion of solar PV is critical for expanding electricity access while reducing greenhouse gas emissions. In developed and developing countries alike, our data provides a common benchmark unbiased by reporting from companies or governments.

Geospatially-localised data is of critical importance to the energy transition. Grid operators and electricity market participants need to know precisely where solar facilities are in order to know accurately the amount of energy they are generating or will generate. Emerging in-situ or remote systems are able to use location data to predict increased or decreased generation caused by, for example, passing clouds or changes in the weather.

This increased predictability allows solar to reach higher proportions of the energy mix. As solar becomes more predictable, grid operators will need to keep fewer fossil fuel power plants in reserve, and fewer penalties for over- or under-generation will mean more marginal projects will be unlocked.

Using the back catalogue of satellite imagery, we were able to estimate installation dates for 30% of the facilities. Data like this allows us to study the precise conditions which are leading to the diffusion of solar energy, and will help governments better design subsidies to encourage faster growth.

Knowing where a facility is also allows us to study the unintended consequences of the growth of solar energy generation. In our study, we found that solar power plants are most often in agricultural areas, followed by grasslands and deserts.

This highlights the need to carefully consider the impact that a ten-fold expansion of solar PV generating capacity will have in the coming decades on food systems, biodiversity, and lands used by vulnerable populations. Policymakers can provide incentives to instead install solar generation on rooftops which cause less land-use competition, or other renewable energy options.

The GitHub, code, and data repositories from this research have been made available to facilitate more research of this type and to kickstart the creation of a complete, open, and current dataset of the planet's solar energy facilities.

View original post here:
We mapped every large solar plant on the planet using satellites and machine learning - The Conversation UK


Sama has been named the Best in Machine Learning Platforms at the 2021 AI TechAwards – HapaKenya

Sama, a training data provider in Artificial Intelligence (AI) projects, has announced it has received the 2021 AI TechAward for Best in Machine Learning Platforms.

The annual awards, presented by AI DevWorld, celebrate companies leading in technical innovation, adoption, and reception in the AI and machine learning industry and across the developer community.

The winners for the 2021 awards were selected from more than 100 entries submitted per category globally, and announced during a virtual AI DevWorld conference. The conference targeted software engineers and data scientists interested in AI as well as AI dev professionals looking for a landscape view on the newest AI technologies.

For over a decade, organizations such as Google, Microsoft, NVIDIA and others have continued to rely on Sama to deliver secure, high-quality training data and model validation for their machine learning projects.

Sama hires over 90% of its workforce from low-income backgrounds and marginalized populations, including unemployed urban and rural youth that are traditionally excluded from the digital economy.

The company has remained committed to connecting people to dignified digital work and paying them living wages while helping to solve some of the world's most pressing challenges. This includes reducing poverty, empowering women, and mitigating climate change. As a result, it has helped over 56,000 people lift themselves out of poverty, increased workers' wages by up to 4 times, and provided over 11,000 hours of training to youth and women, who comprise over 50% of its workforce in both its Kenya and Uganda offices.

Earlier this year, the company was recognized as one of the fastest-growing private companies in America on the 2021 Inc. 5000 list for the second year in a row.

Here is the original post:
Sama has been named the Best in Machine Learning Platforms at the 2021 AI TechAwards - HapaKenya


What to do if your Bitcoin, ether or other cryptocurrency gets stolen – CNET

Protect your cryptocurrency from cybercriminals.

If you've invested in Bitcoin, ether or any other cryptocurrency, here are two truths: Your savings are a target for thieves, and it can be tough to get your funds back if the worst happens.

Crypto exchanges are hacked surprisingly often. One of the biggest heists occurred in August, when cybercriminals stole $610 million in various cryptocurrencies from the Chinese platform Poly Network. The hackers eventually returned the funds.

That's an uncommon case. Mt. Gox, a Japanese exchange, was forced into bankruptcy in 2014 after crooks lifted $450 million in Bitcoin and other cryptocurrencies. Losses from crypto hacks, thefts, fraud and misappropriation totaled $681 million in the first seven months of this year, according to a report from crypto intelligence company CipherTrace. If losses continue on pace, they'd total $1.17 billion, though that would be a drop from last year's $1.9 billion.

Even if you store your crypto at one of the well-established exchanges, you might face a slog recovering your funds. After reportedly receiving thousands of customer complaints related to its customer service, Coinbase, one of the most popular exchanges, started a live phone support line in September, which doesn't appear to have pleased some of its unhappy customers.

Coinbase didn't respond to a request for comment but notes on its website that it carries "crime insurance" protecting a portion of digital assets held across its storage systems against losses from theft, including data breaches.

In addition, the company confirmed Wednesday that it's started testing a new subscription service that will allow customers to buy, sell and convert digital currencies without paying a fee for each trade. The website The Block reported earlier that the service also includes features like additional account protection and "prioritized phone support."

Read more: Crypto security can be a pain, but a few safeguards will go a long way

Of course, that won't help if someone hacks your personal wallet -- the software and sometimes hardware used to store crypto -- rather than the exchange itself. No one's in charge of cryptocurrencies, which are decentralized. You might want to complain, but good luck finding someone to listen.

What's worse than having your funds robbed? Watching the money move around on the blockchain, the technology that powers cryptocurrencies by creating a public record of transactions.

"Your stolen funds are right there in plain sight, but there's no way to get them back," said Don Pezet, co-founder of the online IT training company ITProTV. "It's like someone stole your car and parked it right in front of your house."

The best approach, of course, is to make sure your crypto never gets stolen. That means moving as much of it as possible into "cold" wallets that aren't connected to the internet. Secure any funds you leave in "hot" wallets, which are hosted online, as tightly as possible.

Should something bad happen, don't lose hope. Here are some tips from the experts:

If there's anything left in your compromised wallet, transfer it out, Pezet says. Delete the wallet and get a new one.

Any passwords related to your exchange account should be changed as soon as possible, says Andrew Gunn, senior threat intelligence analyst at ZeroFox. Switch email accounts. If you think the device you used to access your account might be compromised, reformat it or, preferably, don't use it anymore.

If your exchange is larger and better known, you're more likely to get some help. Act fast, and your exchange might be able to freeze your funds, depending on what stage the theft is at, Gunn says.

Be aware, however, that many exchanges aren't under much obligation to help. Some exchanges are located in countries with few regulations that cover cryptocurrencies. Some countries don't consider crypto to be an asset, Pezet says, reducing the odds of help from the authorities even further.

It's unlikely a formal report will help in recovering stolen crypto, but it doesn't hurt to have a case number or documentation. You never know if there will be an insurance claim or lawsuit you can be part of. Having evidence you took the theft seriously will help you establish standing if you have to.

In some cases, the FBI and crypto-tracing companies have been able to recover cryptocurrency. For example, in the case of the Colonial Pipeline ransomware attack, the FBI, with the help of tracing experts, was able to recover about $2.3 million of the $4.4 million paid in Bitcoin as ransom. But it isn't likely that federal authorities would go to those kinds of lengths for the average person.

See the article here:
What to do if your Bitcoin, ether or other cryptocurrency gets stolen - CNET


Cryptocurrency ether hits all time high of $4400 – Reuters

The exchange rates and logos of Bitcoin (BTH), Ether (ETH), Litecoin (LTC) and Monero (XMR) are seen on the display of a cryptocurrency ATM of blockchain payment service provider Bity at the House of Satochi bitcoin and blockchain shop in Zurich, Switzerland March 4, 2021. REUTERS/Arnd Wiegmann

HONG KONG, Oct 29 (Reuters) - Ether, the world's second largest cryptocurrency, hit an all-time high on Friday, a little over a week after larger rival bitcoin set its own record.

As cryptocurrency markets have rallied sharply in recent weeks, ether is up more than 60% since its late September trough.

The token, which underpins the ethereum blockchain network, rose as much as 2.6% to $4,400 in Asian hours, breaching the previous top of $4,380 set on May 12.

"It wouldn't surprise me if we go blasting through in European and U.S. trade," said Chris Weston, research head at Melbourne-based broker Pepperstone. "This is a momentum beast at the moment, and it looks bloody strong."

A recent technical upgrade to the Ethereum network seemed to have helped, he added.

"A lot of the time, with these technological upgrades and bits and pieces, this is news that fuels the beast, it's fodder for people to say, 'This is what we bought in for,' and as soon as it starts moving, it's like a red rag to a bull, people just go and buy."

Bitcoin, which hit its record high of $67,016 on Oct. 20, was last up 1.4% at $61,457, for an increase of about 50% since late September.

Among the biggest recent movers in cryptocurrencies, however, is meme-based cryptocurrency shiba inu, whose price has rocketed about 160% this week, and is the world's eighth largest token.

Shiba inu is a spinoff of dogecoin, itself born as a satire of a cryptocurrency frenzy in 2013, and has barely any practical use.

Reporting by Alun John in Hong Kong and Kevin Buckland in Tokyo; Editing by Clarence Fernandez

Our Standards: The Thomson Reuters Trust Principles.

More here:
Cryptocurrency ether hits all time high of $4400 - Reuters


Commonwealth Bank to offer cryptocurrency trading in first for Australia's big four – The Guardian Australia

The Commonwealth Bank will allow its customers to buy and sell cryptocurrency through its app, in the first move of its kind by a major Australian bank.

Australia's largest bank announced on Wednesday it had partnered with US-based crypto exchange Gemini and blockchain analysis firm Chainalysis to offer the service to its 6.5m CommBank app users.

Customers will be able to buy up to 10 crypto assets including bitcoin, Ethereum and Litecoin.

The bank will conduct a pilot in the next few weeks, ahead of a wider launch in 2022.

"We believe we can play an important role in crypto to address what's clearly a growing customer need and provide capability, security and confidence in a crypto trading platform," CBA's chief executive Matt Comyn said in a statement.

The bank said research on its customers found many had either expressed interest in crypto assets, or were already trading crypto through exchanges.

"Customers have expressed concern regarding some of the crypto services in the market today, including the friction of using third party exchanges, the risk of fraud, and the lack of trust in some new providers. This is why we see this as an opportunity to bring a trusted and secure experience for our customers," Comyn said.

Dr Dimitrios Salampasis, a lecturer of fintech leadership and entrepreneurship at Swinburne Business School, said he was not surprised CBA had entered the cryptocurrency field.

He said the bank was trying to get first mover advantage in Australia, and hoped it would bring more legitimacy to the cryptocurrency space.

"Having this coming from a systemic and the biggest bank in Australia, it's definitely a move that will change a lot," he said.

"And it will hopefully bring legitimacy, bring further harmonisation, push further regulation and also minimise debanking, which has been a massive pain for all cryptocurrency startups in particular."

Debanking is where financial institutions refuse to offer services to businesses in Australia.

A Senate select committee report on fintech services in Australia, tabled this month, cited several crypto businesses that had been rejected by dozens of financial institutions in Australia, such as the exchange Bitcoin Babe.

The committee, chaired by Liberal senator Andrew Bragg, recommended the government regulate the sector to allow it to fully operate in Australia, including a market licensing regime for digital currency exchanges, and for the government to develop a clear process for businesses to deal with debanking.

Salampasis said the committee's report, along with CBA's gradual move into the sector, would likely foster regulation of cryptocurrency in Australia.

"There has to be regulation, there has to be provisions, especially in relation to custody, especially in relation to licensing," he said.

"I do believe that Australia has a once-in-a-lifetime opportunity to become a leader in the space and really drive a complete regulatory framework around cryptocurrencies."

Bragg welcomed the announcement from CBA.

"For too long, banks have cast aside cryptocurrency as an illegitimate fringe pursuit. I am pleased the tide is turning, as digital assets are mainstreamed," he said.

"Now banks are adopting cryptocurrency, they should stop debanking hardworking Australians."

CBA told the committee that it "does not have a policy around debanking due to competitive or market factors" but "when making a decision on lending to new customers, we take a range of risk considerations into account including the terms and conditions of any loan documentation and possible security provisions provided".

See the article here:
Commonwealth Bank to offer cryptocurrency trading in first for Australias big four - The Guardian Australia


How Venture Capitalists Think Cryptocurrency Will Reshape Commerce – The New York Times

Decentralized finance and artificial intelligence

Crypto finance can sound like science fiction. But this is our reality. Right now, all over the internet, on decentralized finance programs like Uniswap, people are trading, borrowing and lending digital assets on platforms where computer code runs the show. There is now about $235 billion invested in DeFi, by one industry account.

On the DeFi protocol Compound, a recent programming snafu revealed vulnerabilities in systems deliberately designed to eliminate the middlemen regulators traditionally rely on to oversee financial transactions and guarantee consumer protection. After a bug was introduced during a software upgrade, $160 million worth of cryptocurrency was put at risk of improper distribution, and about $90 million of that was actually wrongly paid out, the company said.

Technically, Compound is not brokering trades, just programming software for transactions. But its founder, Robert Leshner, conceded in an interview with The New York Times this summer that he has long feared an error could result in major losses. "For the first couple of years of Compound, I woke up in a cold sweat every morning," he said.

Started in 2017, the company now claims to have $18 billion worth of cryptocurrency earning interest on its platform. Mr. Leshner's recurring nightmare was that somebody would find a flaw in the program, a line of bad code, and steal everything. "All it takes is one bug," he said.

A16Z is backing a network called Helium. This decentralized wireless infrastructure company hopes to someday compete with established brands like Verizon or AT&T. Community members create a hotspot in their neighborhood with a special device and earn data and Helium's crypto tokens in exchange for helping to power this group 5G cellular system.

Popularity's value on social networks can now be calculated when you tokenize yourself and create an economy fueled by your own crypto.

On BitClout, every user gets a coin and its value suggests what the internet thinks of them. There is no company behind it; it's just coins and code, the developers claim. An account with the name Elon Musk is the top-valued token at about $115. But the project's launch was controversial, with crypto insiders calling out the dystopian social network for relying on data collected by giants like Twitter to calculate reputation, among other critiques. DeSo, short for Decentralized Social, is a blockchain network for developers to build decentralized social media programs.

Read more:
How Venture Capitalists Think Cryptocurrency Will Reshape Commerce - The New York Times
