
Fundamental Satoshi Nakamoto Statement Revealed From Hidden Emails – Investing.com

U.Today - An early contributor, Martti Malmi, has shared a collection of previously undisclosed emails with Satoshi Nakamoto. These emails shed new light on Bitcoin's early days and Nakamoto's philosophical approach to the digital currency.

Key among these insights is Nakamoto's perspective on Bitcoin as primarily a medium of exchange, not merely an investment vehicle. He highlighted the energy efficiency of Bitcoin's proof-of-work mechanism compared to traditional banking, addressing environmental concerns before they became a major talking point.

Nakamoto's email from May 2, 2009, commends Malmi for grasping Bitcoin's potential, mentioning that linking Bitcoin to fiat currencies could boost its value, a topic he was hesitant to discuss publicly until the right moment. He also stressed the importance of preparing for an influx of users, anticipating widespread adoption.

Furthermore, Nakamoto envisioned Bitcoin's ability to scale up to handle transaction volumes much larger than those handled by conventional financial systems, at a fraction of the cost. He assured that as the network grew, it would become more secure, dismissing early vulnerabilities as minor startup issues.

Another interesting and somewhat funny detail from the emails is Nakamoto's request for help with website content. This humanizes the often mythologized figure of Nakamoto, showing his willingness to collaborate and delegate.

Bitcoin's encryption, backups and user-friendliness were also topics of discussion, showing that Nakamoto was committed to making Bitcoin accessible and secure for the masses.

These email exchanges enrich the narrative around Satoshi Nakamoto and Bitcoin's origins and provide more interesting details, which, when analyzed, may shed more light on Nakamoto's secret identity. For now, only one thing is clear: Satoshi Nakamoto's vision is close to what we have today, despite the continuous evolution of Bitcoin.

This article was originally published on U.Today

Read more:

Fundamental Satoshi Nakamoto Statement Revealed From Hidden Emails - Investing.com

Read More..

Satoshi Missed ‘Big Opportunity’ Avoiding This Date for Bitcoin Halving: Anthony Pompliano – Investing.com

U.Today - Anthony Pompliano, venture investor and founder of the private credit fund Pomp Investments, has taken to his X account to send a message about Satoshi Nakamoto and the Bitcoin halving to his numerous followers.

He jokingly tweeted that the enigmatic Bitcoin founder Satoshi Nakamoto missed a big opportunity by not setting the halving for April 20. That date is known as weed day and, more recently, as Dogecoin Day, even though the iconic meme cryptocurrency was not released in April.

Thus, Pompliano implied that Satoshi Nakamoto had missed an opportunity to embed his brainchild even deeper into the minds of average people. Bitcoin was made to oppose fiat money and the traditional banking system in the first place, and weed has been a symbol of opposing the system for decades now. It has also been legalized in many countries already.

Users tweeted that in some of those countries the halving took place not on April 19 but on April 20. One user tweeted that Satoshi Nakamoto was based in Europe and therefore certainly kept that 4/20 date in mind.

Numerous cryptocurrency platforms, including Floki, one of the most popular meme coins, have published tweets congratulating their communities on Dogecoin Day.

This article was originally published on U.Today


Bitcoin’s halving is a major spectacle that’s the whole point – Blockworks

The Bitcoin halving is imminent.

But even if you know what it is, you may not know why it is.

In our view, the halving exists to make bitcoin interesting, and interesting things attract attention. Bitcoin's pseudonymous inventor, Satoshi Nakamoto, could have chosen a boring issuance schedule. Instead, he imbued bitcoin with a seasonal fireworks display, commanding attention from an increasingly wide and diverse group of bitcoin users.

Bitcoin famously has a supply cap of 21 million, 1.3 million of which remain unminted. The network will mint these coins through the year 2140 in the same way bitcoins have always been minted.

Satoshi designed the system himself to reward miners who publish new blocks. He could have designed those rewards to hold steady over time at a constant amount per block, say, 10 bitcoin. Or he might have designed the rewards to decrease steadily at a constant rate.

Read more: Why is 2140 the end of bitcoin inflation?

Satoshi instead chose halvings. Every 210,000 blocks, the block reward suddenly drops by half. The first 210,000 blocks each yielded 50 new bitcoin to the miner; the next 210,000 blocks yielded 25; and so on. Tomorrow, and for the next four years, each block will yield 3.125 bitcoin.
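The schedule described above is simple enough to sketch in a few lines. The following Python snippet is an illustration, not Bitcoin's actual source code, although Bitcoin Core's `GetBlockSubsidy` performs essentially the same computation, in integer satoshis, with a right shift:

```python
# Sketch of the halving schedule described above.
# Rewards are tracked in satoshis (1 BTC = 100,000,000 satoshis)
# so halving remains exact integer arithmetic.

HALVING_INTERVAL = 210_000           # blocks between halvings
INITIAL_SUBSIDY = 50 * 100_000_000   # 50 BTC, in satoshis

def block_subsidy(height: int) -> int:
    """Return the block reward, in satoshis, at a given block height."""
    halvings = height // HALVING_INTERVAL
    if halvings >= 64:
        return 0  # after 64 halvings the reward has long since reached zero
    return INITIAL_SUBSIDY >> halvings  # each right shift halves the reward

print(block_subsidy(0) / 1e8)        # 50.0
print(block_subsidy(839_999) / 1e8)  # 6.25, the era before the fourth halving
print(block_subsidy(840_000) / 1e8)  # 3.125, the era beginning tomorrow
```

Because the shift rounds down, the smallest rewards lose fractions of a satoshi, which is why the total supply lands slightly under 21 million rather than exactly on it.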

By their very nature, halvings bring an economic shock, especially to miners. Block 840,001 will appear roughly ten minutes after block 840,000. But the miner of block 840,000 will earn $400,000 worth of new bitcoin, while the miner of block 840,001 will earn only $200,000 worth, at today's prices, anyway.

Bitcoin's volatility owes, in part, to its halving schedule. If demand remains relatively constant despite a sudden drop in newly available bitcoin, bitcoin's price will likely increase. At least, that's what has happened historically.

The dollar price of bitcoin increased 5,000% between the first and second halving, from $12.53 in November 2012 to $640 in July 2016; 1,300% between the second and third halving, from $640 in July 2016 to $9,000 in May 2020; and 700% between the third and fourth halving, from $9,000 in May 2020 to $70,000 in April 2024. Of course, bitcoin's price has also crashed many times during those periods. Like the weather, demand is a fickle thing.

Read more from our opinion section: Bitcoin's most promising, least dramatic halving is almost here

Halvings also spark discussions about bitcoin's price volatility in the short term and price trajectory in the long term. Each halving brings up the same inevitable question, especially considering past wild post-halving price swings: What will we see this time? For weeks now, TV networks have been interviewing CEOs and bitcoin thought leaders about the potential impact the halving might have on bitcoin's price.

We think Satoshi anticipated the potential for this kind of frenzy, and deliberately chose the four-year halving cycle to attract attention to bitcoin.

Satoshi was familiar with the idea of global spectacles that happen every four years. The World Cup and the Olympics garner massive attention, especially from people who otherwise rarely watch sports. Would you watch the Olympics annually? Monthly? Not likely. These events garner interest partly because of their rarity. The interval allows hype, and interest, to build. Networks run specials on the athletes expected to make a splash. Magazines run photo spreads. And when the opening ceremonies finally broadcast, three billion people watch worldwide.

Satoshi was a master promoter. He designed logos, built chat forums and schemed with users on those forums about how to stir up interest in bitcoin. He also designed a system to capture interest by being interesting.

Compare bitcoin to gold. Gold has a global brand earned over millennia. But when's the last time gold mining caught major headlines? If we mined an asteroid for gold, or discovered that we had mined every last nugget, that would capture attention. As things stand, however, gold mining is steady, predictable and unremarkable. Bitcoin is predictable, too. Yet it is predictably unsteady, especially with halvings thrown in, and thus remarkable.

Bitcoin is much younger than gold, with just 15 years since its creation. Yet bitcoin's quadrennial halving events and corresponding price fluctuations garner headlines worldwide. Interest has snowballed with every halving, as have new users. That's the goal.

Bitcoin halvings are spectacles, by design. And the design seems to be working. After all, it brought you to this article.

The authors are co-authors of the forthcoming academic book Resistance Money: A Philosophical Case for Bitcoin (Routledge Press).

Andrew M. Bailey is an interdisciplinary teacher and scholar whose work spans philosophy, politics, and economics. He is Associate Professor of Humanities at Yale-NUS College (Singapore).

Bradley Rettler is Associate Professor of Philosophy at the University of Wyoming, and has published peer-reviewed academic articles on metaphysics, philosophy of religion, epistemology, and cryptocurrency.

Craig Warmke researches money at the intersection of philosophy, economics, and computer science. He is Associate Professor of Philosophy at Northern Illinois University.



Satoshi Nakamoto Was Concerned Over Bitcoin as an Investment: Report – Investing.com Nigeria

Coin Edition -

Amidst controversies regarding the identity of Bitcoin's pseudonymous founder, Satoshi Nakamoto, a 120-page email correspondence between Nakamoto and his early collaborator, Martti Malmi, sheds light on the early days of Bitcoin's creation.

Recently, Chinese crypto journalist Colin Wu, known on X as Wu Blockchain, shared insights on the emails Malmi released earlier this year, initially produced as evidence against Craig Wright's claim to be the original Bitcoin founder.

As per Wu's X post, the conversation between Nakamoto and Malmi indicated Nakamoto's concerns over identifying Bitcoin as an investment. In a previous X post, Wu highlighted Nakamoto's earlier warning about Bitcoin's significant energy consumption. In addition, Nakamoto's concern over labeling Bitcoin as an investment also gained traction at the time.

Further, Wu pinpointed Nakamoto's insistence on not promoting anonymity.

On February 23, Martti Malmi took to X to draw the community's attention to the 2009-2011 email correspondence between himself and Nakamoto. He added that he wasn't initially comfortable with making the emails public, but Wright's trial had forced him to produce them as evidence.

However, the emails do not stand as significant evidence of Satoshi Nakamoto's real identity. But the conversation could provide significant insights into the Bitcoin creator's vision and concerns.

The post Satoshi Nakamoto Was Concerned Over Bitcoin as an Investment: Report appeared first on Coin Edition.


Bitcoin is halving again: what does that mean for the cryptocurrency and the market? – Aju Press

Now a hotly anticipated recurring event that happens roughly every four years is taking place: the bitcoin halving. This could have further significant impact on the value of the cryptocurrency.

To understand what the halving is and what it could mean, we have to understand how bitcoin works. Bitcoin is a digital currency that makes use of what's called blockchain technology to securely store, record and publicly publish all transactions.

It is distinct from fiat currencies, such as dollars or pounds, because it has no central authority and members of the network have equal power. Each transaction is made and recorded with the user's public address, a code that enables them to remain anonymous.

Bitcoins are created by so-called miners who contribute computing power to secure the network and solve complex mathematical puzzles in order to process transaction data. These miners are then rewarded for their work with newly minted bitcoins.

The idea for bitcoin was first proposed in a white paper published online in 2008 by a mysterious individual or group using the pseudonym Satoshi Nakamoto. To combat inflation, Nakamoto wrote into the code that the total number of bitcoins will only ever be 21 million. Currently, more than 19.6 million bitcoins have been mined.

At the beginning, back in 2009, miners received 50 bitcoins for every block (unit of transaction data) they mined. But after every 210,000 blocks (roughly every four years), the reward halves.

So in 2012 the reward fell to 25 bitcoins, then to 12.5 bitcoins in 2016 and to 6.25 bitcoins in 2020. The latest halving means the reward will be just 3.125 bitcoins.
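The 21 million cap mentioned above is not a number stored anywhere in the protocol; it falls out of this halving schedule. A back-of-the-envelope sketch in Python (using floats for readability rather than Bitcoin's integer satoshi arithmetic, so the final digits differ slightly from the chain's true total) sums each era's issuance:

```python
# Summing every era's issuance: 210,000 blocks at 50 BTC,
# then 210,000 at 25 BTC, and so on, until the reward vanishes.
BLOCKS_PER_ERA = 210_000
reward = 50.0          # BTC per block in the first era
total = 0.0
eras = 0
while reward >= 1e-8:  # stop once the reward falls below one satoshi
    total += BLOCKS_PER_ERA * reward
    reward /= 2
    eras += 1

print(eras)   # 33 eras carry a nonzero reward
print(total)  # just under 21,000,000 BTC
```

The sum is a geometric series that converges toward 21 million but never quite reaches it, which is why slightly fewer than 21 million bitcoins will ever exist.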

Why does bitcoin halve?

Nakamoto never explicitly explained the reasons behind the halving. Some speculate that the halving system was designed to distribute coins more quickly at the beginning to incentivise people to join the network and mine new blocks. Block rewards are programmed to halve at regular intervals because the value of each coin rewarded is deemed likely to increase as the network expands.

But this may lead to users holding bitcoin as a speculative asset rather than using it as a medium of exchange. Additionally, the 21 million cap on the number of coins that can enter circulation makes them scarce (at least in comparison to dollars or euros), which for some people is enough to make them valuable.

So what impact does the halving have on the price? After the halving, the number of new bitcoin entering circulation shrinks. Demand should, in theory, be unaffected by this event and therefore the price should go up.

"The theory is that there will be less bitcoin available to buy if miners have less to sell," said Michael Dubrovsky, a co-founder of PoWx, a crypto research non-profit. While the first halving happened in 2012, when bitcoin was less well known and quite hard to buy and sell, we can learn from the subsequent two halvings.

The second halving, on July 16, 2016, was highly anticipated. The price dropped by 10 percent, but then shot back up to where it had been before. Although the immediate impact on the price was small, bitcoin did eventually respond, and some argue that the 2017 bull run, when the market boomed, was a delayed result of the halving.

Beginning the year around US$900, by the end of 2017 bitcoin was trading above US$19,000. The third halving in 2020 happened during a bullish period for bitcoin and it continued to rise to more than US$56,000 in 2021.

Making an asset of scarcity

These few data points are not enough, however, to establish any concrete causal relationship or trend. But we do know that miners' rewards are instantly halved, meaning their revenue immediately halves and their profit margins are severely affected. Consequently, unless there is a price appreciation, many miners may become unprofitable and could cease mining.

Bitcoin's scarcity is arguably one of its most significant characteristics, especially in a time of high inflation, quantitative easing and high interest rates. With the real value of fiat currencies falling, bitcoin's limited supply is an attractive feature and can be reassuring for investors.

Bitcoin hit an all-time high in February following the approval of bitcoin exchange-traded funds, which effectively make it easier for retail investors and big banks to invest in bitcoin.

This, coupled with a more favorable regulatory environment on the horizon and the fact that it is becoming more integrated in the financial system, means bitcoin may continue on the rise it has experienced in 2024 so far.

-------------------------------------------------------------------------------------------------------------------------

Andrew Urquhart is a professor of Finance & Financial Technology, ICMA Centre, Henley Business School at University of Reading in England.

This article was republished under a Creative Commons license with The Conversation. The views and opinions in this article are solely those of the author.

https://theconversation.com/bitcoin-is-halving-again-what-does-that-mean-for-the-cryptocurrency-and-the-market-228213


New emails reveal Satoshi Nakamoto’s original vision for Bitcoin – Cryptodnes.bg

A recent set of emails between Satoshi Nakamoto, the pseudonymous founder of Bitcoin, and Martti Malmi, one of the cryptocurrency's early adopters, sheds light on the digital currency's initial philosophy and early operational problems.

These emails, discovered during legal proceedings involving Craig Wright, reveal Nakamoto's goals for Bitcoin, specifically his concern that it would be perceived as a speculative asset, and his reservations about anonymity.

The leaked email conversations show that Nakamoto had reservations about classifying Bitcoin primarily as an investment. This perspective is important because it underscores his view of Bitcoin as a means of payment, not solely a speculative tool. The distinction highlights Bitcoin's utility for transactions without the need for a trusted third party, a key feature of its creation.

Another highlight of the announcement is Nakamoto's attitude to anonymity. Contrary to popular belief that Bitcoin itself is an anonymous network, Nakamoto recommends a cautious approach to anonymity. He suggested that while Bitcoin offers the possibility of anonymity, the community needs to recognize its shortcomings in this regard.

In his emails, he presents a thoughtful concept of privacy that takes into account the real presence of these technologies, including their characteristics and limitations. This approach not only avoids potential legal and moral complications, but also helps create a high-quality user base.

Nakamoto also shed light on the environmental impact of Bitcoin's proof-of-work (PoW) system. He was aware of early criticisms about the mechanism's energy consumption, but even then he pointed to PoW's energy efficiency compared to traditional banking systems.

In addition, Nakamoto expresses confidence in the scalability of Bitcoin, which can handle volumes several times larger than those of traditional financial systems, but at a much lower cost. These moments demonstrate his foresight and willingness to tackle future challenges that will eventually become the subject of debate among crypto advocates.


Meta’s AI Needs to Speak With You – New York Magazine

Photo-Illustration: Intelligencer

Meta has an idea: Instead of ever leaving its apps, why not stay and chat with a bot? This past week, Mark Zuckerberg announced an update to Meta's AI models, claiming that, in some respects, they were now among the most capable in the industry. He outlined his company's plans to pursue AGI, or artificial general intelligence, and made some more specific predictions: "By the end of the decade, I think lots of people will talk to AIs frequently throughout the day, using smart glasses like what we're building with Ray-Ban Meta."

Maybe so! But for now, the company has something else in mind. Meta is deploying its chatbot across its most popular apps, including Facebook, Instagram, WhatsApp, and Messenger. Users might encounter the chatbot commenting on Facebook posts, chiming in when tagged in group messages, or offering suggestions in social feeds. You can chat with it directly, like ChatGPT. It'll generate images and write messages for you; much in the way that Microsoft and Google have built AI assistants into their productivity software, Meta has installed helpers into a range of social contexts. It'll be genuinely interesting to see if and how people use them in these contexts, and Meta will find out pretty quickly.

This move has been described as both savvy and desperate. Is Meta playing catchup, plowing money into a fad, and foisting half-baked technology on its users? Or is Meta now the de facto leader in AI, with a capable model, a relevant hardware business, and more users than anyone else? Like AI models themselves, claims like these are hard to benchmark: every player in AI is racing in the same direction toward an ill-defined destination where they, or at least their investors, believe great riches await.

In actual usage, though, Meta's AI tells a more mundane story about its intentions. The place most users are likely to encounter Meta's chatbots most of the time is in the context of search:

Meta AI is also available in search across Facebook, Instagram, WhatsApp and Messenger. You can access real-time information from across the web without having to bounce between apps. Let's say you're planning a ski trip in your Messenger group chat. Using search in Messenger you can ask Meta AI to find flights to Colorado from New York and figure out the least crowded weekends to go, all without leaving the Messenger app.

This is both a wide and conspicuous deployment, in practice. The box used to search for other people, pages, groups, locations, or topics is now also something between a chatbot and a search engine.

Like ChatGPT, you can ask it about whatever you want, and it will synthesize a response. In contrast to some other chatbots, and in line with the sorts of results you might get from an AI-powered search engine like Perplexity or Google's Search Generative Experience, Meta's AI will often return something akin to search results, presented as a summary with footnoted links sourced from the web. When it works, the intention is pretty clear: Rather than providing something else to do within Facebook or Instagram, these features are about reducing the need to ever leave. Rather than switch out of Instagram to search for something on Google, or tap around the web for a while, you can just tap Meta's search bar and get your question answered there.

This isn't a simple case of Meta maximizing engagement, although that's surely part of it. Deploying this sort of AI, which is expensive to train and uses a lot of computing power to run, is almost certainly costing Meta a huge amount of money at this scale, which is why OpenAI charges users for similar tools. It's also a plan for a predicted future in which the web, that is, openly accessible websites that exist outside of walled gardens like Meta's, is diminished, harder to browse, and less central to the online lives of most people. Now, smartphone users bounce between apps and web browsers and use web browsers within apps. Links provide connective tissue between apps that otherwise don't really talk to one another, and the web is a common resource to which most apps refer, at least somewhat. Here, Meta offers a preview of a world in which the web is reduced to a source for summarization and reference, less a thing that you browse than a set of data that's browsed on your behalf, by a machine.

This wouldn't be great news for the web, or for the various parties that currently contribute to it; indeed, AI firms' broadly rapacious approach to any and all existing and available sources of data could have the effect of making such data harder to come by, and its creators less likely to produce or at least share it (as currently built, Meta's AI depends on results from Google and Bing). And let's not get ahead of ourselves: The first thing I did when I got this feature on Instagram was type "New York," which presented me with a list of accounts and a couple of suggested searches, including, curiously, "New York fries near me." I decided to check it out.

Guess it's a good thing I didn't actually want any fries. Elsewhere, Meta's AI is giving parenting advice on Facebook, claiming it's the parent of a gifted and disabled child who's attending a New York City public school.

Maybe Zuckerberg's right that we'll be having daily conversations with AIs in our Ray-Bans by the end of the decade. But right now, Meta is expecting us to have those conversations even if we don't like, need, or understand what we hear back. We're stuck testing the AI, and it us.



AI’s Illusion of Rapid Progress – Walter Bradley Center for Natural and Artificial Intelligence

The media loves to report on everything Elon Musk says, particularly when it is one of his very optimistic forecasts. Two weeks ago he said: "If you define AGI (artificial general intelligence) as smarter than the smartest human, I think it's probably next year, within two years."

In 2019, he predicted there would be a million robo-taxis by 2020 and in 2016, he said about Mars, “If things go according to plan, we should be able to launch people probably in 2024 with arrival in 2025.”

On the other hand, the media places less emphasis on negative news, such as the announcement that Amazon would abandon its cashier-less technology, called "Just Walk Out," because it wasn't working properly. Introduced three years ago, the tech purportedly enabled shoppers to pick up meat, dairy, fruit and vegetables and walk straight out without queueing, as if by magic. That magic, which Amazon dubbed "Just Walk Out" technology, was said to be autonomously powered by AI.

Unfortunately, it wasn't. Instead, the checkout-free magic was happening in part thanks to a network of cameras overseen by over 1,000 people in India who would verify what people took off the shelves. Their tasks included "manually reviewing transactions and labeling images from videos."

Why is this announcement more important than Musk's prediction? Because so many of the predictions by tech bros such as Elon Musk rest on the illusion that many AI systems are working properly, when they are still only 95% there, with the remaining 5% dependent on workers in the background. The obvious example is self-driving vehicles, which are always a few years away, even as many vehicles are controlled by remote workers.

But self-driving vehicles and cashier-less technology are just the tip of the iceberg. A Gizmodo article listed about 10 examples of AI technology that seemed to be working, but just weren't.

A company named Presto Voice sold its drive-thru automation services, purportedly powered by AI, to Carl's Jr., Chili's, and Del Taco, but in reality, Filipino offsite workers are required to help with over 70% of Presto's orders.

Facebook released a virtual assistant named M in 2015 that purportedly enabled AI to book your movie tickets, tell you the weather, or even order you food from a local restaurant. But it was mostly human operators who were doing the work.

There was an impressive Gemini demo in December 2023 that showed how Gemini's AI could allegedly interpret video, image, and audio inputs in real time. That video turned out to be sped up and edited so that humans could feed Gemini long text and image prompts to produce its answers. Today's Gemini can barely even respond to controversial questions, let alone do the backflips it performed in that demo.

Amazon has for years offered a crowdsourcing service called Mechanical Turk. One example involved Expensify in 2017: you could take a picture of a receipt, and the app would automatically verify that it was an expense compliant with your employer's rules and file it in the appropriate location. In reality, a team of "secure technicians," who were often Amazon Mechanical Turk workers, filed the expense on your behalf.

Twitter offered a virtual assistant in 2016 that had access to your calendar and could correspond with you over email. In reality, humans posing as AI responded to emails, scheduled meetings on calendars, and even ordered food for people.

Google claims that AI is scanning your Gmail inbox for information to personalize ads, but in reality, humans are doing the work, and are seeing your private information.

In the last three cases, real humans were viewing private information such as credit card numbers, full names, addresses, food orders, and more.

Then there are the hallucinations that keep cropping up in the output of large language models. Many experts claim that the lowest hallucination rates among tracked AI models are around 3 to 5%, and that they aren't fixable because they stem from the LLMs doing exactly what they were developed and trained to do: respond, however they can, to user prompts.

Every time you hear one of the tech bros talking about the future, keep in mind that they think large language models and self-driving vehicles already work almost perfectly. They have already filed those cases away as successfully done, and they are thinking about what's next.

For instance, Garry Tan, the president and CEO of startup accelerator Y Combinator, claimed of Amazon's cashier-less technology:

"Honestly it makes me sad to see a Big Tech firm ruined by a professional managerial class that decided to use fake AI, deliver a terrible product, and poison an entire market (autonomous checkout) when an earnest Computer Vision-driven approach could have reached profitable."

The president of Y Combinator should have known that humans were needed to make Amazon's technology work, as they are for many other AI systems. Y Combinator is one of America's most respected venture capital firms; it has funded around 4,000 startups, and Sam Altman, currently CEO of OpenAI, was its president between 2014 and 2019. For Tan to claim that Amazon could have succeeded if it had used real tech, after many other companies have failed doing the same thing, suggests he is either misinformed or lying.

So the next time you hear that AGI is imminent or that jobs will soon be gone, remember that most of these optimistic predictions assume that Amazon's cashier-less technology, self-driving vehicles, and many other systems already work, when they are only 95 percent there, and the last 5 percent is the hardest.

In reality, those systems won't be done for years, because the last few percentage points of work usually take as long as the first 95%. So what the media should be asking the tech bros is how long it will take before those systems go from 95% done autonomously to 99.99% or higher. Similarly, what companies should be asking the consultants is when that 95% will become 99.99%, because the rapid progress is an illusion.

Too many people are extrapolating from systems that are purportedly automated, even though they aren't yet working properly. Any extrapolation should attempt to understand when those systems will become fully automated, not just when new forms of automated systems will begin to be used. Understanding what's going on in the background is important for understanding what the future will be in the foreground.


Will AI help or hinder trust in science? – CSIRO

By Jon Whittle 23 April 2024 6 min read

In the past year, generative artificial intelligence tools such as ChatGPT, Gemini, and OpenAI's video generation tool Sora have captured the public's imagination.

All that is needed to start experimenting with AI is an internet connection and a web browser. You can interact with AI like you would with a human assistant: by talking to it, writing to it, showing it images or videos, or all of the above.

While this capability marks entirely new terrain for the general public, scientists have used AI as a tool for many years.

But with greater public knowledge of AI will come greater public scrutiny of how it's being used by scientists.

AI is already revolutionising science: six percent of all scientific work leverages AI, not just in computer science, but in chemistry, physics, psychology and environmental science.

Nature, one of the world's most prestigious scientific journals, included ChatGPT on its 2023 Nature's 10 list of the world's most influential and, until then, exclusively human scientists.

The use of AI in science is twofold.

At one level, AI can make scientists more productive.

When Google DeepMind released an AI-generated dataset of more than 380,000 novel material compounds, Lawrence Berkeley Lab used AI to run compound synthesis experiments at a scale orders of magnitude larger than what could be accomplished by humans.

But AI has even greater potential: to enable scientists to make discoveries that otherwise would not be possible at all.

It was an AI algorithm that for the first time found signal patterns in brain-activity data that pointed to the onset of epileptic seizures, a feat that not even the most experienced human neurologist can repeat.

Early success stories of the use of AI in science have led some to imagine a future in which scientists will collaborate with AI scientific assistants as part of their daily work.

That future is already here. CSIRO researchers are experimenting with AI science agents and have developed robots that can follow spoken language instructions to carry out scientific tasks during fieldwork.

While modern AI systems are impressively powerful, especially so-called artificial general intelligence tools such as ChatGPT and Gemini, they also have drawbacks.

Generative AI systems are susceptible to hallucinations, where they make up facts.

Or they can be biased. Google's Gemini depicting America's Founding Fathers as a diverse group is an interesting case of over-correcting for bias.

There is a very real danger of AI fabricating results, and this has already happened. It's relatively easy to get a generative AI tool to cite publications that don't exist.

Furthermore, many AI systems cannot explain why they produce the output they produce.

This is not always a problem. If AI generates a new hypothesis that is then tested by the usual scientific methods, there is no harm done.

However, for some applications a lack of explanation can be a problem.

Replication of results is a basic tenet in science, but if the steps that AI took to reach a conclusion remain opaque, replication and validation become difficult, if not impossible.

And that could harm peoples trust in the science produced.

A distinction should be made here between general and narrow AI.

Narrow AI is AI trained to carry out a specific task.

Narrow AI has already made great strides. Google DeepMind's AlphaFold model has revolutionised how scientists predict protein structures.

But there are many other, less well publicised, successes too, such as AI being used at CSIRO to discover new galaxies in the night sky, IBM Research developing AI that rediscovered Kepler's third law of planetary motion, or Samsung AI building AI that was able to reproduce Nobel prize winning scientific breakthroughs.

When it comes to narrow AI applied to science, trust remains high.

AI systems, especially those based on machine learning methods, rarely achieve 100 percent accuracy on a given task. (In fact, machine learning systems outperform humans on some tasks, and humans outperform AI systems on many tasks. Humans using AI systems generally outperform humans working alone, and they also outperform AI working alone. There is a large scientific evidence base for this, including this study.)

AI working alongside an expert scientist, who confirms and interprets the results, is a perfectly legitimate way of working, and is widely seen as yielding better performance than human scientists or AI systems working alone.

On the other hand, general AI systems are trained to carry out a wide range of tasks, not specific to any domain or use case.

ChatGPT, for example, can create a Shakespearian sonnet, suggest a recipe for dinner, summarise a body of academic literature, or generate a scientific hypothesis.

When it comes to general AI, the problems of hallucinations and bias are most acute and widespread. That doesn't mean general AI isn't useful for scientists, but it needs to be used with care.

This means scientists must understand and assess the risks of using AI in a specific scenario and weigh them against the risks of not doing so.

Scientists are now routinely using general AI systems to help write papers, assist with reviews of academic literature, and even prepare experimental plans.

One danger with these scientific assistants arises if the human scientist simply takes the outputs for granted.

Well-trained, diligent scientists will not do this, of course. But many scientists out there are just trying to survive in a tough publish-or-perish industry. Scientific fraud is already increasing, even without AI.

AI could lead to new levels of scientific misconduct, either through deliberate misuse of the technology, or through sheer ignorance, as scientists don't realise that AI is making things up.

Both narrow and general AI have great potential to advance scientific discovery.

A typical scientific workflow conceptually consists of three phases: understanding what problem to focus on, carrying out experiments related to that problem and exploiting the results as impact in the real world.

AI can help in all three of these phases.

There is a big caveat, however. Current AI tools are not suitable to be used naively out-of-the-box for serious scientific work.

Only if researchers responsibly design, build, and use the next generation of AI tools in support of the scientific method will the public's trust in both AI and science be gained and maintained.

Getting this right is worth it: the possibilities of using AI to transform science are endless.

Google DeepMind's iconic founder Demis Hassabis famously said: "Building ever more capable and general AI, safely and responsibly, demands that we solve some of the hardest scientific and engineering challenges of our time."

The reverse conclusion is true as well: solving the hardest scientific challenges of our time demands building ever more capable, safe and responsible general AI.

Australian scientists are working on it.

This article was originally published by 360info under a Creative Commons license. Read the original article.

Professor Jon Whittle is Director of CSIRO's Data61, Australia's national centre for R&D in data science and digital technologies. He is co-author of the book Responsible AI: Best Practices for Creating Trustworthy AI Systems.

Dr Stefan Harrer is Program Director of AI for Science at CSIRO's Data61, leading a global innovation, research and commercialisation programme aiming to accelerate scientific discovery through the use of AI. He is the author of the Lancet article "Attention is not all you need: the complicated case of ethically using large language models in healthcare and medicine".

Stefan Harrer is an inventor on several granted US and international patents that relate to using AI for science.

See the original post:

Will AI help or hinder trust in science? - CSIRO


Can Oil, Gas Companies Use Generative AI to Help Hire People? – Rigzone News

Artificial Intelligence (AI) will definitely help oil and gas companies hire people.

That's what Louisiana-based OneSource Professional Search believes, according to Dave Mount, the company's president.

"Our search firm is already implementing AI to augment our traditional recruiting/headhunting practices to more efficiently source a higher number of candidates, along with managing the extra activity related to sourcing and qualifying a larger candidate/talent pool," Mount revealed to Rigzone.

"We're integrating AI as we speak and it's definitely helping in covering more ground and allowing us to access a larger talent pool, although it's a learning process to help the quality of the sourcing/screening match the increased quantity of qualified candidates," he added.

Gladney Darroh, an energy search specialist with 47 years of experience who developed and coaches the interview methodology Winning the Offer, which earned him the ranking of #1 technical and professional recruiter in Houston for 17 consecutive years by HAAPC, told Rigzone that oil and gas companies will use generative AI to help hire people, and so will everyone else.

"Generative AI is a historic leap in technology, and oil and gas companies have used technology for years to hire people," the Founding Partner and President of Houston, Texas-based Piper-Morgan Associates Personnel Consultants said.

"It is typically a time-intensive exercise to develop an initial pool of qualified candidates, determine which will consider a job change, which will consider a job change for this opportunity, who is really gettable, who meets the expectations of the hiring company in terms of what he/she brings to the table now, and if she/he possesses the talent to become a long-term asset," Darroh added.

"Deep learning models can be trained on keyword content searches for anything and everything: education, training, skillset, general and specific experience, all quantitative data," Darroh continued.

"Once AI is trained this way and applied to searches, AI will generate in seconds what an in-house or outside recruiter might generate over days or weeks," he went on to state.
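As a hypothetical illustration of the keyword-content matching Darroh describes, a minimal weighted-keyword scorer might look like the sketch below. The field names, weights, and role keywords are invented for illustration, not taken from any vendor's product:

```python
# Minimal keyword-scoring sketch for candidate profiles.
# Fields and weights are illustrative assumptions, not a real recruiting API.
from dataclasses import dataclass

@dataclass
class Candidate:
    name: str
    profile: str  # free text: education, skills, experience

def score(candidate: Candidate, keywords: dict) -> float:
    """Sum the weights of the keywords that appear in the profile text."""
    text = candidate.profile.lower()
    return sum(w for kw, w in keywords.items() if kw.lower() in text)

# A recruiter's weighted search terms for, say, a reservoir engineer role
wanted = {"petroleum engineering": 3.0, "reservoir simulation": 2.0,
          "python": 1.0, "offshore": 1.0}

pool = [
    Candidate("A", "BS petroleum engineering, 8 yrs reservoir simulation, Python"),
    Candidate("B", "MS chemical engineering, refinery operations"),
]

# Rank the pool by descending keyword score
ranked = sorted(pool, key=lambda c: score(c, wanted), reverse=True)
for c in ranked:
    print(c.name, score(c, wanted))
```

A production system would use trained models, embeddings, or a learned ranker rather than literal substring matches; this only illustrates the quantitative-data matching step that Darroh says AI can compress from weeks into seconds.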

Darroh also noted that AI is developing inference, the ability to draw conclusions from data, the kind of qualitative assessment that helps determine a candidate's long-term potential for promotion and leadership roles.

"For companies who are racing against their competitors to identify and hire the right talent, whether an entry-level or an experienced hire, they will all adopt AI to help hire people," Darroh concluded.

Earlier this year, Enverus Chief Innovation Officer, Colin Westmoreland, revealed to Rigzone that the company believes generative AI will shape oil and gas decision making in 2024 and into the future.

"Generative AI will reduce the time to value significantly by providing rapid analysis and insights, leveraging vast amounts of curated data," he said.

Westmoreland also told Rigzone that generative AI is expected to become commonplace among oil and gas companies over the next few years.

Back in January, Trygve Randen, the Senior Vice President of Digital Products and Solutions at SLB, outlined to Rigzone that generative AI will continue to gain traction in the oil and gas industry this year.

In an article published on its website in January 2023, which was updated in April 2024, McKinsey & Company noted that generative AI describes algorithms, such as ChatGPT, that can be used to create new content, including audio, code, images, text, simulations, and videos.

OpenAI, which describes itself as an A.I. research and deployment company whose mission is to ensure that artificial general intelligence benefits all of humanity, introduced ChatGPT on November 30, 2022.

In April 2023, Rigzone looked at how ChatGPT will affect oil and gas jobs. To view that article, click here.

To contact the author, email andreas.exarheas@rigzone.com

Read the original here:

Can Oil, Gas Companies Use Generative AI to Help Hire People? - Rigzone News
