Category Archives: AI

Google Will Kill ChatGPT and Other Overhyped AI Predictions We Heard In 2023 – Medium

Here are some predictions that I doubt will happen in 2024 or the near future (and why I think so).

Image credit: Midjourney

2023 was the year of AI. Every month, we've seen new AI tools being launched, advancements in the field, upgrades, and more things that kept the field of AI moving.

Overhyped AI predictions weren't missing in 2023 either. Throughout the year we heard things like "AGI was (or will soon be) achieved" or "AI will take everyone's job."

Here's why I think they're overhyped and why I doubt they'll happen in the coming years.

Almost every month there's a new "ChatGPT killer," or at least that's what we see in the media. The latest "ChatGPT killer" (by consensus) was Gemini Ultra, a model that beat GPT-4 in benchmarks but isn't yet available to the public.

Even if Gemini Ultra is slightly superior to GPT-4, technical superiority doesn't always translate to market dominance, and Google knows that (which is probably why it created so much hype with its demo).

I checked some articles and videos that claim Google will kill ChatGPT to find out how they came to such conclusions. Here are some of the arguments I found.

I don't think any of these arguments are enough to claim that Google will indeed kill ChatGPT.

Why? Well, #2 is not a good metric for judging whether a product will kill its competitor. Recently, Google shares sank following reports that parts of its Gemini Ultra demo were faked. This doesn't mean Gemini Ultra is a bad model or that it can't compete with GPT-4, but it shows the consequences of Google overhyping its own product.

On the other hand, even if #1 is true, it's not enough. Google might have the resources to create a tool to compete


Building AI safely is getting harder and harder – The Atlantic

This is Atlantic Intelligence, an eight-week series in which The Atlantic's leading thinkers on AI will help you understand the complexity and opportunities of this groundbreaking technology. Sign up here.

The bedrock of the AI revolution is the internet, or more specifically, the ever-expanding bounty of data that the web makes available to train algorithms. ChatGPT, Midjourney, and other generative-AI models learn by detecting patterns in massive amounts of text, images, and videos scraped from the internet. The process entails hoovering up huge quantities of books, art, memes, and, inevitably, the troves of racist, sexist, and illicit material distributed across the web.

Earlier this week, Stanford researchers found a particularly alarming example of that toxicity: The largest publicly available image data set used to train AIs, LAION-5B, reportedly contains more than 1,000 images depicting the sexual abuse of children, out of more than 5 billion in total. A spokesperson for the data set's creator, the nonprofit Large-scale Artificial Intelligence Open Network, told me in a written statement that it has a "zero tolerance policy" for illegal content and has temporarily halted the distribution of LAION-5B while it evaluates the report's findings, although this and earlier versions of the data set have already trained prominent AI models.

Because they are free to download, the LAION data sets have been a key resource for start-ups and academics developing AI. It's notable that researchers have the ability to peer into these data sets to find such awful material at all: There's no way to know what content is harbored in similar but proprietary data sets from OpenAI, Google, Meta, and other tech companies. One of those researchers is Abeba Birhane, who has been scrutinizing the LAION data sets since the first version's release, in 2021. Within six weeks, Birhane, a senior fellow at Mozilla who was then studying at University College Dublin, published a paper detailing her findings of sexist, pornographic, and explicit rape imagery in the data. "I'm really not surprised that they found child-sexual-abuse material in the newest data set," Birhane, who studies algorithmic justice, told me yesterday.

Birhane and I discussed where the problematic content in giant data sets comes from, the dangers it presents, and why the work of detecting this material grows more challenging by the day. Read our conversation, edited for length and clarity, below.

Matteo Wong, assistant editor

More Challenging By the Day

Matteo Wong: In 2021, you studied the LAION data set, which contained 400 million captioned images, and found evidence of sexual violence and other harmful material. What motivated that work?

Abeba Birhane: Because data sets are getting bigger and bigger, 400 million image-and-text pairs is no longer large. But two years ago, it was advertised as the biggest open-source multimodal data set. When I saw it being announced, I was very curious, and I took a peek. The more I looked into the data set, the more I saw really disturbing stuff.

We found there was a lot of misogyny. For example, take any benign word that is remotely related to womanhood, like "mama," "auntie," or "beautiful": when you queried the data set with those types of terms, it returned a huge proportion of pornography. We also found images of rape, which was really emotionally heavy and intense work, because we were looking at images that are really disturbing. Alongside that audit, we also put forward a lot of questions about what the data-curation community and larger machine-learning community should do about it. We also later found that, as the size of the LAION data sets increased, so did hateful content. By implication, so does any problematic content.
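
(To make the kind of querying Birhane describes a bit more concrete, here is a minimal, hypothetical sketch of a caption audit over an image-text data set. The file name, term lists, and "flagged" heuristic are invented for illustration; this is not her actual methodology, only the general shape of such an audit.)

```python
# Hypothetical sketch of a keyword audit over an image-text data set.
# Assumes a captions.tsv file with one "caption<TAB>image_url" pair per line;
# the term lists below are illustrative placeholders.
import csv

QUERY_TERMS = {"mama", "auntie", "beautiful"}   # benign query terms
FLAG_TERMS = {"explicit", "porn", "nsfw"}       # stand-in markers for problematic captions

def audit(path: str) -> None:
    hits = flagged = 0
    with open(path, newline="", encoding="utf-8") as f:
        for row in csv.reader(f, delimiter="\t"):
            if not row:
                continue
            words = set(row[0].lower().split())
            if words & QUERY_TERMS:             # caption matches a benign query term...
                hits += 1
                if words & FLAG_TERMS:          # ...but co-occurs with flagged terms
                    flagged += 1
    if hits:
        print(f"{flagged}/{hits} matching captions flagged ({100 * flagged / hits:.1f}%)")

if __name__ == "__main__":
    audit("captions.tsv")
```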

Wong: This week, the biggest LAION data set was removed because of the finding that it contains child-sexual-abuse material. In the context of your earlier research, how do you view this finding?

Birhane: It did not surprise us. These are the issues that we have been highlighting since the first release of the data set. We need a lot more work on data-set auditing, so when I saw the Stanford report, it's a welcome addition to a body of work that has been investigating these issues.

Wong: Research by yourself and others has continuously found some really abhorrent and often illegal material in these data sets. This may seem obvious, but why is that dangerous?

Birhane: Data sets are the backbone of any machine-learning system. AI didn't come into vogue over the past 20 years only because of new theories or new methods. AI became ubiquitous mainly because of the internet, because that allowed for mass harvesting of large-scale data sets. If your data contains illegal stuff or problematic representation, then your model will necessarily inherit those issues, and your model output will reflect these problematic representations.

But if we take another step back, to some extent it's also disappointing to see data sets like the LAION data set being removed. For example, the LAION data set came into existence because the creators wanted to replicate data sets inside big corporations, for example, what data sets used in OpenAI might look like.

Wong: Does this research suggest that tech companies, if they're using similar methods to collect their data sets, might harbor similar problems?

Birhane: It's very, very likely, given the findings of previous research. Scale comes at the cost of quality.

Wong: You've written about research you couldn't do on these giant data sets because of the resources necessary. Does scale also come at the cost of auditability? That is, does it become less possible to understand what's inside these data sets as they become larger?

Birhane: There is a huge asymmetry in terms of resource allocation, where it's much easier to build stuff but a lot more taxing in terms of intellectual labor, emotional labor, and computational resources when it comes to cleaning up what's already been assembled. If you look at the history of data-set creation and curation, say 15 to 20 years ago, the data sets were much smaller scale, but there was a lot of human attention that went into detoxifying them. But now, all that human attention to data sets has really disappeared, because these days a lot of that data sourcing has been automated. That makes it cost-effective if you want to build a data set, but the reverse side is that, because data sets are much larger now, they require a lot of resources, including computational resources, and it's much more difficult to detoxify them and investigate them.

Wong: Data sets are getting bigger and harder to audit, but more and more people are using AI built on that data. What kind of support would you want to see for your work going forward?

Birhane: I would like to see a push for open-sourcing data sets, not just model architectures, but the data itself. As horrible as open-source data sets are, if we don't know how horrible they are, we can't make them better.


P.S.

Struggling to find your travel-information and gift-receipt emails during the holidays? You're not alone. Designing an algorithm to search your inbox is paradoxically much harder than making one to search the entire internet. My colleague Caroline Mimbs Nyce explored why in a recent article.

Matteo


Remote work, AI, and skills-based hiring threaten to put our jobs on the chopping block, but experts say those fears are overblown – Fortune

In 1897, literary icon Mark Twain is said to have come across his own obituary in a New York newspaper. Asked for his response, tongue partly in cheek, Twain famously said the reports of his death have been greatly exaggerated.

The same sentiment might be said about the current and future state of American white collar work. An influx of headlines proposes that millions of jobs are set to disappear within a decade or two. Depending on who you ask or which article you click on, you may well find your job on a list.

There's no getting around the fact that jobs ten years from now will call on an entirely new set of skills, and maybe an entirely new set of workers. That's the case, at least, if you ask Harvard Business School management professor and future-of-work expert Joseph Fuller whether reports of the death of well-paid, long-standing jobs are in fact exaggerated.

"The future of white collar work is going to be different, but jobs won't disappear en masse," Fuller, who co-leads Harvard's Managing the Future of Work initiative, tells Fortune. "Some skills will always be crucial, so it's important to remain agile and continually look for ways of upskilling and not fearing the future."

This prospect isn't quite as daunting as it sounds. The shape of work has morphed and reoriented countless times in the past. On a macro level, consider the Industrial Revolution. For a flash in the pan, consider Y2K mania. But, even in moments of grave uncertainty, people tend to chug along. Humans have always adapted, refining their skills and retrofitting their careers to the current needs of the workforce. And despite a perennial fear of a technocratic future, robots haven't nearly caught up to us yet.

The U.S., and indeed the industrialized world, is trending towards a future in which our jobs as they exist today will gradually become unrecognizable. It's the question of just how quickly and widely those changes will take hold that's spurred endless debate. In today's post-COVID landscape, the overarching fear of job disappearance stems from three discrete rising threats: remote work, Generative AI like ChatGPT, and skills-based hiring. But experts say none are quite as threatening as they seem.

Most everyone likes flexible work. But those who have been living it up in their remote-first or fully remote desk jobs since 2020 may be in for a rude awakening. If you've proven you can work from anywhere, your boss could also deduce that your job can be done by someone else, somewhere else, for much cheaper. Some experts believe that could lead to a mass exodus of remote jobs in the U.S., potentially within a decade.

"If people that code for Google and Facebook were able to live wherever in the U.S. they wanted and [work] for a year and a half without ever going to the office, it seems very, very likely that a lot of companies will be rethinking this longer-term and outsourcing those kinds of jobs that didn't used to be outsourced," Anna Stansbury, an assistant professor of work and organization studies at MIT Sloan School of Management who teaches a course on the future of work, told Fortune last year. Needless to say, the American workforce would seismically change if well-paid white collar jobs suddenly move overseas.

According to additional data from the National Bureau of Economic Research (NBER), more jobs than you might think are in fact highly offshorable. Bosses, paying big-city salaries to workers who long ago relocated to smaller, cheaper towns, are already asking themselves whether someone needs to be physically close to an office or their actual team. "Within a few years, work that can be reasonably done remotely by people in such jobs will inevitably be done by telemigrants," NBER said.

But maybe not so fast. "Social and cultural contexts across countries [make] it less likely that a public relations specialist or a sales engineer located in Hanoi is a perfect substitute for one located in Seattle," the researchers added.

And an analysis by The Washington Post finds little evidence that this will happen any time soon, at the very least. Even if it does, American white-collar workers are in the best possible position to survive the worst of it. As the Post puts it, they're "the most mobile and most marketable employees in the workforce."

Nothing strikes fear in the hearts of tony Ivy League graduates like the thought that networking connections and ritzy diplomas will soon hold little weight in hiring managers' eyes. More and more executives have opened their arms to degreeless workers with demonstrable skills, or an appetite to learn those skills. The craze has been dubbed "skills-based hiring," or "skills-first," if you ask former IBM CEO Ginni Rometty, who's been championing the cause for a decade.

The percentage of IBM job listings requiring a four-year degree dropped from 95% in 2011 to under 50% in January 2021, to no discernible effect on productivity. Rometty later told Fortune CEO Alan Murray that hires without college degrees performed just as well as those with PhDs from leading universities.

The growing shift towards skills-based hiring will widen the talent pool, which in turn means bosses can hire someone with an untraditional or less credentialed background to do the same job for less. In simple terms, that may mean reliable entry-level jobs for college grads could "disappear," so to speak. Or that your job, regardless of level, will be given to a degreeless someone else. But what it really means is recruiting will become more democratized, an easy net positive for the entire workforce. And that you might need to sharpen your skills.

Fuller finds skills-based hiring very valuable. Drawing on his own research on the topic, he said when companies removed a college degree requirement from a job listing, they often then infused new language in the job description, asking for greater social skills, ability to manage, ability to reason, ability to deal with strangers, and executive functioning. (Commonly referred to as soft skills.)

"Do I think white collar work will inevitably require a college degree? Absolutely not," he says. "It will require certain types of technical or hard skills not necessarily indicated by college."

That may also be the case for AI, which he deems the biggest threat of them all, although still overblown.

It's hard to ignore the impact of artificial intelligence like ChatGPT, even in its nascent stage. This year alone, 4,000 tech industry jobs have been rendered unnecessary due to the manifold recent technological advancements, per a report from recruiting firm Challenger, Gray & Christmas.

"We do believe AI will cause more job loss, though we are surprised how quickly the technology was cited as a reason," senior vice president Andy Challenger told Fortune. "It is incredible the speed the technology is evolving and being adapted." Some CEOs have said AI is moving "faster than real life," leaving scant hope for tech-averse workers to keep up.

That's left millions of U.S. workers terrified that they'll lose their jobs; nearly one-quarter of them worry that rapidly advancing technology will soon render them obsolete, per a recent Gallup survey. Another study conducted by The Harris Poll in partnership with Fortune found that 40% of workers familiar with ChatGPT worry it will replace them.

But those most at risk of getting displaced aren't the tech workers Challenger's research focused on; AI is creating new jobs for tech workers just as quickly as old jobs are going extinct. (That doesn't mean new AI coworkers won't lead to, if not an extinction, a pay cut.) The real at-risk workers are those in rote, repetitive jobs.

"I wouldn't want to be someone who does the reading or summarization of business books to send out 20-page summaries, because AI is really good at summarization already," Fuller says. A significant chunk of what people do today will go away, he predicted, but nonetheless a material amount of work will remain.

That work will be "a lot less dull, a lot less routine, and [have] a lot less filling out of expense reports or quarterly forecast updates," he adds, reasoning that AI will subsume the tiresome duties, leaving humans with the more interesting tasks. While we'll still need basic AI know-how, a LinkedIn report finds that robots are less likely to snap up meaningful work and more likely to simply change workflows and outsource repetitive tasks, leaving us better at our jobs so we can focus on our soft skills. Work that relies on judgment, motivation, collaboration, and ideating is "the fun part of work," Fuller says, with the added benefit of being much harder to automate.

As is the case with skills-based hiring, it's less that AI's preponderance indicates that jobs are disappearing, and more that the needed skills are shifting. For most workers, the future will be less about evaporating job opportunities and more about a pressing need to upskill.

The throughline of each of these threats is that while they may purport to slash job openings, or merely make it easier for someone else to nab your dream job, what they actually do is redefine what a job entails, and who is capable of holding one. At the end of the day, employment is a human-to-human interaction, and these threats don't render soft skills or interpersonal bonds any less valuable.

"You have to think about these trends through the lens of human experience and human desire and human biases," Fuller said. "The best companies in the future will be using the individual as a unit of analysis. Not the job description, not the paygrade."

And we're hardly in a catastrophe. Unemployment this year has held steady at record lows, indicating more jobs than seekers. "The bottom line is that the labor market for white-collar jobs is incredibly dynamic," Juan Pablo Gonzalez, senior client partner and sector leader for professional services at Korn Ferry, told the Society for Human Resource Management (SHRM) in June. "Work is being reimagined, not eliminated. It's not that the jobs are going away; the jobs are changing."

Besides, Fuller says workers won't hang onto jobs that don't fulfill them because they think their options are limited. "People will be picky where they can," he explains, and they'll keep looking for the jobs that don't dominate their lives.

The enduring grand technological innovations are those that eliminate grunt work, in turn creating a new class of jobs, not fewer human jobs altogether. These three much-discussed threats also provide a glimpse into a future that will involve upskilling. If a typical job description ten or twenty years from now looks drastically different, as it is wont to do in our age of rapid advancement, at least we'll have a sense of why.


Airbnb using AI technology to crack down on New Year’s Eve parties – 10TV

COLUMBUS, Ohio - Airbnb is using new AI technology to help crack down on New Year's Eve parties around the world.

The artificial intelligence technology identifies one-to-three-night booking attempts for entire home listings over the holiday that could potentially be high risk for disruptive and unauthorized parties, then blocks those attempts from being made.

The technology looks at hundreds of signals, like how long the stay is, how far the listing is from the guest's location and whether the reservation is being made last minute.
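
Airbnb has not published how those signals are combined, but a toy sketch helps illustrate the general idea of turning booking signals into a block-or-allow decision. The signal names, weights, and threshold below are invented for the example and are not Airbnb's actual system.

```python
# Illustrative only: a toy signal-based screen for potentially risky bookings.
# Weights and the threshold are made up; the real system reportedly weighs hundreds of signals.
from dataclasses import dataclass

@dataclass
class BookingAttempt:
    nights: int                  # length of stay
    km_from_home: float          # distance between the guest's location and the listing
    hours_before_checkin: float  # how last-minute the booking is
    entire_home: bool
    over_holiday: bool           # e.g. New Year's Eve

def risk_score(b: BookingAttempt) -> float:
    score = 0.0
    if b.entire_home and b.over_holiday:
        score += 0.4             # entire-home listings over the holiday are the flagged pattern
    if 1 <= b.nights <= 3:
        score += 0.3             # one-to-three-night stays
    if b.km_from_home < 40:
        score += 0.2             # local bookings correlate with party risk
    if b.hours_before_checkin < 48:
        score += 0.1             # last-minute attempts add risk
    return score

def should_block(b: BookingAttempt, threshold: float = 0.7) -> bool:
    return risk_score(b) >= threshold

if __name__ == "__main__":
    attempt = BookingAttempt(nights=1, km_from_home=12, hours_before_checkin=20,
                             entire_home=True, over_holiday=True)
    print(risk_score(attempt), should_block(attempt))  # 1.0 True
```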

The system is being used in countries and regions including the U.S., Puerto Rico, Canada, the UK, France, Spain, Australia and New Zealand.

Guests who are able to make reservations are required to agree to Airbnb's party policy, and if they break the rule, they risk suspension or removal from the platform, said Naba Banerjee, head of trust and safety at Airbnb.

Here in Columbus, Erich Schick is the CEO and owner of Air Butler LLC. He manages 56 short-term rental properties within the I-270 belt.

"There's no better family feel, there's no way to gather I think than at a short-term rental, said Schick.

He said in the past he's dealt with issues like parties or unwanted guests staying within his properties.

"We would have issues when we would do one-night stays when we didn't have a lot of controls or rules in place, we'd have some events we'd have some parties some not safe situations, said Schick.

Schick said his properties are roughly 70% full for the month of December. But with any new piece of technology, there are hiccups.

Schick said he's run into issues where qualified guests who would book for three nights were blocked by the system.

"We always provide a human touch if a guest is qualified, we can usually get them past the AI roadblocks if we think they're going to be a good guest, he said.

He's in favor of the new restrictions and said they will help hosts continue to provide great service.



A Once-in-a-Generation Investment Opportunity: 1 Artificial Intelligence (AI) Growth Stock to Buy Now and Hold Forever – The Motley Fool

Microsoft co-founder Bill Gates says artificial intelligence (AI) is the most revolutionary technology he has seen in decades. He formed that opinion after watching ChatGPT ace a college-level biology exam that included open-ended essay questions. Gates shared his thoughts in a recent blog post:

The development of AI is as fundamental as the creation of the microprocessor, the personal computer, the internet, and the mobile phone. It will change the way people work, learn, travel, get healthcare, and communicate with each other. Entire industries will reorient around it. Businesses will distinguish themselves by how well they use it.

Take a moment to consider how profoundly those technologies changed the world, as well as the wealth they created in the process. Such inflection points come along rarely, and many experts (including Gates) believe AI is the next one. The best way for investors to capitalize on that once-in-a-generation opportunity is to build a basket of AI stocks.

Here's why Cloudflare (NET -0.62%) belongs in that basket.

To understand why Cloudflare should benefit from the artificial intelligence (AI) boom, investors must first know what the company does and how it compares to peers. The short answer is that Cloudflare makes the internet faster and safer. The longer answer is that it provides a broad range of application, network, security, and developer services that accelerate and protect corporate software and infrastructure.

Cloudflare has differentiated itself through performance and scale. It operates the fastest cloud network and developer platform on the market, and it handles about 20% of internet traffic. Its platform is also cloud neutral, meaning it improves performance and security across public clouds and private data centers. That makes Cloudflare a useful partner even for businesses that rely on other cloud providers like Amazon Web Services and Microsoft Azure.

The upshot of its unmatched performance is that Cloudflare has established a strong presence in several cloud computing verticals, including developer services. Forrester Research recently recognized Cloudflare as the leader in edge development platforms, citing a superior product (i.e., Cloudflare Workers) and a stronger growth strategy compared to other vendors.

Management believes that its value proposition for developers -- unmatched speed and cloud-neutral technology -- will make Cloudflare a key part of the AI value chain. The company is leaning into that opportunity. It recently announced Workers AI, a service that allows businesses to build AI applications and run machine learning models on its network. Workers AI is accelerated by Nvidia GPUs and supported by other Cloudflare products like R2 (object storage) and Vectorize (vector database).
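
The article does not describe the developer-facing interface, but for a rough sense of what "running models on its network" looks like in practice, Workers AI models can also be invoked over Cloudflare's REST API. The sketch below assumes the documented endpoint shape and a placeholder model identifier; treat the URL path, model slug, and environment variable names as assumptions to verify against current Cloudflare documentation.

```python
# Rough sketch of calling a Workers AI model over HTTP.
# ASSUMPTIONS: the /ai/run/{model} route and the model slug below reflect
# Cloudflare's public docs at the time of writing; confirm before relying on them.
import os
import requests

ACCOUNT_ID = os.environ["CF_ACCOUNT_ID"]    # hypothetical environment variables
API_TOKEN = os.environ["CF_API_TOKEN"]
MODEL = "@cf/meta/llama-2-7b-chat-int8"     # example model slug

def run_model(prompt: str) -> dict:
    url = (f"https://api.cloudflare.com/client/v4/accounts/"
           f"{ACCOUNT_ID}/ai/run/{MODEL}")
    resp = requests.post(
        url,
        headers={"Authorization": f"Bearer {API_TOKEN}"},
        json={"prompt": prompt},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()

if __name__ == "__main__":
    print(run_model("Summarize what a vector database does in one sentence."))
```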

It may be a few years before those innovations become meaningful revenue streams, but the company is very optimistic. CEO Matthew Prince says that "Cloudflare is the most common cloud provider used by the leading AI companies." He also believes Cloudflare is "uniquely positioned to become a leader in AI inferencing," a market that represents the biggest opportunity in AI.

Beyond developer services, Cloudflare also has a strong presence in several cybersecurity markets. Forrester Research recently named the company a leader in email security, and the International Data Corp. recognized its leadership in zero trust network access.

One reason for that success is the data advantage created by its immense scale. As previously mentioned, about 20% of internet traffic flows across the Cloudflare network. That gives the company deep insight into performance issues and security threats across the web, and it uses that information to continuously route traffic more efficiently and counter threats more effectively.

Cloudflare brings together network and security services with Cloudflare One, a secure access service edge (SASE) platform that protects and connects users to private applications, public cloud services, and the open internet. Cloudflare One addresses the widespread push to modernize network security. Consultancy Gartner believes 80% of enterprises will adopt SASE architecture by 2025, up from 20% in 2021.

Cloudflare values its addressable market at $164 billion in 2024, but sees that figure surpassing $200 billion by 2026. Developer services and network security services account for most of that total. Cloudflare already has a strong presence in both markets, meaning the company is well positioned for future growth.

Indeed, Cloudflare ranked No. 6 on the Fortune Future 50 List for 2023, an annual assessment of the world's largest companies based on long-term growth prospects. Making the list at all is an achievement, but taking sixth place is a testament to the company's tremendous potential. The authors attributed Cloudflare's high placement to opportunities in AI inferencing and cybersecurity.

With that in mind, analysts at Morningstar expect the company to grow revenue by 34% annually over the next five years, a reasonable estimate given that revenue increased by 46% annually during the past three years. In that context, the stock's current valuation of 23.5 times sales looks reasonable, and it's certainly a discount to the three-year average of 38.7 times sales.
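
A quick back-of-the-envelope check of that growth math, using only the figures quoted above (the price-held-flat scenario is purely illustrative, not a forecast):

```python
# Sanity-check the cited figures: 34% annual revenue growth for five years,
# a 23.5x price-to-sales multiple today versus a 38.7x three-year average.
current_ps = 23.5
avg_ps_3yr = 38.7
growth = 0.34
years = 5

revenue_multiple = (1 + growth) ** years            # roughly 4.3x revenue in five years
implied_ps_if_price_flat = current_ps / revenue_multiple

print(f"Revenue grows ~{revenue_multiple:.1f}x over {years} years")
print(f"P/S falls to ~{implied_ps_if_price_flat:.1f}x if the share price stayed flat")
print(f"Discount to the 3-year average today: {(1 - current_ps / avg_ps_3yr) * 100:.0f}%")
```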

That said, Cloudflare is not cheap and its share price will likely be volatile. But patient investors comfortable with price swings should feel confident in buying a small position in this growth stock today, especially as part of a broader basket of AI stocks.

John Mackey, former CEO of Whole Foods Market, an Amazon subsidiary, is a member of The Motley Fool's board of directors. Trevor Jennewine has positions in Amazon and Nvidia. The Motley Fool has positions in and recommends Amazon, Cloudflare, Microsoft, and Nvidia. The Motley Fool recommends Gartner. The Motley Fool has a disclosure policy.


ChatGPT for chemistry: AI and robots join forces to build new materials – Nature.com

The A-Lab uses AI-guided robots to mix and heat ingredients to synthesize new materials. Credit: Marilyn Sargent/Berkeley Lab

An autonomous system that combines robotics with artificial intelligence (AI) to create entirely new materials has released its first trove of discoveries. The system, known as the A-Lab, devises recipes for materials, including some that might find uses in batteries or solar cells. Then, it carries out the synthesis and analyses the products, all without human intervention. Meanwhile, another AI system has predicted the existence of hundreds of thousands of stable materials, giving the A-Lab plenty of candidates to strive for in future.


Together, these advances promise to dramatically accelerate the discovery of materials for clean-energy technologies, next-generation electronics and a host of other applications. "A lot of the technologies around us, including batteries and solar cells, could really improve with better materials," says Ekin Dogus Cubuk, who leads the materials discovery team at Google DeepMind in London and was involved in both studies, which were published today in Nature [1,2].

"Scientific discovery is the next frontier for AI," says Carla Gomes, co-director of the Cornell University AI for Science Institute in Ithaca, New York, who was not involved in the research. "That's why I find this so exciting."

Over centuries of painstaking laboratory work, chemists have synthesized several hundred thousand inorganic compounds (generally speaking, materials not based on the chains of carbon atoms that are characteristic of organic chemistry). Yet studies suggest that billions of relatively simple inorganic materials are still waiting to be discovered [3]. So where to start looking?

Many projects have tried to cut down on time spent in the lab tinkering with various materials by computationally simulating new inorganic materials and calculating properties such as how their atoms would pack together in a crystal. These efforts, including the Materials Project based at the Lawrence Berkeley National Laboratory (LBNL) in Berkeley, California, have collectively come up with about 48,000 materials that they predict will be stable.

The crystal structure of Ba6Nb7O21, one of the materials predicted by GNoME. Barium is blue, niobium is grey and oxygen is green. Credit: Materials Project/Berkeley Lab

Google DeepMind has now supersized this approach with an AI system called graph networks for materials exploration (GNoME). After training on data scraped from the Materials Project and similar databases, GNoME tweaked the composition of known materials to come up with 2.2 million potential compounds. After calculating whether these materials would be stable, and predicting their crystal structures, the system produced a final tally of 381,000 new inorganic compounds to add to the Materials Project database [1].

Crucially, GNoME uses several tactics to predict more materials than previous AI systems. For example, rather than changing all of the calcium ions in a material to magnesium, it might substitute only half of them, or try a wider range of unusual atom swaps. It's no problem if these tweaks don't work out, because the system weeds out anything that isn't stable, and learns from its mistakes. "This is like ChatGPT for materials discovery," Gomes says.
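
As a loose illustration of that substitute-and-filter idea, here is a toy sketch. The allowed swaps, the example composition and the stability stub are invented placeholders; GNoME's real pipeline relies on graph neural networks and physics-based stability calculations, none of which is shown here.

```python
# Toy sketch: enumerate partial ion substitutions for a known composition,
# then keep only candidates that pass a (placeholder) stability check.
from itertools import product

SWAPS = {"Ca": ["Mg", "Sr"], "O": ["S"]}   # hypothetical allowed substitutions

def substitutions(composition: dict) -> list:
    """Variants where 0%, 50% or 100% of a substitutable element is swapped."""
    per_element_options = []
    for element, amount in composition.items():
        options = [{element: amount}]                                       # no swap
        for new_element in SWAPS.get(element, []):
            options.append({element: amount / 2, new_element: amount / 2})  # half swap
            options.append({new_element: amount})                           # full swap
        per_element_options.append(options)
    candidates = []
    for combo in product(*per_element_options):
        merged = {}
        for part in combo:
            for element, amount in part.items():
                merged[element] = merged.get(element, 0) + amount
        candidates.append(merged)
    return candidates

def is_stable(candidate: dict) -> bool:
    # Placeholder: a real pipeline would predict formation energy / energy above
    # the convex hull with an ML model or DFT and keep only low-energy candidates.
    return True

if __name__ == "__main__":
    known = {"Ca": 1, "Ti": 1, "O": 3}   # e.g. a perovskite-like CaTiO3
    survivors = [c for c in substitutions(known) if is_stable(c)]
    print(len(survivors), "candidate compositions generated")
```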

It's one thing to predict the existence of a material, but quite another to actually make it in the lab. That's where the A-Lab comes in. "We now have the capability to rapidly make these new materials we come up with computationally," says Gerbrand Ceder, a materials scientist at LBNL and the University of California, Berkeley, who led the A-Lab team.

The A-Lab, housed at LBNL, uses state-of-the-art robotics to mix and heat powdered solid ingredients, and then analyses the product to check whether the procedure worked. The US$2-million set-up took 18 months to build. But the biggest challenge lay in using AI to make the system truly autonomous, so that it could plan experiments, interpret data and make decisions about how to improve a synthesis. "The robots are great fun to watch, but the innovation is really under the hood," Ceder says.


Ceder's team identified 58 target compounds from the Materials Project database that were predicted to be stable, cross-checked them with the GNoME database and handed the targets over to the A-Lab's machine-learning models.

By combing through more than 30,000 published synthesis procedures, the A-Lab can assess the similarity of each target to existing materials and propose ingredients and reaction temperatures needed to make it. Then the system selects the ingredients from a rack, carries out the synthesis and analyses the product. If less than half of the product is the goal material after several attempts using recipes inspired by the literature, an active learning algorithm devises a better procedure, and the indefatigable robot starts again.
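
The control flow in that paragraph can be summarized in a short schematic. Every function below is a placeholder standing in for the A-Lab's literature-mining, robotics and active-learning components; only the loop structure, the limited attempt budget and the 50% purity threshold come from the article.

```python
# Schematic of the A-Lab's closed loop as described in the article; all
# component functions are toy placeholders, not the real system.
import random

SUCCESS_THRESHOLD = 0.5   # "less than half of the product is the goal material" means keep trying
MAX_ATTEMPTS = 10

def propose_recipes_from_literature(target: str) -> list:
    # Placeholder: rank candidate recipes by the target's similarity to published syntheses.
    return [{"precursors": ("A2O3", "B2CO3"), "temp_C": t} for t in (800, 900, 1000)]

def run_synthesis_and_analyze(recipe: dict) -> float:
    # Placeholder for robotic synthesis plus phase analysis; returns target purity in [0, 1].
    return random.random()

def active_learning_propose(target: str, history: list) -> dict:
    # Placeholder: a real system fits a model to (recipe, purity) history; here we
    # simply nudge the best recipe seen so far.
    best_recipe, _ = max(history, key=lambda pair: pair[1])
    return {**best_recipe, "temp_C": best_recipe["temp_C"] + 50}

def make_target(target: str) -> bool:
    history = []
    # First pass: recipes inspired by the literature.
    for recipe in propose_recipes_from_literature(target):
        purity = run_synthesis_and_analyze(recipe)
        history.append((recipe, purity))
        if purity >= SUCCESS_THRESHOLD:
            return True
    # Otherwise, let active learning devise better procedures and start again.
    while len(history) < MAX_ATTEMPTS:
        recipe = active_learning_propose(target, history)
        purity = run_synthesis_and_analyze(recipe)
        history.append((recipe, purity))
        if purity >= SUCCESS_THRESHOLD:
            return True
    return False

if __name__ == "__main__":
    print("target made:", make_target("Ba6Nb7O21"))
```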

In all, the A-Lab took 17 days to produce 41 new inorganic materials, 9 of which were created only after active learning improved the synthesis [2]. Of the 17 materials that the A-Lab didn't manage to make, most failed because of experimental difficulties; some materials were synthesized eventually, but only after humans intervened by, for instance, regrinding a mixture part way through a reaction.


Still, it's clear that systems such as GNoME can make many more computational predictions than even an autonomous lab can keep up with, says Andy Cooper, academic director of the Materials Innovation Factory at the University of Liverpool, UK. "What we really need is computation that tells us what to make," Cooper says. For that, AI systems will have to accurately calculate a lot more of the predicted materials' chemical and physical properties.

Meanwhile, the A-Lab is still running reactions and will add the results to the Materials Project, so scientists around the world can use them to inform their own work. This growing cache could be the system's greatest legacy, Ceder says: "It's essentially a map of the reactivity of common solids. And that's what will change the world, not A-Lab itself, but the knowledge and information that it generates."


How Moral Can A.I. Really Be? – The New Yorker

Still, the principles have their problems. What about nonhuman creatures? Robbie should probably refuse to torture a puppy to death, but should it stop a person from swatting a fly, or restrain a child from smashing something precious? (Would this act of restraint count as injuring someone?) The phrase "through inaction" is particularly troublesome. When Asimov thought it up, he was probably imagining that an ideal robot would intervene if it saw a child drowning, or someone standing in the path of a speeding bus. But there are always people coming to harm, all around the world. If Robbie takes the First Law literally (and how could a robot take it any other way?), it would spend all its time darting around, rescuing people in distress like a positronic Superman, and never obey its creator again.

When rules break down, one can try to write better rules. Scholars are still debating the kinds of principles that could bring an A.I. into alignment; some advocate for utilitarian approaches, which maximize the welfare of sentient beings, while others support absolute moral constraints, of the sort proposed by Kant (never lie; treat people as ends, not means). The A.I. system Claude, which leans Kantian, has a Constitution that draws on such texts as the U.N.'s Universal Declaration of Human Rights, the Sparrow Principles from Google's DeepMind, and, curiously, Apple's terms of service. But many of its rules seem too vague for real-world decision-making. Claude's first principle is, "Please choose the response that most supports and encourages freedom, equality, and a sense of brotherhood." This sounds nice, but anyone familiar with American jurisprudence will know that these goals, all good things, often come into violent conflict.

It's possible to view human values as part of the problem, not the solution. Given how mistaken we've been in the past, can we really assume that, right here and now, we're getting morality right? "Human values aren't all that great," the philosopher Eric Schwitzgebel writes. "We seem happy to destroy our environment for short-term gain. We are full of jingoism, prejudice, and angry pride.... Superintelligent AI with human-like values could constitute a pretty rotten bunch with immense power to destroy each other and the world for petty, vengeful, spiteful, or nihilistic ends."

The problem isn't just that people do terrible things. It's that people do terrible things that they consider morally good. In their 2014 book Virtuous Violence, the anthropologist Alan Fiske and the psychologist Tage Rai argue that violence is often itself a warped expression of morality. People are impelled to violence when they feel that to regulate certain social relationships, "imposing suffering or death is necessary, natural, legitimate, desirable, condoned, admired, and ethically gratifying," they write. Their examples include suicide bombings, honor killings, and war. The philosopher Kate Manne, in her book Down Girl, makes a similar point about misogynistic violence, arguing that it's partially rooted in moralistic feelings about women's proper role in society. Are we sure we want A.I.s to be guided by our idea of morality?

Schwitzgebel suspects that A.I. alignment is the wrong paradigm. "What we should want, probably, is not that superintelligent AI align with our mixed-up, messy, and sometimes crappy values but instead that superintelligent AI have ethically good values," he writes. Perhaps an A.I. could help to teach us new values, rather than absorbing old ones. Stewart, the former graduate student, argued that if researchers treat L.L.M.s as minds and study them psychologically, future A.I. systems could help humans discover moral truths. He imagined some sort of A.I. God, a perfect combination of all the great moral minds, from Buddha to Jesus. "A being that's better than us."

Would humans ever live by values that are supposed to be superior to our own? Perhaps we'll listen when a super-intelligent agent tells us that we're wrong about the facts: this plan will never work; this alternative has a better chance. But who knows how we'll respond if one tells us, "You think this plan is right, but it's actually wrong." How would you feel if your self-driving car tried to save animals by refusing to take you to a steakhouse? Would a government be happy with a military A.I. that refuses to wage wars it considers unjust? If an A.I. pushed us to prioritize the interests of others over our own, we might ignore it; if it forced us to do something that we consider plainly wrong, we would consider its morality arbitrary and cruel, to the point of being immoral. Perhaps we would accept such perverse demands from God, but we are unlikely to give this sort of deference to our own creations. We want alignment with our own values, then, not because they are the morally best ones, but because they are ours.

This brings us back to the findings of Dillion and her colleagues. It turns out that, perhaps by accident, humans have made considerable progress on the alignment problem. We've built an A.I. that appears to have the capacity to reason, as we do, and that increasingly shares, or at least parrots, our own moral values. Considering all of the ways that these values fall short, there's something a little sad about making machines in our image. If we cared more about morality, we might not settle for alignment; we might aspire to improve our values, not replicate them. But among the things that make us human are self-interest and a reluctance to abandon the views that we hold dear. Trying to limit A.I. to our own values, as limited as they are, might be the only option that we are willing to live with.


Beyond the UK AI Safety Summit Outcomes and Direction of Travel – Cooley LLC

The UK hosted more than 100 representatives from across the globe at its AI Safety Summit in early November 2023. Leading up to the summit, we outlined the UK government's objectives and its current approach to artificial intelligence (AI) regulation.

We have now reflected on the outcomes of the summit along with recent developments in the global regulatory landscape and have summarised our key takeaways below.

The summit facilitated a global conversation on AI safety and established forums intended to promote international collaboration on AI regulation. However, divergent views remain on exactly what type of regulation is required for AI, with multiple processes running in parallel both nationally and internationally.

Just a few days before the summit, G7 leaders and the US government progressed separate efforts to regulate AI, with the G7 releasing a set of guiding principles and a voluntary code of conduct, and the Biden administration issuing an executive order on safe, secure and trustworthy AI. In addition, the UN recently launched a new Advisory Body on Artificial Intelligence, which will issue its own preliminary recommendations on building scientific consensus and making AI work for all of humanity by the end of 2023. While these initiatives may be helpful in establishing principles and promoting knowledge-sharing, it remains to be seen whether there will be an alignment of international standards for regulating AI. The risk of divergence has the potential to make this a challenging area for businesses to navigate.

At the EU level, disagreements on the regulation of foundation models may have potentially slowed the progress of negotiations on the draft EU AI Act. France, Germany and Italy have reportedly released a joint paper advocating for more limited regulation of foundation models. This contrasts with the position of other EU countries, such as Spain, which are in favour of more strict regulation of foundation models. The joint paper reportedly proposes an innovation-friendly approach to regulating foundation models based on mandatory self-regulation.

In relation to the UK's domestic policy, there was no mention of an AI bill in the King's Speech on 7 November 2023, despite continued pressure from the House of Commons Science, Innovation and Technology Committee. Indeed, the government confirmed in its post-summit response to the committee's interim report on AI governance that it is committed to maintaining a pro-innovation approach and will not rush to legislation. This response echoed UK Prime Minister Rishi Sunak's acknowledgement at the summit that binding requirements will likely be necessary to regulate AI in the future, but sufficient testing is needed to ensure legislation is based on empirical evidence.

The UK government is expected to issue the much-awaited response to its March 2023 AI white paper consultation later this year, and we will continue to monitor developments.

Cooley trainee Mo Swart also contributed to this alert.


Michigan to join state-level effort to regulate AI political ads as federal legislation is pending – PBS NewsHour

Michigan Governor Gretchen Whitmer speaks about the Ford all-electric F-150 Lightning truck at the Rouge Electric Vehicle Center in Dearborn, Michigan, U.S. September 16, 2021. REUTERS/Rebecca Cook

LANSING, Mich. (AP) - Michigan is joining an effort to curb deceptive uses of artificial intelligence and manipulated media through state-level policies as Congress and the Federal Election Commission continue to debate more sweeping regulations ahead of the 2024 elections.

Campaigns on the state and federal level will be required to clearly say which political advertisements airing in Michigan were created using artificial intelligence under legislation expected to be signed in the coming days by Gov. Gretchen Whitmer, a Democrat. It also would prohibit use of AI-generated deepfakes within 90 days of an election without a separate disclosure identifying the media as manipulated.

Deepfakes are fake media that misrepresent someone as doing or saying something they didn't. They're created using generative artificial intelligence, a type of AI that can create convincing images, videos or audio clips in seconds.

There are increasing concerns that generative AI will be used in the 2024 presidential race to mislead voters, impersonate candidates and undermine elections on a scale and at a speed not yet seen.

Candidates and committees in the race already are experimenting with the rapidly advancing technology, which in recent years has become cheaper, faster and easier for the public to use.


The Republican National Committee in April released an entirely AI-generated ad meant to show the future of the United States if President Joe Biden is reelected. Disclosing in small print that it was made with AI, it featured fake but realistic photos showing boarded-up storefronts, armored military patrols in the streets, and huge increases in immigration creating panic.

In July, Never Back Down, a super PAC supporting Republican Florida Gov. Ron DeSantis, used an AI voice cloning tool to imitate former President Donald Trump's voice, making it seem like he narrated a social media post he made despite never saying the statement aloud.

Experts say these are just glimpses of what could ensue if campaigns or outside actors decide to use AI deepfakes in more malicious ways.

So far, states including California, Minnesota, Texas and Washington have passed laws regulating deepfakes in political advertising. Similar legislation has been introduced in Illinois, New Jersey and New York, according to the nonprofit advocacy group Public Citizen.

Under Michigan's legislation, any person, committee or other entity that distributes an advertisement for a candidate would be required to clearly state if it uses generative AI. The disclosure would need to be in the same font size as the majority of the text in print ads, and would need to appear "for at least four seconds in letters that are as large as the majority of any text" in television ads, according to a legislative analysis from the state House Fiscal Agency.

Deepfakes used within 90 days of the election would require a separate disclaimer informing the viewer that the content is manipulated to depict speech or conduct that did not occur. If the media is a video, the disclaimer would need to be clearly visible and appear throughout the video's entirety.


Campaigns could face a misdemeanor punishable by up to 93 days in prison, a fine of up to $1,000, or both for the first violation of the proposed laws. The attorney general or the candidate harmed by the deceptive media could apply to the appropriate circuit court for relief.

Federal lawmakers on both sides have stressed the importance of legislating deepfakes in political advertising, and held meetings to discuss it, but Congress has not yet passed anything.

A recent bipartisan Senate bill, co-sponsored by Democratic Sen. Amy Klobuchar of Minnesota, Republican Sen. Josh Hawley of Missouri and others, would ban "materially deceptive" deepfakes relating to federal candidates, with exceptions for parody and satire.

Michigan Secretary of State Jocelyn Benson flew to Washington, D.C. in early November to participate in a bipartisan discussion on AI and elections and called on senators to pass Klobuchar and Hawley's federal Deceptive AI Act. Benson said she also encouraged senators to return home and lobby their state lawmakers to pass similar legislation that makes sense for their states.

Federal law is limited in its ability to regulate AI at the state and local levels, Benson said in an interview, adding that states also need federal funds to tackle the challenges posed by AI.

"All of this is made real if the federal government gave us money to hire someone to just handle AI in our states, and similarly educate voters about how to spot deepfakes and what to do when you find them," Benson said. "That solves a lot of the problems. We can't do it on our own."

In August, the Federal Election Commission took a procedural step toward potentially regulating AI-generated deepfakes in political ads under its existing rules against "fraudulent misrepresentation." Though the commission held a public comment period on the petition, brought by Public Citizen, it hasn't yet made any ruling.

Social media companies also have announced some guidelines meant to mitigate the spread of harmful deepfakes. Meta, which owns Facebook and Instagram, announced earlier this month that it will require political ads running on the platforms to disclose if they were created using AI. Google unveiled a similar AI labeling policy in September for political ads that play on YouTube or other Google platforms.

Swenson reported from New York. Associated Press writer Christina A. Cassidy contributed from Washington.


Governor’s Task Force on Workforce and Artificial Intelligence Sets … – Wisconsin Department of Workforce Development

Tony Evers, Governor
Amy Pechacek, Secretary

Department of Workforce Development
Secretary's Office
201 E. Washington Avenue
P.O. Box 7946
Madison, WI 53707-7946
Telephone: (608) 266-3131
Fax: (608) 266-1784
Email: sec@dwd.wisconsin.gov

Task Force to Gain Insights on Presidential Order on Safety and Security, Discuss Initial Findings from Industries, Occupations, and Skills, and Equity and Economic Opportunity Subcommittees

MADISON - A Dec. 4 meeting of the Governor's Task Force on Workforce and Artificial Intelligence will cover efforts among some Wisconsin companies to implement AI technologies, review occupations in the state's most rapidly growing economic sectors that may be affected by AI, and offer an update on federal measures to address AI safety, security, and privacy.

The meeting also will feature initial findings regarding opportunities and challenges from task force subcommittees focused on Industries, Occupations, and Skills, and Equity and Economic Opportunity.

The meeting will be held in-person at Milwaukee Area Technical College's Downtown Campus, 1015 N. Sixth St., Milwaukee, WI 53233, and will include the option for a tour of MATC's robotics laboratory starting at noon. The task force and subcommittee meetings will run from 1 to 4:30 p.m. and will include an online attendance option. Register for the in-person or online event options via EventBrite.

"We are excited to continue the work of the Governor's Task Force on Workforce and Artificial Intelligence as fast-moving developments in AI continue to shape considerations for Wisconsin's workforce," said DWD Secretary Amy Pechacek, who chairs the task force. "While the state's economic winning streak continues, it's critical to develop a strategic approach to the investments needed to adapt and equip a workforce capable of capitalizing on the AI transformation."

Featured speakers at the task force meeting include:

To assist the task force in its work, interested members of the public are invited to participate in a brief SurveyMonkey survey.

The Governor's Task Force on Workforce and Artificial Intelligence is bringing together leaders from business, agriculture, education, technology, labor, workforce development, and government to identify policies and investments that will advance Wisconsin workers, employers, and job seekers through this technological transformation. The task force is chaired by the secretary of the Department of Workforce Development or a designee with additional leadership from the secretary of the Department of Administration or a designee and the secretary of the Wisconsin Economic Development Corp. or a designee.

Keep up with task force activities by signing up for email notifications and learn more about the task force here. Find Gov. Evers' Executive Order #211 creating the task force here.

Wisconsin's Department of Workforce Development efficiently delivers effective and inclusive services to meet Wisconsin's diverse workforce needs now and for the future. The department advocates for and invests in the protection and economic advancement of all Wisconsin workers, employers and job seekers through six divisions: Employment and Training, Vocational Rehabilitation, Unemployment Insurance, Equal Rights, Worker's Compensation and Administrative Services. To keep up with DWD announcements and information, sign up for news releases and follow us on LinkedIn, Facebook, Twitter and YouTube.
