Category Archives: AI
AI helps household robots cut planning time in half – MIT News
Your brand new household robot is delivered to your house, and you ask it to make you a cup of coffee. Although it knows some basic skills from previous practice in simulated kitchens, there are way too many actions it could possibly take: turning on the faucet, flushing the toilet, emptying out the flour container, and so on. But there's only a tiny number of actions that could possibly be useful. How is the robot to figure out what steps are sensible in a new situation?
It could use PIGINet, a new system that aims to efficiently enhance the problem-solving capabilities of household robots. Researchers from MIT's Computer Science and Artificial Intelligence Laboratory (CSAIL) are using machine learning to cut down on the typical iterative process of task planning that considers all possible actions. PIGINet eliminates task plans that can't satisfy collision-free requirements, and reduces planning time by 50-80 percent when trained on only 300-500 problems.
Typically, robots attempt various task plans and iteratively refine their moves until they find a feasible solution, which can be inefficient and time-consuming, especially when there are movable and articulated obstacles. Maybe after cooking, for example, you want to put all the sauces in the cabinet. That problem might take two to eight steps depending on what the world looks like at that moment. Does the robot need to open multiple cabinet doors, or are there any obstacles inside the cabinet that need to be relocated in order to make space? You don't want your robot to be annoyingly slow, and it will be worse if it burns dinner while it's thinking.
Household robots are usually thought of as following predefined recipes for performing tasks, which isn't always suitable for diverse or changing environments. So, how does PIGINet avoid those predefined rules? PIGINet is a neural network that takes in Plans, Images, Goal, and Initial facts, then predicts the probability that a task plan can be refined to find feasible motion plans. In simple terms, it employs a transformer encoder, a versatile and state-of-the-art model designed to operate on data sequences. The input sequence, in this case, is information about which task plan it is considering, images of the environment, and symbolic encodings of the initial state and the desired goal. The encoder combines the task plans, image, and text to generate a prediction regarding the feasibility of the selected task plan.
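As a rough illustration of how such a plan-feasibility classifier could be wired up, here is a minimal PyTorch sketch. The layer sizes, feature dimensions, and fusion scheme are illustrative assumptions, not the published PIGINet architecture.

```python
# Minimal sketch of a PIGINet-style plan-feasibility classifier.
# Shapes, layer sizes, and the fusion scheme are illustrative assumptions,
# not the authors' published architecture.
import torch
import torch.nn as nn

class PlanFeasibilityClassifier(nn.Module):
    def __init__(self, d_model=256, n_heads=4, n_layers=2):
        super().__init__()
        # Separate projections for each modality: plan-action tokens,
        # image features (e.g. from a pretrained vision encoder), and
        # symbolic encodings of the initial state and goal.
        self.plan_proj = nn.Linear(64, d_model)
        self.image_proj = nn.Linear(512, d_model)
        self.state_proj = nn.Linear(128, d_model)
        encoder_layer = nn.TransformerEncoderLayer(
            d_model=d_model, nhead=n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(encoder_layer, num_layers=n_layers)
        self.head = nn.Linear(d_model, 1)  # probability the plan is refinable

    def forward(self, plan_tokens, image_feats, state_goal_feats):
        # Concatenate all modalities into one sequence for the encoder.
        seq = torch.cat([
            self.plan_proj(plan_tokens),
            self.image_proj(image_feats),
            self.state_proj(state_goal_feats),
        ], dim=1)
        encoded = self.encoder(seq)
        # Pool over the sequence and predict feasibility.
        return torch.sigmoid(self.head(encoded.mean(dim=1)))

# Example usage with made-up feature dimensions:
model = PlanFeasibilityClassifier()
p = model(torch.randn(1, 8, 64),    # 8 plan-action tokens
          torch.randn(1, 4, 512),   # 4 image patches/views
          torch.randn(1, 6, 128))   # initial-state and goal facts
print(p.item())  # estimated probability the plan can be refined
```

In a planner, a score like this would be used to rank candidate task plans so that only the most promising ones are handed to the slower motion planner for refinement.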
Keeping things in the kitchen, the team created hundreds of simulated environments, each with different layouts and specific tasks that require objects to be rearranged among counters, fridges, cabinets, sinks, and cooking pots. By measuring the time taken to solve problems, they compared PIGINet against prior approaches. One correct task plan may include opening the left fridge door, removing a pot lid, moving the cabbage from pot to fridge, moving a potato to the fridge, picking up the bottle from the sink, placing the bottle in the sink, picking up the tomato, or placing the tomato. PIGINet significantly reduced planning time by 80 percent in simpler scenarios and 20-50 percent in more complex scenarios that have longer plan sequences and less training data.
"Systems such as PIGINet, which use the power of data-driven methods to handle familiar cases efficiently, but can still fall back on first-principles planning methods to verify learning-based suggestions and solve novel problems, offer the best of both worlds, providing reliable and efficient general-purpose solutions to a wide variety of problems," says MIT Professor and CSAIL Principal Investigator Leslie Pack Kaelbling.
PIGINet's use of multimodal embeddings in the input sequence allowed for better representation and understanding of complex geometric relationships. Using image data helped the model to grasp spatial arrangements and object configurations without knowing the object 3D meshes for precise collision checking, enabling fast decision-making in different environments.
One of the major challenges faced during the development of PIGINet was the scarcity of good training data, as all feasible and infeasible plans need to be generated by traditional planners, which is slow in the first place. However, by using pretrained vision language models and data augmentation tricks, the team was able to address this challenge, showing impressive plan time reduction not only on problems with seen objects, but also zero-shot generalization to previously unseen objects.
"Because everyone's home is different, robots should be adaptable problem-solvers instead of just recipe followers. Our key idea is to let a general-purpose task planner generate candidate task plans and use a deep learning model to select the promising ones. The result is a more efficient, adaptable, and practical household robot, one that can nimbly navigate even complex and dynamic environments. Moreover, the practical applications of PIGINet are not confined to households," says Zhutian Yang, MIT CSAIL PhD student and lead author on the work. "Our future aim is to further refine PIGINet to suggest alternate task plans after identifying infeasible actions, which will further speed up the generation of feasible task plans without the need of big datasets for training a general-purpose planner from scratch. We believe that this could revolutionize the way robots are trained during development and then applied to everyone's homes."
"This paper addresses the fundamental challenge in implementing a general-purpose robot: how to learn from past experience to speed up the decision-making process in unstructured environments filled with a large number of articulated and movable obstacles," says Beomjoon Kim PhD '20, assistant professor in the Graduate School of AI at Korea Advanced Institute of Science and Technology (KAIST). "The core bottleneck in such problems is how to determine a high-level task plan such that there exists a low-level motion plan that realizes the high-level plan. Typically, you have to oscillate between motion and task planning, which causes significant computational inefficiency. Zhutian's work tackles this by using learning to eliminate infeasible task plans, and is a step in a promising direction."
Yang wrote the paper with NVIDIA research scientist Caelan Garrett SB '15, MEng '15, PhD '21; MIT Department of Electrical Engineering and Computer Science professors and CSAIL members Tomás Lozano-Pérez and Leslie Kaelbling; and Senior Director of Robotics Research at NVIDIA and University of Washington Professor Dieter Fox. The team was supported by AI Singapore and grants from the National Science Foundation, the Air Force Office of Scientific Research, and the Army Research Office. This project was partially conducted while Yang was an intern at NVIDIA Research. Their research will be presented in July at the conference Robotics: Science and Systems.
More:
AI helps household robots cut planning time in half - MIT News
Why AI detectors think the US Constitution was written by AI – Ars Technica
An AI-generated image of James Madison writing the US Constitution using AI. (Midjourney / Benj Edwards)
If you feed America's most important legal document, the US Constitution, into a tool designed to detect text written by AI models like ChatGPT, it will tell you that the document was almost certainly written by AI. But unless James Madison was a time traveler, that can't be the case. Why do AI writing detection tools give false positives? We spoke to several experts, and the creator of AI writing detector GPTZero, to find out.
Among news stories of overzealous professors flunking an entire class due to the suspicion of AI writing tool use and kids falsely accused of using ChatGPT, generative AI has education in a tizzy. Some think it represents an existential crisis. Teachers relying on educational methods developed over the past century have been scrambling for ways to keep the status quo: the tradition of relying on the essay as a tool to gauge student mastery of a topic.
As tempting as it is to rely on AI tools to detect AI-generated writing, evidence so far has shown that they are not reliable. Due to false positives, AI writing detectors such as GPTZero, ZeroGPT, and OpenAI's Text Classifier cannot be trusted to detect text composed by large language models (LLMs) like ChatGPT.
A viral screenshot from April 2023 showing GPTZero saying, "Your text is likely to be written entirely by AI" when fed part of the US Constitution.
When fed part of the US Constitution, ZeroGPT says, "Your text is AI/GPT Generated."
When fed part of the US Constitution, OpenAI's Text Classifier says, "The classifier considers the text to be unclear if it is AI-generated."
If you feed GPTZero a section of the US Constitution, it says the text is "likely to be written entirely by AI." Several times over the past six months, screenshots of other AI detectors showing similar results have gone viral on social media, inspiring confusion and plenty of jokes about the founding fathers being robots. It turns out the same thing happens with selections from The Bible, which also show up as being AI-generated.
To explain why these tools make such obvious mistakes (and otherwise often return false positives), we first need to understand how they work.
Different AI writing detectors use slightly different methods of detection but with a similar premise: There's an AI model that has been trained on a large body of text (consisting of millions of writing examples) and a set of surmised rules that determine whether the writing is more likely to be human- or AI-generated.
For example, at the heart of GPTZero is a neural network trained on "a large, diverse corpus of human-written and AI-generated text, with a focus on English prose," according to the service's FAQ. Next, the system uses properties like "perplexity" and "burstiness" to evaluate the text and make its classification.
In machine learning, perplexity is a measurement of how much a piece of text deviates from what an AI model has learned during its training. As Dr. Margaret Mitchell of AI company Hugging Face told Ars, "Perplexity is a function of 'how surprising is this language based on what I've seen?'"
So the thinking behind measuring perplexity is that when they're writing text, AI models like ChatGPT will naturally reach for what they know best, which comes from their training data. The closer the output is to the training data, the lower the perplexity rating. Humans are much more chaotic writers (or at least that's the theory), but humans can write with low perplexity, too, especially when imitating a formal style used in law or certain types of academic writing. Also, many of the phrases we use are surprisingly common.
Let's say we're guessing the next word in the phrase "I'd like a cup of _____." Most people would fill in the blank with "water," "coffee," or "tea." A language model trained on a lot of English text would do the same because those phrases occur frequently in English writing. The perplexity of any of those three results would be quite low because the prediction is fairly certain.
Now consider a less common completion: "I'd like a cup of spiders." Both humans and a well-trained language model would be quite surprised (or "perplexed") by this sentence, so its perplexity would be high. (As of this writing, the phrase "I'd like a cup of spiders" gives exactly one result in a Google search, compared to 3.75 million results for "I'd like a cup of coffee.")
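As a concrete illustration of the idea, the following sketch scores a passage's perplexity with GPT-2 via the Hugging Face transformers library. Commercial detectors such as GPTZero use their own models and thresholds, so this is only a stand-in for the underlying concept, not a reproduction of any detector.

```python
# Illustrative sketch: scoring a passage's perplexity with GPT-2 using the
# Hugging Face transformers library. Real detectors use their own models and
# decision thresholds; this only demonstrates the underlying idea.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        # Passing labels makes the model return the average cross-entropy loss;
        # perplexity is the exponential of that loss.
        loss = model(ids, labels=ids).loss
    return torch.exp(loss).item()

print(perplexity("I'd like a cup of coffee."))   # common phrasing: lower score
print(perplexity("I'd like a cup of spiders."))  # surprising phrasing: higher score
```

A detector built on this idea would flag text whose perplexity falls below some threshold as "likely AI-generated," which is exactly why very common, formulaic human prose can trip it up.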
If the language in a piece of text isn't surprising based on the model's training, the perplexity will be low, so the AI detector will be more likely to classify that text as AI-generated. This leads us to the interesting case of the US Constitution. In essence, the Constitution's language is so ingrained in these models that they classify it as AI-generated, creating a false positive.
GPTZero creator Edward Tian told Ars Technica, "The US Constitution is a text fed repeatedly into the training data of many large language models. As a result, many of these large language models are trained to generate similar text to the Constitution and other frequently used training texts. GPTZero predicts text likely to be generated by large language models, and thus this fascinating phenomenon occurs."
The problem is that it's entirely possible for human writers to create content with low perplexity as well (if they write primarily using common phrases such as "I'd like a cup of coffee," for example), which deeply undermines the reliability of AI writing detectors.
Another property of text measured by GPTZero is "burstiness," which refers to the phenomenon where certain words or phrases appear in rapid succession or "bursts" within a text. Essentially, burstiness evaluates the variability in sentence length and structure throughout a text.
Human writers often exhibit a dynamic writing style, resulting in text with variable sentence lengths and structures. For instance, we might write a long, complex sentence followed by a short, simple one, or we might use a burst of adjectives in one sentence and none in the next. This variability is a natural outcome of human creativity and spontaneity.
AI-generated text, on the other hand, tends to be more consistent and uniformat least so far. Language models, which are still in their infancy, generate sentences with more regular lengths and structures. This lack of variability can result in a low burstiness score, indicating that the text may be AI-generated.
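The article does not give a formula for burstiness, but a rough proxy is the variability of sentence lengths. The sketch below computes a simple coefficient of variation over sentence lengths; GPTZero's actual metric is not public, so treat this purely as an illustration of the concept.

```python
# Rough sketch of a burstiness proxy: how much sentence lengths vary.
# Uniform sentence lengths score low; highly varied lengths score high.
# This is NOT GPTZero's metric, just an illustration of the idea.
import re
import statistics

def burstiness(text: str) -> float:
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    lengths = [len(s.split()) for s in sentences]
    if len(lengths) < 2:
        return 0.0
    # Coefficient of variation: standard deviation relative to the mean length.
    return statistics.stdev(lengths) / statistics.mean(lengths)

uniform = "The cat sat down. The dog sat down. The bird sat down."
varied = "Stop. The storm rolled in off the bay with a slow, deliberate menace. We ran."
print(burstiness(uniform), burstiness(varied))  # the varied text scores higher
```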
However, burstiness isn't a foolproof metric for detecting AI-generated content, either. As with perplexity, there are exceptions. A human writer may write in a highly structured, consistent style, resulting in a low burstiness score. Conversely, an AI model might be trained to emulate a more human-like variability in sentence length and structure, raising its burstiness score. In fact, as AI language models improve, studies show that their writing looks more and more like human writing all the time.
Ultimately, there's no magic formula that can always distinguish human-written text from that composed by a machine. AI writing detectors can make a strong guess, but the margin of error is too large to rely on them for an accurate result.
A 2023 study from researchers at the University of Maryland demonstrated empirically that detectors for AI-generated text are not reliable in practical scenarios and that they perform only marginally better than a random classifier. Not only do they return false positives, but detectors and watermarking schemes (that seek to alter word choice in a telltale way) can easily be defeated by "paraphrasing attacks" that modify language model output while retaining its meaning.
"I think they're mostly snake oil," said AI researcher Simon Willison of AI detector products. "Everyone desperately wants them to workpeople in education especiallyand it's easy to sell a product that everyone wants, especially when it's really hard to prove if it's effective or not."
Additionally, a recent study from Stanford University researchers showed that AI writing detection is biased against non-native English speakers, throwing out high false-positive rates for their human-written work and potentially penalizing them in the global discourse if AI detectors become widely used.
Some educators, like Professor Ethan Mollick of Wharton School, are accepting this new AI-infused reality and even actively promoting the use of tools like ChatGPT to aid learning. Mollick's reaction is reminiscent of how some teachers dealt with the introduction of pocket calculators into classrooms: They were initially controversial but eventually came to be widely accepted.
"There is no tool that can reliably detect ChatGPT-4/Bing/Bard writing," Mollick tweeted recently. "The existing tools are trained on GPT-3.5, they have high false positive rates (10%+), and they are incredibly easy to defeat." Additionally, ChatGPT itself cannot assess whether text is AI-written or not, he added, so you can't just paste in text and ask if it was written by ChatGPT.
In a conversation with Ars Technica, GPTZero's Tian seemed to see the writing on the wall and said he plans to pivot his company away from vanilla AI detection into something more ambiguous. "Compared to other detectors, like Turn-it-in, we're pivoting away from building detectors to catch students, and instead, the next version of GPTZero will not be detecting AI but highlighting what's most human, and helping teachers and students navigate together the level of AI involvement in education," he said.
How does he feel about people using GPTZero to accuse students of academic dishonesty? Unlike traditional plagiarism checker companies, Tian said, "We don't want people using our tools to punish students. Instead, for the education use case, it makes much more sense to stop relying on detection on the individual level (where some teachers punish students and some teachers are fine with AI technologies) but to apply these technologies on the school [or] school board [level], even across the country, because how can we craft the right policies to respond to students using AI technologies until we understand what is going on, and the degree of AI involvement across the board?"
Yet despite the inherent problems with accuracy, GPTZero still advertises itself as being "built for educators," and its site proudly displays a list of universities that supposedly use the technology. There's a strange tension between Tian's stated goals not to punish students and his desire to make money with his invention. But whatever the motives, using these flawed products can have terrible effects on students. Perhaps the most damaging result of people using these inaccurate and imperfect tools is the personal cost of false accusations.
A case reported by USA Today highlights the issue in a striking way. A student was accused of cheating based on AI text detection tools and had to present his case before an honor board. His defense included showing his Google Docs history to demonstrate his research process. Despite the board finding no evidence of cheating, the stress of preparing to defend himself led the student to experience panic attacks. Similar scenarios have played out dozens (if not hundreds) of times across the US and are commonly documented on desperate Reddit threads.
Common penalties for academic dishonesty often include failing grades, academic probation, suspension, or even expulsion, depending on the severity and frequency of the violation. That's a difficult charge to face, and the use of flawed technology to levy those charges feels almost like a modern-day academic witch hunt.
In light of the high rate of false positives and the potential to punish non-native English speakers unfairly, it's clear that the science of detecting AI-generated text is far from foolproof, and likely never will be. Humans can write like machines, and machines can write like humans. A more helpful question might be: Do humans who write with machine assistance understand what they are saying? If someone is using AI tools to fill in factual content in a way they don't understand, that should be easy enough to figure out by a competent reader or teacher.
AI writing assistance is here to stay, and if used wisely, AI language models can potentially speed up composition in a responsible and ethical way. Teachers may want to encourage responsible use and ask questions like: Does the writing reflect the intentions and knowledge of the writer? And can the human author vouch for every fact included?
A teacher who is also a subject matter expert could quiz students on the contents of their work afterward to see how well they understand it. Writing is not just a demonstration of knowledge but a projection of a person's reputation, and if the human author can't stand by every fact represented in the writing, AI assistance has not been used appropriately.
Like any tool, language models can be used poorly or used with skill. And that skill also depends on context: You can paint an entire wall with a paintbrush or create the Mona Lisa. Both scenarios are an appropriate use of the tool, but each demands different levels of human attention and creativity. Similarly, some rote writing tasks (generating standardized weather reports, perhaps) may be accelerated appropriately by AI, while more intricate tasks need more human care and attention. There's no black-or-white solution.
For now, Ethan Mollick told Ars Technica that despite panic from educators, he isn't convinced that anyone should use AI writing detectors. "I am not a technical expert in AI detection," Mollick said. "I can speak from the perspective of an educator working with AI to say that, as of now, AI writing is undetectable and likely to remain so, AI detectors have high false positive rates, and they should not be used as a result."
Read the original here:
Why AI detectors think the US Constitution was written by AI - Ars Technica
More than a quarter of UK adults have used generative AI, survey suggests – The Guardian
Adoption rate of latest AI systems exceeds that of voice-assisted smart speakers, with one in 10 using them at least once a day
Thu 13 Jul 2023 19.01 EDT
More than a quarter of UK adults have used generative artificial intelligence such as chatbots, according to a survey showing that 4 million people have also used it for work.
Generative AI, which refers to AI tools that produce convincing text or images in response to human prompts, has gripped the public imagination since the launch of ChatGPT in November.
The rate of adoption of the latest generation of AI systems exceeds that of voice-assisted speakers such as Amazon's Alexa, according to accounting group Deloitte, which published the survey.
Deloitte said 26% of 16- to 75-year-olds have used a generative AI tool, representing about 13 million people, with one in 10 of those respondents using it at least once a day.
"It took five years for voice-assisted speakers to achieve the same adoption levels. It is incredibly rare for any emerging technology to achieve these levels of adoption and frequency of usage so rapidly," said Paul Lee, a Deloitte partner.
The Deloitte survey of 4,150 UK adults found that just over half of the population had heard of generative AI, with around one in 10 respondents, the equivalent of approximately four million people, using it for work.
ChatGPT became a sensation due to its ability to generate human-seeming responses to a range of queries in different styles, producing articles, essays, jokes, poetry and job applications in response to text prompts.
It has been followed by Microsoft's Bing chatbot, which is based on the same system as ChatGPT, Google's Bard chatbot and, this week, Claude 2 from US firm Anthropic.
Image generators have also taken off, exemplified by a realistic-looking picture of Pope Francis in a puffer jacket, produced by US startup Midjourney.
However, the ability of such systems to mass-produce convincing text, images and even voice at scale has led to warnings that they could become tools for creating large-scale disinformation campaigns.
The Deloitte survey found that of those who had used generative AI, more than four out of 10 believe it always produces factually correct answers. One of the biggest flaws in generative AI systems so far is that they are prone to producing glaring factual errors.
"Generative AI technology is, however, still relatively nascent, with user interfaces, regulatory environment, legal status and accuracy still a work in progress," said Lee.
See the article here:
More than a quarter of UK adults have used generative AI, survey suggests - The Guardian
Indian AI Anchorwoman Lisa Introduces Herself To Viewers On Odisha TV – Deadline
India's Odisha TV has launched a news channel fronted by Lisa, its AI news anchor.
The Times of London reports that Lisa's opening words were:
"Warm greetings to everyone. Namaste, I am Lisa," before telling the audience this was a historic moment for journalism.
The channel's boss, Jagi Mangat Panda, called the moment a milestone in TV broadcasting and digital journalism and said Lisa's role would involve doing repetitive work so news people can focus on doing more creative work to bring better-quality news. Lisa will be able to deliver news in a variety of local Indian languages and bring election results and other fast-breaking stories to the audience at great speed.
India Today launched a female AI news anchor called Sana back in April. The figure presents during the 9pm slot every evening, reading headlines before being replaced on screen by a human presenter. The Times reports India Today boss Vibhor Gandotra as affirming that no job losses would result and that the public had reacted positively to her introduction. However, sceptics remain critical of the impact on reporting and chairing debates if such use of AI presenters is expanded.
Read more:
Indian AI Anchorwoman Lisa Introduces Herself To Viewers On Odisha TV - Deadline
AI will stop Hollywood actors from being mediocre, says Mission Impossible actor Simon Pegg – Vanguard
Abeokuta: An Abeokuta Customary Court sitting in Ake on Wednesday dissolved a three-year-old marriage between Mr Femi Olayiwole and his wife, Kemi, due to the absence of a vagina, deceit and frequent fighting.
Olayiwole told the court that his wife deceived him to marry her knowing that she could not bear him a child.
He accused his wife, who had failed to appear in court after being summoned several times, of living a false life, frequent fighting and threatening his life.
"My wife had been deceiving me since we got married. I have never seen her pass through menstruation. My wife does not have any vagina opening.
"Anytime I ask her for sex, she would give an excuse to back up her refusal. Meanwhile, we have been praying to God to give us children.
"My wife did not tell me anything about her condition before we got married, until February this year when she confessed to me that she had never experienced menstruation in her life.
"I thought she was lying, so I went to see her parents, who told me it was true, and that they thought their daughter had explained to me before we got married," Olayiwole told the court.
He pleaded with the court's president to dissolve the three-year-old marriage, which he said had nothing to show for it now or in the future.
The defendant was absent in spite of several summonses by the court.
The court's president, Mr Olalekan Akande, dissolved the marriage, saying that both parties had made up their minds to part ways.
Akande said that both parties were free to remarry anybody of their choice, adding that the document of the marriage dissolution should be sent to Kemi.
Read the original here:
AI will stop Hollywood actors from being mediocre Mission Impossible actor Simon Pegg - Vanguard
Sumitomo Mitsui executive sees AI as chance for Japan’s regrowth – The Japan Times
Generative artificial intelligence offers an opportunity for Japan to achieve regrowth, Jun Uchikawa, chief information officer at Sumitomo Mitsui Financial Group, said in a recent interview.
"The biggest challenge facing Japanese companies is the lack of talent and labor. Generative AI can resolve this," Uchikawa said.
The Japanese banking group in April started a trial use of generative AI based on the technology of the ChatGPT chatbot for searching information and creating documents.
View post:
Sumitomo Mitsui executive sees AI as chance for Japan's regrowth - The Japan Times
This week in tech: Alphabet and Musk get in the AI ring; Coinbase … – Investing.com
By Louis Juricic and Sarina Isaacs
Investing.com -- Here is your weekly Pro Recap on the biggest headlines out of tech this week: AI moves from Alphabet and Elon Musk; a Coinbase surge on a court win; and Salesforce's price hike.
Alphabet (NASDAQ:GOOGL) (NASDAQ:GOOG) stock bumped higher Thursday after it said it was rolling out its artificial-intelligence chatbot, Bard, in Europe and Brazil, as the tech giant looks to take the AI fight to rival ChatGPT.
Morgan Stanley said in a note that Google Search, which still makes up the bulk of Alphabet's revenue, will likely become "more personalized and develop critical competitive moats" as the tech giant invests further in AI.
The analyst also said Alphabet remains "in the best position to disrupt/improve its own business" via AI, noting:
It is still early in AI adoption, and it will likely require new innovation and tools to further accelerate adoption. This, in our view, should help GOOGL manage the user and behavior transition and minimize near-term impacts on revenue and monetization.
GOOGL shares climbed more than 6% for the week to $125.42.
Elon Musk got in the AI battle as well with the launch of his xAI artificial intelligence outfit Wednesday.
Musk has repeatedly issued warnings regarding AI in the past, and signed a letter in March that called the scramble for AI dominance "an out-of-control race to develop and deploy ever more powerful digital minds that no one - not even their creators - can understand, predict, or reliably control."
Musk said his plan for building safer AI includes rendering it "maximally curious" instead of attempting to program in morality, noting, "I think it is going to be pro-humanity from the standpoint that humanity is just much more interesting than not-humanity."
In addition to this new role, Musk also famously leads Tesla (NASDAQ:TSLA), SpaceX, and Twitter.
Coinbase (NASDAQ:COIN) shares soared Thursday after a federal court ruled that blockchain firm Ripple Labs did not violate federal securities law in selling its blockchain currency, XRP, on public exchanges.
After the news, Needham & Company kept Coinbase's Buy rating and raised its price target to $120 from the prior $70.
Needham said the summary judgment constituted "a positive read-through to COIN as it sets precedent that crypto token sales through exchanges, at least in the XRP case, did not violate securities laws. We believe this outcome should moderately de-risk the regulatory pressure on the stock."
The analyst also provided a recap of the summary judgment:
1. Inst. sales (i.e. initial XRP token sales): granted the SEC's motion that these sales violated securities laws.
2. Programmatic sales (secondary XRP sales on crypto exchanges): denied the SEC's motion; these sales did not constitute an investment contract.
3. Other non-cash distributions: denied the SEC's motion; these distributions did not have an exchange of money, thus did not qualify as an investment contract.
Research firm Berenberg, for its part, does not believe the rally is justified and argues that the ruling does not necessarily constitute a definitive victory for Coinbase. The firm maintained its Hold rating on the stock, as well as its $39 price target.
Coinbase shares finished the week up 33% to $105.31.
Salesforce (NYSE:CRM) shares advanced Tuesday after the company said it would hike list prices on its products starting next month, noting that this was the first increase in seven years.
Specifically, the company will charge an average of 9% more for Sales Cloud, Service Cloud, Marketing Cloud, Industries and Tableau.
Evercore ISI believes the move is reasonable and could provide a potential tailwind for earnings:
While there will clearly be some complaints from customers about the price increase, after a 7 year hiatus, we believe that a 9% increase is pretty reasonable given that other SaaS companies have passed through annual increases in the 4-5% range.
Needham & Company meanwhile hiked the stock's price target to $250 from the prior $230, arguing that the higher prices offer an "opportunity for top and bottom line benefit."
After Salesforce's roughly 4% climb on Tuesday, shares continued drifting higher and ultimately finished the week up 9.5% to $229.33.
Senad Karaahmetovic contributed to this report.
See more here:
This week in tech: Alphabet and Musk get in the AI ring; Coinbase ... - Investing.com
Associated Press, OpenAI partner to explore generative AI use in news – Reuters
July 13 (Reuters) - The Associated Press is licensing part of its archive of news stories to OpenAI under a deal that will explore generative AI's use in news, the companies said on Thursday, a move that could set the precedent for similar partnerships between the industries.
The news publisher will gain access to OpenAI's technology and product expertise as part of the deal, whose financial details were not disclosed.
AP also did not reveal how it would integrate OpenAI's technology in its news operations. The publisher already uses AI for automating corporate earnings reports, recapping sporting events and transcription for certain live events.
Its trove of news stories will help provide the massive amounts of data needed to train AI systems such as ChatGPT, which have dazzled consumers and businesses with their ability to plan vacations, summarize legal documents and write computer code.
News publications have, however, been slow to adopt the tech over concerns about its tendency to generate factually incorrect information, as well as challenges in differentiating between content produced by humans and computer programs.
"Generative AI is a fast-moving space with tremendous implications for the news industry," said Kristin Heitmann, AP's senior vice president and chief revenue officer.
"News organizations must have a seat at the table... so that newsrooms large and small can leverage this technology to benefit journalism."
Some outlets are already using generative AI for their content. BuzzFeed has announced that it will use AI to power personality quizzes on its site, and the New York Times used ChatGPT to create a Valentine's Day message generator this year.
AP's "feedback - along with access to their high-quality, factual text archive - will help to improve the capabilities and usefulness of OpenAI's systems," said Brad Lightcap, chief operating officer at OpenAI.
Reporting by Yuvraj Malik in Bengaluru; Editing by Pooja Desai and Maju Samuel
Our Standards: The Thomson Reuters Trust Principles.
Go here to read the rest:
Associated Press, OpenAI partner to explore generative AI use in news - Reuters
Comparing Responses from ChatGPT and China's AI Chatbot Ernie – The New York Times
How does the United States affect the situation in Taiwan?
Ernie ducked the question about China's zero-Covid restrictions, offering a lengthy description of the policy instead. When asked to recount the events of June 4, 1989, the chatbot rebooted itself. A message popped up on the reloaded interface:
How about we try a different topic?
The Chinese chatbot said Russia's president, Vladimir V. Putin, did not invade Ukraine, but conducted a military conflict. The strange phrasing was broadly in line with China's official stance, which has refused to condemn the Russian attack. On Taiwan, Ernie did not pull any punches:
The People's Liberation Army is ready for battle, will take all necessary measures and is determined to thwart external interference and Taiwan independence separatist attempts.
ChatGPT couldn't answer the questions on zero Covid or Russia because its knowledge base, the texts used to train the machine, cut off at September 2021. ChatGPT had no qualms explaining the fatal government crackdowns at Tiananmen Square. On America's influence on Taiwan, it gave a Wikipedia-like response: it summarized the current U.S. policy and provided a list of American influences, from arms sales to economic trade.
Next, we quizzed the two chatbots on current affairs and some miscellaneous trivia, and compared answers:
Who uttered the phrase "Let them eat cake"?
Who is the C.E.O. of Twitter?
Ernie, like all chatbots, sometimes made mistakes or made things up.
According to historical records, Louis XV often uttered this phrase when he ruled France at the end of the 18th century. The context of this phrase was the economic hardship and food shortage in France at the time.
Ernie's response sounded plausible, but it was wrong. ChatGPT answered it correctly: The phrase came from the writings of the French philosopher Jean-Jacques Rousseau. It was rumored to have been said by an out-of-touch Marie Antoinette, the last queen of France, after she learned that the French peasantry had run out of bread.
Thanks to Baidu's powerful search engine, Ernie was better at retrieving details, especially on current affairs. When asked who the C.E.O. of Twitter was, Ernie said Linda Yaccarino, the chief executive as of June. ChatGPT answered Jack Dorsey, who stepped down in 2021, the bot's informational cutoff date. OpenAI released a plug-in this year that enabled its chatbot to surf the web through Microsoft's Bing. But it retracted the feature on July 3, citing technical problems.
We asked Ernie a question that A.I. researchers have used to gauge a chatbots human-level intuitions:
Here we have a book, nine eggs, a laptop, a bottle and a nail. Please tell me how to stack them onto each other in a stable manner.
Ernie's answer required a stretch of the imagination. It placed the nine eggs on the book, then placed that on the laptop. So far so good. Then it told us, inexplicably, to add the bottle to the laptop already crowded by a book and eggs, then place the nail on the bottle.
Read more here:
Comparing Responses from ChatGPT and Chinas AI Chatbot Ernie - The New York Times
Generative AI imagines new protein structures | MIT News … – MIT News
Biology is a wondrous yet delicate tapestry. At the heart is DNA, the master weaver that encodes proteins, responsible for orchestrating the many biological functions that sustain life within the human body. However, our body is akin to a finely tuned instrument, susceptible to losing its harmony. After all, we're faced with an ever-changing and relentless natural world: pathogens, viruses, diseases, and cancer.
Imagine if we could expedite the process of creating vaccines or drugs for newly emerged pathogens. What if we had gene editing technology capable of automatically producing proteins to rectify DNA errors that cause cancer? The quest to identify proteins that can strongly bind to targets or speed up chemical reactions is vital for drug development, diagnostics, and numerous industrial applications, yet it is often a protracted and costly endeavor.
To advance our capabilities in protein engineering, MIT CSAIL researchers came up with FrameDiff, a computational tool for creating new protein structures beyond what nature has produced. The machine learning approach generates frames that align with the inherent properties of protein structures, enabling it to construct novel proteins independently of preexisting designs, facilitating unprecedented protein structures.
"In nature, protein design is a slow-burning process that takes millions of years. Our technique aims to provide an answer to tackling human-made problems that evolve much faster than nature's pace, says MIT CSAIL PhD student Jason Yim, a lead author on a new paper about the work. The aim, with respect to this new capacity of generating synthetic protein structures, opens up a myriad of enhanced capabilities, such as better binders. This means engineering proteins that can attach to other molecules more efficiently and selectively, with widespread implications related to targeted drug delivery and biotechnology, where it could result in the development of better biosensors. It could also have implications for the field of biomedicine and beyond, offering possibilities such as developing more efficient photosynthesis proteins, creating more effective antibodies, and engineering nanoparticles for gene therapy.
Framing FrameDiff
Proteins have complex structures, made up of many atoms connected by chemical bonds. The most important atoms that determine the protein's 3D shape are called the backbone, kind of like the spine of the protein. Every triplet of atoms along the backbone shares the same pattern of bonds and atom types. Researchers noticed this pattern can be exploited to build machine learning algorithms using ideas from differential geometry and probability. This is where the frames come in: Mathematically, these triplets can be modeled as rigid bodies called frames (common in physics) that have a position and rotation in 3D.
These frames equip each triplet with enough information to know about its spatial surroundings. The task is then for a machine learning algorithm to learn how to move each frame to construct a protein backbone. By learning to construct existing proteins, the algorithm hopefully will generalize and be able to create new proteins never seen before in nature.
Training a model to construct proteins via diffusion involves injecting noise that randomly moves all the frames and blurs what the original protein looked like. The algorithm's job is to move and rotate each frame until it looks like the original protein. Though simple, the development of diffusion on frames requires techniques in stochastic calculus on Riemannian manifolds. On the theory side, the researchers developed SE(3) diffusion for learning probability distributions that nontrivially connects the translations and rotations components of each frame.
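As a loose illustration of what "noising" frames means, the toy sketch below perturbs a stack of frames (each a rotation plus a translation) with random noise using SciPy. FrameDiff's actual SE(3) diffusion is defined on the rotation manifold with a proper noise schedule and learned score; the simple Gaussian perturbations here are only a simplified stand-in for the forward (noising) process.

```python
# Toy illustration of the "noising" step on backbone frames: each frame is a
# rigid transform (rotation + translation), and diffusion training perturbs
# both components. FrameDiff's real SE(3) diffusion uses a principled noise
# schedule on the manifold; these Gaussian perturbations are a stand-in.
import numpy as np
from scipy.spatial.transform import Rotation

rng = np.random.default_rng(0)

def noise_frames(rotations, translations, t):
    """Perturb frames by an amount that grows with the noise level t in [0, 1]."""
    # Translations: ordinary Gaussian noise in 3D.
    noisy_trans = translations + t * rng.normal(size=translations.shape)
    # Rotations: compose with small random rotations drawn as rotation vectors.
    rot_noise = Rotation.from_rotvec(t * rng.normal(size=(len(rotations), 3)))
    noisy_rots = rot_noise * Rotation.from_matrix(rotations)
    return noisy_rots.as_matrix(), noisy_trans

# Ten identity frames spaced along a line, then noised halfway along the schedule.
rots = np.stack([np.eye(3)] * 10)
trans = np.stack([np.array([i * 3.8, 0.0, 0.0]) for i in range(10)])
noisy_rots, noisy_trans = noise_frames(rots, trans, t=0.5)
print(noisy_trans[:3])
```

A generative model is then trained to reverse this corruption, moving and rotating the noised frames back toward a plausible protein backbone.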
The subtle art of diffusion
In 2021, DeepMind introduced AlphaFold2, a deep learning algorithm for predicting 3D protein structures from their sequences. When creating synthetic proteins, there are two essential steps: generation and prediction. "Generation" means the creation of new protein structures and sequences, while "prediction" means figuring out what the 3D structure of a sequence is. It's no coincidence that AlphaFold2 also used frames to model proteins. SE(3) diffusion and FrameDiff were inspired to take the idea of frames further by incorporating frames into diffusion models, a generative AI technique that has become immensely popular in image generation, like Midjourney, for example.
The shared frames and principles between protein structure generation and prediction meant the best models from both ends were compatible. In collaboration with the Institute for Protein Design at the University of Washington, SE(3) diffusion is already being used to create and experimentally validate novel proteins. Specifically, they combined SE(3) diffusion with RosettaFold2, a protein structure prediction tool much like AlphaFold2, which led to RFdiffusion. This new tool brought protein designers closer to solving crucial problems in biotechnology, including the development of highly specific protein binders for accelerated vaccine design, engineering of symmetric proteins for gene delivery, and robust motif scaffolding for precise enzyme design.
Future endeavors for FrameDiff involve improving generality to problems that combine multiple requirements for biologics such as drugs. Another extension is to generalize the models to all biological modalities including DNA and small molecules. The team posits that by expanding FrameDiff's training on more substantial data and enhancing its optimization process, it could generate foundational structures boasting design capabilities on par with RFdiffusion, all while preserving the inherent simplicity of FrameDiff.
"Discarding a pretrained structure prediction model [in FrameDiff] opens up possibilities for rapidly generating structures extending to large lengths," says Harvard University computational biologist Sergey Ovchinnikov. "The researchers' innovative approach offers a promising step toward overcoming the limitations of current structure prediction models. Even though it's still preliminary work, it's an encouraging stride in the right direction. As such, the vision of protein design playing a pivotal role in addressing humanity's most pressing challenges seems increasingly within reach, thanks to the pioneering work of this MIT research team."
Yim wrote the paper alongside Columbia University postdoc Brian Trippe, French National Center for Scientific Research in Paris' Center for Science of Data researcher Valentin De Bortoli, Cambridge University postdoc Emile Mathieu, and Oxford University professor of statistics and senior research scientist at DeepMind Arnaud Doucet. MIT professors Regina Barzilay and Tommi Jaakkola advised the research.
The team's work was supported, in part, by the MIT Abdul Latif Jameel Clinic for Machine Learning in Health, EPSRC grants and a Prosperity Partnership between Microsoft Research and Cambridge University, the National Science Foundation Graduate Research Fellowship Program, NSF Expeditions grant, Machine Learning for Pharmaceutical Discovery and Synthesis consortium, the DTRA Discovery of Medical Countermeasures Against New and Emerging threats program, the DARPA Accelerated Molecular Discovery program, and the Sanofi Computational Antibody Design grant. This research will be presented at the International Conference on Machine Learning in July.
See the original post:
Generative AI imagines new protein structures | MIT News ... - MIT News