Category Archives: AI
BCSO using AI to track retail crime in the metro – KOB 4
ALBUQUERQUE, N.M. – Whether it's gunshot detection or car break-ins, it seems like artificial intelligence is everywhere, and now the Bernalillo County Sheriff's Office is using it to track retail crime.
The department's most recent retail crime initiative was rolled out on Tuesday, and it focuses on partnering with business owners.
AI technology is used to gather information on cars driving into business parking lots and to check whether they are associated with any crime.
"We want to make sure that we are partnering with businesses with the technology we are bringing into the office and in Bernalillo County, to make sure that we can expand what we are doing with our Axon body-worn cameras and our license plate readers," said Bernalillo County Sheriff John Allen.
The department is using Flock Safety but is also looking at other companies it can partner with to expand how it uses AI.
"To use an example, Coronado Mall: let's say you had somebody that did retail crime numerous times in the same vehicle. It will alert us, saying that vehicle has a criminal trespass warning with that company and they need to be off of that property, so the business and us are alerted," said Allen.
He said the program expands on their current license plate system and scans vehicles as they come into business parking lots.
"It really emphasizes to have an unbiased investigation. We don't want profiling or things of that nature to happen. We just don't want to focus on one license plate; we want to focus on a description of vehicles, because we know there are some commonalities in the retail crime we are seeing in Bernalillo County," he said.
Four businesses have already signed up for the initiative that was launched this week. It's been a combination of local shops and big-box stores.
The Bernalillo County Sheriff said any business owner that's interested in this technology can reach out to the department. They're also looking to partner with other agencies on this initiative.
Continued here:
Artificial Intelligence (AI) Software Revenue Is Zipping Toward $14 … – The Motley Fool
Artificial intelligence (AI) promises to improve productivity in many different industries, potentially doubling the output of the average knowledge worker by the end of the decade. Meanwhile, the falling cost of training AI models is making the technology ever more accessible. The intersection of those trends could trigger a demand boom in the coming years.
Indeed, Cathie Wood's Ark Invest says AI software revenue will hit $14 trillion by 2030, up from $1 trillion in 2021, as enterprises chase efficiency. Many companies will undoubtedly benefit from the boom, but Microsoft (MSFT 0.34%) and Datadog (DDOG 0.40%) are particularly well positioned to capitalize on the growing demand for AI software.
Here's what investors should know about these AI growth stocks.
Microsoft announced solid financial results for the June quarter, topping consensus estimates on the top and bottom lines. Revenue rose 8% to $56.2 billion, driven by double-digit growth in enterprise software (e.g., Microsoft 365, Dynamics 365) and Azure cloud services, and generally accepted accounting principles (GAAP) earnings jumped 21% to $2.69 per diluted share as cost-cutting efforts paid off. But the company may be able to accelerate growth in future quarters.
The investment thesis is simple: Microsoft is the gold standard in enterprise software, and Microsoft Azure is the second-largest cloud services provider in the world. In both segments, the company aims to turbocharge growth by leaning into artificial intelligence (AI), and its exclusive partnership with ChatGPT creator OpenAI should be a significant tailwind. Indeed, Morgan Stanley analyst Keith Weiss says Microsoft is the software company "best positioned" to monetize generative AI.
In enterprise software, Microsoft accounted for 16.4% of global software-as-a-service (SaaS) revenue last year, earning nearly twice as much as its closest competitor, and industry experts have recognized its leadership in several quickly growing SaaS verticals, including office productivity, communications, cybersecurity, and enterprise resource planning (ERP) software. All four markets are expected to grow at a double-digit pace through 2030, according to Grand View Research.
In cloud computing, Microsoft Azure accounted for 23% of cloud infrastructure and platform services revenue in the first quarter of 2023, up from 21% one year ago, 19% two years ago, and 17% three years ago. Those consistent market share gains reflect strength in hybrid computing, AI supercomputing infrastructure, and AI developer services, according to CEO Satya Nadella, and they hint at strong growth in the coming years. The cloud computing market is expected to increase at 14.1% annually through 2030.
In AI software, Microsoft recently announced Microsoft 365 Copilot and Dynamics 365 Copilot, products that lean on generative AI to automate a variety of business processes and workflows. For instance, Microsoft 365 Copilot can draft emails in Outlook, analyze data in Excel, and create presentations in PowerPoint. Similarly, Azure OpenAI Services empowers developers to build cutting-edge generative AI software by connecting them with prebuilt AI models from OpenAI, including the GPT family of large language models.
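To give a rough sense of what that developer workflow looks like, here is a minimal sketch, not an official Microsoft sample, of calling a chat model deployed through Azure OpenAI using the 2023-era `openai` Python package; the endpoint, key, API version string, and deployment name below are placeholders that would differ for a real account.

```python
# Minimal sketch (placeholders throughout): calling a GPT chat deployment
# exposed through Azure OpenAI with the pre-1.0 "openai" Python package.
import openai

openai.api_type = "azure"
openai.api_base = "https://YOUR-RESOURCE-NAME.openai.azure.com/"  # your Azure resource endpoint
openai.api_version = "2023-05-15"                                 # API version current in mid-2023
openai.api_key = "YOUR-AZURE-OPENAI-KEY"

response = openai.ChatCompletion.create(
    engine="gpt-35-turbo",  # the name you gave the model deployment in Azure
    messages=[
        {"role": "system", "content": "You are a helpful assistant for a retail business."},
        {"role": "user", "content": "Draft a short reply to a customer asking about returns."},
    ],
    temperature=0.7,
)

print(response["choices"][0]["message"]["content"])
```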
Currently, shares trade at 11.9 times sales, a slight premium to the three-year average of 11.3 times sales, but a reasonable price to pay for a high-quality AI growth stock like Microsoft.
Datadog has yet to release results for the June quarter, but the company turned in a solid financial report for the March quarter. Its customer count rose 29%, and the average customer spent over 30% more, despite a broader pullback in business IT investments. In turn, revenue climbed 33% to $482 million, and non-GAAP net income jumped 17% to $0.28 per diluted share.
Going forward, the investment thesis centers on digital transformation: Datadog provides observability and cybersecurity software that helps clients resolve performance issues and security threats across their applications, networks, and infrastructure. Demand for such products should snowball in the years ahead, as IT environments are made more complex by cloud migrations and other digital transformation projects.
Datadog has distinguished itself as a leader in several observability software categories, including application performance monitoring, network monitoring, log monitoring, and AI for IT operations. Industry experts attribute that success to its broad product portfolio, robust innovation pipeline, and data science capabilities. Indeed, Datadog brings together more than two dozen monitoring products on a single platform, and it leans on AI to automate tasks like anomaly detection, incident alerts, and root cause analysis.
Looking ahead, Datadog says its addressable market will reach $62 billion by 2026, and any trend that adds complexity to corporate IT environments should be a tailwind. For instance, Wolfe Research analyst Alex Zukin believes interest in generative AI could help Datadog become "the fastest-growing software company."
Currently, shares trade at 19.8 times sales, a bargain compared to the three-year average of 36.6 times sales. At that price, risk-tolerant investors should feel comfortable buying a few shares of this growth stock.
See the rest here:
Artificial Intelligence (AI) Software Revenue Is Zipping Toward $14 ... - The Motley Fool
Using AI to protect against AI image manipulation | MIT News … – MIT News
As we enter a new era where technologies powered by artificial intelligence can craft and manipulate images with a precision that blurs the line between reality and fabrication, the specter of misuse looms large. Recently, advanced generative models such as DALL-E and Midjourney, celebrated for their impressive precision and user-friendly interfaces, have made the production of hyper-realistic images relatively effortless. With the barriers to entry lowered, even inexperienced users can generate and manipulate high-quality images from simple text descriptions, ranging from innocent image alterations to malicious changes. Techniques like watermarking offer a promising solution, but misuse requires a preemptive (as opposed to only post hoc) measure.
In the quest to create such a measure, researchers from MIT's Computer Science and Artificial Intelligence Laboratory (CSAIL) developed "PhotoGuard," a technique that uses perturbations (minuscule alterations in pixel values, invisible to the human eye but detectable by computer models) to disrupt a model's ability to manipulate the image.
PhotoGuard uses two different attack methods to generate these perturbations. The more straightforward "encoder" attack targets the image's latent representation in the AI model, causing the model to perceive the image as a random entity. The more sophisticated "diffusion" attack defines a target image and optimizes the perturbations to make the final image resemble the target as closely as possible.
"Consider the possibility of fraudulent propagation of fake catastrophic events, like an explosion at a significant landmark. This deception can manipulate market trends and public sentiment, but the risks are not limited to the public sphere. Personal images can be inappropriately altered and used for blackmail, resulting in significant financial implications when executed on a large scale," says Hadi Salman, an MIT graduate student in electrical engineering and computer science (EECS), affiliate of MIT CSAIL, and lead author of a new paper about PhotoGuard.
In more extreme scenarios, these models could simulate voices and images for staging false crimes, inflicting psychological distress and financial loss. The swift nature of these actions compounds the problem. Even when the deception is eventually uncovered, the damage, whether reputational, emotional, or financial, has often already happened. This is a reality for victims at all levels, from individuals bullied at school to society-wide manipulation.
PhotoGuard in practice
AI models view an image differently from how humans do. A model sees an image as a complex set of mathematical data points that describe every pixel's color and position; this is the image's latent representation. The encoder attack introduces minor adjustments into this mathematical representation, causing the AI model to perceive the image as a random entity. As a result, any attempt to manipulate the image using the model becomes nearly impossible. The changes introduced are so minute that they are invisible to the human eye, thus preserving the image's visual integrity while ensuring its protection.
The second and decidedly more intricate "diffusion" attack strategically targets the entire diffusion model end-to-end. This involves determining a desired target image and then initiating an optimization process with the intention of closely aligning the generated image with this preselected target.
In implementing this, the team created perturbations within the input space of the original image. These perturbations are then applied to the images during the inference stage, offering a robust defense against unauthorized manipulation.
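For readers who want a concrete picture, the following is an illustrative sketch of an encoder-style immunization in PyTorch, not the authors' released code: it runs projected gradient descent on a small pixel perturbation so that a stand-in image encoder maps the protected image to an uninformative latent. In a real setup the encoder would be the latent encoder of the diffusion model being targeted, and the diffusion attack would swap this latent-space loss for one defined on the model's final output.

```python
# Illustrative sketch only (not the released PhotoGuard code): an encoder-style
# immunization via projected gradient descent. The "encoder" below is a small
# stand-in network; a real setup would use the image encoder of the diffusion
# model being protected against.
import torch
import torch.nn as nn

encoder = nn.Sequential(                      # placeholder for the model's image encoder
    nn.Conv2d(3, 8, 3, stride=2, padding=1),
    nn.ReLU(),
    nn.Conv2d(8, 4, 3, stride=2, padding=1),
)
for p in encoder.parameters():                # freeze the stand-in encoder
    p.requires_grad_(False)

def immunize(image, eps=8 / 255, step=1 / 255, iters=40):
    """Add a perturbation (L-infinity norm <= eps) that drags the image's
    latent representation toward an uninformative (all-zero) target."""
    target = torch.zeros_like(encoder(image))          # the "random entity" target latent
    delta = torch.zeros_like(image, requires_grad=True)
    for _ in range(iters):
        loss = ((encoder(image + delta) - target) ** 2).mean()
        loss.backward()
        with torch.no_grad():
            delta -= step * delta.grad.sign()                 # descend on the latent-space loss
            delta.clamp_(-eps, eps)                           # project back into the budget
            delta.copy_((image + delta).clamp(0, 1) - image)  # keep pixel values valid
        delta.grad.zero_()
    return (image + delta).detach()

protected = immunize(torch.rand(1, 3, 64, 64))          # toy 64x64 RGB image
```

Because this variant computes its loss in latent space, it is comparatively cheap; the end-to-end diffusion attack described above optimizes through the full generation process, which is why it demands far more GPU memory.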
"The progress in AI that we are witnessing is truly breathtaking, but it enables beneficial and malicious uses of AI alike," says MIT professor of EECS and CSAIL principal investigator Aleksander Madry, who is also an author on the paper. "It is thus urgent that we work towards identifying and mitigating the latter. I view PhotoGuard as our small contribution to that important effort."
The diffusion attack is more computationally intensive than its simpler sibling, and requires significant GPU memory. The team says that approximating the diffusion process with fewer steps mitigates the issue, thus making the technique more practical.
To better illustrate the attack, consider an art project, for example. The original image is a drawing, and the target image is another drawing that's completely different. The diffusion attack is like making tiny, invisible changes to the first drawing so that, to an AI model, it begins to resemble the second drawing. However, to the human eye, the original drawing remains unchanged.
By doing this, any AI model attempting to modify the original image will now inadvertently make changes as if dealing with the target image, thereby protecting the original image from intended manipulation. The result is a picture that remains visually unaltered for human observers, but protects against unauthorized edits by AI models.
For a real example with PhotoGuard, consider an image with multiple faces. You could mask any faces you don't want to modify, and then prompt the model with "two men attending a wedding." Upon submission, the system will adjust the image accordingly, creating a plausible depiction of two men participating in a wedding ceremony.
Now, consider safeguarding the image from being edited; adding perturbations to the image before upload can immunize it against modifications. In this case, the final output will lack realism compared to the original, non-immunized image.
All hands on deck
Key allies in the fight against image manipulation are the creators of the image-editing models, says the team. For PhotoGuard to be effective, an integrated response from all stakeholders is necessary. "Policymakers should consider implementing regulations that mandate companies to protect user data from such manipulations. Developers of these AI models could design APIs that automatically add perturbations to users' images, providing an added layer of protection against unauthorized edits," says Salman.
Despite PhotoGuard's promise, it's not a panacea. Once an image is online, individuals with malicious intent could attempt to reverse engineer the protective measures by applying noise, cropping, or rotating the image. However, there is plenty of previous work from the adversarial examples literature that can be utilized here to implement robust perturbations that resist common image manipulations.
"A collaborative approach involving model developers, social media platforms, and policymakers presents a robust defense against unauthorized image manipulation. Working on this pressing issue is of paramount importance today," says Salman. "And while I am glad to contribute towards this solution, much work is needed to make this protection practical. Companies that develop these models need to invest in engineering robust immunizations against the possible threats posed by these AI tools. As we tread into this new era of generative models, let's strive for potential and protection in equal measures."
"The prospect of using attacks on machine learning to protect us from abusive uses of this technology is very compelling," says Florian Tramèr, an assistant professor at ETH Zürich. "The paper has a nice insight that the developers of generative AI models have strong incentives to provide such immunization protections to their users, which could even be a legal requirement in the future. However, designing image protections that effectively resist circumvention attempts is a challenging problem: Once the generative AI company commits to an immunization mechanism and people start applying it to their online images, we need to ensure that this protection will work against motivated adversaries who might even use better generative AI models developed in the near future. Designing such robust protections is a hard open problem, and this paper makes a compelling case that generative AI companies should be working on solving it."
Salman wrote the paper alongside fellow lead authors Alaa Khaddaj and Guillaume Leclerc MS '18, as well as Andrew Ilyas '18, MEng '18; all three are EECS graduate students and MIT CSAIL affiliates. The team's work was partially done on the MIT Supercloud compute cluster, supported by U.S. National Science Foundation grants and Open Philanthropy, and based upon work supported by the U.S. Defense Advanced Research Projects Agency. It was presented at the International Conference on Machine Learning this July.
More here:
Using AI to protect against AI image manipulation | MIT News ... - MIT News
AI companies aren't afraid of regulation – we want it to be international and inclusive – The Guardian
Opinion
If our industry is to avoid superficial ethics-washing, historically excluded communities must be brought into the conversation
AI is advancing at a rapid pace, bringing with it potentially transformative benefits for society. With discoveries such as AlphaFold, for example, we're starting to improve our understanding of some long-neglected diseases, with 200m protein structures made available at once, a feat that previously would have required four years of doctorate-level research for each protein and prohibitively expensive equipment. If developed responsibly, AI can be a powerful tool to help us deliver a better, more equitable future.
However, AI also presents challenges. From bias in machine learning used for sentencing algorithms to misinformation, irresponsible development and deployment of AI systems pose the risk of great harm. How can we navigate these incredibly complex issues to ensure AI technology serves our society and not the other way around?
First, it requires all those involved in building AI to adopt and adhere to principles that prioritise safety while also pushing the frontiers of innovation. But it also requires that we build new institutions with the expertise and authority to responsibly steward the development of this technology.
The technology sector often likes straightforward solutions, and institution-building may seem like one of the hardest and most nebulous paths to go down. But if our industry is to avoid superficial ethics-washing, we need concrete solutions that engage with the reality of the problems we face and bring historically excluded communities into the conversation.
To ensure the market seeds responsible innovation, we need the labs building innovative AI systems to establish proper checks and balances to inform their decision-making. When the language models first burst on to the scene, it was Google DeepMind's institutional review committee, an interdisciplinary panel of internal experts tasked with pioneering responsibly, that decided to delay the release of our new paper until we could pair it with a taxonomy of risks that should be used to assess models, despite industry-wide pressure to be on top of the latest developments.
These same principles should extend to investors funding newer entrants. Instead of bankrolling companies that prioritise novelty over safety and ethics, venture capitalists (VCs) and others need to incentivise bold and responsible product development. For example, the VC firm Atomico, at which I am an angel investor, insists on including diversity, equality and inclusion, and environmental, social and governance requirements in the term sheets for every investment it makes. These are the types of behaviours we want those leading the field to set.
We are also starting to see convergence across the industry around important practices such as impact assessments and involving diverse communities in development, evaluation and testing. Of course, there is still a long way to go. As a woman of colour, I'm acutely aware of what this means for a sector where people like me are underrepresented. But we can learn from the cybersecurity community.
Decades ago, they started offering bug bounties: a financial reward to researchers who could identify a vulnerability or bug in a product. Once reported, the companies had an agreed time period during which they would address the bug and then publicly disclose it, crediting the bounty hunters. Over time, this has developed into an industry norm called responsible disclosure. AI labs are now borrowing from this playbook to tackle the issue of bias in datasets and model outputs.
Lastly, advancements in AI present a challenge to multinational governance. Guidance at the local level is one part of the equation, but so too is international policy alignment, given the opportunities and risks of AI won't be limited to any one country. Proliferation and misuse of AI has woken everyone up to the fact that global coordination will play a crucial role in preventing harm and ensuring common accountability.
Laws are only effective, however, if they are future-proof. That's why it's crucial for regulators to consider not only how to regulate chatbots today, but also how to foster an ecosystem where innovation and scientific acceleration can benefit people, providing outcome-driven frameworks for tech companies to work within.
Unlike nuclear power, AI is more general and broadly applicable than other technologies, so building institutions will require access to a broad set of skills, diversity of background and new forms of collaboration including scientific expertise, socio-technical knowledge, and multinational public-private partnerships. The recent Atlantic declaration between the UK and US is a promising start toward ensuring that standards in the industry have a chance of scaling into multinational law.
In a world that is politically trending toward nostalgia and isolationism, multilayered approaches to good governance that involve government, tech companies and civil society will never be the headline-grabbing or popular path to solving the challenges of AI. But the hard, unglamorous work of building institutions is critical for enabling technologists to build toward a better future together.
Go here to read the rest:
AI companies aren't afraid of regulation – we want it to be international and inclusive - The Guardian
AI-enhanced images a threat to democratic processes, experts warn – The Guardian
Artificial intelligence (AI)
Call for action comes after Labour MP shared a digitally manipulated image of Rishi Sunak on social media
Experts have warned that action needs to be taken on the use of artificial intelligence-generated or enhanced images in politics after a Labour MP apologised for sharing a manipulated image of Rishi Sunak pouring a pint.
Karl Turner, the MP for Hull East, shared an image on the rebranded Twitter platform, X, showing the prime minister pulling a sub-standard pint at the Great British Beer Festival while a woman looks on with a derisive expression. The image had been manipulated from an original photo in which Sunak appears to have pulled a pub-level pint while the person behind him has a neutral expression.
The image brought criticism from the Conservatives, with the deputy prime minister, Oliver Dowden, calling it "unacceptable".
"I think that the Labour leader should disown this and Labour MPs who have retweeted this or shared this should delete the image, it is clearly misleading," Dowden told LBC on Thursday.
Experts warned the row was an indication of what could happen during what is likely to be a bitterly fought election campaign next year. While it was not clear whether the image of Sunak had been manipulated using an AI tool, such programs have made it easier and quicker to produce convincing fake text, images and audio.
Wendy Hall, a regius professor of computer science at the University of Southampton, said: "I think the use of digital technologies including AI is a threat to our democratic processes. It should be top of the agenda on the AI risk register, with two major elections in the UK and the US looming large next year."
Shweta Singh, an assistant professor of information systems and management at the University of Warwick, said: "We need a set of ethical principles which can assure and reassure the users of these new technologies that the news they are reading is trustworthy.
"We need to act on this now, as it is impossible to imagine fair and impartial elections if such regulations don't exist. It's a serious concern and we are running out of time."
Prof Faten Ghosn, the head of the department of government at the University of Essex, said politicians should make it clear to voters when they are using manipulated images. She flagged efforts to regulate the use of AI in politics by the US congresswoman Yvette Clarke, who is proposing a law change that would require political adverts to tell voters if they contain AI-generated material.
"If politicians use AI in any form they need to ensure that it carries some kind of mark that informs the public," said Ghosn.
The warnings contribute to growing political concern over how to regulate AI. Darren Jones, the Labour chair of the business select committee, tweeted on Wednesday: "The real question is: how can anyone know if a photo is a deepfake? I wouldn't criticise @KarlTurnerMP for sharing a photo that looks real to me."
In reply to criticism from the science secretary, Michelle Donelan, he added: "What is your department doing to tackle deepfake photos, especially in advance of the next election?"
The science department is consulting on its AI white paper, which was published earlier this year and advocates general principles to govern technology development, rather than specific curbs or bans on certain products. Since that was published, however, Sunak has shifted his rhetoric on AI from talking mostly about the opportunities it will present to warning that it needs to be developed with guardrails.
Meanwhile, the most powerful AI companies have acknowledged the need for a system to watermark AI-generated content. Last month Amazon, Google, Meta, Microsoft and ChatGPT developer OpenAI agreed to a set of new safeguards in a meeting with Joe Biden that included using watermarking for AI-made visual and audio content.
In June Microsoft's president, Brad Smith, warned that governments had until the beginning of next year to tackle the issue of AI-generated disinformation. "We do need to sort this out, I would say by the beginning of the year, if we are going to protect our elections in 2024," he said.
View original post here:
AI-enhanced images a threat to democratic processes, experts warn - The Guardian
Meta Will Let You Chat With AI Bots With Personalities, Report Says – CNET
Meta, the parent company of Facebook, Instagram and now Threads, plans to launch AI chatbots with a variety of personalities, the Financial Times reported Tuesday. The chatbots, reportedly called personas, could expand the company's social networks with a range of new online tools and entertainment options.
The company could announce the chatbots as soon as September, the report said. Meta will offer the chatbots to improve search and recommendations, like travel advice in the style of a surfer, and to give people an online personality that's fun to "play with," the report said. One such AI persona the company tried building is a digital incarnation of President Abraham Lincoln.
If successful, the AI chatbots could help keep the 4 billion people who use Meta services each month more engaged, addressing a major Meta challenge as growth becomes harder and rivals such as TikTok draw people's attention elsewhere. Meta consolidated its artificial intelligence efforts earlier this year to "turbocharge" its work and build better "creative and expressive tools," Chief Executive Mark Zuckerberg said at the time.
AI chatbots also could provide the company with a new wealth of personal information useful for targeting advertisements, Meta's main revenue source. Search engines already craft ads based on the information you type into them, but AI chatbots could capture a new dimension of people's interests and attributes for more detailed profiling. Privacy is one of Meta's biggest challenges, and regulators already have begun eyeing AI warily.
Meta declined to comment.
AI chatbots, exemplified by OpenAI's ChatGPT, have become vastly more useful and engaging. Their use of large language models trained on vast swaths of the internet gives them a vastly greater ability to understand human text and offer helpful responses to our questions and conversation.
Chatbots are not without risks. They're prone to fabricating plausible but bogus responses, a phenomenon called hallucination, and can have a hard time with facts. LLM creators often hire "red teams" to try to discover and thwart potential abuses, like people using LLMs for sexual or violent purposes. But the area of AI security and abuse is new, and researchers are finding new ways to evade LLM restrictions as they dig into the area.
Many ChatGPT rivals are available already, including Anthropic's Claude 2, Microsoft's Bing and Google's Bard. Such tools are often available for use by other software and services, letting direct Meta rivals like Snap offer chatbots of their own. So getting ahead simply by offering an AI chatbot doesn't guarantee success.
Facebook has billions of users, though, and deep AI expertise. In July it released its own Llama 2 large language model.
Editors' note: CNET is using an AI engine to help create some stories. For more, see this post.
See more here:
Meta Will Let You Chat With AI Bots With Personalities, Report Says - CNET
Google's AI Search Generative Experience is getting video and … – The Verge
Google's AI-powered Search Generative Experience is getting a big new feature: images and video. If you've enabled the AI-based SGE feature in Search Labs, you'll now start to see more multimedia in the colorful summary box at the top of your search results. Google's also working on making that summary box appear faster and adding more context to the links it puts in the box.
SGE may still be in the experiment phase, but it's very clearly the future of Google Search. "It really gives us a chance to, now, not always be constrained in the way search was working before," CEO Sundar Pichai said on Alphabet's most recent earnings call. "It allows us to think outside the box." He then said that over time, this will just be how search works.
The SGE takeover raises huge, thorny questions about the very future of the web, but it's also just a tricky product to get right. Google is no longer simply trying to find good links for you every time you search; it's trying to synthesize and generate relevant, true, helpful information. Video in particular could go a long way here: Google has integrated YouTube more and more into search results over the years, linking to a specific chapter or moment inside a video that might help you with that "why is my dryer making that noise" query.
You can already see the publish dates and images starting to show up in SGE summaries. Image: Google / David Pierce
Surfacing and contextualizing links is also still going to be crucial for Google if SGE is going to work. It's now going to display publish dates next to the three articles in the summary box in an effort "to help you better understand how recent the information is from these web pages," Google said in a blog post announcing the new features. 9to5Google also noticed Google experimenting with adding in-line links to the AI summary, though so far, that appears to have just been a test. Finding the right balance between giving you the information you were looking for and helping you find it yourself (and all the implications of both those outcomes) is forever one of the hardest problems within Google Search.
Making SGE faster is also going to take Google a while. All these large language model-based tools, from SGE and Bing to ChatGPT and Bard, take a few seconds to generate answers to your questions, and in the world of search, every millisecond matters. In June, Google said it had cut the loading time in half, though I've been using SGE for a few months, and I can't say I've noticed a big difference before and after. SGE is still too slow. It's always the last thing to load on the page by a wide margin.
Still, I've been consistently impressed with how useful SGE is in my searches. It's particularly handy for all the "where should I go" and "what should I watch" types of questions, where there's no right answer but I'm just looking for ideas and options. Armed with more sources, more media, and more context, SGE might start to usurp the 10 blue links even further.
Here is the original post:
Googles AI Search Generative Experience is getting video and ... - The Verge
AI for all? Google ups the ante with free UK training courses for firms – The Guardian
Artificial intelligence (AI)
US tech giant starts charm offensive on artificial intelligence with basic courses to help firms understand and exploit emerging phenomenon
A larger-than-life Michelle Donelan beams on to a screen in Google's London headquarters. The UK science and innovation secretary is appearing via video to praise the US tech behemoth for its plans to equip workers and bosses with basic skills in artificial intelligence (AI).
"The recent explosion in the use of AI tools like ChatGPT and Google's Bard show that we are on the cusp of a new and exciting era in artificial intelligence, and it is one that will dramatically improve people's lives," says Donelan. "Google's ambitious training programme is so important and exceptional in its breadth," she gushes in a five-minute video, filmed in her ministerial office.
Welcome to the AI arms race, where nations are bending over backwards to attract cash and research into the nascent technology. Google's move is "a vote of confidence in the UK, supporting the government's aim to make the UK both the intellectual home and the geographical home of AI," says Donelan.
Few countries have been more accommodating than the UK, with Donelan's tone underlining the red carpet treatment given by Rishi Sunak's government to tech firms and his desire to lure AI companies in particular.
Google's educational courses cover the basics of AI, which it says will help individuals, businesses and organisations to gain skills in the emerging technology.
The tuition consists of 10 modules on a variety of topics, in the form of 45-minute presentations, two of which, covering growing productivity and understanding machine learning, are already available.
The courses are rudimentary: they cover the basics of AI and Google says they do not require any prior technological knowledge.
About 50 people, including small business owners, attended the first course at Google's King's Cross offices in London last week, just across the road from where its monolithic £1bn new UK HQ, complete with rooftop exercise trail and pool, is being built.
The UK, home to Google's AI research subsidiary DeepMind, is the launchpad for its new training, but the company said it expected to roll it out to other countries in the future. Co-founded in 2011 by Demis Hassabis, a child chess prodigy, DeepMind was sold to Google for £400m in 2014 and now leads Google's AI development under the new Google DeepMind title. It has increasingly embedded itself into the machinery of the state, from controversially partnering with the NHS to try to build apps to help doctors monitor kidney infections, to Hassabis advising the government during the Covid-19 pandemic.
The first sessions are the latest addition to the digital skills training offered by the company in the UK since 2015, accessed by 1m people.
"We see a cry for more training in the AI space specifically," Debbie Weinstein, the managing director of Google UK and Ireland, tells the Guardian.
"We are hearing this need from people and at the same time we hear from businesses that they are looking for people with digital skills that can help them."
Google's pitch is that AI could increase productivity for businesses, including by taking care of time-consuming administrative tasks. It cites a recent economic impact report, compiled for Google by the market research firm Public First, which estimated that AI could add £400bn in economic value to the UK by 2030, through harnessing innovation powered by AI.
The company said the report also highlighted a lack of tech skills in the UK, which could hold back growing businesses.
But there is little mention of any of the feared downsides of AI, including the impact on huge swathes of the economy by making roles redundant. Those attending the inaugural presentations appear more keen to know basics, such as whether AI can help with tasks including responding to emails and booking appointments.
The charm offensive by Google may also highlight deep unease about the breakneck pace of AI expansion and its potential to completely upend the world of work, and the Silicon Valley company's nervousness over any backlash.
Google and other tech firms, including Microsoft, Amazon and Meta, are working feverishly to develop AI tools, all hoping to steal a march on rivals in what some believe is a winner-takes-all competition with unlimited earnings potential.
Google launched its Bard chatbot in the US and UK in March, its answer to OpenAI's ChatGPT and Microsoft's Bing Chat, a service which is capable of answering detailed questions, giving creative answers and engaging in conversations. Facebook's parent company Meta has recently released an open-source version of an AI model, Llama 2.
A recent report by the Organisation for Economic Co-operation and Development (OECD) warned that AI-driven automation could trigger mass job losses across skilled professions such as law, medicine and finance, with highly skilled jobs facing the biggest threat of upheaval.
Others are concerned that profit-maximising private tech companies are expanding apace in a fledgling sector where there is now no regulation, with echoes of the early days of the internet, when the land grab by tech companies left regulators and ministers trailing in their wake and eventually forcing a belated reckoning for social media giants.
Dr Andrew Rogoyski, of the Institute for People-Centred Artificial Intelligence at the University of Surrey, says Google's training drive is unlikely to be motivated by altruism. "Making free training available makes absolute sense," he says. "If you use one company's training material, you're more likely to use their AI platform."
Rogoyski adds that tech firms of all sizes are offering educational courses.
"I think a lot of businesses are struggling at the moment with the feeling that they should be doing something with AI and not knowing where to start," he says.
"I would like to see more warnings, the things that businesses should be aware of when looking at AI, [that] it's not just about technical and coding skills to knock something up that you can push out to your website."
He also wants companies to be aware of potential pitfalls.
"There are much more impactful issues that people need to think about, such as privacy, security, data bias, all of the concerns and limitations that you might feel are being glossed over if [tech firms] are pushing us to try AI and start tinkering."
Politicians are waking up to the risks of AI. Labour's digital spokesperson, Lucy Powell, recently said the UK should bar technology developers from working on advanced AI tools unless they have a licence to do so. Powell suggested AI should be licensed in a similar way to medicines or nuclear power, both of which are governed by arm's-length governmental bodies. But both main parties are captivated by the potential prize: Sir Keir Starmer recently held a shadow cabinet meeting at Google's London office, and the Labour leader and Sunak focused on AI in their recent London Tech Week speeches.
Globally, governments, including the UK's, are working out how they can reap the benefits of tech firms like Google upskilling the workforce, at the same time as they are hoping to rein in those very firms.
Sunak has changed his tone on AI in the past couple of months, and is now planning to host a global summit on safety in the nascent technology, as he aims to position the UK as the international hub for its regulation.
The sudden adoption of AI chatbots and other tools is worrying managers in the UK, leaving them fearful about potential job losses triggered by the technology, as well as the associated risks to security and privacy.
Two in five managers (43%) told the Chartered Management Institute (CMI) they were concerned that jobs in their organisations will be at risk from AI technologies, while fewer than one in 10 (7%) managers said employees in their organisation were adequately trained on AI, even on high-profile tools like ChatGPT.
Anthony Painter, the CMI's director of policy, who met a group of Google executives and small business representatives on the sidelines of the training launch, says that AI brings "huge opportunity, but also huge risks, and we have to take time to get that right".
"The practical skills necessary to adopt AI aren't where they need to be [among businesses]," he says. "But we don't have the regulatory structure to do that effectively, and it might not be bad to have a bit of a go-slow while we think through regulation, ethics and skills in practical terms."
Read the original:
AI for all? Google ups the ante with free UK training courses for firms - The Guardian
Citi stays positive on A.I. theme and lays out the key to finding … – CNBC
The early innings of the artificial intelligence trade may be over, but Citigroup is staying positive on the tech subsector, viewing cash flows as the key to unlocking the winners of the next phase.
"In sum, our message is not to be overly deterred by the significant year-to-date move in profitable AI stocks," the bank said in a Friday note to clients. "Medium- to long-term opportunities still exist as the AI theme has an accelerating growth trajectory and attractive [free cash flow] dynamics that should further improve from here."
So far this year, anything connected to AI has seen a significant uptick in valuation, with Nvidia shares leading the pack, surging more than 200%. While the jaw-dropping price action may suggest AI is no longer an early trade, Citi reiterated that the "initial positive thesis" looks intact and warned investors to avoid overlooking free cash flows.
Citi expects many names to meet accelerated growth expectations and views free cash flows as "increasingly compelling." "Profitable stocks within this theme are already impressive cash generating machines," the bank wrote. "Recent AI developments should accentuate this characteristic and push FCF margins and growth to new highs."
Given this setup, Citi screened for AI-related stocks expected to outpace market growth expectations and experience an uptick in free cash flow margins. Here are some of the stocks that made the cut:
Amazon has the highest consensus expectation of more than 48% growth over the long term. Shares have gained almost 54% this year as Wall Street rotates back into technology stocks following the slump in 2022. Some investors have viewed the e-commerce giant as lagging behind its peers in the AI race. During an interview with CNBC this month, CEO Andy Jassy soothed some of those concerns, reiterating Amazon's plan to invest in AI across segments. Earlier this year, Amazon also unveiled a generative AI service called Bedrock for its Amazon Web Services unit, allowing clients to use language models to create their own chatbots and image-generation services.
Competing chatbot heavyweight Alphabet also made the cut. Shares of the Google parent and Bard creator have rallied 38% as it battles it out with Microsoft-backed OpenAI's ChatGPT. Consensus estimates peg long-term growth at more than 17%, with a near-term free cash flow margin of nearly 24%.
[Chart: Alphabet (GOOGL) shares year to date in 2023]
A handful of financial stocks were also included in Citi's screen. Mastercard offers the greatest near-term free cash flow yield of the group, at 48.4%. Its long-term consensus growth estimate hovers around 19%. Shares have gained about 15% year to date.
Ford Motor, Match Group and ServiceNow also made the list.
CNBC's Michael Bloom contributed reporting.
Originally posted here:
Citi stays positive on A.I. theme and lays out the key to finding ... - CNBC
From Hollywood to Sheffield, these are the AI stories to read this month – World Economic Forum
AI regulation is progressing across the world as policymakers try to protect against the risks it poses without curtailing AI's potential.
In July, Chinese regulators introduced rules to oversee generative AI services. Their focus stems from a concern over the potential for generative AI to create content that conflicts with Beijing's viewpoints.
The success of ChatGPT and similarly sophisticated AI bots has sparked announcements from Chinese technology firms to join the fray. These include Alibaba, which has launched an AI image generator to trial among its business customers.
The new regulation requires generative AI services in China to have a licence, conduct security assessments, and adhere to socialist values. If "illegal" content is generated, the relevant service provider must stop this, improve its algorithms, and report the offending material to the authorities.
The new rules relate only to generative AI services for the public, not to systems developed for research purposes or niche applications, striking a balance between keeping close tabs on AI while also making China a leader in this field.
The use of AI in film and TV is one of the issues behind the ongoing strike by Hollywood actors and writers that has led to production stoppages worldwide. As their unions renegotiate contracts, workers in the entertainment sector have come out to protest against their work being used to train AI systems that could ultimately replace them.
The AI proposal put forward by the Alliance of Motion Picture and Television Producers reportedly stated that background performers would receive one day's pay for getting their image scanned digitally. This scan would then be available for use by the studios from then on.
China is not alone in creating a framework for AI. A new law in the US regulates the influence of AI on recruitment as more of the hiring process is handed over to algorithms.
From browsing CVs and scoring interviews to scraping social media for personality profiles, recruiters are increasingly using the capabilities of AI to speed up and improve hiring. To protect workers against a potential AI bias, New York City's local government is mandating greater transparency about the use of AI and annual audits for potential bias in recruitment and promotion decisions.
A group of AI experts from companies including Meta, Google, and Samsung has created a new framework for developing AI products safely. It consists of a checklist with 84 questions for developers to consider before starting an AI project. The World Ethical Data Foundation is also asking the public to submit their own questions ahead of its next conference. Since its launch, the framework has gained support from hundreds of signatories in the AI community.
In response to the uncertainties surrounding generative AI and the need for robust AI governance frameworks to ensure responsible and beneficial outcomes for all, the Forum's Centre for the Fourth Industrial Revolution (C4IR) has launched the AI Governance Alliance.
The Alliance will unite industry leaders, governments, academic institutions, and civil society organizations to champion responsible global design and release of transparent and inclusive AI systems.
Meanwhile, generative AI is gaining a growing user base, sparked by the launch of ChatGPT last November. A survey by Deloitte found that more than a quarter of UK adults have used generative AI tools like chatbots. This is even higher than the adoption rate of voice-assisted speakers like Amazon's Alexa. Around one in 10 people also use AI at work.
Nearly a third of college students have admitted to using ChatGPT for written assignments such as college essays and high-school art projects. Companies providing AI-detecting tools have been run off their feet as teachers seek help identifying AI-driven cheating. With only one full academic semester since the launch of ChatGPT, AI detection companies are predicting even greater disruption and challenges as schools need to take comprehensive action.
30% of college students use ChatGPT for assignments, to varying degrees.
Image: Intelligent.com
Another area where AI could ring in fundamental changes is journalism. The New York Times, the Washington Post, and News Corp are among publishers talking to Google about using artificial intelligence tools to assist journalists in writing news articles. The tools could help with options for headlines and writing styles but are not intended to replace journalists. News about the talks comes after the Associated Press announced a partnership with OpenAI for the same purpose. However, some news outlets have been hesitant to adopt AI due to concerns about incorrect information and differentiating between human and AI-generated content.
Developers of robots and autonomous machines could learn lessons from honeybees when it comes to making fast and accurate decisions, according to scientists at the University of Sheffield. Bees trained to recognize different coloured flowers took only 0.6 seconds on average to decide to land on a flower they were confident would have food and vice versa. They also made more accurate decisions than humans, despite their small brains. The scientists have now built these findings into a computer model.
Generative AI is set to impact a vast range of areas. For the global economy, it could add trillions of dollars in value, according to a new report by McKinsey & Company. It also found that the use of generative AI could lead to labour productivity growth of 0.1-0.6% annually through 2040.
At the same time, generative AI could lead to an increase in cyberattacks on small and medium-sized businesses, which are particularly exposed to this risk. AI makes new, highly sophisticated tools available to cybercriminals. However, it can be used to create better security tools to detect attacks and deploy automatic responses, according to Microsoft.
Because AI systems are designed and trained by humans, they can generate biased results due to the design choices made by developers. AI may therefore be prone to perpetuating inequalities, and this can be overcome by training AI systems to recognize and overcome their own bias.
Link:
From Hollywood to Sheffield, these are the AI stories to read this month - World Economic Forum