Category Archives: AI
AI Keeps Using More And More Energy. Where Will It End? – ScienceAlert
Amidst the excitement surrounding ChatGPT and the impressive power and potential of artificial intelligence (AI), the impact on the environment has been somewhat overlooked.
Analysts predict that AI's carbon footprint could be as bad as, if not worse than, that of bitcoin mining, which currently generates more greenhouse gases than entire countries.
Record-shattering heat across land, sky, and seas suggests this is the last thing our fragile life support systems need.
Currently, the entire IT industry is responsible for around 2 percent of global CO2 emissions. If the AI industry continues along its current trajectory, it will consume 3.5 percent of global electricity by 2030, predicts consulting firm Gartner.
"Fundamentally speaking, if you do want to save the planet with AI, you have to consider also the environmental footprint," Sasha Luccioni, an ethics researcher at the open-source machine learning platform Hugging Face, told The Guardian.
"It doesn't make sense to burn a forest and then use AI to track deforestation."
OpenAI spends an estimated US$700,000 per day on computing costs alone in order to deliver its chatbot service to more than 100 million users worldwide.
The popularity of Microsoft-backed ChatGPT has set off an arms race between the tech giants, with Google and Amazon quickly deploying resources to generate natural language processing systems of their own.
Many companies have banned the use of ChatGPT but are developing their own AI in-house.
Like cryptocurrency mining, AI depends on high-powered graphics processing units to crunch data. ChatGPT is powered by gigantic data centers using tens of thousands of these energy-hungry computer chips.
The total environmental impact of ChatGPT and other AI systems is complex to calculate, and much of the information required to do so is not available to researchers.
"Obviously these companies don't like to disclose what model they are using and how much carbon it emits," computer scientist Roy Schwartz from the Hebrew University of Jerusalem told Bloomberg.
It's also hard to predict exactly how much AI will scale up over the next few years, or how energy-efficient it will become.
Researchers have estimated that training GPT-3, the predecessor of ChatGPT, on a database of more than 500 billion words would have taken 1,287 megawatt hours of electricity and 10,000 computer chips.
The same amount of energy would power around 121 homes for a year in the United States.
This training process would have produced around 550 tonnes of carbon dioxide, which is equivalent to flying from Australia to the UK 33 times.
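The household comparison above can be sanity-checked with simple arithmetic. The only figure not stated in the text is the average US household's annual electricity use, which is assumed here to be roughly 10.6 MWh per year (close to EIA estimates):

```python
# Back-of-envelope check of the GPT-3 training figures above.
# Assumption: an average US household uses ~10.6 MWh of electricity per year.
TRAINING_MWH = 1_287            # estimated energy to train GPT-3, from the text
HOUSEHOLD_MWH_PER_YEAR = 10.6   # assumed average US household consumption

homes_powered_for_a_year = TRAINING_MWH / HOUSEHOLD_MWH_PER_YEAR
print(f"Roughly {homes_powered_for_a_year:.0f} US homes for a year")  # ~121
```

The result lands on the "around 121 homes" figure the researchers cite, so the numbers are internally consistent.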
GPT-4, released in March 2023, is estimated to have 570 times more parameters than GPT-3, suggesting it may use considerably more energy than its predecessor.
Another language model called BLOOM was found to consume 433 megawatt hours of electricity when it was trained on 1.6 terabytes of data.
If the growth of the AI sector is anything like cryptocurrency, it's only going to become more energy-intensive over time.
Bitcoin now consumes 66 times more energy than it did in 2015, so much energy that China and New York have banned cryptocurrency mining.
Computers must complete lengthy calculations to mine crypto, and it can take up to a month to earn a single bitcoin.
Bitcoin mining burns through 137 million megawatt hours a year of electricity, with a carbon footprint that is almost as large as New Zealand.
Innovation and protecting Earth's limited resources require a careful balancing act.
Can This Popular AI Stock Join the Illustrious "Trillion-Dollar Club … – The Motley Fool
Electric vehicle (EV) leader Tesla (TSLA -2.11%) has been in the news for the wrong reasons of late. On Aug. 1, the National Highway Traffic Safety Administration announced that it has opened a preliminary investigation into 280,000 Model 3 and Model Y vehicles after a dozen owners reported that their 2023 model year vehicles had lost steering control and power steering.
This only added to the headaches of Tesla investors, as shares of the company have been trending lower since it released its second-quarter report on July 19. Investors weren't happy with the contraction in the company's operating margin as a result of price cuts, or with the lack of clarity about when the company will start delivering Cybertrucks and about progress on its robotaxi. Nor were they pleased with the current quarter's slowdown in vehicle production due to factory shutdowns.
Tesla's stock price has dropped by 10% since its earnings report, bringing its market cap down from more than $920 billion to less than $830 billion as of this writing.
TSLA Market Cap data by YCharts.
Elon Musk's auto company was widely believed to be on track to hit a $1 trillion market cap before its latest pullback, but now there are more doubts. However, investors should not forget that Tesla's focus on artificial intelligence (AI) could help the stock regain its mojo, and that the core business could by itself lift the stock into the $1 trillion market cap club by the end of 2023.
According to Tesla's second-quarter shareholder presentation, the company has a "commitment to being at the forefront of AI development." The company has already started the production of its Dojo supercomputer, for which it has ambitious plans over the next year or so. More specifically, Tesla aims to equip Dojo with 300,000 data center graphics processing units (GPUs) by October 2024, a move that will take the supercomputer's capacity to 100 exaflops.
That would be a massive improvement over Tesla's existing supercomputer that's capable of 1.8 exaflops of computing power, driven by the 6,000 GPUs that it is equipped with. Musk says Tesla plans to spend $1 billion on the Dojo supercomputer over the coming year as it looks to hone its vehicles' self-driving capabilities.
Dojo will enable Tesla to boost its machine-learning and computer-vision training model development, ingesting huge amounts of data -- such as video feeds from its EVs around the globe -- to help the company develop self-driving software. Musk indicated in the company's July conference call with analysts that he believes that the new supercomputer could play a key role in accelerating Tesla's development of a completely autonomous car.
According to Musk, the company's current full self-driving (FSD) feature is expected to go "from being as good as a human" to "being vastly better than a human." What's more, Musk said he sees "a clear path to full self-driving being 10 times safer than the average human driver."
It is worth noting that Musk has been promising to deliver a fully autonomous car since 2016. But the company's FSD capability remains a work in progress and still requires human supervision; it is rated as a Level 2 driver-assistance system on the SAE scale of driving automation, which runs from Level 0 to Level 5.
Tesla's FSD has been adopted by more than 400,000 customers since it was launched in 2020. That's impressive considering that customers need to pay $15,000 up front or a monthly subscription of $99 or $199. If Tesla can substantially improve its EVs' self-driving capabilities with the help of a more powerful supercomputer and the 300 million miles of driving data that FSD has already racked up, the feature could gain wider adoption.
One way that may happen is through licensing deals. Musk is reportedly in discussions with other major automotive OEMs (original equipment manufacturers) about letting them use Tesla's FSD. Potential licensees would need to buy Tesla's hardware and software to deploy this feature.
Given that the market for autonomous driving components is expected to be worth between $55 billion and $80 billion by 2030, Tesla has plenty of incentive to upgrade its autonomous driving system.
Tesla's AI initiatives based on the Dojo supercomputer are now in the early stages of development, and it would take at least a year for Dojo to drive tangible revenue growth for the company once it is complete. So, it may take some time for AI to positively impact Tesla's valuation.
However, Tesla seems on track to deliver solid revenue growth in 2023 from its core automotive business. Analysts expect the company's revenue to increase by 22% this year to almost $100 billion. Based on Tesla's current price-to-sales ratio of almost 10, there is a chance that the company could achieve a $1 trillion market cap in 2023.
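The valuation math behind that claim is a one-line multiplication. The sketch below uses only the two figures from the paragraph above (analysts' revenue estimate and the approximate price-to-sales ratio) and is illustrative, not a forecast:

```python
# Illustrative: market cap implied by a price-to-sales (P/S) multiple.
projected_revenue_b = 100   # analysts' 2023 revenue estimate, in $ billions
price_to_sales = 10         # Tesla's approximate current P/S ratio, per the text

implied_market_cap_b = projected_revenue_b * price_to_sales
print(f"Implied market cap: ${implied_market_cap_b} billion")  # $1,000 billion, i.e. $1 trillion
```

In other words, if Tesla merely holds its current sales multiple while hitting the revenue estimate, the implied market cap sits right at the $1 trillion threshold.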
But then, AI stocks tend to command far higher sales multiples, so there's a reasonable possibility of Tesla's market cap exceeding the $1 trillion mark if investors bump up the multiple. Moreover, Tesla is expected to sustain impressive revenue growth over the next couple of years.
TSLA Revenue Estimates for Current Fiscal Year data by YCharts.
With the addition of catalysts such as AI, which could help Tesla gain licensing revenue and increase the adoption of its self-driving hardware and software, the company may be able to deliver faster revenue growth in the long run and increase its market cap well beyond the $1 trillion mark.
This AI Stock Could Soar Big-Time This Month, and It Is a Screaming … – The Motley Fool
Shares of Marvell Technology (MRVL 0.55%) have been red-hot on the stock market over the past three months, surging more than 70% thanks to the company's fiscal 2024 first-quarter results, which were released at the end of May.
The chipmaker shot up nearly 30% in a single session following its earnings report for the period ended April 29, as it beat Wall Street's expectations on revenue and earnings and pointed out that its "revenue growth will accelerate in the second half of the fiscal year." Artificial intelligence (AI) is going to play an important role in boosting Marvell's growth, as management said on the company's May earnings conference call.
That's why investors looking for a stock to benefit from the rapid adoption of AI technology may want to act before the company releases its fiscal 2024 second-quarter results on Aug. 24.
Marvell Technology's fiscal Q2 results likely won't be great. The company, whose chips are deployed in multiple end markets such as data centers, telecom infrastructure, enterprise networking, and automotive, expects revenue of $1.33 billion for the three months that ended in July 2023. It expects non-GAAP earnings of $0.32 per share for the quarter.
Those numbers point toward a substantial decline from the prior-year period's adjusted earnings of $0.57 per share on revenue of $1.52 billion. But this hasn't deterred investors from buying Marvell stock hand over fist in the past couple of months for one reason: AI. Marvell CEO Matt Murphy is expecting AI to contribute significantly to Marvell's business. He said he expects "Marvell's overall AI revenue to at least double in fiscal 2024," and it won't be surprising to see this catalyst help the company deliver stronger-than-expected growth.
But this is just the beginning, as Murphy added on the company's previous conference call: "In aggregate, we foresee our overall AI revenue to at least double again next year. In other words, we are forecasting an AI revenue growth CAGR over 100% over the fiscal 2023 to 2025 timeframe."
Indeed, AI could drive Marvell's growth for years to come, as the growing adoption of this technology will create the need for more storage and faster connections within data centers. Murphy explained this on the earnings call:
To give you an idea, the latest dual CPU server in a cloud data center today can drive up to 200 gigabits per second of IO and contains the network interfaces to support that bandwidth. In contrast, an example of an advanced AI system containing eight accelerators can drive close to 30 terabits of full duplex bandwidth. That's hundreds of times more bandwidth required to connect these systems together.
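Murphy's comparison can be checked with a quick unit conversion (the only step is converting terabits to gigabits):

```python
# Quick check of the bandwidth comparison in the quote above.
server_io_gbps = 200   # dual-CPU cloud server I/O, in Gb/s, per the quote
ai_system_tbps = 30    # 8-accelerator AI system, in Tb/s full duplex, per the quote

ratio = (ai_system_tbps * 1_000) / server_io_gbps  # 1 Tb/s = 1,000 Gb/s
print(f"~{ratio:.0f}x more bandwidth")  # ~150x
```

The back-of-envelope result is roughly 150 times, consistent with Murphy's "hundreds of times more bandwidth" framing.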
As AI data centers are equipped with multiple accelerators to tackle massive workloads, Marvell expects this technology to significantly increase its revenue opportunity in the long run. This explains why Marvell has significantly bumped up its revenue expectations for AI-specific chips that will be deployed in the cloud, stating that "the relative proportion of projected lifetime revenue from AI has increased from approximately 20% in our prior forecast to well over half today."
The good part is that catalysts such as AI could play an important role in boosting Marvell's top and bottom lines. The following chart shows that Marvell's growth is expected to pick up from the next fiscal year.
MRVL Revenue Estimates for Current Fiscal Year data by YCharts
Even better, analysts are expecting Marvell's earnings to increase at an annual rate of 14% for the next five years. However, it won't be surprising to see Marvell deliver faster growth than what analysts are looking for, given the fast-growing nature of the AI chip market. Next Move Strategy Consulting estimates that the global AI chip market could be worth $304 billion in 2030, compared to just under $29 billion last year.
Marvell stock is now trading at 38 times forward earnings. While that's well above the forward earnings multiple of 16 at the end of 2022, this semiconductor stock still looks like a good bet for a couple of reasons.
First, Marvell's forward earnings multiple is lower than that of other AI stocks. Nvidia, for instance, trades at 55 times forward earnings. Micron Technology, another company that could win big from the AI revolution, has a forward earnings multiple of 77.
Second, the potential acceleration in Marvell's growth and the solid long-term prospects driven by AI could make the stock look cheap in the long run. As such, if you're looking to add an AI stock to your portfolio, it may be a good idea to consider going long Marvell stock as a strong set of earnings later this month could send the stock higher and make it more expensive.
Harsh Chauhan has no position in any of the stocks mentioned. The Motley Fool has positions in and recommends Nvidia. The Motley Fool recommends Marvell Technology. The Motley Fool has a disclosure policy.
LPL Puts AI Center Stage at Yearly Event – ThinkAdvisor
By Janet Levaux | August 05, 2023 at 09:06 AM
At the start of its yearly conference, LPL Financial executives are emphasizing the growing role of artificial intelligence and their optimistic outlook for the advice business.
"As we look forward together, we are bullish on the future of financial advice and our strategy to help advisors and institutions turn a blue ocean opportunity into reality," President and CEO Dan Arnold said in a statement.
"For us, that means empowering advisors and institutions to deliver great advice to their clients and to be great operators of their businesses," Arnold explained in remarks shared with the press early Saturday before the Focus 2023 event's general session.
In terms of technology, the broker-dealer highlighted its growing interest in the role of AI, as it did during its July 27 earnings call. "When it comes to AI, we don't believe there's any replacing the advisor-client relationship," Chief Technology Advisor Greg Gates said in a statement Saturday. "We see AI as an additional opportunity for advisors to enhance the way they engage with clients and run their business."
In the second quarter, the firm introduced a new performance tool, Practice Hub, which relies on AI and aims to serve as a co-pilot for advisors, its executives said last week.
Practice Hub, a centralized dashboard in the firm's ClientWorks portal, gives advisors information about their key practice metrics, so they can set goals, benchmark their success against their peers and receive help to run their business more efficiently, according to the firm.
As for more AI-powered technology, the firm recently introduced the Advertising Review Tool, which helps speed up the approval of advisors' marketing content.
Next year, it plans to roll out ClientWorks Rebalancer, a trading and rebalancing system for practices of many different sizes, affiliation models and sophistication levels.
"While it is still early to make predictions, the history of technology's influence on the wealth management industry gives us reason for optimism about AI's future in further enriching the role of the financial advisor," Gates explained.
The firm's Focus conference featured Col. Nicole Malachowski, a retired officer of the U.S. Air Force, on Friday. Other speakers include Blue Owl Capital CEO Doug Ostrover and author Erik Wahl. Singer Bryan Adams will take the stage for Sunday's final-night reception.
As of June 30, LPL worked with 21,942 financial advisors, and reported total advisory and brokerage assets of $1.24 trillion.
How layoffs and A.I. are impacting tech workers – CNBC
Tech companies shed more than 386,000 jobs last year and in the first half of this year, according to Layoffs.fyi. That number is climbing.
But while layoffs have taxed workers, a booming artificial intelligence market is giving the industry a renewed sense of optimism.
"I have been in San Francisco for almost 12 years now and I have never felt this kind of energy," said Flo Crivello, CEO of AI startup Lindy. "And I was here for the mobile boom."
Crivello said it's quite clear what's driving the enthusiasm.
"Every cafe, every restaurant, every conversation that you overhear in the street, half of the time, it's ChatGPT, it's AI, it's the latest company that is being funded," he said.
Generative AI startup deals announced or finalized in the first quarter of this year totaled more than $12 billion, compared to about $4.5 billion invested in the space in all of last year, according to PitchBook.
Amazon, Alphabet and Microsoft have also made significant AI investments.
So how have layoffs impacted tech workers and what will the AI boom mean for their future? Watch the video to learn more.
AI demand is pushing Amazon’s cloud revenue up – Quartz
Amazon is one of the many businesses racing to cater to the boom in generative AI services. In the three months ending in June, sales of Amazon Web Services (AWS), Amazon's cloud computing arm, rose 12% from the same period a year earlier, beating Wall Street estimates, the company reported.
Amazon is a retail giant, but 70% of its profit comes from AWS. The company is also the leader in the global market share for cloud services at 40%. With its cloud expertise, Amazon is seizing the opportunity to benefit from growing AI demand.
"Inside Amazon, every one of our teams is working on building generative AI applications that reinvent and enhance their customers' experience," said Andy Jassy, Amazon's CEO, on a conference call with analysts and investors on Thursday, Aug. 3. "But while we will build a number of these applications ourselves, most will be built by other companies, and we're optimistic that the largest number of these will be built on AWS."
The company also reported that AWS generated operating income of $5.4 billion, down 6% from the same period last year. That comes as companies across industries are cutting back on spending, including by switching to lower-cost computing products and trading less in crypto.
Meanwhile, last week, Microsoft reported that its earnings were driven by growth in Azure, its cloud computing service. More broadly, mentions of AI on conference calls have helped propel many tech stocks this year: Amazon's stock is up 62% so far in 2023, while Microsoft's is up 34%.
In April, Amazon further doubled down on AI efforts by releasing Bedrock, which provides developers a platform for building generative AI applications on AWS with pre-trained models from AI startups like Anthropic and Stability AI. Amazon is also creating its own custom AI chips to compete with Nvidia and Intel.
"It's important to remember that we're in the very early days of the adoption and success of generative AI, and that consumer applications is only one layer of the opportunity," Jassy said on the call.
AI is getting into the dating game – Morning Brew
Bots are usually the last thing you want to encounter on a dating app while swiping through endless photos of people standing atop mountains, but Tinder is preparing to harness the power of AI for the good of your profile.
The app, which already uses artificial intelligence in its matching algorithms, has started testing new customer-facing tools. One sifts through your photos and selects the best five to display (hopefully with the good sense to avoid the ambiguous group shot), addressing a pain point where many users get flustered when building their profile.
It's just the beginning of the AI dating revolution. Tinder's parent company, Match Group, which also owns Hinge and OkCupid, revealed plans in its recent earnings announcement to integrate new AI features across its brands, not just to help build profiles but also to highlight why someone may be a good match.
And the company can't control whether daters bring AI into the mix themselves: A recent UK study found that over 50% of single men would use chatbots to help chat up potential dates.
Lo-fi alternatives: As dating apps go increasingly high-tech, some singles are turning to the decidedly more staid Google Docs, creating long-form "date-me docs" to find romance outside the app grind.
Google’s Responsive Search Ad Guide: Navigating AI In Advertising – Search Engine Journal
Google recently released a comprehensive guide to help marketers better understand and utilize Responsive Search Ads (RSAs).
The guide provides an in-depth look at how Google leverages AI technology to optimize RSA performance for each search query. It aims to give marketers the knowledge to take full advantage of this adaptive ad format.
This article summarizes the key information presented in Google's RSA guide.
Whether starting out with RSAs or looking to improve existing campaigns, this summary highlights the core advice from Google that can help advertisers succeed.
The guide begins by discussing advertisers' difficulty in targeting the appropriate ads, recognizing how search queries and user behavior constantly evolve.
Google's internal data indicates that 15% of searches are entirely new, never having been searched before. This constant change makes it challenging for companies to predict relevant trends and search patterns.
The guide points out that Google's Responsive Search Ads can be used in Search campaigns to deal with the challenge of finding the optimal combination of headlines and descriptions for different queries.
RSAs automatically test different headline and description variants to determine which combinations will likely perform best for any search query.
Responsive search ads generated by Googles artificial intelligence pick the most relevant headline and description pairings for each user.
The more varied headlines and descriptions provided, the more likely the AI can deliver ads tailored to potential customers.
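To see why providing more assets helps, it's worth counting the combinations the system has to work with. The sketch below assumes the commonly documented Google Ads limits (up to 15 headlines and 4 descriptions per RSA, with a served ad showing up to 3 headlines and 2 descriptions, where order matters); it simply enumerates the order-sensitive selections:

```python
from itertools import permutations

# Assumed RSA limits: up to 15 headlines and 4 descriptions supplied;
# a served ad shows up to 3 headlines and 2 descriptions, order-sensitive.
headlines, descriptions = 15, 4

headline_slates = len(list(permutations(range(headlines), 3)))       # 2,730 ordered triples
description_pairs = len(list(permutations(range(descriptions), 2)))  # 12 ordered pairs
print(headline_slates * description_pairs)  # 32,760 distinct ad renderings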
The guide discusses the Pinning feature, which lets you choose a specific asset to always be included in ads.
This can be useful for complying with local regulations. However, the guide also notes that pinning limits the ability to generate unique ad combinations, which could negatively impact performance.
Google's guide gives a thorough explanation of how RSAs create search ads.
The process starts by comprehending the context behind each search query and keyword used for matching.
It then combines available assets based on their relevance to the query and predicted performance.
These creative combinations are scored, and the top-ranking ones continue to the auction.
After new assets are used, an AI model that continuously learns starts evaluating which assets and combinations lead to the best performance for each search query.
This evaluation process usually begins within a few hours of when a new asset is initially served.
The goal is to maximize performance for advertisers by determining the optimal assets and asset combinations to show for each query.
Google highlights its Ad Strength feature, which gives advertisers forward-looking feedback on how well their responsive search ad assets align with attributes that tend to boost performance.
Ad Strength offers real-time ratings of Poor, Average, Good, or Excellent that update dynamically as changes are made to the ad copy and assets.
This allows advertisers to optimize their ads by iterating based on the Ad Strength feedback provided by Google.
Google's guide outlines several tools to help users create high-quality assets, including asset suggestions, recommendations for improving Ad Strength, and the option to use automatically created assets.
Asset suggestions are headline and description options offered when creating or editing a responsive search ad. These are generated based on the final URL and are relevant to the ad's context.
Recommendations for improving Ad Strength are shown to help optimize responsive search ads at scale. These appear for ads with Poor or Average strength and include asset suggestions.
The automatically created assets option is enabled at the campaign level. When turned on, the system will generate headlines and descriptions tailored to each responsive ad's unique context.
Google's guide provides suggestions on how to assess the success of RSAs. It recommends that users prioritize boosting the business results of their ads and use those as benchmarks for performance.
The guide also emphasizes the value of analyzing asset performance ratings, which give insight into how well individual ad components have worked in the past.
The guide suggests utilizing AI-driven tools for bidding, keywords, and ad copy to get the best outcomes.
Implementing Smart Bidding, broad match keywords, and responsive search ads in combination can assist with showing the most relevant ad to each searcher at an optimal cost.
The guide wraps up by recapping the main points advertisers should remember.
As Google keeps integrating the newest AI advancements into responsive search ads, the company hopes to streamline generating ads that accomplish business goals.
Protecting data in the era of generative AI: Nightfall AI launches … – VentureBeat
All organizations are eager to harness the productivity gains of generative AI, starting with ChatGPT, despite the security threat of their confidential data being leaked into large language models (LLMs). CISOs tell VentureBeat they're split on the issue, with AI governance becoming a hot topic in risk management discussions with boards of directors.
Alex Philips, CIO at National Oilwell Varco (NOV), told VentureBeat in an interview earlier this year that he's taking an education-centric approach to keep his board of directors up to date on the latest advantages, risks and current state of gen AI technologies. Philips says having an ongoing educational process helps set expectations about what gen AI can and can't do, and helps NOV put guardrails in place to avert confidential data leaks.
Several healthcare CISOs and CIOs are restricting ChatGPT access across all research and development, pricing and licensing business units. VentureBeat has learned that CISOs are divided on whether and how to manage the security threat of confidential data finding its way into LLMs. Losing gen AI as a research tool is a competitive disadvantage healthcare providers are willing to accept, as the risks to their intellectual property, pricing and licensing are too great.
Unlocking productivity while reducing risk
The challenge is to keep confidential data secure while allowing employees to be more productive using gen AI and ChatGPT at the browser, app and API levels. Cloud data loss prevention (DLP) platform Nightfall AI today announced the first data security platform for gen AI that spans API, browser and Software-as-a-Service (SaaS) application protection.
Designed to take on the productivity paradox CISOs and CIOs are facing when it comes to gen AI in their organizations, Nightfall AI's platform is the first DLP platform that scales across the three top threat vectors CISOs need the most help securing when gen AI and ChatGPT are in use across their organizations. The goal is to enable organizations to securely tap AI's benefits while protecting sensitive data and reducing risk.
The Nightfall for GenAI data security platform consists of three products that include:
Nightfall for ChatGPT: Nightfall AI's browser-based solution provides real-time scanning and redaction of sensitive data entered by employees into chatbots before it is exposed. A browser extension is one of the less obtrusive ways to protect data because the technique lends itself well to minimizing the impact on users' experiences. Nightfall AI CEO Isaac Madan told VentureBeat that user experience formed the foundation of the product's design goals.
Madan says the initial browsers supported include Apple Safari, Google Chrome and Microsoft Edge.
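The scan-and-redact step such a tool performs can be illustrated in a few lines. This is a sketch only, not Nightfall's actual detection logic; the patterns and the `sk_`/`pk_` key format are hypothetical stand-ins for the kinds of sensitive tokens a DLP scanner looks for before a prompt leaves the browser:

```python
import re

# Illustrative DLP-style redaction -- NOT Nightfall's implementation.
# Each pattern below is a simplified stand-in for a real detector.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "API_KEY": re.compile(r"\b(?:sk|pk)_[A-Za-z0-9]{16,}\b"),  # hypothetical key format
}

def redact(prompt: str) -> str:
    """Replace any matched sensitive token with a labeled placeholder."""
    for label, pattern in PATTERNS.items():
        prompt = pattern.sub(f"[REDACTED {label}]", prompt)
    return prompt

print(redact("Contact jane@example.com, key sk_abcdefgh12345678"))
# -> Contact [REDACTED EMAIL], key [REDACTED API_KEY]
```

A production system would use far more robust detectors (context-aware models rather than bare regexes), but the browser-extension flow is the same: intercept the text, redact matches, then forward the sanitized prompt to the chatbot.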
Eric Cohen, Vice President of Security at Genesys, considers Nightfall for ChatGPT a breakthrough in providing colleagues at Genesys with access to gen AI products while reducing the risk. Cohen told VentureBeat that the ideal is for Nightfall AI to take a collaborative approach, helping users self-remediate data risks without requiring them to be generative AI experts.
Nightfall for LLMs: APIs are one of Nightfall AI's core strengths, reflected in how they've taken on the challenge of enabling enterprises at scale to achieve productivity gains from gen AI. Nightfall for LLMs is a developer API, bundled into a software development kit (SDK), that detects and redacts the data developers input to train LLMs. Many industry leaders have already integrated these APIs into their workflows.
Cohen told VentureBeat that Nightfall AI's API strategy provides the customizability and flexibility Genesys needs to scale gen AI protection across its organization and tech stacks. Nightfall AI also provides insights into redaction rates, adding greater insight and learning into how gen AI can be securely used for greater productivity, he said.
Nightfall for SaaS: Nightfall for SaaS provides data leak prevention directly within the workflows of popular SaaS applications, allowing companies to detect and redact sensitive data as third-party AI systems are processing it. This prevents sensitive information from being exposed in chatbot conversations, documents, cloud storage and other SaaS apps. Nightfall for SaaS has been implemented by Movable Ink, Aaron's and Klaviyo, which need to secure customer data within their SaaS ecosystems. By natively leveraging Nightfall's DLP capabilities within these apps, these companies can use third-party AI while maintaining control and visibility into their sensitive data.
All of these products are available today to explore. Nightfall for ChatGPT is available on the Google Chrome store as part of a 14-day free trial Nightfall AI offers.
Securing the future of generative AIs productivity gains
Cohen told VentureBeat that gen AI's productivity is integral to enabling Genesys to keep excelling for its clients. "Generative AI offers significant productivity gains for organizations across teams, but until Nightfall AI, there was a lack of security products that allowed us to use these tools safely," he said. Cohen found Nightfall AI while actively researching DLP solutions to solve a data privacy problem Genesys was facing. The customizability of Nightfall's data rules was an advantage over the other options he had evaluated.
CISOs tell VentureBeat they have three main concerns about adopting gen AI as a research and productivity platform. First, they're concerned that employees will include sensitive data (such as software credentials or customer PII) in chatbot prompts. Second, they're worried that employees might inadvertently expose confidential company data through SaaS apps such as Notion that rely on third-party AI sub-processors such as Anthropic. Third, they worry about engineers and data scientists using confidential data to build and train their own LLMs. This last concern is underscored by a recent incident in which users tricked ChatGPT into generating working activation keys for Windows.
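The first concern above, credentials pasted into prompts, is often caught with entropy heuristics rather than fixed patterns, since secrets rarely match a known format. A minimal, assumption-laden sketch follows; the 20-character length and 4.0-bit entropy thresholds are arbitrary choices for illustration, not values from any real product:

```python
import math
import re

def shannon_entropy(s: str) -> float:
    """Bits of entropy per character in the string."""
    freq = {c: s.count(c) / len(s) for c in set(s)}
    return -sum(p * math.log2(p) for p in freq.values())

def looks_like_credential(token: str, min_len: int = 20,
                          threshold: float = 4.0) -> bool:
    # Long, high-entropy tokens (API keys, passwords) stand out
    # from ordinary English words.
    return len(token) >= min_len and shannon_entropy(token) > threshold

def screen_prompt(prompt: str) -> bool:
    """Return False (block) if any token in a chatbot prompt
    looks like a leaked secret."""
    for token in re.split(r"\s+", prompt):
        if looks_like_credential(token.strip("\"'`,.;")):
            return False
    return True

print(screen_prompt("Summarize our Q3 roadmap"))  # True: safe to send
```

A prompt such as "here is the key A1b2C3d4E5f6G7h8I9j0K1l2" would be blocked, since that token is long and near-random. Real DLP tools combine heuristics like this with trained detectors to cut false positives.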
"GenAI has the potential to offer substantial productivity benefits for employers and employees, but the lack of a complete DLP solution is impeding the safe adoption of AI," said Madan. "As a result, many organizations have either completely blocked these tools or have resorted to using multiple security products as a patchwork solution to mitigate the risk. This struggle ultimately drove the creation of Nightfall's latest innovation: Nightfall for GenAI."
Frederic Kerrest, cofounder and executive vice chairman of Okta, commended Nightfall and compared its latest initiatives to Okta's early days. "When using Nightfall, I have seen many similarities with our early vision at Okta, where we centralized user access and management security for all cloud apps. Nightfall is now doing the same for data security across generative AI and the cloud."
Early adopters like Genesys highlight the benefits of Nightfall's customizable data rules and remediation insights that help users self-correct. For CISOs, the platform provides the visibility and control needed to confidently leverage AI while maintaining data security. The availability of Nightfall's gen AI-focused platform marks an important milestone in realizing AI's potential.
View original post here:
Protecting data in the era of generative AI: Nightfall AI launches ... - VentureBeat
Google reshuffles Assistant unit, lays off some staffers, to ‘supercharge’ products with A.I. – CNBC
Google CEO Sundar Pichai speaks on-stage during the Google I/O keynote session at the Google Developers Conference in Mountain View, California, on May 10, 2023.
Josh Edelson | AFP | Getty Images
Google wants to "supercharge" its seven-year-old smart assistant using new advancements in generative artificial intelligence, as part of the latest major reorganization of the Assistant unit.
In an email to employees Monday, Peeyush Ranjan, Google's vice president of engineering at Assistant, said the latest reshuffle will include a small number of layoffs. Ranjan said the company will look to push large language model, or LLM, technology into Assistant, Google's voice-powered software that's similar to Apple's Siri or Amazon's Alexa.
"As a team, we need to focus on delivering high quality, critical product experiences for our users," Ranjan wrote in the email, which was viewed by CNBC. "We've also seen the profound potential of generative AI to transform people's lives and see a huge opportunity to explore what a supercharged Assistant, powered by the LLM technology, would look like."
A portion of the Assistant team has already started working on the efforts, Ranjan added. Employees who are being laid off will be given 60 days to search for other jobs within Google.
Axios first reported some of the unit's changes.
As a part of the reorganization, executives announced a dozen changes to the company's "Speech" team, which oversees voice commands. Francoise Beaufays, who had been the head of Speech, is moving to work under Sissie Hsiao, who oversees Bard and Assistant.
"This is an exciting moment for AI, with nearly every product demanding world-class AI-driven Speech," Beaufays wrote in a separate email announcing changes to the unit. Some members of the Speech team will focus their efforts specifically on Bard, she wrote.
Assistant executives said the changes will allow the division to move with "speed and focus."
Jennifer Rodstrom, a Google spokesperson, said in an email to CNBC that the company is "excited to explore how LLMs can help us supercharge Assistant and make it even better."
"Hundreds of millions of people use the Assistant every month and we're committed to giving them high quality experiences," she wrote.
The rapid development of generative AI, which responds to text-based queries with intelligent and creative answers and can convert text to images, is pushing Google to embed the technology in as many products as possible.
For the older Assistant organization, that's meant frequent refinements. Assistant is used in Google's mobile and home devices, including Pixel smartphones and Nest smart speakers, as well as in smartwatches, smart displays, TVs and vehicles through the Android Auto platform.
In March, Hsiao announced changes to the organization, underscoring a prioritizing of Bard. Ranjan, who had been vice president of commerce, stepped in as engineering lead for the unit and oversees more than 1,700 full-time employees, according to an internal document.
Since the launch late last year of OpenAI's ChatGPT, Amazon has also emphasized the emerging importance of generative AI, adding it into Alexa products.
For Google, which has dominated internet search for the better part of two decades, there's more at stake, as ChatGPT and Microsoft Bing, which uses OpenAI's model, give people alternative ways to search for answers.
Google has been rolling out updates to Bard since launching it publicly in March. Last month, the company said it had expanded Bard to over 40 languages in more countries and would add features like audio responses, thanks to its newest LLM, PaLM 2.
WATCH: Google kicks off I/O event