
All eyes on AI to drive big tech earnings

Over the next two weeks, the quarterly results of the tech giants will offer a glimpse of the bankability of artificial intelligence (AI) and whether the major investments AI requires are sustainable for the long haul.

Analysts at Wedbush Securities Inc, one of Wall Street's biggest believers in AI's potential, expect growth and earnings to accelerate with the AI revolution and the wave of transformation it is causing.

The market generally agrees with this rosy AI narrative. Analysts forecast double-digit growth for heavyweights Microsoft Corp and Google, in contrast to Apple Inc, a latecomer to the AI party, with only 3 percent growth expected.

The iPhone maker, which releases its results on Thursday next week, unveiled its new Apple Intelligence system only last month and plans to roll it out gradually over the coming months, and only on the latest models.

CFRA Research analyst Angelo Zino said Apple's upcoming earnings would show improvement in China sales, a black spot since last year.

Apple's forecasts for the current quarter will be important in assessing the company's momentum, he added.

Zino said he was "a little bit more concerned" about Meta Platforms Inc, which raised its investment projections in April last year as it devoted a few billion dollars more to the chips, servers and data centers needed to develop generative AI.

CFRA expects Meta's growth to decelerate through the end of the year. Combined with the expected increase in spending on AI, that should put earnings under pressure.

"As for the earnings of cloud giants Microsoft (which is to release its results on Tuesday next week) and Amazon (which is to release its results on Thursday next week), we expect them to continue to report very good results, in line with or better than market expectations," Zino said.

Microsoft is among the best positioned to monetize generative AI, having moved the fastest to implement it across all its products and poured US$13 billion into OpenAI, the start-up stalwart behind ChatGPT.

Winning the big bet on AI is crucial for the group, Emarketer analyst Jeremy Goldman said, but the market is willing to give them a level of patience.

The AI frenzy has helped Microsoft's cloud computing business grow in the double digits, something that analysts said could be hard to sustain.

"This type of growth cannot hold forever, but the synergies between cloud and AI make it more likely that Microsoft holds onto reliable cloud growth for some time to come," Goldman said.

As for Amazon, investors will want to see that the reacceleration of growth over the first quarter wasn't a one-off at Amazon Web Services (AWS), the company's world-leading cloud business, Hargreaves Lansdown PLC analyst Matt Britzman said.

"Since AWS leads in everything data-related, it should be well placed to capture a huge chunk of the demand coming from the AI wave," Britzman said.

The picture might be a little less clear for Google parent Alphabet Inc, which will be the first to publish results, on Tuesday, because of its online search business, Zino said.

Skepticism around AI Overviews, introduced by Google in mid-May, "is certainly justified," Emarketer analyst Evelyn Mitchell-Wolf said.

This new feature, which places AI-written text at the top of Google search results, ahead of the traditional links to sites, got off to a rocky start.

Internet users were quick to report strange, or potentially dangerous, answers proposed by the feature that had been touted by Google executives as the future direction of search.

Data compiled by BrightEdge and relayed by Search Engine Land showed that the number of searches presenting a result generated by AI Overviews has plummeted in recent weeks as Google shies away from the feature.

Still, many are concerned about the evolution of advertising across the Internet if Google pushes on with the Overviews model, which reduces the necessity of clicking on links. Content creators, primarily the media, fear a collapse in revenues.

"As long as Google maintains its status as the default search engine across most smartphones and major browsers, it will continue to be the top destination for search, and the top destination for search ad spending," Mitchell-Wolf said.



AI will transform every aspect of our life, Gov. Healey says at artificial intelligence task force meeting at Northeastern – Northeastern University

Massachusetts Gov. Maura Healey, Boston Mayor Michelle Wu, state economic development and technology leaders and other officials visited Northeastern's Boston campus on Thursday to learn from university professors how artificial intelligence can solve some of the most pressing issues in the state and the world. "AI is a technology with the potential to transform, not just the potential, it will transform, every aspect of our life," Healey said during an event at the EXP research complex. "Massachusetts innovators, as we'll see in today's presentations, are already at the forefront."

Provost David Madigan said Northeastern was exactly the right place to talk about AI, pointing out that AI research at the university is being used to detect cancers, track infant health, prevent climate change and more.

"And all with an eye on ethical AI and responsible AI," Madigan said. "That has been a key theme of everything we do here at this university: how do we harness this extraordinary technology to do good."

The event occurred prior to a meeting at Northeastern of the Artificial Intelligence Strategic Task Force commissioned by Healey, who also attended the session.

The task force was established in February to study AI and generative artificial intelligence technology and its impact on the state, private businesses, higher education institutions and constituents. It is made up of leaders from large companies, startups, academia, investment firms and nonprofits.

Northeastern's Usama Fayyad is a member of the task force and described his work as executive director of the Institute for Experiential AI at the university to those in attendance.

Rupal Patel, a professor in the Khoury College of Computer Sciences and the Bouvé College of Health Sciences, showcased her work using AI to create bespoke synthesized voices for individuals with various health conditions.

Auroop Ganguly, director of AI for climate and sustainability at the Institute for Experiential AI, gave a presentation on how AI can help predict local flooding, particularly flooding around Boston's Logan Airport, from extreme precipitation events.

College of Engineering Distinguished Professor Jennifer Dy explained her research with hospitals in New York City and Boston using AI to detect skin cancer and to treat patients with chronic obstructive pulmonary disease.

"The nice thing about being an AI person in Massachusetts, and in Boston in particular, is that we have world-leading hospitals that are highly concentrated in the area," Dy said. "And with advances in AI, there's a lot that we can do together."

Taskin Padir, director of the Institute for Experiential Robotics at Northeastern, finished the presentations by showcasing a robotic arm with a gripper inspired by chopsticks that can help seafood processors sort and grade scallops.

"We are so fortunate to have you all here in Boston," Healey told the professors before she, Wu and Economic Development Secretary Yvonne Hao took turns operating the robot. "I know they'll appreciate this in New Bedford."

Healey said repeatedly during the event that she wants Massachusetts to become a global hub for applied AI, finding real-world applications for AI, just as the state is a hub for the life sciences.

She said a $2.8 billion economic development bond bill she has proposed called the Mass Leads Act is crucial to this goal.

The bill includes $100 million to leverage AI to spur technological advances in the life sciences, health care, advanced manufacturing and robotics sectors, support incubation of AI startups, advance AI software and hardware tech development, and support commercialization activities. This funding would also incentivize public-private partnerships between industry and academia.

"It's about taking that knowledge and making it practical," Healey said.

Also crucial to the goal are universities like Northeastern, Healey said, referencing the Huskies who recently completed projects for the state through the AI for Impact Co-op Program.

"What I saw with the students is it can just cut exponentially the amount of time it takes to get answers to people," Healey said, referencing the students' projects, which included work for The Ride paratransit service and streamlining the grants process with the Office of Energy and Environmental Affairs.

"It will get better service and better answers to customers, whether they're people looking for grant funding, people looking for permitting, you know, or other customers that we serve as a government, and it's really exciting," she said.

"Our universities are really our secret sauce, and they have been for so many parts of our economy," Hao added.

She noted that Massachusetts leads the country in AI graduates per capita and is among the top states in total AI graduates.

"We have the talent here at our universities," Hao continued. "Working closely with our cities and our state government and all of our different private sectors, we can really lead here."


Researchers Used Artificial Intelligence To Identify Three New Distinct Subtypes Of Parkinson's Disease In A Groundbreaking New Study – Chip Chick

Over 10 million people around the world are living with Parkinson's disease, according to the Parkinson's Foundation. The causes of this neurodegenerative disorder remain largely unknown.

But artificial intelligence is helping expand scientists' understanding of this complex condition and how to treat it.

Using machine learning techniques, researchers at Weill Cornell Medicine have identified three new distinct subtypes of Parkinson's disease based on the rate of symptom progression. The discovery could lead to more tailored treatments based on individual patient symptoms.

"Parkinson's disease is highly heterogeneous, which means that people with the same disease can have very different symptoms," explained Dr. Fei Wang, the study's senior author.

"This indicates there is not likely to be a one-size-fits-all approach to treating it. We may need to consider customized treatment strategies based on a patient's disease subtype."

The three new subtypes are known as Inching Pace, Moderate Pace, and Rapid Pace.

The Inching Pace (PD-I) subtype affects approximately 36% of patients and has mild symptoms that progress gradually. The Moderate Pace (PD-M) subtype affects approximately 51% of patients and begins with milder symptoms that progress at a moderate pace. Lastly, the Rapid Pace (PD-R) subtype progresses the quickest.

The researchers used deep learning, a form of artificial intelligence capable of analyzing massive datasets to uncover patterns that might elude human detection, to discover these subtypes.

By examining anonymized clinical records from two sizable databases, the researchers identified these three distinct patterns of Parkinson's progression.
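The study's actual method is deep learning on large clinical databases; as a much simpler, purely illustrative stand-in, a one-dimensional k-means clustering of synthetic symptom-progression rates shows how unsupervised grouping can surface subtypes from data (all numbers below are made up, not from the study):

```python
import random

def kmeans_1d(values, k=3, iters=50, seed=0):
    """Tiny 1-D k-means: group patients by symptom-progression rate."""
    rng = random.Random(seed)
    centers = rng.sample(values, k)
    for _ in range(iters):
        # Assign each value to its nearest center.
        clusters = [[] for _ in range(k)]
        for v in values:
            idx = min(range(k), key=lambda i: abs(v - centers[i]))
            clusters[idx].append(v)
        # Recompute centers as cluster means (keep old center if empty).
        centers = [sum(c) / len(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return sorted(centers)

# Synthetic progression rates (severity points per year) for three
# made-up groups of slow, moderate and rapid progressors.
rng = random.Random(1)
rates = ([rng.gauss(1.0, 0.2) for _ in range(36)]
         + [rng.gauss(3.0, 0.4) for _ in range(51)]
         + [rng.gauss(6.0, 0.6) for _ in range(13)])

print(kmeans_1d(rates))  # three cluster centers, sorted ascending
```

On this toy data the three recovered centers fall near the slow, moderate and rapid group means, mirroring (very loosely) how progression-rate clustering can separate an "Inching", "Moderate" and "Rapid" pace.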



AI Sparks a Creative Revolution in Business, With an Unexpected Twist

In the race to harness artificial intelligence (AI), businesses are discovering an unexpected wrinkle: AI that sparks individual brilliance may be flattening the creative landscape. As companies from tech startups to Madison Avenue ad agencies embrace these digital muses, they're grappling with a paradox that could reshape innovation and their bottom lines.

A recent U.K. study on AI-assisted short story writing has thrown a wrench into the notion that machines will simply replace human ingenuity. The research, conducted by a team at the University of Cambridge, found that while AI can serve as a powerful muse for individual creators, its widespread adoption may paradoxically lead to a decline in overall creative output. This surprising finding has executives and creatives alike questioning whether the rush to embrace AI could inadvertently be programming businesses into a creative corner.

"What distinguishes today's AI, particularly generative AI, is its dual role in not only boosting efficiency but also fostering creativity," Sarah Hoffman, AI evangelist at AlphaSense, told PYMNTS. This duality is at the heart of the creative conundrum facing industries from advertising to product design.

Experts say AI's role as a creativity catalyst is reshaping workflows and profit margins across industries. From advertising firms churning out campaigns at breakneck speeds to product designers iterating prototypes in days instead of months, the technology is compressing timelines and expanding possibilities. This AI-powered efficiency is allowing businesses to respond more nimbly to market trends, potentially translating into faster time-to-market and increased revenues.

The study of 300 aspiring authors reveals AI's double-edged impact on creativity. When tasked with crafting micro-stories for young adults, AI assistance significantly boosted the less creative writers' output, making their work up to 26.6% better written and 15.2% less boring. The digital muse, however, left the more naturally creative wordsmiths' talents largely untouched.

But here's the plot twist: AI might enhance personal creativity but could dull the collective creative edge. Researchers found AI-assisted stories shared more similarities, potentially leading to a sea of sameness in the creative landscape. As businesses embrace this digital inspiration, they face a new challenge: harnessing AI's power to elevate individual performance without sacrificing the diverse, innovative thinking that drives industries forward.
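One simple way such convergence can be quantified, purely as an illustration and not the study's actual methodology, is mean pairwise vocabulary overlap: a batch of stories that all lean on the same phrasing scores higher than a batch with diverse wording (the sample stories below are invented):

```python
def jaccard(a: str, b: str) -> float:
    """Word-set overlap: 1.0 = identical vocabulary, 0.0 = disjoint."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / len(wa | wb)

def mean_pairwise_similarity(stories):
    """Average Jaccard similarity over all story pairs in a batch."""
    pairs = [(i, j) for i in range(len(stories))
             for j in range(i + 1, len(stories))]
    return sum(jaccard(stories[i], stories[j]) for i, j in pairs) / len(pairs)

# Invented examples: diverse human drafts vs. convergent assisted drafts.
human = ["the fox slept under ancient pines",
         "a clockmaker taught starlings to sing",
         "rain erased the chalk city overnight"]
assisted = ["the young hero found a hidden door",
            "the young hero opened a hidden door",
            "the young hero walked through a hidden door"]

print(mean_pairwise_similarity(human))     # low: little shared vocabulary
print(mean_pairwise_similarity(assisted))  # higher: overlapping phrasing
```

A rising batch-level similarity score is exactly the kind of signal the "sea of sameness" concern points at, even when each individual story reads well on its own.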

The paradox is evident in the world of visual art. "AI allows you to iterate very quickly and test many ideas in a short period of time, which should potentially expand our creative horizons," Sergei Belousov, lead AI/ML research engineer at ARTA, an AI image generator, told PYMNTS. Yet he cautions: "If everyone uses the same AI tools, you can ultimately experience a decline in creativity and individuality because creative pieces will depend on the characteristics of the AI you utilize."

This homogenization effect is already being observed. "AI is already impacting creative industries, and while it is saving time and money for brands, the output tends to be homogeneous," Sabrina H. Williams, data and communication program director at the University of South Carolina, told PYMNTS. She points to the advertising industry, where AI-generated campaigns risk blending into a sea of algorithmic sameness.

To navigate this new terrain, experts suggest a human-first approach. Williams recommends brainstorming away from digital tools, then using AI as a secondary step. This strategy aligns with Hoffman's view that AI can be an effective brainstorming partner that complements human creativity, especially given that current AI tools still hallucinate and can't be completely trusted.

A more tailored approach to AI implementation could also be key. "Invest in tailoring the AI tools to your business specifics and objectives," advised Belousov. "A company's internal data is its competitive advantage. It should fuel the training of your in-house AI in order to adapt it to the specifics of your business and optimize the outcomes."

As the creative landscape evolves, a balanced skill set becomes crucial. "Businesses need to ensure their employees have hard skills, of course, but also offer training in creative thinking and problem-solving," Williams said. This approach may be vital in industries like product design, where the human touch can differentiate a product in an increasingly AI-influenced market.



The Data That Powers A.I. Is Disappearing Fast – The New York Times

For years, the people building powerful artificial intelligence systems have used enormous troves of text, images and videos pulled from the internet to train their models.

Now, that data is drying up.

Over the past year, many of the most important web sources used for training A.I. models have restricted the use of their data, according to a study published this week by the Data Provenance Initiative, an M.I.T.-led research group.

The study, which looked at 14,000 web domains that are included in three commonly used A.I. training data sets, discovered "an emerging crisis in consent," as publishers and online platforms have taken steps to prevent their data from being harvested.

The researchers estimate that in the three data sets, called C4, RefinedWeb and Dolma, 5 percent of all data, and 25 percent of data from the highest-quality sources, has been restricted. Those restrictions are set up through the Robots Exclusion Protocol, a decades-old method for website owners to prevent automated bots from crawling their pages using a file called robots.txt.
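The mechanism itself is simple enough to demonstrate. A minimal sketch using Python's standard-library robots.txt parser on a hypothetical file (the bot name "ExampleAIBot" is illustrative, not one named in the study; real publisher files name agents such as GPTBot):

```python
from urllib.robotparser import RobotFileParser

# A hypothetical robots.txt in the style publishers now use to block
# AI crawlers entirely while still allowing other bots most pages.
robots_txt = """\
User-agent: ExampleAIBot
Disallow: /

User-agent: *
Disallow: /private/
"""

rp = RobotFileParser()
rp.parse(robots_txt.splitlines())

# The named AI crawler is barred from everything; other bots are only
# barred from /private/.
print(rp.can_fetch("ExampleAIBot", "https://example.com/article"))    # False
print(rp.can_fetch("SomeOtherBot", "https://example.com/article"))    # True
print(rp.can_fetch("SomeOtherBot", "https://example.com/private/x"))  # False
```

Compliance with robots.txt is voluntary on the crawler's side, which is why the study pairs it with the separate finding about terms-of-service restrictions.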

The study also found that as much as 45 percent of the data in one set, C4, had been restricted by websites' terms of service.

"We're seeing a rapid decline in consent to use data across the web that will have ramifications not just for A.I. companies, but for researchers, academics and noncommercial entities," said Shayne Longpre, the study's lead author, in an interview.



Responsible AI Principles. As Artificial Intelligence or AI become | by Hirdesh Baghel | Jul, 2024 – Medium


Let us discuss certain Responsible AI principles that we should know.

AI without unintended negative consequences:

These were the six Responsible AI principles we have looked at.

Thanks for Reading

"Wisdom is not a product of schooling but of the lifelong attempt to acquire it." (Albert Einstein)



Big Tech's AI Ambitions Face Reality Check, Report Shows

Despite big budgets and bold plans, a new survey conducted by PYMNTS Intelligence reveals most large companies are struggling to implement AI in meaningful ways, lagging behind in the race to leverage artificial intelligence for transformative business impact.

The findings, detailed in "The Impact of GenAI on a COO's Priorities," the third edition of PYMNTS Intelligence's 2024 CAIO Project, offer a sobering reality check for the AI revolution. Surveying chief operating officers from companies with at least $1 billion in annual revenue, the report uncovers a significant gap between the perceived potential of generative AI and its current applications in the corporate world.

"Seventy percent of COOs from firms surveyed, all with at least $1 billion in revenue, agree that GenAI is a critical part of strategic planning," the report stated. "Nonetheless, there is a gulf between aspiration and reality."

This disconnect between vision and execution is particularly striking, given AI's high profile in today's business landscape. With tech giants and startups alike touting its transformative power, many had expected to see more rapid and widespread adoption of advanced AI applications in large enterprises.

Instead of leveraging AI for high-level decision-making or innovative product development, many companies deploy the technology for more routine tasks. The survey found that nearly 6 in 10 COOs (58%) say their firms use GenAI for accessing information, while half of the executives say they use it with chatbots for customer service.

This focus on less complex applications extends to other areas as well. The report noted that 53% of COOs use AI technology to create data visualizations. However, the effectiveness of these applications varies, with 22% of respondents indicating that GenAI was not highly effective for this purpose.

The tendency to prioritize mundane tasks over more strategic applications is particularly evident in certain key business areas. "COOs are less likely to credit GenAI as necessary for production purposes, such as managing inventory or running logistics," the report states. Just 35% of COOs say GenAI is highly important for HR management and logistics.

This cautious approach to AI implementation may stem from a lack of familiarity with the technology's full capabilities. The survey revealed that 38% of COOs consider the need to familiarize themselves with the complete range of AI possibilities a barrier to implementation.

While many firms are playing it safe with their AI deployments, the report suggests that this conservative approach may limit their potential returns on investment. The report finds a clear correlation between strategic AI use and positive financial outcomes.

The report showed that 29% of the firms using the technology in highly impactful and strategic ways report very positive ROI. In contrast, just 8.8% of firms using GenAI for more routine and less impactful tasks reported positive ROI.

This disparity in outcomes highlights the potential benefits of more ambitious AI strategies. Companies willing to trust AI with more complex and consequential tasks reap greater rewards.

One example of this disconnect between potential and actual use is in code generation. The report classifies this as a medium-impact strategic use of AI, noting, "Although using the technology for code generation was highly effective according to all those who used it, just 18% of COOs reported generating code with GenAI."

Beyond its impact on business processes and financial outcomes, the adoption of AI also significantly affects workforce composition and skills requirements. Contrary to fears of widespread job losses due to automation, the survey suggests that AI is driving a shift in labor needs rather than simply eliminating positions.

The report found that 88% of COOs reported that their organization's need for analytically skilled workers has increased. This surge in demand for analytical talent comes even as 42% of COOs agree that using GenAI has decreased the company's need for lower-skilled workers.

This shift in workforce requirements presents challenges and opportunities for companies and employees. Firms may need to invest heavily in retraining and upskilling programs to ensure their workforce can effectively leverage AI technologies. Meanwhile, workers with strong analytical skills may be in increasingly high demand.

The focus on analytical skills aligns with the broader trend of data-driven decision-making in modern business. As AI systems generate more insights and predictions, companies need employees to interpret this information and translate it into actionable strategies.

Despite the challenges in implementation, COOs remain optimistic about AIs potential to drive efficiencies and reduce costs. The report showed that executives primarily focus on efficiency-related metrics when assessing their AI investments.

"Nearly all COOs surveyed, 92%, report using at least one measure of investment return that focuses on cost reduction, such as reduced operational costs, capital expenditures or headcount," the report stated. This emphasis on cost-cutting metrics outweighs measures of increased profits or market expansion, with only 70% of COOs citing profit-related measures of AI success.

This focus on efficiency gains may explain the current preference for using AI in more routine tasks, where the impact on costs is more immediately apparent and easier to quantify.

Looking Ahead

As companies continue to navigate the AI landscape, those who can effectively leverage the technology for strategic purposes may gain a significant competitive advantage. However, realizing this potential will require overcoming implementation hurdles, rethinking traditional approaches to workforce management, and taking calculated risks with more ambitious AI deployments.

The report concluded: "The opportunity is ripe for larger firms to focus their AI use in highly impactful ways and employ more analytically skilled workers to fill the gaps they are currently experiencing."



States strike out on their own on AI, privacy regulation – Maine Morning Star

As congressional sessions have passed without any new federal artificial intelligence laws, state legislators are striking out on their own to regulate the technologies in the meantime.

Colorado just signed into law one of the most sweeping AI regulations in the country, which sets guardrails for companies that develop and use AI. Its focus is mitigating consumer harm and discrimination by AI systems, and Gov. Jared Polis, a Democrat, said he hopes the conversations will continue at the state and federal level.

Other states, like New Mexico, have focused on regulating how computer generated images can appear in media and political campaigns. Some, like Iowa, have criminalized sexually charged computer-generated images, especially when they portray children.

"We can't just sit and wait," Delaware state Rep. Krista Griffith, D-Wilmington, who has sponsored AI regulation, told States Newsroom. "These are issues that our constituents are demanding protections on, rightfully so."

Griffith is the sponsor of the Delaware Personal Data Privacy Act, which was signed last year, and will take effect on Jan. 1, 2025. The law will give residents the right to know what information is being collected by companies, correct any inaccuracies in data or request to have that data deleted. The bill is similar to other state laws around the country that address how personal data can be used.

There has been no shortage of tech regulation bills in Congress, but none have passed. The 118th Congress saw bills relating to imposing restrictions on artificial intelligence models deemed high risk, creating regulatory authorities to oversee AI development, imposing transparency requirements on evolving technologies and protecting consumers through liability measures.

In April, a new draft of the American Privacy Rights Act of 2024 was introduced, and in May, the Bipartisan Senate Artificial Intelligence Working Group released a roadmap for AI policy that aims to support federal investment in AI while safeguarding against the risks of the technology.

Griffith also introduced a bill this year to create the Delaware Artificial Intelligence Commission, and said that if the state stands idly by, it will fall behind on these already quickly evolving technologies.

"The longer we wait, the more behind we are in understanding how it's being utilized, stopping or preventing potential damage from happening, or even not being able to harness some of the efficiency that comes with it that might help government services and might help individuals live better lives," Griffith said.

States have been legislating about AI since at least 2019, but bills relating to AI have increased significantly in the last two years. From January through June of this year, there have been more than 300 introduced, said Heather Morton, who tracks state legislation as an analyst for the nonpartisan National Conference of State Legislatures.

Also so far this year, 11 new states have enacted laws about how to use, regulate or place checks and balances on AI, bringing the total to 28 states with AI legislation.

Technologists have been experimenting with decision-making algorithms for decades; early frameworks date back to the 1950s. But generative AI, which can generate images, language and responses to prompts in seconds, is what has driven the industry in the last few years.

Many Americans have been interacting with artificial intelligence their whole lives, and industries like banking, marketing and entertainment have built much of their modern business practices upon AI systems. These technologies have become the backbone of huge developments like power grids and space exploration.

Most people are more aware of their smaller uses, like a company's online customer service chatbot or asking their Alexa or Google Assistant devices for information about the weather.

Rachel Wright, a policy analyst for the Council of State Governments, pinpointed a potential turning point in the public consciousness of AI, which may have added urgency for legislators to act.

"I think 2022 is a big year because of ChatGPT," Wright said. "It was kind of the first point in which members of the public were really interacting with an AI system or a generative AI system, like ChatGPT, for the first time."

Andrew Gamino-Cheong cofounded AI governance management platform Trustible early last year as the states began to pump out legislation. The platform helps organizations identify risky uses of AI and comply with regulations that have already been put in place.

Both state and federal legislators understand the risk in passing new AI laws: too many regulations on AI can be seen as stifling innovation, while unchecked AI could raise privacy problems or perpetuate discrimination.

Colorado's law is an example of this: it applies to developers of high-risk systems that make consequential decisions relating to hiring, banking and housing. It says these developers have a responsibility to avoid creating algorithms that could have biases against certain groups or traits. The law dictates that instances of this algorithmic discrimination must be reported to the attorney general's office.

At the time, Logan Cerkovnik, the founder and CEO of a Denver-based company, called the bill "wide-reaching but well-intentioned," saying his developers will have to think about how the major social changes in the bill are supposed to work.


"Are we shifting from actual discrimination to the risk of discrimination before it happens?" he added.

But Delaware's Rep. Griffith said that these life-changing decisions, like getting approved for a mortgage, should be transparent and traceable. If she's denied a mortgage due to a mistake in an algorithm, how could she appeal?

"I think that also helps us understand where the technology is going wrong," she said. "We need to know where it's going right, but we also have to understand where it's going wrong."

Some who work in the development of big tech see federal or state regulations of AI as potentially stifling to innovation. But Gamino-Cheong said he actually thinks some of this patchwork legislation by states could create pressure for some clear federal action from lawmakers who see AI as a huge growth area for the U.S.

"I think that's one area where the privacy and AI discussions could diverge a little bit, that there's a competitive, even national security angle, to investing in AI," he said.

Wright published research late last year on AIs role in the states, categorizing the approaches states were using to create protections around the technology. Many of the 29 laws enacted at that point focused on creating avenues for stakeholder groups to meet and collaborate on how to use and regulate AI. Others recognize possible innovations enabled by AI, but regulate data privacy.

Transparency, protection from discrimination and accountability are other major themes in the states' legislation. Since the start of 2024, laws have been passed that touch on the use of AI in political campaigns, schooling, crime data, sexual offenses and deepfakes (convincing computer-generated likenesses), broadening the scope of how a law can regulate AI. Now, 28 states have passed nearly 60 laws.

Here's a look at where legislation stands in July 2024, in broad categorization:

Many states have enacted laws that bring together lawmakers, tech industry professionals, academics and business owners to oversee and consult on the design, development and use of AI. Sometimes in the form of councils or working groups, they are often on the lookout for unintended, yet foreseeable, impacts of unsafe or ineffective AI systems. This includes Alabama (SB 78), California (AB 302), Colorado (SB 24-205), Illinois (HB 3563), Indiana (S 150), Louisiana (SCR 49), Maryland (S 818), New York (AB A4969, SB S3971B and A 8808), Oregon (H 4153), Tennessee (H 2325), Texas (HB 2060, 2023), Vermont (HB 378 and HB 410), Virginia (S 487), Wisconsin (S 5838) and West Virginia (H 5690).

Second most common are laws that look at data privacy and protect individuals from misuse of consumer data. Commonly, these laws create regulations about how AI systems can collect data and what they can do with it. These states include California (AB 375), Colorado (SB 21-190), Connecticut (SB 6 and SB 1103), Delaware (HB 154), Indiana (SB 5), Iowa (SF 262), Montana (SB 384), Oregon (SB 619), Tennessee (HB 1181), Texas (HB 4), Utah (S 149) and Virginia (SB 1392).


Some states have enacted laws that inform people that AI is being used. This is most commonly done by requiring businesses to disclose when and how it's in use. For example, an employer may have to get permission from employees to use an AI system that collects data about them. These states have transparency laws: California (SB 1001), Florida (S 1680), Illinois (HB 2557), and Maryland (HB 1202).

These laws often require that AI systems are designed with equity in mind and avoid algorithmic discrimination, where an AI system can contribute to different treatment of people based on race, ethnicity, sex, religion or disability, among other things. Often these laws play out in the criminal justice system, in hiring, in banking or other contexts where a computer algorithm is making life-changing decisions. This includes California (SB 36), Colorado (SB 21-169), Illinois (HB 0053), and Utah (H 366).

Laws focusing on AI in elections have been passed in the last two years, and primarily either ban messaging and images created by AI or at least require specific disclaimers about the use of AI in campaign materials. This includes Alabama (HB 172), Arizona (HB 2394), Idaho (HB 664), Florida (HB 919), New Mexico (HB 182), Oregon (SB 1571), Utah (SB 131), and Wisconsin (SB 664).

States that have passed laws relating to AI in education mainly provide requirements for the use of AI tools. Florida (HB 1361) outlines how tools may be used to customize and accelerate learning, and Tennessee (S 1711) instructs schools to create an AI policy for the 2024-25 school year which describes how the board will enforce its policy.

The states which have passed laws about computer-generated explicit images criminalize the creation of sexually explicit images of children with the use of AI. These include Iowa (HF 2240) and South Dakota (S 79).

While most of the AI laws enacted have focused on protecting users from the harms of AI, many legislators are also excited by its potential.

A recent study by the World Economic Forum found that artificial intelligence technologies could lead to the creation of about 97 million new jobs worldwide by 2025, outpacing the approximately 85 million jobs displaced by technology or machines.

Rep. Griffith is looking forward to digging more into the technology's capabilities in a working group, saying it's challenging to legislate about technology that changes so rapidly, but it's also fun.

"Sometimes the tendency when something's complicated or challenging or difficult to understand is like, you just want to run and stick your head under the blanket," she said. "But it's like, everybody stop. Let's look at it, let's understand it, let's read about it. Let's have an honest discussion about how it's being utilized and how it's helping."

Continued here:
States strike out on their own on AI, privacy regulation - Maine Morning Star


Top Deep Learning Interview Questions and Answers for 2024 – Simplilearn

The demand for Deep Learning has grown over the years and its applications are being used in every business sector. Companies are now on the lookout for skilled professionals who can use deep learning and machine learning techniques to build models that can mimic human behavior. According to Indeed, the average salary for a deep learning engineer in the United States is $133,580 per annum. In this tutorial, you will learn the top 45 Deep Learning interview questions that are frequently asked.

Check out some of the frequently asked deep learning interview questions below:

If you are going for a deep learning interview, you definitely know what exactly deep learning is. However, with this question the interviewer expects you to give an in-depth answer, with an example. Deep Learning involves taking large volumes of structured or unstructured data and using complex algorithms to train neural networks. It performs complex operations to extract hidden patterns and features (for instance, distinguishing the image of a cat from that of a dog).

Neural Networks replicate the way humans learn, inspired by how the neurons in our brains fire, only much simpler.

The most common Neural Networks consist of three network layers: an input layer, a hidden layer, and an output layer.

Each layer contains neurons called nodes, performing various operations. Neural Networks are used in deep learning algorithms like CNN, RNN, GAN, etc.

As in Neural Networks, MLPs have an input layer, a hidden layer, and an output layer. An MLP has the same structure as a single layer perceptron but with one or more hidden layers. A single layer perceptron can classify only linearly separable classes with binary output (0, 1), but an MLP can classify nonlinear classes.

Except for the input layer, each node in the other layers uses a nonlinear activation function. Each node takes the weighted sum of the previous layer's outputs plus a bias, and passes it through its activation function to produce an output. MLP uses a supervised learning method called backpropagation. In backpropagation, the neural network calculates the error with the help of a cost function and propagates this error backward through the network, adjusting the weights to train the model more accurately.

The process of standardizing and reforming data is called Data Normalization. Its a pre-processing step to eliminate data redundancy. Often, data comes in, and you get the same information in different formats. In these cases, you should rescale values to fit into a particular range, achieving better convergence.
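As a concrete illustration, min-max rescaling is one common way to bring values into a particular range; the sketch below is illustrative (the function name and default range are not from any specific library):

```python
def min_max_scale(values, lo=0.0, hi=1.0):
    # Linearly rescale so the smallest value maps to `lo`
    # and the largest maps to `hi`
    vmin, vmax = min(values), max(values)
    return [lo + (v - vmin) * (hi - lo) / (vmax - vmin) for v in values]

print(min_max_scale([10, 20, 30]))  # [0.0, 0.5, 1.0]
```

(A constant-valued input would need a guard against division by zero; this sketch omits it for brevity.)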

One of the most basic Deep Learning models is a Boltzmann Machine, resembling a simplified version of the Multi-Layer Perceptron. This model features a visible input layer and a hidden layer -- just a two-layer neural net that makes stochastic decisions as to whether a neuron should be on or off. Nodes are connected across layers, but no two nodes of the same layer are connected.

At the most basic level, an activation function decides whether a neuron should fire or not. It takes the weighted sum of the inputs plus a bias as its input. Step function, Sigmoid, ReLU, Tanh, and Softmax are examples of activation functions.

Also referred to as loss or error, the cost function is a measure of how good your model's performance is. It's used to compute the error of the output layer during backpropagation. We push that error backward through the neural network and use that during the different training functions.
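For example, mean squared error is one of the simplest cost functions; a minimal sketch:

```python
def mse(predictions, targets):
    # Mean squared error: average of the squared differences
    return sum((p - t) ** 2 for p, t in zip(predictions, targets)) / len(predictions)

print(mse([2.0, 3.0], [1.0, 5.0]))  # (1 + 4) / 2 = 2.5
```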

Gradient Descent is an optimization algorithm used to minimize the cost function or to minimize an error. The aim is to find the local or global minimum of a function. It determines the direction the model should take to reduce the error.
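The update rule can be sketched in a few lines; the learning rate, starting point, and example function here are illustrative choices:

```python
def gradient_descent(grad, x0, lr=0.1, steps=100):
    # Repeatedly step against the gradient to approach a minimum
    x = x0
    for _ in range(steps):
        x -= lr * grad(x)
    return x

# f(x) = (x - 3)**2, so f'(x) = 2 * (x - 3); the minimum is at x = 3
minimum = gradient_descent(lambda x: 2 * (x - 3), x0=0.0)
print(round(minimum, 4))  # 3.0
```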

This is one of the most frequently asked deep learning interview questions. Backpropagation is a technique to improve the performance of the network. It backpropagates the error and updates the weights to reduce the error.

In this deep learning interview question, the interviewer expects you to give a detailed answer.

In a Feedforward Neural Network, signals travel in one direction, from input to output. There are no feedback loops; the network considers only the current input. It cannot memorize previous inputs (e.g., CNN).

In a Recurrent Neural Network, signals travel in both directions, creating a looped network. It considers the current input along with the previously received inputs when generating the output of a layer, and it can memorize past data due to its internal memory.

The RNN can be used for sentiment analysis, text mining, and image captioning. Recurrent Neural Networks can also address time series problems such as predicting the prices of stocks in a month or quarter.
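A single recurrent step can be sketched as follows; the weights and toy sequence are arbitrary illustrative values:

```python
import math

def rnn_step(x, h_prev, w_x=0.8, w_h=0.5, b=0.0):
    # One recurrent update: the new hidden state mixes the current
    # input with the previous hidden state (the network's "memory")
    return math.tanh(w_x * x + w_h * h_prev + b)

h = 0.0
for x in [1.0, 0.5, -0.2]:   # a toy input sequence
    h = rnn_step(x, h)
print(-1.0 < h < 1.0)  # True: tanh keeps the state bounded
```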

Softmax is an activation function that generates the output between zero and one. It divides each output, such that the total sum of the outputs is equal to one. Softmax is often used for output layers.

ReLU (or Rectified Linear Unit) is the most widely used activation function. It gives an output of X if X is positive and zero otherwise. ReLU is often used for hidden layers.
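Both activation functions are easy to write out in plain Python; this is a sketch (subtracting the max inside softmax is a standard numerical-stability trick, not required by the definition):

```python
import math

def relu(x):
    # max(0, x): passes positives through, zeros out negatives
    return max(0.0, x)

def softmax(xs):
    # Normalize exponentials so the outputs sum to one
    m = max(xs)  # subtracted for numerical stability
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

print(relu(-2.0), relu(3.5))            # 0.0 3.5
print(round(sum(softmax([1.0, 2.0, 3.0])), 6))  # 1.0
```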

This is another frequently asked deep learning interview question. With neural networks, youre usually working with hyperparameters once the data is formatted correctly. A hyperparameter is a parameter whose value is set before the learning process begins. It determines how a network is trained and the structure of the network (such as the number of hidden units, the learning rate, epochs, etc.).

When your learning rate is too low, training of the model will progress very slowly as we are making minimal updates to the weights. It will take many updates before reaching the minimum point.

If the learning rate is set too high, the drastic updates to the weights cause undesirable divergent behavior in the loss function. The model may fail to converge (never settle on a good output) or even diverge (the updates are too chaotic for the network to train).

Dropout is a technique of dropping out hidden and visible units of a network randomly to prevent overfitting of data (typically dropping 20 percent of the nodes). It doubles the number of iterations needed to converge the network.
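A sketch of "inverted" dropout, one common formulation; the scaling by 1/(1 - rate) keeps the expected activation unchanged (function name and values are illustrative):

```python
import random

def dropout(activations, rate=0.2, rng=random):
    # Zero each unit with probability `rate`; scale survivors by
    # 1/(1 - rate) so the expected activation is unchanged
    keep = 1.0 - rate
    return [a / keep if rng.random() < keep else 0.0 for a in activations]

random.seed(0)
out = dropout([1.0, 1.0, 1.0, 1.0, 1.0])
print(all(v in (0.0, 1.25) for v in out))  # True: kept-and-scaled or dropped
```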

Batch normalization is the technique to improve the performance and stability of neural networks by normalizing the inputs in every layer so that they have mean output activation of zero and standard deviation of one.
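The per-feature computation can be sketched as follows (`eps` is the usual small constant guarding against division by zero; names are illustrative):

```python
import math

def batch_norm(batch, eps=1e-5):
    # Shift and scale a batch of activations to zero mean, unit variance
    mean = sum(batch) / len(batch)
    var = sum((x - mean) ** 2 for x in batch) / len(batch)
    return [(x - mean) / math.sqrt(var + eps) for x in batch]

normed = batch_norm([2.0, 4.0, 6.0, 8.0])
print(abs(sum(normed)) < 1e-9)  # True: the mean is (approximately) zero
```

(In a real layer, learned scale and shift parameters are applied after this normalization; they are omitted here.)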

The next step on this top Deep Learning interview questions and answers blog will be to discuss intermediate questions.

Batch Gradient Descent computes the gradient using the entire dataset. It takes time to converge because the volume of data is huge and the weights update slowly.

Stochastic Gradient Descent computes the gradient using a single sample. It converges much faster than batch gradient descent because it updates the weights more frequently.

Overfitting occurs when the model learns the details and noise in the training data to the degree that it adversely impacts the execution of the model on new information. It is more likely to occur with nonlinear models that have more flexibility when learning a target function. An example would be if a model is looking at cars and trucks, but only recognizes trucks that have a specific box shape. It might not be able to notice a flatbed truck because there's only a particular kind of truck it saw in training. The model performs well on training data, but not in the real world.

Underfitting refers to a model that is neither well-trained on the data nor able to generalize to new information. This usually happens when there is too little or incorrect data to train the model. An underfit model shows both poor performance and poor accuracy.

To combat overfitting and underfitting, you can resample the data to estimate the model accuracy (k-fold cross-validation) and hold out a validation dataset to evaluate the model.
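A sketch of how k-fold splits can be generated (the helper name is illustrative; libraries such as scikit-learn provide equivalents):

```python
def k_fold_splits(n, k):
    # Partition indices 0..n-1 into k folds; each fold serves once
    # as the validation set while the rest form the training set
    fold_sizes = [n // k + (1 if i < n % k else 0) for i in range(k)]
    idx, start, splits = list(range(n)), 0, []
    for size in fold_sizes:
        val = idx[start:start + size]
        train = idx[:start] + idx[start + size:]
        splits.append((train, val))
        start += size
    return splits

splits = k_fold_splits(10, 5)
print(len(splits), len(splits[0][1]))  # 5 2
```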

There are two methods here: we can either initialize the weights to zero or assign them randomly.

Initializing all weights to 0: This makes your model similar to a linear model. All the neurons and every layer perform the same operation, giving the same output and making the deep net useless.

Initializing all weights randomly: Here, the weights are assigned randomly by initializing them very close to 0. It gives better accuracy to the model since every neuron performs different computations. This is the most commonly used method.
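A sketch of small random initialization (the 0.01 scale is an illustrative choice); the comment notes why all-zero initialization fails:

```python
import random

def init_weights(n_in, n_out, scale=0.01, rng=random):
    # Small random values near zero break the symmetry: if every weight
    # started at 0, all neurons would compute identical outputs and
    # receive identical gradient updates
    return [[rng.uniform(-scale, scale) for _ in range(n_out)]
            for _ in range(n_in)]

random.seed(42)
w = init_weights(3, 4)
print(len(w), len(w[0]))  # 3 4
```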

There are four layers in a CNN: the convolutional layer, the ReLU (activation) layer, the pooling layer, and the fully connected layer.

Pooling is used to reduce the spatial dimensions of a CNN. It performs down-sampling operations to reduce the dimensionality and creates a pooled feature map by sliding a filter matrix over the input matrix.
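For example, 2x2 max pooling with stride 2 halves each spatial dimension by keeping only the largest value in each window; a minimal sketch:

```python
def max_pool2x2(mat):
    # Slide a 2x2 window with stride 2 and keep the max of each window
    return [
        [max(mat[i][j], mat[i][j + 1], mat[i + 1][j], mat[i + 1][j + 1])
         for j in range(0, len(mat[0]), 2)]
        for i in range(0, len(mat), 2)
    ]

grid = [[1, 2, 3, 4],
        [5, 6, 7, 8],
        [9, 10, 11, 12],
        [13, 14, 15, 16]]
print(max_pool2x2(grid))  # [[6, 8], [14, 16]]
```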

Long Short-Term Memory (LSTM) is a special kind of recurrent neural network capable of learning long-term dependencies; remembering information for long periods is its default behavior. There are three steps in an LSTM network: the network decides what to forget from the previous cell state, decides which new information to store in the cell state, and decides what to output.

While training an RNN, your slope can become either too small or too large; this makes the training difficult. When the slope is too small, the problem is known as a Vanishing Gradient. When the slope tends to grow exponentially instead of decaying, its referred to as an Exploding Gradient. Gradient problems lead to long training times, poor performance, and low accuracy.
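One common remedy for exploding gradients is gradient clipping; a sketch (the threshold and function name are illustrative):

```python
import math

def clip_by_norm(grads, max_norm=1.0):
    # Rescale the gradient vector whenever its L2 norm exceeds max_norm,
    # so a single huge gradient cannot blow up the weight update
    norm = math.sqrt(sum(g * g for g in grads))
    if norm > max_norm:
        return [g * max_norm / norm for g in grads]
    return grads

print(clip_by_norm([3.0, 4.0]))  # [0.6, 0.8] -- norm reduced from 5 to 1
```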

TensorFlow provides both C++ and Python APIs, making it easier to work with, and it has a faster compilation time than other deep learning libraries like Keras and Torch. TensorFlow supports both CPU and GPU computing devices.

This is another frequently asked deep learning interview question. A tensor is a mathematical object represented as an array of higher dimensions. These arrays of data, with different dimensions and ranks, fed as input to the neural network, are called tensors.

Constants - Constants are parameters whose value does not change. To define a constant we use the tf.constant() command. For example:

a = tf.constant(2.0, tf.float32)

b = tf.constant(3.0)

print(a, b)

Variables - Variables allow us to add new trainable parameters to graph. To define a variable, we use the tf.Variable() command and initialize them before running the graph in a session. An example:

W = tf.Variable([.3], dtype=tf.float32)

b = tf.Variable([-.3], dtype=tf.float32)

Placeholders - these allow us to feed data to a TensorFlow model from outside the model. They permit a value to be assigned later. To define a placeholder, we use the tf.placeholder() command. An example:

a = tf.placeholder(tf.float32)

b = a*2

with tf.Session() as sess:

    result = sess.run(b, feed_dict={a: 3.0})

    print(result)

Sessions - a session is run to evaluate the nodes. This is called the TensorFlow runtime. For example:

a = tf.constant(2.0)

b = tf.constant(4.0)

c = a+b

# Launch Session

sess = tf.Session()

# Evaluate the tensor c

print(sess.run(c))
Everything in TensorFlow is based on creating a computational graph. It has a network of nodes, where nodes represent mathematical operations and edges represent tensors. Since data flows in the form of a graph, it is also called a DataFlow Graph.

Suppose there is a wine shop purchasing wine from dealers, which they resell later. But some dealers sell fake wine. In this case, the shop owner should be able to distinguish between fake and authentic wine.

The forger will try different techniques to sell fake wine and make sure specific techniques go past the shop owner's check. The shop owner would probably get some feedback from wine experts that some of the wine is not original. The owner would have to improve how he determines whether a wine is fake or authentic.

The forger's goal is to create wines that are indistinguishable from the authentic ones, while the shop owner intends to tell accurately whether the wine is real or not.


There is a noise vector coming into the forger who is generating fake wine.

Here the forger acts as a Generator.

The shop owner acts as a Discriminator.

The Discriminator gets two inputs; one is the fake wine, while the other is the real authentic wine. The shop owner has to figure out whether it is real or fake.

So, there are two primary components of a Generative Adversarial Network (GAN): the Generator and the Discriminator.

The generator is a CNN that keeps producing images that come ever closer in appearance to the real images, while the discriminator tries to determine the difference between real and fake images. The ultimate aim is to make the discriminator learn to identify real and fake images.

An autoencoder is a neural network with three layers in which the input neurons are equal to the output neurons. The network's target output is the same as the input. It uses dimensionality reduction to restructure the input: it works by compressing the input to a latent-space representation and then reconstructing the output from this representation.

Bagging and Boosting are ensemble techniques that train multiple models using the same learning algorithm and then combine their results.

With Bagging, we take a dataset and split it into training data and test data. Then we randomly select data to place into the bags and train the model separately.

With Boosting, the emphasis is on selecting the data points that give the wrong output, in order to improve the accuracy.
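The two ingredients of bagging, bootstrap sampling and vote aggregation, can be sketched as follows (helper names are illustrative):

```python
import random
from collections import Counter

def bootstrap_sample(data, rng=random):
    # Each "bag" samples the dataset with replacement, so the models
    # trained on different bags see different views of the data
    return [rng.choice(data) for _ in data]

def majority_vote(predictions):
    # Bagging combines the models' outputs, e.g. by majority vote
    return Counter(predictions).most_common(1)[0][0]

print(majority_vote(["truck", "car", "truck"]))  # truck
```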

Read more here:
Top Deep Learning Interview Questions and Answers for 2024 - Simplilearn


Google AI heavyweight Jeff Dean talks about algorithmic breakthroughs and data center emissions – Fortune

Google sent a jolt of unease into the climate change debate this month when it disclosed that emissions from its data centers rose 13% in 2023, citing the AI transition in its annual environmental report. But according to Jeff Dean, Google's chief scientist, the report doesn't tell the full story and gives AI more than its fair share of blame.

Dean, who is chief scientist at both Google DeepMind and Google Research, said that Google is not backing off its commitment to be powered by 100% clean energy by the end of 2030. But, he said, that progress is not necessarily a linear thing, because some of Google's work with clean energy providers will not come online until several years from now.

"Those things will provide significant jumps in the percentage of our energy that is carbon-free energy, but we also want to focus on making our systems as efficient as possible," Dean said at Fortune's Brainstorm Tech conference on Tuesday, in an onstage interview with Fortune's AI editor Jeremy Kahn.

Dean went on to make the larger point that AI is not as responsible for increasing data center usage, and thus carbon emissions, as critics make it out to be.

"There's been a lot of focus on the increasing energy usage of AI, and from a very small base that usage is definitely increasing," Dean said. "But I think people often conflate that with overall data center usage, of which AI is a very small portion right now but growing fast, and then attribute the growth rate of AI-based computing to the overall data center usage."

Dean said that it's important to examine all the data and the true trends that underlie this, though he did not elaborate on what those trends were.

One of Googles earliest employees, Dean joined the company in 1999 and is credited with being one of the key people who transformed its early internet search engine into a powerful system capable of indexing the internet and reliably serving billions of users. Dean cofounded the Google Brain project in 2011, spearheading the companys efforts to become a leader in AI. Last year, Alphabet merged Google Brain with DeepMind, the AI company Google acquired in 2014, and made Dean chief scientist reporting directly to CEO Sundar Pichai.

By combining the two teams, Dean said, the company has a better set of ideas to build on and can pool the compute "so that we focus on training one large-scale effort like Gemini rather than multiple fragmented efforts."

Dean also responded to a question about the status of Google's Project Astra, a research project which DeepMind leader Demis Hassabis unveiled in May at Google I/O, the company's annual developer conference. Described by Hassabis as a universal AI agent that can understand the context of a user's environment, a video demonstration of Astra showed how users could point their phone camera at nearby objects and ask the AI agent relevant questions such as "What neighborhood am I in?" or "Did you see where I left my glasses?"

At the time, the company said the Astra technology will come to the Gemini app later this year. But Dean put it more conservatively: "We're hoping to have something out into the hands of test users by the end of the year," he said.

"The ability to combine Gemini models with models that actually have agency and can perceive the world around you in a multimodal way is going to be quite powerful," Dean said. "We're obviously approaching this responsibly, so we want to make sure that the technology is ready and that it doesn't have unforeseen consequences, which is why we'll roll it out first to a smaller set of initial test users."

As for the continued evolution of AI models, Dean noted that additional data and computing power alone will not suffice. "A couple more generations of scaling will get us considerably farther," Dean said, but eventually there will be a need for some additional algorithmic breakthroughs.

Dean said his team has long focused on ways to combine scaling with algorithmic approaches in order to improve factuality and reasoning capabilities, so that the model can imagine plausible outputs and reason its way through which one makes the most sense.

Those kinds of advances, Dean said, will be important to really make these models robust and more reliable than they already are.

Read more coverage from Brainstorm Tech 2024:

Wiz CEO says consolidation in the security market is truly a necessity as reports swirl of $23 billion Google acquisition

Why Grindr's CEO believes synthetic employees are about to unleash a brutal talent war for tech startups

Experts worry that a U.S.-China cold war could turn hot: Everyone's waiting for the shoe to drop in Asia

Here is the original post:
Google AI heavyweight Jeff Dean talks about algorithmic breakthroughs and data center emissions - Fortune
