
The 10 Best AI Courses That Are Worth Taking in 2024 – TechRepublic

Since ChatGPT proved a consumer hit, it has set off a gold rush for AI in Silicon Valley. Investors are intrigued by companies promising that generative AI will transform the world, and companies seek workers with the skills to bring them into the future. The frenzy may be cooling in 2024, but AI skills are still hot in the tech market.

Looking to join the AI industry? The best route into the profession for each learner will depend on that person's current skill level and their target skill or job title.

When assessing online courses, we examined the reliability and popularity of the provider, the depth and variety of topics offered, the practicality of the information, the cost and the duration. The courses and certification programs vary widely, so choose the options that are right for you or your business.

They are listed in order of skill level and, within each skill level, alphabetically. In most cases, each provider offers multiple courses covering different aspects of generative AI. Explore these generative AI courses to see which might fit your niche.

A name learners are likely to see on AI courses a lot is Andrew Ng; he is an adjunct professor at Stanford University, founder of DeepLearning.AI and cofounder of Coursera. Ng is one of the authors of a 2009 paper on using GPUs for deep learning, an approach NVIDIA and other companies are now using to transform AI hardware. Ng is the instructor and driving force behind AI for Everyone, a popular, self-paced course in which more than one million people have enrolled. AI for Everyone from Coursera contains four modules:

For individuals, a Coursera account is $49-$79 per month with a 7-day free trial, depending on the course and plan. However, the AI for Everyone course can be taken for free; the $79 per month fee provides access to graded assignments and earning a certificate.

Coursera states the class takes six hours to complete.

This course has no prerequisites.

Are you a C-suite leader looking to shape your company's vision for generative AI? If so, this non-technical course helps business leaders build a top-down philosophy around generative AI projects. It could be useful for sparking conversation between business and technical leaders.

Free if completed within the Coursera 7-day trial. Otherwise, a Coursera account is $49-$79 per month, depending on the course and plan.

This course takes about one hour.

There are no prerequisites for this course.

This is a well-reviewed beginner course that sets itself apart by approaching AI holistically, including its practical applications and potential social impact. It includes hands-on exercises but doesn't require the learner to know how to code, making it a good mix of practical and beginner content. DataCamp's Understanding Artificial Intelligence course is particularly interesting because it includes a section on business and enterprise. Business leaders looking for a non-technical explanation of the infrastructure and skills they need to harness AI might be interested in this course.

This course can be accessed with a DataCamp subscription, which costs $25 per person per month, billed annually. Educators can get a group subscription for free.

Including videos and exercises, this course lasts about two hours.

This course has no prerequisites.

Google Cloud's Introduction to Generative AI Learning Path covers what generative AI and large language models are for beginners. Since it's from Google, it provides some specific Google applications used to build generative AI: Google Tools and Vertex AI. It includes a section on responsible AI, inviting the learner to consider ethical practices around the generative AI they may go on to create. Completing this learning path will award the Prompt Design in Vertex AI skill badge.

Another option from Google Cloud is the Generative AI for Developers Learning Path.

This course is free.

The path technically contains 8 hours and 30 minutes of content, but some of that content is quizzes. The time it takes for each individual to complete the path may vary.

The path has no prerequisites.

Since this course is taught by an IBM professional, it is likely to include contemporary, real-world insight into how generative AI and machine learning are used today. It is an eight-hour course that covers a wide range of topics around artificial intelligence, including ethical concerns. Introduction to Artificial Intelligence includes quizzes and can contribute to career certificates in a variety of programs from Coursera.

Free if completed within the 7-day Coursera free trial, or $49-$79 per month afterward, depending on the course and plan. Financial aid is available.

Coursera estimates this course will take about eight hours.

There are no prerequisites for this course.

AWS offers a lot of AI-related courses and programs, but we chose this one because it combines fundamentals (the first two courses in the developer kit) with hands-on knowledge and training on specific AWS products. This could be very practical for someone whose organization already works with multiple AWS products but wants to expand into more generative AI products and services. This online, self-guided kit includes hands-on labs and AWS Jam challenges, which are gamified and AI-powered experiences.

The AWS Generative AI Developer Kit is part of the AWS Skill Builder subscription. AWS Skill Builder is accessible with a 7-day trial, after which it costs $29 per month or $449 per year.

The courses take 16 hours and 30 minutes to complete.

This course is appropriate for professionals who have not worked with generative AI before, but it would help to have worked within the AWS ecosystem. In particular, Amazon Bedrock is discussed at such a level that it would be beneficial to have completed the course AWS Technical Essentials or have comparable real-world experience.

Harvard's online professional certificate combines the venerable university's Introduction to Computer Science course with another course tailored to careers in AI: Introduction to Artificial Intelligence with Python. This certification is suitable for people who want to become software developers with a focus on AI. The courses are self-paced, and students will receive pre-recorded instruction from Harvard University faculty.

Both courses together cost $466.20 as of the time of writing; this is a discounted price from the usual $518. Learners can take both courses in the certification for free, but the certification itself requires a fee.

These courses are self-paced, but the estimated time for completion is five months at 7-22 hours per week.

There are no prerequisites, although a high-school level of experience with programming basics would likely provide a solid foundation. The Introduction to Computer Science course covers algorithms and programming in C, Python, SQL and JavaScript, as well as CSS and HTML.

"MIT has played a leading role in the rise of AI and the new category of jobs it is creating across the world economy," the description of the program states, summing up the educational legacy behind this course. MIT's AI and machine learning certification course for professionals is taught by MIT faculty who are working at the cutting edge of the field.

This certification program is comparable to a traditional college course, and that level of commitment is reflected in the price.

If a learner completes at least 16 days of qualifying courses, they will be eligible to receive the certificate. Courses are typically taught in June, July and August, online or on MIT's campus.

There is an application fee of $325. The two mandatory courses are:

The remaining required 11 days can be composed of elective classes, which last between two and five days each and cost between $2,500 and $4,700 each.

16 days.

The Professional Certificate Program in Machine Learning & Artificial Intelligence is designed for technical professionals with at least three years of experience in computer science, statistics, physics or electrical engineering. In particular, MIT recommends this program for anyone whose work intersects with data analysis or for managers who need to learn more about predictive modeling.

Completion of the academically rigorous Stanford Artificial Intelligence Professional Program will result in a certification. This program is suitable for professionals who want to learn how to build AI models from scratch and then fine-tune them for their businesses. In addition, it helps professionals understand research results and conduct their own research on AI. This program offers one-on-one time with professionals in the industry and some flexibility: learners can take all eight courses in the program or choose individual courses.

The individual courses are:

The Stanford Artificial Intelligence Professional Program costs $1,750 per course. Learners who complete three courses will earn a certificate.

Each course lasts 10 weeks at 10 to 15 hours per week. Courses are held on set dates.

Interested professionals can submit an application; applicants are asked to prove competence in the following areas:

Udacity's Artificial Intelligence Nanodegree program equips graduates with practical knowledge of how to solve mathematical problems using artificial intelligence. This class isn't about generative AI models; instead, it teaches the underpinnings of traditional search algorithms, probabilistic graphical models, and planning and scheduling systems. Learners who complete this course will gain experience working with the types of algorithms used in the real world for:

This course costs $249 per month when paid monthly, or $846 up front for the first four months of the subscription, after which it costs $249 per month.

This course lasts about three months.

Learners in this course should have a background in programming and mathematics. The following skills are recommended:

Whether it is worth taking an AI course depends on many factors: the course, the individual and the job market. For instance, getting an AI-focused certification might contribute to getting a salary increase or making a career change. AI courses could help someone learn AI skills that might be a good fit for their abilities, or could be the first step toward a lucrative and life-long career. Educating oneself in a contemporary topic can always have some benefits in terms of practicing new skills.

Some introductory AI courses do not require coding; however, AI is a relatively complex topic in computing, and practitioners will need some programming skills as they progress to more advanced courses and learn how to build and deploy AI models. Most likely, intermediate learners need to be comfortable working in Python.


Some of these courses and certifications include education in basic programming and computer science. More advanced courses and certifications will require learners to already have a college-level knowledge of calculus, linear algebra, probability and statistics, as well as coding.


Beauty Reviews Were Already Suspect. Then Came Generative AI. – The Business of Fashion

For beauty shoppers, it was already hard enough to trust reviews online.

Brands such as Sunday Riley and Kylie Skin are among those to have been caught up in scandals over fake reviews, with Sunday Riley admitting in a 2018 incident that it had tasked employees with writing five-star reviews of its products on Sephora. It downplayed the misstep at the time, arguing it would have been impossible to post even a fraction of the hundreds of thousands of Sunday Riley reviews on platforms around the globe.

Today, however, that's increasingly plausible with generative artificial intelligence.

Text-generating tools like ChatGPT, which hit the mainstream just over a year ago, make it easier to mimic real reviews faster, better and at greater scale than ever before, creating more risk of shoppers being taken in by bogus testimonials. Sometimes there are dead giveaways. "As an AI language model, I don't have a body, but I understand the importance of comfortable clothing during pregnancy," began one Amazon review of maternity shorts spotted by CNBC. But often there's no way to know.

"Back in the day, you would see broken grammar and you'd think, 'That doesn't look right. That doesn't sound human,'" said Saoud Khalifah, a former hacker and founder of Fakespot, an AI-powered tool to identify fake reviews. "But over the years we've seen that drop off. These fake reviews are getting much, much better."

Fake reviews have become an industry in themselves, driven by fraud farms that act as syndicates, according to Khalifah. A 2021 report by Fakespot found roughly 31 percent of reviews across Amazon, Sephora, Walmart, eBay, Best Buy and sites powered by Shopify (which together accounted for more than half of US online retail sales that year) to be unreliable.

It isn't just bots that are compromising trust in beauty reviews. The beauty industry already relies heavily on incentivised human reviewers, who receive a free product or discount in exchange for posting their opinion. It can be a valuable way for brands to get new products into the hands of their target audience and boost their volume of reviews, but consumers are increasingly suspicious of incentivised reviews, so brands should use them strategically, and should always explicitly declare them.

Sampling and review syndicators such as Influenster are keen to point out that receiving a free product does not oblige the reviewer to give positive feedback, but it's clear from the exchanges in online communities that many users of these programmes believe they will receive more freebies if they write good reviews. As one commenter wrote in a post in Sephora's online Beauty Insider community, "People don't want to stop getting free stuff if they say honest or negative things about the products they receive for free."

That practice alone can skew the customer rating of a product. On Sephora, for example, the new Ouai Hair Gloss In-Shower Shine Treatment has 1,182 reviews and a star rating of 4.3. But when filtering out incentivised reviews, just 89 remain. Sephora also doesn't recalculate the star rating after removing those reviews. Among just the non-incentivised reviews, the product's rating is 2.6 stars. The issue has sparked some frustration among members of its online community. Sephora declined to comment.
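The gap between a displayed rating and the organic one comes down to simple averaging. A minimal sketch (with made-up review records and field names; the article does not include Sephora's underlying data) shows how filtering out incentivised reviews can shift a product's score:

```python
from statistics import mean

# Hypothetical review records. The "incentivised" flag mirrors the label
# some retail sites attach to reviews of free or discounted products.
reviews = [
    {"stars": 5, "incentivised": True},
    {"stars": 5, "incentivised": True},
    {"stars": 4, "incentivised": True},
    {"stars": 2, "incentivised": False},
    {"stars": 3, "incentivised": False},
]

# Displayed rating: average over everything, incentivised reviews included.
overall = mean(r["stars"] for r in reviews)

# Organic rating: recomputed after dropping incentivised reviews.
organic = mean(r["stars"] for r in reviews if not r["incentivised"])

print(f"displayed rating: {overall:.1f}")    # → 3.8
print(f"organic-only rating: {organic:.1f}")  # → 2.5
```

With only five sample reviews the effect is already visible; at the scale of the Ouai example (1,182 reviews, 89 organic), the same arithmetic produces the 4.3-versus-2.6 gap described above.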

But the situation gets even murkier when factoring in the rise in reviews partially created by a human and partially by AI. Khalifah describes these kinds of reviews as a "hybrid monstrosity, where it's half legit and half not, because AI is being used to fill the gaps within the review and make it look better."

The line between authentic reviews and AI-generated content is itself beginning to blur as review platforms roll out new AI-powered tools to assist their communities in writing reviews. Bazaarvoice, a platform for user-generated content which owns Influenster and works with beauty brands including L'Oréal, Pacifica, Clarins and Sephora, has recently launched three new AI-powered features, including a tool called Content Coach. The company developed the tool based on research showing that 68 percent of its community had trouble getting started when writing a review, according to Marissa Jones, Bazaarvoice's senior vice president of product.

Content Coach gives users prompts of key topics to include in their review, based on common themes in other reviews. The prompts for a review of a Chanel eyeliner might include pigmentation, precision and ease of removal, for instance. As users type their review, the topic prompts light up as they are addressed, gamifying the process.

Jones stressed that the prompts are meant to be neutral. "We wanted to provide an unbiased way to give [users] some ideas," she said. "We don't want to influence their opinion or do anything that pushes them one direction or the other."

But even such seemingly innocuous AI nudges as those created by Content Coach can still influence what a consumer writes in a product review, shifting it from a spontaneous response based on considered appraisal of a product to something more programmed that requires less thought.

Fakespot's Khalifah points out that governments and regulators around the globe have been slow to act, given the speed at which the problem of fake reviews is evolving with the advancement of generative AI.

But change is finally on the horizon. In July 2023, the US Federal Trade Commission introduced the Trade Regulation Rule on the Use of Consumer Reviews and Testimonials, a new piece of regulation to punish marketers who feature fake reviews, suppress negative reviews or offer incentives for positive ones.

"Our proposed rule on fake reviews shows that we're using all available means to attack deceptive advertising in the digital age," Samuel Levine, director of the FTC's Bureau of Consumer Protection, said in a release at the time. "The rule would trigger civil penalties for violators and should help level the playing field for honest companies."

In its notice of proposed rule-making, the FTC shared comments from industry players and public interest groups on the damage to consumers caused by fake reviews. Amongst these, the National Consumers League cited an estimate that, in 2021, fraudulent reviews cost US consumers $28 billion. The text also noted that the widespread emergence of AI chatbots is likely to make it easier for bad actors to write fake reviews.

In beauty, of course, the stakes are potentially higher, as fake reviews can also mislead consumers into buying counterfeit products, which represent a risk to a shopper's health and wellbeing as well as their wallet.

If the FTC's proposed rule gets the green light, as expected, it will impose civil penalties of up to $51,744 per violation. The FTC could take the position that each individual fake review constitutes a separate violation every time it is viewed by a consumer, establishing a considerable financial deterrent to brands and retailers alike.

With this tougher regulatory stance approaching, beauty brands should get their houses in order now, and see it as an opportunity rather than an imposition. There is huge potential for brands and retailers to take the lead on transparency and build an online shopping experience consumers can believe in.


Students Are Likely Writing Millions of Papers With AI – WIRED

Students have submitted more than 22 million papers that may have used generative AI in the past year, new data released by plagiarism detection company Turnitin shows.

A year ago, Turnitin rolled out an AI writing detection tool that was trained on its trove of papers written by students as well as other AI-generated texts. Since then, more than 200 million papers have been reviewed by the detector, predominantly written by high school and college students. Turnitin found that 11 percent of those papers may contain AI-written language in at least 20 percent of their content, with 3 percent of the total papers reviewed getting flagged for having 80 percent or more AI writing. (Turnitin is owned by Advance, which also owns Condé Nast, publisher of WIRED.) Turnitin says its detector has a false positive rate of less than 1 percent when analyzing full documents.

ChatGPT's launch was met with knee-jerk fears that the English class essay would die. The chatbot can synthesize information and distill it near-instantly, but that doesn't mean it always gets it right. Generative AI has been known to hallucinate, creating its own facts and citing academic references that don't actually exist. Generative AI chatbots have also been caught spitting out biased text on gender and race. Despite those flaws, students have used chatbots for research, organizing ideas, and as a ghostwriter. Traces of chatbots have even been found in peer-reviewed, published academic writing.

Teachers understandably want to hold students accountable for using generative AI without permission or disclosure. But that requires a reliable way to prove AI was used in a given assignment. Instructors have tried at times to find their own solutions to detecting AI in writing, using messy, untested methods to enforce rules, and distressing students. Further complicating the issue, some teachers are even using generative AI in their grading processes.

Detecting the use of gen AI is tricky. It's not as easy as flagging plagiarism, because generated text is still original text. Plus, there's nuance to how students use gen AI; some may ask chatbots to write their papers for them in large chunks or in full, while others may use the tools as an aid or a brainstorm partner.

Students also aren't tempted by only ChatGPT and similar large language models. So-called word spinners are another type of AI software that rewrites text, and may make it less obvious to a teacher that work was plagiarized or generated by AI. Turnitin's AI detector has also been updated to detect word spinners, says Annie Chechitelli, the company's chief product officer. It can also flag work that was rewritten by services like spell checker Grammarly, which now has its own generative AI tool. As familiar software increasingly adds generative AI components, what students can and can't use becomes more muddled.

Detection tools themselves have a risk of bias. English language learners may be more likely to set them off; a 2023 study found a 61.3 percent false positive rate when evaluating Test of English as a Foreign Language (TOEFL) exams with seven different AI detectors. The study did not examine Turnitins version. The company says it has trained its detector on writing from English language learners as well as native English speakers. A study published in October found that Turnitin was among the most accurate of 16 AI language detectors in a test that had the tool examine undergraduate papers and AI-generated papers.
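The false-positive-rate figures quoted above follow the standard definition: of the genuinely human-written essays, what fraction does the detector wrongly flag as AI-written? A minimal sketch with hypothetical detector output (not data from any of the studies cited):

```python
# Hypothetical detector output over ten essays that are all human-written.
# 1 = flagged as AI-written, 0 = not flagged.
flags = [1, 1, 0, 0, 0, 0, 0, 0, 0, 0]

false_positives = sum(flags)                    # human essays wrongly flagged
true_negatives = len(flags) - false_positives   # human essays passed through

# FPR = FP / (FP + TN): the share of human work the detector misjudges.
fpr = false_positives / (false_positives + true_negatives)
print(f"false positive rate: {fpr:.1%}")  # → 20.0%
```

The 61.3 percent figure from the 2023 TOEFL study means that, by this measure, well over half of the human-written exams were flagged, which is why bias against English language learners is such a concern.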


How Ukraine is using AI to fight Russia – The Economist

IN THE run-up to Ukraine's rocket attacks on the Antonovsky Bridge, a vital road crossing from the occupied city of Kherson to the eastern bank of the Dnipro River, security officials carefully studied a series of special reports. It was the summer of 2022, and Russia was relying heavily on the bridge to resupply its troops west of the Dnipro. The reports contained research into two things: would destroying the bridge lead the Russian soldiers, or their families back home, to panic? And, more importantly, how could Ukraine's government maximise the blow to morale by creating a particular information environment?

This is how Sviatoslav Hnizdovsky, the founder of the Open Minds Institute (OMI) in Kyiv, describes the work his research outfit did by generating these assessments with artificial intelligence (AI). Algorithms sifted through oceans of Russian social-media content and socioeconomic data on things ranging from alcohol consumption and population movements to online searches and consumer behaviour. The AI correlated any changes with the evolving sentiments of Russian loyalists and liberals over the potential plight of their country's soldiers.

This highly sensitive work continues to shape important Ukrainian decisions about the course of the war, says Mr Hnizdovsky. This includes potential future strikes on Russia's Kerch Bridge, which is the only direct land link between Russia and Crimea.

Ukraine, outgunned by Russia, is increasingly seeking an edge with AI by employing the technology in diverse ways. A Ukrainian colonel involved in arms development says drone designers commonly query ChatGPT as a start point for engineering ideas, like novel techniques for reducing vulnerability to Russian jamming. Another military use for AI, says the colonel, who requested anonymity, is to identify targets.

As soldiers and military bloggers have wisely become more careful with their posts, simple searches for clues about the location of forces have become less fruitful. By ingesting reams of images and text, however, AI models can find potential clues, stitch them together and then surmise the likely location of a weapons system or a troop formation. Using this puzzle-pieces approach with AI allows Molfar, an intelligence firm with offices in Dnipro and Kyiv, to typically find two to five valuable targets every day, says Maksym Zrazhevsky, an analyst with the firm. Once discovered, this intelligence is quickly passed along to Ukraine's army, resulting in some of the targets being destroyed.

Targeting is being assisted by AI in other ways. SemanticForce, a firm with offices in Kyiv and Ternopil, a city in the west of Ukraine, develops models that, in response to text prompts, scrutinise online or uploaded text and images. Many of SemanticForce's clients use the system commercially to monitor public sentiment about their brands. Molfar, however, uses the model to map areas where Russian forces are likely to be low on morale and supplies, which could make them a softer target. The AI finds clues in pictures, including those from drone footage, and in soldiers' bellyaching on social media.

It also cobbles together clues about Russian military weaknesses using a sneaky proxy. For this, Molfar employs SemanticForce's AI to generate reports on the activities of Russian volunteer groups that fundraise and prepare care packages for the sections of the front most in need. The algorithms, Molfar says, do a good job of discarding potentially misleading bot posts. (Accounts with jarring political flip-flops are one tipoff.) The firm's analysts sometimes augment this intelligence by using software that disguises the origin of a phone call, so that Russian volunteer groups can be rung by staff pretending to be a Russian eager to contribute. Ten of the company's 45-odd analysts work on targeting, and do so free of charge for Ukrainian forces.

Then there is counter-intelligence. The use of AI helps Ukraine's spycatchers identify people who Oleksiy Danilov, until recently secretary of the National Security and Defence Council (NSDC), describes as "prone to betrayal." Offers to earn money by taking geolocated pictures of infrastructure and military assets are often sent to Ukrainian phones, says Dmytro Zolotukhin, a former Ukrainian deputy minister for information policy. He recently received one such text himself. People who give this market for intelligence services a shot, he adds, are regularly nabbed by Ukraine's SBU intelligence agency.

Using AI from Palantir, an American firm, Ukrainian counter-intelligence fishes for illuminating linkages in disparate pools of data. Imagine, for instance, an indebted divorcee at risk of losing his flat and custody of his children who opens a foreign bank account and has been detected with his phone near a site that was later struck by missiles. In addition to such dot-connecting, the AI performs social-network analysis. If, say, the hypothetical divorcee has strong personal ties to Russia and has begun to take calls from someone whose phone use suggests a higher social status, then AI may increase his risk score.

The results of AI assessments of interactions among a network's nodes have been impressive for more than a decade. Kristian Gustafson, a former British intelligence officer who advised Afghanistan's interior ministry in 2013, recounts the capture of a courier transporting wads of cash for Taliban bigwigs. Their ensuing phone calls, he says, "lit up the whole diagram." Since then, algorithmic advances for calculating things like betweenness centrality, a measure of influence, make those days look, as another former intelligence officer puts it, "pretty primitive."
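Betweenness centrality, the influence measure mentioned above, scores a node by how often it sits on the shortest paths between other nodes: a courier or middleman who connects otherwise separate contacts scores high. A brute-force sketch on a toy graph (the names and call links are invented for illustration; production tools use far faster algorithms such as Brandes'):

```python
from collections import deque
from itertools import permutations

# Toy communications graph: each edge is a hypothetical observed call link.
edges = [("courier", "handler"), ("handler", "chief"),
         ("courier", "fixer"), ("handler", "fixer")]
graph = {}
for a, b in edges:
    graph.setdefault(a, set()).add(b)
    graph.setdefault(b, set()).add(a)

def shortest_paths(src, dst):
    """Enumerate every shortest path from src to dst with a BFS over paths."""
    paths, best = [], None
    queue = deque([[src]])
    while queue:
        path = queue.popleft()
        if best is not None and len(path) > best:
            break  # BFS pops paths in nondecreasing length, so we can stop
        node = path[-1]
        if node == dst:
            best = len(path)
            paths.append(path)
            continue
        for nxt in graph[node]:
            if nxt not in path:
                queue.append(path + [nxt])
    return paths

# Betweenness: for each ordered pair (s, t), credit every other node v with
# the fraction of shortest s-t paths that pass through v.
betweenness = {v: 0.0 for v in graph}
for s, t in permutations(graph, 2):
    paths = shortest_paths(s, t)
    for v in graph:
        if v not in (s, t):
            betweenness[v] += sum(v in p for p in paths) / len(paths)

print(max(betweenness, key=betweenness.get))  # → handler
```

Here "handler" scores highest because the chief can only be reached through him, which is exactly the kind of structural position, rather than raw call volume, that network analysis surfaces.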

In addition, network analysis helps Ukrainian investigators identify violators of sanctions on Russia. By connecting data in ship registries with financial records held elsewhere, the software can "pierce the corporate veil," a source says. Mr Zolotukhin says hackers are providing "absolutely enormous" caches of stolen business data to Ukrainian agencies. This is a boon for combating sanctions-busting.

The use of AI has been developing for some time. Volodymyr Zelensky, Ukraine's president, called for a massive boost in the use of the technology for national security in November 2019. The result is a strategically minded model, built and run by the NSDC, that ingests text, statistics, photos and video. Called the Centre of Operations for Threats Assessment (COTA), it is fed a wide range of information, some obtained by hackers, says Andriy Ziuz, NSDC's head of staff. The model tracks prices, phone usage, migration, trade, energy, politics, diplomacy and military developments, down to the weapons in repair shops.

Operators at COTA call this model a "constructor." This is because it also ingests output from smaller models such as Palantir's software and Delta, which is battlefield software that supports the Ukrainian army's manoeuvre decisions. COTA's bigger-picture output provides senior officials with guidance on sensitive matters, including mobilisation policy, says Mykola Dobysh, NSDC's chief technologist. Mr Danilov notes that Mr Zelensky has been briefed on COTA's assessments on more than 130 occasions, once at 10am on the day of Russia's full invasion. Access to portions (or "circuits") of COTA is provided to some other groups, including insurers, foreign ministries and America's Department of Energy.

Ukraine's AI effort benefits from its society's broad willingness to contribute data for the war effort. Citizens upload geotagged photos potentially relevant to the country's defence into a government app called Diia (Ukrainian for "action"). Many businesses supply Mantis Analytics, a firm in Lviv, with operations data on things that range from late deliveries to call-centre activity and the setting off of burglar alarms. Recipients of the platform's assessments of societal functioning include the defence ministry and companies that seek to deploy their own security resources in better ways.

How much difference all this will ultimately make is still unclear. Evan Platt of Zero Line, an NGO in Kyiv that provides kit to troops, spends time at the front studying fighting effectiveness; he describes Ukraine's use of AI as a "bright spot." But there are concerns. One is that enthusiasm for certain AI security applications may divert resources that would provide more bang for the buck elsewhere. Excessive faith in AI is another risk, and some models on the market are certainly overhyped. More dramatically, might AI prove to be a net negative for Ukraine's battlefield performance?

A few think so. One is John Arquilla, a professor emeritus at the Naval Postgraduate School in California who has written influential books on warfare and advised Pentagon leaders. Ukraine's biggest successes came early in the war, when decentralised networks of small units were encouraged to improvise. Today, Ukraine's AI "constructor" process, he argues, is centralising decision-making, snuffing out creative sparks at the edges. His assessment is open to debate. But at a minimum, it underscores the importance of human judgment in how any technology is used.


Intel Takes Aim at Nvidia’s AI Dominance With Launch of Gaudi 3 Chip – Investopedia

Key Takeaways

Intel (INTC) unveiled its latest artificial intelligence (AI) chip, the Gaudi 3 AI accelerator, which the chipmaker claims outperforms Nvidia's (NVDA) H100, during an event on Tuesday.

The Gaudi 3 accelerator delivers "50% on average better inference and 40% on average better power efficiency" than Nvidia's H100 at "a fraction of the cost," Intel said.

The latest Intel AI chip will be available to some original equipment manufacturers (OEMs), including Dell Technologies (DELL), Hewlett Packard Enterprise (HPE), Lenovo, and Super Micro Computer (SMCI), in the second quarter of 2024.

The announcement comes as the chipmaker works to compete with other semiconductor companies leading the AI boom, including Nvidia and Advanced Micro Devices (AMD).

Intel compared its latest chip to Nvidia's H100, which was first announced in 2022. Nvidia has since unveiled the Blackwell platform, the latest version of its AI-powering tech, which analysts in March called the "most ambitious project in Silicon Valley."

Nvidia's latest chip, the GB200, "provides up to a 30x performance increase compared to the same number of NVIDIA H100 Tensor Core GPUs for LLM inference workloads, and reduces cost and energy consumption by up to 25x," the company said.

Intel shares were up 0.4% at $38.12 as of about 12:45 p.m. ET Tuesday. The stock has lost about one-fifth of its value year to date.


We now have a better look at what’s inside the Humane AI pin – The Verge

The Humane AI pin promises to give users a way to use generative AI in the physical world. You can clip the pin to your shirt, talk to it, and project answers from chatbots onto any surface, most often your palm. We know a little bit about what powers the tiny square pin, and thanks to a new report, we have a much better view of what goes on under the hood.

The Federal Communications Commission (FCC) included a photographic teardown of the AI pin in a new report. The photos show the clearest look so far into what comprises the Humane AI pin, as well as a close-up of the Snapdragon processor it uses.

The FCC must certify devices that use wireless communications to ensure they follow regulations before they are released to the public. They then get a nifty FCC mark on the product. This review process often includes a teardown of the gadget so the commission can inspect what's on the inside.

We already knew the AI pin runs on Snapdragon, though the company did not indicate what version. From the photos, it looks like the pin uses a Snapdragon 720G processor, which Qualcomm says on its website can run on-device AI with low power on mobile devices. The Snapdragon 720G is one of the smaller chips available that can also handle an AI compute load.

While there are certainly still questions as to why the Humane AI pin exists, at least we now know it's using a chip powerful enough to project ChatGPT results onto your palm.


How to Protect Yourself (and Your Loved Ones) From AI Scam Calls – WIRED

You answer a random call from a family member, and they breathlessly explain how there's been a horrible car accident. They need you to send money right now, or they'll go to jail. You can hear the desperation in their voice as they plead for an immediate cash transfer. While it sure sounds like them, and the call came from their number, you feel like something's off. So, you decide to hang up and call them right back. When your family member picks up your call, they say there hasn't been a car crash, and that they have no idea what you're talking about.

Congratulations, you just successfully avoided an artificial intelligence scam call.

As generative AI tools get more capable, it is becoming easier and cheaper for scammers to create fake, but convincing, audio of people's voices. These AI voice clones are trained on existing audio clips of human speech, and can be adjusted to imitate almost anyone. The latest models can even speak in numerous languages. OpenAI, the maker of ChatGPT, recently announced a new text-to-speech model that could further improve voice cloning and make it more widely accessible.

Of course, bad actors are using these AI cloning tools to trick victims into thinking they are speaking to a loved one over the phone, even though they're talking to a computer. While the threat of AI-powered scams can be frightening, you can stay safe by keeping these expert tips in mind the next time you receive an urgent, unexpected call.

It's not just OpenAI; many tech startups are working on replicating near-perfect-sounding human speech, and the recent progress is rapid. "If it were a few months ago, we would have given you tips on what to look for, like pregnant pauses or showing some kind of latency," says Ben Colman, cofounder and CEO of Reality Defender. Like many aspects of generative AI over the past year, AI audio is now a more convincing imitation of the real thing. Any safety strategies that rely on you audibly detecting weird quirks over the phone are outdated.

Security experts warn that it's quite easy for scammers to make it appear as if a call were coming from a legitimate phone number. "A lot of times scammers will spoof the number that they're calling you from, make it look like it's calling you from that government agency or the bank," says Michael Jabbara, global head of fraud services at Visa. "You have to be proactive." Whether it's from your bank or from a loved one, any time you receive a call asking for money or personal information, go ahead and ask to call them back. Look up the number online or in your contacts, and initiate a follow-up conversation. You can also try sending them a message through a different, verified line of communication, like video chat or email.

A popular security tip that multiple sources suggested was to craft a safe word that only you and your loved ones know about, and which you can ask for over the phone. "You can even prenegotiate with your loved ones a word or a phrase that they could use in order to prove who they really are, if in a duress situation," says Steve Grobman, chief technology officer at McAfee. Although calling back or verifying via another means of communication is best, a safe word can be especially helpful for young ones or elderly relatives who may otherwise be difficult to contact.

What if you don't have a safe word decided on and are trying to suss out whether a distressing call is real? Pause for a second and ask a personal question. "It could even be as simple as asking a question that only a loved one would know the answer to," says Grobman. "It could be, 'Hey, I want to make sure this is really you. Can you remind me what we had for dinner last night?'" Make sure the question is specific enough that a scammer couldn't answer correctly with an educated guess.

Deepfake audio clones aren't just reserved for celebrities and politicians, like the calls in New Hampshire that used AI tools to sound like Joe Biden and to discourage people from going to the polls. "One misunderstanding is, 'It cannot happen to me. No one can clone my voice,'" says Rahul Sood, chief product officer at Pindrop, a security company that discovered the likely origins of the AI Biden audio. "What people don't realize is that with as little as five to 10 seconds of your voice, on a TikTok you might have created or a YouTube video from your professional life, that content can be easily used to create your clone." Using AI tools, the outgoing voicemail message on your smartphone might even be enough to replicate your voice.

Whether it's a pig-butchering scam or an AI phone call, experienced scammers are able to build your trust in them, create a sense of urgency, and find your weak points. "Be wary of any engagement where you're experiencing a heightened sense of emotion, because the best scammers aren't necessarily the most adept technical hackers," says Jabbara. "But they have a really good understanding of human behavior." If you take a moment to reflect on a situation and refrain from acting on impulse, that could be the moment you avoid getting scammed.


NRO eyes diverse satellite fleet and AI-powered ground systems in modernization push – SpaceNews

COLORADO SPRINGS The National Reconnaissance Office, the secretive U.S. intelligence agency responsible for operating the country's spy satellites, is developing a more diverse fleet of satellites alongside an overhaul of its ground systems.

Troy Meink, principal deputy director of the NRO, said the agency is looking to develop a more diverse satellite architecture, including smaller and more maneuverable models, to improve its intelligence gathering across a wider range of orbits and mission profiles.

"We are pushing the boundaries to ensure we stay on the leading edge of innovation," Meink said April 9 in a keynote speech at the 39th Space Symposium. "Over the next decade, we will continue to increase the number of satellites operating across multiple orbits, not just large systems that are the traditional hallmark of the NRO, but also smaller proliferated systems."

In parallel with the changes to its space-based systems, the NRO is also overhauling its satellite ground architecture, investing heavily in new technologies like artificial intelligence (AI) and machine learning to help process the flood of data coming from its expanding satellite network.

"Expanding our overhead architecture will provide greater revisit rates, increased coverage, and more timely delivery of information," Meink said. "This will make our collection more agile, eliminate single points of failure and will make our constellations more resilient."

Ground systems

A more diverse space architecture will allow the NRO to collect an order of magnitude more data, he said. "So this means ground operations must evolve as well. I think this is actually one of the biggest challenges we face. It's not the bits that matter. It's how the bits get organized into useful information that's important."

The NRO's push to modernize its ground systems started several years ago, said Joshua Perrius, senior vice president of Booz Allen Hamilton. The company is a support contractor to the NRO for ground systems modernization.

"The goal is more automated tasking and collection based on data models and less on human-planned activities," Perrius told SpaceNews.

The NRO is seeking more advanced data processing and exploitation capabilities on the ground to make sense of all the data it's collecting, he said. "They have to be able to rapidly task, re-task, and exploit data from a more diverse and resilient constellation, while also leveraging the latest AI and automation tools," said Perrius.

He said AI and machine learning algorithms can help to identify critical information and generate actionable intelligence much faster than traditional methods.

While the specific details of the NRO's plans are classified, Perrius noted, this shift towards a more diverse satellite fleet and AI-powered ground systems signifies a major transformation for the intelligence agency.

The NRO's fleet includes imaging satellites that take high-resolution pictures of the Earth's surface, signals-intelligence satellites that intercept and collect electronic communications, and others that gather information about objects by analyzing radio frequencies and other emissions.

Access to hostile territory

"Millions of people count on us every day," Meink said at the Space Symposium. "Civilian customers depend on space collection to assist them with natural disasters, help predict climate change, and help relief agencies determine how and where to deliver humanitarian aid."

The Department of Defense and the intelligence community, he added, depend on NRO capabilities, for example for geolocation data and high-resolution imagery. "The NRO systems are often the only tools able to access hostile territory or rugged terrain so we can collect critical information."


Aboard's AI-powered bookmarking and project app is a new spin on a chatbot – The Verge

Aboard is not an easy app to explain. It used to be easier: at first, it was a way to collect and organize information, Trello meets Pinterest meets that spreadsheet full of links you use to plan your vacation. The company's founders, Paul Ford and Rich Ziade, are two longtime web developers and app creators (and, in Ford's case, also an influential writer about the web) who previously ran a well-liked agency called Postlight. They did a bunch of interesting work on parsing websites to pull out helpful information, and they built a handy visual tool for displaying it all. "People love to save links," Ziade says, "and we love to make those links beautiful when they come in." Simple!

But now I'm sitting here in a co-working space in New York City, a few minutes after an earthquake hit and a few days before Aboard's biggest launch yet, and Ziade is showing me something very different. He opens up a beta version of the app and clicks a button, and after a second, the page begins to change. A database appears out of nowhere, with a bunch of categories (Year, Title, Genre, and more) that start to populate with a number of well-known movie titles. The board, as Aboard calls it, titles itself "Movie Night." With one click, Ziade just built and populated a way to track your viewing habits.

Maybe the best way to explain the new Aboard is not as a Pinterest competitor but as a radical redesign of ChatGPT. When Ziade made that board, all he was really doing was querying OpenAI's GPT-3.5. The company's chatbot might have returned some of the same movies, but it would have done so with a series of paragraphs and bullet points. Aboard has built a more attractive, more visual AI app and has made it so you can turn that app into anything you want.

Ziade and Ford imagine three main things you might do with Aboard. The first, Organize, is closest to the original vision: ask the tool for a bunch of things to do in Montreal this summer, and it'll populate a board with some popular attractions and restaurants. Ask Aboard to meal plan your week, and it'll create a board segmented by day and by meal, with nicely formatted recipes. The second, Research, is similar but a little more exploratory: ask Aboard to grab the most interesting links about African bird species, and it'll dump them all into place for you to peruse at your leisure.

Like any AI product right now, this is sometimes cooler in theory than in reality. When I ask Ziade to make a board with important tech moments from 2004, it pulls a bunch of them into separate cards: Google's IPO, the launch of Gmail, the iPod Mini launch. And then the iPod Mini launch again, and then another time, and then three more times after that. Ziade and Ford both laugh and say this is the stuff they see all the time. A few times, a demo just fails, and each time, Ford says something to the effect of "Yeah, that just happens when you ping the models." But he says it's also getting better fast.
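The repeated iPod Mini cards hint at why apps like Aboard post-process model output rather than render it raw. A minimal sketch of the general pattern, with the model call stubbed out and every name hypothetical (this is not Aboard's actual code or API):

```python
import json

# Hypothetical sketch: instead of displaying an LLM's prose answer, ask the
# model for structured output and render it as cards. The model call is
# stubbed with a canned JSON string; in a real app this reply would come
# from an LLM API.
FAKE_MODEL_REPLY = """
{
  "board": "Movie Night",
  "columns": ["Year", "Title", "Genre"],
  "cards": [
    {"Year": 1994, "Title": "Pulp Fiction", "Genre": "Crime"},
    {"Year": 2010, "Title": "Inception", "Genre": "Sci-Fi"},
    {"Year": 2010, "Title": "Inception", "Genre": "Sci-Fi"}
  ]
}
"""

def parse_board(reply: str) -> dict:
    """Parse a model's JSON reply into a board, dropping duplicate cards."""
    board = json.loads(reply)
    # Models often repeat items (like the iPod Mini cards in the demo),
    # so deduplicate by the card's column values before rendering.
    seen, unique = set(), []
    for card in board["cards"]:
        key = tuple(card.get(col) for col in board["columns"])
        if key not in seen:
            seen.add(key)
            unique.append(card)
    board["cards"] = unique
    return board

board = parse_board(FAKE_MODEL_REPLY)
print(board["board"], len(board["cards"]))  # Movie Night 2
```

The design choice this illustrates is the one Ziade describes: the model supplies content, but the app owns the schema and cleans up the model's lapses before anything reaches the screen.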

The third use case, which Aboard calls Workflow, is where Aboard figures its true business lies. Ziade does another demo: he enters a prompt into Aboard, asking it to set up a claims tracker for an insurance company. After a few seconds, he has a fairly straightforward but useful-looking board for processing claims, along with a bunch of sample cards to show how it works. Is this going to be perfect and powerful enough for an insurance company to start using as is? No. But it's a start. Ford tells me that Aboard's job is to build something good enough, but also not quite good enough: if the app can work just well enough to get you to customize it the rest of the way to fit your needs, that's the goal.

An Aboard board can be a table, a list, a gallery, and more

This is ultimately a very business-y use case and puts Aboard in loose competition with the Airtables and Salesforces of the world. Ziade and Ford are upfront about this. "We want to be in professional settings," Ford says. "That's a real thing we're aiming for. Doesn't have to be for big enterprise, but definitely small teams, nonprofits, things like that." He figures Aboard can sell to companies by saving them a bunch of time and meetings spent figuring out how to organize data, and just get them started right away. An Aboard board can be a table, a list, a gallery, and more; it's a pretty flexible tool for managing most kinds of data.

I have no particular business use for Aboard, but I've been testing the app for a while, and it's a really clever way to redesign the output of a large language model. Particularly when it's combined with Aboard's ability to parse URLs, it can quickly put together some really useful information. I've been saving links for months as I plan a vacation, and I had Aboard build me a project planner for managing a big renovation of my bathroom. (It's all very exciting stuff.)

Just before Aboard's AI launch, I tried building another board: I prompted the AI to create a board of Oscar-winning movies, with stacks for each movie genre and tags for Rotten Tomatoes scores, and Aboard went to work. It came back with stacks (Aboard's parlance for sub-lists) for six different movie genres, tags for various score ranges, plus runtimes, posters, and Rotten Tomatoes links for each flick. Were all the movies it selected Best Picture winners? Nope! Did it get the ratings right, like, ever? Nope! But it still felt like a good start, and Aboard always gives you the option to delete the sample cards it generates and just start from scratch.

Aboard is just one of a new class of AI companies, the ones that won't try to build Yet Another Large Language Model but will instead try to build new things to do with those models and new ways to interact with them. The Aboard founders say they ultimately plan to connect to lots of models as those models become, in some cases, more specialized and, in others, more commoditized. In Aboard's case, they want to use AI not as an answer machine but as something like a software generator. "We still want you to go to the web," Ford says. "We want to guide you a bit and maybe kickstart you, but we're software people, and we think the ability to get going really quickly is really, really interesting." The Aboard founders want AI to do the work about the work, so you can just get to work.


Google is building its own AI chips, and it's a warning shot at Nvidia and Intel – Fortune

Google announced a proprietary chip Tuesday that could help the company cut back its reliance on heavyweight chipmakers and gain a foothold in the increasingly competitive AI race.

The new chip, dubbed Axion, will help handle the massive amount of data used by AI applications, Google said in a Tuesday statement. It's designed to be grouped into clusters of thousands of chips to improve performance, the Wall Street Journal reported.

The new chips, which are central processing units (CPUs), are reportedly 30% better than already-available general-purpose chips that use similar circuitry made by the U.K.-based semiconductor and software company Arm, the company said in a statement. Although Google had previously made other chips for its different business segments, this is its first meant to support AI in data centers.

Customers of the Alphabet subsidiary will be able to access Axion through Google's cloud business later this year, but will not be able to buy the chips directly, according to the Journal. Amin Vahdat, the company's vice president overseeing proprietary chips, told the outlet that it wants to take a different approach.

"Becoming a great hardware company is very different from becoming a great cloud company or a great organizer of the world's information," Vahdat said.

By not selling directly to customers, Google is avoiding direct competition with its longtime partners, and dominant chipmakers, Intel and Nvidia. Instead, Vahdat said, the company sees its entry into the chip market as a positive for everyone in the industry.

"I see this as a basis for growing the size of the pie," Vahdat said.

As the hypercompetitive race to enable AI heats up, Google's rivals likely don't share that vision. On Tuesday, the Santa Clara, Calif.-based semiconductor company Intel released the artificial-intelligence-focused chip Gaudi 3. Intel says the new chips will be available by the third quarter, and can be used to train large language models (LLMs) like the one behind ChatGPT. The company claims the Gaudi 3 chips have an edge over Nvidia's competing chip, the H100.

Nvidia, meanwhile, announced the new generation of its popular H100 chip in November and plans to release it later this year. Still, shares of Nvidia closed down 2% on Tuesday following the news. The company has seen its stock skyrocket about 75% since the start of the year on outsize demand for its powerful H100 chips, but is facing increasing competition.

Shares of Google parent company Alphabet jumped as much as 2.4% on the day following news of the new chip before paring back gains. The stock closed up 1.28% at about $158.
