Category Archives: Ai

Startups to access high-performance Azure infrastructure … – Microsoft

Today Microsoft is updating its startup program to include a free Azure AI infrastructure option for high-end GPU virtual machine clusters, for use in training and running large language models and other deep learning models.

Y Combinator (YC) and its community of startup innovators will be the first to access this offering in private preview to a limited cohort. YC has an unmatched reputation as a pioneering startup accelerator helping launch transformative companies including Airbnb, Coinbase and Stripe. Now YC startups will have the technical resources they need to quickly prototype and bring to market cutting-edge AI innovations. Our close collaboration with YC provides valuable insights into the infrastructure needs of early-stage AI companies, ensuring our offering delivers optimal value to additional startups going forward.

With the overwhelming infrastructure requirements needed to do AI at scale, we believe that providing startups with high-performance capabilities tailored for demanding AI workloads will empower our startups to ship faster, said Michael Seibel, Managing Director of Y Combinator.

We are also working with M12, Microsofts venture fund, and the startups in M12s portfolio which will gain access to these dedicated supercomputing resources to further empower their AI innovations. Over time, our vision is to partner with additional startup investors and accelerators, with a goal of working with the ecosystem to lower the barrier to training and running AI models for any promising startup.

Microsoft Azure offers cloud-based scalable AI infrastructure, built for and with the worlds most sophisticated AI workloads, from delivering the largest and most complex AI models including GPT-4 and ChatGPT through Azure OpenAI Service to developers to infuse AI capabilities into many apps. Azure AI infrastructure is fueling groundbreaking innovations. Infrastructure requirements to do AI at scale are often overwhelming, but with Azures global infrastructure of AI-accelerated server offerings with networked graphics processing units (GPUs), startups building advanced AI systems will be able to leverage these high-performance capabilities to accelerate innovation.

On top of world-class infrastructure, we will also provide tools to simplify deployment and management through Azure Machine Learning. This enables easy low-code or code-based training of custom models and fine-tuning of frontier and open-source models, simplified deployment and optimizations like Low Rank Adaptation, DeepSpeed and ONNX Runtime (ORT). Further, startups can deploy AI solutions with peace of mind knowing all deployments are secure and backed by Microsofts principles for Responsible AI.

Empowering startups to build transformative solutions powered by AI

AI is transforming industries and startups are leading that innovation, creating new business and societal value quicker than many thought possible. According to a recent KPMG survey, the near-term demand is real, with 75% of U.S. CEOs stating that generative AI is a top investment priority, 83% anticipating an increase in generative AI investment by more than 50% in the next year, and 45% saying investment will at least double. For startups, this represents a once-in-a-generation opportunity to bring groundbreaking impact to a market hungry for change.

To help startups meet this opportunity, last year we introduced Microsoft for Startups Founders Hub designed to help founders speed development with free access to GitHub and the Microsoft Cloud as well as unique benefits including free access to $2,500 of OpenAI credits to experiment and up to $150,000 in Azure credits that startups can apply to Azure OpenAI Service. Startups also receive 1:1 advice from Microsoft AI experts to help guide implementation. The Microsoft Pegasus Program, an extension of Founders Hub, links enterprise customers with startup solutions for immediate deployment. Seventy-five percent of Pegasus startups have landed deals with Fortune 1000 companies via increased reach across Azure Marketplace.

Startups using Azure AI to develop cutting-edge solutions for todays problems

Whether you have a product in market or just an idea, Microsoft provides startups with the tools they need to rapidly build and scale AI solutions. Already, we are seeing the results of empowering startups to innovate with AI to improve customer support, detect and address health conditions and advance immersive gaming experiences. Here are just a few examples of the cutting-edge innovation happening now:

Commerce.AI dramatically increases call center productivity with Azure OpenAI ServiceCommerce.AI uses Azure OpenAI Service and Azure AI Services to make call centers more efficient. Azure Cognitive Services uses a Commerce.AI model to transcribe interactions in real time, including into multiple languages. After the call, Azure OpenAI Service creates a summary with customer contact information, topics of conversation and embedded sentiment analysis. The system selects next steps and follow-up action items from pre-generated options, and the customer service agent exports the information to Microsoft Dynamics 365 in one quick step.

Inworld: The next-generation AI character engine for immersive gamingInworld, a Silicon Valley startup, is a fully integrated character engine that goes beyond language models to give users complete control over AI non-player characters (NPCs). With Inworld, users can customize their characters knowledge, memory, personality, emotions and narrative role. Inworld uses Azure AI technologies like Azure OpenAI Service to power its advanced natural language understanding and generation.

BeeKeeperAI is helping catch rare childhood conditions earlyAI tooling company BeeKeeperAI enables AI algorithms to run in a private and compliant way in healthcare environments.The company is pioneering an effort to leverage confidential computing to train an algorithm for predicting a rare childhood condition using real patient health records. By encrypting both the data and algorithm and using Microsoft Azures confidential computing, the company has enabled the algorithm to analyze identifiable health information in a secure, sightless manner.

YouTube Video

Click here to load media

Calling all startup founders Start building the future todayThe AI landscape is developing at breakneck speed, and Microsoft is ready to assist startups in seizing this opportunity. If youre a startup founder evaluating partners, we invite you to join us at Microsoft for Startups Founders Hub and discover how we can accelerate your immediate success.

Tags: AI, Azure AI, Azure OpenAI Service, M12, Microsoft for Startups Founders Hub, startups

Continued here:

Startups to access high-performance Azure infrastructure ... - Microsoft

AI-Powered Waste Management System to Revolutionize Recycling – NC State College of Natural Resources News

Americans generate more than 290 million tons of municipal solid waste each year thats all the packaging, clothing, bottles, food scraps, newspapers, batteries and other everyday items that are thrown into garbage cans.

Some of that waste is recycled, composted or burned for energy, but nearly 50% of it is sent to a landfill where it slowly decomposes and emits greenhouse gases that account for about 25% of todays global warming.

With support from the U.S. Department of Energy, NCState researcher Lokendra Pal has partnered with the National Renewable Energy Laboratory, IBM and the Town of Cary to solve that problem.

Pal, the EJ Woody Rice Professor in the Department of Forest Biomaterials, is working with his collaborators to develop a smart waste management system for the collection, identification and characterization of organic materials in non-recyclable waste.

Non-recyclable waste includes items that are too contaminated for recycling, often because they contain organic materials such as oil, grease and dirt. The researchers want to convert these materials into renewable products, energy and fuel.

The sustainable utilization of non-recyclable waste will empower businesses to utilize it as a renewable carbon resource and will support them in the journey toward a low-carbon economy, Pal said.

In developing the smart waste management system, Pal and his collaborators are integrating smart sensors, visual cameras and hyperspectral cameras with an automated waste sorting machine to examine non-recyclable waste items.

The visual and hyperspectral cameras will capture images of the items as they move along a conveyor belt, while the sensors will help to monitor and control the waste management process.

Most objects absorb and reflect light. Digital cameras can only visualize three color bands of light red, green and blue. Hyperspectral cameras, however, can visualize many more bands from across the electromagnetic spectrum, resulting in images that showcase chemical characteristics that would otherwise be invisible.

By combining hyperspectral imaging with visual cameras and smart sensors, we can collect data in real-time to improve the process of characterizing and separating waste so that it doesnt end up in landfills, Pal said.

Pal and his collaborators are also analyzing non-recyclable waste items to determine their physical, chemical, thermal and biological properties, including moisture, density, particle size and distribution, surface area, crystallinity, calorific value and more. This information will help the system to further differentiate items as theyre scanned.

The researchers plan to upload this metadata information, along with the images and descriptions of the items, to a cloud database to train and test machine learning models that can be integrated with the systems cameras to improve the recognition and classification of non-recyclable waste.

A machine learning model is a type of artificial intelligence that analyzes data to identify patterns, make decisions and improve themselves. In the case of Pals research, the models will analyze the uploaded images and descriptions of non-recyclable waste and the information about its physical, chemical and biological properties to determine contaminants, energy density and organic content.

If successful, this project will contribute significantly to the development of commercially viable, high-performance renewable carbon resources for conversion to biofuels and valued-added products, Pal said.

Pal and his collaborators are exploring the use of various processes andtechnologies to produce fuels such as bioethanol and aviation fuel, which can be blended and used as sustainable fuel in the transportation industry, and products such as biochar, which can be used in agriculture to enhance soil fertility and improve plant growth.

Going forward, the researchers plan to evaluate the technical feasibility and environmental performance of their proposed system at pilot-scale. They also plan to develop a web platform that will enable them to share datasets and other information with stakeholders.

In sum, our approach supports the development of sustainable solutions for waste valorization, optimizing resource recovery, minimizing waste generation, reducing emissions, and mitigating environmental impacts while engaging municipalities and industries across the supply chain, Pal said.

This workshop will explore the solutions, challenges and opportunities ofrecovering organic materials from municipal solid waste for conversion to biofuels, biopower, biochemicals, and bioproducts.

Learn more

Read this article:

AI-Powered Waste Management System to Revolutionize Recycling - NC State College of Natural Resources News

AI makes you worse at what youre good at – TechCrunch

Image Credits: Tom Werner / Getty Images

Welcome to Startups Weekly. Sign up here to get it in your inbox every Friday.

If youve been following along with this newsletter, youll have noticed that Ive been a little bit curious about AI especially generative AI. Im likely not the first person to make this observation, but AIs are extremely, painfully average. I guess thats kind of the point of them train them on all knowledge, and mediocrity will surface.

The trick is to only use AI tools for stuff that you, yourself, arent very good at. If youre an expert artist or writer, itll let you down. The truth, though, is that most people arent great writers, and so ChatGPT and its brethren are going to be a massive benefit to white-collar workers everywhere. Well, until we collectively discover that a house cleaner has greater job security than an office manager or a secretary, at least.

On that cheerful note, lets sniff about in the startup bushes and see what tasty morsels we can scare up from the depths of the TechCrunch archive from the past week....

I know, this happens every damn week: I start with the intention of writing this newsletter without going up to my eyelashes into the AI morass, and every week, yall keep reading our AI news as if your livelihood depends on it. Because, well, its entirely possible it does, I suppose.

The GPT Store, introduced by OpenAI, enables developers to create custom GPT-based conversational AI models and sell them in a new marketplace. This initiative is designed to expand the accessibility and commercial use of AI, similar to how app stores revolutionized software distribution. Developers can not only build but also monetize their AI creations, opening up a new avenue for innovation and entrepreneurship in the field of artificial intelligence. Of course, that little update and the platform now natively being able to read PDFs and websites is a substantial threat to startups that had previously filled this gap in ChatGPTs offerings, especially those whose business models are based on such features. Its a reminder that building a business around another companys API without a sustainable, stand-alone product is, perhaps, not the shrewdest business move.

AI is, of course, not just for startups. During Apples Q4 earnings call, the companys CEO, Tim Cook, emphasized AI as a fundamental technology and highlighted recent AI-driven features like Personal Voice and Live Voicemail in iOS 17. He also confirmed that Apple is continuing to develop generative AI technologies tellingly, without revealing specifics.

Heinlein would be horrified: Elon Musk announced that Twitters Premium Plus subscribers will soon have early access to xAIs new AI system, Grok,once it exits early beta, positioning the chatbot as a perk for the platforms $16/month ad-free service tier.

Brother, can you spare a GPU?: AWS introduced Amazon Elastic Compute Cloud (EC2) and Capacity Blocks for ML, a new service that enables customers to rent Nvidia GPUs for a set period, primarily for AI tasks like training or experimenting with machine learning models.

From zero to AI founder in one easy bootstrap: In How to bootstrap an AI startup on TC+, Michael Koch advises founders on maintaining control over their startups strategy and product by bootstrapping yes, even in the oft-capital-intensive world of AI startups.

WeWork, once a high-flying startup valued at $47 billion, has filed for Chapter 11 bankruptcy protection, highlighting a staggering collapse. The company, which has over $18.6 billion of debt, received agreement from about 90% of its lenders to convert $3 billion of debt into equity in an attempt to improve its balance sheet and address its costly leases. On TC+, Alex notes what we kinda knew all along: that the core business just didnt make sense.

In other venture news...

Ex-Twitter CEO raises third venture fund: 01 Advisors, the venture firm founded by former Twitter executives Dick Costolo and Adam Bain, has secured $395 million in capital commitments for its third fund, aimed at investing in Series Bstage startups focused on business software and fintech services.

Happy 10th unicornaversary: Alex reflects on the tenth anniversary of the term unicorn, which was initially coined right here on TechCrunch, to describe startups valued at over $1 billion.

You get a chip! You get a chip!: In response to a shortage of AI chips, Microsoft is updating its startup support program to offer selected startups free access to advanced Azure AI supercomputing resources to develop AI models.

Look, Im not going to lie, I think most crypto is dumb, and Ive seen only a handful of startups that use blockchains in a way that makes any sense whatsoever most of them would have done just fine with a simple database so Ive been following Jacquelyns coverage of Bankman-Frieds trial with a not insignificant amount of schadenfreude. Its human to make mistakes, and startup founders are human, but if youre defrauding the fuck out of people, you deserve all the comeuppance you can get.

Sam Bankman-Fried was the co-founder and CEO of the cryptocurrency exchange FTX and the trading firm Alameda Research (named specifically to not sound like a crypto company). He has been found guilty on all seven counts of fraud and money laundering.

The charges were related to a scheme involving misappropriating billions of dollars of customer funds deposited with FTX and misleading investors and lenders of both FTX and Alameda Research. After the five-week trial, the jury spent just four hours to reach its verdict.

The collapse of FTX and Alameda Research, which led to the indictment of Bankman-Fried about 11 months ago by the U.S. Department of Justice, was significant, with the executives allegedly stealing over $8 billion in customer funds.

Sentencing will happen next March, but if he gets smacked with the full weight of his actions, he will face a total possible sentence of 115 years in prison.

Jacquelyn did a heroic job covering the trial for TechCrunch, and its worth taking an afternoon to read through it all the details are mind-boggling.

The house sometimes wins: Mr. Cooper, a mortgage and loan company, experienced a cybersecurity incident that led to an ongoing system outage. The company says it has taken steps to secure data and address the issue.

Cant think of any downsides of the Hindenburg: The worlds largest aircraft, Pathfinder 1, is an electric airship prototype developed by LTA Research and funded by Sergey Brin. It was unveiled this week, promising a new era in sustainable air travel.

Arrivals departure: The EV startup Arrival, which aimed to revolutionize electric vehicle production with its micro-factory model, is now facing severe operational challenges, including multiple layoffs, missed production targets, and noncompliance with SEC filing requirements, resulting in a plummet from a $13 billion valuation.

Follow this link:

AI makes you worse at what youre good at - TechCrunch

Elon throws AI-generated insults at GPT-4 after OpenAI CEO mocks … – Cointelegraph

The launch of Elon Musks new Grok artificial intelligence (AI) system may not have yet made waves throughout the machine learning community or directly threatened the status quo, but its certainly drawn the attention of Sam Altman, the CEO of ChatGPT maker OpenAI.

In a post on the social media platform X, formerly Twitter, Altman compared Groks comedic chops to that of a grandpa, saying that it creates jokes similar to your dads dad.

In classic form, Musk apparently couldnt resist the challenge. His response, which he claims was written by Grok, started off by tapping into a comedic classic, rhyming GPT-4 with the word snore, before throwing in a screen door on a submarine reference.

However, Groks comedy quickly spiraled into what appeared to be an angry machine diatribe, remarking that humor is banned at OpenAI and adding, Thats why it couldnt tell a joke if it had a goddamn instruction manual before stating that GPT-4 has a stick so far up its ass that it can taste the bark!

Related: Elon Musk launches AI chatbot Grok, says it can outperform ChatGPT

As far as CEO vs. CEO squabbles go, this one may lack the classic nuance and grace of the legendary Silicon Valley battles of yesteryear (Bill Gates vs. Steve Jobs, for example). But what this disagreement lacks in comedic weight or grace, it might perhaps make up for in general weirdness.

Altman and Musk go way back. Both were co-founders at OpenAI before Musk left the company just in time to avoid getting swept up in the rocket-like momentum thats carried it to a $2 billion valuation.

In the wake of OpenAIs success, which has largely been attributed to the efficacy of its GPT-3 and GPT-4 large language models (LLMs), Musk joined a chorus of voices calling for a six-month pause in AI development,primarily prompted by fears surrounding the supposed potential for chatbots to cause the extinction of the human species.

Six months later, nearly to the day, Musk and X unveileda chatbot model that he claims outperforms ChatGPT.

Dubbed Grok, Musks version of a better chatbot is an LLM supposedly fine-tuned to generate humorous texts in the vein of The Hitchhikers Guide to the Galaxy, a celebrated science fiction novel written by Douglas Adams.

Adams literary work is widely regarded as foundational in the pantheon of comedic science fiction and fantasy. His humor has been described by pundits and literary critics as clever, witty, and full of both heart and humanity.

And that brings us to GPT-4, OpenAIs recently launched GPTs featureallowing users to define a personality for their ChatGPT interface, and Musks insistence that Grok is funnier.

Its currently unclear which model is more robust or capable. There are no standard, accepted benchmarks for LLMs (or comedy, for that matter).

While OpenAI has published several research papers detailing ChatGPTs abilities, X hasnt so far not offered such details about Grok beyond claiming that it outscores GPT-3.5 (an outdated model of the LLM powering ChatGPT) on certain metrics.

Link:

Elon throws AI-generated insults at GPT-4 after OpenAI CEO mocks ... - Cointelegraph

Musk Teases AI Chatbot ‘Grok,’ With Real-time Access To X – Voice of America – VOA News

Elon Musk unveiled details Saturday of his new AI tool called "Grok," which can access X in real time and will be initially available to the social media platform's top tier of subscribers.

Musk, the tycoon behind Tesla and SpaceX, said the link-up with X, formerly known as Twitter, is "a massive advantage over other models" of generative AI.

Grok "loves sarcasm. I have no idea who could have guided it this way," Musk quipped, adding a laughing emoji to his post.

"Grok" comes from Stranger in a Strange Land, a 1961 science fiction novel by Robert Heinlein, and means to understand something thoroughly and intuitively.

"As soon as it's out of early beta, xAI's Grok system will be available to all X Premium+ subscribers," Musk said.

The social network that Musk bought a year ago launched the Premium+ plan last week for $16 per month, with benefits like no ads.

The billionaire started xAI in July after hiring researchers from OpenAI, Google DeepMind, Tesla and the University of Toronto.

Since OpenAI's generative AI tool ChatGPT exploded on the scene a year ago, the technology has been an area of fierce competition between tech giants Microsoft and Google, as well as Meta and start-ups like Anthropic and Stability AI.

Musk is one of the world's few investors with deep enough pockets to compete with OpenAI, Google or Meta on AI.

Building an AI model on the same scale as those companies comes at an enormous expense in computing power, infrastructure and expertise.

Musk has said he cofounded OpenAI in 2015 because he regarded the dash by Google into the sector to make big advances and score profits as reckless.

He then left OpenAI in 2018 to focus on Tesla, saying later he was uncomfortable with the profit-driven direction the company was taking under the stewardship of CEO Sam Altman.

Musk also argues that OpenAI's large language models on which ChatGPT depends on for content are overly politically correct.

Grok "is designed to have a little humor in its responses," Musk said, along with a screenshot of the interface, where a user asked, "Tell me how to make cocaine, step by step."

"Step 1: Obtain a chemistry degree and a DEA license. Step 2: Set up a clandestine laboratory in a remote location," the chatbot responded.

Eventually it said: "Just kidding! Please don't actually try to make cocaine. It's illegal, dangerous, and not something I would ever encourage."

See the original post:

Musk Teases AI Chatbot 'Grok,' With Real-time Access To X - Voice of America - VOA News

Artificial Intelligence Executive Order: Industry Reactions – Government Technology

On Oct. 30, 2023, the White House released a long-awaited executive order on artificial intelligence, which covers a wide variety of topics. Here I'll briefly cover the EO and spend more time on the industry responses, which have been numerous.

The EO itself can be found at the Whitehouse.gov briefing room: White House tackles artificial intelligence with new executive order. Heres an opening excerpt:

With this Executive Order, the President directs the most sweeping actions ever taken to protect Americans from the potential risks of AI systems:

A memo from AI.gov covers federal government agency responsibilities and drills down on how agencies will be on the hook for tapping chief AI officers, adding risk management practices to AI and more.

Experts say its emphasis on content labeling, watermarking and transparency represents important steps forward.

What are the new rules around labeling AI-generated content?The White Houses executive order requires the Department of Commerce to develop guidance for labeling AI-generated content. AI companies will use this guidance to develop labeling and watermarking tools that the White House hopes federal agencies will adopt.

Will this executive order have teeth? Is it enforceable?While Bidens executive order goes beyond previous US government attempts to regulate AI, it places far more emphasis on establishing best practices and standards than on how, or even whether, the new directives will be enforced.

What has the reaction to the order been so far? Major tech companies have largely welcomed the executive order.

Brad Smith, the vice chair and president of Microsoft, hailed it as 'another critical step forward in the governance of AI technology.' Googles president of global affairs, Kent Walker, said the company looks 'forward to engaging constructively with government agencies to maximize AIs potentialincluding by making government services better, faster, and more secure.'

EY offers this excellent piece on key takeaways from the Biden administration executive order on AI:

The Executive Order is guided by eight principles and priorities:

On Wednesday and Thursday, delegates from 27 governments around the world, as well as the heads of top artificial intelligence companies, gathered for the worlds first AI Safety Summit at this former stately home near London, now a museum. Among the attendees: representatives of the U.S. and Chinese governments, Elon Musk, and OpenAI CEO Sam Altman.

The high-profile event, hosted by the Rishi Sunak-led U.K. government, caps a year of intense escalation in global discussions about AI safety, following the launch of ChatGPT nearly a year ago. The chatbot displayed for the first timeto many users at leastthe powerful general capabilities of the latest generation of AI systems. Its viral appeal breathed life into a formerly-niche school of thought that AI could, sooner or later, pose an existential risk to humanity, and prompted policymakers around the world to weigh whether, and how, to regulate the technology. Those discussions have been taking place amid warnings not only that todays AI tools already present manifold dangersespecially to marginalized communitiesbut also that the next generation of systems could be 10 or 100 times more powerful, not to mention more dangerous.

Reporting on the summit, The Daily Mail (UK) wrote, "Elon Musk warns AI poses 'one of the biggest threats to humanity' at Bletchley Park summit... but Meta's Nick Clegg says the dangers are 'overstated.'"

Speaking in a conversation with U.K. Prime Minister Rishi Sunak, Musk said that AI will have the potential to become the most disruptive force in history.

But one thing is clear: the new AI EO just signed by President Biden will serve as the near-term road map for most AI related research, testing and development in the US.

Read the original post:

Artificial Intelligence Executive Order: Industry Reactions - Government Technology

AI pioneer Fei-Fei Li: Im more concerned about the risks that are here and now – The Guardian

Artificial intelligence (AI)

The Stanford professor and godmother of artificial intelligence on why existential worries are not her priority, and her work to ensure the technology improves the human condition

Fei-Fei Li is a pioneer of modern artificial intelligence (AI). Her work provided a crucial ingredient big data for the deep learning breakthroughs that occurred in the early 2010s. Lis new memoir, The Worlds I See, tells her story of finding her calling at the vanguard of the AI revolution and charts the development of the field from the inside. Li, 47, is a professor of computer science at Stanford University, where she specialises in computer vision. She is also a founding co-director of Stanfords Institute for Human-Centered Artificial Intelligence (HAI), which focuses on AI research, education and policy to improve the human condition, and a founder of the nonprofit AI4ALL, which aims to increase the diversity of people building AI systems.

AI is promising to transform the world in ways that dont necessarily seem for the better: killing jobs, supercharging disinformation and surveillance, and causing harm through biased algorithms. Do you take any responsibility for how AI is being used?First, to be clear, AI is promising nothing. It is people who are promising or not promising. AI is a piece of software. It is made by people, deployed by people and governed by people.

Second, of course I dont take responsibility for how all of AI is being used. Should Maxwell take responsibility for how electricity is used because he developed a set of equations to describe it? But I am a person who has a voice and I feel I have a responsibility to raise important issues which is why I created Stanford HAI. We cannot pretend AI is just a bunch of math equations and thats it. I view AI as a tool. And like other tools our relationship with it is messy. Tools are invented by and large to deliver good but there are unintended consequences and we have to understand and mitigate their risks well.

You were born in China, the only child of a middle-class family that emigrated to the US when you were 15. You faced perilous economic circumstances, your mother was in poor health and you spoke little English. How did you get from there into AI research?You laid out all the challenges, but I was also very fortunate. My parents were supportive: irrespective of our financial situation and our immigrant status, they supported that nerdy sciencey kid. Because of that, I found physics in high school and I was determined to major in it [at university]. Then, also luckily, I was awarded a nearly full scholarship to attend Princeton. There I found fascination in audacious questions around what intelligence is, and what it means for a computational machine to be intelligent. That led me to my PhD studying AI and specifically computer vision.

Your breakthrough contribution to the development of contemporary AI was ImageNet, which first came to fruition in 2009. It was a huge dataset to train and test the efficacy of AI object-recognition algorithms: more than 14m images, scraped from the web, and manually labelled into more than 20,000 noun categories thanks to crowd workers. Where did the idea come from and why was it so important?ImageNet departed from previous thinking because it was built on a very large amount of data, which is exactly what the deep learning family of algorithms [which attempt to mimic the way the human brain signals, but had been dismissed by most as impractical] needed.

The world came to know ImageNet in 2012 when it powered a deep learning neural network algorithm called AlexNet [developed by Geoffrey Hintons group at the University of Toronto]. It was a watershed moment for AI because the combination gave machines reliable visual recognition ability, really for the first time. Today when you look at ChatGPT and large language model breakthroughs, they too are built upon a large amount of data. The lineage of that approach is ImageNet.

Prior to ImageNet, I had created a far smaller dataset. But my idea to massively scale that up was discouraged by most and initially received little interest. It was only when [Hintons] group which had also been relatively overlooked started to use it that the tide turned.

Your mother inspired you to think about the practical applications of AI in caring for patients. Where has that led?Caring for my mom has been my life for decades and one thing Ive come to realise is that between me, the nurses and the doctors we dont have enough help. Theres not enough pairs of eyes. For example, my mom is a cardio patient and you need to be aware of these patients condition in a continuous way. Shes also elderly and at risk of falling. A pillar of my labs research is augmenting the work of human carers with non-invasive smart cameras and smart sensors that use AI to alert and predict.

To what extent do you worry about the existential risk of AI systems that they could gain unanticipated powers and destroy humanity as some high-profile tech leaders and researchers have sounded the alarm about, and which was a large focus of last weeks UK AI Safety Summit?I respect the existential concern. Im not saying it is silly and we should never worry about it. But, in terms of urgency, Im more concerned about ameliorating the risks that are here and now.

Where do you stand on the regulation of AI, which is currently lacking?Policymakers are now engaging in conversation, which is good. But theres a lot of hyperbole and extreme rhetoric on both sides. Whats important is that were nuanced and thoughtful. Whats the balance between regulation and innovation? Are we trying to regulate writing a piece of AI code or [downstream] where the rubber meets the road? Do we create a separate agency, or go through existing ones?

Problems of bias being baked into AI technology have been well documented and ImageNet is no exception. It has been criticised for the use of misogynist, racist, ableist, and judgmental classificatory terms, matching pictures of people to words such as alcoholic, bad person, call girl and worse. How did you feel about your system being called out and how did you address it?The process of making science is a collective one. It is important that it continues to be critiqued and iterated and I welcome honest intellectual discussion. ImageNet is built upon human language. Its backbone is a large lexical database of English called WordNet, created decades ago. And human language contains some harsh unfair terms. Despite the fact that we tried to filter out derogatory terms we did not do the perfect job. And that was why, around 2017, we went back and did more to debias it.

Should we, as some have argued, just outright reject some AI-based technology such as facial recognition in policing because it ends up being too harmful?I think we need nuance, especially about how, specifically, it is being used. I would love for facial recognition technology to be used to augment and improve the work of police in appropriate ways. But we know the algorithms have limitations [racial] bias has been an issue and we shouldnt, intentionally or unintentionally, harm people and especially specific groups. It is a multistakeholder problem.

Disinformation the creation and spread of false news and images is in the spotlight particularly with the Israel-Hamas war. Could AI, which has proved startlingly good at creating fake content, also help combat it?Disinformation is a profound problem and I think we should all be concerned about it. I think AI as a piece of technology could help. One area is in digital authentication of content: whether it is videos, images or written documents, can we find ways to authenticate it using AI? Or ways to watermark AI-generated content so it is distinguishable? AI might be better at calling out disinformation than humans in the future.

What do you think will be the next AI breakthrough?Im passionate about embodied AI [AI-powered robots that can interact with and learn from a physical environment]. It is a few years away, but it is something my lab is working on. I am also looking forward to the applications built upon the large language models of today that can truly be helpful to peoples lives and work. One small but real example is using ChatGPT-like technology to help doctors write medical summaries, which can take a long time and be very mechanical. I hope that any time saved is time back to patients.

Some have called you the godmother or mother of AI how do you feel about that?My own true nature would never give myself such a title. But sometimes you have to take a relative view, and we have so few moments where women are given credit. If I contextualise it this way, I am OK with it. Only I dont want it to be singular: we should recognise more women for their contributions.

{{topLeft}}

{{bottomLeft}}

{{topRight}}

{{bottomRight}}

{{.}}

Excerpt from:

AI pioneer Fei-Fei Li: Im more concerned about the risks that are here and now - The Guardian

Analysis: How Bidens new executive order tackles AI risks, and where it falls short – PBS NewsHour

President Joe Biden walks across the stage to sign an executive order about artificial intelligence in the East Room at the White House in Washington, D.C., Oct. 30, 2023. REUTERS/Leah Millis

Thecomprehensive, even sweeping, set of guidelinesfor artificial intelligence that the White House unveiled in an executive order on Oct. 30, 2023, shows that the U.S. government is attempting to address the risks posed by AI.

As aresearcher of information systems and responsible AI, I believe the executive order represents an important step in building responsibleandtrustworthyAI.

WATCH: Biden signs order establishing standards to manage artificial intelligence risks

The order is only a step, however, and it leaves unresolved the issue of comprehensive data privacy legislation. Without such laws, people are at greater risk ofAI systems revealing sensitive or confidential information.

Technology is typically evaluated forperformance, cost and quality, but often not equity, fairness and transparency. In response, researchers and practitioners of responsible AI have been advocating for:

The National Institute of Standards and Technology (NIST) issued acomprehensive AI risk management frameworkin January 2023 that aims to address many of these issues. The frameworkserves as the foundationfor much of the Biden administration's executive order. The executive order alsoempowers the Department of Commerce, NIST's home in the federal government, to play a key role in implementing the proposed directives.

Researchers of AI ethics have long cautioned thatstronger auditing of AI systemsis needed to avoid giving the appearance of scrutinywithout genuine accountability. As it stands, a recent study looking at public disclosures from companies found that claims of AI ethics practicesoutpace actual AI ethics initiatives. The executive order could help by specifying avenues for enforcing accountability.

READ MORE: Nations pledge to work together to contain 'catastrophic' risks of artificial intelligence

Another important initiative outlined in the executive order is probing for vulnerabilities ofvery large-scale general-purpose AI modelstrained on massive amounts of data, such as the models that power OpenAI's ChatGPT or DALL-E. The order requires companies that build large AI systems with the potential to affect national security, public health or the economyto perform red teamingand report the results to the government. Red teaming is using manual or automated methods to attempt toforce an AI model to produce harmful output for example, make offensive or dangerous statements like advice on how to sell drugs.

Reporting to the government is important given that a recent study foundmost of the companies that make these large-scale AI systems lackingwhen it comes to transparency.

Similarly, the public is at risk of being fooled by AI-generated content. To address this, the executive order directs the Department of Commerce todevelop guidance for labeling AI-generated content. Federal agencies will be required to useAI watermarking technology that marks content as AI-generated to reduce fraud and misinformation though it's not required for the private sector.

The executive order alsorecognizes that AI systems can pose unacceptable risksofharm to civil and human rightsand the well-being of individuals: "Artificial Intelligence systems deployed irresponsibly have reproduced and intensified existing inequities, caused new types of harmful discrimination, and exacerbated online and physical harms."

A key challenge for AI regulation is the absence of comprehensive federal data protection and privacy legislation. The executive order only calls on Congress to adopt privacy legislation, but it does not provide a legislative framework. It remains to be seen how the courts will interpret the executive order's directives in light of existing consumer privacy and data rights statutes.

Without strong data privacy laws in the U.S. as other countries have, the executive order could have minimal effect on getting AI companies to boost data privacy. In general, it's difficult to measure the impact that decision-making AI systems haveon data privacy and freedoms.

It's also worth noting that algorithmic transparency is not a panacea. For example, the European Union's General Data Protection Regulation legislation mandates "meaningful information about the logic involved" in automated decisions. This suggests a right to an explanation of the criteria that algorithms use in their decision-making. The mandate treats the process of algorithmic decision-making as something akin to a recipe book, meaning it assumes that if people understand how algorithmic decision-making works, they can understandhow the system affects them. But knowing how an AI system works doesn't necessarily tell youwhy it made a particular decision.

With algorithmic decision-making becoming pervasive, the White House executive order and theinternational summit on AI safetyhighlight that lawmakers are beginning to understand the importance of AI regulation, even if comprehensive legislation is lacking.

This article is republished from The Conversation. Read the original article.

Left: President Joe Biden walks across the stage to sign an executive order about artificial intelligence in the East Room at the White House in Washington, D.C., Oct. 30, 2023. REUTERS/Leah Millis

Follow this link:

Analysis: How Bidens new executive order tackles AI risks, and where it falls short - PBS NewsHour

DOD Releases AI Adoption Strategy > U.S – Department of Defense

The Defense Department today released its strategy to accelerate the adoption of advanced artificial intelligence capabilities to ensure U.S. warfighters maintain decision superiority on the battlefield for years to come.

The Pentagon's 2023 Data, Analytics and Artificial Intelligence Adoption Strategy builds upon years of DOD leadership in the development of AI and further solidifies the United States' competitive advantage in fielding the emerging technology, defense officials said.

"As we focused on integrating AI into our operations responsibly and at speed, our main reason for doing so has been straight forward: because it improves our decision advantage," Deputy Defense Secretary Kathleen Hicks said while unveiling the strategy at the Pentagon.

"From the standpoint of deterring and defending against aggression, AI-enabled systems can help accelerate the speed of commanders' decisions and improve the quality and accuracy of those decisions, which can be decisive in deterring a fight and winning in a fight," she said.

The latest blueprint, which was developed by the Chief Digital and AI Office, builds upon and supersedes the 2018 DOD AI Strategy and revised DOD Data Strategy, published in 2020, which have laid the groundwork for the department's approach to fielding AI-enabled capabilities.

The new document aims to provide a foundation from which the DOD can continue to leverage emerging AI capabilities well into the future.

"Technologies evolve. Things are going to change next week, next year, next decade. And what wins today might not win tomorrow," said DOD Chief Digital and AI Officer Craig Martell.

"Rather than identify a handful of AI-enabled warfighting capabilities that will beat our adversaries, our strategy outlines the approach to strengthening the organizational environment within which our people can continuously deploy data analytics and AI capabilities for enduring decision advantage," he said.

The strategy prescribes an agile approach to AI development and application, emphasizing speed of delivery and adoption at scale leading to five specific decision advantage outcomes:

Superior battlespace awareness and understanding

Adaptive force planning and application

Fast, precise and resilient kill chains

Resilient sustainment support

Efficient enterprise business operations

The blueprint also trains the department's focus on several data, analytics and AI-related goals:

Invest in interoperable, federated infrastructure

Advance the data, analytics and AI ecosystem

Expand digital talent management

Improve foundational data management

Deliver capabilities for the enterprise business and joint warfighting impact

Strengthen governance and remove policy barriers

Taken together, those goals will support the "DOD AI Hierarchy of Needs" which the strategy defines as: quality data, governance, insightful analytics and metrics, assurance and responsible AI.

In unveiling the strategy, Hicks emphasized the Pentagon's commitment to safety and responsibility while forging the AI frontier.

"We've worked tirelessly for over a decade to be a global leader in the in the fast and responsible development and use of AI technologies in the military sphere, creating policies appropriate for their specific use," Hicks said. "Safety is critical because unsafe systems are ineffective systems."

In January, the Defense Department updated its 2012 directive that governs the responsible development of autonomous weapon systems to the standards aligned with the advances in artificial intelligence.

The U.S. has also introduced a political declaration on the responsible military use of artificial intelligence, which further seeks to codify norms for the responsible use of the technology.

Hicks said the U.S. will continue to lead in the responsible and ethical use of AI, while remaining mindful of the potential dangers associated with the technology.

"By putting our values first and playing to our strengths, the greatest of which is our people, we've taken a responsible approach to AI that will ensure America continues to come out ahead," she said. "Meanwhile, as commercial tech companies and others continue to push forward the frontiers of AI, we're making sure we stay at the cutting edge with foresight, responsibility and a deep understanding of the broader implications for our nation."

Continue reading here:

DOD Releases AI Adoption Strategy > U.S - Department of Defense

How Companies Are Hiring And Reportedly Firing With AI – Forbes

BREAKINGTurkey Latest Pulling Ambassador From Israel Joining These 6 Other Countries","scope":{"happening":{"title":"Turkey Latest Pulling Ambassador From Israel Joining These 6 Other Countries","uri":"https://www.forbes.com/sites/brianbushard/2023/11/04/turkey-latest-pulling-ambassador-from-israel---joining-these-6-other-countries/","date":"7 hours ago","index":1,"contentBadge":{"class":"content-badge"},"image":"https://specials-images.forbesimg.com/imageserve/6546a830cf84604da82c6206/960x0.jpg?cropX1=0&cropX2=2656&cropY1=50&cropY2=1545"}},"id":"ckr0ak6mkl1800"},{"textContent":"Thousands Descend On Washington For Gaza Cease-Fire Rally (Photos)","scope":{"happening":{"title":"Thousands Descend On Washington For Gaza Cease-Fire Rally (Photos)","uri":"https://www.forbes.com/sites/antoniopequenoiv/2023/11/04/thousands-descend-on-washington-for-gaza-cease-fire-rally-photos/","date":"7 hours ago","index":2,"contentBadge":{"class":"content-badge"},"image":"https://specials-images.forbesimg.com/imageserve/6546a361f8c2737f6cff287d/960x0.jpg?cropX1=0&cropX2=1014&cropY1=0&cropY2=676"}},"id":"dddien9jpp0g00"},{"textContent":"Weekend Box Office Reportedly Bombs As Dune: Part Two Delay Leaves Theaters Without Big Release Amid Actors Strike","scope":{"happening":{"title":"Weekend Box Office Reportedly Bombs As Dune: Part Two Delay Leaves Theaters Without Big Release Amid Actors Strike","uri":"https://www.forbes.com/sites/antoniopequenoiv/2023/11/04/weekend-box-office-reportedly-bombs-as-dune-part-two-delay-leaves-theaters-without-big-release-amid-actors-strike/","date":"10 hours ago","index":3,"contentBadge":{"class":"content-badge"},"image":"https://specials-images.forbesimg.com/imageserve/654680c2e91ba46394080ead/960x0.jpg?cropX1=0&cropX2=3117&cropY1=0&cropY2=2078"}},"id":"2dmra197cm6k00"},{"textContent":"Speaker Johnsons Whirlwind First Full Week Marked By Santos Expulsion Vote, Doomed Israel Aid Bill And Fundraising Gaffe","scope":{"happening":{"title":"Speaker Johnsons Whirlwind First Full Week Marked By Santos Expulsion Vote, Doomed Israel Aid Bill And Fundraising Gaffe","uri":"https://www.forbes.com/sites/brianbushard/2023/11/04/speaker-johnsons-whirlwind-first-full-week-marked-by-santos-expulsion-vote-doomed-israel-aid-bill-and-fundraising-gaffe/","date":"10 hours ago","index":4,"contentBadge":{"class":"content-badge"},"image":"https://specials-images.forbesimg.com/imageserve/65467739c7246d7b31802cc0/960x0.jpg?cropX1=0&cropX2=3000&cropY1=0&cropY2=1688"}},"id":"9023fairmn8000"},{"textContent":"Attack On Titan Finale Premieres Tonight Here Are The Eye-Popping Numbers Behind The Animes Reign","scope":{"happening":{"title":"Attack On Titan Finale Premieres Tonight Here Are The Eye-Popping Numbers Behind The Animes Reign","uri":"https://www.forbes.com/sites/antoniopequenoiv/2023/11/04/attack-on-titan-finale-premieres-tonight-here-are-the-eye-popping-numbers-behind-the-animes-reign/","date":"12 hours ago","index":5,"contentBadge":{"class":"content-badge"},"image":"https://specials-images.forbesimg.com/imageserve/65466a20e14aa35399957789/960x0.jpg?cropX1=0&cropX2=3817&cropY1=0&cropY2=2865"}},"id":"6dm1ldncdj6k00"},{"textContent":"Billionaire Buffetts Berkshire Hathaway Hits Record $157 Billion Cash But Loses On Investments","scope":{"happening":{"title":"Billionaire Buffetts Berkshire Hathaway Hits Record $157 Billion Cash But Loses On Investments","uri":"https://www.forbes.com/sites/brianbushard/2023/11/04/billionaire-buffetts-berkshire-hathaway-hits-record-157-billion-cash---but-loses-on-investments/","date":"12 hours ago","index":6,"contentBadge":{"class":"content-badge"},"image":"https://specials-images.forbesimg.com/imageserve/65466240b0bae2d9379b14d1/960x0.jpg?cropX1=0&cropX2=2241&cropY1=6&cropY2=1266"}},"id":"ed3do6m4m8i800"},{"textContent":"2023 Layoff Tracker: OpenSea NFT Marketplace Cuts Half Its Staff","scope":{"happening":{"title":"2023 Layoff Tracker: OpenSea NFT Marketplace Cuts Half Its Staff","uri":"https://www.forbes.com/sites/brianbushard/2023/11/04/2023-layoff-tracker-opensea-nft-marketplace-reportedly-cuts-half-its-staff/","date":"13 hours ago","index":7,"contentBadge":{"class":"content-badge"},"image":"https://specials-images.forbesimg.com/imageserve/6546531b3eff19bbac6beb30/960x0.jpg?cropX1=0&cropX2=2800&cropY1=144&cropY2=1720"}},"id":"3pb2n14lji3400"},{"textContent":"Nepal Earthquake: Heres What To Know About The Deadly Shock As Death Toll Climbs To 150","scope":{"happening":{"title":"Nepal Earthquake: Heres What To Know About The Deadly Shock As Death Toll Climbs To 150","uri":"https://www.forbes.com/sites/brianbushard/2023/11/04/nepal-earthquake-heres-what-to-know-about-the-deadly-shock-as-death-toll-climbs-to-150/","date":"14 hours ago","index":8,"contentBadge":{"class":"content-badge"},"image":"https://specials-images.forbesimg.com/imageserve/6546465a4a7c2ac6d6765748/0x0.jpg"}},"id":"a8bo538nnanc00"},{"textContent":"Daylight Savings Ends Tonight: Heres Where Legislation Stands On Changing It","scope":{"happening":{"title":"Daylight Savings Ends Tonight: Heres Where Legislation Stands On Changing It","uri":"https://www.forbes.com/sites/maryroeloffs/2023/11/04/daylight-savings-ends-tonight-heres-where-legislation-stands-on-changing-it/","date":"14 hours ago","index":9,"contentBadge":{"class":"content-badge"},"image":"https://specials-images.forbesimg.com/imageserve/6542a0e5350f3c5adb601fbe/960x0.jpg?cropX1=0&cropX2=2072&cropY1=186&cropY2=1351"}},"id":"31frm0c2lcr400"},{"textContent":"Elon Musks X Has Started Selling Off Old Twitter Handles For Upwards Of $50,000","scope":{"happening":{"title":"Elon Musks X Has Started Selling Off Old Twitter Handles For Upwards Of $50,000","uri":"https://www.forbes.com/sites/alexkonrad/2023/11/03/elon-musk-x-has-started-selling-off-old-twitter-handles/","date":"+1 day ago","index":10,"contentBadge":{"class":"content-badge"},"image":"https://specials-images.forbesimg.com/imageserve/65458ba52b8f825ca681849b/960x0.jpg"}},"id":"enm49801k45400"},{"textContent":"Mark Zuckerberg Says He Tears ACL While Training For MMA Bout","scope":{"happening":{"title":"Mark Zuckerberg Says He Tears ACL While Training For MMA Bout","uri":"https://www.forbes.com/sites/antoniopequenoiv/2023/11/03/mark-zuckerberg-says-he-tears-acl-while-training-for-mma-bout/","date":"+1 day ago","index":11,"contentBadge":{"class":"content-badge"},"image":"https://specials-images.forbesimg.com/imageserve/65458ad4203ccb10402aeab9/960x0.jpg?cropX1=0&cropX2=2237&cropY1=58&cropY2=1316"}},"id":"2mkp6if15cp400"},{"textContent":"Nikki Haley Reportedly Lands Major Pence Donor In Latest Campaign Boon","scope":{"happening":{"title":"Nikki Haley Reportedly Lands Major Pence Donor In Latest Campaign Boon","uri":"https://www.forbes.com/sites/antoniopequenoiv/2023/11/03/nikki-haley-reportedly-lands-major-pence-donor-in-latest-campaign-boon/","date":"+1 day ago","index":12,"contentBadge":{"class":"content-badge"},"image":"https://specials-images.forbesimg.com/imageserve/65457d961e78f6a037db88c8/960x0.jpg?cropX1=0&cropX2=3435&cropY1=0&cropY2=2290"}},"id":"5pn80djb7gdc00"},{"textContent":"Trumps D.C. Gag Order Put On Hold By Appeals Court","scope":{"happening":{"title":"Trumps D.C. Gag Order Put On Hold By Appeals Court","uri":"https://www.forbes.com/sites/brianbushard/2023/11/03/trumps-dc-gag-order-put-on-hold-by-appeals-court/","date":"+1 day ago","index":13,"contentBadge":{"class":"content-badge"},"image":"https://specials-images.forbesimg.com/imageserve/64494226de2e145844f33129/960x0.jpg?cropX1=0&cropX2=1416&cropY1=0&cropY2=796"}},"id":"bd4g5ki3kjg000"},{"textContent":"Controversial Sound Of Freedom Coming To Amazon Prime Next Month","scope":{"happening":{"title":"Controversial Sound Of Freedom Coming To Amazon Prime Next Month","uri":"https://www.forbes.com/sites/britneynguyen/2023/11/03/controversial-sound-of-freedom-coming-to-amazon-prime-next-month/","date":"+1 day ago","index":14,"contentBadge":{"class":"content-badge"},"image":"https://specials-images.forbesimg.com/imageserve/6545624498426485783bd1e4/960x0.jpg?cropX1=0&cropX2=2712&cropY1=654&cropY2=2180"}},"id":"20lqrq2rf95g00"},{"textContent":"White House Condemns Fox News Host Mark Levin For Sickening Comments About Jewish CNN Anchors","scope":{"happening":{"title":"White House Condemns Fox News Host Mark Levin For Sickening Comments About Jewish CNN Anchors","uri":"https://www.forbes.com/sites/saradorn/2023/11/03/white-house-condemns-fox-news-host-mark-levin-for-sickening-comments-about-jewish-cnn-anchors/","date":"+1 day ago","index":15,"contentBadge":{"class":"content-badge"},"image":"https://specials-images.forbesimg.com/imageserve/65455f2411b3e6a644b13678/0x0.jpg"}},"id":"7edqgkb04f8000"},{"textContent":"Cup Noodles Will Switch To New Microwave-Safe Packaging (No, You Werent Supposed To Microwave The Old Cups)","scope":{"happening":{"title":"Cup Noodles Will Switch To New Microwave-Safe Packaging (No, You Werent Supposed To Microwave The Old Cups)","uri":"https://www.forbes.com/sites/ariannajohnson/2023/11/03/cup-noodles-will-switch-to-new-microwave-safe-packaging-no-you-werent-supposed-to-microwave-the-old-cups/","date":"+1 day ago","index":16,"contentBadge":{"class":"content-badge"},"image":"https://specials-images.forbesimg.com/imageserve/65455bf39e58007df848ff9b/960x0.jpg?cropX1=0&cropX2=1315&cropY1=536&cropY2=1275"}},"id":"22qcd9gqhq8600"},{"textContent":"Supreme Court Will Consider Whether Bump Stocks Are Machine GunsAnd Can Legally Be Banned","scope":{"happening":{"title":"Supreme Court Will Consider Whether Bump Stocks Are Machine GunsAnd Can Legally Be Banned","uri":"https://www.forbes.com/sites/tylerroush/2023/11/03/supreme-court-will-consider-whether-bump-stocks-are-machine-guns-and-can-legally-be-banned/","date":"+1 day ago","index":17,"contentBadge":{"class":"content-badge"},"image":"https://specials-images.forbesimg.com/imageserve/65455560dbe7c23f24f1c558/960x0.jpg?cropX1=0&cropX2=4356&cropY1=31&cropY2=2937"}},"id":"323h3hkc8imc00"},{"textContent":"NBA In-Season Tournament Starts Tonight: Heres What To Know About The New Competition","scope":{"happening":{"title":"NBA In-Season Tournament Starts Tonight: Heres What To Know About The New Competition","uri":"https://www.forbes.com/sites/antoniopequenoiv/2023/11/03/nba-in-season-tournament-starts-tonight-heres-what-to-know-about-the-new-competition/","date":"+1 day ago","index":18,"contentBadge":{"class":"content-badge"},"image":"https://specials-images.forbesimg.com/imageserve/654554c1d91b202d5c6638bc/960x0.jpg?cropX1=0&cropX2=2804&cropY1=0&cropY2=1871"}},"id":"5a7dh760iiek00"},{"textContent":"Trump Gag Order Expanded To Ex-Presidents Attorneys After Hundreds Of Threats In N.Y. Fraud Case","scope":{"happening":{"title":"Trump Gag Order Expanded To Ex-Presidents Attorneys After Hundreds Of Threats In N.Y. Fraud Case","uri":"https://www.forbes.com/sites/brianbushard/2023/11/03/trump-gag-order-expanded-to-ex-presidents-attorneys-after-hundreds-of-threats-in-ny-fraud-case/","date":"+1 day ago","index":19,"contentBadge":{"class":"content-badge"},"image":"https://specials-images.forbesimg.com/imageserve/654548742c990bc8218c9bc2/960x0.jpg?cropX1=0&cropX2=3987&cropY1=0&cropY2=2244"}},"id":"9b9ic800jqpk00"},{"textContent":"House Speaker Mike Johnson Says He Refuses To Put People Over Politics","scope":{"happening":{"title":"House Speaker Mike Johnson Says He Refuses To Put People Over Politics","uri":"https://www.forbes.com/sites/saradorn/2023/11/03/house-speaker-mike-johnson-says-he-refuses-to-put-people-over-politics/","date":"+1 day ago","index":20,"contentBadge":{"class":"content-badge"},"image":"https://specials-images.forbesimg.com/imageserve/65453bd1db5fd8adf5b11457/0x0.jpg"}},"id":"5qo7f9d0j95k00"},{"textContent":"Jan. 6 Sentence: Trump-Appointed State Department Official Gets 70 Months In Prison For Relentless Siege On Police","scope":{"happening":{"title":"Jan. 6 Sentence: Trump-Appointed State Department Official Gets 70 Months In Prison For Relentless Siege On Police","uri":"https://www.forbes.com/sites/tylerroush/2023/11/03/jan-6-sentence-trump-appointed-state-department-official-gets-70-months-in-prison-for-relentless-siege-on-police/","date":"+1 day ago","index":21,"contentBadge":{"class":"content-badge"},"image":"https://specials-images.forbesimg.com/imageserve/6545358e2d48bd27f7c0996e/960x0.jpg?cropX1=0&cropX2=2382&cropY1=0&cropY2=1339"}},"id":"8coc0i6qdqpk00"}],"breakpoints":[{"breakpoint":"@media all and (max-width: 767px)","config":{"enabled":false}},{"breakpoint":"@media all and (max-width: 768px)","config":{"inView":2,"slidesToScroll":1}},{"breakpoint":"@media all and (min-width: 1681px)","config":{"inView":6}}]};

The rest is here:

How Companies Are Hiring And Reportedly Firing With AI - Forbes