NAB Wrap: Artificial Intelligence, Virtual Production and Future of Broadcasting in Focus – Hollywood Reporter
NAB's The Last of Us panel with (from left) THR's Carolyn Giardina, Craig Mazin, Ksenia Sereda, Timothy Good, Emily Mendez and Alex Wang.
The 2023 National Association of Broadcasters Show, which wrapped Wednesday in Las Vegas, attracted an estimated 65,000 delegates, according to show organizers, which many viewed as a healthy number for a post-pandemic show.
The attendance numbers marked a notable rise following NAB's return to an in-person event in 2022, which counted 52,468 delegates, though it was still well below the last show before the lockdown, which drew 91,000 attendees in 2019.
Artificial intelligence was arguably the most widespread topic this year, as NAB marked its centennial. As the potential of AI rapidly evolves, it's a topic that will clearly continue to generate significant anxiety, as well as staggering opportunities.
Generally, it was an evolutionary rather than a revolutionary year on the exhibition floor. From a tech standpoint, there was a large number of new and evolving tools for all sorts of cloud-based remote workflows. And while there's still a lot to understand before the promise of virtual production can be fully realized, attendees would have been hard-pressed to go anywhere in the vast exhibition halls, where 1,200 companies featured their latest technologies, and not see at least one LED wall or related demonstration.
NAB promoted the rollout of Next-Gen TV and turned a spotlight on sustainability with the launch of its Excellence in Sustainability Awards program, while participation from Hollywood took this event beyond broadcasting. Here's a look at some of the week's highlights and biggest trends.
AI was rampant at NAB, from the conference sessions to the exhibition floor. "This is an area where NAB will absolutely be active," said NAB president and CEO Curtis LeGeyt, who shared his views on the potential dangers, as well as benefits, of the tech during a state-of-the-industry presentation. "It is just amazing how quickly the relevance of AI to our entire economy, but, specifically, since we're in this room, the broadcast industry, has gone from amorphous concept to real."
LeGeyt warned of several concerns that he has for local broadcasters where AI is concerned, among them, how "big tech [uses] their platforms to access broadcast television and radio content. That, in our view, does not allow for fair compensation for our content despite the degree to which we drive tremendous traffic at their sites." He asserted that legislation is needed to "put some guardrails on it," especially at a time when AI "has the potential to put that on overdrive."
He warned of the additional diligence that will be needed to determine what is real and what is AI, as well as the caution that will be required when it comes to protecting one's likeness. He balanced these warnings with a discussion of potential opportunities, including the ability to speed up research at resource-constrained local stations.
Imax, which made its first appearance this year as an NAB exhibitor, was among many companies that showed AI-driven tech on the show floor. It demoed current and prototype technology from SSIMWAVE, the tech startup that it acquired for $21 million in 2022. This includes AI-driven tools aimed at bandwidth and image quality optimization, which may be used with the company's Imax Enhanced streaming format.
Other such exhibitors included Adobe, which showed a new beta version of Premiere Pro that includes an AI-driven, text-based editing tool developed to analyze and transcribe clips.
Sessions on HBO series The Last of Us and a conversation with Ted Lasso's Brett Goldstein attracted standing-room-only crowds to the NAB Show's main stage, while talent from the American Society of Cinematographers and American Cinema Editors presented master class sessions during the week.
Writer, producer and actor Goldstein, otherwise known as Ted Lasso footballer Roy Kent, was featured in a freewheeling conversation with fellow Ted Lasso writer Ashley Nicole Black.
"The life of just an actor, with all respect to actors, they're insane. I dunno why they would live that way," he admitted when asked about working as both a writer and actor. "It's fucking mental. Your life is a lottery. Every day you wait for a magical phone to ring, and you have zero control over it. I just didn't want to be an actor who sits around going, 'There aren't any good scripts.' You have to write yourself stuff, and then you can't complain."
He also talked about why collaboration makes the writers room work. Describing the teams on Shrinking and Ted Lasso as "some of the smartest people in the world, in this fucking room," he said, "You'd be mad not to take these ideas. And when you sort of allow this process of everyone joining in and taking this and taking that, it's 100 percent going to be a better show."
ACE presented The Last of Us, during which showrunner and exec producer Craig Mazin teased that the series would extend beyond its announced season two, generating cheers from the crowd. The session went behind the scenes of the production with Mazin, DP Ksenia Sereda, editors Timothy Good and Emily Mendez, VFX supervisor Alex Wang and sound supervisor Michael J. Benavente.
There was no shortage of exhibitors showing tech and workflows for the evolving area of virtual production, with potential applications from advertising and series work to features.
"Virtual production, to me, is an amazing tool in our arsenal of making stories come to life, but it's a tool, like all tools, that needs to be properly applied to get the best out of it," asserts two-time Oscar-nominated cinematographer Jeff Cronenweth (The Social Network, The Girl With the Dragon Tattoo). He is currently an adviser to SISU, which develops robotic arms that were demoed as part of a virtual production pipeline at NAB.
Cronenweth reports that his next project is Disney's Tron: Ares starring Jared Leto (which Joachim Ronning is set to direct for a 2025 release), and he's eyeing virtual production. "As you can imagine for a sci-fi film like this, we will embellish all of the technology available to bring it to life, including some virtual production. I'm anticipating SISU's robotic technology to play a key part in that emerging technology."
NAB used its annual confab to promote the voluntary rollout of the next generation of digital television, known as ATSC 3.0, which is based on internet protocol and may include new capabilities such as free, live broadcasting to mobile devices. A change of this magnitude has a long way to go before its potential can be realized.
At NAB, Federal Communications Commission chairwoman Jessica Rosenworcel launched the Future of Television Initiative, which she described as a public-private partnership among stakeholders to support a transition to ATSC 3.0.
"U.S. broadcasters delivered 26 new Next-Gen TV markets to reach 66 by year-end 2022," reported ATSC president Madeleine Noland. "We are looking ahead to another year of continued deployments across the U.S. and sales of new consumer receivers."
GPT-4 Passes the Bar Exam: What That Means for Artificial … – Stanford Law School
CodeX, the Stanford Center for Legal Informatics, and the legal technology company Casetext recently announced what they called a watershed moment. Research collaborators had deployed GPT-4, the latest-generation large language model (LLM), to take, and pass, the Uniform Bar Exam (UBE). GPT-4 didn't just squeak by. It passed the multiple-choice portion of the exam and both components of the written portion, exceeding not only all prior LLMs' scores, but also the average score of real-life bar exam takers, scoring in the 90th percentile.
Casetext's Chief Innovation Officer and co-founder Pablo Arredondo, JD '05, who is a CodeX fellow, collaborated with CodeX-affiliated faculty Daniel Katz and Michael Bommarito to study GPT-4's performance on the UBE. In earlier work, Katz and Bommarito found that an LLM released in late 2022 was unable to pass the multiple-choice portion of the UBE. Their recently published paper, "GPT-4 Passes the Bar Exam," quickly caught national attention. Even The Late Show with Stephen Colbert had a bit of comedic fun with the notion of robo-lawyers running late-night TV ads looking for slip-and-fall clients.
However, for Arredondo and his collaborators, this is serious business. While GPT-4 alone isn't sufficient for professional use by lawyers, he says, it is the first large language model smart enough to power professional-grade AI products.
Here Arredondo discusses what this breakthrough in AI means for the legal profession and for the evolution of products like the ones Casetext is developing.
What technological strides account for the huge leap forward from GPT-3 to GPT-4 with regard to its ability to interpret text and its facility with the bar exam?
If you take a broad view, the technological strides behind this new generation of AI began 80 years ago, when the first computational models of neurons were created (the McCulloch-Pitts neuron). Recent advances, including GPT-4, have been powered by neural nets, a type of AI that is loosely based on neurons and includes natural language processing. I would be remiss not to point you to the fantastic article by Stanford Professor Chris Manning, director of the Stanford Artificial Intelligence Laboratory; the first few pages provide an excellent history leading up to the current models.
You say that computational technologies have struggled with natural language processing and complex or domain-specific tasks like those in the law, but that with the advancing capabilities of large language models, and GPT-4, you sought to demonstrate the potential in law. Can you talk about language models and how they have improved, specifically for law? If it's a learning model, does that mean that the more this technology is used in the legal profession (or the more it takes the bar exam), the better and more useful it becomes to the legal profession?
Large language models are advancing at a breathtaking rate. One vivid illustration is the result of the study I worked on with law professors and Stanford CodeX fellows Dan Katz and Michael Bommarito. We found that while GPT-3.5 failed the bar, scoring roughly in the bottom 10th percentile, GPT-4 not only passed but approached the 90th percentile. These gains are driven by the scale of the underlying models more than any fine-tuning for law. That is, our experience has been that GPT-4 outperforms smaller models that have been fine-tuned on law. It is also critical from a security standpoint that the general model doesn't retain, much less learn from, the activity and information of attorneys.
What technologies are next and how will they impact the practice of law?
The rate of progress in this area is remarkable. Every day I see or hear about a new version or application. One of the most exciting areas is something called agentic AI, where LLMs (large language models) are set up so that they can themselves strategize about how to carry out a task, and then execute on that strategy, evaluating things along the way. For example, you could ask an agent to arrange transportation for a conference and, without any specific prompting or engineering, it would handle getting a flight (checking multiple airlines if need be) and renting a car. You can imagine applying this to substantive legal tasks (e.g., first gather supporting testimony from a deposition, then look through the discovery responses to find further support, etc.).
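The plan-then-execute loop described above can be sketched in a few lines. This is a toy illustration only: the function and tool names (`plan`, `book_flight`, `rent_car`, `run_agent`) are hypothetical, and in a real agentic system the planning and evaluation steps would be LLM calls rather than hard-coded rules.

```python
def plan(task):
    """Stand-in for an LLM planner: break a task into (tool, argument) steps."""
    if task == "arrange conference travel":
        return [("book_flight", "SFO->JFK"), ("rent_car", "JFK")]
    return []

def book_flight(route):
    return f"flight booked: {route}"

def rent_car(location):
    return f"car rented at {location}"

TOOLS = {"book_flight": book_flight, "rent_car": rent_car}

def run_agent(task):
    """Execute each planned step; a real agent would evaluate each
    result here and replan (e.g., try another airline) on failure."""
    results = []
    for tool_name, arg in plan(task):
        results.append(TOOLS[tool_name](arg))
    return results
```

The legal analogue would simply swap in tools like "search deposition transcripts" or "review discovery responses" for the travel tools used here.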
Another area of growth is multimodal AI, where you go beyond text and fold in things like vision. This should enable things like an AI that can comprehend and describe patent figures or compare written testimony with video evidence.
Big law firms have certain advantages and I expect that they would want to maintain those advantages with this sort of evolutionary/learning technology. Do you expect AI to level the field?
Technology like this will definitely level the playing field; indeed, it already is. I expect this technology to at once level and elevate the profession.
So, AI-powered technology such as LLMs can help to close the access to justice gap?
Absolutely. In fact, this might be the most important thing LLMs do in the field of law. The first rule of the Federal Rules of Civil Procedure exhorts the "just, speedy and inexpensive" resolution of matters. But if you asked most people what three words come to mind when they think about the legal system, "speedy" and "inexpensive" are unlikely to be the most common responses. By making attorneys much more efficient, LLMs can help attorneys increase access to justice by empowering them to serve more clients.
Weve read about AIs double-edged sword. Do you have any big concerns? Are we getting close to a Robocop moment?
My view, and the view of Casetext, is that this technology, as powerful as it is, still requires attorney oversight. It is not a robot lawyer, but rather a very powerful tool that enables lawyers to better represent their clients. I think it is important to distinguish between the near-term and the long-term questions in debates about AI.
The most dramatic commentary you hear (e.g., AI will lead to utopia, AI will lead to human extinction) is about artificial general intelligence (AGI), which most believe to be decades away and not achievable simply by scaling up existing methods. The near-term discussion, about how to use the current technology responsibly, is generally more measured, and it is where I think the legal profession should be focused right now.
At a recent workshop we held at CodeX's FutureLaw conference, Professor Larry Lessig raised several near-term concerns around issues like control and access. Law firm managing partners have asked us what this means for associate training: how do you shape the next generation of attorneys in a world where a lot of attorney work can be delegated to AI? These kinds of questions, more than the apocalyptic prophecies, are what occupy my thinking. That said, I am glad we have some folks focused on the longer-term implications.
Pablo Arredondo is a fellow at CodeX, the Stanford Center for Legal Informatics, and the co-founder of Casetext, a legal AI company. Casetext's CoCounsel platform, powered by GPT-4, assists attorneys in document review, legal research memos, deposition preparation, and contract analysis, among other tasks. Arredondo's work at CodeX focuses on civil litigation, with an emphasis on how litigators access and assemble the law. He is a graduate of Stanford Law School, JD '05, and of the University of California at Berkeley.
‘Gold Rush’ in Artificial Intelligence Expected To Drive Data Center … – CoStar Group
The rapid adoption of new artificial intelligence apps and an intensifying bid for dominance among tech giants Amazon, Google and Microsoft are expected to drive investment and double-digit growth for the data center industry in the next five years.
A "gold rush" of AI these days centers on the brisk development of tools such as ChatGPT, according to a new analysis from real estate services firm JLL. Voice- and text-generating AI apps could transform the speed and accuracy of customer service interactions and accelerate demand for computing power, as well as the systems and networks connecting users that data centers provide, the real estate firm said.
The emergence of AI comes on the heels of increased usage of data centers in the past few years, as people spend more time online for work and entertainment, fueling the need for these digital information hubs, which provide the speed, memory and power to support those connections.
JLL projected that half of all data centers will be used to support AI programs by 2025. The new AI applications' need for enormous amounts of data capacity will require more power and expanded space for data center services, particularly colocation facilities, a type of data center that rents capacity to third-party companies and may serve dozens of them at one time. It's also a potential growth area for commercial property investors.
"We expect AI applications, and the machine learning processes that enable them, will drive significant demand for colocation capabilities like those we provide," Raul Martynek, CEO of Dallas-based DataBank, told CoStar News in an email. "Specifically, the demand will be for high-density colocation and data centers that provide significantly greater concentrations of power and cooling."
One kilowatt hour of energy can power a 100-watt light bulb for 10 hours, and traditional data server workloads might require 15 kilowatts per typical cabinet, or server rack, Martynek said. But the high-performance computing nodes required to train large language models like ChatGPT can consume 80 kilowatts or more per cabinet.
"This requires more spacing between cabinets to maintain cooling, or specialized water-chilled doors to cool the cabinets," Martynek said.
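Martynek's figures translate into a striking gap in daily energy draw. A quick back-of-the-envelope calculation using only the numbers quoted above (the variable names are ours, chosen for illustration):

```python
# 1 kWh = 1000 Wh, so it runs a 100 W bulb for 1000 / 100 = 10 hours.
BULB_WATTS = 100
hours_per_kwh = 1000 / BULB_WATTS  # 10.0 hours

TRADITIONAL_KW = 15  # typical cabinet (server rack), per Martynek
AI_TRAINING_KW = 80  # high-performance node training an LLM, per Martynek

# Energy drawn per cabinet over a 24-hour day, in kWh.
traditional_daily_kwh = TRADITIONAL_KW * 24  # 360 kWh
ai_daily_kwh = AI_TRAINING_KW * 24           # 1,920 kWh

# The AI cabinet draws 80/15, roughly 5.3 times the power and energy.
ratio = AI_TRAINING_KW / TRADITIONAL_KW
```

At 80 kW, a single AI training cabinet consumes in one day what would keep a 100-watt bulb lit for more than two years, which is why the cooling requirements Martynek describes become the binding constraint.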
In addition to the added energy and water needs, the growth in data centers faces other challenges. Credit-rating firm S&P Global Ratings noted that long-term industry risks include shifting technology, cloud service providers filling their own data center needs, and weaker pricing. The data center industry, with power-hungry facilities running 24 hours a day and 365 days a year, has also received criticism from environmentalists.
DataBank owns and operates more than 65 data centers in 27 metropolitan markets. This month, it secured $350 million in financing from TD Bank to fund its ongoing expansion.
It was DataBank's second successful financing this year, coming just weeks after it completed a $715 million net-lease securitization on March 1. Under net-lease offerings, issuers securitize their rent revenue streams into bonds. The sale of those bonds replenishes the issuer's capital, which can be used to pay down debt and continue investments.
ChatGPT and other such apps are bots that use machine learning to mimic human speech and writing. ChatGPT debuted in November and is arguably the most sophisticated to launch so far. AI software developer Tidio estimated recently that usage of such bots has already grown to 1.5 billion users worldwide.
In January, Microsoft announced a new multibillion-dollar investment in ChatGPT maker OpenAI. Google has recently improved its AI chatbot, Bard, in an effort to rival its competitors. And Amazon Web Services, the largest cloud computing provider, introduced a service last week called Bedrock aimed at helping other companies develop their own chatbots.
Amazon CEO Andy Jassy touted the e-commerce giant's AI plans in his annual letter to shareholders.
"Most companies want to use these large language models, but the really good ones take billions of dollars to train and many years, and most companies don't want to go through that," Jassy said last week on CNBC. "So what they want to do is work off of a foundational model that's big and great already and then have the ability to customize it for their own purposes. And that's what Bedrock is."
The growth projections of AI have data center owners and operators at the forefront of the securitized bond market. Three data center providers have issued $1.3 billion in net-lease securitized offerings already this year, according to CoStar data. That's more than all of last year combined. In addition, two more providers have offerings in the wings.
The sector is a bright spot in an otherwise weakened market for other commercial real estate securitized bond offerings, down more than 70% from the same time last year.
"The data center space remains extremely attractive to capital sources looking for quality and stability versus other asset classes that have been challenged amidst uncertain economic conditions," Carl Beardsley, managing director and data centers lead at JLL Capital Markets, told CoStar News in an email.
JLL said data center financing comes from a variety of sources including debt funds, life insurance companies, banks and originators of commercial-mortgage backed securities.
"Although money center banks and some regional banks have become more conservative during this volatile interest rate period, there is still a large appetite from the lender universe to allocate funds toward data centers," Beardsley said.
JLL is forecasting that the global data center market is expected to grow 11.3% from 2021 through 2026.
Across its six primary data center markets (Chicago, Dallas-Fort Worth, New Jersey, Northern California, Northern Virginia and Phoenix), the United States has a strong appetite for data center property transactions compared to other countries, according to JLL, accounting for 52% of all deals from 2018 to 2022. These markets also have 1,939 megawatts of data center capacity under construction, JLL said. One megawatt is equal to 1,000 kilowatts.
The growth is expected to continue even heading into a potential recession, according to S&P, which has rated two of the three data center securitized bond offerings completed this year so far.
"Overall supply and demand is relatively balanced as new data center development has been constrained in certain markets by site availability, lingering supply chain issues and, more recently, power capacity constraints," S&P noted in its reviews. "Although we expect data centers to see some growth deceleration in a recessionary environment, we believe it will be mitigated by the critical nature of data centers."
S&P added that market data suggests 2022 vacancy rates were low for key data center markets and rental rates increased year over year.
New net-lease securitized fundraisings this year have come from DataBank, Stack Infrastructure, and Vantage Data Centers.
Denver-based Vantage, a global provider of hyperscale data center campuses, saw unprecedented growth in 2022, outperforming its previous record set in 2021. The company began developing four new campuses internationally and opened 13 data centers. The company raised more than $3 billion last year to support that effort.
Last month, Vantage completed an additional securitized notes offering, raising $370 million. The offering was backed by tenant lease payments on 13 completed and operating wholesale data centers located in California, Washington state and Canada.
Stack, a Denver-based developer and operator of data centers, issued $250 million in securitized notes last month.
Stack's growth is outpacing the industry with a portfolio of more than 1 gigawatt, or 1,000 megawatts, of built and under-development capacity, and more than 2 gigawatts of future development capacity planned across the globe. The company has more than 4 million square feet currently under development.
Stack most recently announced the expansion of a Northern Virginia campus to 250 megawatts, the groundbreaking for another 100-megawatt campus in Northern Virginia's Prince William County and the expansion of its 200-megawatt flagship campus in Portland, Oregon.
In addition, Dallas firm CyrusOne and Seattle-based Sabey Data Centers have filed preliminary notices of offerings in the works with the Securities and Exchange Commission.
Africa tipped on power of artificial intelligence – Monitor
Pulse Lab Kampala (PLK) has called on stakeholders to support the creation of an environment that fosters the use of data and artificial intelligence (AI).
Speaking at the Conference on the State of Artificial Intelligence in Africa (COSAA), held at Strathmore University in Nairobi, Kenya, last month, Ms Morine Amutorine, a data associate at PLK, highlighted the importance of collaboration in creating platforms and communities that can help scale up projects beyond Africa.
"When working alone, a project cannot scale beyond Uganda or even Africa at large," noted Ms Amutorine.
During the event, Ms Amutorine showcased a radio mining tool developed by PLK.
The AI-powered social listening tool can monitor multiple radio stations at the same time and filter out content based on specific keywords.
"When we wanted to assess public opinion about Covid-19 in Uganda, specific keywords would be fed in and the tool would retrieve audio clips of radio talk shows that were discussing that topic," she explained.
Prowess
The tool can listen to more than 20 radio stations simultaneously and comprehend three local languages used in Uganda: Ugandan English, Acholi, and Luganda.
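The keyword-filtering step that Ms Amutorine describes can be sketched very simply. PLK's actual pipeline (speech-to-text over live radio in multiple languages) is not public, so this minimal, hypothetical example represents already-transcribed clips as plain strings; the function name `filter_clips` is ours:

```python
def filter_clips(transcripts, keywords):
    """Return the (station, text) pairs whose transcript mentions any keyword.

    transcripts: list of (station_name, transcribed_text) tuples
    keywords:    list of search terms, matched case-insensitively
    """
    lowered = [k.lower() for k in keywords]
    return [
        (station, text)
        for station, text in transcripts
        if any(k in text.lower() for k in lowered)
    ]

clips = [
    ("Radio One", "Callers discussed the Covid-19 vaccine rollout today"),
    ("CBS FM", "A show about football transfers"),
]
matches = filter_clips(clips, ["covid-19", "vaccine"])
```

In the real tool, the matching clips would point back to the original audio so an analyst can listen to, transcribe and interpret them, which is the human step Ms Amutorine notes AI does not replace.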
Mr Martin Gordon Mubangizi, a data scientist who doubles as the PLK lead, explained that the automatic voice recognition tool has to be trained on a vast dataset, including text, paired text and audio, and a list of all the words in a given language together with their correct pronunciations.
Ms Amutorine told Monitor that AI does not dispense with human effort.
The latter is needed at the stage of transcribing and analysing the data in order to assess personal perceptions.
Mr Pius Kavuma Mugagga, a data engineer with PLK, then demonstrated another AI tool. Named the Pulse Satellite Tool, it is a product of a group of researchers at PLK and was birthed in 2015 in partnership with the United Nations Satellite Centre (UNOSAT). The goal then was to develop an AI tool that could estimate the economic development of a region.
"We started by experimenting with satellite imagery," Mr Mugagga revealed, adding, "The AI would then be programmed to identify, highlight and count shelter rooftops, and later the results would be used to assess settlement mapping."
He went on to explain that by looking at the nature of rooftops, "you may be able to make some inference regarding the economic transition of a place." This, according to Mugagga, led to the initial conception of the Pulse Satellite Tool that was eventually taken on by UNOSAT. The tool proved handy in flood mapping.
"In January 2021, Mozambique experienced floods caused by Tropical Cyclone Eloise. They requested UNOSAT to do a rapid mapping on the area," Mr Mugagga revealed, adding, "Using the Pulse Satellite Tool, UNOSAT was able to identify the areas of interest that were assessed for service delivery."
Not ready to rest on its laurels, PLK is already looking to develop a platform where anyone can upload a satellite image whose results would be downloaded for use after the mapping process.
"However, there should be legal terms to it for it to be operational outside the UN systems," Mr Mubangizi told Saturday Monitor, adding that this is why Uganda has joined Nigeria in becoming one of only two countries in Africa to embark on a national data strategy.
Mr Mubangizi further revealed: "PLK, in partnership with the Ugandan Ministry of ICT and National Guidance, is looking at achieving such goals by 2040, where big data and AI governance will be accessible by everyone for use, reuse and sharing."
The conference, the first of its kind in Africa, highlighted the potential for AI to transform the culture of the United Nations and deepen its impact. Experts say that by creating a supportive environment for data and AI, stakeholders can unlock opportunities for innovation and growth, driving positive change across Africa and beyond.
About artificial intelligence
Since the early days of computers, scientists have strived to create machines that can rival humans in their ability to think, reason and learn; in other words, artificial intelligence (AI). While today's AI systems still fall short of that goal, they are starting to perform as well as, and sometimes better than, their creators at certain tasks. Thanks to new techniques that allow machines to learn from enormous sets of data, AI has taken massive leaps forward.
AI is starting to move out of research labs and into the real world. It is having an impact on our lives. There can be little doubt that we are entering the age of AI.
As AI enters the real world by assessing loan applications, informing courtroom decisions or helping to identify patients who should receive treatment, so too does one of its most fundamental flaws: bias.
Algorithms are only as good as the code that governs them and the data used to teach them. Each can carry the watermark of our own preconceptions. Facial recognition software can misclassify black faces or fail to identify women, criminal profiling algorithms have ranked non-whites as higher risk and recruitment tools have scored women lower than men. But with these challenges, there has been mounting pressure on technology giants to fix them.
*Additional information source: BBC
Joe Rogan just issued a warning about artificial intelligence after a fake version of his podcast was created 100% through AI technology. Here are 3…
Controversial podcaster Joe Rogan has issued a warning about artificial intelligence after a fake but very realistic version of his popular podcast The Joe Rogan Experience was published online.
The fictional episode, generated with the help of AI chatbot ChatGPT, was published on YouTube on April 4.
It depicts a conversation between Joe Rogan and Sam Altman, the CEO of OpenAI, with a disclaimer that the ideas and opinions expressed in the podcast "are not reflective of [their actual] thoughts."
"Let me tell you folks, this is some next-level stuff we've got going on here today," the AI-generated Joe Rogan says. "Every single word of this podcast has been generated with the help of ChatGPT... I am not the real Joe Rogan; this is purely fiction."
The real-life Rogan was ill at ease after the episode dropped. He tweeted: "This is going to get very slippery, kids."
The Joe Rogan AI Experience was intended as an exploration of the capabilities of language models, and it certainly achieves that.
If you're interested in capitalizing on this trend, here are three AI plays in today's stock market.
Microsoft is heavily involved in the AI boom. The tech giant has been an investor in OpenAI, the company that developed ChatGPT, since 2019.
In January, Microsoft announced an extension of its partnership with OpenAI through a multiyear, multibillion-dollar investment.
The online search market has experienced significant disruption in 2023, with the launch of several competing AI chatbots. In February, Microsoft launched Bing Chat, which runs on GPT-4 technology.
After a shaky start, with the accuracy of its findings called into question, Bing Chat has been in heated competition with Bard, the AI chatbot from Google parent company Alphabet, and both bots are up against the original ChatGPT.
Microsoft enjoyed a strong final quarter in 2022, with its revenue increasing by 2% to $52.7 billion. As of April 13, Microsoft's stock performance improved by 4% year over year.
Upon announcing its latest financial results, Microsoft CEO Satya Nadella said: "The next major wave of computing is being born, as the Microsoft Cloud turns the world's most advanced AI models into a new computing platform."
"We are committed to helping our customers use our platforms and tools to do more with less today and innovate for the future in the new era of AI."
Nvidia creates microchips that power AI software and services, with a focus on business solutions.
"AI is at an inflection point, setting up for broad adoption reaching into every industry," said Jensen Huang, founder and CEO of NVIDIA. "From startups to major enterprises, we are seeing accelerated interest in the versatility and capabilities of generative AI."
The Santa Clara-based chipmaker has been touted by some of Americas financial giants including Bank of America, Morgan Stanley and Barclays as a top AI stock for 2023.
While Nvidias revenue for the fiscal year 2023 remained flat from a year ago, at $26.97 billion, the companys stock performance is up 26% year over year, as of April 13.
In October 2022, Nvidia announced a multi-year partnership with Oracle, giving the cloud computing company access to Nvidias full AI platform, including chips, systems and software.
Oracle Cloud Infrastructure is a key competitor to Amazon Web Services. The e-commerce giant Amazon is another AI stock worth considering, as it uses the technology in its online store and via Alexa, the virtual assistant in Echo devices.
Adobe is an American multinational computer software company, headquartered in San Jose, California.
The tech company had a strong start to 2023, achieving record revenue of $4.66 billion in the first quarter.
Adobe's stock performance has dropped 10% in the past 12 months, but it remains a top tech stock pick for many analysts.
In September 2022, Adobe acquired the design platform Figma, which expanded its suite of essential designer tools.
Last year, the company announced new AI and machine learning capabilities in its Experience Cloud product, a marketing and analytics suite, and it continued its journey into generative AI, which enables people to prompt technology to create text, images, or other media.
This article provides information only and should not be construed as advice. It is provided without warranty of any kind.
Artificial Intelligence Reveals a Stunning, High-Definition View of M87’s Big Black Hole – SciTechDaily
M87 supermassive black hole originally imaged by the EHT collaboration in 2019 (left); and new image generated by the PRIMO algorithm using the same data set (right). Credit: Medeiros et al. 2023
Astronomers used machine learning to improve the Event Horizon Telescope's first black hole image, aiding in the understanding of black hole behavior and the testing of gravitational theories. The new technique, called PRIMO, has potential applications in various fields, including exoplanets and medicine.
Astronomers have used machine learning to sharpen up the Event Horizon Telescope's first picture of a black hole, an exercise that demonstrates the value of artificial intelligence for fine-tuning cosmic observations.
The image should guide scientists as they test their hypotheses about the behavior of black holes, and about the gravitational rules of the road under extreme conditions.
Overview of simulations that were generated for the training set of the PRIMO algorithm. Credit: Medeiros et al. 2023
The EHT image of the supermassive black hole at the center of an elliptical galaxy known as M87, about 55 million light-years from Earth, wowed the science world in 2019. The picture was produced by combining observations from a worldwide array of radio telescopes, but gaps in the data meant the picture was incomplete and somewhat fuzzy.
In a study published last week in The Astrophysical Journal Letters, an international team of astronomers described how they filled in the gaps by analyzing more than 30,000 simulated black hole images.
"With our new machine learning technique, PRIMO, we were able to achieve the maximum resolution of the current array," study lead author Lia Medeiros of the Institute for Advanced Study said in a news release.
PRIMO slimmed down and sharpened up the EHT's view of the ring of hot material that swirled around the black hole as it fell into the gravitational singularity. That makes for more than just a prettier picture, Medeiros explained.
"Since we cannot study black holes up close, the detail of an image plays a critical role in our ability to understand its behavior," she said. "The width of the ring in the image is now smaller by about a factor of two, which will be a powerful constraint for our theoretical models and tests of gravity."
The technique developed by Medeiros and her colleagues, known as principal-component interferometric modeling, or PRIMO for short, analyzes large data sets of training imagery to figure out the likeliest ways to fill in missing data. It's similar to the way AI researchers used an analysis of Ludwig van Beethoven's musical works to produce a score for the composer's unfinished 10th Symphony.
Tens of thousands of simulated EHT images were fed into the PRIMO model, covering a wide range of structural patterns for the gas swirling into M87s black hole. The simulations that provided the best fit for the available data were blended together to produce a high-fidelity reconstruction of missing data. The resulting image was then reprocessed to match the EHTs actual maximum resolution.
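The strategy described here, learning structural patterns from a library of simulated images and then fitting those patterns to an incomplete observation, can be illustrated with a toy sketch. This is not the PRIMO code itself: the synthetic data, the tiny dimensions and the simple least-squares fit are all illustrative assumptions.

```python
import numpy as np

# Toy stand-in for the PRIMO idea: learn principal components from a
# "training library" of simulated images, then reconstruct an image
# that is only partially observed.
rng = np.random.default_rng(0)

# Training set: each row is a flattened simulated image built from a
# few shared structural patterns plus a little noise.
n_train, n_pix, n_patterns = 500, 64, 4
patterns = rng.normal(size=(n_patterns, n_pix))
weights = rng.normal(size=(n_train, n_patterns))
train = weights @ patterns + 0.01 * rng.normal(size=(n_train, n_pix))

# Principal components of the training library, via SVD.
mean = train.mean(axis=0)
_, _, vt = np.linalg.svd(train - mean, full_matrices=False)
components = vt[:n_patterns]  # top components span the shared patterns

# A new "observation" with gaps: only 40 of 64 pixels are measured.
truth = rng.normal(size=n_patterns) @ patterns
observed = rng.choice(n_pix, size=40, replace=False)

# Fit component coefficients using only the observed pixels
# (least squares), then fill in the missing pixels from the model.
A = components[:, observed].T
coef, *_ = np.linalg.lstsq(A, truth[observed] - mean[observed], rcond=None)
reconstruction = mean + coef @ components

err = np.abs(reconstruction - truth).max()
print(f"max reconstruction error: {err:.4f}")
```

Because the gappy observation shares structure with the training library, the fitted components recover the unmeasured pixels closely, which is the same logic that lets PRIMO fill in the gaps between the EHT's radio dishes.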
The researchers say the new image should lead to more precise determinations of the mass of M87's black hole and the extent of its event horizon and accretion ring. Those determinations, in turn, could lead to more robust tests of alternative theories relating to black holes and gravity.
The sharper image of M87 is just the start. PRIMO can also be used to sharpen up the Event Horizon Telescope's fuzzy view of Sagittarius A*, the supermassive black hole at the center of our own Milky Way galaxy. And that's not all: The machine learning techniques employed by PRIMO could be applied to much more than black holes. "This could have important implications for interferometry, which plays a role in fields from exoplanets to medicine," Medeiros said.
Adapted from an article originally published on Universe Today.
Reference: "The Image of the M87 Black Hole Reconstructed with PRIMO" by Lia Medeiros, Dimitrios Psaltis, Tod R. Lauer and Feryal Özel, 13 April 2023, The Astrophysical Journal Letters. DOI: 10.3847/2041-8213/acc32d
The High-Tech Hollywood Smile: How 3D Scanners and Artificial Intelligence Are Perfecting Teeth – Hollywood Reporter
Image caption: Can you ID the celebrity by their teeth? Answers at the end of the article.
A major director had scheduled a quick lunch just before leaving for an out-of-town shoot, but when he bit down on something hard and cracked a back tooth, his plans were suddenly in limbo. He raced to the Brentwood office of Dr. Jon Marashi, explaining the urgency.
The aesthetic dentist skipped the usual goopy impressions and instead captured details of the director's bite with a 3D scanner in less than a minute. The digital file was immediately input into a 3D printer, which created a replica of the broken tooth. Using a Computer Aided Design (CAD) and Computer Aided Manufacturing (CAM) dental milling machine, a new porcelain crown was quickly fabricated, verified on the printed model, and within hours bonded onto the patient's damaged tooth.
"The industry standard turnaround for a crown has always been two weeks, but this entire process was three and a half hours," notes Marashi, whose patient roster includes Matt Damon, Ben Affleck, Goldie Hawn, Cher, Pink, producer Jennifer Todd and Ben Harper.
"I had a previous version of the scanner and it was the biggest waste of money because the resolution was terrible, slow and not accurate, like a dial-up modem, but this one, called the Treos, is super kick-ass and like high-speed internet," Marashi adds. "It takes 45 seconds to get the image of an entire mouth."
In recent years, high-tech advances, including lasers, digital scanners and printers, artificial intelligence and augmented reality, have made even the most involved dental intervention a much quicker, easier and more predictable experience.
Patients now have more control over a smile's outcome. "There is a new mix of augmented reality and artificial intelligence that works by capturing a scan of the person's face and allowing us to morph a filter onto it," explains Dr. Alex Fine, who works in the office of Dr. Marc Lowenberg, where Chris Rock, Julianna Margulies, Kelly Ripa and Liev Schreiber are patients. "It's as if you are creating an Instagram filter uniquely designed for patients that shows what they would look like with their best smiles. The patient then takes part in the process, helping decide what is optimal."
"It works well because there are different characteristics that look beautiful on different people," explains Dr. Robert Raimondi, of One Manhattan Dental. "We can superimpose [the scan] on photos of a face and alter size, shape, shade, number of teeth, and show them what is actually possible in their mouth." Raimondi also integrates the internal scanner with a CT scan and robotic surgery to increase the precision of implant placement. "It's so cool," he says.
Dr. Shawn Flynn has patients weigh in on their smiles without an additional visit to his Beverly Hills office. "I just finished a case for someone on a TV show, and we had his scan on file, so we were able to go back and forth with the images without him having to come in again," he notes.
Gummy smiles used to require surgical tools and a drill, but the handheld LiteTouch laser, developed in Israel, is now able to remove gum tissue and bone, raising the upper teeth without traditional surgery. "It's less invasive, less painful and heals more quickly," reports Fine.
The LiteTouch also removes old veneers by melting the adhesive, so dentists no longer have to painstakingly grind them off and potentially damage underlying teeth. "I had a woman who had terrible veneers," recalls Marashi. "The work was garbage and her bite was messed up. I had all her veneers off in 15 minutes without destroying any natural tooth. This tech is about as spanking new as it gets!"
Even whitening is less of an ordeal. "The new lasers no longer heat the teeth the way they did, and the gel isn't so harsh, so it's much gentler for people with sensitive teeth," points out Dr. Lana Rozenberg, who tends to the smiles of Justin Theroux and Kristin Davis.
The new technologies are particularly useful for productions. Rozenberg tells of a heartthrob British actor who cracked his tooth while on set. "He bit into something, and he wasn't laughing, but we were able to use the scanner and get him out of the office and back to work with a new tooth in an hour and a half," she says. "It's amazing that we don't even need to take impressions, which is perfect for gaggers. You can even make night guards on the printer."
Actresses are never too young to head to an aesthetic dentist. When Madison Taylor Baez, 11 at the time, lost a baby tooth on the set of Showtimes Let the Right One In, she was rushed to Lowenberg, who quickly scanned her mouth and printed her a temporary tooth to get her through the shoot.
Recently, a new AI program was offered to dentists that actually reads and interprets X-rays. "It's mind-boggling," says Rozenberg with a laugh. "Soon dentists will be obsolete."
While scanning and printing of molds and trial smiles can be done digitally, and some dentists are using 3D printers to actually create veneers for front as well as back teeth, Lowenberg explains: "Although these advancements are great and very new, the final step of a true Hollywood smile needs subtlety and artistry, which is still only achieved by the hand of a master ceramist."
Top row, from left: Kelly Ripa, Justin Theroux, Cher, Chris Rock.
Bottom row, from left: Ben Harper, Julianna Margulies, Ben Affleck, Pink
A version of this story first appeared in the April 12 issue of The Hollywood Reporter magazine.
Diagnosing AI: Healthcare community excited, wary of artificial … – Worcester Business Journal
Artificial intelligence technologies have the potential to reshape how diagnoses are made, if not revolutionize diagnostics in medicine. Already, technologies exist to streamline the diagnostic process and detect illness before a physician can.
But the technology should not be fully relied upon just yet.
"The algorithm can be only as good as the data that we give it," said Elke Rundensteiner, professor of computer science and founding director in data science at Worcester Polytechnic Institute.
PHOTO | Courtesy of WPI
Elke Rundensteiner's research at WPI focuses on effective use of data.
It's a mistake to think that technology immediately holds less bias than humans. Artificial intelligence programs aren't developed in a vacuum: In medicine, they are fed thousands of data points from practitioners based on real data from real patients, and the AI learns to analyze based on data not always representative of diverse groups.
"Participation in clinical trials in medicine is not representative of all populations," said Rundensteiner. According to a 2021 report in the Journal of the American Medical Association, less than 5% of clinical trial participants are Black.
"If the data is biased and you give it to an algorithm, then the system might be neutral, but the bias is already baked in," she said.
Dr. Neil Marya, a gastroenterology specialist at UMass Memorial Health and assistant professor at UMass Chan Medical School in Worcester, is researching new diagnostic tools for pancreatic cancer and bile duct cancer. While a few years from clinical use, said Marya, the technologies are showing promise through observational use.
The AI is trained on tens of thousands of image and video data points from patients with and without cancer. The goal is to use computer learning to diagnose cancer, but for now, it is not enough to rely on for a definitive diagnosis.
"We're not acting on the AI right now. It's just in the background, and we are understanding how it works in these cases," Marya said. For now, humans outrank the technology in declaring something cancerous or noncancerous. They will not yet use the technology to recommend chemotherapy or surgery without a definitive biopsy, he said.
In some cases, the AI has said something is cancer far before a pathologist has been able to make a diagnosis, Marya said. However, at other times, it has been incorrect, giving a false negative. In either case, the data the AI is pulling from is potentially more useful for certain demographic groups than others.
"It would stand to reason to say AI should supersede all that implicit bias, since it's not influenced by one's upbringing or other things. But the issue is that for a lot of these AI models that we're developing, they're being developed at these major academic medical centers in areas or regions where there are only certain types of patients that are getting access to the care that generate the data necessary to create these artificial intelligence models," Marya said.
Marya's original work on the topic of pancreatic cancer models was at the Mayo Clinic in Minnesota, where he said a majority of the patients they saw were white males over the age of 60. Due to the regional bias of Southern Minnesota, there were a very low number of African Americans in the study, he said.
So while the results were very promising, indicating the possibility artificial intelligence programs could be trained to detect pancreatic cancer before physicians could use the current gold-standard method of biopsy, they might not be applicable to other demographic groups.
"There's no denying that it's a heavily biased dataset towards a particular type of person. And it might not perform as accurately or maintain these really nice performance parameters when we put it out into practice," Marya said.
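The problem Marya describes can be made concrete with a toy calculation: a model's headline accuracy can mask much weaker performance on a group that was underrepresented in training. The cohort sizes and accuracy rates below are synthetic, chosen purely for illustration, not drawn from any real study.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic patient cohort: 90% from group A (well represented in
# training data), 10% from group B. Simulate a model whose accuracy
# differs by group because of that imbalance.
groups = rng.choice(["A", "B"], size=1000, p=[0.9, 0.1])
correct = np.where(groups == "A",
                   rng.random(1000) < 0.95,   # ~95% accurate on group A
                   rng.random(1000) < 0.70)   # ~70% accurate on group B

overall = correct.mean()
per_group = {g: correct[groups == g].mean() for g in ("A", "B")}
print(f"overall accuracy: {overall:.2f}")
print(f"per-group accuracy: {per_group}")
# The overall number looks strong because group A dominates the
# average, masking the much weaker performance on group B.
```

This is why evaluating a diagnostic model only on aggregate accuracy, without stratifying by demographic group, can make a biased dataset look like a solved problem.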
This chart shows the increase in clinical trials related to artificial intelligence in the U.S.
While there is cause for a measured introduction of the new machine-learning technology into the practice of medicine, increased access to software physicians can use in tandem with their expertise may increase access to needed health care in less serviced areas, said Dr. Bernardo Bizzo, senior director of the Data Science Office at Mass General Brigham in Boston.
Bizzo's team works on software as a medical device, he said. With success, they have developed technology to detect early acute infarcts in some instances that neuroradiologists did not. In his estimation, a wave of new, similarly capable software is here.
"We are at an inflection point with this [AI] technology. The expectation is that there will be a lot more AI products made available at a much faster pace," he said.
Marya said he may appear too nervous in his full adoption of the technology, but he just wants to work out the issues. "I love the research we're doing, and I'm really passionate about it, and I trust it, but I think we need to be measured with how we approach this," he said. "It's really important to do this right."
Rundensteiner said, based on how the public responds to AI overall, patients will continue to want the human element in the forefront, even if the technology is delivering more definitive answers.
"Most [patients] would probably still say that they want that human touch right now. But I believe it's just a matter of time until we might want that convenience, and the human is maybe just a face, because even that human that they're talking to has computers behind them," she said.
Patients and their families are often less skeptical than expected, said Marya. When it comes to life-threatening illnesses, patients and their families are open to most technologies with minimal hesitation.
"If there was a better way, a more accurate way to get a diagnosis sooner, they're all for it," he said. "A lot of people, for good and bad, I think they just want the most accurate diagnosis done as soon as possible. And however that's done, I think people just want the best results."
Programmers behind artificial intelligence could learn from Daedalus – Herald-Mail Media
Pete Waters | Columnist
You know, I was thinking the other day. There is much information being reported in the news about artificial intelligence, more commonly known by its initials AI.
In an article on his website, author Will Hurd says, "The term artificial intelligence describes a field in computer science that refers to computers or machines being able to simulate human intelligence to perform tasks or solve problems."
Sasha Reeves in an article on iotforall.com says, "Using virtual filters on our faces when taking pictures and using face ID for unlocking our phones are two examples of artificial intelligence."
Alyssa Schroer, an analyst writing for builtin.com, says, "Examples of AI in the automotive industry include industrial robots constructing a vehicle and autonomous cars navigating traffic with machine learning and vision."
Some experts have even suggested that as AI advances further, it won't be long before machines will replace many more workers on jobs, solve space mysteries, and perhaps even design a plan to destroy the world.
Farfetched thinking, you say? Well, some of the most intriguing intellectual thinkers of the world like Stephen Hawking once proffered that very thought. Elon Musk and others have also recently sounded that alarm.
And as I sit here in my dark room with only the light of my computer screen, considering man's inventions, a Greek myth comes to visit me.
It is a story I once read about a famous Greek inventor, Daedalus, and his son, Icarus, who lived on the island of Crete in the kingdom of a mighty ruler by the name of King Minos.
Daedalus was a famous sculptor and also an inventor, something, I suppose, like some of our modern-day inventors of AI.
And during the time that Daedalus lived, there was much upheaval in King Minos' kingdom and, recognizing the important contributions of good inventions and wisdom, this powerful king had decided to keep Daedalus and his son captives on Crete.
As the relationship between Daedalus and King Minos became estranged, Daedalus longed to escape from the wicked king and leave the island with his son.
As an inventor, Daedalus had studied the flight of birds and he pondered an invention that would help him and his son in their plans.
"He [Minos] may thwart our escape by land or sea, but the sky is surely open to us," Daedalus thought. "We will go that way. Minos rules everything, but he does not rule the heavens."
Soon he conceived a plan to make wings just like a bird's to navigate the sky. He laid down feathers and tied them together using beeswax and thread, and before long he had built wings for himself and Icarus.
Daedalus attached the wings, taking a successful trial flight as his son watched.
It was at this point, before giving Icarus his wings, that Daedalus gave his son some valuable advice:
"Let me warn you, Icarus, to take the middle way: if you fly too low, the moisture weighs down your wings, and if you go too high, the sun scorches them. Travel between the extremes."
Daedalus' warning was a serious one. His invention would help them escape from that island of despair, but if not applied properly, it would have devastating consequences.
He loved his son and wanted him to be safe. He also wanted to escape Crete.
Daedalus and Icarus then took flight and left Crete behind them. But not long afterwards, Icarus ignored his father's instructions and suddenly flew upward toward the sun. He must have had a moment of joy on his face as he soared higher and higher.
Unfortunately for Icarus, his joy would not last. Soon the beeswax began to melt, and his body tumbled from the sky. His last word as he fell was "Father."
His father heard his cry and searched the sky, but he couldn't see his son. He had drowned in the dark waters below, which became known as the Icarian Sea.
Eventually, Daedalus found the body of his son floating amidst feathers. Cursing his inventions, he took the body to the nearest island and buried it there.
And perhaps Daedalus leaves our AI inventors today with some valuable advice:
"Travel between the extremes. Never fly too close to the sun."
Pete Waters is a Sharpsburg resident who writes for The Herald-Mail.
I can’t wait for artificial intelligence to take this job away from humans – Tom’s Guide
Adults have to do a lot of unpleasant jobs; it's part of the gig. Taking out the trash, cleaning the toilet and making the daily school run are unavoidable when it comes to keeping life running smoothly.
But there's one particular job that fills me with dread: calling a helpline.
Every time I pick up the phone to discuss tax codes, remortgage rates, insurance quotes, doctor's appointments or some other exciting aspect of modern life, my knees go slack and my head starts to pound. Cue generic hold music and a constant robotic reminder of my place in the virtual queue.
Once you do get through to a person, things rarely improve. The poor soul on the other end of the line guides me through mundane security questions before reading from a pre-prepared script. Often, they fail to offer up a single noteworthy piece of advice when questioned directly.
During one of these recent calls, it occurred to me that everyone involved would benefit from letting artificial intelligence handle the task. I don't mean the basic interactive voice response (IVR) program that routes your call based on how you answer recorded questions; I mean a full conversational AI agent capable of discussing and actioning my requests with no human input.
I'd get through the process faster (because the organization wouldn't need to wait for available humans to assign), and it wouldn't require a living, breathing person to spend their days on the phone with an aggravated person like me. Similarly, an AI doesn't need to clock off at the end of a shift, so the call could be handled any time of the day or night.
Plenty of companies have implemented browser or app-based chat clients but, the fact is, a huge amount of people still prefer to pick up the phone and do things by voice. And I think most industry leaders recognize this.
Humana, a healthcare insurance provider with over 13 million customers, partnered with IBM's Data and AI Expert Labs in 2019 to implement natural language understanding (NLU) software into its call centers to respond to spoken sentences. The machines either rerouted the call to the relevant person or, where necessary, simply provided the information. This came after Humana recognized that 60% of the million-or-so calls it was getting each month were just queries for information.
According to a blog post from IBM, "The Voice Assistant uses significant speech customization with seven language models and two acoustic models, each targeted to a specific type of user input collected by Humana.
"Through speech customization training, the solution achieves an average of 90-95% sentence error rate accuracy level on the significant data inputs. The implementation handles several sub-intents within the major groupings of eligibility, benefits, claims, authorization and referrals, enabling Humana to quickly answer questions that were never answerable before."
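To illustrate the kind of intent routing described in that quote, here is a deliberately simplified keyword-based sketch. A production NLU system like Humana's uses trained language and acoustic models rather than keyword matching, and the intent labels and keywords below are illustrative assumptions that merely echo the groupings IBM names.

```python
# Toy intent router: score each intent by keyword hits in the caller's
# utterance, and fall back to a human agent when nothing matches.
INTENT_KEYWORDS = {
    "eligibility": ["eligible", "covered", "coverage"],
    "benefits": ["benefit", "copay", "deductible"],
    "claims": ["claim", "reimburse", "denied"],
    "authorization": ["authorization", "approve", "prior auth"],
    "referrals": ["referral", "specialist"],
}

def route(utterance: str) -> str:
    """Pick the intent whose keywords best match; else hand off to a human."""
    text = utterance.lower()
    scores = {
        intent: sum(kw in text for kw in kws)
        for intent, kws in INTENT_KEYWORDS.items()
    }
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else "human_agent"

print(route("I want to check if my claim was denied"))      # claims
print(route("Can I see a specialist without a referral?"))  # referrals
print(route("Tell me a joke"))                              # human_agent
```

The fallback branch matters: the value of such a system comes as much from knowing when to hand the call to a person as from answering the routine 60% of queries automatically.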
The obvious stumbling block for most companies will be the cost. After all, OpenAI's chatbot ChatGPT charges for API access, while Meta's LLaMA is partially open-source but doesn't permit commercial use.
However, given time, the cost of implementing machine learning solutions will come down. For example, Databricks, a U.S.-based enterprise company, recently launched Dolly 2.0, a 12-billion-parameter model that's completely open source. It will allow companies and organizations to create large language models (LLMs) without having to pay costly API fees to the likes of Microsoft, Google or Meta. With more of these advancements being made, the AI adoption rate for small and medium-sized businesses will (and should) increase.
According to recent research by industry analysts Gartner, around 10% of so-called agent interactions will be performed by conversational AI by 2026. At present, the number stands at around 1.6%.
"Many organizations are challenged by agent staff shortages and the need to curtail labor expenses, which can represent up to 95 percent of contact center costs," explained Daniel O'Connell, a VP analyst at Gartner. "Conversational AI makes agents more efficient and effective, while also improving the customer experience."
You could even make the experience a bit more fun. Imagine if a company got the license to utilize James Earl Jones' voice for its call center AI. I could spend a half-hour discussing insurance renewal rates with Darth Vader himself.
I'm not saying there won't be teething problems; AI can struggle with things like regional dialects or slang terms, and there are more deep-rooted issues like unconscious bias. And if a company simply opts for a one-size-fits-all AI approach, rather than tailoring it to specific customer requirements, we won't be any better off.
Zooming out for a second, I appreciate that we're yet to fully consider all the ethical questions posed by the rapid advancements in AI. Regulation will surely become a factor (if it can keep pace), and upskilling a workforce to become comfortable with the new system will be something for industry leaders and educational institutions to grapple with.
But I still think a good place to start is letting the robots take care of mundane helpline tasks; it's for the good of humanity.