Category Archives: Artificial Intelligence
Cleveland wants to use artificial intelligence to fight illegal dumping – cleveland.com
CLEVELAND, Ohio - The city of Cleveland will work with Case Western Reserve University and Cleveland State University on a solution for illegal dumping that's powered by artificial intelligence.
The end product will ideally provide new city-owned technology that Cleveland could use to identify people responsible for dumping, according to Roy Fernando, chief innovation and technology officer under Mayor Justin Bibb, who has promised to use technology to improve city services.
Cleveland City Council on Monday approved legislation allowing students and faculty who are part of the two universities' Internet of Things Collaborative to commence work. It was one of two initiatives approved this week that are intended to bring smart technology to city devices and operations.
Students and staff will use smart cameras to develop and test an AI model designed to identify illegal dumping. Such work would be performed in a controlled environment, likely on-campus, where students will walk into the field of view of a monitor and leave an item behind, Fernando said.
Once the model has been tweaked and perfected, it would be able to identify that person as having illegally dumped the item on the ground, Fernando said.
Then, the city intends to deploy smart cameras outfitted with the new technology on two corridors known for being dumping hotspots. One would be deployed on the city's East Side, and one on the West Side, Fernando said.
Once someone dumps an item and the AI model detects it, it would automatically alert authorities, so they could investigate and potentially ticket whoever's responsible.
If the test projects are successful, the technology could then be scaled up for use elsewhere in Cleveland. The technology could also serve as a guide, of sorts, for creating different smart-city solutions for other problems, Fernando said.
Ward 3's Councilman Kerry McCormack, who has long advocated for Cleveland to begin using smart-city technology, praised the idea during a Monday committee hearing. He identified illegal dumping as one of the city's largest problems.
Ward 14's Councilwoman Jasmin Santana, who said illegal dumping has been a big concern in alleyways in her neighborhood, was a bit skeptical. "We [already] know the hotspots for illegal dumping. That's not the question," Santana said. "[The issue is] capacity within the illegal dumping task force, and cameras."
The second smart-city initiative approved by Council on Monday was a no-cost partnership with Honeywell, a manufacturing and technology company, to develop a smart-city roadmap that could be used to guide Cleveland's future use of technology in delivering city services.
Cleveland was one of five cities selected for the partnership by Accelerator for America, which is a coalition of U.S. mayors that seeks and shares innovative solutions for problems commonly faced by municipalities.
Technology advancements identified by Honeywell could relate to any number of city services or needs. Examples mentioned by Fernando and McCormack include uses for transportation, sustainability, smart buildings, smart sensors embedded in roads or other infrastructure, meter-reading for utilities, making traffic lights more efficient, or monitoring air quality or waste collection.
Over a two- or three-month period, Honeywell will interview leaders of several city departments about challenges they routinely face. Honeywell will then present findings about how to address those challenges with smart technology, Fernando said.
Bibb intends to use those findings and recommendations to apply for federal grants that would be used to pay for the needed technology upgrades, he said.
Putting artificial intelligence and machine learning workloads in the cloud – ComputerWeekly.com
Artificial intelligence (AI) and machine learning (ML) are some of the most hyped enterprise technologies and have caught the imagination of boards, with the promise of efficiencies and lower costs, and the public, with developments such as self-driving cars and autonomous quadcopter air taxis.
Of course, the reality is rather more prosaic, with firms looking to AI to automate areas such as online product recommendations or spotting defects on production lines. Organisations are using AI in vertical industries, such as financial services, retail and energy, where applications include fraud prevention and analysing business performance for loans, demand prediction for seasonal products and crunching through vast amounts of data to optimise energy grids.
All this falls short of the idea of AI as an intelligent machine along the lines of 2001: A Space Odyssey's HAL. But it is still a fast-growing market, driven by businesses trying to drive more value from their data, and automate business intelligence and analytics to improve decision-making.
Industry analyst firm Gartner, for example, predicts that the global market for AI software will reach US$62bn this year, with the fastest growth coming from knowledge management. According to the firm, 48% of the CIOs it surveyed have already deployed artificial intelligence and machine learning or plan to do so within the next 12 months.
Much of this growth is being driven by developments in cloud computing, as firms can take advantage of the low initial costs and scalability of cloud infrastructure. Gartner, for example, cites cloud computing as one of five factors driving AI and ML growth, as it allows firms to experiment and operationalise AI faster with lower complexity.
In addition, the large public cloud providers are developing their own AI modules, including image recognition, document processing and edge applications to support industrial and distribution processes.
Some of the fastest-growing applications for AI and ML are around e-commerce and advertising, as firms look to analyse spending patterns and make recommendations, and use automation to target advertising. This takes advantage of the growing volume of business data that already resides in the cloud, cutting out the costs and complexity associated with moving data.
The cloud also lets organisations make use of advanced analytics and compute facilities, which are often not cost-effective to build in-house. This includes the use of dedicated graphics processing units (GPUs) and extremely large storage volumes made possible by cloud storage.
"Such capabilities are beyond the reach of many organisations' on-prem offerings, such as GPU processing. This demonstrates the importance of cloud capability in organisations' digital strategies," says Lee Howells, head of AI at advisory firm PA Consulting.
Firms are also building up expertise in their use of AI through cloud-based services. One growth area is AIOps, where organisations use artificial intelligence to optimise their IT operations, especially in the cloud.
Another is MLOps, which Gartner says is the operationalisation of multiple AI models, creating composite AI environments. This allows firms to build up more comprehensive and functional models from smaller building blocks. These blocks can be hosted on on-premise systems, in-house, or in hybrid environments.
Just as cloud service providers offer the building blocks of IT (compute, storage and networking), so they are building up a range of artificial intelligence and machine learning models. They are also offering AI- and ML-based services which firms, or third-party technology companies, can build into their applications.
These AI offerings do not need to be end-to-end processes, and often they are not. Instead, they provide functionality that would be costly or complex for a firm to provide itself. But they are also functions that can be performed without compromising the firm's security or regulatory requirements, or that involve large-scale migration of data.
Examples of these AI modules include image processing and image recognition, document processing and analysis, and translation.
"We operate within an ecosystem. We buy bricks from people and then we build houses and other things out of those bricks. Then we deliver those houses to individual customers," says Mika Vainio-Mattila, CEO at Digital Workforce, a robotic process automation (RPA) company. The firm uses cloud technologies to scale up its delivery of automation services to its customers, including its robot as a service, which can run either on Microsoft Azure or a private cloud.
Vainio-Mattila says AI is already an important part of business automation. "The one that is probably the most prevalent is intelligent document processing, which is basically making sense of unstructured documents," he says.
"The objective is to make those documents meaningful to robots, or automated digital agents, that then do things with the data in those documents. That is the space where we have seen most use of AI tools and technologies, and where we have applied AI ourselves most."
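The "making sense of unstructured documents" idea can be illustrated with a toy sketch. A minimal version, assuming invented field names, sample text, and hand-written patterns: real intelligent document processing uses trained ML models rather than regular expressions, so this only shows the shape of the output a downstream software robot would consume.

```python
import re

# Toy illustration of intelligent document processing: pulling structured
# fields out of unstructured text so an automated agent can act on them.
# The invoice text and regex patterns below are invented for illustration.

invoice_text = "Invoice INV-2041 dated 2022-09-01, total due EUR 1,250.00"

def extract_fields(text: str) -> dict:
    """Extract the fields a downstream robot would need from free text."""
    return {
        "invoice_id": re.search(r"INV-\d+", text).group(),
        "date": re.search(r"\d{4}-\d{2}-\d{2}", text).group(),
        "total": re.search(r"EUR [\d,]+\.\d{2}", text).group(),
    }

print(extract_fields(invoice_text))
```

Once the document is reduced to a dictionary like this, the "robot" side of the automation no longer cares that the source was unstructured.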
He sees a growing push from the large public cloud companies to provide AI tools and models. Initially, that is to third-party software suppliers or service providers such as his company, but he expects the cloud solution providers (CSPs) to offer more AI technology directly to user businesses too.
"It's an interesting space because the big cloud providers (spearheaded by Google, obviously, but very closely followed by Microsoft and Amazon, and others, IBM as well) have implemented ML- and AI-based services for deciphering unstructured information. That includes recognising or classifying photographs, or translation."
These are general-purpose technologies designed so that others can reuse them. The business applications are frequently very use-case specific and need experts to tailor them to a company's business needs. And the focus is more on back-office operations than applications such as driverless cars.
Cloud providers also offer domain-specific modules, according to PA Consulting's Howells. "These have already evolved in financial services, manufacturing and healthcare," he says.
In fact, the range of AI services offered in the cloud is wide, and growing. "The big [cloud] players now have models that everyone can take and run," says Tim Bowes, associate director for data engineering at consultancy Dufrain. "Two to three years ago, it was all third-party technology, but they are now building proprietary tools."
Azure, for example, offers Azure AI, with vision, speech, language and decision-making AI models that users can access via API calls. Microsoft breaks its offerings down into Applied AI Services, Cognitive Services, machine learning and AI infrastructure.
Google offers AI infrastructure, Vertex AI, an ML platform, data science services, media translation and speech to text, to name a few. Its Cloud Inference API lets firms work with large datasets stored in Googles cloud. The firm, unsurprisingly, provides cloud GPUs.
Amazon Web Services (AWS) also provides a wide range of AI-based services, including image recognition and video analysis, translation, conversational AI for chatbots, natural language processing, and a suite of services aimed at developers. AWS also promotes its health and industrial modules.
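Because each provider exposes broadly similar modules (image recognition, translation, and so on), a common pattern is to wrap the chosen service behind a thin internal interface so the provider can be swapped later. A minimal sketch, assuming a hypothetical `LabelDetector` interface of our own invention; the stub backend stands in for a real SDK call (e.g. to a cloud vision service) so the example runs locally.

```python
from typing import Protocol

class LabelDetector(Protocol):
    """Interface a firm might define over interchangeable cloud vision services."""
    def detect_labels(self, image_bytes: bytes) -> list[str]: ...

class StubDetector:
    """Stand-in for a real backend (a wrapper around a provider's image
    recognition API would go here); returns canned labels for illustration."""
    def detect_labels(self, image_bytes: bytes) -> list[str]:
        return ["vehicle", "person"]

def tag_image(detector: LabelDetector, image_bytes: bytes) -> list[str]:
    # Application code depends only on the interface, so the cloud
    # provider behind it can be changed without rewriting callers.
    return detector.detect_labels(image_bytes)

print(tag_image(StubDetector(), b"..."))  # ['vehicle', 'person']
```

The design choice here mirrors the article's point: the cloud modules are building blocks, and the firm's own code is the house built from them.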
The large enterprise software and software-as-a-service (SaaS) providers also have their own AI offerings. These include Salesforce (ML and predictive analytics), Oracle (ML tools including pre-trained models, computer vision and NLP) and IBM (Watson Studio and Watson Services). IBM has even developed a specific set of AI-based tools to help organisations understand their environmental risks.
Specialist firms include H2O.ai, UiPath, Blue Prism and SnapLogic, although the latter three could be better described as intelligent automation or RPA companies than pure-play AI providers.
It is, however, a fine line. According to Jeremiah Stone, chief technology officer (CTO) at SnapLogic, enterprises are often turning to AI on an experimental basis, even where more mature technology may be more appropriate.
"Probably 60% or 70% of the efforts I've seen are, at least initially, starting out exploring AI and ML as a way to solve problems that may be better solved with more well-understood approaches," he says. "But that is forgivable because, as people, we continually have extreme optimism for what software and technology can do for us; if we didn't, we wouldn't move forward."
Experimentation with AI will, he says, bring longer-term benefits.
There are other limitations to AI in the cloud. First and foremost, cloud-based services are best suited to generic data or generic processes. This allows organisations to overcome the security, privacy and regulatory hurdles involved in sharing data with third parties.
AI tools counter this by not moving data: it stays in the local business application or database. And security in the cloud is improving, to the point where more businesses are willing to make use of it.
"Some organisations prefer to keep their most sensitive data on-prem. However, with cloud providers offering industry-leading security capabilities, the reason for doing this is rapidly reducing," says PA Consulting's Howells.
Nonetheless, some firms prefer to build their own AI models and do their own training, despite the cost. If AI is the product (and driverless cars are a prime example), the business will want to own the intellectual property in the models.
But even then, organisations stand to benefit from areas where they can use generic data and models. The weather is one example; image recognition is potentially another.
Even firms with very specific demands for their AI systems might benefit from the expansive data resources in the cloud for model training. Potentially, they might also want to use cloud providers' synthetic data, which allows model training without the security and privacy concerns of data sharing.
And few in the industry would bet against those services coming, first and foremost, from the cloud service providers.
Artificial Intelligence Market in the Education Sector 2026, Increasing Demand For ITS to Boost Growth – Technavio – PR Newswire
NEW YORK, Sept. 19, 2022 /PRNewswire/ -- The Artificial Intelligence Market in the Education Sector is expected to grow by USD 374.3 million during 2021-2026, at a CAGR of 48.15% during the forecast period, according to Technavio. The increasing demand for ITS will offer immense growth opportunities, and security and privacy concerns will challenge the growth of the market participants.
To make the most of the opportunities, market vendors should focus more on the growth prospects in the fast-growing segments, while maintaining their positions in the slow-growing segments. Increasing demand for ITS has been instrumental in driving the growth of the market. However, security and privacy concerns might hamper the market growth.
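The release's headline figures can be sanity-checked with a back-of-envelope calculation. A sketch under stated assumptions: we assume the USD 374.3 million of incremental growth compounds at the quoted 48.15% CAGR over the five years from a 2021 base to 2026 (the release is ambiguous about the exact window), and solve for the implied base-year market size.

```python
# Back-of-envelope check on the press release's figures.
# Assumption: growth of USD 374.3M accumulates over 5 years at 48.15% CAGR.
cagr = 0.4815
growth = 374.3   # USD million, incremental growth over the period
years = 5

# growth = base * ((1 + cagr)**years - 1)  =>  solve for the implied base
implied_base = growth / ((1 + cagr) ** years - 1)
print(f"implied 2021 base: ~USD {implied_base:.1f} million")  # ~61.0
```

The implied base of roughly USD 61 million is our derivation, not a figure from the report; it simply shows that a small base plus a very high CAGR is what produces the modest absolute growth number.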
Artificial Intelligence Market in the Education Sector Segmentation
Artificial Intelligence Market in the Education Sector Scope
Technavio presents a detailed picture of the market by way of study, synthesis, and summation of data from multiple sources. Our artificial intelligence market in the education sector report covers the following areas:
This study identifies increased emphasis on chatbots as one of the prime reasons driving growth in the artificial intelligence market in the education sector during the next few years.
Artificial Intelligence Market in the Education Sector Vendor Analysis
We provide a detailed analysis of around 25 vendors operating in the Artificial Intelligence Market in the Education Sector. Backed with competitive intelligence and benchmarking, our research reports on the Artificial Intelligence Market in the Education Sector are designed to provide entry support, customer profiles and M&As, as well as go-to-market strategy support.
Find additional highlights on the growth strategies adopted by vendors and their product offerings in the full report.
Artificial Intelligence Market in the Education Sector Key Highlights
Related Reports: Overhead Cables Market by Type and Geography - Forecast and Analysis 2022-2026: The overhead cables market share is expected to increase by USD 17.67 billion from 2021 to 2026, and the market's growth momentum will accelerate at a CAGR of 5.1%.
Electric Motor Sales Market by Application and Geography - Forecast and Analysis 2022-2026: The electric motor sales market share is expected to increase by USD 52.69 billion from 2021 to 2026, and the market's growth momentum will accelerate at a CAGR of 6.38%.
Artificial Intelligence Market In The Education Sector Scope

Page number: 120
Base year: 2021
Forecast period: 2022-2026
Growth momentum & CAGR: Accelerate at a CAGR of 48.15%
Market growth 2022-2026: $374.3 million
Market structure: Fragmented
YoY growth (%): 46.6
Regional analysis: US
Performing market contribution: North America at 100%
Key consumer countries: US
Competitive landscape: Leading companies, competitive strategies, consumer engagement scope
Companies profiled: Alphabet Inc., Carnegie Learning Inc., Century-Tech Ltd., Cognii, DreamBox Learning Inc., Fishtree Inc., Intellinetics Inc., International Business Machines Corp., Jenzabar Inc, John Wiley and Sons Inc., LAIX Inc., McGraw Hill Education Inc., Microsoft Corp., Nuance Communications Inc., Pearson Plc, PleIQ Smart Toys Spa, Providence Equity Partners LLC, Quantum Adaptive Learning LLC, Tangible Play Inc., and True Group Inc.
Market dynamics: Parent market analysis, market growth inducers and obstacles, fast-growing and slow-growing segment analysis, COVID-19 impact and future consumer dynamics, and market condition analysis for the forecast period.
Customization purview: If our report has not included the data that you are looking for, you can reach out to our analysts and get segments customized.
Table of Contents:
1 Executive Summary
2 Market Landscape
3 Market Sizing
4 Five Forces Analysis
5 Market Segmentation by End-user
6 Market Segmentation by Type
7 Customer Landscape
8 Drivers, Challenges, and Trends
9 Vendor Landscape
10 Vendor Analysis
11 Appendix
About Us
Technavio is a leading global technology research and advisory company. Its research and analysis focuses on emerging market trends and provides actionable insights to help businesses identify market opportunities and develop effective strategies to optimize their market positions. With over 500 specialized analysts, Technavio's report library consists of more than 17,000 reports and counting, covering 800 technologies and spanning 50 countries. Its client base consists of enterprises of all sizes, including more than 100 Fortune 500 companies. This growing client base relies on Technavio's comprehensive coverage, extensive research, and actionable market insights to identify opportunities in existing and potential markets and assess their competitive positions within changing market scenarios.
Contact
Technavio Research
Jesse Maida
Media & Marketing Executive
US: +1 844 364 1100
UK: +44 203 893 3200
Email: [emailprotected]
Website: http://www.technavio.com/
SOURCE Technavio
Distracted drivers are being identified by artificial intelligence in Edmonton – The Gateway Online
Artificial intelligence is currently being used in Edmonton to detect distracted driving as part of a research project.
On September 13, the University of Alberta launched this three-week research project to understand the prevalence of distracted drivers, specifically in Edmonton. Karim El-Basyouny, a professor in the faculty of engineering and urban traffic safety research chair at the University of Alberta, leads the research team. The U of A research is a collaboration with Acusensus, the City of Edmonton, and the Edmonton Police Service.
Since September 13, the technology has been stationed at its first location at the intersection of 79 Street and Argyll Road. According to El-Basyouny, it will be stationed there for about a week before moving to the next location, which is currently unknown. There will be a total of three different locations, one for each week during this project.
El-Basyouny's research is being supported by a seed grant, making the use of Acusensus technology possible. Although the Edmonton Police Service is collaborating on this project, the data collected will be used solely for research, not traffic enforcement.
Edmonton is the first city in Canada to test Acusensus technology, according to Tony Parrino, the general manager for Acusensus in North America.
"The data around distracted driving in Canada has been a little patchy, [and] we don't really understand how big of a problem it is ... what we're trying to do is see if there is a better way of understanding how big of an issue [distracted driving] is," El-Basyouny explained.
The technology being used to determine the prevalence of distracted drivers is mainly AI. According to Parrino, the AI has gone through a number of training scenarios with millions of data points.
The system is radar-based with many different sensors, and four different cameras. Each camera captures something different; one captures a steep shot of the windshield, one camera is shallow in case of a phone-to-ear event, and the other two cameras are used for color context and capturing license plates. The information gathered is then given to the AI.
According to Parrino, although the AI has been trained for maximum accuracy, there is a possibility of false positives.
"It is very accurate, but there are false positives ... 100 per cent of the images that are captured are reviewed by trained individuals [who determine if] the criteria is met for the U of A to determine that a distracted driving event has occurred, and only those are counted," Parrino said.
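The review pipeline Parrino describes (every AI-flagged image checked by a trained reviewer, with only confirmed events counted toward the prevalence statistics) can be sketched in a few lines. This is a hedged illustration: the reviewer decision is simulated with an invented `phone_visible` flag, since the actual review criteria are not public.

```python
# Sketch of human-in-the-loop review: the AI flags candidate images,
# a trained reviewer confirms or rejects each one, and only confirmed
# events are counted. Fields and reviewer logic here are illustrative.

def count_confirmed(flagged_images, reviewer_confirms):
    """Count only the flagged images a human reviewer confirms."""
    return sum(1 for img in flagged_images if reviewer_confirms(img))

flagged = [
    {"id": 1, "phone_visible": True},
    {"id": 2, "phone_visible": False},  # false positive: glare, not a phone
    {"id": 3, "phone_visible": True},
]
confirmed = count_confirmed(flagged, lambda img: img["phone_visible"])
print(confirmed)  # 2
```

The design keeps the AI as a high-recall filter while the human stage supplies the precision the final statistics depend on.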
Although Acusensus technology is being used in Australia for traffic enforcement, according to Parrino, it is unknown if the technology will be used for traffic enforcement in Edmonton. As of right now, this research is being used solely to see the prevalence of distracted drivers in Edmonton.
"I think [traffic enforcement] is an option that is available to us at [some] point in the future, [however] it is not predominantly the purpose of this study," El-Basyouny said.
In a statement sent out September 13, Jessica Lamarre, director of Safe Mobility for the City of Edmonton, commented on the U of A research project.
"This project provides an opportunity to gain a better understanding of the prevalence and safety impacts of distracted driving on our streets through the creative use of new technology alongside our talented research partners at the University of Alberta."
How Ambient.ai Is Using Artificial Intelligence to Turn Video Security On Its Head – Inc.
Shikhar Shrestha has been building security systems since he was a teenager. It began as part obsession, part coping mechanism. He'd been traumatized when he and his mother were robbed at gunpoint when he was 12. The area of his hometown in eastern India seemed to have lots of security cameras--but what was the use? Help did not come while he was being threatened, and while his mother's jewelry was being stolen. He thought about that a lot.
As a child, Shrestha tinkered with technology, including building homemade security systems for neighbors. Years later he enrolled at Stanford, doing graduate work in electrical and mechanical engineering. There he met computer science grad student Vikesh Khanna--and the pair had a lightbulb moment in conceptualizing the future of video innovation.
"We had an idea that artificial intelligence and video technology were getting so good that in five years video tech and A.I. could look at a video more exactly than humans can," Shrestha, now 30, says. "If any camera out there can tell you right away when it sees something suspicious, that would make for a great security system."
The pair earned master's degrees, and in 2017 founded Ambient.ai, iterating on their idea with funding and support from the Silicon Valley startup incubator Y Combinator. They had a clear goal: to prevent every physical security incident possible. They developed a technology that combines A.I. and a computer-vision breakthrough, called computer vision intelligence, to understand situational context. It could, in real time, identify elements in a video--from a human walking, to a car tailing another car, to a weapon being brandished, to a perimeter breach.
The founders thought they had a straightforward problem to fix. With conventional enterprise security systems, video cameras capture an endless stream of video--which is rarely, if ever, watched in real time to actually stop, prevent, or quickly respond to an incident. During his time in Y Combinator, Shrestha sent 100 emails a week to security chiefs at large companies, hospitals, hotels, and governments, to learn more about his market and its needs. He quickly learned that no one wanted a new security system--they already had cameras. But the meetings confirmed what he knew: "Everyone does security the same way: They spend millions of dollars on their programs. The expectation is that if something bad happens you rewind the video." In other words, it wasn't having the kind of crime-stopping utility Shrestha envisioned.
At the same time, he was gaining confidence in his teachable video-scanning tool. It could identify when a human fell and got hurt, or when a weapon appeared. The software also could gauge how certain it was that a security incident occurred. Low confidence meant it would ping a member of Ambient.ai's small team of humans to verify what was happening in the video. In cases of high confidence, it would alert a designated authority, such as a security chief on duty or local law enforcement.
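The two-tier triage described above can be sketched as a simple routing function. This is a hedged illustration only: the threshold value and routing labels are invented for the example, not Ambient.ai's actual parameters.

```python
# Illustrative confidence-based triage: high-confidence detections go
# straight to a designated authority; everything else gets a human check
# first. The 0.9 threshold is an assumed value for the sketch.
HIGH_CONFIDENCE = 0.9

def route_detection(confidence: float) -> str:
    if confidence >= HIGH_CONFIDENCE:
        return "alert_authority"      # e.g. security chief on duty
    return "human_verification"       # ping a human team member first

print(route_detection(0.95))  # alert_authority
print(route_detection(0.60))  # human_verification
```

Routing on model confidence is what lets a small human team cover a large camera fleet: people see only the ambiguous cases.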
Just because Shrestha trusted his technology didn't mean investors saw the point. "At that time the venture community did not believe that physical security was an interesting space where you could build a venture-scale business," he says. There were dominant players already. Companies' budgets were allocated. But Ambient.ai's solution was complementary to existing security: It could be integrated into almost any camera-feed system, and customized based on the security needs of nearly any business to detect threats in real time. Still, Shrestha says raising the first $2 million for Ambient.ai required approximately 50 meetings over the course of two months.
The company pitched its product where it saw immediate need. When a private school in San Jose, California, the Harker School, experienced a nighttime perimeter breach (caught on video that no one was watching) followed by an assault the next morning, Shrestha proposed his system could have prevented it by alerting the authorities immediately. Getting a paying customer seemed to set more deals in motion. While still in beta, the company slowly amassed a client roster. Investor confidence soared, too. When Ambient.ai raised a Series A round of funding, it took 13 days of meetings; the Series B took just three.
After five years of signing up customers and building up its A.I. intelligence in stealth mode, Ambient.ai formally launched to the public in January 2022. It also announced it had raised $52 million in a round led by Andreessen Horowitz. The startup works with seven of the top-10 U.S. technology companies by market capitalization, and its client list includes Adobe, VMware, and Impossible Foods. Most of the company's 100 employees are based around its headquarters in the San Francisco Bay Area.
Shrestha is hoping his company flips the surveillance model of security to be proactive, rather than reactive. He's also addressing concerns about the use of machine learning in security, which evokes concern over baked-in or learned prejudices and profiling. The Ambient.ai system identifies forms of objects and people, not their colors or traits. Unlike other video-monitoring systems, it does not use facial recognition. Nor does its system have the ability to recognize bias-inducing traits, such as gender, age, or skin color.
"It's not looking for classes that can include bias," Shrestha says. "There's a huge responsibility of people who build these systems to build systems from the ground up to maximize privacy and to eliminate bias."
Artificial intelligence thinks the Aspen area looks like this – The Aspen Times
Aspen is known for its world-class skiing, sky-high real-estate prices and breathtaking mountain views. The town has been known to conjure artistic inspiration, as well; it's the town where Stevie Nicks reportedly wrote the hit "Landslide," and a place John Denver called home for many years.
According to Swift Luxe, there are approximately 1.5 million visitors to Aspen each year who come to take in the beauty of the area.
While it's practically impossible to capture Aspen and the surrounding area's beauty in an image, an AI program tried. The images below were created using a program called Dream Studio beta, a more rapid and accessible version of Stable Diffusion, a text-to-image model that was released to the public last month.
When this artificial intelligence text-to-image application thinks of Aspen, it thinks of vast mountain ranges.
[Image gallery: "Aspen, Colorado," generated by Stable Diffusion]
This is pretty close if you ask us.
[Image gallery: "Maroon Bells," generated by Stable Diffusion]
[Image gallery: "Aspen Real Estate," generated by Stable Diffusion]
[Image gallery: "Snowmass Village," generated by Stable Diffusion]
Close, very close.
The New Artificial Intelligence Of Car Audio Might Improve More Than Just Tunes – Forbes
As Artificial Intelligence is applied to car audio, the system can start to sense competing noise and adjust the experience dynamically.
Hollywood has perennially portrayed Artificial Intelligence (AI) as the operating layer of dystopian robots who replace unsuspecting humans and create the escalating, central conflict. In a best-case reference, you might imagine a young Haley Joel Osment playing David, the self-aware, artificial kid in Spielberg's polar-caps-thawed-and-flooded-coastal-cities world (sound familiar?) of A.I.: Artificial Intelligence who (spoiler alert) only kills himself. Or maybe you recall Robin Williams's voice as Bicentennial Man who, once again, is a self-aware robot attempting to thrive who (once again on the spoiler alert) ends up being his only victim. And, of course, there's the nearly cliché reference to Terminator and its post-apocalyptic world with machines attempting to destroy humans and, well, (not-so-spoiler alert) lots of victims over a couple of decades. In none of these scenarios, however, do humans coexist with an improved life, let alone enhanced entertainment and safety.
That, however, is the new reality. Artificial Intelligence algorithms can be built into audio designs and continuously improved via over-the-air updates to enhance the driving experience. And in direct contradiction to these Hollywood examples, such AI might actually improve a human's likelihood of survival.
How the car audio performs can now become an innovative, self-tuned system that enhances the experience for the user.
Until recently, all User Interface (UI) development, including audio, has required complex programming by expert coders over the standard thirty-six (36) months of a vehicle program. Sheet-metal styling and electronic boxes are specified, sourced, and developed in parallel, only to have individual elements calibrated late in development. Branded sounds. Acoustic signatures. All separate initiatives within the same anemic system design that has cost manufacturers billions.
But Artificial Intelligence has allowed a far more flexible and efficient way of approaching audio experience design. "What we're seeing is the convergence of trends," states Josh Morris, DSP Concepts' Machine Learning Engineering Manager. "Audio is becoming a more dominant feature within automotive, but at the same time you're seeing modern processors become stronger with more memory and capabilities."
And, therein, using a systems-focused development platform, Artificial Intelligence and these stronger processors provide drivers and passengers with a new level of adaptive, real-time responsiveness. "Instead of the historical need to write reams of code for every conceivable scenario, AI guides system responsiveness based on a learned awareness of environmental conditions and events," states Steve Ernst, DSP Concepts' Head of Automotive Business Development.
The very obvious way to use such a learning system is de-noising the vehicle so that premium audio can be tailored and improved despite ambient changes, such as a swap to winter tires. But LG Electronics has developed algorithms, running in the DSP Concepts Audio Weaver platform, that enhance a movie's dialogue during rear-seat entertainment, accentuating it against in-movie explosions so the passenger can better hear the critical content.
Another non-obvious aspect would be how branded audio sounds are orchestrated in the midst of other noises. Does this specific vehicle require the escalating boot-up sequence to play while other sounds like the radio and chimes are automatically turned down? Each experience can be adjusted.
How to deal with the ongoing, internal, external, and ever-changing audio alerts will be a development challenge for autonomous and electric vehicles alike.
As the world races into both electric vehicles and autonomous driving, the frequency and needs of audible warnings will likely change drastically. For instance, an autonomous taxi's safety engineer cannot assume the passengers are anywhere near a visual display when a timely alert is required. And how audible is that alert for the nearly 25 million Americans with disabilities for whom autonomous vehicles should open new mobility possibilities? "Audio now isn't just for listening to your favorite song," states Ernst. "With autonomous driving, there are all sorts of alerts that are required to keep the driver engaged or to alert the non-engaged driver about things going on around them."
"And what makes it more challenging," injects Adam Levenson, DSP Concepts' Head of Marketing, "are all of the things being handled simultaneously within the car: telephony, immersive or spatial sound, engine noise, road noise, acoustic vehicle alert systems, voice systems, etc. We like to say the most complex audio product is the car."
For instance, imagine the scenario where a driver has enabled autonomous drive mode on the highway, has turned up his tunes, and is pleasantly ignorant of an approaching emergency vehicle. At what accuracy (and distance) of siren detection using the vehicle's microphone(s) does the car alert its quasi-distracted driver? How must that alert be presented to overcome ambient noise and command sufficient attention without needlessly startling the driver? All of this can be tuned via pre-developed models, upfront training with different sirens, and subsequent cloud-based tuning. "This is where the overall orchestration becomes really important," explains Morris. "We can take the output of the [AI's detection] model and direct that to different places in the car. Maybe you turn the audio down, trigger some audible warning signal and flash something on the dashboard for the driver to pay attention."
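Morris's description, in which one detection result is routed to several in-cabin responses at once, can be sketched as a small orchestration function. The confidence thresholds, volume levels, and action names below are illustrative assumptions for the sketch, not DSP Concepts' actual Audio Weaver API.

```python
# Toy orchestration of a siren-detection result: one model output drives
# several coordinated cabin actions. All thresholds and values are invented.

def orchestrate_siren_alert(confidence: float, media_volume: int) -> dict:
    """Map a detection confidence (0.0-1.0) to coordinated cabin actions."""
    actions = {"media_volume": media_volume, "chime": False, "dashboard_flash": False}
    if confidence >= 0.9:
        # High confidence: duck the music, chime, and flash the dashboard.
        actions["media_volume"] = min(media_volume, 20)
        actions["chime"] = True
        actions["dashboard_flash"] = True
    elif confidence >= 0.6:
        # Plausible siren: soften the audio only, to avoid startling the driver.
        actions["media_volume"] = min(media_volume, 50)
    return actions

print(orchestrate_siren_alert(0.95, media_volume=80))
```

Keeping the detector separate from the response policy also means the thresholds can be retuned later via an over-the-air update, as the article describes.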
The same holds true for external alerts. For instance, a quiet electric vehicle may have tuned alarms for pedestrians. New calibrations can then be created offline and downloaded to vehicles as software updates, based upon the enabled innovation.
Innovation everywhere. And Artificial Intelligence feeding the utopian experience rather than creating Hollywood's dystopian world.
Here's my prediction of the week (and it's only Tuesday, folks): the next evolution of audio shall include a full, instantaneous feedback loop, including the subtle, real-time delight of the user. Yes, much of the current design likely improves the experience, but an ongoing calibration of User-Centered Design (UCD) might be further enhanced based upon the passengers' expressions, body language, and comments, thereby tuning satisfaction individually and in real time. All of the enablers are there: camera, AI, processors, and an adaptive platform.
Yes, we've previously heard of adaptive mood lighting and remote detection of boredom, stress, etc. to improve safety, but nothing that enhances the combined experience based upon real-time learning algorithms fed by all user-pointed sensors.
Maybe I'm extrapolating too much. But just like Robin Williams's character, I've spanned two centuries, so maybe I'm also just sensitive to what humans might want.
Read more from the original source:
The New Artificial Intelligence Of Car Audio Might Improve More Than Just Tunes - Forbes
Will Artificial Intelligence Kill College Writing? – The Chronicle of Higher Education
When I was a kid, my favorite poem was Shel Silverstein's "The Homework Machine," which summed up my childhood fantasy: a machine that could do my homework at the press of a button. Decades later that technology, the innocuously titled GPT-3, has arrived. It threatens many aspects of university education, above all college writing.
The web-based GPT-3 software program, which was developed by an Elon Musk-backed nonprofit called OpenAI, is a kind of omniscient Siri or Alexa that can turn any prompt into prose. You type in a query, say, a list of ingredients ("what can I make with eggs, garlic, mushrooms, butter, and feta cheese?") or a genre and prompt ("write an inspiring TED Talk on the ways in which authentic leaders can change the world"), and GPT-3 spits out a written response. These outputs can be astonishingly specific and tailored. When asked to write a song protesting inhumane treatment of animals in the style of Bob Dylan, the program clearly draws on themes from Dylan's "Blowin' in the Wind":
How many more creatures must suffer?
How many more must die?
Before we open up our eyes
And see the harm we're causing?
When asked to treat the same issue in the style of Shakespeare, it produces stanzas of iambic tetrameter in appropriately archaic English:
By all the gods that guide this Earth
By all the stars that fill the sky
I swear to end this wretched dearth
This blight of blood and butchery.
GPT-3 can write essays, op-eds, Tweets, jokes (admittedly just dad jokes for now), dialogue, advertisements, text messages, and restaurant reviews, to give just a few examples. Each time you click the submit button, the machine learning algorithm pulls from the wisdom of the entire internet and generates a unique output, so that no two end products are the same.
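The claim that no two end products are the same follows from how such models generate text: each next word is sampled from a probability distribution rather than looked up deterministically. The toy sampler below, with an invented six-word vocabulary and made-up weights, is a minimal sketch of that idea, not GPT-3's actual decoding procedure.

```python
import random

# Minimal sketch of probabilistic text generation: every word is drawn
# from a weighted distribution, so different random states generally
# yield different "prose". Vocabulary and weights are invented.
VOCAB = ["the", "wind", "blows", "answers", "freedom", "cries"]
WEIGHTS = [0.3, 0.2, 0.15, 0.15, 0.1, 0.1]

def sample_continuation(rng: random.Random, length: int = 5) -> str:
    """Draw `length` words from the weighted vocabulary."""
    return " ".join(rng.choices(VOCAB, weights=WEIGHTS, k=length))

# The same random state reproduces the same text; fresh states diverge,
# which is why two submissions of one prompt rarely match.
print(sample_continuation(random.Random(1)))
print(sample_continuation(random.Random(2)))
```

This sampling behavior is also why, as the article notes later, anti-plagiarism tools that look for matching text have nothing stable to match against.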
The quality of GPT-3's writing is often striking. I asked the AI to discuss how free speech threatens a dictatorship, drawing on free-speech battles in China and Russia and how these relate to the First Amendment of the U.S. Constitution. The resulting text begins: "Free speech is vital to the success of any democracy, but it can also be a thorn in the side of autocrats who seek to control the flow of information and quash dissent." Impressive.
From an essay written by the GPT-3 software program
The current iteration of GPT-3 has its quirks and limitations, to be sure. Most notably, it will write absolutely anything. It will generate a full essay on how George Washington invented the internet or an eerily informed response to 10 steps a serial killer can take to get away with murder. In addition, it stumbles over complex writing tasks. It cannot craft a novel or even a decent short story. Its attempts at scholarly writing I asked it to generate an article on social-role theory and negotiation outcomes are laughable. But how long before the capability is there? Six months ago, GPT-3 struggled with rudimentary queries, and today it can write a reasonable blog post discussing ways an employee can get a promotion from a reluctant boss.
Since the output of every inquiry is original, GPT-3's products cannot be detected by anti-plagiarism software. Anyone can create an account for GPT-3. Each inquiry comes at a cost, but it's usually less than a penny, and the turnaround is instantaneous. Hiring someone to write a college-level essay, in contrast, currently costs $15 to $35 per page. The near-free price point of GPT-3 is likely to entice many students who would otherwise be priced out of essay-writing services.
It won't be long before GPT-3, and the inevitable copycats, infiltrate the university. The technology is just too good and too cheap not to make its way into the hands of students who would prefer not to spend an evening perfecting the essay I routinely assign on the leadership style of Elon Musk. Ironic that he has bankrolled the technology that makes this evasion possible.
To help me think through what the collision of AI and higher ed might entail, I naturally asked GPT-3 to write an op-ed exploring the ramifications of GPT-3 threatening the integrity of college essays. GPT-3 noted, with mechanical unself-consciousness, that it threatened "to undermine the value of a college education." "If anyone can produce a high-quality essay using an AI system," it continued, "then what's the point of spending four years (and often a lot of money) getting a degree? College degrees would become little more than pieces of paper if they can be easily replicated by machines."
The effects on college students themselves, the algorithm wrote, would be mixed: "On the positive side, students would be able to focus on other aspects of their studies and would not have to spend time worrying about writing essays. On the negative side, however, they will not be able to communicate effectively and will have trouble in their future careers." Here GPT-3 may actually be understating the threat to writing: Given the rapid development of AI, what percent of college freshmen today will have jobs that require writing at all by the time they graduate? Some who would once have pursued writing-focused careers will find themselves instead managing the inputs and outputs of AI. And once AI can automate that, even those employees may become redundant. In this new world, the argument for writing as a practical necessity looks decidedly weaker. Even business schools may soon take a liberal-arts approach, framing writing not as career prep but as the foundation of a rich and meaningful life.
So what is a college professor to do? I put the question to GPT-3, which acknowledged that "there is no easy answer to this question." Still, I think we can take some sensible measures to reduce the use of GPT-3, or at least push back the clock on its adoption by students. Professors can require students to draw on in-class material in their essays and to revise their work in response to instructor feedback. We can insist that students cite their sources fully and accurately (something that GPT-3 currently can't do well). We can ask students to produce work in forms that AI cannot (yet) effectively create, such as podcasts, PowerPoints, and verbal presentations. And we can design writing prompts that GPT-3 won't be able to effectively address, such as those that focus on local or university-specific challenges that are not widely discussed online. If necessary, we could even require students to write assignments in an offline, proctored computer lab.
Eventually, we might enter the "if you can't beat 'em, join 'em" phase, in which professors ask students to use AI as a tool and assess their ability to analyze and improve the output. (I am currently experimenting with a minor assignment along these lines.) A recent project on Beethoven's 10th symphony suggests how such projects might work. When he died, Beethoven had composed only 5 percent of his 10th symphony. A handful of Beethoven scholars fed the short completed section into an AI that generated thousands of potential versions of the rest of the symphony. The scholars then sifted through the AI-generated material, identified the best parts, and pieced them together to create a complete symphony. To my somewhat limited ear, it sounds just like Beethoven.
Read more:
Will Artificial Intelligence Kill College Writing? - The Chronicle of Higher Education
New artificial intelligence recycling technology can sort plastics on its own – H2 News – Hydrogen News – Green Hydrogen Report
New recycling technology has been developed using artificial intelligence to help programs sort plastics effectively and affordably, in order to stop recyclable materials from being sent to landfills.
Even though many people in municipal programs carefully sort their waste, much of the plastic they think is being recycled still finds its way to the landfill. Among the biggest problems is that once the trash has been collected, the individual plastics must still be sorted. At massive scale, and with cost as a concern, recycling technology has not reached the point where many plastics will end up anywhere but in a landfill.
Without quick and easy sorting, processing all the recycled materials becomes difficult, slow, and expensive. It is impossible to keep up with the incoming waste, and very costly, when much of the sorting must be done by hand. Failing to sort properly and mixing the wrong plastics means the remade plastics will be flawed and will not perform as needed, wasting the entire batch as well as the energy and resources required to produce it.
"The recycling process is quite complicated. If you go to the supermarket or for the daily recycling, you need to know how to properly place all the recyclable (items), like bottles or others, into the right bins. You need to know the labels, know the icons," explained Dr. Xu Wang of the University of Technology Sydney School of Electrical and Data Engineering.
This being the case, Dr. Wang led a team of the universitys researchers from the Global Big Data Technologies Centre (GBDTC) in the development of a smart bin capable of automatically sorting the plastics it receives.
The bin uses a spectrum of different forms of recycling technology including robotics, machine vision and artificial intelligence.
"This machine can classify different (types) of waste, including glasses, metal cans and plastics," explained Wang. This includes different forms of plastics such as PET and HDPE.
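As a rough illustration of the classification step inside such a bin, a sorter might map an item's sensor readings to the nearest known material profile. The two features and every reference value below are invented for the sketch; the actual GBDTC prototype relies on trained machine-vision and AI models, not a lookup of hand-set centroids.

```python
# Toy nearest-profile classifier for a smart recycling bin. The features
# (a reflectance reading and a weight) and all reference values are
# made up for illustration.
CENTROIDS = {
    "PET":   (0.80, 0.03),   # (reflectance, kg) - hypothetical profiles
    "HDPE":  (0.60, 0.06),
    "glass": (0.20, 0.40),
    "metal": (0.05, 0.10),
}

def classify_item(reflectance: float, weight_kg: float) -> str:
    """Return the material whose reference profile is closest to the reading."""
    def sq_dist(c):
        return (c[0] - reflectance) ** 2 + (c[1] - weight_kg) ** 2
    return min(CENTROIDS, key=lambda m: sq_dist(CENTROIDS[m]))

print(classify_item(0.78, 0.04))  # → PET
```

The appeal of automating this step, per the article, is that a wrong assignment contaminates a whole batch of remade plastic, so even a simple machine check beats error-prone hand sorting at scale.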
See original here:
New artificial intelligence recycling technology can sort plastics on its own - H2 News - Hydrogen News - Green Hydrogen Report
University of Washington graduates use artificial intelligence to create new proteins – NBC Right Now
SEATTLE, Wash.-
For over two years, protein structure prediction has been transformed by machine learning. On Sept. 15, two related research papers described a similar idea: a revolution in protein design.
The findings show how machine learning can create protein molecules that are more accurate and faster to produce than before.
With these new software tools, we should be able to find solutions to long-standing challenges in medicine, energy, and technology, said senior author David Baker, professor of biochemistry at the University of Washington School of Medicine.
The machine-learning algorithms, which include RoseTTAFold, have been trained to predict the detailed shapes of natural proteins based on their amino acid sequences.
Machine learning is a type of artificial intelligence that allows computers to learn from data without having to be programmed.
AI can generate proteins in two ways. One is akin to DALL-E or other AI tools that produce an output from simple prompts. The second resembles the autocomplete feature found in a search bar.
To speed things up, the team created a new algorithm that generates amino acid sequences. This tool, called ProteinMPNN, produces a sequence in about one second, more than 200 times faster than the previous best software.
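The search-bar analogy can be made concrete with a toy autocomplete: given a partial chain, repeatedly append the most likely next amino acid. The lookup table below is entirely made up for illustration; ProteinMPNN itself conditions on a protein's 3D backbone structure, not on a simple transition table.

```python
# Toy "autocomplete" for amino acid sequences, using one-letter residue
# codes. The transition table is invented; real design tools condition
# on a protein's backbone geometry.
NEXT_RESIDUE = {
    "M": "A", "A": "G", "G": "L", "L": "K", "K": "A",
}

def autocomplete_sequence(seed: str, length: int) -> str:
    """Extend a partial sequence one residue at a time until it reaches `length`."""
    seq = seed
    while len(seq) < length:
        seq += NEXT_RESIDUE[seq[-1]]
    return seq

print(autocomplete_sequence("M", 6))  # → MAGLKA
```

The real tool replaces this table with a learned model, which is what makes one-second generation of plausible sequences possible.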
The Baker Lab also reports that combining the new machine-learning tools could reliably generate new proteins that functioned in the laboratory. Among them were nanoscale rings that could make up parts of custom nanomachines.
Read the original post:
University of Washington graduates use artificial intelligence to create new proteins - NBC Right Now