Category Archives: Artificial Intelligence

Artificial Intelligence Helps Understand the Evolution of Young Stars and Their Planets – SciTechDaily

An X-class solar flare from our sun in November 2013. Scientists trained a neural network to find such flares in data taken of distant young stars. Credit: Scott Wiessinger, Solar Dynamics Observatory at NASA Goddard Space Flight Center

University of Chicago scientists teach a neural net to find baby star flares.

Like its human counterparts, a young star is cute but prone to temper flares; only a star's flares are lethal. A flare from a star can incinerate everything around it, including the atmospheres of any nearby planets starting to form.

Finding out how often such young stars erupt can help scientists understand where to look for habitable planets. But until now, locating such flares involved poring over thousands of measurements of star brightness variations, called light curves, by eye.

Scientists with the University of Chicago and the University of New South Wales, however, thought this would be a task well suited for machine learning. They taught a type of artificial intelligence called a neural network to detect the telltale light patterns of a stellar flare, then asked it to check the light curves of thousands of young stars; it found more than 23,000 flares.
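
The reference at the end of this article confirms the model was a convolutional neural network applied to TESS light curves. As a rough illustration only, here is a minimal Python sketch of how such a one-dimensional convolutional flare classifier might be set up; the window length, layer sizes, and training call are assumptions for illustration, not the paper's architecture.

    import numpy as np
    import tensorflow as tf
    from tensorflow.keras import layers

    # Each training example is a fixed-length window of a light curve
    # (flux over time), hand-labeled as flare / not-flare.
    WINDOW = 200  # hypothetical window length in cadences

    model = tf.keras.Sequential([
        layers.Conv1D(16, 7, activation="relu", input_shape=(WINDOW, 1)),
        layers.MaxPooling1D(2),
        layers.Conv1D(32, 5, activation="relu"),
        layers.GlobalMaxPooling1D(),
        layers.Dense(32, activation="relu"),
        layers.Dense(1, activation="sigmoid"),  # probability the window holds a flare
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

    # flux_windows: (n_examples, WINDOW, 1) array of brightness measurements;
    # labels: 0/1 flare annotations. Training would then be:
    # model.fit(flux_windows, labels, epochs=10, validation_split=0.2)

Once trained, a classifier like this can be slid across the full light curve of each star, which is how a single network can check thousands of stars for flares.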

Published on October 23, 2020, in the Astronomical Journal and the Journal of Open Source Software, the results offer a new benchmark in the use of AI in astronomy, as well as a better understanding of the evolution of young stars and their planets.

"When we say 'young,' we mean only a million to 800 million years old," said Adina Feinstein, a UChicago graduate student and first author on the paper. "Any planets near a star are still forming at this point. This is a particularly fragile time, and a flare from a star can easily evaporate any water or atmosphere that's been collected."

NASA's TESS telescope, aboard a satellite that has been orbiting Earth since 2018, is specifically designed to search for exoplanets. Flares from faraway stars show up on TESS's images, but traditional algorithms have a hard time picking out the shape from the background noise of star activity.

NASA's Solar Dynamics Observatory captures flares from the sun. Credit: NASA

But neural networks are particularly good at looking for patterns (like Google's AI picking cats out of internet images), and astronomers have increasingly begun to look to them to classify astronomical data. Feinstein worked with a team of scientists from NASA, the Flatiron Institute, Fermi National Accelerator Laboratory, the Massachusetts Institute of Technology, and the University of Texas at Austin to pull together a set of identified flares and not-flares to train the neural net.

"It turned out to be really good at finding small flares," said study co-author and former UChicago postdoctoral fellow Benjamin Montet, now a Scientia Lecturer at the University of New South Wales in Sydney. "Those are actually really hard to find with other methods."

Once the researchers were satisfied with the neural net's performance, they turned it loose on the full dataset of more than 3,200 stars.

They found that stars similar to our sun only have a few flares, and those flares seem to drop off after about 50 million years. "This is good for fostering planetary atmospheres; a calmer stellar environment means the atmospheres have a better chance of surviving," Feinstein said.

In contrast, cooler stars called red dwarfs tended to flare much more frequently. Red dwarfs have been seen to host small rocky planets. "If those planets are being bombarded when they're young, this could prove detrimental for retaining any atmosphere," she said.

The results help scientists understand the odds of habitable planets surviving around different types of stars, and how atmospheres form. This can help them pinpoint the most likely places to look for habitable planets elsewhere in the universe.

They also investigated the connection between stellar flares and star spots, like the kind we see on our own sun's surface. "The spottiest our sun ever gets is maybe 0.3% of the surface," Montet said. "For some of these stars we're seeing, the surface is basically all spots. This reinforces the idea that spots and flares are connected, as magnetic events."

The scientists next want to adapt the neural net to look for planets lurking around young stars. "Currently we only know of about a dozen planets younger than 50 million years, but they're so valuable for learning how planetary atmospheres evolve," Feinstein said.

Reference: "Flare Statistics for Young Stars from a Convolutional Neural Network Analysis of TESS Data" by Adina D. Feinstein, Benjamin T. Montet, Megan Ansdell, Brian Nord, Jacob L. Bean, Maximilian N. Günther, Michael A. Gully-Santiago, and Joshua E. Schlieder, 23 October 2020, The Astronomical Journal. DOI: 10.3847/1538-3881/abac0a

Other UChicago-affiliated scientists on the study included visiting assistant research professor Brian Nord and Assoc. Prof. Jacob Bean.

Link:
Artificial Intelligence Helps Understand the Evolution of Young Stars and Their Planets - SciTechDaily

Imaging and Artificial Intelligence Tools Help Predict Response to Breast Cancer Therapy – On Cancer – Memorial Sloan Kettering

Summary

For breast cancers that have high levels of HER2, advanced MRI scans and artificial intelligence may help doctors make treatment decisions.

For people with breast cancer, biopsies have long been the gold standard for characterizing the molecular changes in a tumor, which can guide treatment decisions. Biopsies remove a small piece of tissue from the tumor so pathologists can study it under the microscope and make a diagnosis. Thanks to advances in imaging technologies and artificial intelligence (AI), however, experts are now able to assess the characteristics of the whole tumor, rather than only the small sample removed during biopsy.

In a study published October 8, 2020, in EBioMedicine, a team led by experts from Memorial Sloan Kettering reports that, for breast cancers that have high levels of a protein called HER2, AI-enhanced imaging tools may also be useful for predicting how patients will respond to the targeted chemotherapy given before surgery to shrink the tumor (called neoadjuvant therapy). Ultimately, these tools could help to guide treatment and make it more personalized.

"We're not aiming to replace biopsies," says MSK radiologist Katja Pinker, the study's corresponding author. "But because breast tumors can be heterogeneous, meaning that not all parts of the tumor are the same, a biopsy can't always give us the full picture."

The study looked at data from 311 patients who had already been treated at MSK for early-stage breast cancer. All the patients had HER2-positive tumors, meaning that the tumors had high levels of the protein HER2, which can be targeted with drugs like trastuzumab (Herceptin). The researchers wanted to see if AI-enhanced magnetic resonance imaging (MRI) could help them learn more about each specific tumor's HER2 status.

One goal was to look at factors that could predict response to neoadjuvant therapy in people whose tumors were HER2-positive. "Breast cancer experts have generally believed that people with heterogeneous HER2 disease don't do as well, but recently a study suggested they actually did better," says senior author Maxine Jochelson, Director of Radiology at MSK's Breast and Imaging Center. "We wanted to find out if we could use imaging to take a closer look at heterogeneity and then use those findings to study patient outcomes."

The MSK team took advantage of AI and radiomics analysis, which uses computer algorithms to uncover disease characteristics. The computer helps reveal features on an MRI scan that can't be seen with the naked eye.

In this study, the researchers used machine learning to combine radiomics analysis of the entire tumor with clinical findings and biopsy results. They took a closer look at the HER2 status of the 311 patients, with the aim of predicting their response to neoadjuvant chemotherapy. By comparing the computer models to actual patient outcomes, they were able to verify that the models were effective.
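
The article does not publish the team's actual pipeline, but the general recipe it describes (whole-tumor radiomics features combined with clinical and biopsy variables, then evaluated against outcomes) can be sketched as follows. The synthetic arrays, feature counts, and the choice of logistic regression are all illustrative assumptions.

    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    rng = np.random.default_rng(0)
    n = 311  # matches the cohort size described in the article
    X_radiomics = rng.normal(size=(n, 50))  # placeholder whole-tumor radiomics features
    X_clinical = rng.normal(size=(n, 5))    # placeholder clinical/biopsy variables
    y = rng.integers(0, 2, size=n)          # placeholder response-to-therapy labels

    # Concatenate imaging-derived and clinical features into one matrix
    X = np.hstack([X_radiomics, X_clinical])

    clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
    # Cross-validated AUC estimates how well the model predicts response,
    # mirroring the comparison against actual patient outcomes
    auc = cross_val_score(clf, X, y, cv=5, scoring="roc_auc").mean()
    print(f"cross-validated AUC: {auc:.2f}")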

"Our next step is to conduct a larger multicenter study that includes different patient populations treated at different hospitals and scanned with different machines," Dr. Pinker says. "I'm confident that our results will be the same, but these larger studies are very important to do before you can apply these findings to patient treatment."

"Once we've confirmed our findings, our goal is to perform risk-adaptive treatment," Dr. Jochelson says. "That means we could use it to monitor patients during treatment and consider changing their chemotherapy during treatment if their early response is not ideal."

Dr. Jochelson adds that conducting more frequent scans and using them to guide therapies has improved treatments for people with other cancers, including lymphoma. "We hope that this will get us to the next level of personalized treatment for breast cancer," she concludes.

Continued here:
Imaging and Artificial Intelligence Tools Help Predict Response to Breast Cancer Therapy - On Cancer - Memorial Sloan Kettering

Artificial intelligence reveals hundreds of millions of trees in the Sahara – Newswise

If you think that the Sahara is covered only by golden dunes and scorched rocks, you aren't alone. Perhaps it's time to shelve that notion. In an area of West Africa 30 times larger than Denmark, an international team, led by University of Copenhagen and NASA researchers, has counted over 1.8 billion trees and shrubs. The 1.3 million km² area covers the westernmost portion of the Sahara Desert, the Sahel, and what are known as sub-humid zones of West Africa.

"We were very surprised to see that quite a few trees actually grow in the Sahara Desert, because up until now, most people thought that virtually none existed. We counted hundreds of millions of trees in the desert alone. Doing so wouldn't have been possible without this technology. Indeed, I think it marks the beginning of a new scientific era," asserts Assistant Professor Martin Brandt of the University of Copenhagen's Department of Geosciences and Natural Resource Management, lead author of the study'sscientific article, now published inNature.

The work was achieved through a combination of detailed satellite imagery provided by NASA and deep learning, an advanced artificial intelligence method. Normal satellite imagery is unable to identify individual trees; they remain literally invisible. Moreover, a limited interest in counting trees outside of forested areas led to the prevailing view that there were almost no trees in this particular region. This is the first time that trees across a large dryland region have been counted.

The role of trees in the global carbon budget

New knowledge about trees in dryland areas like this is important for several reasons, according to Martin Brandt. For example, they represent an unknown factor when it comes to the global carbon budget:

"Trees outside of forested areas are usually not included in climate models, and we know very little about their carbon stocks. They are basically a white spot on maps and an unknown component in the global carbon cycle," explains Martin Brandt.

Furthermore, the new study can contribute to better understanding the importance of trees for biodiversity and ecosystems and for the people living in these areas. In particular, enhanced knowledge about trees is also important for developing programmes that promote agroforestry, which plays a major environmental and socio-economic role in arid regions.

"Thus, we are also interested in using satellites to determine tree species, as tree types are significant in relation to their value to local populations who use wood resources as part of their livelihoods. Trees and their fruit are consumed by both livestock and humans, and when preserved in the fields, trees have a positive effect on crop yields because they improve the balance of water and nutrients," explains Professor Rasmus Fensholt of the Department of Geosciences and Natural Resource Management.

Technology with a high potential

The research was conducted in collaboration with the University of Copenhagen's Department of Computer Science, where researchers developed the deep learning algorithm that made the counting of trees over such a large area possible.

The researchers show the deep learning model what a tree looks like: they do so by feeding it thousands of images of various trees. Based upon the recognition of tree shapes, the model can then automatically identify and map trees over large areas and thousands of images. The model needs only hours for what would take thousands of humans several years to achieve.
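
One plausible way to implement this pattern, sketched below under stated assumptions: a small fully convolutional network labels each pixel of a satellite tile as tree crown or background, and connected components of the resulting mask are counted. The tile size, layer sizes, and threshold are illustrative; the published model is far more sophisticated than this toy network.

    import numpy as np
    import tensorflow as tf
    from tensorflow.keras import layers
    from scipy import ndimage

    # Toy fully convolutional network: input is a satellite tile, output is
    # a per-pixel probability that the pixel belongs to a tree crown.
    inp = layers.Input(shape=(256, 256, 3))
    x = layers.Conv2D(16, 3, padding="same", activation="relu")(inp)
    x = layers.Conv2D(32, 3, padding="same", activation="relu")(x)
    out = layers.Conv2D(1, 1, activation="sigmoid")(x)
    model = tf.keras.Model(inp, out)
    model.compile(optimizer="adam", loss="binary_crossentropy")

    # After training on hand-labeled tiles, count crowns in a new tile by
    # thresholding the mask and labeling connected components.
    def count_trees(tile):
        mask = model.predict(tile[np.newaxis])[0, :, :, 0] > 0.5
        _, n_crowns = ndimage.label(mask)
        return n_crowns

Applying such a function tile by tile across the whole region is what turns a per-image model into a census of over a billion trees.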

"This technology has enormous potential when it comes to documenting changes on a global scale and ultimately, in contributing towards global climate goals. We are motivated to develop this type of beneficial artificial intelligence," says professor and co-author Christian Igel of the Department of Computer Science.

The next step is to expand the count to a much larger area in Africa. And in the longer term, the aim is to create a global database of all trees growing outside forest areas.

Excerpt from:
Artificial intelligence reveals hundreds of millions of trees in the Sahara - Newswise

The Emergence of Artificial General Intelligence: Are we There? – Analytics Insight

Artificial Intelligence is something that's been around for quite a while. Ever since it entered the public consciousness through sci-fi, many have expected that one day machines will have general intelligence, and have considered the diverse practical, ethical, and philosophical implications.

In actuality, AI has been a staple of mainstream pop culture and sci-fi since the first Terminator film came out in 1984. These films depict an example of something many refer to as Artificial General Intelligence (AGI).

It should go without saying that superhuman AI is not even close to happening. Even so, the public is captivated by the possibility of incredibly smart computers taking over the world. This fascination has a name: the myth of the singularity.

The singularity refers to the point in time when an artificial intelligence would enter a cycle of exponential self-improvement: software so intelligent that it is able to develop itself faster and faster. At that point, technical progress would become the exclusive doing of AIs, with unforeseeable repercussions for the fate of the human species.

The singularity is connected to the idea of Artificial General Intelligence. An Artificial General Intelligence can be characterized as an AI that can perform any task that a human can perform. This idea is far more interesting than the singularity, since its definition is at least somewhat concrete.

Software engineers and researchers use machine learning algorithms to create specific AIs: artificially intelligent algorithms that are as good as, if not better than, people at one specific task; for instance, playing chess or picking out which squares in a segmented picture contain a road sign, as in CAPTCHAs.

Recent advances in AI and ML, while not actually close to real AGI, have created a feeling that AGI is near, perhaps surprisingly soon. It also doesn't help that some of the world's top minds, like Elon Musk, have called out AI as one of the greatest existential dangers to human existence ever.

Some of the greatest advances in AI today have come from artificial neural networks, technologists' way of copying in code the way that human brains work. All things considered, defining what precisely makes something intelligent is difficult.

Artificial consciousness raises the more ethical side of the AGI conversation. Can a machine actually achieve consciousness the way humans can? And if it could, would we need to treat it as a person?

Empirically, consciousness arises from biological input being interpreted and responded to by a biological creature, such that the creature becomes its own agent. If you remove the qualifier "biological" from that definition, it's not hard to see how even existing AIs could already be viewed as conscious, if only dimly so.

One thing that characterizes human consciousness is the capacity to recall memories and dream about the future. In many respects, this is a uniquely human ability. If a machine could do this, then we might characterize it as having artificial general intelligence. Dreams are unnecessary to intelligent life, yet they define our reality as people. If a computer could dream on its own, not because it was programmed to do so, that might be the greatest indicator that AGI is here.

Artificial General Intelligence is a buzzword, since it represents either a huge promise or a frightening threat. Like any other buzzword, it must be handled with caution. It is more useful to focus on conscious reasoning, compositionality, and out-of-distribution generalization: unlike the singularity or AGI, these represent concrete approaches to improving ML algorithms and genuinely advancing the performance of artificial intelligence.

From a technology standpoint, we are very far from having the ability to make AGI. Nonetheless, given how quickly technology progresses, we may be only a few decades away. Some experts anticipate the first rough artificial general intelligence arriving by around 2030, not very distant. However, experts also expect that it won't be until 2060 that AGI is good enough to pass a consciousness test. In other words, it is likely to be decades before we see an AI that could pass for a human.

See original here:
The Emergence of Artificial General Intelligence: Are we There? - Analytics Insight

Global Artificial Intelligence in Construction Market latest demand by 2020-2026 with leading players & COVID-19 Analysis – re:Jerusalem

The latest industry report focuses on the Artificial Intelligence in Construction Market and gives a professional and in-depth analysis of the global market and its future prospects in 2020. The report begins with a review of the business environment and characterizes the industry chain structure, then highlights the industry size and forecast of the Artificial Intelligence in Construction market during 2020-2026. This report covers current market conditions and the competitive landscape, including key players (IBM, Microsoft, Oracle, SAP, Alice Technologies, eSUB, SmarTVid.Io, DarKTrace, Aurora Computer Services, Autodesk, Jaroop, Lili.Ai, Predii, Assignar, Deepomatic, Coins Global, Beyond Limits, Doxel, Askporter, Plangrid, Renoworks Software, Building System Planning, Bentley Systems), and is segmented by product type, applications, and geographic regions such as the United States, Europe, China, Japan, India, and South-east Asia.

For a sample copy of the report, ask here: https://www.syndicatemarketresearch.com/sample/artificial-intelligence-in-construction-market

(We provide a free sample copy as per your research requirements, including COVID-19 impact analysis.)

This report presents a realistic view of manufacturing value, market share (%), growth rate, income, and sales revenue of every kind. The report gives the current market size of the Global Artificial Intelligence in Construction market and its growth rate based on the most recent five years of historical records, along with company profiles of top players/producers such as IBM, Microsoft, Oracle, SAP, Alice Technologies, eSUB, SmarTVid.Io, DarKTrace, Aurora Computer Services, Autodesk, Jaroop, Lili.Ai, Predii, Assignar, Deepomatic, Coins Global, Beyond Limits, Doxel, Askporter, Plangrid, Renoworks Software, Building System Planning, and Bentley Systems. Additionally, it provides accurate statistics by segment of the Artificial Intelligence in Construction market to support future planning and essential decisions for improvement. The study report further spotlights market materials and limits, and provides information on development and trends, innovations, the CAPEX cycle, and the dynamic structure of the Artificial Intelligence in Construction market.

This Global Artificial Intelligence in Construction market research report is split by product type, such as Cloud and On-premises, and segmented by application/end users, such as Residential, Institutional Commercials, Heavy Construction, and Others, on the basis of gross margin, pricing, and sales profit (million USD), with industry size and forecasts built from yearly functions and operations.

For any inquiries, or to check the discount on this report, visit here: https://www.syndicatemarketresearch.com/inquiry/artificial-intelligence-in-construction-market

Regionally, this Artificial Intelligence in Construction Market report divides the market into several regions by consumption, production, revenue (million USD), and growth rate (CAGR), covering key regions such as North America, South America, Europe, the Middle East and Africa, and South-east Asia (India, China, Japan, Korea), with forecasts for 2020-2026.

The Global Artificial Intelligence in Construction Market research report helps define the outline of all products in granular detail, with a critical impression of the latest developments and turning points, such as the organizations currently performing in the market worldwide. With the most recent five years of revenue figures, the report further offers suggestions to individuals and associations about present-day business investment opportunities in the Artificial Intelligence in Construction market before assessing their feasibility.

This report contains 15 chapters that examine the Global Artificial Intelligence in Construction Market in depth:

Chapter 1: Introduction, market review, market risks and opportunities, market driving forces, and product scope of the Artificial Intelligence in Construction Market;
Chapter 2: Leading manufacturers (cost structure, raw materials) with sales analysis, revenue analysis, and price analysis;
Chapter 3: Competitive situation among the top producers, with sales, revenue, and Artificial Intelligence in Construction market share in 2020;
Chapter 4: Regional analysis of the Global Artificial Intelligence in Construction Market with industry revenue and sales, from 2020 to 2022;
Chapters 5, 6, 7: Analysis of the key countries (United States, China, Europe, Japan, Korea & Taiwan), with sales, revenue, and market share in key regions;
Chapters 8 and 9: International and regional marketing type analysis, supply chain analysis, and trade type analysis;
Chapters 10 and 11: Market by product type and application/end users (industry sales, share, and growth rate) from 2020 to 2026;
Chapter 12: Artificial Intelligence in Construction Market forecast by region, by type, and by application, with revenue and sales, from 2020 to 2025;
Chapters 13, 14 & 15: Research findings and conclusion, appendix, methodology, and data sources for Artificial Intelligence in Construction market buyers, merchants, dealers, and sales channels.

Key Features of the Artificial Intelligence in Construction Market:

Detailed research on the standard Artificial Intelligence in Construction market makers allows the entire market to review modernization plans and ongoing examinations.
An accurate overview of the Artificial Intelligence in Construction market, based on expansion, driving and constraining factors, and points of investment, can anticipate market progress.
The investigation of emerging Artificial Intelligence in Construction market segments and the dominant segments will help readers plan their business strategies.
Essential appraisals of the Artificial Intelligence in Construction industry, such as cost, types of uses, product definitions, and demand and supply elements, are acknowledged in this study report.

Syndicate Market Research provides customization of reports as per your needs. The report can be altered to meet your requirements. Contact our sales team, who will guarantee you get a report that suits your needs.

If you have any special requirements, please let us know and we will offer you the report as you want.

About Syndicate Market Research:

At Syndicate Market Research, we provide reports about a range of industries such as healthcare & pharma, automotive, IT, insurance, security, packaging, electronics & semiconductors, medical devices, food & beverage, software & services, manufacturing & construction, defense aerospace, agriculture, consumer goods & retailing, and so on. Every aspect of the market is covered in the report along with its regional data. Syndicate Market Research is committed to the requirements of our clients, offering tailored solutions best suited for strategy development and execution to get substantial results. Above this, we will be available for our clients 24/7.

Contact US:

Syndicate Market Research
244 Fifth Avenue, Suite N202
New York, 10001, United States
Email ID: sales@syndicatemarketresearch.com
Website: https://www.syndicatemarketresearch.com/
Blog: Syndicate Market Research Blog

Read the original:
Global Artificial Intelligence in Construction Market latest demand by 2020-2026 with leading players & COVID-19 Analysis - re:Jerusalem

Artificial Intelligence Predicts Acute Kidney Injury in COVID-19 Patients | The Weather Channel – Articles from The Weather Channel | weather.com -…

A new artificial-intelligence-based algorithm may help clinicians predict which patients with COVID-19 face a high risk of developing acute kidney injury (AKI) requiring dialysis, say researchers.

In a recent study, a new algorithm achieved good performance for predicting which hospitalized patients will develop acute kidney injury requiring dialysis.

"A machine learning model using admission features had a good performance for prediction of dialysis need," said study co-author Lili Chan from Mount Sinai Health System in the US.

"Models like this are potentially useful for resource allocation and planning during future COVID-19 surges. We are in the process of deploying this model into our healthcare systems to help clinicians better care for their patients," Chan added.

According to the researchers, preliminary reports indicate that acute kidney injury is common in patients with COVID-19. Using data from more than 3,000 hospitalized patients with COVID-19, investigators trained a model based on machine learning, a type of artificial intelligence, to predict AKI that requires dialysis. Only information gathered within the first 48 hours of admission was included, so predictions could be made soon after patients were admitted.

The model demonstrated high accuracy (AUC of 0.79), and features that were important for prediction included blood levels of creatinine and potassium, age, and vital signs of heart rate and oxygen saturation.
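
The article names the important admission features but not the model family, so the sketch below uses gradient boosting as a stand-in and synthetic data in place of patient records; only the feature names and the 0.79 AUC figure come from the article.

    import numpy as np
    from sklearn.ensemble import GradientBoostingClassifier
    from sklearn.metrics import roc_auc_score
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(42)
    n = 3000  # roughly the cohort size described above
    # Placeholder columns named after the features the article highlights:
    # creatinine, potassium, age, heart rate, oxygen saturation.
    X = rng.normal(size=(n, 5))
    y = rng.integers(0, 2, size=n)  # 1 = developed AKI requiring dialysis

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
    clf = GradientBoostingClassifier().fit(X_tr, y_tr)
    auc = roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1])
    print(f"AUC: {auc:.2f}")  # the real model reaches about 0.79 on actual data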

The research is scheduled to be presented online during ASN Kidney Week 2020 Reimagined October 19-25.

The above article has been published from a wire agency with minimal modifications to the headline and text.

See more here:
Artificial Intelligence Predicts Acute Kidney Injury in COVID-19 Patients | The Weather Channel - Articles from The Weather Channel | weather.com -...

Artificial intelligence and the antitrust case against Google – VentureBeat

Following the launch of investigations last year, the U.S. Department of Justice (DOJ), together with attorneys general from 11 U.S. states, filed a lawsuit against Google on Tuesday alleging that the company maintains monopolies in online search and advertising, and violates laws prohibiting anticompetitive business practices.

It's the first antitrust lawsuit federal prosecutors have filed against a tech company since the Department of Justice brought charges against Microsoft in the 1990s.

"Back then, Google claimed Microsoft's practices were anticompetitive, and yet, now, Google deploys the same playbook to sustain its own monopolies," the complaint reads. "For the sake of American consumers, advertisers, and all companies now reliant on the internet economy, the time has come to stop Google's anticompetitive conduct and restore competition."

No attorneys general from Democratic states joined the suit. State attorneys general, Democrats and Republicans alike, plan to continue their own investigations, signaling that more charges or backing from states might be on the way. Both the antitrust investigation completed by a congressional subcommittee earlier this month and the new DOJ lawsuit advocate breaking up tech companies as a potential solution.

The 64-page complaint characterizes Google as a monopoly gatekeeper for the internet and spells out the reasoning behind the lawsuit in detail, documenting the company's beginnings at Stanford University in the 1990s alongside deals made in the past decade with companies like Apple and Samsung to maintain Google's dominance. Also key to Google's power and plans for the future is access to personal data and artificial intelligence. In this story, we take a look at the myriad ways in which artificial intelligence plays a role in the antitrust case against Google.

The best place to begin when examining the role AI plays in Google's antitrust case is online search, which is powered by algorithms and automated web crawlers that scour webpages for information. Personalized search results, made possible by the collection of personal data, started in 2009, and today Google can search for images, videos, and even songs that people hum. Google dominates the $40 billion online search industry, and that dominance acts like a self-reinforcing cycle: more data leads to more training data for algorithms, defense against competition, and more effective advertising.

"General search services, search advertising, and general search text advertising require complex algorithms that are constantly learning which organic results and ads best respond to user queries; the volume, variety, and velocity of data accelerates the automated learning of search and search advertising algorithms," the complaint reads. "The additional data from scale allows improved automated learning for algorithms to deliver more relevant results, particularly on fresh queries (queries seeking recent information), location-based queries (queries asking about something in the searcher's vicinity), and long-tail queries (queries used infrequently)."

Search is now primarily conducted on mobile devices like smartphones or tablets. To build monopolies in mobile search and create scale insurmountable to competitors, the complaint states, Google turned to exclusionary agreements with smartphone sellers like Apple and Samsung, as well as revenue sharing with wireless carriers. The Apple-Google symbiosis is in fact so important that losing it is referred to as "code red" at Google, according to the DOJ filing. An unnamed senior Apple employee corresponding with their counterpart at Google said it is Apple's vision that the two companies operate "as if one company." Today, Google accounts for four out of five web searches in the United States and 95% of mobile searches. Last year, Google estimated that nearly half of all search traffic originated on Apple devices, while 15-20% of Apple's income came from Google.

Exclusive agreements that put Google apps on mobile devices effectively captured hundreds of millions of users. An antitrust report referenced these data advantages, stating that Google's anticompetitive conduct "effectively eliminates rivals' ability to build the scale necessary to compete."

In addition to the DOJ report, the antitrust report Congress released earlier this month frequently cites the network effect achieved by Big Tech companies as a significant barrier to entry for smaller businesses or startups. "The incumbents have access to large data sets that give them a big advantage, especially when combined with machine learning and AI," the report reads. "Companies with superior access to data can use that data to better target users or improve product quality, drawing more users and, in turn, generating more data, an advantageous feedback loop."

Network effects often come up in the congressional report in reference to mobile operating systems, public cloud providers, and AI assistants like Alexa and Google Assistant, which improve their machine learning models through the collection of data like voice recordings.

One potential solution the congressional investigation suggested is better data portability to help small businesses compete with tech giants.

One part of maintaining Google's search monopoly, according to the congressional report, is control of emerging search access points. While Google searches began on desktop computers, mobile is king today, and fast emerging are devices like smartwatches, smart speakers, and IoT devices with AI assistants like Alexa, Google Assistant, and Siri. Virtual assistants, which use AI to turn speech into text and predict a user's intent, are becoming a new battleground. An internal Google document declared that voice "will become the future of search."

The growth of searches via Amazon Echo devices is why a Morgan Stanley analyst previously suggested Google give everyone in the country a free speaker. In the end, he concluded, it would be cheaper for Google to give away hundreds of millions of speakers than to lose its edge to Amazon.

The scale afforded by Android and native Google apps also appears to be a key part of Google Assistants ability to understand or translate dozens of languages and collect voice data across the globe.

Search is primarily done on mobile devices today. That's what drives the symbiotic relationship between Apple and Google, where Apple receives 20% of its total revenue from Google in exchange for making Google the de facto search engine on iOS phones, which still make up about 60% of the U.S. smartphone market.

The DOJ suit states that Google is concentrating on Google Nest IoT devices and smart speakers because internet searches will increasingly take place using voice orders. The company wants to control the next popular environment for search queries, the DOJ says, whether it be wearable devices like smartwatches or activity monitors from Fitbit, which Google announced plans to acquire roughly one year ago.

Google recognizes that its hardware products also have "HUGE defensive value" in the virtual assistant space and in combatting query erosion in its core Search business. Looking ahead to the future of search, Google sees that Alexa and others may increasingly be a substitute for Search and browsers "with additional sophistication and push into screen devices," the DOJ report reads. Google has also "harmed competition by raising rivals' costs and foreclosing them from effective distribution channels, such as distribution through voice assistant providers," preventing them from meaningfully challenging Google's monopoly in general search services.

In other words, only Google Assistant can get microphone access on a smartphone to respond to a wake word like "Hey, Google," a tactic the complaint says handicaps rivals.

AI like Google Assistant also features prominently in the antitrust report released by the Democrat-led antitrust subcommittee in Congress, which refers to AI assistants as efforts to lock consumers into information ecosystems. The easiest way to spot this lock-in is to consider that Google prioritizes YouTube, Apple wants you to use Apple Music, and Amazon wants users to subscribe to Amazon Prime Music.

The congressional report also documents the recent history of Big Tech companies acquiring startups. It alleges that in order to avoid competition from up-and-coming rivals, companies like Google have bought up startups in emerging fields like artificial intelligence and augmented reality.

If you expect a quick ruling by the DC Circuit Court in the antitrust lawsuit against Google, you'll be disappointed; that doesn't seem at all likely. Taking the 1970s case against IBM and the Microsoft suit in the 1990s as a guide, antitrust cases tend to take years. In fact, it's not outside the realm of possibility that this case could still be happening the next time voters pick a president in 2024.

What does seem clear from the language used in both US v. Google and the congressional antitrust report is that both Democrats and Republicans are willing to consider separating company divisions in order to maintain competitive markets and a healthy digital economy. What's also clear is that both the Justice Department and antitrust lawmakers in Congress see action as necessary, based in part on how Google treats personal data and artificial intelligence.

Read the original post:
Artificial intelligence and the antitrust case against Google - VentureBeat

The Next Generation Of Artificial Intelligence – Forbes

AI legend Yann LeCun, one of the godfathers of deep learning, sees self-supervised learning as the key to AI's future.

The field of artificial intelligence moves fast. It has only been 8 years since the modern era of deep learning began at the 2012 ImageNet competition. Progress in the field since then has been breathtaking and relentless.

If anything, this breakneck pace is only accelerating. Five years from now, the field of AI will look very different than it does today. Methods that are currently considered cutting-edge will have become outdated; methods that today are nascent or on the fringes will be mainstream.

What will the next generation of artificial intelligence look like? Which novel AI approaches will unlock currently unimaginable possibilities in technology and business? This article highlights three emerging areas within AI that are poised to redefine the field (and society) in the years ahead. Study up now.

The dominant paradigm in the world of AI today is supervised learning. In supervised learning, AI models learn from datasets that humans have curated and labeled according to predefined categories. (The term "supervised learning" comes from the fact that human "supervisors" prepare the data in advance.)

While supervised learning has driven remarkable progress in AI over the past decade, from autonomous vehicles to voice assistants, it has serious limitations.

The process of manually labeling thousands or millions of data points can be enormously expensive and cumbersome. The fact that humans must label data by hand before machine learning models can ingest it has become a major bottleneck in AI.

At a deeper level, supervised learning represents a narrow and circumscribed form of learning. Rather than being able to explore and absorb all the latent information, relationships and implications in a given dataset, supervised algorithms orient only to the concepts and categories that researchers have identified ahead of time.

In contrast, unsupervised learning is an approach to AI in which algorithms learn from data without human-provided labels or guidance.

Many AI leaders see unsupervised learning as the next great frontier in artificial intelligence. In the words of AI legend Yann LeCun: "The next AI revolution will not be supervised." UC Berkeley professor Jitendra Malik put it even more colorfully: "Labels are the opium of the machine learning researcher."

How does unsupervised learning work? In a nutshell, the system learns about some parts of the world based on other parts of the world. By observing the behavior of, patterns among, and relationships between entities (for example, words in a text or people in a video), the system bootstraps an overall understanding of its environment. Some researchers sum this up with the phrase "predicting everything from everything else."

Unsupervised learning more closely mirrors the way that humans learn about the world: through open-ended exploration and inference, without a need for the training wheels of supervised learning. One of its fundamental advantages is that there will always be far more unlabeled data than labeled data in the world (and the former is much easier to come by).

In the words of LeCun, who prefers the closely related term "self-supervised learning": "In self-supervised learning, a portion of the input is used as a supervisory signal to predict the remaining portion of the input. ... More knowledge about the structure of the world can be learned through self-supervised learning than from [other AI paradigms], because the data is unlimited and the amount of feedback provided by each example is huge."
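
A toy numerical version of that idea, assuming nothing beyond NumPy: hide one part of each input and use it as the supervisory signal for a model that sees the rest. No human labels are involved.

    import numpy as np

    # Each example is a vector; we hide the last element and learn to
    # predict it from the visible portion. The "label" is part of the data.
    rng = np.random.default_rng(0)
    data = rng.normal(size=(1000, 10))
    data[:, -1] = 0.5 * data[:, :-1].sum(axis=1)  # hidden structure to discover

    visible, hidden = data[:, :-1], data[:, -1]
    # Least squares finds the relationship between the visible part and the
    # held-out part, i.e. it "predicts one part from another"
    w, *_ = np.linalg.lstsq(visible, hidden, rcond=None)
    print(np.allclose(w, 0.5))  # True: the structure was recovered without labels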

Unsupervised learning is already having a transformative impact in natural language processing. NLP has seen incredible progress recently thanks to a new unsupervised learning architecture known as the Transformer, which originated at Google about three years ago. (See #3 below for more on Transformers.)

Efforts to apply unsupervised learning to other areas of AI remain at earlier stages, but rapid progress is being made. To take one example, a startup named Helm.ai is seeking to use unsupervised learning to leapfrog the leaders in the autonomous vehicle industry.

Many researchers see unsupervised learning as the key to developing human-level AI. According to LeCun, mastering unsupervised learning is "the greatest challenge in ML and AI of the next few years."

One of the overarching challenges of the digital era is data privacy. Because data is the lifeblood of modern artificial intelligence, data privacy issues play a significant (and often limiting) role in AI's trajectory.

Privacy-preserving artificial intelligence, that is, methods that enable AI models to learn from datasets without compromising their privacy, is thus becoming an increasingly important pursuit. Perhaps the most promising approach to privacy-preserving AI is federated learning.

The concept of federated learning was first formulated by researchers at Google in early 2017. Over the past year, interest in federated learning has exploded: more than 1,000 research papers on federated learning were published in the first six months of 2020, compared to just 180 in all of 2018.

The standard approach to building machine learning models today is to gather all the training data in one place, often in the cloud, and then to train the model on the data. But this approach is not practicable for much of the world's data, which for privacy and security reasons cannot be moved to a central data repository. This makes it off-limits to traditional AI techniques.

Federated learning solves this problem by flipping the conventional approach to AI on its head.

Rather than requiring one unified dataset to train a model, federated learning leaves the data where it is, distributed across numerous devices and servers on the edge. Instead, many versions of the model are sent out, one to each device with training data, and trained locally on each subset of data. The resulting model parameters, but not the training data itself, are then sent back to the cloud. When all these "mini-models" are aggregated, the result is one overall model that functions as if it had been trained on the entire dataset at once.
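
A minimal sketch of this loop, assuming a simple linear model trained by gradient descent on each device. The size-weighted averaging follows the published FedAvg recipe; everything else (model, learning rate, round counts) is illustrative.

    import numpy as np

    def local_update(weights, X, y, lr=0.1, epochs=5):
        """Train a linear model locally on one device's private data."""
        w = weights.copy()
        for _ in range(epochs):
            grad = X.T @ (X @ w - y) / len(y)  # least-squares gradient
            w -= lr * grad
        return w

    def federated_average(global_w, devices):
        """One communication round: only parameters leave the devices."""
        local_ws = [local_update(global_w, X, y) for X, y in devices]
        sizes = np.array([len(y) for _, y in devices], dtype=float)
        # Weight each device's update by its dataset size, as in FedAvg
        return np.average(local_ws, axis=0, weights=sizes)

    rng = np.random.default_rng(0)
    true_w = np.array([2.0, -1.0])
    devices = []  # each device holds raw data that never moves
    for _ in range(10):
        X = rng.normal(size=(100, 2))
        devices.append((X, X @ true_w + rng.normal(scale=0.1, size=100)))

    w = np.zeros(2)
    for _ in range(20):  # communication rounds
        w = federated_average(w, devices)
    print(w)  # approaches [2.0, -1.0] without any raw data leaving a device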

The original federated learning use case was to train AI models on personal data distributed across billions of mobile devices. As those researchers summarized: "Modern mobile devices have access to a wealth of data suitable for machine learning models. ... However, this rich data is often privacy sensitive, large in quantity, or both, which may preclude logging to the data center. ... We advocate an alternative that leaves the training data distributed on the mobile devices, and learns a shared model by aggregating locally-computed updates."

More recently, healthcare has emerged as a particularly promising field for the application of federated learning.

It is easy to see why. On one hand, there are an enormous number of valuable AI use cases in healthcare. On the other hand, healthcare data, especially patients' personally identifiable information, is extremely sensitive; a thicket of regulations like HIPAA restricts its use and movement. Federated learning could enable researchers to develop life-saving healthcare AI tools without ever moving sensitive health records from their source or exposing them to privacy breaches.

A host of startups has emerged to pursue federated learning in healthcare. The most established is Paris-based Owkin; earlier-stage players include Lynx.MD, Ferrum Health and Secure AI Labs.

Beyond healthcare, federated learning may one day play a central role in the development of any AI application that involves sensitive data: from financial services to autonomous vehicles, from government use cases to consumer products of all kinds. Paired with other privacy-preserving techniques like differential privacy and homomorphic encryption, federated learning may provide the key to unlocking AIs vast potential while mitigating the thorny challenge of data privacy.

The wave of data privacy legislation being enacted worldwide today (starting with GDPR and CCPA, with many similar laws coming soon) will only accelerate the need for these privacy-preserving techniques. Expect federated learning to become an important part of the AI technology stack in the years ahead.

We have entered a golden era for natural language processing.

OpenAI's release of GPT-3, the most powerful language model ever built, captivated the technology world this summer. It has set a new standard in NLP: it can write impressive poetry, generate functioning code, compose thoughtful business memos, write articles about itself, and so much more.

GPT-3 is just the latest (and largest) in a string of similarly architected NLP models (Google's BERT, OpenAI's GPT-2, Facebook's RoBERTa, and others) that are redefining what is possible in NLP.

The key technology breakthrough underlying this revolution in language AI is the Transformer.

Transformers were introduced in a landmark 2017 research paper. Previously, state-of-the-art NLP methods had all been based on recurrent neural networks (e.g., LSTMs). By definition, recurrent neural networks process data sequentially, that is, one word at a time, in the order that the words appear.

The Transformer's great innovation is to make language processing parallelized: all the tokens in a given body of text are analyzed at the same time rather than in sequence. In order to support this parallelization, Transformers rely heavily on an AI mechanism known as attention. Attention enables a model to consider the relationships between words, regardless of how far apart they are, and to determine which words and phrases in a passage are most important to "pay attention to."

Why is parallelization so valuable? Because it makes Transformers vastly more computationally efficient than RNNs, meaning they can be trained on much larger datasets. GPT-3 was trained on roughly 500 billion words and consists of 175 billion parameters, dwarfing any RNN in existence.
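
The attention mechanism itself is compact enough to sketch directly. Below is standard scaled dot-product attention written in plain NumPy with arbitrary toy dimensions; production models wrap this in multiple heads, stacked layers, and learned projections.

    import numpy as np

    def scaled_dot_product_attention(Q, K, V):
        """softmax(Q K^T / sqrt(d_k)) V. Every token attends to every other
        token at once, which is what lets Transformers process a whole
        sequence in parallel instead of word by word."""
        d_k = Q.shape[-1]
        scores = Q @ K.T / np.sqrt(d_k)  # pairwise token affinities
        weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
        weights /= weights.sum(axis=-1, keepdims=True)  # softmax over keys
        return weights @ V

    rng = np.random.default_rng(0)
    seq_len, d_model = 6, 8  # a toy sequence of six token embeddings
    X = rng.normal(size=(seq_len, d_model))
    Wq, Wk, Wv = (rng.normal(size=(d_model, d_model)) for _ in range(3))
    out = scaled_dot_product_attention(X @ Wq, X @ Wk, X @ Wv)
    print(out.shape)  # (6, 8): one context-aware vector per token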

Transformers have been associated almost exclusively with NLP to date, thanks to the success of models like GPT-3. But just this month, a groundbreaking new paper was released that successfully applies Transformers to computer vision. Many AI researchers believe this work could presage a new era in computer vision. (As well-known ML researcher Oriol Vinyals put it simply: "My take is: farewell convolutions.")

While leading AI companies like Google and Facebook have begun to put Transformer-based models into production, most organizations remain in the early stages of productizing and commercializing this technology. OpenAI has announced plans to make GPT-3 commercially accessible via API, which could seed an entire ecosystem of startups building applications on top of it.

Expect Transformers to serve as the foundation for a whole new generation of AI capabilities in the years ahead, starting with natural language. As exciting as the past decade has been in the field of artificial intelligence, it may prove to be just a prelude to the decade ahead.

Excerpt from:
The Next Generation Of Artificial Intelligence - Forbes

Using Artificial Intelligence to speed up recovery time and save patients money – fox13now.com

Intermountain Healthcare is making some high tech changes that could help you save money while improving recovery time.

David Skarda, MD, Intermountain Healthcare's medical director for the Center for Value-Based Surgery, is helping establish a surgical care process model that changes the way Intermountain analyzes and codes surgeries to create better outcomes and lower costs for patients. This new state-of-the-art tool uses artificial intelligence to analyze supply chain data, claims, and anything associated with the cost of care from 30 days before to 90 days after a surgery.

So far, the tool is being used for two procedures across the Intermountain system, and it's already projected to save more than $8 million during the fiscal year. The savings are expected to increase as the technology is applied to other surgical procedures.

"In the past, most health systems would save money by cutting out devices or procedures that cost the most," said Dr. Skarda. "By analyzing total medical costs over 120 days, we get a clearer picture of what gives us the best surgical outcomes, which also tends to lower the total cost of care."

Looking at the total cost of surgery, and not just what happens in the operating room, gives clinicians the information needed to improve care, said Dr. Skarda.

An example is a knee replacement, which is a common procedure. The AI system analyzes the cost of the knee replacement device but also looks at any medications, imaging, physical therapy, and complications over the 120-day period. If a device is slightly more expensive but leads to fewer complications and quicker recovery, the system recognizes it as a better value even though the initial cost is higher.
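
In accounting terms, the comparison reduces to summing every cost in the 120-day episode rather than looking at the implant price alone. A toy illustration with entirely made-up numbers:

    # All figures below are invented for illustration; the point is only
    # that the cheaper implant can lose once the full episode is counted.
    episodes = {
        "device A (cheaper implant)": {"implant": 4000, "imaging": 600,
                                       "physical_therapy": 900,
                                       "complications": 3500, "medications": 400},
        "device B (pricier implant)": {"implant": 5200, "imaging": 600,
                                       "physical_therapy": 700,
                                       "complications": 500, "medications": 300},
    }
    for device, costs in episodes.items():
        print(device, "-> 120-day episode cost:", sum(costs.values()))
    # device A -> 9400; device B -> 7300: the pricier implant is the better value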

Trying to find these results using only electronic medical records would be impossible, but combining claims data with the AI system makes the information useful to caregivers.

Intermountain surgeons now receive a report card that shows where they can reduce costs and how other physicians in their field are improving outcomes. This helps doctors make better decisions because it gives them the necessary data to prove what works.

That information can easily be shared to help hospital systems around the world improve the way they give care.

Because of this groundbreaking work Dr. Skarda was recently named to the nations class of Top 25 Innovators for 2020 by Modern Healthcare magazine.

To see the complete Modern Healthcare 2020 innovators list, click here.

See the article here:
Using Artificial Intelligence to speed up recovery time and save patients money - fox13now.com

SparkCognition Advances the Science of Artificial Intelligence with 85 Patents – PRNewswire

AUSTIN, Texas, Oct. 12, 2020 /PRNewswire/ -- SparkCognition, the world's leading industrial artificial intelligence (AI) company, is pleased to announce significant progress in its efforts to develop state-of-the-art AI algorithms and systems, through the award of a substantial number of new patents. Since January 1, 2020, SparkCognition has filed 29 new patents, expanding the company's intellectual property portfolio to 27 awarded patents and 58 pending applications.

"Since SparkCognition's inception, we have placed a major emphasis on advancing the science of AI through research making advancement through innovation a core company value," said Amir Husain, founder and CEO of SparkCognition, and a prolific inventor with over 30 patents. "At SparkCognition, we've built one of the leading Industrial AI research teams in the world. The discoveries made and the new paths blazed by our incredibly talented researchers and scientists will be essential to the future."

SparkCognition's patents have come from inventors in different teams across the organization, and display commercial significance and scientific achievements in autonomy, automated model building, anomaly detection, natural language processing, industrial applications, and foundations of artificial intelligence. A select few include surrogate-assisted neuroevolution, unsupervised model building for clustering and anomaly detection, unmanned systems hubs for dispatch of unmanned vehicles, and feature importance estimation for unsupervised learning. These accomplishments have been incorporated into SparkCognition's products and solutions, and many have been published in peer-reviewed academic venues in order to contribute to the scientific community's shared body of knowledge.

In June 2019, AI research stalwart and two-time Chair of the University of Texas Computer Science Department, Professor Bruce Porter, joined SparkCognition full time as Chief Science Officer, at which time he launched the company's internal AI research organization. This team includes internal researchers, additional talent from a rotation of SparkCognition employees, and faculty from Southwestern University, the University of Texas at Austin, and the University of Colorado at Colorado Springs. The organization works to produce scientific accomplishments such as the patents and publications listed above, advancing the science of AI and supporting SparkCognition's position as an industry leader.

"Over the past two years, we've averaged an AI patent submission nearly every two weeks. This is no small feat for a young company," said Prof. Bruce Porter. "The sheer number of intelligent, science-minded people at SparkCognition keeps the spirit of innovation alive throughout the research organization and the entire company. I'm excited about what this team will continue to achieve going forward, and eagerly awaiting the great discoveries we will make."

To learn more about SparkCognition, visit http://www.sparkcognition.com.

About SparkCognition
With award-winning machine learning technology, a multinational footprint, and expert teams, SparkCognition builds artificial intelligence systems to advance the most important interests of society. Our customers are trusted with protecting and advancing lives, infrastructure, and financial systems across the globe. They turn to SparkCognition to help them analyze complex data, empower decision-making, and transform human and industrial productivity. SparkCognition offers four main products: Darwin™, DeepArmor, SparkPredict, and DeepNLP™. With our leading-edge artificial intelligence platforms, our clients can adapt to a rapidly changing digital landscape and accelerate their business strategies. Learn more about SparkCognition's AI applications and why we've been featured in CNBC's 2017 Disruptor 50 and recognized four years in a row on the CB Insights AI 100, by visiting http://www.sparkcognition.com.

For Media Inquiries:

Michelle Saab
SparkCognition
VP, Marketing Communications
[emailprotected]
512-956-5491

SOURCE SparkCognition

http://www.sparkcognition.com

See the rest here:
SparkCognition Advances the Science of Artificial Intelligence with 85 Patents - PRNewswire