Category Archives: Machine Learning
How to use the intelligence features in iOS 16 to boost productivity and learning – TechRepublic
Apple has packed intelligence features into iOS 16 to allow for translations from videos, copying the subject of a photo and removing the background, and copying text from a video.
A few years ago, Apple began betting on local machine learning in iOS to boost the user experience. It started simply with Photos, but machine learning is now a mainstay in iOS and can help boost productivity at every turn. iOS 16 adds to the machine learning features of iOS with the ability to copy text from a video, perform quick text actions from photos and videos, and easily copy the subject of a photo while removing the background, creating an easy alpha layer.
We'll walk through these three new intelligence features in iOS 16, find out how to use them, and show you all of the ways that you can use these features to boost your productivity and more.
All of the features below only work on iPhones containing an A12 Bionic processor or later, and the translation and text features are only available in the following languages: English, Chinese, French, Italian, German, Japanese, Korean, Portuguese, Spanish and Ukrainian.
One of the cooler features in iOS 16 is the ability to lift the subject of a photo off its background, creating an instant alpha of the subject. This removes the background from the photo and leaves you with a perfectly cut-out photo subject that you can easily paste into a document, iMessage or anywhere else you can imagine.
This feature works on iPhones with the A12 Bionic and later. Inside the Photos app, touch and hold the subject of a photo until it is highlighted, then choose Copy or Share to place the cut-out wherever you need it.
This doesn't only work in Photos: it also works in the screenshot utility, Quick Look, Safari and, soon, other apps. This feature saves a lot of time over opening the photo in a photo editor and manually removing the background.
iOS 15 introduced Live Text, which lets you copy text from a photo or search through your Photos library using text that might be contained in a photo. Apple is ramping up this feature in iOS 16 by allowing you to pause a video and copy text from it as well.
It works like this: pause the video on any frame, touch and hold the text you want until it is selected, then copy it or use one of the quick actions.
This feature is great for online learning environments where students might need to copy an example and paste it into a document or other file.
Live Text has been around for two iterations of iOS, so Apple has started building additional features around it, namely the ability to perform actions on text from a photo or a paused video frame.
When you select text in a photo or paused video, you now have the option of performing quick actions on it, such as copying it, looking it up, translating it, converting currencies or sharing it.
You can do this by selecting the text from the photo or video, then selecting one of the quick actions presented. This works in the Camera app, the Photos app, Quick Look and the iOS video player.
How machine learning could help save threatened species from extinction – The Verge
There are thousands of species on Earth that we still don't know much about, but we now know that many of them are already teetering on the edge of extinction. A new study used machine learning to figure out just how threatened these lesser-known species are, and the results were grim.
Some species of animals and plants are labeled data deficient because conservationists haven't been able to gather enough information about them to understand how they live or how many of them are left. It turns out that those data deficient species are unfortunately even more threatened than other species that are better known (to scientists, at least). The data from this study came from the International Union for Conservation of Nature (IUCN), which maintains a global Red List that ranks species based on how threatened they are.
More than half of the data deficient species included in this study, 56 percent, likely face the risk of extinction. In comparison, just 28 percent of better understood species on the Red List are at risk of extinction.
"Things could be worse than we actually realize now," says Jan Borgelt, an ecologist at the Norwegian University of Science and Technology and the lead author of the study published today in the journal Communications Biology. "More species are likely to be threatened than we previously thought."
Much of Borgelt's work focuses on understanding how human activity like hydroelectricity generation or plastic pollution affects ecosystems and biodiversity. The Red List is an invaluable resource for those efforts. But more than 20,000 species are classified as data deficient. And that blind spot can potentially make research that relies on the Red List less accurate.
To try to solve that problem, Borgelt and his colleagues turned to machine learning. They trained an algorithm to predict the extinction risk of data deficient species. To do that, they used information on 28,363 different kinds of animals that the IUCN has already evaluated. That way, the algorithm could start to understand the factors that often determine how threatened a species is, including climate change, invasive species, and pollution.
Then the researchers turned their attention to 7,699 data deficient species. That's a little over a third of all data deficient species, but Borgelt and his colleagues could only work with species for which they knew the animals' geographic distribution. The algorithm determined that 56 percent of those species are likely at risk of extinction. But some animals are in deeper trouble than others; 85 percent of data deficient amphibians, for instance, are at risk of extinction. That includes the Mali screeching frog, spotted narrow-mouthed frog, and several species of robber frogs. The IUCN doesn't even have photos of these critters on its Red List, but with names like that, don't you want to see them?
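The study's actual code isn't included in the article, but the general recipe it describes (train on species the IUCN has already assessed, then score the data deficient ones) can be sketched roughly as follows. The file names, feature columns and choice of a random forest are illustrative assumptions, not details from the paper.

```python
# Rough sketch of the approach described above; not the study's actual code.
# File names and feature columns are hypothetical placeholders.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier

features = ["climate_change_exposure", "invasive_species_pressure", "pollution_index"]

# Species the IUCN has already evaluated, with a known threatened / not-threatened label.
assessed = pd.read_csv("iucn_assessed_species.csv")
model = RandomForestClassifier(n_estimators=500, random_state=0)
model.fit(assessed[features], assessed["threatened"])

# Data deficient species with known geographic ranges but no Red List category.
data_deficient = pd.read_csv("data_deficient_species.csv")
risk = model.predict_proba(data_deficient[features])[:, 1]
print(f"Share likely threatened: {(risk > 0.5).mean():.0%}")
```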
Their research received some validation when the IUCN updated its Red List last year. One hundred twenty-three of the species in the update were species that the algorithm had made predictions about. More than two-thirds of the algorithm's predictions, 76 percent, were correct.
"That was reassuring," Borgelt tells The Verge. But he also understands the limitations of machine learning. For now, "[these algorithms] certainly shouldn't replace expert assessments," he says, because the expert assessments are more accurate.
"But such algorithms, they're really quick. They're not so time intensive or labor intensive as if you were to assess the species individually," Borgelt says.
The creatures' numbers out in the wild might have eluded researchers for plenty of reasons. The killer whale, for example, happens to be labeled data deficient. Even though the orca starred in my favorite '90s movie and lived on all my childhood notebooks in the form of Lisa Frank stickers, scientists aren't even sure whether there's just one species of killer whale or several. Other animals might be found only in remote regions with a limited range, for instance. And the same characteristics that make them hard to study might also make them more vulnerable.
That makes it all the more important to give these species some well-deserved attention. Machine learning, Borgelt says, isn't a replacement for tracking down the animals on the ground. But it's another tool in the toolbox, and it could help conservationists figure out which species need some extra TLC.
Artificial Intelligence is giving drug discovery a great big leap – Mint
Last month, Alphabet's artificial intelligence subsidiary, DeepMind, stunned the world of science by presenting something truly spectacular: a snapshot of nearly every existing protein on Earth, 200 million of them. This feat of machine learning could speed the creation of new drugs. It has already upended my own scepticism about the role AI can play in the pharmaceutical industry.
AlphaFold, DeepMind's protein structure program, is impressive because it reveals so much fundamental information about living organisms. Proteins are the building blocks of life, after all, and as such they are essential to life and to the development of medicines. Proteins can be drug targets, and they can themselves be drugs. In either case, it is important to know the intricate ways in which they fold into various shapes. Their coils, floppy bits, hidden pockets and sticky patches can control, for example, when a signal is sent between cells or if a process is turned on or off. Until now, capturing an image of a protein has required painstaking work lasting anywhere from days to months to years.
Since the early 1990s, scientists have been trying to train computers to predict a protein's structure based on its genetic sequence. AlphaFold had its first taste of success in 2020, when it correctly predicted the structures of a handful of proteins. The next year, DeepMind put about 365,000 proteins on its server. Now, it's put the entire universe of proteins up for grabs, in animals, plants, bacteria, fungi and other living things. All 200 million of them. Much as the gene-editing tool Crispr revolutionized the study of human disease and the design of drugs to target genetic errors, AlphaFold's feat is fundamentally changing the way new medicines can be invented.
"Anybody who could have thought that machine learning was not yet relevant for drug hunting surely must feel different," said Jay Bradner, president of the Novartis Institutes for BioMedical Research, the pharma company's research arm. "I'm on it more than Spotify."
Count me as a former sceptic. I hadn't discounted the possibility that AI might have an impact on the drug industry, but I was wary of the many biotech firms hyping often ill-defined machine-learning capabilities. Companies often claimed that they could use AI to invent a new drug without acknowledging that the starting point, a protein structure, still needed to be worked out by a human. And so far, people have had to invent drugs first for the computer to improve upon.
Producing the full compendium of proteins is something entirely different. It's little wonder that executives at biotech and pharma companies are widely adopting AlphaFold's revelations.
Rosana Kapeller, chief executive officer of Rome Therapeutics, offers an example from her company's labs. Rome is probing the dark genome, the repetitive portion of the human genetic code that is believed to be largely a relic of ancient viruses. Rome's team spent more than six months refining its first image of one protein embedded in that dark genome. Just one day after they captured an initial snapshot of a second protein, DeepMind dropped its complete load of images. Within 24 hours, Rome's scientists had perfected their picture. "So you see," she said, "that's amazing."
None of this is to say that AlphaFold will solve every problem in drug discovery, or even that its 200 million protein images are perfect. They're not. Some need more work, and others are more akin to a child's scribbles than fleshed-out images. Scientists tell me that even when the snapshots are imperfect, they have enough information to provide a rough sense of where the important bits are. David Liu, a professor at the Broad Institute of MIT and Harvard, said the technology still allows researchers in his lab to achieve that "Zen-like understanding state" to decide where to tinker with a protein to change its properties.
But proteins also don't exist as still snapshots. Depending on the job they're performing at a given moment, they yawn and jiggle and twist inside a cell. In other words, AlphaFold gives us protein Instagram; scientists would love to have protein TikTok or, eventually, protein YouTube. Even if that becomes possible, this addresses just one step in the process of creating new drugs. The most expensive part is testing that new medicine in humans.
Nevertheless, AlphaFold's pictures can help drugmakers get to the testing stage faster. DeepMind's feat may have taken several years of exploration, but it produced something with major consequences. And it made that work freely available. Finally, we are getting a glimpse of AI's potential to transform the drug industry. And now it's possible to consider which problems machine learning might solve next for science and medicine.
Lisa Jarvis is a Bloomberg Opinion columnist covering biotech, health care and the pharmaceutical industry.
This Teenager Invented a Low-Cost Tool to Spot Elephant Poachers in Real Time – Smithsonian Magazine
ElSa is a prototype of machine-learning-driven software that analyzes movement patterns in videos of humans and elephants. Society for Science
When Anika Puri visited India with her family four years ago, she was surprised to come across a market in Bombay filled with rows of ivory jewelry and statues. Globally, ivory trade has been illegal for more than 30 years, and elephant hunting has been prohibited in India since the 1970s.
"I was quite taken aback," the 17-year-old from Chappaqua, New York, recalls. "Because I always thought, well, poaching is illegal, how come it really is still such a big issue?"
Curious, Puri did some research and discovered a shocking statistic: Africa's forest elephant population had declined by about 62 percent between 2002 and 2011. Years later, the numbers continue to drop. A wildlife lover, Puri wanted to do something to help protect the species and others still threatened by poaching.
"Drones are currently used to detect and capture images of poachers, and they aren't that accurate," the teenager explains. But after watching videos of elephants and humans, she saw how the two differed vastly in the way they move: their speed, their turning patterns and other motions.
"I realized that we could use this disparity between these two movement patterns in order to actually increase the detection accuracy of potential poachers," she says.
Over the course of two years, Puri created ElSa (short for "elephant savior"), a low-cost prototype of machine-learning-driven software that analyzes movement patterns in thermal infrared videos of humans and elephants. Puri says the software is four times more accurate than existing state-of-the-art detection methods. It also eliminates the need for expensive high-resolution thermal cameras, which can cost in the thousands of dollars, she says. ElSa uses a $250 FLIR ONE Pro thermal camera with 206x156 pixel resolution that plugs into an off-the-shelf iPhone 6. The camera and iPhone are then attached to a drone, and as the system flies over parks it produces real-time inferences about whether objects below are humans or elephants.
"It's really amazing just to see all these kids coming together. And for the same purpose enjoying science and doing research," Puri says. "I was honored just to be on that stage."
Puri first learned about the capabilities of artificial intelligence just after ninth grade, when she was selected to attend Stanford AI Lab's summer program.
"Initially, my enthusiasm for artificial intelligence was based off of this limitless possibility for social good," she says. But she soon discovered that because data is collected and analyzed by humans, it contains human biases, and so does A.I. as a result.
"It really has the capability to reinforce some of the worst aspects of our society," she says. "What I really realized from this is how important it is that women, people of color, all sorts of minorities in the field of technology are at the forefront of this kind of groundbreaking technology."
About a year later, Puri founded a nonprofit called mozAIrt, which inspires girls and other underrepresented groups to get involved in computer science using a combination of music, art and A.I.
At an A.I. conference where she held a workshop, Puri met Elizabeth Bondi-Kelly, a Harvard computer scientist who was working on a wildlife conservation project using drones and machine learning. Bondi-Kelly had also started a nonprofit, called Try AI, to increase diversity in the field.
Puri reached out to the computer scientist about her idea to catch elephant poachers using movement patterns, and Bondi-Kelly became her mentor for the project.
To create her model, Puri first found movement patterns of humans and elephants using the Benchmarking IR Dataset for Surveillance with Aerial Intelligence (BIRDSAI), a dataset collected by Bondi-Kelly and her colleagues using a thermal infrared camera attached to an unmanned aerial vehicle (UAV) in multiple protected areas in Africa. Sifting through the data, Puri identified 516 time series extracted from videos that captured humans or elephants in motion.
Puri used a machine learning algorithm to train a model to classify a figure as either an elephant or a human based on its speed, group size, turning radius, number of turns and other patterns. She used 372 series for training: 300 elephant movements and 72 human movements. The remaining 144 were used to test her model with data it hadn't seen before. When tested on the BIRDSAI dataset, her model was able to detect humans with over 90 percent accuracy.
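The article doesn't include Puri's code, but a minimal sketch of the kind of movement-feature classifier it describes could look like the one below. The file names, feature layout and choice of model are assumptions for illustration, not details of ElSa itself.

```python
# Minimal sketch of a movement-pattern classifier like the one described above.
# Not ElSa's actual code: file names, feature layout and model are hypothetical.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import accuracy_score

# Each row summarizes one time series with hand-picked movement features:
# mean speed, group size, turning radius, number of turns.
X_train = np.load("birdsai_train_features.npy")  # shape (372, 4), hypothetical file
y_train = np.load("birdsai_train_labels.npy")    # 0 = elephant, 1 = human
X_test = np.load("birdsai_test_features.npy")    # shape (144, 4)
y_test = np.load("birdsai_test_labels.npy")

clf = GradientBoostingClassifier(random_state=0)
clf.fit(X_train, y_train)
print("held-out accuracy:", accuracy_score(y_test, clf.predict(X_test)))
```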
Puri's software is "quite commendable," says Jasper Eikelboom, an ecologist at Wageningen University in the Netherlands who is designing a system to detect poachers using GPS trackers on animals. "It's quite remarkable that a high school student has been able to do something like this," he says. "Not only the research and the analysis, but also being able to implement it in the prototypes."
Eikelboom cautions that Puri's model still needs to be tested on raw video footage to see how well it can detect poachers: the accuracy of Puri's model was tested using figures already determined to be either human or elephant. He also says other barriers already exist to using drones in parks, such as the money and manpower to keep them flying.
ElSa, he notes, could also be used more broadly for other conservation goals, not just for spotting poachers.
"In ecology in general, we like to track animals and see what they're doing and how it impacts the ecosystem," he says. "And if we look, for example, on the satellite data, we can find a lot of moving patterns, but we don't know what species they are. I think it's a very smart move to look at these movement patterns themselves instead of only at the image, at the pixels, to determine what kind of species it is."
In the fall, Puri will attend the Massachusetts Institute of Technology, where she wants to study electrical engineering and computer science. She has plans to expand her movement pattern research into other endangered animals. "Next up is rhinos," she says. And she wants to begin implementing her software in national parks in Africa, including South Africa's Kruger National Park. Covid-19 restrictions delayed some of her plans to travel to these parks to get her project off the ground, but she hopes to explore her options after she starts college. Because drones only have a battery life of a few hours, she is currently creating a path-planning algorithm to ensure maximum efficiency in the drone's flight course.
"Research isn't a straight line," Puri says. "That has made me more resourceful. It also helped me develop into a more innovative thinker. You learn along the way."
What is the Potential for Digital Twins in Healthcare? – HIT Consultant
David Talby, CTO, John Snow Labs
A digital twin is a virtual representation of an object or system that spans its lifecycle, is updated from real-time data, and uses simulation, machine learning and reasoning to help decision-making (IBM). In most cases, this helps data scientists understand how products are operating in production environments and anticipate how they may behave over time. But what happens when a digital twin is that of a human being?
By using digital twins to model a person, you can use technologies like natural language processing (NLP) to better understand data and uncover other useful insights that will help improve use cases from customer experience to patient care. Today, we're simply generating more data than ever before. Digital twins can be useful in synthesizing this information to provide actionable insights.
As such, there are few fields digital twins can be more helpful in than healthcare. Take a visit to your primary care physician, for example. They will have a baseline understanding of you: your history, medications you take, allergies and other factors. If you then go to see a specialist, they may ask many of the same repetitive questions, and remake inferences and deductions that have been made before. But beyond convenience and time savings, digital clones can substantially help with accuracy.
Having a good virtual replica of a patient enables medical professionals to drill down into specific medications, health conditions, and even social determinants of health that may impact care. Greater detail and context enable providers to make better clinical decisions, and it's all being done behind the scenes, thanks to advances in artificial intelligence (AI) and machine learning (ML).
Digital Twins in Production
Digital clones or digital twins can greatly benefit the healthcare system, and we're already starting to see them in use. Kaiser Permanente uses digital twins through a system that improves patient flow within a hospital. It achieves this by combining structured and unstructured data to build a more complete view of each patient to anticipate what their needs will be at the hospital. In another instance, Roche uses digital twins to help securely integrate and display relevant aggregated data about cancer patients into a single, holistic patient timeline.
Digital twins are already at work in some of the largest healthcare organizations in the world, but their potential doesn't stop with the existing use cases. There are many other applications for digital twins at play, and they span from practical everyday use to functions that sound more like science fiction than reality. Here are some additional areas where digital twins can be particularly useful in healthcare:
Summarizing Patient Data: Providers are experiencing information overload with the amount of data in today's healthcare system. From electronic health records (EHRs) to doctors' notes to diagnostic imaging, it can be a challenge to connect the disparate data (structured tables, unstructured text, medical images, sensors and more) associated with an individual patient. Consider a patient with a cancerous tumor along with other underlying conditions. Typically, oncologists and other specialists will meet to determine the next steps in treatment, whether it be surgery, medication or another protocol. Integrating all this data into a unified, relevant and summarized timeline can be done today using a combination of natural language processing (NLP), computer vision (CV) and knowledge graph (KG) techniques; a minimal sketch of the NLP piece appears after this list.
Accelerating Precision Medicine: Precision medicine is mostly applied in the areas of cardiology and oncology, dealing with serious conditions such as cancer and heart disease. Sometimes, instead of recommending an aggressive treatment like chemotherapy, it's important to see if a patient has certain genomic biomarkers that can tell doctors whether another approach may work better for that patient. Genetic profiling is useful to uncover these insights, helping doctors better understand a given patient's tumor, labs, genomics, history and other pertinent details to reach an optimal decision. As a result, the clinician can provide a more personalized approach to care. However, to achieve this, you need to aggregate much more information about the patient. By building a digital twin, you can compare an individual to other patients who are similar in clinically important ways to see if there are genomic similarities and how certain treatments have affected them.
Process Improvement: Improving organizational performance, and thereby patient outcomes or population health, requires a high level of specificity. For example, if your goal is to reduce the length of a patient's hospital stay, it's imperative to understand many other factors about their condition. Through structured data, you can find information like whether the patient has a chronic condition, what medications they were taking, or whether or not they have insurance. But some of the considerations that really matter for the duration of a patient's hospital stay (how they are eating, feeling, sleeping, coping, moving and so on) can only be found in free-text data. Creating a digital twin to anticipate patient needs and the length of their stay can be very valuable.
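As mentioned above, here is a minimal sketch of just the NLP piece: pulling entities out of a free-text note so they can be merged into a unified patient timeline. It uses a general-purpose spaCy model as a stand-in (a real deployment would use a clinical-domain model), and the note text is invented.

```python
# Minimal sketch: extract entities from a free-text clinical note so they can
# be merged into a patient timeline. The general-purpose spaCy model is a
# stand-in for a clinical-domain model; the note below is invented.
# Requires: pip install spacy && python -m spacy download en_core_web_sm
import spacy

nlp = spacy.load("en_core_web_sm")

note = "Patient reports poor sleep and reduced appetite since starting chemotherapy on March 3."
doc = nlp(note)

timeline_events = [
    {"text": ent.text, "label": ent.label_, "source": "progress_note"}
    for ent in doc.ents
]
print(timeline_events)  # dates and other entities to append to the twin's timeline
```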
What's Next for Digital Twins
Some medical devices have the capability of producing digital twins of specific organs or conditions so doctors can better diagnose them. Areas like NLP can be a great help here if you have a patient with a chronic condition (asthma, COPD, mental health issues and others). For acute issues, especially in oncology, cardiology and psychiatry, digital twins can offer a higher level of detail. For example, creating the digital twin of a patient's heart enables a doctor to see exactly what's going on, whether there is scarring from previous surgeries or an abnormality that needs to be inspected further, and to make better decisions before an operation rather than during it. This can mean a world of difference for patient outcomes.
We'll start to see more advanced use cases for digital twins in the coming years. But to truly live up to the hype, it's crucial that we move beyond simply collecting and analyzing only structured data. Recent advances in deep learning and transfer learning have made it possible to extract information from imaging and free-text data, serving as the connective tissue between what can be found in EHRs and other information, like radiology images and medical documents of all types. Only then can we begin to construct a meaningful digital twin to uncover useful insights that will help improve hospital operations and patient care.
About David Talby
David Talby, Ph.D., MBA, is the CTO of John Snow Labs, the AI and NLP for healthcare company that provides state-of-the-art software, models and data to help healthcare and life science organizations put AI to good use. He has spent his career making AI, big data and data science solve real-world problems in healthcare, life science and related fields.
Amazon's Werner Vogels: Enterprises are more daring than you might think – Protocol
When AWS unveiled Lambda in 2014, Werner Vogels thought the serverless compute service would be the domain of young, more tech-savvy businesses.
But it was enterprises that flocked to serverless first, Amazon's longtime chief technology officer told Protocol in an interview last week.
"For them, it was immediately obvious what the benefits were and how you only pay for the five microseconds that this code runs, and any idle is not being charged to you," Vogels said. "And you don't have to worry about reliability and security and multi-[availability zone] and all these things that then go out of the window. That was really an eye-opener for me, this idea that we sometimes have in our head that sort of the young businesses are more technologically advanced and moving faster. Clearly in the area of serverless, that was not the case."
AWS Lambda launched into general availability in 2015, and more than a million customers are using it today, according to AWS.
Vogels gave Protocol a rundown on AWS Lambda and serverless computing, which allows customers to build and run applications and services without provisioning or managing servers. He also talked about Amazon CodeWhisperer, AWS' new machine-learning-powered coding tool, launched in preview in June; how artificial intelligence and ML are changing developers' lives; and his thoughts on AWS providing customers with primitives versus higher-level managed services.
This interview has been edited and condensed for clarity.
So what's the state of the state on AWS Lambda and how it's helping customers, and are there any new features that we can expect?
You'll see a whole range of different migrations happening. We've had folks from Capital One that migrated old mainframe codes to Lambda. [iRobot, which Amazon announced plans to acquire on Friday], the folks that make Roomba, the automatic [vacuum] cleaner, have their complete back end running as serverless because, for example, that's a service that their customers don't pay for, and as such, they really wanted to minimize their costs yet provide a good service. There's a whole range of different projects happening, whether that is pre-processing images at some telescope deep in Chile, all the way up to monitoring Snowcones running on the International Space Station, where they run Lambda on that device as well and actually can do processing of imagery and things like that. It's become quite pervasive in that sense.
Now, the one thing is, of course, if you have existing code and you want to move over to the cloud, moving over to a virtual machine is easy; it's all in the same environment that you had on-premises. If you want to decompose the application that you had and don't want to do too many code changes, containers are probably a better target for that.
But for quite a few of our customers that really want to start from scratch and really innovate and really think about [what] event-driven architectures look like, serverless quickly becomes the default target. Mostly also because it's not only that we see significant reduction in cost for our customers, but also a significant reduction in their carbon footprints, because we're able to do much better packing on energy than customers would be able to do by themselves. We now also run serverless on our Graviton processors, so you'll easily see a 40% reduction in cost and in energy usage.
But I'm always a bit ambivalent about the word serverless, mostly because many people associate it with when we launched Lambda. But in essence, the first service that we launched, S3, is also really serverless. For me, serverless means that our customers don't have to think about security, reliability, managing performance, managing scale, doing failover, all those kinds of things, and really controlling costs. And so, in essence, almost all services at AWS are serverless by nature. If you think about DynamoDB [a serverless NoSQL database], or if you think about Neptune [a graph database service] or any of the other services that we have, most of them are serverless because you don't have to think about sort of provisioning them, managing them. That's all done for you.
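For readers who haven't used Lambda, here is a minimal Python handler as a rough illustration of that pay-per-invocation model; it is not from the interview, and the event field it reads is hypothetical, since the event shape depends on whatever triggers the function.

```python
# Minimal AWS Lambda handler sketch (Python). The "image_url" field is a
# hypothetical example; a real event's shape depends on the trigger.
import json

def lambda_handler(event, context):
    # AWS runs this function only when an event arrives; you are billed for
    # the time it executes, not for idle capacity between invocations.
    image_url = event.get("image_url", "")
    result = {"received": image_url, "status": "processed"}
    return {"statusCode": 200, "body": json.dumps(result)}
```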
Can you talk about the value of CodeWhisperer and what you think is the next big thing for, or the future of, low-code/no-code?
For me, CodeWhisperer is more an assistant to a developer. There's a number of application areas where I think machine learning really shines, and it is sort of augmenting professionals by helping them, taking away mundane tasks. And we already did that, of course, in AWS. If you think about development, there's CodeGuru and DevOps Guru, which are both already machine-learning services that help customers with, on one hand, operations, and on the other, doing the early security checks during the development process.
CodeWhisperer even takes that a step further. If you look at how our developers develop, there are quite a few mundane tasks where you will go search on the web for a piece of code: how do we do [single sign-on] login into X, Y or Z? Most people will just cut and paste or do a little translation. If that was in Python and you need to actually write it in TypeScript, we may do a translation on that.
There's a lot of work, actually, that developers do in that particular area. So we thought that we could really help our customers there by using machine learning to look at the complete base of, on one hand, the AWS code, the Amazon code and all the open-source code that is out there, and then do a qualitative test on that, and then include it into this body of work where we can easily help customers by just writing some plain text and then saying, "I want a [single sign-on] log-on here," and then the code automatically appears. And with that, we can do checks for security, we can do checks for bias. There's lots of other things that are now possible because we're basically assisting the developer in being more efficient and actually writing the code that they really want to write.
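To make that workflow concrete, here is an invented, hypothetical example of the comment-to-code pattern Vogels describes. It is not actual CodeWhisperer output; the prompt and the suggested function are illustrations only.

```python
# Hypothetical illustration of the comment-driven workflow described above.
# Not actual CodeWhisperer output; prompt and suggestion are invented.

# Developer types a plain-text prompt as a comment:
# "parse an ISO-8601 timestamp string and return the hour of day"

# A code assistant might then suggest a completion along these lines:
from datetime import datetime

def hour_of_day(timestamp: str) -> int:
    """Return the hour (0-23) from an ISO-8601 timestamp string."""
    return datetime.fromisoformat(timestamp).hour
```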
When we launched Lambda, I said the only code that will be written in the future is business logic. Well, it turns out we're still not completely there, but tools like CodeWhisperer definitely help us to get on that path because you can focus on what's the unique code that you need to write for the application that you have, instead of the same code that everybody else needs to write.
People really like it. It's also something that we continuously improve. This is not a standing-still product. As we look at more code, as we get more feedback, the service improves.
If I think about software developers, it's one of the few jobs in the world where you can be truly creative and can go to work and create something new every morning. However, there's quite a bit of heavy lifting still around that [that] sort of has nothing to do with your creativity or your ability to solve problems. With CodeWhisperer, we really tried to take the heavy lifting away so that people can focus on the creativity part of the development job, and I think anything we can do there, developers like.
In your tech predictions for 2022, you said this is the year when artificial intelligence and machine learning take on the undifferentiated heavy lifting in the lives of developers. Can you just expand on that, and how AWS is helping that?
When you think about CodeWhisperer and CodeGuru and DevOps Guru, or Copilot from GitHub, this is just the beginning of seeing the application area of machine learning to augment humans. Whether there is a radiologist somewhere that is late at night looking at imagery and gets help from machine learning to compare these images, or whether it's a developer, we're really at the cusp of how machine learning will accelerate the way that we can build digital systems.
I was in Germany not that long ago, and there the government told me that they have 80,000 open IT positions. With all the scarceness in the world of labor, anything which we can do to make the life of developers easier so that they're more productive, that it makes it easier for people that do not have a four-year computer science degree to actually get started in the IT world, anything we can do there will benefit all the enterprises in the world.
What's another developer problem that you're trying to solve, or what are developers asking AWS for?
If you're an organization like AWS or Amazon or quite a few other organizations around the world, you make use of the DevOps principle, where basically your developers also have operational tasks. If you do operations, there's information that is coming from 10 or 20 different sides. There's log files, there's metrics, there's dashboards and actually tying that information together and analyzing the massive amounts of log files that are being produced by systems in real time, surfacing that to the operators, showing that there may be potential problems here and then give context around it because normally these log files are pretty cryptic. So what we do with DevOps Guru, for example, is provide context around it such that the operators can immediately start taking action, looking for what [the] root cause of particular problems are. So we're looking at all of the different aspects of development and operations to see what are the kind of things that we can build to help customers there.
At AWS re:Invent last year, you put up a slide that read "primitives, not frameworks," and you said AWS gives customers primitives, or simple machines, not frameworks. Meanwhile, Google Cloud and Microsoft are offering these sort of larger, chunkier blocks such as managed services where customers don't have to do the heavy lifting, and AWS also seems to be selling more of them as well.
Let me clarify that. It mostly has to do also with sort of the speed of innovation of AWS.
Last year, we launched more than 3,000 features and services. And so why are we still looking at these fine-grained building blocks? Let me go back to the beginning of AWS. When we started, the way software companies at that moment were providing infrastructure or platforms was basically that they would give developers everything [but] the kitchen sink on day one. And they would tell you, "This is how you shall develop software on this platform." Given that these platforms took quite a while to develop, basically what you operate is a platform that is already five years old, that is looking five years back.
Werner Vogels gives his keynote at AWS re:Invent 2021. Photo: Amazon Web Services, Inc.
We knew that if cloud would really be effective, development would change radically. Development would indeed be able to scale quicker and make use of multiple availability zones and many different types of databases and things like that. So we needed to make sure that we were not building things from the past, but that we were building for how our customers would want to build in 2025. To do that, you don't give them everything and tell them what to do. You give them small building blocks, and that's what I mean by primitives. And all these small building blocks together make a very rich ecosystem for developers to choose from.
Now, quite a few, especially the more tech-savvy companies, are more than happy to put these building blocks together themselves. For example, if you want to build a data lake, we have to use Glue [a serverless data integration service], we have to use S3, maybe some Redshift, Kinesis for ingestion, Athena for ad hoc analytics. I think there's quite a few customers that are building these things by themselves.
But then there's a whole category of customers that just want a data lake. They don't want to think about Glue and S3 and Kinesis, so we give them a service or solution called Lake Formation. That automatically grabs all these things together and gives them this higher-level component.
Now the fact that we are delivering these higher-level solutions, for example, some customers just want a backup solution, and they don't want to think about how to move things into S3 and then do some intelligent tiering [so] that if this data isn't accessed in two weeks, then it is being moved into cold storage. They don't want to think about that. They just want a backup solution. And so for that, we provide them some backup. So we do have these higher-level services. It's more managed-style services for you, but they're all still based on the primitives that sit underneath there. So whether you want to start with Lake Formation and later on maybe start tweaking things under the covers, that's still possible for you. While we are providing these higher-level components, where customers need to have less worry about which components can fit together, we still provide the underlying components to the developers as well.
Is quantum computing something that enterprise CTOs should be keeping their eye on? Do you expect there to be an enterprise use for it, or will it be a domain just for researchers, or is it just too far out to surmise?
There is a back-and-forth there. If I look at some of the newer developments, it's clearly research oriented. The reason for us to provide Braket, which is our quantum compute service, is that customers generally start experimenting with the different types of hardware that are out there. And there's typical usage there. It's life sciences, it's oil and gas. All of these companies are already investigating whether they could see significant speed-ups if they would transform their algorithms into things that could run on a quantum machine.
Now, there's a major difference between, let's say, traditional development and quantum development. The tools, the compilers, the software principles, the books, the documentation for traditional development: that's huge, and you need great support.
In quantum, I think what we'll see in the coming four or five years, as I listen to the Amazon researchers working on this, [is that] much of the work will not only go into hardware, but also how to provide better software support around it, such that development for these types of machines becomes easier or even goes at the same level as traditional machines. But one of the things that I think is very, very clear is that we're not going to be able to solve new problems necessarily with quantum computing; we're just going to be able to solve old problems much, much faster. That's why the life sciences companies and health care and companies that are very interested in the high-performance compute are experimenting with quantum because that could accelerate their algorithms, maybe by orders of magnitude. But, we still have to see the results of that. So I'm keeping a very close eye on it, because I think there may be very interesting workloads and application areas in the future.
Weekly AiThority Roundup: Biggest Machine Learning, AI, Robotic And Automation Updates July Week 05 – AiThority
This is your AI Weekly Roundup today. We are covering the top updates from around the world. The updates will feature state-of-the-art capabilities in artificial intelligence (AI), machine learning, robotic process automation, fintech, and human-system interactions. We cover the role of the AI Daily Roundup and its application in various industries and daily lives.
UK and Japan-based crypto startup Sumo Signals Ltd. announced that it has raised US$5.5 million in its recent round of funding led by Hong Kong-based prominent investor OnDeck Venture. The successful funding round is a clear indication of the company's strong growth prospects powered by its pioneering AI-based technology.
Thentia, a venture capital-backed and global industry-leading government software-as-a-service (SaaS) provider, announced it has joined the Google Cloud Partner Advantage program. Thentia Cloud can be procured directly through Google Cloud's Independent Software Vendor (ISV) Marketplace.
Merkle, dentsu's leading technology-enabled, data-driven customer experience management (CXM) company, announces the expansion of its EMEA Salesforce practice with the appointment of three new strategic hires.
Chargebee, the leading subscription management platform, announced its Summer 2022 Product Release. The slate of new products and features is focused on enabling high-performing subscription businesses to monetize their existing customers and fend off the growing threats of a tumultuous economy. These new products help businesses build their cash reserves and maintain their customer base at a time when many businesses and their customers are struggling with the realities of inflation, the drying up of venture capital, the lingering effects of COVID-19 and a decimated global supply chain.
Mvix, a leading provider of enterprise-grade digital signage solutions, speeds up its development and integration of enterprise business intelligence tools on its cloud-based software Mvix CMS, empowering data sharing for efficiency and scalability. Microsoft Power BI, Tableau and Klipfolio, top business intelligence (BI) powerhouses with a combined market share of 80 percent, are three of numerous tools slated to offer real-time data and metrics, streamlining workflow and productivity for clients.
AiT Analyst is a trained researcher with many years of experience in finding news and reviewing them. The Analysts provide extensive coverage to major companies and startups in key technology sectors and geographies from the emerging tech landscape.
New hardware offers faster computation for artificial intelligence, with much less energy – MIT News
As scientists push the boundaries of machine learning, the amount of time, energy, and money required to train increasingly complex neural network models is skyrocketing. A new area of artificial intelligence called analog deep learning promises faster computation with a fraction of the energy usage.
Programmable resistors are the key building blocks in analog deep learning, just like transistors are the core elements for digital processors. By repeating arrays of programmable resistors in complex layers, researchers can create a network of analog artificial neurons and synapses that execute computations just like a digital neural network. This network can then be trained to achieve complex AI tasks like image recognition and natural language processing.
A multidisciplinary team of MIT researchers set out to push the speed limits of a type of human-made analog synapse that they had previously developed. They utilized a practical inorganic material in the fabrication process that enables their devices to run 1 million times faster than previous versions, which is also about 1 million times faster than the synapses in the human brain.
Moreover, this inorganic material also makes the resistor extremely energy-efficient. Unlike materials used in the earlier version of their device, the new material is compatible with silicon fabrication techniques. This change has enabled fabricating devices at the nanometer scale and could pave the way for integration into commercial computing hardware for deep-learning applications.
"With that key insight, and the very powerful nanofabrication techniques we have at MIT.nano, we have been able to put these pieces together and demonstrate that these devices are intrinsically very fast and operate with reasonable voltages," says senior author Jesús A. del Alamo, the Donner Professor in MIT's Department of Electrical Engineering and Computer Science (EECS). "This work has really put these devices at a point where they now look really promising for future applications."
The working mechanism of the device is electrochemical insertion of the smallest ion, the proton, into an insulating oxide to modulate its electronic conductivity. "Because we are working with very thin devices, we could accelerate the motion of this ion by using a strong electric field, and push these ionic devices to the nanosecond operation regime," explains senior author Bilge Yildiz, the Breene M. Kerr Professor in the departments of Nuclear Science and Engineering and Materials Science and Engineering.
"The action potential in biological cells rises and falls with a timescale of milliseconds, since the voltage difference of about 0.1 volt is constrained by the stability of water," says senior author Ju Li, the Battelle Energy Alliance Professor of Nuclear Science and Engineering and professor of materials science and engineering. "Here we apply up to 10 volts across a special solid glass film of nanoscale thickness that conducts protons, without permanently damaging it. And the stronger the field, the faster the ionic devices."
These programmable resistors vastly increase the speed at which a neural network is trained, while drastically reducing the cost and energy to perform that training. This could help scientists develop deep learning models much more quickly, which could then be applied in uses like self-driving cars, fraud detection, or medical image analysis.
"Once you have an analog processor, you will no longer be training networks everyone else is working on. You will be training networks with unprecedented complexities that no one else can afford to, and therefore vastly outperform them all. In other words, this is not a faster car, this is a spacecraft," adds lead author and MIT postdoc Murat Onen.
Co-authors include Frances M. Ross, the Ellen Swallow Richards Professor in the Department of Materials Science and Engineering; postdocs Nicolas Emond and Baoming Wang; and Difei Zhang, an EECS graduate student. The research is published today in Science.
Accelerating deep learning
Analog deep learning is faster and more energy-efficient than its digital counterpart for two main reasons. First, computation is performed in memory, so enormous loads of data are not transferred back and forth from memory to a processor. Analog processors also conduct operations in parallel. If the matrix size expands, an analog processor doesn't need more time to complete new operations because all computation occurs simultaneously.
The key element of MIT's new analog processor technology is known as a protonic programmable resistor. These resistors, which are measured in nanometers (one nanometer is one billionth of a meter), are arranged in an array, like a chess board.
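The article doesn't spell out the math, but the standard way such an array computes, via Ohm's and Kirchhoff's laws, is a one-step analog matrix-vector multiply: apply a voltage V_i to each row, program each cross-point resistor to a conductance G_ij (the stored weight), and the current collected on column j is I_j = Σ_i G_ij × V_i, so every multiply-accumulate across the whole matrix happens simultaneously in the analog domain.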
In the human brain, learning happens due to the strengthening and weakening of connections between neurons, called synapses. Deep neural networks have long adopted this strategy, where the network weights are programmed through training algorithms. In the case of this new processor, increasing and decreasing the electrical conductance of protonic resistors enables analog machine learning.
The conductance is controlled by the movement of protons. To increase the conductance, more protons are pushed into a channel in the resistor, while to decrease conductance protons are taken out. This is accomplished using an electrolyte (similar to that of a battery) that conducts protons but blocks electrons.
To develop a super-fast and highly energy efficient programmable protonic resistor, the researchers looked to different materials for the electrolyte. While other devices used organic compounds, Onen focused on inorganic phosphosilicate glass (PSG).
PSG is basically silicon dioxide, which is the powdery desiccant material found in tiny bags that come in the box with new furniture to remove moisture. It is studied as a proton conductor under humidified conditions for fuel cells. It is also the most well-known oxide used in silicon processing. To make PSG, a tiny bit of phosphorus is added to the silicon to give it special characteristics for proton conduction.
Onen hypothesized that an optimized PSG could have a high proton conductivity at room temperature without the need for water, which would make it an ideal solid electrolyte for this application. He was right.
Surprising speed
PSG enables ultrafast proton movement because it contains a multitude of nanometer-sized pores whose surfaces provide paths for proton diffusion. It can also withstand very strong, pulsed electric fields. This is critical, Onen explains, because applying more voltage to the device enables protons to move at blinding speeds.
"The speed certainly was surprising. Normally, we would not apply such extreme fields across devices, in order to not turn them into ash. But instead, protons ended up shuttling at immense speeds across the device stack, specifically a million times faster compared to what we had before. And this movement doesn't damage anything, thanks to the small size and low mass of protons. It is almost like teleporting," he says.
"The nanosecond timescale means we are close to the ballistic or even quantum tunneling regime for the proton, under such an extreme field," adds Li.
Because the protons dont damage the material, the resistor can run for millions of cycles without breaking down. This new electrolyte enabled a programmable protonic resistor that is a million times faster than their previous device and can operate effectively at room temperature, which is important for incorporating it into computing hardware.
Thanks to the insulating properties of PSG, almost no electric current passes through the material as protons move. "This makes the device extremely energy efficient," Onen adds.
Now that they have demonstrated the effectiveness of these programmable resistors, the researchers plan to reengineer them for high-volume manufacturing, says del Alamo. Then they can study the properties of resistor arrays and scale them up so they can be embedded into systems.
At the same time, they plan to study the materials to remove bottlenecks that limit the voltage that is required to efficiently transfer the protons to, through, and from the electrolyte.
"Another exciting direction that these ionic devices can enable is energy-efficient hardware to emulate the neural circuits and synaptic plasticity rules that are deduced in neuroscience, beyond analog deep neural networks. We have already started such a collaboration with neuroscience, supported by the MIT Quest for Intelligence," adds Yildiz.
"The collaboration that we have is going to be essential to innovate in the future. The path forward is still going to be very challenging, but at the same time it is very exciting," del Alamo says.
"Intercalation reactions such as those found in lithium-ion batteries have been explored extensively for memory devices. This work demonstrates that proton-based memory devices deliver impressive and surprising switching speed and endurance," says William Chueh, associate professor of materials science and engineering at Stanford University, who was not involved with this research. "It lays the foundation for a new class of memory devices for powering deep learning algorithms."
"This work demonstrates a significant breakthrough in biologically inspired resistive-memory devices. These all-solid-state protonic devices are based on exquisite atomic-scale control of protons, similar to biological synapses but at orders of magnitude faster rates," says Elizabeth Dickey, the Teddy & Wilton Hawkins Distinguished Professor and head of the Department of Materials Science and Engineering at Carnegie Mellon University, who was not involved with this work. "I commend the interdisciplinary MIT team for this exciting development, which will enable future-generation computational devices."
This research is funded, in part, by the MIT-IBM Watson AI Lab.
Machine Learning Breakthroughs Have Sparked the AI Revolution – InvestorPlace
[Editor's note: "Machine Learning Breakthroughs Have Sparked the AI Revolution" was previously published in February 2022. It has since been updated to include the most relevant information available.]
It's October 1950. Alan Turing, the genius who cracked the Enigma code and helped end World War II, has just introduced a novel concept.
It's called the Turing Test, and it's aimed at answering the fundamental question: Can machines think?
The world laughs. Machines think for themselves? Not possible.
However, the Turing Test sets in motion decades of research into the emerging field of Artificial Intelligence (AI).
This research is conducted in the world's most prestigious labs by some of the world's smartest people. Collectively, they're working to create a new class of computers and machines that can, indeed, think for themselves.
Fast forward 70 years.
AI is everywhere.
It's in your phones. What do you think powers Siri? How does a phone recognize your face?
It's in your applications. How does Google Maps know directions and optimal routes? How does it make real-time changes based on traffic? And how does Spotify create hyper-personalized playlists or Netflix recommend movies?
AI is on your computers. How does Google suggest personalized search items for you? How do websites use chatbots that seem like real humans?
As it turns out, the world shouldn't have laughed back in 1950.
The great Alan Turing ended up creating a robust foundation upon which seven decades of groundbreaking research has compounded. Ultimately, it resulted in self-thinking computers and machines not just being a thing but being everything.
Make no mistake. This decades-in-the-making AI Revolution is just getting started.
That's because AI is mostly built on what industry insiders call machine learning (ML) and natural language processing (NLP) models. And these models are informed with data.
Accordingly, the more data they have, the better the models get and the more capable the AI becomes.
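A minimal sketch of that idea, not from the article: train the same classifier on progressively larger slices of a toy dataset and watch held-out accuracy climb. The dataset, model, and slice sizes here are arbitrary choices made for illustration.

from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Toy stand-in for "more data": the classic handwritten-digits dataset.
X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

# Fit the same model on larger and larger training sets; accuracy generally improves.
for n in (100, 400, len(X_train)):
    model = LogisticRegression(max_iter=2000)
    model.fit(X_train[:n], y_train[:n])
    acc = accuracy_score(y_test, model.predict(X_test))
    print(f"trained on {n:4d} examples -> test accuracy {acc:.2f}")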
When I say "identity," what do you think of?
If you're like me, you immediately start to think of what makes you, well, you: your height, eye color; what job you have, what car you drive, what shows you like to binge-watch.
In other words, the amount of data associated with each individual identity is both endless and unique.
Those attributes make identity data extremely valuable.
Up until recently, though, enterprises had no idea how to extract value from this robust dataset. That's all changing right now.
Breakthroughs in artificial intelligence and machine-learning technology are enabling companies to turn identity data into more personalized, secure and streamlined user experiences for their customers, employees and partners.
The volume and granularity of data are exploding right now. That's mostly because every object in the world is becoming a data-producing device.
Dumb phones have become smartphones and have started producing a ton of usage data.
Dumb cars have become smart cars and have started producing lots of in-car driving data.
Dumb apps have become smart apps and have started producing heaps of consumer preference data.
And dumb watches have become smartwatches and have started producing bunches of fitness and activity data.
As we've sprinted into the Smart World, the amount of data that AI algorithms have access to has exploded. And it's making them more capable than ever.
Why else has AI started popping up everywhere in recent years? It's because 90% of the world's data was generated in the last two years alone.
More data, better ML and NLP models, smarter AI.
It's that simple.
And guess what? The world isn't going to take any steps back in terms of this smart pivot. No. We love our smartphones, smart cars and smartwatches far too much.
Instead, society will accelerate in this transition. Globally, the world produces about 2.5 exabytes of data per day. By 2025, that number is expected to rise to 463 exabytes.
Let's go back to our process.
More data, better ML and NLP models, smarter AI.
Thus, as the volume of data produced daily soars more than 185X over the next five years, ML and NLP models will get 185X better (more or less). And AI machines will get 185X smarter (more or less).
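A quick back-of-the-envelope check of where that multiplier comes from, using the article's own figures:

daily_now, daily_2025 = 2.5, 463      # exabytes of data produced per day (figures quoted above)
print(daily_2025 / daily_now)         # ~185.2, i.e. the "more than 185X" growth factor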
Folks, the AI Revolution is just getting started.
Most things a human does, a machine will soon be able to do better, faster and cheaper.
Given the advancements AI has made over the past few years with the help of data, and the exponential amount of it yet to come, I'm inclined to believe this.
Eventually, and inevitably, the world will be run by hyperefficient and hyperintelligent AI.
I'm not alone in thinking this. Gartner predicts that 69% of routine office work will be fully automated by 2024. And the World Economic Forum has said that robots will handle 52% of current work tasks by 2025.
The AI Revolution is coming, and it's going to be the biggest you've seen in your lifetime.
You need to be invested in this emerging tech megatrend that promises to change the world forever.
Of course, the question remains: What AI stocks should you start buying right now?
You could play it safe and go with the blue-chip tech giants. All are making inroads with AI and are low-risk, low-reward plays on the AI Revolution. I'm talking Microsoft (MSFT), Alphabet (GOOG), Amazon (AMZN), Adobe (ADBE) and Apple (AAPL).
However, that's not how we do things. We don't like "safe"; we like "best."
At present, enterprise AI software is being used very effectively by Big Tech. And it's being used ineffectively or not at all by everyone else.
Today's AI companies are changing that. And the best way to play the AI Revolution is by buying the stocks that are changing the paradigm in which they exist.
We have identified several AI stocks to buy for enormous long-term returns.
Again, these AI stocks aren't the safe way to play the AI Revolution. They're the best way to do it.
One company is pioneering a novel model-driven architecture. Indeed, it represents a promising paradigm shift in the AI application development process. Ultimately, it will democratize the power of AI so that it's no longer a weapon used by Big Tech to crush its opponents.
Essentially, this company has pre-built multiple, highly scalable AI models in its ecosystem. And it allows customers to build their own AI models by simply editing and stacking them atop one another.
Think of building an AI application as a puzzle. You must have the right pieces and directions. In other words, to effectively utilize the power of enterprise AI, customers just piece it together in a way that works best for them.
Equally important, the building of these puzzles is not rocket science. The company does all the hard work of making the actual models. Customers simply have to pick which ones they want to use and decide how they want to use them.
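Purely as an illustration of that "pick the pieces and stack them" idea, here is a toy sketch in scikit-learn, a library the article never mentions and which the unnamed company does not necessarily use: ready-made components are wrapped in a pipeline, so the "customer" only chooses and orders the blocks rather than building the models themselves.

from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.ensemble import RandomForestClassifier
from sklearn.datasets import load_iris

# Hypothetical "pre-built" blocks a platform might expose; the user just picks and stacks them.
prebuilt_blocks = [
    ("scale", StandardScaler()),
    ("reduce", PCA(n_components=2)),
    ("classify", RandomForestClassifier(n_estimators=100, random_state=0)),
]

X, y = load_iris(return_X_y=True)
app = Pipeline(prebuilt_blocks).fit(X, y)   # the assembled "AI application"
print(app.predict(X[:3]))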
In some instances, coding and data science are still required, but not much. Today's top AI companies make it easy to develop, scale, and apply insights without writing any code.
It's a genius breakthrough to address the widening AI gap between Big Tech and everyone else.
Eventually, every company from every industry and of every size will leverage the power of AI to enhance their business, increase revenues and reduce costs.
Of course, this reality bodes well for AI stocks in the long term.
You just have to know which ones are worth buying and which are not.
On the date of publication, Luke Lango did not have (either directly or indirectly) any positions in the securities mentioned in this article.
Original post:
Machine Learning Breakthroughs Have Sparked the AI Revolution - InvestorPlace
U.S. Army Research Lab Expands Artificial Intelligence and Machine Learning Contract with Palantir for $99.9M – Yahoo Finance
DENVER, July 28, 2022--(BUSINESS WIRE)--Palantir Technologies Inc. (NYSE: PLTR) today announced that it will expand its work with the U.S. Army Research Laboratory to implement data and artificial intelligence (AI)/machine learning (ML) capabilities for users across the combatant commands (COCOMs). The contract totals $99.9 million over two years.
Palantir first partnered with the Army Research Lab to provide those on the frontlines with state-of-the-art operational data and AI capabilities in 2018. Palantir's platform has supported the integration, management, and deployment of relevant data and AI model training to all of the Armed Services, COCOMs, and special operators. This extension grows Palantir's operational RDT&E work to more users globally.
"Maintaining a leading edge through technology is foundational to our mission and partnership with the Army Research Laboratory," said Akash Jain, President of Palantir USG. "Our nation's armed forces require best-in-class software to fulfill their missions today while rapidly iterating on the capabilities they will need for tomorrow's fight. We are honored to support this critical work by teaming up to deliver the most advanced operational AI capabilities available with dozens of commercial and public sector partners."
By working with the U.S. Army Research Lab, integrating with partner vendors, and iterating with users on the front lines, Palantir's software platforms will continue to quickly implement advanced AI capabilities against some of DOD's most pressing problem sets. "We're looking forward to fielding our newest ML, Edge, and Space technologies alongside our U.S. military partners," said Shannon Clark, Senior Vice President of Innovation, Federal. "These technologies will enable operators in the field to leverage AI insights to make decisions across many fused domains. From outer space to the sea floor, and everything in between."
About Palantir Technologies Inc.
Foundational software of tomorrow. Delivered today. Additional information is available at https://www.palantir.com.
Forward-Looking Statements
This press release contains forward-looking statements within the meaning of Section 27A of the Securities Act of 1933, as amended, and Section 21E of the Securities Exchange Act of 1934, as amended. These statements may relate to, but are not limited to, Palantir's expectations regarding the amount and the terms of the contract and the expected benefits of our software platforms. Forward-looking statements are inherently subject to risks and uncertainties, some of which cannot be predicted or quantified. Forward-looking statements are based on information available at the time those statements are made and were based on current expectations as well as the beliefs and assumptions of management as of that time with respect to future events. These statements are subject to risks and uncertainties, many of which involve factors or circumstances that are beyond our control. These risks and uncertainties include our ability to meet the unique needs of our customer; the failure of our platforms to satisfy our customer or perform as desired; the frequency or severity of any software and implementation errors; our platforms' reliability; and our customer's ability to modify or terminate the contract. Additional information regarding these and other risks and uncertainties is included in the filings we make with the Securities and Exchange Commission from time to time. Except as required by law, we do not undertake any obligation to publicly update or revise any forward-looking statement, whether as a result of new information, future developments, or otherwise.
View source version on businesswire.com: https://www.businesswire.com/news/home/20220728005319/en/
Contacts
Media Contact: Lisa Gordon, media@palantir.com
Go here to see the original:
U.S. Army Research Lab Expands Artificial Intelligence and Machine Learning Contract with Palantir for $99.9M - Yahoo Finance