
Empowering the Intelligent Data-Driven Enterprise in the Cloud – CDOTrends

Businesses realize that the cloud offers a lot more than digital infrastructure. Around the world, organizations are turning to the cloud to democratize data access, harness advanced AI and analytics capabilities, and make better data-driven business decisions.

But despite heavy investments in building data repositories, setting up advanced database management systems (DBMS), and building large data warehouses on-premises, many enterprises still struggle with poor business outcomes, observed Anthony Deighton, chief product officer at Tamr.

Deighton was speaking at the "Empowering the Intelligent Data-Driven Enterprise in the Cloud" event held by Tamr and Google Cloud in conjunction with CDOTrends. Attended by top innovation executives, data leaders, and data scientists from Asia Pacific, the virtual panel discussion looked at how forward-looking businesses might kick off the next phase of data transformation.

Why a DataOps strategy makes sense

"Despite this massive [and ongoing] revolution in data, customers still can't get a view of their customers, their suppliers, and the materials they use in their business. Their analytics are out-of-date, or their AI initiatives are using bad data and therefore making bad recommendations. The result is that people don't trust the data in their systems," said Deighton.

"As much as we've seen a revolution in the data infrastructure space, we're not seeing a better outcome for businesses. To succeed, we need to think about changing the way we work with data," he explained.

And this is where a DataOps strategy comes into play. A direct play on the popular DevOps strategy for software development, DataOps relies on an automated, process-oriented methodology to improve data quality for data analytics. Deighton thinks the DevOps revolution in software development can be replicated with data through a continuous collaborative approach with best-of-breed systems and the cloud.

Think of Tamr working in the backend to clean and deliver this centralized master data in the cloud, offering clean, curated answers to questions such as: Who are my customers? What products have we sold? What vendors do we do business with? What are my sales transactions? "And of course, for every one of your [departments], there's a different set of these clean, curated business topics that are relevant to you."

Data in an intelligent cloud

But won't an on-premises data infrastructure work just as well? What benefits does the cloud offer? Deighton outlined two distinct advantages to explain why he considers the cloud the linchpin of the next phase of data transformation.

"You can store infinite amounts of data in the cloud, and you can do that very cost-effectively. It's far less costly to store data in the cloud than it is to try to store it on-premises, in [your own] data lakes," he said.

"Another really powerful capability of Google Cloud is its highly scalable, elastic compute infrastructure. We can leverage its highly elastic compute and the fact that the data is already there. And then we can run our human-guided machine learning algorithms cost-effectively and get on top of that data quickly."

Andrew Psaltis, the APAC Technology Practice Lead at Google Cloud, drew attention to the synergy between Tamr and Google Cloud.

"You can get data into [Google] BigQuery in different ways, but what you really want is clean, high-quality data. That quality allows you to have confidence in your advanced analytics and machine learning, and across the entire breadth of our analytics and AI platform. We have an entire platform to enable you to collaborate with your data science team; we have the tooling to do so without code, packaged AI solutions, tools for those who prefer to write their own code, and everything in between."

Bridging the data silos

A handful of polls were conducted as part of the panel event, quizzing participants about their ongoing data-driven initiatives. When asked how they staff their data science initiatives, the largest group (46%) responded that they have multiple teams across various departments handling their data science initiatives.

The rest are split between either having a central team collecting, processing, and analyzing data or a combination of a central team working with multiple project teams across departments.

Deighton observed that multiple work teams typically result in multiple data silos: "Each team has their silo of data. Maybe the team is tied to a specific business unit, a specific product team, or maybe a specific customer sales team."

The way to break the data barriers is to bring data together in the cloud to give users a view of the data across teams, he said. "And it may sound funny, but sometimes, the way to break the interpersonal barriers is by breaking the data barriers."

"Your customers don't care how you are organized internally. They want to do business with you, with your company. If you think about it, not from the perspective of the team, but the customer, then you need to put more effort into resolving your data challenges to best serve your customers."

Making the move

When asked about their big data change initiatives for the next three years, the response was almost unanimous: participants want to democratize analytics, build a data culture, and make decisions faster (86%). Unsurprisingly, the top roadblocks were IT taking too long to deliver the systems data scientists need (62%) and the cost of data solutions (31%).

The cloud makes sense, given how it enables better work efficiency, lowers operational expenses, and is inherently secure, said Psaltis. Workers are moving to the cloud, Psaltis noted as he shared an anecdote about an unnamed organization that loaded the cloud with up to a petabyte of data in relatively short order.

This was apparently done without the involvement or knowledge of the IT department. It might be better, Psaltis suggested, if the move to the cloud were done under more controlled circumstances, with the approval and participation of IT.

Finally, it is imperative that data is cleaned, and kept clean, as it is migrated to the cloud. "Simply moving it into the cloud isn't enough. Without cleaning the data first, you will end up with poor-quality, disparate data in the cloud, where each application's data sits within a silo, with more silos than before, and difficulty making quality business decisions," summed up Deighton.

Paul Mah is the editor of DSAITrends. A former system administrator, programmer, and IT lecturer, he enjoys writing both code and prose. You can reach him at [emailprotected].

Image credit: iStockphoto/Artystarty


The Winners Of Weekend Hackathon -Tea Story at MachineHack – Analytics India Magazine

The Weekend Hackathon Edition #2, The Last Hacker Standing: Tea Story challenge concluded successfully on 19 August 2021. The challenge involved creating a time series analysis model that forecasts for 29 weeks. It drew more than 240 participants, with 110+ solutions posted on the leaderboard.

Based on the leaderboard score, we have the top four winners of the Tea Story Time Series Challenge, who will get free passes to the virtual Deep Learning DevCon 2021, to be held on 23-24 September 2021. Here, we look at the winners' journeys, solution approaches and experiences at MachineHack.

First Rank: Vybhav Nath C A

Vybhav Nath is a final-year student at IIT Madras. He entered the field in his second year of college and started participating in MachineHack hackathons last year. He plans to pursue a career in data science.

Approach

He says the problem was unique in that many columns in the test set had a lot of null values, which made it a challenging task to solve. He kept his preprocessing restricted to imputation and replacing 'N.S' entries. This was the first competition where he didn't use any ML model: since many columns had null values, he interpolated the columns to get a fully populated test set, and the final prediction was simply the mean of these price columns. He thinks this was a total doosra by the cool MachineHack team.
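The model-free approach described above can be sketched in a few lines of pandas. The column names and values here are hypothetical stand-ins for the competition data, not the actual dataset:

```python
import numpy as np
import pandas as pd

# Hypothetical weekly price columns with gaps, mimicking the sparse test set
df = pd.DataFrame({
    "price_a": [10.0, np.nan, 12.0, np.nan, 14.0],
    "price_b": [20.0, 21.0, np.nan, 23.0, np.nan],
})

# Fill the gaps by linear interpolation along each column;
# limit_direction="both" also fills leading/trailing NaNs
filled = df.interpolate(method="linear", limit_direction="both")

# The final prediction is simply the row-wise mean of the price columns
prediction = filled.mean(axis=1)
```

Note that pandas extends trailing gaps with the last valid value rather than extrapolating the trend, which is one reason this works better on short gaps than long ones.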

Experience

He says, "I always participate in MachineHack hackathons whenever possible. There is a wide variety of problems that test multiple areas. I also get to compete with many professionals, which I find to be a good pointer to where I stand among them."

Check out his solution here.

Second Prize: Shubham Bharadwaj

Shubham has been working as a data scientist and with large datasets for about seven years, starting with SQL, then big data analytics, then data engineering, and finally data science. He is new to hackathons, however; this was only the fourth he has participated in. He loves to solve complex problems.

Approach

The data provided was very raw, with around 70 percent of values missing in the test dataset. From his point of view, finding the best imputation method was the backbone of this challenge.

Preprocessing steps followed:

1. Converting the columns to correct data types,

2. Imputing the missing values: he tried various methods, such as filling the nulls with the mean of each column, the mean of the row, and MICE, but the best was the KNN imputer with n_neighbors set to 3.

To remove outliers, he used the IQR (interquartile range) method, which helped reduce the mean squared error.

The models he tried were logistic regression, XGBRegressor, ARIMA, TPOT, and finally H2O AutoML, which yielded the best result.
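The KNN imputation and IQR filtering steps above can be sketched with scikit-learn and pandas. The toy frame below is a hypothetical stand-in for the competition data:

```python
import numpy as np
import pandas as pd
from sklearn.impute import KNNImputer

# Toy numeric frame: one missing price and one obvious outlier (500.0)
df = pd.DataFrame({
    "price":  [100.0, 102.0, np.nan, 101.0, 500.0, 99.0],
    "volume": [10.0, 11.0, 10.5, 10.4, 50.0, 9.8],
})

# Impute missing values from the 3 nearest rows (by the other features)
imputed = pd.DataFrame(
    KNNImputer(n_neighbors=3).fit_transform(df), columns=df.columns
)

# IQR rule: keep rows whose price lies within [Q1 - 1.5*IQR, Q3 + 1.5*IQR]
q1, q3 = imputed["price"].quantile([0.25, 0.75])
iqr = q3 - q1
mask = imputed["price"].between(q1 - 1.5 * iqr, q3 + 1.5 * iqr)
cleaned = imputed[mask]
```

Here the missing price is filled with the mean of its three nearest neighbours' prices, and the 500.0 outlier falls outside the IQR fence and is dropped.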

Experience

Shubham says, "I am new to the MachineHack family, and one thing is for sure: I am here to stay. It's a great place; I have already learned so much. The datasets are of a wide variety, and the problem statements are unique, puzzling and complex. It's a must for every aspiring and professional data scientist looking to upskill."

Check out his solution here.

Third Prize: Panshul Pant

Panshul is a computer science and engineering graduate. He picked up data science mostly from online platforms like Coursera, HackerEarth and MachineHack, and by watching videos on YouTube. Going through articles on websites like Analytics India Magazine has also helped him in this journey. This problem was based on a time series, which made it unique, though he solved it using machine learning algorithms rather than traditional time series methods.

Approach

There were certain string values like 'N.S' and 'No sale' in the numerical columns, which he converted to null values before imputing. He tried various ways to impute the NaNs, such as zero, the mean, and the ffill and bfill methods; of these, the forward- and backward-filling methods performed significantly better. Exploring the data, he noticed that prices increased over the months and years, showing a clear trend. The target column's values were also very closely related to the average of the prices across the independent columns. He kept all the data, including outliers, largely unchanged, as tree-based models are quite robust to outliers.

As the prices were related to time, he also extracted time-based features, of which day of week proved useful. An average-based feature holding the mean of all the numerical columns was extremely useful for good predictions. He tried some aggregate-based features as well, but they were not of much help. For predictions he used tree-based models like LightGBM and XGBoost; combining the two with a weighted average gave the best results.
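Blending two models with a weighted average, as described above, comes down to searching for the weight that minimizes error on a hold-out set. The targets and predictions below are hypothetical, standing in for LightGBM and XGBoost outputs:

```python
import numpy as np

# Hypothetical hold-out targets and predictions from two boosted-tree models
y_true = np.array([10.0, 12.0, 14.0, 16.0])
pred_lgb = np.array([10.5, 11.5, 14.5, 15.5])   # e.g. LightGBM
pred_xgb = np.array([9.5, 12.5, 13.0, 16.5])    # e.g. XGBoost

def rmse(y, p):
    return float(np.sqrt(np.mean((y - p) ** 2)))

# Grid-search the blend weight on the hold-out set
weights = np.linspace(0, 1, 101)
scores = [rmse(y_true, w * pred_lgb + (1 - w) * pred_xgb) for w in weights]
best_w = weights[int(np.argmin(scores))]
blended = best_w * pred_lgb + (1 - best_w) * pred_xgb
```

Because the two models' errors partly cancel, the blend can score better than either model alone, which is why weighted averaging is such a common final step in these competitions.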

Experience

Panshul says, "It was definitely a valuable experience. The challenges set up by the organisers are always exciting and unique. Participating in these challenges has helped me hone my skills in this domain."

Check out his solution here.

Fourth Prize: Shweta Thakur

Shweta's fascination with data science started when she realised how numbers can guide decision-making. She completed a PGP-DSBA course from Great Learning. Even though her professional work does not involve data science, she loves to challenge herself by working on data science projects and participating in hackathons.

Approach

Shweta says that the fact that it is a time series problem makes it unique. She observed the trend and seasonality in the dataset and the high correlation between various variables. She didn't treat the outliers, but addressed the missing values with interpolation (linear and spline), ffill, bfill, and replacement with other values from the dataset. Even though some of the features were not significant in identifying the target, removing them didn't improve the RMSE. The only model she tried was SARIMAX.

Experience

Shweta says, "It was a great experience to compete with people from different backgrounds and expertise."

Check out her solution here.

Once again, join us in congratulating the winners of this exciting hackathon, who were indeed the Last Hackers Standing of the Tea Story Weekend Hackathon Edition #2. We will be back next week with the winning solutions of the ongoing challenge, the Soccer Fever Hackathon.


Taktile makes it easier to leverage machine learning in the financial industry – TechCrunch

Meet Taktile, a new startup that is working on a machine learning platform for financial services companies. This isn't the first company that wants to leverage machine learning for financial products, but Taktile wants to differentiate itself from competitors by making it much easier to get started and switch to AI-powered models.

A few years ago, when you could read "machine learning" and "artificial intelligence" in every single pitch deck, some startups chose to focus on the financial industry in particular. It makes sense, as banks and insurance companies gather a ton of data and know a lot about their customers. They could use that data to train new models and roll out machine learning applications.

New fintech companies put together their own in-house data science team and started working on machine learning for their own products. Companies like Younited Credit and October use predictive risk tools to make better lending decisions. They have developed their own models and they can see that their models work well when they run them on past data.

But what about legacy players in the financial industry? A few startups have worked on products that can be integrated in existing banking infrastructure. You can use artificial intelligence to identify fraudulent transactions, predict creditworthiness, detect fraud in insurance claims, etc.

Some of them have been thriving, such as Shift Technology, with a focus on insurance in particular. But a lot of startups build proof-of-concepts and stop there. There's no meaningful, long-term business contract down the road.

Taktile wants to overcome that obstacle by building a machine learning product that is easy to adopt. It has raised a $4.7 million seed round led by Index Ventures with Y Combinator, firstminute Capital, Plug and Play Ventures and several business angels also participating.

The product works with both off-the-shelf models and customer-built models. Customers can customize those models depending on their needs. Models are deployed and maintained by Taktile's engine, which can run in a customer's cloud environment or as a SaaS application.

After that, you can leverage Taktile's insights using API calls. It works pretty much like integrating any third-party service into your product. The company has tried to provide as much transparency as possible, with explanations for each automated decision and detailed logs. As for data sources, Taktile supports data warehouses and data lakes, as well as ERP and CRM systems.

It's still early days for the startup, and it is going to be interesting to see whether Taktile's vision pans out. But the company has already managed to convince some experienced backers, so let's keep an eye on them.


Avalo uses machine learning to accelerate the adaptation of crops to climate change – TechCrunch

Climate change is affecting farming all over the world, and solutions are seldom simple. But if you could plant crops that resist heat, cold or drought, instead of moving a thousand miles away, wouldn't you? Avalo helps make plants like these a reality, using AI-powered genome analysis that can reduce the time and money it takes to breed hardier plants for this hot century.

Founded by two friends who thought they'd take a shot at a startup before committing to a life of academia, Avalo has a very direct value proposition, but it takes a bit of science to understand it.

Big seed and agriculture companies put a lot of work into creating better versions of major crops. By making corn or rice ever so slightly more resistant to heat, insects, drought or flooding, they can make huge improvements to yields and profits for farmers, or alternatively make a plant viable to grow somewhere it couldnt before.

"There are big decreases in yields in equatorial areas, and it's not that corn kernels are getting smaller," said co-founder and CEO Brendan Collins. "Farmers move upland because salt water intrusion is disrupting fields, but they run into early spring frosts that kill their seedlings. Or they need rust-resistant wheat to survive fungal outbreaks in humid, wet summers. We need to create new varieties if we want to adapt to this new environmental reality."

To make those improvements in a systematic way, researchers emphasize existing traits in the plant; this isnt about splicing in a new gene but bringing out qualities that are already there. This used to be done by the simple method of growing several plants, comparing them, and planting the seeds of the one that best exemplifies the trait like Mendel in Genetics 101.

Nowadays, however, we have sequenced the genome of these plants and can be a little more direct. By finding out which genes are active in the plants with a desired trait, better expression of those genes can be targeted for future generations. The problem is that doing this still takes a long time: as in, a decade.

The difficult part of the modern process stems (so to speak) from the issue that traits, like survival in the face of a drought, aren't just single genes. They may be any number of genes interacting in a complex way. Just as there's no single gene for becoming an Olympic gymnast, there isn't one for becoming drought-resistant rice. So when companies run what are called genome-wide association studies, they end up with hundreds of candidate genes that may contribute to the trait, and then must laboriously test various combinations of these in living plants, which even at industrial rates and scales takes years to do.

Numbered, genetically differentiated rice plants being raised for testing purposes. Image Credits: Avalo

"The ability to just find genes and then do something with them is actually pretty limited as these traits become more complicated," said Mariano Alvarez, co-founder and CSO of Avalo. "Trying to increase the efficiency of an enzyme is easy: you just go in with CRISPR and edit it. But for increasing yield in corn, there are thousands, maybe millions of genes contributing to that. If you're a big strategic [e.g., Monsanto] trying to make drought-tolerant rice, you're looking at 15 years and 200 million dollars. It's a long play."

This is where Avalo steps in. The company has built a model for simulating the effects of changes to a plants genome, which they claim can reduce that 15-year lead time to two or three and the cost by a similar ratio.

"The idea was to create a much more realistic model for the genome that's more evolutionarily aware," said Collins. That is, a system that models the genome and the genes on it with more context from biology and evolution. With a better model, you get far fewer false positives on genes associated with a trait, because it rules out far more as noise, unrelated genes, minor contributors and so on.

He gave the example of a cold-tolerant rice strain that one company was working on. A genome-wide association study found 566 genes of interest, and investigating each costs somewhere in the neighborhood of $40,000 due to the time, staff and materials required. That means investigating this one trait might run up a $20 million tab over several years, which naturally limits both the parties who can even attempt such an operation and the crops they will invest the time and money in. If you expect a return on investment, you can't spend that kind of cash improving a niche crop for an outlier market.

"We're here to democratize that process," said Collins. In that same body of data relating to cold-tolerant rice, "we found 32 genes of interest, and based on our simulations and retrospective studies, we know that all of those are truly causal. And we were able to grow 10 knockouts to validate them, three in a three-month period."

In each graph, dots represent confidence levels in genes that must be tested. The Avalo model clears up the data and selects only the most promising ones. Image Credits: Avalo

To unpack the jargon a little: from the start, Avalo's system ruled out more than 90% of the genes that would have had to be individually investigated. It had high confidence that these 32 genes were not just related but causal, having a real effect on the trait. And this was borne out with brief knockout studies, where a particular gene is blocked and the effect of that studied. Avalo calls its method "gene discovery via informationless perturbations," or GDIP.

Part of it is the inherent facility of machine learning algorithms when it comes to pulling signal out of noise, but Collins noted that they needed to come at the problem with a fresh approach, letting the model learn the structures and relationships on its own. And it was also important to them that the model be explainable: that is, that its results don't just appear out of a black box but have some kind of justification.

This latter issue is a tough one, but they achieved it by systematically swapping out genes of interest in repeated simulations with what amount to dummy versions, which dont disrupt the trait but do help the model learn what each gene is contributing.

Avalo co-founders Mariano Alvarez (left) and Brendan Collins by a greenhouse. Image Credits: Avalo

"Using our tech, we can come up with a minimal predictive breeding set for traits of interest. You can design the perfect genotype in silico [i.e., in simulation] and then do intensive breeding and watch for that genotype," said Collins. And the cost is low enough that it can be done by smaller outfits or with less popular crops, or for traits that are outside possibilities; since climate change is so unpredictable, who can say whether heat- or cold-tolerant wheat would be better 20 years from now?

"By reducing the capital cost of undertaking this exercise, we sort of unlock this space where it's economically viable to work on a climate-tolerant trait," said Alvarez.

Avalo is partnering with several universities to accelerate the creation of other resilient and sustainable plants that might never have seen the light of day otherwise. These research groups have tons of data but not a lot of resources, making them excellent candidates to demonstrate the company's capabilities.

The university partnerships will also establish that the system works for fairly undomesticated plants that need some work before they can be used at scale. For instance, it might be better to supersize a wild grain that's naturally resistant to drought than to try to add drought resistance to a naturally large grain species, but no one was willing to spend $20 million to find out.

On the commercial side, they plan to offer the data handling service first, one of many startups offering big cost and time savings to slower, more established companies in spaces like agriculture and pharmaceuticals. With luck Avalo will be able to help bring a few of these plants into agriculture and become a seed provider as well.

The company just emerged from the IndieBio accelerator a few weeks ago and has already secured $3 million in seed funding to continue their work at greater scale. The round was co-led by Better Ventures and Giant Ventures, with At One Ventures, Climate Capital, David Rowan and of course IndieBio parent SOSV participating.

"Brendan convinced me that starting a startup would be way more fun and interesting than applying for faculty jobs," said Alvarez. "And he was totally right."


Improve Machine Learning Performance by Dropping the Zeros – ELE Times

KAUST researchers have found that large machine learning models can be trained significantly faster by exploiting how frequently zero results are produced in distributed machine learning that uses large training datasets.

AI models develop their intelligence by being trained on datasets that have been labelled to tell the model how to differentiate between different inputs and then respond accordingly. The more labelled data that goes in, the better the model becomes at performing whatever task it has been assigned to do. For complex deep learning applications, such as self-driving vehicles, this requires enormous input datasets and very long training times, even when using powerful and expensive, highly parallel supercomputing platforms.

During training, small learning tasks are assigned to tens or hundreds of computing nodes, which then share their results over a communications network before running the next task. One of the biggest sources of computing overhead in such parallel computing tasks is actually this communication among computing nodes at each model step.

"Communication is a major performance bottleneck in distributed deep learning," explains the KAUST team. "Along with the fast-paced increase in model size, we also see an increase in the proportion of zero values that are produced during the learning process, which we call sparsity. Our idea was to exploit this sparsity to maximize effective bandwidth usage by sending only non-zero data blocks."

Building on an earlier KAUST development called SwitchML, which optimized internode communications by running efficient aggregation code on the network switches that process data transfer, Fei, Marco Canini and their colleagues went a step further by identifying zero results and developing a way to drop transmission without interrupting the synchronization of the parallel computing process.

"Exactly how to exploit sparsity to accelerate distributed training is a challenging problem," says the team. "All nodes need to process data blocks at the same location in a time slot, so we have to coordinate the nodes to ensure that only data blocks in the same location are aggregated. To overcome this, we created an aggregator process to coordinate the workers, instructing them on which block to send next."
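The core idea of transmitting only non-zero blocks can be illustrated with a toy NumPy sketch. This is not OmniReduce itself, whose aggregation runs on programmable network switches; the block size and gradient values here are hypothetical:

```python
import numpy as np

BLOCK = 4  # elements per block (hypothetical size)

def compress(grad):
    """Keep only the non-zero blocks of a gradient, plus their positions."""
    blocks = grad.reshape(-1, BLOCK)
    idx = np.where(np.any(blocks != 0, axis=1))[0]
    return idx, blocks[idx]

def decompress(idx, data, length):
    """Rebuild the dense gradient from the transmitted blocks."""
    out = np.zeros(length)
    out.reshape(-1, BLOCK)[idx] = data
    return out

# A sparse gradient: only 2 of the 4 blocks carry non-zero values
grad = np.array([0, 0, 0, 0, 1.5, 0, 0, 2.0, 0, 0, 0, 0, 0, 3.0, 0, 0])
idx, data = compress(grad)          # transmit 8 values instead of 16
restored = decompress(idx, data, grad.size)
```

Sending block indices alongside the data is what lets all nodes agree on which positions are being aggregated, which is the coordination problem the aggregator process solves in the real system.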

The team demonstrated their OmniReduce scheme on a testbed consisting of an array of graphics processing units (GPUs) and achieved an eight-fold speed-up for typical deep learning tasks.


New imaging, machine-learning methods speed effort to reduce crops’ need for water – University of Illinois News

CHAMPAIGN, Ill. Scientists have developed and deployed a series of new imaging and machine-learning tools to discover attributes that contribute to water-use efficiency in crop plants during photosynthesis and to reveal the genetic basis of variation in those traits.

The findings are described in a series of four research papers led by University of Illinois Urbana-Champaign graduate students Jiayang (Kevin) Xie and Parthiban Prakash, and postdoctoral researchers John Ferguson, Samuel Fernandes and Charles Pignon.

The goal is to breed or engineer crops that are better at conserving water without sacrificing yield, said Andrew Leakey, a professor of plant biology and of crop sciences at the University of Illinois Urbana-Champaign, who directed the research.

Drought stress limits agricultural production more than anything else, Leakey said. And scientists are working to find ways to minimize water loss from plant leaves without decreasing the amount of carbon dioxide the leaves take in.

Plants breathe in carbon dioxide through tiny pores in their leaves called stomata. That carbon dioxide drives photosynthesis and contributes to plant growth. But the stomata also allow moisture to escape in the form of water vapor.

A new approach to analyzing the epidermis layer of plant leaves revealed that the size and shape of the stomata (lighter green pores) in corn leaves strongly influence the crop's water-use efficiency.

Micrograph by Jiayang (Kevin) Xie


"The amount of water vapor and carbon dioxide exchanged between the leaf and atmosphere depends on the number of stomata, their size and how quickly they open or close in response to environmental signals," Leakey said. "If rainfall is low or the air is too hot and dry, there can be insufficient water to meet demand, leading to reduced photosynthesis, productivity and survival."

To better understand this process in plants like corn, sorghum and grasses of the genus Setaria, the team analyzed how the stomata on their leaves influenced plants water-use efficiency.

"We investigated the number, size and speed of closing movements of stomata in these closely related species," Leakey said. "This is very challenging because the traditional methods for measuring these traits are very slow and laborious."

For example, determining stomatal density previously involved manually counting the pores under a microscope. The slowness of this method means scientists are unable to analyze large datasets, Leakey said.

"There are a lot of features of the leaf epidermis that normally don't get measured because it takes too much time," he said. "Or, if they get measured, it's in really small experiments. And you can't discover the genetic basis for a trait with a really small experiment."

To speed the work, Xie took a machine-learning tool originally developed to help self-driving cars navigate complex environments and converted it into an application that could quickly identify, count and measure thousands of cells and cell features in each leaf sample.

Jiayang (Kevin) Xie converted a machine-learning tool originally designed to help self-driving cars navigate complex environments into an application that can quickly analyze features on the surface of plant leaves.

Photo by L. Brian Stauffer


"To do this manually, it would take you several weeks of labor just to count the stomata on a season's worth of leaf samples," Leakey said. "And it would take you months more to manually measure the sizes of the stomata or the sizes of any of the other cells."

The team used sophisticated statistical approaches to identify regions of the genome and lists of genes that likely control variation in the patterning of stomata on the leaf surface. They also used thermal cameras in field and laboratory experiments to quickly assess the temperature of leaves as a proxy for how much water loss was cooling the leaves.

This revealed key links between changes in microscopic anatomy and the physiological or functional performance of the plants, Leakey said.

By comparing leaf characteristics with the plants water-use efficiency in field experiments, the researchers found that the size and shape of the stomata in corn appeared to be more important than had previously been recognized, Leakey said.

Along with the identification of genes that likely contribute to those features, the discovery will inform future efforts to breed or genetically engineer crop plants that use water more efficiently, the researchers said.

The new approach provides an unprecedented view of the structure and function of the outermost layer of plant leaves, Xie said.

"There are so many things we don't know about the characteristics of the epidermis, and this machine-learning algorithm is giving us a much more comprehensive picture," he said. "We can extract a lot more potential data on traits from the images we've taken. This is something people could not have done before."

Leakey is an affiliate of the Carl R. Woese Institute for Genomic Biology at the U. of I. He and his colleagues report their findings in a study published in The Journal of Experimental Botany and in three papers in the journal Plant Physiology (see below).

The National Science Foundation Plant Genome Research Program, the Advanced Research Projects Agency-Energy, the Department of Energy Biosystems Design Program, the Foundation for Food and Agriculture Research Graduate Student Fellows Program, The Agriculture and Food Research Initiative from the U.S. Department of Agriculture National Institute of Food and Agriculture, and the U. of I. Center for Digital Agriculture supported this research.

More:
New imaging, machine-learning methods speed effort to reduce crops' need for water - University of Illinois News

Read More..

The Best Of Our Knowledge #1614: The Rise Of The Machines – WAMC

Today we think nothing of seeing laptops and iPads in the classroom. But there have been attempts at creating so-called teaching machines since the early 20th century. And it's the history of those early teaching machines that Audrey Watters explores in her new book, "Teaching Machines: The History of Personalized Learning."

Audrey Watters is an education technology writer and creator of the blog Hack Education.

So after a discussion about the history of teaching machines, we thought it would be a good idea to take another look at machine learning. It's a very different thing. "Machine learning" and "artificial intelligence" are two terms that were coined in the 1950s but are only now beginning to be put to solving practical problems. In the past few years, machine learning algorithms have been used to automate the interpretation and analysis of clinical chemistry data in a variety of situations in the lab. In the September 2020 issue of the journal Clinical Chemistry, there is a paper on a machine learning approach for the automated interpretation of amino acid profiles in human plasma. The same issue contains an accompanying editorial titled "Machine Learning for the Biochemical Genetics Laboratory." One of the authors of the editorial is Dr. Stephen Master, Chief of the Division of Laboratory Medicine at the Children's Hospital of Philadelphia and an Associate Professor of Pathology and Laboratory Medicine at the Perelman School of Medicine of the University of Pennsylvania. I asked Dr. Master, first of all, what exactly is machine learning, and why would it be significant for the clinical laboratory?
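To make "automated interpretation of amino acid profiles" concrete: one of the simplest machine-learning approaches is to classify a new profile by its distance to the average profile of each diagnostic class. The sketch below is hypothetical, the analyte values, class labels, and method are invented for illustration and are not the approach from the Clinical Chemistry paper:

```python
# Hypothetical sketch of automated profile interpretation via a
# nearest-centroid classifier. All concentrations and labels are invented;
# the actual paper's model and features may differ.

import math

# Synthetic training profiles: each tuple is (analyte1, analyte2, analyte3).
training = {
    "normal":       [(58, 230, 70), (62, 245, 75), (60, 240, 72)],
    "elevated_phe": [(580, 235, 71), (620, 250, 74), (600, 242, 73)],
}

def centroid(vectors):
    """Average the training profiles of one class, analyte by analyte."""
    return [sum(vals) / len(vals) for vals in zip(*vectors)]

centroids = {label: centroid(vecs) for label, vecs in training.items()}

def classify(profile):
    """Return the label whose centroid is nearest in Euclidean distance."""
    return min(centroids, key=lambda label: math.dist(profile, centroids[label]))

print(classify((590, 240, 70)))  # → elevated_phe
print(classify((61, 238, 74)))   # → normal
```

Real laboratory systems use far richer models, but the workflow is the same: learn patterns from labeled profiles, then flag new specimens automatically.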

Okay, so we've done some deep dives into teaching machines and machine learning, let's go for the hat trick and take on virtual reality. That's the topic of today's Academic Minute.

See the original post here:
The Best Of Our Knowledge #1614: The Rise Of The Machines - WAMC

Read More..

Some of the emerging AI And machine Learning trends of 2021 – Floridanewstimes.com

From consumer electronics and smart personal assistants to advanced quantum computing systems and leading-edge medical diagnostic systems, artificial intelligence and machine learning technologies are increasingly finding their way into everything, and they have been hot topics in 2020. According to market researcher IDC, revenue generated by AI hardware, software and services is expected to reach $156.5 billion worldwide this year, up 12.3 percent from 2019. But when it comes to trends in the development and use of AI and ML technologies, it can be easy to lose sight of the forest for the trees. As we approach the end of a turbulent 2020, it is worth looking at how AI and machine learning are being developed and used, not just at the types of applications they are finding their way into.

The growth of AI and machine learning in hyperautomation

Hyperautomation, a concept identified by market research firm Gartner, is the idea that almost anything in an organization that can be automated, such as legacy business processes, should be automated. Also known as digital process automation and intelligent process automation, the concept has seen its adoption accelerated by the pandemic. AI and machine learning are its key components and major drivers. Hyperautomation initiatives cannot succeed on static packaged software alone: the automated business processes must be able to adapt to changing circumstances and respond to unexpected situations. This is where AI, machine learning models and deep learning technology come in, using learning algorithms and models, along with data generated by the automated system, to let the system improve automatically over time and respond to changing business processes and requirements.

Bringing discipline to AI development through AI engineering

According to Gartner's research, only about 53 percent of AI projects successfully make it from prototype to full production. AI initiatives often fail to generate the hoped-for returns because businesses and organizations struggle with system maintainability, scalability and governance when trying to deploy newly developed AI systems and machine learning models. According to Gartner's list of Top Strategic Technology Trends for 2021, businesses and organizations are coming to understand that a robust AI engineering strategy will improve the performance, scalability, interpretability and reliability of AI models and deliver the full value of AI investments.

Link:
Some of the emerging AI And machine Learning trends of 2021 - Floridanewstimes.com

Read More..

Bodo.ai Raises $14 million Series A to Revolutionize Simplicity, Performance and Scale for Data Analytics and Machine Learning – Business Wire

SAN FRANCISCO--(BUSINESS WIRE)--Bodo.ai, the extreme-performance parallel compute platform for data workloads, today announced it has raised $14 million in Series A funding led by Dell Technologies Capital, with participation from Uncorrelated Ventures, Fusion Fund and Candou Ventures.

Founded in 2019 to revolutionize complex data analytics and machine learning applications, Bodo's goal is to make Python a first-class, high-performance and production-ready platform. The company's innovative compiler technology enables customers to solve challenging, large-scale data and machine learning problems at extreme performance and low cost with the simplicity and flexibility of native Python. Validated at 10,000+ cores and petabytes of data, Bodo delivers previously unattainable supercomputing-like performance with linear parallel scalability. By eliminating the need to use new libraries or APIs or rewrite Python into Scala, C++, Java, or GPU code to achieve scalability, Bodo users may achieve a new level of performance and economic efficiency for large-scale ETL, data prep, feature engineering, and AI/ML model training.
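The parallelization Bodo's compiler performs automatically follows the classic split-apply-combine decomposition. The sketch below illustrates that pattern by hand with only the standard library; it is not Bodo's API, and CPython threads will not actually speed up this CPU-bound work, the point is the decomposition that a true parallel engine distributes across cores or machines:

```python
# Illustration of the chunk-and-combine (map-reduce) decomposition that
# parallel compute engines such as Bodo automate. This is NOT Bodo's API;
# it uses only the standard library, and CPython threads will not speed up
# CPU-bound work. The decomposition pattern itself is the point.

from concurrent.futures import ThreadPoolExecutor

def partial_sum(chunk):
    """The per-worker task: aggregate one partition of the data."""
    return sum(x * x for x in chunk)

def parallel_sum_of_squares(data, workers=4):
    # 1. Partition the data into roughly equal chunks.
    size = max(1, len(data) // workers)
    chunks = [data[i:i + size] for i in range(0, len(data), size)]
    # 2. Map: run the per-chunk task concurrently.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        partials = list(pool.map(partial_sum, chunks))
    # 3. Reduce: combine the partial results.
    return sum(partials)

data = list(range(1000))
assert parallel_sum_of_squares(data) == sum(x * x for x in data)
print(parallel_sum_of_squares(data))  # → 332833500
```

Writing this decomposition by hand for real workloads is exactly the rewriting burden, into extra frameworks or distributed APIs, that Bodo says its inferential compiler removes.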

"Big data is getting bigger, and in today's data-driven economy, enterprise customers need speed and scale for their data analytics needs," said Behzad Nasre, co-founder and CEO of Bodo.ai. "Existing workarounds for large-scale data processing, like extra libraries and frameworks, fail to address the underlying scale and performance issues. Bodo not only addresses this, but does so with an approach that requires no rewriting of the original application code."

Python is the second most popular programming language in existence, largely due to its popularity among AI and ML developers and data scientists. However, most developers and data engineers who rely on Python for AI and ML algorithms are hampered by its sub-optimal performance when handling large-scale data. And those who use extensions and frameworks still find their performance falls orders of magnitude short of Bodo's. For example, a large retailer recently achieved more than 100x real-time performance improvement for their mission-critical program metric analysis workloads and saved over 90% on cloud infrastructure costs by using Bodo as opposed to a leading cloud data platform.

"Customers know that parallel computing is the only way to keep up with computational demands for artificial intelligence and machine learning and extend Moore's Law. But such high-performance computing has only been accessible to select experts at large tech companies and government laboratories," added Ehsan Totoni, co-founder and CTO of Bodo.ai. "Our inferential compiler technology automates the parallelization formerly done by performance experts, democratizing compute power for all developers and enterprises. This will have a profound impact on large-scale AI, ML and analytics communities."

Bodo bridges the simplicity-vs-performance gap by delivering compute performance and runtime efficiency with no application rewriting. This will enable hundreds of thousands of Python developers and data scientists to perform near-real-time analytics and unlock new revenue opportunities for customers.

"We see enterprises using more ML and data analytics to drive business insight and growth. There is a nearly constant need for more and better insights at near-real-time," said Daniel Docter, Managing Director, Dell Technologies Capital. "But the exploding growth in data and analytics comes with huge hidden costs: massive infrastructure spend, code rewriting, complexity, and time. We see Bodo attacking these problems head-on, with an elegant approach that works for native Python for scale-out parallel processing. It will change the face of analytics."

For more information visit http://www.bodo.ai.

About Bodo.ai

Founded in 2019, Bodo.ai is an extreme-performance parallel compute platform for data analytics, scaling past 10,000 cores and petabytes of data with unprecedented efficiency and linear scaling. Leveraging unique automatic parallelization and the first inferential compiler, Bodo is helping F500 customers solve some of the world's most massive data analysis problems, and doing so in a fraction of traditional time, complexity, and cost, all while leveraging the simplicity and flexibility of native Python. Developers can deploy Bodo on any infrastructure, from a laptop to a public cloud. Headquartered in San Francisco with offices in Pittsburgh, PA, the team of passionate technologists aims to radically accelerate the world of data analytics. http://bodo.ai #LetsBodo

About Dell Technologies Capital

Dell Technologies Capital is the global venture capital investment arm of Dell Technologies. The investment team backs passionate early stage founders who push the envelope on technology innovation for enterprises. Since inception in 2012, the team has sustained an investment pace of $150 million a year and has invested in more than 125 startups, 52 of which have been acquired and 7 have gone public. Portfolio companies also gain unique access to the go-to-market capabilities of Dell Technologies (Dell, Dell EMC, VMWare, Pivotal, Secureworks). Notable investments include Arista Networks, Cylance, Docusign, Graphcore, JFrog, MongoDB, Netskope, Nutanix, Nuvia, RedisLabs, RiskRecon, and Zscaler. Headquartered in Palo Alto, California, Dell Technologies Capital has offices in Boston, Austin, and Israel. For more information, visit http://www.delltechnologiescapital.com.

Go here to read the rest:
Bodo.ai Raises $14 million Series A to Revolutionize Simplicity, Performance and Scale for Data Analytics and Machine Learning - Business Wire

Read More..

How AI and Machine Learning are changing the tech industry – refreshmiami.com

AI and machine learning have been gaining momentum over the past few years, but recently, with the pandemic, their adoption has accelerated in ways we couldn't imagine. Last year was an extremely difficult year for every imaginable sector of the economy, and it has forced the acceleration of AI.

In this event, we will talk about how AI is changing the tech industry, and how the talent pool is not growing fast enough to meet demand.

Companies across all industries have been scrambling to secure top AI talent from a pool that's not growing fast enough. Even during the economic disruptions and layoffs caused by the COVID-19 pandemic, the demand for AI talent has been strong. Leaders are looking to reduce costs through automation and efficiency, and AI has a real role to play in that effort.

Our panel will be composed of amazing people in the industry:

Koyuki Nakamori, Head of Machine Learning at HeadSpace

Nehar Poddar, Machine Learning Engineer at DEKA Research and Development

Excerpt from:
How AI and Machine Learning are changing the tech industry - refreshmiami.com

Read More..