Category Archives: Machine Learning

Abacus.AI Named to 2022 CB Insights AI 100 & the Forbes AI 50 – PR Newswire

Abacus.AI has been recognized for its achievements and developments in artificial intelligence

SAN FRANCISCO, June 30, 2022 /PRNewswire/ -- Abacus.AI, the first end-to-end Artificial Intelligence (AI)/Machine Learning (ML) platform, announced that it has been named to the 2022 CB Insights AI 100 List of Most Promising AI Startups, an annual list of the 100 most promising AI companies in the world.

Utilizing the CB Insights platform, the CB Insights team picked 100 private market vendors from a pool of over 7,000 companies, including applicants and nominees. Vendors were chosen based on factors including R&D activity, proprietary Mosaic Scores, market potential, business relationships, investor profiles, news sentiment analysis, competitive landscape, team strength, and tech novelty. They also reviewed thousands of analyst briefings submitted by applicants.

Abacus.AI has been specifically recognized within the ML platforms category, featuring companies that are developing tools to support AI development.

This recognition comes at an exciting time for Abacus.AI, as the company was recognized the week prior in the Forbes AI 50, a list of the top 50 private companies in North America that are utilizing AI to transform the future.

Forbes' fourth annual AI 50 list, produced in partnership with Sequoia Capital, features the most compelling companies based on their utilization of AI technologies. Forbes assessed hundreds of submitted entries from the U.S. and Canada. From these entries, their venture capital (VC) partners applied an algorithm that identified more than 120 companies with the highest quantitative scores. The top 50 companies were then hand-picked by a panel of expert AI judges.

"Over the course of the decade, there has been a significant paradigm shift in AI. Our interactions with AI have exponentially increased and companies have begun to see its efficiency in common enterprise use-cases. The challenge, of course, is looking for ways to seamlessly integrate AI within their products in a swift and cost-effective manner," said Bindu Reddy, Co-founder and CEO of Abacus.AI. "We still have a long way to go but I'm exceptionally proud of the progress of the Abacus.AI team in building a unified end to end platform that enables organizations to fully realize their AI, machine learning, and deep learning needs. It is a true honor and privilege to earn recognition from Forbes and CB Insights and stand alongside some of my industry peers."

About Abacus.AI

Abacus.AI is the world's first autonomous cloud AI platform that handles all aspects of machine and deep learning at an enterprise scale. They provide customizable, end-to-end autonomous AI services that can be used to set up data pipelines, specify custom machine learning transformations, and train, deploy, and monitor models.

Abacus.AI specializes in several use-case specific workflows including churn prediction, personalization, forecasting, NLP, and anomaly detection. The company features a world-class research team that has invented several neural architecture search methods that can create custom neural networks from datasets based on a specific use-case. Abacus.AI has been adopted by world-class organizations, several of which are Fortune 500 companies.

About CB Insights

CB Insights builds software that enables the world's best companies to discover, understand, and make technology decisions with confidence. By marrying data, expert insights, and work management tools, clients can manage their end-to-end technology decision-making process with CB Insights. To learn more, please visit www.cbinsights.com.

SOURCE Abacus.AI

Read more:
Abacus.AI Named to 2022 CB Insights AI 100 & the Forbes AI 50 - PR Newswire

Outlines of Machine Learning (ML) Platforms Market 2022 with Trends, Analysis by Regions, Type, Application – Designer Women

Machine Learning (ML) Platforms Market 2022-2026:

The Machine Learning (ML) Platforms market report provides comprehensive information that is a valuable source of insight for business strategists covering the decade 2016-2026. On the basis of historical data, the report identifies key segments and their sub-segments, along with revenue and demand-and-supply data. Considering the market's technological breakthroughs, the Machine Learning (ML) Platforms industry is likely to emerge as a commendable platform for new market investors.

The complete value chain and downstream and upstream essentials are scrutinized in this report. Essential trends such as globalization, growth, fragmentation, regulation, and ecological concerns are examined. The report covers technical data, manufacturing plant analysis, and raw material source analysis for the Machine Learning (ML) Platforms industry, and explains which products have the highest penetration, their profit margins, and their R&D status. It makes future projections based on analysis of market subdivisions, including global market size by product category, end-user application, and region.

Get Sample Report: https://www.marketresearchupdate.com/sample/182384

This Machine Learning (ML) Platforms Market report covers manufacturer data, including shipments, prices, revenue, gross profit, interview records, and business distribution. These data help the consumer understand the competitors better.

Topmost Leading Manufacturers Covered in this report: Palantir, Microsoft, MathWorks, SAS, Databricks, Alteryx, H2O.ai, TIBCO Software, IBM, Dataiku, Domino, Altair, Google, RapidMiner, DataRobot, Anaconda, KNIME

Product Segment Analysis:

Cloud-based
On-premises

On the Basis of Application:

Small and Medium Enterprises (SMEs)
Large Enterprises

Get Discount @ https://www.marketresearchupdate.com/discount/182384

Regional Analysis For Machine Learning (ML) Platforms Market

North America (the United States, Canada, and Mexico)
Europe (Germany, France, UK, Russia, and Italy)
Asia-Pacific (China, Japan, Korea, India, and Southeast Asia)
South America (Brazil, Argentina, Colombia, etc.)
The Middle East and Africa (Saudi Arabia, UAE, Egypt, Nigeria, and South Africa)

The objectives of the report are:

To analyze and forecast the market size of the Machine Learning (ML) Platforms industry in the global market.
To study the global key players, SWOT analysis, value, and global market share for leading players.
To determine, explain, and forecast the market by type, end use, and region.
To analyze the market potential and advantage, opportunity and challenge, restraints and risks of key global regions.
To identify significant trends and factors driving or restraining market growth.
To analyze the opportunities in the market for stakeholders by identifying high-growth segments.
To critically analyze each submarket in terms of its individual growth trend and contribution to the market.
To understand competitive developments such as agreements, expansions, new product launches, and acquisitions in the market.
To strategically profile the key players and comprehensively analyze their growth strategies.

View Full Report @ https://www.marketresearchupdate.com/industry-growth/global-machine-learning-ml-platforms-industry-182384

Finally, the study details the major challenges that are going to impact market growth. The report also provides comprehensive details about business opportunities for key stakeholders to grow their business and raise revenues in precise verticals. The report will aid companies existing in, or intending to join, this market in analyzing the various aspects of this domain before investing or expanding their business in the Machine Learning (ML) Platforms market.

Contact Us: sales@marketresearchupdate.com

Read more from the original source:
Outlines of Machine Learning (ML) Platforms Market 2022 with Trends, Analysis by Regions, Type, Application - Designer Women

NTU students to send machine learning software to International Space Station – The Straits Times

SINGAPORE - Five Nanyang Technological University (NTU) students have achieved the next best thing to being on the International Space Station (ISS) - their machine learning software is going up to the home of astronauts for three months later this year.

The students will be running the show from their laptops - testing the software, which was built to predict hardware disruptions on the ISS, satellites or other spacecraft. In the worst cases, such disruptions can cause these space vehicles to go off course or crash.

Called single event upsets, these hardware disruptions tend to afflict sensitive electrical components in space, such as memory devices on semiconductors.

The disruptions happen when highly charged particles from the sun or nearby constellations strike such sensitive electronics.

Energised solar particles tend to come from sunspots and large explosions from the sun's surface that release radiation.

"The space environment is very hostile towards electronic devices," said Mr Eng Chong Yock, 22, a first-year student at NTU's School of Electrical and Electronic Engineeringand one of the team members who built the software, called Cremer.

The students see the predictive software as their contribution to protecting the trove of space experiment data that is electronically stored on the ISS. The space station is a giant orbiting lab where astronauts regularly conduct research in areas such as human health and growing crops in space, as well as biomedical studies.

Between 2000 and 2020, astronauts conducted nearly 3,000 experiments on the ISS.

NTU team member Archit Gupta, 20, a second-year student from the School of Computer Science and Engineering, said: "The purpose of the ISS is to collect experimental data, and if (single event upsets) happen, then the sanctity of the data gets compromised and the experiment is wasted."

Cremer was named after an existing software called Creme, which also addresses single event upsets.

Mr Gupta said: "We wanted to build a better version of it, and hence we named it Cremer."

The NTU team is getting to test its machine learning model in the ISS after it emerged as the champion at a competition earlier this month, which called upon tertiary students in South-east Asia and Taiwan to develop innovative ways to use artificial intelligence for space applications.

The competition, called AI Space Challenge, was organised by four space tech companies - including the home-grown Zenith Intellutions, a firm that provides consultancy services for advanced satellite development and related technologies.

The other members of the NTU team are third-year business student Sim See Min, 22, third-year mechanical engineering student Deon Lim, 24, and second-year electrical and electronic engineering student Rashna Ahmed, 21.

Ms Sim, who is specialising in analytics, said the team members complemented one another well.

She said: "Archit was more familiar with coding and algorithms. Chong Yock, Rashna and Deon are more familiar with electrical engineering and physics. I helped to add value during the pitching part of the competition."

See the original post:
NTU students to send machine learning software to International Space Station - The Straits Times

Inside ‘Everyday AI’ and the machine learning-heavy future – SiliconANGLE News

Artificial intelligence is being used for everything from automating workflows to assisting customers and even creating art.

Dataiku Ltd. calls this "Everyday AI." It's a systematic approach to embedding AI into the organization in a way that makes it part of the routine of doing business. Dataiku has teamed up with cloud-based data warehousing giant Snowflake Inc. to set organizations up for Everyday AI and the machine learning-heavy future.

"We believe that AI will become so pervasive in all of the business processes, all the decision-making that organizations have to go through, and that it's no longer this special thing that we talk about," said Kurt Muehmel (pictured, right), chief customer officer of Dataiku. "It's the day-to-day life of our businesses. And we can't do that without partners like Snowflake, because they're bringing together all of that data and ensuring that there is the computational horsepower behind that to drive it."

Muehmel and Ahmad Khan (pictured, left), head of artificial intelligence and machine learning strategy at Snowflake, spoke with theCUBE industry analysts Lisa Martin and Dave Vellante at Snowflake Summit, during an exclusive broadcast on theCUBE, SiliconANGLE Media's livestreaming studio. They discussed Everyday AI, making AI scalable and accessible, scaling data science, and more. (* Disclosure below.)

One of the biggest issues in AI, historically, has been the amount of data and processing power it takes to train and run machine learning models. Dataiku and Snowflake took advantage of the scalable nature of cloud computing using Snowflake's infrastructure and, using push-down optimization, made AI more accessible and easier to manage.

"Any kind of large-scale data processing is automatically pushed down by Dataiku into Snowflake's scalable infrastructure," Khan explained. "So you don't get into things like memory issues or situations where your pipeline is running overnight and it doesn't finish in time."

The AI focus relates to two big announcements Snowflake made during the summit involving its Snowpark and Streamlit products, including the ability to run Python in Snowflake and easily incorporate models into Dataiku.

"You can now, as a Python developer, bring the processing to where the data lives rather than move the data out to where the processing lives," Khan said. "The predictions that are coming out of models that are being trained by Dataiku are then being used downstream by these data applications for most of our customers. I can write a complete data application without writing a single line of JavaScript, CSS or HTML. I can write it completely in Python, which makes me super excited as a Python developer."

Here's the complete video interview, part of SiliconANGLE's and theCUBE's coverage of the Snowflake Summit event:

(* Disclosure: TheCUBE is a paid media partner for the Snowflake Summit event. Neither Snowflake Inc., the sponsor for theCUBE's event coverage, nor other sponsors have editorial control over content on theCUBE or SiliconANGLE.)

More here:
Inside 'Everyday AI' and the machine learning-heavy future - SiliconANGLE News

Using Machine Learning to Automate Kubernetes Optimization – The New Stack – thenewstack.io

Brian Likosar

Brian is an open source geek with a passion for working at the intersection of people and technology. Throughout his career, he's been involved in open source, whether that was with Linux, Ansible and OpenShift/Kubernetes while at Red Hat, Apache Kafka while at Confluent, or Apache Flink while at AWS. Currently a senior solutions architect at StormForge, he is based in the Chicago area and enjoys horror, sports, live music and theme parks.

Note: This is the third of a five-part series covering Kubernetes resource management and optimization. In this article, we explain how machine learning can be used to manage Kubernetes resources efficiently. Previous articles explained Kubernetes resource types and requests and limits.

As Kubernetes has become the de facto standard for application container orchestration, it has also raised vital questions about optimization strategies and best practices. One of the reasons organizations adopt Kubernetes is to improve efficiency, even while scaling up and down to accommodate changing workloads. But the same fine-grained control that makes Kubernetes so flexible also makes it challenging to effectively tune and optimize.

In this article, we'll explain how machine learning can be used to automate tuning of these resources and ensure efficient scaling for variable workloads.

Optimizing applications for Kubernetes is largely a matter of ensuring that the code uses its underlying resources, namely CPU and memory, as efficiently as possible. That means ensuring performance that meets or exceeds service-level objectives at the lowest possible cost and with minimal effort.

When creating a cluster, we can configure the use of two primary resources, memory and CPU, at the container level. Namely, we can set requests and limits for how much of these resources our application can use. We can think of those resource settings as our input variables, and the output in terms of performance, reliability and resource usage (or cost) of running our application. As the number of containers increases, the number of variables also increases, and with that, the overall complexity of cluster management and system optimization increases exponentially.

We can think of Kubernetes configuration as an equation with resource settings as our variables and cost, performance and reliability as our outcomes.

To further complicate matters, different resource parameters are interdependent. Changing one parameter may have unexpected effects on cluster performance and efficiency. This means that manually determining the precise configurations for optimal performance is an impossible task, unless you have unlimited time and Kubernetes experts.

If we do not set custom values for resources during the container deployment, Kubernetes automatically assigns these values. The challenge here is that Kubernetes is quite generous with its resources to prevent two situations: service failure due to an out-of-memory (OOM) error and unreasonably slow performance due to CPU throttling. However, using the default configurations to create a cloud-based cluster will result in unreasonably high cloud costs without guaranteeing sufficient performance.

This all becomes even more complex when we seek to manage multiple parameters for several clusters. For optimizing an environment's worth of metrics, a machine learning system can be an integral addition.

There are two general approaches to machine learning-based optimization, each of which provides value in a different way. First, experimentation-based optimization can be done in a non-prod environment using a variety of scenarios to emulate possible production scenarios. Second, observation-based optimization can be performed either in prod or non-prod by observing actual system behavior. These two approaches are described next.

Optimizing through experimentation is a powerful, science-based approach because we can try any possible scenario, measure the outcomes, adjust our variables and try again. Since experimentation takes place in a non-prod environment, we're only limited by the scenarios we can imagine and the time and effort needed to perform these experiments. If experimentation is done manually, the time and effort needed can be overwhelming. That's where machine learning and automation come in.

Let's explore how experimentation-based optimization works in practice.

To set up an experiment, we must first identify which variables (also called parameters) can be tuned. These are typically CPU and memory requests and limits, replicas and application-specific parameters such as JVM heap size and garbage collection settings.

Some ML optimization solutions can scan your cluster to automatically identify configurable parameters. This scanning process also captures the cluster's current, or baseline, values as a starting point for our experiment.

Next, you must specify your goals. In other words, which metrics are you trying to minimize or maximize? In general, the goal will consist of multiple metrics representing trade-offs, such as performance versus cost. For example, you may want to maximize throughput while minimizing resource costs.

Some optimization solutions will allow you to apply a weighting to each optimization goal, as performance may be more important than cost in some situations and vice versa. Additionally, you may want to specify boundaries for each goal. For instance, you might not want to even consider any scenarios that result in performance below a particular threshold. Providing these guardrails will help to improve the speed and efficiency of the experimentation process.

Here are some considerations for selecting the right metrics for your optimization goals:

Of course, these are just a few examples. Determining the proper metrics to prioritize requires communication between developers and those responsible for business operations. Determine the organization's primary goals. Then examine how the technology can achieve these goals and what it requires to do so. Finally, establish a plan that emphasizes the metrics that best accommodate the balance of cost and function.

With an experimentation-based approach, we need to establish the scenarios to optimize for and build those scenarios into a load test. This might be a range of expected user traffic or a specific scenario like a retail holiday-based spike in traffic. This performance test will be used during the experimentation process to simulate production load.

Once we've set up our experiment with optimization goals and tunable parameters, we can kick off the experiment. An experiment consists of multiple trials, with your optimization solution iterating through the following steps for each trial:

The machine learning engine uses the results of each trial to build a model representing the multidimensional parameter space. In this space, it can examine the parameters in relation to one another. With each iteration, the ML engine moves closer to identifying the configurations that optimize the goal metrics.

While machine learning automatically recommends the configuration that will result in the optimal outcomes, additional analysis can be done once the experiment is complete. For example, you can visualize the trade-offs between two different goals, see which parameters have a significant impact on outcomes and which matter less.

Results are often surprising and can lead to key architectural improvements, for example, determining that a larger number of smaller replicas is more efficient than a smaller number of heavier replicas.

Experiment results can be visualized and analyzed to fully understand system behavior.

While experimentation-based optimization is powerful for analyzing a wide range of scenarios, it's impossible to anticipate every possible situation. Additionally, highly variable user traffic means that an optimal configuration at one point in time may not be optimal as things change. Kubernetes autoscalers can help, but they are based on historical usage and fail to take application performance into account.

This is where observation-based optimization can help. Let's see how it works.

Depending on what optimization solution you're using, configuring an application for observation-based optimization may consist of the following steps:

Once configured, the machine learning engine begins analyzing observability data collected from Prometheus, Datadog or other observability tools to understand actual resource usage and application performance trends. The system then begins making recommendations at the interval specified during configuration.

If you specified automatic implementation of recommendations during configuration, the optimization solution will automatically patch deployments with recommended configurations as they are issued. If you selected manual deployment, you can view each recommendation, including container-level details, before deciding whether to approve it.

As you may have noted, observation-based optimization is simpler than experimentation-based approaches. It provides value faster with less effort, but on the other hand, experimentation-based optimization is more powerful and can provide deep application insights that aren't possible using an observation-based approach.

Which approach to use shouldn't be an either/or decision; both approaches have their place and can work together to close the gap between prod and non-prod. Here are some guidelines to consider:

Using both experimentation-based and observation-based approaches creates a virtuous cycle of systematic, continuous optimization.

Optimizing our Kubernetes environment to maximize efficiency (performance versus cost), scale intelligently and achieve our business goals requires:

For small environments, this task is arduous. For an organization running apps on Kubernetes at scale, it is likely already beyond the scope of manual labor.

Fortunately, machine learning can bridge the automation gap and provide powerful insights for optimizing a Kubernetes environment at every level.

StormForge provides a solution that uses machine learning to optimize based on both observation (using observability data) and experimentation (using performance-testing data).

To try StormForge in your environment, you can request a free trial here and experience how complete optimization does not need to be a complete headache.

Stay tuned for future articles in this series where we'll explain how to tackle specific challenges involved in optimizing Java apps and databases running in containers.

The New Stack is a wholly owned subsidiary of Insight Partners, an investor in the following companies mentioned in this article: StormForge.

Feature image via Pixabay.

View original post here:
Using Machine Learning to Automate Kubernetes Optimization – The New Stack - thenewstack.io

Snowflake is trying to bring machine learning to the everyman – TechRadar

Snowflake has set out plans to help democratize access to machine learning (ML) resources by eliminating complexities for non-expert customers.

At its annual user conference, Snowflake Summit, the database company has made a number of announcements designed to facilitate the uptake of machine learning. Chief among them, enhanced support for Python (the language in which many ML products are written) and a new app marketplace that allows partners to monetize their models.

"Our objective is to make it as easy as possible for customers to leverage advanced ML models without having to build from scratch, because that requires a huge amount of expertise," said Tal Shaked, who heads up ML at Snowflake.

"Through projects like Snowflake Marketplace, we want to give customers a way to run these kinds of models against their data, both at scale and in a secure way."

Although machine learning is a decades-old concept, only within the last few years have advances in compute, storage, software and other technologies paved the way for widespread adoption.

And even still, the majority of innovation and expertise is pooled disproportionately among a small minority of companies, like Google and Meta.

The ambition at Snowflake is to open up access to the opportunities available at the cutting edge of machine learning through a partnership- and ecosystem-driven approach.

Shaked, who worked across a range of machine learning projects at Google before joining Snowflake, explained that customers will gain access to the foundational resources, on top of which they can make small optimizations for their specific use cases.

For example, a sophisticated natural language processing (NLP) model developed by the likes of OpenAI could act as the general-purpose foundation for a fast food customer looking to develop an ML-powered ordering system, he suggested. In this scenario, the customer is involved in none of the training and tuning of the underlying model, but still reaps all the benefits of the technology.

"There's so much innovation happening within the field of ML and we want to bring that into Snowflake in the form of integrations," he told TechRadar Pro. "It's about asking how we can integrate with these providers so our customers can do the fine-tuning without needing to hire a bunch of PhDs."

This sentiment was echoed earlier in the day by Benoit Dageville, co-founder of Snowflake, who spoke about the importance of sharing expertise across the customer and partner ecosystem.

"Democratizing ML is an important aspect of what we are trying to do. We're becoming an ML platform, but not just where you build it and use it for yourself; the revolution is in the sharing of expertise."

"It's no longer just the Googles and Metas of this world using this technology, because we're making it easy to share."

Disclaimer: Our flights and accommodation for Snowflake Summit 2022 were funded by Snowflake, but the organization had no editorial control over the content of this article.

See the rest here:
Snowflake is trying to bring machine learning to the everyman - TechRadar

Advances in AI and machine learning could lead to better health care: lawyers – Lexpert

Of course, transparency and privacy concerns are significant, she notes, but if the information from our public health care system benefits everyone, is it inefficient to ask for consent for every use?

On the other hand, cybersecurity is another essential consideration, as we've come to learn that there are a lot of malevolent actors out there, says Miller Olafsson, with the potential ability to hack into centralized systems as part of a ransomware attack or other threat.

Even in its more basic uses, the potential of AI and machine learning is enormous. But the tricky part of using it in the health care sector is the need to have access to incredible amounts of data while at the same time understanding the sensitive nature of the data collected.

For artificial intelligence to be used in systems, procedures, or devices, you need access to data, and getting that data, particularly personal health information, is very challenging, says Carole Piovesan, managing partner at INQ Law in Toronto.

She points to the developing legal frameworks in Europe and North America for artificial intelligence and privacy legislation more generally. Lawyers working with start-up companies or health care organizations to build AI systems must help them stay within the parameters of existing laws, says Piovesan, provide guidance on best practices for whatever may come down the line, and help them deal with the potential risks.

More:
Advances in AI and machine learning could lead to better health care: lawyers - Lexpert

Datatonic Wins Google Cloud Specialization Partner of the Year Award for Machine Learning – PR Newswire

LONDON, June 15, 2022 /PRNewswire/ -- Datatonic, a leader for Data + AI consulting on Google Cloud, today announced it has received the 2021 Google Cloud Specialization Partner of the Year award for Machine Learning.

Datatonic was recognized for the company's achievements in the Google Cloud ecosystem, helping joint customers scale their Machine Learning (ML) capabilities with Machine Learning Operations (MLOps) and achieve business impact with transformational ML solutions.

Datatonic has continuously invested in expanding their MLOps expertise, from defining what "good" MLOps looks like, to helping clients make their ML workloads faster, scalable, and more efficient. In just the past year, they have built high-performing MLOps platforms for global clients across the Telecommunications, Media, and e-Commerce sectors, enabling them to seamlessly leverage MLOps best practices across their teams.

Their recently open-sourced MLOps Turbo Templates, co-developed with Google Cloud's Vertex AI Pipelines product team, showcase Datatonic's experience implementing MLOps solutions, and Google Cloud's technical excellence to help teams get started with MLOps even faster.

"We're delighted with this recognition from our partners at Google Cloud. It's amazing to see our team go from strength to strength at the forefront of cutting-edge technology with Google Cloud and MLOps. We're proud to be driving continuous improvements to the tech stack in partnership with Google Cloud, and to drive impact and scalability with our customers, from increasing ROI in data and AI spending to unlocking new revenue streams." - Louis Decuypere - CEO, Datatonic

"Google Cloud Specializations recognize partner excellence and proven customer success in a particular product area or industry," said Nina Harding, Global Chief, Partner Programs and Strategy, Google Cloud. "Based on their certified, repeatable customer success and strong technical capabilities, we're proud to recognize Datatonic as Specialization Partner of the Year for Machine Learning."

Datatonic is a data consultancy enabling companies to make better business decisions with the power of Modern Data Stack and MLOps. Its services empower clients to deepen their understanding of consumers, increase competitive advantages, and unlock operational efficiencies by building cloud-native data foundations and accelerating high-impact analytics and machine learning use cases.

Logo - https://mma.prnewswire.com/media/1839415/Datatonic_Logo.jpg

For enquiries about new projects, get in touch at [emailprotected]. For media/press enquiries, contact Krisztina Gyure ([emailprotected]).

SOURCE Datatonic Ltd

View original post here:
Datatonic Wins Google Cloud Specialization Partner of the Year Award for Machine Learning - PR Newswire

Machine Learning to Enable Positive Change: An Interview with Adam Benzion – Elektor

"Machine learning can enable positive change in society," says Adam Benzion, Chief Experience Officer at Edge Impulse. Read on to learn how the company is preventing unethical uses of its ML/AI development platform.

Priscilla Haring-Kuipers: What Ethics in Electronics are you are working on?

Adam Benzion: At Edge Impulse, we try to connect our work to doing good in the world as a core value of our culture and operating philosophy. Our founders, Zach Shelby and Jan Jongboom, define this as "machine learning can enable positive change in society," and we are dedicated to supporting applications for good. This is fundamental to what we do and how we do it. We invest our resources to support initiatives like UN Covid-19 Detect & Protect, Data Science Africa, and wildlife conservation with Smart Parks, Wildlabs, and ConservationX.

This also means we have a responsibility to prevent unethical uses of our ML/AI development platform. When Edge Impulse launched in January 2020, we decided to require a Responsible AI License for our users, which prevents use for criminal purposes, surveillance, or harmful military or police applications. We have had a couple of cases where we turned down a project that was not compatible with this license. There are also many positive uses for ML in governmental and defense applications, which we do support as compatible with our values.

We also joined 1% for the Planet, pledging to donate 1% of our revenue to support nonprofit organizations focused on the environment. I personally lead an initiative that focuses on elephant conservation, where we have partnered with an organization called Smart Parks and helped develop a new AI-powered tracking collar that can last for eight years and be used to understand how elephants communicate with each other. This is now deployed in parks across Mozambique.

Haring-Kuipers: What is the most important ethical question in your field?

Benzion: There are a lot of ethical issues with AI being used in population control, human recognition and tracking, let alone AI-powered weaponry. Especially where we touch human safety and dignity, AI-powered applications must be carefully evaluated, legislated and regulated. We dream of automation, fun magical experiences, and human-assisted technologies that do things better, faster and at a lesser cost. That's the good AI dream, and that's what we all want to build. In a perfect world, we should all be able to vote on the rules and regulations that govern AI.

Haring-Kuipers: What would you like to include in an Electronics Code of Ethics?

Benzion: We need to look at how AI impacts human rights, and at machine accountability: when AI-powered machines fail, as in the case of autonomous driving, who takes the blame? Without universal guidelines to support us, it is up to every company in this field to find its core values and boundaries so we can all benefit from this exciting new wave.

Haring-Kuipers: An impossible choice ... The most important question before building anything is: A) Should I build this? B) Can I build this? C) How can I build this?

Benzion: A. Within reason, you can build almost anything, so ask yourself: Is the effort vs. outcome worth your precious time?

Priscilla Haring-Kuipers writes about technology from a social science perspective. She is especially interested in technology supporting the good in humanity and a firm believer in effect research. She has an MSc in Media Psychology and makes This Is Not Rocket Science happen.

More:
Machine Learning to Enable Positive Change: An Interview with Adam Benzion - Elektor

Machine learning-led decarbonisation platform Ecolibrium launches in the UK – Yahoo Finance

The advisory and climate tech-led sustainability solution has opened a new London HQ after raising $5m in a pre-Series A funding round, to support growing demand from commercial and industrial UK real estate owners striving to meet net zero carbon targets

Ecolibrium's Head of Commercial Real Estate Yash Kapila (left) and CEO Chintan Soni (right) will lead the business's UK expansion from its new London HQ. Image credit: Max Lacome

UK expansion builds on considerable success in Asia Pacific, where Ecolibrium's technology has been deployed across 50 million sq ft by globally renowned brands including Amazon, Fiat, Honeywell, Thomson Reuters, Tata Power, and the Delhi Metro

The $5m pre-Series A funding round was co-led by Amit Bhatia's Swordfish Investments and Shravin Bharti Mittal's Unbound venture capital firm

Launches in the UK today having already signed its first commercial contract with Integral, real estate giant JLL's engineering and facilities service business

LONDON, June 13, 2022 /PRNewswire/ -- Machine learning-led decarbonisation platform Ecolibrium has today launched its revolutionary sustainability solution in the UK, as the race to reduce carbon emissions accelerates across the built environment.

Founded in 2008 by entrepreneur brothers Chintan and Harit Soni at IIM Ahmedabad's Centre for Innovation, Incubation and Entrepreneurship in India, Ecolibrium provides expert advisory as well as technology-driven sustainability solutions to enable businesses in commercial and industrial real estate to reduce energy consumption and ultimately achieve their net zero carbon ambitions.

Relocating its global headquarters to London, Ecolibrium has raised $5m in a pre-Series A funding round as it looks to expand its international footprint to the UK. The round was co-led by Amit Bhatia's Swordfish Investments and Shravin Bharti Mittal's Unbound venture capital firm, alongside several strategic investors.

Ecolibrium launches in the UK today having already signed its first commercial contract with Integral, JLL's UK engineering and facilities service business.

The fundraising and UK expansion builds on Ecolibrium's considerable success in Asia Pacific, where its technology is being used across 50 million sq ft by more than 150 companies including Amazon, Fiat, Honeywell, Thomson Reuters, Tata Power, and the Delhi Metro. An annual reduction of 5-15% in carbon footprint has been achieved to date by companies which have deployed Ecolibrium's technology.

Ecolibrium has also strengthened its senior UK management team, as it prepares to roll out its green platform across the UK, by hiring facilities and asset management veteran Yash Kapila as its new head of commercial real estate. Kapila previously held senior leadership positions with JLL across the APAC and EMEA regions.

Introducing SmartSense

At the heart of Ecolibrium's offer is its sustainability-led technology product SmartSense, which assimilates thousands of internet of things (IoT) data points from across a facility's entire energy infrastructure.

This information is then channelled through Ecolibrium's proprietary machine learning algorithms, which have been developed over 10 years by their in-house subject matter experts. Customers can visualise the data through a bespoke user interface that provides actionable insights and a blueprint for achieving operational excellence, sustainability targets, and healthy buildings.

This connected infrastructure generates a granular view of an asset's carbon footprint, unlocking inefficiencies and empowering smart decision-making, while driving a programme of continuous improvement to deliver empirical and tangible sustainability and productivity gains.

Preparing for future regulation

Quality environmental data and proof points are also providing a distinct business advantage at this time of increasing regulatory requirements that require corporates to disclose ESG and sustainability performance. Ecolibrium will work closely with customers to lead the way in shaping their ESG governance.

According to Deloitte, with a minimum Grade B Energy Performance Certificate (EPC) requirement anticipated by 2030, 80% of London office stock will need to be upgraded, an equivalent of 15 million sq ft per annum.

Research from the World Economic Forum has found that the built environment is responsible for 40% of global energy consumption and 33% of greenhouse gas emissions, with one-fifth of the world's largest 2,000 companies adopting net zero strategies by 2050 or earlier. Technology holds the key to meeting this challenge, with Ecolibrium and other sustainability-focused changemakers leading the decarbonisation drive.

Chintan Soni, Chief Executive Officer at Ecolibrium, said:"Our mission is to create a balance between people, planet and profit and our technology addresses each of these objectives, leading businesses to sustainable prosperity. There is no doubt the world is facing a climate emergency, and we must act now to decarbonise and protect our planet for future generations.

"By using our proprietary machine learning-led technology and deep in-house expertise, Ecolibrium can help commercial and industrial real estate owners to deliver against ESG objectives, as companies awaken to the fact that urgent action must be taken to reduce emissions and achieve net zero carbon targets in the built environment.

"Our goal is to partner with companies and coach them to work smarter, make critical decisions more quickly and consume less. And, by doing this at scale, Ecolibrium will make a significant impact on the carbon footprint of commercial and industrial assets, globally."

The UK expansion has been supported by the Department for International Trade's Global Entrepreneur Programme. The programme has provided invaluable assistance in setting up Ecolibrium's London headquarters and scaling in the UK market.

In turn, Ecolibrium is supporting the growth of UK innovation, promoting green job creation, and providing tangible economic benefits, as part of the country's wider transition to a more sustainable future.

Minister for Investment Lord Grimstone said: "Tackling climate change is crucial in our quest for a cleaner and green future, something investment will play an important part in.

"That's why I'm pleased to see Ecolibrium's expansion to the UK. Not only will the investment provide a revolutionary sustainability solution to reduce carbon emissions across various sectors, it is a continued sign of the UK as a leading inward investment destination, with innovation and expertise in our arsenal".

About Ecolibrium

Ecolibrium is a machine learning-led decarbonisation platform balancing people, planet and profit to deliver sustainable prosperity for businesses.

Founded in 2008 by entrepreneur brothers Chintan and Harit Soni, Ecolibrium provides expert advisory as well as technology-driven sustainability solutions to enable commercial and industrial real estate owners to reduce energy consumption and ultimately achieve their net zero carbon ambitions.

Ecolibrium's flagship technology product SmartSense is currently being used across 50 million sq ft by more than 150 companies including JLL, Amazon, Fiat, Honeywell, Thomson Reuters, Tata Power, and the Delhi Metro. SmartSense collects real-time information on assets, operational data and critical metrics using internet of things (IoT) technology. This intelligence is then channelled through Ecolibrium's proprietary machine learning algorithms to visualise data and provide actionable insights to help companies make transformative changes to their sustainability goals.

For more information, visit: http://www.ecolibrium.io

For press enquiries, contact: FTI Consulting: ecolibrium@fticonsulting.com, +44 (0) 2037271000

Photo - https://mma.prnewswire.com/media/1837227/Ecolibrium_Yash_Kapila_and_Chintan_Soni.jpg

Cision

View original content to download multimedia:https://www.prnewswire.com/news-releases/machine-learning-led-decarbonisation-platform-ecolibrium-launches-in-the-uk-301566340.html

SOURCE Ecolibrium

See the original post here:
Machine learning-led decarbonisation platform Ecolibrium launches in the UK - Yahoo Finance