
Weekly AiThority Roundup: Biggest Machine Learning, AI, Robotic And Automation Updates July Week 05 – AiThority

This is your AI Weekly Roundup. We are covering the top updates from around the world. The updates feature state-of-the-art capabilities in artificial intelligence (AI), machine learning, robotic process automation, fintech, and human-system interactions. We cover the role of the AI Daily Roundup and its applications in various industries and daily life.

UK- and Japan-based crypto startup Sumo Signals Ltd. announced that it has raised US$5.5 million in its recent round of funding, led by prominent Hong Kong-based investor OnDeck Venture. The successful funding round is a clear indication of the company's strong growth prospects, powered by its pioneering AI-based technology.

Thentia, a venture capital-backed and global industry-leading government software-as-a-service (SaaS) provider, announced it has joined the Google Cloud Partner Advantage program. Thentia Cloud can be procured directly through Google Cloud's Independent Software Vendor (ISV) Marketplace.

Merkle, dentsu's leading technology-enabled, data-driven customer experience management (CXM) company, announces the expansion of its EMEA Salesforce practice with the appointment of three new strategic hires.

Chargebee, the leading subscription management platform, announced its Summer 2022 Product Release. The slate of new products and features is focused on enabling high-performing subscription businesses to monetize their existing customers and fend off the growing threats of a tumultuous economy. These new products help businesses build their cash reserves and maintain their customer base at a time when many businesses and their customers are struggling with the realities of inflation and drying up of venture capital, the lingering effects of COVID-19 and a decimated global supply chain.

Mvix, a leading provider of enterprise-grade digital signage solutions, speeds up its development and integration of enterprise business intelligence tools on its cloud-based software Mvix CMS, empowering data sharing for efficiency and scalability. Microsoft Power BI, Tableau, and Klipfolio, top business intelligence (BI) powerhouses with a combined market share of 80 percent, are three of numerous tools slated to offer real-time data and metrics, streamlining workflow and productivity for clients.

[To share your insights with us, please write to sghosh@martechseries.com]

AiT Analyst is a trained researcher with many years of experience in finding and reviewing news. The Analysts provide extensive coverage of major companies and startups in key technology sectors and geographies across the emerging tech landscape.

To connect, please write to AiT Analyst at sghosh@martechseries.com.


Machine Learning Breakthroughs Have Sparked the AI Revolution – InvestorPlace


[Editor's note: Machine Learning Breakthroughs Have Sparked the AI Revolution was previously published in February 2022. It has since been updated to include the most relevant information available.]

It's October 1950. Alan Turing, the genius who cracked the Enigma code and helped end World War II, has just introduced a novel concept.

It's called the Turing Test, and it's aimed at answering the fundamental question: Can machines think?

The world laughs. Machines think for themselves? Not possible.

However, the Turing Test sets in motion decades of research into the emerging field of Artificial Intelligence (AI).

This research is conducted in the world's most prestigious labs by some of the world's smartest people. Collectively, they're working to create a new class of computers and machines that can, indeed, think for themselves.

Fast forward 70 years.

AI is everywhere.

It's in your phones. What do you think powers Siri? How does a phone recognize your face?

It's in your applications. How does Google Maps know directions and optimal routes? How does it make real-time changes based on traffic? And how does Spotify create hyper-personalized playlists or Netflix recommend movies?

AI is on your computers. How does Google suggest personalized search items for you? How do websites use chatbots that seem like real humans?

As it turns out, the world shouldn't have laughed back in 1950.

The great Alan Turing ended up creating a robust foundation upon which seven decades of groundbreaking research has compounded. Ultimately, it resulted in self-thinking computers and machines not just being a thing but being everything.

Make no mistake. This decades-in-the-making AI Revolution is just getting started.

That's because AI is mostly built on what industry insiders call machine learning (ML) and natural language processing (NLP) models. And these models are informed with data.

Accordingly, the more data they have, the better the models get and the more capable the AI becomes.

When I say identity, what do you think of?

If you're like me, you immediately start to think of what makes you, well, you: your height, eye color, what job you have, what car you drive, what shows you like to binge-watch.

In other words, the amount of data associated with each individual identity is both endless and unique.

Those attributes make identity data extremely valuable.

Up until recently, though, enterprises had no idea how to extract value from this robust dataset. That's all changing right now.

Breakthroughs in artificial intelligence and machine-learning technology are enabling companies to turn identity data into more personalized, secure and streamlined user experiences for their customers, employees and partners.

The volume and granularity of data is exploding right now. That's mostly because every object in the world is becoming a data-producing device.

Dumb phones have become smartphones and have started producing a ton of usage data.

Dumb cars have become smart cars and have started producing lots of in-car driving data.

And dumb apps have become smart apps and have started producing heaps of consumer preference data.

Dumb watches have become smartwatches and have started producing bunches of fitness and activity data.

As we've sprinted into the Smart World, the amount of data that AI algorithms have access to has exploded. And it's making them more capable than ever.

Why else has AI started popping up everywhere in recent years? It's because 90% of the world's data was generated in the last two years alone.

More data, better ML and NLP models, smarter AI.

It's that simple.

And guess what? The world isn't going to take any steps back in terms of this smart pivot. No. We love our smartphones, smart cars and smartwatches far too much.

Instead, society will accelerate in this transition. Globally, the world produces about 2.5 exabytes of data per day. By 2025, that number is expected to rise to 463 exabytes.

Let's go back to our process.

More data, better ML and NLP models, smarter AI.

Thus, as the volume of data produced daily soars more than 185X over the next five years, ML and NLP models will get 185X better (more or less). And AI machines will get 185X smarter (more or less).

Folks, the AI Revolution is just getting started.

Most things a human does, a machine will soon be able to do better, faster and cheaper.

Given the advancements AI has made over the past few years with the help of data (and the exponential amount of it yet to come), I'm inclined to believe this.

Eventually, and inevitably, the world will be run by hyperefficient and hyperintelligent AI.

I'm not alone in thinking this. Gartner predicts that 69% of routine office work will be fully automated by 2024. And the World Economic Forum has said that robots will handle 52% of current work tasks by 2025.

The AI Revolution is coming, and it's going to be the biggest you've seen in your lifetime.

You need to be invested in this emerging tech megatrend that promises to change the world forever.

Of course, the question remains: What AI stocks should you start buying right now?

You could play it safe and go with the blue-chip tech giants. All are making inroads with AI and are low-risk, low-reward plays on the AI Revolution. I'm talking Microsoft (MSFT), Alphabet (GOOG), Amazon (AMZN), Adobe (ADBE) and Apple (AAPL).

However, that's not how we do things. We don't like safe; we like best.

At present, enterprise AI software is being used very effectively by Big Tech. And it's being used ineffectively, or not at all, by everyone else.

Today's AI companies are changing that. And the best way to play the AI Revolution is by buying the stocks that are changing the paradigm in which they exist.

We have identified several AI stocks to buy for enormous long-term returns.

Again, these AI stocks aren't the safe way to play the AI Revolution. They're the best way to do it.

One company is pioneering a novel model-driven architecture. Indeed, it represents a promising paradigm shift in the AI application development process. Ultimately, it will democratize the power of AI so that it's no longer a weapon used by Big Tech to crush its opponents.

Essentially, this company has pre-built multiple, highly scalable AI models in its ecosystem. And it allows customers to build their own AI models by simply editing and stacking them atop one another.

Think of building an AI application as a puzzle. You must have the right pieces and directions. In other words, to effectively utilize the power of enterprise AI, customers just piece it together in a way that works best for them.

Equally important, the building of these puzzles is not rocket science. The company does all the hard work of making the actual models. Customers simply have to pick which ones they want to use and decide how they want to use them.

In some instances, coding and data science are still required, but not much. Today's top AI companies make it easy to develop, scale, and apply insights without writing any code.

It's a genius breakthrough to address the widening AI gap between Big Tech and everyone else.

Eventually, every company from every industry and of every size will leverage the power of AI to enhance their business, increase revenues and reduce costs.

Of course, this reality bodes well for AI stocks in the long term.

You just have to know which ones are worth buying and which are not...

On the date of publication, Luke Lango did not have (either directly or indirectly) any positions in the securities mentioned in this article.


21 PPC lessons learned in the age of machine learning automation – Search Engine Land

What you're about to read is not actually from me. It's a compilation of PPC-specific lessons learned by those who actually do the work every day in this age of machine learning automation.

Before diving in, a few notes:

It's simple: a machine cannot optimize toward a goal if there isn't enough data to find patterns.

For example, Google Ads may recommend Maximize Conversions as a bid strategy, BUT the budget is small (like sub-$2,000/mo) and the clicks are expensive.

In a case like this, you have to give it a Smart Bidding strategy goal that is capable of collecting enough data to optimize toward.

So a better option might be to consider Maximize Clicks or Search Impression Share. In small volume accounts, that can make more sense.
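
To make that concrete, here is a minimal back-of-the-envelope check in Python. The budget, CPC, and conversion-rate figures are hypothetical and only illustrate why a sub-$2,000 budget with expensive clicks rarely gives a conversion-based strategy enough data to learn from.

```python
# Rough check (hypothetical numbers, not from the article): does a small budget
# generate enough conversions for a conversion-based Smart Bidding strategy?
monthly_budget = 2000.0   # USD, the "small budget" threshold mentioned above
avg_cpc = 8.0             # assumed cost per click in an expensive vertical
conversion_rate = 0.03    # assumed 3% click-to-conversion rate

expected_clicks = monthly_budget / avg_cpc
expected_conversions = expected_clicks * conversion_rate

print(f"~{expected_clicks:.0f} clicks -> ~{expected_conversions:.1f} conversions/month")
# With only ~7-8 conversions a month, Maximize Conversions has little to learn from,
# so a click- or impression-share-based goal may be the more realistic choice.
```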

The key part of machine learning is the second word: learning.

For a machine to learn what works, it must also learn what doesn't work.

That part can be agonizing.

When launching an initial Responsive Search Ad (RSA), expect the results to underwhelm you. The system needs data to learn the patterns of what works and what doesn't.

It's important for you to set these expectations for yourself and your stakeholders. A real-life client example saw the following results:

As you can see, month two looked far better. Have the proper expectations set!

Many of us who've been in the industry a while weren't taught to manage ad campaigns the way they need to be run now. In fact, it was a completely different mindset.

For example, I was taught to:

Any type of automation relies on proper inputs. Sometimes what would seem to be a simple change could do significant damage to a campaign.

Some of those changes include:

Those are just a few examples, but they all happened and they all messed with a live campaign.

Just remember, all bets are off when any site change happens without your knowledge!

The best advice to follow regarding Recommendations is the following:

Officially defined as the impressions you've received on the Search Network divided by the estimated number of impressions you were eligible to receive, Search Impression Share is basically a gauge to inform you what percentage of the demand you are showing up to compete for.
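
As a quick illustration of that definition, the metric is just a ratio (the numbers below are made up):

```python
# Search Impression Share per the definition above (hypothetical numbers).
impressions_received = 12_500           # impressions your ads actually got
eligible_impressions_estimate = 50_000  # estimate of impressions you were eligible for

search_impression_share = impressions_received / eligible_impressions_estimate
print(f"Search Impression Share: {search_impression_share:.0%}")  # -> 25%
# i.e., you showed up for roughly a quarter of the demand you could have competed for.
```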

This isn't to imply Search Impression Share is the single most important metric. However, you might implement a smart bidding rule with Performance Max or Maximize Conversions, and doing so may negatively impact other metrics (like Search Impression Share).

That alone isn't wrong. But make sure you're both aware of and OK with that.

Sometimes things change. It's your job to stay on top of it. For smart bidding, Target CPA no longer exists for new campaigns. It's now merged with Maximize Conversions.

Smart Shopping and Local campaigns are being automatically updated to Performance Max between July and September 2022. If you're running these campaigns, the best thing you can do is make the update manually yourself (one-click implementation via the Recommendations tab in your account).

Why should you do this?

This doesn't need to be complicated. Just use your favorite tool like Evernote, OneNote, Google Docs/Sheets, etc. Include the following for each campaign:

There are three critical reasons why this is a good idea:

Imagine you're setting up a campaign and loading snippets of an ad. You've got:

Given the above conditions, do you think it would be at all useful to know which combinations performed best? Would it help you to know if a consistent trend or theme emerges? Wouldn't having that knowledge help you come up with even more effective snippets of an ad to test going forward?

Well, too bad, because that's not what you get at the moment.

If you run a large-volume account with a lot of campaigns, then anytime you can provide your inputs in a spreadsheet for a bulk upload, you should do it. Just make sure you do a quality check of any bulk actions taken.
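
A minimal sketch of that kind of pre-upload quality check, assuming a hypothetical CSV bulk sheet with campaign, daily_budget, and final_url columns (the column names are illustrative, not a platform requirement):

```python
import csv

# Hypothetical column names for a bulk-upload sheet; adjust to your own template.
REQUIRED_COLUMNS = {"campaign", "daily_budget", "final_url"}

def check_bulk_sheet(path: str) -> list[str]:
    """Return a list of human-readable problems found in the bulk sheet."""
    problems = []
    with open(path, newline="", encoding="utf-8") as f:
        reader = csv.DictReader(f)
        missing = REQUIRED_COLUMNS - set(reader.fieldnames or [])
        if missing:
            return [f"missing columns: {sorted(missing)}"]
        for i, row in enumerate(reader, start=2):  # row 1 is the header
            if not row["final_url"].startswith("https://"):
                problems.append(f"row {i}: final_url is not https")
            try:
                if float(row["daily_budget"]) <= 0:
                    problems.append(f"row {i}: non-positive daily budget")
            except ValueError:
                problems.append(f"row {i}: daily_budget is not a number")
    return problems

for issue in check_bulk_sheet("bulk_upload.csv"):
    print(issue)
```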

Few things can drag morale down like a steady stream of mundane tasks. Automate whatever you can. That can include:

To an outsider, managing an enterprise-level PPC campaign would seem like having one big pile of money to work with for some high-volume campaigns. That's a nice vision, but the reality is often quite different.

For those who manage those campaigns, it can feel more like 30 SMB accounts. You have different regions with several unique business units (each having separate P&Ls).

The budgets are set, and you cannot go over them. Period.

You also need to ensure campaigns run the whole month, so you can't run out of budget on the 15th.
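
As a rough sketch of the pacing math behind that kind of tracking (all figures are invented for the example):

```python
from datetime import date
import calendar

# Hypothetical monthly figures for one business unit's campaign.
monthly_budget = 9000.0     # hard cap that cannot be exceeded
spend_to_date = 5400.0      # what the campaign has spent so far this month
today = date(2022, 7, 18)   # fixed date so the example is reproducible

days_in_month = calendar.monthrange(today.year, today.month)[1]
ideal_spend = monthly_budget * today.day / days_in_month  # even-pacing target
pace = spend_to_date / ideal_spend

print(f"Ideal spend by day {today.day}: ${ideal_spend:,.0f}")
print(f"Actual spend: ${spend_to_date:,.0f} ({pace:.0%} of pace)")
if pace > 1.1:
    print("Over pace: at this rate the budget runs out before month end.")
elif pace < 0.9:
    print("Under pace: budget risks being left unspent.")
```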

One example is a custom budget tracking report built within Google Data Studio that shows the PPC manager how the budget is tracking in the current month.

Devote 10% of your management efforts (not necessarily budget) to trying something new.

Try a beta (if you have access to it), a new smart bidding strategy, new creative snippets, new landing page, call to action, etc.

If you are required (for example by legal, compliance, branding, executives) to always display a specific message in the first headline, you can place a pin that will only insert your chosen copy in that spot while the remainder of the ad will function as a typical RSA.

Obviously if you pin everything, then the ad is no longer responsive. However, it has its place so when you gotta pin, you gotta pin!

It's simple: the ad platform will perform the heavy lifting to test for the best possible ad snippet combinations submitted by you, to achieve an objective defined by you.

The platform can either perform that heavy lifting to find the best combination of well-crafted ad snippets or garbage ones.

Bottom line: an RSA doesn't negate the need for skilled ad copywriting.

If you've managed campaigns for an organization in a highly regulated industry (healthcare, finance, insurance, education, etc.), you know all about the legal/compliance review and the frustrations that can mount.

Remember, you have your objectives (produce campaigns that perform) and they have theirs (to keep the organization out of trouble).

When it comes to RSA campaigns, do yourself a favor and educate the legal, compliance, and branding teams on:

To use an automotive analogy, think of automation capabilities more like park assist than full self-driving.

For example, you set up a campaign to Bid to Position 2 and then just let it run without giving it a second thought. In the meantime, a new competitor enters the market and showing up in position 2 starts costing you a lot more. Now you're running into budget limitations.

Use automation to do the heavy lifting and handle the mundane tasks (Lesson #11), but don't ignore a campaign once it's set up.

This is related to lesson #5 and cannot be overstated.

For example, you may see a recommendation to reach additional customers at a similar cost per conversion in a remarketing campaign. Take a close look at the audiences being recommended, as you can quickly see a lot of inflated metrics, especially in remarketing.

You know the business far better than any algorithm possibly could. Use that knowledge to guide the machine and ensure it stays pointed in the right direction.

By "some accounts," I'm mostly referring to low-budget campaigns.

Machine learning needs data, and many smaller accounts don't have enough activity to generate it.

For those accounts, just keep it as manual as you can.

Speak with one of your industry peers, and you'll quickly find someone who understands your daily challenges and may have found ways to mitigate them.

Attend conferences and network with people attending the PPC track. Sign up for PPC webinars where tactical campaign management is discussed.

Participate (or just lurk) in social media discussions and groups specific to PPC management.

Many of the mundane tasks (Lesson #11) can be automated now, thus eliminating the need for a person to spend hours on end performing them. That's a good thing; no one really enjoyed doing most of those things anyway.

As more tasks continue down the path of automation, marketers only skilled at the mundane work will become less needed.

On the flip side, this presents a prime opportunity for strategic marketers to become more valuable. Think about it: the machine doing the heavy lifting needs guidance, direction and course-corrective action when necessary.

That requires the marketer to:

Opinions expressed in this article are those of the guest author and not necessarily Search Engine Land.


U.S. Army Research Lab Expands Artificial Intelligence and Machine Learning Contract with Palantir for $99.9M – Yahoo Finance

DENVER, July 28, 2022--(BUSINESS WIRE)--Palantir Technologies Inc. (NYSE: PLTR) today announced that it will expand its work with the U.S. Army Research Laboratory to implement data and artificial intelligence (AI)/machine learning (ML) capabilities for users across the combatant commands (COCOMs). The contract totals $99.9 million over two years.

Palantir first partnered with the Army Research Lab to provide those on the frontlines with state-of-the-art operational data and AI capabilities in 2018. Palantir's platform has supported the integration, management, and deployment of relevant data and AI model training to all of the Armed Services, COCOMs, and special operators. This extension grows Palantir's operational RDT&E work to more users globally.

"Maintaining a leading edge through technology is foundational to our mission and partnership with the Army Research Laboratory," said Akash Jain, President of Palantir USG. "Our nation's armed forces require best-in-class software to fulfill their missions today while rapidly iterating on the capabilities they will need for tomorrow's fight. We are honored to support this critical work by teaming up to deliver the most advanced operational AI capabilities available with dozens of commercial and public sector partners."

By working with the U.S. Army Research Lab, integrating with partner vendors, and iterating with users on the front lines, Palantir's software platforms will continue to quickly implement advanced AI capabilities against some of DOD's most pressing problem sets. "We're looking forward to fielding our newest ML, Edge, and Space technologies alongside our U.S. military partners," said Shannon Clark, Senior Vice President of Innovation, Federal. "These technologies will enable operators in the field to leverage AI insights to make decisions across many fused domains. From outer space to the sea floor, and everything in between."

About Palantir Technologies Inc.

Foundational software of tomorrow. Delivered today. Additional information is available at https://www.palantir.com.

Forward-Looking Statements

This press release contains forward-looking statements within the meaning of Section 27A of the Securities Act of 1933, as amended, and Section 21E of the Securities Exchange Act of 1934, as amended. These statements may relate to, but are not limited to, Palantir's expectations regarding the amount and the terms of the contract and the expected benefits of our software platforms. Forward-looking statements are inherently subject to risks and uncertainties, some of which cannot be predicted or quantified. Forward-looking statements are based on information available at the time those statements are made and were based on current expectations as well as the beliefs and assumptions of management as of that time with respect to future events. These statements are subject to risks and uncertainties, many of which involve factors or circumstances that are beyond our control. These risks and uncertainties include our ability to meet the unique needs of our customer; the failure of our platforms to satisfy our customer or perform as desired; the frequency or severity of any software and implementation errors; our platforms' reliability; and our customer's ability to modify or terminate the contract. Additional information regarding these and other risks and uncertainties is included in the filings we make with the Securities and Exchange Commission from time to time. Except as required by law, we do not undertake any obligation to publicly update or revise any forward-looking statement, whether as a result of new information, future developments, or otherwise.

View source version on businesswire.com: https://www.businesswire.com/news/home/20220728005319/en/

Contacts

Media Contact: Lisa Gordon, media@palantir.com


Machine learning hiring levels in the pharmaceutical industry rose in June 2022 – Pharmaceutical Technology

The proportion of pharmaceutical companies hiring for machine learning related positions rose in June 2022 compared with the equivalent month last year, with 26.4% of the companies included in our analysis recruiting for at least one such position.

This latest figure was higher than the 24.1% of companies who were hiring for machine learning related jobs a year ago and an increase compared to the figure of 26.3% in May 2022.

When it came to the share of all job openings linked to machine learning, related job postings dropped in June 2022 from May 2022, with 1.2% of newly posted job advertisements being linked to the topic.

This latest figure was the same as the 1.2% of newly advertised jobs that were linked to machine learning in the equivalent month a year ago.

Machine learning is one of the topics that GlobalData, from whom our data for this article is taken, has identified as a key disruptive force facing companies in the coming years. Companies that excel and invest in these areas now are thought to be better prepared for the future business landscape and better equipped to survive unforeseen challenges.

Our analysis of the data shows that pharmaceutical companies are currently hiring for machine learning jobs at a rate equal to the average for all companies within GlobalData's job analytics database. The average among all companies stood at 1.2% in June 2022.

GlobalData's job analytics database tracks the daily hiring patterns of thousands of companies across the world, drawing in jobs as they're posted and tagging them with additional layers of data on everything from the seniority of each position to whether a job is linked to wider industry trends.



Machine Learning is the Wrong Way to Extract Data From Most Documents – hackernoon.com

Documents have spent decades stubbornly guarding their contents against software. In the late 1960s, the first OCR (optical character recognition) techniques turned scanned documents into raw text. By indexing and searching the text from these digitized documents, software sped up formerly laborious legal discovery and research projects.

Today, Google, Microsoft, and Amazon provide high-quality OCR as part of their cloud services offerings. But documents remain underused in software toolchains, and valuable data languish in trillions of PDFs. The challenge has shifted from identifying text in documents to turning them into structured data suitable for direct consumption by software-based workflows or direct storage into a system of record.

The prevailing assumption is that machine learning, often embellished as AI, is the best way to achieve this, superseding outdated and brittle template-based techniques. This assumption is misguided. The best way to turn the vast majority of documents into structured data is to use the next generation of powerful, flexible templates that find data in a document much as a person would.

The promise of machine learning is that you can train a model once on a large corpus of representative documents and then smoothly generalize to out-of-sample document layouts without retraining. For example, you want to train an ML model on companies A, B, and C's home insurance policies, and then extract the same data from similar documents issued by company Z. This is very difficult to achieve in practice for three reasons:

Your goal is often to extract dozens or hundreds of individual data elements from each document. A model at the document level of granularity will frequently miss some of these values, and those errors are quite difficult to detect. Once your model attempts to extract those dozens or hundreds of data elements from out-of-sample document types, you get an explosion of opportunities for generalization failure.

While some simple documents might have a flat key/value ontology, most will have a substructure: think of a list of deficiencies in a home inspection report or the set of transactions in a bank statement. In some cases you'll even encounter complex nested substructures: think of a list of insurance policies, each with a claims history. You either need your machine learning model to infer these hierarchies, or you need to manually parameterize the model with these hierarchies and the overall desired ontology before training.

A document is anything that fits on one or more sheets of paper and contains data! Documents are really just bags of diverse and arbitrary data representations. Tables, labels, free text, sections, images, headers and footers: you name it and a document can use it to encode data. There's no guarantee that two documents, even with the same semantics, will use the same representational tools.

It's no surprise that ML-based document parsing projects can take months, require tons of data up front, lead to unimpressive results, and in general be "grueling" (to directly quote a participant in one such project with a leading vendor in the space).

These issues strongly suggest that the appropriate angle of attack for structuring documents is at the data element level rather than the whole-document level. In other words, we need to extract data from tables, labels, and free text; not from a holistic document. And at the data element level, we need powerful tools to express the relationship between the universe of representational modes found in documents and the data structures useful to software.

So let's get back to templates.

Historically, templates have had an impoverished means of expressing that mapping between representational mode and data structure. For example, they might instruct: go to page 3 and return any text within these box coordinates. This breaks down immediately for any number of reasons, including if:

None of these minor changes to the document layout would faze a human reader.

For software to successfully structure complex documents, you want a solution that sidesteps the battle of months-long ML projects versus brittle templates. Instead, let's build a document-specific query language that (when appropriate) embeds ML at the data element, rather than document, level.

First, you want primitives (i.e., instructions) in the language that describe representational modes (like a label/value pair or repeating subsections) and stay resilient to typical layout variations. For example, if you say:

Find a row starting with this word and grab the lowest dollar amount from it

You want row recognition that's resilient to whitespace variation, vertical jitter, cover pages, and document skew, and you want powerful type detection and filtering.
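
To illustrate what such a primitive does conceptually, here is a toy Python version of a resilient row lookup. It is not SenseML syntax, just a sketch of the behavior described above.

```python
import re

# Toy "row" primitive: find the line that starts with an anchor word, tolerating
# extra whitespace, case differences, and cover text, then return the lowest
# dollar amount found on that line.
def lowest_dollar_in_row(ocr_lines: list[str], anchor: str) -> float | None:
    for line in ocr_lines:
        if line.strip().lower().startswith(anchor.lower()):
            amounts = [float(m.replace(",", ""))
                       for m in re.findall(r"\$([\d,]+\.?\d*)", line)]
            if amounts:
                return min(amounts)
    return None

page = [
    "ACME INSURANCE          Policy 4821-A",   # cover text the primitive skips
    "  Premium     Standard $1,240.00   Discounted $980.50",
    "Deductible    $500.00",
]
print(lowest_dollar_in_row(page, "Premium"))  # -> 980.5
```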

Second, for data representations with a visual or natural language component, such as tables, checkboxes, and paragraphs of free text, the primitives should embed ML. At this level of analysis, Google, Amazon, Microsoft, and OpenAI all have tools that work quite well off the shelf.

Sensible takes just that approach: blending powerful and flexible templates with machine learning. With SenseML, our JSON-based query language for documents, you can extract structured data from most document layouts in minutes with just a single reference sample. No need for thousands of training documents and months spent tweaking algorithms, and no need to write hundreds of rules to account for tiny layout differences.

SenseML's wide range of primitives allows you to quickly map representational modes to useful data structures, including complex nested substructures. In cases where the primitives do not use ML, they behave deterministically to provide strong behavior and accuracy guarantees. And even for the non-deterministic output of our ML-powered primitives, such as tables, validation rules can identify errors in the ML output.
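
As an illustration of what a validation rule over ML table output might check (a generic sketch, not Sensible's actual rule syntax):

```python
# Sanity-check an ML-extracted table of bank-statement transactions.
def validate_transactions(rows: list[dict]) -> list[str]:
    errors = []
    for i, row in enumerate(rows):
        if row.get("amount") is None:
            errors.append(f"row {i}: missing amount")
        if not row.get("date"):
            errors.append(f"row {i}: missing date")
    total = sum(r["amount"] for r in rows if r.get("amount") is not None)
    # Cross-field rule: extracted line items should reconcile with the stated total.
    # 1530.25 stands in for the statement's printed total in this example.
    if abs(total - 1530.25) > 0.01:
        errors.append(f"line items sum to {total:.2f}, not the document total")
    return errors

rows = [{"date": "2022-07-01", "amount": 1000.00},
        {"date": "2022-07-02", "amount": 530.25}]
print(validate_transactions(rows))  # -> []
```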

What this means is that document parsing with Sensible is incredibly fast, transparent, and flexible. If you want to add a field to a template or fix an error, it's straightforward to do so.

The tradeoff for Sensible's rapid time to value is that each meaningfully distinct document layout requires a separate template. But this tradeoff turns out to be not so bad in the real world. In most business use cases, there are a countable number of layouts (e.g., dozens of trucking carriers generating rate confirmations in the USA; a handful of software systems generating home inspection reports). Our customers don't create thousands of document templates; most generate tremendous value with just a few.

Of course, for every widely used tax form, insurance policy, and verification of employment, we collectively only need to create a template once. That's why we've introduced the Sensible Configuration Library.

This open-source library is a collection of over 100 of the most frequently parsed document layouts, from auto insurance policies to ACORD forms, loss runs, tax forms, and more. If you have a document that's of broad interest, we'll do the onboarding for you and then make it freely available to the public. It will also be free for you to use for up to 150 extractions per month on our free account tier.

We believe that this hybrid approach is the path to transparently and efficiently solving the problem of turning documents into structured data for a wide range of industries, including logistics, financial services, insurance, and healthcare. If you'd like to join us on this journey and connect your documents to software, schedule a demo or sign up for a free account!


Deep Learning Laptops We’ve Reviewed (2022) – Analytics India Magazine

Whether you are an amateur or a professional, there are certain key components to focus on while purchasing a laptop for deep learning work, such as RAM, CPU, storage and operating system.

Laptops with more RAM ensure faster processing, while those with a GPU have an additional advantage: the GPU speeds up the training process and helps reduce model training time. Another essential component for deep learning laptops is the graphics card, which is used to render higher-dimensional images.

Here is a detailed list of top laptops for deep learning

Lambda Labs recognises Tensorbook as the Deep Learning Laptop.

Tensorbook is equipped with GeForce RTX 3080 Max-Q 16GB GPU, VRAM-16 GB GDDR6 and is backed by Intel Core i7-11800H along with RAM of 64 GB 3200 MHz DDR4 and storage of 2 TB NVMe PCIe 4.0.


According to Lambda Labs, the Tensorbook's GeForce RTX 3080 is capable of delivering model training performance up to 4x faster than Apple's M1 Max and 10x faster than Google Colab instances. It is also equipped with pre-installed machine learning tools such as PyTorch, TensorFlow, CUDA, and cuDNN.
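
Assuming a stack like the Tensorbook's preinstalled PyTorch with CUDA, a quick sanity check that the GPU is actually visible for training might look like this:

```python
# Verify that PyTorch can see and use the laptop's NVIDIA GPU.
import torch

print(torch.cuda.is_available())                 # True if the RTX GPU is usable
if torch.cuda.is_available():
    print(torch.cuda.get_device_name(0))         # e.g. the installed GeForce RTX model
    x = torch.randn(4096, 4096, device="cuda")   # allocate a tensor on the GPU
    print((x @ x).norm())                        # run a matmul to exercise it
```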


Razer Blade 15 RTX3080

Razer Blade 15 RTX3080 is an equally good choice in terms of deep learning operations.

The laptop is powered by an NVIDIA GeForce RTX 3080 Ti along with an Intel Core i7-11800H. Intel Turbo Boost Technology can boost the i7 processor up to 5.1GHz, and it comes with an ultra-fast 360Hz FHD display.


Razer Blade 15 RTX3080 has a battery life of up to 5 hours.

The laptop efficiently dissipates heat through the evaporation and condensation of an internal fluid and keeps it running soundlessly and coolly even under intense loads owing to features like vapour chamber cooling for maximised thermal performance.

It is a powerhouse laptop combining both NVIDIA and AMD: it is powered by an AMD Ryzen 9 5900HX CPU and a GeForce RTX 3080 GPU, along with an ultrafast panel up to 300Hz/3ms. It has a 90Wh battery with rapid Type-C charging and video playback of up to 12 hours.


Dell Inspiron i5577 is equipped with a 7th Generation Intel Quad-core CPU which makes it suitable for CPU-intensive projects.


The laptop has NVIDIA GTX 1050 graphics with 4GB GDDR5 video memory. The user can choose from hard drive options of up to a 1TB conventional HDD or a PCIe NVMe 512GB SSD for plenty of storage, stability and responsive performance. It is backed by a 6-cell 74Whr battery.

The ASUS ROG Strix G17 laptop is equipped with an RTX 3070 GPU along with 8GB VRAM and an 8-core Ryzen 9, which makes it one of the most suitable laptops for machine learning. It also has a 165Hz 3ms refresh rate and a 90Wh battery which allows usage of up to a solid 10 hours.


The Eluktronics MAX-17 claims to be the lightest 17.3" gaming laptop in the industry. It is powered by an Intel Core i7-10870H with eight cores and 16 threads (2.2-5.0GHz Turbo Boost), along with an NVIDIA GeForce RTX 2070 Super with 8GB GDDR6 VRAM (Max-P TDP: 115 Watts).


In terms of memory and storage configuration, the laptop is equipped with 1TB Ultra Performance PCIe NVMe SSD + 16GB DDR4 2933MHz RAM.

ASUS TUF Gaming F17 is yet another impressive option for deep learning operations. It is powered by the latest 10th Gen Intel Core i7 CPU with 8 cores and 16 threads to tear through serious gaming, streaming and heavy-duty multitasking. It also has a GeForce GTX 1650 Ti GPU with IPS-level displays up to 144Hz.


The laptop also features a larger 48Wh battery that allows up to 12.3 hours of video playback and up to 7.3 hours of web browsing. In terms of durability, it claims to be equipped with TUF's signature military-grade durability.

The Razer Blade 15 laptop boasts of 11th Gen Intel Core i7-11800H 8 Core (2.3GHz/4.6GHz) and NVIDIA GeForce RTX 3060 along with 6GB DDR6 VRAM.


This laptop comes with a built-in 65WHr rechargeable lithium-ion polymer battery that lasts up to 6 hours.


Global Machine Learning Market is Expected to Grow at a CAGR of 39.2 % by 2028 – Digital Journal

According to the latest research by SkyQuest Technology, the Global Machine Learning Market was valued at US$16.2 billion in 2021, and it is expected to reach a market size of US$164.05 billion by 2028, at a CAGR of 39.2% over the forecast period 2022-2028. The research provides up-to-date Machine Learning Market analysis of the current market landscape, latest trends, drivers, and overall market environment.

Machine learning (ML), a type of artificial intelligence (AI), allows software systems to forecast outcomes more accurately without being explicitly programmed to do so. Machine learning algorithms use historical data as input to anticipate new output values. As organizations adopt more advanced security frameworks, the global machine learning market is anticipated to grow as machine learning becomes a prominent trend in security analytics. Due to the massive amount of data being generated and communicated over several networks, cyber professionals struggle considerably to identify and assess potential cyber threats and assaults.
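
As a generic illustration of that "historical data in, predicted values out" idea (a minimal scikit-learn sketch, not tied to any vendor in the report):

```python
# Fit a simple model on historical observations, then anticipate a new output value.
from sklearn.linear_model import LinearRegression

# Historical data: hours of network activity vs. number of flagged events (made-up).
X = [[1.0], [2.0], [3.0], [4.0]]   # input feature
y = [2.1, 4.0, 6.2, 8.1]           # observed output values

model = LinearRegression().fit(X, y)   # learn the pattern from history
print(model.predict([[5.0]]))          # anticipate a new output value (~10)
```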

Machine-learning algorithms can assist businesses and security teams in anticipating, detecting, and recognising cyber-attacks more quickly as these risks become more widespread and sophisticated. For example, supply chain attacks increased by 42% in the first quarter of 2021 in the US, affecting up to 7,000,000 people. For instance, AT&T and IBM claim that the promise of edge computing and 5G wireless networking for the digital revolution will be proven. They have created virtual worlds that, when paired with IBM hybrid cloud and AI technologies, allow business clients to truly experience the possibilities of an AT&T connection.

Computer vision is a cutting-edge technique that combines machine learning and deep learning for medical imaging diagnosis. This has been adopted by the Microsoft InnerEye programme, which focuses on image diagnostic tools for image analysis. For instance, using minute samples of linguistic data (obtained via clinical verbal cognition tests), an AI model created by a team of researchers from IBM and Pfizer can forecast the eventual onset of Alzheimer's disease in healthy persons with 71 percent accuracy.

Read Market Research Report, Global Machine Learning Market by Component (Solutions and Services), Enterprise Size (SMEs and Large Enterprises), Deployment (Cloud, On-Premise), End-User [Healthcare, Retail, IT and Telecommunications, Banking, Financial Services and Insurance (BFSI), Automotive & Transportation, Advertising & Media, Manufacturing, Others (Energy & Utilities, Etc.)], and Region - Forecast and Analysis 2022-2028, by SkyQuest


The large enterprises segment dominated the machine learning market in 2021. This is because data science and artificial intelligence technologies are being used more often to incorporate quantitative insights into business operations. For instance, under a contract between Pitney Bowes and IBM, IBM will offer managed infrastructure, IT automation, and machine learning services to help Pitney Bowes convert and adopt hybrid cloud computing to support its global business strategy and goals.

Small and midsized firms are expected to grow considerably throughout the anticipated timeframe. It is projected that AI and ML will be the main technologies allowing SMEs to reduce ICT investments and access digital resources. For instance, small and medium-sized firms (SMEs) and other organizations are reportedly already using IPwe's technology, including the IPwe Platform, IPwe Registry, and Global Patent Marketplace.

The healthcare sector had the biggest share of the global machine learning market in 2021, owing to the industry's leading market players conducting rapid research and development, as well as the partnerships formed in an effort to increase their market share. For instance, per the terms of the two businesses' signed definitive agreement, Francisco Partners will buy IBM's healthcare data and analytics assets that are presently a part of the Watson Health company. Francisco Partners is an established worldwide investment company with a focus on working with IT startups. Francisco Partners acquired a wide range of assets, including Health Insights, MarketScan, Clinical Development, Social Program Management, Micromedex, and imaging software services.

The prominent market players are constantly adopting various innovation and growth strategies to capture more market share. The key market players are IBM Corporation, SAP SE, Oracle Corporation, Hewlett Packard Enterprise Company, Microsoft Corporation, Amazon Inc., Intel Corporation, Fair Isaac Corporation, SAS Institute Inc., BigML, Inc., among others.

The report published by SkyQuest Technology Consulting provides in-depth qualitative insights, historical data, and verifiable projections about Machine Learning Market Revenue. The projections featured in the report have been derived using proven research methodologies and assumptions.


About Us: SkyQuest Technology Group is a global market intelligence, innovation management and commercialization organization that connects innovation to new markets, networks and collaborators for achieving Sustainable Development Goals.



Enko Raises $70M Series C to Commercialize Safe Crop Protection through Machine Learning-based Discovery Technology – PR Newswire

Round led by Nufarm will advance company's digital discovery platform and pipeline of leading crop health molecules

MYSTIC, Conn., July 27, 2022 /PRNewswire/ --Enko, the crop health company, today announced $70 million in Series C funding, bringing the company's overall capital raised to date to $140 million. Global agrochemical company Nufarm led the round as part of an expanded partnership to bring innovative products to their core markets.

Enko will use the new funds to advance its product pipeline of crop protection chemistries that target critical pests and weeds through novel pathways. The funds will also expand Enko's ENKOMPASSTM technology platform, which combines DNA-encoded library screening with machine learning and structure-based design to quickly find new, better performing and more targeted chemistries. Since its start in 2017, Enko has generated hundreds of leading molecules across all categories of crop protection. Enko's product pipeline is currently led by a range of herbicides that are demonstrating breakthrough performance compared to industry standards like glyphosate.

"Reliance on outdated chemistries has led to rampant resistance that is threatening farmer livelihoods and our food supply," said Enko CEO and founder Jacqueline Heard. "Enko's digital platform massively increases the scale and discovery rate for new solutions, screening out off-target organisms from the get-go. The result is bringing safe and effective products to growers better, faster and cheaper. The need for this innovation has never been more urgent."

To move the industry forward amidst stalled R&D, Enko is collaborating with Syngenta and Bayer on promising new chemistries. Enko's target-based approach has generated its industry-leading discoveries in roughly half the time and with fewer resources than conventional R&D methods.

On expanding their partnership, Nufarm Managing Director and CEO Greg Hunt said, "We were early investors in Enko and have followed the performance of their pipeline in the lab and field over the last two years with increased interest. As an agricultural innovator, Nufarm's strategy is to partner with like-minded companies who recognize that innovation and technology are the future for sustainable agriculture practices. We were delighted to invest in this Series C financing round."

In addition to Nufarm, its investors include Anterra Capital, Taher Gozal, the Bill & Melinda Gates Foundation, Eight Roads Ventures, Finistere Ventures, Novalis LifeSciences, Germin8 Ventures, TO Ventures Food, Endeavor8, Alumni Ventures Group and Rabo Food & Agri Innovation Fund.

About Enko: Enko designs safe and sustainable solutions to farmers' biggest crop threats today, from pest resistance to new diseases, by applying the latest drug discovery and development approaches from pharma to agriculture. Enko is headquartered in Mystic, Connecticut. For more information, visit enkochem.com.

About Nufarm: Nufarm is a global crop protection and seed technology company established over 100 years ago. It is listed on the Australian Securities Exchange (ASX:NUF) with its head office in Melbourne, Australia. As an agricultural innovator, Nufarm is focused on innovative crop protection and seed technology solutions. It has introduced to the market Omega-3 canola and has an expanding portfolio of unique GHG biofuel solutions. Nufarm has manufacturing and marketing operations in Australia, New Zealand, Asia, Europe and North America.

Media Contacts: Mission North for Enko, [email protected]

SOURCE Enko


New $10M NSF-funded institute will get to the CORE of data science – EurekAlert

[Image: The core of EnCORE: co-principal investigators include (from l to r) Yusu Wang, Barna Saha (the principal investigator), Kamalika Chaudhuri, (top row) Arya Mazumdar and Sanjoy Dasgupta. Credit: University of California San Diego]

A new National Science Foundation initiative has created a $10 million institute led by computer and data scientists at the University of California San Diego that aims to transform the core fundamentals of the rapidly emerging field of data science.

Called The Institute for Emerging CORE Methods in Data Science (EnCORE), the institute will be housed in the Department of Computer Science and Engineering (CSE), in collaboration with the Halıcıoğlu Data Science Institute (HDSI), and will tackle a set of important problems in the theoretical foundations of data science.

UC San Diego team members will work with researchers from three partnering institutions University of Pennsylvania, University of Texas at Austin and University of California, Los Angeles to transform four core aspects of data science: complexity of data, optimization, responsible computing, and education and engagement.

EnCORE will join three other NSF-funded institutes in the country dedicated to the exploration of data science through the NSF's Transdisciplinary Research in Principles of Data Science Phase II (TRIPODS) program.

The NSF TRIPODS Institutes will bring advances in data science theory that improve health care, manufacturing, and many other applications and industries that use data for decision-making, said NSF Division Director for Electrical, Communications and Cyber Systems Shekhar Bhansali.

UC San Diego Chancellor Pradeep K. Khosla said UC San Diego's highly collaborative, multidisciplinary community is the perfect environment to launch and develop EnCORE. "We have a long history of successful cross-disciplinary collaboration on and off campus, with renowned research institutions across the nation. UC San Diego is also home to the San Diego Supercomputer Center, the HDSI, and leading researchers in artificial intelligence and machine learning," Khosla said. "We have the capacity to house and analyze a wide variety of massive and complex data sets by some of the most brilliant minds of our time, and then share that knowledge with the world."

Barna Saha, the EnCORE project lead and an associate professor in UC San Diego's Department of Computer Science and Engineering and HDSI, said: "We envision EnCORE will become a hub of theoretical research in computing and data science in Southern California. This kind of national institute was lacking in this region, which has a lot of talent. This will fill a much-needed gap."

The other UC San Diego faculty members in the institute include professors Kamalika Chaudhuri and Sanjoy Dasgupta from CSE; Arya Mazumdar (EnCORE co-principal investigator), Gal Mishne, and Yusu Wang from HDSI; and Fan Chung Graham from Mathematics. Saura Naderi of HDSI will spearhead the outreach activities of the institute.

"Professor Barna Saha has assembled a team of exceptional scholars across UC San Diego and across the nation to explore the underpinnings of data science. This kind of institute, focused on groundbreaking research, innovative education and effective outreach, will be a model of interdisciplinary initiatives for years to come," said Department of Computer Science and Engineering Chair Sorin Lerner.

CORE Pillars of Data Science

The EnCORE Institute seeks to investigate and transform three research aspects of data science: the complexity of data, optimization, and responsible computing.

"EnCORE represents exactly the kind of talent convergence that is necessary to address the emerging societal need for responsible use of data. As a campus hub for data science, HDSI is proud of a compelling talent pool to work together in advancing the field," said HDSI founding director Rajesh K. Gupta.

Team members expressed excitement about the opportunity of interdisciplinary research that the institute will provide. They will work together to improve privacy-preserving machine learning and robust learning, and to integrate geometric and topological ideas with algorithms and machine learning methodologies to tame the complexity in modern data. They envision a new era in optimization with the presence of strong statistical and computational components adding new challenges.

"One of the exciting research thrusts at EnCORE is data science for accelerating scientific discoveries in domain sciences," said Gal Mishne, a professor at HDSI. As part of EnCORE, the team will be developing fast, robust, low-distortion visualization tools for real-world data in collaboration with domain experts. In addition, the team will be developing geometric data analysis tools for neuroscience, a field which is undergoing an explosion of data at multiple scales.

From K-12 and Beyond

A distinctive aspect of EnCORE will be the "E" component: education and engagement.

The institute will engage students at all levels, from K-12 to postdoctoral students, and junior faculty and conduct extensive outreach activities at all of its four sites.

The geographic span of the institute in three regions of the United States will be a benefit as the institute executes its outreach plan, which includes regular workshops, events, hiring of students and postdoctoral students. Online and joint courses between the partner institutions will also be offered.

Activities to reach out to high school, middle school and elementary students in Southern California are also part of the institute's plan, with the first engagement planned for this summer with the Sweetwater Union High School District to teach students about the foundations of data science.

There will also be mentorship and training opportunities with researchers affiliated with EnCORE, helping to create a pipeline of data scientists and broadening the reach and impact of the field. Additionally, collaboration with industry is being planned.

Mazumdar, an associate professor in the HDSI and an affiliated faculty member in CSE, said the team has already put much thought and effort into developing data science curricula across all levels. "We aim to create a generation of experts while being mindful of the needs of society and recognizing the demands of industry," he said.

"We have made connections with numerous industry partners, including prominent data science techs and also with local Southern California industries including start-ups, who will be actively engaged with the institute and keep us informed about their needs," Mazumdar added.

An interdisciplinary, diverse field and team

Data science has footprints in computer science, mathematics, statistics and engineering. In that spirit, the researchers from the four participating institutions who comprise the core team have diverse and varied backgrounds from four disciplines.

"Data science is a new, and a very interdisciplinary, area. To make significant progress in data science you need expertise from these diverse disciplines. And it's very hard to find experts in all these areas under one department," said Saha. "To make progress in data science, you need collaborations from across the disciplines and a range of expertise. I think this institute will provide this opportunity."

And the institute will further diversity in science, as EnCORE is being spearheaded by women who are leaders in their fields.
