Category Archives: Machine Learning

dotData Receives APN Machine Learning Competency Partner of the Year Award – WFMZ Allentown

SAN MATEO, Calif., March 25, 2020 /PRNewswire/ -- dotData, focused on delivering full-cycle data science automation and operationalization for the enterprise, today announced that Amazon Web Services (AWS) has awarded dotData the APN Machine Learning (ML) Competency Partner of the Year Award for 2019.

The award recognizes dotData's rapid growth and success in the enterprise AI/ML market and its contribution to the AWS business in 2019. This award is a testament to the dotData platform's ability to significantly accelerate and simplify the development of new AI/ML use cases and deliver insights to enterprise customers. The award was announced today at the AWS Partner Summit Tokyo, taking place virtually from March 25 to April 10, 2020.

dotData announced in February 2020 that it had achieved AWS ML Competency status, only eight months after joining the AWS Partner Network (APN). The certification recognizes dotData as an APN Partner that accelerates the full-cycle ML and data science process, and validates that dotData has deep expertise in artificial intelligence (AI) and ML on AWS and can deliver its solutions seamlessly on AWS.

dotData provides solutions designed to improve the productivity of data science projects, which traditionally require extensive manual effort from valuable and scarce enterprise resources. The platform automates the full life-cycle of the data science process, from raw business data through feature engineering to implementation of ML in production, utilizing its proprietary AI technologies.

dotData's AI-powered feature engineering automatically applies data transformation, cleansing, normalization, aggregation, and combination, and transforms hundreds of tables with complex relationships and billions of rows into a single feature table, automating the most manual parts of data science projects.
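The general pattern is easy to illustrate. Below is a minimal pandas sketch of relational feature aggregation, the basic idea behind automated feature engineering; the table names, columns, and aggregates are hypothetical, and dotData's actual algorithms are proprietary.

```python
import pandas as pd

# Hypothetical entity table and related transactional table.
customers = pd.DataFrame({"customer_id": [1, 2], "segment": ["retail", "smb"]})
transactions = pd.DataFrame({
    "customer_id": [1, 1, 2],
    "amount": [120.0, 80.0, 300.0],
})

# Aggregate the transactional table down to one row per customer...
features = (
    transactions.groupby("customer_id")["amount"]
    .agg(["sum", "mean", "count"])
    .add_prefix("amount_")
    .reset_index()
)

# ...then join onto the entity table to produce a single feature table.
feature_table = customers.merge(features, on="customer_id", how="left")
print(feature_table)
```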

"We are honored and proud to receive this award which recognizes our commitment to making AI and ML accessible to as many people in the enterprise as possible and our success in helping our enterprise customers meet their business goals," said Ryohei Fujimaki, founder and CEO of dotData. "As an APN ML Competency partner we have been able to deliver an outstanding product that dramatically accelerates the AI and ML initiatives of AWS users and maximizes their business impacts. We look forward to contributing to our customers' success bycollaborating with AWS."

AWS ML Competency Partners provide solutions that help organizations solve their data challenges and enable ML and data science workflows. The program is designed to highlight APN Partners who have demonstrated technical proficiency in specialized solution areas and helps customers find the most qualified organizations with deep expertise and proven customer success.

dotData democratizes data science by enabling existing resources to perform data science tasks, making enterprise data science scalable and sustainable. dotData automates up to 100 percent of the data science workflow, enabling users to connect directly to their enterprise data sources to discover and evaluate millions of features from complex table structures and huge data sets with minimal user input. dotData is also designed to operationalize data science by producing both feature and ML scoring pipelines in production, which IT teams can then immediately integrate with business workflow. This can further automate the time-consuming and arduous process of maintaining the deployed pipeline to ensure repeatability as data changes over time. With the dotData GUI, the data science task becomes a five-minute operation, requiring neither significant data science experience nor SQL/Python/R coding.

For more information or a demo of dotData's AI-powered full-cycle data science automation platform, please visit dotData.com.

About dotData

dotData is one of the first companies focused on full-cycle data science automation. Fortune 500 organizations around the world use dotData to accelerate their ML and AI projects and deliver higher business value. dotData's automated data science platform speeds time to value by accelerating, democratizing, augmenting and operationalizing the entire data science process, from raw business data through data and feature engineering to ML in production. With solutions designed to cater to the needs of both data scientists and citizen data scientists, dotData provides value across the entire organization.

dotData's unique AI-powered feature engineering delivers actionable business insights from relational, transactional, temporal, geo-locational, and text data. dotData has been recognized as a leader in Forrester's 2019 New Wave for AutoML platforms. It was also recognized as the "best machine learning platform" for 2019 by the AI Breakthrough Awards and was named an "emerging vendor to watch" by CRN in the big data space. For more information, visit http://www.dotdata.com, and join the conversation on Twitter and LinkedIn.


How our publisher harnessed machine learning to overhaul Techday websites – CFOtech New Zealand

Everyone is talking about artificial intelligence (AI) and machine learning (ML) these days. Fitness devices measure our steps and analyse our daily health; map applications tell us the best way to get from A to B based on the trips of countless others before us; even the alarm apps on our phones take note of how long we sleep.

Here at Techday, we see examples every day of how AI and ML are shifting the landscape of modern technology into new and exciting territory. We have even seen the potential to harness it in our own operations.

This is the story of how our publisher's passion for ML brought him to the CoderSchool in Ho Chi Minh City, Vietnam, and how we are incorporating AI and ML into Techday's business.

The publisher in question, Sean Mitchell, began his mission to bring cutting-edge digital transformation technology to our business model two years ago.

Sean has always been passionate about systems, and about how automation is not just a nice-to-have but a competitive must-have. So in 2018, he enrolled in a coding boot camp: a full-stack web development course.

He had never written code before this course, but by September 2018 he had finished and began overhauling Techday's websites and systems. We run 26 websites with complex systems for our editors, advertising and operations teams.

After redeveloping the look and feel of the Techday websites, he focused on our backend systems. This started pointing him in the direction of artificial intelligence and what a dramatic impact it can have on businesses.

"The goal was for our team to achieve more each day and be freed from the most mundane tasks. We knew we could achieve this with more automation and infusing machine learning into our systems," says Sean.

He couldn't find any suitable boot camps in New Zealand, and he didn't have time for a two-year computer science degree with little practical benefit.

Then he discovered a 12-week course on machine learning in Ho Chi Minh City, taught by CoderSchool, which boasted the strong practical element he was looking for.

He signed up, flew over, settled in, and loved every gruelling hour.

Accommodation, food and transport were significantly cheaper than in New Zealand.

The whole course was taught in English, and Ho Chi Minh City was very friendly and welcoming to foreigners.

But the best part?

"The course was 25% of the cost of a similar one in the US," says Sean.

Sean goes on to say the instructors were extremely helpful and had plenty of capacity for one-on-one tutoring, as there were four teachers and only 21 students in his course.

The course also included the basics of the Python language for those who hadn't coded in it before, as well as a crash course in data analysis.

"I can recommend the course to anyone who wants to practically implement AI and ML. This is a technical course with superb teachers and great course work," says Sean.

Sean is now back at Techday headquarters in New Zealand and has already put his studies into practical use.

"Already, just a month after finishing the course, we are in the final steps of implementing machine learning in Techday's first AI workflow," says Sean.

Sean created a machine learning tool that reads a draft of an editor's story and suggests keywords to use as tags for the story.

With over 6,000 stories written per annum, this could add up to a real saving of human time.
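A tool like this can be approximated with off-the-shelf libraries. Here is a minimal sketch of TF-IDF-based tag suggestion in Python; the story corpus, draft text, and five-tag cutoff are illustrative assumptions, not details of Techday's actual system.

```python
from sklearn.feature_extraction.text import TfidfVectorizer

# A hypothetical corpus of previously published stories.
past_stories = [
    "AWS launches new machine learning service for enterprises",
    "Fitness wearables use AI to analyse daily health data",
    "Supply chain firms adopt predictive analytics for routing",
]
draft = "Startup applies machine learning to automate enterprise data science"

# Fit TF-IDF over the corpus plus the draft so common words are down-weighted.
vectorizer = TfidfVectorizer(stop_words="english")
vectorizer.fit(past_stories + [draft])

# Score the draft's terms and suggest the highest-weighted ones as tags.
scores = vectorizer.transform([draft]).toarray()[0]
terms = vectorizer.get_feature_names_out()
suggested_tags = [term for _, term in sorted(zip(scores, terms), reverse=True)[:5]]
print(suggested_tags)
```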

Sean says the practical experience learnt at CoderSchool is proving more valuable every day.

"If you're on the fence, then learning to code will change your life. It certainly did mine, and our business will never be the same again."

If you want to learn more or apply to CoderSchool, visit their website.

UPDATE: CoderSchool is temporarily continuing to teach in an online format during this time of crisis. More information is available on their website above.


Machine Learning Engineer Interview Questions: What You Need to Know – Dice Insights

Along with artificial intelligence (A.I.), machine learning is regarded as one of the most in-demand areas for tech employment at the moment. Machine learning engineers develop algorithms and models that can adapt and learn from data. As a result, those who thrive in this discipline are generally skilled not only in computer science and programming, but also statistics, data science, deep learning, and problem solving.

According to Burning Glass, which collects and analyzes millions of job postings from across the country, the prospects for machine learning as an employer-desirable skill are quite good, with jobs projected to rise 36.5 percent over the next decade. Moreover, even those with relatively little machine-learning experience can pull down quite a solid median salary.

Dice Insights spoke with Oliver Sulley, director of Edge Tech Headhunters, to figure out how you should prepare, what you'll be asked during an interview, and what you should say to grab the gig.

"You're going to be faced potentially by bosses who don't necessarily know what it is that you're doing, or don't understand ML and have just been [told] they need to get it in the business," Sulley said. "They're being told by the transformation guys that they need to bring it on board."

As he explained, that means one of the key challenges facing machine learning engineers is determining what technology would be most beneficial to the employer, and being able to work as a cohesive team that may have been put together on very short notice.

"What a lot of companies are looking to do is take data they've collected and stored, and try and get them to build some sort of model that helps them predict what they can be doing in the future," Sulley said. "For example, how to make their stock leaner, or predicting trends that could come up over the year that would change their need for services that they offer."

Sulley notes that machine learning engineers are in rarefied air at the moment: it's a high-demand position, and lots of companies are eager to show they've brought machine learning specialists onboard.

"If they're confident in their skills, then a lot of the time they have to make sure the role is right for them," Sulley said. "It's more about the soft skills that are going to be important."

Many machine learning engineers are strong on the technical side, but they often have to interact with teams such as operations; as such, they need to be able to translate technical specifics into layman's terms and express how this data is going to benefit other areas of the company.

"Building those soft skills, and making sure people understand how you will work in a team, is just as important at this moment in time," Sulley added.

There are quite a few different roles for machine learning engineers, and so it's likely that all these questions could come up, but it will depend on the position. "We find questions about practical experience are more common, and therefore will ask questions related to past work and the individual contributions engineers have made," Sulley said.


A lot of data engineering and machine learning roles involve working with different tech stacks, so it's hard to nail down a hard-and-fast set of skills; much depends on the company you're interviewing with. (If you're just starting out with machine learning, here are some resources that could prove useful.)

"For example, if it's a cloud-based role, a machine learning engineer is going to want to have experience with AWS and Azure; and for languages alone, Python and R are the most important, because that's what we see more and more in machine learning engineering," Sulley said. "For deployment, I'd say Docker, but it really depends on the person's background and what they're looking to get into."

Sulley said ideal machine learning candidates possess "a really analytical mind," as well as a passion for thinking about the world in terms of statistics.

"Someone who can connect the dots and has a statistical mind, someone who has a head for numbers and who is interested in that outside of work, rather than someone who just considers it their job and what they do," he said.

As Burning Glass data shows, quite a few jobs now ask for machine-learning skills; if not essential, they're often a nice-to-have for many employers that are thinking ahead.

Sulley suggests the questions you ask should be all about the technology: it's about understanding what the companies are looking to build, what their vision is (and your potential contribution to it), and where your career will grow within that company.

"You want to figure out whether you'll have a clear progression forward," he said. "From that, you will understand how much work they're going to do with you. Find out what they're really excited about, and that will help you figure out whether you'll be a valued member of the team. It's a really exciting space, and they should be excited by the opportunities that come with bringing you onboard."


Put Your Money Where Your Strategy Is: Using Machine Learning to Analyze the Pentagon Budget – War on the Rocks

"A masterpiece" is how then-Deputy Defense Secretary Patrick Shanahan infamously described the Fiscal Year 2020 budget request. It would, he said, align defense spending with the U.S. National Defense Strategy, both funding the future capabilities necessary to maintain an advantage over near-peer powers Russia and China and maintaining readiness for ongoing counter-terror campaigns.

The result was underwhelming. While research and development funding increased in 2020, it did not represent the funding shift toward future capabilities that observers expected. Despite its massive size, the budget was insufficient to address the department's long-term challenges. Key emerging technologies identified by the department, such as hypersonic weapons, artificial intelligence, quantum technologies, and directed-energy weapons, still lacked a clear and sustained commitment to investment. It was clear that the Department of Defense did not make the difficult tradeoffs necessary to fund long-term modernization. The Congressional Budget Office further estimated that the cost of implementing the plans, which were in any case insufficient to meet the defense strategy's requirements, would be about 2 percent higher than department estimates.

Has anything changed this year? The Department of Defense released its FY2021 budget request Feb. 10, outlining the department's spending priorities for the upcoming fiscal year. As is mentioned every year at its release, the proposed budget is an aspirational document; the actual budget must be approved by Congress. Nevertheless, it is incredibly useful as a strategic document, in part because all programs are justified in descriptions of varying lengths in what are called budget justification books. After analyzing the 10,000-plus programs in the research, development, testing and evaluation budget justification books using a new machine learning model, it is clear that the newest budget's tepid funding for emerging defense technologies fails to shift the department's strategic direction toward long-range strategic competition with a peer or near-peer adversary.

Regardless of your beliefs about the optimal size of the defense budget or whether the 2018 National Defense Strategy's focus on peer and near-peer conflict is justified, the Department of Defense's two most recent budget requests have been insufficient to implement the administration's stated modernization strategy fully.

To be clear, this is not a call to increase the Department of Defense's budget over its already-gargantuan $705.4 billion FY2021 request. Nor is this the only problem with the federal budget proposal, which included cuts to social safety net programs, programs that are needed now more than ever to mitigate the effects of COVID-19. Instead, my goal is to demonstrate how the budget fails to fund its intended strategy despite its overall excess. Pentagon officials described the budget as funding an "irreversible implementation" of the National Defense Strategy, but that is only true in its funding for nuclear capabilities and, to some degree, for hypersonic weapons. Otherwise, it largely neglects emerging technologies.

A Budget for the Last War

The 2018 National Defense Strategy makes clear why emerging technologies are critical to the U.S. military's long-term modernization and ability to compete with peer or near-peer adversaries. The document notes that advanced computing, big data analytics, artificial intelligence, autonomy, robotics, directed energy, hypersonics, and biotechnology are necessary to ensure "we will be able to fight and win the wars of the future." The Government Accountability Office included similar technologies (artificial intelligence, quantum information science, autonomous systems, hypersonic weapons, biotechnology, and more) in a 2018 report on long-range emerging threats identified by federal agencies.

In the Department of Defense's budget press release, the department argued that despite overall flat funding levels, it made numerous hard choices to ensure that resources are directed toward the Department's highest priorities, particularly in technologies now termed "advanced capabilities enablers." These technologies include hypersonic weapons, microelectronics/5G, autonomous systems, and artificial intelligence. Elaine McCusker, the acting undersecretary of defense (comptroller) and chief financial officer, argued, "Any place where we have increases, so for hypersonics or AI for cyber, for nuclear, that's where the money went ... This budget is focused on the high-end fight." (McCusker's nomination for Department of Defense comptroller was withdrawn by the White House in early March because of her concerns over the 2019 suspension of defense funding for Ukraine.) Deputy Defense Secretary David L. Norquist noted that the budget request had the largest research and development request ever.

Despite this, the FY2021 budget is not a significant shift from the FY2020 budget in developing advanced capabilities for competition against a peer or near-peer. I analyzed data from the Army, Navy, Air Force, Missile Defense Agency, Office of the Secretary of Defense, and Defense Advanced Research Projects Agency budget justification books, and the department has still failed to realign its funding priorities toward the long-range emerging technologies that strategic documents suggest should be the highest priority. Aside from hypersonic weapons, which received already-expected funding request increases, most other types of emerging technologies remained mostly stagnant or actually declined from FY2020 request levels.

James Miller and Michael O'Hanlon argued in their analysis of the FY2020 budget that "desires for a larger force have been tacked onto more crucial matters of military innovation" and that the department should instead prioritize quality over quantity. This criticism could be extended to the FY2021 budget, along with the indictment that military innovation itself wasn't fully prioritized either.

Breaking It Down

In this brief review, I attempt to outline funding changes for emerging technologies between the FY2020 and FY2021 budgets based on a machine learning text-classification model, while noting cornerstone programs in each category.

Let's start with the top-level numbers from the R1 document, which divides the budget into seven budget activities. Basic and applied defense research account for 2 percent and 5 percent of the overall FY2021 research and development budget, compared to 38 percent for operational systems development and 27 percent for advanced component development and prototypes. The latter two categories have grown from 2019, in both real terms and as a percentage of the budget, by 2 percent and 5 percent, respectively. These categories were both the largest overall budget activities and also received the largest percentage increases.

Federally funded basic research is critical because it helps develop the capacity for the next generation of applied research. Numerous studies have demonstrated the benefit of federally funded basic science research, with some estimates suggesting that "two-thirds of the technologies with the most far-reaching impact over the last 50 years [stemmed] from federally funded R&D at national laboratories and research universities." These technologies include the internet, robotics, and foundational subsystems for space-launch vehicles, among others. In fact, a 2019 study for the National Bureau of Economic Research's working paper series found evidence that publicly funded investments in defense research had a "crowding in" effect, significantly increasing private-sector research and development from the recipient industry.

Concerns over the levels of basic research funding are not new. A 2015 report by the MIT Committee to Evaluate the Innovation Deficit argued that declining federal basic research could severely undermine long-term U.S. competitiveness, particularly for research areas that lack obvious real-world applications. This is particularly true given that the share of industry-funded basic research has collapsed, with the authors arguing that U.S. companies are left dependent on federally funded, university-based basic research to fuel innovation. This shift means that federal support of basic research is even more tightly coupled to national economic competitiveness. A 2017 analysis of America's artificial intelligence strategy recommended that the government "[ensure] adequate funding for scientific research, averting the risks of an innovation deficit that could severely undermine long-term competitiveness." Data from the Organization for Economic Cooperation and Development shows that Chinese government research and development spending has already surpassed that of the United States, while Chinese business research and development expenditures are rapidly approaching U.S. levels.

While we may debate the precise levels of basic and applied research and development funding, there is little debate about its ability to produce spillover benefits for the rest of the economy and the public at large. In that sense, the slight declines in basic and applied research funding in both real terms and as a percentage of overall research and development funding hurt the United States in its long-term competition with other major powers.

Clean, Code, Classify

The Defense Department's budget justification books contain thousands of pages of descriptions spread across more than 20 separate PDFs. Each program description explains the progress made each year and justifies the funding request increase or decrease. There is a wealth of information about Department of Defense strategy in these documents, but it is difficult to assess departmental claims about funding for specific technologies or to analyze multiyear trends while the data is in PDF form.

To understand how funding changed for each type of emerging technology, I scraped and cleaned this information from the budget documents, then classified each research and development program into categories of emerging technologies (including artificial intelligence, biotechnologies, directed-energy weapons, hypersonic weapons and vehicles, quantum technologies, autonomous and swarming systems, microelectronics/5G, and non-emerging technology programs). I designed a random forest machine learning model to sort the remaining programs into these categories. This is an algorithm that uses hundreds of decision trees to identify which variables or words in a program description, in this case are most important for classifying data into groups.

There are many kinds of machine learning models that can be used to classify data. To choose one that would most effectively classify the program data, I started by hand-coding 1,200 programs to train three different kinds of models (random forest, k-nearest neighbors, and support vector machine), as well as to build a model testing dataset. Each model would look at the term frequency-inverse document frequency (essentially, how often given words appear, adjusted for how rarely they are used) of all the words in a program's description to decide how to classify each program. For example, for the Army's Long Range Hypersonic Weapon program, the model might have seen the words "hypersonic," "glide," and "thermal" in the description and guessed that it was most likely a hypersonic program. The random forest model slightly outperformed the support vector machine model and significantly outperformed the k-nearest neighbors model, as well as a simpler method that just looked for specific keywords in a program description.
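To make the approach concrete, here is a minimal sketch of TF-IDF text classification with a random forest in scikit-learn; the example descriptions and labels are illustrative stand-ins, not the author's actual hand-coded training data.

```python
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline

# Hypothetical hand-labeled program descriptions (the real model used 1,200).
descriptions = [
    "Develops a hypersonic boost-glide vehicle with thermal protection",
    "Applies machine learning to automate intelligence data processing",
    "Upgrades radio hardware for legacy communications systems",
]
labels = ["hypersonics", "artificial intelligence", "non-emerging"]

# TF-IDF turns each description into a weighted word-frequency vector;
# the random forest's decision trees then vote on a category.
model = make_pipeline(
    TfidfVectorizer(stop_words="english"),
    RandomForestClassifier(n_estimators=500, random_state=0),
)
model.fit(descriptions, labels)

# Classify an unlabeled program description.
print(model.predict(["Long range hypersonic glide body testing and integration"]))
```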

Having chosen a machine-learning model to use, I set it to work classifying the remaining 10,000 programs. The final result is a large dataset of programs mentioned in the 2020 and 2021 research and development budgets, including their full descriptions, predicted category, and funding amount for the year of interest. This effort, however, should be viewed as only a rough estimate of how much money each emerging technology is getting. Even a fully hand-coded classification that didn't rely on a machine learning model would be challenged by sometimes-vague program descriptions and programs that fund multiple types of emerging technologies. For example, the Applied Research for the Advancement of S&T Priorities program funds projects across multiple categories, including electronic warfare, human systems, autonomy, cyber, advanced materials, biomedical, weapons, quantum, and command, control, communications, computers and intelligence. The model took a guess that the program was focused on quantum technologies, but that is clearly a difficult program to classify into a single category.

With the programs sorted and classified by the model, the variation in funding between types of emerging technologies became clear.

Hypersonic Boost-Glide Weapons Win Big

Both the official Department of Defense budget press release and the press briefing singled out hypersonic research and development investment. As one of the department's "advanced capabilities enablers," hypersonic weapons, defenses, and related research received $3.2 billion in the FY2021 budget, which is nearly as much as the other three priorities mentioned in the press release combined (microelectronics/5G, autonomy, and artificial intelligence).

In the 2021 budget documents, there were 96 programs (compared with 60 in the 2020 budget) that the model classified as related to hypersonics based on their program descriptions, combining for $3.36 billion, an increase from 2020's $2.72 billion. This increase was almost solely due to increases in three specific programs, and funding for air-breathing hypersonic weapons and combined-cycle engine developments was stagnant.

The three programs driving up the hypersonic budget are the Army's Long-Range Hypersonic Weapon, the Navy's Conventional Prompt Strike, and the Air Force's Air-Launched Rapid Response Weapon program. The Long-Range Hypersonic Weapon received a $620.42 million funding increase to field an experimental prototype with residual combat capability. The Air-Launched Rapid Response Weapon's $180.66 million increase was made possible by the removal of funding for the Air Force's Hypersonic Conventional Strike Weapon in FY2021, which saved $290 million compared with FY2020. This was an interesting decision worthy of further analysis, as the two competing programs seemed to differ in their ambition and technical risk; the Air-Launched Rapid Response Weapon program was designed for pushing the art of the possible, while the conventional strike weapon was focused on integrating already mature technologies. Conventional Prompt Strike received the largest 2021 funding request at $1 billion, an increase of $415.26 million over the 2020 request. Similar to the Army program, the Navy's Conventional Prompt Strike increase was fueled by procurement of the Common Hypersonic Glide Body that the two programs share (along with a Navy-designed 34.5-inch booster), as well as testing and integration on guided missile submarines.

To be sure, the increase in hypersonic funding in the 2021 budget request is important for long-range modernization. However, some of the increases were already planned, and the current funding increase largely neglects air-breathing hypersonic weapons. For example, the Navy's Conventional Prompt Strike 2021 budget request was just $20,000 more than anticipated in the 2020 budget. Programs that explicitly mention scramjet research declined from $156.2 million to $139.9 million.

In contrast to hypersonics, research and development funding for many other emerging technologies was stagnant or declined in the 2021 budget. Non-hypersonic emerging technologies increased from $7.89 billion in 2020 to only $7.97 billion in 2021, mostly due to increases in artificial intelligence-related programs.

Biotechnology, Quantum, Lasers Require Increased Funding

Source: Graphic by the author.

Directed-energy weapons funding fell slightly in the 2021 budget to $1.66 billion, from $1.74 billion in 2020. Notably, the Army is procuring three directed-energy prototypes to support the maneuver-short range air defense mission for $246 million. Two other programs are also noteworthy. First, the High Energy Power Scaling program ($105.41 million) will finalize designs and integrate systems into a prototype 300 kW-class high-energy laser, focusing on managing thermal blooming (a distortion caused by the laser heating the atmosphere through which it travels) for 300 and eventually 500 kW-class lasers. Second, the Air Force's Directed Energy/Electronic Combat program ($89.03 million) tests air-based directed-energy weapons for use in contested environments.

Quantum technologies funding increased by $109 million, to $367 million, in 2021. In general, quantum-related programs are more exploratory, focused on basic and applied research rather than fielding prototypes. They are also typically funded by the Office of the Secretary of Defense or the Defense Advanced Research Projects Agency rather than by the individual services, or they are bundled into larger programs that distribute funding to many emerging technologies. For example, the top 2021 programs that the model classified as quantum research and development based on their descriptions include the Office of the Secretary of Defense's Applied Research for the Advancement of S&T Priorities ($54.52 million) and the Defense Advanced Research Projects Agency's Functional Materials and Devices ($28.25 million). The increase in Department of Defense funding for quantum technologies is laudable, but given the potential disruptive ability of quantum technologies, the United States should further increase its federal funding for quantum research and development, guarantee stable long-term funding, and incentivize young researchers to enter the field. The FY2021 budget's funding increase is clearly a positive step, but quantum technologies' revolutionary potential demands more funding than the category currently receives.

Biotechnologies increased from $969 million in 2020 to $1.05 billion in 2021 (my guess is that the model overestimated the funding for emerging biotech programs by including research programs related to soldier health and medicine that involve established technologies). Analyses of defense biotechnology typically focus on the defense applications of human performance enhancement, synthetic biology, and gene-editing technology research. Previous analyses, including one from 2018 in War on the Rocks, have lamented the lack of a comprehensive strategy for biotechnology innovation, as well as funding uncertainties. The Center for Strategic and International Studies argued, "Biotechnology remains an area of investment with respect to countering weapons of mass destruction but otherwise does not seem to be a significant priority in the defense budget." These concerns appear to have been well-founded. Funding has stagnated despite the enormous potential offered by biotechnologies like nanotubes, spider silk, engineered probiotics, and bio-based sensors, many of which could be critical enablers as components of other emerging technologies. For example, this estimate includes the interesting Persistent Aquatic Living Sensors program ($25.7 million), which attempts to use living organisms to detect submarines and unmanned underwater vehicles in littoral waters.

Programs classified as autonomous or swarming research and development declined from $3.5 billion to $2.8 billion in 2021. This includes the Army Robotic Combat Vehicle program (roughly stagnant at $86.22 million, down from $89.18 million in 2020). The Skyborg autonomous attritable drone program (a low-cost, unmanned system that doesn't have to be recovered after launch) requested $40.9 million and also falls into the autonomy category, as do the Air Force's Golden Horde ($72.09 million), the Office of the Secretary of Defense's manned-unmanned teaming Avatar program ($71.4 million), and the Navy's Low-Cost UAV Swarming Technology (LOCUST) program ($34.79 million).

The programs sorted by the model into the artificial intelligence category increased from $1.36 billion to $1.98 billion in 2021. This increase is driven by an admirable proliferation of smaller programs: 161 programs under $50 million, compared with 119 in 2020. However, as the Department of Defense reported that artificial intelligence research and development received only $841 million in the 2021 budget request, it is clear that the random forest model is picking up some false positives for artificial intelligence funding.

Some critics argue that federal funding risks duplicating artificial intelligence efforts in the commercial sector. There are several problems with this argument, however. First, a 2017 report on U.S. artificial intelligence strategy argued, "There also tends to be shortfalls in the funding available to research and start-ups for which the potential for commercialization is limited or unlikely to be lucrative in the foreseeable future." Second, there are a number of technological, process, personnel, and cultural challenges in the transition of artificial intelligence technologies from commercial development to defense applications. Finally, the Trump administration's anti-immigration policies hamstring U.S. technological and industrial base development, particularly in artificial intelligence, as immigrants are responsible for one-quarter of startups in the United States.

The Neglected Long Term

While there are individual examples of important programs that advance the U.S. military's long-term competitiveness, particularly for hypersonic weapons, the overall 2021 budget fails to shift research and development funding toward emerging technologies and basic research.

Given that the overall budget was essentially flat, it should not come as a surprise that research and development funding for emerging technologies was mostly flat as well. But the United States already spends far more on defense than any other country, and even with a flat budget, the allocation of funding for emerging technologies does not reflect an increased focus on long-term planning for high-end competition compared with the 2020 budget. Specifically, the United States should increase its funding for emerging technologies other than hypersonics (directed energy, biotech, and quantum information sciences), as well as for basic scientific research, even if doing so requires tradeoffs in other areas.

The problem isn't necessarily the year-to-year changes between the FY2020 and FY2021 budgets. Instead, the problem is that proposed FY2021 funding for emerging technologies continues the previous year's underwhelming support for research and development relative to the Department of Defense's strategic goals. This is the critical point for my assessment of the budget: despite multiple opportunities to align funding with strategy, emerging technologies and basic research have not received the scale of investment that the National Defense Strategy argues they deserve.

Chad Peltier is a senior defense analyst at Janes, where he specializes in emerging defense technologies, Chinese military modernization, and data science. This article does not reflect the views of his employer.

Image: U.S. Army (Photo by Monica K. Guthrie)


2020 Supply Chain Planning Value Matrix Underscores Benefits of Machine Learning and Customizable Integrations – Yahoo Finance

Nucleus Research identifies Blue Yonder, E2Open, Infor, Kinaxis, One Network and Vanguard as SCP Leaders

Nucleus Research today released the 2020 Supply Chain Planning (SCP) Technology Value Matrix, its assessment of the SCP market. For the report, Nucleus evaluated SCP vendors based on their products' usability, functionality and overall value.

While other firms' market reports position vendors based on analyst opinions, the Nucleus Value Matrix segments competitors based on usability, functionality and the value that customers realized from each product's capabilities, measured with Nucleus' rigorous ROI methodologies.

Nucleus named Blue Yonder, E2Open, Infor, Kinaxis, One Network and Vanguard as SCP leaders.

Supply chain planning has become critical for success as companies must maintain service levels in the face of resource constraints and external disturbances. Tight solution integrations and robust embedded analytics have become table stakes for supply chain planning systems, which can now differentiate based on go-to-market strategy and tactical focuses. Leading vendors have undertaken a "platform approach" to product delivery, providing solution flexibility that enables customers to drive long-term value by configuring deployments with their preferred blend of best practices and customizations.

"To support a broad range of planning capabilities, supply chain planning vendors must provide comprehensive product roadmaps," says Ian Campbell, CEO of Nucleus Research. "Now more than ever, customers demand the capability to prioritize tactical focuses and personalize SCP solutions with their own differentiators."

"In order to be resilient enough to handle external challenges, organizations must have robust plans in place for their supply chains," says Andrew MacMillen, analyst at Nucleus Research. "Proactive resource management has become essential for sustainable success and requires a greater level of collaboration across an organizations departments. Leading SCP solutions realize this, and can consolidate siloed data into a unified view to deliver value."

See the full report at: https://nucleusresearch.com/research/single/scp-technology-value-matrix-2020/

About Nucleus Research

Nucleus Research is a global provider of investigative, case-based technology research and advisory services. We deliver the numbers that drive business decisions. For more information, visit NucleusResearch.com or follow us on Twitter @NucleusResearch.



Structure-based AI tool can predict wide range of very different reactions – Chemistry World

New software has been created that can predict a wide range of reaction outcomes but is also more flexible than other programs when it comes to dealing with completely different chemical problems. The machine-learning platform, which uses structure-based molecular representations instead of big reaction-based datasets, could find diverse applications in organic chemistry.

Although machine-learning methods have been widely used to predict the molecular properties and biological activities of target molecules, their application in predicting reaction outcomes has been limited because current models usually can't be transferred to different problems. Instead, complex parameterisation is required for each individual case to achieve good results. Researchers in Germany are now reporting a general approach that overcomes this limitation.

"Previous models for accurately predicting reaction results have been highly complex and problem-specific," says Frank Glorius of the University of Münster, Germany, who led the study. "They are mostly based on a previously gained understanding of the underlying processes and cannot be transferred to other problems. In our approach, we use a universal representation of the involved compounds, which is solely based on their molecular structures. This allows for a general applicability of our program to diverse problem sets."

The new tool is based on the assumption that reactivity can be directly derived from a molecule's structure, and uses an input based on multiple fingerprint features as an all-round molecular representation. Frederik Sandfort, who also participated in the research, explains that organic compounds can be represented as graphs on which simple structural (yes/no) queries can be carried out. "Fingerprints are number sequences based on the combination of many such successive queries," he says. "They have originally been developed for structural similarity searches and were proven to be well-suited for application in computational models. We use a large number of different fingerprints to represent the molecular structure of each compound as accurately as possible."
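As a rough illustration of the idea, the sketch below builds a fingerprint representation of a reaction with RDKit; the single Morgan fingerprint type and the example esterification are simplifying assumptions, whereas the published model combines many different fingerprint types.

```python
import numpy as np
from rdkit import Chem
from rdkit.Chem import AllChem

def featurize(smiles: str, n_bits: int = 2048) -> np.ndarray:
    """Run many substructure queries on the molecular graph and return
    the yes/no answers as a fixed-length bit vector."""
    mol = Chem.MolFromSmiles(smiles)
    fp = AllChem.GetMorganFingerprintAsBitVect(mol, radius=2, nBits=n_bits)
    return np.array(fp)

# Concatenate the fingerprints of all reaction components into one vector
# that a standard learning model (e.g., a random forest) can consume.
components = ["CC(=O)O", "OCC", "CC(=O)OCC"]  # acid + ethanol -> ester
reaction_vector = np.concatenate([featurize(s) for s in components])
print(reaction_vector.shape)  # (6144,) - one input row per reaction
```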

Glorius points out that their platform is very versatile. "While our model can be used to predict molecular properties, its most important application is the accurate prediction of reaction results," he says. "We could predict enantioselectivities and yields with comparable accuracy to previous problem-specific models. Furthermore, the model was applied to predicting relative conversion based on a high-throughput data set which was never tackled using machine learning before."

The program is also easy to use, the researchers say. "It only requires the input data in a very simple form and some problem-specific settings," explains Sandfort. He adds that the tool is already online and will be updated further with the team's most recent developments.

Robert Paton at Colorado State University and the Center for Computer Assisted Synthesis, US, who was not involved in the study, notes that machine-learning methods are being increasingly used to identify patterns in data that can help to predict the outcome of experiments. "Chemists have managed to harness these techniques by converting molecular structures into vectors of numbers that can then be passed to learning algorithms," he says. "Representations using information only from a molecule's atoms and their connectivity are agnostic to the particular reaction and as a result may be used across multiple reaction types for different types of predictions. Future developments in interpreting these predictions, a challenge shared by all machine learning approaches, will be valuable."


With Launch of COVID-19 Data Hub, The White House Issues A ‘Call To Action’ For AI Researchers – Machine Learning Times – machine learning & data…

Originally published in TechCrunch, March 16, 2020

In a briefing on Monday, research leaders across tech, academia and the government joined the White House to announce an open data set full of scientific literature on the novel coronavirus. The COVID-19 Open Research Dataset, known as CORD-19, will also add relevant new research moving forward, compiling it into one centralized hub. The new data set is machine readable, making it easily parsed for machine learning purposes, a key advantage according to researchers involved in the ambitious project.

In a press conference, U.S. CTO Michael Kratsios called the new data set "the most extensive collection of machine readable coronavirus literature to date." Kratsios characterized the project as a "call to action" for the AI community, which can employ machine learning techniques to surface unique insights in the body of data. To come up with guidance for researchers combing through the data, the National Academies of Sciences, Engineering, and Medicine collaborated with the World Health Organization to come up with high-priority questions about the coronavirus related to genetics, incubation, treatment, symptoms and prevention.

The partnership, announced today by the White House Office of Science and Technology Policy, brings together the Chan Zuckerberg Initiative, Microsoft Research, the Allen Institute for Artificial Intelligence, the National Institutes of Health's National Library of Medicine, Georgetown University's Center for Security and Emerging Technology, Cold Spring Harbor Laboratory and the Kaggle AI platform, owned by Google.

The database brings together nearly 30,000 scientific articles about the virus known as SARS-CoV-2, as well as related viruses in the broader coronavirus group. Around half of those articles make the full text available. Critically, the database will include pre-publication research from resources like medRxiv and bioRxiv, open access archives for pre-print health sciences and biology research.


So only 12% of supply chain pros are using AI? Apparently. – Supply Chain Dive


Source: Matt Leonard / Supply Chain Dive, with data from the MHI Annual Industry Report.

One problem with pinning down the number of people who are using AI is that if you ask two people what they consider to be AI, you'll get two different answers. I know because I did just that.

Thomas D. Boykin, a supply chain specialist at Deloitte and leader of the MHI white paper, said he considered AI to be not just predictive and prescriptive analytics, but a system where the human is taken completely out of the loop. An example would be a system used by a waste management company to reroute vehicles based on sensor data from waste receptacles around the service area, Boykin said.

"There's some things that can be executed systematically without human intervention," he said in an interview. "And for us, that's where AI comes in."

But the definition is quite different for Stefan Nusser, the VP of product at Fetch Robotics who used to run the Cloud AI team for Google in Europe.

"In my mind, any data-driven, model-based machine learning approach that to me is AI,"Nusser said in an interview with Supply Chain Dive. This would include an algorithm based on historical data that provides outputs with a certain level of accuracy, he said.

For Nusser, if AI is the car then machine learning is the engine. In this case, many of the methods analysts currently use for predictive analytics (clustering, classification, etc.) would be considered AI.

But definitions aside, Boykin and Nusser agree that AI is far from widespread within the supply chain at this point.

"I think penetration is just slow,"Nusser said.

There are also two ways a company could be using AI: buying software with AI capabilities already built in, or building and training its own models in-house.

The latter of these approaches is probably even rarer in the world of supply chain right now unless you're a transportation company trying to predict traffic and optimize route planning, Nusser said.

"I really doubt 12% of companies have that level of investment in AI," he said about in-house modeling capabilities.

This doesn't mean companies aren't interested in using more of the technology. However, bringing the required talent on board can be a struggle. Fifty-six percent of respondents considered hiring a top challenge in the current environment and 78% said there was high competition for the talent available.

Access to data is another issue.

AI applications are trained on historical data and, depending on the application, a company will need to ensure access to its data as a first step. But the MHI report found that only 16% of respondents consider their organization's data stream management to be either "good" or "excellent."

Data is more available thanks to cheap sensors and other Internet of Things technology, but "it also presents a problem with being able to synthesize it and filter it and understand what data is needed to drive what insights," Boykin said.

Putting this data in the cloud can make it easier to share with vendors and other business partners when looking to create an AI application with outside help, Nusser said. These struggles aside, he still considers it a good technology for companies to invest in.

While some technologies like blockchain might have been overhyped, AI has proven itself.

"I do think that it has the potential to even exceed what people are expecting from it,"Boykin said. "And I do think it is a worthwhile investment."

So what is AI good for? It's great for understanding unstructured data like images or language, Nusser said.

Within a warehouse this could mean using cameras to get a better understanding of inventory, the use of robotics or anything else in the physical world, he said.

"The value I see us bringing to the table is the physical world: an understanding of the physical world, an understanding of what get's touched ... how the environment changes over time?" he said.



Fritz brings on-device AI to Android and iOS – VentureBeat

Fritz AI, a startup providing an AI and machine learning development platform for Android and iOS, today announced that it has raised $5 million. CEO Dan Abdinoor says that the capital will accelerate Fritz's expansion as it launches its product out of early access, which he asserts addresses the challenges of mobile AI for businesses with toolkits that facilitate development, management, and execution.

Successfully deploying AI and machine learning models to production often isn't a walk in the park. In a recent study conducted by IDC analysts, only 25% of organizations said they'd successfully adopted an enterprise-wide AI strategy, and it's estimated that 50% of companies spend between 8 and 90 days developing a single AI model.

To address this challenge, Fritz provides a cross-platform software development kit (SDK) with pretrained models for object detection, image segmentation, image labeling, style transfer, pose estimation, and more baked in. Using its end-to-end suite for building and deploying custom trained models, developers can generate and collect labeled data sets and train optimized models without code, and they can improve those models with fresh data uploaded continuously.

Apple and Google offer mobile machine learning solutions in Core ML and ML Kit, respectively, but they're platform-specific. Plus, Fritz's prebuilt models don't require an internet connection, and they run atop live video with a fast frame rate. All of them optionally perform inference on-device and come in several sizes, from small models tailored for size and bandwidth to fast models optimized for processing speed.

Fritz enables customers to generate synthetic data or collect data for annotation and to benchmark on-device model performance before deployment. It supports the deployment of model versions to test devices while training and tracks the configurations, and it protects models from attackers while improving those running on-device by analyzing platform, device, and processor performance.

Among Fritz's customers are Momento, One Bite, Video Star, PlantVillage, MDacne, Superimpose X, and Instasaber, who've used its workflows to develop models that change hair color in real time, replace photo backgrounds, identify food like pizza, detect pets, and create stickers for messaging apps. The company has a rival in Polarr, which last year raised $11.5 million for its offline, on-device computational photography that's used by companies including Qualcomm, Oppo, and Hober. But Abdinoor asserts that Fritz has a competitive advantage in the breadth of its product portfolio.

Foundry Group led Boston, Massachusetts-based Fritz's latest round with participation from NextGen Venture Partners, Inner Loop Capital, Eniac Ventures, Uncork Capital, and Hack VC, which brings the company's total raised to $7 million. Fritz has about 24 employees.


AI Is Changing Work and Leaders Need to Adapt – Harvard Business Review

Executive Summary

Recent empirical research by the MIT-IBM Watson AI Lab provides new insight into how work is changing in the face of AI. Based on this research, the author provides a roadmap for leaders intent on adapting their workforces and reallocating capital, while also delivering profitability. The author argues that the key to unlocking the productivity potential while delivering on business objectives lies in three key strategies: rebalancing resources, investing in workforce reskilling and, on a larger scale, advancing new models of education and lifelong learning.

As AI is increasingly incorporated into our workplaces and daily lives, it is poised to fundamentally upend the way we live and work. Concern over this looming shift is widespread. A recent survey of 5,700 Harvard Business School alumni found that 52% of even this elite group believe the typical company will employ fewer workers three years from now.

The advent of AI poses new and unique challenges for business leaders. They must continue to deliver financial performance, while simultaneously making significant investments in hiring, workforce training, and new technologies that support productivity and growth. These seemingly competing business objectives can make for difficult, often agonizing, leadership decisions.

Against this backdrop, recent empirical research by our team at the MIT-IBM Watson AI Lab provides new insight into how work is changing in the face of AI. By examining these findings, we can create a roadmap for leaders intent on adapting their workforces and reallocating capital, while also delivering profitability.

The stakes are high. AI is an entirely new kind of technology, one that has the ability to anticipate future needs and provide recommendations to its users. For business leaders, that unique capability has the potential to increase employee productivity by taking on administrative tasks, providing better pricing recommendations to sellers, and streamlining recruitment, to name a few examples.

For business leaders navigating the AI workforce transition, the key to unlocking the productivity potential while delivering on business objectives lies in three key strategies: rebalancing resources, investing in workforce reskilling and, on a larger scale, advancing new models of education and lifelong learning.

Our research report offers a window into how AI will change workplaces through the rebalancing and restructuring of occupations. Using AI and machine learning techniques, our MIT-IBM Watson AI Lab team analyzed 170 million online job posts between 2010 and 2017. The study's first implication: while occupations change slowly, over years and even decades, tasks become reorganized at a much faster pace.

Jobs are a collection of tasks. As workers take on jobs in various professions and industries, it is the tasks they perform that create value. With the advancement of technology, some existing tasks will be replaced by AI and machine learning. But our research shows that only 2.5% of jobs include a high proportion of tasks suitable for machine learning. These include positions like usher, lobby attendant, and ticket taker, where the main tasks involve verifying credentials and allowing only authorized people to enter a restricted space.

Most tasks will still be best performed by humans, whether by craft workers like plumbers, electricians and carpenters, or by those who do design or analysis requiring industry knowledge. And new tasks will emerge that require workers to exercise new skills.

As this shift occurs, business leaders will need to reallocate capital accordingly. Broad adoption of AI may require additional research and development spending. Training and reskilling employees will very likely require temporarily removing workers from revenue-generating activities.

More broadly, salaries and other forms of employee compensation will need to reflect the shifting value of tasks all along the organization chart. Our research shows that as technology reduces the cost of some tasks because they can be done in part by AI, the value workers bring to the remaining tasks increases. Those tasks tend to require grounding in intellectual skill and insight, something AI isn't as good at as people.

In high-wage business and finance occupations, for example, compensation for tasks requiring industry knowledge increased by more than $6,000, on average, between 2010 and 2017. By contrast, average compensation for manufacturing and production tasks fell by more than $5,000 during that period. As AI continues to reshape the workplace, business leaders who are mindful of this shifting calculus will come out ahead.

Companies today are held accountable not only for delivering shareholder value, but for positively impacting stakeholders such as customers, suppliers, communities and employees. Moreover, investment in talent and other stakeholders is increasingly considered essential to delivering long-term financial results. These new expectations are reflected in the Business Roundtable's recently revised statement on corporate governance, which underscores corporations' obligation to support employees through training and education that help develop new skills for a rapidly changing world.

Millions of workers will need to be retrained or reskilled as a result of AI over the next three years, according to a recent IBM Institute for Business Value study. Technical training will certainly be a necessary component. As tasks requiring intellectual skill, insight and other uniquely human attributes rise in value, executives and managers will also need to focus on preparing workers for the future by fostering and growing people skills such as judgement, creativity and the ability to communicate effectively. Through such efforts, leaders can help their employees make the shift to partnering with intelligent machines as tasks transform and change in value.

As AI continues to scale within businesses and across industries, it is incumbent upon innovators and business leaders to understand not only the business process implications, but also the societal impact. Beyond the need for investment in reskilling within organizations today, executives should work alongside policymakers and other public and private stakeholders to provide support for education and job training, encouraging investment in training and reskilling programs for all workers.

Our research shows that technology can disproportionately impact the demand and earning potential for mid-wage workers, causing a squeeze on the middle class. For every five tasks that shifted out of mid-wage jobs, we found, four tasks moved to low-wage jobs and one moved to a high-wage job. As a result, wages are rising faster in the low- and high-wage tiers than in the mid-wage tier.

New models of education and pathways to continuous learning can help address the growing skills gap, providing members of the middle class, as well as students and a broad array of mid-career professionals, with opportunities to build in-demand skills. Investment in all forms of education is key: community college, online learning, apprenticeships, or programs like P-TECH, a public-private partnership designed to prepare high school students for "new collar" technical jobs like cloud computing and cybersecurity.

Whether it is workers who are asked to transform their skills and ways of working, or leaders who must rethink everything from resource allocation to workforce training, fundamental economic shifts are never easy. But if AI is to fulfill its promise of improving our work lives and raising living standards, senior leaders must be ready to embrace the challenges ahead.
