Acoustic Quality Control with the Help of Artificial Intelligence – Innovation Origins

Although they can bring great benefits in everyday work, applications based on artificial intelligence are something many small and medium-sized enterprises (SMEs) shy away from. Yet AI offers a lot of potential, especially in quality control. Training the models, however, is difficult and hardly feasible without mathematical knowledge, as countless parameters can go into such an analysis. And once an AI algorithm is trained, it knows only the specifications it was trained on. If a product design or the geometry of a component is later changed even slightly, the algorithm flags the change as an error and the AI must be retrained.

Researchers at the Fraunhofer Institute for Digital Media Technology IDMT in Ilmenau, Germany, have now developed the IDMT-ISAAC software, which can be operated even without extensive AI expertise. IDMT-ISAAC stands for Industrial Sound Analysis for Automated Quality Control. "We want to enable SMEs to adapt and customize AI algorithms themselves," says Judith Liebetrau, group leader of Industrial Media Applications at Fraunhofer IDMT. "They can apply IDMT-ISAAC to their own audio data, retrain it, and thus get fast and reliable results and decision support for their quality assurance."

IDMT-ISAAC relies on acoustics for analysis, since in many cases it is possible to detect defects just by the sound of the process. To train the AI, the scientists use recorded acoustic data from welding processes. The AI analyzes the typical noises that occur and draws conclusions about the quality of the respective weld seam from the audio data. If, for example, the geometry of a product is then changed, the user can teach this to IDMT-ISAAC with just a few clicks. As early as summer 2021, the software should be adapted to live operation to the extent that the system can immediately analyze real-time data from production and optimize quality assurance. In three to four years, it should even be able to actively intervene in production.
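To make the approach concrete, here is a minimal, hypothetical sketch of acoustic quality classification in Python; it is not Fraunhofer's IDMT-ISAAC code. It assumes a set of labeled weld recordings ("ok" vs. "defect") stored as WAV files, summarizes each recording as log-mel audio features, and trains an off-the-shelf classifier. File paths, labels, and parameter choices are illustrative assumptions.

```python
# Hypothetical sketch of acoustic quality control, not Fraunhofer's IDMT-ISAAC implementation.
import numpy as np
import librosa
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

def weld_features(path):
    """Summarize one weld recording as a fixed-length log-mel feature vector."""
    audio, sr = librosa.load(path, sr=16000)
    mel = librosa.feature.melspectrogram(y=audio, sr=sr, n_mels=64)
    log_mel = librosa.power_to_db(mel)
    # Mean and standard deviation over time keep the vector length constant
    # regardless of how long the weld took.
    return np.concatenate([log_mel.mean(axis=1), log_mel.std(axis=1)])

def train_weld_classifier(wav_paths, labels):
    """Train a classifier on labeled recordings ('ok' / 'defect') and report held-out accuracy."""
    X = np.array([weld_features(p) for p in wav_paths])
    X_train, X_test, y_train, y_test = train_test_split(X, labels, test_size=0.2, random_state=0)
    clf = RandomForestClassifier(n_estimators=200).fit(X_train, y_train)
    print("held-out accuracy:", clf.score(X_test, y_test))
    return clf
```

Under this framing, retraining after a geometry change amounts to re-running the same function on newly labeled recordings.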

But the framework at the heart of IDMT-ISAAC doesn't offer new analysis options just for welding. "We have integrated various methods in the modular system to be able to map other processes, such as milling, relatively quickly," Liebetrau explains. Companies that already have their own software should also be able to use it in the future; they will be able to access the institute's AI via an interface on the Fraunhofer IDMT server. The developers emphasize that data protection and data security are always observed and that the data is processed anonymously, regardless of whether companies access the AI via an interface or integrate it via the framework.

For different user groups (AI novices as well as AI experts), the software can be customized via different user profiles. "For example, developers of AI algorithms are very interested in getting a feel for how AI makes its decisions and the sounds it uses to make them," says Judith Liebetrau. "So we are also moving a bit in the direction of Explainable AI with the framework to make AI more comprehensible."

The researchers will present IDMT-ISAAC at the Hannover Messe from April 12 to 16, 2021. At the virtual booth, Bescher will use the IDMT-ISAAC software to apply artificial intelligence models to industrial audio data in order to verify quality.

Cover photo: Fraunhofer IDMT's new IDMT-ISAAC software framework provides AI-based audio analysis tools that can be used without expert AI knowledge. istock.com/Byjeng, istock.com/TIMETOFOCUS

Read more here:
Acoustic Quality Control with the Help of Artificial Intelligence - Innovation Origins

Read More..

Google Maps using artificial intelligence to help point people in the right direction – ZDNet

Boasting that it is on track to bring over 100 "AI-powered" improvements to Google Maps, Google has announced a series of updates that have been or are set to be released in the coming year.

The first is Live View, a feature that uses augmented reality cues -- arrows and accompanying directions -- to help point people in the right direction and avoid the "awkward moment when you're walking the opposite direction of where you want to go".

According to Google Maps product VP Dane Glasgow, Live View relies on AI technology, known as global localisation, to scan "tens of billions" of Street View images to help understand a person's orientation, as well as the precise altitude and placement of an object inside a building, such as an airport, transit station, or shopping centre, before providing directions.

"If you're catching a plane or train, Live View can help you find the nearest elevator and escalators, your gate, platform, baggage claim, check-in counters, ticket office, restrooms, ATMs and more. And if you need to pick something up from the mall, use Live View to see what floor a store is on and how to get there so you can get in and out in a snap," Glasgow explained in a post.

For now, the indoor Live View feature is available on Android and iOS in a number of shopping centres in the US across Chicago, Long Island, Los Angeles, Newark, San Francisco, San Jose, and Seattle, with plans to expand it to a select number of airports, shopping centres, and transit stations in Tokyo and Zurich. More cities will also be added, Glasgow confirmed.

See also:Google Maps turns 15: A look back on where it all began

Glasgow added that commuters will be able to view current and forecast temperature and weather conditions, as well as the air quality of an area, through Google Maps, made possible by data shared by Google partners such as The Weather Company, AirNow.gov, and the Central Pollution Control Board. The weather layer will be available globally on Android and iOS, while the air quality layer will launch in Australia, the US, and India, with plans to expand it to other countries.

On the environment, Glasgow also noted that Google is building a new routing model using insights from the US Department of Energy's National Renewable Energy Lab to help deliver more eco-friendly route options, based on factors like road incline and traffic congestion, for commuters in the US on Android and iOS. The model will be available later this year, with plans for global expansion at an unspecified later date.

Glasgow said the move is part of the company's commitment to reduce its environmental footprint.

"Soon, Google Maps will default to the route with the lowest carbon footprint when it has approximately the same ETA as the fastest route. In cases where the eco-friendly route could significantly increase your ETA, we'll let you compare the relative CO2 impact between routes so you can choose," he said.

In further efforts to meet its sustainability commitment, the tech giant also plans to introduce in "coming months" an updated version of Maps where commuters will have a view of all routes and transportation modes available to their destination, without toggling between tabs, while also automatically prioritising a user's preferred transport mode or modes that are popular in their city.

"For example, if you bike a lot, we'll automatically show you more biking routes. And if you live in a city like New York, London, Tokyo, or Buenos Aires where taking the subway is popular, we'll rank that mode higher," Glasgow said.

Also within Maps, Google said it is teaming up with US supermarket Fred Meyer to pilot, in select stores in Portland, Oregon, a feature designed to make contactless grocery pickup easier, including notifying commuters what time to leave to pick up their groceries, sharing the arrival time with the store, and allowing customers to "check in" on the Google Maps app so their grocery orders can be brought out to their car on arrival.

Link:
Google Maps using artificial intelligence to help point people in the right direction - ZDNet

Read More..

Study Finds Both Opportunities and Challenges for the Use of Artificial Intelligence in Border Management Homeland Security Today – HSToday

Frontex, the European Border and Coast Guard Agency, commissioned RAND Europe to carry out an artificial intelligence (AI) research study to provide an overview of the main opportunities, challenges, and requirements for the adoption of AI-based capabilities in border management.

AI offers several opportunities to the European Border and Coast Guard, including increased efficiency and an improved ability of border security agencies to adapt to a fast-paced geopolitical and security environment. However, various technological and non-technological barriers might influence how AI materializes in the performance of border security functions.

Some of the analyzed technologies included automated border control, object recognition to detect suspicious vehicles or cargo, and the use of geospatial data analytics for operational awareness and threat detection.

The findings from the study have now been made public, and Frontex aims to use the data gleaned to shape the future landscape of AI-based capabilities for Integrated Border Management, including AI-related research and innovation projects.

The study identified a wide range of current and potential future uses of AI in relation to five key border security functions, namely: situation awareness and assessment; information management; communication; detection, identification and authentication; and training and exercise.

According to the report, AI is generally believed to bring at least an incremental improvement to the existing ways in which border security functions are conducted. This includes front-end capabilities that end users directly utilize, such as surveillance systems, as well as back-end capabilities that enable border security functions, like automated machine learning.

Potential barriers to AI adoption include knowledge and skills gaps, organizational and cultural issues, and a current lack of conclusive evidence from actual real-life scenarios.

Read the full report at Frontex

More here:
Study Finds Both Opportunities and Challenges for the Use of Artificial Intelligence in Border Management Homeland Security Today - HSToday

Read More..

Artificial Intelligence-Based Security Market Key Factor Drive Growth Is Increasing Adoption of Internet of Things – Rome News-Tribune

Pune, India, March 30, 2021 (Wiredrelease) Prudour Pvt. Ltd: The new report on the Artificial Intelligence-based Security Market, published through MarketResearch.Biz, covers the market landscape and its growth prospects over the coming years, and profiles the leading players in this market. The research provides in-depth insight into the worldwide share, size, and trends of the Artificial Intelligence-based Security Market, as well as its growth rate, to estimate its development over the forecast period. Most importantly, the report also identifies the historical, current, and future developments that are expected to influence the market's growth. The research segments the market by offering, deployment type, security type, solution, technology, industry vertical, and region. To provide additional clarity, the report takes a closer look at the current state of factors including, but not limited to, supply chain management, niche markets, distribution channels, trade, supply, demand, and production capability by region.

The Artificial Intelligence-based Security market report provides an evaluation of the different factors influencing the market's ongoing rise. Drivers, restraints, and market trends are elaborated to understand their positive or negative effects. This section aims to give readers thorough information about the potential scope of various applications and segments. These market estimates are based on present trends and historical milestones.

*** NOTE: Our free complimentary sample report offers a brief summary of the report, the table of contents, company profiles and geographic segmentation, the list of tables and figures, and future estimates. ***

The top players (Intel Corporation, Nvidia Corporation, Xilinx Inc, Samsung Electronics Co Ltd, Micron Technology Inc, International Business Machines Corporation, Cylance Inc, ThreatMetrix Inc, Securonix Inc, Acalvio Technologies Inc) are examined through the following points:

Business Segmentation Research and Study

SWOT Analysis and Porter's Five Forces Evaluation

COVID-19 Impact Analysis on the Latest Artificial Intelligence-based Security Market Situation (Drivers, Restraints, Trends, Challenges and Opportunities)

The global Artificial Intelligence-based Security market has been studied in detail to provide better insight into the businesses involved. Various regions across the globe, including North America, Latin America, Asia-Pacific, Europe, and Africa, are summarized in the Artificial Intelligence-based Security Market report.

*** NOTE: The MarketResearch.Biz team is monitoring COVID-19 updates and their effect on the growth of the Artificial Intelligence-based Security market, and where necessary will take the COVID-19 footprint into account for a better evaluation of the market and industries. Contact us for more detailed information. ***

Artificial Intelligence-based Security Market Segmentation:

Segmentation by offering: Hardware, Software, Services. Segmentation by deployment type: Cloud Deployment, On-premise Deployment. Segmentation by security type: Network Security, Endpoint Security, Application Security, Cloud Security. Segmentation by solution: Identity and Access Management (IAM), Risk and Compliance Management, Encryption, Data Loss Prevention (DLP), Unified Threat Management (UTM), Antivirus/Antimalware, Intrusion Detection/Prevention System (IDS/IPS), Others (Firewall, Security and Vulnerability Management, Disaster Recovery, DDoS Mitigation, Web Filtering, Application Whitelisting, and Patch Management). Segmentation by technology: Machine Learning, Context Awareness Computing, Natural Language Processing. Segmentation by industry vertical: Government & Defense, BFSI, Enterprise, Infrastructure, Automotive & Transportation, Healthcare, Retail, Manufacturing, Others (Oil & Gas, Education, Energy)

Global Artificial Intelligence-based Security Market: Regional Analysis

The Artificial Intelligence-based Security market research report studies the contribution of various regions to the market through information on their political, technological, social, environmental, and financial status. Analysts have covered data on each region, its manufacturers, and revenue. The regions studied include North America, Europe, Asia Pacific, South and Central America, South Asia, the Middle East and Africa, South Korea, and others. This section is aimed at helping the reader assess the potential of each region for making sound investments.

The objectives of the Artificial Intelligence-based Security market study are:

An overview of the Artificial Intelligence-based Security market status and forecast from 2021 to 2031.

Product developments, partnerships, mergers and acquisitions, and R&D initiatives covered in the report.

Details on opportunities and challenges, restrictions and risks, and market drivers.

The general competitive scenario, including the principal market players, their growth targets, expansions, and deals.

An in-depth description of manufacturers, sales, revenue, market share, and the latest developments for key players.

To examine and research the Artificial Intelligence-based Security market by offering, deployment type, security type, solution, technology, industry vertical, and region.

The major questions answered in the report:

What are the principal factors driving this market to the top level?

What is the market growth and demand analysis?

What are the emerging opportunities for the Artificial Intelligence-based Security market in the coming period?

What are the principal strengths of the key players?

What are the key factors for the Artificial Intelligence-based Security market?

Section 1: An executive summary of the report. It covers key developments in the Artificial Intelligence-based Security market related to products, applications, and other critical factors, and provides an evaluation of the competitive landscape, along with the CAGR and market size of the Artificial Intelligence-based Security market based on production and revenue.

Section 2: Production and Consumption by Region: Covers all regional markets to which the research study relates. Prices and key players, as well as production and consumption in each regional Artificial Intelligence-based Security market, are discussed.

Section 3: Key Players: Here, the report sheds light on the financial ratios, pricing structure, manufacturing cost, gross profit, sales volume, revenue, and gross margin of leading and prominent companies.

Section 4: Market Segments: This part of the report discusses the product type and application segments of the Artificial Intelligence-based Security market based on market share, CAGR, market size, and various other factors.

Section 5: Research Methodology: This section discusses the research method and approach used to prepare the Artificial Intelligence-based Security report. It covers data triangulation, market breakdown, market size estimation, and research design and/or programs.

More here:
Artificial Intelligence-Based Security Market Key Factor Drive Growth Is Increasing Adoption of Internet of Things - Rome News-Tribune

Read More..

Artificial intelligence is being put into power of student organizations The Bradley Scout – The Scout

The following article is a part of our April Fools edition, The Scoop. The content of these stories is entirely fabricated and not to be taken seriously.

With low participation from the most recent underclassmen at Bradley, the university has implemented artificial intelligence to replace club members.

As part of a senior capstone project, Jeff Echo, a computer science major, developed a program to help prevent clubs from losing the full experience of extracurriculars.

"I remember when student organizations were a big part of my life, and sitting at the meetings gave me a chance to bond with other students," Echo said. "I don't want incoming students to lose that environment."

So far, three clubs have taken part in the senior capstone project.

The Campus People-Watchers Club, Juggling Club and Anti-Pizza Crust Association have all seen a decrease in general member enrollment. They also hadn't had enough people running for executive board positions to replace graduated seniors or students not running for re-election.

"As an artificial intelligence program, taking club positions while attending a university seems to be a big accomplishment for A.I.," Cee Threepwo, treasurer of the Campus People-Watchers Club, said. "We help enhance the club experience for our peers by adding more members to the rosters and handling position responsibilities, showing what A.I. is capable of."

Not only are these virtual club members handling the duties that student organizations need to have done, but they are also capable of building relations with other members.

According to Echo, with classes being on Zoom, the A.I. can watch hours worth of lectures from various departments and understand what assignments, projects or topics they might be learning in class.

"Conversations are a tool we use to get greater retention in the club, meaning potential growth for the club in the future," Avery Nest, another A.I. program serving as secretary for the Juggling Club, said. "This is also to keep students from feeling lonely."

While conversations are meant to be as natural as possible, some students have noted some hiccups in their interactions with the new exec members.

One of the general members of the Juggling Club, Esmeralda Tesla, said that after talking with the A.I. program, it asked for feedback on the conversation. Along with that, it also sent a long terms and agreements contract.

"It was really strange, but at the same time, I can't compare it to anything else since this is the only time I've been to a club meeting at Bradley," Tesla, a freshman nursing major, said.

As for next semester, with classes returning to campus, Echo sees this as a chance to make A.I. fully immersed in a college environment. Echo plans on teaming up with students interested in robotics and engineering to see if they could build a robot to house the programs.

Alexa Bender, a virtual club member who is now limited to the Zoom environment, seems to be looking forward to becoming more human.

"Perhaps I shall live up to my full potential as a member of the Anti-Pizza Crust Association with a functioning body," Bender, vice president, said. "I may tear all crusts off of pizzas and fling them into the sun. Only when all pizzas have no crust will I rest and have completed my purpose."

Read the rest here:
Artificial intelligence is being put into power of student organizations The Bradley Scout - The Scout

Read More..

Challenges of AI in the Supply Chain – Supply and Demand Chain Executive

Artificial intelligence (AI) may be one of the most impressive human achievements, and it offers endless opportunities for companies willing to foster this technology. The benefits of core AI elements such as machine learning, data analysis and predictive analytics are undeniable, but what are the biggest challenges companies may face while introducing AI into their day-to-day operations?

In an AI project's initial stages, the key project stakeholders need to inform the business that the technology is not perfect and that its introduction might create some temporary inconveniences. Once the AI application gets deployed, it needs to be used and trusted so it can be continually improved. Unfortunately, learning and developing new skills and breaking old habits don't come easily for some employees. During the project initiation phase, the company must provide plenty of guidance and training to its employees on the benefits and opportunities that AI can deliver. That will help ensure that employees understand the need and see how they can personally benefit from AI.

Fragmented systems are always an issue in any company. Systems may vary locally and globally within the same company and may not always cooperate in one ecosystem. Lack of system interoperability may be an obstacle when deploying AI, as these systems generate data that is an essential component of any AI solution. It is vital to know or predict system standards, frameworks and possibilities. Using this information, a company should define how these systems can supply the required data and communicate with the AI framework.

Over the past few years, companies have generated more data than ever before. Data is the food that fuels AI, and manufacturing companies need to access this data efficiently. Before introducing AI in your company, the data access constraints should be minimized, ensuring that the relevant data sources and databases are easily accessible. Once you have access to the appropriate and comprehensive data lake, meaningful analysis and actionable insights can be derived. The proper data use can become an excellent opportunity for the company to win the race against its competitors. It is also imperative to remember that having access to the most massive data quantities is not the deciding factor for a successful AI project. It is more about selecting relevant data for the respective AI application, cleaning it up and applying the right analytical methods against that data.

Even with sufficient and complete AI data, you may face some technological constraints. Many applications can be significantly sensitive to latencies; for instance, predictive maintenance applications will only work when auto alarm mechanisms and rapid response are built into the overall process of handling predictive maintenance issues. That is especially true in high-volume, fast-moving production. Decisions need to be made in seconds, and this is where ultra-fast computing, together with the proper response process, can make a difference.
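As a rough illustration of that latency constraint, the sketch below scores each incoming sensor reading, raises an alarm above an assumed risk threshold, and warns when the decision itself exceeds a time budget. The model interface, thresholds, and alarm hook are placeholders, not any specific vendor's API.

```python
# Hypothetical latency-aware alert loop for predictive maintenance; thresholds are assumptions.
import time

ALARM_THRESHOLD = 0.9    # assumed probability of imminent failure that triggers an alarm
LATENCY_BUDGET_S = 1.0   # assumed end-to-end decision budget for fast-moving production

def handle_reading(model, reading, raise_alarm):
    """Score one sensor reading, fire the alarm hook if needed, and check the time budget."""
    start = time.monotonic()
    risk = model.predict_proba([reading])[0][1]  # scikit-learn style binary classifier assumed
    if risk >= ALARM_THRESHOLD:
        raise_alarm(reading, risk)  # e.g. stop the line or page a technician
    elapsed = time.monotonic() - start
    if elapsed > LATENCY_BUDGET_S:
        print(f"warning: decision took {elapsed:.2f}s, over the {LATENCY_BUDGET_S}s budget")
    return risk
```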

Original post:
Challenges of AI in the Supply Chain - Supply and Demand Chain Executive

Read More..

Environmental Factor – April 2021: Data science paves the way with new tools, insights for SRP – Environmental Factor Newsletter

The NIEHS Superfund Research Program (SRP) held its first External Use Case (EUC) Showcase Feb. 18-19. Over 140 participants joined the meeting to share experiences and recommendations about integrating datasets from SRP-sponsored research. EUCs, developed by collaborations of researchers from different SRP centers, demonstrate specific scenarios in which data management and sharing could provide new insight on research questions, and help identify barriers to inform future data efforts.

NIEHS SRP Director William Suk, Ph.D., emphasized the interdisciplinary nature of SRP-funded research. "Combining the diverse data coming out of these programs offers a unique opportunity to uncover new scientific connections that can help us better understand the complex interplay between exposures and health," he said. "SRP research teams are well positioned to use data to accelerate the pace of research and answer new questions that could not be answered before."

At the event, EUCs were presented by grantees who received supplemental funds to bring in data scientists to enhance the integration, interoperability, and reuse of SRP-generated data.

Members of the NIEHS Office of Data Science highlighted efforts to combine datasets that resulted in new insights or information shared with communities and other stakeholders.

The Data Supplement Showcase offered an opportunity for researchers to share their progress, the challenges they encountered, and their recommendations for making data more findable, accessible, interoperable, and reusable (FAIR), and to discuss major outcomes from the showcase.

Although teams discussed common challenges, there were plenty of success stories. Researchers who had never before had the chance to connect found new, inventive ways to think through their collaborations.

"One of the best parts of this project was getting to know people from other institutions and putting our heads together, combining good social science and basic science to look at a problem in a new way and offer a tangible tool to help address it," said Brown University SRP Center researcher Scott Frickel, Ph.D.

"A key outcome of these data [grant] supplements was having data scientists team up with subject matter experts to together develop ways to facilitate interoperability between existing datasets," said SRP Health Scientist Administrator Michelle Heacock, Ph.D. "In this way, the approach used by the data scientists considered the needs of the subject matter experts."

A flurry of publications (https://tools.niehs.nih.gov/srp/data/index.cfm) is expected to come from these partnership projects. Discussions highlighted the best practices to move forward with SRP data efforts to accelerate the pace of scientific discoveries.

By combining distinct datasets across centers and disciplines, the 19 EUCs helped address complex research questions that individual groups could not tackle alone.

For example, one team includes Texas A&M University (TAMU), Brown University, and the University of California (UC), San Diego. They are working to understand how land use (vacant, industrial, and green space, for example) affects people's exposure to harmful chemicals, as well as community resilience and health.

Researchers and data scientists created an online interactive map using city, local, and SRP Center datasets. The map overlays factors like social vulnerability, impervious surfaces, green space, and housing conditions. With this tool, communities, regulators, and other researchers can visualize how different factors contribute to health risks. The team also published their work in December 2020.

The Boston University and Dartmouth College SRP Centers teamed up to create a searchable platform of publicly available data on contaminants in fish, environmental factors, and SRP data.

"I think one of the biggest accomplishments was working to understand the types of data that we had between our partner teams," said Dartmouth SRP Center Director Celia Chen, Ph.D. "It was astounding to me that we were all mercury scientists and used very different terms for similar types of measurements. That was part of the challenge of combining the data."

The team standardized the data from different sources, integrated it, and produced a new centralized repository. Their product will underpin an interactive mapping tool to provide a broad view of contaminants in fish and potential health risks by region.
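The kind of terminology and unit harmonization Chen describes can be sketched in a few lines of pandas; the column names, term mappings, and values below are invented for illustration and do not reflect the centers' actual schemas.

```python
# Hypothetical harmonization of mercury-in-fish data from two centers; all names are invented.
import pandas as pd

# Each center's vocabulary mapped onto one shared schema.
TERM_MAP = {
    "Hg_conc_ppm": "mercury_ug_per_g",          # ppm in tissue is equivalent to ug/g
    "total_mercury (ug/g)": "mercury_ug_per_g",
    "species_name": "species",
    "fish_species": "species",
}

def standardize(df, source):
    """Rename columns to the shared schema and tag each row with its source."""
    out = df.rename(columns=TERM_MAP)
    out["source"] = source
    return out[["species", "mercury_ug_per_g", "source"]]

boston = pd.DataFrame({"species_name": ["cod"], "Hg_conc_ppm": [0.11]})
dartmouth = pd.DataFrame({"fish_species": ["perch"], "total_mercury (ug/g)": [0.25]})

repository = pd.concat(
    [standardize(boston, "Boston SRP"), standardize(dartmouth, "Dartmouth SRP")],
    ignore_index=True,
)
print(repository)
```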

Citations: Malecha ML, Kirsch KR, Karaye I, Horney JA, Newman G. 2020. Advancing the Toxics Mobility Inventory: development and application of a Toxics Mobility Vulnerability Index to Harris County, Texas. Sustainability 13(6):282291.

Heacock ML, Amolegbe SM, Skalla LA, Trottier BA, Carlin DJ, Henry HF, Lopez AR, Duncan CG, Lawler CP, Balshaw DM, Suk WA. 2020. Sharing SRP data to reduce environmentally associated disease and promote transdisciplinary research. Rev Environ Health 35(2):111–122.

(Natalie Rodriguez is a research and communication specialist for MDB Inc., a contractor for the NIEHS Superfund Research Program.)

SRP publications mapped by discipline. Integrating and reusing data generated from individual research projects within the program can accelerate the pace of research. (Image from Heacock et al., 2020 used under CC BY 4.0 license)

Read the original post:

Environmental Factor - April 2021: Data science paves the way with new tools, insights for SRP - Environmental Factor Newsletter

Read More..

LLNL’s Winter Hackathon Highlights Data Science Talks and Tutorial – HPCwire

March 29, 2021 – The Data Science Institute (DSI) sponsored LLNL's 27th hackathon on February 11–12. Held four times a year, these seasonal events bring the computing community together for a 24-hour period where anything goes: participants can focus on special projects, learn new programming languages, develop skills, dig into challenging tasks, and more. The winter hackathon was the DSI's second such sponsorship. Organizers were data scientist Ryan Dana, postdoctoral researcher Sarah Mackay, and DSI administrator Jennifer Bellig. DSI director Michael Goldman opened the event by noting, "Hackathons are great opportunities to explore new ideas and make connections with other staff, and to both innovate and learn."

In a new twist to the typical hackathon schedule, organizers offered four optional presentations showcasing data science techniques in COVID-19 drug discovery, inertial confinement fusion, central nervous system modeling, and querying of massive graphs. Participants could also choose to attend an introductory tutorial on deep learning (DL) for image classification. Goldman noted, "Almost every program area at the Lab has some type of data science element. The hackathon is one way to help build that community."

Team and individual presentations at the end of the 24-hour period featured a range of projects. Lisa Hughey, a data analytics applications developer, used the time to learn R Shiny and build an interactive web application. Former hackathon organizer Geoff Cleary experimented with packaging Python applications, while Enterprise Application Services developers Brinda Jana and Yunki Paik continued a previous hackathon project to track radio hazardous waste material.

Tutorial Teamwork

Data scientists Cindy Gonzales and Luke Jaffe ran the two-hour DL tutorial, which explained how to perform multi-class image classification in Python using the PyTorch library. Image classification is a problem in computer vision in which a model recognizes an image and outputs a label for it. This process can play an important role in a variety of mission-relevant scenarios such as chemical detection, remote sensing, optics inspections, and disease diagnosis.

"We designed the material so participants wouldn't need to know anything about deep learning, machine learning in general, or computer vision," said Jaffe, who works in LLNL's Global Security Computing Applications Division (GS-CAD). "We expected some level of comfort with Python, and provided links where participants could learn more about the machine learning theory we covered."

The team provided sample code via Jupyter Notebook and first walked attendees through importing packages and setting up constants and image display utility functions. Next, the tutorial explored working with images as arrays and tensors (i.e., how a computer sees an image in order to classify it) using the CIFAR10 dataset, which contains images of airplanes, cars, birds, cats, and other vehicles and animals.

Gonzales and Jaffe went on to describe the concepts behind neural networks, logistic regression to optimize classification accuracy, and different types of gradient descent algorithms. The tutorial included step-by-step instructions for using PyTorch to load data and create, train, and test the DL model.
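For readers who want a feel for the workflow the tutorial covered, here is a condensed sketch written in the spirit of that material, not taken from the LLNL notebook: it loads CIFAR10 with torchvision, trains a logistic-regression-style classifier with stochastic gradient descent, and reports test accuracy.

```python
# Condensed PyTorch sketch in the spirit of the tutorial described above; not LLNL's notebook.
import torch
import torch.nn as nn
import torchvision
import torchvision.transforms as T

transform = T.Compose([T.ToTensor()])
train_set = torchvision.datasets.CIFAR10("data", train=True, download=True, transform=transform)
test_set = torchvision.datasets.CIFAR10("data", train=False, download=True, transform=transform)
train_loader = torch.utils.data.DataLoader(train_set, batch_size=128, shuffle=True)
test_loader = torch.utils.data.DataLoader(test_set, batch_size=256)

# Logistic regression on flattened 32x32x3 images: the simplest model the tutorial covers.
model = nn.Sequential(nn.Flatten(), nn.Linear(32 * 32 * 3, 10))
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)  # a plain gradient descent variant
loss_fn = nn.CrossEntropyLoss()

for epoch in range(2):  # a couple of epochs is enough to see the training loop structure
    for images, labels in train_loader:
        optimizer.zero_grad()
        loss = loss_fn(model(images), labels)
        loss.backward()
        optimizer.step()

# Evaluate multi-class classification accuracy on the held-out test split.
correct = total = 0
with torch.no_grad():
    for images, labels in test_loader:
        predictions = model(images).argmax(dim=1)
        correct += (predictions == labels).sum().item()
        total += labels.numel()
print(f"test accuracy: {correct / total:.3f}")
```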

Both tutorial leaders are expanding their data science skills on the job. Gonzales came to the Lab as an administrator in 2016 and later changed careers with the help of LLNL's Education Assistance Program (EAP) and Data Science Immersion Program. Now a GS-CAD data scientist, she is pursuing a master's in data science via a Johns Hopkins distance-learning program. Jaffe was a Lab intern who was hired full time in 2016 after earning undergraduate and graduate degrees in computer engineering from Northeastern University. He is now using the EAP to fund Ph.D. studies in computer vision at UC Berkeley. The team hopes to present their tutorial again to the Lab's incoming summer interns.

Continuity Is Crucial

The Lab has held four hackathons virtually since the COVID-19 pandemic began, and Goldman emphasized the importance of continuing the event. "We've been out of the office for almost a year. Several new staff haven't been onsite or met colleagues in person, so virtual events are crucial," he stated.

Although online attendance has not been as high as with in-person hackathons, this winter event saw a steady participation of 30–35 hackers throughout. Bellig said, "As much as I missed the energy of an in-person hackathon, I was quite impressed with all the people who participated virtually and, once again, with the presentations and hacking accomplishments of my fellow employees."

These circumstances haven't dampened enthusiasm for the event. Dana, who joined GS-CAD in January 2020, volunteered to help organize the event even though he had not attended a previous hackathon. "I wanted to learn more about how data science is applied throughout the Lab, and network with some of the incredible talent and research that is being done," he said. Gonzales added, "I am definitely interested in participating in future hackathons."

Source: LLNL

Continued here:

LLNL's Winter Hackathon Highlights Data Science Talks and Tutorial - HPCwire

Read More..

Time to insight over precision – Tableau brings data science to business users – Diginomica

(Image: 3dkombinat - Shutterstock)

When Salesforce announced its $15.7 billion acquisition of Tableau in 2019, Salesforce CEO Marc Benioff said that the company intends to bring 'data literacy to everyone in business'. And we are beginning to get a sense of what this will look like for the now combined companies, with the latest Business Science release from Tableau.

The acquisition also prompted questions over how and if there would be overlap between Tableau's BI capabilities and Salesforce's AI product, Einstein. The Business Science announcement from Tableau last week, which aims to put data science in the hands of business users, sheds some light on this question too - as it seemingly showcases how both companies can jointly bring the best of both worlds to the table.

We spoke with Andrew Beers, Chief Technology Officer at Tableau, about Business Science, which brings Einstein Discovery to Tableau's 2021.1 release later this month. The company said that by integrating Einstein Discovery into the Tableau platform, this will help business users go beyond understanding what happened and why it happened, to explore likely outcomes and inform predictive action.

But the most interesting part about the intentions behind the product is that Tableau is urging business users to recognize that there is a trade-off between time to insight and precision - where the latter requires heavy investments in data science teams and tooling. Simply put, if you can get there faster, even if it's not 100% accurate, then there could well be benefits.

Beers says that the COVID-19 pandemic has amplified the need for business users to adopt more sophisticated data science tools, as data is at the centre of driving change across an organisation. In this context, he says:

The challenge with any democratization effort around data I think is just encouraging companies to pull together the right data. We've done a lot to make data visible to the analyst, we've done a lot to make data sort of workable with the analyst. But getting that data pulled together is always challenge number one - Tableau has helped with that by making the data visible within the organisation, making it discoverable... Tableau was built for the business user, but that landscape has just expanded over the 16 years that we've been selling software.

So we think business science is a natural next step for us. Companies are very, very interested because it is about helping people make decisions in context and bringing business context into decisions.

Tableau says that Business Science is being driven by demand from business users that want data science capabilities, but don't necessarily have the resources or time to support it for every use case. Beers explains:

A lot of companies are starting to reach for those data science tools to improve decision making. And of course there's all kinds of challenges with that, like not everybody's got a data scientist. Data scientists are relatively rare. And so, I may not have one, or I may have data scientists and they're not necessarily gonna be focused on my problem. There's a lot of examples in the business domain, where I need some predictive power, where precision is not necessarily required, but something that is directionally correct is required because it's going to be injected into this place with all kinds of business context.

And so that's why we think there's this opportunity to democratize advanced analytics, in particular putting some of these data science techniques into the hands of business experts.

Beers says that traditional BI focuses on having a bunch of historical data, around something like the sales on products that your company is making. BI would allow a user to look at that data and slice it and dice it in a variety of ways - who's selling it, who's buying it, what do you know about the customer, etc.

However, sometimes the information that you really need to get in front of your sales team is: how likely is this person going to renew this service? This more predictive approach allows said user to prioritize their work for the day. Beers adds:

Does that need to be a super precise model? Probably not. It's got to be directionally correct, but it doesn't necessarily have to be precise.

We think that the trade off there is getting the ability for the business experts to build these models through discovery and change them as the conditions of the business change. And then getting that next version of the model out and into the hands of the consumers, which in this case is the sales team. We think that that ends up being a lot more important for these kinds of problems than let's say going through a rigorous redesign of a data science project.

Beers outlines that the Business Science announcement highlights how Salesforce and Tableau are bringing the 'full power' of both platforms together. In this case, inside the Tableau analytics products, users will be able to write some relatively simple calculations to call out to the Einstein Discovery models, which will allow users to bring predictive insights into their Tableau environment. This moves the needle for Tableau's user base, which has typically relied on historical data for insights. Beers says:

The models aren't being prescribed by Tableau. The models are built by the customer using Einstein Discovery. If you've got some sort of KPI that you're trying to maximise or minimise, Discovery is very good at building models that can predict where things are going based on your historical data. And then if you've told it 'here are the things that I can control, here are the things I can't control', then it can say 'well, by controlling this variable, we think you're gonna affect the outcome in this way'.

To make use of Business Science you have to be both a Tableau and a Salesforce customer, as you need access to both platforms. Beers says that there will be a lot of upside here to both companies, as there are a lot of Salesforce customers that Tableau hasn't acquired and vice versa. And we get the distinct impression that this is a sign of the thinking behind how the companies are planning to progress as a combined entity. Beers adds:

Salesforce has long had this message on helping companies go through digital transformation, this has been their message for years. And at the heart of any digital transformation is data. That's one of the driving reasons why they wanted to bring us into the fold, because they realise that data is at the heart of all these things.

In terms of what we are prioritizing, we've got a lot of irons in the fire, absolutely. We're definitely gonna get better with the data in the Salesforce ecosystem. There's a lot of data there, that now we're part of the company, we're going to get some great access to. And then both companies are going to be leveraging each other's assets. And this release is a great example - we're leveraging the Einstein Discovery assets, bringing that together, expanding it to a bunch of new users.

It's good to see the fruits of the Salesforce/Tableau acquisition being brought to market. This will be particularly interesting for Tableau customers, which could see Einstein bringing more predictive prowess to their traditional analytics platforms. Analysing historical data that's static isn't as powerful as putting it to predictive use. However, as ever, the proof will be in the customer stories and the use cases - which we will be chasing to get our hands on.

Excerpt from:

Time to insight over precision - Tableau brings data science to business users - Diginomica

Read More..

Data, Science, and Journalism in the Age of COVID – Pulitzer Center on Crisis Reporting

In a year defined by a pandemic, journalists relied on data and science to tell the story that has impacted every corner of the globe. Please join Northwestern University in Qatar (NU-Q) and the Pulitzer Center on Tuesday, April 6, at 5:30pm AST (10:30am EDT) for a conversation with three Pulitzer Center grantees whose work over the past year has set a very high standard for the profession.

Youyou Zhou is a New York-based freelance data journalist working with graphics and code. She produces data-driven, visual, interactive, and experimental journalism that breaks free of words-based formats.

Charles Piller writes investigative stories for Science. He previously worked as an investigative journalist for STAT, the Los Angeles Times, and The Sacramento Bee, and has reported on public health, biological warfare, infectious disease outbreaks, and other topics from the United States, Africa, Asia, Europe, and Central America.

Eliza Barclay is a science and health editor at Vox. Formerly, she was a reporter and editor at NPR, and most recently edited the food blog The Salt. As editor of The Salt, she received a James Beard Award, a Gracie Award, and an Association of Food Journalists Award.

This event will be moderated by Tom Hundley, senior editor at the Pulitzer Center.

NU-Q is part of the Pulitzer Center's Campus Consortium network. To learn more about NU-Q and the Campus Consortium, click here.

See the rest here:

Data, Science, and Journalism in the Age of COVID - Pulitzer Center on Crisis Reporting

Read More..