Category Archives: Machine Learning
BenchSci Signifies Growth of Machine Learning Product Portfolio With Appointment of Chief Platform Officer – PRNewswire
TORONTO, July 6, 2021 /PRNewswire/ -- BenchSci, an emerging global leader in machine learning applications for novel medicine development, today announced the appointment of Eran Ben-Ari as Chief Platform Officer. Effective June 28, 2021, he will oversee the Product, Engineering, and Science teams.
Ben-Ari is a technology leader with 15 years of experience successfully leading product and engineering teams by combining a constant pursuit of growth, people development, and customer focus. His vast expertise in improving product development processes, building high-performing, cohesive cross-functional teams, and increasing product launch velocity will be integral as BenchSci advances towards its vision of bringing novel medicines to patients 50% faster by 2025.
"BenchSci is in hypergrowth, and Eran's leadership in this newly created role will be critical as we build a scalable and robust system to support a complex product portfolio," says Liran Belenzon, CEO, BenchSci. "His proven record with two of this country's success stories will be instrumental as we scale our product practice. He is a welcome addition to our executive team."
Prior to joining BenchSci, Ben-Ari led product and engineering teams of over 120 people. He was General Manager at Koru, an OTPP-owned venture incubator driving innovation, and Chief Product Officer at both Top Hat and Kik Interactive. Earlier in his career, he was Vice President of Product at Rounds Entertainment (acquired by Kik), Vice President of Marketing & Growth at Hola (acquired by EMK Capital), and Vice President of Product at Kampyle (acquired by Medallia).
"I'm honored to join the passionate and exceptional team at BenchSci," says Ben-Ari. "Their groundbreaking work is already changing the world for the better and will do so much more in the coming years. That truly inspires me. BenchSci's world-class platformis quickly evolving, and I am eager to guide its scaling, including the processes, the offering, and the teams involved."
About BenchSci
BenchSci's vision is to bring medicine to patients 50% faster by 2025. We're doing this by empowering scientists with the world's most advanced biomedical artificial intelligence to run more successful experiments. Backed by F-Prime, Gradient Ventures (Google's AI fund), and Inovia Capital, our platform accelerates science at 15 of the top 20 pharmaceutical companies and over 4,300 leading research centers worldwide. We're a CIX Top 10 Growth company, certified Great Place to Work, and top-ranked company on Glassdoor. Learn more at http://www.benchsci.com.
For more information, please contact Marie Cook at [emailprotected].
[Image: Eran Ben-Ari, the new Chief Platform Officer at BenchSci]
SOURCE BenchSci
Original post:
BenchSci Signifies Growth of Machine Learning Product Portfolio With Appointment of Chief Platform Officer - PRNewswire
Machine learning: spaCy 3.1 forwards predictions in the pipeline – Market Research Telecast
The Berlin company Explosion AI has released version 3.1 of the natural language processing (NLP) Python library spaCy. Among the innovations is an option to pass annotations about predictions from one component to another during training. A new component also labels arbitrary, potentially overlapping text passages.
The open source Python library spaCy is used for natural language processing (NLP), as is the Natural Language Toolkit (NLTK). While the latter mainly plays a role in academia, spaCy aims at production use; the Berlin company Explosion AI advertises it as "industrial-strength NLP in Python". Not only because of its German roots, German is one of the supported languages.
Similar to the NumPy or Pandas libraries, which provide methods for matrix operations, data science, and numerical calculations, spaCy offers ready-made functions for typical computational-linguistics tasks such as tokenization and lemmatization. The former segments a text into units such as words, sentences, or paragraphs; the latter reduces inflected words to their base forms, the lemmas.
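As a quick illustration of both tasks, here is a minimal sketch using spaCy's small English pipeline (assuming it has been downloaded beforehand):

```python
import spacy

# Assumes the model was installed via: python -m spacy download en_core_web_sm
nlp = spacy.load("en_core_web_sm")
doc = nlp("The presses were forming hardened parts.")

for token in doc:
    # token.text is the tokenized unit, token.lemma_ its base form
    print(token.text, "->", token.lemma_)
```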
spaCy is implemented in Cython and offers numerous extensions such as sense2vec as an extended form of word2vec or Holmes to extract information from German or English texts based on predicate logic. Version 3.0 of the library introduced a Transformer-based pipeline system.
The training process for components is usually isolated: the individual components have no insight into the predictions of the components that are ahead of them in the pipeline. The current release enables annotations to be written during training that can be accessed by other components. The new configuration setting training.annotating_components defines which components write annotations.
In this way, for example, the grammatical-structure information from the dependency parser can be used for tagging via the Tok2Vec component, as an example in the spaCy documentation shows.
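The excerpt below is a sketch of how that setting appears in a spaCy training config; the pipeline layout is illustrative rather than copied from the documentation:

```
[nlp]
pipeline = ["tok2vec", "parser", "tagger"]

[training]
# Components listed here set annotations on the training documents,
# so later components (here: the tagger) can see the parser's predictions.
annotating_components = ["parser"]
```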
Annotations can come from both regular and frozen components (frozen_components); the latter are not updated during training. The procedure creates overhead for non-frozen annotating components, since they require a double pass during training: the first pass updates the model, which then serves as the basis for the predictions in the second pass.
spaCy 3.1 introduces the new SpanCategorizer component to label arbitrary text passages, which may overlap or be nested. The component, currently marked as experimental, is intended to cover cases in which Named Entity Recognition (NER) reaches its limits: NER categorizes the individual entities of a text, but those entities must be clearly separable.
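A minimal sketch of wiring the component into a pipeline, assuming spaCy 3.1 (the spans key "sc" is the component's default, and useful predictions of course require training first):

```python
import spacy

nlp = spacy.blank("en")
# "spancat" is the registered factory name of the SpanCategorizer;
# its predictions are stored under doc.spans[spans_key]
nlp.add_pipe("spancat", config={"spans_key": "sc"})

# After training on annotated spans, overlapping or nested predictions
# can be read back like this:
# doc = nlp("Some text with overlapping passages.")
# for span in doc.spans["sc"]:
#     print(span.text, span.label_)
```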
In parallel with the new component, Explosion AI has published a pre-release version of the annotation tool Prodigy, which among other things offers a new UI for annotating nested and overlapping passages. Annotations created there can be used as training data for the SpanCategorizer.
Prodigy enables the labeling of overlapping text passages.
(Image: ExplosionAI)
Further innovations in spaCy 3.1, such as the additional pipeline packages for Catalan and Danish and the direct connection to the Hugging Face Hub, are covered in the Explosion AI blog.
(rme)
Article Source
Disclaimer: This article is generated from the feed and not edited by our team.
Originally posted here:
Machine learning: spaCy 3.1 forwards predictions in the pipeline - Market Research Telecast
HPE Acquires Determined AI to Accelerate Machine Learning Training – HPCwire
June 21, 2021 Hewlett Packard Enterprise today announced that it has acquired Determined AI, a San Francisco-based startup that delivers a powerful and robust software stack to train AI models faster, at any scale, using its open source machine learning (ML) platform.
HPE will combine Determined AI's unique software solution with its world-leading AI and high performance computing (HPC) offerings to enable ML engineers to easily implement and train machine learning models to provide faster and more accurate insights from their data in almost every industry.
"As we enter the Age of Insight, our customers recognize the need to add machine learning to deliver better and faster answers from their data," said Justin Hotard, senior vice president and general manager, HPC and Mission Critical Solutions (MCS), HPE. "AI-powered technologies will play an increasingly critical role in turning data into readily available, actionable information to fuel this new era. Determined AI's unique open source platform allows ML engineers to build models faster and deliver business value sooner without having to worry about the underlying infrastructure. I am pleased to welcome the world-class Determined AI team, who share our vision to make AI more accessible for our customers and users, into the HPE family."
Determined AI accelerates innovation with open source AI solutions to build and train models faster and easier
Building and training optimized machine learning models at scale is considered the most demanding and critical stage of ML development, and doing it well increasingly requires researchers and scientists to face many challenges frequently found in HPC. These include properly setting up and managing a highly parallel software ecosystem and infrastructure spanning specialized compute, storage, fabric and accelerators. Additionally, users need to program, schedule and train their models efficiently to maximize the utilization of the highly specialized infrastructure they have set up, creating complexity and slowing down productivity.
Determined AI's open source machine learning training platform closes this gap, helping researchers and scientists focus on innovation and accelerate their time to delivery by removing the complexity and cost associated with machine learning development. This includes making it easy to set up, configure, manage and share workstations or AI clusters that run on-premises or in the cloud.
Determined AI also makes it easier and faster for users to train their models through a range of capabilities that significantly speed up training; in one use case related to drug discovery, training went from three days to three hours. These capabilities include accelerator scheduling, fault tolerance, high speed parallel and distributed training of models, advanced hyperparameter optimization and neural architecture search, reproducible collaboration and metrics tracking.
HPC the foundation for delivering speed-to-insight and AI at scale
AI training is continuing to fuel projects and innovation with intelligence, and to do so effectively, and at scale, will require specialized computing. According to IDC, the accelerated AI server market, which plays an integral role in providing targeted capabilities for image and data-intensive training, is expected to grow by 38% each year and reach $18B by 2024.
The massive computing power of HPC is also increasingly being used to train and optimize AI models, in addition to combining with AI to augment workloads such as modeling and simulation, which are well-established tools to speed time-to-discovery. Intersect360 Research notes that the HPC market will grow by more than 40%, reaching almost $55 billion in revenue by 2024.
To tackle the growing complexity of AI with faster time-to-market, HPE is committed to continue delivering advanced and diverse HPC solutions to train machine learning models and optimize applications for any AI need, in any environment. By combining Determined AI's open source capabilities with its own, HPE is furthering its mission of making AI heterogeneous and empowering ML engineers to build AI models at greater scale.
Additionally, through HPE GreenLake cloud services for High Performance Computing (HPC), HPE is making HPC and AI solutions even more accessible and affordable to the commercial market with fully managed services that can run in a customer's data center, in a colocation facility or at the edge using the HPE GreenLake edge to cloud platform.
The Determined AI team will join HPE's High Performance Computing (HPC) & Mission Critical Solutions (MCS) business group
Determined AI was founded in 2017 by Neil Conway, Evan Sparks, and Ameet Talwalkar, and is based in San Francisco. It launched its open-source platform in 2020, and as a result of its focus on model training, Determined AI has quickly emerged as a leading player in the evolving machine learning software ecosystem. Its solution has been adopted by customers across a wide range of industries, such as biopharmaceuticals, autonomous vehicles, defense contracting, and manufacturing.
About Determined AI
Determined AI is an early-stage company at the forefront of machine learning technology, helping customers reap the benefits of high-performance computing without the required expertise or staffing. Determined AI provides an open source machine learning solution that speeds up time-to-market by increasing developer productivity, improving resource utilization and reducing risk. The company is headquartered in San Francisco. For more information, visit: http://www.determined.ai
About Hewlett Packard Enterprise
Hewlett Packard Enterprise (NYSE: HPE) is a global edge-to-cloud company that helps organizations accelerate outcomes by unlocking value from all of their data, everywhere. Built on decades of reimagining the future and innovating to advance the way people live and work, HPE delivers unique, open and intelligent technology solutions delivered as a service spanning Compute, Storage, Software, Intelligent Edge, High Performance Computing and Mission Critical Solutions with a consistent experience across all clouds and edges, designed to help customers develop new business models, engage in new ways, and increase operational performance. For more information, visit: http://www.hpe.com
Source: HPE
View post:
HPE Acquires Determined AI to Accelerate Machine Learning Training - HPCwire
Honeywell’s Experion Operator Advisor Incorporates Advanced Machine Learning To Measure And Improve Operator Performance – inForney.com
HOUSTON, June 21, 2021 /PRNewswire/ -- Honeywell (Nasdaq: HON) announced today the addition of Operator Advisor to its Experion Highly Augmented Lookahead Operations (HALO) suite. This powerful software solution enables plant owners to objectively measure gaps and drive operator effectiveness to the next level. This market-first solution presents users, including oil and gas, chemical, refining and petrochemical organizations, with a consolidated scorecard of enterprise automation utilization and recommended steps to address performance-related gaps.
Honeywell's solution uses machine learning-powered analytics, a type of artificial intelligence, to gather insights from enterprise data sources such as distributed control systems and funnel those insights into dashboards. These dashboards can provide operations managers and supervisors with a clear and complete view of operator performance and improvement opportunities.
By understanding how operator actions, inactions and workload levels contribute to optimal production, organizations can develop targeted training programs, make strides toward autonomous operations and build process resilience, all of which can help them better compete in the digital age.
"According to the Abnormal Situation Management Consortium, 40% to 70% of industrial accidents are linked to human error," said Pramesh Maheshwari, vice president and general manager, Lifecycle Solutions and Services, Honeywell Process Solutions. "This underscores the importance of deploying an enterprise-wide competency program that empowers organizations and workers through use of advanced technologies like machine learning to improve plant performance, uptime, reliability and safety."
As part of Honeywell's Workforce Excellence portfolio, HALO Operator Advisor is a timely response to several industry trends, including the global desire for post-COVID-19 preparedness and resilience, growing operational complexity, the aging industrial workforce and the urgent need to upskill next-generation recruits.
Honeywell data reveals the transformational impact HALO Operator Advisor can have on plant operations. Potential benefits include a 75% reduction in incidents and human errors, recovering $1.5 million per plant annually in production losses attributable to worker performance; a $2 million annual reduction in operational costs by optimizing worker productivity and training and advancing toward fully autonomous plant operation; $1.3 million in annual headcount savings through optimized production; and $1 million in annual maintenance savings through improved equipment reliability.
HALO Operator Advisor will be available in October 2021. For more information, visit https://www.honeywellprocess.com and check the HALO Operator Advisor Service Note.
About Honeywell Performance Materials and Technologies (PMT)
Honeywell PMT develops process technologies, automation solutions, advanced materials and industrial software that are transforming industries around the world. PMT's Advanced Materials businesses manufacture a wide variety of high-performance products including environmentally preferable materials used for the production of refrigerants, blowing agents, aerosols and solvents, pharmaceutical packaging, fine chemicals, additives and high strength-fiber for military, law enforcement and industrial use. Technologies developed by Honeywell UOP (www.uop.com), a leading provider in the oil and gas sector, form the foundation for most of the world's refiners, efficiently producing gasoline, diesel, jet fuel, petrochemicals and renewable fuels. Honeywell Process Solutions (www.honeywellprocess.com) is a pioneering provider of automation control, safety systems, field instrumentation, fuel delivery and burners, connected plant offerings, cybersecurity, tissue and packaging materials control systems, connected utility and metering solutions, and services for a wide range of industries.
About Honeywell
Honeywell (www.honeywell.com) is a Fortune 100 technology company that delivers industry-specific solutions that include aerospace products and services; control technologies for buildings and industry; and performance materials globally. Our technologies help aircraft, buildings, manufacturing plants, supply chains, and workers become more connected to make our world smarter, safer, and more sustainable. For more news and information on Honeywell, please visit http://www.honeywell.com/newsroom.
View original content:http://www.prnewswire.com/news-releases/honeywells-experion-operator-advisor-incorporates-advanced-machine-learning-to-measure-and-improve-operator-performance-301316115.html
SOURCE Honeywell
Digitization System Offers Virtual Microscopy, Collaboration and Machine Learning Data Analytics Metrology and Quality News – Online Magazine -…
ZEISS is introducing a new groundbreaking solution for petrographic analysis. ZEISS Axioscan 7 expands the possibilities of automated microscopy by combining unique motorized polarization acquisition modes with unprecedented speed and a rich software ecosystem for visualization, analysis, and collaboration.
Powerful hardware and fully featured software support both the most challenging research tasks and routine scanning applications. The ZEISS Axioscan 7 captures virtual slides quickly with high-speed scanning while retaining consistently high quality, whether capturing brightfield, fluorescence or polarized light images.
Fully automated acquisition now comes with unprecedented speed across even the largest sample collections. Coupled with trusted ZEISS optical quality, this ensures consistent and reproducible imaging and analysis. The ZEISS ZEN Pol Viewer allows for complex multichannel polarization data to be visualized and interrogated in an intuitive environment as a virtual petrographic microscope. Data can be stored locally or automatically uploaded to the cloud for online visualization, distribution and collaboration, allowing researchers to share their images online and organize entire projects on the go.
Polarization Images With Unprecedented Flexibility
Speed, efficiency, and multichannel viewing are critical when it comes to petrographic dataset analysis. ZEISS Axioscan 7 employs swift and reproducible LED illumination and a sophisticated filter concept to efficiently separate a broad range of channels, using the new motorized stage and image acquisition system. Aperture settings are automatically adjusted and optimized to the numerical aperture of the selected objective.
Visualize Complex Digitized Petrographic Data
Geological teaching and research require specialized visualization solutions and global collaboration. Remote access to fully digitized thin section collections augments traditional teaching approaches through blended learning, allowing for a flexible approach to tackling complex ideas.
ZEISS Axioscan 7, the innovative digitization platform, goes beyond collecting multiple polarization modes: brightfield, plane polarization, cross polarization, or circular polarization. It automatically simulates stage rotation such that the sample appears exactly as it would in a traditional petrographic light microscope, even whilst observing multiple image types. This facilitates the learning process in a teaching environment in the lab and online and creates a unique and immersive petrographic experience.
Automated Quantitative Petrography
Multimodal acquisition and digitization allow for fully automated quantification and analysis of complex optical mineralogy data. Machine learning and advanced image analysis can be used to classify mineralogy and porosity, as well as identify individual mineral grains directly from the unique rotating cross polarized data. Birefringence analysis allows for fabrics and anisotropy to be quantified using mineral-by-mineral identification of extinction orientation, and subsequent grain identification used for grain size, shape and orientation analysis. The extremely reproducible nature of ZEISS Axioscan 7 data allows for large batch analysis to be interrogated and reports generated with only minimal user interaction.
The demand for higher throughput and screening capability drives the need for automated instruments. ZEISS Axioscan 7 provides automation without sacrificing flexibility or the high quality of images needed to attract a very wide range of users to core imaging facilities. With approaches as varied as visualization for students to advanced analytics in industry, the new slide scanner attracts users from the full range of geoscience workflows. The powerful combination of accommodating a broad user base with robust design places ZEISS Axioscan 7 as a top performer when it comes to utilization & productivity, and a rapid return on investment.
For more information: http://www.zeiss.com/microscopy/
Read more from the original source:
Digitization System Offers Virtual Microscopy, Collaboration and Machine Learning Data Analytics Metrology and Quality News - Online Magazine -...
Machine learning helps Benteler with predictive quality control in production – Autocar Professional
When machines learn which production data affect the quality of the product, quality deviations can be completely avoided. This makes production processes even better, faster and more reliable. German supplier Benteler is part of the ML4Pro2 research project (Machine Learning for Production and its Products) led by the Fraunhofer Institute for Mechatronic Systems Design.
The aim of the project is to make machine learning permanently available for intelligent products and production processes. For this purpose, Benteler says it is analysing data generated during the production of components in hot forming presses.
Detecting quality deviations based on temperature changes
Benteler uses hot forming technology primarily for customers in the automotive industry. The forming presses process sheet metal blanks into high-strength components, for example A and B pillars, frame parts, and cross and longitudinal beams. The quality of the various components is determined, among other things, by how the heat is distributed during the pressing process. Until now, quality control has been carried out at the end of the production process using an optical measuring station.
Now, as part of the research project, the automotive supplier is using a thermal imaging system that records the heat distribution of a component as soon as it leaves the press. This thermographic data is used as part of predictive quality control. The aim is to know in advance, based on the analysis of process heat, whether the pressed parts will meet the required quality even before they leave the production process.
"Predictive quality is a key objective at Benteler. Our plan in the research project is to record and analyse the machine parameters of our hot forming presses. For example, we check precisely how temperature and pressure interact. This enables us to develop predictive models. Based on these, we can forecast whether the quality of our products is okay," says Daniel Kochling, Industry 4.0 manager at Benteler. "In the future, we will be able to react more quickly and change production parameters if necessary. This ensures that the temperature profiles of the components remain within tolerance and that quality improvements are possible during the ongoing process."
What's the ML4Pro2 project?
The ML4Pro2 project (Machine Learning for Production and its Products) started at the end of 2018 and will run until November 2021. The research and development project is funded by the Ministry of Economic Affairs, Innovation, Digitalization and Energy (MWIDE) of the German federal state of North Rhine-Westphalia. Under the leadership of Fraunhofer IEM, a total of 10 cooperation partners are participating in the project.
Link:
Machine learning helps Benteler with predictive quality control in production - Autocar Professional
Akamai Unveils Machine Learning That Intelligently Automates Application and API Protections and Reduces Burden on Security Professionals – Yahoo…
Edge platform enhancements increase responsiveness and augment security decision-making
CAMBRIDGE, Mass., June 16, 2021 /PRNewswire/ -- Akamai Technologies, Inc. (NASDAQ: AKAM), the world's most trusted solution for protecting and delivering digital experiences, today announces platform security enhancements to strengthen protection for web applications, APIs, and user accounts. Akamai's machine learning derives insight on malicious activity from more than 1.3 billion daily client interactions to intelligently automate threat detections, time-consuming tasks, and security logic to help professionals make faster, more trustworthy decisions regarding cyberthreats.
In its May 9 report Top Cybersecurity Threats in 2021, Forrester estimates that due to reasons "exacerbated by COVID-19 and the resulting growth in digital interactions, identity theft and account takeover increased by at least 10% to 15% from 2019 to 2020." The leading global research and advisory firm notes that we should "anticipate another 8% to 10% increase in identity theft and ATO [account takeover] fraud in 2021." With threat actors increasingly using automation to compromise systems and applications, security professionals must likewise automate defenses in parallel against these attacks to manage cyberthreats at pace.
New Akamai platform security enhancements include:
Adaptive Security Engine for Akamai's web application and API protection (WAAP) solutions, Kona Site Defender and Web Application Protector, is designed to automatically adapt protections with the scale and sophistication of attacks, while reducing the effort to maintain and tune policies. The Adaptive Security Engine combines proprietary anomaly risk scoring with adaptive threat profiling to identify highly targeted, evasive, and stealthy attacks. The dynamic security logic intelligently adjusts its defensive aggressiveness based on threat intelligence automatically correlated for each customer's unique traffic. Self-tuning leverages machine learning, statistical models, and heuristics to analyze all triggers across each policy to accurately differentiate between true and false positives.
Audience Hijacking Protection has been added to Akamai Page Integrity Manager to detect and block malicious activity in real time from client-side attacks using JavaScript, advertiser networks, browser plug-ins, and extensions that target web clients. Audience Hijacking Protection is designed to use machine learning to quickly identify vulnerable resources, detect suspicious behavior, and block unwanted ads, pop-ups, affiliate fraud, and other malicious activities aimed at hijacking your audience.
Bot Score and JavaScript Obfuscation have been added to Akamai Bot Manager, laying the foundation for ongoing innovations in adversarial bot management, including the ability to take action against bots aligned with corporate risk tolerance. Bot Score automatically learns unique traffic and bot patterns, and self-tunes for long-term effectiveness; JavaScript Obfuscation dynamically changes detections to prevent bot operators from reverse engineering detections.
Akamai Account Protector is a new solution designed to proactively identify and block fraudulent human activity, such as account takeover attacks. Using advanced machine learning, behavioral analytics, and reputation heuristics, Account Protector intelligently evaluates every login request across multiple risk and trust signals to determine whether it is coming from a legitimate user or an impersonator. This capability complements Akamai's bot mitigation to provide effective protection against both malicious human actors and automated threats.
"At Akamai, our latest platform release is intended to help resolve the tension between security and ease of use, with key capabilities around automation and machine learning specifically designed to intelligently augment human decision-making," said Aparna Rayasam, senior vice president and general manager, Application Security, Akamai. "Smart automation adds immediate value and empowers users with the right tools to generate insight and context to make faster and more trustworthy decisions, seamlessly all while anticipating what attackers might do next."
For more information about Akamai's Edge Security solutions, visit our Platform Update page.
About Akamai
Akamai secures and delivers digital experiences for the world's largest companies. Akamai's intelligent edge platform surrounds everything, from the enterprise to the cloud, so customers and their businesses can be fast, smart, and secure. Top brands globally rely on Akamai to help them realize competitive advantage through agile solutions that extend the power of their multi-cloud architectures. Akamai keeps decisions, apps, and experiences closer to users than anyone and attacks and threats far away. Akamai's portfolio of edge security, web and mobile performance, enterprise access, and video delivery solutions is supported by unmatched customer service, analytics, and 24/7/365 monitoring. To learn why the world's top brands trust Akamai, visit http://www.akamai.com, blogs.akamai.com, or @Akamai on Twitter. You can find our global contact information at http://www.akamai.com/locations.
Contacts: Tim Whitman Media Relations 617-444-3019 twhitman@akamai.com
Tom Barth Investor Relations 617-274-7130 tbarth@akamai.com
View original content to download multimedia:http://www.prnewswire.com/news-releases/akamai-unveils-machine-learning-that-intelligently-automates-application-and-api-protections-and-reduces-burden-on-security-professionals-301313433.html
SOURCE Akamai Technologies, Inc.
See the original post here:
Akamai Unveils Machine Learning That Intelligently Automates Application and API Protections and Reduces Burden on Security Professionals - Yahoo...
An introduction to object detection with deep learning – TechTalks
This article is part of Deconstructing artificial intelligence, a series of posts that explore the details of how AI applications work (In partnership with Paperspace).
Deep neural networks have gained fame for their capability to process visual information. And in the past few years, they have become a key component of many computer vision applications.
Among the key problems neural networks can solve is detecting and localizing objects in images. Object detection is used in many different domains, including autonomous driving, video surveillance, and healthcare.
In this post, I will briefly review the deep learning architectures that help computers detect objects.
One of the key components of most deep learning-based computer vision applications is the convolutional neural network (CNN). Invented in the 1980s by deep learning pioneer Yann LeCun, CNNs are a type of neural network that is efficient at capturing patterns in multidimensional spaces. This makes CNNs especially good for images, though they are used to process other types of data too. (To focus on visual data, we'll consider our convolutional neural networks to be two-dimensional in this article.)
Every convolutional neural network is composed of one or several convolutional layers, a software component that extracts meaningful values from the input image. And every convolution layer is composed of several filters, square matrices that slide across the image and register the weighted sum of pixel values at different locations. Each filter has different values and extracts different features from the input image. The output of a convolution layer is a set of feature maps.
When stacked on top of each other, convolutional layers can detect a hierarchy of visual patterns. For instance, the lower layers will produce feature maps for vertical and horizontal edges, corners, and other simple patterns. The next layers can detect more complex patterns such as grids and circles. As you move deeper into the network, the layers will detect complicated objects such as cars, houses, trees, and people.
Most convolutional neural networks use pooling layers to gradually reduce the size of their feature maps and keep the most prominent parts. Max-pooling, which is currently the main type of pooling layer used in CNNs, keeps the maximum value in a patch of pixels. For example, if you use a pooling layer with a size of 2, it will take 2×2-pixel patches from the feature maps produced by the preceding layer and keep the highest value. This operation halves the size of the maps and keeps the most relevant features. Pooling layers enable CNNs to generalize their capabilities and be less sensitive to the displacement of objects across images.
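To make the operation concrete, here is a small NumPy sketch of 2×2 max-pooling on a 4×4 feature map (the values are arbitrary):

```python
import numpy as np

fmap = np.array([[1, 3, 2, 0],
                 [4, 6, 1, 2],
                 [7, 2, 8, 5],
                 [0, 1, 3, 4]], dtype=float)

# Split the 4x4 map into 2x2 patches and keep the maximum of each patch
pooled = fmap.reshape(2, 2, 2, 2).max(axis=(1, 3))
print(pooled)  # [[6. 2.]
               #  [7. 8.]]
```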
Finally, the output of the convolution layers is flattened into a one-dimensional matrix that is the numerical representation of the features contained in the image. That matrix is then fed into a series of fully connected layers of artificial neurons that map the features to the kind of output expected from the network.
The most basic task for convolutional neural networks is image classification, in which the network takes an image as input and returns a list of values that represent the probability that the image belongs to one of several classes. For example, say you want to train a neural network to detect all 1,000 classes of objects contained in the popular open-source dataset ImageNet. In that case, your output layer will have 1,000 numerical outputs, each of which contains the probability of the image belonging to one of those classes.
You can always create and test your own convolutional neural network from scratch. But most machine learning researchers and developers use one of several tried and tested convolutional neural networks such as AlexNet, VGG16, and ResNet-50.
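For instance, here is a minimal sketch of running a pretrained ResNet-50 classifier with PyTorch's torchvision (assuming the library is installed; a random tensor stands in for a properly preprocessed image):

```python
import torch
from torchvision import models

model = models.resnet50(pretrained=True)  # trained on the 1,000 ImageNet classes
model.eval()

x = torch.rand(1, 3, 224, 224)  # stand-in for a normalized 224x224 RGB image
with torch.no_grad():
    probs = model(x).softmax(dim=1)

print(probs.argmax(dim=1))  # index of the most probable ImageNet class
```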
While an image classification network can tell whether an image contains a certain object or not, it won't say where in the image the object is located. Object detection networks provide both the class of objects contained in an image and a bounding box that provides the coordinates of that object.
Object detection networks bear much resemblance to image classification networks and use convolution layers to detect visual features. In fact, most object detection networks use an image classification CNN and repurpose it for object detection.
Object detection is a supervised machine learning problem, which means you must train your models on labeled examples. Each image in the training dataset must be accompanied with a file that includes the boundaries and classes of the objects it contains. There are several open-source tools that create object detection annotations.
The object detection network is trained on the annotated data until it can find regions in images that correspond to each kind of object.
Now let's look at a few object-detection neural network architectures.
The Region-based Convolutional Neural Network (R-CNN) was proposed by AI researchers at the University of California, Berkeley, in 2014. The R-CNN is composed of three key components.
First, a region selector uses selective search, an algorithm that finds regions of pixels in the image that might represent objects, also called regions of interest (RoI). The region selector generates around 2,000 regions of interest for each image.
Next, the RoIs are warped to a predefined size and passed on to a convolutional neural network. The CNN processes every region separately and extracts its features through a series of convolution operations. The CNN then uses fully connected layers to encode the feature maps into a single-dimensional vector of numerical values.
Finally, a classifier machine learning model maps the encoded features obtained from the CNN to the output classes. The classifier has a separate output class for background, which corresponds to anything that isn't an object.
The original R-CNN paper suggests the AlexNet convolutional neural network for feature extraction and a support vector machine (SVM) for classification. But in the years since the paper was published, researchers have used newer network architectures and classification models to improve the performance of R-CNN.
R-CNN suffers from a few problems. First, the model must generate and crop 2,000 separate regions for each image, which can take quite a while. Second, the model must compute the features for each of the 2,000 regions separately. This amounts to a lot of calculations and slows down the process, making R-CNN unsuitable for real-time object detection. And finally, the model is composed of three separate components, which makes it hard to integrate computations and improve speed.
In 2015, the lead author of the R-CNN paper proposed a new architecture called Fast R-CNN, which solved some of the problems of its predecessor. Fast R-CNN brings feature extraction and region selection into a single machine learning model.
Fast R-CNN receives an image and a set of RoIs and returns a list of bounding boxes and classes of the objects detected in the image.
One of the key innovations in Fast R-CNN was the RoI pooling layer, an operation that takes CNN feature maps and regions of interest for an image and provides the corresponding features for each region. This allowed Fast R-CNN to extract features for all the regions of interest in the image in a single pass as opposed to R-CNN, which processed each region separately. This resulted in a significant boost in speed.
However, one issue remained unsolved. Fast R-CNN still required the regions of the image to be extracted and provided as input to the model. Fast R-CNN was still not ready for real-time object detection.
[Image: Faster R-CNN architecture]
Faster R-CNN, introduced in 2016, solves the final piece of the object-detection puzzle by integrating the region extraction mechanism into the object detection network.
Faster R-CNN takes an image as input and returns a list of object classes and their corresponding bounding boxes.
The architecture of Faster R-CNN is largely similar to that of Fast R-CNN. Its main innovation is the region proposal network (RPN), a component that takes the feature maps produced by a convolutional neural network and proposes a set of bounding boxes where objects might be located. The proposed regions are then passed to the RoI pooling layer. The rest of the process is similar to Fast R-CNN.
By integrating region detection into the main neural network architecture, Faster R-CNN achieves near-real-time object detection speed.
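For a sense of what this looks like in practice, here is a minimal sketch using torchvision's pretrained Faster R-CNN (assuming torchvision is installed; a dummy tensor stands in for a real image):

```python
import torch
from torchvision.models import detection

model = detection.fasterrcnn_resnet50_fpn(pretrained=True)
model.eval()

img = torch.rand(3, 480, 640)  # dummy RGB image tensor with values in [0, 1]
with torch.no_grad():
    preds = model([img])  # the model accepts a list of images

# Each prediction holds bounding boxes, class labels, and confidence scores
for box, label, score in zip(preds[0]["boxes"], preds[0]["labels"], preds[0]["scores"]):
    if score > 0.8:  # keep only confident detections
        print(label.item(), [round(v, 1) for v in box.tolist()], round(score.item(), 2))
```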
In 2016, researchers at the University of Washington, the Allen Institute for AI, and Facebook AI Research proposed You Only Look Once (YOLO), a family of neural networks that improved the speed and accuracy of object detection with deep learning.
The main improvement in YOLO is the integration of the entire object detection and classification process in a single network. Instead of extracting features and regions separately, YOLO performs everything in a single pass through a single network, hence the name You Only Look Once.
YOLO can perform object detection at video streaming framerates and is suitable for applications that require real-time inference.
In the past few years, deep learning object detection has come a long way, evolving from a patchwork of different components to a single neural network that works efficiently. Today, many applications use object-detection networks as one of their main components. It's in your phone, computer, car, camera, and more. It will be interesting (and perhaps creepy) to see what can be achieved with increasingly advanced neural networks.
Read more here:
An introduction to object detection with deep learning - TechTalks
Veritone : What Is MLOps? | A Complete Guide to Machine Learning Operations – Marketscreener.com
Table of contents:
What Is MLOps + How Does It Work?
Why Do You Need MLOps?
What Problems Does MLOps Solve?
How Do You Implement MLOps In Your Organization?
How Do I Learn MLOps?
Want to Learn Even More About MLOps?
Machine learning operations, or MLOps, is the term given to the process of creating, deploying, and maintaining machine learning models. It's a discipline that combines machine learning, DevOps, and data engineering with the goal of finding faster, simpler, and more effective ways to productize machine learning. When done right, MLOps can help organizations align their models with their unique business needs, as well as regulatory requirements. Keep reading to find out how you can implement MLOps with your team.
What Is MLOps + How Does It Work?
A typical MLOps process looks like this: a business goal is defined, the relevant data is collected and cleaned, and then a machine learning model is built and deployed. Or maybe we should say that's what a typical MLOps process is supposed to look like, but many organizations are struggling to get it down.
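As a compressed sketch of those steps (using scikit-learn, with an invented CSV file and column name, and with "deployment" reduced to serializing the trained model):

```python
import joblib
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# 1. Business goal (assumed for the sketch): predict customer churn
df = pd.read_csv("accounts.csv").dropna()  # 2. collect and clean the data

X, y = df.drop(columns=["churned"]), df["churned"]
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# 3. Build the model and check it against held-out data
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("holdout accuracy:", model.score(X_test, y_test))

# 4. "Deploy": persist the model artifact for the serving environment
joblib.dump(model, "churn_model.joblib")
```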
Productizing machine learning, or ML, is one of the biggest challenges in AI practices today. Many organizations are desperate to figure out how to convert the insights discovered by data scientists into tangible value for their business, which is easier said than done.
It requires unifying multiple processes across multiple teams, starting with defining business objectives and continuing all the way through data acquisition and model development and deployment.
This unification is achieved through a set of best practices for communication and collaboration between the data engineers who acquire the data, the data scientists who prepare the data and develop the model, and the operations professionals who serve the models.
Why Do You Need MLOps?
Businesses are dealing with more data than ever before. In a recent study, the IBM Institute for Business Value found that 59% of companies have accelerated their digital transformation. This pivot to digital-first enterprise strategy means continued investments in data, analytics, and AI capabilities have never been more critical.
Leveraging data as a strategic asset can lead to accelerated business growth and increased revenue. According to McKinsey, companies with the greatest overall growth in revenue and earnings receive a significant proportion of that boost from data and analytics. If you're hoping to replicate this growth and set your business up for sustainable success, ad hoc initiatives and one-off projects won't cut it. You'll need a well-planned data strategy that takes the best practices of software development and applies them to data science, which is where MLOps comes in.
MLOps bridges the gap between gathering data and turning that data into actionable business value. A successful MLOps strategy leverages the best of data science with the best of operations to streamline scalable, repeatable machine learning from end to end. It empowers organizations to approach this new era of data with confidence and reap the benefits of machine learning and AI in real life.
In addition to increased growth and revenue, benefits include faster go-to-market times and lower operational costs. With a solid framework for your data science and DevOps teams to follow, managers can spend more time thinking through strategy and individual contributors can be more agile.
What Problems Does MLOps Solve?
Let's dig into specifics. Applying MLOps best practices solves a variety of the problems that plague businesses around the globe, including:
Poor Communication
No matter how your company is organized, it's likely that your data scientists, software engineers, and operations managers live in very different worlds. This silo effect kills communication, collaboration, and productivity.
Without collaboration, you can forget about simplifying and automating the deployment of machine learning models in large-scale production environments. MLOps solves this problem by establishing dynamic pipelines and adaptable frameworks that keep everyone on the same page, reducing friction and opening up bottlenecks.
Unfinished Projects
As VentureBeat reports, 87% of machine learning models never make it into production. In other words, only about 1 in 10 data scientists' workdays actually end up producing something of value for the company. This sad statistic represents lost revenue, wasted time, and a growing sense of frustration and fatigue in data scientists everywhere. MLOps solves this problem by first ensuring all key stakeholders are on board with a project before it kicks off. MLOps then supports and optimizes every step of the process, ensuring that each model can journey its way toward production without any lag (and without the never-ending email chains).
Lost Learnings
We already talked about the silo effect, but it rears its ugly head again here. Creating and serving ML models requires input and expertise from multiple different teams, with each team driving a different part of the process. Without communication and collaboration between everyone involved, key learnings and critical insights will remain stuck within each silo. MLOps solves this problem by bringing together different teams with one central hub for testing and optimization. MLOps best practices make it easy to share learnings that can be used to improve the model and rapidly redeploy.
Redundancy
Lengthy development and deployment cycles mean that, way too often, evolving business objectives make models redundant before they've even been fully developed. Or the changing business objectives mean that the ML system needs to be retrained immediately after deployment. MLOps solves these issues by implementing best practices across the entire process, making productizing ML faster at every stage. MLOps best practices also build in room for adjustments, so your models can adapt to your changing business needs.
Misuse of Talent
Data scientists are not software engineers and vice versa. They have different focuses, different skill sets, and very different priorities. Expecting one to perform the tasks of the other is a recipe for failure. Unfortunately, many organizations make this mistake while trying to cut corners or speed up the process of getting machine learning models into production. MLOps solves this problem by bringing both disciplines together in a way that lets each use their respective talents in the best way possible, laying the groundwork for long-term success.
Noncompliance
The age of big data is accompanied by the age of intense, ever-changing regulation and compliance systems. Many organizations struggle to meet data compliance standards, let alone remain adaptable for future iterations and addendums. MLOps solves this problem by implementing a comprehensive plan for governance. This ensures that each model, whether new or updated, is compliant with original standards. MLOps also ensures that all data programs are auditable and explainable by introducing monitoring tools.
How Do You Implement MLOps In Your Organization?
Now that you're sold on the benefits of MLOps, it's time to figure out how you can bring the discipline to life at your organization.
The good news is that MLOps is still a relatively new discipline, which means even if you are just now getting started you aren't far behind other organizations. The bad news is that MLOps is still a relatively new discipline, which means there aren't many tried-and-true formulas for success readily available for you to replicate at your organization. However, ModelOps platforms with ready-to-deploy models can accelerate the MLOps process.
That being said, if you are ready to invest in machine learning there are a few ways you can set your organization up for success. Let's dive into how to achieve MLOps success in more detail:
MLOps Teams
Start by looking at your teams to confirm you have the necessary skill sets covered. We've already established that productizing ML models requires a set of skills that, up until now, organizations have considered separate. So, it's likely that your data engineers, data scientists, software engineers, and operations professionals will be dispersed throughout various departments.
You don't need to alter your entire organizational structure to create an MLOps team. Instead, consider creating a hybrid team with cross-functionality. This way you can cover a wide range of skills without too much disruption to your organization. Alternatively, you may choose to use a solution like aiWARE that can rapidly deploy and scale AI within your applications and business processes without requiring AI developers and ML engineers.
Your MLOps team will need to cover 4 main areas:
Scoping
The first stage in a typical machine learning lifecycle is scoping. This stage consists of scoping out the project by identifying what business problem(s) you are aiming to solve with AI.
This stage usually involves collaborators with a deep understanding of the potential business problems that can be solved with AI, such as director-level managers and above. It also usually includes collaborators who are intimately familiar with the data, such as senior data scientists.
Data
The second stage in a typical ML lifecycle is data. This stage starts with acquiring the data and continues through cleaning, processing, organizing, and storing the data.
Stage two usually involves both data engineers and data scientists along with product managers.
Modeling
Stage three in the typical ML lifecycle is modeling. In this stage, the data from stage two is used to train, test, and refine ML models.
This third stage usually involves both data engineers and data scientists (and even ML architects if you have them). It also requires feedback and input from cross-functional stakeholders.
Deployment
The fourth and final stage in the typical machine learning lifecycle is deployment. Trained models are deployed into production.
This stage usually involves collaborators that have experience with machine learning and the DevOps process, such as machine learning engineers or DevOps specialists.
The exact composition and organization of the team will vary depending on your individual business needs, but the essential part is ensuring that each skillset is covered by someone.
MLOps Tools
In addition to having the right team, you'll also need to have the right tools in place to achieve MLOps success. MLOps is a relatively new, rapidly growing field. And, as is often the case in such fields, a large variety of tools have been created to help manage and streamline the processes involved.
When putting together your MLOps toolkit, you'll need to consider a few different factors such as the MLOps tasks you need to address, the languages and libraries your data scientists will be using, the level of product support you'll need, which cloud provider(s) you'll be working with, what AI models and engines to utilize, etc.
Once you build models, you can easily onboard them into a production-ready environment with aiWARE. This option allows you to rapidly deploy models that solve real-world business problems. And flexible API integrations make it easy to customize the solution to your business needs.
How Do I Learn MLOps?
As we've already mentioned, MLOps is a rapidly growing field. And that massive growth is only expected to continue, with 60% of companies planning to accelerate their process automation in the next 2 years, according to the IBV Trending Insights report.
This increased investment has made MLOps, or DevOps for machine learning, a necessary skill set at companies in nearly every industry. According to the LinkedIn emerging jobs report, hiring for machine learning and artificial intelligence roles grew 74% annually between 2015 and 2019. This makes MLOps the top emerging job in the U.S.
And it's experiencing a talent shortage. Many factors contribute to the MLOps talent crunch, the biggest being an overwhelming number of platforms and tools to learn, a lack of clarity in roles and responsibilities, and a shortage of dedicated courses for MLOps engineers.
All that to say, if you're looking to get your foot in the MLOps door there's no better time than right now. We recommend checking out some of these great resources:
MLOps Resources
This course, currently available on Coursera, is a great jumping-off point if you're new to MLOps. Primarily intended for data scientists and software engineers who are looking to develop MLOps skills, it introduces participants to MLOps tools and best practices for deploying, evaluating, monitoring, and operating production ML systems on Google Cloud.
This course, also available on Coursera, is for those who have already nailed the fundamentals. It covers MLOps concepts in depth as well as production engineering capabilities: you'll learn how to use well-established tools and methodologies to conceptualize, build, and maintain integrated systems that continuously operate in production.
This book, by Mark Treveil and the Dataiku Team, was written specifically for the people directly facing the task of scaling ML in production. It's a guide for creating a successful MLOps environment, from the organizational to the technical challenges involved.
This seminar series takes a look at the frontier of ML, aiming to focus research attention on interesting questions and stir up conversations around ML topics. Every seminar is live-streamed on YouTube, viewers are encouraged to ask questions in the live chat, and recordings of past talks remain available on YouTube afterward.
This book, by Andriy Burkov, offers a 'theory of the practice' approach, giving readers an overview of the problems, questions, and best practices of applied machine learning.
We also highly recommend joining the MLOps community on Slack. An open community for all ML and MLOps enthusiasts, it's a place to learn interesting things and broaden your knowledge; amateurs and professionals alike are welcome to join the conversation.
Want to Learn Even More About MLOps?
In the coming weeks, we'll be digging into some core MLOps topics that may interest you. If you're interested in diving deeper, keep an eye on our blog. We'll publish more in-depth content that covers MLOps best practices, ModelOps, MLOps tools, and MLOps versus AIOps.
Ready to dig into another MLOps resource right away? Check out this on-demand webinar: MLOps Done Right: Best Practices to Deploy, Integrate, Scale, Monitor, and Comply.
Original post:
Veritone : What Is MLOps? | A Complete Guide to Machine Learning Operations - Marketscreener.com
Akamai Unveils Machine Learning That Intelligently Automates Application and API Protections and Reduces Burden on Security Professionals – PRNewswire
CAMBRIDGE, Mass., June 16, 2021 /PRNewswire/ -- Akamai Technologies, Inc. (NASDAQ: AKAM), the world's most trusted solution for protecting and delivering digital experiences, today announces platform security enhancements to strengthen protection for web applications, APIs, and user accounts. Akamai's machine learning derives insight on malicious activity from more than 1.3 billion daily client interactions to intelligently automate threat detections, time-consuming tasks, and security logic to help professionals make faster, more trustworthy decisions regarding cyberthreats.
In its May 9 report Top Cybersecurity Threats in 2021, Forrester estimates that due to reasons "exacerbated by COVID-19 and the resulting growth in digital interactions, identity theft and account takeover increased by at least 10% to 15% from 2019 to 2020." The leading global research and advisory firm notes that we should "anticipate another 8% to 10% increase in identity theft and ATO [account takeover] fraud in 2021." With threat actors increasingly using automation to compromise systems and applications, security professionals must likewise automate defenses in parallel against these attacks to manage cyberthreats at pace.
New Akamai platform security enhancements include:
Adaptive Security Engine for Akamai's web application and API protection (WAAP) solutions, Kona Site Defender and Web Application Protector, is designed to automatically adapt protections with the scale and sophistication of attacks, while reducing the effort to maintain and tune policies. The Adaptive Security Engine combines proprietary anomaly risk scoring with adaptive threat profiling to identify highly targeted, evasive, and stealthy attacks. The dynamic security logic intelligently adjusts its defensive aggressiveness based on threat intelligence automatically correlated for each customer's unique traffic. Self-tuning leverages machine learning, statistical models, and heuristics to analyze all triggers across each policy to accurately differentiate between true and false positives.
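Akamai's engine is proprietary, but as a rough, generic illustration of the underlying idea, the sketch below scores a traffic signal against a per-customer baseline and nudges a blocking threshold based on an observed false-positive rate. Every number, name, and heuristic here is invented for illustration; none of it reflects Akamai's actual implementation.

# A generic illustration of self-tuning anomaly risk scoring: score requests
# against a per-customer traffic baseline and adapt the blocking threshold.
from statistics import mean, stdev

def risk_score(value: float, baseline: list[float]) -> float:
    # Z-score of an observed signal against the customer's normal traffic
    mu, sigma = mean(baseline), stdev(baseline)
    return abs(value - mu) / sigma if sigma else 0.0

def tune_threshold(threshold: float, false_positive_rate: float) -> float:
    # Toy self-tuning heuristic: relax the threshold when too many benign
    # requests are flagged, tighten it when false positives are rare
    return threshold * 1.1 if false_positive_rate > 0.05 else threshold * 0.95

baseline = [12.0, 15.0, 11.0, 14.0, 13.0]  # requests/min during normal traffic
print(risk_score(90.0, baseline))           # a sudden burst scores as highly anomalous
print(tune_threshold(3.0, 0.08))            # too many false positives: relax threshold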
Audience Hijacking Protection has been added to Akamai Page Integrity Manager to detect and block malicious activity in real time from client-side attacks using JavaScript, advertiser networks, browser plug-ins, and extensions that target web clients. Audience Hijacking Protection is designed to use machine learning to quickly identify vulnerable resources, detect suspicious behavior, and block unwanted ads, pop-ups, affiliate fraud, and other malicious activities aimed at hijacking your audience.
Bot Score and JavaScript Obfuscation have been added to Akamai Bot Manager, laying the foundation for ongoing innovations in adversarial bot management, including the ability to take action against bots aligned with corporate risk tolerance. Bot Score automatically learns unique traffic and bot patterns, and self-tunes for long-term effectiveness; JavaScript Obfuscation dynamically changes detections to prevent bot operators from reverse engineering detections.
Akamai Account Protector is a new solution designed to proactively identify and block human fraudulent activity like account takeover attacks. Using advanced machine learning, behavioral analytics, and reputation heuristics, Account Protector intelligently evaluates every login request across multiple risk and trust signals to determine if it is coming from a legitimate user or an impersonator. This capability complements Akamai's bot mitigation to provide effective protection against both malicious human actors and automated threats.
"At Akamai, our latest platform release is intended to help resolve the tension between security and ease of use, with key capabilities around automation and machine learning specifically designed to intelligently augment human decision-making," said Aparna Rayasam, senior vice president and general manager, Application Security, Akamai. "Smart automation adds immediate value and empowers users with the right tools to generate insight and context to make faster and more trustworthy decisions, seamlessly all while anticipating what attackers might do next."
For more information about Akamai's Edge Security solutions, visit our Platform Update page.
About Akamai
Akamai secures and delivers digital experiences for the world's largest companies. Akamai's intelligent edge platform surrounds everything, from the enterprise to the cloud, so customers and their businesses can be fast, smart, and secure. Top brands globally rely on Akamai to help them realize competitive advantage through agile solutions that extend the power of their multi-cloud architectures. Akamai keeps decisions, apps, and experiences closer to users than anyone and attacks and threats far away. Akamai's portfolio of edge security, web and mobile performance, enterprise access, and video delivery solutions is supported by unmatched customer service, analytics, and 24/7/365 monitoring. To learn why the world's top brands trust Akamai, visit http://www.akamai.com, blogs.akamai.com, or @Akamai on Twitter. You can find our global contact information at http://www.akamai.com/locations.
Contacts: Tim Whitman Media Relations 617-444-3019 [emailprotected]
Tom Barth Investor Relations 617-274-7130 [emailprotected]
SOURCE Akamai Technologies, Inc.