Category Archives: Machine Learning
Google is using machine learning to improve the quality of Duo calls – The Verge
Google has rolled out a new technology called WaveNetEQ to improve audio quality in Duo calls when the service can't maintain a steady connection. It's based on technology from Google's DeepMind division and aims to replace audio lost to jitter with artificial noise that sounds just like human speech, generated using machine learning.
If you've ever made a call over the internet, chances are you've experienced audio jitter. It happens when packets of audio data sent as part of the call get lost along the way or otherwise arrive late or in the wrong order. Google says that 99 percent of Duo calls experience packet loss: 20 percent of these lose over 3 percent of their audio, and 10 percent lose over 8 percent. That's a lot of audio to replace.
Every calling app has to deal with this packet loss somehow, but Google says that these packet loss concealment (PLC) processes can struggle to fill gaps of 60ms or more without sounding robotic or repetitive. WaveNetEQ's solution is based on DeepMind's neural network technology, and it has been trained on data from over 100 speakers in 48 different languages.
Google has published audio samples comparing WaveNetEQ against NetEQ, a commonly used PLC technology, demonstrating how each sounds both when replacing 60ms of packet loss and when a call is experiencing packet loss of 120ms.
There's a limit to how much audio the system can replace, though. Google's tech is designed to replace short sounds, rather than whole words. So after 120ms, it fades out and produces silence. Google says it evaluated the system to make sure it wasn't introducing any significant new sounds. Plus, all of the processing needs to happen on-device, since Google Duo calls are end-to-end encrypted by default. Once the call's real audio resumes, WaveNetEQ will seamlessly fade back to reality.
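As a rough illustration of that behavior, here is a hypothetical sketch of the concealment policy, not Google's implementation: the neural vocoder is stubbed out, and the sample rate is an assumption.

```python
import numpy as np

SAMPLE_RATE = 16_000   # assumed; Duo's actual rate isn't public
MAX_FILL_MS = 120      # past this, the system fades to silence

def generate_speech_like_audio(context: np.ndarray, n_samples: int) -> np.ndarray:
    """Stub standing in for the neural vocoder; returns quiet noise."""
    return np.random.default_rng(0).normal(scale=0.01, size=n_samples)

def conceal(gap_ms: int, context: np.ndarray) -> np.ndarray:
    """Fill a packet-loss gap with generated audio, fading out past 120 ms."""
    samples = int(SAMPLE_RATE * min(gap_ms, MAX_FILL_MS) / 1000)
    audio = generate_speech_like_audio(context, samples)
    if gap_ms > MAX_FILL_MS:
        # Fade to silence rather than inventing whole words.
        audio *= np.linspace(1.0, 0.0, samples)
    return audio
```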
It's a neat bit of technology that should make calls that much easier to understand when the internet fails them. The technology is already available for Duo calls made on Pixel 4 phones, thanks to the handset's December feature drop, and Google says it's in the process of rolling it out to other, unnamed handsets.
Data Science and Machine-Learning Platforms Market Share, Opportunities, Trends, and Forecasts to 2020-2027 with Key Players: SAS, Alteryx, IBM,…
Global Data Science and Machine-Learning Platforms Market Forecast 2020-2027
This report offers a detailed view of market opportunity by end-user segments, product segments, sales channels, key countries, and import/export dynamics. It details market size & forecast, growth drivers, emerging trends, market opportunities, and investment risks across various segments of the Data Science and Machine-Learning Platforms industry. It provides a comprehensive understanding of Data Science and Machine-Learning Platforms market dynamics in both value and volume terms.
The report provides a basic overview of the industry, including definitions and classifications. The Data Science and Machine-Learning Platforms market analysis is provided for the international markets, including development trends, competitive landscape analysis, and the development status of key regions.
The major players reported in the market include: SAS, Alteryx, IBM, RapidMiner, KNIME, Microsoft, Dataiku, Databricks, TIBCO Software, MathWorks, H2O.ai, Anaconda, SAP, Google, Domino Data Lab, Angoss, Lexalytics, and Rapid Insight.
The final report will add an analysis of the impact of COVID-19 on the Data Science and Machine-Learning Platforms industry.
The report first introduces the Data Science and Machine-Learning Platforms Market basics: definitions, classifications, applications, and an industry chain overview; industry policies and plans; product specifications; manufacturing processes; cost structures; and so on. It then analyzes the world's main regional market conditions, including product price, profit, capacity, production, capacity utilization, supply, demand, and industry growth rate. Finally, the report introduces new-project SWOT analysis, investment feasibility analysis, and investment return analysis.
Table Of Content
1 Report Overview
2 Global Growth Trends
3 Market Share by Key Players
4 Breakdown Data by Type and Application
5 North America
6 Europe
7 China
8 Japan
9 Southeast Asia
10 India
11 Central & South America
12 International Players Profiles
13 Market Forecast 2019-2025
14 Analysts' Viewpoints/Conclusions
15 Appendix
This report studies the Data Science and Machine-Learning Platforms market status and outlook of the global and major regional markets from the angles of players, countries, product types, and end industries; it analyzes the top players in the global market and splits the Data Science and Machine-Learning Platforms market by product type and applications/end industries.
Customization of this Report: This report can be customized to meet the client's requirements. Please connect with our sales team ( [emailprotected] ), who will ensure that you get a report that suits your needs. For more relevant reports visit http://www.reportsandmarkets.com
What to Expect From This Report on the Data Science and Machine-Learning Platforms Market:
Development plans for your business, based on production costs and product value, for the coming years.
A detailed overview of the regional distribution of popular products in the Data Science and Machine-Learning Platforms Market.
How do the major companies and mid-level manufacturers make a profit within the Data Science and Machine-Learning Platforms Market?
An estimate of the barriers facing new players entering the Data Science and Machine-Learning Platforms Market.
Comprehensive research on overall expansion within the Data Science and Machine-Learning Platforms Market to inform decisions on product launches and asset development.
To Know More About This Report
If you have any special requirements regarding this report, please let us know and we can provide a custom report.
About Us:
Market research helps in understanding the market potential of any product. Reports And Markets is not just another company in this domain but is a part of a veteran group called Algoro Research Consultants Pvt. Ltd. It offers premium progressive statistical surveying, market research reports, and analysis & forecast data for a wide range of sectors, both for government and private agencies, all across the world.
For more detailed information please contact us at:
Sanjay Jain
Manager Partner Relations & International Marketing
http://www.reportsandmarkets.com
Ph: +1-352-353-0818 (US)
Google TensorFlow Cert Suggests AI, ML Certifications on the Rise – Dice Insights
Over the past few years, many companies have embraced artificial intelligence (A.I.) and machine learning as the way of the future. That's been good news for those technologists who've mastered the tools and concepts related to A.I. and machine learning; those with the right combination of experience and skills can easily earn six-figure salaries (with accompanying perks and benefits).
As A.I. and machine learning mature as sub-industries, it's inevitable that more certifications proving technologists' skills will emerge. For example, Google recently launched a TensorFlow Developer Certificate, which, just like it says on the tin, confirms that a developer has mastered the basics of TensorFlow, the open-source deep learning library developed by Google.
"This certificate in TensorFlow development is intended as a foundational certificate for students, developers, and data scientists who want to demonstrate practical machine learning skills through building and training of basic models using TensorFlow," read a note on the TensorFlow Blog. "This level one certificate exam tests a developer's foundational knowledge of integrating machine learning into tools and applications."
Those who pass the exam will receive a certificate and a badge. "In addition, those certified developers will also be invited to join our credential network for recruiters seeking entry-level TensorFlow developers," the blog posting added. "This is only the beginning; as this program scales, we are eager to add certificate programs for more advanced and specialized TensorFlow practitioners."
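For context, the sort of "basic model" building and training the level-one exam description points to looks roughly like this minimal Keras example; the dataset and architecture are illustrative, not exam content.

```python
import tensorflow as tf

# Load a standard dataset and train a small classifier end to end.
(x_train, y_train), _ = tf.keras.datasets.mnist.load_data()
x_train = x_train / 255.0

model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(x_train, y_train, epochs=1)
```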
Google and TensorFlow aren't the only entities in the A.I. certification arena. IBM offers an A.I. Engineering Professional Certificate, which focuses on machine learning and deep learning. Microsoft also has a number of A.I.-related certificates, including an Azure A.I. Engineer Associate certificate. And last year, Amazon launched AWS Certified Machine Learning.
Meanwhile, if you're interested in learning how to use TensorFlow, Udacity and Google are offering a two-month course (just updated in February 2020) designed to help developers utilize TensorFlow to build A.I. applications that scale. The course is part of Udacity's School of A.I., a cluster of free courses to help those relatively new to A.I. and machine learning learn the fundamentals.
As the COVID-19 pandemic forces many companies to radically adjust their products, workflows, and internal tech stacks, interest in A.I. and machine learning may accelerate; managers are certainly interested in tools and platforms that will allow them to automate work. Even before the virus emerged, Burning Glass, which collects and analyzes millions of job postings from across the country, estimated that jobs involving A.I. would grow 40 percent over the next decade, a number that might only increase under the current circumstances.
Machine Learning in Life Sciences Market Report History and Forecast 2020 Breakdown Data by Manufacturers, by Key Regions, Types and Applications -…
The research report presents a comprehensive outlook on the Global Machine Learning in Life Sciences Market, containing thoughtful insights, facts, historical data, and statistically supported, industry-validated market data. It also contains projections based on a suitable set of assumptions and methodologies. The report provides analysis and information by category, such as market segments, geographies, types, technology, and applications. It provides the newest industry data and future industry trends, allowing you to identify the products and end users driving revenue growth and profitability. It furthermore assesses the factors influencing the demand and supply of the associated products and services, and the challenges witnessed by market players. Moreover, the report uses graphical representations, with a precise arrangement of key outlines, strategic diagrams, and descriptive figures based on reliable data, to depict an exact picture of value assessments and income graphs.
The Machine Learning in Life Sciences market report begins with a market overview combining data integration and analysis capabilities with the relevant findings; on this basis, the report predicts strong future growth of the market. The research analysts combined secondary research, which involves reference to various statistical databases, relevant patent and regulatory databases, and a number of internal and external proprietary databases. The report focuses on each region's market size in terms of US$ Mn for each segment and sub-segment for the period from 2019 to 2026, considering the macro and micro environmental factors. With the help of inputs and insights from technical and marketing experts, the report presents an objective assessment of the Machine Learning in Life Sciences market.
The final report will add an analysis of the impact of COVID-19 on the Machine Learning in Life Sciences industry.
Request for Sample Report @ https://www.reportsandmarkets.com/sample-request/machine-learning-in-life-sciences-market-professional-strategic-survey-report-2019-2027?utm_source=curiousdesk&utm_medium=40
The Machine Learning in Life Sciences Market study provides numerous tables and figures examining the market; the research gives you a visual, one-stop breakdown of the leading products, submarkets, and market leaders' contributions in terms of volume produced (in kilo tons) and revenue generated (in US$), with forecasts and analysis to 2026. The research report covers the products that are currently in demand and available in the market, along with their cost breakdown, manufacturing volume, import/export scheme, and contribution to worldwide Machine Learning in Life Sciences market revenue. It constitutes quantitative and qualitative evaluation by industry experts, assistance from industry analysts, and first-hand data. The report explores the product market by end users or application, the product market by region, market size for the specific product, sales and revenue by region, manufacturing cost analysis, the industrial chain, sourcing strategy and downstream buyers, market effect factors analysis, market size forecast, and more.
Key Highlights of the Machine Learning in Life Sciences Market Report:
1) This report provides a quantitative analysis of the current trends and estimations from 2017 to 2022 of the global Machine Learning in Life Sciences market to identify the prevailing market opportunities.
2) Comprehensive analysis of factors that drive and restrict the Machine Learning in Life Sciences market growth is provided.
3) Key players and their major developments in the recent years are listed.
4) The Machine Learning in Life Sciences research report presents an in-depth analysis of current research & evaluation of recent industrial developments within the market with key dynamic factors.
5) Major countries in each region are covered according to individual market revenue.
6) Historical, present, and prospective size of the market from the perspective of both value and volume (product type, application, and regions).
7) Information on market segmentation, major opportunities and market trends, market limitations, and major challenges faced by the competitive market.
Target Audience of Machine Learning in Life Sciences Market:
Manufacturer / Potential Investors
Traders, Distributors, Wholesalers, Retailers, Importers and Exporters
Association and government bodies
Inquiry for Buying Report @ https://www.reportsandmarkets.com/sample-request/machine-learning-in-life-sciences-market-professional-strategic-survey-report-2019-2027?utm_source=curiousdesk&utm_medium=40
In conclusion, this is a deep research report on the global Machine Learning in Life Sciences industry. We express our thanks for the support and assistance from Machine Learning in Life Sciences industry chain-related technical experts and marketing engineers during the research team's surveys and interviews. Finally, the report presents the market research findings and conclusions, which can help you develop profitable market strategies and gain a competitive advantage.
AI can't predict how a child's life will turn out even with a ton of data – MIT Technology Review
Policymakers often draw on the work of social scientists to predict how specific policies might affect social outcomes such as employment or crime rates. The idea is that if they can understand how different factors might change the trajectory of someone's life, they can propose interventions to promote the best outcomes.
In recent years, though, they have increasingly relied upon machine learning, which promises to produce far more precise predictions by crunching far greater amounts of data. Such models are now used to predict the likelihood that a defendant might be arrested for a second crime, or that a kid is at risk for abuse and neglect at home. The assumption is that an algorithm fed with enough data about a given situation will make more accurate predictions than a human or a more basic statistical analysis.
Now a new study published in the Proceedings of the National Academy of Sciences casts doubt on how effective this approach really is. Three sociologists at Princeton University asked hundreds of researchers to predict six life outcomes for children, parents, and households using nearly 13,000 data points on over 4,000 families. None of the researchers got even close to a reasonable level of accuracy, regardless of whether they used simple statistics or cutting-edge machine learning.
"The study really highlights this idea that at the end of the day, machine-learning tools are not magic," says Alice Xiang, the head of fairness and accountability research at the nonprofit Partnership on AI.
The researchers used data from a 15-year-long sociology study called the Fragile Families and Child Wellbeing Study, led by Sara McLanahan, a professor of sociology and public affairs at Princeton and one of the lead authors of the new paper. The original study sought to understand how the lives of children born to unmarried parents might turn out over time. Families were randomly selected from children born in hospitals in large US cities during the year 2000. They were followed up for data collection when the children were 1, 3, 5, 9, and 15 years old.
McLanahan and her colleagues Matthew Salganik and Ian Lundberg then designed a challenge to crowdsource predictions on six outcomes in the final phase that they deemed sociologically important. These included the children's grade point average at school; their level of "grit," or self-reported perseverance in school; and the overall level of poverty in their household. Challenge participants from various universities were given only part of the data to train their algorithms, while the organizers held some back for final evaluations. Over the course of five months, hundreds of researchers, including computer scientists, statisticians, and computational sociologists, then submitted their best techniques for prediction.
The fact that no submission was able to achieve high accuracy on any of the outcomes confirmed that the results weren't a fluke. "You can't explain it away based on the failure of any particular researcher or of any particular machine-learning or AI techniques," says Salganik, a professor of sociology. The most complicated machine-learning techniques also weren't much more accurate than far simpler methods.
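To make that comparison concrete, a minimal sketch of the challenge's evaluation setup might look like the following: fit a simple linear baseline and a more complex model on a training split, then score both on held-out data. The synthetic features stand in for the restricted-access Fragile Families data; all names and numbers here are invented.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.linear_model import Ridge
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.metrics import r2_score

rng = np.random.default_rng(0)
X = rng.normal(size=(4000, 50))             # ~4,000 families, many predictors
y = 0.3 * X[:, 0] + rng.normal(size=4000)   # outcome that is mostly noise

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
for model in (Ridge(), GradientBoostingRegressor(random_state=0)):
    score = r2_score(y_te, model.fit(X_tr, y_tr).predict(X_te))
    print(type(model).__name__, f"held-out R^2: {score:.2f}")
```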
For experts who study the use of AI in society, the results are not all that surprising. Even the most accurate risk assessment algorithms in the criminal justice system, for example, max out at 60% or 70%, says Xiang. "Maybe in the abstract that sounds somewhat good," she adds, but reoffending rates can be lower than 40% anyway. That means predicting no reoffenses will already get you an accuracy rate of more than 60%.
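The base-rate arithmetic behind that point is worth spelling out (illustrative numbers, not figures from the study):

```python
# If fewer than 40% of defendants reoffend, the trivial rule
# "predict no reoffense for everyone" is already right over 60% of the time.
reoffense_rate = 0.38                  # hypothetical rate under 40%
trivial_accuracy = 1 - reoffense_rate
print(f"Always-predict-no accuracy: {trivial_accuracy:.0%}")  # 62%
```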
Likewise, research has repeatedly shown that within contexts where an algorithm is assessing risk or choosing where to direct resources, simple, explainable algorithms often have close to the same prediction power as black-box techniques like deep learning. The added benefit of the black-box techniques, then, is not worth the big costs in interpretability.
The results do not necessarily mean that predictive algorithms, whether based on machine learning or not, will never be useful tools in the policy world. Some researchers point out, for example, that data collected for the purposes of sociology research is different from the data typically analyzed in policymaking.
Rashida Richardson, policy director at the AI Now Institute, which studies the social impact of AI, also notes concerns about the way the prediction problem was framed. Whether a child has grit, for example, "is an inherently subjective judgment that research has shown to be a racist construct for measuring success and performance," she says. The detail immediately tipped her off to thinking, "Oh, there's no way this is going to work."
Salganik also acknowledges the limitations of the study. But he emphasizes that it shows why policymakers should be more careful about evaluating the accuracy of algorithmic tools in a transparent way. "Having a large amount of data and having complicated machine learning does not guarantee accurate prediction," he adds. "Policymakers who don't have as much experience working with machine learning may have unrealistic expectations about that."
The Global Machine Learning Market is expected to grow by USD 11.16 bn during 2020-2024, progressing at a CAGR of 39% during the forecast period -…
NEW YORK, March 30, 2020 /PRNewswire/ --
Global Machine Learning Market 2020-2024: The analyst has been monitoring the global machine learning market, and it is poised to grow by USD 11.16 bn during 2020-2024, progressing at a CAGR of 39% during the forecast period. Our report on the global machine learning market provides a holistic analysis, market size and forecast, trends, growth drivers, and challenges, as well as vendor analysis covering around 25 vendors.
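As a back-of-envelope check on those headline numbers (an assumption-laden sketch, not the analyst's model), the implied starting market size follows from the compound-growth formula, assuming four annual compounding periods over 2020-2024:

```python
# base * ((1 + cagr)**years - 1) = growth
# =>  base = growth / ((1 + cagr)**years - 1)
cagr, years, growth_bn = 0.39, 4, 11.16
base_bn = growth_bn / ((1 + cagr) ** years - 1)
print(f"Implied 2020 base: ~USD {base_bn:.1f} bn")  # ~USD 4.1 bn
```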
Read the full report: https://www.reportlinker.com/p05082022/?utm_source=PRN
The report offers an up-to-date analysis regarding the current global market scenario, latest trends and drivers, and the overall market environment. The market is driven by increasing adoption of cloud-based offerings. In addition, increasing use of machine learning in customer experience management is anticipated to boost the growth of the global machine learning market as well.
Market Segmentation: The global machine learning market is segmented by end-user into BFSI, retail, telecommunications, healthcare, and others.
Geographic Segmentation: APAC, Europe, MEA, North America, and South America.
Key Trends for Global Machine Learning Market Growth: This study identifies the increasing use of machine learning in customer experience management as the prime driver of global machine learning market growth during the next few years.
Prominent vendors in the global machine learning market: We provide a detailed analysis of around 25 vendors operating in the global machine learning market 2020-2024, including Alibaba Group Holding Ltd., Alphabet Inc., Amazon.com Inc., Cisco Systems Inc., Hewlett Packard Enterprise Development LP, International Business Machines Corp., Microsoft Corp., Salesforce.com Inc., SAP SE, and SAS Institute Inc. The study was conducted using an objective combination of primary and secondary information, including inputs from key participants in the industry. The report contains a comprehensive market and vendor landscape in addition to an analysis of the key vendors.
Read the full report: https://www.reportlinker.com/p05082022/?utm_source=PRN
About Reportlinker: ReportLinker is an award-winning market research solution. Reportlinker finds and organizes the latest industry data so you get all the market research you need - instantly, in one place.
Contact Clare: clare@reportlinker.com | US: (339)-368-6001 | Intl: +1 339-368-6001
Well-Completion System Supported by Machine Learning Maximizes Asset Value – Journal of Petroleum Technology
In this paper, the authors introduce a new technology, installed permanently on the well completion, aimed at real-time reservoir fluid mapping through time-lapse electromagnetic tomography during production or injection. The variations of the electromagnetic fields caused by changes in the fluid distribution are measured over a wide range of distances from the well. The data are processed and interpreted through an integrated software platform that combines 3D and 4D geophysical data inversion with a machine-learning (ML) platform. The complete paper clarifies the details of the ML workflow applied to electrical resistivity tomography (ERT) models using an example based on synthetic data.
An important question in well completions is how one may acquire data with sufficient accuracy to detect the movements of fluids over a wide range of distances around the production well. One method applied in various Earth disciplines is time-lapse electrical resistivity. The operational effectiveness of ERT allows frequent acquisition of independent surveys and inversion of the data in a relatively short time. The final goal is to create dynamic models of the reservoir that support important near-real-time decisions regarding production and management operations. ML algorithms can support this decision-making process.
In a time-lapse ERT survey [often referred to as a direct-current (DC) time-lapse survey], electrodes are installed at fixed locations during monitoring. First, a base resistivity data set is collected. The inversion of this initial data set produces a base resistivity model to be used as a reference model. Then, one or more monitor surveys are repeated during monitoring. The same acquisition parameters applied in the base survey must be used for each monitor survey. The objective is to detect any small change in resistivity, from one survey to another, inside the investigated medium.
As a first approach, any variations in resistivity can be retrieved through direct comparison of the separately inverted resistivity models. A different approach is called difference inversion. Instead of inverting the base and monitor data sets separately, difference inversion inverts the difference between the monitor and base data sets. In this way, coherent inversion artifacts tend to cancel in the difference images resulting from this type of inversion.
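A minimal linearized sketch of that difference-inversion idea follows. The forward operator `G`, the model sizes, and the regularization weight are all invented for illustration; a real ERT inversion is nonlinear and far larger.

```python
import numpy as np

rng = np.random.default_rng(1)
G = rng.normal(size=(200, 50))       # stand-in linear sensitivity (forward) matrix
m_base = rng.normal(size=50)         # base resistivity model
m_change = np.zeros(50)
m_change[20:25] = 0.5                # localized change, e.g. an advancing waterfront

d_base = G @ m_base                  # base survey data
d_monitor = G @ (m_base + m_change)  # monitor survey data

# Invert the data difference directly for the model change, with simple
# Tikhonov regularization; artifacts common to both surveys cancel in
# (d_monitor - d_base).
lam = 1.0
dm = np.linalg.solve(G.T @ G + lam * np.eye(50), G.T @ (d_monitor - d_base))
```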
Repeating the measurements many times (through multiple monitor surveys) in the same area and inverting the differences between consecutive data sets results in deep insight about relevant variations of physical properties linked with variations of the electric resistivity.
The Eni reservoir electromagnetic monitoring and fluid mapping system consists of an array of electrodes and coils (Fig. 1) installed along the production casing/liner. The electrodes are coupled electrically with the geological formations. A typical acquisition layout can include several hundred densely spaced electrodes (for instance, every 5 to 10 m) deployed on many wells for long distances along the liner. This type of acquisition configuration allows characterization, after data inversion, of the resistivity space between the wells with relatively high resolution and over a wide range of distances. The electrodes work alternately as sources of electric currents (Electrodes A and B in Fig. 1) and as receivers of electric potentials (Electrodes M and N). The value of the measured electric potentials depends on the resistivity distribution of the medium investigated by the electric currents. Consequently, the inversion of the measured potentials allows retrieval of a multidimensional resistivity model in the space around the electrode array. This model is complementary to the other resistivity model retrieved through ERT tomography. Finally, the resistivity models are transformed into fluid-saturation models to obtain a real-time map of fluid distribution in the reservoir.
The described system includes coils that generate and measure a controlled electromagnetic field in a wide range of frequencies.
The geoelectric method has proved to be an effective approach for mapping fluid variations, using both surface and borehole measurements, because of its high sensitivity to the electrical resistivity changes associated with the different types of fluids (fresh water, brine, hydrocarbons). In the specific test described in the complete paper, the authors simulated a time-lapse DC tomography experiment addressed to hydrocarbon reservoir monitoring during production.
A significant change in conductivity was simulated in the reservoir zone and below it because of the water table approaching four horizontal wells. A DC cross-hole acquisition survey was simulated using a borehole layout deployed in four parallel horizontal wells located at a mutual constant distance of 250 m. Each horizontal well is at a constant depth of 2,340 m below the surface. In each well, 15 electrodes with a constant spacing of 25 m were deployed.
The modeling grid is formed by irregular rectangular cells with size dependent on the spacing between the electrodes. The maximum expected spatial resolution of the inverted model parameter (resistivity, in this case) corresponds to the minimum half-spacing between the electrodes.
For this simulation, the authors used a PUNQ-S3 reservoir model representing a small industrial reservoir scenario of 19×28×5 gridblocks. A South and East fault system bounds the modeled hydrocarbon field. Furthermore, an aquifer bounds the reservoir to the North and West. The porosity and saturation distributions were transformed into the corresponding resistivity distribution. Simulations were performed on the resulting resistivity model. This model consists of five levels (with a thickness of 10 m each) with variable resistivity.
The acquisition was simulated in both scenarios before and after the movement of waterthat is, corresponding with both the base and the monitor models. A mixed-dipole gradient array, with a cycle time of 1.2 s, was used, acquiring 2,145 electric potentials. This is a variant of the dipole/dipole array with all four electrodes (A, B, M, and N) usually deployed on a straight line.
The authors added 5% random noise to the synthetic data. Consequently, because of the noisy data, a robust inversion approach better suited to the presence of outliers was applied.
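A sketch of that noise step follows, with invented stand-in values for the 2,145 simulated potentials (the paper's actual amplitudes are not given):

```python
import numpy as np

rng = np.random.default_rng(2)
d_clean = rng.lognormal(mean=0.0, sigma=0.3, size=2145)  # stand-in potentials
d_noisy = d_clean * (1.0 + 0.05 * rng.standard_normal(d_clean.size))
# With perturbations like these, a robust misfit (e.g., an L1 or Huber
# norm) is preferred over plain least squares during inversion.
```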
After the simulated response was recorded in the two scenarios (base and monitor models), the difference data vector was created and inverted for retrieving the difference conductivity model (that is, the 3D model of the spatial variations of the conductivity distribution). One of the main benefits of DC tomography is the rapidity by which data can be acquired and inverted. This intrinsic methodological effectiveness allows acquisition of several surveys per day in multiple wells, permitting a quasi-real-time reservoir-monitoring approach.
Good convergence is reached after only five iterations, although the experiment started from a uniform resistivity initial model, assuming null prior knowledge.
In another test, the DC response measured in two different scenarios was studied. A single-well acquisition scheme was considered, including both a vertical and a horizontal segment. The installation of electrodes in both parts was simulated, with an average spacing of 10 m. A water table approaching the well from below was simulated, with the effect of changing the resistivity distribution significantly. The synthetic response was inverted at both stages of the water movement. After each inversion, the water table was interpreted in terms of absolute changes of resistivity.
The technology is aimed at performing real-time reservoir fluid mapping through time-lapse electric/electromagnetic tomography. To estimate the resolution capability of the approach and its theoretical range of investigation, a full sensitivity analysis was performed through 3D forward modeling and time-lapse 3D inversion of synthetic data simulated in realistic production scenarios. The approach works optimally when sources and receivers are installed in multiple wells. Time-lapse ERT tests show that significant conductivity variations caused by waterfront movements up to 100 to 150 m from the borehole electrode layouts can be detected. Time-lapse ERT models were integrated into a complete framework aimed at analyzing the continuous information acquired at each ERT survey. Using a suite of ML algorithms, a quasi-real-time space/time prediction about the probabilistic distributions of invasion of undesired fluids into the production wells can be made.
Weekend Roundup: Anything-Other-Than-COVID-19 Edition (Seriously!) – Dice Insights
It's the weekend! You made it through yet another wild week. Let's take a moment and not mention COVID-19. Sound good? Sounds good! Let's cover other things going on in tech, from Google's nifty new art app to the automation of cybersecurity.
Google's Arts & Culture app attracted a lot of buzz two years ago, thanks to its neat-o trick of pairing users' selfies with famous portraits. Now it's back with a new feature: rendering your photos in one of many famous art styles.
"After taking or uploading a photo, choose from dozens of masterpieces to transfer that style onto your image," reads the explanatory note on Google's blog. "(And while you wait, we'll share a fun fact about the artwork, in case you're curious to know a bit more about its history.) For more customization, you can use the scissors icon to select which part of the image you want the style applied to."
This feature, dubbed Art Transfer, relies on machine learning to transform that decent shot of today's grilled-cheese sandwich into a Frida Kahlo masterwork. "Art Transfer doesn't just blend the two things or simply overlay your image," the blog continued. "Instead, it kicks off a unique algorithmic recreation of your photo inspired by the specific art style you have chosen." If you can't go to a museum this weekend, in other words, you can give yourself an art-y experience at home.
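Google hasn't published Art Transfer's on-device model, but the underlying technique (arbitrary neural style transfer) is available as a public TensorFlow Hub module, so a minimal sketch of the idea looks like this; the image paths are placeholders, and this is not Google's pipeline.

```python
import tensorflow as tf
import tensorflow_hub as hub

def load_image(path: str) -> tf.Tensor:
    """Decode an image file to a float32 batch of shape [1, H, W, 3]."""
    img = tf.io.decode_image(tf.io.read_file(path), channels=3, dtype=tf.float32)
    return img[tf.newaxis, ...]

# Public arbitrary style-transfer model (not Art Transfer's actual weights).
model = hub.load("https://tfhub.dev/google/magenta/arbitrary-image-stylization-v1-256/2")

content = load_image("my_photo.jpg")                           # placeholder path
style = tf.image.resize(load_image("kahlo.jpg"), (256, 256))   # placeholder path
stylized = model(tf.constant(content), tf.constant(style))[0]
tf.keras.utils.save_img("stylized.jpg", stylized[0])
```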
For those concerned about their privacy, this processing is apparently done on-device, without your image reaching Googles cloud. Nonetheless, keep in mind that Google is probably using data from the process to improve its A.I. and machine-learning efforts in some way.
Cybersecurity takes a lot of skill and effort, even at the best of times. Amazon's new generally available tool, Amazon Detective, is designed to automate the scanning of customers' cloud resources. Previewed last year, it's supposed to sniff out vulnerabilities and possible cyber-attacks.
The caveat, of course, is that Amazon Detective is designed expressly to scan AWS logs. "Amazon Detective works across your AWS accounts, it is a multi-account solution that aggregates data and findings from up to 1000 AWS accounts into a single security-owned master account making it easy to view behavioral patterns and connections across your entire AWS environment," reads the company's blog posting on the matter, which also includes a handy tactical breakdown of how it works (including slides).
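For a sense of how that master-account view is exposed programmatically, here is a minimal sketch using boto3's Detective client; it assumes AWS credentials are already configured and that a behavior graph exists in the chosen region.

```python
import boto3

# List Detective behavior graphs owned by this (master) account and
# count the member accounts feeding data into each one.
detective = boto3.client("detective", region_name="us-east-1")
for graph in detective.list_graphs().get("GraphList", []):
    members = detective.list_members(GraphArn=graph["Arn"])
    print(graph["Arn"], "members:", len(members.get("MemberDetails", [])))
```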
In many ways, Amazon Detective is a potential preview of a future in which automation is used increasingly to scan systems for weaknesses. That won't put flesh-and-blood cybersecurity professionals out of a job, but it could radically change their workflow; for example, if software can handle many of the low-level security tasks that confront a company on a weekly basis, technologists can spend more time on high-level tasks such as long-term security strategy.
For a couple months in there, it looked as if WeWork founder Adam Neumann had one heck of a golden parachute ready to deploy, despite the implosion of his "Uber, but for office space" startup: roughly a billion dollars. In exchange for stepping away, Neumann would earn $975 million in stock buybacks from SoftBank, which invested quite a bit of money in WeWork.
But according to CNN, WeWork failed to meet certain conditions, and now Neumann is out all of that sweet, sweet cash (and probably having a bad weekend as a result). Hopefully things go a little better for the WeWork engineers and other employees who are still trying to figure out how to navigate the company through multiple problems.
Have a good weekend, everyone! And keep washing those hands!
Intel + Cornell Pioneering Work in the Science of Smell – insideBIGDATA
Nature Machine Intelligence published a joint paper from researchers at Intel Labs and Cornell University demonstrating the ability of Intel's neuromorphic test chip, Loihi, to learn and recognize 10 hazardous chemicals, even in the presence of significant noise and occlusion. The work demonstrates how neuromorphic computing could be used to detect smells that are precursors to explosives, narcotics and more.
Loihi learned each new odor from a single example without disrupting the previously learned smells, requiring up to 3000x fewer training samples per class compared to a deep learning solution and demonstrating superior recognition accuracy. The research shows how the self-learning, low-power, and brain-like properties of neuromorphic chips combined with algorithms derived from neuroscience could be the answer to creating electronic nose systems that recognize odors under real-world conditions more effectively than conventional solutions.
"We are developing neural algorithms on Loihi that mimic what happens in your brain when you smell something," said Nabil Imam, senior research scientist in Intel's Neuromorphic Computing Lab. "This work is a prime example of contemporary research at the crossroads of neuroscience and artificial intelligence and demonstrates Loihi's potential to provide important sensing capabilities that could benefit various industries."
Intel Labs is driving computer-science research that contributes to a third generation of AI. Key focus areas include neuromorphic computing, which is concerned with emulating the neural structure and operation of the human brain, as well as probabilistic computing, which creates algorithmic approaches to dealing with the uncertainty, ambiguity, and contradiction in the natural world.
Data to the Rescue! Predicting and Preventing Accidents at Sea – JAXenter
Watch Dr. Yonit Hoffman's Machine Learning Conference session
Accidents at sea happen all the time. Their costs in terms of lives, money and environmental destruction are huge. Wouldn't it be great if they could be predicted and perhaps prevented? Dr. Yonit Hoffman's Machine Learning Conference session discusses new ways of preventing sea accidents with the power of data science.
Does machine learning hold the key to preventing accidents at sea?
With more than 350 years of history, the marine insurance industry is the first data science profession to try to predict accidents and estimate future risk. Yet the old ways no longer work; new waves of data and algorithms can offer significant improvements and are going to revolutionise the industry.
In her Machine Learning Conference session, Dr. Yonit Hoffman will show that it is now possible to predict accidents, and how data on a ship's behaviour such as location, speed, maps and weather can help. She will show how fragments of information on ship movements can be gathered and taken all the way to machine learning models. In this session, she discusses the challenges, including introducing machine learning to an industry that still uses paper and quills (yes, really!) and explaining the models using SHAP.
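As an illustration of that last step, here is a minimal sketch of explaining a model with SHAP; the ship-behaviour features, labels, and model are all invented stand-ins, not Windward's data or methods.

```python
import numpy as np
import pandas as pd
import shap
from sklearn.ensemble import RandomForestClassifier

# Invented ship-behaviour features and a toy "accident" label.
rng = np.random.default_rng(3)
X = pd.DataFrame({
    "speed_knots": rng.uniform(0, 25, 500),
    "distance_to_shore_km": rng.uniform(0, 300, 500),
    "wind_speed_ms": rng.uniform(0, 30, 500),
})
y = ((X["wind_speed_ms"] > 20) & (X["speed_knots"] > 15)).astype(int)

model = RandomForestClassifier(random_state=0).fit(X, y)

# SHAP attributes each prediction to the individual input features.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)
shap.summary_plot(shap_values, X)
```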
Dr. Yonit Hoffman is a Senior Data Scientist at Windward, a world leader in maritime risk analytics. Before investigating supertanker accidents, she researched human cells and cancer at the Weizmann Institute, where she received her PhD and MSc. in Bioinformatics. Yonit also holds a BSc. in computer science and biology from Tel Aviv University.