Category Archives: Machine Learning

RXA to Participate in 2nd Annual A2.AI Conference focused on Machine Learning & Applied AI – PR Web

Panelists from last year's RXA event speak at the A2.AI session about how machine learning & applied AI enable businesses to make informed decisions with data. Tune in to this year's virtual event.

ANN ARBOR, Mich. (PRWEB) September 10, 2020

RXA, the international leader in applied artificial intelligence, advanced data science, and analytics allowing companies to make smarter, faster decisions, announced today that it will participate in the second annual A2.AI conference, this time virtually.

The virtual conference is the first of its kind in the Ann Arbor area, focusing on how machine learning and applied artificial intelligence enable businesses to make more informed and actionable decisions with their data. The conference will host leaders and entrepreneurs from companies including RXA, Weave Workforce and ESPN and will ignite new conversations in the artificial intelligence space.

"Many businesses have not begun to utilize the power of applied AI, viewing it as something to tackle in the future. Through a series of speaker, panel and networking sessions, RXA and other industry leaders will share how applied AI, predictive analytics, and data visualization are changing how companies can tackle complex challenges during these unpredictable times," said Jason Harper, chief executive officer and founder, RXA. "The exponentially rising influx of customer data has impacted every industry around the globe in various ways, and we have the ability, using advanced technologies, to better plan for the future of business if we can learn from and share best practices from other industry executives."

The conference will be held on September 23, 2020 from 12:00 p.m. to 4:30 p.m. EDT. Roundtable speakers at the virtual event will include: Mike McFall, CEO and co-founder, BIGGBY COFFEE; LTG (Ret.) Reynold Hoover, principal, RNHoover Consulting LLC; Doug Kramon, senior director of Fan Support & Customer Care Operations, ESPN; Charles Cantu, founder and CEO, RESET DIGITAL; Amy Klinke, senior director, Business Engagement Center, University of Michigan; Kristie Rowley, principal data scientist, manager of Data Science Professional Services, Domo; and Jason Harper, CEO and co-founder, RXA.

RXA featured speakers will include: John Larson, co-founder and CEO, Weave Workforce (https://www.weaveworkforce.com/); Jonathan Prantner, co-founder and chief analytics officer, RXA; Eric Green, chief executive officer, Ready Signal (https://www.readysignal.com/); Heather Reed, chief experience officer, RXA; and Tom Stanek, president, RXA (https://RXA.IO).

For more information about the A2.AI event, please visit https://a2.ai

If you are interested in attending, please register here: https://a2.ai/homepage/register/

About RXA: RXA (https://RXA.IO) is a leading applied artificial intelligence and data science company founded in 2016 in Ann Arbor, MI. RXA's diverse portfolio of services and solutions includes a leading Domo implementation and consulting practice, customized artificial intelligence kick-start programs, an RXA Studio to support the development of new products and companies, and proprietary solutions such as Mixed Media Optimization, Voice of Customer, and Workforce Optimization that help organizations improve their ROI and decision-making while streamlining operations.

RXA's solutions are currently being leveraged by over 70 different customers across North America, Asia, and Europe. RXA has been named the 2019 Innovative Partner of the Year, Voice of Customer Experience | Application of the Year for 2020, and 2020 Rising Star by Domo, Inc.



50 Data Science and Analysts Jobs That Opened Just Last Week – Analytics India Magazine

Despite the pandemic, organisations and industries are looking to hire data science and analytics professionals who are familiar with advanced statistical techniques, machine learning tools, and more. With the evolving data science and analytics market, data scientists and AI practitioners should keep abreast of the latest tools and trends in the field.

In this article, we list down 50 latest job openings in data science and analytics that opened just last week.

(The jobs are sorted according to the years of experience required).

Location: Bangalore

Skills Required: Machine learning, Python, data analysis, SQL, analytics, predictive modelling, natural language processing, deep learning, data science, etc.

Apply here.

Location: Bangalore

Skills Required: Deep learning frameworks such as TensorFlow or Keras, Python and basic machine learning libraries such as scikit-learn and pandas, visualising and manipulating big data sets, OpenCV, Linux, etc.

Apply here.

Location: Bangalore

Skills Required: Business intelligence and analysis, designing, testing, migration, production support and implementing automation, visualisation tools like Power BI/Tableau, statistical packages for analysing datasets, big data handling tools, Python/R programming knowledge, etc.

Apply here.

Location: Telangana

Skills Required: Machine learning, text analytics, and statistical analysis methods, data mining, data wrangling, and data munging, R, Python, SAS, SPSS, Weka, etc.

Apply here.

Location: Mumbai

Skills Required: Customer segmentation, user profiling, churn analysis, Tableau, Looker, Qlik, SQL and MS Excel, R or Python, etc.

Apply here.

Location: Bangalore

Skills Required: Analytical skills, problem-solving skills, statistical computer languages (R, Python, SAS), statistical concepts, advanced machine learning techniques, optimisation techniques, distributed data/computing tools, etc.

Apply here.

Location: Hyderabad

Skills Required: Building and implementing predictive models using machine learning algorithms, Python, R, Hadoop and related Big Data technologies, etc.

Apply here.

Location: Remote

Skills Required: Python, R, AI/ML, data science techniques, programming, statistics, large datasets, etc.

Apply here.

Location: Hyderabad

Skills Required: Strong analytical background (for instance, writing scripts, web scraping, calling APIs, writing SQL queries, Python scripting), gathering business stakeholders' data requirements, designing and implementing automation policies, etc.

Apply here.

Location: Bangalore

Skills Required: SQL and relational databases, workflows using Alteryx, Python, Hadoop and cloud computing like AWS, data acquisition and manipulation, modelling, and analysing data from core platforms (Eloqua, Salesforce, Enterprise Data Warehouse), etc.

Apply here.

Location: Bangalore

Skills Required: Statistics, Machine Learning, programming skills in various languages (Python, Scala, R), machine learning frameworks such as Scikit-Learn, H2O, Keras, TensorFlow, and Spark, Linux/OS X command line, version control software (git), and general software development, database, programming or scripting to enable ETL development, etc.

Apply here.

Location: Bangalore

Skills Required: C++, object-oriented programming and file-based design, video surveillance domain and algorithm evaluation, Linux platform-based development, Python/Go programming, Caffe, TensorFlow, JS scripting, etc.

Apply here.

Location: Bangalore

Skills Required: Python, data manipulation, wrangling, cleansing, and analysis, large-scale datasets, Anaconda, RStudio, Apache HTTP server, ETL pipelines, MS SQL Server 2008 and above, SQL queries, etc.

Apply here.

Location: Mumbai

Skills Required: HDFS, file, database, JSON, HTML, data warehousing & databases, Hadoop, Hive, Spark, SQL, Oracle, RDBMS, R, Python, etc.

Apply here.

Location: Mumbai

Skills Required: Data mining and machine learning techniques, complex statistical models, SQL, Linux, Python, and R, Big Data technologies like Hadoop, Hive, and/or MapReduce, Spark, Amazon Web Services, Google Cloud Platform, etc.

Apply here.

Location: Bangalore

Skills Required: Qiskit stack, full-stack quantum development, quantum computing, Python, delivery and test-driven development environments, etc.

Apply here.

Location: Bangalore

Skills Required: ML models, ETL process, data structure/algorithm, design patterns, and testing principles, fraud detection & credit risk assessment, etc.

Apply here.

Location: Kolkata

Skills Required: Advanced knowledge of applications programming, system flow, and developing standards for coding, testing, debugging, and implementation, micro/macro designing, familiarity with Unix commands and basic work experience in Unix shell scripting, etc.

Apply here.

Location: Bangalore

Skills Required: Design, develop and enhance the Marcus Data Platform, data warehousing concepts, especially in the ETL space, advanced SQL knowledge and experience working with relational databases, query authoring (SQL), etc.

Apply here.

Location: Bangalore

Skills Required: Deep learning (e.g., CNN, RNN, LSTM), SQL, R/Python, data science libraries, version control, TensorFlow, Keras, Caffe, PyTorch, etc.

Apply here.

Location: Gurgaon

Skills Required: Python/R scripting, common ML/DL algorithms, AWS implementations, Spark, BI Tool, preferably DSS and Tableau, etc.

Apply here.

Location: Hyderabad

Skills Required: Researching, gathering and analysing data, statistical methods and procedures used to obtain data to ensure validity, applicability, efficiency, and accuracy, etc.

Apply here.

Location: Mumbai

Skills Required: Extract and visualise data using SQL queries, building predictive models on Python, etc.

Apply here.

Location: Mumbai

Skills Required: Satellite analytics, document digitisation, image-based recommender systems, frameworks like PyTorch, Scikit-learn, Python programming, distributed computing, Azure/AWS/GCP, deep learning solutions using FastAPI, Docker, etc.

Apply here.

Location: Pune

Skills Required: ML libraries and applications, mining and analysing data, Deep Learning, Pyspark, implementing the A/B tests, etc.

Apply here.

Location: Bangalore

Skills Required: Python, SQL, cloud computing, ETL tools, Spark, Airflow, Kafka, BigQuery, etc.

Apply here.

Location: Bangalore

Skills Required: Data analysis, data mining, modelling, statistical analysis, etc.

Apply here.

Location: Delhi

Skills Required: SQL, Advanced Excel, VBA, Python, R, SSRS, Tableau, Domo, Power BI, etc.

Apply here.

Location: Anywhere in India

Skills Required: Data analytics, SQL, analytical skills & problem-solving skills, stakeholder management skills, etc.

Apply here.

Location: Bangalore

Skills Required: Analytical Skills and Process Orientation, R/Python, SQL, Excel, visualisation tools like R-Shiny, Tableau, etc.

Apply here.

Location: Bangalore

Skills Required: Databases, analytical reasoning, statistical techniques, MySQL, Postgres, Oracle databases and fluent in SQL scripts, XML, Javascript, or ETL frameworks, Excel, SPSS, SAS, etc.

Apply here.

Location: Hyderabad

Skills Required: Machine learning / statistical analytics, A/B testing, experiment design, causal inference, or quasi-experimental methods, etc.

Apply here.

Location: Bangalore


FSS Launches Next Gen Recon with Machine Learning and Cloud Support – TechGenyz

FSS (Financial Software and Systems), a global payment product provider and processor, has introduced critical enhancements to its market-leading Smart Recon platform.

The new system will leverage ML to automate manually intensive processes and improve the speed, accuracy, and reliability of the payment reconciliation process.

FSS Smart Recon provides an end-to-end, automated solution for reconciliation management across payment workflows, with built-in support for complex, multi-source, multi-file many-to-many reconciliation scenarios.

The notable enhancements include:

Collectively, the enhancements provide a 40% improvement in time to market for greenfield implementations, a sizable 30% improvement in reconciliation cycle times, and an overall 25% reduction in direct costs compared with partially automated processes.

Speaking on the developments, Krishnan Srinivasan, Global Chief Revenue Officer, FSS, stated: "FSS is a leader in the payment reconciliation space, with 25+ deployments globally. Our Smart Recon solution has been deployed by Tier One banks, neobanks, MNOs, and merchant aggregators.

"Across segments and markets, we are seeing significant demand for modernisation of back-office operations. Customers are increasingly pivoting away from in-house payment reconciliation systems with semi-automated processes towards service-based contracts backed by new-age technology platforms."

Speaking on the new enhancements, Sathish N, Deputy CPO, FSS, stated: "Payment reconciliations have become exceedingly complex, and simplicity and speed are crucial for banks and financial institutions in the wake of an ever-increasing influx of transaction data.

"With our continuous innovation-led model and investment in the right technologies like machine learning, AI, and cloud computing, banks and financial institutions will benefit from flexible and scalable processes and improved speed and accuracy of reconciliation."

FSS Smart Recon deploys advanced unsupervised machine learning (ML) algorithms for settlement, accelerating the reconciliation cycle by dynamically identifying potential discrepancies at the source.

Algorithm-based settlement processes allow new components to be detected within seconds, lowering process latency and saving 80% of the time spent performing resolution actions.
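
The matching-and-exception flow described above can be pictured with a minimal sketch. This is not FSS's implementation; the record fields and rules below are invented for illustration, and a production system would layer ML-based fuzzy matching on top of a deterministic core like this one.

```python
def reconcile(switch_records, core_banking_records):
    """Toy two-way reconciliation: match transactions from two sources
    on a shared reference ID, then flag amount mismatches and orphans."""
    # Index core-banking records by transaction reference for O(1) lookup.
    by_ref = {r["ref"]: r for r in core_banking_records}
    matched, discrepancies = [], []
    for rec in switch_records:
        other = by_ref.pop(rec["ref"], None)
        if other is None:
            discrepancies.append(("missing_in_core_banking", rec))
        elif rec["amount"] != other["amount"]:
            discrepancies.append(("amount_mismatch", rec, other))
        else:
            matched.append((rec, other))
    # Anything left over never appeared in the switch file.
    discrepancies.extend(("missing_in_switch", r) for r in by_ref.values())
    return matched, discrepancies

switch = [{"ref": "T1", "amount": 100}, {"ref": "T2", "amount": 50}]
core = [{"ref": "T1", "amount": 100}, {"ref": "T3", "amount": 75}]
matched, issues = reconcile(switch, core)
print(issues)  # T2 missing in core banking, T3 missing in switch
```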

FSS Universal Data Wizard shortens time to market for new implementations by 40%. Across deployments, the initial process of configuring proprietary Core Banking and Switch file formats is the lengthiest step in the implementation cycle.

Data Wizard maintains a meta-repository of Switch and Host file formats for reuse across deployments, saving time and costs on implementation projects.

To further optimize implementation cycle time, FSS Smart Recon has enhanced its critical General Ledger Tally process to support a generic framework for automation of closing balances between General Ledger and Core Banking systems.

FSS Smart Recon functionality can be accessed from the Oracle cloud, providing greater deployment flexibility to customers. Availability via a cloud platform transforms the economics of ownership for financial institutions and neobanks, and in particular for customers wishing to migrate from legacy systems.

Deployed by globally leading banks and payment processors, FSS Smart Recon is a unified system for reconciling digital payments and incorporates data import, transformation and enrichment, data matching, exception management, and timely reporting and analytics.

The solution supports a wide diversity of payment classes: ATM Recon, Online Commerce, Wallets, Instant Payments (IMPS and UPI), NEFT, RTGS, and QR Code Payments, with built-in flexibility to rapidly onboard new payment channels and scheme-based payments.

Financial Software and Systems (FSS) is a leader in payments technology and transaction processing. The company offers an integrated portfolio of software products, hosted payment services, and software solutions built over 29+ years of experience.

FSS' end-to-end payments product suite powers retail delivery channels including ATM, POS, internet, and mobile, as well as critical back-end functions including cards management, reconciliation, settlement, merchant management, and device monitoring.

Headquartered in Chennai, India, FSS serves leading global banks, financial institutions, processors, central regulators, and governments across North America, the UK, Europe, the Middle East, Africa, and APAC, and has 2,500 experts on board.


Getting to the heart of machine learning and complex humans – The Irish Times

You recently made a big discovery that an academic library containing millions of images used to train artificial intelligence systems had privacy and ethics issues, and that it included racist, misogynistic and other offensive content.

Yes, I worked on this with Vinay Prabhu, chief scientist at UnifyID, a privacy start-up in Silicon Valley, on the 80-million-image dataset curated by the Massachusetts Institute of Technology. We spent months looking through this dataset, and we found thousands of images labelled with insults and derogatory terms.

Using this kind of content to build and train artificial intelligence systems, including face recognition systems, would embed harmful stereotypes and prejudices and could have grave consequences for individuals in the real world.

What happened when you published the findings?

The media picked up on it, so it got a lot of publicity. MIT withdrew the database and urged people to delete their copies of the data. That was humbling and a nice result.

How does this finding fit in to your PhD research?

I study embodied cognitive science, which is at the heart of how people interact and go about their daily lives and what it means to be a person. The background assumption is that people are ambiguous; they come to be who they are through interactions with other people.

It is a different perspective to traditional cognitive science, which is all about the brain and rationality. My research looks at how artificial intelligence and machine learning have limits in how they can understand and predict the complex messiness of human behaviour and social outcomes.

Can you give me an example?

If you take the Shazam app, it works very well to recognise a piece of music that you play to it. It searches for the pattern of the music in a database, and this narrow search suits the machine approach. But predicting a social outcome from human characteristics is very different.

As humans we have infinite potential: we can react to situations in different ways, and a machine that uses a finite set of parameters cannot predict whether someone is a good hire or at risk of committing a crime in the future. Humans and our interactions represent more than just a few parameters. My research looks at existing machine learning systems and the ethics of this dilemma.
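
The Shazam contrast can be made concrete with a toy sketch: recognition reduces to looking up precomputed audio-fingerprint hashes in a database, a closed problem with a fixed answer set. The hashes and titles below are invented for illustration; predicting a social outcome has no analogous lookup table.

```python
# Invented fingerprint hashes mapped to known songs.
fingerprint_db = {
    0x3FA9: "Song A",
    0x91C2: "Song B",
    0x77E0: "Song C",
}

def identify(sample_hashes):
    """Return the song whose stored fingerprints best match the sample."""
    votes = {}
    for h in sample_hashes:
        song = fingerprint_db.get(h)
        if song is not None:
            votes[song] = votes.get(song, 0) + 1
    # The most-voted song wins; no votes means the database cannot help.
    return max(votes, key=votes.get) if votes else None

print(identify([0x3FA9, 0x77E0, 0x3FA9]))  # -> "Song A"
```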

How did you get into this work?

I started in physics back home in Ethiopia, but when I came to Ireland there was so much paperwork and so many exams to translate my Ethiopian qualification that I decided to start from scratch.

So I studied psychology and philosophy, and then I did a master's. The master's course had lots of elements: neuroscience, philosophy, anthropology, and computer science, where we built computational models of various cognitive faculties, and it is where I really found my place.

How has Covid-19 affected your research?

At the start of the pandemic, I thought this might be a chance to write up a lot of my project, but I found it hard to work at home and to unhook my mind from what was going on around the world.

I also missed the social side, going for coffee and talking with my colleagues about work and everything else. So I am glad to be back in the lab now and seeing my lab mates even at a distance.


Global Machine Learning Courses Market Trends, Key Driven Factors, Segmentation And Forecast To 2020-2026 – The Scarlet

The research report on the global Machine Learning Courses Market offers an all-encompassing analysis of the current and upcoming state of the industry and analyzes several strategies for market growth. The report also presents a comprehensive study of the industry environment and industry chain structure, and sheds light on major factors including leading vendors, growth rate, production value, and key regions.

Request for a sample report here @:

https://www.reportspedia.com/report/business-services/global-machine-learning-courses-market-report-2020-by-key-players,-types,-applications,-countries,-market-size,-forecast-to-2026-(based-on-2020-covid-19-worldwide-spread)/69391#request_sample

Top Key Players:

Udemy, EdX, Metis, Simplilearn, Udacity, Edvancer, NobleProg, Ivy Professional School, Jigsaw Academy, DataCamp, BitBootCamp

Machine Learning Courses Market segmentation by region; the regional analysis covers:

United States, Canada, Germany, UK, France, Italy, Russia, Switzerland, Sweden, Poland, China, Japan, South Korea, Australia, India, Taiwan, Thailand, Philippines, Malaysia, Brazil, Argentina, Columbia, Chile, Saudi Arabia, UAE, Egypt, Nigeria, South Africa and Rest of the World.

The Machine Learning Courses Market report introduces the industrial chain analysis, downstream buyers, and raw material sources, along with insights into market dynamics. The report offers a detailed view of the global Machine Learning Courses industry, including global production, sales, revenue, and CAGR. Additionally, it offers insights into Porter's Five Forces, including substitutes, buyers, industry competitors, and suppliers, for understanding the global Machine Learning Courses Market.

Get an impressive discount @:

https://www.reportspedia.com/discount_inquiry/discount/69391

Market segment by Type, the product can be split into:

Business, Retail, BFSI, Manufacturing

Market segment by Application, split into:

Academic, Non-academic

The Machine Learning Courses Market study provides viability analysis, SWOT analysis, and various other information about the leading companies operating in the global Machine Learning Courses Market, giving a complete account of the competitive environment of the industry with the aid of thorough company profiles. The research also examines current market performance and future growth prospects for the industry.

Inquire Before Buying @:

https://www.reportspedia.com/report/business-services/global-machine-learning-courses-market-report-2020-by-key-players,-types,-applications,-countries,-market-size,-forecast-to-2026-(based-on-2020-covid-19-worldwide-spread)/69391#inquiry_before_buying

In this study, the years considered to estimate the market size of Machine Learning Courses are as follows:

Table of Contents:

Get Full Table of Content @:

https://www.reportspedia.com/report/business-services/global-machine-learning-courses-market-report-2020-by-key-players,-types,-applications,-countries,-market-size,-forecast-to-2026-(based-on-2020-covid-19-worldwide-spread)/69391#table_of_contents


AI and Machine Learning Network Fetch.ai Partners Open-Source Blockchain Protocol Waves to Conduct R&D on DLT – Crowdfund Insider

The decentralized finance (DeFi) space is growing rapidly. Oracle protocols like Chainlink, BAND and Gravity have experienced a significant increase in adoption in a cryptocurrency market that's still highly speculative and plagued by market manipulation and wash trading.

Fetch.ai, an open-access machine learning network established by former DeepMind investors and software engineers, has teamed up with Waves, an established, open-source blockchain protocol that provides developer tools for Web 3.0 applications.

As mentioned in an update shared with Crowdfund Insider:

"[Fetch.ai and Waves will] conduct joint R&D for the purpose of bringing increased multi-chain capabilities to Fetch.ai's system of autonomous economic agents (AEA). [They will also] push further into bringing DeFi cross-chain by connecting with Waves' blockchain-agnostic and interoperable decentralized cross-chain and oracle network, Gravity."

As explained in the announcement, the integration with Gravity will enable Fetch.ai's Autonomous Economic Agents to gain access to data sources or feeds for several different market pairs, commodities, indices, and futures.

Fetch.ai and Waves aim to achieve closer integration with Gravity in order to provide seamless interoperability to Fetch.ai, making its blockchain-based AI and machine learning (ML) solutions accessible across various distributed ledger technology (DLT) networks.

As stated in the update, the integration will help open up new ways for all Gravity-connected communities to use Fetch.ai's ML functionality within the comfort of their respective ecosystems.

As noted in another update shared with CI, a PwC report predicts that AI and related ML technologies may contribute more than $15 trillion to the world economy from 2017 through 2030. Gartner reveals that during 2019, 37% of organizations had incorporated some type of AI into their business operations.

In other DeFi news, Chainlink competitor Band Protocol is securing oracle integration with Nervos, which is a leading Chinese blockchain project.

As confirmed in a release:

"Nervos is a Chinese public blockchain that's tooling up for a big DeFi push. The project is building DeFi platforms with China Merchants Bank International and Huobi, and also became one of the first public blockchains to integrate with China's BSN. Amid the DeFi surge, Nervos is integrating Band's oracles to give developers access to real-world data like crypto price feeds."


UT Austin Selected as Home of National AI Institute Focused on Machine Learning – UT News | The University of Texas at Austin

AUSTIN, Texas – The National Science Foundation has selected The University of Texas at Austin to lead the NSF AI Institute for Foundations of Machine Learning, bolstering the university's existing strengths in this emerging field. Machine learning is the technology that drives AI systems, enabling them to acquire knowledge and make predictions in complex environments. This technology has the potential to transform everything from transportation to entertainment to health care.

UT Austin, already among the world's top universities for artificial intelligence, is poised to develop entirely new classes of algorithms that will lead to more sophisticated and beneficial AI technologies. The university will lead a larger team of researchers that includes the University of Washington, Wichita State University and Microsoft Research.

"This is another important step in our university's ascension as a world leader in machine learning and tech innovation as a whole, and I am grateful to the National Science Foundation for their profound support," said UT Austin interim President Jay Hartzell. "Many of the world's greatest problems and challenges can be solved with the assistance of artificial intelligence, and it's only fitting, given UT's history of accomplishment in this area along with the booming tech sector in Austin, that this new NSF institute be housed right here on the Forty Acres."

UT Austin is simultaneously establishing a permanent base for campuswide machine learning research called the Machine Learning Laboratory. It will house the new AI institute and bring together computer and data scientists, mathematicians, roboticists, engineers and ethicists to meet the institute's research goals while also working collaboratively on other interdisciplinary projects. Computer science professor Adam Klivans, who led the effort to win the NSF AI institute competition, will direct both the new institute and the Machine Learning Lab. Alex Dimakis, associate professor of electrical and computer engineering, will serve as the AI institute's co-director.

"Machine learning can be used to predict which of thousands of recently formulated drugs might be most effective as a COVID-19 therapeutic, bypassing exhaustive laboratory trial and error," Klivans said. "Modern datasets, however, are often diffuse or noisy and tend to confound current techniques. Our AI institute will dig deep into the foundations of machine learning so that new AI systems will be robust to these challenges."

Additionally, many advanced AI applications are limited by computational constraints. For example, algorithms designed to help machines recognize, categorize and label images can't keep up with the massive amount of video data that people upload to the internet every day, and advances in this field could have implications across multiple industries.

Dimakis notes that algorithms will be designed to train video models efficiently. For example, Facebook, one of the AI institute's industry partners, is interested in using these algorithms to make its platform more accessible to people with visual impairments. And in a partnership with Dell Medical School, AI institute researchers will test these algorithms to expedite turnaround time for medical imaging diagnostics, possibly reducing the time it takes for patients to get critical assessments and treatment.

The NSF is investing more than $100 million in five new AI institutes nationwide, including the $20 million project based at UT Austin to advance the foundations of machine learning.

In addition to Facebook, Netflix, YouTube, Dell Technologies and the city of Austin have signed on to transfer this research into practice.

The institute will also pursue the creation of an online master's degree in AI, along with undergraduate research programming and online AI courses for high schoolers and working professionals.

Austin-based tech entrepreneurs Zaib and Amir Husain, both UT Austin alumni, are supporting the new Machine Learning Laboratory with a generous donation to sustain its long-term mission.

"The university's strengths in computer science, engineering, public policy, business and law can help drive applications of AI," Amir Husain said. "And Austin's booming tech scene is destined to be a major driver for the local and national economy for decades to come."

The Machine Learning Laboratory is based in the Department of Computer Science and is a collaboration among faculty, researchers and students from across the university, including Texas Computing; Texas Robotics; the Department of Statistics and Data Sciences; the Department of Mathematics; the Department of Electrical and Computer Engineering; the Department of Information, Risk & Operations Management; the School of Information; the Good Systems AI ethics grand challenge team; the Oden Institute for Computational Engineering and Sciences; and the Texas Advanced Computing Center (TACC).


Participation-washing could be the next dangerous fad in machine learning – MIT Technology Review

More promising is the idea of participation as justice. Here, all members of the design process work together in tightly coupled relationships with frequent communication. Participation as justice is a long-term commitment that focuses on designing products guided by people from diverse backgrounds and communities, including the disability community, which has long played a leading role here. This concept has social and political importance, but capitalist market structures make it almost impossible to implement well.

Machine learning extends the tech industry's broader priorities, which center on scale and extraction. That means participatory machine learning is, for now, an oxymoron. By default, most machine-learning systems have the ability to surveil, oppress, and coerce (including in the workplace). These systems also have ways to manufacture consent: for example, by requiring users to opt in to surveillance systems in order to use certain technologies, or by implementing default settings that discourage them from exercising their right to privacy.

Given that, it's no surprise that machine learning fails to account for existing power dynamics and takes an extractive approach to collaboration. If we're not careful, participatory machine learning could follow the path of AI ethics and become just another fad that's used to legitimize injustice.

How can we avoid these dangers? There is no simple answer. But here are four suggestions:

Recognize participation as work. Many people already use machine-learning systems as they go about their day. Much of this labor maintains and improves these systems and is therefore valuable to the systems' owners. To acknowledge that, all users should be asked for consent and provided with ways to opt out of any system. If they choose to participate, they should be offered compensation. Doing this could mean clarifying when and how data generated by a user's behavior will be used for training purposes (for example, via a banner in Google Maps or an opt-in notification). It would also mean providing appropriate support for content moderators, fairly compensating ghost workers, and developing monetary or nonmonetary reward systems to compensate users for their data and labor.
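
As one illustration of the consent mechanics this suggestion describes, the sketch below gates training-data retention on an explicit opt-in flag. The names and structure are hypothetical, not any particular product's API.

```python
from dataclasses import dataclass

@dataclass
class User:
    user_id: str
    consented_to_training: bool = False  # opt-in by default, never assumed

training_log = []  # data retained for model training

def record_interaction(user: User, event: dict) -> None:
    """Retain interaction data for training only if the user opted in."""
    if user.consented_to_training:
        training_log.append({"user": user.user_id, **event})
    # Otherwise the event is simply not retained for training.

alice = User("alice", consented_to_training=True)
bob = User("bob")  # never opted in: nothing is retained
record_interaction(alice, {"action": "search", "query": "coffee"})
record_interaction(bob, {"action": "search", "query": "tea"})
print(training_log)  # only alice's event appears
```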

Make participation context specific. Rather than trying to use a one-size-fits-all approach, technologists must be aware of the specific contexts in which they operate. For example, when designing a system to predict youth and gang violence, technologists should continuously reevaluate the ways in which they build on lived experience and domain expertise, and collaborate with the people they design for. This is particularly important as the context of a project changes over time. Documenting even small shifts in process and context can form a knowledge base for long-term, effective participation. For example, should only doctors be consulted in the design of a machine-learning system for clinical care, or should nurses and patients be included too? Making it clear why and how certain communities were involved makes such decisions and relationships transparent, accountable, and actionable.

Plan for long-term participation from the start. People are more likely to stay engaged in processes over time if they're able to share and gain knowledge, as opposed to having it extracted from them. This can be difficult to achieve in machine learning, particularly for proprietary design cases. Here, it's worth acknowledging the tensions that complicate long-term participation in machine learning, and recognizing that cooperation and justice do not scale in frictionless ways. These values require constant maintenance and must be articulated over and over again in new contexts.

Learn from past mistakes. More harm can be done by replicating the ways of thinking that originally produced harmful technology. We as researchers need to enhance our capacity for lateral thinking across applications and professions. To facilitate that, the machine-learning and design community could develop a searchable database to highlight failures of design participation (such as Sidewalk Labs' waterfront project in Toronto). These failures could be cross-referenced with socio-structural concepts (such as issues pertaining to racial inequality). This database should cover design projects in all sectors and domains, not just those in machine learning, and explicitly acknowledge absences and outliers. These edge cases are often the ones we can learn the most from.
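
A minimal sketch of what one record in such a database might look like, with failures cross-referenced against socio-structural concepts. Every field and tag below is an illustrative placeholder, not a real catalogue entry.

```python
from dataclasses import dataclass, field

@dataclass
class ParticipationFailure:
    """One entry in a hypothetical searchable database of
    design-participation failures."""
    project: str
    sector: str
    concepts: list = field(default_factory=list)  # socio-structural tags

entries = [
    ParticipationFailure(
        project="Sidewalk Labs waterfront project (Toronto)",
        sector="urban technology",
        concepts=["consent", "data governance"],  # placeholder tags
    ),
]

def search(db, concept):
    """Cross-reference: find failures tagged with a given concept."""
    return [e for e in db if concept in e.concepts]

print(search(entries, "consent"))
```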

It's exciting to see the machine-learning community embrace questions of justice and equity. But the answers shouldn't bank on participation alone. The desire for a silver bullet has plagued the tech community for too long. It's time to embrace the complexity that comes with challenging the extractive capitalist logic of machine learning.

Mona Sloane is a sociologist based at New York University. She works on design inequality in the context of AI design and policy.


The Role of Artificial Intelligence and Machine Learning in the… – Insurance CIO Outlook


FREMONT, CA: Technology has become the dominant force across all businesses in the last few years. Disruptive technologies like artificial intelligence (AI), machine learning, and natural language processing are improving rapidly, evolving from theoretical to practical applications. These technologies have also made an impact on insurance agents and brokers. Many people continue to view technology as their foe. They either believe that machines will eventually replace them, or that a machine can never do their job better than they can. While this may not be true, some aspects of it are relatable. For instance, a machine will never be able to provide real-time advice the way a live agent does. However, low-cost and easy-to-use platforms are currently available that allow agents and brokers to take advantage of this technology to enhance their delivery of advice and expertise to prospects and clients.

Employee Augmentation

Machine learning has proven to be useful for insurance agents and brokers in various ways. These include capturing the knowledge, skills, and expertise of a generation of insurance staff before they retire in the next 5 to 10 years and using it to train new employees.

Personalized Digital Answers

It helps provide personalized answers to a wide range of insurance questions. Digital customers want answers to their questions anytime, not just when an agent's office is open.

Digital Account Review

It helps create and deliver a digital annual account review for personal lines or small commercial insurance accounts. A robust analysis leads to client satisfaction, creates cross-selling opportunities, and reduces errors-and-omissions problems for the agency.

Many believe that artificial intelligence and machine learning will be the end of insurance agents as a trusted source for adequate protection against financial losses. However, these technologies are a threat only for insurance agents that are simply order takers. Insurance agents and brokers that embrace the technologies will always find opportunities to grow.

These emerging technologies mustn't be seen as a bane but as a boon. Insurance agents and brokers need to work in tandem with upgrades in technology and leverage them to best effect. The technology holds real potential to enhance customer satisfaction and offer a higher quality of service.


Air Force Taps Machine Learning to Speed Up Flight Certifications – Nextgov

Machine learning is transforming the way an Air Force office analyzes and certifies new flight configurations.

The Air Force SEEK EAGLE Office sets standards for safe flight configurations by testing and looking at historical data to see how different stores (like a weapon system attached to an F-16) affect flight. A project AFSEO developed along with industry partners can now automate up to 80% of requests for analysis, according to the office's Chief Data Officer, Donna Cotton.

The application is "kind of like an eager junior engineer consulting a senior engineer," Cotton said. "It makes the straightforward calls without any input, but in the hard cases it walks into the senior engineer's office and says: 'Hey, I did a bunch of research and this is what I found out. Can you give me your opinion?'"

Cotton spoke at a Tuesday webinar hosted by Tamr, one of the industry partners involved in the project. Tamr announced July 30 that AFSEO awarded the company a $60 million contract for its machine learning application. Two other companies, Dell and Cloudera, helped AFSEO take decades of historical data from simulations, performance studies and the like that were siloed across various specialties, and organize them into a searchable data lake.

On top of this new data architecture, the machine learning application provided by Tamr searches through all the historical data to find past records that can help answer new safety recommendation requests automatically.

This tool is critical because the vast majority of AFSEO's flight certification recommendations are made by analogy, meaning they rely on previous data rather than new flight tests. In the past, though, the data was disorganized and lacked unification, which made tracking down these helpful records a challenge for engineers.

Now, a cleaner AFSEO data lake cuts the amount of time engineers waste looking for the information they need. Machine learning further speeds up the process by generating safety reports automatically while still keeping the professional engineers in the loop. Even when engineers need to produce original research, the machine learning application can smooth the process by collecting related records to serve as a jumping-off point.
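
The certification-by-analogy workflow can be pictured with a small sketch: score historical records by how many attributes they share with a new request, and surface the closest precedents for an engineer to review. This is an invented toy, not Tamr's actual matching logic, and the record fields are hypothetical.

```python
def recommend_by_analogy(new_request, history, min_score=2):
    """Rank past flight-configuration records by how many attributes
    they share with a new request; an engineer reviews the results."""
    def score(rec):
        return sum(1 for k, v in new_request.items() if rec.get(k) == v)
    ranked = sorted(history, key=score, reverse=True)
    return [r for r in ranked if score(r) >= min_score]

history = [
    {"aircraft": "F-16", "store": "fuel tank", "station": 5, "cleared": True},
    {"aircraft": "F-16", "store": "missile", "station": 2, "cleared": True},
    {"aircraft": "F-15", "store": "fuel tank", "station": 5, "cleared": True},
]
request = {"aircraft": "F-16", "store": "fuel tank", "station": 5}
for precedent in recommend_by_analogy(request, history):
    print(precedent)  # closest precedents first; the engineer makes the call
```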

The new process helps AFSEO avoid doing costly flight tests while also increasing confidence that the team is making the safety certification correctly with all the information available to them, Cotton said.

"We are able to be more productive," Cotton said. "It's saving us a lot of money because for us, it's not about profit, but it's about hours. It's about how much effort we are going to have to use to solve or to answer a new request."
