Category Archives: Machine Learning

Mental health diagnoses and the role of machine learning – Health Europa

It is common for patients with psychosis or depression to experience symptoms of both conditions, which has meant that, traditionally, mental health diagnoses have been given for a primary illness with secondary symptoms of the other.

Making an accurate diagnosis often poses difficulties for mental health clinicians, and diagnoses often do not accurately reflect the complexity of individual experience or neurobiology. For example, a patient diagnosed with psychosis will often have depression regarded as a secondary condition, with more focus on the psychosis symptoms such as hallucinations or delusions; this has implications for treatment decisions for patients.

A team at the University of Birmingham's Institute for Mental Health and Centre for Human Brain Health, along with researchers at the European Union-funded PRONIA consortium, explored the possibility of using machine learning to create extremely accurate models of pure forms of both illnesses and using these models to investigate the diagnostic accuracy of a cohort of patients with mixed symptoms. The results of this study have been published in Schizophrenia Bulletin.

Paris Alexandros Lalousis, lead author, explains: "The majority of patients have co-morbidities, so people with psychosis also have depressive symptoms and vice versa. That presents a big challenge for clinicians in terms of diagnosing and then delivering treatments that are designed for patients without co-morbidity. It's not that patients are misdiagnosed, but the current diagnostic categories we have do not accurately reflect the clinical and neurobiological reality."

The researchers analysed questionnaire responses and detailed clinical interviews, as well as data from structural magnetic resonance imaging, from a cohort of 300 patients taking part in the study. From this group, they identified small subgroups of patients who could be classified as suffering either from psychosis without any symptoms of depression, or from depression without any psychotic symptoms.

Using the collected data, the research team identified machine learning models of pure depression and pure psychosis, with the goal of developing a precise disease profile for each patient and testing it against their diagnosis to see how accurate it was. They were then able to use machine learning methods to apply these models to patients with symptoms of both illnesses.
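To make the approach concrete, here is a minimal, hypothetical sketch of the general idea: train a classifier on the "pure" subgroups and use it to see which dimension a comorbid patient's profile leans towards. The features, data, and model choice are invented for illustration and are not the PRONIA consortium's actual pipeline, which used clinical interviews, questionnaires and structural MRI features.

```python
# Illustrative sketch only: train on "pure" cases, then score mixed-symptom patients.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# Synthetic feature matrices for the two "pure" subgroups (rows = patients).
X_pure_depression = rng.normal(loc=-0.5, size=(40, 10))
X_pure_psychosis = rng.normal(loc=0.5, size=(40, 10))

X_train = np.vstack([X_pure_depression, X_pure_psychosis])
y_train = np.array([0] * 40 + [1] * 40)  # 0 = depression, 1 = psychosis

# Train a "pure depression vs. pure psychosis" model.
model = make_pipeline(StandardScaler(), LogisticRegression())
model.fit(X_train, y_train)

# Apply the model to patients with co-morbid symptoms and inspect which
# dimension each profile leans towards.
X_comorbid = rng.normal(loc=0.0, size=(5, 10))
lean = model.predict_proba(X_comorbid)[:, 1]  # probability of the psychosis dimension
for i, p in enumerate(lean):
    print(f"patient {i}: psychosis score={p:.2f}, depression score={1 - p:.2f}")
```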

The team discovered that patients with depression as a primary illness were more likely to have accurate mental health diagnoses, whereas patients with psychosis and co-occurring depression had symptoms which most frequently leaned towards the depression dimension. This may suggest that depression plays a greater part in the illness than had previously been thought.

Lalousis added: "There is a pressing need for better treatments for psychosis and depression, conditions which constitute a major mental health challenge worldwide. Our study highlights the need for clinicians to understand better the complex neurobiology of these conditions, and the role of co-morbid symptoms; in particular, considering carefully the role that depression is playing in the illness.

"In this study we have shown how using sophisticated machine learning algorithms, which take into account clinical, neurocognitive, and neurobiological factors, can aid our understanding of the complexity of mental illness. In the future, we think machine learning could become a critical tool for accurate diagnosis. We have a real opportunity to develop data-driven diagnostic methods; this is an area in which mental health is keeping pace with physical health, and it's really important that we keep up that momentum."

View original post here:
Mental health diagnoses and the role of machine learning - Health Europa

There Is No Silver Bullet Machine Learning Solution – Analytics India Magazine


A recommendation engine is a class of machine learning algorithm that suggests products, services, or information to users based on analysis of data. Robust recommendation systems are a key differentiator in the operations of big companies like Netflix, Amazon, and ByteDance (TikTok's parent company).

Alok Menthe, Data Scientist at Ericsson, gave an informative talk on building custom recommendation engines for real-world problems at the Machine Learning Developers Summit (MLDS) 2021. "Whenever a niche business problem comes in, it has complicated, intertwined ways of working. Standard ML techniques may be inadequate and might not serve the customer's purpose. That is where the need for a custom-made engine comes in. We were also faced with such a problem with our service network unit at Ericsson," he said.

Menthe said the unit wanted to implement a recommendation system to provide suggestions for assignment workflow: a model to delegate incoming projects to the most appropriate team or resource pool.

Credit: Alok Menthe

There were three kinds of data available:

Pool definition data: It relates to the composition of a particular resource pool, such as the number of people, their competence, and other metadata.

Historical demand data: This kind of data helps establish a relationship between demand features and a particular resource pool.

Transactional data: It is used for operational purposes.

Menthe said building a custom recommendation system in this context involves the following steps:

Credit: Alok Menthe

"While building our model, the most difficult part was feature engineering, which is imperative for building an efficient system. Of the two major modules, classification and clustering, we faced challenges with respect to the latter. We had only categorical information, making it difficult to find distances between the objects. We went out of the box to see if we could do any special encoding for the data, and adopted special data encoding techniques, including frequency-based encoding, in this regard," said Menthe.

Clustering module: For this module, the team initially implemented K-modes and agglomerative clustering. However, the results were far from perfect, prompting the team to consider the good old K-means algorithm. Evaluation was done manually with the help of subject matter experts.
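As a rough illustration of what frequency-based encoding followed by K-means might look like, here is a minimal sketch; the column names and data are invented, and the actual Ericsson pipeline is not public.

```python
# Frequency-encode categorical pool attributes so K-means can compute distances.
import pandas as pd
from sklearn.cluster import KMeans

pools = pd.DataFrame({
    "domain":     ["radio", "radio", "core", "cloud", "core", "cloud"],
    "competence": ["4G",    "5G",    "5G",   "ops",   "ops",  "5G"],
})

# Frequency-based encoding: replace each category with its relative frequency.
encoded = pools.apply(lambda col: col.map(col.value_counts(normalize=True)))

# Cluster the encoded resource pools (the real system condensed ~700 pools into 15 clusters).
kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(encoded)
pools["cluster"] = kmeans.labels_
print(pools)
```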

The final model had 700 resource pools condensed to 15 pool clusters.

Classification module: For this module, three kinds of algorithm iterations were used: Random Forest, Artificial Neural Network, and XGBoost. Classification accuracy was used as an evaluation metric. Finally, trained on 50,00,000 (five million) records, this module demonstrated an accuracy of 71 percent.
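A hedged sketch of the classification step is below: a couple of off-the-shelf classifiers compared on plain accuracy, as described in the talk. The data is synthetic, and XGBoost would be benchmarked the same way if the xgboost package were available.

```python
# Compare classifiers on synthetic multi-class data using plain accuracy.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

X, y = make_classification(n_samples=2000, n_features=20, n_classes=15,
                           n_informative=12, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

for name, clf in [("Random Forest", RandomForestClassifier(random_state=0)),
                  ("Neural Network", MLPClassifier(max_iter=500, random_state=0))]:
    clf.fit(X_tr, y_tr)
    print(name, "accuracy:", accuracy_score(y_te, clf.predict(X_te)))
```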

Menthe said this recommendation model is monitored on a fortnightly basis by validating the suggested pools against the allocated pools for project demands.

The model has proved to be successful on three fronts:

Menthe summarised the three major takeaways from this project in his concluding remarks: the need to preserve business nuances in ML solutions; thinking beyond standard ML approaches; and understanding that there is no silver bullet ML solution.

I am a journalist with a postgraduate degree in computer network engineering. When not reading or writing, one can find me doodling away to my heart's content.

Follow this link:
There Is No Silver Bullet Machine Learning Solution - Analytics India Magazine

Scientists use machine learning to tackle a big challenge in gene therapy – STAT

As the world charges to vaccinate the population against the coronavirus, gene therapy developers are locked in a counterintuitive race. Instead of training the immune system to recognize and combat a virus, they're trying to do the opposite: designing viruses the body has never seen, and can't fight back against.

It's OK, really: These are adeno-associated viruses, which are common and rarely cause symptoms. That makes them the perfect vehicle for gene therapies, which aim to treat hereditary conditions caused by a single faulty gene. But they introduce a unique challenge: Because these viruses already circulate widely, patients' immune systems may recognize the engineered vectors and clobber them into submission before they can do their job.


Here is the original post:
Scientists use machine learning to tackle a big challenge in gene therapy - STAT

Getting Help from HAL: Applying Machine Learning to Predict Antibiotic Resistance – Contagionlive.com

Highlighted Study: Lewin-Epstein O, Baruch S, Hadany L, Stein GY, Obolski U. Predicting antibiotic resistance in hospitalized patients by applying machine learning to electronic medical records. Clin Infect Dis. Published online October 18, 2020. doi:10.1093/cid/ciaa1576

Appropriate empirical antimicrobial therapy is paramount for ensuring the best outcomes for patients. The literature shows that inappropriate antimicrobial therapy for infections caused by resistant pathogens leads to worse outcomes.1,2 Additionally, increased use of broad spectrum antibiotics in patients without resistant pathogens can lead to unintended consequences.3-5 As technology advances, it may enable clinicians to better prescribe empiric antimicrobials. Lewin-Epstein et al studied the potential for machine learning to optimize the use of empiric antibiotics in patients who may be harboring resistant bacteria.

As machine learning and artificial intelligence technology improves, investigators are examining new ways to implement it in practice. Lewin-Epstein et al studied the potential for machine learning to predict antibiotic resistance in hospitalized patients.6 This study specifically targeted the use of empiric antibiotics, attempting to reduce their use in patients who may be harboring resistant bacteria.

The single-center retrospective study was conducted in Israel from May 2013 through December 2015 using electronic medical records of patients who had positive bacterial culture results and resistance profiles for the antibiotics of interest. The investigators studied 5 antibiotics from commonly prescribed antibiotic classes: ceftazidime, gentamicin, imipenem, ofloxacin, and sulfamethoxazole-trimethoprim. The data set included 16,198 samples for patients who had positive bacterial culture results and sensitivities for these 5 antibiotics. The most common bacterial species were Escherichia coli, Klebsiella pneumoniae, coagulase negative Staphylococcus, and Pseudomonas aeruginosa. The investigators also collected patient demographics, comorbidities, hospitalization records, and information on previous inpatient antibiotic use.

Employing a supervised machine learning approach, they created a model comprising 3 submodels to predict antibiotic resistance. The first 85% of data were used to train the model, whereas the remainder were used to test it. During training, the investigators identified the variable with the highest effect on prediction: the rate of previous antibiotic-resistant infections, regardless of whether the bacterial species was included in the analysis. Other important variables included previous hospitalizations, nosocomial infections, previous antibiotic usage, and patient functioning and independence levels. The model was trained in multiple ways to identify which manner of use would be the most accurate. In one analysis, the model was trained and evaluated on each antibiotic individually. In another, it was trained and evaluated on all 5 antibiotics. The model was also evaluated when the bacterial species was included and excluded. The model's success was defined by the area under the receiver-operating characteristic (auROC) curve and balanced accuracy, which is the unweighted average of the sensitivity and specificity rates.
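The evaluation setup can be illustrated with a small sketch: a soft-voting ensemble of three submodels scored with auROC and balanced accuracy on a held-out final 15% of records. The data is synthetic and the submodel choices are assumptions, not the study's actual architecture.

```python
# Ensemble of three submodels evaluated with auROC and balanced accuracy
# (the unweighted mean of sensitivity and specificity).
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score, balanced_accuracy_score

X, y = make_classification(n_samples=5000, n_features=30, weights=[0.7, 0.3],
                           random_state=0)
split = int(0.85 * len(X))                      # first 85% for training, rest for testing
X_tr, X_te, y_tr, y_te = X[:split], X[split:], y[:split], y[split:]

ensemble = VotingClassifier(
    estimators=[("gb", GradientBoostingClassifier(random_state=0)),
                ("rf", RandomForestClassifier(random_state=0)),
                ("lr", LogisticRegression(max_iter=1000))],
    voting="soft",  # average the submodels' predicted probabilities
)
ensemble.fit(X_tr, y_tr)

proba = ensemble.predict_proba(X_te)[:, 1]
print("auROC:", roc_auc_score(y_te, proba))
print("balanced accuracy:", balanced_accuracy_score(y_te, (proba > 0.5).astype(int)))
```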

The ensemble model, which was made up of the 3 submodels, was effective at predicting bacterial resistance, especially when the bacteria causing the infection were identified for the model. When the bacterial species was identified, the auROC score ranged from 0.8 to 0.88, versus 0.73 to 0.79 when the species was not identified. These results are more promising than previous studies on the use of machine learning in identifying resistant infections, despite this study incorporating heterogeneous data and multiple antibiotics. Previous studies that only included 1 species or 1 type of infection yielded auROC scores of 0.7 to 0.83. This shows that using the composite result of multiple models may be more successful at predicting antibiotic resistance.

One limitation of this study is that it did not compare the model with providers' abilities to recognize potentially resistant organisms and adjust therapy accordingly. Although this study did not directly make a comparison, a previous study involving machine learning showed that a similar model performed better than physicians when predicting resistance. The model in this study performed better than the one in the previous study, which suggests that this model may perform better than providers when predicting resistance. Another limitation of this study is that it did not evaluate causal effects of antibiotic resistance. The authors believe that further research should be conducted in this area to evaluate whether machine learning could be employed to determine further causes of antimicrobial resistance. A third limitation is that this study only evaluated the 5 antibiotics included, which are the 5 antibiotics most commonly tested for resistance at that facility. Additional research and machine learning would likely need to be incorporated to apply this model to other antibiotics.

The authors concluded that the model used in this study could be used as a template for other health systems. Because resistance patterns vary by region, this seems to be an appropriate conclusion. A model would have to be trained at each facility that was interested in employing machine learning in antimicrobial stewardship, and additional training would have to occur periodically to keep up with evolving resistance patterns. Additionally, if a facility would like to incorporate this type of model, they might want to also incorporate rapid polymerase chain reaction testing to provide the model with a bacterial species for optimal predictions. Overall, the results of this study indicate that great potential exists for machine learning in antimicrobial stewardship programs.

References

See the original post here:
Getting Help from HAL: Applying Machine Learning to Predict Antibiotic Resistance - Contagionlive.com

How to expand your machine learning capabilities by installing TensorFlow on Ubuntu Server 20.04 – TechRepublic

If you're looking to add Machine Learning to your Python development, Jack Wallen shows you how to quickly install TensorFlow on Ubuntu Desktop 20.04.

Image: Google

TensorFlow is an open source development platform for machine learning (ML). With this software platform, you'll get a comprehensive collection of tools, libraries, and various resources that allow you to easily build and deploy modern ML-powered applications. Beginners and experts alike can make use of this end-to-end platform, and create ML models to solve real-world problems.

How do you get started? The first thing you must do is get TensorFlow installed on your machine. I'm going to show you how to make that happen on Ubuntu Desktop 20.04.

SEE: Top 5 programming languages for systems admins to learn (free PDF) (TechRepublic)

The first thing to be done is the installation of the necessary dependencies. It just so happens these are all about Python. Log in to your desktop and install the dependencies with the command:

Python should now be installed, so you're ready to continue on.

How we install TensorFlow is from within a Python virtual environment. Create a new directory to house the environment with the command:

Change into that newly created directory with the command:

Next, create the Python virtual environment with the command:

Once the above command completes, we must then activate the virtual environment, using the source command like so:

After you activate the environment, your command prompt should change, such that it begins with (venv) (Figure A).

Figure A

We've activated our virtual Python environment.

Next, we're going to upgrade Pip with the command:

We can now install TensorFlow with the command:

TensorFlow should now be installed on your system. To verify, issue the command:

The command will return the TensorFlow version number (Figure B).

Figure B

I'm running the Ubuntu 20.04 desktop on a virtual machine, so GPU is an issue.
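A quick way to confirm the installation from inside the active virtual environment is a short Python check like the one below: it simply prints the TensorFlow version and lists any GPUs TensorFlow can see (on a virtual machine without GPU passthrough, the list will be empty).

```python
# Minimal post-install check, assuming TensorFlow installed in the active venv.
import tensorflow as tf

print("TensorFlow version:", tf.__version__)
print("GPUs visible to TensorFlow:", tf.config.list_physical_devices("GPU"))
```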

Make sure, when you're done working, that you deactivate the Python virtual environment with the command:

And there you have it, you've successfully installed TensorFlow on Ubuntu Desktop 20.04 and are ready to start adding machine learning to your builds.


Read more from the original source:
How to expand your machine learning capabilities by installing TensorFlow on Ubuntu Server 20.04 - TechRepublic

Postdoctoral Research Associate in Digital Humanities and Machine Learning job with DURHAM UNIVERSITY | 246392 – Times Higher Education (THE)

Department of Computer Science

Grade 7: £33,797 - £40,322 per annum. Fixed Term, Full Time. Contract Duration: 7 months. Contracted Hours per Week: 35. Closing Date: 13-Mar-2021, 7:59:00 AM

Durham University

Durham University is one of the world's top universities with strengths across the Arts and Humanities, Sciences and Social Sciences. We are home to some of the most talented scholars and researchers from around the world who are tackling global issues and making a difference to people's lives.

The University sits in a beautiful historic city where it shares ownership of a UNESCO World Heritage Site with Durham Cathedral, the greatest Romanesque building in Western Europe. A collegiate University, Durham recruits outstanding students from across the world and offers an unmatched wider student experience.

Less than 3 hours north of London, and an hour and a half south of Edinburgh, County Durham is a region steeped in history and natural beauty. The Durham Dales, including the North Pennines Area of Outstanding Natural Beauty, are home to breathtaking scenery and attractions. Durham offers an excellent choice of city, suburban and rural residential locations. The University provides a range of benefits including pension and childcare benefits, and the University's Relocation Manager can assist with potential schooling requirements.

Durham University seeks to promote and maintain an inclusive and supportive environment for work and study that assists all members of our University community to reach their full potential. Diversity brings strength and we welcome applications from across the international, national and regional communities that we work with and serve.

The Department

The Department of Computer Science is rapidly expanding. A new building for the department (joint with Mathematical Sciences) has recently opened to house the expanded Department. The current Department has research strengths in (1) algorithms and complexity, (2) computer vision, imaging, and visualisation and (3) high-performance computing, cloud computing, and simulation. We work closely with industry and government departments. Research-led teaching is a key strength of the Department, which came 5th in the Complete University Guide. The department offers BSc and MEng undergraduate degrees and is currently redeveloping its interdisciplinary taught postgraduate degrees. The size of its student cohort has more than trebled in the past five years. The Department has an exceptionally strong External Advisory Board that provides strategic support for developing research and education, consisting of high-profile industrialists and academics. Computer Science is one of the very best UK Computer Science Departments with an outstanding reputation for excellence in teaching, research and employability of our students.

The Role

Postdoctoral Research Associate to work on the AHRC-funded project Visitor Interaction and Machine Curation in the Virtual Liverpool Biennial.

The project looks at virtual art exhibitions that are curated by machines, or even co-curated by humans and machines; and how audiences interact with these exhibitions in the era of online art shows. The project is in close collaboration with the 2020 (now 2021) Liverpool Biennial (http://biennial.com/). The role of the post holder is, along with the PI Leonardo Impett, to implement different strategies of user-machine interaction for virtual art exhibits; and to investigate the interaction behaviour of different types of users with such systems.

Responsibilities:

This post is fixed term until 31 August 2021 as the research project is time limited and will end on 31 August 2021.

The post-holder is employed to work on research/a research project which will be led by another colleague. Whilst this means that the post-holder will not be carrying out independent research in his/her own right, the expectation is that they will contribute to the advancement of the project, through the development of their own research ideas/adaptation and development of research protocols.

Successful applicants will, ideally, be in post by February 2021.

How to Apply

For informal enquiries please contact Dr Leonardo Impett (leonardo.l.impett@durham.ac.uk). All enquiries will be treated in the strictest confidence.

We prefer to receive applications online via the Durham University Vacancies Site: https://www.dur.ac.uk/jobs/. As part of the application process, you should provide details of 3 (preferably academic/research) referees and the details of your current line manager so that we may seek an employment reference.

Applications are particularly welcome from women and black and minority ethnic candidates, who are under-represented in academic posts in the University. We are committed to equality: if for any reason you have taken a career break or periods of leave that may have impacted on your career path, such as maternity, adoption or parental leave, you may wish to disclose this in your application. The selection committee will recognise that this may have reduced the quantity of your research accordingly.

What to Submit

All applicants are asked to submit:

The Requirements

Essential:

Qualifications

Experience

Skills

Desirable:

Experience

Skills

DBS Requirement: Not Applicable.

Read more:
Postdoctoral Research Associate in Digital Humanities and Machine Learning job with DURHAM UNIVERSITY | 246392 - Times Higher Education (THE)

The head of JPMorgan’s machine learning platform explained what it’s like to work there – eFinancialCareers

For the past few years, JPMorgan has been busy building out its machine learning capability under Daryush Laqab, its San Francisco-based head of AI platform product management, who was hired from Google in 2019. Last time we looked, the bank seemed to be paying salaries of $160-$170k to new joiners on Laqab's team.

If that sounds appealing, you might want to watch the video below so that you know what you're getting into. Recorded at the AWS re:Invent conference in December, it's only just made it to YouTube. The video is flagged as a day in the life of JPMorgan's machine learning data scientists, but Laqab arguably does a better job of highlighting some of the constraints data professionals at all banks have to work under.

"There are some barriers to smooth data science at JPMorgan," he explains - a bank is not the same as a large technology firm.

For example, data scientists at JPMorgan have to check data is authorized for use, says Laqab: "They need to go to a process to log that use and make sure that they have the adequate approvals for that intent in terms of use."

They also have to deal with the legacy infrastructure issue: "We are a large organization, we have a lot of legacy infrastructure," says Laqab. "Like any other legacy infrastructure, it is built over time, it is patched over time. These are tightly integrated, so moving part or all of that infrastructure to public cloud, replacing rule-based engines with AI/ML-based engines - all of that takes time and brings inertia to the innovation."

JPMorgan's size and complexity is another source of inertia, as multiple business lines in multiple regulated entities in different regulated environments need to be considered. "Making sure that those regulatory obligations are taken care of, again, slows down data science at times," says Laqab.

And then there are more specific regulations, such as those concerning model governance. At JPMorgan, a machine learning model can't go straight into a production environment. "It needs to go through a model review and a model governance process," says Laqab, "to make sure we have another set of eyes that looks at how that model was created, how that model was developed..." And then there are software governance issues too.

Despite all these hindrances, JPMorgan has already productionized AI models and built an 'Omni AI ecosystem,' which Laqab heads, to help employees identify and ingest minimum viable data so that they can build models faster. Laqab says the bank saved $150m in expenses in 2019 as a result. JPMorgan's AI researchers are now working on everything from FAQ bots and chat bots, to NLP search models for the bank's own content, pattern recognition in equities markets and email processing. The breadth of work on offer is considerable. "We play in every market that is out there," says Laqab.

The bank has also learned that the best way to structure its AI team is to split people into data scientists, who train and create models, and machine learning engineers, who operationalize models, says Laqab. Before you apply, you might want to consider which you'd rather be.



Originally posted here:
The head of JPMorgan's machine learning platform explained what it's like to work there - eFinancialCareers

5 Ways the IoT and Machine Learning Improve Operations – BOSS Magazine


By Emily Newton

The Internet of Things (IoT) and machine learning are two of the most disruptive technologies in business today. Separately, both of these innovations can bring remarkable benefits to any company. Together, they can transform your business entirely.

The intersection of IoT devices and machine learning is a natural progression. Machine learning needs large pools of relevant data to work at its best, and the IoT can supply it. As adoption of both soars, companies should start using them in conjunction.

Here are five ways the IoT and machine learning can improve operations in any business.

Around 25% of businesses today use IoT devices, and this figure will keep climbing. As companies implement more of these sensors, they add places where they can gather data. Machine learning algorithms can then analyze this data to find inefficiencies in the workplace.

Looking at various workplace data, a machine learning program could see where a company spends an unusually high amount of time. It could then suggest a new workflow that would reduce the effort employees expend in that area. Business leaders may not have ever realized this was a problem area without machine learning.

Machine learning programs are skilled at making connections between data points that humans may miss. They can also make predictions 20 times earlier than traditional tools and do so with more accuracy. With IoT devices feeding them more data, theyll only become faster and more accurate.

Machine learning and the IoT can also automate routine tasks. Business process automation (BPA) leverages AI to handle a range of administrative tasks, so workers don't have to. As IoT devices feed more data into these programs, they become even more effective.

Over time, technology like this has contributed to a 40% productivity increase in some industries. Automating and streamlining tasks like scheduling and record-keeping frees employees to focus on other, value-adding work. BPA's potential doesn't stop there, either.

BPA can automate more than straightforward data manipulation tasks. It can talk to customers, plan and schedule events, run marketing campaigns and more. With more comprehensive IoT implementation, it would have access to more areas, becoming even more versatile.

One of the most promising areas for IoT implementation is in the supply chain. IoT sensors in vehicles or shipping containers can provide companies with critical information like real-time location data or product quality. This data alone improves supply chain visibility, but paired with machine learning, it could transform your business.

Machine learning programs can take this real-time data from IoT sensors and put it into action. It could predict possible disruptions and warn workers so they can respond accordingly. These predictive analytics could save companies the all-too-familiar headache of supply chain delays.

UPS's Orion tool is the gold standard for what machine learning can do for supply chains. The system has saved the shipping giant 10 million gallons of fuel a year by adjusting routes on the fly based on traffic and weather data.

If a company can't understand the vulnerabilities it faces, business leaders can't make fully informed decisions. IoT devices can provide the data businesses need to get a better understanding of these risks. Machine learning can take it a step further and find points of concern in this data that humans could miss.

IoT devices can gather data about the workplace or customers that machine learning programs then process. For example, Progressive has made more than 1.7 trillion observations about its customers' driving habits through Snapshot, an IoT tracking device. These analytics help the company adjust clients' insurance rates based on the dangers their driving presents.

Business risks aren't the only hazards the Internet of Things and machine learning can predict. IoT air quality sensors could alert businesses when to change HVAC filters to protect employee health. Similarly, machine learning cybersecurity programs could sense when hackers are trying to infiltrate a company's network.

Another way the IoT and machine learning could transform your business is by eliminating waste. Data from IoT sensors can reveal where the company could be using more resources than it needs. Machine learning algorithms can then analyze this data to suggest ways to improve.

One of the most common culprits of waste in businesses is energy. Thanks to various inefficiencies, 68% of power in America ends up wasted. IoT sensors can measure where this waste is happening, and with machine learning, adjust to stop it.

Machine learning algorithms in conjunction with IoT devices could restrict energy use, so processes only use what they need. Alternatively, they could suggest new workflows or procedures that would be less wasteful. While many of these steps may seem small, they add up to substantial savings.
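As a hedged sketch of the concept, the snippet below flags unusually high energy readings with a simple anomaly detector; the sensor values and contamination rate are simulated assumptions, and a real deployment would feed in live IoT data.

```python
# Flag anomalously high energy readings from simulated IoT sensor data.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(1)
normal_usage = rng.normal(loc=50.0, scale=5.0, size=(500, 1))   # typical kWh readings
wasteful_usage = rng.normal(loc=90.0, scale=5.0, size=(10, 1))  # anomalous spikes
readings = np.vstack([normal_usage, wasteful_usage])

detector = IsolationForest(contamination=0.02, random_state=0).fit(readings)
flags = detector.predict(readings)          # -1 = anomaly, 1 = normal
print("flagged readings:", readings[flags == -1].ravel().round(1))
```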

Without the IoT and machine learning, businesses can't reach their full potential. These technologies enable savings companies couldn't achieve otherwise. As they advance, they'll only become more effective.

The Internet of Things and machine learning are reshaping the business world. Those that don't take advantage of them now could soon fall behind.

Emily Newton is the Editor-in-Chief of Revolutionized, a magazine exploring how innovations change our world. She has over 3 years' experience writing articles in the industrial and tech sectors.

Go here to see the original:
5 Ways the IoT and Machine Learning Improve Operations - BOSS Magazine

Rackspace Technology Study uncovers AI and Machine Learning knowledge gap in the UAE – Intelligent CIO ME

As companies in the UAE scale up their adoption of Artificial Intelligence (AI) and Machine Learning (ML) implementation, a new report suggests that UAE organisations are now on par with their global counterparts in boasting mature capabilities in these fields.

Nonetheless, the vast majority of organisations in the wider EMEA region, including the UAE, are still at the early stages of exploring the technology's potential (52%) or still require significant organisational work to implement an AI/ML solution (36%).

These are the key findings of new research from Rackspace Technology Inc, an end-to-end, multi-cloud technology solutions company, which revealed that the majority of organisations lack the internal resources to support critical AI and ML initiatives. The survey, "Are Organisations Succeeding at AI and Machine Learning?", indicates that while many organisations are eager to incorporate AI and ML tactics into operations, they typically lack the expertise and existing infrastructure needed to implement mature and successful AI/ML programmes.

This study shines a light on the struggle to balance the potential benefits of AI and ML against the ongoing challenges of getting AI/ML initiatives off the ground. While some early adopters are already seeing the benefits of these technologies, others are still trying to navigate common pain points such as lack of internal knowledge, outdated technology stacks, poor data quality or the inability to measure ROI.

Other key findings of the report include the following:

"Countries across EMEA, including the UAE, are lagging behind in AI and ML implementation, which can be hindering their competitive edge and innovation," said Simon Bennett, Chief Technology Officer, EMEA, Rackspace Technology. "Globally we're seeing IT decision-makers turn to these technologies to improve efficiency and customer satisfaction. Working with a trusted third-party provider, organisations can enhance their AI/ML projects, moving beyond the R&D stage and into initiatives with long-term impacts."


Excerpt from:
Rackspace Technology Study uncovers AI and Machine Learning knowledge gap in the UAE - Intelligent CIO ME

Parascript and SFORCE Partner to Leverage Machine Learning Eliminating Barriers to Automation – GlobeNewswire

Longmont, CO, Feb. 09, 2021 (GLOBE NEWSWIRE) -- Parascript, which provides document analysis software processing for over 100 billion documents each year, announced today the Smart-Force (SFORCE) and Parascript partnership to provide a digital workforce that augments operations by combining cognitive Robotic Process Automation (RPA) technology with customers' current investments for high scalability, improved accuracy and an enhanced customer experience in Mexico and across Latin America.

"Partnering with Smart-Force means we get to help solve some of the greatest digital transformation challenges in Intelligent Document Processing instead of just the low-hanging fruit. Smart-Force is forward-thinking and committed to future-proofing their customers' processes, even with hard-to-automate, unstructured documents where the application of techniques such as NLP is often required," said Greg Council, Vice President of Marketing and Product Management at Parascript. "Smart-Force leverages bots to genuinely collaborate with staff so that the staff no longer have to spend all their time finding information and performing data entry and verification, even for the most complex multi-page documents that you see in lending and insurance."

Smart-Force specializes in digital transformation by identifying processes in need of automation and implementing RPA to improve those processes so that they run faster without errors. SFORCE routinely enables increased productivity, improves customer satisfaction, and improves staff morale through leveraging the technology of Automation Anywhere, Inc., a leader in RPA, and now Parascript Intelligent Document Processing.

"As intelligent automation technology becomes more ubiquitous, it has created opportunities for organizations to ignite their staff towards new ways of working, freeing up time from manual tasks to focus on creative, strategic projects - what humans are meant to do," said Griffin Pickard, Director of Technology Alliance Program at Automation Anywhere. "By creating an alliance with Parascript and Smart-Force, we have enabled customers to advance their automation strategy by leveraging ML and accelerate end-to-end business processes."

"Our focus at SFORCE is on RPA with Machine Learning to transform how customers are doing things. We don't replace; we complement the technology investments of our customers to improve how they are working," said Alejandro Castrejón, Founder of SFORCE. "We make processes faster and more efficient, and augment their staff capabilities. In terms of RPA processes that focus on complex document-based information, we haven't seen anything approach what Parascript can do."

"We found that Parascript does a lot more than other IDP providers. Our customers need a point-to-point RPA solution. Where Parascript software becomes essential is in extracting and verifying data from complex documents such as legal contracts. Manual data entry and review produces a lot of errors and takes time," said Barbara Mair, Partner at SFORCE. "Using Parascript software, we can significantly accelerate contract execution, customer onboarding and many other processes without introducing errors."

The ability to process simple to very complex documents such as unstructured contracts and policies within RPA leveraging FormXtra.AI represents real opportunities for digital transformation across the enterprise. FormXtra.AI and its Smart Learning allow for easy configuration, and by training the systems on client-specific data, the automation is rapidly deployed with the ability to adapt to new information introduced in dynamic production environments.

About SFORCE, S.A. de C.V.

SFORCE offers services that allow customers to adopt digital transformation at whatever pace the organization needs. SFORCE is dedicated to helping customers get the most out of their existing investments in technology. SFORCE provides point-to-point solutions that combine existing technologies with next generation technology, which allows customers to transform operations, dramatically increase efficiency as well as automate manual tasks that are rote and error-prone, so that staff can focus on high-value activities that significantly increase revenue. From exploring process automation to planning a disruptive change that ensures high levels of automation, our team of specialists helps design and implement the automation of processes for digital transformation. Visit SFORCE.

About Parascript

Parascript software, driven by data science and powered by machine learning, configures and optimizes itself to automate simple and complex document-oriented tasks such as document classification, document separation and data entry for payments, lending and AP/AR processes. Every year, over 100 billion documents involved in banking, insurance, and government are processed by Parascript software. Parascript offers its technology both as software products and as software-enabled services to our partners. Visit Parascript.

Read more here:
Parascript and SFORCE Partner to Leverage Machine Learning Eliminating Barriers to Automation - GlobeNewswire