Global Government Cloud Computing Market 2022 Scope by Business Standards and Key Players as Microsoft, Oracle, Amazon Web Services, IBM Queen Anne…

MarketQuest.biz's recent survey report, Global Government Cloud Computing Market from 2022 to 2028, provides data and statistics on market structure and size. The research's goal is to provide market intelligence and strategic insights to help decision-makers make informed investment decisions and identify prospective growth gaps and opportunities. The purpose of this study is to provide an in-depth review of market trends and growth so that appropriate approaches may be implemented to succeed in the global Government Cloud Computing market.

The scope of the project, production, manufacturing value, loss/profit, supply/demand, and import/export are all depicted in great detail. The market research then forecasts global Government Cloud Computing business growth patterns. It also contains information on strategic partnerships. A feasibility analysis, a SWOT analysis, and a return on investment analysis are all included in this study.

DOWNLOAD FREE SAMPLE REPORT: https://www.marketquest.biz/sample-request/76462

The growth mapping process requires segmentation analysis since it allows suppliers to track demand in real time, allowing them to plan ahead and balance market demand and supply. The research takes a broad approach to finding unexplored market pathways and prospects.

The report also consists of a global perspective of key regions, namely:

Market breakdown by applications:

Market breakdown by types:

The main players in the market include:

ACCESS FULL REPORT: https://www.marketquest.biz/report/76462/global-government-cloud-computing-market-2021-by-company-regions-type-and-application-forecast-to-2026

Highlights of the global Government Cloud Computing market report:

Based on primary research and in-depth secondary research, the report was formed around recent trends, pricing analysis, potential and past demand & supply, economic conditions, COVID-19 influence, and other aspects. Primary research is undertaken by industry specialists and our in-house domain experts; vice presidents, consultants, product managers, and supply chain managers were interviewed as part of it.

Customization of the Report:

This report can be customized to meet the client's requirements. Please connect with our sales team (sales@marketquest.biz), who will ensure that you get a report that suits your needs. You can also get in touch with our executives on 1-201-465-4211 to share your research requirements.

Contact Us
Mark Stone, Head of Business Development
Phone: 1-201-465-4211
Email: sales@marketquest.biz

Link:
Global Government Cloud Computing Market 2022 Scope by Business Standards and Key Players as Microsoft, Oracle, Amazon Web Services, IBM Queen Anne...

Read More..

A Recipe to Migrate and Scale Monoliths in the Cloud – InfoQ.com

Key Takeaways

As a consulting cloud architect at fourTheorem, I see many companies struggling to scale their applications and take full advantage of cloud computing.

These companies range from startups to more established organizations that have developed a product in a monolithic fashion and are finally getting good traction in their markets. Their business is growing, but they are struggling to scale their deployments.

Their service is generally deployed on a private on-premises server or managed remotely by a hosting provider on a virtual server. With the increased demand for their service, their production environment is starting to suffer from slowness and intermittent availability, which eventually hinders the quality of the service and the potential for further growth.

Moving the product to a cloud provider such as AWS could be a sensible solution here. Using the cloud allows the company to use resources on demand and pay only for what it uses. Cloud resources can also be scaled dynamically to adapt to bursts of traffic, keeping the user experience consistently high.

Interestingly enough, some of the companies that I have been talking to believe that, in order to transition to the cloud, they necessarily have to re-engineer the entire architecture of their application and switch to microservices or even serverless.

In most circumstances, re-engineering the entire application would be a prohibitive investment in terms of cost and time, and it would divert focus that should otherwise be spent on building features that help the business grow. This belief makes businesses skeptical about the opportunities the cloud could bring them, and they end up preferring a shorter-term scale-up strategy in which the current application server is upgraded to a more powerful and expensive machine.

Of course, there is a limit on how big a single server can get, and eventually, the business will need to get back to square one and consider alternative solutions.

In this article, I want to present a simple cloud architecture that can allow an organization to take monolithic applications to the cloud incrementally without a dramatic change in the architecture. We will discuss the minimal requirements and basic components to take advantage of the scalability of the cloud. We will also explore common gotchas that might require some changes in your application codebase. Finally, we will analyze some opportunities for further improvement that will arise once the transition to the cloud is completed.

I have seen a good number of companies succeed in moving to the cloud with this approach. Once they have a foothold in the cloud and their application is stable, they can focus on keeping their customers happy and growing their business even more. Moreover, since technology is not a blocker anymore, they can start experimenting and transition parts of their application to decoupled services. This allows the company to start moving toward a microservices architecture and even new technologies such as Lambda functions, which can help to achieve greater agility in the development process and lead to additional growth opportunities for the business.

Let's make things a bit more tangible here and introduce a fictitious company that we will use as an imaginary case study to explore the topic of cloud migrations.

Eaglebox, Ltd. is a file storage company that offers the Eaglebox App, a web and mobile application that helps legal practitioners keep all their files organized and accessible remotely from multiple devices.

To get familiar with what Eaglebox App looks like, let's present a few specific use cases:

Eaglebox App is developed as a monolithic application written using the Django framework and PostgreSQL as a database.

Eaglebox App is currently deployed on a server on the Eaglebox premises, and all the customer files are kept on the machine's drive (yes, they are backed up often!). Similarly, PostgreSQL is running as a service on the same machine. The database data is backed up often, but it is not replicated.

Eaglebox has recently closed a few contracts with some big legal firms, and since then, they have been struggling to scale their infrastructure. Their server is becoming increasingly slow, and its disk saturates quickly, requiring a lot of maintenance. The user experience has become sub-optimal, and the whole business is currently at risk.

Let's see how we can help Eaglebox move to the cloud with a revisited and more scalable architecture.

Based on what the engineers at Eaglebox are telling us, we have identified a few crucial problems we need to tackle:

On top of these technical problems, we also need to acknowledge that the team at Eaglebox does not have experience with cloud architectures and that a migration to the cloud will be a learning experience for them. It's important to limit the amount of change required for the migration to give the team time to adapt and absorb new knowledge.

Our challenge is to come up with an architecture that addresses all the existing technical problems, but at the same time provides the shortest possible path to the cloud and does not require a major technological change for the team.

To address Eaglebox's challenges, we are going to suggest a simple yet very scalable and resilient cloud architecture, targeting AWS as the cloud provider of choice.

Such architecture will have the following components:

Figure 1. High-level view of the proposed architecture.

In Figure 1, we can see a high-level view of the proposed architecture. Let's zoom in on the various components.

Before we discuss the details of the various components, it is important to briefly explore how AWS exposes its data centers and how we can configure the networking for our architecture. We are not going to go into great detail, but we need to cover the basics to understand what kind of failures we can expect, how we can keep the application running even when things do fail, and how we can make it scale when traffic increases.

The cloud is not infallible; things break even in there. Cloud providers like AWS, Azure, and Google Cloud give us tools and best practices to design resilient architectures, but it's a shared responsibility model where we need to understand what the provider's assurances are, what could break, and how.

When it comes to networking, there are a few high-level concepts that we need to introduce. Note that I will be using AWS terminology here, but the concepts should apply also to Azure and Google Cloud.

For the sake of our architecture, we will go with a VPC configuration like the one illustrated in Figure 2.

Figure 2. VPC configuration for our architecture

The main idea is to select a Region close to our customers and create a dedicated VPC in that region. We will then use 3 different availability zones and have a public and a private subnet for every availability zone.

We will use the public subnets only for the load balancer, and we will use the private subnets for every other component in our architecture: virtual machines, cache servers, and databases.

Action point: Start by configuring a VPC in your region of choice. Make sure to create public and private subnets in different availability zones.
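
For readers who prefer to script this step rather than click through the console, the following sketch shows the general shape of the API calls involved, using Python and boto3. The region, availability zones, and CIDR ranges are illustrative assumptions; a production setup would also need an internet gateway, route tables, and NAT gateways, which are omitted here.

```python
import boto3

# Illustrative region choice; pick one close to your customers.
ec2 = boto3.client("ec2", region_name="eu-west-1")

vpc = ec2.create_vpc(CidrBlock="10.0.0.0/16")
vpc_id = vpc["Vpc"]["VpcId"]

# One public and one private subnet per availability zone (Figure 2).
azs = ["eu-west-1a", "eu-west-1b", "eu-west-1c"]
for i, az in enumerate(azs):
    ec2.create_subnet(VpcId=vpc_id, AvailabilityZone=az,
                      CidrBlock=f"10.0.{i}.0/24")        # public subnet
    ec2.create_subnet(VpcId=vpc_id, AvailabilityZone=az,
                      CidrBlock=f"10.0.{i + 100}.0/24")  # private subnet

# Note: a subnet is only "public" once routed to an internet gateway,
# which this sketch does not create.
```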

The load balancer is the entry point for all the traffic going to the Eaglebox App servers. This is an Application Load Balancer (layer 7), which can manage HTTP, HTTPS, WebSocket, and gRPC traffic. It is configured to distribute the incoming traffic to the virtual machines serving as backend servers. It can check the health of the targets, making sure to forward incoming traffic only to the instances that are healthy and responsive.

Action point: Make sure your monolith has a simple endpoint that can be used to check the health of the instance. If there isn't one already, add it to the application.
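
In a Django monolith like Eaglebox App, such an endpoint can be a few lines. This is a minimal sketch (view and URL names are illustrative); a richer version might also ping the database or cache so the load balancer drains instances with broken dependencies.

```python
# views.py
from django.http import JsonResponse

def health(request):
    # Keep this cheap: the load balancer will call it every few seconds.
    return JsonResponse({"status": "ok"})

# urls.py
from django.urls import path
from . import views

urlpatterns = [
    path("health/", views.health),  # use this path in the target group health check
]
```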

Through an integration with ACM (AWS Certificate Manager), the load balancer can use a certificate and serve HTTPS traffic, making sure that all the incoming and outgoing traffic is encrypted.

From a networking perspective, the load balancer is configured to use all the public subnets, therefore, using all the availability zones. This makes the load balancer highly available: if an availability zone suddenly becomes unavailable, the traffic will automatically be routed through the remaining availability zones.

In AWS, Elastic Load Balancers are well capable of handling growing traffic; a single load balancer can distribute even millions of requests per second. For most real-life applications, we won't need to do anything in particular to scale the load balancer. Finally, it's worth mentioning that this kind of load balancer is fully managed by AWS, so we don't need to worry about system configuration or software updates.

Eaglebox App is a web application written in Python using the Django framework. We want to be able to run multiple instances of the application on different servers simultaneously. This way the application can scale according to increasing traffic. Ideally, we want to spread different instances across different availability zones. Again, if an availability zone becomes unavailable, we want to have instances in other zones to handle the traffic and avoid downtimes.

To make the instances scale dynamically, we can use an autoscaling group. Autoscaling groups allow us to define the conditions under which new instances of the application will automatically be launched (or destroyed in case of downscaling). For instance, we could use the average CPU levels or the average number of requests per instance to determine if we need to spin up new instances or, if there is already plenty of capacity available, we can decide to scale the number of instances down and save on cost. To guarantee high availability, we need to make sure there is always at least one instance available in every availability zone.
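
As an illustration of the CPU-based policy just mentioned, here is a minimal boto3 sketch that attaches a target-tracking scaling policy to an autoscaling group. The group name and target value are assumptions for this example, not values from the article.

```python
import boto3

autoscaling = boto3.client("autoscaling", region_name="eu-west-1")

# Track average CPU across the group: scale out above the target, in below it.
autoscaling.put_scaling_policy(
    AutoScalingGroupName="eaglebox-app",   # illustrative group name
    PolicyName="target-cpu-50",
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ASGAverageCPUUtilization"
        },
        "TargetValue": 50.0,               # keep the fleet at ~50% average CPU
    },
)
```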

In order to provision a virtual machine, it is necessary to build a virtual machine image. An image is effectively a way to package an operating system, all the necessary software (e.g. the Python runtime), the source code of our application, and all its dependencies.

Having to define images to start virtual machine instances might not seem like an important detail, but it is a big departure from how software is generally managed on premise. On premise, it's quite common to keep virtual machines around forever. Once a machine is provisioned, it's common practice for IT managers to log in to it to patch software, restart services, or deploy new releases of the application. This is not feasible anymore once multiple instances are around and they are automatically started and destroyed in the cloud.

A best practice in the cloud is to consider virtual machines immutable: once they are started they are not supposed to be changed. If you need to release an update, then you build a new image and start to roll out new instances while phasing out the old ones.

But immutability does not only affect deployments or software updates. It also affects the way data (or state in general) is managed. We cannot afford to store any persistent state locally in the virtual machine anymore. If the machine gets shut down we will lose all the data, so no more files saved in the local filesystem or session data in the application memory.

With this new mental model, infrastructure and data become well-separated concerns that are handled and managed independently from one another.

As we go through the exercise of reviewing the existing code and building the virtual machine images, it will be important to identify all the parts of the code that access data (files, database records, user session data, etc.) and make the necessary changes to ensure that no data is stored locally within the instance. We will discuss what our options are here as we go through the different types of storage that we need for our architecture.

But how do we build a virtual machine image?

There are several different tools and methodologies that can help us with this task. Personally, the ones I have used in the past and have been quite happy with are EC2 Image Builder by AWS and Packer by HashiCorp.

In AWS, the easiest way to spin up a relational database such as PostgreSQL is to use RDS: Relational Database Service. RDS is a managed service that allows you to spin up a database instance for which AWS will take care of updates and backups.

RDS PostgreSQL can be configured to have read replicas. Read replicas are a great way to offload the read queries to multiple instances, keeping the database responsive and snappy even under heavy load.

Another interesting feature of RDS is the possibility to run a PostgreSQL instance in multi-AZ mode. This means that the main instance of the database will run on a specific AZ, but there will be at least 2 standby replicas in other AZs ready to be used in case the main AZ should fail. AWS will take care of performing an automatic switch-over in case of disaster to make sure your database is back online as soon as possible and without any manual intervention.

Keep in mind that multi-AZ failover is not instantaneous (it generally takes 60-120 seconds) so you need to engineer your application to work (or at least to show a clear descriptive message to the users) even when a connection to the database cannot be established.
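
At the application level, this usually means wrapping connection attempts in a retry loop instead of failing on the first error. A minimal sketch with psycopg2 follows; the attempt count and backoff are arbitrary choices to tune for your workload.

```python
import time
import psycopg2

def connect_with_retry(dsn: str, attempts: int = 5, backoff: float = 2.0):
    """Retry the connection so a multi-AZ failover surfaces as a short delay
    rather than a hard crash. Re-raises after the final attempt."""
    for attempt in range(1, attempts + 1):
        try:
            return psycopg2.connect(dsn, connect_timeout=5)
        except psycopg2.OperationalError:
            if attempt == attempts:
                raise
            time.sleep(backoff * attempt)  # linear backoff between attempts
```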

Now, the main question is, how do we migrate the data from the on-premise database to a new instance on RDS? Ideally, we would like to have a process that allows us to transition between the two environments gradually and without downtimes, so what can we do about that?

AWS offers another database service called AWS Database Migration Service. This service allows you to replicate all the data from the old database to the new one. The interesting part is that it can also keep the two databases in sync during the switch over, when, due to DNS propagation, you might have some users landing on the new system while others might still be routed to the old server.

Action point: Create a database instance on RDS and enable Multi-AZ mode. Use AWS Database Migration Service to migrate all the data and keep the two databases in sync during the switch-over phase.
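
For the first half of this action point, a boto3 sketch of creating a Multi-AZ PostgreSQL instance looks roughly like the block below (identifiers, instance class, and subnet group are illustrative assumptions). The DMS setup itself involves replication instances, endpoints, and tasks, and is easier to configure from the console the first time.

```python
import boto3

rds = boto3.client("rds", region_name="eu-west-1")

rds.create_db_instance(
    DBInstanceIdentifier="eaglebox-db",        # illustrative name
    Engine="postgres",
    DBInstanceClass="db.m6g.large",
    AllocatedStorage=100,                      # GiB
    MasterUsername="eaglebox",
    MasterUserPassword="use-a-secrets-manager-instead",
    MultiAZ=True,                              # standby replica in another AZ
    DBSubnetGroupName="eaglebox-private",      # the private subnets from Figure 2
)
```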

In our new architecture, we can implement distributed file storage by simply adopting S3 (Simple Storage Service). S3 is one of the very first AWS services and probably one of the most famous.

S3 is a durable object storage service. It allows you to store any arbitrary amount of data durably. Objects are stored in buckets (logical containers with a unique name). S3 uses a key/value storage model: every object in a bucket is uniquely identified by a key, and both content and metadata can be associated with that key.

To start using S3 and be able to read and write objects, we need to use the AWS SDK. This is available for many languages (including Python) and it offers a programmatic interface to interact with all AWS services, including S3.

We can also interact with S3 by using the AWS Command Line Interface. The CLI has a command that can be particularly convenient in our scenario: the sync command. With this command, we can copy all the existing files into an S3 bucket of our choice.

To transition smoothly between the two environments, a good strategy is to start using S3 straight away from the existing environments. This means that we will need to synchronize all our local files into a bucket, then we need to make sure that every new file uploaded by the users is copied into the same bucket as well.

Action point: files migration. Create a new S3 bucket, synchronize all the existing files into the bucket, and save every new file to S3.
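
In practice, the two halves of this action point might look like the sketch below: a one-off bulk copy with the CLI's sync command, then a small helper in the application that writes new uploads straight to S3. The bucket name and key scheme are illustrative.

```python
import boto3

s3 = boto3.client("s3")
BUCKET = "eaglebox-files"  # illustrative bucket name

# One-off migration of the existing files is easiest from the shell:
#   aws s3 sync /var/eaglebox/files s3://eaglebox-files
#
# From then on, the application saves every new upload directly to S3:
def save_user_file(user_id: str, filename: str, fileobj) -> str:
    key = f"{user_id}/{filename}"          # illustrative key scheme
    s3.upload_fileobj(fileobj, BUCKET, key)
    return key
```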

In our new architecture, we will have multiple backend servers handling requests for the users. Given that the traffic is load balanced, a user request might end up on a given backend instance but the following request from the same user might end up being served by another instance.

For this reason, all the instances need to have access to shared session storage. Without it, the individual instances won't be able to correctly recognize a user session when a request is served by a different instance from the one that originally initiated the session.

A common way to implement a distributed session storage is to use a Redis instance.

The easiest way to spin up a Redis instance on AWS is to use a service called ElastiCache. ElastiCache is a managed service for Redis and Memcached and, as with RDS, it is built in such a way that you don't have to worry about the operating system or about installing security patches.

ElastiCache can spin up a Redis cluster in multi-AZ mode with automatic failover. As with RDS, this means that if the Availability Zone hosting the primary instance of the cluster becomes unreachable, ElastiCache will automatically perform a DNS failover and switch to one of the standby replicas in another Availability Zone. Here too, the failover is not instantaneous, so it's important to account at the application level for the possibility that a connection to Redis cannot be established during a failover.

Action point: Provision a Redis cluster using ElastiCache and make sure all the session data is stored there.
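
For a Django application, pointing sessions at ElastiCache is mostly configuration. A minimal sketch, assuming Django 4+ and an illustrative cluster endpoint (older Django versions would typically use the django-redis package instead):

```python
# settings.py
CACHES = {
    "default": {
        "BACKEND": "django.core.cache.backends.redis.RedisCache",
        # Illustrative ElastiCache primary endpoint; load from config in real code.
        "LOCATION": "redis://eaglebox-cache.xxxxxx.euw1.cache.amazonaws.com:6379",
    }
}

# Store session data in the cache so any instance can serve any user.
SESSION_ENGINE = "django.contrib.sessions.backends.cache"
```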

The final step in our migration is about DNS: how do we start forwarding traffic to our new infrastructure on AWS?

The best way to do that is to configure all our DNS for the application in Route 53. Route 53 is a highly available and scalable cloud DNS service.

It can be configured to forward all the traffic on our application domain to our load balancer. Once we configure and enable this (and DNS has been propagated) we will start to receive traffic on the new infrastructure.

If your domain has been registered somewhere else you can either transfer the domain to AWS or change your registrar configuration to use your new Route 53 hosted zone as a name server.

Action point: Create a new hosted zone in Route 53 and configure your DNS to point your domain to the application load balancer. Once you are ready to switch over, point your domain registrar to Route 53 or transfer the domain to AWS.
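
Scripted with boto3, the record pointing the domain at the load balancer is an alias record, roughly as in this sketch. All identifiers and hostnames are placeholders; the alias target's hosted zone ID is a fixed per-region value for load balancers that you look up in the AWS documentation, not your own zone's ID.

```python
import boto3

route53 = boto3.client("route53")

HOSTED_ZONE_ID = "Z0000000000000"   # placeholder: your Route 53 hosted zone
ALB_ZONE_ID = "ZXXXXXXXXXXXXX"      # placeholder: the ALB's per-region zone ID

route53.change_resource_record_sets(
    HostedZoneId=HOSTED_ZONE_ID,
    ChangeBatch={
        "Changes": [{
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": "app.eaglebox.example.",   # placeholder domain
                "Type": "A",
                "AliasTarget": {
                    "HostedZoneId": ALB_ZONE_ID,
                    "DNSName": "my-alb-123456.eu-west-1.elb.amazonaws.com.",
                    "EvaluateTargetHealth": True,
                },
            },
        }]
    },
)
```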

As we have seen, this new architecture consists of a good amount of moving parts. How can we keep track of all of them and make sure all our environments (e.g. development, QA, and production) are as consistent as possible?

The best way to approach this is through Infrastructure as Code (IaC). IaC allows you to define all your infrastructure declaratively as code. This code can be stored in a repository (even the same repository you already use for the application codebase). By doing that, all your infrastructure is visible to all the developers; they can review changes and contribute directly. More importantly, IaC gives you a repeatable process to ship changes across environments, which helps keep things aligned as the architecture evolves.

The tool of choice when it comes to IaC on AWS is CloudFormation, which allows you to specify your infrastructure templates using YAML. Another AWS tool is the Cloud Development Kit (CDK), which provides a higher-level interface that can be used to define your infrastructure in code using programming languages such as TypeScript, Python, or Java.
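
As a taste of the CDK flavor, here is a minimal Python sketch declaring the VPC from Figure 2 as code; CDK expands this into a CloudFormation template with subnets, route tables, and gateways. Stack and construct names are illustrative.

```python
from aws_cdk import App, Stack
from aws_cdk import aws_ec2 as ec2
from constructs import Construct

class NetworkStack(Stack):
    def __init__(self, scope: Construct, id: str, **kwargs) -> None:
        super().__init__(scope, id, **kwargs)
        # One public and one private subnet in each of three AZs.
        self.vpc = ec2.Vpc(
            self, "EagleboxVpc",
            max_azs=3,
            subnet_configuration=[
                ec2.SubnetConfiguration(
                    name="public",
                    subnet_type=ec2.SubnetType.PUBLIC, cidr_mask=24),
                ec2.SubnetConfiguration(
                    name="private",
                    subnet_type=ec2.SubnetType.PRIVATE_WITH_EGRESS, cidr_mask=24),
            ],
        )

app = App()
NetworkStack(app, "eaglebox-network")
app.synth()
```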

Another common alternative is a third-party cross-cloud tool called Terraform.

It's not important which tool you pick (they all have their pros and cons), but it's extremely important to define all the infrastructure as code so you can start to build a solid process around how to ship infrastructure changes to the cloud.

Another important topic is observability. Now that we have so many moving parts, how do we debug issues or how do we make sure that the system is healthy? Discussing observability goes beyond the scope of this article, but if you are curious to start exploring topics such as distributed logs, tracing, metrics, alarms, and dashboards make sure to have a look at CloudWatch and X-Ray.

Infrastructure as code and observability are two extremely important topics that will help you a lot to deploy applications to the cloud and keep them running smoothly.

So now that we are in the cloud, is our journey over? Quite the contrary, this journey has just begun and there is a lot more to explore and learn about.

Now that we are in the cloud we have many opportunities to explore new technologies and approaches.

We could start to explore containers or even serverless. If we are building a new feature we are not necessarily constrained by having to deploy in one monolithic server. We can build the new feature in a more decoupled way and try to leverage new tools.

For instance, let's say we need to build a feature that notifies users by email when new documents for a case have been uploaded by another user. One way to do this is to use a queue and a worker: the application publishes to a queue a job describing the notification email, and a pool of workers processes these jobs from the queue, doing the hard work of actually sending the emails.

This approach allows the backend application to stay snappy and responsive and delegate time-consuming background tasks (like sending emails) to external workers that can work asynchronously.

One way to implement this on AWS is to use SQS (queue) and Lambda (serverless compute).
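
A sketch of both halves is below: the web application enqueues a small JSON job, and a Lambda function subscribed to the queue does the slow work. The queue URL is a placeholder and send_email is a hypothetical helper (for example, backed by Amazon SES).

```python
import json
import boto3

sqs = boto3.client("sqs")
QUEUE_URL = "https://sqs.eu-west-1.amazonaws.com/123456789012/email-jobs"  # placeholder

# In the Django app: enqueue the job and return to the user immediately.
def enqueue_notification(case_id: str, recipients: list[str]) -> None:
    sqs.send_message(
        QueueUrl=QUEUE_URL,
        MessageBody=json.dumps({"case_id": case_id, "recipients": recipients}),
    )

# In the Lambda function, with the queue configured as its event source:
def handler(event, context):
    for record in event["Records"]:
        job = json.loads(record["body"])
        send_email(job["recipients"], job["case_id"])

def send_email(recipients, case_id):
    ...  # hypothetical helper, e.g. calling Amazon SES
```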

This is just an example that shows how being in the cloud opens up new possibilities that can allow a company to iterate fast and keep experimenting while leveraging a comprehensive suite of tools and technologies available on demand.

The cloud is a journey, not a destination, and this journey has just begun. Enjoy the ride!

Originally posted here:
A Recipe to Migrate and Scale Monoliths in the Cloud - InfoQ.com

Read More..

Global Mobile Edge Computing Market Trajectory & Analytics Report 2022: Accelerating Pace of Connected Care Adoption Drives Opportunities -…

DUBLIN, May 13, 2022--(BUSINESS WIRE)--The "Mobile Edge Computing - Global Market Trajectory & Analytics" report has been added to ResearchAndMarkets.com's offering.

Global Mobile Edge Computing Market to Reach $2.2 Billion by 2026

The global market for Mobile Edge Computing estimated at US$427.3 Million in the year 2020, is projected to reach a revised size of US$2.2 Billion by 2026, growing at a CAGR of 30.8% over the analysis period.

Mobile edge computing allows faster and more flexible deployment of various services and applications for customers. The technology combines telecommunication networking and IT to help cellular operators in opening radio access networks (RANs) for authorization of third parties such as content providers and application developers. The access to cloud services and resources also allows emergence of new applications to support smart environments.

Hardware, one of the segments analyzed in the report, is projected to record 29.3% CAGR and reach US$2 Billion by the end of the analysis period. After a thorough analysis of the business implications of the pandemic and its induced economic crisis, growth in the Software & Services segment is readjusted to a revised 35.4% CAGR for the next 7-year period.

The U.S. Market is Estimated at $242.5 Million in 2021, While China is Forecast to Reach $173.5 Million by 2026

The Mobile Edge Computing market in the U.S. is estimated at US$242.5 Million in the year 2021. China, the world's second largest economy, is forecast to reach a projected market size of US$173.5 Million by the year 2026, trailing a CAGR of 35.8% over the analysis period. Among the other noteworthy geographic markets are Japan and Canada, forecast to grow at 25.9% and 29.1%, respectively, over the analysis period. Within Europe, Germany is forecast to grow at approximately 27.3% CAGR.

The market is presently witnessing steady growth, fuelled in part by market players' aggressive investment in developing cutting-edge technologies and providing more effective solutions to consumers. Factors including the growing load on global cloud infrastructure and the increasing number of intelligent applications are also fuelling market growth for mobile edge computing.

The growing need to enhance QoE (Quality of Experience) for end users and increasing demand for ultra-low latency are also fuelling demand for MEC solutions. MEC aids real-time applications in data analysis and processing. 5G networks and the emergence of several new languages and frameworks for IoT solutions should also offer major market growth opportunities over the coming years. Location-based services are anticipated to report some of the strongest growth of all services over the upcoming years due to greater efficiencies, reduced costs, and enterprises' increasing requirement to provide enhanced QoE.

Nonetheless, mobile edge computing necessitates more hardware locally, which leads to an increase in maintenance costs, a factor with the potential to hinder the market's anticipated growth. Also, despite being fairly safe, edge computing necessitates constant updating and monitoring because cyber-attacks are becoming increasingly sophisticated.

There is also a dearth of skilled labor for handling the technology, which is highly complex. Lack of deployment capability and required infrastructure could also restrain growth in the market.

Key Topics Covered:

I. METHODOLOGY

II. EXECUTIVE SUMMARY

1. MARKET OVERVIEW

Influencer Market Insights

World Market Trajectories

Impact of Covid-19 and a Looming Global Recession

An Introduction to Mobile Edge Computing: Bringing Storage & Computing Closer to Edge of Network

Organizations Influencing Mobile Edge Computing Industry

Mobile Edge Computing Holds Compelling Merits and Supports New Applications

Mobile Edge Computing Emerges as Key Technology to Reduce Network Congestion

Market Overview & Outlook

Network Benefits and Performance Gains Enable Mobile Edge Computing Market to Post Healthy Growth

Key Issues Related to Mobile Edge Computing

Market Analysis by Component

Application Market Analysis

IT & Telecom: The Largest Vertical Market

Transformation of the Telecom Industry with MEC

Regional Analysis

Competitive Scenario

Recent Market Activity

Mobile Edge Computing - Global Key Competitors Percentage Market Share in 2022 (E)

Competitive Market Presence - Strong/Active/Niche/Trivial for Players Worldwide in 2022 (E)

2. FOCUS ON SELECT PLAYERS (Total 52 Featured)

Adlink Technology Inc.

Advantech Co., Ltd.

AT&T Inc.

Gigaspaces Technologies Inc.

Huawei Technology Co. Ltd.

Intel Corporation

Juniper Networks Inc.

Nokia Corporation

Saguna Networks Ltd.

SK Telecom Co. Ltd.

SMART Embedded Computing

Telefonaktiebolaget LM Ericsson

ZephyrTel Inc.

ZTE Corporation

3. MARKET TRENDS & DRIVERS

Rise in IoT Ecosystem, the Cornerstone for Future Growth

Rising Demand for High-Performance Mobile Applications Bodes Well for MEC in Telecommunication Sector

Percentage of Time Spent on Mobile Apps by Category for 2020

5G Networks to Inflate Market Demand

Breakdown of Network Latency (in Milliseconds) by Network Type

Accelerating Pace of Connected Care Adoption Drives Opportunities for Mobile Edge Computing

Opportunities in Retail Sector

Mobile Edge Computing to Gain Traction in BFSI

Mobile Edge Computing Presents Landmark Technology for Media & Entertainment

Low Latency & High Bandwidth Needs Create Ample Demand

Edge-Powered Computing Offers Intriguing Advantages for Location-Based Applications

Opportunities in Video Surveillance Ecosystem

Reliable Data Analytics with Mobile Edge Computing

Mobile Edge Computing Marks Paradigm Shift for Mobile Cloud Computing

4. GLOBAL MARKET PERSPECTIVE

III. REGIONAL MARKET ANALYSIS

IV. COMPETITION

For more information about this report visit https://www.researchandmarkets.com/r/k9gjd5

View source version on businesswire.com: https://www.businesswire.com/news/home/20220513005362/en/

Contacts

ResearchAndMarkets.com
Laura Wood, Senior Press Manager
press@researchandmarkets.com

For E.S.T Office Hours Call 1-917-300-0470
For U.S./CAN Toll Free Call 1-800-526-8630
For GMT Office Hours Call +353-1-416-8900

View original post here:
Global Mobile Edge Computing Market Trajectory & Analytics Report 2022: Accelerating Pace of Connected Care Adoption Drives Opportunities -...

Read More..

AI, philosophy and religion: what machine learning can tell us about the Bhagavad Gita – The Conversation

Machine learning and other artificial intelligence (AI) methods have had immense success with scientific and technical tasks such as predicting how protein molecules fold and recognising faces in a crowd. However, the application of these methods to the humanities is yet to be fully explored.

What can AI tell us about philosophy and religion, for example? As a starting point for such an exploration, we used deep learning AI methods to analyse English translations of the Bhagavad Gita, an ancient Hindu text written originally in Sanskrit.

Using a deep learning-based language model called BERT, we studied sentiment (emotions) and semantics (meanings) in three of these translations. Despite huge variations in vocabulary and sentence structure, we found that the patterns of emotion and meaning were broadly similar in all three.

This research opens a path to the use of AI-based technologies for comparing translations and reviewing sentiments in a wide range of texts.

The Bhagavad Gita is one of the central Hindu sacred and philosophical texts. Written more than 2,000 years ago, it has been translated into more than 100 languages and has been of interest to western philosophers since the 18th century.

The 700-verse poem is a part of the larger Mahabharata epic, which recounts the events of an ancient war believed to have occurred at Kurukshetra near modern-day Delhi in India.

The text of the Bhagavad Gita relates a conversation between the Hindu deity Lord Krishna and a prince called Arjuna. They discuss whether a soldier should go to war for ethics and duty (or dharma) if they have close friends or family on the opposing side.

The text has been instrumental in laying the foundations of Hinduism. Among many other things, it is where the philosophy of karma (a spiritual principle of cause and effect) originates.

Scholars have also regarded the Bhagavad Gita as a book of psychology, management, leadership and conflict resolution.

There have been countless English translations of the Bhagavad Gita, but there is not much work that validates their quality. Translations of songs and poems not only break rhythm and rhyming patterns, but can also result in the loss of semantic information.

In our research, we used deep learning language models to analyse three selected translations of the Bhagavad Gita (from Sanskrit to English) with semantic and sentiment analyses which help in the evaluation of translation quality.

We used a pre-trained language model known as BERT, developed by Google. We further tuned the model using a human-labelled training dataset based on Twitter posts, which captures 10 different sentiments.

These sentiments (optimistic, thankful, empathetic, pessimistic, anxious, sad, annoyed, denial, surprise, and joking) were adopted from our previous research into social media sentiment during the onset of the COVID-19 pandemic.
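
The article does not include code, but the kind of multi-label sentiment classifier it describes can be sketched with the Hugging Face transformers library. Everything here is illustrative: the actual study fine-tuned BERT on labelled tweets, whereas this skeleton loads a plain bert-base-uncased checkpoint with an untrained 10-label head, so its outputs are meaningless until trained.

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

MODEL = "bert-base-uncased"  # the study used a fine-tuned BERT, not this base checkpoint
LABELS = ["optimistic", "thankful", "empathetic", "pessimistic", "anxious",
          "sad", "annoyed", "denial", "surprise", "joking"]

tokenizer = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForSequenceClassification.from_pretrained(
    MODEL, num_labels=len(LABELS))

def sentiments(verse: str) -> dict:
    """Score one verse against all ten sentiments (multi-label)."""
    inputs = tokenizer(verse, return_tensors="pt", truncation=True)
    with torch.no_grad():
        logits = model(**inputs).logits
    probs = torch.sigmoid(logits).squeeze()  # one independent score per label
    return {label: float(p) for label, p in zip(LABELS, probs)}
```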

The three translations we studied used very different vocabulary and syntax, but the language model recognised similar sentiments in the different chapters of the respective translations. According to our model, optimistic, annoyed and surprised sentiments are the most expressed.

Moreover, the model showed how the overall sentiment polarity changes (from negative to positive) over the course of the conversation between Arjuna and Lord Krishna.

Arjuna is pessimistic towards the beginning and becomes optimistic as Lord Krishna imparts knowledge of Hindu philosophy to him. The sentiments expressed by Krishna show that, with philosophical knowledge of dharma and mentorship, a troubled mind can gain the clarity to make the right decisions in times of conflict.

One limitation of our model is that it was trained on data from Twitter, so it recognises joking as a common sentiment. It applies this label inappropriately to some parts of the Bhagavad Gita. Humour is complicated and strongly culturally constrained, and understanding it is too much to ask of our model at this stage.

Due to the nature of the Sanskrit language, the fact that the Bhagavad Gita is a song with rhythm and rhyme, and the varied dates of the translations, different translators used different vocabulary to describe the same concepts.

The table below shows some of the most semantically similar verses from the three translations.

Our research points the way to the use of AI-based technologies for comparing translations and reviewing sentiments in a wide range of texts.

This technology can also be extended to review sentiments expressed in entertainment media. Another potential application is analysing movies and songs to provide insights to parents and authorities about the suitability of content for children.

The author would like to acknowledge the invaluable contribution of Venkatesh Kulkarni to this research.

Link:
AI, philosophy and religion: what machine learning can tell us about the Bhagavad Gita - The Conversation

Read More..

Slacks former head of machine learning wants to put AI in reach of every company – TechCrunch

Adam Oliner, co-founder and CEO of Graft, used to run machine learning at Slack, where he helped build the company's internal artificial intelligence infrastructure. Slack lacked the resources of a company like Meta or Google, but it still had tons of data to sift through, and it was his job to build something on a smaller scale to help put AI to work on that dataset.

With a small team, he could only build what he called a miniature solution in comparison to its web-scale counterparts. After he and his team built it, however, he realized that it was broadly applicable and could help other smaller organizations tap into AI and machine learning without huge resources.

"We built a sort of mini Graft at Slack for driving semantic search and recommendations throughout the product. And it was hugely effective. And that was when we said, this is so useful, and so powerful, if we can get this into the hands of most organizations, we think we could really change the way people interact with their data and interact with AI," Oliner told me.

Last year he decided to leave Slack and go out on his own, starting Graft to solve the problem for many companies. He says the beauty of the solution is that it provides everything you need to get started. It's not a slice of a solution or one that requires plug-ins to complete. He says it works for companies right out of the box.

"The point of Graft is to make the AI of the 1% accessible to the 99%," he said. What he means by that is giving smaller companies the ability to access and put to use modern AI, and in particular pre-trained models for certain specific tasks, something he says offers a tremendous advantage.

"These are sometimes called trunk models or foundation models, a term that a group at Stanford is trying to coin. These are essentially very large pre-trained models that encode a lot of semantic and structural knowledge about a domain of data. And this is useful because you don't have to start from scratch on every new problem," he said.

The company is still a work in progress, working with beta customers to refine the solution, but it expects to launch a product later this year. For now they have a team of 11 people, and Oliner says that it's never too early to think about building a diverse team.

When he decided to start the company, the first person he sought out was Maria Kazandjieva, former head of engineering at Netflix. "I have been working at building the rest of the founding team and also hiring others with an eye toward diversity and inclusion. So, you know, just [the other day], we were talking with recruiting communities that are focused on women and people of color, partly because we feel like investments now in building a diverse team will just make it so much easier later on," he said.

As the journey begins for Graft, the company announced what it is calling a pre-seed investment of $4.5 million led by GV with help from NEA, Essence VC, Formulate Ventures and SV Angel.

Continue reading here:
Slacks former head of machine learning wants to put AI in reach of every company - TechCrunch

Read More..

Tech Visionaries to Address Accelerating Machine Learning, Unifying AI Platforms and Taking Intelligence to the Edge, at the Fifth Annual AI Hardware…

SANTA CLARA, Calif.--(BUSINESS WIRE)--Meta's VP of Infrastructure Hardware, Alexis Black Bjorlin, will open the flagship AI Hardware Summit with a keynote, while her colleague Vikas Chandra, Meta's Director of AI Research, will open the Edge AI Summit. Other notable keynotes include Microsoft Azure's CTO, Mark Russinovich; Wells Fargo's EVP of Model Risk, Agus Sudjianto; Synopsys President & COO, Sassine Ghazi; Cadence's Executive Chairman, Lip-Bu Tan; and Siemens EVP, IC EDA, Joseph Sawicki, among many others.

Machine learning and deep learning are fast becoming major line items on agendas in board rooms in every organization across the globe. The technology stack needed to support these workloads, and to execute them quickly, efficiently, and affordably, is fast developing in both the datacenter and in client systems at the edge.

In 2018, a new Silicon Valley event called the AI Hardware Summit launched to provide a platform to discuss innovations in hardware necessary for supporting machine learning both at very large scale and in small, resource-constrained environments. The event attracted enormous interest from the semiconductor and systems sectors, welcomed Habana Labs into the industry in its inaugural year, and subsequently hosted Alphabet Inc.'s Chairman and Turing Award winner, John L. Hennessy, as a keynote speaker in 2019. Shortly after, the Edge AI Summit was launched to focus specifically on deploying machine learning in commercial use cases in client systems.

Hennessy said of the AI Hardware Summit: "It's a great place where lots of people interested in AI hardware are coming together and exchanging ideas, and together we make the technology better. There's a synergistic effect at these summits which is really amazing and powers the entire industry."

Fast forward through a few years of virtual shows and the events are back in person with a fresh angle. An all-star cast of tech visionary speakers will address optimizing and accelerating machine learning hardware and software, focusing on the intersection between systems design and ML development. Developer workshops with Hugging Face are a new feature this year, focused on helping bring new hardware innovation into leading enterprises.

The co-location of the two industry-leading summits combines their propositions into a shared focus on building, optimizing and unifying software-defined ML platforms across the cloud-edge continuum. Attendees of the AI Hardware Summit can expect content spanning from hardware and infrastructure up to models/applications, whereas the Edge AI Summit has a much tighter focus on case studies of ML in enterprise.

This years audience will consist of machine learning practitioners and technology builders from various engineering disciplines, discussing topics such as systems-first ML, AI acceleration as a full-stack endeavour, software defined systems co-design, boosting developer efficiency, optimizing applications across diverse ML platforms and bringing state of the art production performance into the enterprise.

While the AI Hardware Summit has broadened its scope beyond focusing purely on hardware, there will still be plenty for hardware-focused attendees to explore. The event website, http://www.aihardwaresummit.com, gives accessible information on why a software-focused or hardware-focused attendee should register.

The Edge AI Summit features more end user use cases than any other event of its kind, and is a must attend for anyone moving ML workloads to the edge. The event website, http://www.edgeaisummit.com, gives more information.

Read the original:
Tech Visionaries to Address Accelerating Machine Learning, Unifying AI Platforms and Taking Intelligence to the Edge, at the Fifth Annual AI Hardware...

Read More..

Post-doctoral Research Fellow (A/B) in Machine Learning job with UNIVERSITY OF ADELAIDE | 293182 – Times Higher Education

We are seeking to appoint a Post-doctoral Research Fellow (Level A/B) in Machine Learning - $71,401 to $119,391 per annum; an employer superannuation contribution of up to 17% may apply.

A 1.5 year fixed-term position is available to work on a research project for developing Machine Learning methods for network protocol evaluation with the possibility of extension to 3 years.

This is a fantastic opportunity for a high-achieving postdoctoral researcher to join a world-leading research group in Computer Security and Machine Learning, a Computer Science department ranked 48th in the world, and The University of Adelaide, ranked in the top 1% of universities worldwide.

You will work on a research program to address the problems in software-based emulation and assessment of networking protocols to support automated dynamic analysis of networking protocols.

The project aims to develop and implement methods to automatically find vulnerabilities and attack strategies in common Internet routing protocols. You will be involved in the development of theory, techniques (such as fuzzing and machine learning methods) and tools for discovering bugs and vulnerabilities in protocol implementations.

You will work with a team of researchers from the University of Adelaide's School of Computer Science and the Australian Institute for Machine Learning, the University of New South Wales, CSIRO's Data61, and the Defence Science and Technology Group (DSTG).

In this role you will have the option to pursue one or more of the following:

This is an outstanding opportunity to advance your career in cyber security, network security, computer security, software engineering and machine learning whilst exploring the area of large scale, automated, dynamic analysis of networking software with three world-class institutions in a world-leading environment.

The University of Adelaide is a member of Australia's prestigious Group of Eight research-intensive universities and ranks inside the world's top 100. In the Australian Government's 2018 Excellence in Research for Australia (ERA) assessment, 100% of University of Adelaide research was rated world-class or above, with work in 41 distinct fields achieving the highest possible rating of well above world-standard. This included Artificial Intelligence and Image Processing, and Electrical and Electronic Engineering.

Our world-renowned researchers have established a culture of innovation and a strong track record of publication in the top venues, particularly in the areas of machine learning, computer vision and security. We're committed to delivering fundamental and commercially oriented research that's highly valued by our local and global communities. Here you'll work in one of the world's most talented and creative machine learning teams, with constant research-engineering collaboration. You'll use state-of-the-art technology and you'll be based in the heart of one of the world's top 10 most liveable cities.

To be successful you will need

Level A

Level B (in addition to the above)

Enjoy an outstanding career environment

The University of Adelaide is a uniquely rewarding workplace. The size, breadth and quality of our education and research programs - including significant industry, government and community collaborations - offers you vast scope and opportunity for a long, fulfilling career.

It also enables us to attract high-calibre people in all facets of our operations, ensuring you will be surrounded by talented colleagues, many world-leading. Our work's cutting-edge nature - not just in your own area, but across virtually the full spectrum of human endeavour - provides a constant source of inspiration.

Our culture is one that welcomes all and embraces diversity, consistent with our Staff Values and Behaviour Framework and our Values of integrity, respect, collegiality, excellence and discovery. We firmly believe that our people are our most valuable asset, so we work to grow and diversify the skills, knowledge and capability of all our staff.

We embrace flexibility as a key principle to allow our people to manage the changing demands of work, personal and family life. Flexible working arrangements are on offer for all roles at the University.

In addition, we offer a wide range of attractive staff benefits. These include: salary packaging; flexible work arrangements; high-quality professional development programs and activities; and an on-campus health clinic, gym and other fitness facilities.

Learn more at: adelaide.edu.au/jobs

Your faculty's broader role

The Faculty of Sciences, Engineering and Technology is a multidisciplinary hub of cutting-edge teaching and research. Many of its academic staff are world leaders in their fields and graduates are highly regarded by employers. The Faculty actively partners with innovative industries to solve problems of global significance.

Learn more at: set.adelaide.edu.au

If you have the talent, we'll give you the opportunity. Together, let's make history.

Click on the Apply Now button to be taken through to the online application form. Please ensure you submit a cover letter, resume, and upload a document that includes your responses to all of the selection criteria for the position as contained in the position description or selection criteria document.

Applications close 11:55 pm, 12 June 2022.

For further information

For a confidential discussion regarding this position, contact:

Damith Ranasinghe
Associate Professor, School of Computer Science
P: +61 (8) 8313-0066
E: damith.ranasinghe@adelaide.edu.au

You'll find the full selection criteria below. (If no links appear, try viewing on another device.)

The University of Adelaide is an Equal Employment Opportunity employer. Women and Aboriginal and Torres Strait Islander people who meet the requirements of this position are strongly encouraged to apply.

See the original post here:
Post-doctoral Research Fellow (A/B) in Machine Learning job with UNIVERSITY OF ADELAIDE | 293182 - Times Higher Education

Read More..

Using machine learning to predict COVID-19 infection and severity risk among 4510 aged adults: a UK Biobank cohort study | Scientific Reports -…

Study design and participants

This retrospective study involved the UK Biobank cohort12. UK Biobank consists of approximately 500,000 people now aged 50 to 84 years (mean age = 69.4 years). Baseline data were collected in 2006-2010 at 22 centers across the United Kingdom13,14. Summary data are listed in Table 1. This research involved deidentified epidemiological data. All UK Biobank participants gave written, informed consent. Ethics approval for the UK Biobank study was obtained from the National Health Service Health Research Authority North West-Haydock Research Ethics Committee (16/NW/0274), in accordance with relevant guidelines and regulations from the Declaration of Helsinki. All analyses were conducted in line with UK Biobank requirements.

The following categories of predictors were downloaded: (1) demographics; (2) health behaviors and long-term disability or illness status; (3) anthropometric and bioimpedance measures of fat, muscle, or water content; (4) pulse and blood pressure; (5) a serum panel of thirty biochemistry markers commonly collected in a clinic or hospital setting; and (6) a complete blood count with a manual differential.

These factors included participant age in years at baseline, sex, education qualifications, ethnicity, and Townsend Deprivation Index. Sex was coded as 0 for female and 1 for male. For education, higher scores roughly correspond to progressively more skilled trade/vocational or academic training. Ethnicity was coded as UK citizens who identified as White, Black/Black British, or Asian/Asian British. The Townsend index15 is a standardized score indicating relative degree of deprivation or poverty based on permanent address.

This category consisted of self-reported alcohol status, smoking status, a subjective health rating on a 1-4 Likert scale (Excellent to Poor), and whether the participant had a self-described long-term medical condition. As noted in Table 1, 48.4% of participants indicated having such an ailment. We independently confirmed self-reported data with ICD-10 codes recorded at hospital. These conditions included all-cause dementia and other neurological disorders, various cancers, major depressive disorder, cardiovascular or cerebrovascular diseases and events, cardiometabolic diseases (e.g., type 2 diabetes), renal and pulmonary diseases, and other so-called pre-existing conditions.

The first automated reading of pulse, diastolic and systolic blood pressure at the baseline visit were used.

Anthropometric measures of adiposity (Body Mass Index, waist circumference) were derived as described16. Data also included bioelectrical impedance metrics that estimate central body cavity (i.e., trunk) and whole body fat mass, fat-free muscle mass, or water content17.

Serum biomarkers were assayed from baseline samples as described18. Briefly, using immunoassay or clinical chemistry devices, spectrophotometry was used to initially quantify values for 34 biochemistry analytes. UK Biobank deemed 30 of these markers to be suitably robust. We rejected a further 4 markers due to data missingness > 70% (estradiol, rheumatoid factor), or because there was strong overlap with multicollinear variables that had more stable distributions or trait-like qualities (glucose rejected vs. glycated hemoglobin/HbA1c; direct bilirubin rejected vs. total bilirubin). A complete blood count with a manual differential was separately processed for red and white blood cell counts, as well as white cell sub-types.

As described (http://biobank.ctsu.ox.ac.uk/crystal/crystal/docs/infdisease.pdf), among 9695 randomized UK Biobank participants selected from the full 500,000 participant cohort, baseline serum was thawed and pathogen-specific assays run in parallel using flow cytometry on a Luminex bead platform19.

Here, the goal of the multiplex serology panel was to measure multiple antibodies against several antigens for different pathogens, reducing noise and estimating the prevalence of prior infection and seroconversion in at least UK Biobank. All measures were initially confirmed in serum samples using gold-standard assays with median sensitivity and specificity of 97.0% and 93.7%, respectively. Antibody load for each pathogen-specific antigen was quantified using median fluorescence intensity (MFI). Because seropositivity is difficult to assess for several pathogens, we did not use pathogen prevalence as a predictor in models.

Table 2 shows the selected pathogens, their respective antigens, estimated prevalence of each pathogen based roughly on antibody titers, and assay values. This array ranges from delta-type retroviruses like human T-cell lymphotropic virus 1 that are rare (<1%) to human herpesviruses 6 and 7 that have an estimated prevalence of more than 90%.

Our study was based on COVID PCR test data available from March 16th to May 19th 2020. Specifically, we used the May 26th, 2020 tranche of COVID-19 polymerase chain reaction (PCR) data from Public Health England. There were 4510 unique participants that had 7539 individual tests administered, hereafter called test cases. To characterize each test case, UK Biobank had a binary variable for test positivity (result) and a separate binary variable for test location (origin). For the positivity variable, a COVID-19 test was coded as negative (0) or positive (1). The second binary variable represented whether the COVID-19 test occurred through a setting that was out-patient (0) or in-patient at hospital (1). As a proxy for COVID-19 severity later verified by electronic health records and death certificates20, and as done in other UK Biobank reports21, a test case first needed to be positive for COVID-19 (i.e., the test had a 1 value for the positivity variable). Next, if the positive test case occurred in an out-patient setting the infection was considered mild (i.e., 0), whereas for in-patient hospitalization it was considered severe (i.e., 1). Thus, two separate sets of analyses were run to predict: (1) COVID-19 positivity; and (2) COVID-19 severity.

For a more technical description of the specific machine learning algorithm used to predict test case outcomes, see Supplementary Text 1. Supplementary Text 2 has an in-depth description and analysis of within-subject variation for outcome measures and the number of test cases done per participant. Briefly, this variability was modest and had no significant impact on classifier model performance. SPSS 27 was used for all analyses and alpha was set at 0.05. Preliminary findings suggested that baseline serology data performed well in classifier models, despite a limited number of participants with serology. To determine if this serology sub-group was noticeably different from the full sample, Mann-Whitney U and Kruskal-Wallis tests were done (alpha = 0.05). Hereafter, separate sets of classification analyses were performed for: (1) the full cohort; and (2) the sub-group of participants that had serology data. In other words, due to the imbalance of sample sizes and by definition the absence or presence of serology data, classifier performance in the serology sub-group was never statistically compared to the full cohort.

Next, linear discriminant analysis (LDA) was used in two separate sets of analyses to predict either: (1) COVID-19 diagnosis (negative vs. positive); or (2) COVID-19 infection severity (mild vs. severe). Again, for a given test case, COVID-19 severity was examined only among participants who tested positive for COVID-19. LDA is a regression-like classification technique that finds the linear combination of predictors that maximally distinguishes between groups of interest. To determine how useful a given predictor or related group of predictors (e.g., demographics) was for classification, simple forced-entry models were run first. Subsequently, to derive best-fit, robust models of the data, stepwise entry (Wilks' Lambda, F-value entry = 3.84) was used to exclude predictors that did not account for significant unique variance in the classification model. This data reduction step is critical because LDA can lead to model overfitting when there are too many predictors relative to observations [22,23], which for our purposes are COVID-19 test cases. Finally, because multiple test cases could occur for the same participant, the assumption of independence could be violated. To guard against this problem, we used Mundry and Sommer's permutation LDA approach. Specifically, for each LDA model, permutation testing (1000 iterations, P < 0.05) was done by randomizing participants across groupings of test cases to confirm the robustness of the original model [24].
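The analyses above were run in SPSS; as a rough approximation in Python, scikit-learn's LDA can be combined with permutation testing. Note that scikit-learn has no stepwise Wilks' Lambda entry, and permutation_test_score shuffles outcome labels rather than randomizing participants across groupings of test cases, so this is a simplified sketch with placeholder data, not the authors' exact procedure.

    import numpy as np
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
    from sklearn.model_selection import permutation_test_score

    # Placeholder predictor matrix (test cases x predictors) and
    # binary outcome; the real models used UK Biobank variables.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(500, 10))
    y = rng.integers(0, 2, size=500)

    # Forced-entry LDA with 1000 label permutations (P < 0.05).
    lda = LinearDiscriminantAnalysis()
    auc, perm_scores, p_value = permutation_test_score(
        lda, X, y, scoring="roc_auc", n_permutations=1000, random_state=0
    )
    print(f"AUC = {auc:.3f}, permutation p = {p_value:.3f}")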

LDA model overfitting can also occur when there is a sample size imbalance. Because there were many more negative than positive COVID-19 test cases in the full sample (5329 vs. 2210), the negative test group was undersampled. Specifically, a random number generator was used to discard 2500 negative test cases at random, such that the proportion of negative to positive tests became 55% to 45% instead of 70.6% to 29.4%. Results without undersampling were similar (data not shown). No such imbalance was seen for COVID-19 severity in the full sample or for the serology sub-group. A typical holdout method of 70% training and 30% testing was used for the classifier [25]. Finally, a two-layer non-parametric approach was used to determine model significance and the estimated fit of one or more predictors. First, bootstrapping [26] (95% confidence interval, 1000 iterations) was done to derive estimates robust to any violations of parametric assumptions. Next, leave-one-out cross-validation [22] was done with bootstrap-derived estimates to ensure that the models themselves were robust. Collectively, the stepwise LDA models kept estimation bias of coefficients low because most predictors were discarded before models were generated from the remaining predictors.
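A minimal sketch of the undersampling and 70%/30% holdout described above, mirroring the reported class counts (5329 negative vs. 2210 positive test cases) but using placeholder predictors; assuming Python with scikit-learn:

    import numpy as np
    from sklearn.model_selection import train_test_split

    # Placeholder data with the class counts reported above.
    rng = np.random.default_rng(42)
    X = rng.normal(size=(7539, 10))
    y = np.concatenate([np.zeros(5329, dtype=int), np.ones(2210, dtype=int)])

    # Discard 2500 negative test cases at random, moving the class
    # balance from roughly 70.6%/29.4% to roughly 55%/45%.
    negative_idx = np.flatnonzero(y == 0)
    drop = rng.choice(negative_idx, size=2500, replace=False)
    keep = np.setdiff1d(np.arange(len(y)), drop)
    X_bal, y_bal = X[keep], y[keep]

    # 70% training / 30% testing holdout.
    X_train, X_test, y_train, y_test = train_test_split(
        X_bal, y_bal, train_size=0.70, stratify=y_bal, random_state=42
    )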

For each LDA classification model, outcome threshold metrics included: specificity (i.e., true negatives correctly identified), sensitivity (i.e., true positives correctly identified), and the geometric mean (i.e., how well the model predicted both true negatives and true positives). The area under the curve (AUC) with a 95% confidence interval (CI) was reported to show how well a given model could distinguish between a negative and a positive COVID-19 test result and, separately for COVID-19-positive test cases, whether the disease was mild or severe. Receiver operating characteristic (ROC) curves plotted sensitivity against 1 − specificity to better visualize results for sets of predictors and for a final stepwise model. For stepwise models, the Wilks' Lambda statistic and standardized coefficients are reported to show how important a given predictor was for the model; a lower Wilks' Lambda corresponds to a stronger influence on the canonical classifier.
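As a sketch of how these threshold metrics relate to each other, assuming Python with scikit-learn and placeholder labels and classifier scores in place of real model output:

    import numpy as np
    from sklearn.metrics import confusion_matrix, roc_auc_score, roc_curve

    # Placeholder true labels and classifier scores for six test cases.
    y_true = np.array([0, 0, 1, 1, 0, 1])
    scores = np.array([0.20, 0.40, 0.70, 0.90, 0.35, 0.80])
    y_pred = (scores >= 0.5).astype(int)

    # Sensitivity, specificity, and their geometric mean.
    tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
    sensitivity = tp / (tp + fn)   # true positives correctly identified
    specificity = tn / (tn + fp)   # true negatives correctly identified
    g_mean = np.sqrt(sensitivity * specificity)

    # AUC, plus the ROC curve points (sensitivity vs. 1 - specificity).
    auc = roc_auc_score(y_true, scores)
    fpr, tpr, thresholds = roc_curve(y_true, scores)
    print(f"sens={sensitivity:.2f} spec={specificity:.2f} "
          f"g-mean={g_mean:.2f} AUC={auc:.2f}")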

Ethics approval for the UK Biobank study was obtained from the National Health Service Health Research Authority North West – Haydock Research Ethics Committee (16/NW/0274). All analyses were conducted in line with UK Biobank requirements.

See the rest here:
Using machine learning to predict COVID-19 infection and severity risk among 4510 aged adults: a UK Biobank cohort study | Scientific Reports -...

Read More..

Research Assistant (Bigdata, Machine Learning, and IoT) job with UNITED ARAB EMIRATES UNIVERSITY | 292997 – Times Higher Education

Job Description

The Electrical Engineering Department at the United Arab Emirates University is seeking a research assistant in Bigdata analytics with good exposure to machine learning and the Internet of Things (IoT). The main task is the programming and integration of software components in our newly designed Bigdata platform. Basic knowledge of wireless communication networks is a plus. The candidate should have a good command of Python programming and be familiar with integrating Bigdata packages and machine learning platforms. The preferred candidate should also have demonstrated experience with the Linux operating system and IoT connectivity.

Minimum Qualification

An appropriate educational degree supplemented with documented evidence to support the following:

Preferred Qualification

Division: College of Engineering (COE)
Department: As. Dean for Research & Grad. Std. - COE
Job Close Date: 31-08-2022
Job Category: Academic - Research Assistant

Visit link:
Research Assistant (Bigdata, Machine Learning, and IoT) job with UNITED ARAB EMIRATES UNIVERSITY | 292997 - Times Higher Education

Read More..

Beacon Biosignals announces partnership with Stratus to advance at-home brain monitoring and machine learning-enabled neurodiagnostics – BioSpace

Collaboration will enable AI-powered decentralized clinical trials

BOSTON, May 10, 2022 /PRNewswire/ -- Beacon Biosignals, which applies AI to EEG to unlock precision medicine for brain conditions, today announced a partnership with Stratus Research Labs, the nation's leading provider of EEG services, to enable expanded clinical trial service capabilities by leveraging Beacon's machine learning neuroanalytics platform.

EEG is standard of care in the clinical diagnosis and management of many neurologic diseases and sleep disorders, yet features of clinical significance often are difficult to extract from EEG data. Broader adoption of EEG technology has been further limited by labor-intensive workflows and variability in clinician expert interpretation. By linking their platforms, Beacon and Stratus will unlock AI-powered at-home clinical trials, addressing these challenges head-on.

"The benefits of widely incorporating EEG data into pharmaceutical trials has been desired for years, but the challenge of uniformly capturing and interpreting the data has been an issue," said Charlie Alvarez, chief executive officer for Stratus. "Stratus helps solve data capture issues by providing accessible, nationwide testing services that reduce the variability in data collection and help ensure high-quality data across all sites. Stratus is proud to partner with Beacon and its ability to complete the equation by providing algorithms to ensure the quality of EEG interpretations."

Stratus offers a wide variety of EEG services, including monitored long-term video studies and routine EEGs conducted in the hospital, clinic, and in patients' homes. Stratus has a strong track record of high-quality data acquisition, enabled by an industry-leading pool of registered EEG technologists and a national footprint for EEG deployment logistics. The announced agreement establishes Stratus as a preferred data acquisition partner for Beacon's clinical trial and neurobiomarker discovery efforts using Beacon's analytics platform.

"Reliable and replicable quantitative endpoints help drive faster, better-powered trials," said Jacob Donoghue, MD, PhD, co-founder of Beacon Biosignals. "A barrier to their development, along with performing the necessary analysis, can often be the acquisition of quality EEG at scale. Partnering with Stratus and benefiting from its infrastructure and platform eliminates that hurdle and paves the way toward addressing the unmet need for endpoints, safety tools and computational diagnostics."

Beacon's platform provides an architectural foundation for the discovery of robust quantitative neurobiomarkers that can subsequently be deployed for patient stratification or automated safety or efficacy monitoring in clinical trials. The powerful and validated algorithms developed by Beacon's machine learning teams can replicate the consensus interpretation of multiple trained epileptologists while exceeding human capabilities over many hours or days of recording. These algorithms can be focused on therapeutic areas such as neurodegenerative disorders, epilepsy, sleep disorders, and mental illness. For example, Beacon is currently assessing novel EEG signatures in Alzheimer's disease patients to identify which patients may or may not benefit from a specific type of therapy.

"This collaboration will enable at-home studies for diseases like Alzheimer's," Donoghue said. "It has traditionally been difficult to obtain clinical-grade EEG for these patients at the scale required for phase 3 and phase 4 clinical trials. Stratus' extensive expertise in scaling EEG operations in at-home settings unlocks real opportunities to harness brain data to evaluate treatment efficacy."

About Beacon Biosignals
Beacon's machine learning platform for EEG enables and accelerates new treatments that transform the lives of patients with neurological, psychiatric or sleep disorders. Through novel machine learning algorithms, large clinical datasets, and advances in software engineering, Beacon Biosignals empowers biopharma companies with unparalleled tools for efficacy monitoring, patient stratification, and clinical trial endpoints from brain data. For more information, visit https://beacon.bio/. For careers, visit https://beacon.bio/careers; for partnership inquiries, visit https://beacon.bio/contact. Follow us on Twitter (@Biosignals) or LinkedIn (https://www.linkedin.com/company/beacon-biosignals).

About Stratus
Stratus is the nation's leading provider of EEG solutions, including ambulatory in-home video EEG. The company has served more than 80,000 patients across the U.S. Stratus offers technology, services, and proprietary software solutions to help neurologists accurately and quickly diagnose their patients with epilepsy and other seizure-like disorders. Stratus also provides mobile cardiac telemetry to support the diagnostic testing needs of the neurology community. To learn more, visit http://www.stratusneuro.com.

MEDIA CONTACT
Megan Moriarty
Amendola Communications for Beacon Biosignals
913.515.7530
mmoriarty@acmarketingpr.com

View original content to download multimedia: https://www.prnewswire.com/news-releases/beacon-biosignals-announces-partnership-with-stratus-to-advance-at-home-brain-monitoring-and-machine-learning-enabled-neurodiagnostics-301543440.html

SOURCE Beacon Biosignals

Excerpt from:
Beacon Biosignals announces partnership with Stratus to advance at-home brain monitoring and machine learning-enabled neurodiagnostics - BioSpace

Read More..