
Competitive programming with AlphaCode – DeepMind

Solving novel problems and setting a new milestone in competitive programming.

Creating solutions to unforeseen problems is second nature in human intelligence, a result of critical thinking informed by experience. The machine learning community has made tremendous progress in generating and understanding textual data, but advances in problem solving remain limited to relatively simple maths and programming problems, or else retrieving and copying existing solutions. As part of DeepMind's mission to solve intelligence, we created a system called AlphaCode that writes computer programs at a competitive level. AlphaCode achieved an estimated rank within the top 54% of participants in programming competitions by solving new problems that require a combination of critical thinking, logic, algorithms, coding, and natural language understanding.

In our preprint, we detail AlphaCode, which uses transformer-based language models to generate code at an unprecedented scale, and then smartly filters to a small set of promising programs.
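The generate-then-filter idea can be illustrated in miniature: sample many candidate programs, then keep only those that reproduce the problem's example input/output pairs. A toy Python sketch (the "candidates" below are hand-written functions standing in for model samples, not anything from AlphaCode itself):

```python
def filter_candidates(candidates, examples):
    """Keep only candidate programs that pass every example test."""
    promising = []
    for program in candidates:
        try:
            if all(program(inp) == out for inp, out in examples):
                promising.append(program)
        except Exception:
            # Candidates that crash on an example are discarded.
            continue
    return promising

# Hand-written stand-ins for sampled programs, for the task "double the input".
candidates = [
    lambda x: x + x,   # correct
    lambda x: x * x,   # coincidentally right on x=2, wrong on x=5
    lambda x: x // 0,  # crashes
]
examples = [(2, 4), (5, 10)]
promising = filter_candidates(candidates, examples)
```

Filtering against example tests only narrows the pool; the real system still has to cluster and select among survivors, since passing the examples does not guarantee passing hidden tests.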

We validated our performance using competitions hosted on Codeforces, a popular platform which hosts regular competitions that attract tens of thousands of participants from around the world who come to test their coding skills. We selected for evaluation 10 recent contests, each newer than our training data. AlphaCode placed at about the level of the median competitor, marking the first time an AI code generation system has reached a competitive level of performance in programming competitions.

To help others build on our results, we're releasing our dataset of competitive programming problems and solutions on GitHub, including extensive tests to ensure that programs passing these tests are correct, a critical feature current datasets lack. We hope this benchmark will lead to further innovations in problem solving and code generation.


Using Deep Learning to Find Genetic Causes of Mental Health Disorders in an Understudied Population – Neuroscience News

Summary: A new deep learning algorithm that looks for the burden of genomic variants is 70% accurate at identifying specific mental health disorders within the African-American community.

Source: CHOP

Minority populations have been historically under-represented in existing studies addressing how genetic variations may contribute to a variety of disorders. A new study from researchers at Children's Hospital of Philadelphia (CHOP) shows that a deep learning model has promising accuracy when helping to diagnose a variety of common mental health disorders in African American patients.

This tool could help distinguish between disorders as well as identify multiple disorders, fostering early intervention with better precision and allowing patients to receive a more personalized approach to their condition.

The study was recently published in the journal Molecular Psychiatry.

Properly diagnosing mental disorders can be challenging, especially for young toddlers who are unable to complete questionnaires or rating scales. This challenge has been particularly acute in understudied minority populations. Past genomic research has found several genomic signals for a variety of mental disorders, with some serving as potential therapeutic drug targets.

Deep learning algorithms have also been used to successfully diagnose complex diseases like attention deficit hyperactivity disorder (ADHD). However, these tools have rarely been applied in large populations of African American patients.

In a unique study, the researchers generated whole genome sequencing data from blood samples of 4,179 African American patients, including 1,384 who had been diagnosed with at least one mental disorder. The study focused on eight common mental disorders: ADHD, depression, anxiety, autism spectrum disorder, intellectual disabilities, speech/language disorder, developmental delays, and oppositional defiant disorder (ODD).

The long-term goal of this work is to learn more about specific risks for developing certain diseases in African American populations and how to potentially improve health outcomes by focusing on more personalized approaches to treatment.

"Most studies focus only on one disease, and minority populations have been very under-represented in existing studies that utilize machine learning to study mental disorders," said senior author Hakon Hakonarson, MD, PhD, Director of the Center for Applied Genomics at CHOP.

"We wanted to test this deep learning model in an African American population to see whether it could accurately differentiate mental disorder patients from healthy controls, and whether we could correctly label the types of disorders, especially in patients with multiple disorders."

The deep learning algorithm looked for the burden of genomic variants in coding and non-coding regions of the genome. The model demonstrated over 70% accuracy in distinguishing patients with mental disorders from the control group. The deep learning algorithm was equally effective in diagnosing patients with multiple disorders, with the model providing exact diagnostic matches in approximately 10% of cases.
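The two multi-label metrics mentioned here, exact diagnostic match and per-label error (reported in the paper's abstract as a Hamming loss), can be illustrated with a small self-contained sketch; the toy labels below are invented for illustration and are not study data:

```python
def exact_match_ratio(y_true, y_pred):
    """Fraction of patients whose full set of predicted diagnoses is exactly right."""
    hits = sum(t == p for t, p in zip(y_true, y_pred))
    return hits / len(y_true)

def hamming_loss(y_true, y_pred):
    """Average fraction of individual disorder labels that are predicted wrongly."""
    errors = sum(
        sum(ti != pi for ti, pi in zip(t, p))
        for t, p in zip(y_true, y_pred)
    )
    return errors / (len(y_true) * len(y_true[0]))

# Toy labels: 4 patients, 3 disorders each (1 = diagnosed).
y_true = [(1, 0, 0), (1, 1, 0), (0, 0, 1), (0, 0, 0)]
y_pred = [(1, 0, 0), (1, 0, 0), (0, 0, 1), (0, 1, 0)]
```

This pairing explains how a model can score low on exact matches while still being useful: most individual labels can be right even when the full diagnostic profile is not.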

The model also successfully identified multiple genomic regions that were highly enriched for mental disorders, meaning they were more likely to be involved in the development of these disorders. The biological pathways involved included ones associated with immune responses, antigen and nucleic acid binding, a chemokine signaling pathway, and guanine nucleotide-binding protein receptors.

However, the researchers also found that variants in regions that did not code for proteins seemed to be implicated in these disorders at higher frequency, which means they may serve as alternative markers.

"By identifying genetic variants and associated pathways, future research aimed at characterizing their function may provide mechanistic insight as to how these disorders develop," Hakonarson said.

Author: Press Office
Source: CHOP
Contact: Press Office, CHOP
Image: The image is in the public domain

Original Research: Open access. "Application of deep learning algorithm on whole genome sequencing data uncovers structural variants associated with multiple mental disorders in African American patients" by Yichuan Liu et al. in Molecular Psychiatry.

Abstract

Application of deep learning algorithm on whole genome sequencing data uncovers structural variants associated with multiple mental disorders in African American patients

Mental disorders present a global health concern, while the diagnosis of mental disorders can be challenging. The diagnosis is even harder for patients who have more than one type of mental disorder, especially for young toddlers who are not able to complete questionnaires or standardized rating scales for diagnosis. In the past decade, multiple genomic association signals have been reported for mental disorders, some of which present attractive drug targets.

Concurrently, machine learning algorithms, especially deep learning algorithms, have been successful in the diagnosis and/or labeling of complex diseases, such as attention deficit hyperactivity disorder (ADHD) or cancer. In this study, we focused on eight common mental disorders, including ADHD, depression, anxiety, autism, intellectual disabilities, speech/language disorder, delays in developments, and oppositional defiant disorder in the ethnic minority of African Americans.

Blood-derived whole genome sequencing data from 4179 individuals were generated, including 1384 patients with the diagnosis of at least one mental disorder. The burden of genomic variants in coding/non-coding regions was applied as feature vectors in the deep learning algorithm. Our model showed ~65% accuracy in differentiating patients from controls. Ability to label patients with multiple disorders was similarly successful, with a hamming loss score less than 0.3, while exact diagnostic matches are around 10%. Genes in genomic regions with the highest weights showed enrichment of biological pathways involved in immune responses, antigen/nucleic acid binding, chemokine signaling pathway, and G-protein receptor activities.

A noticeable fact is that variants in non-coding regions (e.g., ncRNA, intronic, and intergenic) performed equally well as variants in coding regions; however, unlike coding region variants, variants in non-coding regions do not express genomic hotspots whereas they carry much more narrow standard deviations, indicating they probably serve as alternative markers.


Artificial Intelligence Creeps on to the African Battlefield – Brookings Institution

Even as the world's leading militaries race to adopt artificial intelligence in anticipation of future great power war, security forces in one of the world's most conflict-prone regions are opting for a more measured approach. In Africa, AI is gradually making its way into technologies such as advanced surveillance systems and combat drones, which are being deployed to fight organized crime, extremist groups, and violent insurgencies. Though the long-term potential for AI to impact military operations in Africa is undeniable, AI's impact on organized violence has so far been limited. These limits reflect both the novelty and constraints of existing AI-enabled technology.

Artificial intelligence and armed conflict in Africa

Artificial intelligence (AI), at its most basic, leverages computing power to simulate human behavior that requires intelligence. Artificial intelligence is not a military technology like a gun or a tank. It is rather, as the University of Pennsylvania's Michael Horowitz argues, a general-purpose technology with a multitude of applications, like the internal combustion engine, electricity, or the internet. And as AI applications proliferate to military uses, it threatens to change the nature of warfare. According to the ICRC, AI and machine-learning systems could have profound implications for the role of humans in armed conflict, especially in relation to: increasing autonomy of weapon systems and other unmanned systems; new forms of cyber and information warfare; and, more broadly, the nature of decision-making.

In at least two respects, AI is already affecting the dynamics of armed conflict and violence in Africa. First, AI-driven surveillance and smart policing platforms are being used to respond to attacks by violent extremist groups and organized criminal networks. Second, the development of AI-powered drones is beginning to influence combat operations and battlefield tactics.

AI is perhaps most widely used in Africa in areas with high levels of violence to increase the capabilities and coordination of law enforcement and domestic security services. For instance, fourteen African countries deploy AI-driven surveillance and smart-policing platforms, which typically rely on deep neural networks for image classification and a range of machine learning models for predictive analytics. In Nairobi, Chinese tech giant Huawei has helped build an advanced surveillance system, and in Johannesburg automated license plate readers have enabled authorities to track violent, organized criminals with suspected ties to the Islamic State. Although such systems have significant limitations (more on this below), they are proliferating across Africa.

AI-driven systems are also being deployed to fight organized crime. At Liwonde National Park in Malawi, park rangers use EarthRanger software, developed by the late Microsoft co-founder Paul Allen, to combat poaching using artificial intelligence and predictive analytics. The software detects patterns in poaching that the rangers might overlook, such as upticks in poaching during holidays and government paydays. A small, motion-activated poacher cam relies on an algorithm to distinguish between humans and animals and has contributed to at least one arrest. It's not difficult to imagine how such a system might be repurposed for counterinsurgency or armed conflict, with AI-enabled surveillance and monitoring systems deployed to detect and deter armed insurgents.

In addition to the growing use of AI within surveillance systems across Africa, AI has also been integrated into weapon systems. Most prominently, lethal autonomous weapons systems use real-time sensor data coupled with AI and machine learning algorithms to select and engage targets without further intervention by a human operator. Depending on how that definition is interpreted, the first use of a lethal autonomous weapon system in combat may have taken place on African soil in March 2020. That month, logistics units belonging to the armed forces of the Libyan warlord Khalifa Haftar came under attack by Turkish-made STM Kargu-2 drones as they fled Tripoli. According to a United Nations report, the Kargu-2 represented a lethal autonomous weapons system because it had been programmed to attack targets without requiring data connectivity between the operator and munition. Although other experts have instead classified the Kargu-2 as a loitering munition, its use in combat in northern Africa nonetheless points to a future where AI-enabled weapons are increasingly deployed in armed conflicts in the region.

Indeed, despite global calls for a ban on similar weapons, the proliferation of systems like the Kargu-2 is likely only beginning. Relatively low costs, tactical advantages, and the emergence of multiple suppliers have led to a booming market for low- and mid-tier combat drones, currently dominated by players including Israel, China, Turkey, and South Africa. Such drones, particularly Turkey's Bayraktar TB2, have been acquired and used by well over a dozen African countries.

While the current generation of drones by and large do not have AI-driven autonomous capabilities that are publicly acknowledged, the same cannot be said for the next generation, which are even less costly, more attritable, and use AI-assisted swarming technology to make themselves harder to defend against. In February, the South Africa-based Paramount Group announced the launch of its N-RAVEN UAV system, which it bills as a family of autonomous, multi-mission aerial vehicles featuring next-generation swarm technologies. The N-RAVEN will be able to swarm in units of up to twenty and is designed for technology transfer and portable manufacture within partner countries. These features are likely to be attractive to African militaries.

AIs limits, downsides, and risks

Though AI may continue to play an increasing role in the organizational strategies, intelligence-gathering capabilities, and battlefield tactics of armed actors in Africa and elsewhere, it is important to put these contributions in a broader perspective. AI cannot address the fundamental drivers of armed conflict, particularly the complex insurgencies common in Africa. African states and militaries may overinvest in AI, neglecting its risks and externalities, as well as the ways in which AI-driven capabilities may be mitigated or exploited by armed non-state actors.

AI is unlikely to have a transformative impact on the outbreak, duration, or mitigation of armed conflict in Africa, whose incidence has doubled over the past decade. Despite claims by its makers, there is little hard evidence linking the deployment of AI-powered smart cities with decreases in violence, including in Nairobi, where crime incidents have remained virtually unchanged since 2014, when the citys AI-driven systems first went online. The same is true of poaching. During the COVID-19 pandemic, fewer tourists and struggling local economies have fueled significant increases, overwhelming any progress that has resulted from governments adopting cutting-edge technology.

This is because, in the first place, armed conflict is a human endeavor, with many factors that influence its outcomes. Even the staunchest defenders of AI-driven solutions, such as Huawei Southern Africa Public Affairs Director David Lane, admit that they cannot address the underlying causes of insecurity such as unemployment or inequality: "Ultimately, preventing crime requires addressing these causes in a very local way." No AI algorithm can prevent poverty or political exclusion, disputes over land or national resources, or political leaders from making chauvinistic appeals to group identity. Likewise, the central problems with Africa's militaries (endemic corruption, human rights abuses, loyalties to specific leaders and groups rather than institutions and citizens, and a proclivity for ill-timed seizures of power) are not problems that artificial intelligence alone can solve.

In the second place, the aspects of armed conflict that AI seems most likely to disrupt (remote intelligence-gathering capabilities and air power) are technologies that enable armies to keep enemies at arm's length and win in conventional, pitched battles. AI's utility in fighting insurgencies, in which non-state armed actors conduct guerilla attacks and seek to blend in and draw support from the population, is more questionable. Winning in insurgencies requires a sustained on-the-ground presence to maintain order and govern contested territory. States cannot hope to prevail in such conflicts by relying on technology that effectively removes them from the fight.

Finally, the use of AI to fight modern armed conflict remains at a nascent stage. To date, the prevailing available evidence has documented how state actors are adopting AI to fight conflict, not how armed non-state actors are responding. Nevertheless, states will not be alone in seeking to leverage autonomous weapons. Former African service members speculate that it is only a matter of time before non-state actors in Africa deploy swarms or clusters of offensive drones, given their accessibility, low costs, and existing use in surveillance and smuggling. Rights activists have raised the alarm about the potential for small, cheap, swarming "slaughterbots" that use freely available AI and facial recognition systems to commit mass acts of terror. This particular scenario is controversial, but according to American University's Audrey Kurth Cronin, it is both technologically feasible and consistent with classic patterns of diffusion.

The AI armed conflict evolution

These downsides and risks suggest the continued diffusion of AI is unlikely to result in the revolutionary changes to armed conflict suggested by some of its more ardent proponents and backers. Rather, modern AI is perhaps best viewed as continuing and perhaps accelerating long-standing technological trends that have enhanced sensing capabilities and digitized and automated the operations and tactics of armed actors everywhere.

For all its complexity, AI is first and foremost a digital technology, its impact dependent on and difficult to disentangle from a technical triad of data, algorithms, and computing power. The impact of AI-powered surveillance platforms, from the EarthRanger software used at Liwonde to Huawei-supplied smart policing platforms, isn't just a result of machine-learning algorithms that enable human-like reasoning capabilities, but also of the ability to collect, store, process, collate, and manage vast quantities of data. Likewise, as pointed out by analysts such as Kelsey Atherton, the Kargu-2 used in Libya can be classified as an autonomous loitering munition like Israel's Harpy drone. The main difference between the Kargu-2 and the Harpy, which was first manufactured in 1989, is that where the former uses AI-driven image recognition, the latter uses electro-optical sensors to detect and home in on enemy radar emissions.

The diffusion of AI across Africa, like the broader diffusion of digital technology, is likely to be diverse and uneven. Africa remains the world's least digitized region. Internet penetration rates are low and likely to remain so in many of the most conflict-prone countries. In Somalia, South Sudan, Ethiopia, the Democratic Republic of Congo, and much of the Lake Chad Basin, internet penetration is below 20%. AI is unlikely to have much of an impact on conflict in regions where citizens leave little in the way of a digital footprint, and non-state armed groups control territory beyond the easy reach of the state.

Taken together, these developments suggest that AI will cause a steady evolution in armed conflict in Africa and elsewhere, rather than revolutionize it. Digitization and the widespread adoption of autonomous weapons platforms may extend the eyes and lengthen the fists of state armies. Non-state actors will adopt these technologies themselves and come up with clever ways to exploit or negate them. Artificial intelligence will be used in combination with equally influential but less flashy inventions such as the AK-47, the nonstandard tactical vehicle, and the IED to enable new tactics that take advantage of or exploit trends towards better sensing capabilities and increased mobility.

Incrementally and in concert with other emerging technologies, AI is transforming the tools and tactics of warfare. Nevertheless, experience from Africa suggests that humans will remain the main actors in the drama of modern armed conflict.

Nathaniel Allen is an assistant professor with the Africa Center for Strategic Studies at National Defense University and a Council on Foreign Relations term member. Marian Ify Okpali is a researcher on cyber policy and the executive assistant to the dean at the Africa Center for Strategic Studies at National Defense University. The opinions expressed in this article are those of the authors.

Microsoft provides financial support to the Brookings Institution, a nonprofit organization devoted to rigorous, independent, in-depth public policy research.


CEO of Alberta-based company says it’s time for Alberta, companies to invest in AI and machine learning – Edmonton Journal


Now is the time for Alberta-based companies and the province to invest more in AI and machine learning technology, said the CEO of an Edmonton company.


Cam Linke, CEO of the Alberta Machine Intelligence Institute (Amii), said it's a special time in AI and machine learning, with lots of advancements being made.

"This isn't just an academic thing; there is the ability and tools to be able to apply machine learning to a myriad of business problems," said Linke. "Right now, businesses don't have to make enormous investments upfront. They can make reasoned investments around a business plan that can have a meaningful business impact right now."

However, Linke said at the same time, the field is growing rapidly.

"It's kind of a special time where it's sitting right at the intersection of engineering, where it can be applied right now, and science, where the field's continuing to learn, grow and do new things," he said.


Linke said there is a carrot and a stick for regions and companies when it comes to machine learning, where the carrot is the opportunity to create business value and a competitive advantage in your industry.

"The stick of it is that if you're not, your competitor is," he said. "You kind of have to, not just because there's great opportunity there, but someone in your industry and one of your competitors is going to take advantage of this technology and they will have a competitive edge over you if you're not making that investment."

Linke added that Alberta is ahead of many provinces because the province has been investing in machine learning since 2002, and because of the federal government's Pan-Canadian AI Strategy, announced five years ago.


Amii is a non-profit that supports and invests in world-leading research and training primarily done at the University of Alberta. Linke said the company has partnered with more than 100 companies, from small start-ups to multi-nationals like Shell, to help in the AI and machine learning fields.

Linke said Amii has worked with companies on implementing things such as predictive maintenance, which predicts when a machine may fail and helps a company get ahead of repairs before a more expensive incident occurs. Another example is the machine learning and reinforcement learning used at a water treatment plant, optimizing the amount of water that can be treated while trying to reduce the amount of energy used.

Linke said Alberta is already seeing the impacts and work of more AI and machine learning being introduced.

"We're seeing it by the amount of investment by large companies in the area, the amount of investment in start-ups and the growth of start-ups in the area, and we're seeing it with the number of jobs and the number of people hired in the area," said Linke.

ktaniguchi@postmedia.com

twitter.com/kellentaniguchi



How to build healthcare predictive models using PyHealth? – Analytics India Magazine

Machine learning has been applied to many health-related tasks, such as the development of new medical treatments, the management of patient data and records, and the treatment of chronic diseases. To achieve success in such state-of-the-art applications, we must rely on the time-consuming process of model building and evaluation. To ease this burden, Yue Zhao et al. proposed PyHealth, a Python-based toolbox. As the name implies, the toolbox contains a variety of ML models and algorithms for working with medical data. In this article, we will go through this toolbox to understand how it works and where it applies.

Let's first discuss the use cases of machine learning in the healthcare industry.

Machine learning is being used in a variety of healthcare settings, from case management of common chronic conditions to leveraging patient health data in conjunction with environmental factors such as pollution exposure and weather.

Machine learning technology can assist healthcare practitioners in developing accurate medication treatments tailored to individual features by crunching enormous amounts of data. The following are some examples of applications that can be addressed in this segment:

The ability to swiftly and properly diagnose diseases is one of the most critical aspects of a successful healthcare organization. In high-need areas like cancer diagnosis and therapy, where hundreds of drugs are now in clinical trials, scientists and computationalists are entering the mix. One method combines cognitive computing with genetic tumour sequencing, while another makes use of machine learning to provide diagnosis and treatment in a range of fields, including oncology.

Medical imaging, and its ability to provide a complete picture of an illness, is another important aspect in diagnosing an illness. Deep learning is becoming more accessible as data sources become more diverse, and it may be used in the diagnostic process, therefore it is becoming increasingly important. Although these machine learning applications are frequently correct, they have some limitations in that they cannot explain how they came to their conclusions.

ML has the potential to identify new medications, with significant economic benefits for pharmaceutical companies, hospitals, and patients. Some of the world's largest technology companies, like IBM and Google, have developed ML systems to help patients find new treatment options. Precision medicine is a significant phrase in this area since it entails understanding the mechanisms underlying complex disorders and developing alternative therapeutic pathways.

Because of the high-risk nature of surgeries, we will always need human assistance, but machine learning has proved extremely helpful in the robotic surgery sector. The da Vinci robot, which allows surgeons to operate robotic arms in order to do surgery with great detail and in confined areas, is one of the most popular breakthroughs in the profession.

These hands are generally more accurate and steady than human hands. There are additional instruments that employ computer vision and machine learning to determine the distances between various body parts so that surgery can be performed properly.

Health data is typically noisy, complicated, and heterogeneous, resulting in a diverse set of healthcare modelling issues. For instance, health risk prediction is based on sequential patient data, disease diagnosis based on medical images, and risk detection based on continuous physiological signals.

Examples include continuous physiological signals such as the electroencephalogram (EEG) or electrocardiogram (ECG), and multimodal clinical notes (e.g., text and images). Despite their importance in healthcare research and clinical decision making, the complexity and variability of health data and tasks call for the long-overdue development of a specialized ML system for benchmarking predictive health models.

PyHealth is made up of three modules: data preprocessing, predictive modelling, and evaluation. Both computer scientists and healthcare data scientists are PyHealth's target users. They can run complicated machine learning pipelines on healthcare datasets in less than ten lines of code using PyHealth.

The data preprocessing module converts complicated healthcare datasets such as longitudinal electronic health records, medical pictures, continuous signals (e.g., electrocardiograms), and clinical notes into machine learning-friendly formats.

The predictive modelling module offers over 30 machine learning models, including known ensemble trees and deep neural network-based approaches, using a uniform yet flexible API geared for both researchers and practitioners.

The evaluation module includes a number of evaluation methodologies (for example, cross-validation and train-validation-test split) as well as prediction model metrics.
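As an illustration of the train-validation-test split such an evaluation module performs, here is a generic sketch (an illustrative implementation, not PyHealth's own code):

```python
import random

def train_val_test_split(records, val_frac=0.1, test_frac=0.2, seed=42):
    """Shuffle records, then carve off test and validation portions."""
    rng = random.Random(seed)  # fixed seed makes the split reproducible
    shuffled = list(records)
    rng.shuffle(shuffled)
    n_test = int(len(shuffled) * test_frac)
    n_val = int(len(shuffled) * val_frac)
    test = shuffled[:n_test]
    val = shuffled[n_test:n_test + n_val]
    train = shuffled[n_test + n_val:]
    return train, val, test

# Splitting 100 toy records yields 70 train, 10 validation, 20 test.
train, val, test = train_val_test_split(range(100))
```

Cross-validation generalizes the same idea by rotating which portion of the data is held out, so every record is used for both training and evaluation.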

There are five distinct advantages to using PyHealth. For starters, it contains more than 30 cutting-edge predictive health algorithms, including both traditional techniques like XGBoost and more recent deep learning architectures like autoencoders, convolutional based, and adversarial based models.

Second, PyHealth has a broad scope and includes models for a variety of data types, including sequence, image, physiological signal, and unstructured text data. Third, for clarity and ease of use, PyHealth includes a unified API, detailed documentation, and interactive examples for all algorithms; complex deep learning models can be implemented in less than ten lines of code.

Fourth, most models in PyHealth undergo cross-platform unit testing with continuous integration, code coverage, and code maintainability checks. Finally, for efficiency and scalability, parallelization is enabled in select modules (data preprocessing), and deep learning models benefit from fast GPU computation via PyTorch.

PyHealth is a Python 3 library built on NumPy, SciPy, scikit-learn, and PyTorch. As shown in the diagram below, PyHealth consists of three major modules. First, the data preprocessing module validates and converts user input into a format that learning models can understand;

Second, the predictive modelling module is made up of a collection of models organized by input data type into sequences, images, EEG, and text; for each data type, a set of dedicated learning models has been implemented. Third, the evaluation module can automatically infer the task type, such as multi-class classification, and conduct a comprehensive evaluation by task type.

Most learning models share the same interface, inspired by the scikit-learn API design and general deep learning practice: fit learns the weights and saves the necessary statistics from the training and validation data; load_model selects the model with the best validation accuracy; and inference predicts labels for incoming test data.

For quick data and model exploration, the framework includes a library of helper and utility functions (check_parameter, label_check, and partition_estimators). For example, label_check can inspect the data labels and automatically infer the task type, such as binary or multi-class classification.
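The shared fit / load_model / inference pattern can be sketched in plain Python. The class below is our own minimal illustration of the pattern, not PyHealth's actual implementation:

```python
class TinyMeanClassifier:
    """Illustrative estimator following a fit / load_model / inference pattern.

    Records one candidate "model" (here: a decision threshold) per trial
    and keeps the one that scores best on the validation data.
    """

    def __init__(self):
        self.checkpoints = []   # (validation_accuracy, threshold) pairs
        self.threshold = None

    def fit(self, X_train, y_train, X_val, y_val):
        # A real model would run gradient descent over epochs; here we just
        # try each training value as a threshold and record its validation
        # accuracy as a "checkpoint".
        for t in sorted(set(X_train)):
            acc = sum((x >= t) == y for x, y in zip(X_val, y_val)) / len(y_val)
            self.checkpoints.append((acc, t))
        return self

    def load_model(self):
        # Select the checkpoint with the best validation accuracy.
        best_acc, self.threshold = max(self.checkpoints)
        return best_acc

    def inference(self, X_test):
        return [x >= self.threshold for x in X_test]

clf = TinyMeanClassifier()
clf.fit([1, 2, 8, 9], [False, False, True, True], [2, 8], [False, True])
clf.load_model()
preds = clf.inference([0, 10])   # -> [False, True]
```

The same three calls, in the same order, are how a scikit-learn-style workflow typically proceeds regardless of the underlying model.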

PyHealth for model building

Below, we discuss how to use the framework's API. First, we install the package using pip.

! pip install pyhealth

Next, we can load the data from the repository itself. For that, we need to clone the repository; inside its datasets folder there is a variety of datasets, such as sequence-based and image-based data. We use the MIMIC dataset, which comes as a zip archive that must be extracted. Below is the snippet to clone the repository and unzip the data.

The unzipped files are saved in the current working directory in a folder named mimic. Next, to use this dataset, we load the sequence data generator function, which prepares the dataset for experimentation.
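The original snippet is not reproduced here, but the extraction step amounts to unzipping the archive with Python's standard zipfile module. A minimal sketch (the archive name is an assumption based on the text):

```python
import zipfile

def extract_archive(archive_path, dest="."):
    """Extract a zip archive (e.g. the cloned repo's mimic.zip) into dest
    and return the names of the extracted members."""
    with zipfile.ZipFile(archive_path) as zf:
        zf.extractall(dest)
        return zf.namelist()
```

Calling `extract_archive("mimic.zip")` would then leave the mimic folder in the current working directory, as described above.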

Now that the dataset is loaded, we can proceed with modelling as shown below.

Here is the result of fitting the model.

Through this article, we have discussed how machine learning can be used in the healthcare industry by surveying its various applications. As this domain is quite vast, with numerous applications, we have discussed a Python-based toolbox designed for building predictive models using various deep learning techniques, such as LSTM and GRU for sequence data and CNN for image-based data.

Read the original:
How to build healthcare predictive models using PyHealth? - Analytics India Magazine

Read More..

Founded by Ex-Uber Data Architect and Apache Hudi Creator, – GlobeNewswire

MENLO PARK, Calif., Feb. 02, 2022 (GLOBE NEWSWIRE) -- Today Onehouse, the first managed lakehouse company, emerged from stealth with its cloud-native managed service based on Apache Hudi that makes data lakes easier, faster and cheaper.

Data has become the driving force of innovation across nearly every industry in the world. Yet organizations still struggle to build and maintain data architectures that can economically scale at the fast-paced growth of their data. As the size of the data and the AI and machine learning (ML) workloads increase, their costs rise exponentially and they start to outgrow their data warehouses. To scale any further they turn to a data lake where they face a whole new set of complex challenges like constantly tuning data layouts, large-scale concurrency controls, fast data ingestion, data deletions and more.

Onehouse founder Vinoth Chandar faced these very challenges as he was building one of the largest data lakes in the world at Uber. A rapidly growing Uber needed the performance of a warehouse and the scale of a data lake, in near real-time to power AI/ML driven features like predicting ETAs, recommending eats and ensuring ride safety. He created Apache Hudi to implement a new path-breaking architecture where the core warehouse and database functionality was directly added to the data lake, today known as the lakehouse. Apache Hudi brings a state-of-the-art data lakehouse to life with advanced indexes, streaming ingestion services and data clustering/optimization techniques.

Apache Hudi is now widely adopted across the industry, used by startups and large enterprises alike, including Amazon, Walmart, Disney+ Hotstar, GE Aviation, Robinhood and TikTok, to build exabyte-scale data lakes in near-real-time at vastly improved price/performance. The broad adoption of Hudi has battle-tested and proven the foundational benefits of this open source project. Thousands of organizations from across the world have contributed to Hudi and the project has grown 7x in less than two years to nearly one million monthly downloads. At Uber, Hudi continues to ingest more than 500 billion records every day.

Zheng Shao and Mohammad Islam from Uber shared: "We started the Hudi project in 2016, and submitted it to Apache Incubator Project in 2019. Apache Hudi is now a Top-Level Project, with the majority of our Big Data on HDFS in Hudi format. This has dramatically reduced the computing capacity needs at Uber," as detailed in the Cost-Efficient Open Source Big Data Platform at Uber blog: https://eng.uber.com/cost-efficient-big-data-platform/.

Even with transformative technology like Apache Hudi, building a high quality data lake requires months of investment with scarce talent without which there are high risks that data is not fresh enough or the lake is unreliable or performs poorly.

Onehouse founder and CEO Vinoth Chandar said: "While a warehouse can just be used, a lakehouse still needs to be built. Having worked with many organizations on that journey for four years in the Apache Hudi community, we believe Onehouse will enable easy adoption of data lakes and future-proof the data architecture for machine learning/data science down the line."

Onehouse streamlines the adoption of the lakehouse architecture, by offering a fully-managed cloud-native service that quickly ingests, self-manages and auto-optimizes data. Instead of creating yet another vertically integrated data and query stack, it provides one interoperable and truly open data layer that accelerates workloads across all popular data lake query engines like Apache Spark, Trino, Presto and even cloud warehouses as external tables.

Leveraging unique capabilities of Apache Hudi, Onehouse opens the door for incremental data processing that is typically orders of magnitude faster than old-school batch processing. By combining a breakthrough technology and a fully-managed easy-to-use service, organizations can build data lakes in minutes, not months, realize large cost savings and still own their data in open formats, not locked into any individual vendors.

Industry Analysts on Onehouse

$8 Million in Seed Funding

Onehouse raised $8 million in seed funding co-led by Greylock and Addition. Onehouse plans to use the money for its managed lakehouse product and to further the research and development on Apache Hudi.

Greylock Partner Jerry Chen said: "The data lakehouse is the future of data lakes, providing customers the ease of use of a data warehouse with the cost and scale advantages of a data lake. Apache Hudi is already the de facto starting point for modern data lakes and today Onehouse makes data lakes easily accessible and usable by all customers."

Addition Investor Aaron Schildkrout said: "Onehouse is ushering in the next generation of data infrastructure, replacing expensive data ingestion and data warehousing solutions with a single lakehouse that's dramatically less costly, faster, more open and, now, also easier to use. Onehouse is going to make broadly accessible what has to date been a tightly held secret used by only the most advanced data teams."

About Onehouse

Onehouse provides a cloud-native managed lakehouse service that makes data lakes easier, faster and cheaper. Onehouse blends the ease of use of a warehouse with the scale of a data lake into a fully managed product. Engineers can build data lakes in minutes, process data in seconds and own data in open source formats, not locked away to individual vendors. Onehouse is founded by a former Uber data architect and the creator of Apache Hudi who pioneered the fundamental technology of the lakehouse. For more information, please visit https://onehouse.ai or follow @Onehousehq.

Media and Analyst Contact:
Amber Rowland
amber@therowlandagency.com
+1-650-814-4560

A photo accompanying this announcement is available at https://www.globenewswire.com/NewsRoom/AttachmentNg/aedd9404-e43b-49fb-9091-a4b0e57e7f39

See original here:
Founded by Ex-Uber Data Architect and Apache Hudi Creator, - GlobeNewswire

Read More..

Silicon Labs brings AI and Machine Learning to the Edge with Matter-ready platform – Design Products & Applications

31 January 2022

This new co-optimised hardware and software platform will help bring AI/ML applications and wireless high performance to battery-powered edge devices. Matter-ready, the ultra-low-power BG24 and MG24 families support multiple wireless protocols and incorporate PSA Level 3 Secure Vault protection, ideal for diverse smart home, medical and industrial applications. The SoC and software solution for the Internet of Things (IoT) announced today includes:

Two new families of 2.4 GHz wireless SoCs, which feature the industry's first integrated AI/ML accelerators, support for Matter, Zigbee, OpenThread, Bluetooth Low Energy, Bluetooth mesh, proprietary and multi-protocol operation, the highest level of industry security certification, ultra-low power capabilities and the largest memory and flash capacity in the Silicon Labs portfolio.

A new software toolkit designed to allow developers to quickly build and deploy AI and machine learning algorithms using some of the most popular tool suites like TensorFlow.

"The BG24 and MG24 wireless SoCs represent an awesome combination of industry capabilities including broad wireless multiprotocol support, battery life, machine learning, and security for IoT Edge applications," said Matt Johnson, CEO of Silicon Labs.

First integrated AI/ML acceleration improves performance and energy efficiency

IoT product designers see the tremendous potential of AI and machine learning to bring even greater intelligence to edge applications like home security systems, wearable medical monitors, sensors monitoring commercial facilities and industrial equipment, and more. But today, those considering deploying AI or machine learning at the edge are faced with steep penalties in performance and energy use that may outweigh the benefits.

The BG24 and MG24 alleviate those penalties as the first ultra-low powered devices with dedicated AI/ML accelerators built in. This specialised hardware is designed to handle complex calculations quickly and efficiently, with internal testing showing up to a 4x improvement in performance along with up to a 6x improvement in energy efficiency. Because the ML calculations are happening on the local device rather than in the cloud, network latency is eliminated for faster decision-making and actions.

The BG24 and MG24 families also have the largest Flash and random-access memory (RAM) capacities in the Silicon Labs portfolio. This means that the device can evolve for multi-protocol support, Matter, and trained ML algorithms for large datasets. PSA Level 3-Certified Secure Vault™, the highest level of security certification for IoT devices, provides the security needed in products like door locks, medical equipment, and other sensitive deployments where hardening the device from external threats is paramount.

To learn more about the capabilities of the BG24 and MG24 SoCs and view a demo on how to get started, register for the instructional Tech Talk "Unboxing the new BG24 and MG24 SoCs" here: https://www.silabs.com/tech-talks.

AI/ML software and Matter support help designers create innovative new applications

In addition to natively supporting TensorFlow, Silicon Labs has partnered with some of the leading AI and ML tools providers, like SensiML and Edge Impulse, to ensure that developers have an end-to-end toolchain that simplifies the development of machine learning models optimised for embedded deployments of wireless applications. Using this new AI/ML toolchain with Silicon Labs' Simplicity Studio and the BG24 and MG24 families of SoCs, developers can create applications that draw information from various connected devices, all communicating with each other using Matter to then make intelligent machine learning-driven decisions.

For example, in a commercial office building, many lights are controlled by motion detectors that monitor occupancy to determine if the lights should be on or off. However, when typing at a desk with motion limited to hands and fingers, workers may be left in the dark when motion sensors alone cannot recognise their presence. By connecting audio sensors with motion detectors through the Matter application layer, the additional audio data, such as the sound of typing, can be run through a machine-learning algorithm to allow the lighting system to make a more informed decision about whether the lights should be on or off.
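The fusion logic in the office-lighting example above can be sketched in a few lines of Python. The function name, classifier score, and confidence threshold are illustrative assumptions, not part of the Silicon Labs platform:

```python
def lights_should_be_on(motion_detected, typing_probability, threshold=0.8):
    """Fuse a motion sensor with an audio ML classifier's output.

    Keep the lights on if either the motion detector fires or the audio
    model is confident it hears occupancy sounds such as typing.
    """
    return motion_detected or typing_probability >= threshold

# Worker typing quietly: no motion, but the audio model is confident.
lights_should_be_on(False, 0.93)   # -> True
```

In practice the typing_probability would come from an ML model running on the device's AI/ML accelerator, with the two sensors exchanging data over the Matter application layer.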

ML computing at the edge enables other intelligent industrial and home applications, including sensor-data processing for anomaly detection, predictive maintenance, audio pattern recognition for improved glass-break detection, simple-command word recognition, and vision use cases like presence detection or people counting with low-resolution cameras.

Alpha program highlights variety of deployment options

More than 40 companies representing various industries and applications have already begun developing and testing this new platform solution in a closed Alpha program. These companies have been drawn to the BG24 and MG24 platforms by their ultra-low power and advanced features, including AI/ML capabilities and support for Matter. Global retailers are looking to improve the in-store shopping experience with more accurate asset tracking, real-time price updating, and other uses. Participants from the commercial building management sector are exploring how to make their building systems, including lighting and HVAC, more intelligent to lower owners' costs and reduce their environmental footprint. Finally, consumer and smart home solution providers are working to make it easier to connect various devices and expand the way they interact to bring innovative new features and services to consumers.

Silicon Labs' most capable family of SoCs

The single-die BG24 and MG24 SoCs combine a 78 MHz ARM Cortex-M33 processor, high-performance 2.4 GHz radio, industry-leading 20-bit ADC, an optimised combination of Flash (up to 1536 kB) and RAM (up to 256 kB), and an AI/ML hardware accelerator for processing machine learning algorithms while offloading the ARM Cortex-M33, so applications have more cycles to do other work. Supporting a broad range of 2.4 GHz wireless IoT protocols, these SoCs incorporate the highest security with the best RF performance/energy-efficiency ratio in the market.

Availability

EFR32BG24 and EFR32MG24 SoCs in 5 x 5mm QFN40 and 6 x 6mm QFN48 packages are shipping today to Alpha customers and will be available for mass deployment in April 2022. Multiple evaluation boards are available to designers developing applications. Modules based on the BG24 and MG24 SoCs will be available in the second half of 2022.

To learn more about the new BG24 family, go to: http://silabs.com/bg24.

To learn more about the new MG24 family, go to: http://silabs.com/mg24.

To learn more about how Silicon Labs supports AI and ML, go to: http://silabs.com/ai-ml.

Read more from the original source:
Silicon Labs brings AI and Machine Learning to the Edge with Matter-ready platform - Design Products & Applications

Read More..

AIs J-curve and upcoming productivity boom – TechTalks

This article is part of our series that explores the business of artificial intelligence

Digital technologies, and at their forefront artificial intelligence, are triggering fundamental shifts in society, politics, education, economy, and other fundamental aspects of life. These changes provide opportunities for unprecedented growth across different sectors of the economy. But at the same time, they entail challenges that organizations must overcome before they can tap into their full potential.

In a recent talk at an online conference organized by Stanford Human-Centered Artificial Intelligence (HAI), Stanford professor Erik Brynjolfsson discussed some of these opportunities and challenges.

Brynjolfsson, who directs Stanford's Digital Economy Lab, believes that in the coming decade, the use of artificial intelligence will be much more widespread than it is today. But its adoption will also face a period of lull, also known as the J-curve.

"There's a growing gap between what the technology is capable of and what it is already doing versus how we are responding to that," Brynjolfsson says. "And that's where a lot of our society's biggest challenges and problems and some of our biggest opportunities lie."

According to Brynjolfsson, the next decade will see significantly higher productivity thanks to a wave of powerful technologiesespecially machine learningthat are finding their way into every computing device and application.

Advances in computer vision have been tremendous, especially in areas such as image recognition and medical imaging. Talking to phones, watches, and smart speakers has become commonplace thanks to advances in natural language processing and speech recognition. Product recommendation, ad placement, insurance underwriting, loan approval, and many other applications have benefited immensely from advances in machine learning.

In many areas, machine learning is reducing costs and accelerating production. For example, the application of large language models in programming can help software developers become much more productive and achieve more in less time.

In other areas, machine learning can help create applications that did not exist before. For example, generative deep learning models are creating new applications for arts, music, and other creative work. In areas such as online shopping, advances in machine learning can create major shifts in business models, such as moving from shopping-then-shipping to shipping-then-shopping.

The lockdowns and urgency caused by the covid-19 pandemic accelerated the adoption of these technologies in different sectors, including remote work tools, robotic process automation, AI-powered drug research, and factory automation.

"The pandemic has been horrific in so many ways, but another thing it's done is it's accelerated the digitization of the economy, compressing in about 20 weeks what would have taken maybe 20 years of digitization," Brynjolfsson says. "We've all invested in technologies that are allowing us to adapt to a more digital world. We're not going to stay as remote as we are now, but we're not going all the way back either. And that increased digitization of business processes and skills compresses the timeframe for us to adopt these new ways of working and ultimately drive higher productivity."

The productivity potential of machine learning technologies has one big caveat.

"Historically, when these new technologies become available, they don't immediately translate into productivity growth. Often there's a period where productivity declines, where there's a lull," Brynjolfsson says. "And the reason there's this lull is that you need to reinvent your organizations, you need to develop new business processes."

Brynjolfsson calls this the Productivity J-Curve and has documented it in a paper published in the American Economic Journal: Macroeconomics. Basically, the great potential caused by new general-purpose technologies like the steam engine, electricity, and more recently machine learning requires fundamental changes in business processes and workflows, the co-invention of new products and business models, and investment in human capital.

These investments and changes often take several years, and during this period, they dont yield tangible results. During this phase, the companies are creating intangible assets, according to Brynjolfsson. For example, they might be training and reskilling their workforce to employ these new technologies. They might be redesigning their factories or instrumenting them with new sensor technologies to take advantage of machine learning models. They might need to revamp their data infrastructure and create data lakes on which they can train and run ML models.

These efforts might cost millions of dollars (or billions in the case of large corporations) and make no change in the company's output in the short term. At first glance, it seems that costs are increasing without any return on investment. When these changes reach their turning point, they result in a sudden increase in productivity.

"We're in this period right now where we're making a lot of that painful transition, restructuring work, and there's a lot of companies that are struggling with that," Brynjolfsson says. "But we're working through that, and these J-curves will lead to higher productivity; according to our research, we're near the bottom and turning up."

Unfortunately, adapting to AI and other new digital technologies does not run on a predictable path. Most firms aren't making the transition correctly or lack the creativity and understanding to make the transition. Various studies show that most applied machine learning projects fail.

"Only about the top 10-15 percent of firms are doing most of the investment in these intangibles. The other 85-90 percent of firms are lagging behind and are hardly making any of the restructuring needed," Brynjolfsson says. "This is not just the big tech firms. This is within every industry: manufacturing, retail, finance, resources. In each category, we're seeing the leading firms pulling away from the rest. There's a growing performance gap."

But while adopting new technologies is going to be difficult, it is happening at a much faster pace in comparison to previous cycles of technological advances because we are better prepared to make the transition.

"I think what is becoming clear is that it's going to happen a lot faster, in part because we have a much more professional class of people trying to study what works and what doesn't work," Brynjolfsson says. "Some of them are in business schools and academia. A lot of them are in consulting companies. Some of them are journalists. And there are people who are describing which practices work and which don't."

Another element that can help immensely is the availability of machine learning and data science tools to process and study the huge amounts of data available on organizations, people, and the economy.

For example, Brynjolfsson and his colleagues are working on a big dataset of 200 million job postings, which include the full text of the job description along with other information. Using different machine learning models and natural language processing techniques, they can transform the job posts into numerical vectors that can then be used for various tasks.

"We think of all the jobs as this mathematical space. We can understand how they relate to each other," Brynjolfsson says.

For example, they can make simple inferences such as how similar or different two or more job posts are based on their text descriptions. They can use other techniques such as clustering and graph neural networks to draw more important conclusions such as what kind of skills are more in demand, or how would the characteristics of a job post change if you modified the description to add AI skills such as Python or TensorFlow. Companies can use these models to find holes in their hiring strategies or to analyze the hiring decisions of their competitors and leading organizations.
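As a toy illustration of the similarity idea (not the researchers' actual pipeline, which relies on learned embeddings), job descriptions can be turned into bag-of-words vectors and compared by cosine similarity:

```python
import math
from collections import Counter

def vectorize(text):
    """Turn a job description into a bag-of-words frequency vector."""
    return Counter(text.lower().split())

def cosine_similarity(a, b):
    """Cosine similarity between two sparse word-count vectors."""
    dot = sum(a[w] * b[w] for w in a.keys() & b.keys())
    norm = math.sqrt(sum(v * v for v in a.values())) * \
           math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

ml_job = vectorize("machine learning engineer python tensorflow")
ds_job = vectorize("data scientist python statistics")
chef   = vectorize("head chef italian cuisine")

# The two technical postings share vocabulary; the chef posting shares none.
assert cosine_similarity(ml_job, ds_job) > cosine_similarity(ml_job, chef)
```

Modern NLP models replace the word counts with dense learned vectors, but the geometric intuition of jobs living in a shared "mathematical space" is the same.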

"Those kinds of tools just didn't exist as recently as five years ago, and I think it's a revolution that is just as important as the microscope or some of the other revolutions in science," Brynjolfsson says. "We now have them for social sciences and business, to have this kind of visibility. That's allowing us to make a transition a lot more rapidly than before."

However, Brynjolfsson warns that not many companies are using these kinds of tools. This is perhaps further testament to his previous point that companies have not yet figured out the right transition strategy and are relying on old methods to restructure and adapt themselves to the age of AI. And at the center of this strategy should be the correct use of human capital.

"You have hundreds of billions of dollars of human capital, of skills walking out the door, and then the company tries to hire back people with the skills that they need. What they don't realize is that the workers that they let go often had skills that were very adjacent to the ones they're hiring for," Brynjolfsson says.

With the help of machine learning, "they will have better visibility and knowledge of their skill adjacencies," Brynjolfsson says. For example, a company might discover that instead of laying off a bunch of people and looking to hire new talent, perhaps all it needs to do is a little retraining and repurposing of its workforce.

"It's much more expensive to hire somebody fresh than it would have been for them to take some of those people who are already in the company and say, if we teach you Python or customer service skills or other skills, you can be doing this job that we're looking to hire people for," Brynjolfsson says. "My hope is that, in the coming decade, workers will be in a much better position to take full advantage of their capabilities and skills. And it will be good for the companies too, to understand all the assets that they have in there, and machine learning can help a lot with understanding those relationships."

Excerpt from:
AIs J-curve and upcoming productivity boom - TechTalks

Read More..

The 20 Coolest Cloud Storage Companies Of The 2022 Cloud 100 – CRN

The definition of cloud storage has recently become very flexible. It's not all about the cloud. And it's not all about storage. Instead, cloud storage encompasses a wide range of technologies, from straightforward backup and recovery software to sophisticated technologies that seamlessly tie data across on-premises environments, private clouds, and public clouds and treat it the same no matter where it resides.

As the definition of cloud storage has morphed, so have the kinds of companies that offer the technology. While it used to be the domain of software vendors, today legacy storage hardware vendors like NetApp and Pure Storage are very active leaders in the cloud storage space, given the need of customers to know their data is accessible and protected no matter where it resides or how it moves.

Anyone looking for a list of software vendors here will be disappointed. Cloud storage is all about the data, not where it sits. And as part of CRN's 2022 Cloud 100, this list of 20 cloud storage vendors proves our point.

Original post:
The 20 Coolest Cloud Storage Companies Of The 2022 Cloud 100 - CRN

Read More..

The Best Cloud Storage to Get All Those Photos off Your Phone in 2022 – Futurism

You need a safe digital space to store and protect photos, videos, presentations, and documents. Cloud storage offers storage space on the Internet, aka the cloud. A provider operates and manages the storage for a fee, though there are some free plans out there.

The best cloud storage options may have apps for easier uploading and downloading, access, and file sharing. Many providers have a free tier but provide more storage space for a fee. You can base your storage needs on what and how much you need to store. Photos eat up space, and some providers limit file sizes, creating problems for photographers and other creatives. Other storage services focus on documents rather than media. Still others are primarily for backing up computers.

Check your file sizes, media, and connection speeds, and you'll have a pretty good idea of what you need. Keep an eye on your budget though, because services can get pricey if your file number and sizes keep growing.

Best Overall: iDrive
Best for Photos: Amazon Photos
Best for Personal Use: Sync
Best for Business: Google Drive
Best Free Cloud Storage: Degoo

The type of cloud storage you need depends a great deal on what you need to back up or store. We took that into consideration while evaluating the different options.

Free Storage: We compared how much storage each service offered in its free tier, if it had a free tier. More storage space can give you more time to decide if it's the right service for you.

Security and Encryption: Sensitive information needs extra protection. We checked to see what kind of security and encryption each service included and the upgrades needed to access the best security (end-to-end encryption).

Sharing and Collaboration: Some cloud storage services are designed for sharing and collaboration, while others are more bare-bones or lean toward backup services. However, most people and businesses will eventually need to share something they've got on the cloud, so we checked to see how easy that is.

Third-party Integrations: Third-party integrations can make it easier to upload, download, and edit documents and photos in cloud storage.

Upload and Download Speeds: All the storage in the world won't mean much if it takes 10 minutes to upload a single photo.

Related: The Best External Hard Drives for All the Jpegs You'll Never Download

Why It Made The Cut: This service made the cut for its ease of setup, disk image backup, and restore options, as well as the ability to connect unlimited devices to a single account.

Specs:
File size limit: 2GB
Free storage: 5GB
Mobile app: Yes

Pros:
Fast, simple setup
Ability for bulk uploads
Fast upload speeds
Unlimited devices on a single account
Security

Cons:
Storage isn't unlimited
Basic sharing

iDrive offers the best overall cloud storage, with reliable backup and access to stored information for personal, team, and business purposes. It has plans that cover these bases, as well as a photo plan that caters to creatives.

You also get the option of a private encryption key or an iDrive management key. A private key offers the best security, but if you're prone to losing passwords, stick with the latter. iDrive's dashboard is fairly easy to navigate, with self-explanatory tabs; the organization makes it easy for everyone to use. You can have an unlimited number of devices on each account.

From the dashboard, you can select which files to back up or choose all your data. You can also set the backup frequency. iDrive has a mobile app for both iOS and Android, so you can back up mobile devices too. The downside is that storage isn't unlimited, and you've got only basic sharing abilities. But overall, it's a robust option that's simple to use.

Why It Made The Cut: Amazon offers automatic photo uploading, unlimited storage, and photo printing, along with no photo size restrictions.

Specs:
File size limit: None
Free storage: Yes, with Prime membership
Mobile app: Yes

Pros:
Free with Prime membership
No image size restrictions
Automatic uploads
Unlimited storage

Cons:
Requires Amazon Drive desktop app to download photos over 2GB
Fee if you're not a Prime member

If you've got Amazon Prime, there is no reason not to use its photo storage to protect and print your pics, because it's one of the best cloud storage services for photos. Prime members get unlimited storage with automatic uploads. There are also no image size restrictions, which pros can appreciate. One of the downsides is that when you download photos over 2GB, you can't use the mobile app or Drive website; it's the Amazon Drive desktop app only.

This photo-storage option from Amazon includes editing and tagging features to help you organize and find your photos. There's also a fun feature that uses AI to locate specific photos based on search terms.

There are some downsides to this service, especially if you don't have a Prime membership. You'll pay a monthly fee to access all the features, and that fee goes up the more storage you need. However, even if you don't have a Prime membership, it's a good deal for storing and accessing photos.

Why It Made The Cut: Sync offers excellent security for personal users and includes easy setup and access.

Specs: File size limit: Unlimited; Free storage: 5GB; Mobile app: Yes

Pros: Excellent security for personal use; can store older versions without eating up storage space; great price per terabyte; simple, efficient interface

Cons: Few third-party integrations; single-folder sync

Sync offers excellent bang for your buck among paid services and is the best personal cloud storage option. The free tier stores up to 5GB, which won't take you very far; however, it'll give you a good idea of whether this is the cloud storage service for you. The monthly fee for an individual plan isn't prohibitive, and it comes with excellent security features, including encryption on both upload and download.

The setup is simple, and the interface doesn't require an advanced degree to navigate. You keep a Sync folder on your computer, where you put everything you want backed up or saved. The service offers versioning, so you can store older versions of files, and those old versions don't count toward your storage limit.

It has basic folder and file sharing, though nothing as robust as some services offer. It doesn't have the third-party integrations of larger services like Google Drive and Dropbox, but for the price, you get better security. There's also the issue of single-folder syncing, which doesn't let you back up files outside that folder.

Why It Made The Cut: Google Drive lets businesses share documents, photos, and a whole lot more with simple access and generous storage.

Specs: File size limit: 5TB; Free storage: 15GB; Mobile app: Yes

Pros: Generous free storage; wide range of third-party integrations; great collaboration suite; store, share, sync, and collaborate from one place

Cons: Security isn't as tight as it could be; no password protection on files

If you've got a team working on the same documents and using the same media, Google Drive makes the best cloud storage for business. If you're not sure this is for you, consider that you can get 15GB of free storage space to test things out, which is pretty substantial. Granted, it will work better for a smaller team than a corporate one. Once you hit that 15GB storage mark, you enter paid-subscription territory.

Google Drive has a 5TB file size limit, which could be an issue for some creatives in certain industries. However, for most offices, that's a generous file size. Where businesses really benefit is in the creation, editing, and collaboration that can take place on documents, sheets, and presentations. Google Drive includes an entire suite for working and collaborating with people all over the world. Plus, there are other Google services that can go along with Drive, like Gmail, calendars, and meetings.

On the downside, it's not the most secure platform for storing files. You can control who sees documents with the right settings. However, there is no way to set a password on a document.

Why It Made The Cut: The free tier's storage size, coupled with an intuitive interface, makes Degoo the best free cloud storage for photos.

Specs: File size limit: 512MB (free plan); Free storage: 100GB; Mobile app: Yes

Pros: Intuitive interface; high storage limit in the free tier; works with several mobile platforms; photo storage maximizer

Cons: Free tier limits security

Degoo offers a large free storage space of 100GB, which is more than most competitors provide. However, it's geared toward photos, which eat up space faster than other file types. Once you get it set up, the interface is bright and intuitive, giving it a shallow learning curve.

Degoo offers a surprising amount of security, even in the free tier. Information gets encrypted in transit and while hanging out in storage. There's also an end-to-end encryption option, in which data is spread across different servers for added protection, but it's available only via subscription.

In short, Degoo offers accessibility with minimal effort on the part of the user. But if you want added security and more storage space, you'll have to jump into the paid tiers.

It can be tough to fork over your hard-earned cash for a storage space you can't even see with your own eyes. There are free cloud storage services, but they usually require some compromises on the amount of storage space and file size limits. Most providers have a free tier in their pricing, but storage limits can range from as low as 2GB to as high as 200GB if you refer new customers. Storage services can charge from as little as a few dollars a year to a heftier one-time fee for a lifetime subscription with unlimited storage.

Check the details of the storage agreement. Some providers offer unlimited storage but shrink the size of archived images. That's a problem if you're a photographer who might need to print older images. In that case, look for a service that stores images at their original size.

However, others may primarily need to store and share documents rather than photos or videos. Some services can handle both, while others specialize in one type of storage or another.

Finally, you have to think about the compatibility of cloud storage. Does it have an app for iOS, Android, and Windows? How easy is it to upload or download files? Will you use cloud storage for backing up your laptop, or only certain types of files?

Do you need to share files with family or a team? And how secure do you need those files to be? Some services offer encryption on the way in and out, while others are more about sharing the love than hiding your info. Services that offer end-to-end encryption and passwords on folders or files boost the security of sensitive information. Take some time to figure out how and if you want to share information.

Q: How secure is cloud storage?
Stories of hackers breaking into cloud storage are not uncommon, but certain features definitely increase security. Two-step authentication, encryption in transit, passwords on files or folders, and end-to-end encryption (which doesn't allow even the service's employees access to files) all improve security. While it might be scary to send your information elsewhere for storage, it's probably safer than keeping it only on your hard drive, which is vulnerable to malware and spyware.

Q: How does cloud storage work?
Cloud storage sends your files over the internet to a remote server or storage host. The servers connect remotely to any device with the right access credentials. Most providers host the information on several servers so that if one server goes down, the information doesn't get lost.
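The replication idea in that answer can be sketched in a few lines of Python. Everything here (the `Server` and `CloudStore` names, the replica count, the content-hash file IDs) is purely illustrative and not any real provider's API:

```python
import hashlib

# Toy model: a "cloud" keeps copies of each uploaded file on several
# servers, so losing one server doesn't lose the data.

class Server:
    def __init__(self, name):
        self.name = name
        self.blobs = {}   # file_id -> file bytes
        self.online = True

class CloudStore:
    def __init__(self, servers, replicas=2):
        self.servers = servers
        self.replicas = replicas

    def upload(self, data: bytes) -> str:
        # A content hash doubles as the file ID and an integrity check.
        file_id = hashlib.sha256(data).hexdigest()
        for server in self.servers[: self.replicas]:
            server.blobs[file_id] = data
        return file_id

    def download(self, file_id: str) -> bytes:
        # Any online server holding a copy can serve the file.
        for server in self.servers:
            if server.online and file_id in server.blobs:
                return server.blobs[file_id]
        raise FileNotFoundError(file_id)

servers = [Server("us-east"), Server("eu-west"), Server("ap-south")]
store = CloudStore(servers, replicas=2)
photo_id = store.upload(b"holiday.jpg bytes")
servers[0].online = False                        # one server goes down...
print(store.download(photo_id) == b"holiday.jpg bytes")  # ...data survives
```

Real services add encryption, authentication, and geographic distribution on top, but the redundancy principle is the same.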

Q: How much does cloud storage cost?
Prices vary widely, from free plans that offer anywhere from 2GB to 200GB of storage to paid plans for large businesses with unlimited storage that run to several figures. Decide how much storage you need, what you need to store, and your budget to help narrow down the provider that will work best for you.


For overall backup and storage, iDrive offers competitive pricing, security, and storage space. If you have a small business that needs lots of collaboration and document sharing, Google Drive provides some of the easiest options, though you'll have to be careful about security if your documents have sensitive information.

This post was created by a non-news editorial team at Recurrent Media, Futurism's owner. Futurism may receive a portion of sales on products linked within this post.


See the original post:
The Best Cloud Storage to Get All Those Photos off Your Phone in 2022 - Futurism
