
Computer science (BS) – School of Computing and Augmented …

Graduates with a degree in computer science find employment working in a variety of capacities ranging from computer and software design to development of information technologies. Their jobs are often distinguished by the high level of theoretical expertise they apply to solving complex problems and the creation and application of new technologies. Some computer science-related jobs may include:

With the theoretical foundation built in the program, computer science graduates can excel in system and software development, as well as in designing effective computing solutions for emerging and challenging problems in modern society. Skills in system development and research can lead to entrepreneurial activity that produces innovative computing products and services. Learn more about the objectives and outcomes of the BS degree in computer science.

The computer science, BS program at Arizona State University is accredited by the Computing Accreditation Commission of ABET, http://www.abet.org. Student enrollment and graduation data are available at engineering.asu.edu/enrollment.

Read more:

Computer science (BS) - School of Computing and Augmented ...


IT vs. Computer Science: Which Degree Is Right for You …

This paragraph is followed by a large infographic entitled IT vs. Computer Science: Which Degree is Right for You.

Please note, as you review the roles described, that all included salary data represents nationally averaged earnings for the occupations listed and includes workers at all levels of education and experience. Education conditions in your area may vary. The included information comes from Burning-Glass.com and their analysis of 1,162,850 computer science and IT job postings from July 2018 through June 2019; 1,075,216 computer science job postings by education level from July 2018 through June 2019; 139,535 IT job postings by education level from July 2018 through June 2019; 143,469 IT job postings from July 2018 through June 2019; and 1,104,422 computer science job postings from July 2018 through June 2019.

As we go into the graphic, we see that the top panel shows the title and an image of a person typing at a computer keyboard with the Rasmussen College logo (a lit torch) on their T-shirt.

Below, there are two sections side by side: What is IT? on the left and What is Computer Science? on the right. Below What is IT? the text defines it as: The application of computer programs and networks to solve business processes. Professionals in this industry interact with others, whether in person, on the phone, or via email, while helping solve technological problems. Under What is Computer Science? the text reads: The processes of creating usable computer programs and applications and the theories behind those processes. Professionals in this industry do a lot of independent work writing and testing logic-based code.

The next panel is entitled, What experience do I need? The text underneath notes: The majority of job postings for both fields prefer candidates with 3-5 years of experience. Below, four categories indicate how many years of experience most employers prefer for candidates in the IT and computer science sectors: 19.6% of employers are looking for candidates with 0-2 years of experience; 48.2% of employers are looking for candidates with 3-5 years of experience; 19.7% of employers are looking for candidates with 6-8 years of experience; and 12.5% of employers are looking for candidates with 9+ years of experience.

The next panel asks What education do I need? Underneath, a summary of the data reads: The majority of job postings prefer candidates to have a bachelor's degree. A horizontal bar chart below the text indicates that 89% of employers in the computer science sector prefer candidates with a bachelor's degree, at minimum, while 84% of employers in the IT sector prefer candidates with a bachelor's degree, at minimum.

The following panel, Comparing Computer Science and IT Jobs, is divided into three sections: IT, with hammer, crescent wrench, and computer icons; Both, with tool, coding, and computer icons; and Computer Science, with coding and computer icons. Each section includes common job titles, the 2018 median salary, and job outlook.

Common IT job titles include computer user support specialists, information technology project managers, and network and computer systems administrators. Computer user support specialists earned around $53,470 in 2018, and their demand is expected to grow 11% between 2016 and 2026. Information technology project managers earned about $90,270 in 2018, and their demand is expected to grow 9%. Network and computer systems administrators earned about $82,050 in 2018, and their demand is expected to grow 6%.

Common titles for both computer science and information technology include computer systems engineers/architects, computer systems analysts, and database administrators. Computer systems engineers/architects earned about $90,270 in 2018, and their demand is expected to grow 9%. Computer systems analysts earned about $88,740 in 2018, and their demand is expected to grow 9%. Database administrators earned about $90,070 in 2018, and their demand is expected to grow 11%.

Common job titles for computer science include software developers, web developers, and software quality assurance engineers and testers. Software developers earned about $105,590 in 2018, and their demand is expected to grow 24%. Web developers earned about $69,430 in 2018, and their demand is expected to grow 15%. Software quality assurance engineers and testers earned about $90,270 in 2018, and their demand is expected to grow 5%.

The next panel is entitled: What skills do I need? Below it, the text reads: Take a look at the top skills employers are seeking in each field, some of which overlap. A Venn diagram compares IT skills, computer science skills, and overlapping skills. IT skills: project management, information systems, customer service. Both: SQL, software development, Java. Computer science skills: software engineering, Python, JavaScript.

The panel below, Where can I work?, lists IT and computer science hot spots by state. The summary underneath the title reads: You can find job opportunities across the U.S. for both of these fields. But where is the concentration of jobs highest when controlling for population? We've identified several hot spots. IT hot spots: Virginia, Colorado, North Carolina, Maryland, Arizona, and Georgia. Computer science hot spots: Virginia, Washington, California, Colorado, Maryland, and Massachusetts.

It should be noted that the image was created by Rasmussen College, LLC, to promote our education programs and to provide general career-related information covering computer science and IT careers. Please see rasmussen.edu/degrees for a list of the programs we offer.

Read the original post:

IT vs. Computer Science: Which Degree Is Right for You ...


Senior Post Doctoral Researcher, Computer Science job with MAYNOOTH UNIVERSITY | 281482 – Times Higher Education (THE)

Department: Computer Science | Vacancy ID: 014061 | Closing Date: 20-Mar-2022

CircAI (Artificial Intelligence in the circular economy) is an innovative and state-of-the-art project that will generate new knowledge and understanding around the integration of Artificial Intelligence (AI) in the circular economy. CircAI will investigate the current use of AI in the circular economy in Ireland by collecting information about case-study examples of AI's implementation, as well as immediate future plans to adopt AI, in the following sectors: (1) waste, (2) construction, (3) agriculture, and (4) the bioeconomy. To evaluate this usage of AI, as well as stakeholder attitudes to the use of AI in their respective industries, the project will also undertake a series of structured interviews and engagement workshops with stakeholders from these sectors.

Furthermore, the project will also engage with the public to explore public attitudes (and understanding) of AI, the circular economy and their interaction. Information collected on the case-study examples will be stored in a database system that can be browsed and accessed using an easy-to-use dashboard website. This assessment of current practice means we will see where AI can help drive circular economy ambitions forward and understand and evaluate the overall societal impacts and benefits of AI. An openly accessible and interoperable online portfolio of best-practice examples from Ireland and beyond will be developed and deployed and will be one of the major knowledge outcomes from the project. CircAI will produce several knowledge outputs, including state-of-the-art reviews of international best practice on AI within the circular economy in Ireland, and best practice guidance for the implementation of AI in circular economic processes. CircAI will communicate with stakeholders (including the public) via social media, a dedicated website and several videos available for viewing.

We are seeking an energetic and enthusiastic postdoctoral researcher to carry out innovative research considering the place and pathway for approaches to using AI in the circular economy, and to develop a clear understanding of relevant best practices at the sectoral, national, and international levels.

Salary

Senior Post-doctoral Researcher: €46,906 per annum (point 1)

Appointment will be made in accordance with the Department of Finance pay guidelines.

Original post:

Senior Post Doctoral Researcher, Computer Science job with MAYNOOTH UNIVERSITY | 281482 - Times Higher Education (THE)


Rescale Survey Reveals Profound Disruptions as Industry and Government Shift Workloads to the Cloud for Computational Science and Engineering…

Rescale

Just as cloud and Software-as-a-Service disrupted the digital world, they are now transforming industries from aerospace and medicine to artificial intelligence and machine learning

SAN FRANCISCO, Feb. 15, 2022 (GLOBE NEWSWIRE) -- Rescale, the leader in high performance computing built for the cloud to accelerate engineering innovation, today released a research paper, the 2022 State of Computational Engineering Report, based on a survey of 230+ scientists and engineers building rockets and supersonic jets, designing fusion plants, and re-imagining drug discovery. The report reveals how cloud computing will reshape the world of physical things as profoundly as cloud has disrupted the digital computing world over the past two decades.

"Computational Science & Engineering is giving early adopters competitive advantages in every major market around the world today," said Edward Hsu, Chief Product Officer, Rescale. "But most workloads remain on premises. Where we see the early and most promising design and discovery breakthroughs is where organizations unlock computational barriers and embrace the cloud. Rescale provides the leading platform that enables those breakthroughs while providing IT the security and control they need."

Computational Science & Engineering includes several important computing industries where companies and governments need computational models to simulate and understand natural phenomena, such as weather and quantum mechanics, or how engineered products will perform under enormous stresses, such as in aerodynamics and crash tests. This domain includes standard computer engineering, computer science, high performance computing, and supercomputing. Rescale's platform also allows its customers to run workloads on the world's fastest supercomputer, RIKEN's Fugaku in Japan, as well as on leading public cloud providers such as AWS, Microsoft Azure, Google Cloud Platform, Oracle Cloud, and more.

The new report, available for free download here, follows Rescale's publication last year of the Big Compute 2021 State of Cloud HPC Report, which cited industry analysts forecasting a $55 billion annual HPC market by 2024. Rescale believes the computational engineering and science market may double in size by that time as more companies take advantage of its on-premises and hybrid cloud platform to run new workloads in artificial intelligence and machine learning, giving new market entrants easy access to the world's most powerful computing systems as an operating expense.


The 2022 report reveals that a new era of computational engineering is fostering a second wave of massive disruption and innovation in engineered products across all industries. By providing access to massive computing power, a library of thousands of algorithms and proprietary software applications, and automated workflows with granular insight into operating costs, Rescale aims to remove the last remaining computing bottlenecks for companies of all kinds. By giving engineers easy and unlimited access to computing power that offers up to an order of magnitude higher performance, companies will be able to explore a much wider range of design possibilities and accelerate industry innovations.

Other key findings include:

R&D leaders know that researchers spend much of their time on non-research related tasks (e.g., finding lost files, setting up infrastructures). But the impact of this non-R&D time on project success is not well understood;

Organizations that can help researchers focus their time on R&D are more than twice as likely to achieve project goals consistently;

Easy access to compute accelerates innovation velocity and also improves the probability of project success;

Ease of access to computing is highly correlated with the ability of organizations to tackle broader science and engineering challenges;

Organizations that use cloud automation platforms generally do so as part of a digital R&D strategy.

About Rescale

Rescale is high performance computing built for the cloud, to empower engineers while giving IT security and control. From supersonic jets to personalized medicine, industry leaders are bringing new product innovations to market with unprecedented speed and efficiency with Rescale, the cloud platform that delivers intelligent full-stack automation and performance optimization. IT leaders use Rescale to deliver HPC-as-a-Service with a secure control plane to deliver any application, on any architecture, at any scale on their cloud of choice.

Editorial Contact

Lonn Johnston

lonn@flak42.com

+1.650.219.7764

See the rest here:

Rescale Survey Reveals Profound Disruptions as Industry and Government Shift Workloads to the Cloud for Computational Science and Engineering...


A new atlas of cells that carry blood to the brain – MIT News

While neurons and glial cells are by far the most numerous cells in the brain, many other types of cells play important roles. Among those are cerebrovascular cells, which form the blood vessels that deliver oxygen and other nutrients to the brain.

Those cells, which comprise only 0.3 percent of the brain's cells, also make up the blood-brain barrier, a critical interface that prevents pathogens and toxins from entering the brain while allowing critical nutrients and signals through. Researchers from MIT have now performed an extensive analysis of these difficult-to-find cells in human brain tissue, creating a comprehensive atlas of cerebrovascular cell types and their functions.

Their study also revealed differences between cerebrovascular cells from healthy people and people suffering from Huntington's disease, which could offer new targets for potential ways to treat the disease. Breakdown of the blood-brain barrier is associated with Huntington's and many other neurodegenerative diseases, and often occurs years before any other symptoms appear.

"We think this might be a very promising route, because the cerebrovasculature is much more accessible for therapeutics than the cells that lie inside the blood-brain barrier of the brain," says Myriam Heiman, an associate professor in MIT's Department of Brain and Cognitive Sciences and a member of the Picower Institute for Learning and Memory.

Heiman and Manolis Kellis, a professor of computer science in MIT's Computer Science and Artificial Intelligence Laboratory (CSAIL) and a member of the Broad Institute of MIT and Harvard, are the senior authors of the study, which appears today in Nature. MIT graduate students Francisco Garcia, in the Department of Brain and Cognitive Sciences, and Na Sun, in the Department of Electrical Engineering and Computer Science, are the lead authors of the paper.

A comprehensive atlas

Cerebrovascular cells make up the network of blood vessels that deliver oxygen and nutrients to the brain, and they also help to clear out debris and metabolites. Dysfunction of this irrigation system is believed to contribute to the buildup of harmful effects seen in Huntington's disease, Alzheimer's, and other neurodegenerative diseases.

Many types of cells are found in the cerebrovasculature, but because they make up such a small fraction of the cells in the brain, it has been difficult to obtain enough cells to perform large-scale analyses with single-cell RNA sequencing. This kind of study, which allows the gene expression patterns of individual cells to be deciphered, offers a great deal of information on the functions of specific cell types, based on which genes are turned on in those cells.

For this study, the MIT team was able to obtain over 100 human postmortem brain tissue samples, and 17 healthy brain tissue samples removed during surgery performed to treat epileptic seizures. That brain surgery tissue came from younger patients than the postmortem samples, enabling the researchers to also recognize age-associated differences in the vasculature. The researchers enriched the brain surgery samples for cerebrovascular cells using centrifugation, and ran postmortem sample cells through a computational sorting pipeline that identified cerebrovascular cells based on certain markers that they express.

The researchers performed single-cell RNA sequencing on more than 16,000 cerebrovascular cells, and used the cells' gene-expression patterns to classify them into 11 different subtypes. These types included endothelial cells, which line the blood vessels; mural cells, which include pericytes, found in the walls of capillaries, and smooth muscle cells, which help regulate blood pressure and flow; and fibroblasts, a type of structural cell.
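For context on how such a subtype classification is typically produced, the sketch below shows a standard single-cell RNA-seq clustering workflow using the scanpy library with Leiden clustering (the leidenalg package is assumed to be installed). It is a generic pipeline with a hypothetical input file and illustrative parameters, not the authors' actual analysis code.

import scanpy as sc

# Load a cells x genes count matrix (hypothetical file name).
adata = sc.read_h5ad("cerebrovascular_counts.h5ad")

# Standard preprocessing: normalize, log-transform, select variable genes.
sc.pp.normalize_total(adata, target_sum=1e4)
sc.pp.log1p(adata)
sc.pp.highly_variable_genes(adata, n_top_genes=2000)
adata = adata[:, adata.var.highly_variable].copy()

# Dimensionality reduction, neighbor graph, and Leiden clustering.
sc.pp.pca(adata, n_comps=50)
sc.pp.neighbors(adata, n_neighbors=15)
sc.tl.leiden(adata, resolution=1.0)   # cluster labels stored in adata.obs["leiden"]

print(adata.obs["leiden"].value_counts())

Clusters produced this way are then annotated as cell types (endothelial, mural, fibroblast, and so on) based on the marker genes each cluster expresses.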

"This study allowed us to zoom in to this incredibly central cell type that facilitates all of the functioning of the brain," Kellis says. "What we've done here is understand these building blocks and this diversity of cell types that make up the vasculature in unprecedented resolution, across hundreds of individuals."

The researchers also found evidence for a phenomenon known as zonation. This means that the endothelial cells that line the blood vessels express different genes depending on where they are located in an arteriole, capillary, or venule. Furthermore, among the hundreds of genes they identified that are expressed differently in the three zones, only about 10 percent of them are the same as the zonated genes that have been previously seen in the mouse cerebrovasculature.

"We saw a lot of human specificity," Heiman says. "What our study provides is a list of markers and insights into gene function in these three different regions. These are things that we believe are important to see from a human cerebrovasculature perspective, because the conservation between species is not perfect."

Barrier breakdown

The researchers also used their new vasculature atlas to analyze a set of postmortem brain tissue samples from disease patients, demonstrating its broad usefulness. They focused on Huntington's disease, in which cerebrovasculature abnormalities include leakiness of the blood-brain barrier and a higher density of blood vessels. These symptoms usually appear before any of the other symptoms associated with Huntington's, and can be seen using functional magnetic resonance imaging (fMRI).

In this study, the researchers found that cells from Huntington's patients showed many changes in gene expression compared to healthy cells, including a decrease in the expression of the gene for MFSD2A, a key transporter that restricts the passage of lipids across the blood-brain barrier. They believe that the loss of that transporter, along with other changes they observed, could contribute to increased leakiness of the barrier.

They also found upregulation of genes involved in the Wnt signaling pathway, which promotes new blood vessel growth, and observed that endothelial cells of the blood vessels showed unexpectedly strong immune activation, which may further contribute to blood-brain barrier dysregulation.

Because cerebrovascular cells can be accessed through the bloodstream, they could make an enticing target for possible treatments for Huntington's and other neurodegenerative diseases, Heiman says. The researchers now plan to test whether they might be able to deliver potential drugs or gene therapy to these cells, and to study what therapeutic effect they might have, in mouse models of Huntington's disease.

"Given that cerebrovascular dysfunction arises years before more disease-specific symptoms, perhaps it's an enabling factor for disease progression," Heiman says. "If that's true, and we can prevent that, that could be an important therapeutic opportunity."

The researchers also plan to analyze more of the RNA-sequencing data from their tissue samples, beyond the cerebrovascular cells that they examined in this paper.

"Our goal is to build a systematic single-cell map to navigate brain function in health, disease, and aging across thousands of human brain samples," Kellis says. "This study is one of the first bite-sized pieces of this atlas, looking at 0.3 percent of cells. We are actively analyzing the other 99 percent in multiple exciting collaborations, and many insights continue to lie ahead."

The research was funded by the Intellectual and Developmental Disability Research Center and the Rosamund Stone Zander Translational Neuroscience Center at Boston Children's Hospital, a Picower Institute Innovation Fund Award, a Walter B. Brewer MIT Fund Award, the National Institutes of Health, and the Cure Alzheimer's Fund.

Excerpt from:

A new atlas of cells that carry blood to the brain - MIT News


Australian Computing Research Alliance – News – The University of Sydney

The University of Sydney's School of Computer Science is part of the Australian Computing Research Alliance (ACRA) group, an informal alliance of computing schools from across Australia.

Other members include the Australian National University, University of Melbourne and University of New South Wales.

This alliance encourages publication of quality peer-reviewed research, in both conferences and journals.

The ACRA group aligns with the Declaration on Research Assessment (DORA) and the Leiden Manifesto for Research Metrics.

This alliance advocates practical and robust approaches for evaluating research, aligned to those of DORA.

Venue impact factors and rankings are not measures of the scientific quality nor the impact of an article's research.

ACRA strongly discourages the inclusion of such rankings in job applications, promotion applications, and other career-progression and evaluation processes.

They acknowledge that such rankings may serve as a guide for early career researchers, or newcomers to a research area, towards finding quality publications.

However, venue rankings have limited value in comparing one research area with another; they do not distinguish specialist from generalist venues, nor the distinct values of different venues, and they often replicate information contained in standard bibliometric tools.

The ACRA group proposes that career processes support academics and assessment panels in:

To assist our colleagues in transitioning, ACRA advocates that research leaders offer specific support for writing research quality and impact cases.

As an example first step, the UK Research Excellence Framework (REF) proposes consideration of the importance of the research problem solved, the approach taken and properties of the solution, the output describing such an approach, and how the approach in the research output has been built on or applied, including concrete evidence of impact.

Proposed wording for announcements and documentation includes: "Applicants are actively encouraged not to include conference/journal/venue rankings, but should instead focus on the impact of their research outputs in describing the excellence of their research."

Read the original post:

Australian Computing Research Alliance - News - The University of Sydney


The Top 10 Movies to Help You Envision Artificial Intelligence – Inc.

Artificial intelligence has been with us for decades -- just throw on a movie if you don't believe it.

Even though A.I. may feel like a newer phenomenon, the groundwork for these technologies is more dated than you'd think. The English mathematician Alan Turing, considered by some the father of modern computer science, started questioning machine intelligence in 1950. Those questions resulted in the Turing Test, which gauges a machine's capacity to give the impression of "thinking" like a human.

The concept of A.I. can feel nebulous, but it doesn't fall under just one umbrella. From smart assistants and robotics to self-driving cars, A.I. manifests in different forms, some clearer than others. Spoiler alert! Here are 10 movies, in chronological order, that can help you visualize A.I.:

1. Metropolis (1927)

German director Fritz Lang's classic Metropolis showcases one of the earliest depictions of A.I. in film, with the robot, Maria, transformed into the likeness of a woman. The movie takes place in an industrial city called Metropolis that is strikingly divided by class, where Robot Maria wreaks havoc across the city.

2. 2001: A Space Odyssey (1968)

Stanley Kubrick's 2001 is notable for its early depiction of A.I. and is yet another cautionary tale in which technology takes a turn for the worse. A handful of scientists are aboard a spacecraft headed to Jupiter where a supercomputer, HAL (IBM to the cynical), runs most of the spaceship's operations. After HAL makes a mistake and tries to attribute it to human error, the supercomputer fights back when those aboard the ship attempt to disconnect it.

3. Blade Runner (1982)and Blade Runner 2049 (2017)

The original Blade Runner (1982) featured Harrison Ford hunting down "replicants," or humanoids powered by A.I., which are almost indistinguishable from humans. In Blade Runner 2049 (2017), Ryan Gosling's character, Officer K, lives with an A.I. hologram, Joi. So at least we're getting along better with our bots.

4. The Terminator (1984)

The Terminator's plot focuses on a man-made artificial intelligence network referred to as Skynet -- despite Skynet being created for military purposes, the system ends up plotting to kill mankind. Arnold Schwarzenegger launched his acting career out of his role as the Terminator, a time-traveling cyborg killer that masquerades as a human. The film probes the question -- and consequences -- of what happens when robots start thinking for themselves.

5. The Matrix Series (1999-2021)

Keanu Reeves stars in this cult classic as Thomas Anderson/Neo, a computer programmer by day and hacker by night who uncovers the truth behind the simulation known as "the Matrix." The simulated reality is a product of artificially intelligent programs that enslaved the human race. Human beings are kept asleep in "pods," where they unwittingly participate in the simulated reality of the Matrix while their bodies are used to harvest energy.

6. I, Robot (2004)

This sci-fi flick starring Will Smith takes place in 2035 in a society where robots with human-like features serve humankind. An artificially intelligent supercomputer, dubbed VIKI (which stands for Virtual Interactive Kinetic Intelligence), is one to watch, especially once a programming bug goes awry. The defect in VIKI's programming leads the supercomputer to believe that the robots must take charge in order to protect mankind from itself.

7. WALL-E (2008)

Disney Pixar's WALL-E follows a robot of the same name whose main role is to compact garbage on a trash-ridden Earth. But after spending centuries alone, WALL-E evolves into a sentient piece of machinery who turns out to be very lonely. The movie takes place in 2805 and follows WALL-E and another robot, named Eve, whose job is to analyze whether a planet is habitable for humans.

8. Tron Legacy (2010)

The Tron universe is filled to the brim with A.I. given that it takes place in a virtual world, known as "the Grid." The movie's protagonist, Sam, finds himself accidentally uploaded to the Grid, where he embarks on an adventure that leads him face-to-face with algorithms and computer programs. The Grid is protected by programs such as Tron, but corrupt A.I. programs surface as well throughout the virtual network.

9. Her (2013)

Joaquin Phoenix plays Theodore Twombly, a professional letter writer going through a divorce. To help himself cope, Theodore picks up a new operating system with advanced A.I. features. He selects a female voice for the OS, naming the device Samantha (voiced by Scarlett Johansson), but it proves to have smart capabilities of its own. Or is it, her own? Theodore spends a lot of time talking with Samantha, eventually falling in love. The film traces their budding relationship and confronts the notion of sentience and A.I.

10. Ex Machina (2014)

After winning a contest at his workplace, programmer Caleb Smith meets his company's CEO, Nathan Bateman. Nathan reveals to Caleb that he's created a robot with artificial intelligence capabilities. Caleb's task? Assess if the feminine humanoid robot, Ava, is able to show signs of intelligent human-like behavior: in other words, pass the Turing Test. Ava has a human-like face and physique, but her "limbs" are composed of metal and electrical wiring. It's later revealed that other characters aren't exactly human, either.

Continue reading here:
The Top 10 Movies to Help You Envision Artificial Intelligence - Inc.


Sway AI Announces Its No-Code Artificial Intelligence (AI) Platform to Accelerate AI Adoption in Every Enterprise – Woburn Daily Times

CHELMSFORD, Mass., Feb. 15, 2022 /PRNewswire/ -- Sway AI today announced its no-code AI platform for enterprise users and data scientists alike. With Sway AI's platform, enterprises can rapidly build and deploy AI solutions without AI experience or upfront investments.

Sway AI believes that "there's a better way to do AI" by helping enterprises build without expensive upfront investments in AI tools or skillsets. Through its patent-pending technology, Sway AI enables any user to build AI without writing code. Using Sway AI, data scientists and AI experts can collaborate with stakeholders and build prototypes faster, dramatically reducing their time-to-deployment.

AI promises to transform almost every aspect of how we live and work. McKinsey estimates AI can create between $3.5T and $5.8T in value annually across 19 industries. They also report that successful enterprises have realized triple-digit ROIs when implementing AI.

However, Gartner estimates that 85% of AI projects are never deployed. Enterprise AI has long been time-consuming, costly, and risky. It often requires significant upfront investments in skillsets or rapidly evolving tools. Even when enterprises commit to these investments upfront, AI projects can still fail from a lack of stakeholder engagement and collaboration. These risks have long been barriers to AI adoption across industries.

Sway AI addresses these barriers by offering a no-code development platform without the investments that AI often requires. Its collaboration features engage business stakeholders and domain experts throughout the AI development cycle, improving business alignment, reducing risk, and driving increased ROIs.

Sway AI has built a growing pipeline of enterprise customers looking to build AI solutions quickly. For example, it partnered with Trilogy Networks and Veea to provide a turnkey AI solution for precision farming. "Sway AI's platform optimizes models for our real-time low-latency performance needs, which will be critical to the success of our edge AI applications. Through our partnership with Sway AI, we will be able to enhance the edge computing solutions without significant additional investments at the network edge," noted Allen Salmasi, CEO & Chairman of Veea.

"Enterprises faced the daunting task of selecting from a growing and complex marketplace of AI tools, technologies, and models. Sway AI's platform simplifies this everchanging AI ecosystem by offering best-of-class AI tooling through its platform. By using Sway AI enterprises can expect the best AI capabilities available without going through complex evaluation exercises and committing to inflexible technology choices. With Sway AI, an enterprise can reduce development and deployment costs by up to 10x and deployment time from months to hours", said Amir Atai, Sway AI CEO and Co-Founder.

Amir also noted, "This platform accelerates large enterprises' data science teams by helping them rapidly prototype their models. Our platform offers unmatched levels of transparency and collaboration with enterprise stakeholders, which can make all the difference to the success of an AI strategy."

"At Sway AI, we have been carefully listening to our customer's unmet needs. We are the first to offer this combination of enterprise AI capabilities, including a business-user interface, a marketplace of pre-built applications, enterprise collaboration, bring-or-export your models, and an open and extensible architecture. Sway AI offers best-in-class open-source capabilities as a future-proof platform. This minimizes investment and adoption risks, especially as the growth of AI tools accelerate," stated Jitender Arora, Sway AI Chief Product Officer and Co-Founder.

Sway AI is an early no-code innovator founded by several successful entrepreneurs. Co-Founder and Executive Chairman Hassan Ahmed recently served as Chairman and CEO of Affirmed Networks, which was acquired by Microsoft in 2020. Amir Atai was an executive at three successful startups, the Chief Scientist at Bellcore and Alcatel/Nokia, and a VP/CTO at Ericsson. Jitender Arora served as an engineering/product leader at many successful startups, including Acme Packet, which was acquired by Oracle for $2.1B, and led product management of the Amazon Alexa AI/ML Platform.

"AI should no longer be complicated, expensive or hard," said Hassan Ahmed. "AI development often involves consultants, RFPs, engineers, and other costly third-parties that often complicate the problem and add risk. We make sense of the complex and rapidly evolving AI ecosystem to put AI in the hands of business users."

To learn more about Sway AI and its no-code AI solution, visit https://sway-ai.com/

About Sway AI

Headquartered in Chelmsford, MA, Sway AI is a developer of cutting-edge, no-code AI applications and services, offering scalable solutions for enterprises of all sizes. It was founded by proven investors and innovators who started by reinventing how AI projects should be done in a thoughtful, innovative way for all enterprises.

Media Contact:

Len Fernandes

Firecracker PR

329451@email4pr.com

(888) 317-4687 ext. 707

View original content to download multimedia:https://www.prnewswire.com/news-releases/sway-ai-announces-its-no-code-artificial-intelligence-ai-platform-to-accelerate-ai-adoption-in-every-enterprise-301481928.html

SOURCE Sway AI

Follow this link:
Sway AI Announces Its No-Code Artificial Intelligence (AI) Platform to Accelerate AI Adoption in Every Enterprise - Woburn Daily Times


Artificial intelligence in positioning between mandibular third molar and inferior alveolar nerve on panoramic radiography | Scientific Reports -…

This study evaluated whether AI could determine the positional relationship between M3 and the IAN on panoramic radiography: whether the two structures were in contact or merely intimate, and whether the IAN was positioned lingually or buccally to M3 when the two structures overlapped. In this situation, determining the exact position is limited and unreliable even for specialists, as shown in previous studies [25,26]. However, the AI could determine both positions more accurately than OMFS specialists.

Until now, if M3 and the IAN overlapped on a panoramic radiograph, specialists could use the known predictive signs of IAN injury to determine whether the two structures were in contact or intimate. Umar et al. compared the positional relationship between the IAN and M3 through panoramic radiography and CBCT. Loss of the radiopaque line and diversion of the canal on panoramic radiographs corresponded to tooth and nerve contact in 100% of the cases on CBCT. Darkening of the roots was associated with contact on CBCT in 76.9% of the cases studied [27]. However, another study reported that the sensitivities and specificities ranged from 14.6 to 68.3% and from 85.5 to 96.9%, respectively, for those three predictive signs [1]. Datta et al. compared those signs with the clinical findings during surgical removal and found that only 12% of patients with positive radiological signs showed clinical evidence of involvement [3]. In the present study, we adopted CBCT reading results instead of radiological signs on panoramic radiography to determine the positional relationship, so that the AI could determine whether the two structures were in contact or intimate, showing an accuracy of 0.55 to 0.72. Compared to another study [1], our deep learning model exhibited similar performance (accuracy 0.87, precision 0.90, recall 0.96, F1 score 0.93, and AUC 0.82) in determining whether M3 contacts the IAN. The differences in model performance could be explained by the characteristics of the data.
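As an illustration of the performance measures quoted above (accuracy, precision, recall, F1 score, and AUC), the short sketch below shows how they would typically be computed for a binary contact / no-contact classifier with scikit-learn. The labels and predicted probabilities are invented for illustration only and are not the study's data.

from sklearn.metrics import (accuracy_score, precision_score,
                             recall_score, f1_score, roc_auc_score)

y_true = [1, 1, 0, 1, 0, 1, 1, 0, 1, 1]        # 1 = M3 contacts the IAN (CBCT ground truth)
y_prob = [0.92, 0.41, 0.35, 0.66, 0.58,
          0.88, 0.73, 0.22, 0.95, 0.57]        # model's predicted probability of contact
y_pred = [1 if p >= 0.5 else 0 for p in y_prob]

print("accuracy :", accuracy_score(y_true, y_pred))
print("precision:", precision_score(y_true, y_pred))
print("recall   :", recall_score(y_true, y_pred))
print("F1 score :", f1_score(y_true, y_pred))
print("AUC      :", roc_auc_score(y_true, y_prob))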

To replace CBCT with AI analysis of panoramic radiographs, information about bucco-lingual positioning was necessary to ensure safe surgical outcomes. It has been reported that a lingual position of the nerve relative to the tooth carries a significantly higher risk of IAN injury compared to other positions [28]. To the best of our knowledge, no studies have evaluated bucco-lingual positioning on panoramic radiographs, because there was no method to predict this position from a single radiograph. Taking two intraoral radiographs at different angles (the vertical tube-shift technique) in the third molar area causes patient discomfort and nausea during placement of the film or the sensor of digital intraoral x-ray devices [29] and is difficult to use clinically. Since there was no effective method to discern the position, the accuracy of the specialists was low in this study. On the contrary, the AI showed considerably high accuracy, ranging from 67.7 to 80.6%, despite the small amount of study data. The course of the IAN is predominantly buccal to the tooth [28], and our data revealed a similar situation. However, the total number of cases was too small to match the numbers in each group evenly for deep learning. In addition, the limited number of cases forced the use of a simple deep learning model with a relatively small number of parameters to be optimized. Therefore, training the AI with more data could produce more accurate results and allow it to be used more widely in clinical settings.

In this study, bucco-lingual determination (Experiment 2) exhibited superior performance to true contact determination (Experiment 1). The difference in accuracy between the two experiments seems to reflect a characteristic of the data rather than any particular technical difference. There may be features in the bucco-lingual classification that are especially easy for the AI to recognize, or some of the contact-classification data may have characteristics that are difficult to distinguish.

There are several studies that have developed AI algorithms able to outmatch specialists in terms of performance and accuracy. AI assistance improved the performance of radiologists in distinguishing coronavirus disease 2019 from pneumonia of other origins on chest CT [30]. Moreover, an AI system outperformed radiologists in the clinically relevant task of breast cancer identification on mammography [31]. In the present study, the AI exhibited much higher accuracy and performance than the OMFS specialists. To determine the positional relationship between M3 and IAN, we performed preliminary tests to determine the most suitable AI model among VGG19, DenseNet, EfficientNet, and ResNet-50. ResNet showed a higher AUC in Experiment 2 and a comparable AUC in Experiment 1 (Supplemental Tables 1-3). Therefore, it was chosen as the final AI model.
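For readers unfamiliar with how a model such as ResNet-50 is adapted to a two-class task like those in Experiments 1 and 2, the sketch below shows one plausible PyTorch/torchvision setup (assuming torchvision 0.13 or later). The pretrained weights, replaced classification head, optimizer, input size, and training step are illustrative assumptions, not the configuration reported in the study.

import torch
import torch.nn as nn
from torchvision import models

# Start from an ImageNet-pretrained ResNet-50 and replace the final
# 1000-class layer with a 2-class head (e.g., contact vs. intimate).
model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
model.fc = nn.Linear(model.fc.in_features, 2)

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

# One illustrative training step on a dummy batch of 224x224 crops
# (grayscale panoramic patches would be replicated to 3 channels).
x = torch.randn(8, 3, 224, 224)
y = torch.randint(0, 2, (8,))
optimizer.zero_grad()
loss = criterion(model(x), y)
loss.backward()
optimizer.step()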

This study has limitations. First, the absolute size of the training dataset was small. Data augmentation by image modification was used to overcome the limitation of a small dataset. Nevertheless, as shown in Table 1, there were cases where training did not proceed robustly. Therefore, the performance of the trained models depends heavily on the train-test split. This unsoundness of the trained model, which hinders the clinical utility of AI models for primary determination in practice, can be alleviated by collecting more data and using them for training. Also, the size of a deep learning model is an important factor in performance and, in general, a large number of instances is required to train a large deep neural network without overfitting. Thus, not only collecting more data but also exploiting external datasets from multiple dental centers can be considered to increase the performance of AI models. However, this study is meaningful in that the AI model performed better than experts even under these adverse conditions. Second, the images used in this study were cropped without any prior domain knowledge, such as the proper size or resolution, to include sufficient information to determine the true contact or bucco-lingual positional relationship between M3 and IAN. If such domain knowledge were reflected in constructing the dataset, the performance of the AI models could be greatly increased. Third, the use of interpretable AI models [32], which can explain the reason for a model's prediction, can help to identify the weaknesses of trained models. The identified weaknesses can be overcome by collecting data that the models have difficulty classifying. Finally, various techniques developed in the machine learning community, such as ensemble learning [33], self-supervised learning [34], and contrastive learning [35], can be utilized to further improve the performance of our models, even in situations where the total number of cases is insufficient.
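The data augmentation mentioned above is commonly implemented with simple random image modifications. Below is a minimal sketch of one plausible torchvision pipeline; the specific transforms and parameter values are assumptions for illustration, not the augmentations used in the study.

from torchvision import transforms

# Random geometric and intensity perturbations applied to each cropped
# radiograph during training to enlarge a small dataset.
augment = transforms.Compose([
    transforms.RandomRotation(degrees=10),
    transforms.RandomResizedCrop(224, scale=(0.9, 1.0)),
    transforms.ColorJitter(brightness=0.2, contrast=0.2),
    transforms.ToTensor(),
])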

More here:
Artificial intelligence in positioning between mandibular third molar and inferior alveolar nerve on panoramic radiography | Scientific Reports -...


Apple is Using Artificial Intelligence and Music to Win the Music App Arms Race – The Debrief

Apple's acquisition of the London-based company AI Music made headlines recently in the world of business as well as artificial intelligence (AI). For years, the company has been using artificial intelligence and music to develop next-level customized playlists for listeners. The interface between music and artificial intelligence that AI Music has to offer may provide a significant boost to Apple's presence within the commercial music industry, and could even help it outperform its competition within the music app arms race.

The relationship between music and artificial intelligence spans several decades, originating in 1960 when Russian researcher R. Kh. Zaripov published the first algorithmic music, composed on the Ural-1 computer. Since then, advancements in AI systems have allowed them to show real promise for music composition: in 1997, an AI program called Experiments in Music Intelligence (EMI) seemed to outperform a human composer when composing a piece imitating the style of Bach. Only last year did an artificial intelligence program help to finish Ludwig van Beethoven's last symphony, using his other compositions as data to make the piece sound similar to the rest of his works.

Currently, there are many universities studying artificial intelligence and music, including Carnegie Mellon University, Princeton University, and Queen Mary University in London. All of these universities use different AI programs, but they all study the real-time composition and performance of music created by AI. Studying this process can give insights into the science of musical composition, as well as the psychological effects of music on our brains.

Artificial intelligence not only creates impressive music but can also help create fresh and engaging music playlists for listeners. Because artificial intelligence works by using old data sets to predict new outcomes, it can track a user's listening preferences and create a customized playlist based on that data. This can encourage longer listening sessions as well as better overall engagement, which could give the Apple Music app the success it's looking for.
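One simple way to picture preference-based playlist generation is content-based filtering: build a profile from the tracks a user has already played and rank unheard tracks by similarity to that profile. The sketch below illustrates the idea with made-up track names and feature values; it is a generic approach, not Apple's or AI Music's actual system.

import numpy as np

# Per-track feature vectors (e.g., tempo, energy, acousticness), made up here.
tracks = {
    "track_a": np.array([0.8, 0.9, 0.1]),
    "track_b": np.array([0.2, 0.3, 0.9]),
    "track_c": np.array([0.7, 0.8, 0.2]),
    "track_d": np.array([0.1, 0.2, 0.8]),
}
history = ["track_a"]   # tracks the user has already listened to

# Listening profile: the average of the features of played tracks.
profile = np.mean([tracks[t] for t in history], axis=0)

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

# Rank unheard tracks by similarity to the profile.
candidates = [t for t in tracks if t not in history]
playlist = sorted(candidates, key=lambda t: cosine(profile, tracks[t]), reverse=True)
print(playlist)   # most similar tracks first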

Apple was already looking into bolstering its music platform when it acquired the music streaming company Primephonic. Now, with AI Music, Apple may be working to use this new technology to boost its current audio products, including Apple Music, the HomePod Mini, and even the Apple Fitness+ app.

Because Apple offers a music and podcasting platform, it is in direct competition with other companies offering similar products, such as Spotify or Pandora. Currently, Spotify has 365 million active monthly users, of whom over 50% pay for Spotify Premium. In contrast, Apple has only 98 million subscribers as of 2021. According to one expert, Apple Music seems to have more subscribers in the U.S., while Spotify has more listeners in Europe and South America. As Apple has fewer listeners overall, it may be hoping to leverage this new acquisition, and the power of artificial intelligence, to win the music app arms race. It will be interesting to see how the other companies respond to Apple's new acquisition, or whether AI continues to become a larger part of this industry.

Kenna Castleberry is a staff writer at the Debrief and the Science Communicator at JILA (a partnership between the University of Colorado Boulder and NIST). She focuses on deep tech, the metaverse, and quantum technology. You can find more of her work at her website: https://kennacastleberry.com/

Continued here:
Apple is Using Artificial Intelligence and Music to Win the Music App Arms Race - The Debrief
