Category Archives: Artificial Intelligence
Artificial Intelligence technology to convert brain signals of speech-impaired persons into langua… – Hindustan Times
Researchers at the Indian Institute of Technology Madras have developed an artificial intelligence technology to convert the brain signals of speech-impaired people into language.
The other major application of this field of research is that the researchers can potentially interpret nature's signals, such as the plant photosynthesis process or plants' responses to external forces.
A team of researchers led by Dr. Vishal Nandigana, Assistant Professor, Fluid Systems Laboratory, Department of Mechanical Engineering, IIT Madras, is working in this area of research.
Electrical signals, brain signals, or any signals in general are waveforms that can be decoded into meaningful information using physical laws or mathematical transforms such as the Fourier transform or Laplace transform. These physical laws and mathematical transforms are science-based languages discovered by renowned scientists such as Sir Isaac Newton and Jean-Baptiste Joseph Fourier.
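To make the idea concrete, here is a minimal sketch in Python with NumPy of how a sampled waveform can be decomposed into its dominant frequencies with a Fourier transform. The synthetic signal and sample rate are invented for illustration and are not IIT Madras data.

```python
import numpy as np

sample_rate = 1000                      # samples per second (assumed)
t = np.arange(0, 1.0, 1 / sample_rate)
# Synthetic stand-in for a recorded waveform: two tones plus noise
signal = np.sin(2 * np.pi * 12 * t) + 0.5 * np.sin(2 * np.pi * 40 * t)
signal += 0.1 * np.random.randn(t.size)

spectrum = np.fft.rfft(signal)                     # complex amplitude per bin
freqs = np.fft.rfftfreq(t.size, 1 / sample_rate)   # frequency of each bin, in Hz
strongest = freqs[np.argsort(np.abs(spectrum))[-2:]]
print("Dominant frequencies (Hz):", sorted(strongest))   # ~[12.0, 40.0]
```

Interpreting what such frequency components "mean" as speech is the open research problem; the transform itself only turns the raw wave into features a model can work with.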
Elaborating on this research, Dr. Vishal Nandigana, the lead researcher, said, "The output result is the ionic current, which represents the flow of ions, which are charged particles. These electrically driven ionic current signals are being worked on to be interpreted as human language, meaning speech.
"This would tell us what the ions are trying to communicate to us. When we succeed with this effort, we will get electrophysiological data from neurologists to obtain the brain signals of speech-impaired humans and learn what they are trying to communicate."
Further, Dr. Nandigana said, "The other major application of this field of research we see is potentially interpreting nature's signals, such as the plant photosynthesis process or plants' response to external forces, when we collect their real data signals.
"The data signal also, we believe, is going to be in some wave-like pattern with spikes, humps and crests. So the big breakthrough will be whether we can interpret what plants and nature are trying to communicate to us."
Brain signals are typically electrical signals. These are wave-like patterns with spikes, humps and crests that can be converted into simple human language, meaning speech, using AI.
Continue reading here:
Artificial Intelligence technology to convert brain signals of speech- impaired persons into langua... - Hindustan Times
Why being data-centric is the first step to success with artificial intelligence – Tech Wire Asia
Being successful in deploying AI must start with a data-centric mindset.
Regardless of industry, artificial intelligence (AI) is a disruptive technology that is greatly sought after.
Many organizations are looking to deploy AI projects at scale, in hopes of boosting performance and ultimately increasing revenues.
However, many fail to see returns on their AI investments. Often, this is because AI projects are not approached in the right manner.
To be AI-first, organizations need to adopt a data-first mindset. Here's how and why:
Using the right methodologies and technologies is crucial for the successful deployment of AI solutions.
It is not enough to just rely on agile methods, as they focus heavily on functionality and application logic delivery. Instead, data-centric methodologies such as the Cross Industry Standard Process for Data Mining (CRISP-DM) should be used, as they concentrate on the steps needed for a successful data project.
Depending on organizational needs, a hybrid methodology can also be deployed by merging the non-agile CRISP-DM with agile methodologies, making it more relevant.
Data-centric methodologies must be followed by the use of data-centric technologies. For any AI projects, organizations must always keep the end in mind, and have clarity on what the desired outcomes are.
Methodology and technology will not be of use without a data-proficient team.
There must be a specialized AI team in place that can effectively collect, compile, and extract key information from seemingly haphazard data sets.
Ideally, the team should have a good mix of data scientists, engineers, and specialists that possess the skills to put models into operation.
There is no room for guesswork in AI deployment: randomly changing data sets wastes time and resources and is simply disastrous.
For a successful AI project to materialize, organizations ought to continuously invest for the long term.
Staying complacent is not an option. Organizations must seek to refine the methodologies in place, and if the technologies used are no longer relevant, they should be replaced.
AI projects will not work if employees lack the skills and tools needed to deploy them. Thus, employees should be upskilled, and also made to understand the value of AI, and how it can augment the work that they do.
While the technology is still in its infancy for large-scale projects, it is only a matter of time before AI is deployed at scale, across organizations and markets.
Ultimately, it all boils down to agility and resilience in the midst of change. Those that adopt the right mindset will succeed; those that resist the change will suffer.
Emily Wong
Emily is a tech writer constantly on the lookout for cool technology and writes about how small and medium enterprises can leverage it. She also reads, runs, and dreams of writing in a mountain cabin.
The rest is here:
Why being data-centric is the first step to success with artificial intelligence - Tech Wire Asia
How Artificial Intelligence Is Improving The Pharma Supply Chain – Forbes
Artificial intelligence (AI) will transform the pharmaceutical cold chain not in the distant, hypothetical future, but in the next few years. As the president of a company that has been actively involved in the creation of an application that will utilize machine learning to generate predictive data on environmental hazards in the biopharmaceutical cold chain cycle, I've seen firsthand the promise of this technology.
When coupled with machine learning and predictive analytics, the AI transformation goes much deeper than smarter search functions. It holds the potential to address some of the biggest challenges in pharmaceutical cold chain management. Here are some examples:
Analytical decision-making: Most companies capture only a fraction of their data's potential value. By aggregating and analyzing data from multiple sources (a drug order and weather data along a delivery route, for example), AI-based systems can provide complete visibility with predictive data throughout the cold chain. Before your cold chain starts, you can predict hurdles and properly allocate resources.
Analytical decision-making relies on companies having actionable data and real-time visibility throughout the cold chain. Just-in-time delivery of uncompromised drug product relies on predictive data analytics. With the help of analytical decision-making, cold chain logistics costs, overall drug costs, patient risk, and gaps in the pharmaceutical pipeline can all be significantly reduced.
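As a rough illustration of this kind of predictive cold chain analytics, the sketch below trains a classifier to score temperature-excursion risk from order and route-weather features. The feature names, synthetic data, and toy risk rule are all invented and do not represent any vendor's actual system.

```python
# Hypothetical sketch: score temperature-excursion risk for a shipment from
# order and route-weather features. All feature names and data are invented.
import numpy as np
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(7)
n = 1000
shipments = pd.DataFrame({
    "transit_hours":    rng.uniform(6, 96, n),
    "route_max_temp_c": rng.uniform(-5, 40, n),
    "num_handoffs":     rng.integers(1, 6, n),
})
# Toy ground truth: long, hot, many-handoff routes breach the cold chain more
risk = (shipments["transit_hours"] / 96 + shipments["route_max_temp_c"] / 40
        + shipments["num_handoffs"] / 5) / 3
shipments["excursion"] = (rng.uniform(0, 1, n) < risk).astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    shipments.drop(columns="excursion"), shipments["excursion"], test_size=0.2)
model = GradientBoostingClassifier().fit(X_train, y_train)

# Score an upcoming shipment before it leaves the dock
upcoming = pd.DataFrame([{"transit_hours": 48, "route_max_temp_c": 35,
                          "num_handoffs": 3}])
print("Predicted excursion risk:", model.predict_proba(upcoming)[0, 1])
```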
For example, BenevolentAI in the United Kingdom is using a platform of computational and experimental technologies and processes to draw on vast quantities of mined and inferred biomedical data to improve and accelerate every step of the drug discovery process.
Supply chain management (SCM): A 2013 study by McKinsey & Company detailed a severe lack of agility in pharmaceutical supply chains. It noted that replenishment times from manufacturer to distribution centers averaged 75 days for pharmaceuticals but 30 days for other industries, and reported the need for better transparency around costs, logistics, warehousing and inventory. Assuring drug efficacy, patient identity and chain of custody integrated with supply chain agility is where the true value of AI lies for the drug industry.
DataRobot is one example: an AI platform powered by open-source algorithms that can automate modeling from historical drug delivery data, supporting a more agile pharmaceutical supply chain. Supply chain managers can build a model that accurately predicts whether a given drug order could be consolidated with another upcoming order to the same location or department.
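A hedged sketch of what such a consolidation model might look like, using a simple logistic regression; the pair features and training labels below are invented and do not reflect DataRobot's actual platform.

```python
# Hypothetical sketch: given two upcoming orders, predict whether they can
# ship together. Features and labels are invented for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Features per order pair: [same_destination, hours_between_orders,
#                           combined_volume_fits_one_container]
X = np.array([[1, 2, 1], [1, 30, 1], [0, 1, 1],
              [1, 4, 0], [1, 6, 1], [0, 48, 0]])
y = np.array([1, 0, 0, 0, 1, 0])   # 1 = historically consolidated

clf = LogisticRegression().fit(X, y)
candidate_pair = np.array([[1, 3, 1]])  # same hospital, 3 hours apart, fits
print("Consolidate?", bool(clf.predict(candidate_pair)[0]))
```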
Inventory management: Biomarkers are making personalized medicine mainstream. Consequently, pharmaceutical companies must stock many more therapeutics, but in much lower quantities. AI-based inventory management can determine which product is most likely to be needed (and how often), track exactly when it's delivered to a patient, and flag delays or incidents that might trigger a replacement shipment within hours.
OptumRx increasingly uses AI/ML to manage the data it collects in a healthcare setting. Since becoming operational, the AI/ML system has continuously improved itself by analyzing data and outcomes, all without additional intervention. Early results indicate that AI/ML is already adding agility to the cold chain by reducing shortages and excess inventory of drug products.
Warehouse automation: Integrating AI into warehouse automation tools speeds communications and reduces errors in pick and pack settings. At its simplest, AI predicts which items will be stored the longest and positions them accordingly. With this approach, Lineage Logistics, a cold-chain food supplier, increased productivity by 20%. In another example, AI positions high-volume items so they are easily accessible while still reducing congestion.
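A toy sketch of that slotting idea: items predicted to sit longest go deepest into the warehouse, fast movers stay near the dock. The dwell-time numbers are hard-coded stand-ins for what a trained model would output.

```python
# Illustrative slotting heuristic; dwell-time predictions and slot names
# are invented. A real system would get dwell times from a trained model.
predicted_dwell_days = {"SKU-A": 45, "SKU-B": 3, "SKU-C": 14, "SKU-D": 1}
slots_front_to_back = ["dock-1", "aisle-2", "aisle-7", "deep-storage"]

# Fastest movers go nearest the dock, slowest movers deepest.
by_dwell = sorted(predicted_dwell_days, key=predicted_dwell_days.get)
assignment = dict(zip(by_dwell, slots_front_to_back))
print(assignment)  # {'SKU-D': 'dock-1', 'SKU-B': 'aisle-2', ...}
```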
FDA Embraces AI and Big Data
Historically, pharmaceutical companies have been slow to adapt to disruptive technologies because of the important oversight role played by the FDA. However, the FDA realizes AI's potential to learn and improve performance. It has already approved AI to detect diabetic retinopathy and potential strokes in patients, and updated regulations are expected soon to help streamline the implementation of this important tool.
Gain A Competitive Edge
For pharmaceutical companies looking to implement AI into their cold chain, here are some steps to take to become an early adopter:
1. Prepare your data, and ensure you own it. You need a strong pipeline of clean data and a mature logistics ecosystem with historical data on temperature, environmental conditions and packaging, as well as any other data you collect during your cold chain. If you don't have clean data stored, start collecting it now. If you think you have the data, verify that you own it. Some vendors claim ownership of the thermal data their systems generate and don't allow it to be manipulated by third-party software. In that case, it can't be combined with other data sources for AI analysis. Either negotiate ownership or change vendors.
2. Define your area of need: Where do you need a competitive edge? Start small with one factor that makes a measurable impact on your cold chain. That may be inventory control, packaging optimization, logistics, regulatory strategy or patient compliance. Track metrics, and tie them to business value.
3. Assemble the right people, and verify your internal capabilities. Implementing or supporting an AI/machine learning strategy requires skills that IT personnel typically lack. Consider upskilling your IT team or adding an AI skills requirement for your next new hires.
AI is at a turning point. In the next decade, it is expected to contribute a massive amount of money to the global economy. In the life sciences market alone, AI is valued at $902.1 million and is expected to grow at a rate of 21.1% through 2024. As part of this growth, I believe AI will also make significant contributions to the pharmaceutical supply chain.
See the original post:
How Artificial Intelligence Is Improving The Pharma Supply Chain - Forbes
China Will Lose the Artificial Intelligence (AI) Race (And Why America Will Win) – The National Interest Online
Artificial intelligence (AI) is increasingly embedded into every aspect of life, and China is pouring billions into its bid to become an AI superpower. China's three-step plan is to pull equal with the United States in 2020, start making major breakthroughs of its own by mid-decade, and become the world's AI leader in 2030.
There's no doubt that Chinese companies are making big gains. Chinese government spending on AI may not match some of the most-hyped estimates, but China is providing big state subsidies to a select group of AI national champions: Baidu in autonomous vehicles (AVs), Tencent in medical imaging, Alibaba in smart cities, and Huawei in chips and software.
State support isn't all about money. It's also about clearing the road to success -- sometimes literally. Baidu ("China's Google") is based in Beijing, where the local government has kindly closed more than 300 miles of city roads to make way for AV tests. Nearby Shandong province closed a 16 mile mountain road so that Huawei could test its AI chips for AVs in a country setting.
In other Chinese AV test cities, the roads remain open but are thoroughly sanitized. Southern China's tech capital, Shenzhen, is the home of AI leader Tencent, which is testing its own AVs on Shenzhen's public roads. Notably absent from Shenzhen's major roads are motorcycles, scooters, bicycles, or even pedestrians. Two-wheeled vehicles are prohibited; pedestrians are comprehensively corralled by sidewalk barriers and deterred from jaywalking by stiff penalties backed up by facial recognition technology.
And what better way to jump-start AI for facial recognition than by having a national biometric ID card database where every single person's face is rendered in machine-friendly standardized photos?
Making AI easy has certainly helped China get its AI strategy off the ground. But like a student who is spoon-fed the answers on a test, a machine that learns from a simplified environment won't necessarily be able to cope in the real world.
Machine learning (ML) uses vast quantities of experiential data to train algorithms to make decisions that mimic human intelligence. Type something like "ML 4 AI" into Google, and it will know exactly what you mean. That's because Google learns English in the real world, not from memorizing a dictionary.
It's the same for AVs. Google's Alphabet cousin Waymo tests its cars on the anything-goes roads of everyday America. As a result, its algorithms have learned how to deal with challenges like a cyclist carrying a stop sign. Everything that can happen on America's roads, will happen on America's roads. Chinese AI is learning how to drive like a machine, but American AI is learning how to drive like a human -- only better.
American, British, and (especially) Israeli facial recognition AI efforts face similar real-world challenges. They have to work with incomplete, imperfect data and still get the job done. What's more, they can't throw up too many false positives -- innocent people identified as threats. China's totalitarian regime can punish innocent people with impunity, but in democratic countries, even one false positive could halt a facial recognition roll-out.
It's tempting to think that the best way forward for AI is to make it easy. In fact, the exact opposite is true. Like a muscle pushed to exercise, AI thrives on challenges. Chinese AI may take some giant strides operating in a stripped-down reality, but American AI will win the race in the real world. Reality is complicated, and if there's one thing Americans are good at, it's dealing with complexity.
Salvatore Babones is an adjunct scholar at the Centre for Independent Studies and an associate professor at the University of Sydney.
Read the original post:
China Will Lose the Artificial Intelligence (AI) Race (And Why America Will Win) - The National Interest Online
Forget The ROI: With Artificial Intelligence, Decision-Making Will Never Be The Same – Forbes
People are the ultimate power behind AI.
There are a lot of compelling things about artificial intelligence, but people still need to get comfortable with it. As shown in a recent survey of 1,500 decision makers released by Cognilytica, about 40 percent indicate that they are currently implementing at least one AI project or plan to do so. Issues getting in the way include limited availability of AI skills and talent, as well as justifying ROI.
Having the right mindset is half the battle when successfully building AI into the organization. This means looking beyond traditional, cold ROI measures and looking at the ways AI will enrich and amplify decision-making. Ravi Bapna, professor at the University of Minnesota's Carlson School of Management, says attitude wins the day for moving forward with AI. In a recent Knowledge@Wharton article, he offers four ways AI means better decisions:
AI helps leverage the power and the limitations of tacit knowledge: Many organizations have data that may sit unused because it's beyond the comprehension of the human mind. But with AI and predictive modeling applied, new vistas open up. "What many executives do not realize is that they are almost certainly sitting on tons of administrative data from the past that can be harnessed in a predictive sense to help make better decisions," Bapna says.
AI spots outliers: AI quickly catches outlying factors. These algorithms fall in the descriptive analytics pillar, "a branch of machine learning that generates business value by exploring and identifying interesting patterns in your hyper-dimensional data, something at which we humans are not great."
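As a concrete example of this descriptive-analytics idea, the sketch below flags outliers in a synthetic two-feature data set with scikit-learn's IsolationForest; the data and contamination rate are invented for illustration.

```python
# Minimal outlier-spotting sketch on synthetic "order" data.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
normal = rng.normal(loc=[100, 5], scale=[10, 1], size=(500, 2))  # typical orders
oddballs = np.array([[400, 1], [20, 30]])                        # planted outliers
data = np.vstack([normal, oddballs])

detector = IsolationForest(contamination=0.01, random_state=0).fit(data)
flags = detector.predict(data)     # -1 marks suspected outliers
print(data[flags == -1])           # should surface the planted oddballs
```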
AI promotes counter-factual thinking: Data by itself can be manipulated to justify pre-existing notions, or can miss variables affecting results. "Counter-factual thinking is a leadership muscle that is not exercised often enough," Bapna says. "This leads to sub-optimal decision-making and poor resource allocation. Causal analytics encourages counter-factual thinking. Not answering questions in a causal manner, or using the highest-paid person's opinion to make such inferences, is a sure shot way of destroying value for your company."
AI enables combinatorial thinking: Even the most ambitious decisions are tempered by constraints, to the point where new projects may not be able to deliver. "Most decision-making operates in the context of optimizing some goal, maximizing revenue or minimizing costs, in the presence of a variety of constraints: budgets, or service quality levels that have to be maintained," says Bapna. Needless to say, this inhibits growth. Combinatorial thinking, based on prescriptive analytics, can provide answers, he says. Combinatorial optimization algorithms are capable of finding favorable outcomes that deliver more value for investments.
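A minimal worked example of prescriptive analytics of this kind, posed as a linear program with SciPy; the revenue figures and constraints are made up purely to show the mechanics.

```python
# Toy prescriptive-analytics sketch: maximize revenue under a budget and a
# capacity constraint. All numbers are invented.
from scipy.optimize import linprog

# Decision variables: units of product A and product B to produce.
# Revenue per unit: A = $40, B = $30 (linprog minimizes, so negate).
c = [-40, -30]
# Constraints: 2A + 1B <= 100 (budget), 1A + 2B <= 80 (service capacity)
A_ub = [[2, 1], [1, 2]]
b_ub = [100, 80]

result = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
print("Optimal plan:", result.x, "revenue:", -result.fun)
```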
Continue reading here:
Forget The ROI: With Artificial Intelligence, Decision-Making Will Never Be The Same - Forbes
Why 2020 Will Be the Year Artificial Intelligence Stops Being Optional for Security – Security Intelligence
Artificial intelligence (AI) isn't new. What is new is the growing ubiquity of AI in large organizations. In fact, by the end of this year, I believe nearly every type of large organization will find AI-based cybersecurity tools indispensable.
Artificial intelligence is many things to many people. One fairly neutral definition is that it's a branch of computer science that focuses on intelligent behavior, such as learning and problem solving. Now that cybersecurity AI is mainstream, it's time to stop treating AI like some kind of magic pixie dust that solves every problem and start understanding its everyday necessity in the new cybersecurity landscape. 2020 is the year large organizations will come to rely on AI for security.
AI isn't magic, but for many specific use cases, the right tool for the job will increasingly involve AI. Here are six reasons why that's the case.
The monetary calculation every organization must make is the cost of security tools, programs and resources on one hand versus the cost of failing to secure vital assets on the other. That calculation is becoming easier as the potential cost of data breaches grows. And these costs aren't stemming from the cleanup operation alone; they may also include damage to the brand, drops in stock prices and loss of productivity.
The average total cost of a data breach is now $3.92 million, according to the 2019 Cost of a Data Breach Report. That's an increase of nearly 12 percent since 2014. The rising costs are also global, as Juniper Research predicts that the business costs of data breaches will exceed $5 trillion per year by 2024, with regulatory fines included.
These rising costs are partly due to the fact that malware is growing more destructive. Ransomware, for example, is moving beyond preventing file access and toward going after critical files and even master boot records.
Fortunately, AI can help security operations centers (SOCs) deal with these rising risks and costs. Indeed, the Cost of a Data Breach Report found that cybersecurity AI can decrease average costs by $230,000.
The percentage of state-sponsored cyberattacks against organizations of all kinds is also growing. In 2019, nearly one-quarter (23 percent) of breaches analyzed by Verizon were identified as having been funded or otherwise supported by nation-states or state-sponsored actors, up from 12 percent in the previous year. This is concerning because state-sponsored attacks tend to be far more capable than garden-variety cybercrime attacks, and detecting and containing these threats often requires AI assistance.
An arms race between adversarial AI and defensive AI is coming. That's just another way of saying that cybercriminals are coming at organizations with AI-based methods sold on the dark web to avoid setting off intrusion alarms and defeat authentication measures. So-called polymorphic malware and metamorphic malware change and adapt to avoid detection, with the latter making more drastic and hard-to-detect changes to its code.
Even social engineering is getting the artificial intelligence treatment. We've already seen deepfake audio attacks where AI-generated voices impersonating three CEOs were used against three different companies. Deepfake audio and video simulations are created using generative adversarial network (GAN) technologies, where two neural networks train each other (one learning to create fake data and the other learning to judge its quality) until the first can create convincing simulations.
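For intuition, here is a bare-bones GAN training loop in PyTorch: a generator learns to mimic a one-dimensional Gaussian while a discriminator learns to judge real versus fake, mirroring the two-network dynamic described above. It is purely illustrative and nothing like a production deepfake system.

```python
# Minimal GAN sketch: generator (faker) vs. discriminator (judge) on 1-D data.
import torch
import torch.nn as nn

G = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))   # faker
D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1))   # judge
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
loss = nn.BCEWithLogitsLoss()

for step in range(2000):
    real = torch.randn(64, 1) * 0.5 + 3.0          # "real" data: N(3, 0.5)
    fake = G(torch.randn(64, 8))
    # Train the judge: real -> 1, fake -> 0
    d_loss = (loss(D(real), torch.ones(64, 1))
              + loss(D(fake.detach()), torch.zeros(64, 1)))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()
    # Train the faker: make the judge output 1 on fakes
    g_loss = loss(D(fake), torch.ones(64, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()

print("fake mean:", G(torch.randn(1000, 8)).mean().item())  # approaches 3.0
```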
GAN technology can, in theory and in practice, be used to generate all kinds of fake data, including fingerprints and other biometric data. Some security experts predict that future iterations of malware will use AI to determine whether they are in a sandbox or not. Sandbox-evading malware would naturally be harder to detect using traditional methods.
Cybercriminals could also use AI to find new targets, especially internet of things (IoT) targets. This may contribute to more attacks against warehouses, factory equipment and office equipment. Accordingly, the best defense against AI-enhanced attacks of all kinds is cybersecurity AI.
Large organizations are suffering from a chronic expertise shortage in the cybersecurity field, and this shortage will continue unless things change. To that end, AI-based tools can enable enterprises to do more with the limited human resources already present in-house.
The Accenture Security Index found that more than 70 percent of organizations worldwide struggle to identify what their high-value assets are. AI can be a powerful tool for identifying these assets for protection.
The quantity of data that has to be sifted through to identify threats is vast and growing. Fortunately, machine learning is well-suited to processing huge data sets and eliminating false positives.
In addition, rapid in-house software development may be creating many new vulnerabilities, but AI can find errors in code far more quickly than humans. Embracing rapid application development (RAD) requires the use of AI for bug fixing.
These are just two examples of how growing complexity can inform and demand the adoption of AI-based tools in an enterprise.
There has always been tension between the need for better security and the need for higher productivity. The most usable systems are not secure, and the most secure systems are often unusable. Striking the right balance between the two is vital, but achieving this balance is becoming more difficult as attack methods grow more aggressive.
AI will likely come into your organization through the evolution of basic security practices. For instance, consider the standard security practice of authenticating employee and customer identities. As cybercriminals get better at spoofing users, stealing passwords and so on, organizations will be more incentivized to embrace advanced authentication technologies, such as AI-based facial recognition, gait recognition, voice recognition, keystroke dynamics and other biometrics.
The 2019 Verizon Data Breach Investigations Report found that 81 percent of hacking-related breaches involved weak or stolen passwords. To counteract these attacks, sophisticated AI-based tools that enhance authentication can be leveraged. For example, AI tools that continuously estimate risk levels whenever employees or customers access resources from the organization could prompt identification systems to require two-factor authentication when the AI component detects suspicious or risky behavior.
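A hedged sketch of what risk-based step-up authentication can look like in code; the signals, weights, and threshold below are invented for illustration, and a real deployment would derive the score from a trained model rather than hand-set rules.

```python
# Illustrative risk-based step-up authentication. Signal weights and the
# threshold are invented; a production system would learn these from data.
def login_risk(ip_is_new: bool, geo_distance_km: float,
               failed_attempts: int, device_known: bool) -> float:
    score = 0.0
    score += 0.4 if ip_is_new else 0.0
    score += min(geo_distance_km / 5000, 1.0) * 0.3   # impossible-travel signal
    score += min(failed_attempts / 5, 1.0) * 0.2
    score += 0.0 if device_known else 0.1
    return score                                       # 0.0 (safe) .. 1.0 (risky)

def requires_second_factor(score: float, threshold: float = 0.5) -> bool:
    return score >= threshold

s = login_risk(ip_is_new=True, geo_distance_km=8000,
               failed_attempts=2, device_known=False)
print(s, requires_second_factor(s))   # high score -> prompt for 2FA
```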
A big part of the solution going forward is leveraging both AI and biometrics to enable greater security without overburdening employees and customers.
One of the biggest reasons why employing AI will be so critical this year is that doing so will likely be unavoidable. AI is being built into security tools and services of all kinds, so it's time to change our thinking around AI's role in enterprise security. Where it was once an exotic option, it is quickly becoming a mainstream necessity. How will you use AI to protect your organization?
View original post here:
Why 2020 Will Be the Year Artificial Intelligence Stops Being Optional for Security - Security Intelligence
Gift will allow MIT researchers to use artificial intelligence in a biomedical device – MIT News
Researchers in the MIT Department of Civil and Environmental Engineering (CEE) have received a gift to advance their work on a device designed to position living cells for growing human organs using acoustic waves. The Acoustofluidic Device Design with Deep Learning is being supported by Natick, Massachusetts-based MathWorks, a leading developer of mathematical computing software.
"One of the fundamental problems in growing cells is how to move and position them without damage," says John R. Williams, a professor in CEE. "The devices we've designed are like acoustic tweezers."
Inspired by the complex and beautiful patterns in the sand made by waves, the researchers' approach is to use sound waves controlled by machine learning to design complex cell patterns. The pressure waves generated by acoustics in a fluid gently move and position the cells without damaging them.
The engineers developed a computer simulator to create a variety of device designs, which were then fed to an AI platform to understand the relationship between device design and cell positions.
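The team's actual pipeline is not public in detail, but the general simulator-plus-learning loop might be sketched like this: a stand-in simulator generates (design parameters, cell position) pairs, and a small neural network learns the mapping as a fast surrogate. The parameter names and the toy physics below are invented.

```python
# Speculative sketch of a simulator-to-surrogate loop. The "simulator" here
# is a made-up stand-in, not the MIT acoustofluidic code.
import numpy as np
from sklearn.neural_network import MLPRegressor

def run_simulator(params):
    # Stand-in physics: map (frequency, amplitude, angle) to an (x, y) position.
    freq, amplitude, angle = params
    return np.array([np.cos(angle) * amplitude / freq,
                     np.sin(angle) * amplitude / freq])

designs = np.random.uniform([1, 0.1, 0], [10, 1.0, np.pi], size=(500, 3))
positions = np.array([run_simulator(d) for d in designs])

surrogate = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000)
surrogate.fit(designs, positions)
# The trained surrogate can now screen candidate designs far faster than the
# full simulation, which is the essence of the approach described above.
print(surrogate.predict([[5.0, 0.5, 1.0]]))
```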
"Our hope is that, in time, this AI platform will create devices that we couldn't have imagined with traditional approaches," says Sam Raymond, who recently completed his doctorate working with Williams on this project. Raymond's thesis, "Combining Numerical Simulation and Machine Learning," explored the application of machine learning in computational engineering.
"MathWorks and MIT have a 30-year-long relationship that centers on advancing innovations in engineering and science," says P.J. Boardman, director of MathWorks. "We are pleased to support Dr. Williams and his team as they use new methodologies in simulation and deep learning to realize significant scientific breakthroughs."
Williams and Raymond collaborated with researchers at the University of Melbourne and the Singapore University of Technology and Design on this project.
Go here to see the original:
Gift will allow MIT researchers to use artificial intelligence in a biomedical device - MIT News
Access Intelligence Announces Artificial Intelligence Strategist Chris Benson Will Deliver the Keynote Presentation at Connected Plant Conference 2020…
HOUSTON -- Chris Benson, principal artificial intelligence strategist for global aerospace, defense, and security giant Lockheed Martin, will give the opening keynote presentation at the Connected Plant Conference 2020, which will take place February 25 to 27, 2020, at the Westin Peachtree Plaza in Atlanta, Georgia.
Kicking off the event, Benson will shed light on the vast role and potential that artificial intelligence (AI) offers as the world embarks on a definitive fourth industrial revolution. While AI is a technology that is still emerging within the power and chemical processing sectors, it has made notable headway in other industries, including defense, security, and manufacturing, and it is commonly hailed as an integral technology evolution that will take IIoT to the next level. Some even describe AI as the software engine that will drive the fourth revolution.
Benson's address will draw on his deep knowledge of AI as a long-time solutions architect for AI and machine learning (ML) and the emerging technologies they intersect, including robotics, IoT, augmented reality, blockchain, mobile, edge, and cloud. As a renowned thought leader on AI and related fields, Benson frequently gives captivating speeches on numerous topics in the field. He also discusses AI with an array of experts as co-host of the Practical AI podcast, which reaches thousands of AI enthusiasts each week. Benson is also the founder and organizer of the Atlanta Deep Learning Meetup, one of the largest AI communities in the world.
Before he joined Lockheed Martin, where he oversees strategies related to AI and AI ethics, Benson was chief scientist for AI and ML at technology conglomerate Honeywell SPS, and before that, he was on the AI team at multinational professional services company Accenture.
This year's Connected Plant Conference, scheduled for February 25 to 27 in Atlanta, is the only event covering digital transformation/digitalization for the power and chemical process industries. Presenters will explore the fast-paced advances in automation, data analytics, computing networks, smart sensors, augmented reality, and other technologies that companies are using to improve their processes and overall businesses in today's competitive environment.
To register or for more information, see: https://www.connectedplantconference.com
Go here to read the rest:
Access Intelligence Announces Artificial Intelligence Strategist Chris Benson Will Deliver the Keynote Presentation at Connected Plant Conference 2020...
Artificial Intelligence Suffers from The Biases of their Human Creators, that Causes Problems Searching for ET – Science Times
One structure on the dwarf planet Ceres made big news, but there is a hitch: a square-shaped form inside a larger triangle, located in a crater. People saw it, and artificial intelligence saw it too, but the fit may be a square peg in a round hole. A Spanish neuropsychologist is questioning the reliability of depending on AI for such observations, which could be a problem for SETI.
Ceres, a dwarf planet, is located in the main asteroid belt and is the biggest object in it. One of its craters, Occator, has bright spots that led to several ideas about what they were. NASA sent the Dawn probe close enough to capture visual evidence of what these lights were and solve the mystery: they came from volcanic ice and salt eruptions, nothing more.
It gets more interesting: researchers based at the University of Cadiz (Spain) have examined images of these spots. One such region, called Vinalia Faculae, contains geometric contours that are very evident to observers. It now serves as a template for comparing how machines and humans perceive images of planetary surfaces in general. Tests like these will show whether artificial intelligence can see technosignatures of lifeforms beyond human civilization.
During the test, more than one individual saw the squarish shape in Vinalia Faculae, making it a perfect chance to compare artificial intelligence against human perception and see how the results differ. In searching for extraterrestrial intelligence, radio signals are no longer the only consideration; captured images matter as well.
To see further what their hypothesis would bring about, the neuropsychologists modified the previous experiments to dig deeper, adding another layer to them. Another batch of volunteers was recruited, this time amateurs in astronomy, to analyze what they saw in the Occator image.
Their perceptions were compared with those of an artificial vision system grounded in convolutional neural networks (CNNs), an AI taught to identify squares and triangles. From this point on, the experiment got more interesting as it progressed.
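For readers unfamiliar with CNNs, a minimal classifier of the kind described might look like the following PyTorch sketch; the architecture and the random stand-in images are illustrative, not the system used in the study.

```python
# Tiny CNN sketch: classify 32x32 single-channel images as square vs. triangle.
# Architecture and input batch are invented for illustration.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Conv2d(1, 8, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(8, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(16 * 8 * 8, 2),   # two classes: square, triangle
)

images = torch.rand(4, 1, 32, 32)      # stand-in batch of shape images
logits = model(images)
print(logits.argmax(dim=1))            # predicted class per image
```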
The researchers ran the experiment, and some peculiarities were observed in what the people saw and how the images were perceived:
a. Both the AI and the people saw the square structure; neither missed it.
b. The AI also did not fail to notice the triangle.
c. Whenever the triangle was pointed out to people, more of them reported seeing it.
d. Visually, the square appeared inside the triangle, as it was represented in the image.
These are the conclusions the neuropsychologists drew from the results of the experiment on the amateurs, published in the journal Acta Astronautica:
a. The application of artificial intelligence to finding extraterrestrial intelligence is not foolproof. Just like human intelligence, AI can be mistaken, confused, and prone to false perceptions.
b. AI can be applied to some tasks in the search for technosignatures and extraterrestrial life, with some exceptions. Overall, AI should be implemented only with caution, especially in SETI.
c. The presence in AI of the biases of its human creators is unavoidable. The best move is to keep studying artificial intelligence under human stewardship.
NearShore Technology Talks About the Impact of Artificial Intelligence on Nearshoring – Science Times
The nearshoring technology industry is seeing rapid growth in demand from North American companies for engineering and data science services related to the advances coming through the ongoing artificial intelligence (AI) revolution. Companies find high value in working on new and sophisticated applications with nearshoring firms that are close in proximity, time zones, language, and business culture.
In recent years, the costs involved in offshoring have increased in relative comparison to nearshoring costs. Additionally, tech education opportunities in the Western hemisphere have become more advantageous. Western countries have far fewer holidays and lost workdays than in offshore countries as well. In this article, NearShore Technology examines current AI trends impacting nearshoring.
AI has been an active field for computer scientists and logicians for decades, and in recent years hardware and software capabilities have advanced to the stage allowing for the actual implementation of many AI processes. In general, AI describes the ability of a program and associated hardware to simulate human intelligence, reasoning, and analysis of real-world data. Logical algorithms are allowing for increased learning, logic, and creativity with AI processes. Increased technological capabilities are allowing AI to process information in quantities and with perceptive abilities that are beyond traditional human powers. Many industrial processes are finding great utility from machine learning, an AI-based process that allows technology systems to evolve and learn based on experience and self-development.
The huge tech companies that mainly focus on how customers use software programs are leading the way in AI development. Companies like Google, Amazon, and Facebook are positioning immense resources to advance their AI processes' abilities to understand and predict customer behavior. In addition to tech and retail firms, healthcare, financial services, and auto manufacturers (aiming at a future of autonomous cars) are all committing to developing effective AI tech. From routine activities such as customer support and billing to more intuition-based activities like investing and making strategic decisions, AI is becoming a central part of competing in almost every industry.
AI development requires experienced and skillful software engineers and programmers. The ability of an AI application to operate effectively is dependent first on the quantity and quality of data that it is provided. Algorithms must be able to perceive relevant data and also to learn and improve based on the data that is received. Programmers and engineers must be able to understand and facilitate algorithm improvement over time, as AI applications are never really completed and are constantly in development. Programmers must rely on a sufficient number of competent data scientists and analysts to sort and identify the nature and quality of information processed by an AI application to provide a meaningful understanding of how well the AI is functioning. The entire process is changing and progressing quickly, and the effectiveness of AI is determined by the abilities of the engineers and programmers who are involved.
Historically, many traditional IT services have been suited for offshoring. Most traditional IT and call center support services were routine, and the cost-efficiency of offshoring these processes around the world made economic sense in many situations. When skilled programming and data science are not a requirement, offshoring has had a place in the mix for many local companies. However, the worldwide shortage of skilled engineers and data scientists is most prevalent in the parts of the world normally used for offshore services.
Nearshoring AI technology development allows local companies to have meaningful and real-time relationships with programmers and data specialists who have the requisite skills needed. These nearshore relationships are vital to the ongoing nature of AI development.
Among the most important considerations in a successful nearshoring AI relationship is examining the actual skill and education of the nearshore firm's workers. A nearshore provider's team should be up to date with the latest technology developments and should have experience and a history of success in the relevant industry. Because AI development depends on natural language use, it is important that AI developers be native or fluent speakers of the client company's language. Working with a nearshore firm that is proximate in time and place also helps the firm properly understand the culture and needs of a company's market and customers. A nearshore firm working on AI processes should feel like a complete partner and not just another outsourced provider of routine tasks.
NearShore Technology is a US firm headquartered in Atlanta with offices throughout North America. The company focuses on meeting all the technology needs of its clients. NearShore partners with technology officers and leaders to provide effective and timely solutions that fit each customer's unique needs. NearShore uses a family-based approach to provide superior IT, Medtech, Fintech, and related services to our customers and partners throughout North America.
Go here to read the rest:
NearShore Technology Talks About the Impact of Artificial Intelligence on Nearshoring - Science Times