
Glassnode Report Shows Bitcoin And Ethereum Derivatives Gain Massive Traction – NewsBTC

The 2022 crypto winter is shaping up to be one of the most severe bearish trends in cryptocurrency history, cutting the value of the entire crypto market by more than 50% since the beginning of the year. The situation worsened further with the collapse of the Terra-LUNA ecosystem.

However, the market is recovering slightly from the trauma of the year's first half. The Bitcoin price is picking up again despite weeks of instability and swings.

According to data from Glassnode, a blockchain analytics firm, derivatives on the leading cryptocurrencies are making positive progress. Bitcoin and Ethereum derivatives are receiving increased attention from investors, with more trading of BTC futures and growing ETH holdings.

Glassnode's data indicates that the Bitcoin derivatives market shows only a slight directional bias, meaning investors are putting money in with caution. On the Ethereum side, however, there is evidence of optimism among investors.

The network is recording more demand for ETH alongside few withdrawals from wallets, a pattern that could be driven by the upcoming Merge.

According to Glassnode's Bitcoin Futures Open Interest metric, investors appear to be regaining confidence in the derivatives market, setting aside the fear that followed the collapse of the Terra-LUNA tokens. The effect of the May-June miner capitulation is also gradually fading.

Glassnode also noted increasing stability in futures trading volume. In the 12 months following the May 2021 sell-off, trade volume suffered a structural decline, but it now appears to be staging a comeback at roughly $33 billion per day.

The futures markets have also gone through a structural change over the past year and a half. It began in early 2021, when the Bitcoin price was in a bullish trend: the underlying spread remained stable even as leverage rose.

Currently, Ethereum derivatives are receiving more attention from investors than Bitcoin's, apparently the first time in cryptocurrency history that the two leading assets have swapped places in this way. Ethereum derivatives record about $6.6 billion in open interest, while Bitcoin's stand at $4.8 billion.

Additionally, the data shows that ETH options open interest is close to the all-time high set in November 2021, when Ether hit $4,900.

A more plausible explanation for the price increase is the influence of the upcoming Ethereum Merge: most investors are placing bullish bets at prices between $2,200 and $5,000.

Here is the original post:
Glassnode Report Shows Bitcoin And Ethereum Derivatives Gain Massive Traction - NewsBTC


Artificial Intelligence and Machine Learning: Enhancing Human Effort with Intelligent Systems – Automation.com

Summary

Only when the challenges of data accessibility and expensive computing power were mitigated did the AI field experience exponential growth. There are now more than a dozen types of AI being advanced. This feature originally appeared in InTech magazine's August issue, a special edition from ISA's Smart Manufacturing and IIoT Division.

Artificial intelligence has come a long way since scientists first wondered if machines could think.

In the 20th century, the world became familiar with artificial intelligence (AI) through sci-fi robots that could think and act like humans. By the 1950s, British scientist and philosopher Alan Turing had posed the question "Can machines think?" in his seminal work on computing machinery and intelligence, where he discussed creating machines that can think and make decisions the same way humans do (Reference 1). Although Turing's ideas set the stage for future AI research, they were ridiculed at the time. It took several decades and an immense amount of work from mathematicians and scientists to develop the field of artificial intelligence, which is formally defined as the understanding that machines can interpret, mine, and learn from external data in a way that imitates human cognitive practices (Reference 2).

Even though scientists were becoming more accustomed to the idea of AI, data accessibility and expensive computing power hindered its growth. Only when these challenges were mitigated after several AI winters (with limited advances in the field) did the AI field experience exponential growth. There are now more than a dozen types of AI being advanced (Figure).

Due to the accelerated popularity of AI in the 2010s, venture capital funding flooded into a large number of startups focused on machine learning (ML). This technology centers on continuously learning algorithms that make decisions or identify patterns. For example, the YouTube algorithm may recommend less relevant videos at first, but over time it learns to recommend better-targeted videos based on the user's previously watched videos.

The three main types of ML are supervised, unsupervised, and reinforcement learning. Supervised learning refers to an algorithm finding the relationship between a set of input variables and known labeled output variable(s), so it can make predictions about new input data. Unsupervised learning refers to the task of intelligently identifying patterns and categories from unlabeled data and organizing it in a way that makes it easier to discover insights. Lastly, reinforcement learning refers to intelligent agents that take actions in a defined environment based on a certain set of reward functions.
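A minimal sketch in Python may help make the distinction between the paradigms concrete; the dataset, features, and model choices below are illustrative assumptions, not anything taken from the article.

```python
# Supervised vs. unsupervised learning on a toy dataset (reinforcement
# learning is summarised in a comment, since it needs an environment).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))              # input variables
y = (X[:, 0] + X[:, 1] > 0).astype(int)    # known, labeled output variable

# Supervised: learn the input-output relationship, then predict new inputs.
clf = LogisticRegression().fit(X, y)
print(clf.predict(rng.normal(size=(5, 3))))

# Unsupervised: no labels; organize the same inputs into categories.
clusters = KMeans(n_clusters=2, n_init=10).fit_predict(X)
print(clusters[:10])

# Reinforcement learning (not run here): an agent would act in an environment
# and update its policy from a reward signal instead of labeled examples.
```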

Deep learning, a subset of ML, had numerous ground-breaking advances throughout the 2010s. Similar to the connections between the nervous system cells in the brain, neural networks consist of several thousand to a million hidden nodes and connections. Each node acts as a mathematical function, which, when combined, can solve extremely complex problems like image classification, translation, and text generation.
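To illustrate the idea that each node is just a mathematical function, here is a hedged NumPy sketch of a single forward pass through a tiny two-layer network; the layer sizes and random weights are arbitrary assumptions, far smaller than the networks described above.

```python
import numpy as np

rng = np.random.default_rng(42)
x = rng.normal(size=4)                         # one 4-feature input

W1, b1 = rng.normal(size=(8, 4)), np.zeros(8)  # hidden layer of 8 nodes
W2, b2 = rng.normal(size=(3, 8)), np.zeros(3)  # output layer of 3 classes

hidden = np.maximum(0, W1 @ x + b1)            # each node: ReLU of a weighted sum
logits = W2 @ hidden + b2
probs = np.exp(logits) / np.exp(logits).sum()  # softmax turns scores into class probabilities
print(probs)
```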

Human lifestyle and productivity have drastically improved with the advances in artificial intelligence. Health care, for example, has seen immense AI adoption with robotic surgeries, vaccine development, genome sequencing, etc. (Reference 5). So far, the adoption in manufacturing and agriculture has been slow, but these industries have immense untapped AI possibilities (Reference 6). According to a recent article published by Deloitte, the manufacturing industry has high hopes for AI because the annual data generated in this industry is thought to be around 1,800 petabytes (Reference 7).

This proliferation in data, if properly managed, essentially acts as a fuel that drives advanced analytical solutions that can be used for the following (Reference 8):

Ultimately, AI and advanced analytics can augment humans to help mitigate repetitive and sometimes even dangerous tasks while increasing focus on endeavors that drive high value. AI is not a far-fetched concept; it is already here, and it is having a substantial impact in a wide range of industries. Finance, national security, health care, criminal justice, transportation, and smart cities are examples of this.

AI adoption has been steadily increasing. Companies are reporting 56 percent adoption in 2021, an uptick of 6 percent compared to 2020 (Reference 10). With the technology becoming more mainstream, the trends toward solutions that emphasize explainability, accessibility, data quality, and privacy are amplified.

Explainability drives trust: To keep up with the continuous demand for more accurate AI models, hard-to-explain (black-box) models are used. Not being able to explain these models makes it difficult to achieve user trust and to pinpoint problems (bias, parameters, etc.), which can result in unreliable models that are difficult to scale. Due to these concerns, the industry is adopting more explainable artificial intelligence (XAI).

According to IBM, XAI is a set of processes and methods that allows human users to comprehend and trust the ML algorithm's outputs (Reference 11). Additionally, explainability can increase accountability and governance.

Increasing AI accessibility: The productization of cloud computing for ML has taken the large compute resources and models, once reserved only for big tech companies, and put them in the hands of individual consumers and smaller organizations. This drastic shift in accessibility has fueled further innovation in the field. Now, consumers and enterprises of all sizes can reap the benefits of:

Data mindset shift: Historically, model-centric ML development, i.e., keeping the data fixed and iterating over the model and its parameters to improve performance (Reference 12), has been the typical approach. Unfortunately, the performance of a model is only as good as the data used to train it. Although there is no scarcity of data, high-performing models require accurate, properly labeled, and representative datasets. This has shifted the mindset from model-centric development toward data-centric development: systematically changing or enhancing the datasets to improve the performance of the model (Reference 12).

An example of how to improve data quality is to create descriptive labeling guidelines to mitigate recall bias when using data labeling companies like AWS Mechanical Turk. Additionally, responsible AI frameworks should be in place to ensure data governance, security and privacy, fairness, and inclusiveness.

Data privacy through federated learning: The importance of data privacy has not only forged the path to new laws (e.g., GDPR and CCPA), but also to new technologies. Federated learning enables ML models to be trained using decentralized datasets without exchanging the training data. Personal data remains in local sites, reducing the possibility of personal data breaches.

Additionally, the raw data does not need to be transferred, which helps make predictions in real time. For example, Google uses federated learning to improve on-device machine learning models like "Hey Google" in Google Assistant, which allows users to issue voice commands (Reference 13).
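As a rough sketch of the federated idea, and nothing like Google's actual system, the loop below trains a logistic-regression model on three clients' private data and only averages the resulting weights; every name and number here is an illustrative assumption.

```python
import numpy as np

rng = np.random.default_rng(1)

def local_update(weights, X, y, lr=0.1, steps=20):
    """One client's gradient steps on its own data; the raw data never leaves."""
    w = weights.copy()
    for _ in range(steps):
        preds = 1 / (1 + np.exp(-X @ w))
        w -= lr * X.T @ (preds - y) / len(y)
    return w

# Three clients, each holding private data on its own "device".
clients = [(rng.normal(size=(50, 4)), rng.integers(0, 2, size=50)) for _ in range(3)]
global_w = np.zeros(4)

for _ in range(5):                         # a few federated rounds
    local_ws = [local_update(global_w, X, y) for X, y in clients]
    global_w = np.mean(local_ws, axis=0)   # federated averaging: only weights are shared

print(global_w)
```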

Maintenance, demand forecasting, and quality control are processes that can be optimized through the use of artificial intelligence. To achieve these use cases, data is ingested from smart interconnected devices and/or systems such as SCADA, MES, ERP, QMS, and CMMS. This data is brought into machine learning algorithms on the cloud or on the edge to deliver actionable insights. According to IoT Analytics (Reference 14), the top AI applications are:

Vision-based AI systems and robotics have helped develop automated inspection solutions for machines. These automated systems have not only been proven to save human lives but have radically reduced inspection times. There have been significant examples where AI has outperformed humans, and it is a safe bet to conclude that several AI applications enable humans to make informed and quick decisions (Reference 15).

Given the myriad additional AI applications in manufacturing, we cannot cover them all. But a good example to delve deeper into is predictive maintenance, because it has such a large effect on industry.

Generally, maintenance follows one of four approaches: reactive, or fix what is broken; planned, or scheduled maintenance activities; proactive, or defect elimination to improve performance; and predictive, which uses advanced analytics and sensing data to predict machine reliability.

Predictive maintenance can help flag anomalies, anticipate remaining useful life, and provide mitigations or maintenance (Reference 17). Compared to the simple corrective or condition-based nature of the first three maintenance approaches, predictive maintenance is preventive and takes into account more complex, dynamic patterns. It can also adapt its predictions over time as the environment changes. Once accurate failure models are built, companies can build mathematical models to reduce costs and choose the best maintenance schedules based on production timelines, team bandwidth, replacement piece availability, and other factors.
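A hedged sketch of what "flag anomalies and anticipate remaining useful life" can look like in practice; the synthetic vibration signal, thresholds, and linear wear model are all assumptions for illustration, not a production failure model.

```python
import numpy as np

rng = np.random.default_rng(7)
hours = np.arange(500)
vibration = 1.0 + 0.004 * hours + rng.normal(scale=0.05, size=500)  # slowly wearing bearing

# Fit a simple wear trend and flag readings that deviate sharply from it.
slope, intercept = np.polyfit(hours, vibration, 1)
residuals = vibration - (slope * hours + intercept)
anomalies = np.where(np.abs(residuals) > 3 * residuals.std())[0]
print("anomalous hours:", anomalies)

# Remaining useful life: extrapolate the trend to an assumed failure threshold.
failure_level = 3.5
rul_hours = (failure_level - vibration[-1]) / slope
print(f"estimated remaining useful life: {rul_hours:.0f} hours")
```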

Bombardier, an aircraft manufacturer, has adopted AI techniques to predict the demand for its aircraft parts based on input features (i.e., flight activity) to optimize its inventory management (Reference 18).

This example and others show how advances in AI depend on advances associated with other Industry 4.0 technologies, including cloud and edge computing, advanced sensing and data gathering, and wired and wireless networking.

This feature originally appeared in InTech magazine's August issue, a special edition from ISA's Smart Manufacturing and IIoT Division.

Ines Mechkane is the AI Technical Committee chair of ISA's SMIIoT Division. She is also a senior technical consultant with IBM. She has a background in petroleum engineering and international experience in artificial intelligence, product management, and project management. Passionate about making a difference through AI, Mechkane takes pride in her ability to bridge the gap between the technical and business worlds.

Manav Mehra is a data scientist with the Intelligent Connected Operations team at IBM Canada, focusing on researching and developing machine learning models. He has a master's degree in mathematics and computer science from the University of Waterloo, Canada, where he worked on a novel AI-based time-series challenge to prevent people from drowning in swimming pools.

Adissa Laurent is AI delivery lead within LGS, an IBM company. Her team maintains AI solutions running in production. For many years, Laurent has been building AI solutions for the retail, transport, and banking industries. Her areas of expertise are time series prediction, computer vision, and MLOps.

Eric Ross is a senior technical product manager at ODAIA. After spending five years working internationally in the oil and gas industry, Ross completed his master of management in artificial intelligence. Ross then joined the life sciences industry to own the product development of a customer data platform infused with AI and BI.


Read more here:
Artificial Intelligence and Machine Learning: Enhancing Human Effort with Intelligent Systems - Automation.com


Filings buzz: tracking artificial intelligence mentions in the tech sector – Verdict

Mentions of artificial intelligence within the filings of companies in the tech sector were 285% higher between July 2021 and June 2022 than in 2016, according to the latest analysis of data from GlobalData.

When tech companies publish annual and quarterly reports, ESG reports and other filings, GlobalData analyses the text and identifies individual sentences that relate to disruptive forces facing companies in the coming years. Artificial intelligence is one of these topics - companies that excel and invest in these areas are thought to be better prepared for the future business landscape and better equipped to survive unforeseen challenges.

To assess whether artificial intelligence is featuring more in the summaries and strategies of tech companies, two measures were calculated. Firstly, we looked at the percentage of companies which have mentioned artificial intelligence at least once in filings during the past twelve months - this was 81% compared to 47% in 2016. Secondly, we calculated the percentage of total analysed sentences that referred to artificial intelligence.
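For illustration only, the two measures reduce to simple counting once each filing has been split into sentences; the toy filings and keyword match below stand in for GlobalData's actual text analysis.

```python
filings = {
    "CompanyA": ["We invest heavily in artificial intelligence.", "Revenue grew 5%."],
    "CompanyB": ["Dividends were raised.", "Costs fell."],
    "CompanyC": ["Our artificial intelligence platform expanded.", "We opened two offices."],
}
keyword = "artificial intelligence"

# Measure 1: share of companies mentioning the topic at least once.
mentioning = [c for c, sents in filings.items() if any(keyword in s.lower() for s in sents)]
pct_companies = 100 * len(mentioning) / len(filings)

# Measure 2: share of all analysed sentences that refer to the topic.
all_sentences = [s for sents in filings.values() for s in sents]
pct_sentences = 100 * sum(keyword in s.lower() for s in all_sentences) / len(all_sentences)

print(f"{pct_companies:.0f}% of companies, {pct_sentences:.1f}% of sentences")
```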

Of the 10 biggest employers in the tech sector, IBM was the company which referred to artificial intelligence the most between July 2021 and June 2022. GlobalData identified 283 artificial intelligence-related sentences in the United States-based company's filings - 3.4% of all sentences. Hitachi mentioned artificial intelligence the second most - the issue was referred to in 1.3% of sentences in the company's filings. Other top employers with high artificial intelligence mentions included Accenture, Capgemini and Infosys.

Across all tech companies the filing published in the second quarter of 2022 which exhibited the greatest focus on artificial intelligence came from SenseTime. Of the document's 2,020 sentences, 170 (8.4%) referred to artificial intelligence.

This analysis provides an approximate indication of which companies are focusing on artificial intelligence and how important the issue is considered within the tech sector, but it also has limitations and should be interpreted carefully. For example, a company mentioning artificial intelligence more regularly is not necessarily proof that they are utilising new techniques or prioritising the issue, nor does it indicate whether the company's ventures into artificial intelligence have been successes or failures.

GlobalData also categorises artificial intelligence mentions by a series of subthemes. Of these subthemes, the most commonly referred to topic in the second quarter of 2022 was 'machine learning', which made up 38% of all artificial intelligence subtheme mentions by tech companies.

See original here:
Filings buzz: tracking artificial intelligence mentions in the tech sector - Verdict


Artificial Intelligence: 3 ways the pandemic accelerated its adoption – The Enterprisers Project

The need for organizations to quickly create new business models and marketing channels has accelerated AI adoption throughout the past couple of years. This is especially true in healthcare, where data analytics accelerated the development of COVID-19 vaccines. In consumer-packaged goods, Harvard Business Review reported that Frito-Lay created an e-commerce platform, Snacks.com, in just 30 days.

The pandemic also accelerated AI adoption in education, as schools were forced to enable online learning overnight. And wherever possible, the world shifted to touchless transactions, completely transforming the banking industry.

Three technology developments during the pandemic accelerated AI adoption:

[ Also read Artificial Intelligence: How to stay competitive. ]

Let's look at the pros and cons of these developments for IT leaders.

Even 60 years after Moore's Law was formulated, computing power is still increasing, with more powerful machines and more processing power through new chips from companies like NVidia. AI Impacts reports that computing power available per dollar has probably increased by a factor of ten roughly every four years over the last quarter of a century (measured in FLOPS or MIPS). However, the rate has been slower over the past 6-8 years.
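As a quick sanity check of that rate, compounding a tenfold gain every four years over a quarter of a century works out to roughly a million-fold improvement; the helper below simply restates the article's figure.

```python
def growth_factor(years, period_years=4, factor=10):
    """Cumulative gain if capability multiplies by `factor` every `period_years`."""
    return factor ** (years / period_years)

print(f"{growth_factor(25):,.0f}x over 25 years")   # about 1.8 million-fold
```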

Pros: More for less

Inexpensive computing gives IT leaders more choices, enabling them to do more with less.

Cons: Too many choices can lead to wasted time and money

Consider big data. With inexpensive computing, IT pros want to wield its power. There is a desire to start ingesting and analyzing all available data, leading to better insights, analysis, and decision-making.

But if you are not careful, you could end up with massive computing power and not enough real-life business applications.

As networking, storage, and computing costs drop, the human inclination is to use them more. But they don't necessarily deliver business value in every case.

Before the pandemic, the terms data warehouse and data lake were standard, and they remain so today. But new data architectures like data fabric and data mesh were almost non-existent. Data fabric supports AI adoption because it enables enterprises to use data to maximize their value chain by automating data discovery, governance, and consumption. Organizations can provide the right data at the right time, regardless of where it resides.

Pros: IT leaders will have the opportunity to rethink data models and data governance

It provides a chance to buck the trend toward centralized data repositories or data lakes. This might mean more edge computing, with data available where it is most relevant. These advancements result in appropriate data being automatically available for decision-making, which is critical to AI operability.

Cons: Not understanding the business need

IT leaders need to understand the business and AI aspects of new data architectures. If they don't know what each part of the business needs, including the kind of data and where and how it will be used, they may not create the right type of data architecture and data consumption to support it properly. IT's understanding of the business needs, and of the business models that go with that data architecture, will be essential.

Statista research underscores the growth of data: the total amount of data created, captured, copied, and consumed globally was 64.2 zettabytes in 2020 and is projected to reach more than 180 zettabytes in 2025. Statista research from May 2022 reports, "The growth was higher than previously expected, caused by the increased demand due to the COVID-19 pandemic." Big data sources include media, cloud, IoT, the web, and databases.

Pros: Data is powerful

Every decision and transaction can be traced back to a data source. If IT leaders can use AIOps/MLOps to zero in on data sources for analysis and decision-making, they are empowered. Proper data can deliver instant business analysis and provide deep insights for predictive analysis.

Cons: How do you know what data to use?


Besieged by data from IoT and edge computing, formatted and unformatted, intelligent and unintelligible, IT leaders are dealing with the 80/20 rule: what are the 20 percent of credible data sources that deliver 80 percent of the business value? How do you use AI/ML ops to determine the credible data sources, and which data source should be used for analysis and decision-making? Every organization needs to find answers to these questions.

AI is becoming ubiquitous, powered by new algorithms and increasingly plentiful and inexpensive computing power. AI technology has been on an evolutionary road for more than 70 years. The pandemic did not accelerate the development of AI; it accelerated its adoption.

Harnessing AI is the challenge ahead.

[ Want best practices for AI workloads? Get the eBook: Top considerations for building a production-ready AI/ML environment. ]

Continue reading here:
Artificial Intelligence: 3 ways the pandemic accelerated its adoption - The Enterprisers Project


Artificial Intelligence is the most trending topic in technology for quite a while now. – Medium

Artificial Intelligence has been the most trending topic in technology for quite a while now. As I said in many of my previous articles, AI is the future. Naturally, there's been a lot of news about AI across the internet. Some of it is true and reliable, some of it is myth, some is assumption, and some is hypothesis. However, there are a few more facts about AI that are not frequently heard. AI, for all its potential and popularity, still holds many secrets under its wing. Let's look at some of them.

Branches of AI:

The application of computer recognition, reasoning, and action is known as artificial intelligence. It ultimately comes down to giving computers the ability to mimic human behavior, particularly cognitive ability. Data science, machine learning, and artificial intelligence are all connected, though.

We will become knowledgeable about artificial intelligence and its six main branches as the first point of this blog.

What Artificial Intelligence can't do:

The wonders that modern artificial intelligence is capable of are astonishing. It is capable of creating breath-taking creative content, including poetry, text, pictures, music, and human faces. It is capable of making medical diagnoses that are more precise than those made by a human doctor. It produced a solution to the protein folding problem, a major biological conundrum that has baffled academics for fifty years.

However, there are still important constraints on modern AI. Artificial intelligence (AI) still has a long way to go before it can accomplish what we would anticipate from a truly intelligent agent, that is, before it matches human cognition, the initial inspiration and standard for AI.

AI problems you should know:

We must be aware of the benefits and difficulties of adopting AI as consumers and developers of AI technology. Understanding the specifics of any technology enables the user or developer to both minimize the risks associated with it and maximize its benefits.

Understanding how a developer should approach or deal with AI issues in the real world is crucial. AI technologies must be viewed as a friend, not a threat.

See the original post:
Artificial Intelligence is the most trending topic in technology for quite a while now. - Medium


Artificial intelligence innovation among air force industry companies has dropped off in the last year – Airforce Technology

Research and innovation in artificial intelligence in the air force equipment and technologies sector have declined in the last year.

The most recent figures show that the number of AI-related patent applications in the industry stood at 134 in the three months ending June, down from 172 over the same period in 2021.

Figures for patent grants related to AI followed a similar pattern to filings, shrinking from 67 in the three months ending June 2021 to 65 in the same period in 2022.

The figures are compiled by GlobalData, which tracks patent filings and grants from official offices around the world. Using textual analysis, as well as official patent classifications, these patents are grouped into key thematic areas and linked to key companies across various industries.

AI is one of the key areas tracked by GlobalData. It has been identified as being a key disruptive force facing companies in the coming years, and is one of the areas that companies investing resources in now are expected to reap rewards from.

The figures also provide an insight into the largest innovators in the sector.

The Boeing Co was the top AI innovator in the air force equipment and technologies sector in the latest quarter. The company, which has its headquarters in the United States, filed 39 AI-related patents in the three months ending June, up from 28 over the same period in 2021.

It was followed by the France based Thales SA with 22 AI patent applications, the United States based Raytheon Technologies Corp (21 applications), and the Netherlands based Airbus SE (16 applications).

Airbus SE has recently ramped up R&D in AI. It saw growth of 37.5% in related patent applications in the three months ending June compared to the same period in 2021 - the highest percentage growth out of all companies tracked with more than 10 quarterly patents in the air force equipment and technologies sector.


Follow this link:
Artificial intelligence innovation among air force industry companies has dropped off in the last year - Airforce Technology


Artificial Intelligence Revolutionizing Content Writing – Entrepreneur

Opinions expressed by Entrepreneur contributors are their own.

You're reading Entrepreneur India, an international franchise of Entrepreneur Media.

The idea of Pepper Content germinated in a dormitory of BITS, Pilani. The story of the founders was similar to that of average Indian teenagers who wanted to pursue engineering.

The founders realized a shared passion for content. It was clear that, for brands, smartphones and the Internet had changed the principles of customer engagement and experience. More than 700 million Internet users, businesses included, were accessing and consuming different forms of content daily. However, access to quality content was not as easy.

"We asked ourselves that if, in this instant noodle economy, items like food and medicine get ordered and delivered at the tap of a button, then why can't content be treated and delivered the same way? Every company in the world has a content need. In today's day and age, this opportunity stands at a staggering $400 billion globally. This was when we began the B2B content marketplace, Pepper Content, in 2017," said Anirudh Singla, co-founder and CEO, Pepper Content.

The co-founders, despite limited resources and ongoing classes, assignments, and exams, persisted in pursuing their dream. In 2017, the company received its first order of 250 articles on automotives. Pepper Content enables marketers to connect with the best writers, designers, translators, videographers, editors, and illustrators, and vets the marketplace's creative professionals using its AI algorithms to make the right match between business and creative professionals. To support its creators, Pepper Content has invested in building tools that augment their ability and make them more productive, and one of its key products, Peppertype.ai, is currently being used by over 200,000 users across 150 countries. The company has onboarded over 1,000 enterprises and fast-growing startups, and works with over 2,500 customers, including organizations such as Adani Enterprises, NPS Trust, Hindustan Unilever, and P&G; financial services and insurance companies such as HDFC Bank, CRED, Groww, SBI Mutual Fund, and TATA Capital; and technology firms such as Binance, Google, and Adobe.

According to the co-founders, Pepper Content is not a startup or an agency but a platform that connects people seamlessly. The company aims to create the perfect symphony between creators and brands when it comes to content. The company is enabling strategic collaboration that will have a tangible, on-ground impact.

The co-founders always wanted to take a product-first approach which meant understanding the nuances and solving for every use case. The first products were hyper-customised sheets with deep linking of formulae and scripts that enabled the company to piece together workflows. The team worked on 25,000 content pieces on Google sheets and docs in the initial stages that helped the co-founders understand the customer workflow.

Businesses can directly order quality content on the platform with faster turnaround times and complete transparency on the project's progress. The company's intelligent algorithms take care of all the management aspects: from finding the best creator-project match to running agile workflows and driving integrated tool-supported editorial checks for quality content delivery.

"The content marketing industry stands at $400 billion, globally and it is only going to scale further. However, no organised players are enabling seamless workflow for brands. Every company produces and outsources content in written, image, audio, and video formats. To date, companies are required to post requirements, bid for projects and choose from a large list of bidders, and negotiate pay, making it cumbersome and, frankly, unscalable. We are solving this by offering a managed marketplace. We take care of entire content operations, right from the ordering flow to end-to-end delivery. For companies, quality content delivery creates trust and for creators, takes care of timely payments and operational inefficiencies," said Rishabh Shekhar, co-founder and COO, Pepper Content.

The co-founders struggled in the initial days since they did not know anyone from the investor community. "We cold-emailed 80 VC and angel investors! There were a lot of questions and conversations about the company's scale and our age. It took us three months but we persisted and were oversubscribed for the seed funding round. Over the years we scaled a B2B content marketplace, built a product that was unheard of, and have credible investors backing us. We realized that age is no hindrance if your vision is clear and you have a product that creates real impact."

Visit link:
Artificial Intelligence Revolutionizing Content Writing - Entrepreneur


11 new space anomalies discovered using Artificial Intelligence – Innovation News Network

The team examined digital images of the Northern sky obtained in 2018, using a k-D tree and the nearest neighbour method to detect space anomalies. The researchers then utilised machine learning algorithms to automate the search.

The study is published in New Astronomy.

Astronomical discoveries have increased drastically in recent years due to large-scale astronomical surveys. The Zwicky Transient Facility, for example, employs a wide-field view camera to survey the Northern sky, generating 1.4 TB of data each night of observation, with its catalogue containing billions of objects.

However, processing such colossal quantities of data manually is extremely expensive and time-consuming. To overcome this, the SNAD team, consisting of researchers from Russia, France, and the US, collaborated to devise an automated process.

When analysing astronomical objects, scientists observe their light curves, which show the variation of an object's brightness as a function of time. Scientists first identify a flash of light in the sky and then follow its evolution to see whether it becomes brighter or weaker, or goes out.

In their study, the researchers analysed a million real light curves from the ZTF's 2018 catalogue and seven simulated light curve models of the types of objects being studied. They followed a total of 40 parameters, including the amplitude of an object's brightness and its timeframe.

Konstantin Malanchev, co-author of the paper and postdoc at the University of Illinois at Urbana-Champaign, commented: "We described the properties of our simulations using a set of characteristics expected to be observed in real astronomical bodies. In the dataset of approximately a million objects, we were looking for super-powerful supernovae, Type Ia supernovae, Type II supernovae, and tidal disruption events. We refer to such classes of objects as space anomalies. They are either very rare, with little-known properties, or appear interesting enough to merit further study."

Subsequently, the team compared light curve data from real objects to the simulations using the k-D tree algorithm, a geometric data structure that divides space into smaller parts by cutting it with hyperplanes, planes, lines, or points. The algorithm was employed to narrow the search range when looking for real objects with properties similar to those in the seven simulations.

The researchers identified the 15 nearest neighbours (real objects from the ZTF database) for each simulation, 105 matches in total, which were then visually examined for space anomalies. The manual verification process confirmed 11 space anomalies: seven were supernova candidates, and four were active galactic nuclei candidates where tidal disruption events could occur.
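A hedged sketch of that nearest-neighbour step: light curves are summarised as feature vectors, a k-D tree is built over the real objects, and each simulated template queries its 15 closest real matches. The feature values, the 40-dimensional space, and the catalogue size used here are stand-ins for the SNAD team's actual pipeline.

```python
import numpy as np
from sklearn.neighbors import KDTree

rng = np.random.default_rng(2018)

# Stand-in feature vectors: the real catalogue held about a million light
# curves, each described by 40 parameters; 100,000 keeps this sketch quick.
real_features = rng.normal(size=(100_000, 40))
sim_features = rng.normal(size=(7, 40))        # the seven simulated templates

tree = KDTree(real_features)                   # space split recursively by hyperplanes
dist, idx = tree.query(sim_features, k=15)     # 15 nearest real objects per simulation

candidates = np.unique(idx)                    # up to 7 x 15 = 105 matches to inspect by eye
print(len(candidates))
```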

Maria Pruzhinskaya, a co-author of the paper and research fellow at the Sternberg Astronomical Institute, commented: "This is a very good result. In addition to the already-discovered rare objects, we were able to detect several new ones previously missed by astronomers. This means that existing search algorithms can be improved to avoid missing such objects."

The study demonstrates that the method is highly effective and easy to apply. Moreover, the method is universal and can be used to discover any astronomical object, not just rare types of supernovae.

Matvey Kornilov, Associate Professor of the HSE University Faculty of Physics, concluded: "Astronomical and astrophysical phenomena which have not yet been discovered are, in fact, anomalies. Their observed manifestations are expected to differ from the properties of known objects. In the future, we will try using our method to discover new classes of objects."

Read the rest here:
11 new space anomalies discovered using Artificial Intelligence - Innovation News Network


Risks posed by AI are real: EU moves to beat the algorithms that ruin lives – The Guardian

It started with a single tweet in November 2019. David Heinemeier Hansson, a high-profile tech entrepreneur, lashed out at Apple's newly launched credit card, calling it sexist for offering his wife a credit limit 20 times lower than his own.

The allegations spread like wildfire, with Hansson stressing that artificial intelligence, now widely used to make lending decisions, was to blame. "It does not matter what the intent of individual Apple reps are, it matters what THE ALGORITHM they've placed their complete faith in does. And what it does is discriminate. This is fucked up."

While Apple and its underwriters Goldman Sachs were ultimately cleared by US regulators of violating fair lending rules last year, it rekindled a wider debate around AI use across public and private industries.

Politicians in the European Union are now planning to introduce the first comprehensive global template for regulating AI, as institutions increasingly automate routine tasks in an attempt to boost efficiency and ultimately cut costs.

That legislation, known as the Artificial Intelligence Act, will have consequences beyond EU borders, and like the EU's General Data Protection Regulation, will apply to any institution, including UK banks, that serves EU customers. "The impact of the act, once adopted, cannot be overstated," said Alexandru Circiumaru, European public policy lead at the Ada Lovelace Institute.

Depending on the EU's final list of high-risk uses, there is an impetus to introduce strict rules around how AI is used to filter job, university or welfare applications, or, in the case of lenders, assess the creditworthiness of potential borrowers.

EU officials hope that with extra oversight and restrictions on the type of AI models that can be used, the rules will curb the kind of machine-based discrimination that could influence life-altering decisions such as whether you can afford a home or a student loan.

"AI can be used to analyse your entire financial health, including spending, saving and other debt, to arrive at a more holistic picture," said Sarah Kocianski, an independent financial technology consultant. "If designed correctly, such systems can provide wider access to affordable credit."

But one of the biggest dangers is unintentional bias, in which algorithms end up denying loans or accounts to certain groups including women, migrants or people of colour.

Part of the problem is that most AI models can only learn from the historical data they have been fed, meaning they will learn which kind of customer has previously been lent to and which customers have been marked as unreliable. "There is a danger that they will be biased in terms of what a good borrower looks like," Kocianski said. "Notably, gender and ethnicity are often found to play a part in the AI's decision-making processes based on the data it has been taught on: factors that are in no way relevant to a person's ability to repay a loan."

Furthermore, some models are designed to be blind to so-called protected characteristics, meaning they are not meant to consider the influence of gender, race, ethnicity or disability. But those AI models can still discriminate as a result of analysing other data points such as postcodes, which may correlate with historically disadvantaged groups that have never previously applied for, secured, or repaid loans or mortgages.

And in most cases, when an algorithm makes a decision, it is difficult for anyone to understand how it came to that conclusion, resulting in what is commonly referred to as black-box syndrome. It means that banks, for example, might struggle to explain what an applicant could have done differently to qualify for a loan or credit card, or whether changing an applicant's gender from male to female might result in a different outcome.

Circiumaru said the AI act, which could come into effect in late 2024, would benefit tech companies that managed to develop what he called trustworthy AI models that are compliant with the new EU rules.

Darko Matovski, the chief executive and co-founder of London-headquartered AI startup causaLens, believes his firm is among them.

The startup, which publicly launched in January 2021, has already licensed its technology to the likes of asset manager Aviva, and quant trading firm Tibra, and says a number of retail banks are in the process of signing deals with the firm before the EU rules come into force.

The entrepreneur said causaLens offers a more advanced form of AI that avoids potential bias by accounting and controlling for discriminatory correlations in the data. "Correlation-based models are learning the injustices from the past and they're just replaying it into the future," Matovski said.

He believes the proliferation of so-called causal AI models like his own will lead to better outcomes for marginalised groups who may have missed out on educational and financial opportunities.

"It is really hard to understand the scale of the damage already caused, because we cannot really inspect this model," he said. "We don't know how many people haven't gone to university because of a haywire algorithm. We don't know how many people weren't able to get their mortgage because of algorithm biases. We just don't know."

Matovski said the only way to protect against potential discrimination was to use protected characteristics such as disability, gender or race as an input but guarantee that regardless of those specific inputs, the decision did not change.

He said it was a matter of ensuring AI models reflected our current social values and avoided perpetuating any racist, ableist or misogynistic decision-making from the past. "Society thinks that we should treat everybody equal, no matter what gender, what their postcode is, what race they are. So then the algorithms must not only try to do it, but they must guarantee it," he said.
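A minimal, hypothetical sketch of that guarantee: keep the protected attribute as an explicit input, then verify that flipping it for the same applicant leaves the decision unchanged. The features, model, and data here are invented for illustration and are not causaLens's product.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000
income = rng.normal(50, 15, n)
debt = rng.normal(20, 5, n)
gender = rng.integers(0, 2, n)                  # protected attribute, kept explicit
approved = (income - debt > 25).astype(int)     # ground truth that ignores gender

X = np.column_stack([income, debt, gender])
model = LogisticRegression().fit(X, approved)

# Counterfactual check: same applicants, protected attribute flipped.
X_flipped = X.copy()
X_flipped[:, 2] = 1 - X_flipped[:, 2]
changed = (model.predict(X) != model.predict(X_flipped)).mean()
print(f"decisions changed by flipping the protected attribute: {changed:.1%}")
```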


While the EUs new rules are likely to be a big step in curbing machine-based bias, some experts, including those at the Ada Lovelace Institute, are pushing for consumers to have the right to complain and seek redress if they think they have been put at a disadvantage.

"The risks posed by AI, especially when applied in certain specific circumstances, are real, significant and already present," Circiumaru said.

"AI regulation should ensure that individuals will be appropriately protected from harm by approving or not approving uses of AI and have remedies available where approved AI systems malfunction or result in harms. We cannot pretend approved AI systems will always function perfectly and fail to prepare for the instances when they won't."

Continue reading here:
Risks posed by AI are real: EU moves to beat the algorithms that ruin lives - The Guardian


140 artificial intelligence-based systems along border to keep watch on China, Pak – The Tribune India

Tribune News Service

Ajay Banerjee

New Delhi, August 6

Enhancing the use of technology to keep an eye on China and Pakistan, the Army has deployed some 140 artificial intelligence-based surveillance systems to get live feed of the ground situation.

The 749-km-long Line of Control (LoC) with Pakistan and 3,448-km-long Line of Actual Control (LAC) with China now have world-class surveillance systems.

The systems include high-resolution cameras, sensors, UAV feed and radar feed, which are collated and applied through artificial intelligence to arrive at possible scenarios.


AI will enable remote target detection as well as classification of targets, be it a man or a machine. All AI-oriented machines are tuned for interpretation, change and anomaly detection, and even detecting intrusions at the borders, besides reading drone footage. This will considerably reduce the requirement for manual monitoring.

The AI-based surveillance units can also be utilised for real-time social media monitoring and even prediction of adversary actions. These projects are part of the 12 AI domains identified by the National Task Force of Technology. An AI-based suspicious vehicle recognition system has been deployed at eight locations in the Northern and Southern commands. This software has been deployed for generating intelligence in counter-terrorist operations.

The Army has set up an AI centre at the Military College of Telecommunication Engineering, Mhow. "AI is capable of providing considerable asymmetry during military operations and is one of the transformative changes in fighting wars," a source in the defence establishment said.

The Army has been collaborating with academia and Indian industry, as well as the DRDO, for such projects.

The Army is also looking at 5G technology to support operations on the battlefield. The high-bandwidth connectivity is suited to frontline troop communication.

A joint study was carried out on the implementation of 5G in armed forces, which was led by the Corps of Signals.


Go here to see the original:
140 artificial intelligence-based systems along border to keep watch on China, Pak - The Tribune India
