State and Local Agencies Learn Cloud Strategies from the Feds – StateTech Magazine

The Birth of the Cloud-First Approach

For the past several years, federal agencies have gotten pretty good at understanding what to do (and not to do) when it comes to the cloud. That means they've got a wealth of knowledge you can easily adopt for your own benefit.

For instance, in early 2011 the Obama administration formulated the Federal Cloud Computing Strategy, commonly known as Cloud First. That strategy gave federal agencies the green light to go all in on the cloud by requiring them to evaluate safe, secure cloud computing options before making any new investments. It was a visionary, necessary stake in the ground that successfully jump-started cloud adoption at the federal level.

Since then, federal agencies have learned a few things.

First, they discovered the practical reality that not every workload is appropriate for the cloud. For example, applications that rely on sensitive data, applications that would be too costly to move, and legacy apps that were never designed for the cloud or were soon to be retired were often better kept in on-premises data centers.

Then, agencies realized the costs of exiting the cloud could be quite high, as were the costs to store data. They didn't discover those costs until they had already taken that on-ramp to the cloud.

The feds learned there's no need to take a wholesale approach and migrate every application to the cloud. A hybrid cloud model, in which some applications are stored in the public cloud while others remain on-premises, is a valid approach that allows for better security while still leveraging the cost and flexibility benefits of the cloud.

Eschewing an all-or-nothing approach can save you from, as my company's CEO once put it, "the mother of all lock-ins," where all of your data and applications are designed for a single cloud vendor. In the early days, federal IT professionals were unprepared for the potentially high egress costs associated with extracting data from the cloud. You can learn from their experiences and create an exit strategy that includes an appropriate budget.
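As a rough, back-of-the-envelope illustration of why that exit budget matters, the sketch below simply multiplies a stored-data volume by a per-gigabyte egress rate. Both numbers are placeholder assumptions for illustration, not any provider's actual pricing.

```python
# Illustrative-only estimate of cloud egress costs for an exit plan.
# The data volume and per-GB rate below are placeholder assumptions,
# not actual pricing from any specific cloud provider.

def egress_cost_estimate(data_tb: float, rate_per_gb: float) -> float:
    """One-time cost to move data_tb terabytes out of a cloud at
    rate_per_gb dollars per gigabyte."""
    return data_tb * 1024 * rate_per_gb

if __name__ == "__main__":
    # Example: 500 TB of stored data at an assumed $0.09/GB egress rate.
    print(f"Estimated egress cost: ${egress_cost_estimate(500, 0.09):,.0f}")
```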

The tough lessons federal agencies learned led to an evolution in the way the government approached the cloud. Instead of thinking Cloud First, the Trump administration encouraged agencies to become Cloud Smart with a revised strategy introduced in 2019.

Cloud Smart focuses on three pillars: security, procurement and workforce. The idea is to use the cloud to modernize and improve data security, to use repeatable practices and knowledge sharing to streamline procurement processes, and to upskill, retrain and recruit key talent.

Each of these pillars is based on the need for open infrastructure components (such as operating systems and application servers), automation and knowledge sharing, respectively. Standardizing systems across all platforms and programs helps keep security strong.

Cloud Smart policy suggests expediting procurement as a centralized process in a common portal. Repetitive processes can be avoided by automating everyday tasks, such as installing upgrades and patches. Knowledge sharing stems from an open organization built upon the willingness of managers and employees to adopt philosophies emphasizing transparency, cross-departmental and cross-agency collaboration, and continuous updates.

All of these strategies are viable across levels of government. In fact, it's possible they're more applicable at the state and local levels, where agencies tend to be smaller and have limited budgets to devote to security and training, yet need to make processes more efficient.

Cloud Smart isn't the only federal resource states should check out. The CIO Council's Application Rationalization Playbook is a great resource for learning about rationalizing the many applications in your organization and determining which are appropriate for the cloud. The National Institute of Standards and Technology also has a number of best-practice documents downloadable for free.

There's no reason why you shouldn't cherry-pick for your own benefit what the federal government has already put in place. You can do so now and be ready to fully realize the promise and benefits of the cloud and steer clear of the well-known drawbacks, thanks to the trail the feds have already blazed.

Every dollar you don't spend on reinventing the wheel can go into innovation and improved service delivery, and you'll be on the same level as those federal organizations, all without having to go through the cloud-first learning curve.

ARM's new edge AI chips promise IoT devices that won't need the cloud – The Verge

Edge AI is one of the biggest trends in chip technology. These are chips that run AI processing on the edge or, in other words, on a device without a cloud connection. Apple recently bought a company that specializes in it, Google's Coral initiative is meant to make it easier, and chipmaker ARM has already been working on it for years. Now, ARM is expanding its efforts in the field with two new chip designs: the Arm Cortex-M55 and the Ethos-U55, a neural processing unit meant to pair with the Cortex-M55 for more demanding use cases.

The benefits of edge AI are clear: running AI processing on a device itself, instead of in a remote server, offers big benefits to privacy and speed when it comes to handling these requests. Like ARM's other chips, the new designs won't be manufactured by ARM; rather, they serve as blueprints for a wide variety of partners to use as a foundation for their own hardware.

But what makes ARM's new chip designs particularly interesting is that they're not really meant for phones and tablets. Instead, ARM intends for the chips to be used to develop new Internet of Things devices, bringing AI processing to more devices that otherwise wouldn't have those capabilities. One use case ARM imagines is a 360-degree camera in a walking stick that can identify obstacles, or new train sensors that can locally identify problems and avoid delays.

As for the specifics, the Arm Cortex-M55 is the latest model in ARM's Cortex-M line of processors, which the company says offers up to a 15x improvement in machine learning performance and a 5x improvement in digital signal processing performance compared to previous Cortex-M generations.

For truly demanding edge AI tasks, the Cortex-M55 (or older Cortex-M processors) can be combined with the Ethos-U55 NPU, which takes things a step further. It can offer another 32x improvement in machine learning processing compared to the base Cortex-M55, for a total of 480x better processing than previous generations of Cortex-M chips.
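Multiplying out the quoted figures shows where the headline number comes from; the snippet below simply restates the article's arithmetic.

```python
# The article's quoted multipliers, multiplied out.
cortex_m55_gain = 15   # ML performance vs. previous Cortex-M generations
ethos_u55_gain = 32    # further gain from pairing with the Ethos-U55 NPU

print(cortex_m55_gain * ethos_u55_gain)  # 480, the combined figure quoted
```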

While those are impressive numbers, ARM says that the improvement in data throughput here will make a big difference in what edge AI platforms can do. Current Cortex-M platforms can handle basic tasks like keyword or vibration detection. The M55's improvements let it work with more advanced things like object recognition. And the full power of a Cortex-M chip combined with the Ethos-U55 promises even more functionality, with the potential for local gesture and speech recognition.

All of these advances will take some time to roll out. While ARM is announcing the designs today and releasing documentation, it doesn't expect actual silicon to arrive until early 2021 at the earliest.

Configuration mistakes blamed for bulk of stolen records last year: IBM – IT World Canada

Misconfigured servers accounted for 86 per cent of the record 8.5 billion records compromised around the world last year, according to an analysis by IBM Security released today.

That was one of the conclusions reached by the unit in its annual Threat Intelligence Index, which draws on customer sensor data and other sources. (Registration required.)

What IBM calls the "inadvertent insider," also known as misconfigured servers, showed up across a wide range of vectors, including publicly accessible cloud storage, unsecured cloud databases, improperly secured sync backups, and open internet-connected network area storage devices.

"This is a stark departure from what we reported in 2018 when we observed a 52 per cent decrease from 2017 in records exposed due to misconfigurations, and these records made up less than half of total records," the report said.

It's not that the total number of misconfiguration incidents increased. Quite the contrary: the number of such incidents actually dropped 14 per cent year over year. The report says this implies that when a misconfiguration breach did occur, the number of records affected was significantly higher in 2019.
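Putting the report's two headline numbers together gives a sense of scale; the calculation below simply restates the figures quoted above.

```python
# Rough arithmetic behind the figures quoted above.
total_records = 8.5e9     # records compromised worldwide in 2019
misconfig_share = 0.86    # share attributed to misconfigured servers

print(f"{total_records * misconfig_share / 1e9:.1f} billion")  # ~7.3 billion records
```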

Nearly three-quarters of the breaches in which more than 100 million records were exposed were misconfiguration incidents. Two of those misconfiguration incidents alone, which occurred in what IBM calls the professional services sector, accounted for billions of records each.

IBM doesn't name the companies involved in those incidents. But one might have been the discovery of an unsecured ElasticSearch server with data that appeared to come from a U.S. data processing company or one of its subscribers.

Misconfiguration errors will only decrease if companies take security more seriously, Ray Boisvert, an associate partner in IBM Canada's security services who used to be a special security adviser to the Ontario government, said in an interview.

"It comes down to, for all organizations, that security needs to be woven into the fabric. The business processes, the launch of new services, the intranet for employees, web-facing content, needs to be linked to a philosophy that security is the enabler."

Tighter identity and access management, including the addition of two-factor authentication, is also imperative, he added.

The report also found:

Of the OT attacks, most were centred around using a combination of known vulnerabilities within SCADA (supervisory control and data acquisition) and ICS (industrial control system) hardware components, as well as password-spraying attacks using brute force login tactics against ICS targets.

"The overlap between IT infrastructure and OT, such as Programmable Logic Controllers (PLCs) and ICS, continued to present a risk to organizations that relied on such hybrid infrastructures in 2019," says the report.

Meanwhile, the huge number of devices clumped under the Internet of Things (internet-connected devices ranging from surveillance cameras to toys) has been gradually shaping up to be "one of the threat vectors that can affect both consumers and enterprise-level operations by using relatively simplistic malware and automated, often scripted, attacks," says the report.

The report urges organizations to take the following steps to better prepare for cyber threats this year:

IT infrastructure trends 2020 – Verdict

The market for IT infrastructure equipment will be dominated by increased options for customers' data management and increased demand for solutions that serve specific workloads.

Firms use private clouds to achieve a range of benefits, including improved IT resource efficiency, cost reductions, security, and the ability to gain more control over workload performance, security, and compliance. The use of private cloud solutions will remain strong over the next 12-24 months. Competition between private cloud vendors will also remain intense.

Underpinning edge computing is the cost in time and bandwidth to transport data generated by IoT devices over long distances to be processed at central data centres. Edge computing infrastructure will take multiple forms and will include micro data centres, dedicated edge servers, IoT gateways, and data management platforms, as well as hyperconverged infrastructure for edge deployments. 5G will be both a driver and enabler of edge computing.

HPC evolved in the 1960s from early scientific computing objectives for centralised, highly scalable processing in support of singular, compute intensive workloads. Solutions from HP, Cray, Fujitsu, IBM, and many others combined traditional desktop computer CPUs with specialised storage and connectivity resources in a large computing cluster. HPC will expand rapidly over the coming year to embrace probabilistic styles of computing in response to the growing demand for complex workloads such as AI modeling at scale.

The relationship between AI and data centre technologies focuses on two broad areas: AI for IT operations (AIOps) and the introduction of AI-optimised data centre platforms. AI-optimised data centre platforms will become an increasingly competitive market sub-segment over the next 12-24 months. Some platforms will incorporate AI capabilities as part of the overall solution while others will leverage the latest processing technologies and hardware accelerators to support workloads with high performance requirements.

Virtualisation involves the creation of virtual pools of compute, storage, and networking resources that are linked with but decoupled from the underlying physical hardware. VMware, the pioneer of virtualisation technology, accounts for over 80% of VMs with its ESXi hypervisor and vSphere virtualisation platform. Virtualisation software providers will offer solutions to help enterprises transition from rival technology platforms to their own.

Since cloud computing began ushering in a new application development and delivery economy in the form of platform services, policy based applications have become containerised and orchestrated through Kubernetes technology.

As applications have begun to respond to continuous integration and continuous delivery (CI/CD), they will present boundless opportunities along with the complexities associated with moving containerised apps into production. This will be helped by open source software (OSS) technologies such as the Istio service mesh, Prometheus monitoring, and other sidecar projects.

Data centre hardware includes computer servers, storage systems, networking switches and routers, and converged infrastructure appliances. Enterprise investment in data centre hardware is strongly influenced by demand from hyperscale companies, such as AWS, Google, and Facebook, as well as from colocation providers. One major trend that will shape the adoption of data centre hardware will be investments in hardware specifically designed to support next-generation workloads including high-capacity Ethernet switching and GPU-equipped servers and storage systems.

Silicon photonics is a major trend in the networking industry, but is of increasing importance in the data centre industry as well. Today, the practical application of silicon photonics is in pluggable optics for networking where the new packaging brings manufacturing and cost reductions. Companies like Cisco, Intel, and Macom are investing in photonic circuitry for networking, and for use either on die for chips or for interconnects on circuit boards.

Both legacy back-office and modern cloud-first solutions share one common denominator: data. This is big data, historically associated with the Apache Hadoop storage framework. Regardless of the underlying data storage platform, when coupled with supportive data processing technologies like Apache Spark, these big data platforms allow companies to ingest, process, and analyse tremendous amounts of data from a wide array of sources. We expect vendors to continue to invest in solutions such as Dataproc to shift discrete, splintered data storage to a unified platform.
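As a minimal sketch of the ingest-process-analyse pattern described above, the Apache Spark snippet below reads a file and aggregates it; the input path and column name are hypothetical placeholders, not taken from the report.

```python
# Minimal Apache Spark sketch of the ingest/process/analyse pattern.
# The input path and column name are hypothetical placeholders.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("big-data-sketch").getOrCreate()

# Ingest: read raw event data from distributed storage.
events = spark.read.csv("hdfs:///data/events.csv", header=True, inferSchema=True)

# Process and analyse: count events per source system, largest first.
summary = events.groupBy("source_system").count().orderBy("count", ascending=False)
summary.show()

spark.stop()
```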

SDN has settled into three camps dominated by Cisco, VMware, and a scattering of OpenFlow. The SDN market feels like it has stalled because there has not been a typical 2.0 moment, but it is moving quickly into new areas.

Quantum computers could open new market opportunities across security, life sciences, manufacturing, and many other industries. It will be some years yet before quantum supremacy is achieved, and many years before it is commercially available. For the next few years, we expect to see early movers focus on hardware and education.

This is an edited extract from the Tech, Media, & Telecom Trends 2020 Thematic Research report produced by GlobalData Thematic Research.

GlobalData is this website's parent business intelligence company.

Keeping classified information secret in a world of quantum computing – Bulletin of the Atomic Scientists

By the end of 1943, the US Navy had installed 120 electromechanical Bombe machines like the one above, which were used to decipher secret messages encrypted by German Enigma machines, including messages from German U-boats. Built for the Navy by the Dayton company National Cash Register, the US Bombe was an improved version of the British Bombe, which was itself based on a Polish design. Credit: National Security Agency

Quantum computing is a technology that promises to revolutionize computing by speeding up key computing tasks in areas such as machine learning and solving otherwise intractable problems. Some influential American policy makers, scholars, and analysts are extremely concerned about the effects quantum computing will have on national security. Similar to the way space technology was viewed in the context of the US-Soviet rivalry during the Cold War, scientific advancement in quantum computing is seen as a race with significant national security consequences, particularly in the emerging US-China rivalry. Analysts such as Elsa Kania have written that the winner of this race will be able to overcome all cryptographic efforts and gain access to the state secrets of the losing government. Additionally, the winner will be able to protect its own secrets with a higher level of security than contemporary cryptography guarantees.

These claims are considerably overstated. Instead of worrying about winning the quantum supremacy race against China, policy makers and scholars should shift their focus to a more urgent national security problem: How to maintain the long-term security of secret information secured by existing cryptographic protections, which will fail against an attack by a future quantum computer.

The race for quantum supremacy. Quantum supremacy is an artificial scientific goal, one that Google claims to have recently achieved, that marks the moment a quantum computer computes an answer to a well-defined problem more efficiently than a classical computer. Quantum supremacy is possible because quantum computers replace classical bits, representing either a 0 or a 1, with qubits that use the quantum principles of superposition and entanglement to do some types of computations an order of magnitude more efficiently than a classical computer. While quantum supremacy is largely meant as a scientific benchmark, some analysts have co-opted the term and set it as a national-security goal for the United States.

These analysts draw a parallel between achieving quantum supremacy and the historical competition for supremacy in space and missile technology between the United States and the Soviet Union. As with the widely shared assessment in the 1950s and 1960s that the United States was playing catch-up, Foreign Policy has reported on a quantum gap between the United States and China that gives China a first-mover advantage. US policy experts such as Kania, John Costello, and Congressman Will Hurd (R-TX) fear that if China achieves quantum supremacy first, that will have a direct negative impact on US national security.

Some analysts who have reviewed technical literature have found that quantum computers will be able to run algorithms that allow for the decryption of encrypted messages without access to a decryption key. If encryption schemes can be broken, message senders will be exposed to significant strategic and security risks, and adversaries may be able to read US military communications, diplomatic cables, and other sensitive information. Some of the policy discussion around this issue is influenced by suggestions that the United States could itself become the victim of a fait accompli in code-breaking after quantum supremacy is achieved by an adversary such as China. Such an advantage would be similar to the Allies' advantage in World War II when they were able to decrypt German radio traffic in near-real time using US and British Bombe machines (see photo above).

The analysts who have reviewed the technical literature have also found that quantum technologies will enable the use of cryptographic schemes that do not rely on mathematical assumptions, specifically a scheme called quantum key distribution. This has led to the notion in the policy community that quantum communications will be significantly more secure than classical cryptography. Computer scientist James Kurose of the National Science Foundation has presented this view before the US Congress, for example.

Inconsistencies between policy concerns and technical realities. It is true that quantum computing threatens the viability of current encryption systems, but that does not mean quantum computing will make the concept of encryption obsolete. There are solutions to this impending problem. In fact, there is an entire movement in the field to investigate post-quantum cryptography. The aims of this movement are to find efficient encryption schemes to replace current methods with new, quantum-secure encryption.

The National Institute of Standards and Technology is currently in the process of standardizing a quantum-safe public key encryption system that is expected to be completed by 2024 at the latest. The National Security Agency has followed suit by announcing its Commercial National Security Algorithm Encryption Suite. These new algorithms can run on a classical computer, a computer found in any home or office today. In the future, there will be encryption schemes that provide the same level of security against both quantum and classical computers as the level provided by current encryption schemes against classical computers only.

Because quantum key distribution enables senders and receivers to detect eavesdroppers, analysts have claimed that the ability of the recipient and sender [to] determine if the message has been intercepted is a major advantage over classical cryptography. While eavesdropper detection is an advancement in technology, it does not actually provide any significant advantage over classical cryptography, because eavesdropper detection is not a problem in secure communications in the first place.

When communicating parties use quantum key distribution, an eavesdropper cannot get ciphertext (encrypted text) and therefore cannot get any corresponding plaintext (unencrypted text). When the communicating parties use classical cryptography, the eavesdropper can get ciphertext but cannot decrypt it, so the level of security provided to the communicating parties is indistinguishable from quantum key distribution.
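To make the eavesdropper-detection point concrete, here is a toy, purely classical simulation of a BB84-style key exchange, assuming the textbook protocol rather than any specific deployment: when an interceptor measures and resends the qubits, roughly a quarter of the sifted key bits disagree, which the communicating parties can spot by comparing a sample of their key.

```python
# Toy BB84-style simulation of eavesdropper detection in quantum key
# distribution. This is a classical illustration of the protocol logic
# only, not a model of real quantum hardware.
import random

def measure(bit, prep_basis, meas_basis):
    """Measuring in the preparation basis returns the bit; measuring in the
    other basis returns a random result."""
    return bit if prep_basis == meas_basis else random.randint(0, 1)

def run_bb84(n_qubits=2000, eve_present=False):
    alice_bits = [random.randint(0, 1) for _ in range(n_qubits)]
    alice_bases = [random.choice("XZ") for _ in range(n_qubits)]
    bob_bases = [random.choice("XZ") for _ in range(n_qubits)]

    bob_bits = []
    for bit, a_basis, b_basis in zip(alice_bits, alice_bases, bob_bases):
        if eve_present:
            # Eve measures in a random basis and resends what she saw,
            # disturbing roughly a quarter of the sifted key.
            e_basis = random.choice("XZ")
            bit = measure(bit, a_basis, e_basis)
            a_basis = e_basis
        bob_bits.append(measure(bit, a_basis, b_basis))

    # Sifting: keep only positions where Alice and Bob chose the same basis.
    sifted = [(a, b) for a, b, ab, bb in
              zip(alice_bits, bob_bits, alice_bases, bob_bases) if ab == bb]
    errors = sum(1 for a, b in sifted if a != b)
    return errors / len(sifted)

print(f"error rate, no eavesdropper:   {run_bb84(eve_present=False):.2%}")
print(f"error rate, with eavesdropper: {run_bb84(eve_present=True):.2%}")
```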

The more pressing national security problem. While the technical realities of quantum computing demonstrate that there are no permanent security implications of quantum computing, there is a notable longer-term national security problem: Classified information with long-term intelligence value that is secured by contemporary encryption schemes can be compromised in the future by a quantum computer.

The most important aspect of the executive order that gives the US government the power to classify information, as it relates to the discussion of quantum computing and cryptography, is that this order allows for the classification of all types of information for as long as 25 years. Similarly, the National Security Agency provides guidelines to its contractors that classified information has a potential intelligence life of up to 30 years. This means that classified information currently being secured by contemporary encryption schemes could be relevant to national security through at least 2049, and will not be secure in the future against cryptanalysis enabled by a quantum computer.

In the past, the United States has intercepted and stored encrypted information for later cryptanalysis. Toward the end of World War II, for example, the United States became suspicious of Soviet intentions and began to intercept encrypted Soviet messages. Because of operator error, some of the messages were partially decryptable. When the United States realized this, the government began a program called the Venona Project to decrypt these messages.

It is likely that both the United States and its adversaries will have Venona-style projects in the future. A few scholars and individuals in the policy community have recognized this problem. Security experts Richard Clarke and Robert Knake have stated that governments have been rumored for years to be collecting and storing other nations' encrypted messages that they now cannot crack, with the hope of cracking them in the future with a quantum computer.

As long as the United States continues to use encryption algorithms that are not quantum-resistant, sensitive information will be exposed to this long-term risk. The National Institute of Standards and Technology's quantum-resistant algorithm might not be completed, and reflected in the National Security Agency's own standard, until 2024. The National Security Agency has stated that algorithms often require 20 years to be fully deployed on NSS [National Security Systems]. Because of this, some parts of the US national security apparatus may be using encryption algorithms that are not quantum-resistant as late as 2044. Any information secured by these algorithms is at risk of long-term decryption by US adversaries.

Recommendations for securing information. While the United States cannot take back any encrypted data already in the possession of adversaries, short-term reforms can reduce the security impacts of this reality. Taking 20 years to fully deploy any cryptographic algorithm should be considered unacceptable in light of the threat to long-lived classified information. The amount of time to fully deploy a cryptographic algorithm should be lowered to the smallest time frame feasible. Even if this time period cannot be significantly reduced, the National Security Agency should take steps to triage modernization efforts and ensure that the most sensitive systems and information are updated first.

Luckily for the defenders of classified information, existing encryption isn't completely defenseless against quantum computing. While attackers with quantum computers could break a significant number of classical encryption schemes, it still may take an extremely large amount of time and resources to carry out such attacks. While the encryption schemes being used today can eventually be broken, risk mitigation efforts can increase the time it takes to decrypt information.

This can be done by setting up honeypots, systems disguised as vulnerable classified networks that contain useless encrypted data, and allowing them to be attacked by US adversaries. This would force adversaries to waste substantial amounts of time and valuable computer resources decrypting useless information. Such an operation is known as defense by deception, a well-proven strategy to stymie hackers looking to steal sensitive information. This strategy is simply an application of an old risk mitigation strategy to deal with a new problem.
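A toy sketch of that deception idea follows: a listener that poses as a data service, logs whoever connects, and returns worthless random bytes. The port, log file, and payload size are illustrative assumptions, not a recommendation for any real deployment.

```python
# Toy "defense by deception" sketch: log connection attempts and serve
# decoy data. Port, log file, and payload size are illustrative only.
import logging
import os
import socket

logging.basicConfig(filename="honeypot.log", level=logging.INFO,
                    format="%(asctime)s %(message)s")

def serve(host="0.0.0.0", port=2222):
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind((host, port))
        srv.listen()
        while True:
            conn, addr = srv.accept()
            with conn:
                logging.info("connection from %s:%d", addr[0], addr[1])
                # Send a decoy "encrypted" payload: random bytes of no value.
                conn.sendall(os.urandom(4096))

if __name__ == "__main__":
    serve()
```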

Quantum computing will have an impact on national security, just not in the way that some of the policy community claims that it will. Quantum computing will not significantly reduce or enhance the inherent utility of cryptography, and the outcome of the race for quantum supremacy will not fundamentally change the distribution of military and intelligence advantages between the great powers.

Still, the United States needs to be wary of long-term threats to the secrecy of sensitive information. These threats can be mitigated by reducing the deployment timeline for new encryption schemes to something significantly less than 20 years, triaging cryptographic updates to systems that communicate and store sensitive and classified information, and taking countermeasures that significantly increase the amount of time and resources it takes for adversaries to exploit stolen encrypted information. The threats of quantum computing are manageable, as long as the US government implements these common-sense reforms.

Editor's Note: The author wrote a longer version of this essay under a Lawrence Livermore National Laboratory contract with the US Energy Department. Lawrence Livermore National Laboratory is operated by Lawrence Livermore National Security, LLC, for the US Department of Energy, National Nuclear Security Administration under Contract DE-AC52-07NA27344. The views and opinions of the author expressed herein do not necessarily state or reflect those of the United States government or Lawrence Livermore National Security, LLC. LLNL-JRNL-799938.

AI on steroids: Much bigger neural nets to come with new hardware, say Bengio, Hinton, and LeCun – ZDNet

Geoffrey Hinton, center, talks about what future deep learning neural nets may look like, flanked by Yann LeCun of Facebook, right, and Yoshua Bengio of Montreal's MILA institute for AI, during a press conference at the 34th annual AAAI conference on artificial intelligence.

The rise of dedicated chips and systems for artificial intelligence will "make possible a lot of stuff that's not possible now," said Geoffrey Hinton, the University of Toronto professor who is one of the godfathers of the "deep learning" school of artificial intelligence, during a press conference on Monday.

Hinton joined his compatriots, Yann LeCun of Facebook and Yoshua Bengio of Canada's MILA institute, fellow deep learning pioneers, in an upstairs meeting room of the Hilton Hotel on the sidelines of the 34th annual conference on AI by the Association for the Advancement of Artificial Intelligence. They spoke for 45 minutes to a small group of reporters on a variety of topics, including AI ethics and what "common sense" might mean in AI. The night before, all three had presented their latest research directions.

Regarding hardware, Hinton went into an extended explanation of the technical aspects that constrain today's neural networks. The weights of a neural network, for example, have to be used hundreds of times, he pointed out, making frequent, temporary updates to the weights. He said the fact graphics processing units (GPUs) have limited memory for weights and have to constantly store and retrieve them in external DRAM is a limiting factor.

Much larger on-chip memory capacity "will help with things like Transformer, for soft attention," said Hinton, referring to the wildly popular autoregressive neural network developed at Google in 2017. Transformers, which use "key/value" pairs to store and retrieve from memory, could be much larger with a chip that has substantial embedded memory, he said.
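As a reference point for the key/value mechanism Hinton is describing, here is a minimal NumPy sketch of scaled dot-product attention, the operation at the core of the Transformer. The shapes and random data are arbitrary, and this illustrates the published formula rather than Google's or any vendor's implementation.

```python
# Minimal scaled dot-product attention: queries are matched against stored
# keys, and the output is a weighted mix of the stored values.
import numpy as np

def scaled_dot_product_attention(q, k, v):
    d_k = k.shape[-1]
    scores = q @ k.T / np.sqrt(d_k)                       # query/key similarity
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)        # softmax over keys
    return weights @ v                                    # weighted retrieval of values

rng = np.random.default_rng(0)
q = rng.normal(size=(4, 8))    # 4 queries of width 8
k = rng.normal(size=(16, 8))   # 16 stored keys
v = rng.normal(size=(16, 8))   # 16 stored values
print(scaled_dot_product_attention(q, k, v).shape)        # (4, 8)
```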

LeCun and Bengio agreed, with LeCun noting that GPUs "force us to do batching," where data samples are combined in groups as they pass through a neural network, "which isn't efficient." Another problem is that GPUs assume neural networks are built out of matrix products, which forces constraints on the kind of transformations scientists can build into such networks.

"Also sparse computation, which isn't convenient to run on GPUs ...," said Bengio, referring to instances where most of the data, such as pixel values, may be empty, with only a few significant bits to work on.

LeCun predicted the new hardware would lead to "much bigger neural nets with sparse activations," and he and Bengio both emphasized there is an interest in doing the same amount of work with less energy. LeCun defended AI against claims it is an energy hog, however. "This idea that AI is eating the atmosphere, it's just wrong," he said. "I mean, just compare it to something like raising cows," he continued. "The energy consumed by Facebook annually for each Facebook user is 1,500 watt-hours," he said. Not a lot, in his view, compared to other energy-hogging technologies.

The biggest problem with hardware, mused LeCun, is that on the training side of things, it is a duopoly between Nvidia, for GPUs, and Google's Tensor Processing Unit (TPU), repeating a point he had made last year at the International Solid-State Circuits Conference.

Even more interesting than hardware for training, LeCun said, is hardware design for inference. "You now want to run on an augmented reality device, say, and you need a chip that consumes milliwatts of power and runs for an entire day on a battery." LeCun reiterated a statement made a year ago that Facebook is working on various internal hardware projects for AI, including for inference, but he declined to go into details.

Today's neural networks are tiny, Hinton noted, with really big ones having perhaps just ten billion parameters. Progress on hardware might advance AI just by making much bigger nets with an order of magnitude more weights. "There are one trillion synapses in a cubic centimeter of the brain," he noted. "If there is such a thing as General AI, it would probably require one trillion synapses."

As for what "common sense" might look like in a machine, nobody really knows, Bengio maintained. Hinton complained people keep moving the goalposts, such as with natural language models. "We finally did it, and then they said it's not really understanding, and can you figure out the pronoun references in the Winograd Schema Challenge," a question-answering task used as a computer language benchmark. "Now we are doing pretty well at that, and they want to find something else" to judge machine learning, he said. "It's like trying to argue with a religious person, there's no way you can win."

But, one reporter asked, what's concerning to the public is not so much the lack of evidence of human understanding, but evidence that machines are operating in alien ways, such as the "adversarial examples." Hinton replied that adversarial examples show the behavior of classifiers is not quite right yet. "Although we are able to classify things correctly, the networks are doing it absolutely for the wrong reasons," he said. "Adversarial examples show us that machines are doing things in ways that are different from us."

LeCun pointed out animals can also be fooled just like machines. "You can design a test so it would be right for a human, but it wouldn't work for this other creature," he mused. Hinton concurred, observing "house cats have this same limitation."

"You have a cat lying on a staircase, and if you bounce a soccer ball down the stairs toward a care, the cat will just sort of watch the ball bounce until it hits the cat in the face."

Another thing that could prove a giant advance for AI, all three agreed, is robotics. "We are at the beginning of a revolution," said Hinton. "It's going to be a big deal" to many applications such as vision. Rather than analyzing the entire contents of a static image or video frame, a robot creates a new "model of perception," he said.

"You're going to look somewhere, and then look somewhere else, so it now becomes a sequential process that involves acts of attention," he explained.

Hinton predicted last year's work by OpenAI in manipulating a Rubik's cube was a watershed moment for robotics, or, rather, an "AlphaGo moment," as he put it, referring to DeepMind's Go computer.

LeCun concurred, saying that Facebook is running AI projects not because Facebook has an extreme interest in robotics, per se, but because it is seen as an "important substrate for advances in AI research."

It wasn't all gee-whiz; the three scientists offered skepticism on some points. While most research in deep learning that matters is done out in the open, some companies boast of AI while keeping the details a secret.

"It's hidden because it's making it seem important," said Bengio, when in fact, a lot of work in the depths of companies may not be groundbreaking. "Sometimes companies make it look a lot more sophisticated than it is."

Bengio continued his role among the three of being much more outspoken on societal issues of AI, such as building ethical systems.

When LeCun was asked about the use of facial recognition algorithms, he noted technology can be used for good and bad purposes, and that a lot depends on the democratic institutions of society. But Bengio pushed back slightly, saying, "What Yann is saying is clearly true, but prominent scientists have a responsibility to speak out." LeCun mused that it's not the job of science to "decide for society," prompting Bengio to respond, "I'm not saying decide, I'm saying we should weigh in because governments in some countries are open to that involvement."

Hinton, who frequently punctuates things with a humorous aside, noted toward the end of the gathering his biggest mistake with respect to Nvidia. "I made a big mistake back in with Nvidia," he said. "In 2009, I told an audience of 1,000 grad students they should go and buy Nvidia GPUs to speed up their neural nets. I called Nvidia and said I just recommended your GPUs to 1,000 researchers, can you give me a free one, and they said no.

"What I should have done, if I was really smart, was take all my savings and put it into Nvidia stock. The stock was at $20 then, now it's, like, $250."

Why The Race For AI Dominance Is More Global Than You Think – Forbes

When people hear about the race for Artificial Intelligence (AI) dominance, they often think that the main competition is between the US and China. After all, the US and China have most of the largest and most well funded AI companies on the planet, and the pace of funding, company growth, and adoption doesn't seem to be slowing anytime soon. However, if you look closely, you'll see that many other countries have a stake in the AI race, and indeed, some countries have AI efforts, funding, technologies, and intellectual property that make them serious contenders in the jostling for AI dominance. In fact, according to a recent report from analyst firm Cognilytica, France, Israel, the United Kingdom, and the United States are all equally strong when it comes to AI, with China, Canada, Germany, Japan, and South Korea equally close in their AI strategic strength. (Disclosure: I'm a principal analyst with Cognilytica.)

The Current Leaders in AI Funding and Dominance: US and China

AI startups are raising more money than ever. AI-focused companies raised $12 billion in 2017 alone, more than doubling venture funding over the previous year. Most of this funding is concentrated in US and Chinese companies, but the source of those funds is much more international. Softbank, based in Japan, has amassed a $100 billion investment fund, with many international investors including Saudi Arabia's sovereign investment fund and other global sources of capital. While US companies have put up significant investment rounds with the power of Silicon Valley's VC funds, China now has the most valuable AI startup, Sensetime, which raised over $1.2 billion, with a rumored additional $1 billion raise on the way.

However, what makes AI as a technology sector different from previous major waves of investment is that AI is seen as a strategic technology by many governments. In 2017, China released a three-step program outlining its goal to become a world leader in A.I. by 2030. The government aims to make the AI industry worth about $150 billion and is pushing for greater use of AI in a number of areas such as the military and smart cities. Furthermore, the Chinese government has made big bets, including a planned $2.1 billion AI-focused technology research park. And in 2019, The Beijing AI Principles were released by a multistakeholder coalition including the Beijing Academy of Artificial Intelligence (BAAI), Peking University, Tsinghua University, the Institute of Automation and Institute of Computing Technology of the Chinese Academy of Sciences, and an AI industrial league involving firms like Baidu, Alibaba and Tencent.

In addition, the Chinese technology ecosystem has developed to become a powerhouse in its own right. China has many multi-billion dollar tech giants including Alibaba, Baidu, Tencent, and Huawei Technologies, which are each heavily investing in AI. Chinese companies also work more closely with the Chinese government, and laws in China are the most relaxed with regard to customer privacy and the use of AI technologies such as facial recognition on their citizens. China's government has already embraced the use of facial recognition technology and has quickly adopted this technology in everyday use. In most other countries, such as the US, privacy concerns prevent pervasive use of facial recognition technology, but such concerns or impediments to adoption don't exist in China.

The story of technology company creation and funding in the United States is already well known. Silicon Valley is both a region as well as a euphemism for the entire tech industry, showing how dominant the US has been for the past several decades with technology creation and adoption. Venture capital as an industry was invented and perfected in the US, and the result of that has been the creation of such enduring tech giants like Amazon, Apple, Facebook, Microsoft, Google, IBM and thousands of other technology firms big and small. Collectively trillions of dollars has been invested in these firms by private and public sector investors to create the technology industry as we know it today. Certainly, none of that is going away anytime soon.

In addition, the US has an extremely well developed and highly skilled labor pool, with academic powerhouses and research institutions that continue to push the boundaries of what is possible with AI. What is notable is that even in the US, the dominance of Silicon Valley as a specific, San Francisco Bay-area geographic region is starting to slip. The New York City region has produced many large AI-focused technology firms, and research in the Boston area centered around MIT and Harvard, Pittsburgh with Carnegie Mellon, the Washington, DC metro area with its legions of government-focused contractors and development shops, Southern California's emerging tech ecosystem, Seattle-based Amazon and Microsoft, and many more locations in the US are loosening the hold that Northern California has on the technology industry with respect to AI. And just outside the US, Canadian firms from Toronto, Montreal, and Vancouver are further eroding the dominance of Silicon Valley with respect to AI.

In 2018, the United States issued an Executive Order from the President naming AI the second-highest R&D priority, after the security of the American people, for the fiscal year 2020. Additionally, the U.S. Department of Defense announced it will invest up to $2 billion over the next five years towards the advancement of AI. As recently as 2020, the United States launched the American AI Initiative, a strategy aimed at focusing federal government resources. The US federal government also launched AI.gov to make it easier to access all of the governmental AI initiatives currently underway. Once potentially seen as lackluster in comparison to that of China and other countries, the US government has made AI a real priority in recent years in order to keep up.

Countries With Significant Stakes in AI

As mentioned above, what makes the AI industry unique is that it is actually not a new thing, but rather evolved over decades, even prior to the development of the modern digital computer. As a result, many technology developments, investment, and intellectual property exists outside the US and China. Countries that have been involved with AI since the early days are realizing the strategic nature of AI and doubling down on their efforts to retain a stake in global AI share and maintain their relevance and importance.

Japan

Japan has long been a leader in the AI industry, and in particular their development and adoption of robotics. Japanese firms introduced concepts such as the 3 Ds (Ks) of robotics that we discussed in our research on cobots. Not only is their technology research excellence on par with anywhere in the world, they have the funding to back it up. As mentioned earlier, Japan-based Softbank is an investor powerhouse unrivaled in the venture capital industry.

Japan's government released its Artificial Intelligence Technology Strategy in March 2017. This strategy includes an Industrialization Roadmap and focuses the development of AI into three phases: the utilization and application of AI through 2020, the public's use of AI from 2025 to 2030, and lastly an ecosystem built by connecting multiplying domains. The country's strategy focuses on R&D for AI, collaboration between industry, government, and academia to advance AI research, and addressing areas related to productivity, welfare and mobility.

However, it is important to note that while Japan continues to exhibit dominance in robotics and other AI fields, and retains its Softbank powerhouse, many of the firms that Softbank is investing in are not Japan-based, so much of that investment is not staying focused on Japan's own AI industry. In addition, while technology development is advanced and rapidly progressing, and while Japan is known as a country that embraces technology, many Japanese companies have not been quick to embrace AI, and its use is largely limited to the financial sector and concentrated in the manufacturing industry. The country is also facing significant demographic pressure, with an aging population causing a shortage in the available workforce. On the one hand, the adoption of AI and robotic technologies is seen as a solution to labor shortages and aging demographics; on the other hand, the lack of workforce will cause strategic problems for the creation of AI-dominant companies.

South Korea

South Korea's government is a significant investor and strong supporter of local technology development, and AI is certainly no exception. The government recently announced it plans to spend $2 billion by 2022 to strengthen its AI R&D capability, including creating at least six new AI schools by 2020, with plans to educate more than 5,000 new high-quality engineers in Korea in response to a shortage of AI engineers. The government also plans to fund large-scale AI projects related to medicine, national defense, and public safety, as well as starting an AI R&D challenge similar to those developed by the US Defense Advanced Research Projects Agency (DARPA). The government will also invest to support the creation and development of AI startups and businesses. This support includes the creation of an AI-oriented start-up incubator to support emerging AI businesses and funding for the creation of an AI semiconductor by 2029.

South Korea is home to many large tech companies such as Samsung, LG, and Hyundai, among others, and is known for its automotive, electronics, and semiconductor industries as well as its use of industrial robotics technology. It also famously hosted the match where DeepMind's AlphaGo defeated Go world champion Lee Sedol (a Korean native). Clearly, you can't count South Korea out of any race for AI dominance. The only thing significantly lacking is a well-developed venture capital ecosystem and a large number of startups. South Korea's AI efforts are almost entirely concentrated in the activities of the major technology incumbents and government activities.

United Kingdom

The United Kingdom is a clear leader in AI, and the government is financially supporting AI initiatives. In November 2017, the UK government announced £68 million of funding for research into AI and robotics projects aimed at improving safety in extreme environments, as well as funding four new research hubs that will be created to help develop robotic technology to improve safety in off-shore wind and nuclear energy. It has a goal of about $1.3 billion in AI investment from both public and private funds over the coming years. As part of this plan, Global Brain, a Japan-based venture capital firm, plans to invest about $48 million in AI-focused UK-based tech startups as well as open a European headquarters in the United Kingdom. Canadian venture capital firm Chrysalix also plans to open a European headquarters in the U.K. as well as invest over $100 million in UK-based startups that specialize in AI and robotics. The University of Cambridge is installing a $13 million supercomputer and will give U.K. businesses access to the new supercomputer to help with AI-related projects.

The U.K. is of course also the home of Alan Turing, renowned forefather of computing and an early proponent of AI, with the namesake Turing Test. The UK can also claim (in not such a great light) to be one of the precipitating factors of the first AI Winter, when the Lighthill Report was released in 1973, leading to significant declines in AI investment. As such, the UK has in the past exhibited significant influence, both positively and negatively, on worldwide AI spending and adoption. To avoid future problems, the U.K. is looking to position itself as a world leader in ethical AI standards. The UK sees this as an opportunity to position itself as an AI leader with ethical AI, helping to create standards used for all. It knows it can't compete with AI funding and development from countries like the US and China but thinks it has a shot by taking an ethical-standards approach and leveraging its early status as a leader in AI development.

France

France's President Emmanuel Macron released a national strategy for artificial intelligence in early 2018. The country announced that over the next five years it will invest more than €1.5 billion in AI-related research and support for emerging startups in a bid to compete with the US, China, and others for AI dominance. The French strategy is to put an emphasis on and target four specific areas of AI related to health, transportation (such as driverless cars), the environment, and defense/security. Some notable AI researchers and data scientists were educated in France, such as Facebook's head of AI Yann LeCun. France wants to try to keep that talent in France instead of losing it to overseas companies.

Many companies such as Samsung, Fujitsu, DeepMind, IBM and Microsoft have announced plans to open offices in France for AI research. The French administration also wants to share new data sets with the public, making it easy to access and build AI services using those data sets. The caveat to receiving public funds is that research projects or companies financed with public money will have to share their data. Many European Union (EU) officials have expressed dismay with the way that Facebook, Google, Microsoft, Amazon, and others have hoarded user data, and Macron and his administration are concerned about the black box of AI data and decision-making. France is also focused on addressing the ethical concerns around AI as well as trying to create unbiased data sets, which is part of the reason for open algorithms and data sets. While France's efforts are significant, they pale in terms of total money put into the industry and resources available to compete with the efforts of other nations.

Germany

Germany is an industrial powerhouse, has long been known to have great engineering capabilities, and Berlin is currently Europe's top AI talent hub. According to Atomico's 2017 State of European Tech report, Germany is most likely to become a leader in areas such as autonomous vehicles, robotics and quantum computing. In fact, almost half of all worldwide patents on autonomous driving come from German car companies or their suppliers such as Bosch, Volkswagen, Audi and Porsche. These German companies had begun their autonomous vehicle development activities as early as 1986.

A new tech hub region in southern Germany, called Cyber Valley, is hoping to create new opportunities for collaboration between academics and businesses with a specific focus on AI. The new hub plans to focus on AI and robotics, make better use of research talent, and collaboratively work with companies such as Porsche, Daimler and Bosch. In addition to autonomous vehicles, Germany has an early lead with robotics, with one of the first cobots developed in Germany for use in manufacturing. Additionally, Germany's AI strategy was published in December 2018 in Nuremberg. And, in 2019, the German government tasked a new Data Ethics Commission with producing guidelines for the development and use of AI.

Despite these intellectual property and early market leads, Germany has not invested at the same levels as other countries, and its technology firms are highly concentrated in the manufacturing, automotive, and industrial sectors, leaving other markets mostly untapped with AI capabilities. Furthermore, American automakers such as Ford and GM, as well as Google's Waymo, Uber and other firms, are quickly catching up in the number of patents issued and threatening Germany's dominance in intellectual property in that area.

Russia

Russian president Vladimir Putin has stated that "artificial intelligence is the future, not only for Russia, but for all of humankind" and that "whichever country becomes the leader in this sphere will become the ruler of the world." This is one powerful statement. Russia has said that intelligent machines are vital to the future of its national security plans and, by 2025, it plans to make 30% of the country's military equipment robotic. The government also wants to standardize development of artificial intelligence focusing on image recognition, speech recognition, autonomous military systems, and information support for the weapons life-cycle. There is also a new Russian AI Association bringing the academic and private sectors together. Additionally, Russian President Vladimir Putin approved the National Strategy for the Development of Artificial Intelligence (NSDAI) for the period until 2030 in October 2019.

Russia is still a world superpower in terms of military might, and exerts significant influence in world markets, especially in the energy sector. Despite that, Russian investment in AI still significantly lags that of other countries, with only a reported $12M invested by the government in research efforts. While Russia has had significant input and efforts around AI research in the university setting, the country's industry lacks overall AI talent and has relatively few companies working towards AI-related initiatives. Many skilled Russian engineers leave the country to work at firms worldwide that are throwing lots of money at skilled talent. As such, the biggest application of AI in Russia is in physical and cyberwarfare situations, leveraging AI to enhance the capabilities of autonomous vehicles and information warfare. In this arena, Russia is certainly a country to be reckoned with when it comes to AI dominance.

Other AI Hotspots

In addition to the above, there are many countries that see AI as a country-level strategic initiative, including Israel, India, Denmark, Sweden, Estonia, Finland, the Netherlands, Poland, Singapore, Malaysia, Australia, Italy, Canada, Taiwan, the United Arab Emirates (UAE), and other locations. Some of these countries have more financial than technical resources, or vice versa. The key is that each of these countries sees AI in a strategic light, and as such they've crafted a strategic approach to AI.

AI technologies have the ability to transform and influence the lives of many people. Not only will AI transform the way we work, interact with each other and travel between locations, but it also has an impact on weapons technology, modern warfare, and a country's cyber security. AI can also have a dramatic impact on the labor market, disrupting entire industries and creating whole new ones. As such, having a focus on AI dominance can also help strengthen a country's economy, shift global leadership and power, and confer military advantages. While the race for AI domination might seem similar to the Space Race or aspects of the Cold War, in reality the AI market doesn't support a winner-take-all approach. Indeed, continued advancement in AI requires research and industry collaboration, continued research and development, and industry-wide thinking and solutions to problems. While there will no doubt be winners and losers in terms of overall investment and return, countries worldwide will reap the benefits of increased adoption and development of cognitive technologies.

Excerpt from:
Why The Race For AI Dominance Is More Global Than You Think - Forbes

Read More..

Bitcoin Dominance Could Cause Catastrophic Ending to Current Altcoin Season – newsBTC

Bitcoin is the first-ever cryptocurrency that all other altcoins were designed after, with many providing additional benefits above and beyond what the original crypto has to offer.

Altcoins have recently been vastly outperforming Bitcoin in what crypto analysts refer to as an "alt season," but it all may come to a surprising end, leading to a further collapse, if the strong signal Bitcoin dominance is showing is confirmed.

Because Bitcoin was the first crypto asset ever released, it enjoys a first-mover advantage and everything that comes with it.

Bitcoin has the most brand recognition, it has the largest market cap, and it is the most widely used cryptocurrency in terms of on-chain transactions.

Related Reading | Alt Season Cancelled: XRP, Ethereum, and Litecoin All Trigger Sell Signal

Oftentimes, it's Bitcoin that dictates the greater trend across the crypto market, including each peak and trough in between, while altcoins like Ethereum, Litecoin, and XRP play follow the leader.

Occasionally, altcoins will begin to outperform Bitcoin for an extended period of time in what crypto analysts have dubbed an alt season. Since the start of 2020, altcoins have been surging, helping to carry Bitcoin higher from local lows.

The surge in altcoins and a lagging Bitcoin have caused Bitcoin dominance to drop from highs around 72% to as low as 64% in recent weeks.

But BTC dominance, a metric that weighs Bitcoin's market capitalization against the rest of the crypto market, has formed a falling wedge, suggesting that altcoins will experience a deadly drop against Bitcoin in the days ahead. Whether that means Bitcoin will explode through $10,000 and leave altcoins in its dust, or that the two crypto classes will drop together with altcoins falling even harder, either of which could cause the rise in BTC dominance, remains to be seen.
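For readers unfamiliar with the metric, BTC dominance is simply Bitcoin's market capitalization expressed as a share of the total crypto market capitalization. A minimal sketch of that calculation follows; the market-cap figures are hypothetical placeholders chosen to land near the levels discussed above, not live data.

```python
def btc_dominance(btc_market_cap: float, total_crypto_market_cap: float) -> float:
    """Return Bitcoin dominance as a percentage of the total crypto market cap."""
    return btc_market_cap / total_crypto_market_cap * 100

# Hypothetical figures for illustration only (USD).
btc_cap = 180e9     # assumed Bitcoin market cap
total_cap = 280e9   # assumed total crypto market cap
print(f"BTC dominance: {btc_dominance(btc_cap, total_cap):.1f}%")  # ~64.3%
```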

The falling wedge, typically a bullish structure, is also accompanied by a massive bullish divergence on the MACD. Divergences occur when an indicator's values move opposite to the price action, and they can be a powerful signal of what's to come.
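To make the idea concrete, here is a minimal sketch, not the analyst's charting method, of how a bullish MACD divergence can be checked programmatically: price prints a lower low while the MACD line prints a higher low. The price series and swing-low positions are made up for illustration; in practice they would be read off the chart.

```python
import pandas as pd

def macd_line(close: pd.Series, fast: int = 12, slow: int = 26) -> pd.Series:
    """Standard MACD line: fast EMA of closes minus slow EMA of closes."""
    return close.ewm(span=fast, adjust=False).mean() - close.ewm(span=slow, adjust=False).mean()

def is_bullish_divergence(close: pd.Series, low1: int, low2: int) -> bool:
    """True if price makes a lower low while the MACD line makes a higher low.
    low1 and low2 are index positions of two successive swing lows."""
    macd = macd_line(close)
    return close.iloc[low2] < close.iloc[low1] and macd.iloc[low2] > macd.iloc[low1]

# Hypothetical usage with a toy price series.
prices = pd.Series([98.0, 95.0, 91.0, 94.0, 97.0, 96.0, 93.0, 90.5, 92.0, 95.5])
print(is_bullish_divergence(prices, low1=2, low2=7))
```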

Breaking up from the falling wedge could cause a revisit to highs around 70% dominance. But zooming out, even a rise in BTC dominance may be short-lived before a massive breakdown of dominance happens.

When zooming out on the same Bitcoin dominance chart from daily to weekly timeframes, a very different picture is painted.

BTC dominance can be seen resting on a massive, two-year-long trendline dating back to the crypto bubble. During that time, altcoins had exploded after Bitcoin reached its all-time high. But the bubble popped, and altcoin prices fell, in many cases by as much as 99%, to their lows.

BTC dominance breaking below the line could cause an all-out alt season that would shock the crypto market.

On the higher-timeframe chart, BTC dominance can also be seen with a massive bearish divergence. It's difficult to say whether the bearish divergence was foretelling the recent move down from the highs to the diagonal trendline, or whether it's signaling a substantial fall in Bitcoin dominance following a breakdown of the multi-year trendline.

Related Reading | Bitcoin Bull Market Failure: Why A Year-Long Trendline Could Signal Doom

Lastly, despite the bearish signals, the upward-slanting diagonal trendline and the dashed horizontal line appear to be forming an ascending triangle, a powerfully bullish chart pattern that typically breaks upward.

Should Bitcoin dominance break upward from the ascending triangle rather than breaking down from the diagonal trendline, it could spell the end of all altcoins outside of the top ten.

See more here:
Bitcoin Dominance Could Cause Catastrophic Ending to Current Altcoin Season - newsBTC

Read More..

Why these 3 altcoins are outperforming the rest this year – Micky News

Total crypto market capitalization is at its highest level since September 2019. A further $9 billion has flowed in overnight taking the figure to just below $280 billion.

Bitcoin has been making slow but steady gains as it inches toward the psychological $10K barrier, but altcoins are having a bit of a resurgence at the moment, with some performing much more strongly than others.

These three altcoins are the top twenty's top performers, with triple-digit gains so far in 2020 according to coinmarketcap.com.

Love it or hate it, Bitcoin SV is one of the year's top-performing crypto assets so far. Priced just below $100 on New Year's Day, the Craig Wright-spawned BTC offshoot has gained a whopping 200% to current levels.

After coming close to falling out of the top ten, it is now the fifth-largest cryptocurrency by market cap, which currently stands at around $5.5 billion.

BSV hit an all-time high this year of over $400 before retreating to current prices around $300, which is still triple what it was at the beginning of the year.

FOMO over Wright's alleged Tulip Fund has driven momentum for this one, along with the Genesis hard fork earlier this month.

Ethereum Classic has also had a monumental year, with a gain of 163% to date. ETC was priced at a lowly $4.50 at the beginning of the year and has since surged to just below $12.

An 18-month high was hit just a few days ago when Ethereum Classic surged past $13. It has also moved up the market cap chart and currently sits 14th, just shy of $1.4 billion.

Network fundamentals are improving, with all-time-high hash rates and a successful Agharta hard fork. Additionally, Grayscale Investments has committed to funding ETC development for two more years.

The privacy-centric Dash is the third altcoin to notch up a triple-digit gain so far this year.

Priced at just $41 on New Year's Day, Dash has surged a whopping 195% to current levels around $120. Prices are at a seven-month high at the moment as this crypto asset keeps attracting investment.

Dash's market cap has topped $1 billion again, and it has moved up to 16th spot in the charts.

As reported by Micky earlier this month, Dash's momentum has largely been driven by greater adoption in Latin America. A number of network improvements and wallet deployments have also strengthened the digital cash platform.

The other two top-twenty altcoins with three-figure gains this year are Bitcoin Cash, up 117%, and IOTA, up 107%.
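As a quick check on the arithmetic behind these figures, the sketch below computes the year-to-date gains from the rounded start-of-year and current prices quoted above; the small differences from the cited 163% and 195% come from that rounding.

```python
def percent_gain(start_price: float, current_price: float) -> float:
    """Percentage gain from a start price to the current price."""
    return (current_price - start_price) / start_price * 100

# Rounded prices quoted in the article.
print(f"BSV:  {percent_gain(100.00, 300.00):.0f}%")   # ~200%
print(f"ETC:  {percent_gain(4.50, 12.00):.0f}%")      # ~167%
print(f"Dash: {percent_gain(41.00, 120.00):.0f}%")    # ~193%
```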

These will be the ones to watch as 2020 plays out.

Read more here:
Why these 3 altcoins are outperforming the rest this year - Micky News

Read More..

Ripples XRP Emerges Victorious in Altcoin Twitter Ring Fight Against IOTA, Cardano, and Tron – ZyCrypto

Ripple's XRP was overwhelmingly voted for by a majority of respondents in a recent Twitter poll. This was after DYOR podcast host Tom Buonincontri asked his Twitter followers to choose the one altcoin among IOTA, ADA, XRP, and TRX that they would stick with forever.

After the final vote count, IOTA had the fewest enthusiasts, accounting for 15.4% of the total votes. Third place went to Tron's TRX with a score of 17.9%, and the second spot was taken by Cardano's ADA with 23% of the counted votes.

The results generated a heated debate in the comment section, with mixed reactions to the choices provided. In one comment, a follower said he would exit the crypto ecosystem if it consisted only of the coins on that list.

Without a detailed report on why the respondents picked their choices, we are left to speculate and analyze the results. Most likely, the voters used different criteria to pick their crypto of choice.

Some may have based their choice on an altcoin's future scalability, leaning toward the more affordable coins with a promising future. This might give a clearer insight into why XRP won the fierce Twitter ring fight.

XRP holders and investors seem to be in the majority, as the current market valuation suggests. At press time, the XRP market cap stood at $12.134 billion, with the 24-hour trading volume at $2.38 billion. As its volume keeps growing by the day, the recent XRP bull rally could be a mere shadow of what is awaiting in the coming weeks.

With Ripple possibly looking to go public through an IPO in the course of the year, public confidence in the coin is expected to scale up before the big event. Analysts have been giving reasons to believe Ripple's XRP is outperforming the king coin, Bitcoin, and the broader crypto market in 2020. In addition, Ripple's CEO Brad Garlinghouse has been very vocal in defending the coin's legitimacy in the crypto space.

With the crypto market offering a wide variety of digital assets to choose from and invest in, getting a definitive answer as to why the respondents were inclined toward specific coins might be difficult.

With each altcoin on Tom's list offering a different flavor to its users, it comes down to marketing craft, the coin's user-friendliness, and, certainly, the coin's future scalability.


Originally posted here:
Ripples XRP Emerges Victorious in Altcoin Twitter Ring Fight Against IOTA, Cardano, and Tron - ZyCrypto

Read More..