Mighty plans fast Chromium browser that streams from the cloud – Mac Software Discussions on AppleInsider Forums – AppleInsider

Developer Mighty plans to "Make Chrome Faster" by streaming the browser from the cloud -- and it's starting on macOS.

Google Chrome is famously slow on the Mac, and has been shown to be much less efficient than Safari. Now a developer is aiming to offer a browser that is "indistinguishable" from Chrome, except faster because it is streamed.

"We're excited to finally unveil Mighty, a faster browser that is entirely streamed from a powerful computer in the cloud," says Mighty in a blog post. "After two years of hard work, we've created something that's indistinguishable from a Google Chrome that runs at 4K, 60 frames a second, takes no more than 500 MB of RAM, and often less than 30% CPU even with 50+ tabs open."

"If you're not sure what that means, imagine your browser is a Netflix video but running on cutting-edge server hardware somewhere else," continues the blog. "When you switch to Mighty, it will feel like you went out and bought a new computer with a much faster processor and much more memory. But you don't have to buy a new computer. All you have to do is download a desktop app."

That desktop app is a Mac one that is only available in private beta, though users can request access. It can also be seen in a demonstration video.

Mighty says that it has forked Chromium in part to make "the software interoperate with a long list of macOS features."

Running a browser remotely obviously requires fast internet connections if a user is not to keep experiencing delays.

"Lag would have been a real problem 5 years ago," continues the company, "but new advances since then have allowed us to eliminate nearly all of it."

Its solutions revolve around its own design for "a new low-latency network protocol," plus siting "servers as close to users geographically as possible." However, Mighty is still working on this, and states that part of its "master plan" is to "improve worldwide latency of the Internet."

Separately, Google itself is attempting to improve the speed and performance of Chrome, including on the Mac. It's attempting to shrink "its memory footprint in background tabs on macOS," said the company in March 2021.

The Coolest Big Data Systems And Platform Companies Of The 2021 Big Data 100 – CRN

A Systemic Approach

Business analytics and data visualization applications, database software, data science and data engineering tools: all are critical components of a comprehensive initiative to leverage a business's data assets for competitive gain.

But all those components run on hardware servers, operating software and cloud platforms that pull all those pieces together.

As part of the 2021 Big Data 100, CRN has compiled a list of major system and cloud platform companies that solution providers should be aware of. They include major computer system vendors like Dell Technologies, Hewlett Packard Enterprise and IBM that provide servers and operating software packaged for big data applications; cloud service providers like Amazon Web Services, Google and Snowflake that offer cloud-based big data services; and leading big data software developers including Microsoft, Oracle and SAP.

This week CRN is running the Big Data 100 list in slide shows, organized by technology category, with vendors of business analytics software, database systems, data management and integration software, data science and machine learning tools, and big data systems and platforms.

(Some vendors market big data products that span multiple technology categories. They appear in the slideshow for the technology segment in which they are most prominent.)

Cloud security and privacy: 7 action items you should consider – TechBeacon

While cloud migration isn't as controversial as it used to be for many organizations, issues about security linger. That's why it's important for security teams to put together a solid program to protect their cloud environments.

To do that, it's useful to have a list of action items, high-priority projects, that will serve as the pillars of a robust cloud security program.

Here are key action items to consider to bolster your cloud security and privacy.

Using an industry standard should be the starting point for building, implementing, and maintaining a cloud security strategy.

"Security guidelines can be useful for organizations to ensure that they've covered a full set of protections," said Eric Hanselman, chief analyst at 451 Research.

"The challenge is adapting them to your specific operational capabilities and team skills. By their nature, these aren't one-size-fits-all recommendations, and organizations will need to translate them into a workable plan." --Eric Hanselman

Guides available to organizations include the Center for Internet Security Controls Cloud Companion Guide, the Cloud Security Alliance Cloud Controls Matrix, and the National Institute of Standards and Technology publication SP 800-144, "Guidelines on Security and Privacy in Public Cloud Computing."

"Guides can be used to show customers, stakeholders, and partners that you're investing in security and can pass audits, which is key to doing business in the modern world," said Arick Goomanovsky, co-founder and chief business officer at Ermetic, an access policy management company.

However, he cautioned: "You have to remember that they give you high-level guidance of what you should be thinking about. They're not comprehensive. They can miss a lot."

"They tell you what to do, but they don't tell you how to do it." --Arick Goomanovsky

Cloud security posture management (CSPM) is a way to determine whether an organization's cloud applications and services are configured securely.

"CSPM is one of the first things an organization should do when it deploys to the cloud, because it allows it to get a quick sense of its security posture," noted Deloitte Risk & Financial Advisory partner Aaron Brown. "CSPM can identify misconfigurations and vulnerabilities within the cloud platform."

"CSPM is the necessary sanity check on cloud operations," 451 Research's Hanselman added. "It's the backstop that organizations require to ensure minimum security configurations in cloud."

Tim Bandos, CISO at Digital Guardian, a data loss protection and managed detection and response company, explained that CSPM allows organizations to monitor risks and fix some security issues automatically. "CSPM can detect issues like lack of encryption, improper encryption key management, extra account permissions, and others," he said.
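
The automated checks Bandos describes boil down to evaluating resource configurations against a rule set. A minimal Python sketch of the idea follows; the resource records, field names, and rules are illustrative inventions, not any vendor's actual CSPM API or cloud schema:

```python
# Minimal sketch of a CSPM-style configuration scan.
# The resource schema and rule set below are hypothetical examples.

RULES = {
    "unencrypted storage": lambda r: r.get("type") == "bucket" and not r.get("encrypted", False),
    "public access enabled": lambda r: r.get("public", False),
    "wildcard permissions": lambda r: "*" in r.get("allowed_actions", []),
}

def scan(resources):
    """Return (resource_name, finding) pairs for every rule violation."""
    findings = []
    for resource in resources:
        for finding, violates in RULES.items():
            if violates(resource):
                findings.append((resource["name"], finding))
    return findings

inventory = [
    {"name": "logs-bucket", "type": "bucket", "encrypted": False, "public": True},
    {"name": "app-db", "type": "database", "encrypted": True,
     "allowed_actions": ["read", "write"]},
]

for name, finding in scan(inventory):
    print(f"{name}: {finding}")
```

Real CSPM products run rule sets like this continuously against live cloud inventories and, as Bandos notes, can remediate some findings automatically.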

The technology also fits nicely into modern application development techniques by integrating security procedures into DevOps processes. "Locking down your environment with vulnerability scanning and CSPM solutions is a key part of a shift-left strategy, securing as much as possible pre-runtime," said John Morgan, CEO of Confluera, a cloud cybersecurity detection and response provider.

Cloud access security brokers give organizations the visibility to maintain consistent security policies and governance across one or multiple cloud deployments. The broker does network inspection as it sits between the cloud service provider and the organization.

"It can catch shadow IT," Deloitte's Brown said. "It makes sure people in your organization aren't consuming cloud services outside your governance model."

"CASBs aren't a comprehensive solution to cloud or SaaS security," warned Tim Bach, vice president of engineering at AppOmni, a provider of security posture management services.

"They can inspect cloud traffic that flows through the proxy-access gateway, but they don't have visibility into traffic that bypasses the proxy and connects to the cloud provider directly. This means that they don't monitor or manage the many data access points outside the network. These access points are used by external guest users, customers, contractors, partners, third-party applications, and IoT devices." --Tim Bach

"Access may get intentionally granted to these users or granted accidentally through misconfiguration or user error," he continued. "Unfortunately, we see that more than 95% of organizations have overprovisioned access to their external users."

Knowing what security controls are offered by a cloud service provider (CSP) is an essential part of cloud management. Clearly defined, documented, and agreed-to responsibilities are imperative to securing an organization's cloud environment.

"Cloud service providers and cloud customers have different requirements within different types of cloud environments, such as IaaS, PaaS, and SaaS," observed Kayla Williams, vice president for information technology governance, risk, and compliance at Devo Technology, a cloud-native logging and security analytics company.

For example, according to the CIS shared responsibility model, network control responsibilities within an IaaS environment are split between the CSP and the customer, while network controls in a SaaS environment are the responsibility of the CSP alone. "If a company were not aware of these differentiating control obligations," Williams said, "they could be left exposed to critical risks in their network."
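
The division of duties Williams describes can be captured as a simple lookup table. The sketch below encodes only the two examples mentioned in the text; it is an illustration, not the complete CIS shared responsibility matrix:

```python
# Sketch of a shared-responsibility lookup. Only the two cases from the
# text are encoded: network controls are split in IaaS but belong solely
# to the provider in SaaS. Not the full CIS matrix.

RESPONSIBILITY = {
    ("IaaS", "network controls"): "shared (CSP and customer)",
    ("SaaS", "network controls"): "CSP only",
}

def who_secures(model, control):
    """Look up who is responsible for a control under a given service model."""
    return RESPONSIBILITY.get((model, control),
                              "unknown: consult your provider's documentation")

print(who_secures("IaaS", "network controls"))  # shared (CSP and customer)
print(who_secures("SaaS", "network controls"))  # CSP only
```

Building out a table like this for every control family is one concrete way to document the "clearly defined, documented, and agreed-to responsibilities" the section calls for.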

"You cannot secure what you do not know about," AppOmni's Bach said. "Creating an inventory of your cloud providers, cloud services, and the controls they do and do not provide is a critical starting point to deploying proper security management tooling and processes."

That inventory becomes particularly important when dealing with multiple clouds. "Security controls and their depth differ across CSPs, so enterprises need to be aware of this and potentially use third-party cloud-native security solutions that provide a single pane of visibility and control across clouds and take the burden away from enterprises to understand these differences across CSPs," said Vishal Jain, co-founder and CTO of Valtix, a maker of a multi-cloud network security platform.

While identifying a CSP's security controls sounds like a straightforward process, it may not be. "The additional challenge in cloud is understanding the nature of controls that are available in detail," 451 Research's Hanselman noted. "It's all too easy to presume that similar-sounding control capabilities are the same as those we're used to. That's often not the case, and can lead to coverage gaps."

Many organizations struggle with controlling who has access to their cloud services. Common mistakes include enabling global permissions on servers, allowing any machine to connect to them, and permitting Secure Shell connections directly from the internet, allowing anyone who can figure out the server location to bypass the firewall and directly access data on the server.

All CSPs offer identity and access control tools that can be used to determine who or what has access to cloud resources. Use them.

Access to your cloud by human users should have some form of multifactor authentication. Privileged identities for users, applications, and services should be tightly controlled, and least-privilege policies implemented. "You have to make sure that users and applications in the environment have access only to relevant data," Ermetic's Goomanovsky said.
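
An audit for the overly broad grants described above can be sketched in a few lines. The policy format here is a simplified, hypothetical stand-in, not a real cloud provider's IAM schema:

```python
# Sketch of a least-privilege audit over identity policies.
# The policy structure is an invented illustration.

def overly_permissive(policy):
    """Flag statements that grant wildcard actions or apply to all resources."""
    problems = []
    for stmt in policy["statements"]:
        if stmt.get("action") == "*":
            problems.append(f'{policy["name"]}: grants all actions')
        if stmt.get("resource") == "*":
            problems.append(f'{policy["name"]}: applies to all resources')
    return problems

policies = [
    {"name": "admin-legacy", "statements": [{"action": "*", "resource": "*"}]},
    {"name": "app-reader", "statements": [{"action": "read", "resource": "reports-db"}]},
]

for policy in policies:
    for problem in overly_permissive(policy):
        print(problem)
```

Every major CSP's native identity tooling can run equivalent checks against real policies; the point is to make the audit routine rather than ad hoc.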

Rajiv Pimplaskar, chief revenue officer of Veridium, maker of an authentication platform, also recommends that organizations consider scrapping passwords.

"A modern access management strategy has to consider going passwordless as a core principle. Passwordless solutions offer the best security while also reducing friction, thereby enhancing user experience." --Rajiv Pimplaskar

Encryption is a fail-safe for data anywhere. If security controls fail, encryption prevents attackers from doing anything with any data they steal.

All of the major CSPs offer encryption tools and key management services. Before using those tools, an organization has to ask itself, "What can I accomplish with the default encryption capabilities of my cloud service provider?"

Some organizations, though, don't believe encryption should be delegated to a CSP, especially when it comes to allowing CSPs to manage encryption keys. "That's like locking a door and leaving the key in the lock," observed Reiner Kappenberger, product management director for data security at Micro Focus.

"Organizations should consider format-preserving encryption or tokenization to protect data at a field level so they de-identify data without making changes to a database. With format-preserving encryption, you can encrypt fields that contain sensitive data and leave other fields unencrypted." --Reiner Kappenberger

"That's a key aspect," he continued, "especially when migrating into the cloud because the organization is handing their data to someone else, the cloud provider. Data protection is never more important than it is in that environment."
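
A toy illustration of the field-level idea Kappenberger describes: replace each sensitive value with a reversible token that keeps its shape, so digits stay digits and separators stay put, and the database schema never changes. A production system would use a vetted scheme such as NIST FF1 format-preserving encryption, not this sketch:

```python
import random
import string

# Toy field-level tokenization: each sensitive value maps to a random
# token with the same shape, and the mapping is stored for recovery.
# Illustration only; real deployments use vetted FPE (e.g., NIST FF1).

class Tokenizer:
    def __init__(self, seed=None):
        self._rng = random.Random(seed)
        self._forward = {}   # value -> token
        self._reverse = {}   # token -> value

    def tokenize(self, value):
        if value in self._forward:          # same value, same token
            return self._forward[value]
        token = "".join(
            self._rng.choice(string.digits) if ch.isdigit()
            else self._rng.choice(string.ascii_letters) if ch.isalpha()
            else ch                          # keep separators in place
            for ch in value
        )
        self._forward[value] = token
        self._reverse[token] = value
        return token

    def detokenize(self, token):
        return self._reverse[token]

t = Tokenizer(seed=7)
card = "4111-1111-1111-1111"
token = t.tokenize(card)
print(token)  # same length and dash positions as the original
assert t.detokenize(token) == card
```

Because the token preserves the field's format, downstream systems and database columns keep working on de-identified data.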

Although using the security tools a cloud provider already offers sounds like a no-brainer, many organizations just don't seem to get around to doing it.

"Leveraging native cloud security capabilities is always a good idea," 451 Research's Hanselman said. "The challenge organizations face is in integrating those capabilities into their existing security operations."

"Native security tools can't become an operational island, disconnected from the core security environment," he continued. "That's a path that will create additional work for security teams and potentially leave gaps in coverage and understanding."

Cost considerations may also influence the decision to take full advantage of the security offerings of a cloud service provider, Ermetic's Goomanovsky added.

"You have to realize there's no free lunch. These tools aren't free. When you turn them on, you're going to have to pay for them." --Arick Goomanovsky

"You have to make an informed decision about the best strategy," he continued. "Do you want to turn on all these services for all your cloud service providers? How do you synchronize events coming from each provider?"

"The alternative would be to go to a third-party vendor, which will give you a unified view of your environment and will do the integration of the events coming from each cloud," Goomanovsky maintained.

Whether an organization uses its CSP's security and monitoring tools or someone else's, having them in place is important, not only for security but also for its brand. "Having controls in place to safeguard a company's systems and information entrusted to it is the first step to gaining customer and market confidence as a security-conscious company," Devo Technology's Williams observed.

"Being able to monitor those security controls and your network and to respond in near-real time to anomalies and potential events and incidents is absolutely critical. Company reputational risk is not only dependent on an event that impacts a company but also on how quickly it is acknowledged and responded to." --Kayla Williams

Dell tackles telco cloud transition with Red Hat blueprints – FierceTelecom

Dell Technologies rolled out new telco cloud and multi-access edge compute (MEC) reference architectures based on Red Hat's OpenShift platform, addressing two key challenges operators face as they transition to cloud-native environments. Red Hat is a subsidiary of IBM.

Dell's Red Hat OpenShift Reference Architecture for Telecom addresses the components needed to build a cloud-native network extending from the core to the edge. In addition to Red Hat's platform, it comprises Dell's edge and core servers, open networking switches and storage.

Kevin Gray, director of solutions marketing for Dell's Telecom Systems Business, told Fierce the company integrated Ansible modules into its edge servers and storage products, allowing developers using the architecture to adopt an infrastructure-as-code approach.

Dell also unveiled a MEC reference architecture, which marries its servers with Red Hat's OpenShift container platform and Intel's smart edge technology.

Gray said the new offerings aim to address two key challenges facing operators by streamlining their path to building cloud-native networks and accelerating the deployment of edge services.

RELATED: Dell EMC positions itself as a player in 5G

"If you look at some of the issues operators have, they're moving to cloud-native architectures and they're looking to accelerate the process of delivering those," he said, adding another of their most significant concerns is just building new revenue streams. "And if you look at this Dell platform, it's ideally suited for delivering services either on customer premises or at the telco edge."

He noted its edge architecture in particular can be paired with 5G or LTE for private network deployments, serving use cases across manufacturing, healthcare and other segments.

Gray added Dell's decision to use Red Hat OpenShift for its reference design reflects strong demand for open solutions in the marketplace.

"The push toward open standards and open networking we see as an inevitable trend, especially in the service provider segment," he said. "Hands down, what we're hearing from operators is this move to open is believed to give them more choice, improved cost structure based on industry standards, greater agility."

Dell Technologies is no stranger to involvement in open initiatives, contributing to the Linux Foundation's OpenSwitch project and recently signing on as a founding member of the Open Grid Alliance.

Middle East Data Center Market Outlook and Forecast 2021-2026: Big Data, IoT & Cloud Drive Data Center Investment & Migration From On-Premises…

Dublin, April 28, 2021 (GLOBE NEWSWIRE) -- The "Data Center Market in Middle East - Industry Outlook and Forecast 2021-2026" report has been added to ResearchAndMarkets.com's offering.

The Middle East data center market by investment is expected to grow at a CAGR of 7% during the period 2020-2026.

The data center market has seen steady growth since the outbreak of the COVID-19 pandemic, which heightened demand for internet-related services amid nationwide lockdowns and restrictions. IoT-enabled devices saw high acceptance for monitoring and surveillance purposes, especially in the healthcare sector, during the pandemic.

Government agencies have also contributed to the growth of cloud-based services in the Middle East. In Kuwait, the Ministry of Health (MoH) and the Central Agency for Information Technology (CAIT) collaborated with Zain to launch an application, Shlonik, to monitor citizens who have returned to the country.

The growing adoption of cloud, IoT, and big data in the wake of COVID-19 has increased colocation investments in the region. Cloud, social media, and video conferencing service providers have contributed majorly to data generation. Many enterprises operating in the cloud are migrating to colocation data centers to operate hybrid infrastructure services.

Middle East Data Center Market Segmentation

The Middle East data center market research report includes a detailed segmentation by IT infrastructure, electrical infrastructure, mechanical infrastructure, cooling technique, cooling systems, general construction, tier standards, and geography. The Middle East IT infrastructure market is expected to grow at a CAGR of over 6% during 2020-2026.

The server market is shifting slowly from rack-based to blade servers to support a high-density operating environment. This is because of the increased usage of IoT, big data analytics, artificial intelligence, and machine learning by enterprises across the Middle Eastern market. Enterprises prefer servers that can reduce space in the data center environment without affecting performance.

The UPS market in the Middle East is expected to cross USD 89 million during the forecast period. There has been a steady rise in the deployment of edge computing, and colocation operators are investing in the region, which increases the demand for high-capacity UPS systems. The use of 750-1,500 kVA systems has witnessed high adoption, along with UPS systems of less than 500 kVA. The adoption of lithium-ion batteries is also likely to increase, as their price is expected to decline during the forecast period.

Cooling systems, including water-cooled chillers, CRAH, and cooling towers, are installed with N+20 redundancy in the Middle East region. Most facilities in the region are designed to cool servers via water-based cooling techniques. The growing construction of data centers in the UAE is a key factor driving the deployment of multiple chillers, cooling towers, and CRAH units; in Saudi Arabia, data center construction will similarly increase the adoption of these units.

Newly constructed data center facilities will use advanced air-based cooling techniques because of the high temperature. Data centers in Turkey adopt CRAC & CRAH units and chiller units. In terms of redundancy, most operators use N+1 and N+2 cooling redundancy. Besides, operators are likely to deploy dual water feed for efficient and uninterruptable operations.

Brownfield development is more cost-effective than greenfield development in the Middle East. In the Middle Eastern region, data center commissioning service providers follow standard operating procedures, depending on the depth of commissioning required in the facility. The Middle East data center market is mostly dominated by small data centers with less than 15 MW of power capacity. The building & engineering design market is expected to reach over USD 32 million in 2026, growing at a CAGR of approximately 7%.

The increasing demand for reliable, efficient, and flexible building infrastructure among service operators is expected to influence its growth. The adoption of physical security apparatus is increasing to protect the data and information. BFSI, telecommunication, and healthcare are the most vulnerable sectors for intrusions and breaches. Sensors and video cameras are installed for surveillance in data centers in the region.

The use of artificial intelligence and machine learning in the data center is driving the DCIM solutions market. DCIM solutions improve efficiency, monitor power consumption, and predict system failures. They are becoming a major part of data center operations as they monitor critical elements such as power, cooling, and IT infrastructure.

Most data centers in the UAE are Tier III certified or built according to Tier III standards. Several data centers in Saudi Arabia are built according to Tier III or Tier IV standards and have a minimum redundancy of N+1 in power and cooling infrastructure. Tier III data centers are equipped with UPS systems redundancy of N+1.

The majority of modern data centers in Turkey are developed according to Tier III standards, with a minimum of N+1 redundancy in power infrastructure. A few data centers operate 2N power infrastructure or have additional capacity to commission 2N infrastructure solutions based on the customer's demand. Most developments in Turkey are greenfield projects, whereas modular data centers are confined to enterprise on-premises deployments.

INSIGHTS BY VENDORS

Arista Networks, AWS, Atos, Broadcom, Cisco Systems, and Dell Technologies are among the major IT vendors expanding their presence in the Middle East region.

They have a strong physical presence in the region and are the major adopters of high-density, mission-critical servers, storage infrastructure, and network infrastructure. Vendors are focusing on sustainable data centers. They procure renewable energy sources in partnership with local service providers. Construction and design are critical for data center operators because they need to adhere to the Uptime standards.

Key Data Center Critical (IT) Infrastructure Providers

Key Data Center Contractors

Key Data Center Investors

Key Topics Covered:

1 Research Methodology

2 Research Objectives

3 Research Process

4 Scope & Coverage
4.1 Market Definition
4.2 Base Year
4.3 Scope of The Study
4.4 Market Segments

5 Report Assumptions & Caveats
5.1 Key Caveats
5.2 Currency Conversion
5.3 Market Derivation

6 Market at a Glance

7 Introduction
7.1 Internet & Data Growth
7.2 Data Center Site Selection Criteria

8 Market Opportunities & Trends
8.1 5G To Grow Edge Data Center Investments
8.2 Growing Procurement Of Renewable Energy
8.3 Submarine Cable Deployment & Impact On Data Center Investment
8.4 Smart City Initiatives Fuel Data Center Deployments

9 Market Growth Enablers
9.1 Impact Of COVID-19 On Data Center Market
9.2 Rise In Data Center Investments
9.3 Big Data, IoT & Cloud Drive Data Center Investment
9.4 Migration From On-Premises Infrastructure To Colocation & Managed Services
9.5 Modular Data Center Development

10 Market Restraints
10.1 Location Constraints For Data Center Construction
10.2 Lack Of Skilled Workforce
10.3 Security Breaches Hinder Data Center Investments

11 Market Landscape
11.1 Market Overview
11.2 Market Size & Forecast
11.3 Area
11.4 Power Capacity
11.5 Five Forces Analysis

12 Infrastructure
12.1 Market Snapshot & Growth Engine
12.2 IT Infrastructure
12.3 Electrical Infrastructure
12.4 Mechanical Infrastructure
12.5 General Construction

13 IT Infrastructure
13.1 Market Snapshot & Growth Engine
13.2 Server Infrastructure
13.3 Storage Infrastructure
13.4 Network Infrastructure

14 Electrical Infrastructure
14.1 Market Snapshot & Growth Engine
14.2 UPS Systems
14.3 Generators
14.4 Transfer Switches & Switchgear
14.5 Power Distribution Units
14.6 Other Electrical Infrastructure

15 Mechanical Infrastructure
15.1 Market Snapshot & Growth Engine
15.2 Cooling Systems
15.3 Racks
15.4 Other Mechanical Infrastructure

16 Cooling Systems
16.1 Market Snapshot & Growth Engine
16.2 CRAC & CRAH Units
16.3 Chiller Units
16.4 Cooling Towers, Condensers & Dry Coolers
16.5 Other Cooling Units

17 Cooling Technique
17.1 Market Snapshot & Growth Engine
17.2 Air-Based Cooling Techniques
17.3 Liquid-Based Cooling Techniques

18 General Construction
18.1 Market Snapshot & Growth Engine
18.2 Core & Shell Development
18.3 Installation & Commissioning Services
18.4 Building & Engineering Design
18.5 Physical Security
18.6 DCIM/BMS Solutions

19 Tier Standards
19.1 Market Snapshot & Growth Engine
19.2 TIER I & II
19.3 TIER III
19.4 TIER IV

20 Geography
20.1 Market Snapshot & Growth Engine
20.2 Area Snapshot & Growth Engine
20.3 Power Capacity Snapshot & Growth Engine
20.4 United Arab Emirates
20.5 Saudi Arabia
20.6 Turkey
20.7 Jordan
20.8 Other Middle Eastern Countries

21 Competitive Landscape
21.1 IT Infrastructure
21.2 Construction Contractors
21.3 Data Center Investors

For more information about this report visit https://www.researchandmarkets.com/r/6pglwg

Arm boosts its Neoverse V1 and N2 chip designs with new mesh interconnect – SiliconANGLE News

British chip designer Arm Ltd. today announced a more advanced mesh interconnect for the Neoverse architecture chips that it said is a key element for partners aiming to develop more sophisticated systems-on-chip.

Arm announced its Neoverse V1 and Neoverse N2 platforms in September, saying the chip blueprints will serve as the basis for a new breed of more powerful, server-grade central processing units. At the time of their release, Arm said the V1 and N2 platforms were an upgrade on its existing Neoverse N1 architecture used by Amazon Web Services Inc. and three others that are counted among the world's seven largest so-called hyperscale data center operators.

It's important to note that Arm doesn't build chips itself. Rather, it designs and licenses chip architectures to processor manufacturers, who use them as the basis of their own branded chips.

In a blog post today, Chris Bergey, senior vice president and general manager of Arm's Infrastructure Line of Business, said the V1 and the N2 are compatible with the latest five-nanometer manufacturing process being adopted in the chip industry. They also support a number of other technologies that should give a boost to certain niche use cases, he said.

For example, the Neoverse V1 supports Scalable Vector Extension technology that delivers a higher per-core performance and better code longevity and provides SoC designers with more implementation flexibility. The Neoverse N2, meanwhile, supports SVE2, a newer version of the SVE technology that Bergey said drives more performance efficiency in cloud-to-edge use cases such as machine learning, digital signal processing, multimedia and 5G.

Bergey explained the Neoverse V1 and N2 chips offer up to 50% and 40% better performance, respectively, than the current-generation Neoverse N1. The V1 is designed for building processors that provide a high amount of performance per thread while the N2 is for CPUs with high core counts, he explained.

Which of these two chip types is preferable mainly depends on what applications a company is running: Software products that are billed per processor core, for example, are best run on processors with modest core counts and higher per-thread performance.
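
The per-core billing point is easy to see with invented numbers. The sketch below compares two hypothetical CPUs with equal aggregate throughput; the core counts, performance figures, and license fee are illustrative assumptions, not actual Neoverse specifications or real license prices:

```python
# Illustrative arithmetic for per-core software licensing.
# All numbers are hypothetical.

LICENSE_PER_CORE = 1000  # assumed annual license fee per core

def license_cost(cores):
    return cores * LICENSE_PER_CORE

high_core = {"cores": 64, "per_thread_perf": 1.0}  # many modest cores
high_perf = {"cores": 32, "per_thread_perf": 2.0}  # fewer, faster cores

# Both deliver the same aggregate throughput...
assert (high_core["cores"] * high_core["per_thread_perf"]
        == high_perf["cores"] * high_perf["per_thread_perf"])

# ...but per-core-licensed software costs twice as much on the
# high-core-count part.
print(license_cost(high_core["cores"]))  # 64000
print(license_cost(high_perf["cores"]))  # 32000
```

This is why, as the article notes, workloads billed per processor core tend to favor designs like the V1 with higher per-thread performance.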

"As Moore's Law comes to an end, solution providers are seeking specialized processing," Bergey wrote. "Enabling specialized processing has been a focal point since the inception of our Neoverse line of platforms, and we expect these latest additions to accelerate this trend."

Wikibon analyst David Floyer told SiliconANGLE that Arm's Neoverse server chips are striking at the high end of the data center server market, including high-performance computing, artificial intelligence and matrix application workloads that are currently dominated by x86-based chips sold by companies such as Intel Corp. and Advanced Micro Devices Inc. But he said that while more than 90% of servers in use today run on x86 chip architectures, those companies have good reasons to be worried by the emergence of the Neoverse chips.

"I believe that Arm-based architectures will offer an increasingly lower cost, higher performance and greater security for cloud vendors," Floyer said.

Analyst Patrick Moorhead of Moor Insights & Strategy told SiliconANGLE the Neoverse N2 is the more interesting of the two new designs thanks to its focus on general purpose server computing.

"I believe it will raise the bar on single-thread performance and will challenge both AMD and Intel on that vector," he said. "V1 is interesting too, as it's focused primarily on the high-performance computing market, where Arm has been very effective."

Bergey said Arm's partners have made strong progress as they work to implement the new Neoverse designs into their products. For example, Marvell Technology Group Ltd. recently announced a new family of Octeon networking chips based on Neoverse N2 and said it will provide its first samples by the end of the year. Marvell claims the Octeon chips will deliver a threefold performance boost over its previous-generation Octeon processors.

"Arm's Neoverse N2 offers the industry's top computing power per watt, delivering three times the compute performance and four times the SPECint per watt of the current generation," said Raj Singh, executive vice president of Marvell's Processors Business Group. "Our next-generation Octeon DPU family will leverage Arm's Neoverse N2 cores to power critical infrastructure applications from 5G to storage to signal processing and security."

The Neoverse chips are seeing broad adoption in the cloud, too. The Neoverse-powered AWS Graviton2 processors are rapidly expanding their footprint in Amazon Elastic Compute Cloud, while Oracle Corp. has said it plans to use Ampere Altra CPUs, based on Neoverse designs, in its Oracle Cloud Infrastructure, Bergey said. And Alibaba Cloud, the cloud computing arm of China's e-commerce giant Alibaba Group, has posted some impressive benchmark results for its upcoming Alibaba Cloud ECS Arm instances, showing a 50% performance improvement over its older instances based on the Neoverse N1.

Another big Chinese tech firm that's heavily invested in the new Neoverse architectures is Tencent Corp. Bergey said Tencent is currently working on both hardware tests and software enablement with a view to using Neoverse to power its cloud applications.

"These partners are taking full advantage of what is under the hood of Neoverse platforms," Bergey said. "This is just the tip of the iceberg, both for infrastructure workload benefits and for how our partners plan to implement and take Neoverse IP to market."

Floyer told SiliconANGLE the major friction against moving data center servers to Arm's architecture is the cost of migration. But he said cloud vendors own their data center platforms, so they can bear that cost and will most likely make the move first. "On-premises servers, except for cloud provider servers from AWS and others, will lag behind because individual ISVs will adopt Arm later than cloud vendors," he said.

All in all, it would seem Neoverse has a bright future. So bright, in fact, that Arm is also pushing for the Neoverse designs to serve as the basis for a new breed of SoCs. To make that happen, it announced a new mesh interconnect that can be used to link the multiple components those chips house.

SoCs integrate multiple components such as the CPU, memory, input/output ports and secondary storage onto a single chip. SoCs are often seen in mobile devices and help to improve performance and reduce power consumption in such devices, though the designs do cost more to implement than single components.

Arm said the mesh interconnect is one of the most critical components of any SoC design. The new CMN-700 mesh interconnect is said to provide significant improvements over the previous-generation CMN-600, enabling what Bergey called a step-function increase in performance on every vector, from core counts and cache sizes to the number and types of memory and I/O devices that can be attached to the SoC.

"Arm has done very well in recent years and continues to do so with the Neoverse N2 and V1 upgrades," said Constellation Research Inc. analyst Holger Mueller. "Now all eyes are on adoption, and the news from Arm today in that respect is very encouraging."


Read the original:
Arm boosts its Neoverse V1 and N2 chip designs with new mesh interconnect - SiliconANGLE News


The Advantages of Using Remote IT Support – FinSMEs

Having an efficient technical support department is a must for every business looking to thrive in the modern, technological world. One of the main issues is that network technicians and other IT professionals are few and far between, which makes infrastructure harder to maintain efficiently.

Remote IT support promises to solve this problem by giving businesses remote access to help with technical issues.

In today's guide, we're going to look at some of the main advantages of enlisting remote IT support for your business, so let's get right into it.

One of the major advantages of hiring remote support to act as a help desk or server technician is that they can react to issues much more quickly than traditional IT professionals. Instead of having to come to your business in person to work out a solution, they can resolve issues immediately, provided they're awake.

If your business has network monitoring in place, then your remote access tech support professionals will be able to maintain software and hardware without even needing a call from you. This is on-demand service, and depending on how much you pay your remote support professional, they may be available immediately.

Even if you don't have a full-time remote support team, you can bring aboard a remote technician to ensure that issues are managed when your in-house IT team is otherwise indisposed. Whether your team is busy with another job or simply lacks the expertise to fix the issue, you can bring in remote tech support whenever you need it.

When you work remotely with an IT company, you won't always be dealing with a single technician. Depending on how much you're willing to pay, you can potentially hire an entire team of remote support professionals to go over your software and hardware to ensure that your services remain consistent.

One of the best things about using a whole team to manage your IT needs is that you can provide them with desktop access and have each member of the team handle a specialized aspect of IT. This means they can come up with network solutions as well as manage your business's computer systems.

You can even enlist the help of anti-virus specialists if you're looking to clear your servers or cloud of a recent infection. It can often be difficult to secure and control things if you aren't experienced in the domain of Windows troubleshooting, so having a team to rely on can be beneficial.

While remote support likely won't match the efficiency of an in-person IT team all of the time, at other times the results will be remarkably similar. This is because remote teams often use the same tools as in-person technicians to come up with solutions to network and computer troubles.

For example, how many times have you seen your local IT professional use remote access to diagnose an issue with one of your employees' Windows installations? Remote IT support specialists have the exact same set of tools at their disposal, and they can often react even faster.

With the increase in reliable cloud service providers, more and more businesses are transitioning entirely to cloud-based technology. This means remote support specialists have an even better chance of accessing all of the files they need to repair or maintain your systems.

Remote access services can also get you a technician whenever you need one, regardless of time zone. If your business operates worldwide, you can't afford any downtime, even in the early hours of the morning, which means you need access to technical support around the clock.

Software and server support can be found in all corners of the world, and if your current IT technician is asleep, you can enlist the help of one who lives on the other side of the globe. This allows you to keep your business secure at all hours.

The final advantage of hiring remote IT support services is that they are often more affordable than traditional IT employees. Along with not needing to be brought aboard full-time, they typically have lower base rates in the first place.

Despite offering much lower prices for their work, remote IT support services will still provide your business with quality results that you can verify, because everything is done digitally. Whether you need assistance with a device or help shoring up your network security, remote support services will get the job done.

View original post here:
The Advantages of Using Remote IT Support - FinSMEs


Armed and serious – telcos take note that Arm is serious about infrastructure and the cloud – Diginomica

(Source: Arm)

Arm has been the dominant processor architecture for mobile devices since the iPod, but has otherwise seen little use beyond embedded systems and IoT. Despite some success with low-end Chromebooks, Arm never penetrated the PC market until Apple began migrating its entire lineup to the custom-designed M1.

Data center operators similarly shunned Arm processors as too underpowered until AWS built the Graviton1 SoC around Arm's first 64-bit v8 architecture. However, with sub-10nm process nodes enabling 64-core and higher SoCs and cloud operators prioritizing power efficiency over raw performance, Arm is now a viable option for many workloads. With the Armv9 architecture and Neoverse CPU platforms, Arm is a compelling alternative to x86 processors for many applications.

Arm teased the new products and roadmap last fall, and as I wrote at the time:

Arm has the advantage with an extensive library of standard modules and cores, partner ecosystem and more flexible IP licensing model (which, in the hands of someone like Apple can be used for further product differentiation). While nature abhors a vacuum, the technology world abhors a monopoly and after decades of dominance, Intel's monopoly faces a two-front attack, from AMD and Arm-NVIDIA.

Last month Arm shed more light on Armv9 by discussing design goals and features, including:

This week, Arm has provided more details about the Neoverse platforms, including performance estimates, targeted markets and licensees using Neoverse in new products and services.

By expanding into cloud data centers and HPC installations, Arm follows a path set by Intel when it expanded an instruction set and chip architecture that was initially designed for PCs into processors optimized for servers (Xeon) and embedded systems (Atom). With Neoverse, Arm targets four growing markets:

The v9 architecture provides the foundation, while the Neoverse platforms and Arm's Coherent Mesh Network provide the building blocks and glue for creating SoCs tailored to such a divergent set of workloads.

With the Neoverse N2 and CMN-700, Arm has made evolutionary enhancements to existing technologies, while the Neoverse V1 is a new platform designed for maximum performance and for workloads that currently require Xeon, AMD Epyc or IBM POWER processors. However, Arm's strategy, and critical parts of its intellectual property, extend beyond processor architecture to the surrounding design, development and support ecosystem. To facilitate new products, Arm provides licensees with reference designs and IP blocks, EDA and compiler tools and optimizations, foundry partnerships, IDEs and a growing community of open-source and commercial developers.

Neoverse N2 rolls the features introduced in Armv8.4, 8.5, 8.6 and 9 into an update to a platform that AWS has already demonstrated in its Graviton2 to be very effective and efficient for cloud workloads. Significant updates in Neoverse N2 include:

- Microarchitecture improvements that provide a 40 percent increase in IPC (instructions per clock cycle) over N1.
- SVE2, which adds instructions useful for image and video processing, genomics, in-memory databases and LTE/5G baseband processing.
- Improved scalability, including support for 128-core SoCs.
- Memory partitioning and monitoring (MPAM) to control access to shared system resources, cache and memory bandwidth.
- Improved power efficiency and management, including the ability to dynamically adjust CPU prediction parameters to maximize power efficiency for a given workload.
- The Armv9 security and debugging improvements detailed above.

Whereas the N2 is Arm's answer for multi-threaded, general-purpose, efficiency-optimized infrastructure, the V1 is squarely focused on maximizing performance per core. If the N2 is the Mercedes C-Class family sedan, the V1 is the AMG edition. Like the N2, the V1 builds on the first-generation Neoverse, but its design choices favor maximum performance over power efficiency, with features like:

V1 also uses an improved CMN-700 (see below) interconnect to connect both on-die and chiplet-based CPUs, accelerators, memory and I/O controllers. The updated mesh interconnect enables designs exceeding 128 CPUs and 128 combination PCIe5, CCIX and CXL I/O lanes.

Arm's CMN provides the connectivity between CPU core clusters, accelerators like GPUs and DSPs, and memory, making it analogous to Intel's UltraPath Interconnect (UPI) or AMD's Infinity Fabric (a variant of PCIe). CMN-700 increases the scalability of Neoverse systems by augmenting CMN-600 in several areas, including:

It also supports the CCIX and CXL standards to enable multi-chip packages (MCP) that use chiplets to improve die yield and chip capacity, and to allow composite chips using specialized accelerators and silicon photonics chiplets (for example, the TeraPHY from AyarLabs).

In the middle of the last decade, several companies like Applied Micro, Calxeda and Cavium failed to convince data center operators that Arm was a viable server platform, despite having some compelling designs. AWS silenced most of the skeptics when it released instances based on its home-grown Graviton processors in 2018. However, despite Microsoft supposedly toying with the platform, Arm remains a rarity in both cloud and enterprise data centers.

A new architecture, a pair of chip platforms and enhanced intra- and off-chip I/O fabric make Arm a legitimate alternative for a wide range of enterprise, cloud, HPC and telecommunications applications. To hammer that point home, Arm unleashed a cavalcade of partners to rhapsodize on the benefits of the Arm architecture, Neoverse performance and the customizability afforded by the Arm platform and associated IP. Indeed, the group ran the gamut of the industries Arm is targeting, namely:

As NVIDIA has long contended (see my coverage of GTC 2021 here for more details), modern applications require more than just general-purpose compute cycles. Maximizing performance and security requires hardware subsystems, whether GPUs, tensor processors, crypto engines or secure enclaves. With a competitive CPU architecture, robust ecosystem of design tools and hardware IP, a wide array of hardware licensees and multiple licensing options, Arm is ideally positioned to power the next generation of data center and edge infrastructure. It will be fascinating to see the products that come from the minds of experienced designers at deep-pocketed companies like AWS, Microsoft, Alibaba and Marvell, but if Apple's example in the consumer business is any indication, the improvements to price, performance and efficiency should be impressive.

Read the original post:
Armed and serious - telcos take note that Arm is serious about infrastructure and the cloud - Diginomica


How to Build a Better Switch by Making it Smart and Superior – DesignNews

It seems like everything has gotten smarter, thanks to the evolution of embedded technology and the proliferation of IoT platforms. Even the humble switch is now more intelligent.

Smart switches are often used as replacements for traditional built-in switches. Naturally, a user can still physically turn lights or devices on and off with a smart switch, but these switches can also be controlled by the user's voice via an audio assistant like Amazon's Echo or Google Home, or with an application on a smartphone.


There are several good reasons for using a smart switch. One is that a smart switch is often easier and more convenient than a regular switch, which requires someone to find and physically flip it. That task becomes more difficult in the dark or in unfamiliar surroundings.

Smart switches are also more flexible than a smart bulb, an internet-capable LED light bulb that allows lighting to be controlled and customized remotely. Whereas a smart switch can be controlled by anyone using a voice assistant device, a smart bulb typically requires a smartphone-based application with a password. The latter would be cumbersome for babysitters and guests who visit a user's home.


Another benefit of smart switches connected to popular platforms like Echo is that they can be grouped together to easily illuminate specific areas in a home or factory. Controlling groups of smart switches is also handy for scheduling when the lights come on and off, e.g., during a vacation away from home. Naturally, smart switches can be used in conjunction with light-dimming mechanisms and motion-detection sensors.

Build a Better (Smart) Switch

Depending on system requirements and operational needs, it may be better to make your own smart switch rather than buy an existing one off-the-shelf. A quick search on Google or Amazon reveals a surprising number of commercial-off-the-shelf (COTS) smart switch providers. Less common are examples of how to design a smart switch to enable a better product, which is the focus of this article.

There are four basic parts to any smart switch design:

(Image: John Blyler/Design News)

The Amazon Echo, Google Home, or similar voice assistant ecosystems enable natural-language-activated, web-based queries that can be used to activate smart devices in the home, office, automobile, or almost anywhere a wireless Internet signal can reach. The voice assistant ecosystem is a voice-to-text-to-action system. For example, the Echo accepts an audio voice command from a user, records the audio, and uploads the snippet to Amazon's cloud servers. Those servers translate the audio into text and then determine (via software tables) the best way for Alexa to respond or initiate a specific action.

Next, the translated voice command actions feed into a processor-based platform, such as the Raspberry Pi. This is the brain of the operation: it interprets the commands and sends out the appropriate signals to control the relays for the smart switches.
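The "software tables" step described above can be sketched as a plain lookup table mapping transcribed commands to relay actions. This is an illustrative sketch, not vendor code: the command phrases, the `ACTIONS` table and the `dispatch` function are hypothetical names invented for the example.

```python
# Hypothetical command table: transcribed text -> (device name, desired state).
ACTIONS = {
    "turn on the lights": ("lights", True),
    "turn off the lights": ("lights", False),
    "turn on the fan": ("fan", True),
    "turn off the fan": ("fan", False),
}

def dispatch(transcribed_text, relay_states):
    """Look up a transcribed command and update the named relay's state."""
    command = transcribed_text.strip().lower()
    if command not in ACTIONS:
        return False  # unrecognized command; the assistant would report failure
    device, state = ACTIONS[command]
    relay_states[device] = state  # in a real build, this drives a GPIO pin
    return True

relays = {"lights": False, "fan": False}
dispatch("Turn on the lights", relays)
print(relays["lights"])  # True
```

A production version would replace the dictionary write with an actual GPIO call, but the table-driven structure stays the same.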

Relays are electric switches that use electromagnetism to let a small electrical input control a high-current output. The low-voltage inputs energize electromagnets that either complete or break high-current circuits. Relays form the foundation of modern electronic equipment in both industrial and consumer markets.

On a side note, many of today's component relay vendors include smart relays in their product offerings. In industrial applications, these relays are often connected to automation interfaces and network protocols ranging from legacy Modbus and BACnet to Zigbee, Wi-Fi, and cellular.

One example is the Zelio Logic smart relay from Schneider Electric. These relays are designed for small automated systems, from the automation of production, assembly, and packaging machines to the control of lighting installations and air conditioning systems, to name just a few applications.

These compact smart relays are typically used for simple automated systems with up to twenty I/Os. If required, they can be fitted with I/O extensions and a module for communication on Modbus networks to increase the number of I/Os and ease programming of the devices.

Let's return to our smart switch. The relay doesn't need to be a smart one, like the Zelio Logic above. Instead, a simple relay board with hardware and software interfaces to the processor platform will be fine, such as the ESP-8266 relay board. Another approach would be to use a bank of simple relays to control many devices, such as the JBtek 8 Channel Relay Module for the Raspberry Pi. This relay board can be used to control any standard 110V wall socket outlet.

The ESP-8266 is an example of a relay board that includes an optional Wi-Fi chip to serve as a smart switch interface.

The relay board is then connected directly to a wall outlet to control whatever devices are plugged into the outlet, from lights and music speakers to household appliances.
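As a rough sketch of the control side, the snippet below models driving an 8-channel relay bank like the JBtek board from Python. On a real Raspberry Pi you would use the RPi.GPIO library to set pin levels; here a stand-in class records pin levels so the logic runs anywhere. The BCM pin numbers and the active-low convention (drive a pin LOW to energize a relay, common on hobbyist boards) are assumptions, so check your relay module's documentation.

```python
class RelayBank:
    """Simulated 8-channel relay bank; many hobbyist boards are active-low."""

    def __init__(self, pins):
        self.pins = pins
        # 1 = HIGH = relay de-energized (outlet off) under the active-low convention.
        self.levels = {pin: 1 for pin in pins}

    def set_channel(self, channel, on):
        """Energize (on=True) or release (on=False) one relay channel."""
        pin = self.pins[channel]
        self.levels[pin] = 0 if on else 1  # active-low: drive LOW to close the relay

    def is_on(self, channel):
        return self.levels[self.pins[channel]] == 0

# Example BCM pin assignments (assumed, not from any datasheet).
bank = RelayBank(pins=[5, 6, 13, 16, 19, 20, 21, 26])
bank.set_channel(0, True)   # close relay 0: the wall outlet it switches is powered
bank.set_channel(0, False)  # open it again
```

On real hardware, `set_channel` would call `GPIO.output(pin, level)` instead of writing to a dictionary; the channel-to-pin mapping and active-low logic carry over unchanged.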

To make responding to certain Echo commands more manageable, the user might include a touchscreen display in this design. A wireless keyboard will also make programming the board easier.

Here's an example of a bank of 8 switches as part of a smart switch control circuit.

John Blyler is a Design News senior editor, covering the electronics and advanced manufacturing spaces. With a BS in Engineering Physics and an MS in Electrical Engineering, he has years of hardware-software-network systems experience as an editor and engineer within the advanced manufacturing, IoT and semiconductor industries. John has co-authored books related to system engineering and electronics for IEEE, Wiley, and Elsevier.

Excerpt from:
How to Build a Better Switch by Making it Smart and Superior - DesignNews


How to Make Money With Cryptocurrency – NBC Connecticut

This story originally appeared on LX.com

Cryptocurrency is part computer science and part finance, but don't let that intimidate you. It's simple to get started, and you don't have to be an expert.

It's good to know what a blockchain is and how it works, but it's not a necessity. Think about what happens when you buy something online: do you know how an Automated Clearing House works? How well do you understand the system of banks and payment processors that make up traditional finance? Lacking this knowledge doesn't prevent you from using dollars, and it likewise won't prevent you from using crypto.

That said, what you do need to know is that a cryptocurrency relies on a blockchain, a special type of digital network. There are different blockchains, like Ethereum, Cardano and Stellar. They work similarly but have different features.

Bitcoin [BTC] is the most popular cryptocurrency. BTC transactions are processed and verified by people called miners. When miners process enough transactions using specialized computers, they're rewarded with some BTC. Essentially, the act of verifying transactions is what creates more BTC. So as long as miners want more cryptocurrency, the blockchain will keep functioning.
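The mining work described above boils down to a proof-of-work search. The toy sketch below is not the actual Bitcoin protocol (which uses double SHA-256 and a vastly harder target); it only illustrates the idea of hunting for a nonce whose hash of the block data starts with a required number of zeros.

```python
import hashlib

def mine(block_data: str, difficulty: int) -> int:
    """Try nonces until the SHA-256 hash starts with `difficulty` zeros."""
    target = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
        if digest.startswith(target):
            return nonce  # this nonce "proves" the work was done
        nonce += 1

nonce = mine("alice pays bob 1 BTC", difficulty=4)
digest = hashlib.sha256(f"alice pays bob 1 BTC{nonce}".encode()).hexdigest()
print(digest[:4])  # "0000"
```

Raising `difficulty` by one makes the search roughly 16 times harder here, which mirrors how real networks tune difficulty so blocks arrive at a steady rate.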

Blockchains use special apps, called protocols, that put your crypto to work. So where traditional finance gives you a savings account, in crypto you'd use a savings protocol. The language of crypto is rooted in computer science.

You'll need a place to store your crypto: a wallet. You can choose a software wallet, like an app, or a hardware wallet, an offline device sort of like a flash drive.

Since software wallets are online, they're potential targets for hackers. Hardware wallets are offline and can't be hacked remotely, but they can be lost or stolen like a real wallet.
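Conceptually, a wallet of either kind guards a secret key from which an address can be derived, but never the reverse. The sketch below illustrates that one-way relationship with a plain hash; real wallets use elliptic-curve key pairs and Base58 or bech32 address encoding, so every function name here is illustrative only.

```python
import hashlib
import secrets

def new_private_key() -> str:
    """Generate 256 bits of randomness as a hex string (the secret a wallet guards)."""
    return secrets.token_hex(32)

def address_from_key(private_key: str) -> str:
    """Derive a shareable address by hashing the key; the hash can't be reversed."""
    return hashlib.sha256(private_key.encode()).hexdigest()[:40]

key = new_private_key()
addr = address_from_key(key)
print(len(addr))  # 40
```

Anyone can send funds to the address, but only the holder of the private key can spend them, which is why losing a hardware wallet without a backup means losing the funds.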

You can skip this step by downloading an exchange app like Coinbase, eToro, or Gemini, then connecting a debit card or bank account. This is the fastest way to start buying and trading crypto. Your assets will be stored in a wallet managed by the exchange, which adds some risk.

Think about it: if you're a hacker trying to steal millions, your time is better spent hacking a large exchange to access thousands of wallets. Hacking a single software wallet is probably a waste of time. To learn more about crypto wallets, check out this resource from Benzinga.

If you only want to trade crypto, a wallet and an exchange are all you need. But there are other ways to use crypto to make money.

Decentralized finance [DeFi] is a system of peer-to-peer finance tools that provide options like interest accounts, loans, and advanced trading for people with crypto. DeFi disrupts traditional finance by removing middlemen [bankers, lawyers, brokers] from financial processes. DeFi advocates say this makes finance faster, more affordable, more transparent and more democratic, and that it eliminates in-person discrimination.

Getting started in DeFi takes more research. You can learn about different DeFi protocols on the web, starting with The DeFi List. There, protocols are sorted by function, making it easy to understand what they do. Protocol developers share their mission statements by distributing a white paper. Here's the white paper for Compound, a popular protocol, as an example.

To use DeFi protocols, you'll need access to the decentralized web [dWeb]. To learn more about DeFi protocols, their history, and how they work, check out Finematics on YouTube.

Finally, here are some scenarios to help you understand dollars and crypto.

There are a lot of experts on YouTube and Reddit. To get you going, here are some free online resources, ranging from the basic to the meta.

See original here:
How to Make Money With Cryptocurrency - NBC Connecticut
