Category Archives: Cloud Storage

We Need to Talk About Cloud Neutrality – WIRED

We spent a lot of years talking about net neutrality, the idea that the companies that provide access to the internet shouldn't unfairly block, slow down, or otherwise interfere with traffic even if that traffic competes with their services. But there's an even bigger issue brewing, and it's time to start talking about it: cloud neutrality.

While its name sounds soft and fluffy, Microsoft president and general counsel Brad Smith and coauthor Carol Ann Browne write in their recent book, Tools and Weapons: The Promise and the Peril of the Digital Age, that in truth the cloud is a fortress. Their introduction describes the modern marvel of the data center: a 2 million-square-foot, climate-controlled facility made up of colossal electrical generators, diesel fuel tanks, battery arrays, and bulletproof doors. At its center is what they call a temple to the information age and cornerstone of our digital lives: thousands of machines connected to the fastest possible internet connections, providing offsite storage and computing power to businesses that otherwise couldn't possibly afford the hardware for all that storage and computing power.

Smith and Browne note cheerfully that Microsoft operates or leases more than 100 such facilities in 20-plus countries and hosts at least 200 online services. Each data center costs hundreds of millions of dollars to build and many millions more to maintain; and you pretty much can't build a successful new company without them. So, thank goodness for Microsoft, right?

The book means to portray this might and power as both a source of wonder and an enabling feature of the modern economy. To me, it reads like a threat. The cloud economy exists at the pleasure, and continued profit, of a handful of companies.

The internet is no longer the essential enabler of the tech economy. That title now belongs to the cloud. But the infrastructure of the internet, at least, was publicly financed and subsidized. The government can set rules about how companies have to interact with their customers. Whether and how it sets and enforces those rules isn't the point, for now. It can.

That's not the case with the cloud. This infrastructure is solely owned by a handful of companies with hardly any oversight. The potential for abuse is huge, whether it's through trade-secret snooping or the outright blocking, slowing, or hampering of transmission. No one seems to be thinking about what could happen if these behemoths decide it's against their interests to have all these barnacles on their flanks. They should be.

Almost every modern tech company is paying to outsource its storage and computing services, either all or in part, to the cloud. This setup allows startups to emerge with very little overhead, and huge companies to run more efficiently by avoiding investment in physical hardware. It has spawned a generation of companies that plan to use the cloud to offer everything as a service.

But turn that transaction around, and you realize that the companies that actually built and operate the cloud are essentially incubating and hosting their competition. One easy example? Netflix runs its streaming video product on the cloud-based Amazon Web Services; indeed, it was widely praised for saving money by going all in on AWS in 2009 and 2010. Amazon started its own streaming service in 2011. The two have coexisted for a decade now, but how long will the famously ruthless Amazon tolerate that situation?

The problem is that few have the resources to replicate the cloud infrastructure, should the landlords suddenly turn on their tenants.

The big three cloud providers in the world are Amazon, Google, and Microsoft. They've collectively spent tens of billions of dollars on data center infrastructure. And to be clear, they have profited handsomely from those investments. Just last week, in fact, Alphabet revealed its cloud services revenue for the first time: it accounts for nearly $9 billion of the company's $37.57 billion in quarterly earnings, up more than 50 percent from 2018. Amazon's AWS business made almost $10 billion, and Microsoft's Azure business made almost $12 billion. Cloud computing was a $141 billion market in 2018.

Read more here:
We Need to Talk About Cloud Neutrality - WIRED

Why fast object storage is poised for the mainstream – Blocks and Files

Four years ago, Pure Storage pioneered fast object storage with the launch of its FlashBlade system. Today fast object storage is ready to go mainstream, with six vendors touting the technology.

Object storage has been stuck in a low performance, mass data store limbo since the first content-addressed system (CAS) was devised by Paul Carpentier and Jan van Riel at FilePool in 1998. EMC bought FilePool in 2001 and based its Centera object storage system on the technology it acquired.

Various startups, including Amplidata, Bycast, CleverSafe, Cloudian and Scality, developed object storage systems. Some were bought by mainstream suppliers as the technology gained traction. For instance, HGST bought Amplidata, NetApp bought Bycast and IBM bought CleverSafe.

Objects became the third pillar of data storage, alongside block and file. It was seen as ideal for unstructured data that didn't fit in the highly structured database world of block storage or the less highly structured file world. Object storage strengths include scalability, ability to deal with variably-sized lumps of data, and metadata tagging.

Object storage systems typically used disk storage and scale-out nodes. They did not take all-flash hardware on board until Pure Storage rewrote the rules with FlashBlade in 2016. Since then only one other major object storage supplier, NetApp with its StorageGRID, has focused on all-flash object storage. This is a conservative side of the storage industry.

Common sense is one reason for industry caution. Disk storage is cheaper than flash, and object storage data typically does not require low latency, high-performance access. But this is changing, with applications such as machine learning requiring fast access to millions of pieces of data. Object storage can now serve this kind of application thanks to all-flash hardware and much faster software stacks.

A look at products from MinIO, OpenIO, NetApp, Pure Storage, Scality and Stellus shows how object storage technology is changing.

MinIO develops open source object storage software that executes very quickly. It has run numerous benchmarks, as we have covered in a number of articles. For instance:

MinIO has demonstrated its software running in the AWS cloud, delivering more than 1.4Tbit/s read bandwidth using NVMe SSDs. It has added a NAS gateway that is used by suppliers such as Infinidat. Other suppliers view MinIO in a gateway sense too. For example, VMware is considering using MinIO software to provision storage to containers in Kubernetes pods, and Nutanix's Bucket object storage uses a MinIO S3 adapter.

All this amounts to MinIO object storage being widely used because it is fast, readily available, and has effective S3, NFS and SMB protocol converters.
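To make the S3 compatibility point concrete, here is a minimal sketch using the official `minio` Python client. The endpoint, credentials and bucket name are placeholders, not anything taken from MinIO's own documentation or the benchmarks above; it simply shows how an application writes and reads an object through the S3-style API.

```python
# Minimal sketch: talk to a MinIO endpoint via its S3-compatible API.
# Endpoint, credentials and bucket are hypothetical placeholders.
from io import BytesIO
from minio import Minio

client = Minio(
    "minio.example.com:9000",      # placeholder endpoint
    access_key="YOUR-ACCESS-KEY",  # placeholder credentials
    secret_key="YOUR-SECRET-KEY",
    secure=True,
)

bucket = "sensor-data"
if not client.bucket_exists(bucket):
    client.make_bucket(bucket)

# Write a small object, then read it back.
payload = b"temperature=21.4"
client.put_object(bucket, "readings/2020-02-10.txt", BytesIO(payload), length=len(payload))
response = client.get_object(bucket, "readings/2020-02-10.txt")
print(response.read())
```

Because the same calls work against AWS S3 or any other S3-compatible store, applications written this way can move between MinIO and the public cloud with little change, which is part of why the gateway use cases above are attractive.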

OpenIO was the first object storage supplier to demonstrate it could write data faster than 1Tbit/sec. It reached 1.372Tbit/s (171.5GB/sec) from an object store implemented across 350 servers. This is faster than Hitachi Vantara's high-end VSP 5500's 148GB/sec but slower than Dell EMC's PowerMax 8000 with its 350GB/sec.

The OpenIO system used an SSD per server for metadata and disk drives for ordinary object data, with a 10Gbit/s Ethernet network. It says its data layer, metadata layer and S3 access layer all scale linearly, and it has workload-balancing technology to pre-empt hot spots (choke points) from occurring.

Laurent Denel, CEO and co-founder of OpenIO, said: "We designed an efficient solution, capable of being used as primary storage for video streaming or to serve increasingly large datasets for big data use cases."

NetApp launched the all-flash StorageGRID SGF6024 in October 2019. The system is designed for workloads that need high concurrent access rates to many small objects.

It stores 368.6TB of raw data in its 3U chassis and there is a lot of CPU horsepower, with a 1U compute controller and 2U dual-controller storage shelf (E-Series EF570 array).

Duncan Moore, head of NetApp's StorageGRID software group, said the software stack has been tweaked and there is scope for more improvement. Such efficiency was not needed before, as the software had the luxury of operating in disk seek time periods.

FlashBlade was a groundbreaking system when it launched in 2016 and it still is. The distributed object store system uses proprietary hardware and flash drives and was given file access support from the get-go, with NFS v3. It now supports CIFS and S3, and offers up to 85GB/sec performance.

Pure Storage markets FlashBlade for AI, machine learning and real-time analytics applications. The company also touts the system as the means to handle unstructured data in network-attached storage (NAS), with FlashBlade wrapping a NAS access layer around its object heart.

The AIRI AI system from Pure, with Nvidia GPUs, uses FlashBlade as its storage layer component.

Scality is a classic object storage supplier which has seen an opening in edge computing locations.

The company thinks object storage on flash will be selected for edge applications that capture large data streams from mobile, IoT and other connected devices; logs, sensor and device streaming data, vehicle drive data, image and video media data.

The data is used by and needed for local, real-time computation, and Scality supports Azure Edge for this.

Stellus Technologies, which came out of stealth last week, provides a scale-out, high-performance file storage system wrapped around an all-flash, key:value storage (KV store) software scheme. Key:value stores are object storage without any metadata apart from the object's key (identifier).

An object store contains an object, its identifier (content address or key) and metadata describing the object data's attributes and aspects of its content. Object stores can be indexed and searched using this metadata. KV stores can only be searched on the key.

Typically KV stores contain small amounts of data while object stores contain petabytes. Stellus gets over this limitation by having many KV stores (up to four per SSD), many SSDs and many nodes.
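To illustrate the distinction, here is a toy Python sketch, not Stellus code, showing why an object store can be queried on its metadata while a key:value store can only be addressed by an exact key. The keys and metadata fields are invented for the example.

```python
# Toy illustration of object store vs key:value store (hypothetical data).

# Object store: key -> (data, metadata); the metadata can be searched.
object_store = {
    "img-001": {"data": b"...", "metadata": {"type": "image", "camera": "north-gate"}},
    "img-002": {"data": b"...", "metadata": {"type": "image", "camera": "car-park"}},
}

# Metadata search: find every object captured by a given camera.
hits = [key for key, obj in object_store.items()
        if obj["metadata"].get("camera") == "north-gate"]
print(hits)  # ['img-001']

# KV store: key -> value only; the sole way in is an exact key lookup.
kv_store = {"img-001": b"...", "img-002": b"..."}
print(kv_store["img-002"])
```

Dropping the metadata layer is what lets a KV store stay small and fast; the trade-off is that any searching or indexing has to happen in a layer above it.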

The multiple KV stores per drive and an internal NVMe over Fabrics access scheme provide high performance using RDMA and parallel access. This is at least as fast as all-flash filers and certainly faster than disk-based filers, Stellus claims.

There are two main ways of accelerating object storage. One is to use flash hardware with a tuned software stack, as exemplified by NetApp and Pure Storage. The other is to use tuned software, with MinIO and OpenIO following this path.

Stellus combines the two approaches, using flash hardware and a new software stack based on key:value stores rather than full-blown object storage.

Scality sees an opening for all-flash object storage but has no specific version of its RING software to take advantage of it yet. Blocks & Files suggests that Scality will develop a cut-down and tuned version for edge flash object opportunities, in conjunction with an edge hardware system supplier.

We think that other object storage suppliers, such as Cloudian, Dell EMC (ECS), Hitachi Vantara, IBM and Quantum, will conclude they need to develop flash object stores with tuned software. They can see the possibilities of QLC flash lowering all-flash costs and the object software speed advances made by MinIO, OpenIO and Stellus.

More here:
Why fast object storage is poised for the mainstream - Blocks and Files

Choosing the right disaster recovery for your business – ComputerWeekly.com

Historically, building and maintaining a disaster recovery (DR) site, while critical to ensure business continuity, was often too costly and complex for most companies.

As Rajiv Mirani, chief technology officer at Nutanix, points out: "It simply wasn't feasible for many enterprises to pay for the upfront costs and ongoing expenses of maintaining a second site, to be used only in the event of a disaster."

From a DR perspective, the starting point for most large enterprises is their core IT infrastructure, which is often based on a primary on-premise datacentre or private cloud.

This is then supported by a secondary DR site at a separate geographic location. There, the core and primary systems and data are backed up and replicated, ready for activation in the event of the primary site suffering a failure and no longer being able to serve the business in a reasonable capacity.

Clearly, replicating an entire datacentre, with all its equipment, management, cooling and power needs, is a huge expense. Some businesses may require a hot standby, where the disaster recovery centre switches over the moment the primary site goes down. This is the most expensive option.

Such a setup can be so redundant that data is synchronised between the two sites, leading to minimal disruption in the event of a failure. Others are warm or on standby, which leads to a certain level of delay before the backup site is fully operational.

In July 2019, analyst IDC forecast public cloud spending would grow annually by 22.3%, from $229bn in 2019 to nearly $500bn in 2023.

The analyst firm noted that infrastructure-as-a-service (IaaS) spending, comprising servers and storage devices, will be the fastest-growing category of cloud spending, with a five-year compound annual growth rate of 32%.

These figures illustrate that cloud computing is increasingly becoming a mainstay of enterprise IT. Feifei Li, president and senior fellow of database systems at Alibaba Cloud Intelligence, explains: "Organisations around the world are embracing a range of disaster recovery solutions to protect themselves against hardware and software failures and ensure zero-downtime for their business applications, which are business-critical but can be costly."

"Cloud-native DR provides a cost-effective option for customers to back up data in case of a disaster with a pay-per-use pricing model."

However, one size doesn't fit all when it comes to computing architecture, cloud and disaster recovery. It is not simply a matter of moving cloud-ready business-critical applications and data to the cloud and hoping that the existing storage architecture provides the right set of services to support it.

Enterprise compute environments differ vastly, and there are many environments that, either through technical reasons or data movement regulations, cannot be hosted or backed up in a cloud environment.

While they promise attractive IT economics and easy accessibility, Mirani says, cloud-based DR services come with their own challenges.

IT companies claim technology is now advanced enough, given the right architecture, for them to offer zero-time recovery. In reality, though, this depends on a number of factors, all of which must be assessed. But first and foremost, organisations must define DR plans that prioritise their mission-critical data and applications, and how much downtime they can sustain before the business begins to suffer.

Recovery point objectives (RPO) and recovery time objectives (RTO) are used in the industry to measure business continuity.

"RPO describes how often I need to protect my data to ensure that I can get back to what I had when disaster or data corruption struck, or as close to that as possible," says Tony Lock, a principal analyst at Freeform Dynamics.

"It relates to how fast my data changes. RTO is how quickly I must make the recovered data available should disaster strike or a request come in from a user, an auditor or even a regulator."

Lock says that answering these fundamental questions is simple enough for small numbers of different data sets. However, it becomes complicated very quickly when you have lots of different data sets of varying business importance, all of which may have very dissimilar protection and recovery needs.
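As a rough illustration of how an RPO target maps onto a backup schedule, here is a short, hypothetical Python sketch. The dataset names, intervals and targets are invented for the example, and real DR planning involves far more than this single check, but it shows the reasoning: in the worst case, a failure happens just before the next backup runs, so the backup interval must be no longer than the acceptable data-loss window.

```python
# Hedged illustration: does each dataset's backup interval meet its RPO target?
from datetime import timedelta

def meets_rpo(backup_interval: timedelta, rpo_target: timedelta) -> bool:
    """Worst case, failure strikes just before the next backup runs,
    so the maximum data loss is roughly one backup interval."""
    return backup_interval <= rpo_target

# Hypothetical datasets with differing business importance.
datasets = {
    "orders-db":  {"interval": timedelta(minutes=15), "rpo": timedelta(hours=1)},
    "hr-archive": {"interval": timedelta(hours=24),   "rpo": timedelta(hours=4)},
}

for name, policy in datasets.items():
    verdict = "meets" if meets_rpo(policy["interval"], policy["rpo"]) else "misses"
    print(f"{name}: {verdict} its RPO target")
```

In this invented example the orders database is fine, while the HR archive, backed up daily against a four-hour RPO, would need a more frequent schedule; RTO would then be assessed separately against how quickly each copy can actually be restored.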

Enterprises may be at different stages of cloud maturity. Some are at the planning stage, while others, which have deployed workloads in the public cloud and are comfortable with how to manage their on-premise and public cloud environments, may be on a path to multicloud and hybrid datacentre transformation.

Whatever the level of cloud maturity, there are good reasons to use the cloud as an emergency backup in the event of a disaster.

In fact, according to the Gartner Magic Quadrant for the backup and disaster recovery market, published in October 2019, backup and recovery providers are consolidating many features such as data replication, (cloud) disaster recovery automation and orchestration, and intercloud data mobility on a single platform.

In addition, Gartner's study reported that backup providers are adding data management functionality to their backup platforms to support analytics, test and development, and/or ransomware detection on cloud copies of backup data.

By bundling these additional services, the backup and disaster recovery providers are looking to deliver a higher return on investment in data protection.

Cloud-based disaster recovery services can be hybrid in nature; they move infrastructure and services between the cloud and on-premise datacentres.

Traditional hardware companies now offer capacity-on-demand and flexible consumption-based pricing, providing managed services wrapped around their cloud or pay-as-you-go offerings.

Traditional data backup hardware (tape backup) and software companies are also building out scalable DR cloud platforms, giving their customers a two-or-more-tier approach to their DR.

Architecturally, the customer's business continuity and disaster recovery system is located on-premise in the primary and/or secondary datacentre.

Through a managed service offering, the disaster recovery provider manages the backup and replication of data locally on the customer's hardware and, as part of the service, sends copies to the cloud, where a mirror copy of virtual servers, applications and data is kept, ready to spin up in the event of a disaster that takes out the customer's primary and/or secondary on-premise systems.

"Hyperscale cloud platform providers, such as Amazon Web Services, Microsoft, Google and Alibaba, have security, redundancy and recovery measures in place that make it very unlikely for them to lose your data, but they are not infallible," says Bola Rotibi, research director of software development at CCS Insight, who warns that many organisations falsely assume data and information stored in cloud applications and services is safe from loss.

"Without a plan that actively addresses protecting critical data stored in the cloud through software-as-a-service solutions in operations, that comfort blanket could just as easily smother an organisation when the light gets turned off," she says.

Beyond the basic requirements of using cloud-based, on-premise or a hybrid approach to DR, there are plenty of tools available that can help make general operational IT and DR run efficiently and cost-effectively. These need to be considered alongside the DR platform choices. Such tools are generally designed to help IT administrators responsible for data and backups understand, visualise and manage the lifecycle of all of the data across the organisation, as well as the relationships between different datasets, systems and people that require access.

With an effective data and information management policy and supporting toolset, enterprises are able to have a better view of what data should remain on-premise or on private cloud platforms, and what data can reside in public cloud systems.

Building on this data governance framework, data backup, replication and recovery policies for disaster can be applied to critical and less critical data and their associated applications.

Along with effective data and information management, data discovery can be used to help an organisation understand and regain control over its data.

Data discovery not only helps to mitigate expensive industry compliance penalty charges, but also enables IT administrators to have better insight into the organisation's data. This helps with cost optimisation.

Data discovery lets IT departments see where their data resides across disparate geographic systems or locations, and classify the criticality of that data. The discovery process can check to ensure the data is compliant with legal or regulatory requirements and corporate data governance policies.

While it may not be seen as part of DR, data discovery has its place alongside data retention policies, cyber security and data loss prevention initiatives, as part of a firm's data stewardship.

As is the case across many aspects of IT, artificial intelligence (AI) and machine learning (ML) also have their place in DR and business continuity.

Thanks to advances in AI and ML, many routine administration tasks that were previously people-intensive can now be fully automated. "Automation, performance, high availability and security are key differentiators when choosing DR solutions," says Li.

"For example, many customers prefer virtual machine snapshot backup with high snapshot quota and a flexible automatic task strategy, which helps reduce the impact on business I/O [input/output]."

Disaster recovery automation can run tests on applications, for example, or recover data to another environment for testing or development. Typically, AI is used to track metrics relevant to data backup and recovery, such as performance statistics, rates of change, speed of access, performance insights and bottlenecks.

The AI dynamically makes changes to the DR system if it needs to be optimised, re-prioritised or modified to improve a desired business outcome, such as a service-level agreement to speed up recovery after a system failure.

In essence, the AI ensures the disaster recovery is running optimally and matches the requirements of the business.

Additional reporting by Cliff Saran.

Read this article:
Choosing the right disaster recovery for your business - ComputerWeekly.com

Understanding my new-found appreciation of Google Maps – IT PRO

It's often said that the world is getting smaller. With a few clicks in a browser, I can transport myself to almost anywhere on earth. I can cut a path through the throng of New York's Times Square, I can stroll along the eerily deserted streets of the irradiated town of Pripyat, and experience the majesty of Alaska's ice caves, all from my office in central London.

This weekend, Google Maps, the technology that makes such globetrotting possible, turns 15 years old. That's quite remarkable, given that Maps predates the first iPhone and all the modern smartphone technology on which it truly excels. In fact, Maps has become so inextricably linked with mobile devices that it's easy to forget it had a life before this.

Yet, the reason why I thought this was worth talking about was not because of how much smaller Google Maps has made the world, but rather its unparalleled ability to ground you in your own history, something that I'd argue many of us take for granted.

Anyone who has lived and worked in London will understand when I say that the place changes very quickly. I moved to the city in 2016, and since then the skyline has continued to morph with each new project. The area north of King's Cross, now the Coal Drops Yard, was just a hole in the ground when I first arrived, and London's new Scalpel skyscraper has sprouted from the rubble of Prudential House, becoming the staggering feat of engineering we see today.


But that change is a constant, and sometimes it takes a technology like Google Maps for us to truly appreciate this.

[Images: London St Pancras, London Bridge, Blackfriars]

As stirring as some of these images are, I wasn't quite expecting to feel so homesick. They reminded me of looking at old photos of Northumberland, and of Bedlington, the town where I spent my teens; of the old streets that had been demolished and rebuilt in the wake of the collapse of the mining industry; and of the high street where businesses seem to behave like popup stalls, guaranteeing a new experience every time I return for a visit.

[Images: Liverpool Exhibition Centre, Belfast, Dundee V&A Museum]

In the days since looking at those photos, I've taken a virtual stroll down my home high street, I've visited the old football field where I used to run every evening, and witnessed the faintly horrific blurred outline of my father washing his car. But I've also gone further back than that. I've visited Killingworth, the small town where I was born. My old primary school. The old village. The twin lakes where I used to try to fish, but I'm now convinced were entirely empty and were just there to give passing drivers something to look at. All of it entirely unrecognisable.

I've come to understand how important tools like Google Maps really are, something that I've never really truly appreciated until now. Over the past 15 years, Maps has provided a means of connecting with our history in a truly tangible way, even if you feel a sense of loss at how much areas change in your absence.


The world has become smaller because of Maps, but not in all the ways that you might appreciate. It's not the visits to far-off places you've seen on TV that make Maps a thing of beauty, but rather the feeling you get when you type in your old postcodes.

Pictures courtesy of HK Strategies


View original post here:
Understanding my new-found appreciation of Google Maps - IT PRO

Synology DS218play 2 Bay NAS Review: Is it worth the extra £40+ over the TerraMaster F2-210? – Mighty Gadget

After singing the praises of the TerraMaster F2-210 NAS for the level of value it offers over competing Synology devices, the DS218play dropped to its lowest price over on Amazon at just £183.99, and I couldn't help but buy it so I could compare.

From a hardware point of view, these two devices are almost identical. Both are two-bay NAS drives, and they both use the 64-bit Realtek RTD1296 chipset which features a quad-core ARM Cortex-A53 CPU running at 1.4GHz and 1GB of RAM.

They both have two USB 3.0 ports, and they offer a similar level of power draw and noise.

I'd argue that the TerraMaster F2-210 has a marginally superior design with the quick-release-style hard drives, whereas the Synology requires you to slide off the casing. It barely makes a difference though.

Fundamentally, they offer similar features; they are both managed via the web using a desktop-style web interface. They also both have the same sort of NAS and backup functions, such as SMB/NFS shares, FTP, Time Machine, and cloud sync.

They both have installable packages, and if you don't mind things like Docker, you could probably replicate everything the Synology does on the TerraMaster.

However, on the Synology website, there are 150 different packages to install vs 40 for the TerraMaster.

Furthermore, Synology also has a range of mobile and desktop apps. In particular, the Synology Drive Client gives you a Google Drive-like solution but with your own private cloud.

Synology also makes external access easier with its quick-connect function and Synology DDNS.

I had issues getting this to work at first; Synology refused to install DSM on my two hard drives. I was worried that the old drives were not in good enough condition for Synology to accept them, or that the DS218play itself was broken. I eliminated the NAS itself as it worked with a third hard drive immediately. After adding the other two drives to my PC and running various tests, no problems were highlighted. I eventually deleted all the partitions that TerraMaster had created and retried the installation; this time it worked immediately. So it seems the DS218play didn't like the old TerraMaster setup.

Once I got DSM to install, I experienced no other major issues. This includes getting Cloud Sync to work, which I had problems with on the TerraMaster.

Surveillance Station is perhaps one of the standout features that Synology has over other options. It is not perfect, but at the same time, I am impressed. Each NAS comes with two camera licences for free; additional ones will set you back around £50, so it can be quite an expensive option.

Getting this to work with cameras is a bit hit or miss. I managed to get it to work with my H.View 5MP Colour Night Vision camera straight away with no problems.

I could not get this to work at all with the Annke cameras I use. It just did not like any of the settings I tried. This is not entirely Synology's fault, as each camera brand has different settings, but BlueIris was able to resolve all the camera feeds with no issues.

Assuming you get the camera to work, performance is excellent and the user interface is superb; it is far better than a cheap NVR.

I didn't buy this as a Plex server as I already have a dedicated server for that, but it is no doubt a popular feature.

If you play most of your files locally, this works well, which covers 99% of my usage. While Synology claims this can do 4K transcoding, Plex explicitly states it can only manage 720p software transcoding with some 1080p transcoding. You will need the DS218+ or the DS418play if you want hardware-accelerated transcoding.

Downloads are another area where Synology is superior to the TerraMaster. There is extensive support for multiple methods of download, including torrents, Usenet, FTP, and file hosting. There are auto-extract features, which are essential if you use some applications to manage your media, and you even have the ability to add various BitTorrent search engines.

Moving from TerraMaster to Synology, the immediate difference was the companion apps available for both PC/Mac and mobile.

On the Play Store there is just a single TNAS mobile app, with some poor reviews.

There are 16 different apps made by Synology. Admittedly, a lot of these have so-so reviews too, but it gives you vastly more functionality compared to TerraMaster.

One of the main ones that I use on mobile has been the Moments app, which gives you a Google Photos-like feature set. I have read a few reports recently of people losing access to their Google accounts and then losing years' worth of photography, so this is a perfect secondary backup solution.

The Synology Drive Client is also superb. I haven't used it for months yet, so I can't say what the long-term performance will be like, but so far it has been excellent. You can, of course, replicate this functionality with other apps; I use Syncovery for some backups, and this would offer the same features depending on your settings.

Read and write performance is good, and you should have no issues capping out the gigabit speed of the ethernet connection. I also plugged in a USB 3.0 2.5-inch drive, and this was consistently able to achieve over 90MB/s, often hitting the 110MB/s threshold of gigabit speeds.

In general, it is going to be the hard drive and ethernet that hold back file transfer performance of this unit.

There are a lot of 2-bay NAS options out there, and I covered them in my NAS round-up.

My personal opinion is that TerraMaster provides the best competition if you don't need all the frills Synology has; they offer superior value, with the F2-210 being £40 cheaper or the four-bay F4-210 being around the same price.

The QNAP TS-328 looks like a tempting alternative too, costing a little bit more but offering a three-bay solution, allowing you to use RAID 5 and therefore lose only a third of the available storage. This is also powered by the Realtek RTD1296 featured in the Synology DS218play and TerraMaster F2-210, but you get 2GB of RAM, giving you a bit more wiggle room for performance.

The Synology DS218j is worth considering if you want the Synology features but need it as cheap as possible. You can forget about transcoding media via Plex though, but it should be fine for most basic functions.

There is obviously a reason why Synology dominates the NAS market. The DSM software outclasses its competitors by some margin.

I am tight-fisted, so I am always going to be a little mad about paying more for the same spec hardware, but in this case the additional £40 or so you pay for this over the TerraMaster F2-210 is justifiable.

I still like the TerraMaster F2-210 a lot; it's priced correctly for the functionality it offers.

I was surprised how good Surveillance Station was, but at the same time it's not perfect; camera setup can be awkward depending on what brand you use. But if you pick up compatible cameras, it could be worth buying this over a dedicated NVR.

For most home and SOHO users, especially people wanting something with as minimal a setup as possible, the Synology DS218play will be a best buy for an affordable 2-bay NAS. It offers all of the functions you need, and many you don't, straight out of the box. The setup is straightforward, and getting the packages installed and running is no harder than installing an app on your phone or PC. The Synology-made apps complement the NAS, giving you a true private cloud storage feature set that can compete with the likes of Google Drive/Docs.

Product Name: Synology DS218play

Product Description: 2-bay NAS with optimal multimedia solution for home users; 4K video transcoding on the fly with 10-bit H.265 codec support; up to 112 MB/s and 112 MB/s sequential reading and writing; powered by a 64-bit 1.4 GHz quad-core processor with 1 GB DDR4 RAM; supports up to 15 IP cameras

Price: £183.99

Currency: GBP

Availability: InStock


Overall score: 85%

Last Updated on 10th February 2020


Last update on 2020-02-10 / Affiliate links / Images from Amazon Product Advertising API

Visit link:
Synology DS218play 2 Bay NAS Review: Is it worth the extra £40+ over the TerraMaster F2-210? - Mighty Gadget

7 Great ETFs to Buy for the Rise of 5G – Investorplace.com

For investors that are paying attention to the communication services and technology sectors, there's a good chance you're hearing plenty about 5G, the next generation of wireless communication systems.

Wide deployment of 5G started in some locations last year, but the rollout should gain significant momentum this year. At its core, 5G aims to reduce latency and increase download speeds. For those that are confused by all this tech jargon, Verizon (NYSE:VZ) has a straightforward definition.

"That means quicker downloads, much lower lag and a significant impact on how we live, work and play," the telecom giant writes. "The connectivity benefits of 5G are expected to make businesses more efficient and give consumers access to more information faster than ever before."

As the 5G theme unfolds, there will be both winning and losing ideas. And beyond some obvious, large-capitalization names, stock picking in the 5G landscape is likely to prove difficult. For many investors, tapping 5G via exchange-traded funds will prove to be a sensible option. With that in mind, here are some of the top ideas among 5G ETFs to consider.


Expense ratio: 0.30% per year, or $30 on a $10,000 investment

A couple weeks shy of its first birthday, the Defiance Next Gen Connectivity ETF (NYSEARCA:FIVG) isn't just a success story among 5G ETFs. It's confirmation that some thematic funds can captivate investors. A few weeks ago, FIVG vaulted to $200 million and now has over $235 million in assets.

FIVG follows the BlueStar 5G Communications Index and for a thematic ETF, its roster is fairly deep. It touches a wide array of segments applicable to 5G, including semiconductor names, telecom gear makers, satellite companies, cloud computing firms and many more.

FIVG's deep bench is important because the 5G ETF allocates about 10% of its combined weight to Nokia (NYSE:NOK) and Ericsson (NASDAQ:ERIC). These are both credible 5G plays, but also two of the theme's most obvious laggards.

FIVG is only modestly higher to start 2020, but it popped 3.5% last week. That indicates it could be starting to accrue some momentum.

Expense ratio: 0.70%

The First Trust Indxx NextG ETF (NASDAQ:NXTG) was once a smartphone ETF. Nine months ago, First Trust threw in the towel on that concept and converted the smartphone fund into a 5G ETF. The difference has been meaningful as NXTG now has north of $315 million in assets under management.

Although they are both 5G ETFs, investors should not expect similar performances from FIVG and NXTG. While the First Trust fund has more holdings, its reach isn't as deep. It relies heavily on chip and computer services stocks to drive performance.

Another marquee difference between the two dedicated 5G ETFs, and an important one, is the fee. NXTG charges 0.70% per year, or 40 basis points more than the rival FIVG. For long-term investors, that's a trait that cannot be overlooked.

Expense ratio: 0.35%

Some semiconductor makers, including Qualcomm (NASDAQ:QCOM), have significant 5G exposure, putting the spotlight on chip funds such as the VanEck Vectors Semiconductor ETF (NYSEARCA:SMH).

Looking at the recent uptick in demand for Taiwan Semiconductor (NYSE:TSM) services, it's evident that 5G is a legitimate catalyst for some chipmakers. Taiwan Semiconductor is SMH's largest holding at a weight of 12.7%.

"TSM expects the penetration rate of 5G smartphones globally to reach midteens next year [2020], more optimistic than its single-digit forecast six months ago," reports the Wall Street Journal.

IHS Markit confirms that 5G is meaningful for semiconductor makers.

"5G's impact will spread far beyond the confines of the tech industry, impacting every aspect of society and driving new economic activity that will spur rising demand for microchips," according to the research firm.

Expense ratio: 0.68%

The Internet of Things (IoT), like 5G, is a standalone theme. But due to the intersection between the two, the latter brings opportunity for the former, and that's potentially rewarding for the Global X Internet of Things ETF (NASDAQ:SNSR). SNSR, the first ETF dedicated to the IoT space, tracks the Indxx Global Internet of Things Thematic Index.

One of the primary reasons that 5G and IoT belong in the same conversation is that both themes revolve around enhanced connectivity. That's how they play off each other, and that's why SNSR is a practical, if not under-appreciated, 5G ETF.

"IoT vendors are working closely with manufacturing enterprises to provide more secure solutions tailored to their clients' operations and digital transformation strategy," according to KPMG. "With the help of 5G networks, IoT platforms will be able to connect solutions and sensors to monitor entire processes."

SNSR holds 50 stocks, over 41% of which are semiconductor names, another trait confirming its potency as a 5G ETF.

Expense ratio: 0.60%

There can't be 5G without 5G infrastructure. And many ETFs aren't adequately inclusive of the real estate names dominating this infrastructure. For that matter, many traditional real estate funds sorely lack 5G exposure. Enter the Pacer Benchmark Data & Infrastructure Real Estate SCTR ETF (NYSEARCA:SRVR).

SRVR is coming off a year in which it obliterated standard real estate ETFs. And it's off to a strong start in 2020. It's outpacing the widely followed MSCI US Investable Market Real Estate 25/50 Index by 120 basis points.

SRVR isn't just a run-of-the-mill cap-weighted fund. It screens components based on property, revenue and tenant type. The real estate companies in SRVR count firms such as AT&T (NYSE:T) and Verizon as their tenants.

There's more to the SRVR story. In addition to being a realistic 5G ETF, the fund touches another booming theme: cloud computing. All those data centers that are powered by high-flying semiconductor makers require space, and lots of it. Rising demand for data consumption and cloud storage are two of the most pivotal factors seen driving members of SRVR's underlying index.

Expense ratio: 0.35%

The SPDR S&P Telecom ETF (NYSEARCA:XTL) is one of the last remaining traditional telecom ETFs, but as an equal-weight fund, it's not excessively exposed to AT&T and Verizon.

XTL's underlying index provides exposure to "alternative carriers, communications equipment, integrated telecommunication services, and wireless telecommunication," according to State Street.

XTL lacks the 5G glamour associated with some of the other funds mentioned here, but it's a decent avenue for conservative investors. Just don't expect the big returns of FIVG, SNSR or SRVR.

Expense ratio: 0.65%

This list wouldn't be complete without some mention of China, because the world's second-largest economy was one of the swiftest deployers of 5G last year. It is also sure to wind up with one of the most expansive 5G networks. For tactical investors, the Global X MSCI China Communication Services ETF (NYSEARCA:CHIC) is one of the best avenues for accessing China's 5G prowess.

China Mobile (NYSE:CHL), China Telecom (NYSE:CHA) and China Unicom (NYSE:CHU), the first two of which are among CHIC's holdings, have already been deploying 5G on a broad scale. Their cooperation means investors don't have to worry about competition adversely affecting these companies.

"The three operators were expecting to operate nearly 130,000 5G base stations by the end of 2019. China Mobile announced plans to install 50,000 5G sites by end-2019, while China Unicom and China Telecom each target about 40,000," according to RCR Wireless.

Here's another reason to consider CHIC. China is obviously devoted to the 5G cause, as it has outspent the U.S. on this front by more than $24 billion.

As of this writing, Todd Shriber did not own any of the aforementioned securities.

Read more:
7 Great ETFs to Buy for the Rise of 5G - Investorplace.com

Warren Mead of NS1 Recognized as 2020 CRN Channel Chief – Business Wire

NEW YORK--(BUSINESS WIRE)--NS1, a leader in next-generation application networking, announced today that CRN, a brand of The Channel Company, has named Warren Mead, vice president of channel and strategic accounts and head of the NS1 Global Channel Partner Program, to its 2020 list of Channel Chiefs. This annual list recognizes the top vendor executives who continually demonstrate exemplary leadership, influence, innovation, and growth for the IT channel.

In his first six months at NS1, Mead built the foundation of the Global Channel Partner Program for systems integrators, managed service providers, and resellers to augment their offerings with NS1's high-performance application delivery solutions built for modern infrastructure. Among its first 20 partners is Promark Technology, a U.S.-focused distributor and subsidiary of Ingram Micro (NYSE:IM). Mead also established a strategic alliance with Cisco, bringing NS1's software-defined, API-first approach in the DNS, DHCP, and IP Address Management (DDI) market to work alongside Cisco Umbrella's enterprise approach to network security.

"A big impact on our channel strategy is that organizations are racing to adopt modern infrastructure and application delivery strategies such as hybrid and multi-cloud, microservices, and edge computing. This drive toward new computing approaches creates unique challenges in application networking that NS1 solutions are designed to address," said Mead. "By constantly engaging with our partners and adding value to their businesses, we're making it easy for them to sell to their customers, while providing industry-leading recurring revenue opportunities. To date, we have signed strategic partners that align with our go-to-market strategy, and our goal for FY2020 is to source 50% of all of our sales qualified opportunities through our worldwide partnerships."

Now a three-time CRN Channel Chief, Mead has 20 years of experience in channel marketing and business development management. Prior to joining NS1, he served as the vice president of global channel and business development at Nasuni to build up its channel and alliances for its cloud storage solution. Mead spent several years building SpeechWorks (now Nuance; Nasdaq: NUAN) from its first customers to a $160M public company with the 12th most successful IPO of 2000. At Virtual Iron, he launched a sales model that brought in more than 3,000 customers and secured an acquisition by Oracle in 2009. He also led a successful turnaround at Akorri Networks that resulted in an acquisition by NetApp in 2011. Mead holds a Bachelor of Arts in communications from Marist College and Leeds Trinity University.

CRN's 2020 Channel Chiefs list honors the distinguished leaders who have influenced the IT channel with cutting-edge strategies and partnerships. The 2020 Channel Chiefs have shown outstanding commitment, an ability to lead, and a passion for progress within the channel through their partner programs. The Channel Chief honorees were chosen by the CRN editorial staff for their dedication, industry prestige, and exceptional accomplishments in driving the channel agenda and evangelizing the importance of channel partnerships.

"The IT channel is undergoing constant evolution to meet customer demands and changing business environments," said Bob Skelley, CEO of The Channel Company. "CRN's Channel Chiefs work tirelessly, leading the industry forward through superior partner programs and strategies with a focus on helping solution providers transform and grow. The Channel Company congratulates these outstanding individuals for their dedication to the channel."

CRN's 2020 Channel Chiefs list will be featured in the February 2020 issue of CRN magazine and online at http://www.CRN.com/ChannelChiefs.

About NS1

NS1 automates the deployment and delivery of the world's most trafficked internet and enterprise applications. Its software-defined, next-generation application networking stack modernizes DNS, DHCP, and IPAM, the familiar and universal foundations of all network and internet services, to unlock unprecedented automation, visibility, and control in today's complex, heterogeneous environments. NS1 has more than 500 customers worldwide, including LinkedIn, Dropbox, Pitney Bowes, Bleacher Report, and The Guardian, and is backed by investments from Dell Technologies, Cisco Investments, and GGV Capital.

About The Channel Company

The Channel Company enables breakthrough IT channel performance with our dominant media, engaging events, expert consulting and education, and innovative marketing services and platforms. As the channel catalyst, we connect and empower technology suppliers, solution providers, and end users. Backed by more than 30 years of unequaled channel experience, we draw from our deep knowledge to envision innovative new solutions for ever-evolving challenges in the technology marketplace. http://www.thechannelcompany.com

Follow The Channel Company: Twitter, LinkedIn, and Facebook.

© 2020 CRN. CRN is a registered trademark of The Channel Company LLC. All rights reserved.

Excerpt from:
Warren Mead of NS1 Recognized as 2020 CRN Channel Chief - Business Wire

How to Simplify Resource Management in Higher Education with Azure Tags – EdTech Magazine: Focus on Higher Education

One way to organize resources in the Azure environment is to use tags, or name-value pairs, generally for resource management, automation and accounting. Tags, unlike resource groups, can label any resource within a subscription, and an effective tagging strategy makes resource administration much easier. To get the most out of tagging in the Azure Portal, follow these four tips.

Each resource can have a maximum of 15 tags associated with it. Tag support in automation is only available for the Azure Resource Manager (ARM) model, not the classic deployment model. Tags aren't hierarchical, so they can't be nested. Finally, certain tag prefixes are reserved, including Azure, Windows and Microsoft. Though tags can be up to 512 characters (128 characters for storage accounts), a better practice is to keep them focused and to the point.

Develop a naming strategy early on, ideally applying tags when resources are created. Automate the process using tools such as PowerShell, the Azure command-line interface or ARM templates. Ensure that a consistent naming scheme is adopted and applied to prevent obsolete and unused tags down the road. Reporting will also benefit from a proper naming scheme and allow for proper accounting of all resources.
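As a hedged illustration of applying tags at creation time, the sketch below uses the azure-identity and azure-mgmt-resource Python packages rather than the PowerShell or ARM template route the tip mentions. The subscription ID, resource group name and tag values are placeholders, not a recommended scheme.

```python
# Minimal sketch: apply a consistent tag set when a resource group is created.
# Subscription ID, resource group name and tag values are placeholders.
from azure.identity import DefaultAzureCredential
from azure.mgmt.resource import ResourceManagementClient

credential = DefaultAzureCredential()
client = ResourceManagementClient(credential, "<subscription-id>")

client.resource_groups.create_or_update(
    "payroll-prod-rg",
    {
        "location": "eastus",
        "tags": {
            "Department": "Payroll",      # who owns it
            "Environment": "Production",  # where it runs
            "CostCenter": "CC-1042",      # who pays for it
        },
    },
)
```

Baking the tags into whatever automation creates the resources, whether that is Python, PowerShell or an ARM template, is what keeps the naming scheme consistent and stops untagged resources from accumulating.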


A well-thought-out application taxonomy simplifies resource management. There are many ways of tagging resources; for example, by department, such as Human Resources or Payroll. Or, focus on technology type, grouping resources by function, such as Web Servers or Load Balancers. Another common scheme is to tag by environment, such as Production, Staging or Development.

Once established, a proper tagging structure can easily be used in automation to quickly apply settings and policies across resources. In cloud environments, managing resources is crucial to keeping costs down. By indicating a required review or an expiration date, tags help admins limit resources to only those that are needed. Tagging is an excellent way to monitor, control and manage an Azure environment.
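Building on the previous sketch, the snippet below shows one way automation might then select resources by tag using the same azure-mgmt-resource client; the tag name and value are again placeholders.

```python
# Find every resource carrying a given tag, e.g. to review or expire it.
production = client.resources.list(
    filter="tagName eq 'Environment' and tagValue eq 'Production'"
)
for resource in production:
    print(resource.name, resource.type, resource.tags)
```

The same tag filter could just as easily feed a cost report or a cleanup job that flags resources whose review or expiration tag has passed.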

Read the original post:
How to Simplify Resource Management in Higher Education with Azure Tags - EdTech Magazine: Focus on Higher Education

Cloud Storage Market Size Overview by Rising Demands, Trends and Huge Business Opportunities 2020 to 2028 – VOICE of Wisconsin Rapids

The Global Cloud Storage Market is a thorough piece of work and is organized by conducting both primary as well as secondary research. The data included in the report has been generated by consulting industry leaders and taking inputs from them. The topmost subdivisions of the Global Cloud Storage Market have been emphasized and these divisions have been presented by giving statistics on their current state by the end of the forecast horizon. These facts and figures help the forthcoming players to estimate the investment possibility within its sector.

Request A Sample Copy Of Report: Click Here https://www.reportconsultant.com/request_sample.php?id=5087

Top Key Players: Amazon Web Services, Inc.; Microsoft Corporation; IBM Corporation; Hewlett Packard Enterprise Development LP; Google Inc.; VMware, Inc.; Oracle Corporation; EMC Corporation; Rackspace Hosting, Inc.; Red Hat, Inc.

North America, Europe, Asia Pacific, Middle East & Africa, and Latin America are labeled to be the most prominent regional Global Cloud Storage Market. Among these, North America has attained the overall market and is still rising continually. But, now it is also being anticipated that in the next few years, some other regions might take over and turn out to be the most promising regional markets. The Asia Pacific is also expected to witness a high rise in the Global Cloud Storage Market in the near future owing to the presence of a large number of people, getting into this market sector.

The major strategies accepted by the established players for a better saturation in the Global Cloud Storage Market also forms a key section of this study. These methods can be employed by the upcoming players for a better view of the market. The global market has also been examined in terms of its revenue. Dynamics such as market drivers, restraints and opportunities have been combined and displayed which helps in collecting the statistics on the future growth of the market. In addition to this, the Global Cloud Storage Market report also gives a brief on the evaluation of the key players which is based on SWOT analysis, contact figures, product outlines and product profile.

Ask For Discount@ https://www.reportconsultant.com/ask_for_discount.php?id=5087

Segment By Regions/Countries, This Cloud Storage Market Report Covers

South America

North America

Europe

Middle East and Africa

Asia Pacific


Rebecca Parker


View original post here:
Cloud Storage Market Size Overview by Rising Demands, Trends and Huge Business Opportunities 2020 to 2028 - VOICE of Wisconsin Rapids

5 tips to clean up your Chromebook and keep it running fast – Chrome Unboxed

It's no secret that we love Chromebooks and truly believe in the future of cloud computing. Chromebooks are awesome for many reasons and they make great daily computers because they boot up in seconds, you don't have to spend half your day updating the operating system, and they are simple to use. You constantly hear about the cloud and, to some degree, we are all connected all the time. Chromebooks take full advantage of this cloud computing future and that's why so many people have decided to make the switch from other operating systems. But if you aren't careful, your Chromebook might start to feel bogged down. So I want to go over 5 quick tips to optimize your Chromebook and keep it running fast.

First of all, we have covered some of these tips in our segment Chromebook Tip Tuesday, so you can check out those videos here.

1. Clean up your extensions: Extensions are basically small packages of software that can run in the Chrome browser, and you can use them to more easily get things done. I use Grammarly, Bitly, and Pocket basically every day and they have become part of my workflow. Occasionally though, some of your extensions can become outdated and can start to cause all kinds of issues. We always recommend cleaning up extensions when people are having issues with their Chromebook, because an unsupported or outdated extension is normally the root of the problem. You can clean up your extensions by selecting Extensions in your browser settings or browsing to chrome://extensions. From there, remove any extensions you aren't using and then, if you are still having issues, go through and turn off all your extensions. Then turn them on individually to see if you can find the culprit of your issues.


2. Clean up your hard drive: Chromebooks are built for the cloud, and so you will notice that most Chrome OS devices don't have the same internal hard drive storage that you are accustomed to with Windows or Mac. That's because most of your files should be in the cloud. Creating folders and utilizing Google Drive for all your files will help to keep your Chromebook speedy.

3. Use Google Drive for your downloads: You can take full advantage of Google Drive with this hack that many people aren't aware of, and it will technically work on a Chromebook or any other device using the Chrome browser. Changing your downloads folder to a Google Drive folder will automatically upload all your downloads to the cloud, so they are always accessible from other devices and will never be lost when you Powerwash or use another Chromebook.

4. Review and uninstall apps: The new app manager in the Chrome OS settings is a useful place to see all your apps and review which ones you are using and which ones can be deleted. Although non-running apps don't use system resources when they aren't actively open, they are still using up your local storage, so in general, it's a good idea to check out the app manager every now and then and delete any apps that you don't have any need for anymore or haven't used lately.

5. Powerwash: If all else fails, a Powerwash in Chrome OS will restore the Chromebook back to factory settings. This is something that we do all the time around the office as we are testing and reviewing different devices, but it can also be useful when you are having issues. Like any other operating system, you might just run into a system glitch every now and then, so a Powerwash from the lock screen can be a very useful hack.

Important point to consider: when you Powerwash, if you haven't cleaned up problematic extensions and/or apps, they will be reinstalled upon signing in with the same account and thus, the same issues will persist. If after a Powerwash you end up seeing the same issues, make sure you are doing everything on the list above, too.

I know that Chrome OS is simple and you normally don't need to put much thought into its performance, but I hope you have learned something from these 5 quick tips. These simple hacks are great for troubleshooting a problem the next time you are having issues with your Chromebook, or they can be used if you just want to speed up your Chromebook.

Go here to read the rest:
5 tips to clean up your Chromebook and keep it running fast - Chrome Unboxed