Category Archives: Cloud Servers
2020s: The decade that tears down LANs, WANs, VPNs and Firewalls – ITProPortal
The walls of the corporate workplace will become fluid for enterprises over this decade. This will be a movement driven by the way you want to work and the birth of the fully internet-driven workplace; put another way, it's the death of the legacy corporate network, which naturally means the death of traditional network security. It's a dramatic improvement that will restyle the way we all connect, rewrite how IT leaders help you access work, and reshape entire technology markets where legacy infrastructure companies will struggle.
This movement will drive jobs closer to workers' lives as part of a monumental reckoning with connectivity, mobility, cloud, and the way we all want to work. Mobility, BYOD, or whatever you may call it may be commonplace in Silicon Valley, large cosmopolitan cities, and some verticals like high tech, but outside of these fairly early adopters, it is not mainstream. We do already see this happening, though, in pockets. However, the 5G era is going to dramatically speed the adoption of this new way of working, and that will in turn speed the demise of the traditional corporate network.
The fallout from this change in the way we work will be extreme. Here are four of my predictions for the 2020s.
Any time you connect to the internet, there is an IP address to connect you, often through a firewall. A firewall is like a door that protects a house or a castle. Every firewall with an internet-facing IP address is an attack surface that creates significant business risk. New approaches and technologies will evolve this decade to mitigate this risk.
As more applications sit in the public cloud and more offices use the internet to connect to the cloud or SaaS applications, the attack surface is drastically increasing. As you connect 100x more applications, data, devices, and people to the internet, what happens to the attack surface? It increases 100-fold.
Think of it this way. If you want 100 friends to be able to reach you, you can publish your phone number on a website. Now they can call you, but so can robocallers. This is precisely what happens when you publish applications on a public cloud and use the internet to reach them. Your users can access them but so can a million hackers who can discover vulnerabilities or launch a DDoS attack.
How do you solve it? Suppose you hired a phone operator and gave him the names of your 100 friends. Your friends would be able to quickly reach you when they call the operator, but a robocaller that tries to connect to you would be denied by the operator and wouldn't be able to bother you. A similar approach works for protecting your enterprise and starts by preventing exposure of your enterprise's user/branch traffic or applications/servers to the internet. This approach replaces the castle-and-moat legacy model with a digital exchange, somewhat like a sophisticated phone switchboard. Your applications remain invisible behind the exchange. Users connect to the exchange, which then connects them to their applications. In this model, the user, the offices, and the applications are never exposed to the internet. This approach for secure access to applications will become widely used in the coming decade.
Internet connectivity improved so much in the past decade that enterprises started to dump their private, expensive wide area networks (WANs) that connected various offices to the data centre. Frederik Janssen, head of global infrastructure at Siemens, is a pioneer and a thought leader who coined the phrase "the internet is the new corporate network" several years ago. What he meant was that Siemens' business was being done everywhere: the office, coffee shops, airports, hotels. The internet had become the de facto transport for all traffic.
With the widespread use of 5G in the 2020s, local area networks (LANs) will also disappear. Today, while sitting in our office, we look for Wi-Fi to access the internet, which securely connects us through routers or firewalls sitting at the company's perimeter. But when every PC or mobile phone is equipped with ultrafast 5G, would you ever connect to Wi-Fi in your office? No way: you will use direct 5G connections and bypass traditional routers and firewalls. And, if there is no WAN or LAN in your control, then there is no use case for firewalls. The traffic from your 5G devices will connect the right people to the right applications, through a digital services exchange, and this will deliver faster, more secure, and more reliable access to apps and services.
There are countless stories about VPNs being the launch pad for devastating malware/ransomware attacks. This is happening because firewalls and VPNs were built for the network-centric world, where apps resided solely in the data centre and a security perimeter around the castle was all you needed. With so many organisations moving toward a perimeter-less model, traditional network security based on the castle-and-moat approach, which is how firewalls fundamentally protect, is no longer relevant. They give enterprises a false sense of security. New approaches are being developed that use business policy engines, acting like the previously mentioned digital services exchange, to enforce policy and deliver better enterprise security.
Today, to give a user access to applications, we connect them to the so-called trusted corporate network. Once on the network, the user can see more than they should. This was acceptable when you controlled the network, but with the internet as the corporate network, putting users on a network to access applications is dangerous. If a user's machine becomes infected, the malware can traverse the network laterally and infect all the servers on it. Maersk, a massive shipping company, faced that issue about 18 months ago, highlighting the danger of putting users and applications on the same network. A better approach to this problem is badly needed.
Many CISOs also manage physical security, so I like using an office metaphor to illustrate zero trust. If I am visiting an office, I get stopped at reception, which checks my ID, confirms my appointment, and issues me a badge. They could direct me to the elevators and tell me to head up to the sixth floor for my appointment. But this rarely happens anymore, because otherwise I could simply wander around the company doing whatever I want, wherever I want. In contrast, a zero trust approach would have someone escort me directly to the conference room and take me back to the front desk after my meeting.
Gartner's ground-breaking research note on zero trust network access (ZTNA) describes how enterprises should provide users access to the specific applications they need; instead of granting access to a network, ZTNA provides access only to those applications a user is authorised to use. This approach provides security for the world of cloud that's far better than trying to create lots of network segments to achieve application segmentation.
At a high level, think of ZTNA like this: it starts with an assumption that you trust nobody; you can establish a level of trust based on authentication, device posture, and other factors, but you'll still only trust users with the applications they are specifically authorised to use. Any other activity would be highly suspicious.
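As a rough illustration of that default-deny model, here is a minimal sketch in Python; the names and checks are illustrative, not any vendor's actual API.

    # Minimal sketch of a ZTNA-style access decision (illustrative only).
    # Access is evaluated per user, per application; there is no notion of
    # simply being "on the network".

    AUTHORIZED_APPS = {
        "alice@example.com": {"crm", "payroll"},
        "bob@example.com": {"crm"},
    }

    def device_posture_ok(device):
        # e.g. disk encrypted, OS patched, endpoint agent running
        return device.get("encrypted") and device.get("patched")

    def authorize(user, device, app):
        if not device_posture_ok(device):
            return False  # untrusted device: deny everything
        return app in AUTHORIZED_APPS.get(user, set())  # default deny

    # Bob authenticated successfully, but payroll isn't on his list,
    # so the request is denied outright.
    print(authorize("bob@example.com", {"encrypted": True, "patched": True}, "payroll"))  # False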
These are not simple incremental changes; these megashifts will bring tons of opportunities and challenges to businesses. Technologies such as cloud, mobility, IoT, and machine learning are upending many large global brands while giving rise to new businesses at a pace never seen before. They are also disrupting large, incumbent technology providers while creating new giants.
Jay Chaudhry, CEO, Chairman & Founder, Zscaler
Read the original here:
2020s: The decade that tears down LANs, WANs, VPNs and Firewalls - ITProPortal
Ransomware protection is killer app for Datrium DRaaS – Blocks and Files
A ransomware attack is a disaster. When ransomware infects an organisation's IT systems, stored and backup data are encrypted and made unavailable.
The IT system is unable to function and in many cases that means the organisation cannot function either until it remedies the attack. In essence there are two ways to do this: paying the ransom to decrypt the files or getting clean files restored from a disaster recovery (DR) facility.
Affordable and fast DR is a good way to defeat a ransomware infestation. Datrium, a hyperconverged systems vendor, has recognised this and in August 2019 launched its own DRaaS (disaster recovery as a service), incorporating home-grown HCI system backup technologies.
Historically, disaster recovery has been a hugely expensive and relatively niche aspect of customer storage and system buying strategy. But the massive increase in ransomware attacks in recent years has expanded the DR vulnerability surface. At the same time, the availability of the public cloud as a form of remote DR facility has brought costs tumbling.
A September 2016 FBI alert said: "New ransomware variants are emerging regularly. Cyber security companies reported that in the first several months of 2016, global ransomware infections were at an all-time high. Within the first weeks of its release, one particular ransomware variant compromised an estimated 100,000 computers a day."
Data protection vendor Acronis reported the Spring 2017 WannaCry outbreak afflicted over 200,000 computers in over 150 countries. Global costs were estimated to total $8bn.
A second FBI alert in October 2019 said: "Ransomware attacks are becoming more targeted, sophisticated, and costly, even as the overall frequency of attacks remains consistent. Since early 2018, the incidence of broad, indiscriminate ransomware campaigns has sharply declined, but the losses from ransomware attacks have increased significantly, according to complaints received by IC3 and FBI case information."
"Although state and local governments have been particularly visible targets for ransomware attacks," the alert continued, "ransomware actors have also targeted health care organizations, industrial companies, and the transportation sector."
Indeed ransomware is now so prevalent that automated failover to a recovery site is becoming table stakes for all data protection suppliers. In that sense ransomware recovery is a killer feature, and suppliers without this capability will be in trouble.
Many data protection suppliers already offer DR facilities, including Cohesity, Commvault, Dell EMC, Druva, Rubrik and Zerto. And more are sure to follow.
Datriums background is somewhat different. Founded in 2012, the company is a venture-backed startup that has raised $165m to date, including $60m in the most recent round in September 2018.
Datrium pioneered a middle way between converged and hyperconverged systems with hyperconverged nodes running storage controller software that linked them to a shared storage box. However, it faced enormous competition and the HCI market consolidated rapidly around two leading suppliers: Dell EMC, with VxRail, and Nutanix.
Datrium then moved into unified hybrid cloud computing and protecting its DVX systems, specifically backup to the cloud. The company announced Cloud DVX in August 2018, claiming up to 10 times lower AWS costs for cloud backup, along with CloudShift, a SaaS-based disaster recovery orchestration service for VMware.
This hit the market as the necessity of dealing with ransomware became even more pressing, and Datrium realised it had a potential killer app for VMware users.
CEO Tim Page told Blocks & Files in a phone interview that Datrium has gained 60 new accounts in under two months since launching its disaster recovery as a service. "DR is catapulting our business revenues upwards."
He said the reason for this is that Datrium's DRaaS preserves the VMware environment, is affordable, and is lightning fast, failing over in minutes when an attack takes place.
Datrium offers DR as a service (DRaaS) using VMware Cloud on AWS. In other words, it protects VMware virtual machines (VMs) by spinning up DR copies in AWS. Page told me the time between attack detection and recovery should be as short as possible, i.e. the DR copy VMs should be spun up quickly.
He said backups, even air-gapped backups such as tape, are inferior to a DR facility. It takes time to restore backup files, and the ransomware infestation must be removed from the affected IT site. With a DR facility in place, the victim can use clean files while the ransomware is found and removed and infected files are deleted. Post clean-up, the DR facility can fail back to the main site.
Datrium stores immutable backup snapshots in Amazon's S3 storage, which lowers cost, but in a form that can be spun up immediately, without rehydration or conversion, as VMs running in the VMware cloud. Admin staff at the ransomware-infected customer just switch from one VMware environment to another; there is no difference.
Immutability means that the snapshotted data cannot be altered subsequently. Any ransomware infection after the date the snapshot was taken will not infect that snapshot.
Datrium offers a short RTO (Recovery Time Objective) because it has selectable restore points. This short RTO is made feasible by automating the recovery process, which can involve hundreds or thousands of separate operational steps to get a large suite of VMs up and running in the right order.
With the orchestration routine in place, the DRaaS facility is told via a mouse click to fail over to the cloud DR site when a ransomware attack or other disaster happens, and that takes just minutes. DR recovery can then start a few minutes later at the source site.
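To give a feel for what that orchestration automates, here is a toy sketch in Python with hypothetical VM names and logic; Datrium's actual CloudShift runbooks are far richer. Each tier powers on only after the previous tier reports healthy.

    # Toy sketch of tiered DR failover orchestration (hypothetical).
    RECOVERY_PLAN = [
        ["dns-01", "ad-01"],             # tier 1: core infrastructure first
        ["sql-01", "sql-02"],            # tier 2: databases
        ["app-01", "app-02", "web-01"],  # tier 3: application and web layers
    ]

    def power_on(vm):
        print(f"powering on {vm} at the cloud DR site")

    def is_healthy(vm):
        return True  # stand-in for a real health probe (agent check, port probe)

    def fail_over(plan):
        for tier in plan:
            for vm in tier:
                power_on(vm)
            if not all(is_healthy(vm) for vm in tier):
                raise RuntimeError(f"tier {tier} failed its health check")

    fail_over(RECOVERY_PLAN)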
Backed-up VMs exist in a timeline. Some time before an attack makes itself known, with its locking of files by encryption and its ransom notification, the ransomware infects a system and starts encrypting files. This event can be located by checking file activity records.
In a recent incident a Midwest US municipality was attacked (the town is unwilling to reveal its identity, Datrium said). The IT department had backed up its VMs to a Datrium DVX system but without the DRaaS option in place. Admin staff and Datrium consultants checked the incoming snapshots to the target DVX system and found a sudden size increase:
The anomalous snapshots had sizes of 23.6GiB, 80.2GiB, and 80.7GiB, while prior and subsequent snapshots were 6.1GiB and 3.6GiB in size. This enlargement was caused by Ryuk ransomware encrypting files.
To combat the attack, a prior snapshot from a day earlier was used and powered up on a quarantined network. It was verified malware-free by a security team and became a so-called recovery golden copy.
The recovery team restored individual VMs in priority order and verified each one was clean with an anti-virus scanner before restoring the next one. This took almost two days to complete. A mass restoration of all the VMs would have taken less time, and a DRaaS option would have been quicker still.
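The size jump the admins spotted by eye also lends itself to automation. A minimal sketch follows; the window and threshold are arbitrary choices for illustration, not Datrium's algorithm.

    # Flag snapshots whose size jumps well above the trailing median.
    def flag_anomalies(sizes_gib, window=5, factor=3.0):
        flagged = []
        for i in range(window, len(sizes_gib)):
            baseline = sorted(sizes_gib[i - window:i])[window // 2]  # trailing median
            if sizes_gib[i] > factor * baseline:
                flagged.append(i)
        return flagged

    # Sizes echo the incident above: steady ~6 GiB, then Ryuk-driven spikes.
    print(flag_anomalies([6.1, 5.8, 6.3, 5.9, 6.1, 23.6, 80.2, 80.7, 3.6]))  # [5, 6, 7]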
Datrium initially provided cloud backup for its own on-premises DVX system, which is semi-hyperconverged because the storage repository is separate from the compute nodes. It extended this to source systems from Dell EMC, NetApp, Nutanix, Pure Storage and others, and also to VMware running in AWS.
Datrium can provide DR with failover to VMware Cloud on AWS so long as the source site is a VMware site. Datrium uses its own backed up VMs and data from the source site.
VMware is accommodating Kubernetes and containers, and Page pointed out that "as VMware embraces Kubernetes, we can do so too".
He said Datrium DRaaS will work with Microsoft Azure cloud by the end of 2020.
And what about the rising tide of cloud-native applications that do not use VMware? "We have a CSS login for bare metal servers," Page said. He suggested Datrium could develop this ability to back up bare metal Kubernetes environments to the public cloud, and reinstantiate containers there for DR, in the same way as it spins up VMs today.
As long as ransomware infections exist, Datrium should prosper by offering a simple and fast recovery option, viable both for virtual machines and containerised environments.
View original post here:
Ransomware protection is killer app for Datrium DRaaS - Blocks and Files
The Cloud Snooper malware that sneaks into your Linux servers – Naked Security
SophosLabs has just published a detailed report about a malware attack dubbed Cloud Snooper.
The reason for the name is not so much that the attack is cloud-specific (the technique could be used against pretty much any server, wherever it's hosted), but that it's a sneaky way for cybercrooks to open up your server to the cloud, in ways you very definitely don't want, from the inside out.
The Cloud Snooper report covers a whole raft of related malware samples that our researchers found deployed in combination.
It's a fascinating and highly recommended read if you're responsible for running servers that are supposed to be both secure and yet accessible from the outside world: for example, websites, blogs, community forums, upload sites, file repositories, mail servers, jump hosts and so forth.
In this article, we're going to focus on just one of the components in the Cloud Snooper menagerie, because it's an excellent reminder of how devious crooks can be, and how sneakily they can stay hidden, once they're inside your network in the first place.
If you've already downloaded the report, or have it open in another window, the component we're going to be talking about here is the file called snd_floppy.
That's a Linux kernel driver used by the Cloud Snooper crooks so that they can send command-and-control instructions right into your network, but hidden in plain sight.
If you've heard of steganography, which is where you hide snippets of data in otherwise innocent-looking files such as videos or images, where a few noise pixels won't attract any attention, then this is a similar sort of thing, but for network traffic.
As we say in the steganography video that we linked to in the previous paragraph:
You don't try and scramble the message so nobody can read it, so much as deliver a message in a way that no one even realises you've sent a message in the first place.
The jargon term for the trick that the snd_floppy driver uses is in-band signalling, which is where you use unexceptionable but unusual data patterns in regular network traffic to denote something special.
Readers whose IT careers date back to the modem era will remember, probably unfondly, that many modems would helpfully interpret three plus signs (+++) at any point in the incoming data as a signal to switch into command mode, so that the characters that came next would be sent to the modem itself, not to the user.
So if you were downloading a text file with the characters HELLO+HOWDY in it, you'd receive all those characters, as expected.
But if the joker at the other end deliberately sent HELLO+++ATH0 instead, you would receive the text HELLO, but the modem would receive the text ATH0, which is the command to hang up the phone, and so HELLO would be the last thing you'd see before the line went dead.
This malware uses a similar, but undocumented and unexpected, approach to embedding control information in regular-looking data.
The crooks can therefore hide commands where you simply wouldn't think to watch for them, or know what to watch for anyway.
In case you're wondering, there isn't a legitimate Linux driver called snd_floppy, but it's a sneakily chosen name, because there are plenty of audio drivers called snd_somethingorother, as you can see by listing the loaded sound modules on any Linux system:
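A representative sample follows; the command is real, but the module names will vary from machine to machine.

    $ lsmod | grep '^snd' | awk '{print $1}'
    snd_hda_intel
    snd_usb_audio
    snd_hda_codec
    snd_pcm
    snd_timer
    snd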
In real life, the bogus snd_floppy driver has nothing to do with floppy disks, emulated or real, and nothing to do with sound or audio support.
What snd_floppy does is to monitor innocent-looking network traffic to look for in-band characteristics that act as secret signals.
There are lots of things that sniffer-triggered malware like this could look out for: slightly weird HTTP headers, for instance, or web requests of a very specific or unusual size, or emails with an unlikely but not-too-weird name in the MAIL FROM: line.
But snd_floppy has a much simpler and lower-level trick than that: it uses what's called the network source port for its sneaky in-band signals.
You're probably familiar with TCP destination ports: they're effectively service identifiers that you use along with an IP address to denote the specific program you want to connect to on the server of your choice.
When you make an HTTP connection, for example, it's usually sent to port 80, or 443 if it's HTTPS, on the server you're reaching out to, denoted in full as http://example.com:80 or https://example.com:443. (The numbers are typically omitted whenever the standard port is used.)
Because TCP supports multiple port numbers on every server, you can run multiple services at the same time on the same server: the IP address alone is like a street name, with the port number denoting the specific house you want to visit.
But every TCP packet also has a source port, which is set by the other end when it sends the packet, so that traffic coming back can be tracked and routed correctly, too.
Now, the destination port is almost always chosen to select a well-known service, which means that everyone sticks to a standard set of numbers: 80 for HTTP and 443 for HTTPS, as mentioned above, or 22 for SSH, 25 for email, and so on.
But TCP source ports only need to be unique for each outbound connection, so most programmers simply let the operating system choose a port number for them, known in the jargon as an ephemeral port.
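You can watch the OS do this for you; here is a minimal Python sketch that makes an outbound connection and prints the source port it was given.

    # The OS assigns an ephemeral source port to an outbound connection;
    # the program never chooses it.
    import socket

    s = socket.create_connection(("example.com", 80))
    port = s.getsockname()[1]
    print(f"OS-chosen ephemeral source port: {port}")  # e.g. 49204 (varies per run)
    s.close()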
Ports are 16-bit numbers, so they can vary from 1 to 65535; ephemeral ports are usually chosen (randomly or in sequence, wrapping around back to the start after the end of their range) from the set 49152 to 65535.
Windows and the BSD-based operating systems use this range; Linux does it slightly differently, usually starting at 32768 instead. You can check the range used on your Linux system as shown below.
On our Linux system, for example, ephemeral (also known as dynamic) ports vary between 32768 and 60999:
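    $ cat /proc/sys/net/ipv4/ip_local_port_range
    32768   60999

(The same range can be queried with sysctl net.ipv4.ip_local_port_range.)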
But there are no rules to say you can't choose numbers outside the ephemeral range, and most firewalls and computers will accept any legal source port on incoming traffic because it is, after all, legal traffic.
You can see where this is going.
The devious driver snd_floppy uses the usually unimportant numeric value of the TCP source port to recognise secret signals that have come in from outside the firewall.
The source port, just 16 pesky bits in the entire packet, is what sneaks the message in through the firewall, whereupon snd_floppy will perform one of its secret functions based on the port number.
Sure, the crooks are taking a small risk that traffic that wasn't specially crafted by them might accidentally trigger one of their secret functions, which could get in the way of their attack.
But most of the time it wont, because the crooks use source port numbers below 10000, while conventional software and most modern operating systems stick to source port numbers of 32768 and above.
For details of the port numbers used and what they are for, please see the full Cloud Snooper report.
As suggested above, there is a small chance that source port filtering of this sort might block some legitimate traffic, because it's not illegal, merely unusual, to use source port numbers below 32768.
Also, the crooks could easily change the secret numbers in future variants of the malware, so this would be a temporary measure only.
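As a sketch of that temporary measure, and assuming the below-10,000 pattern described above (the exact trigger ports are listed in the report and are not reproduced here), a host firewall rule could drop new inbound TCP connections with low source ports while leaving replies to your own outbound connections alone:

    # Sketch only: drop NEW inbound TCP whose source port is below 10000.
    # Established flows (replies to our own outbound connections) are untouched.
    # A similar rule would be needed for the UDP trigger port.
    iptables -A INPUT -p tcp --sport 1:9999 -m conntrack --ctstate NEW -j DROP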
There are five TCP source port numbers that the driver watches out for, and one UDP source port number. Ironically, leaving just TCP source port 9999 unblocked would allow any "kill payload" commands to get through, thus allowing the crooks to stop the malware but not to start it up again.
Using security software that scans files on access will help you to spot and stop dangerous files of many types, including rogue kernel drivers, unwanted userland programs, and malicious scripts.
Crooks need administrator-level access to your network to load their own kernel drivers, which means that by the time you are vulnerable to an attack like Cloud Snooper, the crooks are potentially in control of everything anyway.
Many network-level attacks where criminals need root or admin powers are made possible because the crooks find their way in through a legitimate remote access portal that wasn't properly secured.
Yes, crooks who already have root powers can tamper with your logging configuration, and even with the logs themselves, making it harder to spot malicious activity.
But it's rare that crooks are able to take over your servers without leaving some trace of their actions, such as log entries showing unauthorised or unexpected kernel drivers being activated.
The only thing worse than being hacked is realising, after you've been hacked, that you could have spotted the attack before it unfolded, if only you'd taken the time to look.
Read more:
The Cloud Snooper malware that sneaks into your Linux servers - Naked Security
Cloud Snooper firewall bypass may be work of nation state – ComputerWeekly.com
Next-gen security specialist Sophos has revealed details of a sophisticated new attack known as Cloud Snooper, which enables malware on servers to communicate freely with its command and control (C2) servers through its victims firewalls, and may have been developed by a nation state actor.
The attack technique was uncovered by SophosLabs threat research manager Sergei Shevchenko while investigating a malware infection of some AWS-hosted cloud servers. However, it is not an AWS-specific attack; rather, it represents a method of piggybacking C2 traffic on legitimate traffic to get past firewalls and exfiltrate data.
Cloud Snooper uses three main tactics, techniques and procedures (TTPs) in tandem: a rootkit to circumvent firewalls, a rare technique to gain access to servers while disguised as legitimate traffic (essentially a wolf in sheep's clothing), and a backdoor payload that shares the malicious code between both Windows and Linux systems. Each of these elements has been seen before, but never yet all at once.
"This is the first time we have seen an attack formula that combines a bypassing technique with a multi-platform payload targeting both Windows and Linux systems," said Shevchenko.
"IT security teams and network administrators need to be diligent about patching all external-facing services to prevent attackers from evading cloud and firewall security policies.
"IT security teams also need to protect against multi-platform attacks. Until now, Windows-based assets have been the typical target, but attackers are more frequently considering Linux systems because cloud services have become popular hunting grounds. It's a matter of time before more cyber criminals adopt these techniques."
Shevchenko said that the complexity of the attack and the use of a bespoke advanced persistent threat (APT) toolkit strongly suggested that the malware and its operators are highly advanced, and possibly backed by a nation state actor.
He added that it was possible, indeed highly likely, that this specific package of TTPs would trickle down to the lower rungs of the cyber criminal hierarchy, and eventually form a blueprint for widespread firewall bypass attacks.
"This case is extremely interesting as it demonstrates the true multi-platform nature of a modern attack," said Shevchenko.
"A well-financed, competent, determined attacker will be unlikely ever to be restricted by the boundaries imposed by different platforms. Building a unified server infrastructure that serves various agents working on different platforms makes perfect sense," he added.
Shevchenko said that in terms of prevention against this or similar attacks, while AWS Security Groups (SGs) provide a robust boundary firewall for EC2 instances, this does not in and of itself remove the need for network admins to fully patch all their outward-facing services.
He added that the default installation for the SSH server also needs extra steps to harden it, turning it into a rock-solid communication daemon.
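As a generic illustration of that kind of hardening (a common baseline, not Sophos's specific checklist), typical directives in /etc/ssh/sshd_config include:

    # /etc/ssh/sshd_config -- common hardening directives (generic baseline,
    # not Sophos's specific recommendations)
    PermitRootLogin no             # never allow direct root logins
    PasswordAuthentication no      # key-based authentication only
    PubkeyAuthentication yes
    MaxAuthTries 3
    AllowUsers deploy ops-admin    # hypothetical allow-list of account names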
Sophos shared a number of steps proactive admins should be taking. These include creating a full inventory of all network-connected devices and keeping their security software updated; fully patching outward-facing services, above and beyond what Amazon or your cloud service of choice might provide; checking and double-checking all cloud configurations; and enabling multi-factor authentication on security dashboards or control panels to stop attackers disabling your defences, or at least to make it harder for them to do so.
Continued here:
Cloud Snooper firewall bypass may be work of nation state - ComputerWeekly.com
Cloud Security Risks Will Be a Top Concern for Organizations in 2020 – Security Magazine
See the rest here:
Cloud Security Risks Will Be a Top Concern for Organizations in 2020 - Security Magazine
Windows 10: Containers are the future, and here’s what you need to know – TechRepublic
With two use cases for its containers, and five different container models, it would seem that Microsoft's container strategy is ripe for confusion. But that's not the case.
Microsoft offers many different container models on Windows. If you're running Windows 10 you're running several without even realising it: wrapping and isolating all your UWP apps; using thin virtual machines to deliver security; and, if you're a developer, either Windows or Linux Docker instances.
That layered container model is key to the future of Windows -- one that reaches into the upcoming Windows 10X and out into the wider world of public and private clouds, with Docker Windows containers now officially part of Kubernetes. Microsoft is working on shrinking Windows Server to produce lightweight container base images with a more capable Windows.
While the desktop containers are intended to both simplify and secure your desktop applications, providing much-needed isolation for apps installed via appx or MSIX (and in Windows 10X for any other Win32 code), Windows 10's containers are based on Windows' own process isolation technology. It's not the familiar Docker model that we find in our cloud-hosted enterprise applications.
That's not to say Windows 10 can't run Docker containers. Microsoft is using Docker's services to underpin its Windows Server containers. You can build and test code running inside them on Windows PCs, running either Pro or Enterprise builds, and the upcoming 2004 release of Windows 10 brings WSL2 and support for Linux containers running on Windows.
Docker has been developing a new version of its Docker Desktop tools for Windows around WSL2, making it as easy to develop and test Linux containers on Windows 10 as it is to work with Windows' own containers. With Microsoft positioning Windows as a development platform for Kubernetes and other cloud platforms, first-class Docker support on Windows PCs is essential.
It's not only Linux containers in the cloud. Windows containers have a place too, hosting .NET and other Windows platforms. Instead of deploying SQL Server or another Windows server application in your cloud services, you can install it in a container and quickly deploy the code as part of a DevOps CI/CD deployment. Modern DevOps treats infrastructures (especially virtual infrastructures) as the end state of a build, so treating component applications in containers as one of many different types of build artifact makes a lot of sense.
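For example, Microsoft publishes SQL Server as an official container image, so standing up a database instance is a one-liner; a sketch, with the password and tag as illustrative values (check Microsoft's image documentation for current tags):

    docker run -d --name sql1 \
        -e "ACCEPT_EULA=Y" -e "SA_PASSWORD=YourStrong!Passw0rd" \
        -p 1433:1433 \
        mcr.microsoft.com/mssql/server:2019-latest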
What's important here is not the application, but how it's orchestrated and managed. That's where Kubernetes comes in, along with RedHat's OpenShift Kubernetes service. Recent releases have added support for Windows containers alongside Linux, managing both from the same controller.
While both OpenShift and Kubernetes now support Windows containers, they're not actually running Windows containers on Linux hosts. There's no practical reason why they couldn't use a similar technique to that used by Docker to run Linux containers on Windows, but Windows Server's relatively strict licensing conditions would require a Windows licence for each virtual machine instance hosting the Windows containers.
Using Windows containers in Kubernetes means building a hybrid infrastructure that mixes Linux and Windows hosts, with Windows containers running on Windows Server-powered worker nodes. Using tools like OpenShift or the Azure Kubernetes Service automates the placement of code on those workers, managing a cross-OS cluster for your application. .NET code can be lifted into a Windows Docker container and deployed via the Azure Container Registry. You can manage those nodes from the same controller as your Linux nodes.
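In a mixed cluster, steering a workload onto the Windows workers comes down to a node selector in the pod spec; a minimal sketch, with the pod name and image as hypothetical values:

    # Pin a pod to Windows worker nodes via the standard OS label
    apiVersion: v1
    kind: Pod
    metadata:
      name: dotnet-app
    spec:
      nodeSelector:
        kubernetes.io/os: windows
      containers:
      - name: app
        image: myregistry.azurecr.io/dotnet-app:1.0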
SEE: Serverless computing: A guide for IT leaders (TechRepublic Premium)
There's no need to learn anything new, if you're coming to Windows containers from Linux. You're using familiar Docker tools to build and manage your container images, and then the same Kubernetes tooling as you'd use for a pure Linux application. Mixing and matching Windows and Linux microservices in a single application allows you to take advantage of OS-specific features and to keep the expertise of existing developer teams, even as you're switching from a traditional monolithic application environment to a modern distributed system.
Microsoft is building a suite of open-source tools to help manage Windows containers, with a GitHub repository for the first one, a logging tool. Improving logging makes sense for a distributed application, where multiple containers interact under the control of Kubernetes operators.
Outside of Kubernetes, Windows containers on Windows Server have two different isolation modes. The first, process isolation, is similar to that used by Linux containers, running multiple images on a host OS, using the same kernel for all the images and the host. Namespaces keep the processes isolated, managing resources appropriately. It's an approach that's best used when you know what all the processes running on a server are, ensuring that there's no risk of information leaking between different container images. The small security risk that comes with a shared kernel is why Microsoft offers a more secure alternative: isolated containers.
Under the hood of Windows Server's isolated containers is, of course, Hyper-V. Microsoft has been using it to improve the isolation of Docker containers on Windows, using a thin OS layer running on top of Hyper-V to host a Docker container image, keeping performance while ensuring that containers remain fully isolated. While each container is technically a virtual machine with its own kernel, they're optimised for running container images. Using virtualization in this way adds a layer of hardware isolation between container images, making it harder for information to leak between them and giving you a platform that can host images from multiple tenants.
It's easy enough to make and run a Hyper-V container. All you need to do is set the isolation parameter in the Docker command line to 'hyperv', which will launch the container using virtualisation to protect it. The default on desktop PCs is to use Hyper-V; for servers, it's to use process isolation. As a result, you may prefer to force Hyper-V containers on your Windows Server container hosts.
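For instance, the same image can be launched under either mode simply by changing that flag (image tag illustrative):

    # Hyper-V isolation: the container gets its own thin VM and kernel
    docker run --isolation=hyperv mcr.microsoft.com/windows/servercore:ltsc2019 cmd

    # Process isolation: the container shares the host kernel
    docker run --isolation=process mcr.microsoft.com/windows/servercore:ltsc2019 cmd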
Microsoft has been working hard to reduce the size of the Hyper-V server image that's used for Windows containers. It's gone down from nearly 5GB with Windows Server 1809 and 1903, to half the size at 2.46GB in the upcoming 2004 release. And that's Windows Server Core, not Nano! Building on Windows Server Core makes sense as it has a larger API surface, reducing the risk of application incompatibility.
With two use cases for its containers, and five different container models, it would seem that Microsoft's container strategy is ripe for confusion. But that's not the case. Windows' own application isolation technologies are managed automatically by the installer, so all you need to consider is whether your server applications run using process isolation or in Hyper-V. And that's a decision best made by whether you're running your applications on your own servers in your own data centre, or in the public cloud.
See more here:
Windows 10: Containers are the future, and here's what you need to know - TechRepublic
Explain MEC like I’m a 7-year-old. – Verizon Communications
Even for a tech-savvy millennial like me, some concepts can be confusing. I find the easiest way to learn -- or teach -- is to break things down into simple terms, as if talking to a child.
So who better to teach me about MEC than a 7-year-old?
Ava Carnes, daughter of V Teamer Lauren Schulz, described MEC in terms I could understand: Ice cream.
If you live in the suburbs and want to go for a treat, you wouldn't drive all the way to the big city; you'd go to your local shop.
In the same way, imagine you live in a smart home and you give a command to your lamp. Currently, that data needs to be transmitted to cloud servers that may be hundreds of miles away, analyzed and then a command is sent back to the lamp to switch on.
But with Mobile Edge Computing (MEC), the command only has to travel to a mini cloud that exists in your own neighborhood. By performing processing tasks closer to you, the end user, it improves the performance of applications and our network. The result? An Internet of Things (IoT) that will be smarter, faster, more responsive and more efficient than ever before.
To learn more about the power of MEC, watch the VTalk with Kyle Malady, Srini Kalapala and Valerie Feldmann and another on MEC strategy and partnerships.
Thanks, Ava! You made us all a little smarter. Now, can you help me with my taxes?
Tell us what you think of Up To Speed.
See original here:
Explain MEC like I'm a 7-year-old. - Verizon Communications
Arm-based AI Inference Edge Server Takes on GPU Price/Performance – EnterpriseAI
Edge computing specialist SolidRun and ASIC solutions company Gyrfalcon Technology this week announced an Arm-based AI inference edge server that the companies say outperforms GPU performance for less cost and power consumption.
The server, called the Janux GS31, can be configured with up to 128 Gyrfalcon Lightspeeur SPR2803S neural accelerator chips, delivering a maximum of 24 TOPS per watt, outperforming SoC- and GPU-based systems by orders of magnitude, while using a fraction of the energy required by systems with equivalent computational power, the companies said in a joint announcement. The hardware supports low latency decoding and video analytics of up to 128 channels of 1080p/60Hz video designed for such edge AI use cases as monitoring smart cities and infrastructure, intelligent enterprise/industrial video surveillance applications and tagging photos and videos for text-based searching.
"AI is rapidly moving to the edge of the network to address the performance and security needs of many applications, said Jim McGregor, founder and principal analyst, Tirias Research. As a result, new networks will drive increasing demand for processing performance and efficiency. The SolidRun platform, leveraging the GTI AI acceleration technology, will provide a powerful and efficient way to build a new intelligent network bridging the gap between devices and the cloud."
Milpitas, CA-based Gyrfalcon bills itself as a developer of high-performance AI accelerators that use low-power, small-sized chips. SolidRun is an Israeli Arm and x86 computing and network technology company focused on AI edge deployment and 5G.
"Powerful, new AI models are being brought to market every minute, and demand for AI inference solutions to deploy these AI models is growing massively," said Dr. Atai Ziv, CEO at SolidRun. "While GPU-based inference servers have seen significant traction for cloud-based applications, there is a growing need for edge-optimized solutions that offer powerful AI inference with less latency than cloud-based solutions. Working with Gyrfalcon and utilizing their industry-proven ASICs has allowed us to create a powerful, cost-effective solution for deploying AI at the Edge that offers seamless scalability."
Go here to see the original:
Arm-based AI Inference Edge Server Takes on GPU Price/Performance - EnterpriseAI
It's time for smart home devices to have local failover options during cloud outages – Stacey on IoT
Earlier this week, a Nest outage lasted for 17 hours. Nest cameras didn't capture any video footage during that time. In most cases, this downtime was likely a minor inconvenience, if it was noticed at all. But for anyone who experienced some type of incident during the 17-hour window, when that video footage would have been valuable to have, it's a complete fail.
Yep. And my elderly father fell. Only the two times I needed it history was deleted.
Proud Knights Fan (@3600dollarsgone) February 25, 2020
I'm using the word fail for a specific reason. As smart home systems mature and gain more mainstream acceptance, the failure of a cloud-based device or service becomes less acceptable. One possible solution is to start engineering these devices with some type of local failover, even if it's limited in function.
Google says that the Nest outage was caused by a server update that didn't go as planned. Having managed servers in Fortune 100 companies, I get that. And I'm not specifically calling Nest out here. Amazon Echo devices have occasionally experienced similar outages, as have Ring products, which are part of the Amazon family.
Just in the past 30 days, Ring device owners have experienced some service disruptions, as noted by Ring's outage history page.
And last week, some owners of the PetNet smart feeding system saw their pets go hungry due to a service disruption, with one person saying "My cat starved for over a week" in a Twitter response to PetNet support.
The point here is that people are fully reliant on these types of smart home products to work. Not most of the time, but all of the time. When a supporting cloud service (often paid for in subscription fees) does go down, it can have very negative implications.
So whats the answer?
We need smart home companies to deliver on the promises of local controls for existing products, and we need new products designed to smartly failover in some local capacity.
Last year, Google and Amazon both announced more localized services and smarts at the edge. Yet we haven't seen much progress on this front. If new localized controls and smarts have found their way to our smart homes, I haven't seen either company make a big news splash about it.
When it comes to new products, most of the ones I've seen are still focused on the subscription revenue model, which generally means some sort of cloud service for integrations or video storage. I don't bemoan companies making money from services, but a local failover of some kind would improve the customer experience and, therefore, could sell more products and services in the long run.
Take the example of cloud-connected cameras and video doorbells, both a hot category right now. Having them solely dependent on a web connection to some servers is a recipe for disaster. Sure, they need the cloud in many cases for person recognition, data storage, or other services, but they're IP-based devices on a home network, too. Before the smart homes of today, we had IP-based cameras that we could view in real-time from a phone.
Why can't today's smart devices fail over to some localized viewing mode and rudimentary notification system during an outage? And if you're going to add that, why not a limited amount of on-device storage, or a storage expansion slot for times like that?
Yes, there's cost involved in adding such storage or slots for a memory card. But as Wyze has proved with its $20 WyzeCam, it can't be that much money. I have my own 32GB memory card in my WyzeCam for this very purpose.
Something's got to give here, because the smart home is increasingly being relied upon by millions to monitor, react to, and inform us of changes in the roof over our heads. Server outages are a question of when, not if, even for the best of companies that have large-scale redundancy. It's time for smart device makers to consider building in local failover options for when the inevitable system outage occurs.
Visit link:
It's time for smart home devices to have local failover options during cloud outages - Stacey on IoT
Cloud demands new way of thinking about IT – Gadget
Overall shipments of personal computing devices (PCD) will decline 9% in 2020, reaching 374.2-million by the end of this year, as a result of the impact of coronavirus, or COVID-19, on manufacturing, logistics and sales.
According to new projections from the Worldwide Quarterly Personal Computing Device Tracker, International Data Corporation (IDC) has lowered its forecast for PCDs, inclusive of desktops, notebooks, workstations, and tablets.
The long-term forecast remains slightly positive, with global shipments forecast to grow to 377.2-million in 2024, a five-year compound annual growth rate (CAGR) of 0.2%. However, this is based on an IDC assumption that the spread of the virus will recede in 2020. Since the figure represents only marginal growth, the ongoing impact of the virus could quickly turn long-term forecasts negative. IDC did not provide alternative scenarios should this occur.
The decline in 2020 is attributed to two significant factors: the Windows 7 to Windows 10 transition, which creates tougher year-over-year growth comparisons from here on out, and, more recently, the spread of COVID-19, which is hampering supply and reducing demand. As a result, IDC forecasts a decline of 8.2% in shipments during the first quarter of 2020 (1Q20), followed by a decline of 12.7% in 2Q20, as the existing inventory of components and finished goods from the first quarter will have been depleted by the second quarter. In the second half of the year, growth rates are expected to improve, though the market will remain in decline.
"We have already forgone nearly a month of production given the two-week extension to the Lunar New Year break, and we expect the road to recovery for China's supply chain to be long, with a slow trickle of labour back to factories in impacted provinces until May, when the weather improves," said Linn Huang, research vice president, Devices & Displays. "Many critical components such as panels, touch sensors, and printed circuit boards come out of these impacted regions, which will cause a supply crunch heading into Q2."
"There's no doubt that 2020 will remain challenged, as manufacturing levels are at an all-time low and even the products that are ready to ship face issues with logistics," added Jitesh Ubrani, research manager for IDC's Worldwide Mobile Device Trackers. "Lost wages associated with factory shutdowns and the overall reduction in quality of life will further the decline in the second half of the year, as demand will be negatively impacted."
Assuming the spread of the virus subsides in 2020, IDC anticipates minor growth in 2021 as the market returns to normal with growth stemming from modern form factors such as thin and light notebooks, detachable tablets, and convertible laptops. Many commercial organizations are expected to refresh their devices and move towards these modern form factors in an effort to attract and retain a younger workforce. Meanwhile, consumer demand in gaming, as well as the rise in cellular-enabled PCs and tablets, will also help provide a marginal uplift.
Worldwide Topline Personal Computing Device Forecast Changes, Year-Over-Year Growth %, 2020-2021 (Annual)
Worldwide Topline Personal Computing Device Forecast Changes, Year-Over-Year Growth %, 2020 (Quarterly)
Source: IDC Worldwide Quarterly Personal Computing Device Tracker, February 19, 2020
Read the rest here:
Cloud demands new way of thinking about IT - Gadget