Category Archives: Cloud Servers

Calculating the true cost of cloud – ITWeb

Darren Bak, Synthesis Technologies.

There is no question that cloud computing is the way of the future, and that having access to the cloud can benefit organisations of every type and size. But how many businesses have really considered the true costs associated with doing business in the cloud? With uncertain economic times and a slew of different cloud and on-prem options available, it has become critical for organisations embarking on a cloud journey to thoroughly examine the total cost of ownership of cloud versus their existing infrastructure before making a move.

The public cloud isn't the silver bullet that everyone is saying it is, says Joshua Grunewald, cloud hosting manager at Saicom. It has a purpose for specific systems or applications, which are very specific to any particular business. It may be a 95% fit for one business, a 5% fit for another, or a 50/50 split. IT leaders need to know what they need and how to make use of what's available.

When it comes to auditing your current costs, it is about making sure that you have factored absolutely everything in - from infrastructure to shared costs and day-to-day operational costs, he adds. A lot of people forget the support contracts and agreements that have to be renewed on an annual basis. This is generally a capex cost and will have to be depreciated over a certain period of time, which means there are opportunity costs being lost. A good checklist would include hardware, support and maintenance agreements, software and licensing, and staff resources to manage the infrastructure (including shared resource time from other teams). It would also include overheads such as power, cooling, security, fire suppression and environment control, and any associated support contracts.
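To make that checklist concrete, a rough annual total can be tallied as a simple sum of those cost categories. The sketch below is illustrative only; the line items and figures are hypothetical placeholders, not benchmarks for any real environment.

```python
# Rough annual cost tally for on-premises infrastructure (illustrative figures only).
annual_costs = {
    "hardware_depreciation": 120_000,   # capex spread over, e.g., a four-year cycle
    "support_and_maintenance": 35_000,  # renewal contracts and agreements
    "software_and_licensing": 60_000,
    "staff_time": 90_000,               # including shared resource time from other teams
    "power_cooling_facilities": 25_000, # power, cooling, security, fire suppression
}

total = sum(annual_costs.values())
for item, cost in sorted(annual_costs.items(), key=lambda kv: -kv[1]):
    print(f"{item:28s} {cost:>10,.0f}  ({cost / total:5.1%})")
print(f"{'TOTAL':28s} {total:>10,.0f}")
```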

The easiest way to conduct an audit is to use a software tool, and there are a variety of tools on the market to choose from, says Barry Kemp, head of IaaS at Vox. A key aspect is to have all the right information from the business to input into the tool - for example, the costs of electricity consumption, HR in terms of the IT team, rent, the hardware currently being used by the business, and where it is in its depreciation cycle - to be able to build a holistic picture of the business's infrastructure cost.

Laggard organisations are now forced to go to cloud as the threat of disruption becomes a reality.

Darren Bak

Darren Bak, head solutionist at Synthesis Technologies, says most organisations do not have an up-to-date inventory management data store, and therefore the best approach to data collection is an automated one. TSO Logic (now part of AWS) can automatically gather the infrastructure information and generate a business case for you. However, gathering infrastructure costs alone does not give you a full picture. Application, people, third-party and migration costs all need to be gathered in order to produce a well-rounded cloud business case.

It is surprising how many businesses underestimate their costs for owned infrastructure, says Andrew Cruise, MD of Routed. The main reasons are, firstly, their inability to apportion shared costs correctly and, secondly, their inability to value risk and business agility. It is trivial to cost infrastructure initially by looking at the capital expense of purchasing or financing the hardware, and often the software. These are usually individual line items in an income statement or ledgers on a balance sheet, easy to audit and ascertain.

But digging deeper, these costs are augmented by the costs of the server room estate, power, cooling and security, and operating expenses such as human resources. Often these items are bulk line items in a business's income statement, such as power, cooling and rent. For human capital, the resources invariably have multiple responsibilities, including infrastructure administration and management, and it's difficult to separate out the infrastructure costs. To audit these costs, businesses need to identify all contributing costs to the infrastructure and separate out the shared costs in a reasonable manner.

Alternatively, it's possible that some costs are simply not borne by the business at all, which increases operational risk - such as a lack of security, or understaffing. This too is difficult to value.

The public cloud isn't the silver bullet that everyone is saying it is. It has a purpose for specific systems or applications, which are very specific to any particular business.

Grunewald

Once you have your infrastructure cost it is fairly easy to determine the total cost of ownership, which will include HR costs, says Kemp. Free software tools such as an Azure tool can be used, and there are some paid ones that do a deeper dive into what the company's current hardware looks like. Once again, it is important to have the right data to put into the model to receive an accurate set of outputs.

For Grunewald it isn't so simple. If you browse the internet for five minutes around TCO, you will come across multiple articles talking about calculating total cost of ownership in many different ways. Is there really one correct or all-encompassing way to calculate this? I don't believe there is. What will work for one business will possibly not work for another. Calculating the total cost of ownership is really about assessing the long-term value of IT investments within an organisation, and infrastructure, whether on-premises or in the cloud, is no different.

An effective cloud business case is made up of migration costs, TCO, cost optimisations and value benefits, says Bak. Analysing the intangible benefits will provide a more rounded business case to justify a cloud migration and create urgency to get ready for cloud. The majority of cloud business cases focus on cost savings; however, this is not the most compelling benefit. Business agility, productivity and resilience are far more so, but because they are intangible and therefore complex to calculate, they are often simply ignored. Business strategy and objectives should drive any cloud strategy. Cloud adoption tends to be far more successful when executive sponsors see cloud as an enabler of business agility, productivity and resilience, not just cost savings.

The majority of cloud business cases focus on cost savings; however, this is not the most compelling benefit.

Darren Bak

Speaking of the hyperscale cloud providers, Cruise says there are plenty of online calculators available to assist with assessing the costs of running infrastructure in their respective clouds. However, traditional IT workloads are often not suited to these hyperscale clouds as they demand high resource usage, such as storage IOPS and network data transfers. It's non-trivial to measure these on-premises and then input them into calculators to determine the overall cost. Migration costs need to be taken into account and tend to be grossly underestimated. Many businesses make a sweeping decision to migrate to hyperscale cloud without assessing how much time it will take to convert their workloads into the native virtualisation format, let alone think about rearchitecting the applications in a cloud-native fashion.

Then there's the question of shadow IT, and how to get a handle on its true cost. According to Cruise, shadow or stealth IT usually refers to departments utilising hyperscale cloud providers (AWS, Azure, GCP) without central management. Costing this is relatively simple: for these services there must be invoices, which need to be discovered and aggregated. Of course, it's the discovery that is tricky - it's not called stealth IT for nothing. However, if a cloud migration is formally undertaken, the need for shadow IT recedes and all of these costs should be folded into the whole project.

Cost savings forecast in a business case are not always realised, mainly due to development teams not having a culture of cost transparency and accountability, adds Bak. The shift when migrating to cloud means application teams are now responsible for their spend and have the control and ability to optimise costs. This culture shift is not a priority in most organisations, and that leads to bill shock at the end of the month. The unintended consequence is that developers have free rein to use whatever cloud service they want without consideration of cost and, most importantly, of how the design of the solution impacts cost.

In a world where services are quite easy to consume, costing shadow IT can be very difficult unless one has the right processes in place. There are a few ways that this can be controlled, all of which should be employed to successfully get a handle on shadow IT, comments Grunewald. IT should be sitting with each department of the business and discussing their needs - understanding how to deliver services to each part of the business, and successfully doing this, will go a long way towards controlling shadow IT. Educating the business on how resources or services are consumed is also key to empowering the business, which will deter it from trying to find alternatives and introducing unknowns and risks into the IT environment.

Quantifying is tricky, but not impossible. Less understood is the cost of "stickiness", or being captive in any one particular cloud.

Andrew Cruise

Kemp believes shadow IT is another problem that can be solved with software. Most of the cloud vendors have some sort of cost management ability built in that can show businesses if they have unused resources in the cloud, or if they have spun up a virtual machine (VM) that is sitting idle. Robust governance policies are very important. These policies provide guidelines in terms of who has permission to spin up VMs or turn on services, and combined with an approval process they ensure the business has a handle on its cloud costs. In the days of capex, the IT team would have to motivate for a hardware expense, which meant they had to analyse whether they really needed that particular piece of kit. With cloud, because it is so easy, IT teams may spin up a server and give it very little thought, which leads to bill shock at the end of the month. The IT director is aware of the extra server, but doesn't necessarily conduct the analysis to find out if it is the right option and whether there is a better, more cost-effective way.
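As one illustration of the kind of check such cost-management tooling performs, the sketch below uses the AWS SDK for Python (boto3) and CloudWatch to flag running EC2 instances whose average CPU over the past week is very low. The region and the 5% threshold are assumptions for the example; production tools apply far richer heuristics (memory, disk, network, reserved capacity and so on).

```python
# Flag running EC2 instances with very low average CPU over the past 7 days -
# a crude stand-in for the idle-resource reports built into cloud cost tools.
from datetime import datetime, timedelta, timezone
import boto3

REGION = "eu-west-1"          # assumption: adjust to your region
IDLE_CPU_THRESHOLD = 5.0      # assumption: average CPU (%) considered "idle"

ec2 = boto3.client("ec2", region_name=REGION)
cloudwatch = boto3.client("cloudwatch", region_name=REGION)
now = datetime.now(timezone.utc)

reservations = ec2.describe_instances(
    Filters=[{"Name": "instance-state-name", "Values": ["running"]}]
)["Reservations"]

for reservation in reservations:
    for instance in reservation["Instances"]:
        instance_id = instance["InstanceId"]
        stats = cloudwatch.get_metric_statistics(
            Namespace="AWS/EC2",
            MetricName="CPUUtilization",
            Dimensions=[{"Name": "InstanceId", "Value": instance_id}],
            StartTime=now - timedelta(days=7),
            EndTime=now,
            Period=3600,
            Statistics=["Average"],
        )
        points = stats["Datapoints"]
        if not points:
            continue
        avg_cpu = sum(p["Average"] for p in points) / len(points)
        if avg_cpu < IDLE_CPU_THRESHOLD:
            print(f"{instance_id}: avg CPU {avg_cpu:.1f}% over 7 days - candidate for review")
```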

Over and above shadow IT, there are other hidden costs, which are hard to uncover and quantify, says Kemp. It is usually the small things that add up to big costs. An example is storage - in Azure it is charged according to the amount of storage a business takes, but it is also charged according to the number of transactions and it is very difficult to work out how many transactions there will be beforehand. Cloud providers also charge for data coming out of their cloud, which makes it difficult to budget as it depends on the application being used. Some applications may only push data into the cloud once, while others may need to send it out multiple times and then the business has to budget for the extra bandwidth cost.
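A back-of-the-envelope model helps show how these per-transaction and egress charges accumulate alongside raw capacity. The sketch below is purely illustrative; the unit prices are hypothetical placeholders, and real pricing varies by provider, region, tier and service.

```python
# Illustrative monthly estimate for storage, transaction and egress charges.
# All unit prices are hypothetical placeholders, not actual provider pricing.
def monthly_storage_bill(stored_gb, transactions, egress_gb,
                         price_per_gb=0.02,          # storage, per GB-month
                         price_per_10k_tx=0.005,     # per 10,000 transactions
                         price_per_gb_egress=0.08):  # data transfer out, per GB
    storage = stored_gb * price_per_gb
    tx = (transactions / 10_000) * price_per_10k_tx
    egress = egress_gb * price_per_gb_egress
    return storage, tx, egress, storage + tx + egress

storage, tx, egress, total = monthly_storage_bill(
    stored_gb=5_000, transactions=40_000_000, egress_gb=800)
print(f"storage {storage:.2f}, transactions {tx:.2f}, egress {egress:.2f}, total {total:.2f}")
```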

Month end bill shock from the hyperscale cloud providers has been well documented so many of the hidden costs are now known, adds Cruise. However, quantifying is tricky, but not impossible. Less understood is the cost of "stickiness" - being captive in any one particular cloud. The hyperscale cloud providers differentiate by offering specialist functions and services, which businesses begin to rely on and can hold them hostage in the long run. Businesses should bear in mind that there are alternatives when it comes to infrastructure.

If you take like-for-like and move what you have on-prem into the cloud, it is generally going to be more expensive, adds Kemp. An assessment of the on-prem infrastructure is crucial so that the servers can be resized for the cloud. Most of the assessment tools will recommend a smaller-sized VM to move into the cloud, which means the business ends up saving money. With cloud, the business can budget for what it needs tomorrow, and if it needs more resources later in the week it can just increase its requirements. The cloud requires a different mindset: although you budget for a year in advance, you definitely don't have to budget three years in advance on your spending. Another aspect to consider is managing costs across the different clouds. With more multinational datacentres set to arrive this year, cost management across different clouds can be a challenge.

For Bak, the most challenging aspect of cloud adoption in terms of hidden costs is, unsurprisingly, non-technical in nature. Upskilling your current IT teams on the new technology, creating new job roles and families, driving the change in culture and communicating this across the organisation are often underestimated and realised too late. From a technical perspective, your networking costs can become expensive; however, this can be mitigated by having experienced solution architects and cloud engineers with a deep and broad understanding of all the cloud services, how they work and, in particular, the cost metrics associated with them. Another hidden cost is the guardrails enabled to secure your AWS account, like AWS Config for asset management, Amazon GuardDuty for threat detection, KMS for managing encryption keys, AWS CloudTrail for monitoring and alerting, and VPC Flow Logs for monitoring network traffic. These costs are often overlooked when designing applications in the cloud and are therefore not contained in the cost estimates.

So do the benefits of cloud outweigh those of keeping everything on-prem? Definitely, says Kemp. Simply put, there is less that the business has to worry about, such as load shedding, which is a massive challenge for companies, as well as connectivity. Headaches such as these go out the window. However, companies have to guard against thinking that they will save on their HR costs when they take their servers off-site. Moving to the cloud means that the business still requires people to run those services, even though they are delivering more high-value tasks.

Not always, says Grunewald. A company needs to properly consider what its needs are and deploy its environments accordingly. This is why, with a greater knowledge of what is out there, many companies are employing hybrid or multi-cloud strategies. Companies can deploy into the hyperscalers only what needs to be in the public cloud, keeping everything else on-prem or in a private cloud.

Most cultural changes, such as Agile, DevOps and similar, are impractical and realistically not achievable without cloud, says Bak. Similarly, cloud is the enabler for initiatives such as digital transformation, AI and big data. Without cloud, all of these initiatives are slow, expensive and hinder innovative thinking. Time to market in highly competitive sectors is the difference between customer attrition and true increases in market capitalisation. And while the cost of implementing cloud has meant that CIOs prefer to stick with the status quo, laggard organisations are now forced to go to cloud as the threat of disruption becomes a reality, especially in the fintech space, where start-ups are far more agile and productive.

At the beginning of your journey, you need to answer the core reasons for going to cloud, concludes Bak. The easiest way to stimulate conversation with technical executives is by asking this simple question: what would happen if you did nothing? For financial areas, use the term opportunity costs; for marketing, sales and business areas, call them intangible benefits. These mutually agreed reasons should drive all decisions about technology, process, tools and priority.


Global Cloud Server Market Analysis 2020-2025: by Key Players with Countries, Type, Application and Forecast Till 2025 – Germany English News

This report studies the global Cloud Server market size, industry status and forecast, competition landscape and growth opportunity. This research report categorizes the global Cloud Server market by companies, region, type and end-use industry.

A cloud server is a logical server that is built, hosted and delivered through a cloud computing platform over the Internet. In 2017, the global Cloud Server market size was xx million US$ and it is expected to reach xx million US$ by the end of 2025, with a CAGR of xx% during 2018-2025.

Access the PDF sample of the report @ https://www.orbisresearch.com/contacts/request-sample/2188240

This report focuses on the global top players covered: IBM, HP, Dell, Oracle, Lenovo, Sugon, Inspur, Cisco, NTT, Softlayer, Rackspace, Microsoft, Huawei.

Market segment by Regions/Countries, this report covers: United States, Europe, China, Japan, Southeast Asia, India.

Make an enquiry of this report @ https://www.orbisresearch.com/contacts/enquiry-before-buying/2188240

Market segment by Type, the product can be split into: Logical Type, Physical Type.

Market segment by Application, split into: Education, Financial, Business, Entertainment, Others.

The study objectives of this report are:
To study and forecast the market size of Cloud Server in the global market.
To analyze the global key players, SWOT analysis, value and global market share for top players.
To define, describe and forecast the market by type, end use and region.
To analyze and compare the market status and forecast between China and major regions, namely, United States, Europe, China, Japan, Southeast Asia, India and Rest of World.
To analyze the global key regions' market potential and advantage, opportunity and challenge, restraints and risks.
To identify significant trends and factors driving or inhibiting the market growth.
To analyze the opportunities in the market for stakeholders by identifying the high-growth segments.
To strategically analyze each submarket with respect to individual growth trend and their contribution to the market.
To analyze competitive developments such as expansions, agreements, new product launches, and acquisitions in the market.
To strategically profile the key players and comprehensively analyze their growth strategies.

Browse the complete report @ https://www.orbisresearch.com/reports/index/global-cloud-server-market-size-status-and-forecast-2025

In this study, the years considered to estimate the market size of Cloud Server are as follows:
History Year: 2013-2017
Base Year: 2017
Estimated Year: 2018
Forecast Year: 2018 to 2025
For the data information by region, company, type and application, 2017 is considered as the base year. Whenever data information was unavailable for the base year, the prior year has been considered.

Key Stakeholders:
Cloud Server Manufacturers
Cloud Server Distributors/Traders/Wholesalers
Cloud Server Subcomponent Manufacturers
Industry Associations
Downstream Vendors

Available Customizations
With the given market data, QYResearch offers customizations according to the company's specific needs. The following customization options are available for the report:
Regional and country-level analysis of the Cloud Server market, by end use.
Detailed analysis and profiles of additional market players.

Table of Contents

Global Cloud Server Market Size, Status and Forecast 2025

Chapter One: Industry Overview of Cloud Server

1.1 Cloud Server Market Overview

1.1.1 Cloud Server Product Scope

1.1.2 Market Status and Outlook

1.2 Global Cloud Server Market Size and Analysis by Regions (2013-2018)

1.2.1 United States

1.2.2 Europe

1.2.3 China

1.2.4 Japan

1.2.5 Southeast Asia

1.2.6 India

1.3 Cloud Server Market by Type

1.3.1 Logical Type

1.3.2 Physical Type

1.4 Cloud Server Market by End Users/Application

1.4.1 Education

1.4.2 Financial

1.4.3 Business

1.4.4 Entertainment

1.4.5 Others

Chapter Two: Global Cloud Server Competition Analysis by Players

2.1 Cloud Server Market Size (Value) by Players (2013-2018)

2.2 Competitive Status and Trend

2.2.1 Market Concentration Rate

2.2.2 Product/Service Differences

2.2.3 New Entrants

2.2.4 The Technology Trends in Future

Chapter Three: Company (Top Players) Profiles

3.1 IBM

3.1.1 Company Profile

3.1.2 Main Business/Business Overview

3.1.3 Products, Services and Solutions

3.1.4 Cloud Server Revenue (Million USD) (2013-2018)

3.2 HP

3.2.1 Company Profile

3.2.2 Main Business/Business Overview

Continued.

Orbis Research (orbisresearch.com) is a single point of aid for all your market research requirements. We have a vast database of reports from the leading publishers and authors across the globe. We specialize in delivering customized reports as per the requirements of our clients. We have complete information about our publishers and hence are sure about the accuracy of the industries and verticals of their specialization. This helps our clients to map their needs, and we produce the required market research study for them.

Hector Costello
Senior Manager - Client Engagements
4144N Central Expressway, Suite 600, Dallas, Texas 75204, U.S.A.
Phone No.: +1 (972)-362-8199; +91 895 659 5155


China-backed hacking of the world’s servers uncovered – Radio Canada International (en)

BlackBerry researchers say they've uncovered work by five Chinese-affiliated hacker groups that have accessed vast amounts of data from world computer systems, possibly undetected for a decade (Shutterstock)

BlackBerry Ltd. says it has discovered what it claims is Chinese-backed hacking of the world's servers. Originally known as Research In Motion and based in Waterloo, Ontario, the company says its researchers have discovered how hackers have managed to infiltrate many of the world's servers unnoticed for up to a decade.

Likely with an intended pejorative double entendre, the 44-page report by BlackBerry is called Decade of the RATs (PDF).

The title refers (also) to a popular remote administration tool (NetWire RAT) that BlackBerry found to have striking code similarities to a remote access Android trojan (RAT) that was discovered two years before the business tool came on to the commercial market, raising questions about the origins of each, says the report.

The report notes that "While Chinese IP (intellectual property) theft is now a story old enough for the history books, there continue to be new chapters to add with new lessons to learn for security teams and the organizations they serve."

The report details the activities of five APT (advanced persistent threat) groups, noting they avoided detection because cyber security was focused elsewhere. (BlackBerry)

The company says some five separate groups with ties to the Chinese government have been extracting vast quantities of information through Linux operating systems, as well as Windows and Android systems. Linux is used to run the New York, London and Tokyo stock exchanges, and major tech giants like Amazon, Yahoo and Google also rely on it; indeed, it dominates the back-end infrastructure of almost all advanced supercomputers around the world, including computers used by many U.S. government agencies and the Department of Defense.


Although the five groups apparently have different objectives and targets, the report says they share tools and tactics and so appear to be coordinated. One of the successful methods used to escape cyber security is the theft of adware certificates, which prove a product's authenticity and are considered low security threats; the groups' disguised spyware can then communicate through innocuous domain names on cloud servers.

BlackBerry says the hackers have been able to gather vast amounts of data and intellectual property, potentially worth billions.



How to build your own firewall with pfSense – IT PRO

Having migrated your IT infrastructure and services to the cloud, you need a decent enterprise firewall to handle your internet connection and any site-to-site or site-to-cloud VPN requirements.

The licensing costs for devices from Cisco, Juniper, Sonicwall et al are often extremely high, however. Many admins live in fear of the yearly license renewal invoice turning up, knowing that it'll take a significant chunk out of their yearly IT budget, especially if some bright spark in senior management suddenly decides that the firm needs to roll out a costly extra feature.


pfSense is an open source enterprise firewall based on FreeBSD, with comparable features to many of the most expensive enterprise firewall devices and a huge range of packages available to extend its capabilities. As an open source solution, the software is free, and all the features are available without any commercial licensing requirements. Support for pfSense is provided by Netgate, which also manufactures network appliances that use the operating system.

This tutorial will take you through the installation and basic setup of a pfSense device. We will be using the scenario of a business with no on-premises servers, using cloud services or hosting for their IT requirements.

The minimum requirements to run pfSense are an x86 or x64 compatible device with 1GB or more of memory, two or more network interfaces and at least 4GB of storage (this can be a hard disk or a flash device such as an SD card).


How fast a processor you need, and how much memory, will depend on the number of rules, VPNs, and so on that you will have on your device, and the amount of data flowing through it. VPN performance, in particular, is dependent on how much processor power your endpoint has. Depending on the size and complexity of your local network layout, you may want a device with more than two network interfaces.

Purpose-built pfSense devices are available from many manufacturers, including the makers of pfSense themselves. However, you can also set it up on a virtual machine running on your choice of hypervisor, or build your own using a standard desktop PC or server.

Whatever hardware you're using, the setup process is the same. Hook up a monitor and keyboard to your device, or use the virtual console if you are installing on a virtual machine. Do not connect any of the network interfaces to a network yet: we'll get to that later in the installation and setup process.

Download the installer from the pfSense website, taking care to get the version that matches your environment and preferred installation method. Burn the CD or write the image to a USB drive as required.


Boot your device from the installation media you created and wait until it has completed booting, and displays the software license screen. Go through and accept the license terms and move on to the installation. Select Install from the menu, choose the correct keyboard layout for your region, then select continue.

From the next menu, select automatic partitioning and hit enter to continue.

pfSense will partition the disk and move straight on to the installation. Now's a good time to make some coffee whilst you wait for the installation to complete. When the installation has finished, say no to opening a shell to edit the system. Finally, remove the installation media and hit enter on the next screen to reboot into your new pfSense system.

After the system has rebooted, you'll be prompted to set up basic networking. Answer no when asked if VLANs should be set up now. Next, move on to the network interface setup. Hit 'a' to start auto-detection of the WAN interface and follow the instructions on screen, connecting the cable when required, in order to correctly identify the interface. Repeat the process for the LAN interface. Don't forget to physically label the interfaces on the device as well.

Once you have both the LAN and WAN interfaces identified correctly, hit y to continue. pfSense will carry on booting, then display the status of the network interfaces and present you with the console admin menu.

The LAN interface defaults to an IPv4 address of 192.168.1.1/24. If you need to change this to match your existing network, select option 2 (Set interface IP address) from the menu, then option 2 again to edit the LAN interface. Enter the desired LAN IPv4 address and subnet mask for the device when prompted. Don't enable IPv6 or DHCP right now; we'll do that later from the web admin interface.

Configure a computer with a static IPv4 address in the same range as the IPv4 address you assigned to the LAN interface on the firewall. You can connect this computer directly to the LAN port on the firewall (using a crossover cable if you're working with older hardware that doesn't support Auto-MDIX) or connect via a switch.


Using your web browser, go to the LAN IPv4 address that we configured in the previous step. Log in using the username admin and the default password pfsense. You will be presented with the initial setup wizard. Click on next, then next again at the following screen to begin the setup of your new firewall.

Enter the name you want to give your firewall, and the domain associated with your internal office network. We're going to be boring and use firewall for the name, and local for the domain, but you should probably come up with something more distinctive.

Click on next to move on to step 3 of the wizard. The time server can be left on the default, or set to a different one if you have a preferred NTP server for devices on your network. Set your time zone, and then click next to move on to step 4.

Now you need to set up your WAN interface. We're using DHCP, so we can leave everything on the defaults, but if you are connecting this device to an ADSL line via a DSL modem in bridge mode, you should select PPPoE here and enter the details provided by your ISP in the PPPoE section of this page. Once you've completed the WAN configuration, scroll to the bottom of the page and click next to move on to step 5, where we can review the LAN IPv4 address we configured earlier, and change it if necessary. Click next to keep the address the same and move on to step 6.


Set a new admin password, not forgetting to make a note of it somewhere, and then click next to move on to step 7.

Click on reload to apply these changes to the device. If you changed the LAN IPv4 address in step 5, you will need to enter that address in your browser after this to access the device. Wait for the reload to complete, then click Finish on the last screen to exit the wizard and go to the device dashboard. Read and accept the license for the software again when prompted, then click close to clear the Thank you popup.

If your ISP offers IPv6 (as almost all do now) this is the time to set up the WAN interface IPv6 options to match those provided by your ISP. Select the Interfaces pull-down menu from the top menu bar, and select the WAN interface.

You will also need to set up IPv6 on your LAN interface. pfSense supports a range of different IPv6 configurations, from static IPv6 and DHCPv6 to stateless address autoconfiguration (SLAAC), 6to4 tunnelling and upstream interface tracking. Exactly which one you need will depend on the IPv6 provision from your ISP, who should provide you with adequate setup information to correctly configure your connection.

From the menu bar across the top of the pfSense admin page, open the Services pull-down menu and select DHCP server. Tick the Enable box to turn on the DHCP server for your LAN interface, then enter the range of IPv4 addresses that will be allocated to devices on your LAN. We'll set up a range of 200 addresses in this instance. Leave the DNS and WINS server options unset, as the firewall will use those allocated by the ISP on the WAN interface.
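If you want to double-check that a candidate DHCP pool actually fits inside the LAN subnet before saving, the small sketch below uses Python's standard ipaddress module. The 192.168.1.0/24 network and the 200-address pool starting at .50 are example values matching the defaults discussed above, not values pfSense requires.

```python
# Sanity-check a proposed DHCP range against the LAN subnet (example values).
import ipaddress

lan = ipaddress.ip_network("192.168.1.0/24")          # default pfSense LAN subnet
range_start = ipaddress.ip_address("192.168.1.50")    # example start of the pool
range_end = range_start + 199                         # 200 addresses in total

assert range_start in lan and range_end in lan, "DHCP range falls outside the LAN subnet"
print(f"DHCP pool: {range_start} - {range_end} ({int(range_end) - int(range_start) + 1} addresses)")
print(f"Usable hosts in {lan}: {lan.num_addresses - 2}")
```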

Scroll down to the bottom of the page and hit save. The DHCP service will start automatically. The setup wizard will have automatically created a single outbound NAT rule for you, so you should be able to access the internet from devices behind your new firewall.

If you require VPN links to your cloud provider, or to other offices, you can now set them up. We will not go into detail about that here as there are too many different types of VPN to cover, and the process is largely the same with any enterprise firewall device.


Additional services such as traffic prioritization, web filtering, load balancing multiple internet connections and so on are all available, either already built in or via add-on packages. These can be installed from the package manager, found on the System menu pull-down at the left of the top menu bar.

Take some time to explore the various menus and services to familiarize yourself with your new firewall and discover its many features.


5 ways CIOs can curb costs before recession hits – CIO

Now that CIOs have completed step 1 of their ad hoc coronavirus crisis playbook (hammering out remote work and business continuity strategies), many are evaluating step 2: cost containment, ranging from right-sizing instances of cloud software and renegotiating SaaS contracts to eliminating excess applications and shuttering legacy servers and other hardware.

Such are the moves CIOs may take to get ahead of any financial fallout the coronavirus crisis may have on their business, as the pandemic rages on, crippling productivity across every sector, and CFOs get increasingly skittish about budgets.

Eighty-seven percent of finance leaders reported great concern for their business, with 80 percent expecting COVID-19 to decrease revenue or profits in 2020, according to a recent survey of 55 CFOs in the U.S. and Mexico polled by PwC. Sixty-seven percent of these CFOs say they are prepared to reduce costs to counteract the financial impact of the COVID-19 pandemic. Translation: it's knives-out time.

CFOs are just doing their jobs and focusing on driving value, says ServiceNow CIO Chris Bedi. They will start to question things that are not on the value side of the ledger. Accordingly, it's more important than ever for CIOs to show quantitative proof that their investments are providing value to the business, Bedi says.


Five Of The Biggest Challenges Faced By Cloud Analytics In 2020 – Analytics India Magazine

The Cloud Analytics market globally has been projected to grow by $39.1 billion, with a compound growth rate of 7.6%. While many companies view these services as economical, they also choose cloud analytics because it makes it easier for them to handle and process massive volumes of data from different sources. It offers real-time information while providing excellent security. So, it comes as no surprise when we read that 91% of the industry say that analytics should be moved to the cloud at a faster rate.

While many analytics companies look to adopt cloud and search for the cheapest cloud platform or a way to optimize their network costs, they still battle a lot of problems:

Security is a significant factor when it comes to cloud analytics and computing challenges. It has been the most significant concern because analytics deals with a lot of private data, so the cloud-based software that is used should have built-in flexibility that allows it to work efficiently with popular security tools. Security is such a big concern that software vendors and organizations have to be constantly on the lookout for the latest trends and new designs so that their software works with different security protocols.

Finding an application which can be efficiently implemented and functions properly is one of the most important challenges when it comes to cloud analytics. The software has to run on almost all the operating systems or hardware configurations.

Ideally, the analytics software should not be operating under a predefined processing strategy, and it should be capable enough to distinguish between different processing scenarios and dynamically choose how the analytics should be executed without moving a lot of data around. This is not something that is achieved easily, and a lot of organizations struggle to find an appropriate software solution.

IT departments in an organization generally advise business users not to install software they cannot control or support. Also, most process-critical apps need 100% uptime, which leaves no scope for shutting down a server to replace or upgrade it. Maintenance can be made easier by building redundancies into the software so that duplication of control and processing exists.

Companies nowadays struggle to control their cloud spending. In fact, controlling spending is as important to them as cloud security. The mistake of wasteful cloud spending is committed not only by beginners, but also by companies that have been using cloud services for a long time. One report estimated that in 2020 wasted cloud spending could exceed $17.6 billion. As cloud computing grows, companies are still struggling to manage their cloud spending.

Many online vendors offer services which appear cheap at first, but when it comes to using the analytics results, the costs do not seem reasonable anymore. What is presented as a low-cost solution at the front end represents accumulated costs for data storage, bandwidth, database access, numbers of users, memory allocations, row-level scoring, and many tasks along with other resources.

To assess the profitability, the entire lifecycle of the analytics has to be quantified and evaluated, so that hidden costs can be avoided. This will allow cloud users to know whether the technology they are using is cheaper than using software on a server.



The EDPB gives its view on connected car technology – but will it reach the chequered flag? – Lexology

In February, the European Data Protection Board (EDPB) published its draft guidelines on the processing of personal data in the context of connected vehicles and mobility-related applications (the Guidelines). These are draft guidelines published for public consultation.

These are the first European-wide guidelines issued on connected car technology by privacy regulators. In France, the CNIL published its "compliance package" in 2017 and previous guidance has been issued in Germany. The Article 29 Working Party had previously issued guidance on the Internet of Things, but never ventured into the connected car space.

So, while perhaps overdue, it is helpful to have the views of the EDPB and, through them, the collective views of the European supervisory authorities on important issues this technology raises.

Some of the views adopted in the Guidelines will not be surprising: an expansive view on the concept of personal data and the applicability of the ePrivacy Directive, to take two examples. These may still, however, cause operational challenges for many actors in the connected car space currently wrestling with using this innovative technology in a privacy-compliant way.

Equally, there are some areas of the Guidelines that are arguably more controversial, particularly on the interface between GDPR and the ePrivacy Directive. The Guidelines also focus heavily on an owner-use scenario which, for some data privacy issues, lends itself to more practical solutions, particularly where consent-led solutions are appropriate. However, as mobility service models have become more sophisticated, the Guidelines would benefit from being expanded to consider other scenarios where the issues they raise are not so easily handled, e.g. company fleets, vehicle leasing arrangements, car sharing clubs and various forms of long-term and short-term vehicle rental services. The Guidelines specifically exclude employee use of vehicles, due to employee monitoring issues. However, they could address the other privacy issues that are raised in this context.

Due to the current COVID-19 crisis, the public consultation remains open, with the deadline for responses delayed to 1 May. So, interested organisations still have time to submit their views.

At the time of writing, more than 20 responses to the public consultation have been published, with many raising challenges to legal interpretations adopted in the Guidelines, as well as practical difficulties with some of the solutions and restrictions proposed. Some of these represent important points of substance that, if not addressed by the EDPB in the final Guidelines, could lead to the "flashing of hazard lights" in developments in this area in the years to come.

We have set out below a selection of the key themes from the Guidelines, along with our thoughts.

What constitutes personal data?

The Guidelines widely construe the concept of personal data in the context of connected vehicle data. Under the Guidelines "personal data" could include directly identifiable personal data, such as the driver's name, as well as indirectly identifiable data, including data relating to driving style, mileage, vehicle wear and tear and metadata, such as the maintenance status of the vehicle. This is not surprising and is consistent with the approaches of regulators (and European case law) to date.

However, the Guidelines would benefit from acknowledging some flexibility here. Context and intention of processing are important considerations in determining when information constitutes personal data under GDPR in any scenario. This is the case both in law and under case law. This is particularly relevant for connected car technologies in scenarios other than user-owner arrangements. Should corporates have to treat "wear and tear" data of their assets as "personal data"?

The Guidelines also flag specific categories of personal data warranting special consideration due to the sensitive/high-risk nature of the information.

The Guidelines provide recommendations when processing such high-risk data, including ensuring consents obtained are valid and unbundled from other terms, defining a limited retention period, encouraging local processing within the vehicle where possible, providing alternatives (e.g. non-biometric access) and allowing for drivers to turn off certain tracking, such as location. Many are sensible privacy-enhancing measures. However, in certain scenarios, these requirements will present some controllers with challenges.

Interplay between GDPR and ePrivacy Directive

This is perhaps the most controversial topic that the Guidelines touch on. There is also, arguably, a degree of internal inconsistency with how the Guidelines address this issue.

The Guidelines are clear that, in addition to GDPR, the ePrivacy Directive will apply. Specifically, that connected vehicles and all devices connected to them are "terminal equipment" for the purposes of the ePrivacy Directive in the same way as a mobile device or laptop is "terminal equipment". This activates the requirement for consent to the storage of information, or gaining access to information stored, on the connected vehicle and other connected devices.

Strictly, this seems a fair interpretation of the applicability of the ePrivacy Directive (although a technical assessment of whether information is "accessed" from the vehicle for some technologies may permit some flexibility).

However, the Guidelines arguably fail to accommodate all potential solutions for obtaining consent, resulting in a potentially unduly restrictive application, particularly in scenarios where the user (or, in this case, driver) is not the owner of the vehicle. The ePrivacy Directive allows for consent to be given by the "user or subscriber" of the relevant service. So, consent by actors other than the driver may be appropriate in some circumstances. The Guidelines could acknowledge this flexibility to allow the law to accommodate other connected car scenarios without fundamentally undermining the protection the ePrivacy Directive seeks to provide.

The challenges of obtaining a consent under the ePrivacy Directive in certain circumstances also arguably demonstrate a need for updates to it. Consent is not required for access to data requested as part of an "information society service" (i.e. a digitally delivered service; think Spotify or Netflix). This exemption makes sense. However, it is too inflexible to accommodate technology-enabled services that also involve a "real world" element that may, therefore, not constitute "information society services". For example, the Guidelines suggest that this issue in the context of "pay-as-you-drive" insurance can be managed by obtaining the consent of the driver. However, is that truly an "unbundled" and "freely given" consent (as required under GDPR)? Would it not be neater, and maintain consistent logic, if a similar exemption applied where connectivity was an inherent part of service delivery? Or should there be some recognition of the potential to rely on legitimate interests, subject to certain safeguards, as has been included in the most recent proposal on the new ePrivacy Directive issued by the Croatian Presidency?

In addition, and potentially most importantly, the Guidelines also appear to adopt an unduly restrictive interpretation of the ability to rely on legal bases other than consent under GDPR in scenarios where the ePrivacy Directive is also engaged.

The Guidelines state that any consent requirement under Article 5(3) ePrivacy Directive takes precedence over GDPR in relation to the storage/collection of information from the connected vehicle and other linked devices. In addition, further processing of personal data collected from the connected vehicle or device will require an Article 6 GDPR lawful basis of processing. This is in line with the EDPB's Opinion 5/2019 on the interplay between the ePrivacy Directive and the GDPR.

However, the Guidelines go further and suggest that, where consent is required under Article 5(3) ePrivacy Directive, then consent will generally be the most appropriate lawful basis under Article 6 GDPR for further processing activities. This is a direction of travel we have seen in the ICO's recent guidance on adtech. So, it is not entirely surprising. The stated intention is to ensure that the protection under the ePrivacy Directive is not undermined and this is potentially understandable in certain circumstances. However, it is debatable whether this is what GDPR says. There is no hierarchy to the lawful bases under GDPR.

There is clearly a concern that saying otherwise would, in the regulator's eyes, open the "floodgates" to allow controllers to rely on legitimate interests as a legal basis under GDPR. However, setting aside legalistic arguments for a moment, this is also potentially unfair on responsible controllers. Even if legitimate interests are properly available as a legal basis, this does not mean a "free for all". Responsible controllers understand the balancing tests that need to be carried out and the requirements to deal with issues such as proportionality, transparency and accountability this entails.

Indeed, the case studies included in the Guidelines acknowledge that appropriate reliance on bases other than consent under GDPR does not necessarily undermine the protection under the ePrivacy Directive. For example, in the "pay-as-you-drive" insurance scenario, the EDPB acknowledges that performance of a contract would also be an appropriate legal basis.

As it stands, the Guidelines leave significant scope for confusion here and clarification would be welcomed.

Other key concerns

The EDPB also helpfully highlights specific concerns and recommendations in relation to connected vehicles and mobility, and focuses on the need for:

These are all existing obligations under GDPR that would need to be considered when processing personal data in any event although some issues are more relevant and high risk in a connected vehicle context. This should all effectively be considered in an associated data protection impact assessment.

However, it is clear from these recommendations that there is a need for reliance on multiple actors across the connected car space, from OEMs to mobility service providers. Each will have a different ability to implement the measures needed to ensure privacy-compliant deployment of this technology. Again, it would be beneficial for the Guidelines to recognise this.

The Guidelines include a number of case studies applying the recommendations to specific scenarios including:

These are some of the most useful sections of the Guidelines. However, it would be helpful for the Guidelines to consider use cases in other scenarios. In particular, the inclusion of a use case relating specifically to the collection of vehicle maintenance and diagnostics data in a corporate or fleet scenario would be of value, being one of the most prominent use cases for this type of technology.

The consultation period for public submissions closes on 1 May.


A cost-effective approach to SQL server high availability in the cloud – ITProPortal

Configuring SQL Server for high availability (HA) can be a costly prospect. In a traditional on-premises approach, one creates a failover cluster instance (FCI) with two (or more) servers. Only one of those servers is typically performing production tasks at any moment; the others are largely standing ready to be called into service should the primary server fail. When your SQL Server requirements demand a large system with multiple high-powered CPU cores and hundreds of gigabytes of memory, your FCI can have a lot of expensive hardware doing nothing but standing by.

The cloud affords you different options when it comes to configuring for HA. In Azure, AWS, and Google Cloud Platform (GCP) you can create a SQL Server FCI on virtual machines (VMs) rather than on physical machines. More interestingly, you may find that you can create an FCI in the cloud whose backup VMs are not equal in size and performance to the primary VM running your production SQL Server instance. You might configure your secondary VMs as much smaller systems.

Why? Because you may be able to cut your operating costs considerably. The VM you need for your primary production environment may be very expensive, but if you provision your backup VMs as smaller servers (think of them as emergency spare tires as opposed to full-sized spares), you can pay far less for the systems that are doing nothing but waiting to be called into emergency service.

But here's where the cloud and the elasticity of VMs provide a distinct advantage over an FCI built on-premises: if an event occurs that causes your FCI to fail over to one of the smaller secondary VMs, you can re-provision that smaller VM so that it reconstitutes as a new VM that is as large and as powerful as the original primary. The secondary that would have been far too small to support your production load becomes a VM that can then deliver the full support that your SQL Server application demands. The fee for that secondary VM will increase commensurately, but you have avoided paying that higher fee until this moment. In an on-premises FCI you would have been paying for the larger system for months, possibly years, while it sat waiting to be brought online.

Later, whenever the previous primary VM comes back online, you have a choice: you can either move your production SQL Server load back to that VM and return the secondary VM to its emergency spare-tire size, or shrink the original primary to that spare-tire size and continue to use it as the new secondary failover server in the FCI. If the latter, you'd continue to use the expanded secondary VM as your primary production system. Note that if you're taking advantage of the AWS EC2 Reserved Instances option, you will continue to be charged the higher rate once you've expanded the VM, even if you subsequently shrink it down to its previously undersized dimensions.
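To illustrate the kind of manual re-provisioning step described above, the sketch below uses the AWS SDK for Python (boto3) to stop an undersized EC2 secondary, change its instance type, and start it again; the stop/resize/start cycle is the brief extra recovery time discussed later in the RTO section. The region, instance ID and target type are placeholders, and on Azure or GCP the equivalent would be a VM resize through their respective SDKs; in practice this would sit inside a failover runbook rather than be run ad hoc.

```python
# Resize an undersized secondary EC2 instance after a failover (illustrative only).
import boto3

REGION = "us-east-1"                 # assumption: adjust to your region
INSTANCE_ID = "i-0123456789abcdef0"  # placeholder: the undersized secondary VM
TARGET_TYPE = "r5.4xlarge"           # placeholder: size matching the original primary

ec2 = boto3.client("ec2", region_name=REGION)

# The instance must be stopped before its type can be changed.
ec2.stop_instances(InstanceIds=[INSTANCE_ID])
ec2.get_waiter("instance_stopped").wait(InstanceIds=[INSTANCE_ID])

ec2.modify_instance_attribute(
    InstanceId=INSTANCE_ID,
    InstanceType={"Value": TARGET_TYPE},
)

ec2.start_instances(InstanceIds=[INSTANCE_ID])
ec2.get_waiter("instance_running").wait(InstanceIds=[INSTANCE_ID])
print(f"{INSTANCE_ID} re-provisioned as {TARGET_TYPE}")
```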

Are there trade-offs to configuring an FCI with undersized secondary VMs? There are, and they are important to weigh in the balance.

You're configuring for HA for a reason, and it's important to have a clear understanding of your expectations. We can talk about HA in terms of a cloud SLA that guarantees access to at least one of the VMs in your FCI 99.99 per cent of the time, but when weighing the use of undersized backup servers in a SQL Server FCI there are two other metrics you need to take into consideration.

The first is your recovery time objective (RTO), which represents the amount of time it will take to get your application back up and running in the event of a failure. By definition, an HA solution must be able to detect a failure of the primary VM and then perform an automatic recovery which, at a high level, means failing over to the secondary VM, rolling back the database to the last committed transaction, and making the secondary instance of SQL Server the primary instance so that users can begin working with the database again. The amount of elapsed time that you would consider acceptable between the event that causes failure of the primary and the resumption of user interaction with SQL Server on the secondary VM is your recovery time objective.

Knowing your RTO is important because one of the trade-offs in using an undersized secondary that you intend to convert into a larger VM when necessary is that reprovisioning takes time. It's only a matter of minutes, but if those extra minutes might result in the loss of millions of dollars' worth of transactions, then using undersized VMs as your secondaries may not be worthwhile. However, if taking an extra two minutes to reprovision the secondary as a larger VM results in a minimal loss of revenue or customer satisfaction, then the amount of money you save by not paying for a larger standby VM may warrant consideration of an undersized approach.

The second metric to weigh in the balance is your recovery point objective (RPO), which represents the amount of data you can stand to lose in a failure scenario. When you're configuring for HA, it's safe to assume that you don't want to lose any data, but that means you need to ensure that your backup VMs have access to the data that your primary SQL Server instance is working with. Since no provider currently offers a shared cloud storage solution with a 99.99 per cent availability SLA, you'll need a way to reliably replicate your SQL Server data among the separate physical locations where your secondary VMs reside.

If you configure for HA using a SQL Server Always On Availability Group (AG) approach (rather than as an FCI), SQL Server will replicate your user-defined databases to your secondary servers. However, Always On Availability Groups require SQL Server Enterprise Edition, which is going to increase your costs (and the whole point of undersizing your secondaries is to decrease your costs). You'll also find that key SQL Server databases (for agents, jobs, passwords, etc.) are not replicated to the secondary VMs under AG.

If you're using SQL Server Standard Edition, or if your RPO demands that you replicate all SQL Server databases to the secondary VMs, then you'll want to construct an FCI using a SANless clustering tool such as SIOS DataKeeper, which provides complete database replication between your primary and secondary VMs. That way, when the secondary VM is called into service, all the data that the primary had been working with is available to the secondary.

There is one more operational trade-off: while services within the AG or the Windows failover cluster manager can automate failover to the secondary VM, it is not possible to automate the resizing of the secondary server. You'll have to do that manually. Start by configuring an alert that notifies you when a failover occurs. At that point you will need to make a decision: do I upsize the target or fail back to the original server? Some failures might be transient, in which case moving the workload back to the original server will be your best option for the quickest recovery. However, it's not always obvious why the original server failed, so you may find SQL Server failing over again soon after you fail back. In other cases, such as when there is a service interruption in the availability zone where your primary VM resides, the best option will be to go ahead and resize the undersized VM, since you won't know how long the outage will last.
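The article does not prescribe specific tooling for the alert or the resize, but on AWS the two manual steps might look roughly like the sketch below, assuming boto3 is configured; the SNS topic ARN, instance ID, and instance type are placeholders, and detecting the failover itself is left to whatever cluster monitoring you already run.

```python
# Hedged sketch of the two manual steps discussed above, on AWS with boto3.
# All identifiers below are hypothetical placeholders.
import boto3

sns = boto3.client("sns")
ec2 = boto3.client("ec2")

TOPIC_ARN = "arn:aws:sns:us-east-1:111111111111:sql-fci-failover"  # placeholder
STANDBY_ID = "i-0123456789abcdef0"                                 # placeholder
FULL_SIZE_TYPE = "r5.2xlarge"                                      # placeholder

def notify_failover(new_owner: str) -> None:
    """Alert an operator that the clustered SQL Server role has moved, so they
    can decide whether to upsize the standby or fail back to the original."""
    sns.publish(
        TopicArn=TOPIC_ARN,
        Subject="SQL Server FCI failover detected",
        Message=f"The clustered SQL Server role is now running on {new_owner}.",
    )

def upsize_standby(instance_id: str = STANDBY_ID,
                   target_type: str = FULL_SIZE_TYPE) -> None:
    """Resize the undersized VM. The instance must be stopped to change its
    type, which is where the extra minutes of recovery time come from."""
    ec2.stop_instances(InstanceIds=[instance_id])
    ec2.get_waiter("instance_stopped").wait(InstanceIds=[instance_id])
    ec2.modify_instance_attribute(InstanceId=instance_id,
                                  InstanceType={"Value": target_type})
    ec2.start_instances(InstanceIds=[instance_id])
    ec2.get_waiter("instance_running").wait(InstanceIds=[instance_id])
```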

One final point to consider when weighing the cost-effectiveness of configuring SQL Server for HA in the cloud using undersized secondaries is this:

You must be careful when picking the size of the undersized target. Cloud instances throttle disk IOPS based upon instance size, so you should check the disk IOPS limits of the secondary VM's instance type to ensure they will not become a bottleneck for your SQL Server load at failover. Fortunately, before a failover the target VM typically sees only the replicated write IOPS, not read IOPS.
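On AWS, one way to sanity-check a candidate size before committing to it is to compare the EBS-optimised IOPS limits of the full-size and undersized instance types; the sketch below assumes boto3 and uses placeholder type names:

```python
# Hedged sketch: compare EBS IOPS limits for a full-size type and a candidate
# undersized standby type. The instance type names are placeholders.
import boto3

ec2 = boto3.client("ec2")
resp = ec2.describe_instance_types(InstanceTypes=["r5.2xlarge", "r5.large"])

for item in resp["InstanceTypes"]:
    ebs = item["EbsInfo"]["EbsOptimizedInfo"]
    print(f'{item["InstanceType"]}: baseline {ebs["BaselineIops"]} IOPS, '
          f'maximum {ebs["MaximumIops"]} IOPS')
```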

Dave Bermingham, Senior Technical Evangelist, SIOS Technology

Read more:
A cost-effective approach to SQL server high availability in the cloud - ITProPortal

Zoom’s Flawed Encryption Linked to China – The Intercept

Meetings on Zoom, the increasingly popular video conferencing service, are encrypted using an algorithm with serious, well-known weaknesses, and sometimes using keys issued by servers in China, even when meeting participants are all in North America, according to researchers at the University of Toronto.

The researchers also found that Zoom protects video and audio content using a home-grown encryption scheme, that there is a vulnerability in Zoom's waiting room feature, and that Zoom appears to have at least 700 employees in China spread across three subsidiaries. They conclude, in a report for the university's Citizen Lab, which is widely followed in information security circles, that Zoom's service is not suited for secrets and that it may be legally obligated to disclose encryption keys to Chinese authorities and be responsive to pressure from them.

Zoom could not be reached for comment.

Earlier this week, The Intercept reported that Zoom was misleading users in its claim to support end-to-end encryption, in which no one but participants can decrypt a conversation. Zoom's Chief Product Officer Oded Gal later wrote a blog post in which he apologized on behalf of the company "for the confusion we have caused by incorrectly suggesting that Zoom meetings were capable of using end-to-end encryption." The post went on to detail what encryption the company does use.

[Diagram: how Zoom meetings work. Source: Zoom]

Based on a reading of that blog post and Citizen Lab's research, here is how Zoom meetings appear to work:

When you start a Zoom meeting, the Zoom software running on your device fetches a key with which to encrypt audio and video. This key comes from Zoom's cloud infrastructure, which contains servers around the world. Specifically, it comes from a type of server known as a key management system, which generates encryption keys and distributes them to meeting participants. Each user gets the same, shared key as they join the meeting. It is transmitted to the Zoom software on their devices from the key management system using yet another encryption system, TLS, the same technology used in the https protocol that protects websites.

Depending on how the meeting is set up, some servers in Zoom's cloud, called connectors, may also get a copy of this key. For example, if someone calls in on the phone, they're actually calling a Zoom Telephony Connector server, which gets sent a copy of the key.

Some of the key management systems (5 out of 73, in a Citizen Lab scan) seem to be located in China, with the rest in the United States. Interestingly, the Chinese servers are at least sometimes used for Zoom chats that have no nexus in China. The two Citizen Lab researchers who authored the report, Bill Marczak and John Scott-Railton, live in the United States and Canada. During a test call between the two, the shared meeting encryption key was sent to one of the participants over TLS from a Zoom server apparently located in Beijing, according to the report.

The report points out that Zoom may be legally obligated to share encryption keys with Chinese authorities if the keys are generated on a key management server hosted in China. If the Chinese authorities or any other hypothetical attacker with access to a key wants to spy on a Zoom meeting, they also need to either monitor the internet access of a participant in the meeting, or monitor the network inside the Zoom cloud. Once they collect the encrypted meeting traffic, they can use the key to decrypt it and recover the video and audio.

Citizen Lab flagged as worrisome not only the system used to distribute Zoom encryption keys but also the keys themselves and the way they are used to encrypt data.

Zoom's keys conform to the widely used Advanced Encryption Standard, or AES. A security white paper from the company claims that Zoom meetings are protected using 256-bit AES keys, but the Citizen Lab researchers confirmed the keys in use are actually only 128-bit. Such keys are still considered secure today, but over the last decade many companies have been moving to 256-bit keys instead.

Furthermore, Zoom encrypts and decrypts with AES using an algorithm called Electronic Codebook, or ECB, mode, which is well understood to be a bad idea, because this mode of encryption preserves patterns in the input, according to the Citizen Lab researchers. In fact, ECB is considered the worst of AES's available modes.

Here's why: it should be impossible to tell the difference between properly encrypted data and completely random data, such as static on a radio, but ECB mode fails to do this. If there's a pattern in the unencrypted data, the same pattern shows up in the encrypted data. Wikipedia has a useful illustration to visualize this:

[Image: patterns appearing in data encrypted with AES in ECB mode. Source: Wikipedia]
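To make the problem concrete, the toy Python sketch below (an illustration of the general ECB weakness, not Zoom's actual code) encrypts a repeating plaintext with a 128-bit AES key in ECB mode, the combination Citizen Lab observed, and shows that the repetition survives encryption:

```python
# Toy demonstration of the ECB weakness using the third-party "cryptography"
# package: identical plaintext blocks produce identical ciphertext blocks.
import os
from cryptography.hazmat.backends import default_backend
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

key = os.urandom(16)                 # 128-bit key, as Citizen Lab reported
block = b"ATTACK AT DAWN!!"          # exactly one 16-byte AES block
plaintext = block * 4                # an obvious pattern in the input

encryptor = Cipher(algorithms.AES(key), modes.ECB(),
                   backend=default_backend()).encryptor()
ciphertext = encryptor.update(plaintext) + encryptor.finalize()

# The same 16-byte ciphertext block appears four times: the pattern leaks.
blocks = [ciphertext[i:i + 16] for i in range(0, len(ciphertext), 16)]
print(len(set(blocks)), "distinct ciphertext block(s) out of", len(blocks))
```

Under a mode with chaining or per-message nonces, such as CBC or GCM, the four ciphertext blocks would all differ.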

Once it has been poorly encrypted in this manner, video and audio data is distributed to all participants in a meeting through a Zoom Multimedia Router server. For most users, this server runs in Zoom's cloud, but customers can choose to host this part on-premises. In that case, Zoom will generate, and thus have access to, the AES key that encrypts the meeting, but shouldn't have access to the meeting content itself, so long as none of the aforementioned connector servers (for phone calls and so forth) are participating in the meeting. (In its blog post, Zoom said self-hosting customers will eventually be able to manage their own encryption keys.)

Meeting hosts can set their meetings to have virtual waiting rooms, making it so that users do not directly enter the meeting when they log on with Zoom but instead must wait to be invited in by a participant. The Citizen Lab researchers discovered a security vulnerability with this feature while conducting their encryption analysis. They said in their report that they have disclosed the vulnerability to Zoom but that "we are not currently providing public information about the issue to prevent it from being abused." In the meantime, the researchers advised Zoom users who desire confidentiality to avoid using waiting rooms and instead set passwords on meetings.

The newly uncovered flaws in Zoom's encryption may be troubling for many of the company's customers. Since the coronavirus outbreak started, Zoom's customer base has surged from 10 million users to 200 million, including over 90,000 schools across 20 countries, according to a blog post by Zoom CEO Eric Yuan. The U.S. government recently spent $1.3 million on Zoom contracts as part of its response to the pandemic, according to a review of government contracts by Forbes, and the U.K. government has been using Zoom for remote Cabinet meetings, according to a tweet from Prime Minister Boris Johnson.

Among those who should be concerned about Zoom's security issues, according to Citizen Lab, are governments worried about espionage and businesses concerned about cybercrime and industrial espionage.

Despite a recent flood of security and privacy failures, Yuan, Zoom's CEO, appears to be listening to feedback and making a real effort to improve the service. "These new, mostly consumer use cases have helped us uncover unforeseen issues with our platform. Dedicated journalists and security researchers have also helped to identify pre-existing ones," Yuan wrote in his blog post. "We appreciate the scrutiny and questions we have been getting about how the service works, about our infrastructure and capacity, and about our privacy and security policies."

In addition to promptly fixing several security issues that were reported, the company removed an attendee attention tracker feature, a privacy nightmare that let meeting hosts track whether participants had the Zoom window or some other app's window in focus during a meeting. It has also invested in new training materials to teach users about security features, like setting passwords on meetings to avoid Zoom-bombing, the phenomenon in which people disrupt unprotected Zoom meetings.

Because Zoom's service is not end-to-end encrypted, and the company has access to all encryption keys and to all video and audio content traversing its cloud, it's possible that governments around the world could be compelling the company to hand over copies of this data. If Zoom does help governments spy on its users, the company claims it hasn't built tools specifically to help law enforcement: "Zoom has never built a mechanism to decrypt live meetings for lawful intercept purposes," Gal, Zoom's chief product officer, wrote in the technical blog post, "nor do we have means to insert our employees or others into meetings without being reflected in the participant list."

Unlike some other tech companies, Zoom has never released any information about how many government requests for data it gets, and how many of those requests it complies with. But after the human rights group Access Now's open letter urging Zoom to publish a transparency report, Yuan also promised to do just that. Within the next three months, the company will prepare a transparency report that details information related to requests for data, records, or content. Access Now has commended Zoom for committing to publish a transparency report.

Read the rest here:
Zoom's Flawed Encryption Linked to China - The Intercept

The false promise of today's approaches to cloud security – ITProPortal

It's still amazing to me that each one of us is a few clicks away from starting a cluster of servers ready to process data at any scale.

Not so long ago, we needed to buy hardware, CPUs, memory, networks, and storage. It took considerable effort to set up our data centres and connect the devices to our networks.

Now, even large, established organisations that already have huge server farms are taking advantage of the simplicity and scalability of cloud technologies.

But what about security in cloud-native environments? Infrastructure? Application?

No one starts to build a business by ordering machines. We define what we want to do, develop a system, and deploy it. We don't really care about the brand printed on the server blade. Our systems have to be running; they need to be reliable, usable, responsive, and secure.

Every IaaS vendor, be it AWS, Google, Microsoft, or someone else, offers infrastructure security. By using their infrastructure, a business delegates a considerable amount of its security responsibility to the cloud vendor. At this point in time, the business assumes that the work done by AWS, Google, and Microsoft is more secure than what it could do on its own.

Let's look at the layered model of modern computing.

Cloud infrastructure services (IaaS) provide the virtual machine: memory, storage, processors, and networking. Higher-level services provide the operating system, orchestration, and object stores.

Security features of the infrastructure can only prevent attacks coming from the layers below them. For example, if you choose Amazon Elastic Block Store (EBS) encryption, your data on the physical storage will be encrypted at the virtualisation level, between the OS and the hardware. If an attacker breaks into Amazon's data centre, steals the disk, takes it home, and attaches it to his computer, he will see only encrypted data.

If the attacker breaches the same virtual machine remotely, however, he can open the files on the same EBS volume and read the data transparently, just as the legitimate application does, because the virtualisation layer has no way of telling who is trying to read the information.

The same applies to other infrastructure-level security features, like firewalls. If I have services A and B, where B is a client of A, I can define firewall rules that restrict access to the machine running A so that only the B machine can reach it. But an attacker who breaks into machine B then has easy access to A.
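As a concrete illustration (a sketch added here, not from the article), such a rule on AWS security groups might look like the following, with placeholder group IDs for the machines running A and B and an assumed service port:

```python
# Hedged sketch: allow only members of B's security group to reach service A.
# Group IDs and the port are hypothetical placeholders.
import boto3

ec2 = boto3.client("ec2")
SG_A = "sg-0aaaaaaaaaaaaaaaa"   # group attached to the machine running A
SG_B = "sg-0bbbbbbbbbbbbbbbb"   # group attached to the machine running B

ec2.authorize_security_group_ingress(
    GroupId=SG_A,
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 5432,
        "ToPort": 5432,
        "UserIdGroupPairs": [{"GroupId": SG_B}],
    }],
)
```

The rule only decides which machine may open a connection to A; it says nothing about what a compromised B does over that connection, which is exactly the limitation described above.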

In general, if the attack's origin is above the layer of protection, the protection isn't effective. Given that attacks mostly come from the direction of the application layer, infrastructure-level protection provides only partial security.

The infrastructure can limit application-level activity to prevent unwanted behaviour, but the resulting controls will be very restrictive and very expensive to maintain. In practice, the perimeter ends up either too wide to provide enough security or too narrow to be maintainable in the cloud-native world.

If applications could secure themselves, it would be a big step toward complete cloud-native security. Of course, instead of treating applications as the things needing protection, the industry has invented many infrastructure security features to get around the problem.

Furthermore, self-protecting applications are hard to configure and maintain; their security levels are all over the place. In this type of environment, true application security is very hard to accomplish, because application versions vary and the applications come from a variety of vendors as well.

Take TLS. This security protocol, the de facto industry standard for protecting TCP connections (sorry, SSH), was invented in the 90s. While its design is exemplary, what's important for our discussion is that TLS connections were designed to be created between a browser and web server software. TLS is not an infrastructure feature; it isn't even a feature of the network drivers. It is a pure application-level feature, which means that, ideally, only the application can access the data sent over the network.
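As a small illustration of that point (a generic Python sketch, not tied to any particular vendor), the handshake below happens entirely inside the application process; example.com is just a placeholder host:

```python
# The application itself wraps the socket and performs the TLS handshake;
# nothing at the infrastructure layer is involved.
import socket
import ssl

context = ssl.create_default_context()   # system CA bundle, modern defaults
with socket.create_connection(("example.com", 443)) as raw_sock:
    with context.wrap_socket(raw_sock, server_hostname="example.com") as tls:
        print("Negotiated", tls.version())
        print("Peer subject:", tls.getpeercert()["subject"])
```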

As time passed, server-side TLS products evolved, like RSA's TLS termination hardware. TLS termination has become a common practice: the TLS connection arrives at reverse proxy software or hardware whose only job is to strip the protection from the connection and forward it to the right web server unprotected.

On one hand, this is not as secure; on the other, it is hard to maintain TLS certificates and private keys across the whole server park. When it became clear that internal service-to-service communications must be protected as well, different cloud vendors gave different answers to the same infrastructure security problem we discussed. Cloud-independent solutions like Istio and other sidecar approaches put an extra container next to the protected application, performing the TLS termination just as it was done with web servers, but this isn't effective.

TLS has been used so sub-optimally because it is hard to configure and maintain in applications. TLS requires constant reconfiguration (certificate renewals) and key protection (private keys whose theft compromises the entire TLS deployment). All applications are configured a little differently, which makes maintenance difficult; some applications, of course, do not support TLS at all.

Of course, this simple example of TLS highlights the operational problems with putting broad security features into the application. Also, business and application development are focused on functionality; security is secondary, if it is considered at all.

Business-driven thinking pushes security into the infrastructure; it should be there out of the box. In many cases it is, but security in the infrastructure is limited. The infrastructure-focused approach to application security isn't working either.

The answer: security has to be at the application level, but not be part of the application.

Ben Hirschberg, VP of R&D and Co-founder, Cyber Armor

Read the original post:
The false promise of today's approaches to cloud security - ITProPortal