Category Archives: Cloud Servers

IBM Misses Estimates in 20th Straight Quarterly Sales Drop – MSPmentor

IBM's revenue fell short of analysts' projections, marking a 20th consecutive quarterly decline, as growth in new businesses like cloud services and artificial intelligence failed to make up for slumping legacy hardware and software sales.

That sales miss, the first in more than a year, could temper estimates for a return to revenue growth by early 2018. For years, Chief Executive Officer Ginni Rometty has been investing in higher-growth areas and moving away from older products like computers and operating system software. Even as she has shed units to cut costs and made acquisitions to bolster technology and sales, the legacy products are still a drag.

Sales in the first quarter fell 2.8 percent from a year earlier to $18.2 billion, IBM said in a statement Tuesday. That was a bigger drop than the 1.3 percent decline in the previous quarter. Analysts had expected $18.4 billion on average. The shares fell as much as 4.3 percent to $162.71 in late trading.

Part of the sales miss stemmed from IBM's technology services and cloud platforms segment, where revenue declined for the first time in three quarters. That group helps clients move applications onto cloud servers and manage workloads through multi-year deals of $500 million to $1 billion. Some of those contracts were expected to get signed in the first quarter but didn't go through, Chief Financial Officer Martin Schroeter said in an interview.

Had they been completed, revenue from the Global Technology Services group would have been better, Schroeter said. "When we do get those done in April, May or June, they'll start to deliver."

Profit, adjusting for some items, was $2.38 a share. Analysts expected $2.35 a share on average, according to data compiled by Bloomberg.

International Business Machines Corp. is aiming to reach $40 billion in sales in the new growth businesses by next year, which would require about a 21 percent jump from 2016. The company said it was ahead of pace to reach that target. Included in this group are all the products and services related to cloud, analytics, security and mobile technology.

The company's cognitive solutions segment, which houses much of the software and services in the newer businesses, has shown the most promise in recent quarters. Sales in cognitive solutions, which includes the Watson artificial intelligence platform and machine learning, grew for the fourth quarter in a row.

As part of its transformation, the company is working to sell more software that works over the internet, where customers pay as they use the tools. IBM has spent billions over the last few years building and buying the products and cloud data centers needed to support this type of business, a move that's eroded overall profitability. Gross margin shrank from a year earlier for the sixth straight quarter.

IBM's systems segment, home to legacy businesses like mainframe hardware and operating system software, posted a 17 percent drop in sales. That compared with a 22 percent drop during the same period last year. IBM doesn't expect growth in the area, but Rometty has said the company can still extract value from the business.

Continue reading here:
IBM Misses Estimates in 20th Straight Quarterly Sales Drop - MSPmentor

Microsoft tools coalesce for serverless computing – InfoWorld

Microsoft's adoption of serverless computing is a big piece of Azure maturing as a platform. There's a lot going on here, as architectures and services evolve to take advantage of the unique capabilities of the cloud and we as users and developers migrate away from traditional server architectures.

Mark Russinovich, Microsoft's CTO of Azure, has a distinct view on the evolution of cloud as a platform. "Infrastructure as a service [IaaS] is table stakes," he said at an Azure serverless computing event at Microsoft's Redmond, Wash., headquarters last week. "Platform as a service [PaaS] is the next step, offering runtimes and developing on them, an API and an endpoint, where you consume services. That's where we are today, where we still define the resources we use when we build cloud applications."

Then comes serverless computing. "Serverless is the next generation of computing, the point of maximum value," Russinovich said.

What he's talking about is abstracting applications from the underlying servers, where code is event-driven and scales on demand, charged by the operation rather than by the resources used. As he said, "I don't have to worry about the servers. The platform gives me the resources as I need them." That's the real definition of serverless computing: the servers and OS are still there, but as a user and a developer you don't need to care about them.
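
The pay-per-operation, event-driven model described here can be sketched in a few lines of Python. This is a toy simulation of the billing and dispatch idea only, not the Azure Functions API; every name in it is illustrative.

```python
# Toy model of the serverless idea: the platform (not you) invokes a
# handler once per event and bills per invocation, not per server.

def handle_order_event(event):
    """Event-driven handler: runs only when an event arrives."""
    # The platform supplies the event; no server or OS is ever touched here.
    total = sum(item["qty"] * item["price"] for item in event["items"])
    return {"order_id": event["order_id"], "total": total}

# Simulate the platform dispatching events and metering each operation.
events = [
    {"order_id": 1, "items": [{"qty": 2, "price": 9.5}]},
    {"order_id": 2, "items": [{"qty": 1, "price": 40.0}]},
]
invocations = 0
results = []
for ev in events:
    results.append(handle_order_event(ev))  # scale-out happens per event
    invocations += 1  # the billing unit: the operation, not reserved capacity

print(invocations, results)
```

The point of the sketch is the inversion of control: your code is a pure handler, and capacity, scheduling and billing all hang off the event count.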

You can look at it as a logical evolution of virtualization. As the public cloud has matured, it's gone from one relatively simple type of virtual machine and one specific type of underlying hardware to specialized servers that can support IaaS implementations for all kinds of use cases, such as high-performance computing servers with massive GPUs for parallel processing and for scientific computing working with numerical methods, or such as arrays of hundreds of tiny servers powering massive web presences.

That same underlying flexibility powers the current generation of PaaS, where applications and code run independently of the underlying hardware while still requiring you to know what the underlying servers can do. To get the most out of PaaS (that is, to get the right fit for your code), you still need to choose servers and storage.

With serverless computing, you can go a step further, concentrating on only the code you're running, knowing that it's ephemeral and you're using it to process and route data from one source to another application. Microsoft's serverless implementations have an explicit lifespan, so you don't rely on them being persistent, only on them being there when you need them. If you try to use a specific instance outside that limited life, you get an error message, because the application and its hosting container will be gone.

Nir Mashkowski, principal group manager for Azure App Service, noted three usage patterns for Azure's serverless offerings.

The first, and most common, pattern is what he calls brownfield implementations. They are put together by enterprises as part of an overall cloud application strategy, using Azure Functions and Logic Apps as an integration tool, linking old apps with new ones and on-premises systems with the cloud.

The second pattern is greenfield implementations, which are typically the province of startups, using Azure Functions as part of a back-end platform; that is, as switches and routers moving data from one part of an application to another.

The third pattern is for internet of things applications. It is a combination of the two, using Azure Functions to handle signals from devices, triggering actions in response to specific inputs.

For enterprises wanting a quick on-ramp to serverless computing, Azure Functions' closely related sibling Logic Apps is an intriguing alternative. Drawing on the same low-code foundations as the more business-focused Flow, it gives you a visual designer with support for conditional expressions and loops. (You can even run the designer inside Visual Studio.)

Like Azure Functions, Logic Apps is event-triggered and can be used to coordinate a sequence of Azure functions. Wrapping serverless code in a workflow adds more control, especially if it's used to apply conditions to a trigger: for example, launching one function if a trigger is at the low end of a range of values, another if it's at the high end.
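
That range-based routing is easy to picture in code. This is a hedged sketch in plain Python, not output of the Logic Apps designer; the function names and the threshold are invented for illustration.

```python
# Two "serverless functions" wrapped in a workflow that routes on the
# trigger value's range, mirroring the conditional step described above.

def handle_low(value):
    return f"low-path handled {value}"

def handle_high(value):
    return f"high-path handled {value}"

def workflow(trigger_value, threshold=50):
    """Conditional step: choose which function to launch for this trigger."""
    if trigger_value < threshold:
        return handle_low(trigger_value)
    return handle_high(trigger_value)

print(workflow(10))  # value at the low end of the range
print(workflow(90))  # value at the high end of the range
```

The workflow layer owns the condition; the two handlers stay small and single-purpose, which is the design win of putting orchestration outside the functions themselves.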

Russinovich described three organizations working with serverless computing.

One of the more interesting aspects of both Azure Functions and Logic Apps is that they're not limited to running purely in the cloud. Functions themselves can be developed and tested locally, with full support in Visual Studio, and both Azure Functions and Logic Apps will be supported by on-premises Azure Stack hybrid cloud systems.

Inside the Azure datacenters, Microsoft's serverless options are all containerized for rapid deployment. That same model will come to your own servers, with Azure Functions able to run on any server using the same container-based approach.

Currently, Azure Functions is based on the full .Net Framework release, so there's a minimum requirement of Windows Server Core as a host. But that's going to change over the next few months with an open source release based on .Net Core and the upcoming .Net Standard 2.0 libraries. With those in hand, you'll be able to run Azure Functions in containers based on Windows Server Nano, as well as on .Net Core running on Linux. You'll be able to migrate code from on-premises to hybrid cloud and to the public cloud depending on the workload and on the billing model you choose.

Such a cross-platform serverless solution that runs locally and in the cloud starts looking very interesting, giving you the tools to build and test on-premises, then scale up to running on Azure (or even on Linux servers running on Amazon Web Services).

There's a lot to be said for portability, and by working with REST and JSON as generic input and output bindings, Microsoft's containerized serverless implementation appears to avoid the cloud lock-in of its AWS and Google competitors while still giving you direct links to Azure services.

View post:
Microsoft tools coalesce for serverless computing - InfoWorld

Your Amazon Echo Recordings Can be Listened To and Deleted, Like This – 1redDrop

Is your Amazon Echo always listening to you and recording all your conversations? In short, yes and no. The Echo's AI system is always listening for the wake word ("Alexa," "Computer," or whatever you've set it to), but it is not always recording. The recording starts when the wake word is spoken, and the recording is then sent to Amazon's cloud servers for processing. Those recordings are all stored there until you delete them.

Here's how to listen to everything that your Amazon Echo has recorded, and then delete those recordings.

But before you go on an Echo-recordings-deleting spree, you need to understand that past commands help Amazon Alexa understand your needs in a more personalized way. Deleting all your past recordings will hamper that ability.

To listen to your recordings, open the Amazon Alexa app on a smartphone or tablet and go to Settings > History. There, you'll be able to see the tens, hundreds or thousands of entries stored on Amazon's cloud servers, depending on how busy you've been keeping Alexa on your Amazon Echo device.

You can listen to any of those recordings, which will be served from the cloud. To delete just a few recordings, select the ones you want to delete and then hit Delete.

What if you want to delete the whole bunch of them? That could take hours or days if you do them one by one. To delete your entire recordings history, you'll need to open up your browser and go to www.amazon.com/mycd, where you'll be asked to sign in with the same ID you used on the Alexa app.

Once logged in, you'll be able to view the audio files, listen to them and delete everything that was ever recorded since you bought your Amazon Echo.

But again, be warned that if you delete everything, it'll be like Alexa has to start learning again from scratch, which you might not want. Alternatively, you can delete just the oldest recordings, the ones that were made when you first bought the device and asked test questions before you got the hang of it.

Remember, Amazon Echo only records voice commands that are heard after the wake word is spoken, so you don't have to worry about Alexa spying on you, as many people believe. She's always listening, it's true, but she doesn't send any data to Amazon until an authentic voice command is issued.
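
The wake-word gating just described can be modeled as a toy filter: the device monitors everything locally, but only speech after the wake word gets captured for the cloud. This is purely illustrative Python, not Amazon's implementation; the wake-word set and stream format are assumptions.

```python
# Toy wake-word gate: always listening, but nothing is buffered for the
# cloud until a wake word is heard in the local audio stream.

WAKE_WORDS = {"alexa", "computer"}

def process_stream(words):
    """Return only the speech captured after a wake word is heard."""
    recording = False
    captured = []
    for word in words:
        if not recording:
            if word.lower() in WAKE_WORDS:  # monitored locally, never stored
                recording = True            # recording starts only now
        else:
            captured.append(word)           # this is what goes to the cloud
    return captured

# Conversation before the wake word is never captured.
stream = "we could order pizza alexa set a timer".split()
print(process_stream(stream))  # ['set', 'a', 'timer']
```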

That's how it works. If not, your Amazon Echo would be the size of a large bedroom, or even as large as your house, because that's how much computing power artificial intelligence needs in order to be intelligent. That's the reason nearly everything is processed on the cloud; it can't be otherwise, at least not until processing power evolves to a much, much greater level than it is today.

Thanks for reading our work! If you enjoyed it or found value, please share it using the social media share buttons on this page. If you have something to tell us, there's a comments section right below, or you can contact us at contact@1redDrop.com.

Read more from the original source:
Your Amazon Echo Recordings Can be Listened To and Deleted, Like This - 1redDrop

New Azure migration tools analyze which apps should move to the cloud – TechTarget

A new service from Microsoft can help IT shops interested in a move to Microsoft Azure better estimate workload performance.

Potential customers have access to new Azure migration tools, such as the free Cloud Migration Assessment, which analyzes on-premises workloads to determine how applications will perform and the cost to run them on Azure. The move gives Azure parity with Amazon Web Services (AWS), which added similar capabilities last year.

The new feature was rolled out along with two other attempts to ease the transition of Windows Servers to Azure: licensing discounts and improved capabilities in Azure Site Recovery.

The Cloud Migration Assessment works across a company's IT environment to evaluate hardware configurations. Microsoft then provides a free report that estimates the cost benefit to house those workloads on Azure, as well as suggestions to appropriately size environments in the cloud. It also informs users on which VM types to select.

"This was an area Microsoft didn't have and really needed," said Angelina Troy, an analyst at Gartner.

Other updates rolled out this week provide access to the Azure Hybrid Use Benefit in the Azure Management Portal. Customers can save up to 40% on Windows Server licenses that include Software Assurance, according to Microsoft.

In the coming weeks, Azure Site Recovery -- Microsoft's tool for migrating Hyper-V, VMware and physical servers -- will add new tools to tag VMs directly within the Azure portal, rather than using PowerShell.

Cloud migration is a more prominent issue as customers shift from born-in-the-cloud startups to enterprises that want to shift existing VMs to the public cloud. They often have a hard time predicting how workloads will perform in these environments; a cottage industry of third-party vendors has sprung up to help migrate and manage workloads.

Cloud providers have also extended their capabilities as they seek to eliminate hindrances to adoption and use. They offer a variety of tools for real-time replication or transfer of configuration-dependent images. AWS and Azure now have similar options in terms of ways to migrate a VM into their respective compute services, though Azure may actually have a few more replication services and tools than AWS, Troy said.

The assessment capability isn't necessarily superior to what other third-party companies provide, but the main benefit is that it's free, Troy said. This tool can now be combined with other Azure migration tools, such as Azure Migration Accelerator and Azure Site Recovery, to coordinate and move workloads to Microsoft's public cloud.

Third parties don't always have that same depth of knowledge of cloud platform updates, but they can provide insights across providers to help users find the best fit, especially if they're vendor-agnostic.

The actual migration can often be the simplest part of the move to the cloud, said Timothy Campbell, product manager at Datapipe, a managed service provider based in Jersey City, N.J., that partners with AWS and Azure. Still, navigating Azure's large product and feature set can be daunting, so these features address an important piece of the puzzle, he added.

These updates "will likely accelerate adoption by providing a native tool that can help align workloads correctly and create efficiencies that are specific to the platform," he said.

Trevor Jones is a news writer with SearchCloudComputing and SearchAWS. Contact him at tjones@techtarget.com.


See the original post:
New Azure migration tools analyze which apps should move to the cloud - TechTarget

How Fog Computing Will Shape The Future Of IoT Applications And Cybersecurity – ISBuzz News

Fog computing may be the next big thing for the Internet of things. The fog computing market, valued at $22.3 million in 2017, will expand at an explosive rate and grow to $203.5 million over the next five years, according to projections by Markets and Markets. IoT interconnectivity, machine-to-machine communication, real-time computing demand and demand for connected devices are driving the fog market's growth.

Businesses impacted by these trends are turning to fog computing for greater efficiency, faster decision-making processes and lowered operating costs. Here's a closer look at what fog computing is, why it will play a key role in the future of IoT technology and how it will help with cybersecurity.

Fog Computing vs. Traditional Cloud Computing

Fog computing is an extension of cloud computing to adjust to the emerging Internet of things. The IoT is connected to a vast array of devices, including mobile phones, wearables, smart TVs, smart homes, smart cars and even smart cities. The number of devices collecting data and the amount of data being processed are growing exponentially.

Public cloud computing provides the computing space to process this volume of data through remote-located servers. But uploading this amount of data to remote servers for analysis and delivering the results back to the original location takes time, which can slow down processes that demand rapid responses in real time. Additionally, when Internet connectivity is unreliable, relying on remote servers becomes problematic.

Fog computing is a solution to these issues, explains Cisco, a pioneering member of the OpenFog Consortium. Rather than relying primarily on remote servers at a central location, fog computing uses distributed computer resources located closer to local devices to handle processes that demand rapid processing, with other, less time-sensitive processes delegated to more remote cloud servers.

This can be visualized as pushing the border of the cloud closer to the edge of local devices connected to the Internet of things. Because of this, fog computing is also sometimes called edge computing. Thus, fog computing is not really opposed to cloud computing, but it can be viewed as a variety of hybrid cloud computing where some processes are handled by private fog networks closer to network devices and some are handled by the public cloud.
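
The fog-versus-cloud division of labor can be sketched as a simple dispatcher: time-critical work is handled by a nearby fog node, while less time-sensitive work is delegated to the remote cloud. The latency figures and task names below are made up for illustration; this is not any vendor's API.

```python
# Hedged sketch of fog routing: pick where a task runs based on its
# response-time requirement versus the round-trip cost of each tier.

FOG_LATENCY_MS = 10      # distributed resources close to the devices
CLOUD_LATENCY_MS = 200   # remote servers at a central location

def dispatch(task):
    """Route a task to the fog or the cloud based on its deadline."""
    if task["deadline_ms"] < CLOUD_LATENCY_MS:
        return "fog"      # a cloud round trip would miss the deadline
    return "cloud"        # batch/analytics work can travel farther

tasks = [
    {"name": "brake-signal", "deadline_ms": 50},     # vehicle control
    {"name": "usage-report", "deadline_ms": 60000},  # overnight analytics
]
print({t["name"]: dispatch(t) for t in tasks})
```

The hybrid character of fog computing is exactly this routing decision: both tiers coexist, and each request lands on whichever one can meet its timing needs.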

Why Companies are Turning to Fog Computing

There are a few major reasons why companies are turning to fog computing, explains TechTalks software engineer Ben Dickson. One is the emergence of IoT applications where real-time response can be a matter of life or death. A key example is the healthcare industry: medical wearables are increasingly being used by healthcare providers to monitor patient conditions, provide remote telemedicine and even to guide on-site staff and robots in procedures as delicate as surgery. Thus, reliable real-time data processing is crucial for these types of applications.

Another IoT application where rapid response is crucial is vehicle communications. Many cars use online information to guide navigational decisions. In the near future, driverless cars will rely entirely on automated input to perform navigation. Thus, a slow response when vehicles are moving at 60 mph can be dangerous or even fatal, so real-time processing speed is required. Fog computing networks are especially suitable for applications that require a response time of less than a second, according to Cisco.
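
A quick back-of-the-envelope calculation shows why sub-second response matters at highway speed. The speeds and delays below are illustrative, not from the article.

```python
# How far a vehicle travels while waiting on a processing response.

MPH_TO_MPS = 1609.344 / 3600  # miles per hour -> metres per second

def distance_travelled_m(speed_mph, delay_s):
    """Distance covered (in metres) during a processing delay."""
    return speed_mph * MPH_TO_MPS * delay_s

# A 1-second round trip to a remote cloud at 60 mph:
print(round(distance_travelled_m(60, 1.0), 1))   # roughly 26.8 m

# A 50 ms response from a nearby fog node:
print(round(distance_travelled_m(60, 0.05), 1))  # roughly 1.3 m
```

At 60 mph a one-second delay means roughly a car-length and a half of blind travel, which is why Cisco's sub-second threshold for fog-suited applications is conservative for vehicle control.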

How Fog Computing Helps Cyber Security

Security is another big reason companies are turning to fog computing. Data for applications such as healthcare and point-of-sale transactions is very sensitive and a primary target for cyber criminals and identity thieves. However, fog computing provides a way to keep this type of data under tight guard.

Fog systems are designed from the ground up to protect the security of information exchange between IoT devices and the cloud, providing security suitable for real-time applications, according to the OpenFog Consortium. Fog systems can also be used to keep device data securely in-house and away from vulnerable public networks. Data backups can then be safely stored by deploying reliable backup services, like those provided by Mozy, allowing companies to schedule automated backups protected by military-grade encryption.

Go here to read the rest:
How Fog Computing Will Shape The Future Of IoT Applications And Cybersecurity - ISBuzz News

IBM Gives AIX Some Of The Integration Spice Of IBM i – IT Jungle

April 17, 2017 Timothy Prickett Morgan

Sometimes I just have to laugh. One of the best things about the IBM i platform, and the thing that truly separates it, aside from its sophisticated single-level storage architecture, is the fact that it is an integrated system that is easy to deploy and even easier to administer. So many functions of the system are automated that companies that don't want to hire database experts can do a very good job of coding applications and running their business with far fewer techies than other platforms require.

The same has never been said of AIX, and it certainly cannot be said of Linux when it comes to running traditional transaction processing systems. (Although I definitely concede that new container-based application systems running with Docker containers on top of Linux go a long way toward automating and securing the entire application workflow from development to testing to deployment to tweaking to redeployment to eventual retirement.) Companies deploying on AIX and Linux rarely assemble an entire application stack for transactional systems from a single vendor, although it can be done from IBM with the AIX stack coupled with PowerVM and PowerVC and DB2 and from Oracle with its eponymous Linux and database. We know that the Red Hat Enterprise Linux distribution includes MySQL, MariaDB, and MongoDB, but that is not the same thing as getting enterprise-grade support for the databases that is comparable to the operating system and middleware stack that Red Hat has brilliantly and successfully assembled and turned into a multi-billion support business. Microsoft, of course, can sell you a complete stack with Windows Server, Hyper-V virtualization, SQL Server database, and Visual Studio development tools.

We got a chuckle out of an announcement that IBM made in conjunction with an AIX business partner called Vendita that essentially creates a Unix-based AS/400-style system on the four-socket Power Systems E850C, which does not support IBM i, just like its predecessor, the Power E850. The Power E850 was rolled out in May 2015 using Power8 processors, and the Power E850C came out in October 2016 with some hardware and packaging enhancements and a focus on cloudy workloads, but no Power8+ chip. (Technically, there has been no Power8+ chip, as we have discussed in the past.) In announcement letter 117-030, IBM is partnering up with Vendita, which is based in Toledo, Ohio, to make it easier to deploy and manage database servers running the combination of AIX and Oracle, which can be a mite cranky, at least by the integration standard of the OS/400 and IBM i platform.

The offering being resold by IBM puts a layer of software called the Database Cloud Server on top of a four-socket Power E850 system with anywhere from 24 to 48 Power8 cores running at 3.65 GHz and equipped with a minimum of 512 GB of main memory. The software stack includes AIX Enterprise Edition, the top-end PowerVM Enterprise Edition logical partitioning hypervisor, and Cloud PowerVC Manager, which is IBM's homegrown implementation of the OpenStack cloud controller for AIX, Linux, and sometimes IBM i environments. The key part of the Vendita software is called Master Automation Sequencer, and it is used to provision and manage Oracle database management systems; the Vendita stack also includes add-ons to AIX such as the Git repository, the Bash shell scripting language, and the Python programming language, which is popular these days for back-end as well as front-end work. The setup does not include licenses to the Oracle databases themselves, so you have to buy them either through IBM or Oracle directly, and the flagship Oracle 12c Enterprise Edition is preferred.

This Vendita stack will be available from IBM starting April 21, and the sales pitch will be somewhat familiar to the IBM i faithful:

Yup. That's an AS/400 approach and sales pitch if I ever saw one.

There are a few interesting things about this Vendita deal. First, it doesn't support DB2 or Informix, the two databases sold by Big Blue itself. The latter not being supported by Vendita is not much of a surprise, considering IBM has not done much with Informix since it acquired it years ago, and particularly since there are rumors that IBM will soon announce the end of life of Informix products. (We can't vouch for these rumors.) If IBM was going to resell the Vendita stack, you would think it would wait until it had a DB2 variant out the door first, but perhaps it cannot wait, and perhaps the market has told IBM that what it really wants is Oracle on the database.

The second peculiar thing relates to the cost and the value of such integration of the database and the automation. IBM has always been clear that this automation is worth something, in fact worth a lot, and that is why IBM i platforms and their predecessors command a hefty premium over Windows and Linux stacks. (The gap is not always large, and sometimes it is absurdly huge, particularly with anything but the most modest Power Systems iron.) The funny bit here is that if you look at the list price for the Vendita tools on the Power E850C system, it is zero. Yup, IBM is giving it away, presumably with a reseller agreement. This, we think, sends precisely the wrong message. Clearly, the Vendita software has a cost and provides a value, and not outlining that makes it seem like it does not have a value at all. I was excited to actually see such pricing precisely because it would allow us to quantify the value of database integration and automation. But alas, no such luck.

What Vendita does say, and what we found particularly interesting, is that by using the management tools for Oracle databases that it has created, customers who might otherwise have to buy Oracle's Real Application Clusters (RAC) database clustering can get by using logical partitions and its tools on PowerVM. It adds that if you take into account the cost of onsite startup and provisioning consulting and the licensing of separate storage servers for Oracle database engines, customers might shave as much as 25 percent off the cost of an Oracle deployment.

That sounds like a pretty good reason to try to do the same thing with DB2 on AIX and Linux to us. It also sounds like IBM can make the same case with the actually integrated IBM i platform, and as we have said time and again, we surely do wish IBM was pitching the Power E850 and then the Power E850C as an IBM i platform rather than making customers choose between a high-end two-socket Power S824 and a four-socket (and much more expensive) Power E870C. Such a box might not bring in a lot of new customers to the IBM i fold, but it sure might help keep a bunch of them there rather than being pushed to a Linux/Oracle or Windows/SQL Server platform by an upper management that probably doesn't know how to quantify the differences between the platforms.

And so, we say once again: what IBM i needs to do is prove that integration has a value, one we all know is there, and quantify it and show it. Hell, even brag about it a little.


View post:
IBM Gives AIX Some Of The Integration Spice Of IBM i - IT Jungle

Alphabet’s Verily shows off health-focused smartwatch – Ars Technica UK

Alphabet's Life Sciences division, called Verily, is giving the world a peek at its health-focused smartwatch. The Google sister company introduced the "Verily Study Watch" on its blog today, calling it an "investigational device" that aims to "passively capture health data" for medical studies.

Many wearables technically capture health data with simple heart-rate sensors, but Verily's watch aims to be a real medical device. The blog post says the device can track "relevant signals for studies spanning cardiovascular, movement disorders, and other areas." The Study Watch does this by using electrocardiography (ECG) and by measuring electrodermal activity and inertial movements.

The Study Watch beams this data to Verily's cloud infrastructure for all sorts of big-data analysis. Study Watch seems to be the Verily hardware platform of the future, with the company saying the watch will be used in several studies being run by Verily and its partners. The company specifically said the watch would be used in "Baseline Study," a Verily project that aims to measure what a healthy human looks like, and the "Personalized Parkinson's Project."

With the goal of Study Watch being an unobtrusive way to collect medical data, battery life is a concern. Verily promises "a long battery life of up to one week" for the device. The "always-on" display seems to be e-ink, which is practically a requirement for any watch with a week-long battery life. Verily also gave the watch enough storage to keep "weeks' worth of raw data" encrypted on the device, removing the need to frequently sync with cloud servers. There also isn't much in the way of user features: Study Watch displays the time and date, and that's it for now. The watch is capable of getting over-the-air software updates, though, so the interface might change.

There's no word on price, as the Study Watch is "not for sale." It's just something that will be given out to participants in Verily's medical studies.

This post originated on Ars Technica

Read more:
Alphabet's Verily shows off health-focused smartwatch - Ars Technica UK

Lack of agility with Windows Server licenses hamstrings cloud hopes – TechTarget

Public cloud providers often promise limitless possibilities, including flexibility and agility that on-premises hardware can't match. But complex Windows Server licensing issues that an organization must contend with when it moves workloads from on premises to the cloud -- and back -- make the dream of hybrid cloud deployments difficult to execute.

Microsoft often touts the benefits of the cloud, but Windows Server licensing issues complicate the application portability needed for hybrid cloud. The marketing for hybrid cloud is a lot stronger than the technical and licensing realities. Until vendors remove these migration complications, and portability and decoupling of Windows Server licenses become possible, hybrid cloud challenges will continue.

Despite the talk about hyperscale and investments in not one, but two Azure portals, it's still a significant task to establish a direct connection to Azure. Organizations need to set up a standing virtual private network or ExpressRoute to create an open highway to move workloads back and forth.

Tools such as StorSimple can solve some of these connection problems, but specific workloads may have different needs. The effort it takes to move a simple three-tier application from on premises to Azure is not a simple point-and-click operation, nor is it easily programmable.

There is little permanence in public cloud. Offerings change constantly. The instance type that cost $3 an hour last year could go to $4.50 an hour this year. Or, depending on the product, it might not be available at all.

Customers who sign up with the Microsoft Products and Services Agreement program can lock in prices for one, two or three years. But organizations that are not part of the program are at the whim of the market.

While the cost of cloud services has been trending downward, that may not always be the case. There are some more hardware-intensive workloads -- such as those that require many one-to-one relationships with discrete physical graphical processing units -- that could become more expensive if the provider can't oversell the hardware capacity.


Another problem with hybrid cloud is there's no solid way to track licenses as workloads bounce from the on-premises data center to the cloud provider's platform. That means enterprises can pay twice for licenses. An organization will have licenses that are held perpetually or come from a subscription-oriented volume license agreement -- these would cover normal usage. Then, depending on the cloud provider, there is the cost to run instances of software and services that could also include the incremental and proportional cost of those same licenses.

For a company that uses this hybrid cloud capability once a year, it may not represent a substantial expense. But a business that plans to send stuff up and down the pipe frequently could be hit with a significant financial burden.

For organizations that subscribe to the Software Assurance license option, which costs $3,080 per 16-core server, Microsoft has some tools for license portability. However, they are tuned for use with Azure public cloud services. To my knowledge, there are no technical limitations to prevent the use of that benefit on the Google Compute Engine service or on Amazon's Elastic Compute Cloud, but it certainly would not be a seamless exercise -- and seamlessness is the whole point of being flexible and agile.

How do you track the disposition of licenses that the organization permanently moves from one place or the other? If a workload starts in Azure and the organization decides to put that workload in a server closet, how should they acquire the proper on-premises license? How does an IT team dispose of the Azure instance?

The same principle applies in the reverse scenario: How do you track Windows Server licenses for workloads that move from on-premises servers into the cloud and use a runtime license that was included in the service cost? You could reassign the license elsewhere, or, if the business is on a subscription-based volume license agreement, reduce usage when true-up time comes.

It is imperative to have a way to track this type of license movement; traditional license management methods need to be extended and modified to keep up with the times. Microsoft is big on license audits now, so organizations need to ensure they have up-to-date paperwork for licenses to avoid trouble.
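As a sketch of what such extended license tracking might look like, the snippet below models a minimal ledger that records where each license currently lives and keeps an audit trail as workloads move between on premises and the cloud. Everything here -- the class, the location labels, the license IDs -- is a hypothetical illustration, not any Microsoft or vendor tooling:

```python
from dataclasses import dataclass, field

@dataclass
class LicenseLedger:
    """Hypothetical tracker for where each Windows Server license is in use."""
    locations: dict = field(default_factory=dict)  # license_id -> current location
    history: list = field(default_factory=list)    # audit trail for true-up time

    def assign(self, license_id, location):
        """Record a license as active in a given location."""
        self.locations[license_id] = location
        self.history.append((license_id, location))

    def move(self, license_id, new_location):
        """Move a license and return where it was previously assigned."""
        previous = self.locations.get(license_id)
        self.assign(license_id, new_location)
        return previous

ledger = LicenseLedger()
ledger.assign("WS2016-STD-001", "on-prem")
previous = ledger.move("WS2016-STD-001", "azure")  # workload migrated to the cloud
```

Even a trivial record like this answers the audit questions above: where a license is now, and every place it has been.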

Differences in Windows Server 2016 editions require study

Change in Windows Server licensing may stall migrations

Hybrid Use Benefit may draw enterprises to Azure

Read more here:
Lack of agility with Windows Server licenses hamstrings cloud hopes - TechTarget

Prepare your server fleet for a private cloud implementation – TechTarget

Private cloud services promise flexibility and scalability, while allowing organizations to maintain full control of their enterprise data centers. It's a compelling goal -- but private cloud implementation can be challenging and frustrating.

The path from a traditional data center to a private cloud starts at the lowest levels of the infrastructure. IT leaders must evaluate their current server fleet to ensure that each system offers the features needed to support virtualization and the subsequent cloud stack. Here are some considerations that can help sanity check whether your data center infrastructure is ready for private cloud implementation.

It's important to understand individual processor technologies and properly enable each feature before you deploy hypervisors and, eventually, the private cloud stack. For example, hypervisors will invariably require hardware virtualization support through processor extensions, namely Intel Virtualization Technology (Intel VT) and AMD Virtualization (AMD-V). These extensions typically include support for the second level address translation required to map virtual memory addresses to physical memory at processor hardware speeds.


Enable AMD No eXecute (NX) and the Intel eXecute Disable (XD) bits for processors, which will mark memory pages to prevent buffer overflow attacks and other malicious software exploits. You can typically enable processor extensions and NX/XD bits through the system BIOS or the Unified Extensible Firmware Interface (UEFI).
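On a Linux host, one quick sanity check for these features before installing a hypervisor is to read the CPU flag list from /proc/cpuinfo. The sketch below assumes the standard Linux flag names (vmx/svm for the Intel and AMD virtualization extensions, ept/npt for second level address translation, nx for No eXecute); it is an illustration, not vendor tooling:

```python
def check_cpu_features(cpuinfo_text):
    """Report which virtualization-related CPU flags are present."""
    flags = set()
    for line in cpuinfo_text.splitlines():
        if line.startswith("flags"):
            flags.update(line.split(":", 1)[1].split())
    return {
        "hw_virtualization": bool(flags & {"vmx", "svm"}),  # Intel VT-x / AMD-V
        "slat": bool(flags & {"ept", "npt"}),               # second level address translation
        "nx": "nx" in flags,                                # No eXecute / eXecute Disable
    }

# On a real host: check_cpu_features(open("/proc/cpuinfo").read())
sample = "flags : fpu vmx ept nx sse2"
features = check_cpu_features(sample)
```

Note that a missing flag can mean either unsupported hardware or a feature left disabled in BIOS/UEFI, so a failed check should send you back to the firmware settings first.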

Consider the processor core/thread count for each server. Hypervisors, such as ESXi 6.0, demand a host server with at least two processor cores, but this is a bare minimum system requirement. Additional processor cores will vastly expand the number of VMs and workloads that each server can handle, and you can treat each additional processor thread as a separate core. For example, an AMD Opteron 6200 Series processor can support VMware ESXi 6.5 with eight cores and a total of 16 threads; an Intel Xeon E5-2600 v4 Series processor offers 24 cores and a total of 48 threads.
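As a back-of-the-envelope illustration of why core and thread counts matter, the sketch below estimates host vCPU capacity by treating each thread as a core and applying a vCPU overcommit ratio. The 4:1 ratio is an assumption for illustration, not a VMware recommendation:

```python
def vcpu_capacity(cores, threads_per_core, overcommit=4):
    """Estimate how many vCPUs a host can offer.

    Treats each hardware thread as a schedulable core and applies an
    illustrative overcommit ratio (real ratios depend on workload profile).
    """
    logical_processors = cores * threads_per_core
    return logical_processors * overcommit

# Comparing the two example processors from the text at 4:1 overcommit:
opteron = vcpu_capacity(cores=8, threads_per_core=2)   # 16 threads -> 64 vCPUs
xeon = vcpu_capacity(cores=24, threads_per_core=2)     # 48 threads -> 192 vCPUs
```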

Finally, consider the availability of UEFI on the server. UEFI is a more recent type of system firmware -- a kind of advanced BIOS -- that allows more flexible boot choices. For example, UEFI allows servers to boot from hard disk drives, optical discs and USB media -- all larger than 2 TB. However, it's important to evaluate the boot limitations of the hypervisor. As an example, ESXi 6.0 does not support network booting or provisioning with VMware Auto Deploy under UEFI -- those features still require a traditional BIOS. If you change from BIOS to UEFI after you install a hypervisor, it might cause boot problems on the system. Consequently, it's a good idea to identify the firmware type when you evaluate each server's processors.

Every VM or container exists and runs in a portion of a server's physical memory space, so memory capacity plays a critical role in server virtualization and in private cloud implementation. Hypervisors, such as ESXi, typically recommend a system with at least 8 GB to host the hypervisor and allow capacity for at least some VMs in production environments. Private cloud stacks such as OpenStack are even lighter, recommending only 2 GB for the platform -- each VM will demand more memory.

However, such memory recommendations are almost trivial when compared to the memory capacity of modern servers. As an example, a Dell R610 rackmount server is rated to 192 GB, while a Dell R720 is rated to 768 GB of memory capacity. This means existing enterprise-class servers already possess far more than the required minimum amount of memory needed for virtualization and a private cloud implementation. The real question becomes: how many VMs or containers do you intend to operate on the server, and how much memory will you provision to each instance? These considerations can vary dramatically between organizations.
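A rough way to turn these numbers into a sizing estimate is to subtract the hypervisor's memory footprint from the server's capacity and divide by the memory you plan to provision per VM. The 8 GB hypervisor figure comes from the text; the 4 GB-per-VM figure below is an illustrative assumption:

```python
def max_vm_count(server_ram_gb, hypervisor_gb=8, per_vm_gb=4):
    """Rough VM count from physical memory.

    The per-VM allocation is an illustrative assumption; real provisioning
    varies dramatically between organizations and workloads.
    """
    return (server_ram_gb - hypervisor_gb) // per_vm_gb

# The two example servers from the text, assuming 4 GB per VM:
r610 = max_vm_count(192)   # Dell R610 rated to 192 GB
r720 = max_vm_count(768)   # Dell R720 rated to 768 GB
```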

As you virtualize, and place more workloads on, physical servers, network utilization increases dramatically. Network limitations can cause contention between workloads and result in network bandwidth bottlenecks that can impair the performance and stability of other workloads. This can be particularly troublesome during high-bandwidth tasks like VM backups, especially when multiple VMs attempt the same high-bandwidth tasks simultaneously.

This makes adequate bandwidth and network architecture choices critical on the road to private cloud implementation. A hypervisor, such as ESXi, typically demands at least one Gigabit Ethernet (GbE) port. Although a faster Ethernet port, such as 10 GbE, can alleviate bandwidth bottlenecks, it is often preferable to deploy two or more GbE ports instead. Multiple physical ports present several important benefits. For example, you can combine multiple GbE ports to aggregate the bandwidth of slower, less expensive network adapters and cabling infrastructure. This also builds resilience, since traffic on a failed port at the server or corresponding switch port can fail over to another port.
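The arithmetic behind port teaming is simple but worth making explicit: aggregate bandwidth scales with the number of active ports, and a single port failure degrades rather than severs connectivity. A small sketch:

```python
def effective_bandwidth_gbps(ports, per_port_gbps=1, failed_ports=0):
    """Aggregate bandwidth of a NIC team, including degraded scenarios."""
    active_ports = max(ports - failed_ports, 0)
    return active_ports * per_port_gbps

# Four teamed GbE ports: 4 Gbps nominal, 3 Gbps after one port failure --
# degraded service rather than a hard outage.
nominal = effective_bandwidth_gbps(4)
degraded = effective_bandwidth_gbps(4, failed_ports=1)
```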

Storage is another core attribute of virtualization, so pay close attention to issues like storage capacity. A hypervisor like ESXi typically needs about 10 GB of storage divided between a boot device -- which creates a VMFS volume -- and a scratch partition on the boot device. Private cloud platforms like OpenStack recommend at least 50 GB of disk space. The real capacity issue depends on the number of VMs and the amount of storage you allocate to each VM instance. An environment that uses a few fixed VM disk images may need less capacity than an environment that deploys many different VM images with various storage requirements. As a rule, 1 TB should be adequate for a typical virtualized server.
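Using the figures above, a rough capacity estimate just sums the hypervisor footprint, the cloud platform's recommendation and the per-VM allocations. The 10 GB (ESXi) and 50 GB (OpenStack) figures come from the text; the 40 GB-per-VM allocation is an illustrative assumption:

```python
def required_storage_gb(vm_count, per_vm_gb, hypervisor_gb=10, platform_gb=50):
    """Rough local-capacity estimate: hypervisor + platform + VM disks.

    Per-VM disk size is whatever you choose to allocate; thin provisioning
    and shared images would lower the real figure.
    """
    return hypervisor_gb + platform_gb + vm_count * per_vm_gb

# 20 VMs at 40 GB each stays comfortably within the 1 TB rule of thumb:
total = required_storage_gb(vm_count=20, per_vm_gb=40)
```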

Local storage capacity is typically not a gating issue with modern servers and storage equipment. In actual practice, however, enterprise servers rarely depend on local per-server storage, and instead use shared storage systems. In this case, the primary server concern may be adequate local storage to boot the system, deferring to a storage area network (SAN) for VM and workload data retention. This means the server should include adequate SAN support, such as two or more dedicated Ethernet ports (e.g., iSCSI or FCoE) or Fibre Channel ports for redundant SAN connectivity. Disks should always provide some level of RAID support -- RAID 5 or even RAID 6 can offer strong data protection and good rebuild performance to hot spare disks.
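The usable-capacity tradeoff between RAID 5 and RAID 6 follows directly from their parity overhead -- one disk's worth of capacity for RAID 5, two for RAID 6. A quick sketch of the standard formulas:

```python
def raid_usable_tb(disks, disk_tb, level):
    """Usable capacity for the parity RAID levels mentioned above."""
    if level == 5:
        return (disks - 1) * disk_tb   # one disk's worth of parity
    if level == 6:
        return (disks - 2) * disk_tb   # two disks' worth of parity
    raise ValueError("only RAID 5 and RAID 6 are modeled here")

# Six 2 TB disks: RAID 5 yields 10 TB usable, RAID 6 trades 2 TB
# for tolerance of a second simultaneous disk failure.
r5 = raid_usable_tb(6, 2, 5)
r6 = raid_usable_tb(6, 2, 6)
```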

As more VMs coexist on fewer physical servers, a server fault or failure can impact more VMs, which can be disruptive. As a business embraces virtualization and moves toward private cloud implementation, the underlying server hardware should include an array of resiliency features that can forestall failures.

Critical server hardware should include redundant power supplies and intelligent, firmware-based self-diagnostics that can help technicians identify and isolate faults. Modern servers typically include a baseboard management controller capable of system monitoring and management. If a server fails, it may be crucial to remove and replace the failed unit quickly.

Inside the server, select and enable memory resilience features such as advanced error correcting code, which catches single- and multi-bit errors; memory mirroring; hot spares, which can swap in a backup DIMM if one DIMM fails; and memory scrubbing -- sometimes called demand and patrol scrubbing -- which searches for and addresses memory errors on demand or at regular intervals.

Any capable configuration management tool or framework can summarize and report many of these attributes for you directly from the local configuration management database. This can ease the time-consuming and error-prone manual review of physical systems and hypervisors. But a review of servers and hypervisors is really just the start of a private cloud implementation -- they form the critical cornerstone for other components, like storage, networks and software stacks, within the infrastructure.

OpenStack support lifecycles grow for the enterprise

The on-premises vs. cloud computing battle continues

Don't label all infrastructure as a commodity

Continue reading here:
Prepare your server fleet for a private cloud implementation - TechTarget

Alphabet’s Verily shows off health-focused smartwatch – Ars Technica

Enlarge / The Verily Study Watch, strategically photographed to not show how thick it is.

Alphabet's Life Sciences division, called Verily, is giving the world a peek at its health-focused smartwatch. The Google sister company introduced the "Verily Study Watch" on its blog today, calling it an "investigational device" that aims to "passively capture health data" for medical studies.

Many wearables technically capture health data with simple heart-rate sensors, but Verily's watch aims to be a real medical device. The blog post says the device can track "relevant signals for studies spanning cardiovascular, movement disorders, and other areas." The Study Watch does this by using electrocardiography (ECG) and by measuring electrodermal activity and inertial movements.

The Study Watch beams this data to Verily's cloud infrastructure for all sorts of big-data analysis. Study Watch seems to be the Verily hardware platform of the future, with the company saying the watch will be used in several studies being run by Verily and its partners. The company specifically said the watch would be used in "Baseline Study," a Verily project that aims to measure what a healthy human looks like, and the "Personalized Parkinson's Project."

Since the Study Watch is meant to be an unobtrusive way to collect medical data, battery life is a concern. Verily promises "a long battery life of up to one week" for the device. The "always-on" display seems to be e-ink, which is practically a requirement for any watch with a week-long battery life. Verily also gave the watch enough storage to keep "weeks' worth of raw data" encrypted on the device, removing the need to frequently sync with cloud servers. There also isn't much in the way of user features: Study Watch displays the time and date, and that's it for now. The watch is capable of getting over-the-air software updates, though, so the interface might change.

There's no word on price, as the Study Watch is "not for sale." It's just something that will be given out to participants in Verily's medical studies.

More:
Alphabet's Verily shows off health-focused smartwatch - Ars Technica