Category Archives: Cloud Servers
Overspending in the cloud: Lessons learned – ZDNet
One of the reasons virtualization (the precursor to cloud computing) gained popularity in the early 2000s is that companies had too many servers running at low utilization. The prevailing wisdom was that every box needed a backup and under-utilization was better than maxing out compute capacity and risking overload.
The vast amounts of energy and money wasted on maintaining all this hardware finally led businesses to datacenter consolidation via virtual machines, and those virtual machines began migrating off-premises to various clouds.
The problem is, old habits die hard. And the same kinds of server sprawl that plagued physical datacenters 15 years ago are now appearing in cloud deployments, too.
According to a recent survey from RightScale, 35 percent of cloud spending is wasted via VM instances that are over-provisioned and not optimized. The report found that most enterprises run their virtual instances 24/7, many VMs are running at less than 40 percent of CPU and memory capacity, and old backup snapshots and other unattached data repositories are clogging up cloud storage resources.
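A minimal sketch of what acting on those survey findings might look like: flagging instances whose average CPU and memory utilization fall below the roughly 40 percent threshold the report cites. The instance names, metrics, and costs here are illustrative placeholders, not RightScale's methodology; in practice the numbers would come from your cloud provider's monitoring API or a billing export.

```python
# Sketch: flag VM instances running below a utilization threshold (~40%),
# ordered by monthly cost so the most expensive waste surfaces first.
# All data below is illustrative, not from the RightScale report.

from dataclasses import dataclass

@dataclass
class Instance:
    name: str
    avg_cpu_pct: float       # average CPU utilization over the billing period
    avg_mem_pct: float       # average memory utilization over the billing period
    monthly_cost_usd: float

def find_overprovisioned(instances, threshold_pct=40.0):
    """Return instances below the utilization threshold, most expensive first."""
    idle = [i for i in instances
            if i.avg_cpu_pct < threshold_pct and i.avg_mem_pct < threshold_pct]
    return sorted(idle, key=lambda i: i.monthly_cost_usd, reverse=True)

if __name__ == "__main__":
    fleet = [
        Instance("analytics-worker-01", 12.5, 30.0, 410.00),
        Instance("web-frontend-02", 65.0, 72.0, 180.00),
        Instance("old-backup-target", 3.0, 8.0, 95.00),
    ]
    for inst in find_overprovisioned(fleet):
        print(f"{inst.name}: cpu={inst.avg_cpu_pct}% mem={inst.avg_mem_pct}% "
              f"costing ${inst.monthly_cost_usd:.2f}/month")
```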
It turns out that the ease and elasticity of the cloud are a double-edged sword. When spinning up new instances is effortless, who has the discipline to keep track of and sunset resources when they're no longer needed?
Expensive lesson
This was one of the lessons learned at Ecolab, a global provider of water, hygiene, and energy technologies and services. Ecolab works with large-scale facilities around the world, monitoring and managing water systems using a vast network of sensors and probes. A team of some 60 developers works on mining this data for performance insights and trendspotting.
Craig Senese, Director of Analytics and Development at Ecolab, says the transition from on-premises datacenter to cloud was critical, as physical resources were reaching their limits. Ecolab was already using Microsoft technologies to manage its infrastructure and analytics, so the Microsoft Azure Cloud was a logical fit.
Once Azure was deployed, however, developers began to leverage resources without focusing on optimization and cost-efficiency.
"I think the biggest lesson that we've learned to this point is that it's a different model," Senese said. "You've gone from having our own servers, having our own datacenter working through IT to get the resources you need, to basically carte blanche for our developers where they can add and remove resources as needed. The lesson learned there has been that we really need to make sure that everyone is educated on our plan as an architecture, our plan as a resource model, because it's very easy to spend. We need to make sure that we control that and we're not spinning up resources uncontrollably.
"We have a large team, and making sure everyone is on the same page with the strategy of how we want to deploy in the cloud is important."
Being new to cloud computing, Senese and his team weren't sure where and how tweaks could be made to optimize Ecolab's cloud usage and efficiency. Fortunately, Microsoft reps helped assess the environment and workloads, then build out a plan.
"We started by working with Microsoft to see where we could optimize, and they were great in helping us understand where we could optimize our spend," Senese said. "We do a lot of compute. We do a lot of data analytics, and we wanted to see whether we can optimize spending, because we were new to this space."
Once the team found out more about Microsoft's strategies and created a resource model that could support existing workloads and scale as needed, they were able to spread the word among other areas of the business.
To find out more about Ecolab's setup and how Microsoft's experts can help you guide the discussion forward, please visit zdnet.com/Microsoft-cloud.
How safe is iMessage in the cloud? – Macworld
Examining privacy and security in the world of Apple
Of all the problems iMessage has, Apple says it plans to solve a persistent one: having access to all your conversations on every device, instead of messages and data lying scattered across all the Macs, iPhones, and iPads you use. But is this the right problem to solve?
Apple's Craig Federighi explained at the 2017 Worldwide Developers Conference that iMessage will be stored in iCloud with end-to-end encryption, but provided no other details. Later, he mentioned that Siri training will sync across iCloud, instead of being siloed on each of your Apple devices, and that training and marking faces in the Photos People album will do the same, also with end-to-end encryption.
Despite that encryption promise, this concerns me. It's better to have the least amount of personal and private information pass through other systems, instead of directly between two devices. It's especially good to have the least amount of private data stored elsewhere, unless the encryption for that data is firmly under your control or fully independently vetted.
That storage issue is particularly problematic with iMessage. While Apple's design for at-rest storage could be terrific, iMessage itself is way behind its competition in providing an effective, modern encryption model. Notably, if a party sniffs and records encrypted iMessage data from a privileged position and a later flaw allows the recovery of an encryption key, all previously encrypted data can be unlocked. The way to prevent that is forward secrecy, which the Signal protocol from Open Whisper Systems employs in the Signal app and in WhatsApp.
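To make the forward secrecy idea concrete, here is a minimal sketch (this is not Apple's iMessage protocol and is a heavy simplification of what Signal actually does): each session is keyed from fresh, throwaway key pairs, so a private key stolen later cannot unlock traffic recorded earlier. It assumes the third-party Python "cryptography" package.

```python
# Illustrative sketch of forward secrecy via ephemeral key exchange.
# NOT Apple's protocol; only a simplification of the idea behind Signal's design.

from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

def new_ephemeral():
    """Generate a throwaway key pair used for one session only."""
    priv = X25519PrivateKey.generate()
    return priv, priv.public_key()

def derive_session_key(own_private, peer_public):
    """Derive a symmetric session key from a Diffie-Hellman shared secret."""
    shared = own_private.exchange(peer_public)
    return HKDF(algorithm=hashes.SHA256(), length=32, salt=None,
                info=b"demo session").derive(shared)

# Both parties generate fresh key pairs for this session only.
alice_priv, alice_pub = new_ephemeral()
bob_priv, bob_pub = new_ephemeral()

alice_key = derive_session_key(alice_priv, bob_pub)
bob_key = derive_session_key(bob_priv, alice_pub)
assert alice_key == bob_key  # both sides hold the same one-off session key

# Because alice_priv and bob_priv are discarded after the session, a key
# compromised later cannot decrypt traffic recorded while the session was live.
```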
Craig Federighi explains how Siri training syncs among devices using end-to-end encryption.
While I've queried Apple for more details on how all this will work, it's likely they won't provide any until closer to the OS updates or even afterwards. If you're installing developer or public betas, you should consider how this might affect you without having all the details to hand.
Apple designed its iCloud Keychain sync in an admirable way. It uses a zero-knowledge approach, which is the gold standard for hands-off data transfer and storage. With a cloud-storage system like Dropbox, or the way Apple handles email, contacts, calendars, photos, and other iCloud data, all information has an encryption overlay while in transit and another form of encryption at rest on the cloud servers.
However, that at-rest encryption lies under the control of the company offering the service. It possesses all the keys needed to lock your data on arrival and unlock it to transmit it back. Thus, it's susceptible to internal misuse, hacking, legitimate government warrants, and extralegal government intrusion.
With iCloud Keychain and other similar syncing (such as that used by 1Password and LastPass, which I discussed in a recent column), a secret gets generated by software running only on client devices, and that secret is stored only there. The company that runs the sync or storage service never has possession. Data is encrypted by the mobile or desktop OS and transmitted.
When multiple devices need access to the same pool of data, systems typically use device keys to encrypt a well-protected encryption key that in turn protects the data. (This is the approach used as far back as PGP in the 1990s.) That way, there's a process to enroll and remove devices from the pool of legitimate ones that can access the actual data encryption key.
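A minimal sketch of that general key-wrapping pattern, assuming symmetric Fernet keys as stand-ins for real per-device keys; it is emphatically not Apple's iCloud Keychain implementation, just the shape of the idea: one data encryption key protects the synced data, and each enrolled device holds only a wrapped copy of that key.

```python
# Sketch of key wrapping (envelope encryption) with per-device keys.
# Fernet keys stand in for real device keys; NOT Apple's actual design.
# Requires the third-party "cryptography" package.

from cryptography.fernet import Fernet

data_key = Fernet.generate_key()                  # the DEK protecting the data
ciphertext = Fernet(data_key).encrypt(b"very personal message history")

# Enroll two devices: each gets the DEK wrapped under its own device key.
device_keys = {"iphone": Fernet.generate_key(), "macbook": Fernet.generate_key()}
wrapped_deks = {name: Fernet(key).encrypt(data_key)
                for name, key in device_keys.items()}

# A device unwraps the DEK with its own key, then decrypts the data.
dek_on_iphone = Fernet(device_keys["iphone"]).decrypt(wrapped_deks["iphone"])
print(Fernet(dek_on_iphone).decrypt(ciphertext))

# Removing a device from the pool means deleting its wrapped copy (and, to be
# thorough, rotating the DEK and re-wrapping it for the remaining devices).
wrapped_deks.pop("macbook")
```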
I fully expect this is what Apple is using: an expansion of iCloud Keychain to more kinds of data. iCloud Keychain has a sometimes funky enrollment process that, when it hiccups, can leave users adrift. I receive email every several weeks from those who have iOS iCloud Keychain errors that they can't fix or permanently dismiss, even by un-enrolling and re-enrolling in that iCloud option.
But it's the right way to do it, when you consider the intensely personal information in text messages, Siri training data, and Photos facial recognition and tagging. Imagine someone gaining full access to all of that in a form they could decode. (We're also not yet sure whether that encrypted information will be created in such a way that it's not useful without source data on devices.)
It's reasonable to worry about centrally stored and synced data, because it represents such a weak point in data protection. Given that Apple is stepping up the kinds of data you can sync and store, it should also be upgrading its under-the-hood encryption techniques and disclosing more information about how they work. And it should submit its work to external independent auditing and provide more transparency to allow outsiders to monitor for government or third-party intrusion.
All of this can be done without compromising security; all of it would, in fact, dramatically improve the integrity of your data against outside examination. Apple's stance on keeping our information unavailable to it is admirable. But it needs to give more assurances that nobody else could possibly access it either.
Cloud security: The castle vs open-ended city model – Cloud Pro
With cloud security, the boundary of the system stops being the edge of your physical network and becomes the individuals who use it.
When you see major breaches of either cloud services or corporate networks, it's not usually the external boundaries of the organisation that have been compromised; it's more often the identity of an individual.
The Verizon Data Breach Investigations Report 2017 shows that security is continually having to change to keep up with fluctuations in the threat landscape. With 81% of hacking-related breaches leveraging either stolen or weak passwords, it's no wonder that identity is the new focal point.
Changing boundaries
How are the boundaries changing for organisations in terms of security? In the last ten years, security boundaries have changed so much that they have become invisible or, at the very least, barely recognisable. In this redefined state, security now starts with identity, authentication, and account security.
Adoption of cloud-based services is partly to blame, according to Richard Walters, CTO of CensorNet, as unstructured data now resides in cloud-storage applications.
"Work is no longer a place. It's an activity," he says. "Users have an expectation of instant, 24/7 access to apps and data regardless of location, using whichever device is convenient and close to hand. Just when we thought we'd got a handle on things, along came millions of IoT devices that connect to cloud servers. The identity of things is becoming as important as the identity of human beings."
IT's shift beyond the physical boundaries of a company means the goalposts have moved, with security focusing on protecting applications, data and identity instead of simply guarding entrances and exits to the network.
This radically changes the role of the traditional firewalls, says Wieland Alge, EMEA general manager at Barracuda Networks.
"For a while, experts predicted that dedicated firewalls would eventually be absorbed by network equipment and become a feature of a router. Since we build infrastructures bottom-up now, everything starts with users and their access to applications, regardless of where they are physically; the firewalls not only need to be user- and application-aware, but also to show the same agility and deployment flexibility as the respective entities they protect."
The castle vs the open city
Is security in the modern digital world like an open city, as opposed to traditional corporate computing, which is more like a castle?
A castle's spiral stairs turn clockwise to give an advantage to right-handed, sword-wielding defenders. According to Memset's head of security, Thomas Owen, that kind of subtlety and defence in depth (plus the motte and bailey, moat, keep, etc.) are where the state of the cyber-security art now lies.
"The increase in adoption of identity federation or outsourced/crowdsourced Security-as-a-Service capabilities, such as Tenable.io or HackerOne, speaks of democratisation and an increase in trust of third parties, but if you're lazy on patching or have flabby access control in place you're still going to get hacked," he says.
"Open cities still have rings of trust, policers/enforcers, strictly private spaces, laws, etc. We've not been in a place where a single castle wall is sufficient for decades."
Nigel Hawthorn, chief European spokesperson at Skyhigh Networks, says that another issue with the castle-based cybersecurity approach is that there are a lot of keys to secure.
"Each employee who has access to networks is a potential threat. They could begin acting maliciously or have their details stolen by cybercriminals, who then have the keys to the kingdom. With the number of credential thefts ever increasing, no company that utilises a castle approach is truly safe," he says.
Stopping hackers acquiring identities
Hawthorn says that businesses must become better at detecting when an employee's credentials have been hijacked.
He says the issue is that many still rely on a single authentication step, with access granted on the basis of having a company email address and password. For example, the heist on the Central Bank of Bangladesh, in which $81 million was stolen, took place after hackers obtained the SWIFT log-in credentials of a few employees. Had the bank had more stringent identity checks in place, the attack might have been mitigated.
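The article doesn't describe what those checks would be, but a minimal sketch of one common option is requiring a time-based one-time password on top of the username and password, so a stolen password alone is no longer enough. This sketch assumes the third-party "pyotp" package; the function and variable names are illustrative.

```python
# Sketch: a second authentication factor (TOTP) layered on top of a password.
# Uses the third-party "pyotp" package; names below are illustrative only.

import pyotp

# Enrolment: generate a per-user secret, store it server-side, and have the
# user load it into an authenticator app (typically via a QR code).
user_totp_secret = pyotp.random_base32()

def login(password_ok: bool, submitted_code: str) -> bool:
    """Grant access only if both the password and the current TOTP code match."""
    if not password_ok:
        return False
    return pyotp.TOTP(user_totp_secret).verify(submitted_code)

# A stolen password alone fails; password plus the current code succeeds.
print(login(password_ok=True, submitted_code="123456"))                   # almost certainly False
print(login(password_ok=True,
            submitted_code=pyotp.TOTP(user_totp_secret).now()))           # True
```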
The best approach is behavioural analytics, which works in a similar way to how credit card companies detect and prevent fraud, according to Barry Shteiman, director of Threat Research at Exabeam.
It creates a baseline of normal activity for each individual person, then compares each new activity against the baseline. In the same way that Visa would block a UK-based consumer from buying a TV in Beijing for the first time, corporations will detect hackers trying to use valid but stolen credentials.
He says that one customer, a national retailer, suddenly saw an employee in the HR department attempt to access 1,500 point-of-sale systems in its retail stores.
"She'd never done it before. In fact, no one in her department had done so before. It turns out that she was on holiday and her corporate credentials had been stolen and were being used by a hacker to steal credit card info. The password was valid, so the question wasn't 'can she access this system?' but instead 'should she be accessing this system?'," says Shteiman.
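A minimal sketch of the baseline-and-compare idea Shteiman describes (not Exabeam's actual product): record which systems each user, and each department, normally touches, then flag access that falls outside both baselines, exactly the pattern that would catch an HR credential reaching point-of-sale systems.

```python
# Sketch of behavioural baselining: flag access to systems neither the user
# nor anyone in their department has touched before. Illustrative only.

from collections import defaultdict

user_baseline = defaultdict(set)   # user -> systems they have accessed before
dept_baseline = defaultdict(set)   # department -> systems anyone in it has used

def observe(user, department, system):
    """Record a normal, historical access event."""
    user_baseline[user].add(system)
    dept_baseline[department].add(system)

def is_anomalous(user, department, system):
    """True if neither the user nor their department has used this system."""
    return (system not in user_baseline[user]
            and system not in dept_baseline[department])

# Historical behaviour: HR staff use HR systems.
observe("alice", "HR", "hr-payroll")
observe("bob", "HR", "hr-benefits")

# A valid-but-stolen credential suddenly reaching point-of-sale systems stands out.
print(is_anomalous("alice", "HR", "pos-store-0042"))  # True  -> raise an alert
print(is_anomalous("alice", "HR", "hr-payroll"))      # False -> normal activity
```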
Evolving security models
Over the next few years, security models will need to be updated to include cloud-based monitoring and controls, says Jeremy Rasmussen, director of cybersecurity at Abacode.
"Typically, there is a shared security responsibility for systems hosted in the cloud. The cloud service provider is responsible for security of the underlying infrastructure. However, protecting anything stored on that infrastructure, from the operating system up to the applications, is the responsibility of the individual organisation," he says.
Hawthorn says that as the cloud and applications continue to become more vital to operations, businesses must begin to view them as an extension of the firm.
Data controls need to be enforced at the cloud application level, as opposed to stopping at the business network perimeter. Companies and their cloud third parties are being forced into a shared responsibility model due to GDPR, so there will be a greater focus on protecting data wherever it is in its journey.
Telefónica launches data centre in Lima, Peru – The Stack
Telefónica, the global broadband and telecom provider, has opened Phase I of a new cloud data center in central Lima, Peru. Rather than build a new facility, the company chose to repurpose a central office in an effort to reduce capital expenditures and reuse assets, while reducing construction timelines.
The new cloud data center, located in the Lince district of Lima, will contain 6,000 square meters of floor space, built in three phases. The final product will have 584 cabinets with 2.6 megawatts of total IT power. 100 cabinets are currently available with the completion of Phase I of the project.
The new facility was built to conform to the Uptime Tier III standard, with 99.98% availability and online maintenance of equipment.
Telefónica selected Huawei to provide infrastructure integration services, helping to design and construct the new data center inside the previous central office building.
Huawei managed construction personnel and subcontractors, delivering HVAC, power, and cabling along with five additional systems. Huawei was also responsible for optimizing the construction process, delivering Phase I of the project in just five months.
The completion of the Lima data center helps to cement Telefónica's presence as a premier cloud provider in the Peruvian market. The company has noted interest from financial, transportation, and municipal enterprises that are drawn to the project in part due to its Tier III-certified high reliability.
Telefónica has been expanding its presence in the Central and South American data center market, having recently opened facilities in Santiago, Chile; São Paulo, Brazil; and Mexico City. The Lima data center will provide OpenCloud and Cloud Server services to Peruvian customers.
Recently, Telefónica announced a new cloud solution for customers in Europe and the Americas. Known as VDC, or Virtual Data Center 3.0, the new solution is targeted at helping medium to large-scale enterprises move workloads to the cloud. The VDC 3.0 solution provides customers with virtualization technology from VMware, delivered on Huawei servers in Telefónica data centers. VDC 3.0 is currently available to customers in the U.S., Spain, and the UK, as well as throughout Central and South America.
Telefónica also announced plans to launch Cloud Foundation, a new corporate cloud solution aimed at providing a fully hybrid environment for enterprise customers. Cloud Foundation is expected to be available by the end of 2017.
Packet, Qualcomm to Host World’s First 10nm Server Processor in Public Cloud for Developers – Data Center Knowledge
Packet, a bare metal cloud for developers, announced that it will collaborate with Qualcomm Datacenter Technologies, Inc. to introduce the latest in server architecture innovation on the 48-core Qualcomm Centriq 2400 processor.
The New York City-based company is currently showcasing its consumable cloud platform at Red Hat's AnsibleFest conference in London, demonstrating open source tools such as Ansible, Terraform, Docker and Kubernetes, all running on Qualcomm Datacenter Technologies ARM architecture-based servers.
The series of joint efforts will continue at HashiConf (Austin), Open Source Summit North America (Los Angeles), and AnsibleFest (San Francisco).
"We believe that innovative hardware will be a major contributor to improving application performance over the next few years. Qualcomm Datacenter Technologies is at the bleeding edge of this innovation with the world's first 10nm server processor," said Nathan Goulding, Packet's SVP of Engineering. "With blazing-fast innovation occurring at all levels of software, the simple act of giving developers direct access to hardware is a massive, and very timely, opportunity."
Packet's proprietary technology automates physical servers and networks to provide on-demand compute and connectivity, without the use of virtualization or multi-tenancy. The company, which supports both x86 and ARMv8 architectures, provides a global bare metal public cloud from locations in New York, Silicon Valley, Amsterdam, and Tokyo.
"Our collaboration with Packet is the first step of a shared vision to provide an automated, unified experience that will enable users to access and develop directly on the Qualcomm Centriq 2400 chipset," noted Elsie Wahlig, director of product management at Qualcomm Datacenter Technologies, Inc. "We're thrilled to work with Packet to engage with more aspects of the open source community."
While an investment by SoftBank accelerated the company's access to developments in the ARM server ecosystem, Packet has been active in the developer community since its founding in 2014.
Egenera Leverages Acronis for Managed Cloud Backup Services – ChannelE2E
by Ty Trumbull Jun 23, 2017
Egenera is leveraging Acronis Backup Cloud as part of its Xterity wholesale managed cloud service. Through the agreement, Egenera will provide training, management tools, and marketing collateral to help its service provider partners meet customer demands.
When we previously looked at Egenera, the company had expanded its Xterity offering by adding a CloudMigrate service for its partners. We were curious to see how partners would react, given steep competition from the likes of Amazon, Microsoft, and Google.
The company appears to be doubling down on its business model by adding Acronis' data protection solution to its network. The service is designed specifically for MSPs, web hosting companies, and cloud resellers. It comes with multi-tenant and multi-tier management capabilities and offers full protection of all data, including servers, computers, Microsoft Office 365 accounts, websites, and applications in physical, virtual, and cloud environments, the company says.
Egenera ranks among the companies offering private cloud platforms (along with competitors like Abiquo, Flexiant, and Embotics).
To wit, the Xterity portfolio includes wholesale managed public cloud, wholesale managed dedicated compute cloud, managed private cloud, bare metal servers, business continuity, and cloud suite software. Egenera launched the Xterity wholesale cloud service business in 2015, making it available exclusively to the channel. Adding Acronis' backup capabilities to the suite makes Egenera's business continuity solution more robust while adding name recognition.
Meanwhile, the partnership is a more traditional move for Acronis, which has been evolving beyond its role as a backup company. It's been shifting focus lately toward software-defined storage (SDS) as it bets on the artificial intelligence revolution's ramifications for providers. Still, the company is keeping a firm foothold in its traditional sphere, launching the recent 12.5 iteration of its backup solution, complete with ransomware protection, just last month.
Huawei enhances public cloud capabilities to deliver HPC applications – Techseen
Global information and communications technology (ICT) solutions provider Huawei has released the public-cloud-based HPC Cloud Solution 2.0 at ISC17. It has been developed on the OpenStack architecture, with enhanced performance compared with the previous version.
The new solution, co-developed by Huawei and Mellanox and based on InfiniBand, is claimed to be the industry's first HPC public cloud solution providing 100 Gbit/s EDR computing network capabilities. In addition, it uses high-performance local storage to improve overall HPC computing performance, and instant data erase and storage encryption to enhance security.
"We are pleased to cooperate with Mellanox in HPC public cloud solutions, helping Huawei further enhance Huawei public cloud capabilities based on Mellanox's advanced technologies in the HPC interconnection industry and provide more choices for customers. This cooperation will accelerate cloud-oriented transformation for industrial customers and assist enterprises in continuous innovation in HPC services," said Sun Jiawei, Director, IT Business Development Dept, Huawei.
The HPC solution will first be rolled out on Open Telekom Cloud, a public cloud platform jointly provided by Huawei and Deutsche Telekom in Europe. It uses high-performance Elastic Cloud Servers (ECSs), Bare Metal Service, heterogeneous computing acceleration, InfiniBand-based 100 Gbit/s EDR computing network capabilities, and parallel file system storage.
"The public cloud is the perfect fit for all customers demanding short-term powerful computing capacities. With the new HPC features, we furthermore enhanced Open Telekom Cloud and broadened our range of use cases for each industry," said Andreas Falkner, Vice President, Open Telekom Cloud.
Qualcomm’s server silicon has a cloud customer: Packet – The Register
Qualcomm looks to have a customer for the Centriq 2400, the 48-core CPU it's aiming at the server market: the minor cloud player Packet has signed up to introduce the architecture to its customers.
Packet bills itself as a cloud for developers and has been running Cavium's 48-core ARMv8-A ThunderX processors since November 2016. Now it's announced that it's going to show up at developer gabfests to show off a consumable cloud platform, providing access to a series of demonstrations leveraging open source tools such as Ansible, Terraform, Docker and Kubernetes, all running on Qualcomm Datacenter Technologies ARM architecture-based servers.
The announcement doesn't mention a firm commitment to run the Centriq, but both companies express the usual admiration for each other's complementary offerings. The Register can't imagine what would stop the pair from taking the next step and running a Centriq-powered cloud.
That means the chance to run the cut of Windows Server Microsoft has ported to Centriq. Throw in the fact that Linux is happy running on ARM and things get interesting.
And more interesting again with news that another minor cloud, Scaleway, has thrown some more Cavium ThunderX SoCs packing ARMv8 tech into its cloud. The company's therefore renting 64-core servers at €0.56 an hour, albeit as a preview, as "we're still deploying nodes to handle large scale deployments".
Scaleway is nonetheless declaring the new and larger instances proof that ARM is a true alternative for the server market, with solutions for small and large workloads.
News of Scaleway's and Packet's efforts ends a tough week for Intel, which entered it with Xeon as just about the only CPU worth putting on a cloudy shopping list and ended it with AMD's Epyc seeing the light of day and two clouds contesting for buyers' consideration. All of which can't be bad for customers.
Scaleway doubles down on ARM-based cloud servers – TechCrunch
Iliad's cloud hosting division Scaleway has been betting on ARM chipsets for years because it believes the future of hosting is going to be based on ARM's processor architecture. The company just launched more powerful ARMv8 options and added more cores to its cheapest options.
If you're not familiar with processor architecture, your computer and your smartphone use two different chipsets. Your laptop uses an x86 CPU manufactured by Intel or AMD, while your smartphone uses an ARM-based system-on-a-chip.
Back in April, Scaleway launched 64-bit ARM-based virtual servers thanks to Cavium ThunderX systems-on-a-chip. And the most affordable option is crazy cheap: for €2.99 per month ($3.30), you could get 2 ARMv8 cores, 2GB of RAM, and 50GB of SSD, with unlimited bandwidth at 200Mbit/s.
With today's update, Scaleway is doubling the number of cores on this option: you now get 4 cores instead of 2, making it quite competitive with entry-level virtual private servers on DigitalOcean or Linode. The company told me that it could be the best compute-to-price ratio on the market. For €5.99, you now get 6 cores and 4GB of RAM.
Scaleway also thinks you should be using ARM-based servers for your demanding tasks. You can now get up to 64 cores and up to 128GB of RAM. This beefy option is quite expensive, at €279.99 per month, but Scaleway has also added a bunch of intermediary options with 16, 32 or 48 cores.
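A quick back-of-the-envelope look at the compute-to-price ratio, using only the prices and specs quoted above; competitor prices aren't given in the article, so no DigitalOcean or Linode comparison is attempted here.

```python
# Cores and RAM per euro for the Scaleway options quoted in the article.
offers = {                      # price EUR/month, cores, RAM in GB
    "entry (4C/2GB)":  (2.99, 4, 2),
    "mid (6C/4GB)":    (5.99, 6, 4),
    "top (64C/128GB)": (279.99, 64, 128),
}

for name, (price, cores, ram_gb) in offers.items():
    print(f"{name}: {cores / price:.2f} cores/EUR, {ram_gb / price:.2f} GB RAM/EUR")
# entry (4C/2GB):  1.34 cores/EUR, 0.67 GB RAM/EUR
# mid (6C/4GB):    1.00 cores/EUR, 0.67 GB RAM/EUR
# top (64C/128GB): 0.23 cores/EUR, 0.46 GB RAM/EUR
```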
My main complaint remains the same. Scaleway currently has two data centers in Paris and Amsterdam. The company needs to think about opening up new offerings in Asia and the U.S. if it wants to become a serious contender in the highly competitive cloud hosting market.
Microsoft Azure, Baidu embrace AMD’s new Epyc data center processor – GeekWire
Advanced Micro Devices' long road back to relevance as a data center computing supplier got a little easier with promises of support from two of the biggest server buyers on the planet.
Microsoft Azure and Baidu promised to deploy AMD's new Epyc data center chip as an option for their cloud customers, the companies announced Tuesday at AMD's Epyc launch event. Those two companies deploy a lot of servers: Intel, which has over 95 percent of the market for data center processors, includes them in its Super Seven customer group along with Amazon, Google, Facebook, Alibaba, and Tencent.
Epyc is a new processor design with 32 separate processing cores. If you're a super chip nerd, check out The Next Platform's overview of the new design and what it can accomplish, but the bottom line is that the new processor seems capable of competing with Intel's forthcoming Skylake processors, giving cloud server buyers their first alternative supplier in a very long time.
Ten years ago, AMD put quite the scare into Intel with its Opteron design, which caught Intel flat-footed as server power consumption became just as important as, and in some cases more important than, pure performance. But Intel recovered relatively quickly and has not looked back, virtually controlling the entire market for cloud server chips for the last several years while AMD has floundered.
Microsoft Azure has been among the most experimental cloud providers when it comes to the processors that power its cloud, or at least the most willing to talk about it in public. Microsoft has said it is evaluating ARM-based processors for its cloud, which can't run the same software as the x86 chips made by Intel and AMD but have interesting power-consumption characteristics.