Category Archives: Cloud Servers

Data Storage Corporation Merges with ABC Services – Business Wire (press release)

MELVILLE, N.Y.--(BUSINESS WIRE)--Data Storage Corporation (OTCQB:DTST), a provider of diverse business continuity and disaster recovery protection solutions, today announced the acquisition of ABC Services and ABC Services II, a 25-year provider of IBM equipment, infrastructure as a service, managed and professional services, including the remaining 50% ownership of Secure Infrastructure and Services.

Through the acquisition, Data Storage Corporation will expand its current solutions, including email archival and compliance, Recovery Cloud, Office 365, IBM DR and Cloud Servers, while leveraging ABC's network and data security, managed services and equipment. Tom Kempster, president of ABC, will be responsible for DSC's Technical Operations Group and will serve as president of that unit. Hal Schwartz, president of Secure Infrastructure and Services, will serve as DSC's new president.

"We plan to grow our organization organically as well as through strategic acquisitions," stated Chuck Piluso, chairman and CEO, DSC. "We have had a six-year relationship with Tom and Hal and are excited about the potential. As part of the acquisition, we expanded our client base, extended our service offerings and gained the expertise of an excellent management and technical team that has been providing managed services and infrastructure for over 25 years."

Tom Kempster added, "Our businesses unifying in the present shall be seen as our first steps in the creation of additional value for our clients. This merger presents DSC as a leader, in that we have a technical team, client response times and infrastructure second to none."

"I am excited about the merger and the opportunities the merger will provide to both Data Storage Corporation and our clients, as our services are in high demand across all markets and industries," stated Hal Schwartz. "Because of the merger, we now have first-class management and technical teams, which will allow us to deliver and support full service, security, cloud, hybrid cloud and cloud backup solutions with the highest confidence and service levels."

More information about DSC can be found by visiting http://www.DataStorageCorp.com, http://www.SIASMSP.com and http://www.abcservices.com/.

About Data Storage Corporation

Data Storage Corporation (DSC) delivers and supports a broad range of premium solutions focusing on data storage and protection. Clients look to DSC to ensure disaster recovery and business continuity, strengthen security, and to meet increasing industry, state and federal regulations. The company markets to business, government, education and the healthcare industry by leveraging leading technologies, including Virtualization and Cloud Computing. The company provides hardware, SaaS, managed IT services, installation and maintenance. For more information, please visit http://www.DataStorageCorp.com and http://www.SIASMSP.com

About ABC Services

Since 1994, ABC Services has been empowering companies with innovative business solutions and assisting them to prosper and surpass their competition. Services range from IBM Power Systems (including IBM i and AIX) and managed cloud services to on-premises IT services. The company has the ability and resources to meet a variety of IT systems integration needs quickly and cost-effectively. For more information, please visit http://www.ABCServices.com

Visit link:
Data Storage Corporation Merges with ABC Services - Business Wire (press release)

Stop wasting the cloud! – App Developer Magazine

Posted Wednesday, February 08, 2017 by RICHARD HARRIS, Executive Editor

Some people think about the public cloud as a utility: you can buy services on demand, just like electricity, or water, or heating. Each of these utilities is consumable; as you grow you can consume more, as you shrink you can consume less. In the case of the public cloud, you are consuming IT-related infrastructure and services to build, test, and run enterprise and consumer applications, which we consume either as an enterprise (e.g., Salesforce) or a consumer (e.g., Netflix).

But like any utility, there is waste: lights are left on, faucets leak or are left running, and the heat runs when you are not home. This is why there are now consumer applications like Nest. Buildings and homes alike have automated ways to turn lights and water on and off to reduce waste, save money and protect the environment. Why should the public cloud be any different?

We wanted to dig deeper into cloud waste and the problem it's creating, so who better to talk to than the CEO of ParkMyCloud, Jay Chapel. Here's how the conversation went.

ADM: What is "cloud waste"?

Chapel: Cloud waste occurs when organizations spend money on cloud services they are not actually using. According to estimates by ParkMyCloud (based on numbers from Gartner and others), up to $6 billion is wasted on unused cloud services every year. This waste comes from servers left running when people are not using them (at nights and on weekends), oversized databases, servers not optimized for the applications they support and storage volumes not being used or lost in the cloud. These are just a few examples of cloud waste.

ADM: Why is it a problem?

Chapel: Wasting money is always a problem for businesses. When significant portions of the budget are spent on unneeded services, it limits the resources available for more critical uses. Worse, cloud waste often goes unnoticed. This can impose unnecessary strains on limited budgets over months or even years.

ADM: Why has there been little written about this issue?

Chapel: Little has been written about this issue because most are not even aware of it. In the broader scheme of things, cloud services are still young. We saw a trend in data center usage that we're now seeing repeated in cloud: first, companies adopt the new services. Second, they grow in their use of those services. Those two stages are obvious, but then there's a third stage after the growth: optimization. Both usage and spend need to be optimized.

ADM: Are most companies even aware of their amount of cloud waste?

Chapel: Many companies are aware that they need to better optimize their cloud services, but are only just now exploring the best ways to do so. So, with cloud waste, there is not a great level of awareness. Of course, this is what makes it such a problem. Once organizations become aware of it, they can start to act to reduce cloud waste.

ADM: What is the solution to ending cloud waste?

Chapel: Since wasted spend is a multi-faceted problem, there is no one answer that solves all aspects of cloud waste. However, there are a few straightforward steps that organizations can take. First, they should ensure that they are using proper governance measures to limit who in the organization can spin up new resources, and to ensure there is a single point of visibility.
Second, they should start with one of the easiest ways to reduce cloud spend immediately: turning non-production resources - those used for internal purposes such as development, testing, and staging - off when they are not needed. A great place to start is on nights and weekends.

After that, organizations can look at right-sizing resources - ensuring that servers are not over-sized for the applications they support - and finding and eliminating orphaned storage volumes that are no longer needed.
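One concrete way to act on the nights-and-weekends advice above is a small wrapper around the AWS CLI. This is only a sketch: the "env" tag, the script path and the cron times are illustrative assumptions, not something the article or ParkMyCloud prescribes.

  #!/bin/sh
  # park-nonprod.sh -- stop or start tagged non-production EC2 instances
  # Usage: park-nonprod.sh stop|start
  ids=$(aws ec2 describe-instances \
    --filters "Name=tag:env,Values=dev,test,staging" \
    --query "Reservations[].Instances[].InstanceId" --output text)

  [ -n "$ids" ] || exit 0          # nothing tagged, nothing to do

  if [ "$1" = "stop" ]; then
    aws ec2 stop-instances --instance-ids $ids
  else
    aws ec2 start-instances --instance-ids $ids
  fi

  # Example cron entries: park at 7pm, unpark at 7am, weekdays only
  # 0 19 * * 1-5  /usr/local/bin/park-nonprod.sh stop
  # 0 7  * * 1-5  /usr/local/bin/park-nonprod.sh start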

Originally posted here:
Stop wasting the cloud! - App Developer Magazine

Cloud-Based Disaster Recovery as a Service: How is DRaaS Different from Backup and Recovery? – Enterprise Storage Forum

For businesses, cloud-based backup and recovery has become common these days. If backup is fast enough to fit within a backup window, and if recovery times hit recovery time objective (RTO) and recovery point objective (RPO) service levels, you're golden.

After that, it gets complicated. Backup and recovery are critical components of disaster recovery (DR), but alone they can't assure that application processing continues uninterrupted. Many enterprises have built their DR plans around remote sites because they already own multiple data centers, or they have the budget for secondary hot sites. However, unless they had an extra data center hanging around or could afford to lease a secondary hot site, midsized and small businesses were out of luck for remote DR.

In response, many cloud service providers and disaster recovery vendors took the cloud-based backup and recovery model to its logical next step: failing over applications to the cloud.

One of the big advantages of cloud-based failover is that even small and midsize companies can now afford to contract for it. It's more expensive than just using cloud-based backup and recovery, but it's considerably less expensive than building or leasing fully mirrored secondary data centers. We call this offering disaster recovery as a service, or DRaaS.

The three primary service types of cloud-based data protection (DP) services are backup as a service (BaaS), recovery as a service (RaaS) and disaster recovery as a service (DRaaS). Lines between the three can be blurry, but in general, BaaS copies backup data to a cloud repository; RaaS adds the ability to restore that data, and often entire servers, from the cloud; and DRaaS goes a step further by failing over entire applications to run in the provider's cloud until the primary site is recovered.

DRaaS is not by definition necessarily based in the cloud. Some service providers offer DRaaS as a site-to-site service where they host and operate a secondary hot site. Others offer server replacement, where they rebuild and ship servers to the client site.

The primary advantage of cloud-based DRaaS is its ability to immediately fail over applications, reconnect users via VPN or Remote Desktop Protocol (RDP), and orchestrate failback to rebuilt servers in the customer data center.

Cloud-based disaster recovery service providers deliver their services in different ways. Some use appliances; some do not. Some limit failover to the cloud, while others offer managed site-to-site failover as well. Many offer DR testing on at least a quarterly basis.

Nothing is Perfect: DRaaS has a lot to offer but there are some drawbacks.

One particularly important consideration is choosing between virtual-to-virtual (V2V)-only providers and providers who handle both V2V and physical-to-physical (P2P), and perhaps physical-to-virtual (P2V), recovery. V2V works best when a customer has a near 100 percent virtualization rate and when physical servers can be easily re-created on-site. Customers with high-priority physical servers should look for DRaaS providers who do both. In this case, the customer replicates image-based server backups to the cloud disaster site as well as replicating VMs. Also, look for providers that offer bare metal restore services for efficient server rebuilds.

It's important to understand not only failover services but also reconnection. It's one thing for applications to restart in the cloud; it's quite another for application users to reconnect to the cloud securely and with high enough performance. Learn about your provider's methods of reconnecting users, including networking details, WAN speeds and security details such as firewalls, intrusion protection and port monitoring. You will also want to know exactly what you're getting with failback and restore services. Be very clear in your service level agreements over application restore orchestration.

There are many DRaaS service providers to choose from. Some DRaaS vendors use their own products as a service offering. Other service providers use DRaaS products from partner vendors, so be sure to ask who their partners are and what products they are using. Don't forget to check your provider's geography and data center resources. In a regional disaster, when many companies are attempting to restart their applications in the provider's data center, that provider had better have sufficient resources to serve them all.

Below is a selection of vendors who develop their own DRaaS products. Most of them sell their services to partners, and a few of them exclusively offer their own services.

Zetta.net's software-only DRaaS offering enables physical and virtual server spin-ups, scales from a single server to a network and remotely boots over a VPN or RDP connection.

Unitrends' DRaaS works with Recovery Series or Unitrends Enterprise Backup appliances to rapidly spin up critical applications in the Unitrends cloud.

Acronis acquired nScaled in 2014 to develop Acronis Disaster Recovery Service. The offering is targeted at midsized and enterprise organizations and has an appliance option.

Datto SIRIS 3 backs up, restores and fails over physical, virtual and cloud environments running on Windows, Mac or Linux OSes. Customers have the choice of deploying SIRIS as a physical, software or virtual appliance.

Public cloud vendors Amazon, Microsoft Azure, and Google Cloud all offer DRaaS services through partners, and may also offer their own. Their shared advantages are economies of scale and many global data centers. AWS works with multiple DR product providers to enable rapid failover, scaling from a single server to large-scale enterprise environments. DRaaS service providers like Geminare DRaaS run on Google Cloud, and Google also positions Google Cloud Storage Nearline as a low-cost DRaaS alternative. MS Azure Site Recovery automates VM protection and replication plus remote health monitoring, DR plan testing and orchestrated recovery. MS Azure StorSimple is a widely deployed on-premise onramp to Azure DRaaS.

Veeam provides V2V DRaaS services for VMware and Hyper-V, and it runs on both private and public clouds.

Zerto is an enterprise offering that provides backup, recovery, and DRaaS services across multiple replication sites. It offers RPOs in seconds and RTOs in minutes.

IBM offers BaaS and fully managed DRaaS in its data centers. It provides three levels of spin-up service agreements so customers can control costs: the premium service offers spin-up in minutes per server, the next level one hour for shared VMs, and the third level a maximum of six hours for shared VMs.

Finally, Actifio is something of a hybrid: it's primarily a copy data management player, but it powers DRaaS service offerings from companies like Verizon.

More:
Cloud-Based Disaster Recovery as a Service: How is DRaaS Different from Backup and Recovery? - Enterprise Storage Forum

Microsoft brings Azure Backup to UK data centres – Cloud Pro

Azure customers can now back up and restore files from Microsoft's UK data centres.

Azure Backup and Site Recovery will now feature as part of Microsoft's cloud offering from its London, Durham and Cardiff facilities, allowing UK companies to securely host their data in the same region.

The services, which have been widely available in US data centres, represent the latest additions in a gradual rollout of features for the new UK cloud region. Azure Backup will protect customer data both on-premise and in the cloud, while Site Recovery allows for physical servers to be replicated in the cloud, which can be used in the event of an on-premise failure.

"With Azure Backup and Site Recovery, Microsoft customers can be confident that their information is safe, secure and available whenever and wherever they need it," said Mark Smith, senior director of cloud and enterprise at Microsoft. "These features add to the fantastic services already being offered from Microsoft's UK data centres, which are being utilised by the government and other major organisations in this country because of the transparency, security and compliance they offer."

Since opening last September, Microsoft's UK data centres have attracted a number of high profile customers, including the Met police, and parts of the NHS and MoD. The features join already available services such as the Azure Security Centre and the Azure Marketplace platform.

UK customers signing up for Azure Backup will only pay for the storage they use, and are able to choose between two backup storage options. The first creates three copies of stored data, which are then relocated to a paired datacentre in the same region, while the second allows companies to create a backup at a site hundreds of miles away from the original.
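For Azure Backup, that choice corresponds to the storage redundancy setting on the Recovery Services vault, which can be picked from the Azure CLI along these lines. This is a sketch rather than anything from the article; the vault, resource group and region names are placeholders.

  az backup vault create --name MyVault --resource-group MyGroup --location uksouth

  # LocallyRedundant keeps copies within the region; GeoRedundant replicates
  # to the paired region hundreds of miles away
  az backup vault backup-properties set --name MyVault --resource-group MyGroup \
      --backup-storage-redundancy GeoRedundant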

Microsoft has also announced price cuts to its virtual machines (VMs) and storage options, with the aim of lowering barriers to entry for the many companies still wishing to migrate to the cloud. Microsoft's F-series VM cloud servers are down by 23% for Linux and 18% for Windows, while its A1 Basic series is down by 42% and 51% respectively.

Customers on Azure Blob storage accounts will also see a 31% price cut on Hot Block Blob storage, while Cold Block Blob storage is down 38%. However, Redmond hiked UK cloud prices by 22% at the start of 2017 in response to Brexit, as the pound's value plummeted against the US dollar.

Read the rest here:
Microsoft brings Azure Backup to UK data centres - Cloud Pro

Congressional tech forecast: Clouds with a chance of freedom – Conservative Review

After years of trying, Congress may finally be set to update the laws surrounding the privacy of emails for the 21st century. For the second consecutive Congress, the House has passed the Email Privacy Act by an overwhelming bipartisan majority. After constant and inexplicable delays, perhaps this can be the year that basic due process protections for our online emails and files make it into law.

The Email Privacy Act addresses a basic flaw in the Electronic Communications Privacy Act of 1986 (ECPA). Ironically, ECPA was designed, as its name indicates, to strengthen legal due process with respect to electronic data and communications. The goal, of course, was to bring legal protections up to date with modern technology at the time. But the law was more protective of communications in transit than of data at rest, especially with respect to third-party data storage.

The actual text of ECPA (18 U.S. Code 2703) provides the means for government agencies to demand that any remote computing service cough up the contents of a wire or electronic communication that has been in electronic storage in an electronic communications system for more than one hundred and eighty days via administrative subpoena. In English, this means that your communications and data stored external to your computer, like in Gmail, Dropbox, or any other cloud service, can be demanded by the feds without a warrant (and without you being notified), so long as the requested files are over 180 days old.

In 1986, this provision wasn't a huge deal because the modern web didn't exist. Data storage was expensive, so most computer users stored their email and other files on their own hard drives. In the present day, tens of millions of people routinely store years' worth of their communications and personal files alike on third-party cloud servers. The lack of a basic warrant requirement to access these is an insane breach of privacy.

The need to reform ECPA is so completely self-evident, in fact, that the House of Representatives passed the Email Privacy Act by a vote of 419-0 in 2016. Yet it went nowhere in a Senate preoccupied by the upcoming election, despite bipartisan support for ECPA reform in that chamber.

Part of the hesitancy in passing ECPA reform has been protests from executive agencies like the Securities and Exchange Commission that they need the ability to quickly grab documents as part of their investigations into various regulatory and criminal offenses. But there is a simple reply: Get a warrant. Court orders don't take a ton of time to get if there is probable cause. Outside of emergency situations, the system isn't supposed to make violating the privacy of people's files and communications easy or convenient.

But a new Congress means a fresh start, and the Email Privacy Act has not only already been reintroduced by original sponsors Rep. Kevin Yoder, R-Kan. (D, 65%) and Rep. Jared Polis, D-Colo. (F, 20%), but has already passed the House again, by an easy voice vote.

A great start. Now, in the spirit of better late than never, the Senate should take up the bill as soon as the major nomination crunch is over and send it to President Trump's desk.

Josh Withrow is an Associate Editor for Conservative Review and Director of Public Policy at Free the People. You can follow him on Twitter at @jgwithrow.

Read the rest here:
Congressional tech forecast: Clouds with a chance of freedom - Conservative Review

Stratoscale buys Tesora to bolster hybrid cloud database capability – Computerworld

Cloud service provider Stratoscale has snapped up database-as-a-service vendor Tesora to beef up its hybrid cloud offering.

Stratoscale's key product, Symphony, is built on OpenStack and allows businesses to set up an Amazon Web Services (AWS) "region" in their own data center, so they can easily move workloads between private and public cloud servers or scale up capacity without having to migrate to a different service.

Further reading: Want to run your own Amazon 'region'? Stratoscale shows you how

Tesora's database as a service, also built on OpenStack, runs in public, private or hybrid clouds. Stratoscale plans to use it to expand its existing managed database support, which includes AWS Relational Database Service and the AWS NoSQL database, DynamoDB. Tesora will bring Stratoscale self-service provisioning capabilities for Oracle, MySQL, MariaDB, MongoDB, PostgreSQL, Couchbase, Cassandra, Redis, DataStax Enterprise, Percona and DB2 Express databases.

It's only a couple of months since Stratoscale released version 3 of Symphony, introducing compatibility with Amazon's S3 object storage service, Kubernetes-as-a-service containerization, and the ability to freely migrate AWS EC2, EBS, S3 and VPC workloads between public and private cloud infrastructure.

Tesora has long pitched its database as a service as better than the AWS database offerings in one important respect: Thanks to its OpenStack underpinnings, it could run as easily in public or private clouds. However, Stratoscale's introduction of the AWS region capability to Symphony 3 took that advantage away.

Peter Sayer covers European public policy, artificial intelligence, the blockchain, and other technology breaking news for the IDG News Service.

See the article here:
Stratoscale buys Tesora to bolster hybrid cloud database capability - Computerworld

Amazon Web Services Continues to Grow as Servers Move to the Cloud – Server Watch

When most people talk about the public cloud, the conversation inevitably will include Amazon Web Services (AWS). Amazon reported its fourth quarter and full-year 2016 financial earnings on Feb. 2, once again showing growth in the cloud.

Overall revenue for Amazon during the fourth quarter was reported at $43.7 billion, a 22 percent year-over-year gain. For the full year, Amazon's revenue was $136 billion, up by 27 percent from 2015.

Looking specifically at AWS, Amazon reported $3.5 billion in fourth quarter revenue. For the past 12 months, AWS generated a staggering $12.2 billion in revenue from its cloud operations.

Amazon first began to break out its cloud earnings in April 2015, during the company's first quarter fiscal 2015 earnings call. At the time, AWS revenue was reported at $1.57 billion for the quarter. By July 2016, AWS had nearly doubled its quarterly revenue, with $2.9 billion in cloud sales.

"On AWS, we're very happy with the response from customers," Amazon CFO Brian Olsavsky said on his company's earnings call. "I feel we've got a very broad base of customers from startups to small and medium businesses to large enterprises, to the public sector and we're continuing to see strong growth across all those sectors."

AWS also expanded into new regions that opened up in 2016. During the year, Amazon opened eleven new Availability Zones across five geographic regions in the U.S., Korea, India, Canada and the U.K.

In AWS' terminology, a Region is defined as a geographic location, while an Availability Zone is infrastructure within a Region that has its own power, cooling and capacity. The general idea is that AWS customers can run applications in multiple Availability Zones to help protect and mitigate the impact of a failure in a single zone.
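As a rough illustration of that idea with the AWS CLI, a customer might spread the same workload across two zones in the Ohio Region. The AMI ID and instance type below are placeholders, not values from the article.

  # List the Availability Zones in the Ohio (us-east-2) Region
  aws ec2 describe-availability-zones --region us-east-2

  # Launch the same workload into two different zones so a single-zone failure is survivable
  aws ec2 run-instances --region us-east-2 --image-id ami-12345678 \
      --instance-type t2.micro --placement AvailabilityZone=us-east-2a
  aws ec2 run-instances --region us-east-2 --image-id ami-12345678 \
      --instance-type t2.micro --placement AvailabilityZone=us-east-2b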

Among the new regions is the Ohio US-East Region that opened in October 2016, providing much needed relief to Amazon's primary US-East location in Northern Virginia. In December 2016, Amazon opened its first AWS region in Montreal, Quebec, Canada.

For 2017, Amazon plans on adding two new regions, including one in France and another in China.

Sean Michael Kerner is a senior editor at ServerWatch and InternetNews.com. Follow him on Twitter @TechJournalist.

View post:
Amazon Web Services Continues to Grow as Servers Move to the Cloud - Server Watch

Securing IoT devices from within – GCN.com (blog)

Security experts have long fretted about the rapidly expanding number of internet of things devices. While most such tools may not contain data that should be protected, many connect to the cloud and represent easy targets for hackers to gain access -- not only to that device, but to all other devices connected to an IoT mesh.

To address this issue, AWS in 2015 released its IoT platform, which includes provisions for mutual authentication intended to verify the integrity of all devices connecting to the AWS IoT cloud.

Connecting devices can use the AWS SigV4 method of authentication or follow the traditional approach of using X.509 certificates to manage public-key encryption. IoT managers can map roles and/or policies to each certificate so that devices or applications can be authorized (or de-authorized) without ever touching the device.
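A rough sketch of that certificate-and-policy flow with the AWS CLI follows. The certificate ID, ARN, file names and policy document are placeholders for illustration, not values from the article.

  # Create a device certificate and key pair, activated immediately
  aws iot create-keys-and-certificate --set-as-active \
      --certificate-pem-outfile device.pem.crt \
      --public-key-outfile device.public.key \
      --private-key-outfile device.private.key

  # Create a policy and attach it to the certificate returned above
  aws iot create-policy --policy-name DevicePolicy --policy-document file://device-policy.json
  aws iot attach-principal-policy --policy-name DevicePolicy \
      --principal arn:aws:iot:us-east-1:123456789012:cert/abc123

  # De-authorize the device later without ever touching the hardware
  aws iot update-certificate --certificate-id abc123 --new-status REVOKED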

As might be expected, an organization with thousands of IoT-enabled devices might find it too difficult to provision and manage all those certificates and keys. One solution is the AWS Use Your Own Certificate program, which allows original equipment manufacturers to register digital certificates signed by third-party certificate authorities with the AWS IoT platform using an application programming interface, according to Embedded Computing.

That means unique cryptographic keys can be generated for each device during production, signed by a certificate authority and then loaded into the AWS IoT platform to await a service request from systems containing the corresponding key pairs, the site said.

A hardware solution that offers built-in end-to-end security between the device and cloud servers has been developed by Microchip Technology Inc. and AWS.

It uses a small chip that is preloaded with the unique cryptographic codes to allow data to be transmitted more securely from an IoT device to the cloud.

According to Eustace Asanghanwa, strategic marketing manager for Microchip Technology, the AWS-ECC508 chip eliminates the need for IoT device manufacturers to go through a multistep process of preregistering their device with AWS servers and generating encryption keys for communications. Instead, the AWS-ECC508, a 3mm by 2mm, 60-cent device (in quantities of 10,000 or more) handles the connection and encryption automatically.

The device can be soldered onto a circuit board and connected to the host microcontroller that configures the chip for the AWS IoT. Because the AWS-ECC508 is preconfigured to be recognized by AWS without any intervention, there is no need to load unique keys and certificates because the information is contained in a small, easy to deploy crypto companion device, the company said.

Unlike the RSA encryption algorithm in widespread use, the Microchip Technology processor employs a more efficient elliptic curve cryptography algorithm that does not require as big a key and is, therefore, faster and calls for less hardware.

According to Asanghanwa, IoT device manufacturers have often not paid sufficient attention to building security into their devices because of an overriding focus on keeping costs down.

"Looking at the product holistically, the AWS-ECC508 actually reduces overall cost," he said. "If you consider not just hardware but also implementation, such as the capital and operational costs of securely injecting keys and managing them in a supply chain, the AWS-ECC508 actually creates a significant cost-reduction for any given product."

While the AWS-ECC508 will only work with Amazon's cloud services, the underlying ECC508 technology can be configured to work with any storage or cloud vendor's services.

Posted by Patrick Marshall on Feb 06, 2017 at 12:57 PM

Continued here:
Securing IoT devices from within - GCN.com (blog)

ARM Server Chips Challenge X86 in the Cloud – The Next Platform

First they ignore you, then they laugh at you, then they fight you, then you win.

Six years after the first commercial ARM server development (a network storage appliance, the ZT Systems (PHYTEC) R1801e, with up to 16 discrete ST SPEAr ARM9s and 80 W system power), the first encompassing applications report arrives: the Indian Institute of Science's "ARM Wrestling with Big Data," January 21, 2017.

This follows a history of focused assessment: Tirias, "MySQL Database using ThunderX," November 14, 2016; AnandTech, "Investigating Cavium ThunderX," June 15, 2016; Linley Group, "X-Gene 3 Challenges Xeon E5," April 2016; Journal of Physics; Frederic Pinel, University of Luxembourg dissertation, "Energy-Performance Optimization for Cloud," November 27, 2014; "HSOC Benchmark for an ARM Server," May 13, 2014; University of Edinburgh, "Energy Efficiency of SoC-Based Processors," April 23, 2013; and Calxeda's own ECX-1000 1.1 GHz versus E3-1240 3.3 GHz, June 21, 2012, of which all I can say is that's running at Intel speed.

Now all the ARM consortium has to do is beat Intel's fabrication production cost-to-price ratio, and address the customer concerns.

Intel must be concerned with the antitrust and competitive potential, having hunkered down to barricade Data Center Group Xeon commercial pricing at an 82% first-tier discount off 1K list. Intel's statement that the DCG revenue lag has little to do with actual demand appears to be a diversion.

Assessment: Broadwell Xeon E5 2600 v4 EP DP

Checked against Intel's 2016 DCG revenue divided by analysts' total Xeon unit production volume to determine a per-unit average price. Also, an analyst report on Q4 2016 broker-channel inventory holdings by product category volume, used to calculate the 1K revenue value of broker holdings and adjusted to reflect Intel's statement on 2016 percent of division revenues, determines the Intel price discount level.

Summary: 33% of the run, or 10,775,736 units of BW v4 14 nm production, are priced below the full-run marginal cost of $160, which is among the competitive development hurdles. On Ivy Bridge and Haswell volumes, Intel has signaled to first-tier dealers that all margin value has been rung from parts with fewer than 16 cores; these now sit in channels reverberating for other-than-cloud procurement. On the amount of Intel surplus, enterprise procurement secures the open-market price advantage.

Note 1: BW v4 average marginal cost of $160 is $31 more per unit of production cost than 22 nm Haswell.

Note 2: The average BW v4 price calculated below, $174, represents the unweighted price across grade SKUs and is not profit maximized.

Industry total revenue displacement from Broadwell E5 2600 v4 dumping is a competitive cost entry barrier: $4,441,803,931, an estimated good 2x a five-year ARM system constituent development cost.

Q4 2016 Intel x86 broker market inventory holdings report here:

http://seekingalpha.com/article/4033057-intel-another-threat-emerges-zen

ARM fabricators and design producers;

Key: E5 26xx v4 grade SKU, Intel 1K price, first-tier customer price, Intel profit or (loss).

Approximately 32,852,212 units of production.
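Reading the key against the first entry below: the four-core 2623 carries a 1K price of $444; at the 82% first-tier discount noted above, the first-tier customer price is $444 x 0.18 = $79.92; set against the $160 average marginal cost, that is a loss of $160 - $79.92 = $80.08, written as ($80.08).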

FOUR CORE
2623, 3.0 GHz, 10 MB L3, 85 W; 1K $444; $79.92; ($80.08)
2637, 3.5 GHz, 10 MB L3, 135 W; 1K $996; $179.28; $19.28 above cost

SIX CORE
2603, 1.7 GHz, 15 MB L3, 50 W; $213; $38.34 = variable cost; ($121.66)
2643, 3.4 GHz, 15 MB L3, 135 W; $1552; $279.36; $119.36 profit > cost

EIGHT CORE
2608L, 1.6 GHz, 20 MB L3, 50 W; $363; $65.34; ($94.66)
2609, 1.7 GHz, 20 MB L3, 85 W; $306; $55.08; ($104.92)
2620, 2.1 GHz, 20 MB L3, 85 W; $417; $75.06; ($84.94)
2667, 3.2 GHz, 20 MB L3, 135 W; $2057; $370.26; $210.26 profit > cost

TEN CORE
2618L, 2.2 GHz, 25 MB L3, 75 W; $779; $140.22; ($19.78)
2630L, 1.8 GHz, 25 MB L3, 55 W; $612; $110.16; ($49.84)
2640, 2.4 GHz, 25 MB L3, 90 W; $939; $169.02; $9.02 above cost
2689, 3.1 GHz, 25 MB L3, 165 W; $2723; $490.14; $330.14 competitive profit level for Intel

TWELVE CORE
2628L, 1.9 GHz, 30 MB L3, 75 W; $1364; $245.52; $85.52 profit > cost
2650, 2.2 GHz, 30 MB L3, 105 W; $1166; $209.88; $49.88 profit > cost
2687W, 3.1 GHz, 30 MB L3, 160 W; $2141; $385.38; $225.38 competitive profit level

FOURTEEN CORE
2648L, 1.8 GHz, 35 MB L3, 75 W; $1544; $277.92; $117.92
2650L, 1.7 GHz, 35 MB L3, 105 W; $1332; $239.22; $79.22
2658, 2.3 GHz, 35 MB L3, 105 W; $1832; $329.76; $169.76
2660, 2.0 GHz, 35 MB L3, 105 W; $1445; $260.10; $100.10
2680, 2.4 GHz, 35 MB L3, 120 W; $1745; $314.10; $154.10
2690, 2.6 GHz, 35 MB L3, 125 W; $2090; $376.20; $216.20

SIXTEEN CORE
2683, 2.1 GHz, 40 MB L3, 120 W; $2424; $436.32; $276.32
2697A, 2.6 GHz, 40 MB L3, 145 W; $2702; $486.36; $326.36 competitive profit level

EIGHTEEN CORE
2695, 2.1 GHz, 45 MB L3, 120 W; $2424; $436.32; $276.32
2697, 2.3 GHz, 45 MB L3, 145 W; $2702; $486.36; $326.36 competitive profit level

TWENTY CORE
2698, 2.2 GHz, 50 MB L3, 135 W; $3226; $580.68; $420.68 just below economic profit point for Intel

TWENTY-TWO CORE
2696, 2.2 GHz, 55 MB L3, 150 W; $4115; $740.70; $580.70 entering economic profit points for Intel
2699, 2.2 GHz, 55 MB L3, 145 W; $4115; $740.70; $580.70
2699R, 2.2 GHz, 55 MB L3, 145 W; $4569; $822.42; $662.42
2699A, 2.4 GHz, 55 MB L3, 145 W; $4938; $888.84; $728.84

Marginal cost of the 24-core (22-core) master die, on a production economic total-revenue/total-cost assessment (before the marginal cost of sort and dice), = $511.

Science rarely leads to objects of replication, but objects for further articulation and specification under new and more stringent conditions.

Be the change you wish to see in the world. Which is the more powerful, the elephant or beehive.

Mike Bruzzone, Camp marketing

See the original post:
ARM Server Chips Challenge X86 in the Cloud - The Next Platform

How to install OpenStack on a single Ubuntu Server virtual machine – TechRepublic

OpenStack has become the de facto standard in private cloud server platforms. If you've ever considered spinning up OpenStack, you know that it is not only an administrative challenge, but it also requires considerable hardware, to the tune of around five servers. However, if you have a powerful enough virtual machine, you can run OpenStack on a single Ubuntu Server.

SEE: How Mark Shuttleworth became the first African in space and launched a software revolution (TechRepublic PDF download)

First, you must update Ubuntu. Open a terminal window and issue the commands:
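On a stock Ubuntu Server install, these are typically the standard package update commands:

  sudo apt-get update
  sudo apt-get -y upgrade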

Once those commands have completed, you'll need to install git. To do this, issue the command:
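A single apt-get call handles the git installation:

  sudo apt-get install -y git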

Allow that installation to complete.

We're going to use git to clone devstack. To do this, go back to your terminal window and issue the following two commands:
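Assuming the upstream devstack repository (the URL shown is the one in common use; yours may differ), the clone and directory change look like this:

  git clone https://git.openstack.org/openstack-dev/devstack
  cd devstack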

Next, we have to copy the sample configuration file and set a password to be used for automated deployment. To complete this task, issue the following commands:
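devstack ships a sample configuration under its samples directory; from inside the devstack folder, copying it into place is typically:

  cp samples/local.conf local.conf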

Now we must set the automated deployment password. Open the local.conf file with the command:
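Any editor will do; with nano, for example:

  nano local.conf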

Search for the password variable section and ensure it reflects the following (YOURPASSWORD is the actual password you want to use):
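Once edited, the password block in the sample configuration looks roughly like this (YOURPASSWORD is your own value):

  ADMIN_PASSWORD=YOURPASSWORD
  DATABASE_PASSWORD=$ADMIN_PASSWORD
  RABBIT_PASSWORD=$ADMIN_PASSWORD
  SERVICE_PASSWORD=$ADMIN_PASSWORD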

It's time to run a few scripts. The first script will create a new user for devstack. The command to run this script is:
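The user-creation script lives in devstack's tools directory and needs root, so from the devstack folder it is typically run as:

  sudo ./tools/create-stack-user.sh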

Once the script completes, you'll need to change the permissions of the devstack folder with the command:
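Assuming devstack was cloned into your home directory, handing ownership to the new stack user looks like:

  sudo chown -R stack:stack ~/devstack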

It's time to run the installation script; this must be run as the stack user by first issuing the command sudo su stack. After you change to the stack user, kick off the install with the command /devstack/stack.sh. This command will take at least 30 minutes to complete.
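Put together, the hand-off to the stack user and the kickoff of the installer look roughly like this (substitute the actual location of your devstack directory):

  sudo su stack
  cd ~/devstack        # or wherever the devstack folder lives
  ./stack.sh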

When the command finally completes, you will be given an IP address (the address of the server), as well as two usernames (the admin password was created in the local.conf file). Open a browser, point it to http://SERVER_IP_ADDRESS/dashboard, and log in with the given credentials. You will be presented with the OpenStack Dashboard (Figure A).

Figure A

Congratulations! You're ready to create and deploy.

Here is the original post:
How to install OpenStack on a single Ubuntu Server virtual machine - TechRepublic