Category Archives: Cloud Servers
Neither here nor there – The Hindu
On the flip side, most of our cloud servers promise security and let us own the virtual space, but users can never shake the thought of a security hack, the server going down, or the need to pay more as storage needs rise. Another aspect here is privacy ...
Why Salesforce’s Focus on Japan Is Opportune – Market Realist
Could Salesforce Outshine Its Peers in the Cloud Space? PART 2 OF 19
Earlier in this series, we discussed Salesforce (CRM) opening a data center in Japan (EWJ) to cater to its expanding customer base in the Asia-Pacific region. Amazon (AMZN), Microsoft (MSFT), IBM (IBM), Alphabet (GOOG), Salesforce, and other cloud providers are competing in adding new data center facilities.
According to the Cisco Systems (CSCO) Global Cloud Index, the global count of hyperscale data centers is expected to grow from 259 in 2015 to 485 by 2020, by which point they would account for 47% of all installed data center servers. These facilities could also represent 83% of the public cloud server installed base and 86% of public cloud workloads in 2020. This trend explains tech giants' billion-dollar investments in data centers and IT (information technology) infrastructure.
In fiscal 4Q17, Salesforce's revenue grew in all geographies, as the above presentation shows. Salesforce derives the bulk of its revenue from the Americas. However, it was the Asia-Pacific (FAX) region that grew the most, rising 35%, or 30% in constant currency, in 4Q17. The Americas region posted ~$1.7 billion in revenue, Europe (EFA) posted ~$361 million, and the Asia-Pacific region posted ~$215 million. Salesforce's continued growth in the Asia-Pacific region indicates its success in, and keenness on, implementing its diversification plans.
President Donald Trump's proposed tax reductions are expected to bring in more than $2 trillion parked overseas by US companies. This move is likely to benefit Salesforce, which, unlike peers Microsoft, IBM, and Oracle (ORCL), generates a substantial portion of its revenue from the Americas.
Intel (INTC) to Report Q1 Earnings: What’s in the Cards? – Yahoo Finance
Intel Corp INTC is set to report first-quarter 2017 results on Apr 27, after the closing bell. Notably, the company has a positive record of earnings surprises in the trailing four quarters, with an average surprise of 9.11%.
Last quarter, the company posted a positive earnings surprise of 5.33%. Non-GAAP earnings of 79 cents per share increased almost 4% from the year-ago quarter but declined 1.3% sequentially.
Strong year-over-year earnings growth was driven by a 9.8% increase in revenues, which totaled $16.37 billion and comfortably surpassed the Zacks Consensus Estimate of nearly $15.80 billion. Revenues increased 3.8% sequentially.
Intel guided first-quarter 2017 revenues of around $14.8 billion (+/-$500 million), almost flat sequentially. The non-GAAP gross margin is expected to be around 63% (+/-1%). R&D and MG&A expenses are anticipated to come in at around $5.3 billion.
Operating income is projected to be approximately $4.1 billion, while earnings are anticipated to be 65 cents (+/- 5 cents) per share.
The lacklustre guidance, along with declining growth in the core PC and data center markets, has hurt share price movement on a year-to-date basis. Intel shares have inched up 0.2%, compared with the Zacks Semiconductor General industry's gain of 2.1%.
Let's see how things are shaping up for this announcement.
Factors at Play
We note that Intel's growing focus on areas with better growth prospects, such as artificial intelligence (AI), autonomous cars and the Internet of Things (IoT), is a key catalyst.
In this regard, the acquisition of Mobileye is a significant development that will boost its presence in the autonomous vehicle market. Further, the recently completed divestiture of the security business will help the company focus on these fast-growing businesses.
However, increasing competition in the data center market is a headwind. Reportedly, Microsoft is collaborating with ARM chip-makers Qualcomm QCOM and Cavium to design ARM-based servers, which will run a major part of its cloud services going ahead. (Read More: Microsoft Says Yes to ARM-Based Chips for its Cloud Servers)
Moreover, per IDC data, worldwide server shipments decreased 3.5% year over year to 2.55 million units in the fourth quarter of 2016. The research firm cited a slowdown in hyperscale datacenter growth and a continued drag from declining high-end server sales as the primary reasons behind this decline.
The data center business now comprises a major part of Intel's overall business (29% of 2016 revenues). Hence, a decline in server shipments amid intensifying competition is a major concern for the company.
Moreover, sluggish PC market growth will also continue to hurt Client Computing Group (55.8% of revenues) growth. Despite strong PC shipments in the first quarter, as per data available from both Gartner and IDC, we believe they are not sufficient to drive significant growth for the segment in the near term. (Read More: Strong PC Shipments Witnessed in Q1: Gartner, IDC)
Earnings Whispers
Our proven model does not conclusively show that Intel will beat earnings this quarter. This is because a stock needs to have both a positive Earnings ESP and a Zacks Rank #1 (Strong Buy), 2 (Buy) or 3 (Hold) for this to happen. That is not the case here as you will see below.
Zacks ESP: Both the Most Accurate estimate and the Zacks Consensus Estimate are pegged at 65 cents. Hence, the difference is 0.00%. You can uncover the best stocks to buy or sell before they're reported with our Earnings ESP Filter.
Zacks Rank: Intel carries a Zacks Rank #4 (Sell). We caution against stocks with a Zacks Rank #4 or 5 (Sell-rated stocks) going into the earnings announcement, especially when the company is seeing negative estimate revisions.
Stocks to Consider
You could consider the following stocks with a positive Earnings ESP and a favorable Zacks Rank:
Teradyne TER, with an Earnings ESP of +2.63% and a Zacks Rank #1. You can see the complete list of today's Zacks #1 Rank stocks here.
Seagate Technology STX, with an Earnings ESP of +3.77% and a Zacks Rank #2.
Next-Generation Personal Music Server – BRIO by OraStream – PR Newswire (press release)
SINGAPORE, April 21, 2017 /PRNewswire/ -- OraStream Private Limited has launched BRIO by OraStream ("BRIO"), a next-generation consumer music server. BRIO is a novel personal music server for consumers to stream music at native resolution. It lets users stream 16-bit/44kHz up to 24-bit/192kHz resolution audio, which delivers all the digital information needed for true musical reproduction.
Consumers can choose from three levels of service:
BRIO streams the best possible music fidelity at any given time and place by means of OraStream's patented quality-adaptive streaming technology. OraStream will also power Xstream, Neil Young's streaming music service.
Celebrated singer-songwriter Neil Young, who has passionately pursued the goal of musical fidelity for many years, says, "OraStream's technology delivers the best fidelity one would ever hear with digital music streaming today. As bandwidth increases, the music will increase in quality to the highest level possible, subject only to the quality of the original music source."
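To make the bandwidth-to-quality relationship concrete, here is a toy sketch of adaptive resolution selection. It is purely illustrative and is not OraStream's patented algorithm; the tier names, bitrate thresholds and headroom factor are assumptions.

```python
# Illustrative only: pick the highest audio tier a measured connection can
# sustain. Not OraStream's algorithm; tiers and thresholds are assumptions.
AUDIO_TIERS = [
    ("24-bit/192kHz", 9216),   # approx. kbps for uncompressed stereo at this tier
    ("24-bit/96kHz", 4608),
    ("16-bit/44kHz", 1411),
    ("lossy fallback", 320),
]

def pick_tier(measured_kbps: float, headroom: float = 0.8) -> str:
    """Return the highest-resolution tier the connection can sustain."""
    usable = measured_kbps * headroom  # keep a margin for jitter and overhead
    for name, required_kbps in AUDIO_TIERS:
        if usable >= required_kbps:
            return name
    return AUDIO_TIERS[-1][0]

print(pick_tier(12000))  # -> "24-bit/192kHz"
print(pick_tier(2000))   # -> "16-bit/44kHz"
```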
OraStream CEO Frankie Tan says, "There are many solutions available to stream consumers' music library in-home. What's unique in BRIO is the ability to stream consumers' music library remotely, 'on-the-go' or 'in-car'. It offers the freedom to listen to one's music library at native resolution anywhere with an internet connection."
About OraStream Private Limited
The company's mission is to reshape mobile cloud music. Its adaptive streaming platform powers next-generation music streaming based on 16/24-bit resolution lossless audio. OraStream Connect is a digital supply chain to deliver music streaming at the best possible musical fidelity to consumers. BRIO by OraStream is a music library-player and streaming server to stream personal music and connected cloud-music services at native resolution.
For more information, visit http://www.orastream.com/brio or email frankie@orastream.com.
SOURCE OraStream Private Limited
Getting to grips with server and storage virtualisation – Cloud Pro
The way virtualisation has been talked about over the last decade, you would be tempted to think every server and storage system has had the virtualisation treatment, but many organisations still haven't completely climbed aboard the virtualisation ship headed for cloud nirvana. That said, there is still time, and the benefits still stand.
According to Kong Yang, head geek at SolarWinds, virtualisation can bring many benefits to a business, from cost savings and flexibility to making IT workflows and processes more efficient and effective. "However, one of the key benefits is the ability to abstract infrastructure resources, which allows for the re-distribution of resources to applications at a radically faster rate," he says.
He adds the need to embrace virtualisation comes hand-in-hand with growing hybrid IT environments. The recent SolarWinds IT Trends Report 2017 reflects this, as it shows 52% of companies have server virtualisation included within their hybrid IT strategy.
Yang says hybrid IT is very much a reality for UK businesses, as many host some of their infrastructure in the cloud while also maintaining some on-premises. "In fact, our research shows that in the past 12 months, UK organisations have migrated applications (69%), storage (54%), and databases (37%) to the cloud -- this is more than any other area of IT," he says.
Consolidating servers and storage
So, what's the best way to go about consolidating servers and storage with virtualisation? It is critical to first understand the objectives of the virtualisation project and the outcomes that are desired. Often virtualisation forms part of a larger project, such as an application re-platform or a datacentre consolidation or migration, and it's important to understand how the larger picture affects the specific project.
"Once the objectives are clear, organisations should consider the individual workloads and operational areas to be implemented and how the implementation programme is to be conducted," says Pete Hulme, data centre technical lead at Dimension Data.
"For example, is it acceptable (or desirable) for test and development to share a platform with production? And is it important to be able to produce 'clones' of production for test purposes?"
He adds that companies should consider how business continuity and disaster recovery will be conducted and how the data management tools interact with the virtualisation platform. They should also consider how security and segmentation are to be managed, and who is to hold authority and control.
"Once these decisions are made it should then be possible to select a virtualisation platform for each component and to understand how these will interact and interoperate. It is essential to consider the management platform and interfaces that will be required to deploy and operate the platform and how the processes will interact with both new and existing processes," Hulme says.
Best practices
When virtualising infrastructure there are several best practices to be aware of. All x86 servers are now candidates for virtualisation. But according to Richard Stinton, enterprise solutions architect at iland, the main blockers to virtualisation are normally the risk associated with the migration process, especially downtime, and licensing policies of the software being run, the best example being Oracle.
"For this reason, the main servers still running on physical tend to be large transactional databases, such as SQL Server clusters, and Oracle," he says.
David Cottingham, director of XenServer product management and partner engineering at Citrix, says organisations should understand the characteristics of the workloads they are trying to virtualise, in terms of their CPU, memory, and I/O needs.
"A classic problem in virtualisation is for an overly-optimistic administrator to attempt to pack too many virtual machines onto a server and the end users experiencing poor performance when load is high," he says.
Technologies such as dynamic memory control and storage caching can help pack more VMs per server, and workload balancing technologies can also help, "especially where VMs' performance is recorded over time, and then recommendations can be made on how to change the distribution of VMs across physical servers," Cottingham adds.
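As a rough illustration of the headroom checks Cottingham describes, the sketch below refuses to place a VM on a host whose recorded peaks would be pushed past a utilisation ceiling. The field names, ceiling and figures are assumptions for illustration, not any vendor's balancing algorithm.

```python
# A minimal sketch of a placement headroom check; thresholds and fields are
# assumptions, not a specific product's workload-balancing logic.
from dataclasses import dataclass

@dataclass
class HostStats:
    cpu_peak: float        # peak CPU utilisation observed over time, 0..1
    mem_used_gb: float
    mem_total_gb: float
    iops_peak: float
    iops_capacity: float

def can_place(host: HostStats, vm_cpu: float, vm_mem_gb: float, vm_iops: float,
              ceiling: float = 0.8) -> bool:
    """Refuse placements that would push any recorded peak past the ceiling."""
    cpu_ok = host.cpu_peak + vm_cpu <= ceiling
    mem_ok = (host.mem_used_gb + vm_mem_gb) / host.mem_total_gb <= ceiling
    io_ok = (host.iops_peak + vm_iops) / host.iops_capacity <= ceiling
    return cpu_ok and mem_ok and io_ok

host = HostStats(cpu_peak=0.55, mem_used_gb=180, mem_total_gb=256,
                 iops_peak=40_000, iops_capacity=100_000)
print(can_place(host, vm_cpu=0.10, vm_mem_gb=16, vm_iops=5_000))  # True
```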
For storage, different issues come into play. Tom O'Neill, CTO international at all-flash storage provider Kaminario, believes the most important aspect of virtualising storage is to ensure the performance and capacity aspects are sized correctly.
"Storage is often the slowest part of any IT project and virtualised workloads often drive storage solutions harder," he says.
This is due to the I/O blender effect of multiple layers of virtualisation. Friendly sequential I/O can become increasingly random as extra layers of virtualisation are introduced. The impact of random vs sequential is much reduced with modern storage media like flash, but it's still considerable because of the caching and CPU deployed within the storage device (SSD).
Virtualising to move to the cloud
Virtualisation can also be considered a stepping stone to the cloud. However, if organisations opt for a private cloud, they should have a few resources in place - VM templates, resource pools, a user interface for self-service and a request process.
"VM templates let individuals requesting a new VM provision [it] themselves automatically, while resource pools designate a maximum quantity of resources that an end user can consume," says Arun Balachandran, product manager, at ManageEngine. "Organisations should also select the right tool as a user interface for self-service and have a request process in place."
He adds that in moving to private cloud, organisations must confirm that it remains cost-effective by means of constant performance monitoring and capacity planning.
"Without right-sizing VMs and failing to track and anticipate resource needs, businesses could face problems such as VM sprawl and resource overconsumption. Businesses should plan for the long term to ensure they have enough resources on hand to meet future business demands. To keep costs low, they can use low cost hypervisors or hypervisors from multiple vendors," he concludes.
Oracle Embraces Containers to Speed Cloud Apps – EnterpriseTech
Oracle's databases and developer tools can now be pulled as images from a Docker container registry as the partners look to speed development of cloud-native database applications.
Oracle becomes the latest enterprise IT vendor to jump on the Docker container bandwagon as it seeks to expand its reach in the public cloud market. Among the container-based application, middleware and development tools made available on the container platform are Oracle's MySQL database and its WebLogic server. Those tools are in addition to the more than 100 images of Oracle products already available on Docker Hub, its cloud-based image registry.
Separately, Docker said Wednesday (April 19) it is partnering with Cisco Systems (NASDAQ: CSCO), Hewlett Packard Enterprise (NYSE: HPE) and Microsoft (NASDAQ: MSFT) to speed deployment of secure applications as micro-services in the cloud or on-premises.
The partnerships reinforce Docker's assertion that container advances are extending to "more mainstream deployments", from computing and servers to, now, databases and related development tools. Earlier this week, Docker unveiled what it described as a "lego set" of standard container components and frameworks called the Moby Project. "Essentially anything that can be containerized can be a Moby component, providing a great opportunity for collaboration with other projects outside of Docker," noted Solomon Hykes, company founder and CTO.
Oracle (NYSE: ORCL), which launched a major cloud push last year, is betting harried applications developers will turn to application containers to accelerate delivery of secure enterprise workloads to production. In February, Oracle rolled out a data integrator cloud service designed to accelerate support for real-time analytics across enterprises.
The extended partnership with Docker helps bring "bedrock software" to enterprise application developers via a maturing container infrastructure, asserted Mark Cavage, Oracle's vice president of software development. Application containers are "revolutionizing the way developers build and deploy modern applications, but mission-critical systems in the enterprise have been a holdout until now," Cavage added in a statement announcing the partnership.
Oracle said the new container images on the Docker Store could be downloaded now from a public cloud or on-premises servers, and then deployed on virtual machines, bare metal or managed containers.
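As a hedged sketch of what pulling and running one of these images might look like with the Docker SDK for Python, see below. The repository name, tag and settings are assumptions; the actual Docker Store listing should be checked for the exact image name and any licence-acceptance steps.

```python
# Hedged illustration using the Docker SDK for Python (pip install docker).
# The repository/tag and environment values are assumptions for demo purposes.
import docker

client = docker.from_env()

# Pull an Oracle-maintained image (assumed repository and tag).
client.images.pull("mysql/mysql-server", tag="5.7")

# Run it as a detached container, mapping the usual MySQL port.
container = client.containers.run(
    "mysql/mysql-server:5.7",
    name="oracle-mysql-demo",
    environment={"MYSQL_ROOT_PASSWORD": "change-me"},  # demo credential only
    ports={"3306/tcp": 3306},
    detach=True,
)
print(container.status)
```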
Meanwhile, Docker announced a separate application modernization effort designed to upgrade legacy applications as more companies shift to micro-services infrastructure. Docker said the service eliminates the need to modify source code or rework application architectures.
The service is based on the enterprise version of Docker along with hybrid cloud infrastructure from partners Cisco, HPE and Microsoft. A fourth partner, Avanade, is a Seattle-based IT consulting firm that works closely with Microsoft.
The program is based on "two realities facing enterprise IT organizations today," Docker COO Scott Johnston noted in a blog post: "Existing applications consume 80 percent of IT budgets, and most IT organizations responsible for existing apps are also tasked with hybrid cloud initiatives."
Application development and deployment on hybrid clouds via Docker received another boost this week when IBM (NYSE: IBM) said it plans to offer Docker's enterprise version on its Linux-based servers and Power-based systems. IBM claimed its Linux servers could support as many as 1 million Docker containers on a single system, while its Power platforms could reduce latency to boost the performance of analytics and other applications running in containers.
About the author: George Leopold
George Leopold has written about science and technology for more than 25 years, focusing on electronics and aerospace technology. He previously served as Executive Editor for Electronic Engineering Times.
IBM expands Bluemix cloud developer console – The Stack
IBM has announced an expansion to the developer console for the company's cloud platform, Bluemix.
The expanded developer console will include templates of applications that are ready to code to user specification. The templates include database integration, as well as cognitive and security services.
This is expected to significantly reduce the amount of time it takes for developers to set up microservices and roll out applications. Using the templates as building blocks for applications, rather than starting from scratch, developers can easily and quickly create cloud apps across mobile, web, and backend systems.
Bluemix developers can now access the dashboard to create a customized template that includes the type of building block they would like to create, followed by the type of services they want to incorporate into the new app. Once a language is determined, the console generates a project pattern that can then be downloaded and edited to user specifications. Upon completion, the application may be deployed locally or directly to Bluemix.
The new expansion includes the Mobile App template, which allows developers to create applications that integrate with mobile services such as Push and Mobile Analytics. It also includes Backend for Frontend, which integrates backend patterns such as data, security, and cognitive intelligence into new applications. A template for WebApp is available, to assist in the development of a client-side web app using tools like Gulp, Sass, and React.
The developer console expansion also builds on last week's announcement of new techniques for writing Bluemix-compatible microservices in a variety of languages, creating the flexibility for enterprises to assign several different developers to work on an app in different languages.
Now, with the expanded developer console, users can access microservice templates to get starter code for a single function that can be reused across multiple clients, with built-in Dockerfile to run the microservice in Docker container environments.
The expansion also includes integration with data services IBM Cloud Object Storage and Cloudant, and security services using AppID.
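The generated starters themselves are not reproduced here, but a single-function microservice of the kind described might resemble the hedged Flask sketch below, which reads a Cloudant binding from the Cloud Foundry VCAP_SERVICES variable. The route, response fields and the "cloudantNoSQLDB" service key are assumptions for illustration.

```python
# Not the console's actual generated starter; a hedged sketch of the shape such
# a single-function microservice might take. Service key name is assumed.
import json
import os

from flask import Flask, jsonify

app = Flask(__name__)

def cloudant_credentials() -> dict:
    """Return the bound Cloudant credentials, or an empty dict when unbound."""
    services = json.loads(os.environ.get("VCAP_SERVICES", "{}"))
    instances = services.get("cloudantNoSQLDB", [])  # assumed service key
    return instances[0]["credentials"] if instances else {}

@app.route("/health")
def health():
    creds = cloudant_credentials()
    return jsonify(status="up", cloudant_bound=bool(creds))

if __name__ == "__main__":
    # A template's Dockerfile would typically expose this port in the image.
    app.run(host="0.0.0.0", port=int(os.environ.get("PORT", 8080)))
```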
Earlier this month, IBM announced that Bluemix would become the first major global cloud provider to make the NVIDIA Tesla P100 GPU accelerator available on the IBM Cloud platform. Clients are now able to equip IBM Bluemix bare metal cloud servers with NVIDIA accelerators, helping enterprises to run compute-heavy workloads including AI and analytics more quickly and efficiently.
AWS v Oracle: Mark Hurd schooled on how to run a public cloud that people actually use – The Register
Amazon's AWS infrastructure boss has slapped down Oracle co-CEO Mark Hurd after the latter boasted that Big Red needs fewer data centers because its systems are, apparently, twice as good.
Writing on his personal blog this week, James Hamilton, an Amazon distinguished engineer, said the suggestion that Oracle can compete with the cloud world's Big Three AWS, Azure and Google Cloud by building fewer data centers with "better servers" is, to loosely paraphrase the exec, a bit bonkers.
Hurd made that boast in a Fortune interview last week.
Hurd was trying to explain why cheapskate Oracle had spent just $1.7bn on increasing its cloud data center capacity in 2016, whereas the Big Three together had blown through $31bn that year. The bigwig insisted third-tier Oracle is competitive in the market despite this scrimping.
Straight off the bat we can think of two problems with the database giant's approach: redundancy and latency. Fewer data centers means that when a large IT breakdown happens, and even AWS has epic meltdowns, the impact will be greater because you've put all your eggs in a few baskets. And if you don't have many data centers spread out over the world, customers will find their packets take longer to reach your servers than a rival's boxes. That's not particularly competitive.
Hamilton had similar thoughts, and took the opportunity to lay a few facts down on Hurd. If you're interested in the design of multi-data-center systems, it's a rare insight into Amazon's thinking.
"Of course, I don't believe that Oracle has, or will ever get, servers two-times faster than the big three cloud providers," Hamilton opened with.
"I also would argue that 'speeding up the database' isn't something Oracle is uniquely positioned to offer. All major cloud providers have deep database investments but, ignoring that, extraordinary database performance won't change most of the factors that force successful cloud providers to offer a large multi-national data center footprint to serve the world."
The Amazon man also brought up the big costs and engineering limits that arise when building extremely large data centers. At some point, energy bills, network infrastructure overheads and other factors will negate the cost benefits of throwing more servers into a single region, he said. That's another reason why multiple smaller centers are better than a few stuffed-to-the-gills warehouses.
AWS limits its facilities to a 25 to 30MW range, as scaling beyond that begins to diminish cost returns, we're told.
Hamilton also notes the logistical issues that arise when a cloud provider relies too heavily on "last mile" networks to carry traffic for entire regions, rather than building lots of individual facilities connected via a private backbone, as Amazon prefers to do. He also said businesses prefer to use nearby centers not just for latency reasons but also for legal reasons: an organization in one country may not be able to store particular data in, say, the United States, so having a healthy choice of facilities scattered across the world is more customer friendly than a limited number.
"Some cloud computing users really want to serve their customers from local data centers and this will impact their cloud provider choices. In addition, some national jurisdictions will put in place legal restrictions that make it difficult to fully serve the market without a local region," Hamilton said.
"Even within a single nation, there will sometimes be local government restrictions that won't allow certain types of data to be housed outside of their jurisdiction. Even within the same country [they] won't meet the needs of all customers and political bodies."
The comments underscore just how divided the various cloud compute providers remain in their approaches from both an engineering and business perspective. Oracle, for example, has opted to push its cloud as part of a larger Exadata server brand, while Amazon focuses on the reliability and scale of its AWS network, and Google pushes its Cloud to businesses by promising link-ups to its G Suite and AdWords offerings.
AWS held a summit for customers in San Francisco on Wednesday, where it announced a bunch of stuff, summarized here, a lot of which you'll have seen previewed at re:Invent in November. The announcements include a DynamoDB accelerator, the availability of Redshift Spectrum for running really large S3 storage queries, EC2 F1 instances with FPGAs you can program, AWS X-Ray with Lambda integration, the arrival of Lex, and Amazon Aurora with PostgreSQL compatibility.
The F1 instances are pretty interesting. One startup in this space to watch is UK-based AWS partner Reconfigure.io, which is offering an alpha-grade toolchain to build and run Go code on the Xilinx UltraScale Plus FPGAs attached to F1 virtual machines. That's much nicer than fooling around with hardware languages to accelerate bits of your codebase in silicon.
Cloud Computing Chips Changing – SemiEngineering
An explosion in cloud services is making chip design for the server market more challenging, more diverse, and much more competitive.
Unlike datacenter number crunching of the past, the cloud addresses a broad range of applications and data types. So while a server chip architecture may work well for one application, it may not be the optimal choice for another. And the more those tasks become segmented within a cloud operation, the greater that distinction becomes.
This has set off a scramble among chipmakers to position themselves to handle more applications using more configurations. Intel still rules the datacenter, a banner it wrestled away from IBM with the introduction of commodity servers back in the 1990s, but increasingly the x86 architecture is being viewed as just one more option outside of its core number-crunching base. Cloud providers such as Amazon and Google already have started developing their own chip architectures. And ARM has been pushing for a slice of the server market based upon power-efficient architectures.
ARM's push, in particular, is noteworthy because it is starting to gain traction in a number of vendors' server plans. Microsoft said last month it would use ARM server chips in its Azure cloud business to cut costs. "This seemed like a dream just a couple of years ago, but a lot of people are putting money into it big time right now," said Kam Kittrell, product management group director in the Digital & Signoff Group at Cadence. "As time goes on, what we'll see is that instead of just having a general-purpose server farm that runs at different frequencies but basically has a different chip in it (depending on whether it's high performance or not), you're going to see a lot of different types of compute farms for the cloud."
Fig. 1: ARM-based server rack. Source: ARM
Whether ARM-based servers will succeed just because they use less power than an x86 chip for specific workloads isn't entirely clear. Unlike consumer devices, which typically run in cycles of a couple of years, battles among server vendors tend to move in slow motion, sometimes over a decade or more. But what is certain is that inside large datacenters, power expended for a given workload is a competitive metric. Powering and cooling thousands of server racks is expensive, and the ability to dial power up and down quickly and dynamically can save millions of dollars per year. Already, Google and Nvidia have publicly stated that a different architecture is required for machine learning and neural networking.
In looking at the power-performance tradeoffs, and how to target the designs properly, there are two distinct things that cloud has accelerated in both the multicore and networking space. "What is common between these chips is that they are pushing whatever the bleeding edge of technology is, such as 7nm," Kittrell said. "You've got to meet the performance, there's no question. But you've also got to take into account the dynamic power in the design. At 65nm we got used to power being dictated by leakage all the way through 28nm. At 28nm, which was the end of the planar transistor, dynamic power became more dominant. So now you're having to study the workloads on these chips in order to understand the power. Even today, datacenters use 2% of the power in the United States, so they are a humongous consumer. And when it comes to power, it's not just how much power the chip uses, it's the HVAC in order to keep the datacenter cool. In essence, you've got to keep the dynamic power under target workloads under control, and the area has to be absolutely as small as possible. Once you start replicating these things, it can make a tremendous difference in the cost of the chip overall. The more switching nodes you put in there, the more power it consumes overall."
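The dynamic-power point is easy to see with the standard switching-power relation, P_dyn ≈ α·C·V²·f, sketched below with made-up numbers purely for illustration.

```python
# Back-of-the-envelope illustration of dynamic (switching) power.
# The figures below are invented for illustration, not measurements of any chip.
def dynamic_power(activity: float, capacitance_f: float, voltage_v: float,
                  frequency_hz: float) -> float:
    """Dynamic power in watts: alpha * C * V^2 * f."""
    return activity * capacitance_f * voltage_v ** 2 * frequency_hz

baseline = dynamic_power(0.2, 1.0e-9, 0.9, 3.0e9)    # ~0.49 W for one block
lower_v = dynamic_power(0.2, 1.0e-9, 0.72, 3.0e9)    # same block, 20% lower V

print(f"baseline: {baseline:.2f} W, reduced voltage: {lower_v:.2f} W")
print(f"saving: {1 - lower_v / baseline:.0%}")        # ~36%, since P scales with V^2
```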
Slicing up the datacenter
Changes have been quietly infiltrating datacenters for some time. While there are still racks of servers humming along in most major data centers, a closer look reveals that not all of them are the same. There are rack servers, networking servers, and storage servers, and even within those categories the choices are becoming more granular.
While there is still a need for enterprise data centers to have a general server/traditional server primarily based on Intel Xeon-core-based processors with a separate NIC card connecting to external networking where the switching and routing occur, we see in these large-scale cloud datacenters that they have a number of specific applications that they feel can be optimized for those applications within the cloud, within that data center, said Ron DiGiuseppe, senior strategic marketing manager in the Solutions Group at Synopsys.
As an example, DiGiuseppe pointed to Microsoft's Project Olympus initiative under its Azure business, which defines a server targeting different applications such as web services. Microsoft is large scale; they are estimating that 50% of their data center capacity is allocated to web server applications. And obviously, every cloud data center would be different. But they wanted to have servers that can be optimized for the web applications. They announced last month that they have five different configurations of servers targeting segment-optimized applications.
Another example would be database services, he said. This means very fast, and therefore low-latency, search and indexing for databases, such as for financial applications. With that in mind, the system architectures are being optimized for those applications, and the semiconductor suppliers are architecting their chips to have acceleration capabilities tied to the end applications. Therefore, you could optimize the semiconductor by adding features to support those different segmented applications.
That could include a 64-bit ARM processor-based server chip or an Intel Xeon-based server chip in a database services application, where database access is accelerated by adding very close non-volatile storage using SSDs or NAND flash SSDs through PCI Express, which connects directly to the processor using the NVMe protocol. The goal is to minimize latency and to be able to store and access commands.
Seeing through the fog
While equipping the datacenters is one trajectory, a second one is reducing the amount of data that floods into a datacenter. There is increasing interest in using the network fabric to do at least some of the signal processing, data processing, and DSP processing to extract patterns and information from the data. So rather than pushing all of this data up through the pipe into the cloud, the better option is to refine that data so only a portion needs to be processed in the cloud servers.
This requires looking at the compute equation from a local perspective, and it opens up even more opportunities for chipmakers. Warren Kurisu, director of product management in the embedded division at Mentor, a Siemens Business, said current engagements are focused on working with companies that build solutions for local processing, local data intelligence, and local analytics so that the cloud datacenters are not flooded with reams of data that clog up the pipes.
One of the key areas of focus here involves intelligent gateways for everything from car manufacturing to breakfast cereal and pool chemicals. It requires multicore processors in the gateway that can enable a lot of the fog processing, a lot of data processing in the gateway, he said. And that adds yet another element, which is security.
"Security is the number one question, so we have made a very huge focus on being able to create a gateway that leverages hardware security built in to the chip and the board and establish a complete software chain of trust, so that anything that gets run and loaded onto that gateway, any piece of software, is authenticated and validated through cryptology, through certificates and other things," Kurisu said. "But you need some processing power to do just that sort of stuff. There needs to be some sort of hardware security available. One of our key demonstration platforms is on the NXP i.MX6 processor, which has a high-assurance boot feature in it. The high-assurance boot basically has a key burned into the silicon, and we can leverage that key to be able to establish that chain of trust. If there isn't that hardware mechanism enabled in the system, then we can leverage things like secure elements that might be on the board that would all do the same thing. There would be some sort of crypto element there or a key used to establish the whole chain of trust."
A change in thinking
The key to success comes down to thinking about these chip designs very holistically, Kurisu added, "because when it comes to cloud in the datacenter, and if you think about Microsoft Azure or Amazon Web Services or any of the others, the types of capabilities that are available from the cloud datacenter down to the actual embedded device, these things need to work in tandem. If you have a robot controller, and you need to do a firmware update, and you want to initiate that from the cloud, how that gets enabled on the end device is tied very explicitly into how the operation is invoked from the cloud side. What is your cloud solution? That's going to drive what the embedded solution is. You've got to think of it as a system, and in that way the stuff that happens in the datacenter is very closely related to the things that might seem very disconnected on the edge. But how the IoT strategy is implemented is somewhat tied together, so it all has to be considered together."
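Circling back to the chain-of-trust point, the sketch below shows the kind of signature check such a scheme ultimately relies on: an artefact is loaded only if it verifies against a trusted public key, which a high-assurance boot would root in hardware. It is a generic illustration using Python's cryptography package and an assumed RSA key, not NXP's or Mentor's actual implementation.

```python
# Generic illustration of verifying a signed artefact against a trusted RSA
# public key; in a real chain of trust the key would be rooted in hardware.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import padding

def verify_artifact(artifact: bytes, signature: bytes, public_key_pem: bytes) -> bool:
    """Return True only if the artefact was signed by the trusted key."""
    public_key = serialization.load_pem_public_key(public_key_pem)
    try:
        public_key.verify(signature, artifact, padding.PKCS1v15(), hashes.SHA256())
        return True
    except InvalidSignature:
        return False

# Usage sketch: refuse to run unauthenticated software on the gateway.
# if not verify_artifact(firmware_bytes, sig_bytes, trusted_key_pem):
#     raise RuntimeError("firmware failed authentication; refusing to load")
```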
This holistic approach could also have an impact on chip designs within this market, and open doors for some new players that have never even considered tapping into this market in the past.
How robots will retool the server industry – ITProPortal
Larry Ellison, founder of Oracle, summed up the concept of cloud computing very succinctly: all it is, is a computer attached to a network. Ellison and Oracle have gone on to embrace both open source and cloud technologies including OpenStack, but the basic premise that it starts with a physical server and a network still holds true.
The server industry is going through massive change, driven in large part by advances in open source software, networking and automation. The days of monolithic on-site server rooms filled with rack space, blinking lights and buzzing air-con are gone. However, the alluring simplicity of this concept is not quite how it works in the real world.
Organisations that want to run a private cloud on premises, or a hybrid with public cloud, must first master bare metal servers and networking, and this is causing a major transition in the datacentre.
Instead, large organisations are deploying hybrid clouds, running on multiple smaller servers distributed across far-flung sites around the globe. These are being deployed and managed, as demand dictates, by robots, rather than IT administrators.
There is minimal human interaction, as the whole whirligig turns on slick IT automation, designed and directed by an IT technician on a dashboard on a laptop in any physical location.
Suddenly, traditional IT infrastructure is less gargantuan, and less costly, if no less important. But servers remain part of a bigger solution, residing in software. It is crucial CIOs also make use of their existing hardware to take full advantage of the opportunities the cloud offers, rather than just tearing it out, ripping it up and starting again.
It does not make sense to renew their infrastructure at great expense; such squandering ultimately hinders progress. New architectures and business models are emerging that will streamline the relationship between servers and software, and make cloud environments more affordable to deploy.
What do robots bring to the party?
The automation of data centres, to do in minutes what a team of IT administrators used to do in days, does present a challenge to organisations.
Reducing human interaction could be linked to fear of job losses, but instead IT Directors and CIOs will find that they can redeploy the workforce to focus on higher value tasks, giving them more time back to interact with their infrastructures and enabling them to extract real value from their cloud architectures.
In addition, automation opens up the field to new and smaller players. Rather than requiring an organisation to spend a great deal of time and money on specialist IT consultancy, automation and modelling allows smaller organisations to take advantage of the opportunities offered by cloud, and offer their service more effectively.
For example, imagine you are a pharmaceutical company analysing medical trial data. Building a Hadoop big data cluster to analyse this data set could have previously taken ten working days. Through software modelling on bare metal, this workload can be reduced to minutes, allowing analysts to do what they need to do more quickly, finding the trends or results from a trial, and bringing new drugs to market faster.
Deployment and Expansion
The emergence of big data, big software, and the internet of things is changing how data centre operators design, deploy, and manage their servers and networks. Big software is a term we at Canonical have coined to describe a new era of cloud computing. Where applications were once primarily composed of a couple of components on a few machines, modern workloads like Cloud, Big Data & IoT are made up of multiple software components and integration points across thousands of physical and virtual machines.
The traditional practice of delivering scale on a limited number of very large machines is being replaced by a new methodology, where scale is achieved via the deployment of many servers across many environments.
This represents a major shift in how data centres are deployed today, and presents administrators with a more flexible way to drive value to cloud deployments, and also to reduce operational costs. A new era of software (web, Hadoop, MongoDB, ELK, NoSQL) is enabling them to make more of their existing hardware. Indeed, the tools available to CIOs for leveraging bare metal servers are frequently overlooked.
Beyond this, new software and faster networking is starting to allow IT departments to take advantage of new workload benefits from distributed, heterogeneous architectures. But we are at a tipping point, as much of the new server software and technology takes hold, and comes to light.
OpenStack has been established as a public cloud alternative for enterprises wishing to manage their IT operations as a cost-effective private or hybrid cloud environment. Containers have brought new efficiencies and functionality over traditional virtual machine (VM) models, and service modelling brings new flexibility and agility to both enterprises and service providers.
Meanwhile, existing hardware infrastructure can be leveraged to deliver application functionality more effectively. What happens from here, in the next three-to-five years, will determine how end-to-end solutions are architected for the next several decades.
Next Generation Solutions
Presently, each software application has different server demands and resource utilisation. Many IT organisations tend to over-build to compensate for peak load, or else over-provision VMs to ensure enough capacity in the future. The next generation of hardware, using automated server provisioning, will ensure today's IT professionals don't have to perform capacity planning in five years' time.
With the right provisioning tools, they can develop strategies for creating differently configured hardware and cloud archetypes to cover all classes of applications within their current environment and existing IT investments.
This way, it is possible for administrators to make the most of their hardware by having the ability to re-provision systems for the needs of the data centre. For instance, a server being used for transcoding video 20 minutes ago is a Kubernetes worker node now, a Hadoop MapReduce node later, and something else entirely after that.
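As a toy illustration of that re-provisioning idea, the sketch below reassigns roles across a pool of identical nodes as demand shifts; the roles, node names and greedy assignment are hypothetical placeholders, with the actual image deployment left to a provisioning tool.

```python
# Toy role planner: the same bare-metal nodes get different jobs as demand
# shifts. Everything here (roles, demand, node names) is a placeholder.
from collections import Counter

nodes = [f"rack3-node{i:02d}" for i in range(1, 7)]

def plan(demand: dict) -> dict:
    """Greedily map nodes to roles in proportion to demand (illustrative)."""
    assignment, pool = {}, list(nodes)
    for role, count in sorted(demand.items(), key=lambda kv: -kv[1]):
        for _ in range(min(count, len(pool))):
            assignment[pool.pop(0)] = role
    for node in pool:
        assignment[node] = "spare"
    return assignment

morning = plan({"video-transcode": 4, "kubernetes-worker": 2})
evening = plan({"hadoop-mapreduce": 5, "kubernetes-worker": 1})
print(Counter(morning.values()), Counter(evening.values()))
```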
These next generation solutions, affording a high degree of automation, bring new tools, efficiencies, and methods for deploying distributed systems in the cloud. The IT sector is in a transition period, between the traditional scale-up models of the past and the scale-out architecture of the future, where solutions are delivered on disparate clouds, servers, and environments simultaneously.
Mark Baker, OpenStack Product Manager, Canonical