Category Archives: Cloud Servers
Serverless Architectures from an MSP’s Point of View – MSPmentor
Serverless architectures are getting a lot of focus now as a next-gen application platform and potentially as a successor to containers on public cloud platforms.
The reason for the interest is clear: companies want to minimize infrastructure management overhead by relying on the cloud platform to orchestrate cloud services rather than managing virtual servers themselves.
Public cloud platforms have already removed the burden of managing physical infrastructure.
Now serverless architecture stitches together various cloud services so that companies can simplify IT management while automatically benefiting from the frequent release of new cloud service capabilities.
The leading cloud platforms have noted this interest and new serverless options are coming out seemingly every day.
Of course, there are plenty of caveats, such as having to fit within the constraints of the service or set of services, a lack of visibility into what were traditionally points of monitoring concern, and a lack of familiarity on the part of many development teams.
But even so, adoption of serverless is quite discernible.
I encountered my first serverless application last summer when a company came to Logicworks looking for a managed services partner.
The challenges were immediately apparent.
First, there was no infrastructure to manage.
Seems obvious, but from the perspective of an infrastructure MSP, we're immediately looking for where we can add value.
Second, so much of the tooling and systems used to manage traditional infrastructure (or even fully virtualized infrastructure such as Amazon EC2) simply had no place in this world.
The challenges were similar to those we faced with the rise of containerized applications, but more pronounced.
In a containerized system, we found ways to do intrusion detection, log aggregation and host-level monitoring because we still had access to the host OS.
In a serverless system, customers that must meet PCI, HIPAA or HITRUST standards will have to wait for serverless-ready solutions.
These are real challenges for an MSP, and while solutions will come that help companies achieve specific compliance requirements, the bigger question faced by our industry is: what do we do when there's nothing to manage but the application itself?
We had encountered similar challenges with the rise of Platform as a Service (PaaS) but there was always room to run the more complex workloads that didn't fit within those constraints.
Now that the code is running directly on the services and the number and capabilities of those services have grown so much, the line between infrastructure and code is blurring even further than the traditional understanding of infrastructure as code (IaC).
Of course, while new methods and tools are always arriving in IT, almost nothing ever goes away entirely.
We still have mainframes and we will have monolithic applications running on traditional topologies for some time.
There will be room for an MSP to add value for a very long time.
But I'm still prompted to think about our place in a serverless world.
As an industry, we want to do more than just manage what will ultimately become legacy workloads.
And frankly, keeping good talent requires that we keep working on the cutting edge in addition to older platforms.
Also, we need to be able to work with those risk-welcoming verticals to be experts as the platforms mature and come into use by the slower moving, more risk-averse businesses down the line.
The most obvious take is that we will move up the stack as we've already done, getting closer to and sometimes owning the code deployment process, integrating with cloud services and providing an overlay of governance and expertise.
We would need to take this one step further by participating in the application architecture, suggesting services and helping teams integrate them.
Given the rate of change, having an MSP focused on keeping abreast of the tooling would be an asset.
Another nuance that differentiates serverless from PaaS is that we can use serverless services like Legos to build a platform or PaaS solution ourselves.
This would be a welcome difference from PaaS in that we can assemble the services into solutions that match our clients' needs more completely, then continuously improve that deployment as new functionality is developed.
Examples of this from our team have included writing AWS Lambda functions on a client's behalf, incorporating AWS Simple Systems Manager into our existing automation framework, and gluing together both cloud-native services and third-party ISV offerings.
We quickly came to appreciate the speed and interoperability these serverless services provided.
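To make the Lambda example above concrete, here is a minimal sketch of the kind of housekeeping function an MSP might write on a client's behalf, assuming boto3 and standard AWS credentials; the required tag key and the SNS topic are hypothetical and used only for illustration.

```python
# Hypothetical example: a small AWS Lambda function of the sort an MSP might
# run on a client's behalf, flagging EC2 instances that are missing a
# required tag. The tag key and SNS topic ARN are placeholders.
import os
import boto3

REQUIRED_TAG = os.environ.get("REQUIRED_TAG", "cost-center")
TOPIC_ARN = os.environ.get("TOPIC_ARN")  # hypothetical SNS topic

ec2 = boto3.client("ec2")
sns = boto3.client("sns")

def handler(event, context):
    untagged = []
    for page in ec2.get_paginator("describe_instances").paginate():
        for reservation in page["Reservations"]:
            for instance in reservation["Instances"]:
                tags = {t["Key"] for t in instance.get("Tags", [])}
                if REQUIRED_TAG not in tags:
                    untagged.append(instance["InstanceId"])
    if untagged and TOPIC_ARN:
        # Notify the client's operations channel about non-compliant instances.
        sns.publish(
            TopicArn=TOPIC_ARN,
            Subject="EC2 instances missing required tag",
            Message="\n".join(untagged),
        )
    return {"untagged_count": len(untagged)}
```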
So, while we don't know the future of serverless, we're keeping our eye on this next stage of virtualization and are well aware that we need to stay in front of it.
We are increasingly encouraged by what we can do on our clients behalf using the serverless ecosystem.
Frankly, any MSP worth their salt is or should be having the same concerns.
Ken Ziegler is CEO of Logicworks.
Send tips and news to MSPmentorNews@Penton.com.
Unisecure Data Centers Offers 15% Discount On Cloud Server Hosting Services – HostReview.com (press release)
06:34:00 - 02 August 2017
Philadelphia, US, August 2, 2017 | We are delighted to announce that Unisecure Data Center is now offering 15% off Cloud hosting services. This offer is applicable for the whole month of August 2017 and can be availed by all new customers looking for Cloud hosting solutions as well as existing customers who want to shift to cloud computing services.
Unisecure has nearly 20 years of experience in the web hosting and data center industry. The organization is recognized for its expertise, quality of delivery, and fast 24x7x365 support. Unisecure has earned the trust of clients around the world in almost 20 countries, with a customer base of 40,000 clients including several Fortune 500 organizations.
"As a premier data center company, we needed to offer something unique to our clients which will help them to take an advantage of this fast growing technology in the server industry. While we have declared a few markdown offers before, this is the biggest opportunity we are putting forth to the people who want to experience the world of cloudcomputing services & solutions. We believe that organizations are intending to move from an on-premise IT infrastructure to the cloud, these organizations should profit the advantage of this offer and band together with us for their different facilitating needs" said Benjamin, Vice President - Business Development at Unisecure.
Olivia, Head of Business Development, says: "One of our main reasons for bringing down cloud costs is to give small businesses an opportunity to explore and grow by reducing IT cost."
About Unisecure
Unisecure is a US-based dedicated web server hosting and data center services provider with several world-class data centers in the USA. Unisecure started in 1996 and has since delivered numerous projects in the areas of data center services, dedicated servers, VPS hosting, colocation, cloud solutions and disaster recovery services.
Unisecure offers a 99.995% network uptime SLA. It has three privately owned, state-of-the-art data centers located in the US, catering to customers across the globe. The data centers are built on modern, redundant infrastructure, and Unisecure is among the few providers to offer both Linux and Windows hosting services. Its competitive offerings and service levels translate into customer satisfaction.
For more information, visit http://www.unisecure.com
How The Cloud Will Disrupt The Ad Tech Stack – AdExchanger
The Sell Sider is a column written by the sell side of the digital media community.
Today's column is written by Danny Khatib, co-founder and CEO at Granite Media.
One of the most powerful aspects of the cloud platform is the innovation created by the unbundling of component services. There is a full menu of options for every hardware and software component, and companies can mix and match to achieve their desired configuration, trading off service and cost for each component. No more monolithic apps.
For the web stack, a company can rent elastic hardware from a primary service like Amazon Web Services or Google Cloud, plug in content delivery network services from a different vendor, install basic application monitoring from yet another vendor, and the list goes on. A company can also run its independent data stack in parallel: storing logs at one provider, using one of many data pipeline services, and pushing data to a separate structured data warehouse while selecting a decoupled, best-in-class visualization tool to make sense of it all.
Changing any of these decisions at any layer has super-low friction and only requires one or two developers or operations employees to manage it all. Now that is disruption.
Isn't this how the ad tech stack should run, too? Let's imagine that future.
The ad stack of the future will be cloud-based, component-driven, functionally independent from parallel web and data stacks and will have every component decoupled and rebundled at the customer's discretion. Importantly, almost all layers of the existing ad stack will be reconceived as operational infrastructure, not as access to demand or supply.
The Basic Layer
Buyers and sellers will each run their own ad servers, and access to the general RTB bidstream between them will be a single component service for each party, which will often be managed by the cloud provider hosting the server. A server will be swappable without affecting access to the bidstream.
The bidstream itself will be a commodity delivery service, similar to basic web traffic: table stakes for cloud providers or component providers, with no charge other than the cloud resources used to manage the bidstream. The major cost decision point for both sellers and buyers will be the desired maximum queries per second to be supported by the server. If publishers want to manage more bids per second, they will have to pay for the resources to manage them, not for the value of the bids. Just imagine a content delivery network charging more for articles or users that monetize well. No, thank you.
Gone will be the days where publishers run 10 bidstreams in parallel, because there will no longer be a need to do so. Publishers will manage demand through one component pipe that doesn't affect other layers of the stack, and they will pay a cloud usage fee to manage it. Publishers will get a single unified auction, and buyers won't have to solve for deduplication anymore.
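To illustrate the unified-auction idea in the simplest possible terms, here is a toy sketch in Python; the demand sources, bid values and floor price are invented for illustration and are not drawn from the column.

```python
# Toy sketch of the "single unified auction" idea: collect bids from every
# demand source through one pipe and pick one winner, so buyers never have
# to deduplicate across parallel bidstreams. All values are invented.
bids = [
    {"buyer": "dsp-a", "price_cpm": 2.10},
    {"buyer": "dsp-b", "price_cpm": 2.45},
    {"buyer": "dsp-c", "price_cpm": 1.95},
]

def unified_auction(bids, floor_cpm=1.00):
    """Return the highest bid at or above the floor, or None if no bid clears."""
    eligible = [b for b in bids if b["price_cpm"] >= floor_cpm]
    return max(eligible, key=lambda b: b["price_cpm"], default=None)

print(unified_auction(bids))  # {'buyer': 'dsp-b', 'price_cpm': 2.45}
```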
The Service Layer
Around this basic layer, component services will flourish. A publisher's server could run one of several available auction engines that house the priority and decisioning rules to select a winning bid. It will enable intelligent bid filtering services to manage bidstream cost, and also many different internal monitoring services for bidstreams, ad serving reports, custom metrics and so on. A separate cookie-matching service will be easy to plug in, as will a creative diagnostics service to help detect pesky redirects and creatives that hurt user engagement.
The buyer's server will run in a similar fashion, with campaign management as a component. Server logs will be pushed to a parallel data stack for offline analysis by yet another service. The client SDK to fetch ads from the server will be a separate component, probably just open-source software. It's not tied to a particular server, and it's most definitely not tied to a particular demand source.
The Transaction Layer
Best-of-breed components will be designed to tackle secure stream connections, identity verification, transaction confirmation and financial settlement.
In the offline world, a seller can choose to accept Visa, MasterCard or American Express, and the buyer decides which to use. Similarly, a seller's server will be able to disclose what transaction, verification and settlement providers are supported, and buyers can respond with which service to choose, such as a preference for Moat or Integral Ad Science and how they prefer to pay.
The buyer will get to choose from a pre-approved list of vendors supported by the seller, then the seller's server will render the code and pixels required for third-party verification for a particular impression.
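As a rough sketch of what that disclosure-and-choice handshake might look like in code, the snippet below models it in Python; the field names, vendor identifiers and selection rule are assumptions made up for illustration, not part of any existing spec.

```python
# Illustrative only: a toy model of the seller/buyer negotiation described
# above. Service names and vendor identifiers are hypothetical.
SELLER_SUPPORTED = {
    "verification": ["moat", "ias"],
    "settlement": ["settle-now", "net30-escrow"],
}

BUYER_PREFERENCES = {
    "verification": ["ias", "moat"],   # ordered by buyer preference
    "settlement": ["net30-escrow"],
}

def negotiate(seller, buyer):
    """Pick the buyer's highest-ranked option that the seller supports."""
    chosen = {}
    for service, preferences in buyer.items():
        supported = set(seller.get(service, []))
        chosen[service] = next((p for p in preferences if p in supported), None)
    return chosen

if __name__ == "__main__":
    print(negotiate(SELLER_SUPPORTED, BUYER_PREFERENCES))
    # {'verification': 'ias', 'settlement': 'net30-escrow'}
```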
Financial settlement might be bundled with transaction confirmation or offered as a separate service. Any services that don't involve the handling of money should charge based on resources used for a given number of transactions. Any service that involves financial settlement can charge a percentage of revenue since there is financial risk to be managed, and money is the underlying resource used. Again, this is all functionally independent of access to demand, supply or any other layer in the stack.
Instead of using ads.txt, publishers will manage a name server to verify identity and defend against fraud, similar to how domain name server resolution works.
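For readers who want to see the DNS analogy in action, here is a minimal sketch that reads a TXT record with the dnspython library; the "_sellers" record name and the idea of publishing seller identity this way are hypothetical, used only to illustrate how such a lookup might work.

```python
# Loose analogy only: reading a DNS TXT record with dnspython, the way a
# hypothetical seller-identity record might be looked up. The record name
# "_sellers.<domain>" is invented for illustration.
import dns.resolver  # pip install dnspython

def lookup_seller_record(domain):
    answers = dns.resolver.resolve(f"_sellers.{domain}", "TXT")
    records = []
    for rdata in answers:
        # Each TXT record is a sequence of byte strings; join and decode them.
        records.append(b"".join(rdata.strings).decode("utf-8"))
    return records

if __name__ == "__main__":
    print(lookup_seller_record("example.com"))
```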
The Data Layer
Ad networks will reshape themselves as data providers that plug into buyers' and sellers' servers but don't reroute the bidstream. Deals and unique data can be managed by inserting rules and attributes into both servers so a bid request can be signed with additional deal ID attributes before it is sent to a particular buyer's server, which has been configured to look for it.
There will be no more reselling problems because the bidstream integrity is preserved. Networks and other data providers can try several different business models, such as charging on transactions or revenue, depending on the unique insights they provide.
As we move away from monolithic apps in the ad stack toward cloud-based component services, buyers and sellers will absolutely win. For the ad tech ecosystem, there are large implications for who might be the long-term winners and losers and how consolidation will play out, but we'll leave those predictions for a future column.
Follow Danny Khatib (@khatibda), Granite Media (@Granite_Media) and AdExchanger (@adexchanger) on Twitter.
Packet launches edge compute service in 15 global locations – RCR Wireless News
Packet, a New York-based startup that specializes in bare metal infrastructure, recently launched its new edge compute service in 15 locations across the globe. Eleven of these locations are new, coming online in Los Angeles, Seattle, Dallas, Chicago, Ashburn, Atlanta, Toronto, Frankfurt, Singapore, Hong Kong and Sydney.
Unlike its other locations, the new edge compute locations provide a single server configuration based on an Intel Skylake processor. However, Packet intends to bring the majority of its server options to these new locations later.
"While edge compute is still in its infancy, new experiences are driving demand for distributed infrastructure, especially as software continues its relentless pursuit down the stack," said Zachary Smith, a co-founder and CEO of Packet. "We believe that the developers building these new experiences are hungry for distributed, unopinionated and yet fully automated compute infrastructure and that's what we're bringing to the market today."
Packet was founded in 2014 as a way to provide developers with un-opinionated access to infrastructure. "I saw this huge conflict happening between this proprietary lock-in view of infrastructure that was being provided by the cloud, and open source software that effectively wanted to eat that value all the way down," said Smith. "We started Packet with the idea we can provide this highly automated, fundamental compute service with as little opinion as possible, while still meeting the demands of a millennial-based developer."
Packet currently has about 11,000 users on its platform. The company is expanding its services to make it more appealing for businesses that demand low latency communication. In addition, the company offers private deployments. "I think we are the only platform that is purely focused on providing developer automation, a.k.a., a cloud platform, without opinion," said Smith. "Every other cloud provider is based on multi-tenancy, or virtualization or some other service because they want to lock you in. We are really the only one that is out there to automate hardware."
Packet describes itself as the bare metal cloud company for developers. Bare metal cloud servers do not have a hypervisor. The company believes the next-generation of cloud computing will require customized hardware, and that placing metal power at the edge will play a significant role in fueling the internet of things (IoT).
As these new edge centers grow, it is possible they will evolve into data centers over time. "I could see these things essentially staying as pretty bespoke edge markets with a core market being very similar to major public clouds today," Smith said. "It's pretty difficult to put everything in every location."
The company intends to expand to new locations within the next six to 12 months, and add to the portfolio of its current locations.
IBM adds Optane to its cloud, only as storage and without GPUs – The Register
IBM's made good on its promise to fire up a cloud packing Intel's Optane non-volatile memory in the second half of 2017. But Big Blue has fallen short of the broad services suite it foreshadowed and can't even put Optane to work as memory.
Big Blue announced the availability of servers with Optane inside on Tuesday. You can run Intel's baby on selected IBM cloud bare metal configurations that give you the chance to provision a server with the 375GB Optane SSD DC P4800X. Because that's a PCIe device, you can either have an Optane or a GPU, not both.
Another limitation is that you can only use Optane as storage, which is nice because it's pleasingly fast storage. But if you wanted to try Optane as a massive pool of memory, a role Intel feels is particularly impactful, you can't do that. "Scheduled availability is to be determined," IBM says in its fine print.
The inability to use Optane as memory makes IBM's announcement of the service incongruent, as it lauds Optane as "the first product to combine the attributes of memory and storage, thereby delivering an innovative solution that accelerates applications through faster caching and faster storage performance to increase scale per server and reduce transaction costs for latency sensitive workloads."
But IBM can't do that now. And can't say when it will.
For now, Optane's only available in five IBM data centres. If latency between you and Dallas, London, Melbourne, Washington DC or San Jose, California, is going to be a problem, this service may not be for you.
We'd love to tell you more about the price of the service, but the online server configuration tool IBM suggests does not have an option for Optane that your correspondent could find. Nor is a price list apparent.
We can tell you that Optane-packing servers in the IBM cloud can run Windows Server 2012 or 2016, Red Hat Enterprise Linux 6.7 and up, or ESXi 5.5 and 6.0.
Joining Apple, Amazon’s China Cloud Service Bows to Censors – New York Times
The move came at roughly the same time that Apple said it took down a number of apps from its China app store that help users vault the Great Firewall. Those apps helped users connect to the rest of the internet world using technology called virtual private networks, or VPNs.
Taken together, the recent moves by Apple and Amazon show how Beijing is increasingly forcing America's biggest tech companies to play by Chinese rules if they want to maintain access to the market. The push comes even as the number of American tech companies able to operate and compete in China has dwindled.
Beijing has become increasingly emboldened in pushing America's internet giants to follow its local internet laws, which forbid unregistered censorship-evasion software. Analysts say the government has been more aggressive in pressuring companies to make concessions following the passage of a new cybersecurity law, which went into effect June 1, and ahead of a sensitive Communist Party conclave set for late autumn.
The government has been intent on tightening controls domestically as well. It recently shut down a number of Chinese-run VPNs. New rules posted to government websites in recent days said Communist Party members can be punished for viewing illegal sites and that they must register all foreign or local social media accounts.
Also in response to the new law, Apple said it planned to open a new data center in China and store user data there.
Ms. Wang, who said that Sinnet handles Amazon Web Services operations across China, said that the company has sent letters warning users about such services in the past but that the government had been more focused on other issues.
Amazon Web Services allows companies small and large to lease computing power instead of running their websites or other online services through their own hardware and software. Because Amazon's cloud services allow customers to lease servers in China, it could be used to give Chinese internet users access to various types of software that would help them get around the Great Firewall.
Keeping in line with censorship rules is only a part of it. In cloud computing, China requires foreign companies have a local partner and restricts them from owning a controlling stake in any cloud company. New proposed laws, which have drawn complaints of protectionism from American politicians, further restrict the companies from using their own brand and call for them to terminate and report any behavior that violates China's laws.
While Microsoft and Amazon both run cloud services in China, similar ones run by local Chinese internet rivals dwarf them in scale. In particular Chinese e-commerce giant Alibaba runs its own cloud services, which have grown rapidly in China. In order to operate in the country, China's biggest internet companies must stay in close contact with the government and carry out Beijing's various demands, whether they be a request for user data or an order to censor various topics.
While China is not a major market for Amazon, the company has been in the country for a long time and has been pushing its cloud computing services there. Also recently the company announced a partnership with the state-run telecom China Mobile to create a Kindle, the company's e-reader device, aimed at the local Chinese market.
Cisco Launches New UCS Servers, Hybrid Cloud Management … – SDxCentral
Cisco today debuted new servers and software, which includes a hybrid cloud management tool.
The new cloud tool is called the Workload Optimization Manager, and it's powered by cloud management software from Turbonomic. The product uses intent-based analytics to match workload demand to infrastructure supply across on-premise and multi-cloud environments.
It also compares costs of moving workloads from public clouds, older Cisco servers, and non-Cisco machines to the latest Unified Computing System (UCS) M5 servers announced today.
"Cisco's own IT department started using Turbonomic for data center management, and they said, 'you need to check this out,'" said Joann Starke, senior manager of Cisco Data Center Solutions.
The company used the software to manage 30 million watts of raised data center floor space. Eighteen months after installing the product, the IT department optimized half of their data center environments and downsized the data center footprint by one-quarter. This saved the company $17 million in equipment costs over the same time frame, and $2.8 million every month in rental costs for space.
The hybrid cloud management software also integrates with UCS infrastructure, which enables Cisco customers to identify idle machines and increase workload density.
Cisco also updated its UCS Director, the software that manages the UCS hardware.
The UCS Director 6.5 extends automation capabilities beyond infrastructure by automating native PowerShell functions, virtual machine mobility across vCenter data centers, and support for VMware VMRC console.
It also integrates with Workload Optimization Manager, which enables the automatic creation of a new virtual machine or configuration of a physical server by UCS Director. Workload Optimization Manager then reallocates resources. This ensures application performance and cost efficiency, Starke said.
"It is our plan and our vision to expand this across Cisco's entire hybrid cloud stack," she added.
The software updates and new workload management tool help IT departments modernize their data centers with automation, Starke said.
"Workloads are increasing by 26 percent, year over year, but IT budgets are only increasing by 3 percent," she explained. "Clearly we have a gap and you can't hire enough humans to fill it. You need automation. You're letting software manage software."
In addition to the software, Cisco today launched new UCS M5 servers. They are built on the Intel Xeon processors, also announced today. Of the five new Cisco machines, three are rack servers and two are blade servers.
The servers include up to double the memory capacity of previous systems and deliver up to 86 percent higher performance compared to the previous generation of UCS, Cisco claims.
"Our customers are telling us they want faster applications with fewer complications," said Todd Brannon, director of product marketing, unified computing at Cisco. "The demand for real-time analytics: the trend there is certainly pointing upward."
Customers want servers with more memory and more graphics processing units (GPUs), which accelerate machine learning algorithms, Brannon added. To this end, one of the new blade servers includes a half-width blade form factor, which allows it to support two GPUs. Cisco says this is an industry first.
Additionally, one of the new rack servers tripled its GPU support, so it now can support six.
The hardware's key differentiator is really the software, Brannon said. "Where others wrap their servers up in sheet metal, we're wrapping them up in software," he said. "It's definitely all about the software for us in UCS."
When asked about the new servers and software, analyst Patrick Moorhead, president of Moor Insights & Strategy, said, "I like what I see, particularly for current UCS customers. Their new hardware and software is focused at solving real problems and the automation is differentiated."
But, he added, he'd like to hear more about Cisco's server security. "The new attack point is server firmware, less so on the network and client device," he explained.
Verizon data of 6 million users leaked online – CNNMoney
The security issue, uncovered by research from cybersecurity firm UpGuard, was caused by a misconfigured security setting on a cloud server due to "human error."
The error made customer phone numbers, names, and some PIN codes publicly available online. PIN codes are used to confirm the identity of people who call for customer service.
No loss or theft of customer information occurred, Verizon told CNN Tech.
UpGuard -- the same company that discovered leaked voter data in June -- initially said the error could impact up to 14 million accounts.
Chris Vickery, a researcher at UpGuard, discovered the Verizon data was exposed by NICE Systems, an Israel-based company Verizon was working with to facilitate customer service calls. The data was collected over the last six months.
Vickery alerted Verizon to the leak on June 13. The security hole was closed on June 22.
The incident stemmed from NICE security measures that were not set up properly. The company made a security setting public, instead of private, on an Amazon S3 storage server -- a common technology used by businesses to keep data in the cloud. This means Verizon data stored in the cloud was temporarily visible to anyone who had the public link.
ZDNet first reported the breach.
The security firm analyzed a sample of the data and found some PIN codes were hidden but others were visible next to phone numbers.
UpGuard declined to disclose how the leaked data was discovered.
Dan O'Sullivan, a Cyber Resilience Analyst with UpGuard, said the exposed PIN codes are a concern because they allow scammers to access someone's phone service if they convince a customer service agent they're the account holder.
"A scammer could receive a two-factor authentication message and potentially change it or alter [the authentication] to his liking," O'Sullivan said. "Or they could cut off access to the real account holder."
Verizon customers should update their PIN codes and not use the same one twice, O'Sullivan advises.
This is the latest leak to surface from a misconfigured Amazon S3 storage unit. In June, an analytics firm exposed the data of almost 200 million voters, and earlier this month, an insecure server leaked 3 million WWE fans' data.
Why does this keep happening? Amazon secures these servers by default. This means the errors that occur are due to changes someone makes with a security setting -- typically by accident, O'Sullivan said.
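For context on what "making a setting public" means in practice, here is a minimal sketch, assuming boto3 and configured AWS credentials, that checks whether a bucket's ACL grants read access to all users; the bucket name is a placeholder.

```python
# Minimal sketch: check whether an S3 bucket's ACL grants access to the
# "AllUsers" group, i.e. whether it has been made publicly readable.
# The bucket name below is a placeholder, not a real bucket.
import boto3

ALL_USERS_URI = "http://acs.amazonaws.com/groups/global/AllUsers"

def is_bucket_public(bucket_name):
    s3 = boto3.client("s3")
    acl = s3.get_bucket_acl(Bucket=bucket_name)
    for grant in acl["Grants"]:
        grantee = grant.get("Grantee", {})
        if grantee.get("Type") == "Group" and grantee.get("URI") == ALL_USERS_URI:
            return True
    return False

if __name__ == "__main__":
    print(is_bucket_public("example-bucket"))
```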
O'Sullivan says the Verizon case highlights how many third-parties have access to our personal data.
"Cyber risk is a fact of life for any digital service," O'Sullivan said. "As data becomes more powerful and more accessible, the potential consequences for it to be misused also becomes more dangerous."
CNNMoney (New York) First published July 12, 2017: 4:14 PM ET
Server vendors board the Xeon SP party bus – The Register
As expected, given that Intel processors power virtually all x86-class servers, the vendors all hopped on the Skylake Xeon SP party bus.
They hope to ride the server update into the market better than each other, and get every last upgrade penny they can pocket.
Cray's XC50 supercomputers and CS line of cluster supercomputers will be available with Xeon Scalable Processors and should run their jobs faster than before.
Cray XC50 supercomputers with Xeon SPs are available now. The Cray CS500 cluster supercomputers and CS-Storm accelerated cluster supercomputers with Xeon SP will be available in the third quarter.
Fujitsu is announcing its new, refreshed range of dual and quad-socket PRIMERGY servers and octo-socket PRIMEQUEST business critical servers using Xeon SPs.
The line includes the multi-node and modular PRIMERGY CX400 M4, which has blade servers inside a rack chassis.
Technical features include DDR4 memory modules and up to 6TB capacity in quad-socket PRIMERGY servers, flexible configuration options to support mix-and-match of storage drive bays and graphics processing units (GPUs) to accelerate high-performance computing, hyperscale, and enterprise data centre workloads.
Fujitsu says its PRIMEQUEST server pushes the performance envelope of SAP HANA up to 12TB.
These server lines are available worldwide from Fujitsu and its distribution partners. Prices vary by region, model and configuration.
Taipei-based Gigabyte's Xeon SP line will initially offer four new 1U-form factor and four new 2U-form factor systems, as well as two motherboard SKUs that support the Scalable series, and have a range of options for storage and expansion slots.
Gigabyte R281 rack server
Check them out here. No specific Xeon SP processors are detailed.
HPE's social media whizz, Calvin Zito, has two videos available showing HPE product people talking about Xeon SP use in the DL380 and DL560 ProLiant gen 10 server line. Gen 10 signifies Xeon SP use.
IBM? Launching Xeon SP servers? Didn't it sell off its x86 server line to Lenovo way back when? Yes, it did, but this time around the x86 block it's launching bare metal Xeon SP servers in the IBM Cloud. They'll use Xeon Silver 4110 or Xeon Gold 5120 processors, Big Blue burbles, for faster insights from big data workloads.
These join its POWER servers in the cloud, and will be available in IBM Cloud data centres in the US, UK, Germany and Australia from Q3 2017. We don't know who is building these servers for IBM.
Lenovo has blade, rack, tower, dense, mission-critical and hyperscale servers. The SN550 and SN850 blades support Xeon SP Platinum CPUs, as do the SR530, 550, 630 and 650 rack servers. So far its website doesn't specify Xeon SP support for the SR570 and SR590 rack servers.
The ST550 tower and SD530 dense servers support Xeon SP Platinums but it doesn't say whether the liquid-cooled SD650 does.
Both mission-critical servers, SR850 and SR950, fly the Xeon SP Platinum colours.
Lenovo has announced 42 new world-record benchmarks for its ThinkSystem server portfolio integrated with Xeon SP processors.
The TPC-E benchmark uses a database to model a brokerage firm with customers who generate online transactions related to trades, account inquiries, and market research.
The TPC Benchmark H (TPC-H) is a decision support benchmark for systems that examine large volumes of data, execute complex queries and return answers.
Lenovo said it is:
The STAC-M3 benchmarks measure workloads in time-series analytics.
Specific Xeon SP CPU models used in these servers are not yet listed.
White box server king Supermicro has a new X11 generation server and storage motherboard and chassis line optimised for the Xeon SPs.
There are X11 dual-processor (DP) and uniprocessor (UP) Serverboards and SuperWorkstation boards, with single, dual and quad-socket motherboards. Xeon SPs up to the Platinum models with 28 cores are supported.
Supermicro claims it offers the most extensive range of computing products with this Xeon SP line for data centre, enterprise, cloud, HPC, Hadoop/Big Data, AI/deep learning, storage, and embedded environments.
Charles Liang, Supermicro president and CEO, said: "Our Server Building Block Solutions are designed to not only take full advantage of Xeon Scalable Processors' new features such as three UPI, faster DIMMs and more core count per socket, but also fully support NVMe through unique non-blocking architectures that achieve the best data bandwidth and IOPS."
Find out more here.
Hyperconverged infrastructure appliance software supplier Maxta says it has immediate support for any server designed for Xeon SP CPUs.
Benchmark testing of a Maxta HCI cluster configured with Xeon Platinum 8168 processors and Intel data centre SSDs with NVMe delivered a storage performance gain of 120 per cent in IOPS, with less than half the storage latency compared to previous Xeon technology.
Using QuickAssist Technology to offload and accelerate real-time data compression operations, the platform offered a further performance gain of 25 per cent and a 13 per cent decrease in latency. Nice.
There's no need, it purrs, to wait months for hardware-based hyperconvergence products to integrate Xeon SP technology.
New Azure servers to pack Intel FPGAs as Microsoft ARM-lessly embraces Xeon – The Register
Microsoft may have said ARM servers provide the most value for its cloud services back in March, but today it's given Intel's new Xeons a big ARM-less hug by revealing the hyperscale servers it uses in Azure are ready to roll with Chipzilla's latest silicon and will all use Chipzilla's field programmable gate arrays.
Those servers are dubbed Project Olympus and Microsoft has released their designs to the OpenCompute Project. In a post doubtless timed to coincide with the release of the new Xeons, Microsoft reveals it "worked closely with Intel to engineer Arria-10 FPGAs, which are deployed on every single Project Olympus server, to create a 'Configurable Cloud' that can be flexibly provisioned and optimized to support a diverse set of applications and functions."
Redmond also praises the Xeon Scalable Processors as being jolly powerful and all that, which will help Azure to scale and handle different workloads. But it's the news that Redmond's all-in with Intel Arria FPGAs that must be warming cockles down Chipzilla way, as using Xeons as the main engine and tweaking them for different roles with FPGAs is Intel's strategy brought to life.
IBM's also embraced the new Xeons, gushing that it will be the first to offer them on bare metal cloud servers. But not, in all likelihood, the first to use them at all: Google claims to have been running them since June 1st, 2017.
The deal that gave Google early access to Skylake Xeons was thought to be one reason Microsoft let its excitement about ARM servers emerge into public view.
But The Register does not believe that ardour and today's kind words for Xeon are mutually exclusive: Redmond is surely contemplating future Azure architectures, so while Wintel looks strong today, there's still plenty of time in which the alliance could splinter.