Category Archives: Cloud Servers

Your next heat source could come from a server, if Nerdalize has its way – Digital Trends

Why it matters to you

Your laptop generates plenty of excess heat as it sits on your lap, and now, this Dutch startup wants to take that excess energy and have it heat your home.

The notion of waste not, want not has never been quite this high tech.

In an effort to ensure that our future is sustainable, one Dutch company is looking to turn servers into heat sources. After all, if your running laptop is enough to warm your lap on a cold winter's day, then shouldn't servers (which emit a lot more energy in the form of heat) be enough to heat your home? That's certainly the bet that Nerdalize is making. The startup hopes to create "free heat for everyone" and make cloud computing sustainable and affordable.

Its method? Placing cloud servers in individual homes, and turning them into heating systems. Your house will serve as a data center for companies that depend on cloud computing (which is to say, all companies), and in return, those companies will effectively provide you with heat and hot water.

It's the 21st-century definition of symbiosis.

Nerdalize estimates that by turning common homes into data centers, homeowners can save up to $340 a year, while companies can forgo the cost of expensive server centers, saving about 50 percent of their own operational costs. "This innovative set-up drastically reduces the household's energy consumption while slashing the energy originally needed for server cooling," Nerdalize claims. "Adding up all those free hot showers and avoided cooling, we can save up to three tons of CO2 per household per year."

The plan is to start installing these servers in Dutch homes in August. Forty-two households will serve as guinea pigs, and if all goes well, they'll be able to turn corporate data into hot water. Indeed, the company says demand appears to be quite high for this innovative technology: more than 3,500 people have signed up and expressed interest in a server heater.

The company has already hit 130 percent of its 250,000-euro ($282,000) funding goal, so if you're looking for an alternative heat source, you may just want to look toward a server.


OVH makes foray into APAC cloud market – ComputerWeekly.com

French infrastructure-as-a-service (IaaS) supplier OVH has expanded its footprint in Asia-Pacific (APAC) with new datacentres in Sydney and Singapore, along with a regional headquarters in Melbourne, Australia.


The recent investments are part of the company's efforts to tap the booming APAC public cloud market, especially in mature economies, where public cloud spending is expected to hit $10bn by 2017.

Although the market for IaaS in the region is currently dominated by the likes of Microsoft, Amazon Web Services and Alicloud, as well as smaller regional players, OVH is confident of delivering a differentiated IaaS offering powered by its hyperscale infrastructure comprising 270,000 physical servers hosted in 20 datacentres around the globe.

"We deliver a hosted dedicated cloud infrastructure with the commercial attributes of the public cloud: very fast provisioning, full elasticity up and down, and zero minimum commitment," said Laurent Allard, vice-chairman of OVH's board of directors. "We do this with bare metal cloud servers, as well as in the VMware space with SDDC [software-defined datacentre] on demand."

Noting that cloud adoption is still nascent in the APAC region, Clement Teo, principal analyst at Ovum, said there are growth opportunities for new market entrants. Cloud suppliers such as OVH, in particular, could support the infrastructure needs of large European enterprises that are expanding into the region, he told Computer Weekly.

Besides targeting large enterprises, OVH, which was started 17 years ago in the garage of its current CEO, Octave Klaba, is also eyeing startups, which Allard said will be the big companies of tomorrow.

To reach more startups, OVH has introduced its Digital Launch Pad (DLP) programme in Singapore and Southeast Asia. Through the DLP, which has already enrolled 700 startups globally, the company will support local startups at each stage of their development and offer free cloud computing resources ranging from $1,000 to $100,000 per company.

Teo said going after startups is a sound strategy, especially if the startups are gaming companies that need to scale up quickly. OVH could also look into providing a marketplace for developers to pick and choose the services they want to deploy, he added.

OVH's expansion plans do not stop at two regional datacentres. The company will hire 80 full-time staff for its Melbourne regional HQ within three years, including highly skilled employees covering technical pre-sales, technical sales, customer support and marketing.

"Later in 2017, we will review the need for additional footprint across the region based on customer feedback and requirements," said Allard.

The company is also looking to extend its certifications for its Singapore datacentre in the next few months, including alignment with cloud usage guidelines set out by the Monetary Authority of Singapore and local security standards such as Multi-Tier Cloud Security.


Global Server Load Balancing Moves to the Cloud – The Data Center Journal

Even as applications move from traditional data centers to the cloud, server load balancing continues to be a core element of IT infrastructure. Whether servers are real or virtual, permanent or ephemeral, there is always a need to intelligently distribute workloads across those multiple servers.

But there remains a chronic gap in the ability to reliably distribute workloads across multiple clouds, multiple data centers and hybrid infrastructures. The result is poorly distributed workloads and degraded application performance that could be avoided if workloads were better managed globally. In short, there is a need for better global server load balancing (GSLB).

Also referred to as application-delivery controllers (ADCs), load balancers are widely deployed in data centers. Their function is to distribute workloads to back-end servers, thereby ensuring optimum use of aggregate server capacity and better application performance.
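
To make the basic idea concrete, here is a minimal, illustrative Python sketch of two selection policies a simple load balancer might use, round-robin and least-connections. It is not any vendor's implementation; the server addresses and bookkeeping are invented for the example.

```python
import itertools

# Hypothetical pool of back-end servers and their current active connections.
backends = ["10.0.0.11", "10.0.0.12", "10.0.0.13"]
active_connections = {b: 0 for b in backends}

# Round-robin: hand out servers in a fixed rotation, ignoring load.
_rotation = itertools.cycle(backends)

def pick_round_robin():
    return next(_rotation)

# Least-connections: send the next request to the least-busy server.
def pick_least_connections():
    return min(backends, key=lambda b: active_connections[b])

def handle_request(policy):
    server = policy()
    active_connections[server] += 1  # request assigned; decrement when it completes
    return server

if __name__ == "__main__":
    for _ in range(5):
        print(handle_request(pick_least_connections))
```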

Providers including Citrix, F5, Kemp Technologies and Radware occupy the traditional load-balancer market. Their hardware ADCs have been the go-to solutions for infrastructure and operations teams for some time. Recently, software-based ADCs from these vendors and software-only solutions such as HAProxy, Nginx and Amazon ELB have emerged as enterprises have moved applications to the cloud.

Organizations can implement multi-data-center, multi-cloud GSLB using one of two basic approaches. The first is to use a traditional managed-DNS provider for basic traffic management. It has the advantage of being easy to implement, low in cost and reliable, requiring no capital outlay. Unfortunately, it offers only minimal traffic-management capabilities such as round-robin DNS and geo-routing. These approaches fail to prevent maldistribution of workloads because they use fixed, static rules rather than basing traffic routing on the real-time workloads and capacity at each data center. For example, geo-routing can only ensure that users (and their workloads) are sent to the geographically closest data center. It cannot account for uneven distribution of users geographically, local demand spikes or server outages in a data center.
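
The limitation can be illustrated with a small, hypothetical sketch: static geo-routing always returns the nearest data center, even when it is nearly saturated, while a load-aware policy also consults real-time utilization reported by each site. The sites, distances and load figures below are made up for illustration.

```python
# Hypothetical data centers with distance from the user, capacity, and current load.
datacenters = {
    "us-east": {"distance_km": 300,  "capacity": 1000, "current_load": 980},
    "us-west": {"distance_km": 3900, "capacity": 1000, "current_load": 200},
    "eu-west": {"distance_km": 6200, "capacity": 800,  "current_load": 350},
}

def pick_geo(dcs):
    """Static geo-routing: the nearest site wins, regardless of its load."""
    return min(dcs, key=lambda name: dcs[name]["distance_km"])

def pick_load_aware(dcs, headroom=0.1):
    """Prefer nearby sites, but skip any site without spare capacity."""
    usable = {n: d for n, d in dcs.items()
              if d["current_load"] < d["capacity"] * (1 - headroom)}
    candidates = usable or dcs  # fall back to all sites if everything is hot
    return min(candidates, key=lambda name: dcs[name]["distance_km"])

print(pick_geo(datacenters))         # us-east, despite running at 98% load
print(pick_load_aware(datacenters))  # us-west, which has plenty of headroom
```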

Many ADC vendors offer their own purpose-built DNS appliances that have a tighter integration with their load balancers to address these limitations. This is the second basic approach. These appliances can make traffic-management decisions on the basis of actual use levels at each data center by receiving real-time load and capacity information from the local load balancers.

The benefit is overshadowed by its tradeoffs, which many enterprises find unpalatable:

Consequently, most enterprises that have deployed data center load balancers aren't using the GSLB functions available from their load-balancer vendor. Those that have deployed GSLB functions are open to replacing them with a better solution. A superior approach is a cloud-based, managed GSLB solution that uses real-time telemetry from load balancers to make intelligent traffic-management decisions.

GSLB is best delivered as a cloud-based managed service. The core attributes and advantages of such an approach are as follows:

It's now possible to enjoy the best of both worlds: a globally performing, reliable managed DNS service and advanced traffic-management capabilities that were previously available only with proprietary ADC solutions. This combined offering provides new opportunities for enterprises to prevent maldistribution of application workloads and deliver better overall application performance, as well as a better, more consistent end-user experience.

Jonathan Lewis brings to NS1 over 25 years of IT-industry experience comprising product management, product marketing, customer service and systems engineering. Jonathan has played key roles contributing to the success of several industry-leading companies including Nortel, Arbor Networks, and SSH Communications Security (SSH1V). He holds BS and MS degrees from McGill University, an MBA from Bentley College and CISSP certification.



NVIDIA Amps Up AI Cloud Strategy with ODM Partnerships – TOP500 News

NVIDIA is hooking up with four of the world's largest original design manufacturers (ODMs) to help accelerate adoption of its GPUs into hyperscale datacenters. The new partner program would give Foxconn, Inventec, Quanta and Wistron early access to the HGX reference architecture, NVIDIA's server design for machine learning acceleration.


HGX is an attempt to establish an industry-standard GPU box that maximizes computational density for machine learning workloads. It uses NVIDIA's most advanced GPUs, namely the Tesla P100 and, soon, the Tesla V100. It glues eight of these into an NVLink cube mesh and uses PCIe switching to allow CPUs to dynamically connect to them. Examples of this architecture include Microsoft's Project Olympus HGX-1 chassis, Facebook's Big Basin system, and NVIDIA's own DGX-1 server.

Facebook's Big Basin and Microsoft's HGX-1 systems are GPU-only boxes, which rely on external CPU servers as hosts. Since the processor and co-processor are disaggregated, applications can adjust the GPU-CPU ratio as needed. In most machine learning situations, you want a rather high ratio of GPUs to CPUs, since most of the processing ends up on the graphics chip. And in hyperscale/cloud datacenters, you also want the flexibility of allocating these resources dynamically as workloads shift around.
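
As a rough illustration of why disaggregation matters, the hypothetical sketch below attaches GPUs from a shared pool to a CPU host at whatever ratio each job requests. It does not reflect any actual Facebook, Microsoft or NVIDIA scheduler; all names and sizes are invented.

```python
# Hypothetical shared pool: two 8-GPU boxes plus standalone CPU hosts.
free_gpus = [f"gpu-box{b}/gpu{i}" for b in range(2) for i in range(8)]
free_cpu_hosts = [f"cpu-host{i}" for i in range(4)]

def allocate(job_name, gpus_wanted):
    """Attach the requested number of pooled GPUs to a single CPU host."""
    if not free_cpu_hosts or len(free_gpus) < gpus_wanted:
        raise RuntimeError("not enough free resources")
    host = free_cpu_hosts.pop()
    gpus = [free_gpus.pop() for _ in range(gpus_wanted)]
    return {"job": job_name, "cpu_host": host, "gpus": gpus}

# A training job asks for a high GPU:CPU ratio; a preprocessing job asks for none.
print(allocate("train-image-model", gpus_wanted=8))
print(allocate("preprocess-logs", gpus_wanted=0))
```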

The DGX-1 server is a different animal altogether. It's a stand-alone machine learning appliance and includes two Xeon processors, along with the same eight-GPU NVLink mesh as its hyperscale cousins. As such, it's not meant for cloud duty, but rather for businesses, research organizations, and software development firms that want an in-house machine learning box. SAP is the most prominent commercial buyer of the DGX-1, at least of those revealed publicly. But NVIDIA never intended to sell boatloads of these systems, especially since a lot of customers would prefer to rent machine learning cycles from cloud providers.

That's why the ODM partnership could end up paying big dividends. These manufacturers already have the inside track with hyperscale customers, who have figured out that they can use these companies to get exactly the gear they want, and at sub-OEM pricing. ODMs are also more nimble than traditional server-makers, inasmuch as they can shorten the design-to-production timeline. That makes them better suited to the nearly continuous upgrade cycle of these mega-datacenters.

Given that the HGX-1 is manufactured by Foxconn subsidiary Ingrasys and the Big Basin system is built by Quanta, it's a logical step for NVIDIA to bring the other big ODMs, Inventec and Wistron, into the fold. The goal is to bring a wider range of HGX-type machinery to market and make it available to hyperscale customers other than just Microsoft and Facebook.

The other aspect of this is that NVIDIA would like to solidify its dominance with machine learning customers before Intel brings its AI-optimized silicon to market. Startup companies like Wave Computing and Graphcore also are threatening to challenge NVIDIA with their own custom chips. Establishing an industry-standard architecture before these competing solutions get market traction would help NVIDIA maintain its leadership.

To some extent, NVIDIA is also competing with some of its biggest customers, like Google and Microsoft, both of which are building AI clouds based on their own technologies. In Google's case, it's the Tensor Processing Unit (TPU), which the search giant has upgraded for an expanded role that threatens NVIDIA directly. Meanwhile, Microsoft is filling out its AI infrastructure with an FPGA-based solution that, likewise, could sideline NVIDIA GPUs in Azure datacenters.

The prospect of using the future V100 Tesla GPUs in HGX platforms actually intensifies the competition, since these upcoming processors are built for both neural net training and inferencing. Although NVIDIA used to build its own inferencing-specific GPUs (the M4 and M40, followed by the P4 and P40), inferencing is also performed by regular CPUs and FPGAs, not to mention Google's TPUs, running in regular cloud servers.

Inferencing has somewhat different requirements than training, especially with regard to minimizing latency, but with the Volta architecture and the V100, NVIDIA thinks it has designed a solution that is capable of doing both, and doing so competitively. From a hyperscale company's point of view, there are some obvious advantages in separating inferencing, and certainly training, infrastructure from the rest of the server farm, not the least of which is being able to deploy and run machine learning gear in a more flexible manner. And since these upcoming V100 GPUs will be used by hyperscale companies for training, they are also likely to get a shot at some of those same companies' inference workloads.

Finally, if NVIDIA manages to establish HGX as the standard GPU architecture for AI clouds, it makes its own recently announced GPU cloud platform more attractive. Since NVIDIA's cloud stack of machine learning libraries and frameworks runs on top of other people's infrastructure, pushing its HGX architecture into the ecosystem would make NVIDIA's job of supporting the various hardware solutions that much simpler. It would also make it easier for customers to switch cloud providers without having to tweak their own software.

We'll be able to tell whether these ODM relationships pay off when we start seeing additional HGX solutions coming to market and being adopted by various cloud providers. As NVIDIA likes to remind us, its GPUs are used in the world's top 10 hyperscale businesses today. If all goes as planned, someday it will be able to make the same claim for HGX.


Cloud computing takes off as top new discipline on campus – Education Dive

Indranil Gupta, an associate professor in the Department of Computer Science at the University of Illinois Urbana-Champaign, recalled the first time he offered a free Coursera online class on Cloud Computing Concepts in the spring of 2015. In the first class, Gupta said, Coursera registered a total of 179,000 enrollees from 198 countries.

"That shows you how much interest there is," he said. "It seems like every single country has some students who are interested."

Gupta's assessment matches numerous reports that interest in cloud computing among students had skyrocketed, and that courses in computer science departments throughout the nation were increasingly becoming commonplace. However, a recent report by Clutch, a Washington, D.C.-based B2B research firm, found that there were still concerns among universities and professors regarding the cost of teaching cloud computing. Riley Panko, a content developer at Clutch who authored the report, noted that while individual courses and certification programs were increasingly available, undergraduate and master's programs were still developing.

"For the cost, there was definitely optimism. There's potential, with regulation and learning how to manage this, that it's something that can be more under control by the university," she said. "It's still a young field. It's only been around in its true power for a couple of years."

Higher education institutions have been interested in storing data on cloud servers for several years, and as the Clutch report indicates, cloud computing skills are in high demand by corporations, and increasingly, public institutions (LinkedIn found that knowledge in cloud computing was the most desirable skill in job applicants among employers, according to the report).

Kevin McDonald, the founder and managing director of GreyStaff Group, LLC, also teaches a cloud computing course in the Technology Management master's program at Georgetown University's School of Continuing Studies. He said the sea change cloud computing brought to public and private industry was now benefitting individual startups. By eliminating the need for expensive server infrastructure and IT staff, new companies can significantly cut their upfront costs, building their entire infrastructure in the cloud. It is an opportunity McDonald echoes in his course, with teams visualizing and building a phone app within a matter of weeks before presenting it to the class; some had even sought investors for their creations.

"It's a total revolution under our feet, so as we've developed the program, we've tried to keep it in the real world," he said, marveling at the fact that students come up with an idea, go through a startup and are able to present to a venture capitalist within six weeks.

Gupta agreed there was an ongoing transition among higher education institutions on how to offer cloud computing courses integrated into disciplines, instead of in isolation, and he detailed a Master of Computer Science in Data Science currently offered by UIUC. The MCS-DS is an online program with $19,200 tuition, offering students the ability to proceed at their own pace, and Gupta's Coursera class in Cloud Computing Systems is integrated into the degree.

Gupta said that while there is always a period of transition where professors in a particular discipline may wonder whether a new facet of the discipline should be integrated or is merely temporary, he was optimistic about how computer science had quickly warmed to introducing cloud computing and big data into curricula.

"Cloud computing as it is today is new, but many of the systems in cloud computing have been around for decades," he said. "Many of the building blocks have been around for a long time; it's just that it's become more available and accessible to students."

Gupta also said the imposing costs of accessing cloud storage for student use could be alleviated by partnering with companies that offer free or reduced-price resources for students, citing that Amazon Web Services ran a program for several years that would offer $100 worth of credit for proposed research projects.

The company currently offers AWS Educate for institutions, educators and even individual students, touting access to company technology, training resources and open-source content for educational use. Much of UIUC's work, Gupta said, was done with Microsoft Azure due to a mutual partnership. He said students benefitted from the cloud space, while industries could see benefits once students enter the workforce.

"Companies want students who are more familiar with the state of the technology, so they need as little training as possible when they join," he said. "They know that all our students are smart; it's whether they have the necessary skills or need extra training. If Microsoft has students use Microsoft Azure courses, they're kind of already training them."

McDonald, who is also the author of Above The Clouds: Managing Risk In The World Of Cloud Computing, said government, after some lag time, was catching up to private industry in the adoption of cloud technology. The Federal Cloud First Initiative, instituted in 2010 by the Obama administration, had led to the closure of more than 3,000 data centers as of April 2016, with a goal of closing 5,203 federal data centers in total by 2019, almost half of the 2010 number.

He said cloud computing, like many burgeoning computer science fields, was increasingly viewed as interdisciplinary, asserting that while the School of Professional Studies valued the technical processes inherent in cloud computing, the increased accessibility of cloud storage for novice users lowered the complexity barrier for interested students.

"It's gotten to that level of simplicity where we don't need to worry about that unless we're turning out system engineers," he said. "That's always been the philosophy for this program since day one."

In addition to cost concerns, Panko's report found that some professors expressed concern with how to appropriately teach cloud computing in a rapidly changing field, and also said that a lack of necessary staff at universities could be a hindrance.

Nevertheless, the report concluded that it would be worthwhile for colleges and universities to at least consider the topic for future implementation in their curricula.


Booz Allen Hamilton employee left sensitive passwords unprotected online – Washington Post

An unnamed employee of federal contracting giant Booz Allen Hamilton temporarily left sensitive government passwords exposed online last week, raising new questions about the McLean company's cybersecurity practices after it drew scrutiny for the way top secret data was mishandled in two earlier, high-profile cases.

The leak was discovered when an unaffiliated cyber analyst named Chris Vickery happened upon the passwords while trying to guess Internet addresses that might be used in certain web servers. His company, UpGuard, published his findings in a Wednesday blog post.

Booz Allen and its government customer, the National Geospatial-Intelligence Agency, both said that the passwords could not have been used to access classified information. The agency says it invalidated the affected passwords immediately after being notified of the incident.

A Booz Allen Hamilton spokesman described the incident as an isolated mistake made by one employee.

"It appears that this is an individual's mistake," said Booz Allen spokesman James Fisher. "While any incident of this nature is unacceptable and we hope to learn from it, so far we see this event as having limited impact."

Fisher declined to name the employee, citing personnel rules, saying only that the company is taking appropriate action.

Cybersecurity experts decried the leak, arguing that leaving government passwords unprotected online could give hackers a point of entry to other networks, even if they didn't provide direct access to classified databases. If an outsider like Vickery could find the information by trying random web addresses, a hacker could just as easily do the same.

"It's just straight-up sloppiness, laziness, and really not adhering to policies," said Bob Wandell, vice president of services at Nehemiah Security, a Tysons-based cybersecurity company.

The passwords in question were stored on an Amazon cloud server, which organizations use to host and share projects. Individuals and organizations can rent storage space online and share access through common web addresses, or URLs, similar to file-sharing services like Dropbox and Google Drive. (Amazon founder Jeffrey P. Bezos owns The Washington Post.)
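
For readers wondering how such an exposure is even possible, here is a hedged sketch using the boto3 library that checks whether a bucket's access control list grants read access to "everyone" and, if so, resets it to private. The bucket name is a placeholder, not the bucket in this story, and a bucket ACL is only one of several ways data can end up publicly reachable.

```python
import boto3

# Group URI that S3 uses to represent anonymous ("everyone") access.
ALL_USERS = "http://acs.amazonaws.com/groups/global/AllUsers"

def bucket_is_public(bucket_name):
    """Return True if the bucket ACL grants any permission to anonymous users."""
    s3 = boto3.client("s3")
    acl = s3.get_bucket_acl(Bucket=bucket_name)
    return any(
        grant.get("Grantee", {}).get("URI") == ALL_USERS
        for grant in acl["Grants"]
    )

if __name__ == "__main__":
    name = "example-project-bucket"  # hypothetical placeholder bucket
    if bucket_is_public(name):
        print(f"{name} is publicly readable; resetting ACL to private")
        boto3.client("s3").put_bucket_acl(Bucket=name, ACL="private")
```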

"Hackers are constantly scanning the whole cloud environment ... they do this repeatedly just to wait for someone to make a mistake like this," said Tim Prendergast, a cloud security expert with cybersecurity firm Evident.io. "I think we're going to see more of these over time as cloud computing continues to accelerate its growth."

The findings are the latest blow for Booz Allen Hamilton, which has come under scrutiny in recent years after employees leaked highly classified information to the public. Edward Snowden, whose 2013 disclosures of classified National Security Agency information upended a number of government surveillance programs, was a Booz Allen contractor. More recently, a long-time Booz Allen employee named Harold Martin III was charged with hoarding a massive cache of classified NSA data in his home and car.

The leaks brought to light Wednesday appear to be much less consequential. It's possible the employee wanted to avoid the hassle of frequent log-ins while working on a project.

"They probably did it for convenience," Vickery said. "Thus far we have no reason to believe it was a purposeful leak."

The fact that Amazon's cloud server was being used to service a contract with a U.S. intelligence agency is indicative of a broader shift happening across the government, as data and applications move off individual computers and internal networks and into less costly and more adaptable cloud-based systems.

Capitalizing on that shift within the government is a key component of Booz Allen Hamilton's business strategy.


Amazon Web Services (AWS) Bank On AI to Add New Servers Everyday – Dazeinfo

The advent of Machine Learning (ML) and Artificial Intelligence (AI) has helped companies push the boundaries of their apps to a great extent. While most tech giants are leveraging AI to build path-breaking solutions, Amazon Web Services (AWS) has employed AI to make its internal procurement process much more robust and effective.

In a recent appearance at Pacific Science Center's 14th Annual Foundation of Science Breakfast, AWS CEO Andy Jassy said that AWS relies on AI to forecast the number of new servers required to meet the growing demand for cloud services. He explained that artificial intelligence plays a crucial role in anticipating demand, not only helping AWS meet that demand but also cutting the cost of operations.

"One of the least understood aspects of AWS is that it's a giant logistics challenge; it's a really hard business to operate," said Jassy.

Considering that adding servers is a time-consuming affair, anticipating demand accurately and keeping capacity deployed in advance lets AWS cater to direct consumers, partners and resellers.

Jassy also said that AWS adds new servers to its network every day, a clear indication of the scale at which AWS is growing.

Advances in computing and the explosive adoption of the Internet have spiked demand for cloud servers. Companies want to keep their businesses running 24/7, and their unprecedented interest in crunching accumulated data has created demand for specialised servers: cloud-based analytics servers, cache servers and the like are in high demand.

Geographical demand for AWS is spreading across the world, and the company is leaving no stone unturned to make the procurement process as easy as possible. Both offline and online sales partners sell AWS services round the clock, whether dedicated cloud server migration, reseller hosting, specialised cloud hosting, or migrating and managing infrastructure on the cloud. The AI-powered process helps AWS pick up the signals its sales arm follows. Unlike consumer-facing products, enterprise sales cycles are notoriously long and could end up straining the delivery mechanism.

Besides, tracking customer behaviour on AWS has also helped the company understand growing demand to a certain extent.

"Most of the customers start slow with AWS, and then accelerate their usage as they see more benefits, which could lead to spikes in demand if they move faster than anticipated," said Jassy.

Therefore, Amazon uses a forecasting model powered by machine learning and artificial intelligence to help AWS make capacity-related decisions.
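
Amazon has not published the model it uses, but the idea can be sketched with a trivially simple forecaster: fit a trend to recent usage and order enough servers, plus a safety margin, to cover the lead time needed to rack new capacity. All numbers below are made up for illustration.

```python
import numpy as np

# Hypothetical daily capacity usage (in server-equivalents) over the last 30 days.
days = np.arange(30)
usage = 5000 + 40 * days + np.random.normal(0, 60, size=30)

# Fit a simple linear trend as a stand-in for a real ML forecasting model.
slope, intercept = np.polyfit(days, usage, 1)

LEAD_TIME_DAYS = 14   # assumed time to procure and rack new servers
SAFETY_MARGIN = 1.15  # headroom for unexpected spikes

forecast_day = days[-1] + LEAD_TIME_DAYS
forecast_usage = slope * forecast_day + intercept
servers_to_order = max(0, int(forecast_usage * SAFETY_MARGIN - usage[-1]))

print(f"Forecast usage in {LEAD_TIME_DAYS} days: {forecast_usage:.0f}")
print(f"Servers to add now: {servers_to_order}")
```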

AWS is not only relying on AI to anticipate demand; it is also leveraging the technology to improve its support services. An AI-powered system helps AWS maintain the capacity of components for its data centres, which is crucial during any downtime.


Defense contractor stored intelligence data in Amazon cloud unprotected – Ars Technica

NGA headquarters. A trove of top secret data processed by NGA contractor Booz Allen Hamilton was left exposed on a public Amazon cloud instance.

On May 24, Chris Vickery, a cyber risk analyst with the security firm UpGuard, discovered a publicly accessible data cache on Amazon Web Services' S3 storage service that contained highly classified intelligence data. The cache was posted to an account linked to defense and intelligence contractor Booz Allen Hamilton. And the files within were connected to the US National Geospatial-Intelligence Agency (NGA), the US military's provider of battlefield satellite and drone surveillance imagery.

Based on domain-registration data tied to the servers linked to the S3 "bucket," the data was apparently tied to Booz Allen and another contractor, Metronome. Also present in the data cache was a Booz Allen Hamilton engineer's remote login (SSH) keys and login credentials for at least one system in the company's data center.

[Update, 5:10 PM] UpGuard's post suggested the data may have been classified at up to the Top Secret level. A Booz-Allen spokesperson told Ars that the data was not connected to classified systems. However, the credentials included in the store could have provided access to more sensitive data, including code repositories.

In a statement, an NGA spokesperson said that no classified data had been disclosed by the security oversight and that the storage was "not directly connected to classified networks."

Upon finding the cache, Vickery immediately sent an e-mail to Booz Allen Hamilton's chief information security officer but received no response. The next morning, he contacted the NGA. Within nine minutes, access to the storage bucket was cut off.

"NGA takes the potential disclosure of sensitive but unclassified information seriously and immediately revoked the affected credentials," the NGA's spokesperson said in the official statement.

At 8pm ET on May 25, Booz Allen Hamilton's security team finally responded to Vickery and confirmed the breach.

Booz Allen Hamilton has suffered a number of stunning security lapses over the past few years. Most infamously, Edward Snowden was a Booz Allen contractor at the National Security Agency. But another Booz Allen Hamilton employee at the NSA, Hal Martin, was recently arrested for theft of sensitive data. Martin's cache even eclipsed Snowden's leaks in size.

NGA has used Amazon's cloud for a number of unclassified tasks. In 2015, NGA contracted Esri and Lockheed Martin to create a portal to unclassified geospatial intelligence based on Esri's ArcGIS geospatial information system using Amazon's commercial cloud. Amazon Web Services also offers GovCloud, an isolated "region" in AWS for handling sensitive government applications.


CrowdStrike Extends Falcon Platform with Enhanced Cloud and Data Center Coverage – CSO Australia

Company offers maximum protection and best-in-class performance for servers in all data centre deployment models

June 1, 2017 - CrowdStrike Inc., the leader in cloud-delivered endpoint protection, today announced, as part of its Spring release, new features of the CrowdStrike Falcon platform custom-built for cloud providers and modern data centres, providing best-in-class prevention, detection and response for Windows, Linux or macOS servers, powered by artificial intelligence/machine learning.

The servers used in the modern-day data centre face commodity as well as advanced, stealthy attacks. CrowdStrike Falcon leverages industry-leading artificial intelligence/machine learning and Indicator-of-Attack (IoA) behavioural analysis to bring real-time protection to servers, whether on-premise, virtualised or in the cloud. As data centre or cloud deployments grow or evolve, CrowdStrike Falcon frees customers from having to add additional management servers or controllers for endpoint protection.

With Falcon's lightweight agent, customers can quickly and easily add end-to-end protection with instant zero-reboot deployments, no performance impact and no signature updates, all of which improve the performance of business-critical servers. CrowdStrike Falcon enables management of all systems, irrespective of their location, from a single console, providing a consolidated view into all assets for the enterprise.

CrowdStrike Falcon supports all major platforms including Amazon AWS, Google Cloud Platform and Microsoft Azure. It also provides protection for guest OS hosted on all popular hypervisors and protects Windows, Linux and macOS guests with a kernel-mode agent. CrowdStrike Falcon allows for complete protection policy control, with full flexibility around policy deployment at the individual server, group or cloud platform/data centre levels. Irrespective of how a server is deployed, the security team retains complete visibility and the control required to prevent or contain the attack.

New and Enhanced Capabilities

CrowdStrike Falcon provides features critical to securing data centres, focused on control, visibility and complete protection:

Linux Kernel-mode Agent: The Falcon Linux agent is now a full kernel-mode module, providing comprehensive real-time visibility into key OS events from its high position in the kernel.

Amazon Linux Support: The Falcon Linux agent now fully supports the Amazon Linux distribution, a popular platform on Amazon Web Services (AWS).

Falcon Discover: Falcon Discover's asset, application and user account visibility features help to optimise workloads, manage costs and audit/remove unauthorised accounts on systems deployed in the cloud, in data centres and on-premise.

Falcon Data Replicator: Falcon Data Replicator provides real-time access to the raw event data stream, which customers can ingest into their local data lakes for correlation against event data collected from other systems. This opens up the full, comprehensive dataset of more than 270 OS-level event types that Falcon Insight customers can now integrate into their own data analytics solutions, as in the illustrative sketch below.
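
As a purely illustrative example of what ingesting such an event stream might look like, the sketch below reads newline-delimited JSON events from a local export file and tallies them by event type before they would be loaded into a data lake. The file name and field names are hypothetical, not CrowdStrike's actual schema or delivery mechanism.

```python
import json
from collections import Counter

def tally_events(path):
    """Count OS-level events by type from a newline-delimited JSON export."""
    counts = Counter()
    with open(path) as f:
        for line in f:
            event = json.loads(line)
            counts[event.get("event_type", "unknown")] += 1
    return counts

if __name__ == "__main__":
    # Hypothetical local copy of a replicated event batch.
    for event_type, n in tally_events("falcon_events.jsonl").most_common(10):
        print(f"{event_type}: {n}")
```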

AV-Comparatives has certified CrowdStrike Falcon for anti-malware and exploit protection and noted that Falcon can help organisations' efforts with respect to PCI, HIPAA, NIST and FFIEC compliance.

"For a while now, within our highly complex environment, managing high-value systems required a choice between maximum protection and maximum performance. CrowdStrike has removed that dilemma," said Anton Reynaldo Bonifacio, chief information security officer, Globe Telecom. "Adding best-in-class prevention, detection and response without increasing complexity has long been atop every CISO's wish list. CrowdStrike Falcon is lightning fast to deploy and manage, and doesn't slow down a single machine, whether on-premise, in the cloud, or anything in between."

"With this Spring release, we continue to advance the Falcon platform to ensure customers can protect all of their systems, whether physical, virtual or cloud-based, with reduced complexity and improved performance," said Dmitri Alperovitch, CrowdStrike's co-founder and chief technology officer. "Many legacy AV solutions don't provide sufficient visibility to enable threat hunting and forensic use cases, they poorly protect non-Windows environments, and they are cumbersome and sometimes risky to deploy to cloud or hybrid cloud-based data centres. CrowdStrike Falcon addresses all of these pain points and adds scalability, efficacy, and speed."

Recently named a Visionary in the 2017 Gartner Magic Quadrant for Endpoint Protection Platforms, CrowdStrike has set the new standard for endpoint security, providing organisations with the only solution that can prevent, detect, respond and hunt for attacks via a single lightweight agent. The platform has achieved impressive success in the market, replacing not only legacy AV solutions but also a variety of next-generation AV point products. CrowdStrike Falcon has been independently tested and proven as an effective AV replacement, including verification from testing with AV-Comparatives and SE Labs.


New Server Hardware Boosts Data-Crunching for AI, Cloud – Data Center Frontier (blog)

A four-rack "pod" of Google Tensor Processing Units (TPUs) and supporting hardware inside a Google data center. (Photo: Google)

The rise of specialized computing is bringing powerful new hardware into the data center. This is a trend we first noted last year, and it has come into sharp focus in recent weeks with a flurry of announcements of new chips and servers. Much of this new server hardware offers data-crunching for artificial intelligence and other types of high-performance computing (HPC), or more powerful and efficient gear for traditional workloads.

Some of this new hardware is already being deployed in cloud data centers, bringing new capabilities to users looking to leverage the cloud for machine learning tasks or HPC. In some cases, these new offerings will factor into server refresh plans for companies operating their own data centers, even as the industry awaits the release of new products from Intel later this year.

One thing is clear: Innovation is alive and well in the market for data center hardware, with active contributions from hyperscale players, open hardware projects and leading chip and server vendors. Here's an overview of the new hardware offerings from Google, NVIDIA, AMD, ARM, Intel and Microsoft.

Google's in-house technology sets a high bar for other major tech players seeking an edge using AI to build new services and improve existing ones. Thus, Google's May 17 announcement of a new version of its Tensor Processing Unit (TPU) hardware made major waves in the AI world. Google will offer the new chips as a commercial offering on Google Cloud Platform.

The TPU is a custom ASIC tailored for TensorFlow, an open source software library for machine learning that was developed by Google. An ASIC (Application-Specific Integrated Circuit) is a chip that can be customized to perform a specific task. Recent examples of ASICs include the custom chips used in bitcoin mining. Google has used its TPUs to squeeze more operations per second into the silicon.

The new TPU 2.0 brings impressive performance and supports both categories of AI computing: training and inference. In training, the network learns a new capability from existing data. In inference, the network applies its capabilities to new data, using its training to identify patterns and perform tasks, usually much more quickly than humans could. These two tasks usually require different types of hardware, but Google says its newest TPU has surmounted that challenge.
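
To make the training/inference distinction concrete, here is a tiny, self-contained example: training iteratively adjusts weights against known data, while inference is a single forward pass over new data using those learned weights. It is illustrative only and says nothing about the TPU's internals.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: learn y = 2x + 1 from noisy samples.
x = rng.uniform(-1, 1, size=(200, 1))
y = 2 * x + 1 + rng.normal(0, 0.05, size=(200, 1))

w, b = 0.0, 0.0

# Training: repeatedly update the weights to reduce error on existing data.
for _ in range(500):
    pred = w * x + b
    grad_w = 2 * np.mean((pred - y) * x)
    grad_b = 2 * np.mean(pred - y)
    w -= 0.1 * grad_w
    b -= 0.1 * grad_b

# Inference: apply the learned weights to new, unseen data.
new_x = np.array([[0.25], [0.75]])
print(w * new_x + b)  # close to [1.5, 2.5]
```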

"Each of these new TPU devices delivers up to 180 teraflops of floating-point performance," Google executives Jeff Dean and Urs Hölzle said in a blog post. "As powerful as these TPUs are on their own, though, we designed them to work even better together. Each TPU includes a custom high-speed network that allows us to build machine learning supercomputers we call TPU pods. A TPU pod contains 64 second-generation TPUs and provides up to 11.5 petaflops to accelerate the training of a single large machine learning model. That's a lot of computation!"

A Google TPU pod built with 64 second-generation TPUs delivers up to 11.5 petaflops of machine learning acceleration. (Photo: Google)
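
The pod-level figure follows directly from the per-device number, as this quick arithmetic check shows (values taken from the Google quote above).

```python
teraflops_per_tpu = 180  # per second-generation TPU device, per Google
tpus_per_pod = 64

pod_teraflops = teraflops_per_tpu * tpus_per_pod
print(pod_teraflops / 1000, "petaflops per pod")  # 11.52, quoted as "up to 11.5"
```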

"With Cloud TPUs, you have the opportunity to integrate state-of-the-art ML accelerators directly into your production infrastructure and benefit from on-demand, accelerated computing power without any up-front capital expenses," said Hölzle and Dean. "Since fast ML accelerators place extraordinary demands on surrounding storage systems and networks, we're making optimizations throughout our Cloud infrastructure to help ensure that you can train powerful ML models quickly using real production data."

One of the other key players in AI hardware is NVIDIA, which rolled out its long-anticipated Volta GPU computing architecture on May 10 at its GPU Technology Conference. The first Volta-based processor is the Tesla V100 data center GPU, which brings speed and scalability for AI inferencing and training, as well as for accelerating HPC and graphics workloads.

NVIDIA founder and CEO Jensen Huang introduces the company's new Volta GPU architecture at the GPU Technology Conference in Las Vegas. (Photo: NVIDIA Corp.)

"Artificial intelligence is driving the greatest technology advances in human history," said Jensen Huang, founder and chief executive officer of NVIDIA, who unveiled Volta at his GTC keynote. "It will automate intelligence and spur a wave of social progress unmatched since the industrial revolution."

Volta, NVIDIA's seventh-generation GPU architecture, is built with 21 billion transistors and delivers a 5x improvement in peak teraflops over Pascal, the current-generation NVIDIA GPU architecture. NVIDIA says that by pairing CUDA cores and the new Volta Tensor Core within a unified architecture, a single server with Tesla V100 GPUs can replace hundreds of commodity CPUs for traditional HPC.

The arrival of Volta was welcomed by several of NVIDIA's largest customers.

"NVIDIA and AWS have worked together for a long time to help customers run compute-intensive AI workloads in the cloud," said Matt Garman, vice president of Compute Services for Amazon Web Services. "We launched the first GPU-optimized cloud instance in 2010, and introduced last year the most powerful GPU instance available in the cloud. AWS is home to some of today's most innovative and creative AI applications, and we look forward to helping customers continue to build incredible new applications with the next generation of our general-purpose GPU instance family when Volta becomes available later in the year."

Specs for the new AMD EPYC processors.

Weeks after unveiling its new Ryzen family of PC chips, AMD introduced its new offerings for the data center. The EPYC processor, previously codenamed Naples, delivers the Zen x86 processing engine scaling up to 32 physical cores. The first EPYC-based servers will launch in June with widespread support from original equipment manufacturers (OEMs) and channel partners.

"With the new EPYC processor, AMD takes the next step on our journey in high-performance computing," said Forrest Norrod, senior vice president and general manager of Enterprise, Embedded & Semi-Custom Products. "AMD EPYC processors will set a new standard for two-socket performance and scalability. We believe that this new product line-up has the potential to reshape significant portions of the datacenter market with its unique combination of performance, design flexibility, and disruptive TCO."

AMD was once a major player in the enterprise and data center markets with its Opteron processors, particularly in 2003-2008, but then lost ground to a resurgent Intel. AMD sought to shake things up in 2011 with its $334 million acquisition of microserver startup SeaMicro, but by 2015 it had retired the SeaMicro servers and gone back to the drawing board.

Securities analysts have been impressed with AMD's server prospects with the EPYC processors, and at one point AMD shares surged on rumors that it would license its technology to old rival Intel (which turned out to be untrue). There are some signs that EPYC is at least getting a look from the type of web-scale customers that are critical for server success, as Dropbox is among the companies evaluating AMD's new processors.

There has long been curiosity about whether low-power ARM processors could slash power bills for hyperscale data centers. Those hopes have led to repeated disappointments. That may be changing, as Microsoft has given a major boost to the nascent market for servers powered by low-energy processors from ARM, which are widely used in mobile devices like iPhones and iPads.

Building on that momentum, ARM is targeting the market for AI computing. At this week's Computex electronics show in Taiwan, ARM announced two new processors: the Cortex-A75 high-performance processor and the Cortex-A55 high-efficiency processor. Both are built for DynamIQ technology, ARM's new multi-core technology announced in March 2017. The Cortex-A75 brings a brand-new architecture that boosts processor performance, while the Cortex-A55 will expand the capabilities of the CPU to handle advanced workloads.

An overview of new processor technology from ARM.

ARM is not looking to go head-to-head with NVIDIA and Intel on training workloads in the data center. ARM's focus is on mobile devices, where it has been a dominant player, and it is positioning its new chips to power AI processing on these edge devices.

"A cloud-centric approach is not an optimal long-term solution if we want to make the life-changing potential of AI ubiquitous and closer to the user for real-time inference and greater privacy," writes Nandan Nayampally on the ARM blog. "ARM has a responsibility to rearchitect the compute experience for AI and other human-like compute experiences. To do this, we need to enable faster, more efficient and secure distributed intelligence between computing at the edge of the network and into the cloud."

As its competitors roll out new hardware, market leader Intel is preparing to unveil new server offerings later this year to update the Intel Xeon Processor Scalable Family, the chipmaker's new brand for its data center offerings. These include:

Intel's Jason Waxman shows off a server using Intel's FPGA accelerators with Microsoft's Project Olympus server design during his presentation at the Open Compute Summit. (Photo: Rich Miller)

In the meantime, Intel has been making the case for field programmable gate arrays (FPGAs) as AI accelerators. FPGAs are semiconductors that can be reprogrammed to perform specialized computing tasks, allowing users to tailor compute power to specific workloads or applications. Intel acquired new FPGA technology in its $16 billion acquisition of Altera in 2016.

The flagship customer for FPGAs has been Microsoft, which last year began using Altera FPGA chips in all of its Azure cloud servers to create an acceleration fabric, an outgrowth of Microsoft's Project Catapult research.

At last month's Microsoft Build conference, Azure CTO Mark Russinovich disclosed major advances in Microsoft's hyperscale deployment of Intel FPGAs, outlining a new cloud acceleration framework that Microsoft calls Hardware Microservices. The infrastructure used to deliver this acceleration is built on Intel FPGAs. This new technology will enable accelerated computing services, such as deep neural networks, to run in the cloud without any software required, resulting in large advances in speed and efficiency.

"Microsoft is continuing to invest in novel hardware acceleration infrastructure using Intel FPGAs," said Doug Burger, one of Microsoft's Distinguished Engineers.

"Application and server acceleration requires more processing power today to handle large and diverse workloads, as well as a careful blending of low power and high performance, or performance per watt, which FPGAs are known for," said Dan McNamara, corporate vice president and general manager, Programmable Solutions Group, Intel. "Whether used to solve an important business problem, or to decode a genomics sequence to help cure a disease, this kind of computing in the cloud, enabled by Microsoft with help from Intel FPGAs, provides a large benefit."
