Category Archives: Cloud Servers
Rambus Brings Ease of Use to IoT Security – DesignNews
Days after a massive cyberattack crippled computer hardware around the world, Rambus Inc. is rolling out a service designed to bring a simple but powerful form of security to Internet of Things (IoT) applications.
Known as IoT Device Management, the service is said to provide a secure channel between IoT devices and their cloud servers, and to do so in a way that requires little or no security expertise on the part of the equipment designer. The company is targeting it at all types of IoT applications, from smart appliances to factory floor machinery. "We're providing end-to-end secure connectivity, and it's all pre-integrated," Asaf Ashkenazi, senior director of product marketing for Rambus, told Design News. "You don't need to have security experts, either in the cloud or at the client."
The IoT Device Management system is said to provide a secure channel between IoT devices and their cloud servers, and do so in a way that requires little or no security expertise on the part of the equipment designer. (Source: Rambus, Inc.)
The solution is made up of software modules that are pre-integrated into the firmware of chipsets made by silicon vendors who manufacture microprocessors, microcontrollers and wireless devices. The technology is also pre-integrated into the platforms of cloud service providers. Rambus said it is working with Qualcomm Technologies, Inc., which makes wireless devices, but it has not yet named any other silicon vendors or cloud service providers that will incorporate its IoT Device Management system.
The company's announcement comes at a time when cybersecurity is making headlines around the world. Last week, attackers spread malware to businesses in at least 74 countries, effectively hijacking their computer systems. Victims included Britain's National Health Service, Nissan Motor Co., Renault SA, and FedEx Corp., along with hundreds of banks and gas stations.
Rambus aims to head off such attacks with a form of security that locks up all the IoT systems Internet communication. Once a Rambus-supported device is powered up and connected to the Internet, it is automatically identified and authenticated by the IoT Device Management system. The device is then securely provisioned over the air, creating a secure communication channel. Data encryption and decryption, mutual authentication and key management is handled automatically by the software, the company said in a statement.
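Rambus has not published implementation details, but the behavior described (mutual authentication plus an encrypted channel) maps onto the familiar mutual-TLS pattern. Below is a minimal sketch of the client-side setup such a service would automate on behalf of the device maker; the certificate file names are placeholders, not anything Rambus ships:

```python
import ssl

# Mutual TLS: both the IoT device and the cloud server present
# certificates, so each side authenticates the other before any
# application data flows. File paths below are placeholders.
def make_device_context(ca_cert="ca.pem",
                        device_cert="device.pem",
                        device_key="device.key"):
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
    ctx.verify_mode = ssl.CERT_REQUIRED   # the cloud endpoint must prove its identity
    ctx.check_hostname = True
    # In a real deployment the credentials would be loaded here:
    # ctx.load_verify_locations(ca_cert)            # trust anchor for the server
    # ctx.load_cert_chain(device_cert, device_key)  # the device's own credential
    return ctx

ctx = make_device_context()
```

The point of a managed service is that the provisioning step (getting `device.pem` onto the device securely) happens over the air, without the equipment designer writing any of this.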
The service could potentially plug a gaping hole in IoT applications, the majority of which are woefully unsecured. A 2014 study by Hewlett-Packard revealed that 70% of IoT devices had security holes, with each having about 25 vulnerabilities, on average. Problems included insufficient authorization, lack of encryption, insecure web interfaces and inadequate software protection. In a particularly well-known case at Target Corp., thieves made off with 40 million credit card numbers after entering the company's network through an Internet-connected air conditioning system.
"With many of these devices, anyone can connect to them," Ashkenazi said. "They have no authentication, no encryption. You can connect to them from anywhere in the world and manipulate them. It's really scary."
Ashkenazi said IoT systems are particularly vulnerable, largely because they are at once
SolarWinds Closes Acquisition of Scout Server Monitoring – Talkin’ Cloud
IT management software provider SolarWinds announced on Wednesday that it has completed the acquisition of Scout Server Monitoring, which will bring deep server monitoring capabilities for DevOps professionals. The terms of the deal were not disclosed.
Pingdom Server Monitor, formerly Scout Server Monitoring, joins the Pingdom website uptime and performance monitoring products, as well as Librato, Papertrail, and TraceView, in SolarWinds' SaaS portfolio for monitoring cloud-native applications, servers and other infrastructure.
DevOps, and the technologies and practices that have come with it, have ushered in new goals and requirements for monitoring. With Pingdom Server Monitor, customers are able to track custom metrics, create alerts, and integrate with more than 90 plugins, including DevOps tools like Chef and Puppet.
The deal will see Scout co-founder and CTO Andre Lewis join SolarWinds. Back in 2015, Lewis said in an interview with The WHIR that Scout's focus on user and developer experience helped it gain loyal customers.
"[Scout] started as a labor of love. We were doing consulting for Ruby on Rails development at the time, and we built Scout as an internal tool to help keep tabs on some of the software that we were building for customers. We ended up productizing it, and as a result I think Scout server monitoring is unusually finely tuned to the needs of developers, because it actually grew out of our own needs at the time," Lewis told The WHIR at AWS re:Invent 2015.
"We're very excited to add Pingdom Server Monitor, formerly Scout Server Monitoring, to our portfolio of products," Christoph Pfister, executive vice president of products at SolarWinds, said in a statement. "With it, developers and DevOps practitioners have access to an affordable, SaaS-based server monitoring solution. We look forward to investing in it, and we welcome Andre Lewis to the SolarWinds team to help in those efforts."
"The era of cloud and digitalization is driving exponential application growth and increased complexity," Pfister said. "It's clear that cloud-native developers and DevOps teams need faster troubleshooting that enables them to more easily solve problems and improve performance across the full stack, including servers. The goal of our SaaS portfolio, and the market-leading products within it, is to provide just that, and to do so at an affordable price."
Last year, SolarWinds acquired LOGICnow to form its SolarWinds MSP division.
With Volta, NVIDIA Pushes Harder into the Cloud – TOP500 News
Amid all the fireworks around the Volta V100 processor at the GPU Technology Conference (GTC) last week, NVIDIA also devoted a good deal of time to its new cloud offering, the NVIDIA GPU Cloud (NGC). With NGC, along with its new Volta offerings, the company is now poised to play both ends of the cloud market: as a hardware provider and as a platform-as-a-service provider.
At the heart of NGC is a set of deep learning software stacks that can sit atop NVIDIA GPUs: not just the new Tesla V100, but also the P100, or even the consumer-grade Titan Xp. The stack itself comprises popular deep learning frameworks (Caffe, Microsoft Cognitive Toolkit, TensorFlow, Theano and Torch), NVIDIA's deep learning libraries (cuDNN, NCCL, cuBLAS, and TensorRT), the CUDA drivers, and the OS. The various stacks are containerized for different environments using NVDocker (a GPU-flavored wrapper for Docker), and those stacks are then collected in a cloud registry.
Source: NVIDIA
The value proposition here is providing a big choice of integrated stacks that can be used to run deep learning applications in many different environments (as long as there is a good-sized Pascal or Volta NVIDIA GPU sitting in the hardware). For an application developer, composing a coherent stack from scratch can be a chore, given the variety of deep learning frameworks and their dependencies on libraries, drivers, and the operating system. And keeping up with the latest versions of all these software components ("arguably the most complex stack of software the world has ever seen," says NVIDIA CEO Jen-Hsun Huang) adds another daunting layer of complexity. With NGC, NVIDIA removes all this fiddling with software.
NGC allows you to run your deep learning application either locally, on your own PC or DGX system, or remotely in the cloud. In fact, a typical progression would be to run your application on an in-house machine and then burst it into the cloud when greater scale is needed. "This is really the world's first hybrid deep learning cloud computing platform," noted Huang.
After you figure out whether you want to run locally or remotely, you select the appropriate stack for the runtime environment, along with your deep learning application and your dataset. If you are running in the cloud, you will have a number of choices. A demonstration during Huang's GTC keynote illustrated a selection of NVIDIA's in-house DGX SATURNV supercomputer, Microsoft Azure GPU instances, or AWS GPU instances. It's not clear if the SATURNV will be generally available as a public resource, but the demo implies that it will. If so, NVIDIA would be able to charge users both for its cloud platform and the underlying infrastructure.
Beta testing on NGC will begin in July, with pricing to be determined at a future date.
NVIDIA will also use the new Volta V100 GPU to gain a bigger foothold in the cloud hyperscale space. At GTC, Amazon said it was already committed to adding the V100 into its cloud offerings as soon as NVIDIA starts cranking them out. "We'll make Volta available as the foundation for our next general-purpose GPU instance at launch," says Matt Wood, Amazon's general manager for deep learning and AI.
Amazon has been a good customer of NVIDIA, using its GPUs in its own learning efforts for things like Alexa and for product recommendations associated with its online store. But making that technology available to cloud users on AWS is now driving additional GPU uptake at Amazon. Apparently, the current GPU instances are among the fastest growing for AWS. "Our most recent instance, the P2, is just growing like wildfire," says Wood. According to him, "it's being used extensively for deep learning across many verticals, everything from medical imaging to autonomous driving."
Likewise, Microsoft has used NVIDIA GPUs to drive its deep learning training on Azure for several years now. Jason Zander, Microsoft corporate VP for Azure, noted that GPUs form the basis for the natural language translation capability in Skype. "That's one of the most sophisticated language deep neural nets that's out there," says Zander. "It's really cool. I can talk to someone in English and they can hear it in Chinese. We can't do that without the power of the cloud and GPUs."
Microsoft is also likely to pick up the enhanced HGX-1 GPU expansion box for the cloud, which will soon be available with V100 GPUs. The HGX-1 was co-designed by Microsoft to offer a hyperscale GPU accelerator chassis for AI. The original HGX-1, announced in March, came with eight P100 GPUs, which can be expanded to a four-chassis system containing 32 GPUs. When such a system is built with the new V100s, that mini-cluster will deliver 3.8 petaflops of deep learning performance.
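The 3.8-petaflop figure for the 32-GPU configuration is consistent with NVIDIA's rated per-GPU numbers: the V100 is quoted at roughly 120 teraflops of Tensor Core (deep learning) performance, and a four-chassis HGX-1 system holds 32 of them:

```python
# Back-of-the-envelope check of the 3.8-petaflop claim.
V100_TENSOR_TFLOPS = 120   # NVIDIA's rated deep learning throughput per V100
gpus_per_chassis = 8
chassis = 4

total_gpus = gpus_per_chassis * chassis              # 32 GPUs in the full system
total_pflops = total_gpus * V100_TENSOR_TFLOPS / 1000
print(total_gpus, total_pflops)                      # prints: 32 3.84
```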
Source: NVIDIA
Amazon and Microsoft, along with most of the other cloud providers and their users, are employing GPUs for training deep neural networks. But NVIDIA wants to expand on that success with its 150-watt V100 offering. As we wrote last week, this low-power version offers 80 percent of the performance of the full 300-watt V100 part, and is aimed at the inferencing side of deep learning. That means NVIDIA is looking to sell these low-power V100s in hyperscale-sized allotments to the big cloud providers.
NVIDIA has targeted this area before, with its Maxwell M4 and M40 GPUs, and more recently with the Pascal P4 and P40 GPUs. But the new V100 offers much better performance and lower latency than any of its predecessors. NVIDIA has also upgraded the TensorRT library for Volta, which can now compile and optimize a trained neural network for ultra-fast inferencing using the V100's Tensor Cores.
Although 150 watts is a fairly high power draw for an accelerator aimed at commodity cloud servers, the rationale is that the V100 is able to deliver far more inferencing throughput per server than competing solutions, thus saving on overall datacenter costs. According to NVIDIA, just 33 nodes of P100-accelerated servers can inference 300 thousand images per second. It estimates that's about 1/15 as many servers as would be needed by CPU-only machines.
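NVIDIA's server-count comparison is simple arithmetic, sketched here:

```python
# NVIDIA's claim: 33 GPU-accelerated nodes handle 300,000 images/sec,
# versus roughly 15x as many CPU-only servers for the same load.
images_per_sec = 300_000
gpu_nodes = 33
cpu_to_gpu_ratio = 15

per_node = images_per_sec / gpu_nodes        # ~9,091 images/sec per GPU node
cpu_nodes = gpu_nodes * cpu_to_gpu_ratio     # ~495 CPU-only servers for the same work
print(round(per_node), cpu_nodes)            # prints: 9091 495
```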
Inferencing, though, is increasingly using more specialized hardware to maximize performance and minimize power usage. Microsoft, for example, is employing FPGAs for this task, while Google has turned to its own custom-built Tensor Processing Unit (TPU). Additional purpose-built solutions from the likes of Graphcore and Intel/Nervana are also in the works. Whether low-power V100s can compete in this environment remains to be seen, but at least for the time being, NVIDIA seems to be wagering that offering more powerful deep learning silicon, which can serve both training and inferencing, will win the day. And given the nearly insatiable demand for both these days, that could be a smart bet.
How to monitor data center servers from the cloud with CloudStats – TechRepublic
Image: Jack Wallen
If you are responsible for a data center, you know how important it is to be able to keep tabs on the servers that empower your company. In some instances, it's pretty simple to monitor your servers, especially if you're on site all day. But what about those situations where on-site monitoring isn't possible? What do you do then? One option is to look to the cloud and a relatively new service called CloudStats. This server monitoring solution enables you to add as many servers as you like (at a cost) and gives you an easy-to-use dashboard where you can:
There are two packages to sign up for (more on this in a bit):
It is worth noting that the above information was taken directly from the CloudStats site, but it is a bit misleading. After setting up a free account, you will quickly find that the free account really only allows you to monitor your server. In order to gain access to alerts and other features, you have to pony up for what they call the Premium account, which is:
The free account also does not include the backup feature listed in the pricing plans. In effect, the free account gives you little more than a glance at your servers/services and what CloudStats offers (should you pay up for a Premium account). It should also be noted that the free account does include email alerts for server up/down; you cannot customize these alerts or integrate with Slack or Skype.
That being said, CloudStats works with both Linux and Windows servers. I am going to walk you through the process of connecting an Ubuntu 16.04 server. It's quite simple and takes very little time.
The first thing you must do is sign up for an account. I'd recommend signing up for the free account, to make sure this is a service that meets your needs. You can also sign up for a seven-day free trial of the Premium account. The signup page is a bit hard to find, so use this link and then fill in the necessary information. Once you've done that, click on the ADD NEW SERVER button (Figure A).
Figure A
Adding a new server is but a click away.
You'll need to be logged into your server to add it to your CloudStats account. Do that and then return to the browser where, in the next window (Figure B), you must select the platform running the server (Linux or Windows).
Figure B
Select your server platform.
The resulting window will give you a command that will be used to connect your server to the newly created account. Copy that command and then paste it into a terminal window on your server. You'll be prompted for your sudo password and then the command will run. Once the command reports "Done publishing" (Figure C), go back to the web browser and click Finish.
Figure C
The command running on our server.
Once you've clicked Finish, you'll be taken back to your CloudStats account, where your server will appear on the dashboard and you can start the process of adding service monitors (you can add monitors for HTTP, database, FTP, SSH, NFS, DNS, and mail) and checking the various statuses of your server. Should you find the need to set up alerts, backups, etc., you will have to pay up for the Premium account.
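CloudStats doesn't document how its monitors are implemented, but the basic up/down check behind service monitors like HTTP, FTP or SSH amounts to a timed TCP connection attempt. A minimal sketch (the host and port values are illustrative, not CloudStats internals):

```python
import socket

# Report a service as "up" if a TCP connection to its port succeeds
# within the timeout, "down" otherwise.
def service_is_up(host, port, timeout=2.0):
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:   # refused, unreachable, or timed out
        return False

# e.g. service_is_up("example.com", 80) for an HTTP monitor,
#      service_is_up("example.com", 22) for an SSH monitor.
```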
Even though the free account is limited in what it can do, CloudStats is definitely worth a look. If you've been searching for a cloud-based monitoring service that makes it simple to add your servers, set up alerts, and more, you'd be hard-pressed to find an easier solution.
IBM-Nutanix Deal Moves Power Servers to Datacenters – EnterpriseTech
(By Arjuna Kodisinghe/Shutterstock)
Targeting AI, machine learning and other big data workloads, IBM and Nutanix will join forces to deliver the enterprise cloud vendor's software via Power servers. The deal is Nutanix's first non-Intel x86 offering, and is aimed at bringing software-defined hyper-converged infrastructure to emerging cognitive workloads while helping enterprises shift those computing-intensive jobs to the cloud.
The partners said Tuesday (May 16) their multi-year partnership would yield a hyper-converged platform for datacenters designed to handle demanding application development projects as well as an increasing number of cognitive workloads. The partners are betting that large enterprises will retain but seek to "refresh" datacenters by leveraging cloud computing, storage, and faster networking along with the ability to scale capacity.
To that end, the partners said their cloud software-Power server collaboration would create a path from the datacenter to the public cloud.
Along with adoption of Power-based servers, the deal with IBM (NYSE: IBM) also gives Nutanix (Nasdaq: NTNX) another server partner along with its collaboration with Dell Technologies (NYSE: DVMT).
In addition to cognitive and DevOps workloads, the initiative targets a range of computing-intensive jobs, including databases, data warehouses, web infrastructure and distributed applications. The combination also would support emerging cloud-native workloads, encompassing "full stack open source middleware and enterprise databases and [application] containers," the partners said.
The initiative also calls for the partners to launch a "simplified" private cloud that supports the Power processor architecture in datacenter servers. The hyper-converged infrastructure would be managed via Nutanix's AHV virtualization tool along with other datacenter automation and remediation tools. Meanwhile, stateful cloud native services would run on the software vendor's Acropolis hypervisor that in this instance also serves as a container service. The configuration is designed to automate deployment while meeting growing requirements for persistent storage when deploying stateful services via containers.
As a result of the partnership, "IBM customers of Power-based systems will be able to realize a public cloud-like experience with their on-premise infrastructure," Nutanix CEO Dheeraj Pandey asserted in a statement announcing the collaboration.
Along with Dell, Nutanix has been collaborating with other x86-based server makers such as Lenovo (HKSE: 992). The partners announced a converged IT platform last May that incorporates the Nutanix Xpress software package designed to allow the new appliances to manage storage-area networks by aggregating computing, storage and networking.
The deal with Nutanix underscores how IBM has positioned its Power-based servers as geared toward big data and cognitive workloads. The partnership is designed to combine those performance gains with a "one-click" path to the cloud in enterprise datacenters.
IBM and Nutanix said the new hyper-converged service would be offered exclusively through IBM and its channel partners. Specific timelines, models and supported server configurations will be announced at the time of availability, they added.
About the author: George Leopold
George Leopold has written about science and technology for more than 25 years, focusing on electronics and aerospace technology. He previously served as Executive Editor for Electronic Engineering Times.
Compare function, value of on-premises private cloud vs. public cloud – TechTarget
Although public cloud is popular and looks like it's here to stay, it isn't the best option for every organization.
An on-premises private cloud is, in certain cases, more beneficial than a public cloud. If you're trying to figure out which type of cloud is right for you, this article is a good place to start: Here, we'll take a look at which applications public cloud is best suited for and how to provide a competitive on-premises private cloud offering.
Jack of all trades, master of none
When they look at the bottom-line costs on Amazon Web Services (AWS), most business-oriented people notice the low cost of compute without putting much thought into why it's cheap. While Amazon deploys VMs with ease, it doesn't include the high-availability functionality most data centers require -- at least, not for free.
Be sure to compare resources on an equal playing field when you develop a budget. Consider how you will back up ultra-cheap Amazon instances and whether your disaster recovery (DR) strategy for cloud instances is equivalent to the protection you maintain for on-premises servers. Think about how you intend to back up site and cloud servers. What is the recovery point objective? The recovery time objective? These considerations aren't exciting, but they are important. If your livelihood depends on a machine, ensure it has a working DR plan, regardless of where it's located.
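As a sketch of what acting on those RPO questions looks like, the snippet below flags any server whose most recent backup is older than its recovery point objective; the server names and timestamps are invented for illustration:

```python
from datetime import datetime, timedelta

# Flag servers whose last backup is older than the RPO.
def rpo_violations(last_backups, rpo, now):
    return [name for name, t in last_backups.items() if now - t > rpo]

now = datetime(2017, 5, 22, 12, 0)            # fixed "now" for the example
backups = {
    "on-prem-db": datetime(2017, 5, 22, 6, 0),   # 6 hours ago: fine
    "aws-web-01": datetime(2017, 5, 20, 6, 0),   # over 2 days ago: violation
}
late = rpo_violations(backups, rpo=timedelta(hours=24), now=now)
print(late)   # prints: ['aws-web-01']
```

The same structure works whether the machine sits on site or in the cloud, which is exactly the point: the DR plan must cover both.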
The cloud isn't always cheaper, because you need to factor in both storage and compute costs. Again, never forget to compare like resources. I often see potential cloud customers run the numbers for storage and find that on-premises technology is significantly less expensive than AWS; this is especially true when high-performance storage factors into the equation. Of course, that means nothing if the storage resources don't meet your organization's needs. What works for one organization might not work for another.
One of the selling points for the cloud is that it spins up VMs in just a few clicks; however, this doesn't consider account management. Tier one vendors usually have automation platforms that provide self-service workflows and approval mechanisms. A word to the wise: Learn these tools and use them to their full capability.
Not only will this help your company provision VMs faster, but it will also bill the correct department for the cost of these VMs -- after all, unused VMs still cost money. You wouldn't want someone to subvert IT and pull out the credit card, as there is no protection against them misunderstanding costs. One way to save money is to show the user how much it costs to provision workloads so that they can see exact costs for subsequent months.
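A minimal showback sketch along these lines, with invented rates standing in for whatever your finance team actually sets:

```python
# Showback: estimate what a requested VM costs its department per month.
# These rates are placeholders, not any vendor's real pricing.
RATE_PER_VCPU = 15.00     # $/month
RATE_PER_GB_RAM = 5.00    # $/month
RATE_PER_GB_DISK = 0.10   # $/month

def monthly_vm_cost(vcpus, ram_gb, disk_gb):
    return (vcpus * RATE_PER_VCPU
            + ram_gb * RATE_PER_GB_RAM
            + disk_gb * RATE_PER_GB_DISK)

# Surfacing this number in the self-service workflow, before the VM is
# provisioned, discourages credit-card "shadow IT" based on a misread
# headline price.
cost = monthly_vm_cost(vcpus=4, ram_gb=16, disk_gb=200)
print(f"${cost:.2f}/month")   # prints: $160.00/month
```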
Reporting is critical as well. Anyone who has ever worked for a large company knows that the administration wants to see management and usage reports. These reports need to be accurate and ready to go from day one, so build cost estimation models and prove they work.
One way to reclaim unused RAM and storage (thin provisioning) is to use hypervisor features. Optimize VM builds, ensure that all unused services are powered off and optimize the build to the environment. Every saved cycle of compute is a cycle you can use elsewhere.
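The thin-provisioning payoff is easy to quantify; the figures here are purely illustrative:

```python
# Thin provisioning in one line of arithmetic: reserve what VMs *claim*,
# consume only what they *use*.
allocated_gb = [100, 100, 250, 500]   # what each VM was promised
used_gb      = [30,  45,  80,  120]   # what each VM actually writes

thick = sum(allocated_gb)    # 950 GB reserved up front under thick provisioning
thin  = sum(used_gb)         # 275 GB actually consumed
reclaimed = thick - thin     # 675 GB freed for other workloads
print(reclaimed)             # prints: 675
```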
When building the service offering, standardize and then standardize some more. Build a service catalog and maintain it fastidiously; this keeps requesters honest and constrains them to what you want to provide. All services and systems should come from the service catalog. Give users the option to change compute and disk -- within reason -- but bill them for it. When users add nonstandard complexity, costs skyrocket, even though the cost to the end user remains the same. If an option isn't available in the catalog, then the option needs to be reviewed.
The cloud is excellent at horizontal scaling. If your customers' scaling needs are seasonal, consider using a combination of private and public clouds to scale into the public cloud when a spike occurs. As the old adage goes, "Own the base, rent the spike."
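The "own the base, rent the spike" split can be sketched with a year of invented demand figures:

```python
# Size the private cloud for steady-state demand; burst the seasonal
# excess to a public cloud. Demand numbers are made up for illustration.
monthly_demand = [40, 42, 45, 44, 43, 41, 40, 46, 60, 95, 120, 70]  # VM-equivalents

base_capacity = 46   # own enough to cover the non-seasonal months
burst = [max(0, d - base_capacity) for d in monthly_demand]
total_burst = sum(burst)   # VM-months rented from the public cloud
print(total_burst)         # prints: 161
```

Comparing the cost of owning `base_capacity` year-round against renting `total_burst` VM-months is the core of the private-versus-public budget exercise.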
Although the public cloud is an excellent option for some cases, it isn't the best option for every case -- sometimes an on-premises private cloud is better. Evaluate your decision on a case-by-case basis. Why are we putting this application in the cloud? Does it fit the cloud profile? What do we gain from putting it in the cloud? Sometimes the honest answer is somewhere in the middle, but it's good to show the application owner the pros and cons, the cost and the options.
Array NFV platform melds ADC functions to virtualized server appliance – TechTarget
Array Networks Inc. has introduced a network functions virtualization, or NFV, platform that blends application delivery, security and other networking operations in a single appliance.
The new AVX Series Network Functions Platform, rolled out this week at Interop ITX in Las Vegas, is a series of virtualized servers capable of hosting a variety of Array and third-party applications, said Paul Andersen, director of marketing at Array, based in Milpitas, Calif.
The NFV platform is tailored to enterprises and service providers who are intrigued by NFV's benefits, but concerned about the computing resources and overhead required to manage them.
Array is offering three AVX models, with the largest one capable of supporting 3 million connections a second, up to 32 virtual appliance instances and up to 140 Gbps of throughput. Customers can host a combination of entry, small, medium or large virtual machines on the NFV platform servers, depending upon their requirements.
Brad Casemore, an analyst with IDC, said by combining a cloud-managed NFV platform with virtualized server capabilities on a hardware appliance, Array is trying to resolve the performance tradeoff between dedicated devices and virtual form factors in how NFV is delivered.
"The platform aspect is important," he said, citing Array's support of third-party virtual network functions (VNFs), in addition to fundamental application delivery controller services like load balancing and Secure Sockets Layer VPNs. "That's significant because service providers are embracing NFV so that they no longer need to support a cornucopia of function- and vendor-specific hardware appliances. They want to consolidate, manage and service-chain different VNFs from different vendors on standardized hardware."
For now, Array has certified VNFs from Positive Technologies and Fortinet for web application firewalls and next-generation firewalls, respectively. But security services from other third-party vendors, including Arbor Networks, Imperva Inc. and Palo Alto Networks, can also be used.
For cloud management, the AVX NFV platform appliances support VMware vRealize Orchestrator, Microsoft System Center Configuration Manager and OpenStack Neutron.
IT professionals at SMBs have an overwhelmingly positive opinion of the cloud, according to a study conducted by network monitoring vendor Paessler.
The global study of 2,000 IT decision-makers at companies with fewer than 500 employees found 80% of respondents had a favorable opinion of the technology. That said, companies have mostly moved basic operations, such as web hosting, email and file sharing, to cloud-based data centers.
For now, more complex business applications continue to be managed on site, although the study found many organizations do plan to migrate these programs to the cloud in the next 12 to 18 months. Use cases include data backup, network monitoring, customer relationship management, sales and ticket systems.
The shift in workloads comes even as IT managers express concerns about cloud security. Almost half said security is a "big obstacle."
"Migration to the cloud in the SMB market is underway and will inevitably continue. Ultimately, cloud adoption and BYOD will forever change the way small businesses handle IT," said Dirk Paessler, founder and CEO, in a statement. "While cloud will become a major part of how workers experience IT, system administrators will still be managing local area networks, switches and data rooms," he added.
Savvius Inc. introduced two new versions of its Savvius Insight network monitoring micro-appliance, geared to retailers and other businesses with multiple locations.
The new devices offer faster analysis, larger storage capacities and, in the Insight Plus configuration, VoIP analysis, said Jay Botelho, senior director of products at Savvius, based in Walnut Creek, Calif.
Both versions have multiple 1 Gbps interfaces, "fail-to-wire" bridge ports and support for Savvius' Omnipeek analytics software.
"We took the same software we run in our 2U appliance and put it in this new appliance," Botelho said, adding that the Insight versions are geared to customers that may find it too expensive to put network monitoring and visibility systems in branch or retail locations.
The models are priced at $1,595 for an appliance with 256 GB of capacity and RAM, and $2,995 for a device with 1 TB of capacity and 16 GB of RAM.
Nutanix, IBM hug each other in Power pity party – The Register
Nutanix and IBM will announce on Tuesday a new relationship that will see Nutanix build hyperconverged systems out of IBM Power servers, its first non-Intel-powered boxes.
Details of what will be delivered, and when, have not yet been revealed. But The Register understands Nutanix will bring its hyperconverged stack to Power systems, complete with its software-defined storage play and the ability to make private clouds out of Big-Blue-powered servers. IBM's Bluemix public cloud offers Power servers too, so there's potential for a hybrid cloud play for Power people as well. Nutanix may also make it possible to treat x86 and Power as a single pool of resources.
We understand that the alliance was struck for a few reasons.
IBM knows that its Power systems don't have a stellar future. Commodity x86 and the operating systems it can run have mostly caught up to the resilience and scalability of the Power ecosystem. There's little reason to keep running it, other than the fact that many Power systems are tightly coupled to core applications.
That tight coupling means Power systems and the apps they run can't easily access public-cloud-like elasticity, or let developers adopt cloud-native tools. Power users also see the elasticity and pay-as-you-go models falling from public clouds and want that in their own data centres.
If Power users can't get that from IBM, it makes it more likely they'll consider migrating away from the platform even if that means the pain of moving a tier-one app.
Nutanix has problems of its own. The advent of Dell EMC, with its multiple own-brand hyperconverged products, plus HPE buying SimpliVity, means it now faces rivals with colossal resources, and in Dell's case little reason to continue nourishing a rival. Nutanix is also experiencing lumpy seasonal revenue that has spooked investors.
Despite sounding like a breakfast cereal, Nutanix has done very well very fast, but it isn't yet entertained in discussions about core apps inside big enterprises. And those discussions are what every enterprise vendor wants, because once you run a core application the incumbency is very hard to dislodge.
An IBM/Nutanix alliance therefore makes sense. IBM gets a way to show Power users they can start to adopt cloudy models, and a way to show investors that the bound-for-legacy-status Power platform has a longer future than might previously have been imagined. Nutanix gets a way to talk to big companies about their core apps, which may cheer up its investors.
More here:
Nutanix, IBM hug each other in Power pity party - The Register
Oracle Bets On India To Grow Cloud Business – CXOToday.com
Technology major Oracle has chosen India as one of its key markets to take on rivals Amazon, Microsoft and Google, and it is taking the cloud route seriously. The California-based IT giant, which already employs 40,000 of its 130,000-strong global workforce in the country, is targeting enterprises and government programs such as Digital India.
At its flagship event Oracle OpenWorld in New Delhi, held for the first time in the country, the company announced a slew of cloud-based initiatives for the Indian market. Oracle will expand its cloud services in India over the next six to nine months, anchored by the opening of a new Oracle data center in the country. The company, which posted $37 billion in revenue in 2016, also announced the availability of Oracle Enterprise Resource Planning (ERP) Cloud in India to help local and multinational firms operating there prepare for the country's transformational tax reform - GST.
"The justification for the expansion is the explosion of growth in demand within Indian businesses and government," Dr. Andrew Sutherland, Senior Vice President, Technology and Systems, Oracle Europe, Middle East, and Africa, told CXOToday.
"We are in fact hoping to get a bigger slice of government spending in India on cloud," he added, noting that Gartner believes the Indian government will spend at least US$7 billion on IT products in 2017, "which means a lot more opportunity for players like us in the country."
Oracle CEO Safra Catz, who has made her second visit to India in less than a year, has also called India one of the company's fastest-growing markets in recent quarterly financial results announcements.
"Governments at the national and state levels are rapidly moving into the future. Digital India is the only way to empower citizens and make governments accountable - a reason why we are investing so much here," she said.
The worldwide public cloud services market is forecast to grow 17.2 percent to a total of $208.6 billion, with the highest growth coming from infrastructure cloud services. India is mirroring this trend. A recent Gartner report indicated that Indian businesses and government are adopting cloud in greater numbers, with the public cloud computing market in the country expected to touch $1.8 billion in 2017. Of that, the Infrastructure-as-a-Service (IaaS) segment is expected to see the highest growth (49.2%).
With its next generation cloud platform, Oracle is gunning for aggressive growth in the IaaS segment.
"IaaS continues to be the strongest-growing segment as it has become more mainstream and enterprises move away from data center build-outs and move their infrastructure needs to the public cloud. It allows customers to go to market faster. Being able to access a secure and scalable infrastructure will help customers run any workload in the cloud for instant added value and productivity for their business," said Sutherland.
Asked how Oracle differentiates its services from rivals such as Amazon, Microsoft and Google in the crowded cloud computing market, Sutherland said Oracle is now the world's fastest-growing scaled cloud company and the only one that can offer a complete portfolio across all three layers of the cloud - SaaS, PaaS and IaaS - and that it is betting big on the cloud, specifically IaaS. What differentiates Oracle's IaaS cloud from others, he said, is its enterprise orientation and cost effectiveness.
Earlier, at a select media briefing, Catz took a dig at cloud rivals, especially Amazon Web Services (AWS). "Do they (AWS) provide software as a service? Do they provide Oracle database as a service? They provide raw compute as a service, yes. And this is where we compete, and at the same price," she said.
In other words, while its rivals only have cloud infrastructure, Oracle has everything, the executives clarified. "Very recently, we announced the broadest array of IaaS offerings in the industry," Sutherland added. In a nutshell, Oracle's newly announced portfolio of IaaS and PaaS solutions includes market-leading offerings such as cloud servers that are 11 times faster and 20% cheaper (IaaS), and that perform 105 times faster for analytics and 35 times faster for online transaction processing, to mention a few.
With Oracle able to offer the full cloud stack of SaaS, PaaS and IaaS, it can certainly compete in the market. DD Mishra, Research Director at Gartner, commented: "As the demand for agility and flexibility grows, organizations will shift toward more industrialized, less-tailored options. Organizations that adopt hybrid infrastructure will optimize."
In this regard, Oracle has full faith in the country's stakeholders. "Our customers and partners in India have trusted their businesses and mission-critical workloads to the Oracle Cloud for years. Our slew of offerings further support customer choice and strengthen our commitment to the country's market," summed up Thomas Kurian, Oracle's president of product development.
See more here:
Oracle Bets On India To Grow Cloud Business - CXOToday.com
8 Steps to Evaluating Cloud Service Security | CPA Practice Advisor – CPAPracticeAdvisor.com
With the current break-neck pace of software and technology we can often overlook the fact that "the cloud" is really just outsourcing. The term "cloud" is simply a catch-all term for subscription-based services running on someone else's network. Evaluating the security of such services requires digging in and asking the provider some possibly uncomfortable questions. If you aren't currently doing this for each cloud opportunity, and thinking through how its failure will impact your firm and your clients, you are simply putting the firm at risk.
As an example, I recently had a Partner forward me some information about a potential cloud service that we could use to help our staff by easing their manual data entry tasks. The idea behind the service was straightforward. Their cloud service would aggregate a client's transactions and allow the transactions to be bulk downloaded into our chosen software. To accomplish this, we would need to have each client enter their financial institution credentials into this cloud provider's system.
Our use of a cloud application like this would necessarily mean asking the client to participate. And, even if not actually stated, the fact that we would use it and ask the client to use it conveys to the client that we "endorse" this software in some way. That means I had to ask the right questions before committing. If we ask our clients to participate in a cloud application, and then down the road that application is breached or found to be low quality, the client will be asking us the hard questions.
These are the questions I always ask any potential cloud vendor:
If you can't get satisfactory answers to these questions, deciding to do business with such a provider boils down to a decision about how much risk your firm is willing to take on to gain the potential benefits the service will provide. And, if this is an app for doing client work, you will also be passing that risk on to your clients. That has to be fully understood at the Partner level.
So, what do I consider "satisfactory" answers to the questions above?
Not answering one of the above questions doesn't necessarily shut the door on using the service, as long as the refusal to answer makes sense. For instance, a provider might tell you they definitely hash passwords stored in their database, but for security reasons they don't want to divulge which hashing algorithm they use. I'd be OK with that, as long as the rest of their answers seem competent and pass the "smell test".
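To make that "hash passwords" answer concrete, here is a minimal Python sketch of what competent password storage looks like on the provider's side: a random per-password salt plus a slow key-derivation function (PBKDF2-HMAC-SHA256 from the standard library), with a constant-time comparison on verification. This is an illustration of the general technique, not a claim about any particular vendor's implementation; the function names and iteration count are my own choices.

```python
import hashlib
import hmac
import os

ITERATIONS = 600_000  # PBKDF2 is deliberately slow; higher counts resist brute force


def hash_password(password, salt=None):
    """Return (salt, digest) for storage; never store the plain password."""
    if salt is None:
        salt = os.urandom(16)  # fresh random salt per password
    digest = hashlib.pbkdf2_hmac("sha256", password.encode("utf-8"), salt, ITERATIONS)
    return salt, digest


def verify_password(password, salt, stored_digest):
    """Re-derive the digest and compare in constant time to avoid timing leaks."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode("utf-8"), salt, ITERATIONS)
    return hmac.compare_digest(candidate, stored_digest)


salt, digest = hash_password("correct horse battery staple")
print(verify_password("correct horse battery staple", salt, digest))  # True
print(verify_password("guess1", salt, digest))  # False
```

A vendor answer along these lines (salted, slow, per-user) is the kind of "competent" response I'm looking for; a plain unsalted MD5 or SHA-1 hash of the password would not pass the smell test.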
Unfortunately, you will run into many startups that refuse to give straightforward answers to these questions. It's not enough that an app works well or solves a problem. If the people running the service don't have enough experience running and protecting such a service reliably at large scale, it's up to us to identify that ahead of time before we commit the data of our firm or our clients into their hands.
-------
Dave Jones is the IT Manager for Pearce, Bevill, Leesburg, Moore, P.C in Birmingham, AL. He has been a network and system administrator in the Birmingham, AL area for 20 years. He has been in the CPA technology field for 18 years. Email: dave@pearcebevill.com; LinkedIn: https://www.linkedin.com/in/daveajones.
Visit link:
8 Steps to Evaluating Cloud Service Security | CPA Practice Advisor - CPAPracticeAdvisor.com