Category Archives: Cloud Servers

GTSI Launches Government-Focused HP Cloud Center of Excellence

HERNDON, Va.--(BUSINESS WIRE)--

GTSI Corp. (GTSI), a systems integration, solutions and services provider to government, today announced the establishment of an HP Cloud Center of Excellence at its headquarters in Herndon, Virginia. Integrated into the company's Customer Briefing Center, the new HP center enables GTSI to showcase private, public and hybrid cloud environments specifically geared toward the public sector.

"Given our position in the market, GTSI has the unique ability to highlight options that are optimized for the government while taking the various governance requirements into consideration," said Jeremy Wensinger, Chief Operating Officer, GTSI. "We are making significant investments in our ability to increase our customer offerings to help government meet cloud mandates and achieve cost savings. As the only HP Cloud Center of Excellence specifically dedicated to the public sector, GTSI has a unique opportunity to help government agencies understand the benefits and possibilities of transitioning to a cloud environment."

The Cloud Center of Excellence features HP CloudSystem, the industry's most complete and integrated solution for building and managing services across private, public and hybrid cloud environments. In addition to industry-leading HP servers, the center also includes HP Storage, HP Networking and Enterprise Security solutions, as well as third-party applications. The objective of the center is to enable customers to gain firsthand knowledge of the benefits of moving to the cloud, including:

"Government customers are looking for expert guidance to help them understand how cloud can accelerate their agency missions," said Henry Fleischmann, Chief Technologist, HP Federal Cloud Solutions. "At the HP Cloud Center of Excellence, agencies can experience the benefits of HP cloud solutions tailored for the public sector by GTSI engineers."

About GTSI Corp.

GTSI (GTSI) is a leading provider of technology solutions and professional services to federal, state and local governments. Founded in 1983, the company has helped meet the unique IT needs of more than 1,700 governmental agencies nationwide. GTSI professionals draw on their deep knowledge, strategic partnerships, customer service and more than 740 industry certifications to guide agencies in selecting the most cost-effective technology available. GTSI has extensive capabilities and past performance in software development, data center, networking, collaboration, security and cloud computing solutions. In addition, GTSI's advanced engineering, integration, support and financial services and broad portfolio of contracts ease the planning, purchasing and deployment of solutions, and facilitate the management of mission-critical IT throughout the lifecycle. Headquartered in Herndon, Va., GTSI has approximately 450 employees. For more information, visit the company's website at http://www.gtsi.com.

Link:
GTSI Launches Government-Focused HP Cloud Center of Excellence

The Cloud, Day 28: My Five Biggest Cloud Complaints

It's Day 28 of the 30 Days With the Cloud series. As with previous 30 Days series, this day is dedicated to recapping the five biggest issues or problems I encountered during the 30 Days journey.

So, without further ado, here are my five biggest cloud complaints:

1. Sometimes It's Not Cloudy

The cloud has tons of great tools and services to address virtually any need, which is great, as long as you can connect to the cloud. Living in a large metropolitan area, it seems that strong wireless carrier signals and Wi-Fi hotspots are fairly ubiquitous. But the unthinkable still happens sometimes, and you just can't connect to the Internet. The more rural or remote you are, the greater the odds and frequency of an outage.

I like the cloud when it's available, but I don't like being at the mercy of the cloud being available. When I'm flying at 30,000 feet, I still want to play my music, view my photos, and type my next article--and on most flights that won't be possible if I rely on the cloud.

2. Upload Speeds

Download speeds are blazing, but we still need faster upload speeds as well.

Broadband providers have drastically increased download speeds in recent years. That's great for things like streaming movies or music from the Web, but it doesn't help you get your movies and music uploaded to the cloud in the first place.

The upload speed on most broadband connections is a fraction of the download speed. My home broadband connection gets about 30Mbps down but only uploads at about 5Mbps. The Internet is rapidly evolving from a one-way pipe to a two-way road, and we need upload speeds to be more on par with download speeds to make cloud services more practical.
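The asymmetry adds up quickly. A quick sketch using the speeds quoted above (the 100GB library size is an assumption for illustration, and the times are idealized, ignoring protocol overhead and throttling):

```python
def transfer_hours(size_gb: float, speed_mbps: float) -> float:
    """Hours to move size_gb gigabytes at speed_mbps megabits per second."""
    size_megabits = size_gb * 8 * 1000  # 1 GB = 8,000 megabits (decimal units)
    return size_megabits / speed_mbps / 3600

library_gb = 100  # assumed size of a photo-and-music library
print(f"download: {transfer_hours(library_gb, 30):.1f} hours")  # 7.4
print(f"upload:   {transfer_hours(library_gb, 5):.1f} hours")   # 44.4
```

Nearly two days to get the same library into the cloud that could come back down in an afternoon.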

3. Still Need a Plan B

I've been preaching the cloud for years, as a Plan B. I put data in the cloud so that if something happens to my PC I can still get to it. I upload photos and music so I have a backup copy stored safely on the Web in case my house and all of my data burn to the ground.

See more here:
The Cloud, Day 28: My Five Biggest Cloud Complaints

Amazon Rival Rackspace Evokes Dot-Com Era Deal: Real M&A

By Will Robinson and Danielle Kucera - 2012-06-20T02:58:37Z

Rackspace Hosting Inc. (RAX) is tempting buyers that covet a foothold in the cloud to tackle the largest U.S. Internet takeover since the dot-com bubble.

Rackspace has more than tripled since its 2008 initial public offering as it evolved into Amazon.com Inc. (AMZN)'s biggest competitor in cloud computing, which allows businesses to save money on data centers by storing information on remote servers and accessing it via the Web. While the $6.1 billion company has a higher valuation relative to earnings than almost two-thirds of Internet software and e-commerce firms, it's less than half as expensive as Amazon, according to data compiled by Bloomberg.

Even after profit failed to top analysts' estimates for the first time in four quarters, the company is still projected to almost triple net income by 2014 as the market for cloud-computing infrastructure services expands to $10.5 billion from $3.7 billion last year, according to Gartner Inc. Benchmark Co. says that may lure AT&T Inc. (T), International Business Machines Corp. or Dell Inc. (DELL). An acquisition may fetch as much as 13 times estimated 2013 earnings, said Dougherty & Co., valuing the San Antonio-based company at $7.9 billion for the biggest takeover of a U.S. Internet company in 12 years, the data show.
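A back-of-envelope check on the multiple quoted above (illustrative only; Dougherty's actual earnings estimate is not given in the article):

```python
# Infer the 2013 earnings implied by a $7.9 billion price at 13x earnings.
takeover_value = 7.9e9     # Dougherty & Co.'s valuation
earnings_multiple = 13     # 13 times estimated 2013 earnings
implied_2013_earnings = takeover_value / earnings_multiple
print(round(implied_2013_earnings / 1e6))  # ~608 (million dollars)
```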

"There truly isn't anyone else out there that's independent and as big as Rackspace in cloud infrastructure," Clayton Moran, a Delray Beach, Florida-based analyst at Benchmark, said in a telephone interview. "There's good value here given the strong growth. Potential acquirers are pretty deep-pocketed, so they certainly could pay a healthy multiple."

"We think this is a paradigm shift in computing and the future is huge for the winners in this space," Lew Moorman, president of Rackspace, said in an interview yesterday. "We want to build something great. Our board has fiduciary duties, but we're not for sale."

Rackspace lets its more than 180,000 business customers store their websites and applications on its servers. The fleet of data centers it runs competes with Amazon Web Services in the public-cloud market, where customers rent computing power, storage and other services.

Rackspace is moving its cloud services to OpenStack, an open-source project that it created as an alternative to Seattle-based Amazon's product. OpenStack lets companies build their own clouds using Rackspace's code. The effort has the backing of the U.S. space agency NASA, and it's being used by companies such as Dell and AT&T, whose offerings compete with Rackspace's.

Competition in the market is still heating up. Amazon's cloud business may have reaped $800 million in revenue last year, Heather Bellini, a New York-based analyst at Goldman Sachs Group Inc., estimated in a February report. Microsoft Corp. (MSFT) is promoting its Azure services, while traditional technology providers IBM and Hewlett-Packard Co. (HPQ) are also in the market.

Rackspace increased revenue at its cloud unit by 88 percent last year to $189.2 million. The company has gained share with its early entry into the market and has maintained it by charging a premium for service, said Mark Kelleher, a Boston-based analyst at Dougherty.

Read the rest here:
Amazon Rival Rackspace Evokes Dot-Com Era Deal: Real M&A

Garantia Data Unveils First In-Memory NoSQL Cloud at LaunchPad 2012

SAN FRANCISCO & TEL AVIV, Israel--(BUSINESS WIRE)--

GigaOM Structure LaunchPad 2012 - Garantia Data today announced the beta release of the first fully-automated, in-memory NoSQL cloud service offering reliable Memcached and infinitely-scalable Redis data store systems. Garantia Data is one of 11 finalists for the GigaOM Structure LaunchPad competition, which recognizes emerging startups in the cloud computing industry, and will present its new In-Memory NoSQL Cloud on stage at the event today in San Francisco, CA.

Web companies, such as Facebook, Twitter, Instagram and Pinterest, rely on Memcached and Redis to support high performance and rapid growth. However, Memcached lacks reliability, and Redis is limited in scalability - a dataset cannot grow beyond a single master server. In addition, both require constant operational care. Garantia Data's breakthrough dynamic auto-sharding technology virtualizes multiple cloud servers into an infinite pool of memory, enabling datasets to scale autonomously and continuously from gigabytes to terabytes and even petabytes based on their actual size. This zero-management service completely frees developers from dealing with nodes, clusters, server lists, configuration, scaling and failure recovery, while guaranteeing absolutely no data loss.
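The workload these services carry is the classic cache-aside pattern. A minimal sketch, with a plain dict standing in for the remote Memcached/Redis store so it stays self-contained (a real deployment would use a client library pointed at the service's endpoint):

```python
# Cache-aside: check the cache first, fall back to the database on a miss,
# then populate the cache so the next request is served from memory.
cache = {}

def slow_db_lookup(key: str) -> str:
    return f"value-for-{key}"  # placeholder for an expensive database query

def get(key: str) -> str:
    if key in cache:             # cache hit: served from memory
        return cache[key]
    value = slow_db_lookup(key)  # cache miss: query the backing store...
    cache[key] = value           # ...and cache the result for next time
    return value

print(get("user:42"))  # value-for-user:42
```

The operational pain points named above (node lists, resharding, failure recovery) all live behind that `cache` abstraction, which is what a managed service takes off the developer's plate.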

"We have leveraged sophisticated technology to solve real industry pains," said Ofer Bengal, CEO of Garantia Data. "Our new in-memory NoSQL cloud reinvents the way people use Memcached and Redis. We offer our customers infinite, continuous and fully automated scalability in a completely hassle-free cloud service. Memcached users enjoy full reliability with absolutely no data loss. Redis users enjoy infinite scalability without compromising on any of the Redis commands."

"Garantia Data is a true zero-management service," said Adoram Rogel, CTO of Abe's Market, an online reseller of all-natural products. "We connected to the in-memory NoSQL cloud in seconds, and from that moment on we never had to deal with scaling, configuration or failure recovery again."

"Every web-scale application developer that uses distributed memory caching should take a close look at this breakthrough solution from Garantia Data," said Paul Burns, president of cloud computing industry analyst firm Neovise. "It not only delivers Memcached and Redis capabilities as a cloud service through an API, it makes them reliable and scalable. It also eliminates the time-consuming, error-prone administrative processes typically involved in establishing and maintaining private caching deployments."

Price and Availability

The Garantia Data in-memory NoSQL cloud is currently available free of charge to early adopters during the beta phase. When it transitions to general availability later this year, the company will offer a pay-as-you-go model: instead of paying for full instances, the customer will pay only for actual memory consumption, analogous to metered utilities, at a per-gigabyte price similar to that of plain cloud instances.
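The economics of that metered model can be sketched with hypothetical numbers (Garantia Data published no prices at beta; the instance size, usage, and per-GB rate below are assumptions for illustration):

```python
# Pay-for-consumption vs. pay-for-instance, with assumed figures.
instance_gb = 16.0         # memory of a full cloud instance you'd otherwise rent
used_gb = 5.0              # memory your dataset actually consumes
price_per_gb_hour = 0.02   # assumed rate, similar under both models

full_instance_cost = instance_gb * price_per_gb_hour  # pay for the whole box
metered_cost = used_gb * price_per_gb_hour            # pay for what you use
print(metered_cost / full_instance_cost)  # 0.3125: ~31% of the instance price
```

The saving comes entirely from the gap between provisioned and consumed memory, which is why metered pricing favors datasets that grow gradually.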

The Garantia Data in-memory NoSQL cloud is currently available on Amazon Web Services. The company intends to expand its offering to other public clouds later this year.

About Garantia Data

Read more from the original source:
Garantia Data Unveils First In-Memory NoSQL Cloud at LaunchPad 2012

ENG TechDays: Hadoop vs RDMS, Gert Drapers on Big Data – Video



18-06-2012 13:22 Microsoft TechDays 2012: Presentation by Gert Drapers on Hadoop, the open source addition to Microsoft Azure. Drapers explains Big Data and the relationship between Microsoft SQL Server as an RDBMS and the open source Hadoop system. Since Oct 12, 2011, Microsoft has been committed to Hadoop on Azure, the Microsoft cloud system, in close cooperation with Hortonworks. Drapers elaborates on key components of Hadoop such as HDFS (the Hadoop Distributed File System), Hive & Pig, and Sqoop. Additions to Hadoop are given back to the Hadoop community. Drapers emphasizes the true essence of what Hadoop means for the cloud: operating through failure, meaning that in the cloud it is not a question of if something goes wrong, but when. Hadoop effectively delivers a highly redundant environment where disk failure does not matter because data is stored in many places. Hadoop was essentially invented by Google and adopted by the open source community. Gert Drapers: "What is the big deal about Big Data? It is a very interesting acronym soup. Some people say that it is all about size, but I don't think that is true. That is one of the trends that we see today. Servers have become dirt cheap, and it is much more economically sensible to stack up pizza boxes. Big Data also means NoSQL to a lot of people. Those two are actually not necessarily correlated. I'll make that distinction in a moment. You will find there MongoDB and CouchDB. So let us look at some stats. If we look at the US market, in 2009 it was ...
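The MapReduce model behind Hadoop can be sketched in a few lines. This is an illustrative stand-in, not Drapers' demo: a real job would ship the map and reduce phases to the cluster (e.g. via Hadoop Streaming) with HDFS holding the data, and Hadoop itself would do the grouping between phases:

```python
from collections import defaultdict

def map_phase(lines):
    """Map: turn each input line into (word, 1) pairs."""
    for line in lines:
        for word in line.split():
            yield word.lower(), 1

def reduce_phase(pairs):
    """Reduce: sum the counts for each word (Hadoop groups by key first)."""
    counts = defaultdict(int)
    for word, n in pairs:
        counts[word] += n
    return dict(counts)

result = reduce_phase(map_phase(["big data", "big deal"]))
print(result)  # {'big': 2, 'data': 1, 'deal': 1}
```

Because each map call touches only its own slice of the input, the work parallelizes across cheap "pizza box" servers, which is exactly the economics Drapers describes.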

Read more:
ENG TechDays: Hadoop vs RDMS, Gert Drapers on Big Data - Video

Consumerization of IT Driving Virtualization to the Private Cloud

by Rich Bourdeau

Thanks to improved efficiencies and greater business agility, the number of companies looking to deploy private cloud infrastructures has increased dramatically in 2012. CIOs across industries can no longer ignore the compelling business case for cloud computing. In his February 2012 report, "Top Five Trends for Private Cloud Computing," Gartner analyst Tom Bittman forecasts that "the number of [private cloud] deployments throughout 2012 will be at least 10 times higher [than 2011]."

Many companies view private clouds as the next phase in their virtualization efforts with the goal of enabling the consumerization of IT. Compared to virtualization, private clouds are an even larger paradigm shift that requires significant changes in the way many companies deliver IT services. Unless IT teams thoughtfully plan how they are going to address the organizational and cultural challenges, they will likely derail their initial cloud deployments or deliver a solution that cannot be scaled to other groups or businesses within the enterprise.

Provisioning Virtual Infrastructure Still Takes Too Long

During the last five to ten years, cost savings through server consolidation was the primary motivation driving most companies to virtualize large portions of their IT infrastructures. A secondary benefit was that virtualization also helped reduce IT service delivery times from weeks to days. However, with the pace of business accelerating, companies need faster access to compute resources so they can continue to grow their businesses in an increasingly competitive market. The challenge at most companies is that IT consumers still wait days for access to servers, desktops, storage and other compute resources they needed yesterday.

Agility Replaces Cost Savings as Private Cloud Motivation

In their non-work, consumer lives, many of the services people use can be acquired online through a self-service portal and delivered immediately. However, in their business lives, these same people have to submit requests and wait days for manual processes to deliver much-needed services. Increasingly, IT consumers are demanding the same levels of service they have in their personal lives. They want self-service access to resources, with delivery measured in minutes, not days. To achieve this and maximize the benefits of virtualization, businesses need to evolve toward IT-as-a-service, or a private cloud. While virtualization provides the foundation that enables the delivery of private cloud services, private clouds provide the consumer interface that will empower companies to take virtualization to the next level.

Like many other companies, you have probably recognized that for IT to remain relevant, you need to embrace the consumerization of IT and offer on-demand access to private and, yes, even hybrid cloud services. As with any IT project, you probably want to start out small and grow your implementation in both size and complexity over time. Before you begin, you should be aware of the more common mistakes companies make in their initial private cloud deployments and how to overcome these obstacles to ensure that your company achieves the quickest time to cloud value.

Common Pitfalls

One of the theories espoused by the so-called experts is that, in order to successfully deploy a private cloud, companies need to learn from the public cloud providers and standardize on a few offerings with a single deployment process. The theory is that it will be easier to automate the delivery of these services if you limit the number of permutations. In theory, this sounds great, but in practice it is one of the biggest obstacles that stalls the growth of many private cloud deployments.

For companies to achieve the economies of scale enjoyed by cloud providers, they need to achieve higher utilization by sharing resources and amortizing the cost over multiple businesses and departments. The problem that many companies encounter is that they start by building their private cloud to meet the specific needs of one group. Like virtualization, which started first in lower-risk IT applications, initial private cloud pilots are frequently deployed in Dev/Test environments. The choice of Dev/Test by itself is not a bad option, but when combined with a standardized offering and automation process, it typically leads to poor adoption by other groups within the company because the offerings don't meet the unique needs of their business.

Let's take a look at provisioning methodology, which is just one of the many attributes that make up an infrastructure service. The Dev/Test group's primary need is to get access to machines quickly; for them, cloning a machine meets their needs. The production group is more concerned about compliance with software revision levels and patch management; for them, using enterprise software deployment tools from BMC, CA, HP, IBM and others is a critical necessity. The desktop group, on the other hand, is more concerned with creating space-efficient desktops to lower its storage costs. If you build a private cloud service whose provisioning methodology only supports machine cloning, then you are likely to see poor adoption by the other groups.

The rest is here:
Consumerization of IT Driving Virtualization to the Private Cloud

Google: Use the Cloud, Save the Planet

Organizations generally switch to cloud-based services to save money, but there are environmental benefits as well. Cloud computing reduces energy use and carbon emissions, according to Google, which claims that an average enterprise can lower its energy usage by 65 percent to 85 percent by switching to online productivity tools such as Google Apps.

"Lower energy use results in less carbon pollution and more energy saved for organizations," writes Google's Urs Hoelzle, senior vice president for technical infrastructure, in a Monday post on the Google Green Blog.

A typical organization has more servers than it needs to cover backups, failures, and spikes--an inefficient system that wastes energy and money, Hoelzle writes. Cloud-based services, by comparison, are shared far more efficiently by thousands of people, and are engineered to minimize the energy used to operate and cool servers.

Energy Impact of the Cloud (Source: Google)

How much energy and money can organizations save by switching to the cloud? According to Google, the U.S. General Services Administration (GSA) cut its server energy use by nearly 90 percent and its carbon emissions by 85 percent when it recently switched 17,000 users to Google Apps for Government. As a result, the GSA will slash its annual energy bill by about $285,000.
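Those two figures imply a rough size for the GSA's pre-migration energy bill (a back-of-envelope estimate only, assuming the dollar savings track the ~90 percent usage cut):

```python
# Infer the pre-migration annual energy bill from the reported savings.
savings = 285_000    # annual dollar savings reported by Google
reduction = 0.90     # ~90 percent cut in server energy use
implied_prior_bill = savings / reduction
print(round(implied_prior_bill))  # ~316667 dollars per year before the switch
```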

There are security issues with cloud computing, of course, and organizations must weigh the pros and cons before making the switch from in-house solutions. But the potential energy savings of using the cloud are crystal clear.

Contact Jeff Bertolucci at Today@PCWorld, Twitter (@jbertolucci) or jbertolucci.blogspot.com.

Continue reading here:
Google: Use the Cloud, Save the Planet

Blitz.io and CopperEgg Partner to Deliver Integrated Real-Time Performance Testing & Cloud Monitoring

AUSTIN, TX--(Marketwire -06/18/12)- CopperEgg, Corp., a cloud analytics and monitoring company, today announced a partnership with Blitz.io, a new approach to Web performance testing for apps, websites and cloud services, to deliver integrated real-time Web performance testing and monitoring for cloud infrastructures. The integration delivers real-time insight into cloud capacity and performance to help better test, scale, and optimize cloud application delivery.

"Blitz.io is committed to working with best-in-class solutions to help our customers meet the new challenges of rolling out applications in a dynamic cloud infrastructure," said Tamer Abbas, Blitz.io's head of business development. "Combining Blitz.io's outside-in metrics such as response times, rates and number of users, with CopperEgg's inside-out metrics such as CPU, Disk I/O and memory utilization, enables end-to-end visibility into your app or your website performance."

Connecting CopperEgg's real-time system performance measurements to the Blitz.io native interface via the CopperEgg API enables users to correlate Application Performance Management (APM) metrics with system capacity and performance statistics on the same graph. This allows application developers and DevOps engineers to immediately see the effect of a code or system change within seconds of that change, creating a much tighter and higher fidelity testing loop.

"Integration between Blitz.io and CopperEgg not only delivers instant system performance and capacity feedback to customers, it also demonstrates the power and ease-of-use of the CopperEgg API," said Mike Raab, V.P. Business Development at CopperEgg. "Integration with Blitz.io followed the CopperEgg mantra of simple, smart, and fast. We look forward to working with Blitz.io in taking APM for DevOps to the next level."

For more information about the partnership visit: http://copperegg.com/do-you-have-the-right-vertical-scale-for-your-app.

About Blitz.io

Blitz.io is a simple yet powerful cloud-based service that enables developers creating apps, websites or cloud services to immediately and cost-effectively test the performance of their solutions under real-world conditions. Either on its own or as an integrated part of its large ecosystem of partners, Blitz.io helps application and website developers throughout the DevOps lifecycle with continuous monitoring and performance testing, with no scripting required. Blitz.io supports APIs for development languages such as Ruby, Java, Maven, Node.js, Python, Perl, PHP and more. To learn more, visit http://blitz.io/ or follow us on Twitter @blitz_io.

About CopperEgg

CopperEgg's next-generation cloud monitoring provides simple, smart, and fast insight into the performance, quality, and availability of servers, applications and services deployed on cloud, virtual and physical infrastructures. Our SaaS-based, real-time cloud monitoring and cloud analytics deliver immediate intelligence into critical cloud performance problems, correlated visibility into developing trends, and split-second decision support for organizations of all sizes. CopperEgg products are simple to try, install, use, and grow. CopperEgg is backed by Silverton Partners and based in Austin, Texas.

For more information, visit http://copperegg.com or follow @CopperEgg on Twitter. You can also read the company's blog: http://copperegg.com/category/blog/.

Read the rest here:
Blitz.io and CopperEgg Partner to Deliver Integrated Real-Time Performance Testing & Cloud Monitoring

Mellanox Announces Connect-IB, World’s Leading Scalable Server and Storage Interconnect Adapter

HAMBURG, Germany--(BUSINESS WIRE)--

ISC'12 -- Mellanox Technologies, Ltd. (MLNX) (MLNX.TA), a leading supplier of high-performance, end-to-end interconnect solutions for data center servers and storage systems, today announced Connect-IB, the world's leading scalable server and storage adapter solution for High-Performance Computing (HPC), Web 2.0, Cloud, Big Data, financial services, virtualized data centers and storage environments. Connect-IB adapters deliver the highest throughput of 100Gb/s utilizing PCI Express 3.0 x16, unmatched scaling with innovative transport services, sub-microsecond latency, and 130 million messages per second, a 4X higher message rate than competing solutions.

Connect-IB is the new foundation for scalable computing. HPC, Web 2.0, and cloud environments are challenging today's interconnect technologies with their demand for infrastructures utilizing tens of thousands of servers and hundreds of virtual machines per server. New applications such as Big Data analytics and in-memory computing depend on parallel execution and RDMA (Remote Direct Memory Access). RDMA has also become critical for storage solutions. The new Connect-IB interconnect architecture delivers the performance and capabilities required by compute- and storage-intensive applications and enables IT managers to build the most efficient, extreme-scale data centers.

"In the high-performance computing server and storage markets, the explosion of data volumes is significantly increasing the demand for network throughput," said Steve Conway, IDC research vice president for HPC. "The introduction of interconnect technology at 100Gb/s is an important step toward meeting these demands. With the rollout of powerful next-generation compute servers, including Intel's Romley, we expect growing demand from a variety of HPC markets for highly scalable, high-bandwidth, low-latency interconnect solutions such as those being offered by Mellanox."

"Mellanox is the first company to deliver 100Gb/s interconnect throughput, a significant breakthrough to take our customers to the next level of scalable computing," said Eyal Waldman, chairman, president and CEO of Mellanox Technologies. "Connect-IB delivers the industry's highest performing server and storage interconnect with maximum bandwidth, low latency and highest application efficiency."

The Connect-IB product line consists of single- and dual-port adapters for PCI Express 3.0 with options for x8 and x16 host bus interfaces, as well as a single-port adapter for PCI Express 2.0 x16. Each port supports FDR 56Gb/s InfiniBand with MPI ping latency of less than 1 microsecond. All Mellanox HCAs support CPU offload of transport operations and RDMA for efficient computing. New in Connect-IB is Dynamic Transport operation support for unlimited scalability and end-to-end data protection for unmatched data reliability. Adapter cards are sampling today.

Supporting Resources:

About Mellanox

Mellanox Technologies is a leading supplier of end-to-end InfiniBand and Ethernet interconnect solutions and services for servers and storage. Mellanox interconnect solutions increase data center efficiency by providing the highest throughput and lowest latency, delivering data faster to applications and unlocking system performance capability. Mellanox offers a choice of fast interconnect products: adapters, switches, software and silicon that accelerate application runtime and maximize business results for a wide range of markets including high performance computing, enterprise data centers, Web 2.0, cloud, storage and financial services. More information is available at http://www.mellanox.com.

Mellanox, BridgeX, ConnectX, CORE-Direct, InfiniBridge, InfiniHost, InfiniScale, PhyX, SwitchX, Virtual Protocol Interconnect and Voltaire are registered trademarks of Mellanox Technologies, Ltd. Connect-IB, FabricIT, MLNX-OS, Unbreakable-Link, UFM and Unified Fabric Manager are trademarks of Mellanox Technologies, Ltd. All other trademarks are property of their respective owners.

Original post:
Mellanox Announces Connect-IB, World’s Leading Scalable Server and Storage Interconnect Adapter

Dynamic Data Centers and Efficient Operations in the Cloud. Patent for Auction. ICAP Patent Brokerage Announces for …

SAN FRANCISCO, June 18, 2012 /PRNewswire/ --ICAP Patent Brokerage, a division of ICAP plc and the world's largest intellectual property brokerage firm and organizer of the ICAP Ocean Tomo Auctions, is offering for auction a patent portfolio of seventeen (17) issued U.S. patents and associated pending applications regarding enterprise "rack servers" and their provision and management. The lot will be included in the 16th ICAP Ocean Tomo IP Auction on July 26, 2012, at the Julia Morgan Ballroom in San Francisco, CA.

(Logo: http://photos.prnewswire.com/prnh/20100614/CG20517LOGO)

"We are excited to be offering this patent technology lot for auction to our global buyer base," said Dean Becker, CEO, ICAP Patent Brokerage and ICAP Ocean Tomo Auctions.

Background

Virtual data centers are composed of enterprise servers that store data remotely. Thus, the location of these enterprise servers is sometimes referred to as "the cloud," and their remote access and use is sometimes referred to as "cloud computing." Because these data centers (or "clouds") need to be scaled up or down based on use or demand, significant addition or relocation of servers, hardware reconfiguration, and re-cabling is often required, sometimes taking significant time and proving very expensive. A cost-effective and reliable solution was needed to make data centers more dynamic and efficient without requiring the physical relocation and rewiring of servers.

Key Characteristics & Benefits

With priority dates from 2004, the patents in this portfolio disclose an architecture for enterprise servers (ES) with varying arrangements of pluggable modules called an enterprise fabric (EF), with the following benefits:

Market Potential

This patented technology will be important to all data center providers and operators as well as manufacturers of enterprise servers and networking equipment.

Companies who have cited this patent portfolio include: Cisco, Oracle, IBM, Hewlett-Packard, Intel, Microsoft and Toshiba.

Read the original:
Dynamic Data Centers and Efficient Operations in the Cloud. Patent for Auction. ICAP Patent Brokerage Announces for ...