Category Archives: Cloud Hosting

Verizon sells its cloud and managed hosting services to IBM – PCWorld


Verizon shut down its public cloud service in early 2016, and is now unloading its virtual private cloud and managed hosting offerings to IBM.

The deal, announced Tuesday, strengthens IBM's position in cloud computing, a spokesman said by email. It also lets Verizon exit a cloud infrastructure market dominated by Amazon, Google, and Microsoft, and focus instead on its managed network, security, and communications services.

The companies did not disclose the terms of the sale. The transaction is expected to close later this year.

"This is the latest in a series of IBM initiatives enhancing its leadership position in such areas as cloud computing, cognitive technologies, internet of things, security, mobility, and analytics," IBM's spokesman added. "And the agreement aligns to and supports IBM's hybrid cloud and IT-as-a-Service strategy."

The deal gives IBM an increased presence across several industries, including the U.S. federal government, healthcare, retail, and utilities, he added.

As part of the sale, Verizon and IBM agreed to work together on strategic initiatives involving networking and cloud services, George Fischer, senior vice president and group president of Verizon Enterprise Solutions, said in a blog post.

The deal is a "great opportunity" for Verizon Enterprise Solutions and its customers, he added. "It is the latest development in an ongoing IT strategy aimed at allowing us to focus on helping our customers securely and reliably connect to their cloud resources and utilize cloud-enabled applications," he added.

The deal supports Verizon's goal to become "one of the world's leading managed services providers enabled by an ecosystem of best-in-class technology solutions from Verizon and a network of other leading providers," Fischer said.

Affected customers shouldn't expect any immediate impact on their services, he added.

Separately, data center provider Equinix announced Monday it had closed a US$3.6 billion deal to buy 29 Verizon data centers, representing more than half of the data centers operated by the telecom carrier.

Grant Gross edits and assigns stories and writes about technology and telecom policy in the U.S. government for the IDG News Service. He is based outside of Washington, D.C.


Hostgee Cloud Hosting Launches New Cloud Control Panel For Linux And Windows VPS Hosting Services – HostReview.com (press release)

Hostgee Cloud Hosting's Linux and Windows VPS Hosting plans are now available with a new Cloud Control Panel. The new control panel lets clients distribute their resources across multiple VPS Cloud servers in whatever configuration they need. Customers simply buy their resources, starting at 2GB RAM and one vCore and scaling up to 128GB RAM and 24 vCores, and then distribute them across as many VPS Cloud servers as they need.
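The pool-then-distribute model described above amounts to a simple allocation constraint: per-server assignments can be carved up any way the customer likes, as long as they sum to no more than the purchased pool. A minimal sketch of that check (illustrative only; the function and field names below are hypothetical, not Hostgee's actual API):

```python
# Illustrative sketch of the "buy a pool, split it across servers" model.
# The 128 GB / 24 vCore ceiling comes from the article; everything else
# (names, structure) is hypothetical.
POOL = {"ram_gb": 128, "vcores": 24}

def validate_distribution(servers, pool=POOL):
    """Check that the per-VPS assignments stay within the purchased pool."""
    used_ram = sum(s["ram_gb"] for s in servers)
    used_cores = sum(s["vcores"] for s in servers)
    return used_ram <= pool["ram_gb"] and used_cores <= pool["vcores"]

# Three VPS instances carved out of one 128 GB / 24 vCore pool.
layout = [
    {"ram_gb": 64, "vcores": 12},
    {"ram_gb": 32, "vcores": 8},
    {"ram_gb": 32, "vcores": 4},
]
```

Any layout that exhausts the pool exactly, as above, still passes; only oversubscription fails.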

And of course, contrary to the practices of big cloud VPS providers, Hostgee Cloud Hosting's plans don't require customers to use a pricing calculator to predict their cloud VPS costs. Every plan includes fixed, predictable monthly costs and no commitments.

Additionally, plans include 24/7 support and standard administration with a 20-minute average response time, along with ultra-fast Intel Xeon E5-2670 CPUs, maximum disk I/O with RAID10 real SSD arrays and managed backups, RAM-based read caching, and enterprise-class virtualization based on Hyper-V and System Center.

The new Cloud Control Panel is a valuable addition to the industry-leading features of Hostgee Cloud Hosting's Cloud VPS and Dedicated Server Hosting services, which have been designed to give world-class reliability, speed and versatility to today's demanding VPS Cloud clients. Every service is based on Hyper-V, which is the best platform for virtualizing any workload. Hyper-V provides complete virtual machine isolation, keeping every Cloud VPS fully independent of other Cloud Servers on the same physical host.

The new Cloud Control Panel completes our Cloud services, making Hostgee Cloud Hosting's Cloud VPS plans the most complete and dependable in the world.

http://www.hostgee.com | marketing@hostgee.com | 7910 Ash Shawqiyah, Makkah Al Mukarramah 24351-3081, Zip Code 21955, Kingdom of Saudi Arabia | Phone: +966 125360100


Cloud Computing Continues to Influence HPC – insideHPC

This is the second entry in an insideHPC series that explores the HPC transition to the cloud, and what your business needs to know about this evolution. This series, compiled in a complete Guide, available here, covers cloud computing for HPC, industry examples, IaaS components, OpenStack fundamentals and more.

Cloud technologies are influencing HPC just as they are the rest of enterprise IT. The main drivers of this transformation are the reduction of cost and the increase in accessibility and availability to users within an organization.

Traditionally, HPC applications have run on special-purpose hardware managed by staff with specialized skills. Additionally, most HPC software stacks are rigid, distinct from other more widely adopted environments, and demand a special skillset from the researchers who want to run the applications, who often need to become programmers themselves. The adoption of cloud technologies increases the productivity of your research organization by making its activities more efficient and portable. Cloud platforms such as OpenStack provide a way to collapse multiple silos into a single private cloud while making those resources more accessible through self-service portals and APIs. Using OpenStack, multiple workloads can be distributed among the resources in a granular fashion that increases overall utilization and reduces cost.


Another benefit of breaking down computation silos is the ability to accommodate multidisciplinary workloads and collaboration. While traditional HPC systems are better for a certain workload, cloud infrastructures can accommodate many. For example, they can be used to teach computation techniques to students as well as provide a resource for researchers to make scientific discoveries. Traditional HPC infrastructures are great at solving a particular problem, but they are not very good at the kind of collaboration that modern research requires. A multidisciplinary cloud can make life-changing discoveries and provide a platform to deliver those discoveries to other researchers, practitioners or even directly to patients on mobile devices.

Definitions of cloud computing vary, but the National Institute of Standards and Technology (NIST) has defined it as having the following characteristics:

Applied to HPC workloads, the service and delivery model is generally understood to include the following buckets, either individually or combined (derived from NIST definition):

Public clouds contain sufficient compute, storage and networking capacity for many HPC applications.

The various types of infrastructure described here can physically reside or be deployed over the following types of clouds:


Over the next few weeks this series on the HPC transition to the cloud will cover the following additional topics:

You can also download the complete report, the insideHPC Research Report on "HPC Moves to the Cloud: What You Need to Know," courtesy of Red Hat.


Hosting, Cloud Services to Lead Total Enterprise IT Spending Growth – Channel Partners

Hosting and cloud-services spending among enterprises is growing, both in dollar-value terms and as a portion of overall IT spending, across nearly all sectors in terms of company size, geography and vertical market.

That's according to 451 Research's latest Voice of the Enterprise: Hosting and Cloud Managed Services, Budgets and Outlook study. Based on research conducted in January and February with about 1,000 IT professionals around the world, the quarterly study combines 451's analysis with survey responses and interviews from a panel of more than 60,000 senior IT buyers and enterprise technology executives.

Liam Eagle, 451's research manager of hosting and cloud services, tells Channel Partners the trend will lead to new opportunities for the channel.

"As part of that transformation, hosting and cloud services spending is trending toward infrastructure and application services packaged with value-added managed services and security services," he said. "We believe that is where a lot of the opportunity for the IT channel will lie in the future: supporting the consumption of hosted infrastructure and applications by enterprises that may require assistance with the operational management or security of those resources, including being a complementary third party to the infrastructure or application subscription."

Enterprises this year expect hosting and cloud services spending to grow 25.8 percent, outpacing the 12 percent growth they expect in overall IT spending. Among large businesses (1,000-9,999 employees), hosting and cloud services spending is expected to grow an average of 33.3 percent.

Among respondents, 88 percent expect to increase their hosting and cloud services budgets in 2017 versus 2016, compared to 70 percent that expect to increase total IT budgets year over year.

Just 9.5 percent expect a decrease in hosting and cloud services spending, compared to 22.3 percent that expect a decrease in total IT spending, according to the study.

The increased spending is being driven by: migration of workloads from on-premises environments to the cloud; adding new resource capacity due to business growth; new IT initiatives; and businesses buying additional services they previously did not have. These drivers vary significantly by company size, with small businesses strongly emphasizing new capacity due to growth, and medium and very large businesses primarily focused on migrating on-premises workloads to the cloud.

Public cloud and SaaS providers such as Microsoft Azure and Amazon Web Services are being adopted by the largest portion of respondents, according to the study. However, about 50 percent of respondents indicate they are ...


Rackspace CEO Taylor Rhodes Leaving Company – Talkin’ Cloud

Brought to you by Data Center Knowledge

Taylor Rhodes is leaving Rackspace after three years as CEO of the former hosting and cloud heavyweight that recently pivoted to providing managed cloud services for Amazon Web Services, Microsoft Azure, and other hyper-scale platforms.

Rhodes will be replaced by Rackspace president Jeff Cotten, who is stepping in as interim CEO but whom the company's board considers a strong candidate for the chief executive role long-term.

Rhodes said his new company is about as big as Rackspace was when he joined 10 years ago.

He was appointed as Rackspace's chief executive in 2014, replacing the then publicly traded company's co-founder and former CEO Graham Weston. The company made the CEO switch after declining several buyout and partnership offers.

Last August, however, Rackspace went private, bought out by investment management company Apollo Global Management for $4.3 billion.

Here's Rhodes on company performance since the buyout:

In a follow-up blog post of his own, Cotten said Rhodes was leaving Rackspace in solid condition. Since the company is now private, it does not report the details of its financial performance, but according to Cotten, its managed cloud business is growing exceptionally well.

Managed AWS and Azure services have grown more than 1,400 percent year over year since they were launched two years ago, he said. The company also recently entered a partnership with Google, expecting to launch managed public cloud services for Google Cloud Platform in the near future. Rackspace also provides private cloud services, using VMware, Microsoft, and OpenStack platforms.

Cotten also said Rackspace is working to launch a data center in Germany. Its current footprint consists of data centers in Dallas, Chicago, Northern Virginia, London, Hong Kong, and Sydney.


Serverless computing might finally deliver on the promise of the cloud – GeekWire

Amazon CTO Werner Vogels discusses serverless computing at the AWS Summit in April. (Credit: Amazon)

The original promise of cloud computing was simple: no longer would you need to buy, configure, and maintain racks and racks of servers in hopes of growing a tech business into that capacity. All you needed to get up and running was a credit card and some code; if you started slow, you were only on the hook for the resources you consumed, and those resources were limitless.

As with most technology advances, the reality turned out to be a bit more complicated.

A current customer of Amazon Web Services, Microsoft Azure, or Google Cloud Platform's infrastructure-as-a-service products still needs to do quite a bit of work to provision servers, monitor performance, and make sure their costs aren't running out of control. And customers running legacy applications that would like to move to the cloud have to do even more work to ensure nothing breaks in the transition.

But serverless computing might just be the technology that delivers on that original promise. Serverless technologies allow application developers and technology organizations to account for unpredictable spikes in demand without having to specify the resources they'll need from their cloud provider.

Serverless is the latest in a long line of confusing tech marketing terms. Put another way: the servers are still there, but you'll never know it.

"Last year, people were looking to explore. This year will be the year of great maturity," said Sam Kroonenburg of A Cloud Guru, who will be hosting the Serverless Conference today in Austin, Texas, where 450 serverless enthusiasts will hear presentations from all the major cloud providers on their approaches.

Serverless computing is relatively old as a concept inside forward-thinking elite technology companies, but it's only been about three years since it started to gain traction.

The spark behind this movement was the preview release of AWS Lambda in 2014, which Amazon CTO Werner Vogels recently called the last crucial piece in the promise of the cloud. Lambda became generally available almost exactly two years ago.

"Lambda really brought to life a managed computing environment, where you no longer need to think about managing instances, or managing servers or managing any type of infrastructure: you could just write code and deploy it," Vogels said at the AWS Summit in San Francisco last week.

Almost all cloud providers now offer serverless capabilities for their cloud customers. Lambda is probably the gold standard, thanks to its early debut and the healthy market share enjoyed by AWS, but Azure Functions have been generally available since last November, and Azure CTO Mark Russinovich recently gave an update on the state of Microsoft's serverless efforts.

Ahead of the Serverless Conference on Thursday, IBM announced new capabilities for its Bluemix OpenWhisk product, including a new API Gateway that allows developers to target multiple endpoints. Google just elevated Google Cloud Functions to beta status, but has not announced a time frame for general availability.

Their approaches can be a little different, but they all allow a developer to upload code once and set a trigger that instructs the application to behave a certain way in response to certain inputs.

A classic example of an application that can benefit from a serverless approach is one that might experience rapid, unpredictable spikes in demand, such as when DJ Khaled posts something to Snapchat. Serverless tools can automatically execute code in response to a flood of incoming traffic, or a pre-determined event such as when a file is uploaded to a database. (This well-written primer from Martin Fowler covers all the technical bases.)
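The trigger model can be made concrete with a short sketch. Here is roughly what an AWS Lambda-style Python handler wired to a storage-upload event looks like; the event shape mirrors S3 upload notifications, but the bucket and object names are hypothetical:

```python
# Minimal Lambda-style handler: the platform invokes it only when an
# event fires (here, a file landing in object storage), so no server
# sits idle waiting for work.
def handler(event, context):
    # The event payload describes what triggered the invocation.
    records = event.get("Records", [])
    processed = []
    for record in records:
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        # Real code would fetch and process the uploaded object here.
        processed.append(f"{bucket}/{key}")
    return {"processed": processed}
```

The developer uploads this function once; the provider handles scaling it from zero to thousands of concurrent invocations when a traffic flood arrives.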

"One of the things that has come out of this serverless movement is the recognition that an event-based or trigger-based programming model is actually a very powerful model, one where I can get code activated very quickly and respond to it," Russinovich said earlier this month.

Along those lines, Algorithmia CEO Diego Oppenheimer will discuss how the benefits of serverless computing could enable the next generation of machine learning at our Cloud Tech Summit in the Seattle area this June.

And because serverless functions can be spun up and taken down in fractions of a second, a cloud provider is able to charge its customers accordingly, rather than charging them for computing services by the hour, week, month, or even year.

"You only have to pay for what you use. This is a tremendous change in the way people are developing applications; build highly scalable environments and only build what they are paying for," Vogels said last week.
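Back-of-the-envelope arithmetic shows why sub-second billing matters for bursty workloads. The prices below are assumptions chosen for illustration, not any provider's actual rate card; the point is the shape of the comparison, not the numbers:

```python
# Assumed, illustrative prices (NOT a real rate card):
# a serverless platform billing per request plus per 100 ms of
# execution, versus renting a small server by the hour regardless
# of how busy it is.
PRICE_PER_REQUEST = 0.0000002   # assumed $ per invocation
PRICE_PER_100MS = 0.00000208    # assumed $ per 100 ms of execution
SERVER_PER_HOUR = 0.10          # assumed $ per hour for a small VM

def serverless_cost(requests, avg_ms):
    """Cost of handling `requests` calls averaging `avg_ms` each,
    billed in 100 ms increments (rounded up)."""
    units_per_call = avg_ms // 100 + (1 if avg_ms % 100 else 0)
    return requests * PRICE_PER_REQUEST + requests * units_per_call * PRICE_PER_100MS

def server_cost(hours):
    """Cost of an always-on server for `hours`, busy or not."""
    return hours * SERVER_PER_HOUR

# A spiky day: 100,000 requests of 200 ms each costs well under a
# dollar serverless, versus 24 hours of an always-on server.
spike = serverless_cost(100_000, 200)
always_on = server_cost(24)
```

Under these assumed prices the spiky workload costs roughly $0.44 serverless versus $2.40 for the idle-most-of-the-day server, which is the economic argument the article is making.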

This is still early-adopter territory. After all, there are so many companies that are just getting started designing applications for the cloud, let alone embracing something like serverless computing.

You have to be a bit of a self-starter to put serverless computing at the heart of your application strategy. The biggest complaint among early adopters of serverless computing is the lack of proper tools optimized for this style.

"Some of the basics of software development were not there in early days, like being able to properly debug and deploy," said Kroonenburg, noting this has improved a lot in just the last year. Google hopes to address this problem by working with the open-source community to develop serverless tools instead of building its own Google Cloud Functions-oriented tools, said Alan Ho, a product marketing manager for the company.

Serverless can also get very complicated, very fast, if you're using it for Internet of Things applications.

The Next Web recently published an account of how iRobot is using serverless computing to run its robot vacuum cleaners. While there are a lot of benefits, "with any technology, there are places where it works and where it doesn't; you are always trying to find a balance with how pragmatic the solution is to meet your goals," said Ben Kehoe, cloud robotics research scientist, in the profile.

However, this is clearly an exciting cloud technology, one that could really drive the promise of cloud computing to the next level.

"To me, this has strong potential for being the future model of compute," said Michael Behrendt, a Distinguished Engineer at IBM, "because you really don't have to pre-buy or pre-allocate; with serverless that goes away. We're used to that already in other domains, like with APIs: you pay by the API call. Serverless is applying that notion to compute in general."

[Editor's Note: This story has been corrected to properly spell the last name of IBM's Michael Behrendt.]


Canadian Web Hosting Successfully Completes Annual SOC 2 and SOC 3 Audits, Continues Commitment to Security – Yahoo Finance

VANCOUVER, British Columbia, May 1, 2017 /PRNewswire/ -- Canadian Web Hosting, a leading provider of cloud hosting and data center infrastructure in Canada, has announced that it has once again successfully completed its annual independent audit for Service Organization Control (SOC) 2, in accordance with AT 101, making this its seventh consecutive year of completion.

The SOC 2 audit was conducted between February 2016 and January 2017 and examined all of Canadian Web Hosting's services, including dedicated server hosting, cloud hosting, Canadian colocation and web hosting services. The audit process assessed Canadian Web Hosting's compliance with industry best practices, covering controls, processes and procedures. Upon completion, it was determined that its control activities were compliant and that the company operated effectively throughout the reporting period.

In addition to the annual SOC 2 audit, Canadian Web Hosting also completed the SOC 3 audit, which adheres to the Trust Service Principles and focuses on the design of e-commerce systems. The SOC 3 report is available for download, while the SOC 2 report can be obtained by customers, members of the media, or other interested individuals upon request.

One of Canadian Web Hosting's core missions is to help businesses meet their certification requirements in accordance with AT 101 (formerly SAS70 and CSAE 3416 Type II), which meets the new international service organizations standards for Type I and Type II reporting. As a result, its web hosting customers with services including dedicated servers, VPS, cloud servers, cloud computing, cloud storage and/or shared hosting can feel confident that they are in a secure, reliable and effective environment equipped with the proper controls for internet operations and highly available IT services.

"Canadian Web Hosting not only continues to secure a safe and reliable environment for its clients, but also assures its clients that they are receiving the technology, support and verifiable processes that surpass the industry standards for compliance," said Matt McKinney, Chief Strategy Officer at Canadian Web Hosting. "So long as you are with Canadian Web Hosting, you can expect the very best."

Download Canadian Web Hosting's SOC 3 report, or contact us at sales@canadianwebhosting.com for our SOC 2 report.

About Canadian Web Hosting

Since 1998, Canadian Web Hosting has been providing on-demand hosting solutions that include Shared Hosting, Virtual Private Servers (VPS), Cloud Hosting, Dedicated Servers, and Infrastructure as a Service (IaaS) for Canadian companies of all sizes. Canadian Web Hosting is AT 101 SOC 2 and SOC 3 certified, ensuring that their processes and business practices are thoroughly audited against industry standards. Canadian Web Hosting guarantees 100% network uptime, and a total money-back guarantee backs everything they do. Customers can contact Sales & Billing by calling 1-888-821-7888. The 24/7 support line is 1-604-283-2127, or you can email support@canadianwebhosting.com. For more information, visit them at www.canadianwebhosting.com, or get the latest news by following them on Twitter at @cawebhosting or by liking their Facebook page.

Media Contact: Sheila Wong 157195@email4pr.com 1-888-821-7888

To view the original version on PR Newswire, visit: http://www.prnewswire.com/news-releases/canadian-web-hosting-successfully-completes-annual-soc-2-and-soc-3-audits-continues-commitment-to-security-300448427.html


OFFSITE Cloud Computing And Data Center Operator Announces … – PR Newswire (press release)

Anthony Portee, OFFSITE chief technology officer said, "The fully customized private cloud solutions being envisioned and commissioned by our client base demand a highly scalable and efficient NGFW solution which can satisfy a wide range of business and security needs. From application inspection and content enforcement to executive reporting and traffic visibility, the Palo Alto solution satisfies all of the unique and challenging technical requirements our customers look to OFFSITE to resolve. The high performance and tightly integrated ecosystem provided by the Palo Alto product family fulfills a critical role for OFFSITE's customers."

About OFFSITE

OFFSITE redefines the data center experience for mid-tier IT organizations with its high performing environment for managing data operations. Operating since 2001, OFFSITE offers private cloud services, IaaS, colocation services, disaster recovery services, network operations center (NOC) services, and hosted and managed solutions. OFFSITE's spacious facilities and customized services enable mid-market businesses to solve their IT challenges and discover new managed services, hosting and private cloud computing solutions. OFFSITE is a privately held company headquartered at its own 50,000 square foot facilities in Southeastern WI with redundant data center operations in Chicago, Illinois. For additional information about OFFSITE, call 262-564-6400, email info@off-site.com, or visit http://www.off-site.com

To view the original version on PR Newswire, visit: http://www.prnewswire.com/news-releases/offsite-cloud-computing-and-data-center-operator-announces-integration-of-palo-alto-security-platform-to-its-private-cloud-infrastructure-300444593.html

SOURCE OFFSITE, LLC



Cloud computing has another killer quarter – Network World

Fredric Paul is Editor in Chief for New Relic, Inc., and has held senior editorial positions at ReadWrite, InformationWeek, CNET, and PC World. His opinions are his own.


To most people, Jeff Bezos' Amazon is known as the company reshaping the way people buy everything from books to shoes to groceries. But the part of Amazon that is driving Bezos within shouting distance of becoming the world's richest person doesn't really sell anything; it rents computing power in the cloud.

As the New York Times put it on Thursday, "The profit Amazon can make on cloud-computing services is significantly bigger than in its retail sales, and that has helped turn the Seattle company from a consistent money-loser to a respectable moneymaker."

And that, as Bloomberg noted, sparked a jump in Amazon's stock price in after-hours trading that added more than $3 billion to Bezos' nest egg, topping $80 billion for the first time and putting him within $5 billion of becoming the world's richest person.

The first-quarter numbers tell the tale. Amazon Web Services (AWS) booked a whopping $890 million in operating income in the period ending March 31, accounting for most of the company's profits: the company as a whole recorded just $1.01 billion in net income. AWS revenue grew 43 percent, which, amazingly, is not quite as fast as in previous quarters, to hit $3.66 billion. But even with the remarkable growth, competition and price cuts, AWS's net profit margin topped 24 percent, higher than in Q1 2016. Put it all together and AWS delivered almost 90 percent of the company's profits. That's a really big deal, for reasons I'll discuss later in this post.

Microsoft and Google's parent company Alphabet also recorded strong first-quarter cloud numbers, though apples-to-apples comparisons are difficult because the companies don't break out their results in the same ways.

For example, number-two cloud player Microsoft said its Azure cloud hosting business grew 93 percent year over year. That's more than twice as fast as AWS, but it's hard to tell exactly what that means because Microsoft didn't reveal the actual numbers. Similarly, Microsoft's Office 365 productivity software-as-a-service business grew 45 percent, but to what level the company isn't saying. (In contrast to Bezos, Bill Gates' fortune actually slipped, as Microsoft's mixed overall results drove down the company's stock in after-hours trading.)

Alphabet, unfortunately, also does not break out its cloud revenues, lumping them into a category called "Google other revenues," which was up 49 percent over the same period last year. In the company's earnings call with analysts, Ruth Porat, Alphabet's chief financial officer, reportedly described the Google cloud platform as one of the company's fastest-growing businesses, though most observers peg it as a distant third in the cloud hierarchy. (Google co-founders Sergey Brin and Larry Page did get richer, though, as the company's stock was buoyed by strong mobile ad sales.)

It's frustrating for cloud watchers that the number-two and number-three players in this all-important industry don't share more financial information, but no amount of financial fog can obscure the continued and phenomenal rise of cloud computing.

Still, unlike many other hot internet sectors (think ride sharing, for example), the cloud's growth isn't being fueled by speculative venture capital investment. The cloud is actually earning big bucks even as it experiences hypergrowth. Now that's a real unicorn.




Misconceptions about applying ALM to cloud app development processes – TechTarget

Most application architects and development teams turn to their application lifecycle management (ALM) tools and processes to deal with the challenges of cloud app development, but they often fall prey to common misconceptions about cloud ALM. In this tip, we'll examine the commonly held beliefs about cloud ALM that can hamper app development efforts.

There are three common myths about cloud ALM:

ALM works to stabilize IT by creating a framework in which applications can be added and changed without risking security, compliance and even functional problems. The fundamental principle of ALM is to enforce that framework through the entire cloud app development and deployment process. The cloud -- and cloud ALM -- complicates this in two specific ways: it abstracts resources and it leads inevitably to continuous delivery pressures.

ALM is meaningless if the hosting of applications and components, and their integration, is based on different resources and connections than will be used in production. That's a basic truth of ALM; yet, in the cloud, the location and connectivity of applications and components is never certain.

Most users will try to address this uncertainty by refining their cloud ALM processes, but that increases ALM complexity, reduces its responsiveness to business problems and eventually creates a lifecycle process so difficult and expensive that its failure is a given.

Continuous delivery is aimed at improving application responsiveness to business needs, and the cloud encourages that by reducing or eliminating the normal capital equipment inertia associated with deploying IT resources. If I can spin up a server in minutes instead of taking a month's worth of procurement and installation time, why wouldn't that make IT more responsive? The only way to have that happen is to accelerate the application lifecycle, which argues against adding extensive accommodation to cloud hosting.

ALM alone can't solve the cloud problems. You need to have two new dimensions of management: virtual resource management, and service and microservice management. A cloud-ALM approach absolutely must deal with the cloud by establishing a consistent way of viewing virtual resources. If all hosting, cloud or data center, is considered virtual, and if management processes are suitable for any hosting option that's adopted, then ALM can be made independent of hosting. Similarly, if the services and microservices used for multiple applications are lifecycle-managed as a group, then applications that use them aren't impacted nearly as much by the cloud-driven trend toward shared services.

Another destructive misconception about cloud ALM is that adopting DevOps will fix everything. DevOps tools are wonderful ways of enforcing structure and consistency on ALM practices, but they can't frame those practices.

Even virtual resource and service/microservice management work best when there is a specific IT model you are managing toward. That model can come only from tighter integration between enterprise architecture (EA) and ALM. EA practices must feed ALM with the business goals and the operational framework in which application lifecycles are managed. That is true not only for traditional ALM, but also for virtual resource and service/microservice lifecycle management.

EA integration is the starting point for effective definition of service/microservice sharing policies, because common business practices that are evolving in harmony are the very place where service/microservice componentization will benefit most. EA business requirements also set the resource availability constraints that both cloud and data center must meet. Getting ALM extended across the EA boundary to capture this information is critical, and will require some work on both sides to accomplish.

DevOps could change, based on this EA integration, into something more broadly useful for cloud ALM. If DevOps tools are used to structure resource management and service/microservice lifecycle management, along with ALM, the result can be an automated and responsive process for continuous delivery that works at cloud speed. Making this happen demands a revision in some DevOps methods; it's important to use modular features and event-driven behavior more, no matter what specific DevOps product you adopt.
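The "modular features and event-driven behavior" idea can be sketched as a small event bus in which independent pipeline steps subscribe to lifecycle events rather than being hard-wired into one monolithic script. The event names and handlers below are hypothetical, not taken from any specific DevOps product:

```python
from collections import defaultdict
from typing import Callable

# Lifecycle events (names are illustrative) trigger whichever
# modular pipeline steps have subscribed to them.
_handlers: dict[str, list[Callable[[dict], None]]] = defaultdict(list)

def on(event: str):
    """Decorator: register a pipeline step for a lifecycle event."""
    def register(fn):
        _handlers[event].append(fn)
        return fn
    return register

def emit(event: str, payload: dict) -> list[str]:
    # Run every step subscribed to this event, in registration
    # order; return their names so the run can be audited.
    ran = []
    for fn in _handlers[event]:
        fn(payload)
        ran.append(fn.__name__)
    return ran

@on("service.updated")
def redeploy_dependents(payload):
    print(f"redeploying apps that use {payload['service']}")

@on("service.updated")
def rerun_integration_tests(payload):
    print(f"re-testing against {payload['service']}")

emit("service.updated", {"service": "auth-v2"})
```

Because steps are coupled only to events, a shared-service update can drive resource management, service lifecycle management and application ALM from the same trigger, without any of those steps knowing about the others.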

Perhaps the most critical of all cloud ALM misconceptions is that "there will always be ALM." The truth is that every trend in cloud app development is taking us to a place where there is no such thing as an application in a monolithic sense.

Applications are evolving into sets of event-driven microservices connected through a loose specification of business requirements -- almost a cloud-centric form of an enterprise service bus. In this model, the number of applications and the variations on each are enormous, and no testing of all the possible combinations could ever hope to succeed.
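A quick back-of-the-envelope calculation shows why exhaustive combination testing is hopeless here. The counts below are illustrative assumptions, not figures from the article:

```python
# With independently versioned microservices, the number of
# distinct deployable combinations grows exponentially.
services = 20   # microservices in a composed "application" (assumed)
versions = 3    # concurrently live versions of each (assumed)

combinations = versions ** services
print(f"{combinations:,}")  # 3,486,784,401 -- about 3.5 billion
```

Even at one automated test run per second, covering every combination once would take over a century, which is why testing must shift to per-service contracts rather than whole-application permutations.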

ALM today has to work against the tendency, common in all technology concepts, to enshrine itself in practices and become a goal rather than a means to a goal. A model of the future where EA defines the component relationships based on functional needs of business activities, and where resources are managed against an experience-driven vision of responsiveness and availability, fits the trends we can already see. That model would have to recognize the decreasing value of an application-centric view of IT.

The problem is that while it's technically reasonable to view the future as a set of event-driven, business-composed components using abstract resources, getting there is a lot harder than describing it. Neither current tools and practices nor the experience of DevOps personnel prepare a business for this cloud app development evolution. They never will, though, if we continue to see that future through monolithic-colored glasses. ALM has to break out and redefine itself in terms of how it integrates certified classes of resources and tools -- not how static tools support dynamic business processes.


Source: Misconceptions about applying ALM to cloud app development processes - TechTarget