Category Archives: Cloud Hosting

Choosing an LMS: Cloud or Hosted? – HR Technologist

Installing a learning management system (LMS) is not a new thing; many organizations have already done it and embraced e-learning as a culture. The change lies in the way the learning management system is hosted. The popularity of the cloud has touched the space of learning and development too, and the cloud-based LMS is the latest item on company wish lists. But before blindly forgoing the current LMS and rushing to adopt the next best cloud-based LMS, it is important to know what each entails.

Learning management systems are basically of two types: cloud-based and hosted (installed on the organization's own servers).

On the face of it there may not seem to be much difference between the two, but each has a unique impact on hosting capacity, architecture and infrastructure, resources, budgets and more. The user experience is also entirely different in each. Here is a closer look at the cloud-based LMS:

A cloud LMS presents many advantages, especially from the point of view of becoming future-ready. Most importantly, cloud-based learning can be a booster for your learning and development initiatives by creating an engaging learning experience. It will not only help you achieve your training objectives, but also act as a driver of employee engagement and retention.

See the original post here:
Choosing an LMS: Cloud or Hosted? - HR Technologist

Rackspace Hosting Inc. experienced temporary global outage after … – San Antonio Business Journal


Rackspace engineers were able to fix the issue that affected about 4 percent of the local company's customer base.


See more here:
Rackspace Hosting Inc. experienced temporary global outage after ... - San Antonio Business Journal

ZNetLive plans to expand product portfolio with hybrid cloud backed by Microsoft Azure – ETCIO.com

Bangalore: ZNetLive, a web hosting and cloud services provider in India, said it plans to expand its product portfolio to include a Microsoft Azure Stack-based hybrid cloud solution to help users get proficient computing capabilities with increased productivity at reduced operational costs.

The Azure Stack hybrid cloud solution, expected to be made commercially available by early Q3 this year, will provide users with single-vendor support for Azure Stack and the Azure public cloud. The solution encompasses Azure's storage, geo-presence and flexible capacity to enable an organization's expansion on a global level.

The pay-per-usage solution will be available with one contract and monthly invoicing, and users will get their own client portal with which they can seamlessly create, deploy and manage their workloads in the cloud. The IT automation the solution provides will enable organizations to maintain crucial business data on premises while getting economies of scale through faster deployment of new services and applications on the public cloud.

"This is one more step by ZNetLive in empowering digital transformation. The Azure Stack hybrid cloud solution combines ZNetLive's path-breaking cloud management services with modern, scalable and flexible Microsoft Azure Stack capabilities to provide a comprehensive hybrid cloud solution to enterprises seeking digital transformation for modernization, high performance and scalability at low costs," said Munesh Jadoun, Founder & CEO, ZNetLive.

For those onboarding the solution, the management and support provided by ZNetLive will include cloud strategy formulation, building the Azure hybrid cloud following detailed analysis of crucial and routine workloads, and customers' workload migration. It will also include security, backup and disaster recovery services.

The company has been implementing integrated Azure Stack and Azure Pack cloud solutions for some time now.

The solution also includes Azure ExpressRoute services to provide users with a faster private Azure connection from their production environment for regular data migration and data transfer, thus helping reduce expenditure. With it, users can add storage and compute capacity to their existing datacenter for high throughput and reduced latencies, and build applications spanning on-premises infrastructure and Azure without compromising on privacy or performance.

Excerpt from:
ZNetLive plans to expand product portfolio with hybrid cloud backed by Microsoft Azure - ETCIO.com

Microsoft to Announce Major Restructuring July 5 – Investopedia

Microsoft Corp. (MSFT) is reportedly gearing up to announce a major overhaul of its business on July 5 with the focus shifting more toward cloud computing, which has been booming for the software giant.

Unnamed sources told the Puget Sound Business Journal the changes are aimed at better aligning the company with its so-called cloud-first strategy that is being led by Azure, its cloud hosting business. It is not clear whether that will result in layoffs or any restructuring charges. (See also: Microsoft's Azure Cloud Revenue Estimated at $3B.)

While it has long played second fiddle to Amazon.com Inc. (AMZN) and its Amazon Web Services (AWS) unit, Microsoft has been chipping away at its dominance. Earlier this month, Pacific Crest Securities analyst Brent Bracelin said in a research report that Azure could have more revenue than AWS for the first time in 2017. The analyst said Microsoft is becoming the biggest cloud provider for the first time in 10 years, which would transition it from a cloud laggard to a cloud leader.

Bracelin said he came to this conclusion after conducting an analysis of the 60 biggest cloud computing companies. The analyst is predicting spending on cloud initiatives could explode to $239 billion in the span of five years, with the Redmond, Wash., software giant benefiting the most from the growth. Bracelin pointed to what he called "unmatched product depth and breadth" in software as a service, platform as a service and infrastructure as a service as the main reasons. (See also: Amazon, Microsoft Still Rule Cloud; Oracle, Alibaba May Catch Up.)

When Microsoft reported third-quarter earnings results in late April, it said revenue in its Intelligent Cloud business came in at $6.8 billion, up 11% compared to the year-ago quarter, and up 12% on a constant-currency basis. During the quarter, Microsoft said server products and cloud services revenue increased 15%, driven by Azure cloud revenue growth of 93%. Enterprise services revenue declined 1%, with a decrease in customer support agreements offsetting growth in Premier Support Services and consulting.

For the quarter, Microsoft posted revenue of $23.6 billion on a non-GAAP basis and diluted earnings per share of $0.73, also on a non-GAAP basis. "Our results this quarter reflect the trust customers are placing in the Microsoft Cloud," said Satya Nadella, chief executive officer at Microsoft. "From large multi-nationals to small and medium businesses to non-profits all over the world, organizations are using Microsoft's cloud platforms to power their digital transformation." This isn't the first time the company has overhauled its business strategy in recent years. It has cut more than 4% of its employee headcount amid a restructuring being led by Nadella.

Go here to read the rest:
Microsoft to Announce Major Restructuring July 5 - Investopedia

Nutanix Unveils Hybrid Cloud Computing Platform – IT Business Edge – IT Business Edge (blog)

At its .NEXT 2017 conference today, Nutanix unveiled a cloud operating system based on its hyperconverged infrastructure (HCI) software, capable of unifying public and private clouds under a common hybrid cloud computing architecture.

Sunil Potti, vice president of engineering for Nutanix, says Nutanix Calm, due out by the end of the year, extends the Nutanix Enterprise Cloud OS multi-cloud strategy by making it possible to deploy applications at a higher level of abstraction employing a common stack of software that can be deployed at the edge of the network, in a local data center, in a hosting service, or on a public cloud. To underscore that latter point, Nutanix today also announced a partnership with Google to make Nutanix Calm available on the Google Cloud Platform.

That capability is being complemented by Nutanix Xi Cloud Services, a turnkey cloud service due to be available under an early access program in the first quarter of 2018. It can be employed both to provision Nutanix infrastructure and to provide additional capabilities such as disaster recovery services.

Based on the stack of software that Nutanix developed as an alternative to the implementation of VMware that Nutanix also supports, Nutanix Calm is an ambitious effort to make hybrid cloud computing an everyday enterprise norm.

Potti says that, ultimately, Nutanix expects to automate almost every aspect of hybrid cloud computing.

"If we can't do that, it will be a missed opportunity," says Potti.

In the meantime, Potti says, Nutanix is committed to making the process of lifting and shifting workloads between clouds invisible. Of course, Nutanix isn't the only IT vendor with similar ambitions. But it's arguably the only one with an existing footprint both in the public cloud and on its own and other third-party platforms from Dell EMC, Lenovo and others. The challenge and opportunity now is to turn that reach into a federated environment that effectively erases the lines between one cloud platform and another.

Read the original:
Nutanix Unveils Hybrid Cloud Computing Platform - IT Business Edge - IT Business Edge (blog)

Harnessing the Cloud for CAD: The Case for Virtual Workstations – Cadalyst Magazine

28 Jun, 2017 | By: Alex Herrera | Herrera on Hardware: The demands of CAD-heavy workflows in manufacturing, design, architecture, and construction are growing. Some companies are looking beyond their local machines and implementing virtual computing options to augment or replace traditional deskside and laptop workstations.

Workstations, virtualization, and the cloud: this trio of technology tools is joining forces, ready to transform the way design teams deploy and use workstation-caliber systems to tackle the increasingly challenging issues facing cutting-edge CAD workflows.

The first component of that trio is the tried-and-true foundation that CAD users and IT administrators have long relied on to power visually intensive workflows quickly and reliably. The second is a more recent computing tool that enables users to run their familiar client desktops on shared datacenter resources. And the third represents today's hottest markets and technologies, on which IT vendors and users alike are pinning their hopes of resolving the future's thorniest computing problems. Today, the confluence of the three is creating a valuable new weapon for the CAD IT arsenal: cloud-hosted virtual workstations are here.

We've seen the potential and evolution of cloud-hosted virtual workstations coming for a while; I discussed some of the evolving supporting products and technologies over the past couple of years in "New Computing Solutions for CAD Take Fuller Advantage of the Cloud" and "Is Cloud-Based CAD Ready for Prime Time?" This month, I kick off a series on what this cloud-based technology is all about: why it's appealing, whether you should consider adopting it, and key considerations for deployment. This first installment explains what virtual workstations are and how they work, and also explores whether your business and workflow might benefit from adopting them in place of traditional, physical workstations.

Why a Virtual Workstation?

Traditional deskside and laptop workstations power the vast majority of CAD environments today. They have done so for years, reliably and effectively. But some businesses, particularly those running CAD-heavy workflows in manufacturing, design, architecture, and construction, are finding it increasingly difficult to satisfy the demands imposed by a host of growing challenges. Skyrocketing dataset sizes, dynamic workflows, a globally distributed workforce that needs immediate access to complex visual data, heightened concerns of security, and the constant incursion of personal digital devices into the workplace: all are conspiring to push traditional, distributed client environments to the brink.

Huge files no longer take seconds to transfer from client to client, or site to site; instead it might be minutes or even hours. Security risks spread, while the burden of protecting priceless IP has never been heavier. And complex projects are more often requiring teams assembled not just from employees, but also contractors and consultants who might be in the field or in an office halfway around the world. Yet all need access to the same datasets, on demand, from wherever they are at the moment, and that data must be up to date.

In urgent need of solutions to these growing problems, businesses that rely on high-performance visual computing for CAD are beginning to look elsewhere, and one solution shining particularly brightly is the virtual workstation.

What Is a Virtual Workstation?

With the traditional, distributed client-side model that now dominates professional computing, all user processing and rendering is performed locally by the client computer. But with a virtual workstation approach, a remote server hosts a virtual representation of that machine, somewhere in the cloud or possibly in a corporate-owned datacenter. That virtual workstation performs everything that the physical machine at the desk would: running the operating system (OS) and applications, and processing graphics. Only the final displayed image, the pixel stream, traverses the network to a simple client that need only display those pixels and handle any user input (e.g., commands from the keyboard and mouse).

In the traditional, tried-and-true environment of distributed workstation clients, the client handles at least part of the computing.

In the server-centric, cloud-capable virtual workstation environment, the entire compute burden is lifted off clients.
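
The split is easy to picture in code. The toy sketch below (plain Python sockets, not any vendor's actual remoting protocol) shows the shape of the exchange: a "server" pretends to render frames and streams only pixels, while a thin "client" displays them and sends input events back. The port, frame size, and fake input event are all placeholders for illustration.

# Toy illustration of the virtual-workstation split described above: the
# "server" renders frames and streams only pixels; the thin "client" displays
# them and returns input events. Not a real remoting protocol (no codec, GPU,
# or encryption), just the shape of the exchange.
import socket
import struct
import threading
import time

HOST, PORT, FRAMES = "127.0.0.1", 9009, 5

def recv_exact(sock, n):
    """Read exactly n bytes from the socket."""
    buf = b""
    while len(buf) < n:
        chunk = sock.recv(n - len(buf))
        if not chunk:
            raise ConnectionError("stream closed early")
        buf += chunk
    return buf

def server():
    """Pretend to render frames and push them to the connected client."""
    with socket.create_server((HOST, PORT)) as srv:
        conn, _ = srv.accept()
        with conn:
            for i in range(FRAMES):
                frame = bytes([i % 256]) * (640 * 480 * 3)        # fake 640x480 RGB frame
                conn.sendall(struct.pack("!I", len(frame)) + frame)
                event = recv_exact(conn, 11)                      # wait for an input event
                print(f"server: sent frame {i}, received input {event!r}")
            conn.sendall(struct.pack("!I", 0))                    # zero length = end of stream

def client():
    """Receive pixels, 'display' them, and send user input back."""
    with socket.create_connection((HOST, PORT)) as sock:
        while True:
            size = struct.unpack("!I", recv_exact(sock, 4))[0]
            if size == 0:
                break
            pixels = recv_exact(sock, size)
            print(f"client: displayed {len(pixels)} bytes of pixels")
            sock.sendall(b"mouse:click")                          # pretend user input (11 bytes)

threading.Thread(target=server, daemon=True).start()
time.sleep(0.3)   # give the server a moment to start listening
client()

In a real product the frames would be compressed by a hardware encoder and carried over a tuned remoting protocol, but the division of labor is exactly what the toy shows: pixels out, input events back.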

How Virtual Workstations Can Address CAD Needs

The ability of a virtualized workstation environment to store one golden set of data safely in one place looks particularly attractive when considering the explosion in the size of today's ambitious and complex project datasets. With a virtualized, centralized IT environment, it makes no difference if staff is located all in the same building or scattered across the globe. With potentially massive, global teams comprising employees, contractors, and partners, success hinges on IT's ability to efficiently connect people to the data, without costly, time-consuming copies and downloads.

When machines are no longer physically moved around, but instead virtually and dynamically allocated, IT administration becomes faster, simpler, and less error-prone. De-provisioning one user while provisioning another is fast, making rapid expansion and contraction over a project's life far less problematic. And centralized control and management consoles can simplify and streamline administration overhead, particularly for geographically dispersed enterprises.

By design, the use of virtual workstations hardens corporate security. Critical IP never strays beyond company grounds on laptops and flash drives. Only pixels cross corporate firewalls, and those pixel streams can be (and typically are) encrypted. Better still, virtually any device can suffice, regardless of OS or underlying hardware, making personal smartphones, tablets, or Macs capable and safe for CAD-related work.

The advantages of such a virtual workstation environment appeal as much in CAD as in any other application, and arguably more. In the CAD world, huge, visually complex datasets abound, numerous scattered staff and third parties must contribute and collaborate, and security is paramount. So it's no surprise that some of the earliest adopters are coming from the automotive, aerospace, and architecture spaces. Consider Honda Automotive, which, having completed several successful proof-of-concept trials, is green-lighting the deployment of around 10,000 virtual workstations to replace physical deskside machines. Or CannonDesign, a Top 50 architecture firm that is moving to a virtual workstation environment to ease the growing problem of managing huge Revit designs spread across as many as 16 corporate sites.

Virtual or Physical: Which Is Right for Your Business?

The benefits are compelling, and everyone who relies heavily on CAD running on conventional deskside workstations should explore this new potential of virtual workstations. But it's important to know that there is neither a mandate nor a one-size-fits-all solution when it comes to deploying them. Virtual workstations might represent a replacement for deskside machines, an add-on to a traditional client-side environment, or neither. Which situations call for virtual solutions, and which are probably best left (at least for now) to physical ones? Ultimately, answers to these questions depend largely on who you are, what you do, and how you work.

For example, the locations where your staff works and lives and the ways in which they need to collaborate matter. How unwieldy your datasets are today, and how you see them growing down the road, will have a big impact on the choice.

Security is important to all, but a virtual approach will appeal especially to more vulnerable companies, or those for whom a breach would be catastrophic. Similarly, while no business wants outages and long recovery times due to natural disaster, for some the probability might be substantially higher and the penalties far more severe.

A move to a virtual workstation environment comes with mandatory infrastructure requirements, including access to a capable network with high, reliable bandwidth and consistently low round-trip latencies. Businesses that have that access, or have the means to acquire it, particularly for the wide-area network (WAN), can consider going virtual. (More to come on this topic in a future installment.)
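
As a rough first check before any pilot, you can measure round-trip latency from a branch office to a candidate hosting site. The sketch below uses TCP connect time as a crude proxy; the gateway hostname, port, and the 30 ms budget are assumptions to be replaced with your own targets.

# Rough WAN latency sanity check before a virtual-workstation pilot: time TCP
# connects to a candidate hosting gateway and compare the median against a
# budget. Hostname, port, and the 30 ms budget are placeholders.
import socket
import statistics
import time

def connect_latency_ms(host, port, samples=10):
    """Measure TCP connect time (a crude round-trip proxy) in milliseconds."""
    results = []
    for _ in range(samples):
        start = time.perf_counter()
        with socket.create_connection((host, port), timeout=2):
            pass
        results.append((time.perf_counter() - start) * 1000.0)
        time.sleep(0.1)
    return results

if __name__ == "__main__":
    rtts = connect_latency_ms("workstation-gw.example.com", 443)   # hypothetical gateway
    print(f"median ~{statistics.median(rtts):.1f} ms, worst {max(rtts):.1f} ms")
    if statistics.median(rtts) > 30.0:   # interactive 3D work generally wants low tens of ms
        print("Round-trip latency is probably too high for a smooth experience.")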

Since most virtual workstations run the identical client operating system users run on traditional workstations, such as Windows, they are inherently compatible with applications that run on those operating systems. However, you will want to make sure any virtual workstation solution you're considering is certified for use with your mission-critical application, just as you would with a traditional deskside workstation.

Virtual platforms do not typically support every possible peripheral a user may demand, and high-demand input and output (I/O) can impact visual performance. Tasks and workflows with more pedestrian peripheral requirements (e.g., mouse and keyboard) are a better fit for virtual workstations, though specific I/O support will vary by solution.

Some key criteria to consider in the decision to go virtual or not.

Consider how your specific business and workflow measure up on these key criteria. Don't find many that are calling out for a virtual approach? Then you may be better off sticking with traditional physical workstations, at least for now. But if you find you're checking off most of these items, and more than a few represent hot-button issues for your business, then it's probably time to consider taking the plunge into the world of virtual workstations.

For more on hosting virtual workstations in the cloud, keep an eye out for more Herrera on Hardware columns focusing on these topics, coming soon. Over the next few months, I'll explore these considerations in more detail.


The rest is here:
Harnessing the Cloud for CAD: The Case for Virtual Workstations - Cadalyst Magazine

From Word Processors to Cloud Computing: 40 Years of Tech at Department of the Navy – Nextgov

In the late 1970s, a young graduate from The George Washington University armed with an electrical engineering degree and a passion for computer science accepted a job at a Department of the Navy organization called the Naval Facilities Engineering Command. Now, over 40 years later, that graduate serves as one of the organization's deputy chief information officers and credits his longevity in the NAVFAC organization to simply keeping busy.

"I kept finding challenging things to do," Tony Joyce says.

His day-to-day responsibilities include briefings and updates, conversations and staff meetings, and other supervisory chores. Joyce's team manages or supports portals and records management data sets, portfolios of software and business systems, software licenses, distribution, testing and certifications. The team also maintains oversight of a data center and a mainframe financial application, approves procurement requests, and coordinates rigorous investment decision management certifications.


"On a given day, I may be working on preparations for IT system financial audits, data center consolidation, cloud hosting and migration, budgets and spending plans, or business system acquisition documentation," Joyce said.

Nextgov caught up with Joyce to chat about how technology has evolved over the years and where it's been most transformative in government.

Nextgov: Looking back at the decades, what have been the most transformational technologies in government?

Tony Joyce: Computers. Thinking about what was available when I started, 300 baud modems and original IBM PCs, things like that, and comparing that to what we have now, it's just an incredible improvement, and the sophistication we have now isn't anything I think anyone knew was possible; it was short of science fiction.

Nextgov: What were some challenges you encountered when implementing a new technology back then?

Joyce: One of my early efforts was word processing, so I bought one of the first dedicated word processors and expanded that to 15 or so centrally, one in each of our regional offices, to do specifications. Before that, we were using 3270 emulators, straight-up text editing of specifications that were 50-page documents. Trying to do that over rather flaky networks in those days was really difficult. But it was better than doing it on a typewriter.

Deciding to get it, train people, work out how to use it, and try to work out some consistency in that, was probably the biggest challenge. Some people were afraid of the technology, but most of all, I think, is that people were afraid of change. That still is the case. Doing something substantially different is difficult for most people.

Nextgov: What technology came after word processing that you had to address?

Joyce: The next one I was involved with after that was computer graphics: CAD, CAM and mapping, which were very new at the time. The difficulty there was buying it: It took years to get the specifications. The acquisition process remains one of the most difficult parts of the business.

Nextgov: You've been an early adopter of technology. So, of which technology did you think, "This is going to change the world?"

Joyce: I'm not sure there was one; I keep looking for something exciting, new ways to do things better.

Nextgov: So, what's the most exciting tech you work with today?

Joyce: I think cloud computing is probably the most exciting one. If it's done right, it has enormous potential. We're just standing up our first Amazon-hosted environment. If we get it right, the promise of being able to easily modify and scale is immense.

Even with cloud, I expect there to be certain limitations. But if we can capture most of that in how we do things, and then emulate how Amazon and Google are providing robust software services at incredible scale... I'll use our analytics as an example. With our current on-premises stack, many of the queries against our analytical database take 20 or 40 minutes, an hour or two, to run. Being able to scale that query and run 1,000 processes could do that job in a minute as opposed to an hour.

Nextgov: What do you see on the horizon as the most important technologies to the Department of the Navy?

Joyce: I think data analytics in a variety of forms. Knowing what your cash position is in your budget at all times is a major hurdle, and we haven't gotten there yet. The other technology that will probably change things radically is intelligent, well-structured text recognition [to] being able to effectively search through the half million documents that I have in our portals.

Right now, the best technology we have is some sort of keyword search, which means you've got to apply metadata, and building that metadata is a huge burden on people, which means it doesn't get done. So text mining in the government will have great promise, because the government, even more so than private-sector organizations, operates off paper.

More here:
From Word Processors to Cloud Computing: 40 Years of Tech at Department of the Navy - Nextgov

Why Managed Cloud Hosting Is Best For Business? – HostReview.com (press release) (blog)


The future holds great promise when it comes to the cloud. It's no secret cloud hosting in India has taken off in recent years with new innovations and business applications. Businesses of all sizes, industries, and geographies are turning to cloud services. Industry experts say that cloud hosting services will grow tremendously, from $35 billion today to around $150 billion by 2020, because by then they will be key to most big companies' IT infrastructures.

Obviously, the assets of the cloud can't be ignored: it is cost-efficient, provides almost unlimited storage, makes backup and recovery much easier, gives easy access to information from anywhere, and much more.

This surely looks valuable for the client, but it also holds something superb for the internal IT team.

Providing a suitable choice:

As demand increases, the number of choices in cloud solutions will soon be overwhelming; even for a well-versed IT decision maker, it will be difficult to choose the right one. This is where your relationship with your managed cloud hosting provider comes into the picture. That relationship helps assure you make the right decisions about your IT investment, decisions that drive return on investment (ROI) while eliminating missteps.

So, before taking your first step into the cloud, it is important to ensure that your managed hosting partner can address your every cloud need.

Transferring workloads means transferring responsibilities:

As more and more important workloads are migrated to the cloud, responsibility shifts from the internal IT team to the managed cloud provider. By shifting to the cloud, a company still owns everything that has been migrated, but it is now maintained and upgraded by the managed service provider.

As a result, the internal IT team gets more time to tackle other pressing technological initiatives, such as experimenting with new software or investigating a new IT strategy. The possibilities are endless.

The power of the cloud without the complexity:

Whether your business is small or big, and whatever category your organisation falls into, the cloud is always a powerful asset. This investment becomes even more effective when coupled with a managed service provider.

It is always an advantage to draw on the expertise of a managed service provider. They assure that the right technology is in the right place and that this environment is skillfully maintained, now and for the future. So you don't have to get into the complexity of it, yet you stay up to date.

So, internal IT people, isn't managed cloud hosting a relief?

About Web Werks India Pvt. Ltd.:

Web Werks is an India-based CMMI Level 3 Web Hosting company with 5 carrier neutral data centers in India and USA. Started in 1996, Web Werks has served several Fortune 500 companies with successful projects in the areas of Web Hosting, Data Center Services, Dedicated Servers, Colocation servers, Disaster Recovery Services, VPS Hosting Services, and Cloud Hosting.

For further information contact:

Web Werks India Pvt. Ltd.

http://www.webwerks.in

See more here:
Why Managed Cloud Hosting Is Best For Business? - HostReview.com (press release) (blog)

PitchBook moves to a microservices infrastructure scaling the business through scalable tech – Network World

PitchBook is a data company. Its reason for being is to provide a platform that tracks a plethora of different aspects of both private and public markets. Want to know about what's happening in venture capital, private equity or M&A? Chances are PitchBook can give you the answer. The company is a subsidiary of Morningstar and has offices in Seattle, New York, and London.

But here's the thing: PitchBook was founded in 2007, when cloud computing was pretty much just beginning and there was no real awareness of what it meant. In those days, enterprise IT agility meant leveraging virtualization to gain efficiencies. Now don't get me wrong, moving from a paradigm of racking and stacking physical servers to being able to spin up virtual servers at will is a big deal; it's just that since 2007, there has been massive further innovation in the infrastructure space.

So if you're PitchBook, built in the early days of the cloud in a monolithic way, and you want to scale to your stated business ambition of hosting data on 10 million companies, what do you do? Well, one thing you can do is to rethink your entire infrastructure footprint to take advantage of modern approaches. And this is what PitchBook has done, moving from a monolithic infrastructure to microservices, which should enable PitchBook developers to easily scale the platform.

"Breaking from a monolithic environment will allow us to easily make changes under the hood of different modules without affecting any of the other services tied to it. This ultimately is pushing the PitchBook Platform into a new era, defined by greater scale and usability," said Alex Legault, lead product manager at PitchBook. "With an aggressive product roadmap that involves loading massive datasets, leveraging modern cloud techniques and enabling more machine learning, a microservices infrastructure will provide the right framework to execute on our plans, quickly and efficiently."

The PitchBook journey piqued my interest, and so I sat down (in the modern sense of the phrase, where "sit down" means get email answers to questions) with Legault to learn more about this journey. Without further ado, here's the PitchBook story.

What tech are you using? K8S? Docker? Mesos? Serverless?

We made a lot of moves to new tech with our front end in this release: React and ES2016 (ECMAScript 2016, a version of JavaScript). Spring too. We're currently evaluating Docker and K8S.

Why did you make the decision to migrate to microservices?

Our clients need to move fast and require timely access to data and new datasets. To meet these needs, we require an architecture that will allow our product team to run fast and scale. Microservices provides this. At PitchBook, we're at a critical inflection point where we're growing at a rapid pace, and the platform needs to keep up, both from a data perspective as well as from a feature set and scalability standpoint. While a monolithic infrastructure could have met our needs, as our platform gets bigger and more complex, it would get increasingly challenging to make changes or updates. With microservices, each service becomes its own module allowing our developers to easily make changes without impacting other services.
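
PitchBook hasn't published its service code, but the modularity Legault describes is easy to illustrate generically: a service that owns its own endpoint and data store can change its internals without touching its neighbors. The standard-library Python sketch below is purely illustrative; the endpoint, port, and data are made up.

# Generic illustration (not PitchBook's actual code) of a small service that
# owns its own endpoint and data: neighbors talk to it only over HTTP, so its
# internals can change without touching any other service.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

COMPANIES = {"1": {"name": "ExampleCo", "sector": "fintech"}}   # stand-in data store

class CompanyHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Expected path: /companies/<id>
        parts = self.path.strip("/").split("/")
        record = COMPANIES.get(parts[1]) if len(parts) == 2 and parts[0] == "companies" else None
        body = json.dumps(record or {"error": "not found"}).encode()
        self.send_response(200 if record else 404)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8080), CompanyHandler).serve_forever()   # one small service, one job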


In some instances, microservices can lend itself to an explosion of modules/services that need to be managed within an enterprise. Did you think about that going into this migration and what sorts of management technology have you implemented to avoid the chaos that some companies are facing?

Moving to microservices naturally creates the problem of module explosion. There are a few recipes to avoid or minimize this:

1) Mini-services versus nano-services approach. We tried not to be too idealistic and not design microservices as nano-services. Getting too small and too specific with the services can quickly introduce a headache. For us, it made sense to start with bigger modules, which we call mini-services first, then adapt and split down further when necessary. Each team can control this process and split things only when it serves a real purpose or advantage to do so.

2) Unify the service interface and infrastructure, use containerization and orchestration. Our ideal end state is a fully programmable and automated infrastructure (IAC), which requires a formalized DevOps function. Can't state enough how important having good DevOps folks is in making this transition successful.

What will this switch allow you to do? What's next in the road map where microservices will play a huge role?

There are several benefits microservices provides us, including:

It will allow us to speed up delivery of new features, innovations and data sets. Our goal is to eventually host 10 million private and public companies within the platform and microservices will help us get there faster and with scale.

We can also more easily adopt different technology where needed and aren't bound to the same databases or languages in any part of the application.

Redeployment will become easier. While the system is more fragmented, it's less fragile, so when individual services are down, they don't bring down the entire system.

It allows us to scale individual services that are the bottleneck; it's not just one big instance anymore. This helps us with scaling as our datasets grow.

On the horizon, we have several initiatives related to high-speed data visualization and analysis. We have such great datasets, so the question is how we can generate and surface more insights to customers. Microservices will play a huge role in enabling this.

How will your customers benefit from the switch?

We're all about serving our customers, which is why we made this move. Institutional investors are under more pressure than ever before to make intelligent investment decisions and generate higher returns, making access to quality data absolutely essential. New technology that can help us recommend, analyze and surface personalized insights to customers is hitting the jackpot; we're confident microservices can unshackle us so we can go after these initiatives. Customers can expect to start seeing more releases, more innovation and a platform that can handle much larger scale while staying fast.

Technology is a progression: mainframes to physical x86 to virtualization. Microservices is but the latest move in this process, and we can already see things on the horizon (event-driven infrastructure, for example) that will take organizations like PitchBook to the next level. It is interesting to have a glimpse inside and explore the thinking that goes into a significant platform shift.

Continue reading here:
PitchBook moves to a microservices infrastructure scaling the business through scalable tech - Network World

Avoid steep network integration costs in multicloud – TechTarget

One of the most important -- and most complex -- concepts in multicloud is network integration between public cloud providers. This model facilitates cross-cloud load balancing and failover but, without careful planning, can also lead to hefty network integration costs.

Nearly all enterprises have a virtual private network (VPN) that connects their sites, users, applications and data center resources. When they adopt cloud computing, they often expect to use that VPN to connect their public cloud resources as well. Many cloud providers have features to facilitate this, and even when they don't, it's usually possible to build VPN support into application images hosted in the cloud.

When enterprises put their public cloud applications on their VPN, it's easier to integrate applications and workflows because the application IP addresses are all internal and under their control. From a networking perspective, cloud bursting and failover look similar to scaling or replacing resources in your own data center. This kind of integration is even more valuable in multicloud because it makes all cloud providers look almost like extensions of your own data center or like a single, elastic pool of resources.

Problems arise, however, when multicloud users add more providers -- especially around traffic charges. Seemingly insignificant changes to applications and workflows can increase overall network integration costs.

Most public cloud providers price some or all of their services based, in part, on traffic in and out of their cloud. This is particularly true of database services. It's routine for providers to offer free data traffic between cloud applications in the same region and between applications and cloud data services, but not across a cloud border into a VPN. As a result, if you access two cloud providers via your VPN, you need to pay each for transfers between the cloud and your data center.

Most users expect these charges, but the billing surprises occur when a workflow moves between application components hosted in different public clouds. In this case, you transfer data from one cloud to another via your VPN. This means you pay once for the traffic exiting one cloud and again when it enters the other.


If all your cloud providers share your VPN address space, it makes it easier to deploy and redeploy components to accommodate changes in load or to respond to failures because you control all the addressing. This flexibility enables you to easily -- and sometimes accidentally -- create workflows that generate those expensive provider-to-provider charges. If you move a single application component from one provider to another, you could double traffic charges.
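
A quick back-of-the-envelope calculation makes the risk concrete. In the sketch below, the per-gigabyte border charges are invented placeholders (check your providers' actual price lists); the arithmetic simply follows the scenario above, where a workflow contained in one cloud is charged at one border, while the same workflow split across two clouds is charged at both.

# Back-of-the-envelope illustration of the "pay twice" effect described above.
# The $/GB border charges are made-up placeholders; substitute the rates from
# your providers' actual price lists.
BORDER_CHARGE_PER_GB = {"cloud_a": 0.09, "cloud_b": 0.08}   # hypothetical rates
MONTHLY_GB = 500   # data one workflow step moves per month

# Case 1: both components stay in cloud_a; results cross one border to the VPN.
single_cloud_cost = MONTHLY_GB * BORDER_CHARGE_PER_GB["cloud_a"]

# Case 2: the downstream component is redeployed to cloud_b, so the same data
# is charged leaving cloud_a and again crossing into cloud_b.
cross_cloud_cost = MONTHLY_GB * (BORDER_CHARGE_PER_GB["cloud_a"] +
                                 BORDER_CHARGE_PER_GB["cloud_b"])

print(f"workflow contained in one cloud: ${single_cloud_cost:.2f}/month")
print(f"workflow crossing two clouds:    ${cross_cloud_cost:.2f}/month")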

Here are some steps to optimize that flexibility, without creating additional costs.

First, understand when your cloud providers apply traffic charges. Not all services incur these costs, and cost assessments could vary between providers. Even if you don't currently use any services that incur these traffic charges, one application modification could change that and create significant cost variability as your hosting patterns shift. Always keep track of pricing and traffic policies.

Second, think in terms of workflows, not application components. Cloud computing performance and efficiency depend on how effectively you manage the intercomponent connections that support application workflows. In multicloud, this workflow is the largest cost variable you can control. When you shift where you host application components -- either among cloud providers or between providers and your own data center -- remap the workflows to assess costs.


Third, remember that workflows are the result of where you host your application components in the cloud. Manage multicloud workflows by controlling where you put application components -- meaning which clouds you can use and when -- and not by applying network traffic filters to prevent cloud border crossings. Establish cloud bursting and failover policies to ensure you don't introduce workflows that cross provider boundaries, adding to overall network integration costs.

Finally, assign each cloud provider its own address range within your VPN. Most enterprises use an RFC 1918 (IPv4) or RFC 4193 (IPv6) address for their VPNs. In IPv4, the Class A space 10.x.x.x is available, and it offers ample room to create cloud provider subnetworks. If you use this approach, it's possible to control -- or at least identify -- where application component placement creates a workflow that crosses between two or more cloud providers and therefore runs the risk of multiple traffic charges.
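
As a rough sketch of that addressing scheme (the subnet sizes and provider names are illustrative, not a recommendation), the snippet below carves one /16 per hosting location out of the 10.0.0.0/8 space and flags component-to-component hops that cross a cloud-to-cloud border.

# Sketch of the per-provider addressing idea: carve a subnet per hosting
# location out of the RFC 1918 10.0.0.0/8 space, then use it to spot workflow
# hops that cross a cloud-to-cloud border (double traffic-charge risk).
import ipaddress

PROVIDER_SUBNETS = {
    "datacenter": ipaddress.ip_network("10.1.0.0/16"),
    "cloud_a":    ipaddress.ip_network("10.2.0.0/16"),
    "cloud_b":    ipaddress.ip_network("10.3.0.0/16"),
}

def locate(ip: str) -> str:
    """Return which location a VPN address belongs to."""
    addr = ipaddress.ip_address(ip)
    for name, net in PROVIDER_SUBNETS.items():
        if addr in net:
            return name
    return "unknown"

def crosses_provider_border(src: str, dst: str) -> bool:
    """True when a hop leaves one cloud provider for another."""
    a, b = locate(src), locate(dst)
    return a != b and "datacenter" not in (a, b)

# Example: a component redeployed from cloud_a to cloud_b now talks across a border.
print(crosses_provider_border("10.2.4.10", "10.3.7.22"))   # True  -> review this workflow
print(crosses_provider_border("10.1.0.5",  "10.2.4.10"))   # False -> normal VPN<->cloud hop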


Original post:
Avoid steep network integration costs in multicloud - TechTarget