Category Archives: Cloud Servers
Global Machine Translation Market Report 2021-26: Global Size, Share and Industry Trends – The Courier
The latest report by IMARC Group, titled "Machine Translation Market: Global Industry Trends, Share, Size, Growth, Opportunity and Forecast 2021-2026," finds that the global machine translation market grew at a CAGR of around 14% during 2015-2020. Machine translation (MT) refers to automated translation in which computer software is used to translate a text from one natural language to another. This tool interprets and analyzes all the elements in the text by drawing on extensive knowledge of grammar, syntax, and semantics in both the source and target languages. Google Translate and LingoHub are some well-known machine translation engines used across the globe.
Request Free Sample Report: https://www.imarcgroup.com/machine-translation-market/requestsample
The global MT market is primarily driven by the reinvention of translation tools and the growth of adaptive machine translation. Besides this, the demand for cloud-based applications, which eliminate the need to invest in in-house hardware development or installations and provide access to different services via cloud servers, is also influencing the market's growth. Moreover, several key players are launching advanced MT systems to enhance the productivity of human translators. This technology has also made it easier to disseminate healthcare information about the outbreak of coronavirus disease (COVID-19) in various regional languages. These factors are expected to have a positive impact on the market during 2021-2026. Looking forward, IMARC Group expects the global machine translation market to exhibit strong growth over the next five years.
Breakup by Technology Type:
Breakup by Deployment Type:
Breakup by Application:
Breakup by Region:
Competitive Landscape with Key Players:
Ask for Customization and Browse the Full Report with TOC & List of Figures: https://www.imarcgroup.com/machine-translation-market
As the novel coronavirus (COVID-19) crisis takes over the world, we are continuously tracking changes in the markets and in consumer behaviour globally, and our estimates of the latest market trends and forecasts are made after considering the impact of this pandemic.
If you want the latest primary and secondary data (2021-2026) with cost modules, business strategy, distribution channels, etc., click "Request Free Sample Report"; the published report will be delivered to you in PDF format via email.
Other Reports by IMARC Group:
About Us
IMARC Group is a leading market research company that offers management strategy and market research worldwide. We partner with clients in all sectors and regions to identify their highest-value opportunities, address their most critical challenges, and transform their businesses.
IMARC's information products include major market, scientific, economic and technological developments for business leaders in pharmaceutical, industrial, and high technology organizations. Market forecasts and industry analysis for biotechnology, advanced materials, pharmaceuticals, food and beverage, travel and tourism, nanotechnology and novel processing methods are at the top of the company's expertise.
Contact Us
IMARC Group
30 N Gould St, Ste R
Sheridan, WY (Wyoming) 82801, USA
Email: Sales@imarcgroup.com
Tel No: (D) +91 120 433 0800
Americas: +1 631 791 1145 | Africa and Europe: +44-702-409-7331 | Asia: +91-120-433-0800
Laying the IT Groundwork for a Crowded Space Economy – Via Satellite
Private companies from OneWeb, Boeing, and Amazon to SpaceX are busy flooding Low-Earth Orbit (LEO) with thousands of small satellites to deliver high-speed internet and other services to the most remote corners of Earth. Add to that dozens of more specialized mini-constellations that track anything from ship movements to natural catastrophes and greenhouse gas emissions. In short, we are wrapping our planet with a novel type of nervous system that can detect minute events or disturbances with a resolution down to a few meters or feet.
While these boom times thrill launch companies and hardware and software developers, most discussions around the impending traffic jam miss a crucial point. If we want to make sure the space economy takes off, we must lay a reliable terrestrial groundwork now. That means putting an IT architecture in place that's simple, safe, secure, and scalable to accomplish several objectives simultaneously.
As the industry keeps growing, hundreds of startups plus aerospace incumbents will add thousands of employees to deal with design, testing, launches, and operations, as well as with analyzing the rich data streams those satellites generate and that companies want to monetize. Companies need to manage a rapidly growing workforce, prevent unauthorized access and intrusions, and be ready to add new services as their portfolios almost certainly expand.
Based on my company's work with satellite clients such as GHGSat, Momentus, and High Precision Devices (now part of FormFactor), I have seen that all segments of the industry, from builders and operators to the designers of sensor packages, face very similar challenges.
These companies have grown quickly and had to find a way to consolidate their IT operations without slowing down the launch preparations. This is not an easy task if you're constantly adding new employees who have to be onboarded and whose rights and permissions have to be carefully managed to make sure they only work with apps and data sets they're supposed to see or manipulate.
The reality is that most companies' systems have been cobbled together over the years, with some parts on-premises and some in the cloud. Departments often add new services and servers, which eventually leads to a tangled mess. For instance, engineers have to remember multiple sign-on ID and password combinations, which wastes time and creates unnecessary tech support issues when someone is accidentally locked out.
What's worse, the ID and password mess increases the risk of intruders gaining access for mischief, espionage, or sabotage. Most attacks are still carried out with social engineering tricks or by using a human vector to get into a target system. Managing such a jumble of IT components, multiple operating systems and servers, and confusing user roles is the bane of every startup coping with plenty of other growing pains. It's even more relevant for highly sensitive and costly aerospace operations, where access to sometimes-classified data is highly compartmentalized.
Imagine what inventive hackers could do, for instance, if they were able to tap into the satellite feeds and analytics stream around the greenhouse gas emissions of a large oil and gas company or the maintenance schedule of a satellite constellation.
Many companies have found a way to simplify their terrestrial ops by battening down the hatches. They deploy a unified system with a single sign-on across all parts of the organization and maintain a centralized local database of their users. It lets them manage the roles and permissions for every team member on their own server instead of entrusting it to a big cloud provider. In fact, even installing that ID server is usually handled by internal staff only, not outside contractors.
Going that route has several benefits. It makes onboarding new employees and managing existing staff easier, thereby hardening the whole IT architecture. The same goes for a clear and clean audit trail, often mandatory for regulatory and government compliance. If a satellite company has everything on one system and in one dashboard in-house, there's little wiggle room when questions come up about who had access to what data or apps at what times and made what changes.
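As a rough illustration of that approach, the sketch below uses the open-source ldap3 Python library to check a user's group membership against a locally hosted directory before granting access to an internal tool. The hostname, base DNs, user and group names are placeholders, not details from any company mentioned here.

```python
# Hypothetical sketch: gate access on group membership held in a local,
# self-hosted LDAP directory (pip install ldap3). All names are placeholders.
from ldap3 import Server, Connection, ALL

server = Server("ldaps://id.internal.example", get_info=ALL)
conn = Connection(server,
                  user="uid=svc_reader,ou=services,dc=example,dc=org",
                  password="***", auto_bind=True)

# Find the groups that list this engineer as a member.
conn.search(
    "ou=groups,dc=example,dc=org",
    "(&(objectClass=groupOfNames)(member=uid=jdoe,ou=people,dc=example,dc=org))",
    attributes=["cn"],
)
groups = [str(entry.cn) for entry in conn.entries]

# Only members of the (hypothetical) flight-operations group see telemetry dashboards.
print("access granted" if "flight-ops" in groups else "access denied")
```

The point of the sketch is that the membership check happens against a directory the company runs itself, so the audit trail and the permission data never leave its own servers.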
Localizing terrestrial ops has another advantage. It lets companies maintain better control over all their data, starting with the seemingly innocuous metadata. While the proprietary files themselves may be encrypted in transit and/or at rest with a cloud provider, the metadata wrappers around them, from timestamps to IP addresses or locations, rarely are and can, in fact, be sold to third parties.
Logging in at a certain location or joining a corporate Wi-Fi network can provide outsiders with valuable intelligence as to which company is negotiating the next big deal with whom. Aerospace startups are therefore well advised to check with their IT providers how they handle metadata. Again, a local, open-source option is in many cases the safer bet.
As programs for small satellites and cubesats proliferate and the cost of launching one keeps dropping, the danger of data breaches in this industry is both real and growing. These hacks will have costly consequences long before a mishap in space garners headlines. It's high time to think about safety on the ground before you hit the launch button.
Kevin Korte is the President of Univention North America, where he is responsible for the US team and helps clients use open source identity management systems.
5 Ways Developers Can Get the Most out of Edge Computing Platforms – ITPro Today
Edge computing is one of the buzzwords du jour of the IT world. Arguably, it's merely a new term for an old idea. But, either way, if you're not up to speed with edge computing concepts and priorities, now's the time to learn. Toward that end, here's a primer on what developers should know about edge computing platforms: how edge computing platforms work, how they relate to the cloud and data centers, and how to approach application development for the edge.
Edge computing is a broad term that refers to any type of application deployment architecture in which applications or data are hosted closer to users--in a geographic sense--than they would be when using a conventional cloud or data center.
The big idea behind edge computing is that by bringing workloads closer to end users you can reduce network latency and improve network reliability--both of which are key considerations in an age when applications in realms like IoT, machine learning and big data require ultra-fast data movement.
At its core, edge computing is an architectural concept, not a development concept. Applications don't need to be designed or programmed in any particular way to run on edge computing platforms.
Nonetheless, there are a number of things developers can do to help their organizations get the most out of edge computing.
For applications to take full advantage of edge architectures, it's important for application instances to be able to start quickly. It's hard to benefit from an ultra low-latency network when your applications take 30 or 40 seconds to start.
That's one reason to consider containerizing applications that will be deployed on an edge platform. Containers can start and scale more quickly, enabling organizations to capitalize on the agility and speed that edge computing platforms offer.
In some cases, edge computing platforms involve hardware devices that you wouldn't find in a conventional data center. You may be dealing with IoT devices or with mobile phones that serve as a device edge (which means that the devices perform processing tasks that would traditionally be handled on the server side). Not only can the hardware profiles of these devices vary tremendously, but they may also not offer the ability to virtualize hardware (and, by extension, standardize computing environments).
For this reason, it's wise to choose a development strategy that can support any type of device or hardware configuration. Even if your edge applications run today on conventional servers, you may want to extend them in the future into more specialized devices. Sticking to programming languages, libraries and processes that help you do that will future-proof your organization's edge strategy.
In addition to the aforementioned device edge, edge computing platforms come in the form of what's known as the cloud edge. In the latter edge computing model, data processing happens in the cloud rather than on end user devices. However, the cloud data centers in a cloud edge are geographically closer to users than they would be in a conventional, highly centralized cloud architecture.
The device edge and the cloud edge both help to improve application performance and reliability, but in different ways. Developers should understand the differences and decide which type of edge model makes sense for their applications. For a device edge, they'll need to build applications that can optimize data processing directly on end-user devices. Applications in cloud edge environments look more like traditional server-side applications.
It can be tempting to view edge computing as an alternative to cloud computing, or even as the antithesis of it. In fact, edge extends the cloud rather than competes with it.
From a development perspective, this means that you can and should take full advantage of cloud services when it makes sense while building an edge application. Edge apps dont need to avoid reliance on the cloud. However, they should be capable of running in an environment where traditional cloud data centers are not available.
The fact that edge applications are deployed outside of traditional data centers also makes software testing extra important when you are developing for an edge computing platform. Not only do you need to ensure that you test each release for all of the environment configurations you will be deploying to, but you should also factor in how varying levels of network availability, proximity to content delivery networks, and even (if you are deploying to a device edge) battery life on end user devices can impact application performance.
In other words, testing edge applications requires planning for more variables and unique test cases than you would traditionally have to handle when building a standard application.
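As a minimal sketch of what that extra planning can look like in practice, the pytest example below parameterizes one test over a few hypothetical edge profiles (network latency, battery level). The fetch_telemetry function is a stand-in for the real application call, not part of any library.

```python
# Sketch only: the profiles and the fetch_telemetry stand-in are hypothetical.
import os
import time
import pytest

def fetch_telemetry(timeout: float) -> dict:
    """Stand-in for the application call under test; the real function
    would hit the edge service over the network."""
    latency = int(os.environ.get("SIMULATED_LATENCY_MS", "0")) / 1000.0
    time.sleep(latency)  # crude latency simulation for this sketch
    return {"ok": latency <= timeout, "latency_s": latency}

PROFILES = [
    {"latency_ms": 20,  "battery_pct": 90},   # well-connected device edge
    {"latency_ms": 250, "battery_pct": 15},   # remote site, constrained device
    {"latency_ms": 80,  "battery_pct": 50},   # nearby cloud-edge region
]

@pytest.mark.parametrize("profile", PROFILES)
def test_telemetry_under_edge_conditions(profile, monkeypatch):
    # Each profile becomes its own test case, so a regression that only
    # shows up under high latency or low battery is caught before release.
    monkeypatch.setenv("SIMULATED_LATENCY_MS", str(profile["latency_ms"]))
    monkeypatch.setenv("BATTERY_PCT", str(profile["battery_pct"]))
    result = fetch_telemetry(timeout=2.0)
    assert result["ok"]
```

Adding a profile is then a one-line change, which keeps the growing matrix of edge conditions manageable.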
Again, developers are only one set of stakeholders in edge computing. Cloud architects, data architects, and network and security engineers also have important roles to play in ensuring that businesses capitalize on the benefits that edge computing platforms stand to offer.
But developers can do their part by writing applications that are high-performing under any and all edge configurations that their organizations may choose to use--now or in the future.
How Cloud Computing Can Be the Key to Ameliorating Outcomes While Mitigating Health Care Costs – Journal of Clinical Pathways
Cloud computing has been around in the health care industry for a few decades now. However, what's truly remarkable is that the adoption of this technology has increased at a frenetic pace only recently.
One 2019 research study by Technavio states that the global health care cloud technology market is anticipated to grow by USD 25.54 billion during 2020-2024. The coronavirus pandemic has only reinforced this trend further.
This new reality, along with new payment models and changes in patients' expectations, have together pushed cloud technology to the forefront. Today, the cloud is not only helping providers improve patient care, drive efficiency, and eliminate waste, but it is also playing a huge role in ensuring health care data safety by averting potential cyber attacks and thefts.
Integrating cloud computing into your practice can be the key to streamlining care delivery.
In this blog post, we'll discuss a few ways this state-of-the-art tech solution can support the health care industry's efforts to improve patient outcomes and mitigate costs in doing so.
1. Making Patient Data Interoperable while Mitigating Storage Costs
According to a recent survey conducted by the Center for Connected Medicine (CCM) in partnership with HIMSS Media, close to one-third of health care organizations report that their interoperability efforts are insufficient, even within their own organizations.
In most cases, physical data centers that are deployed on-premise not only demand an investment in hardware ahead of time, but they also come with ongoing costs of maintaining servers, spaces, cooling solutions, etc.
Cloud technology can be the solution to this persistent problem.
With health care organizations rapidly embracing virtual care delivery models such as telemedicine, especially amid the ongoing COVID-19 pandemic, the collaboration between various doctors, departments, and even institutions has become of increasing importance. The cloud enables physicians to share data in a hassle-free manner.
Health care cloud vendors can aid providers in seamlessly integrating various processes within the organization and lowering their data storage costs by managing the structure and ensuring the harmonious functioning and maintenance of cloud storage services. This significantly helps care providers focus their efforts on ameliorating patient outcomes.
This, in turn, boosts interoperability across the organization and helps with faster care delivery.
2. Keeping Patient Information Secure at Each Stage of the Data Lifecycle
In 2018, health care data breaches of 500 or more medical records were being committed at a rate of approximately one per day. In 2020, that frequency had nearly doubled, with breaches averaging 1.76 per day.
Source: HIPAA Journal
The fact that health care organizations need to have highly robust security measures in place to safeguard sensitive patient data is universally known.
Cloud computing adds supplemental layers of security and monitoring to health care data.
One best practice for health care organizations here would be to put adequate access controls in place. For instance, particulars about a patient's medical condition and treatment can be blocked from back-office staff who don't require such details to do their work. Similarly, a patient's financial information may be blocked from frontline care providers.
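A minimal, hypothetical sketch of that idea in Python is shown below: each role sees only the record fields it needs, so clinical details and billing details stay separated. The role names and fields are illustrative only, not taken from any real system.

```python
# Hypothetical sketch of field-level access control for patient records.
from dataclasses import dataclass

ROLE_VISIBLE_FIELDS = {
    "clinician":   {"name", "condition", "treatment_plan"},
    "back_office": {"name", "insurance_id", "billing_status"},
}

@dataclass
class PatientRecord:
    name: str
    condition: str
    treatment_plan: str
    insurance_id: str
    billing_status: str

def view_for_role(record: PatientRecord, role: str) -> dict:
    """Return only the fields the given role is allowed to see."""
    allowed = ROLE_VISIBLE_FIELDS.get(role, set())
    return {k: v for k, v in vars(record).items() if k in allowed}

record = PatientRecord("A. Patient", "hypertension", "ACE inhibitor",
                       "INS-0042", "paid")
print(view_for_role(record, "back_office"))   # no clinical details
print(view_for_role(record, "clinician"))     # no financial details
```

In a production system the role-to-field mapping would live in the cloud platform's identity and access management layer rather than in application code, but the principle is the same.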
Telehealth is another technology where cloud computing is proving its potential. Getting a telemedicine platform developed for your practice that stores data on a cloud server can furnish robust security features such as end-to-end encryption of data, multi-factor authentication (MFA), etc. These ensure the patient data on your platform remains safeguarded at all times.
Today, a number of health care cloud providers also offer services in compliance with the Health Insurance Portability and Accountability Act (HIPAA). Choosing a compliant provider can further ensure that all the sensitive data you store adheres to HIPAA Rules and remains protected at all times. This can significantly help providers avoid fines and penalties.
3. Furnishing Efficient and Integrated Patient Care
Today's patients are quite savvy about their wellbeing. Equipped with state-of-the-art digital solutions, these patients are willing to accept nothing less than high-quality medicine -- one that will deliver care in a patient-centric and streamlined manner, making an integrated model of care delivery critical for caregivers.
Cloud technology is playing a huge role in delivering patient-centric care.
The integration of cloud storage with patients' electronic health records (EHRs) has helped revolutionize collective patient care, making it hassle-free for authorized individuals from the medical staff to retrieve vital patient information from any remote location, and at any given point in time. This further promotes anytime care and augments patient outcomes.
"With EHRs, every provider can have the same accurate and up-to-date information about a patient," as explained by the Office of the National Coordinator for Health Information Technology on its website. Better coordination can lead to better quality of care and improved patient outcomes.
The cloud-based software behind collaboration tools such as video conferencing and enterprise messaging holds the potential to have a positive influence on both health care teams and their patients.
"Moving to the cloud for our communications was the best decision we've made, as we're now connected with our patients and colleagues whether we are in the office, at home or traveling overseas," states Dr. Ravi Patel, founder of the Comprehensive Blood & Cancer Center, Bakersfield, CA, in a recent press release.
Today, with the rapid innovation happening on the cloud technology front, the data gathered from remote patient monitoring devices can also be uploaded to a specific medical cloud or the user's private centralized cloud. This helps maintain a record of all the monitored data which can easily be retrieved at a later time by authorized medical personnel to suggest treatment.
All in all, cloud computing has transformed the health care industry in innumerable ways.
Now, this transformation may be occurring at a comparatively slower pace for some, but the growing need to make data more interoperable will eventually get many to notice the cloud and its endless benefits.
Having said that, it wouldn't be wrong to assume that the future of health care is in the cloud!
Rahul Varshneya is the co-founder and president of Arkenea, a digital health consulting firm. Rahul has been featured as a technology thought leader across Bloomberg TV, Forbes, HuffPost, Inc, among others.
2 Top Cloud Computing Stocks to Buy in 2021 – The Motley Fool
Cloud computing has revolutionized the business world over the last two decades. Enterprises no longer need to provision and maintain costly on-premises computing infrastructure. Instead, they can access resources like servers, storage, databases, and software remotely through the internet. Moreover, those resources can be accessed on demand, allowing enterprises to quickly and efficiently scale their operations.
According to research firm Gartner, spending on public cloud services will increase by 19% annually through 2022. That growth should be a tailwind for industry titans like Amazon (NASDAQ:AMZN) and Microsoft (NASDAQ:MSFT). Here's what investors should know about these two companies' opportunities in the cloud space in 2021.
Amazon's cloud computing business, Amazon Web Services (AWS), launched in 2006. Today, it's still the clear leader in the space, with a more extensive global infrastructure, a broader product offering, and a larger market share than any of its rivals. In fact, during the fourth quarter of 2020, AWS took 32 cents of every dollar spent on cloud infrastructure services.
That dominance has attracted a diverse network of partners -- enterprises that use AWS to build solutions for their own clients. For example, consulting firm Deloitte developed its Smart Factory Fabric, a cloud-enabled manufacturing process, using Amazon IoT systems. This suite of applications brings smart manufacturing capabilities to its clients' operations. Notably, Deloitte's role as a consultant to 80% of Fortune Global 500 companies means it's well-positioned to bring new customers to AWS.
Not surprisingly, its robust product portfolio and large partner network have powered strong growth in Amazon's cloud computing business. Last year alone, AWS's revenue rose 30% to $45 billion.
Moreover, AWS's operating margin was 30% in 2020. Compare that to the combined operating margin of Amazon's other businesses -- 3%. The cloud segment's high profitability has helped the tech giant bankroll its e-commerce efforts, making it an even greater threat to traditional retailers. In other words, AWS generates enough cash that Amazon can afford to run its e-commerce business at a loss to gain market share.
AWS's lead in cloud computing should help the company grow its top and bottom lines quickly. That, in turn, should drive increased profitability for Amazon as a whole, while allowing it to fund the rapid innovation that has kept AWS ahead of its rivals.
Microsoft launched its cloud computing business, Microsoft Azure, in 2008. While it still trails AWS in terms of market share, the company is executing on a strong growth strategy, and Azure is gaining ground.
Market Share | Q4 2018 | Q4 2019 | Q4 2020
Amazon | 33% | 32% | 32%
Microsoft | 15% | 18% | 20%
Source: Canalys.
Specifically, Microsoft has focused on supporting hybrid and edge computing use cases. This strategy makes sense -- some types of data need to remain on company premises due to privacy or regulatory requirements. That can put an enterprise at a disadvantage, though, if it means they don't have access to cloud services to help them manage, analyze, and secure that data. But Microsoft has a solution.
First, Azure Arc extends Azure's management capabilities across any environment, from private data centers to public clouds. In other words, it allows clients to manage all their digital resources in a unified way, even if some of those resources are stored on-site or in a rival cloud like AWS. For example, Azure Arc makes it possible to train and run AI models using data stored in multiple different locations. That puts Microsoft ahead of rivals like AWS and Alphabet's Google Cloud in terms of its ability to power hybrid AI.
Second, Azure Stack allows clients to run their own Azure environments using on-premises servers. This makes it possible to bring Azure services -- think artificial intelligence, analytics, monitoring, and security -- to private data centers or even disconnected environments. Azure Stack also makes it possible for developers to build and run hybrid applications across cloud and on-premise locations.
In recent years, Microsoft has also forged partnerships with companies like Datadog, SpaceX, and General Motors that have helped expand Azure's client base. In another partnership that began in 2019, SAP started working with it to migrate its on-premise software customers to Azure. And in 2021, the two companies expanded this partnership, enabling SAP to integrate Microsoft Teams into its own software solutions.
On the whole, Microsoft's efforts have powered strong growth in its cloud computing business. In the company's fiscal 2020 (which ended June 30, 2020), Azure revenue surged 56%, and through the first two quarters of its fiscal 2021, sales were up 49% year over year.
Microsoft's size gives it an advantage over the vast majority of its rivals. And its focus on hybrid scenarios should power continued growth as more enterprises migrate to the cloud.
This article represents the opinion of the writer, who may disagree with the official recommendation position of a Motley Fool premium advisory service. We're motley! Questioning an investing thesis -- even one of our own -- helps us all think critically about investing and make decisions that help us become smarter, happier, and richer.
The Bright Future of Cloud SIEM – Security Boulevard
TL;DR: People keep questioning SIEM value, but cloud SIEM makes SIEM so much better. SIEM is now capable of delivering a lot of security value with far less effort from security teams.
The SIEM market is a US$5B market with a two-digit annual growth rate. Still, we keep seeing multiple questions and discussions around SIEM's role, future and value. Why?
There are many reasons, including:
Nothing is more important to those discussions than cloud SIEM. Not just SIEM hosted in the cloud, but SIEM as a native cloud offering. Why? Because now SIEM vendors can have some control over deployment success. What are you saying, Augusto? Didn't they have control over the success of their own product before? Yes, that's true!
As a traditional SIEM vendor, it is very hard for you to ensure the customer will be able to get all the benefits your product can provide. First, they may underestimate the required capacity for their environment. They will end up with a sluggish product, overflowing with data, having to deal with adding servers, memory, or storage, or even stopping the deployment to rearchitect the whole solution before getting any value from it. I've seen countless SIEM deployments die this way before generating any return on investment.
But it doesn't stop there. They may get the sizing right but underestimate the effort to keep the system running. They estimate the number of people needed to use the SIEM, but they forget that a traditional SIEM requires people not only to use it but also to keep it running. That means people will spend their time keeping servers running, applying patches (to operating systems, middleware and to the SIEM software too), troubleshooting log collection, and ensuring storage doesn't blow up, instead of paying attention to what the SIEM should actually be doing for them. The tool is up and running, but again, not providing any value.
We can see how much the vendor depends on the customer to provide value. And even if customers do things properly, there are other challenges. Traditional software allows for high variation across deployments: customers running on different versions, with different hardware and architectures. How can a vendor distribute SIEM content (parsers, rules, machine learning models, etc.) that works in a consistent manner for its customers in this scenario? It just can't.
Considering these factors, I risk saying that offering a traditional SIEM solution is like the Sisyphus Myth. As much as the vendor tries to deliver value, the solution will eventually fail to achieve the customer objectives. As traditional software, SIEM was really destined to die.
First, many challenges in SIEM deployments are related to problems that are completely solved or minimized by the SaaS model. Cloud services are highly scalable and elastic, and SaaS practically eliminates the need to maintain the application and underlying components. Now you have a SIEM that finally scales and does not require an army to keep it running. You can focus on using it appropriately.
Second, a SaaS SIEM puts customers on highly standardized deployments. With most customers running on the same version, without capacity challenges, it's far easier to deliver content that works for all of them. That makes a huge difference in perceived value. And it doesn't stop there. With this scenario it becomes easier for the vendor to finally realize the benefits of the wisdom of the crowds. Developing more complex ML models for threat detection, for example, becomes easier and more effective. The vendor now has access to more data to train and tune the models. Even simple IOC-match detection content can be quickly developed and delivered to all customers, allowing the SIEM vendor to provide detection of new, in-the-wild threats.
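As a toy illustration of such IOC-match content (a sketch, not any vendor's actual detection logic), the snippet below checks incoming events against a shared indicator list. The field names, IP addresses and hash value are made up for the example.

```python
# Hypothetical IOC-match sketch: indicators pushed to all tenants of a cloud
# SIEM, applied to each incoming event. All values below are placeholders.
IOC_IPS = {"203.0.113.50", "198.51.100.23"}          # documentation-range IPs
IOC_HASHES = {"0123456789abcdef0123456789abcdef"}    # placeholder file hash

events = [
    {"src_ip": "10.0.0.4", "dst_ip": "203.0.113.50", "file_md5": None},
    {"src_ip": "10.0.0.7", "dst_ip": "192.0.2.10",
     "file_md5": "0123456789abcdef0123456789abcdef"},
]

def match_iocs(event: dict) -> list[str]:
    """Return the indicator types that this event matched."""
    hits = []
    if event.get("dst_ip") in IOC_IPS:
        hits.append("known-bad destination IP")
    if event.get("file_md5") in IOC_HASHES:
        hits.append("known-bad file hash")
    return hits

for e in events:
    for hit in match_iocs(e):
        print(f"ALERT: {hit} in event {e}")
```

The vendor-side advantage of SaaS is that a list like this can be refreshed centrally and take effect for every customer at once, instead of being shipped out to hundreds of differently versioned on-premises installations.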
Finally, delivering any software solution via SaaS gives the developer the opportunity to embrace more agile development practices. Upgrading a traditional SIEM deployment is so complex that vendors would naturally rely on traditional waterfall development practices, generating big releases with long times between them. SaaS SIEM can leverage agile development and CI/CD practices, so new features can be quickly added, and defects quickly fixed.
Cloud SIEM is in its infancy, when you consider that SIEM itself is just past its teenage years. But there are so many opportunities to explore with this model that I believe we can now say "next-gen SIEM" without feeling silly about it. Be careful with "SIEM is dead" claims. That sounds to me much like "I think there is a world market for maybe five computers," attributed to Thomas Watson in 1943.
*** This is a Security Bloggers Network syndicated blog from Security Balance - Augusto Barros authored by Unknown. Read the original post at: http://feedproxy.google.com/~r/SecurityBalance/~3/BAcr0fKDFm4/the-bright-value-of-cloud-siem.html
Pensando tech slashes network management costs – but you may need 2,000 servers to benefit – Blocks and Files
Enterprises with 2,000-plus servers could save up to 84 per cent in various network monitoring and management costs over three years by using Pensando SmartNIC server offload chips.
This is the headline finding of an Enterprise Strategy Group Economic Validation report, published yesterday: "ESG's analysis found that Pensando's scale-out software-defined services approach enabled organisations to centralise management, simplify administration, and optimise performance."
"Carriers and CSPs have embraced a scale-out approach, which enables services to be run on homogeneous, industry-standard server hardware," ESG adds. "The challenge is that spinning up network or security functions on the generic servers employed by carriers and CSPs burns CPU resources and is generally less efficient and performant than specialty hardware."
Pensando, a California startup, has built the Arm-powered Naples DSC (Distributed Services Card) which connects to a host server across a PCIe interface. The card offloads and accelerates networking, storage and management tasks from its host server, freeing up the host CPU to run application workloads instead of infrastructure-focused tasks.
Pensando's DSC card replaces speciality hardware appliances. Infrastructure services such as security, encryption, flow-based packet telemetry, and fabric storage services are deployed on the DSC at every server. Pensando provides Policy and Services Manager (PSM) software to carry out centralised management. PSM collects events, logs, and metrics from the installed DSCs to speed troubleshooting.
In its report, ESG notes: "While at first glance it might seem expensive to implement Pensando hardware and software into each data centre server, ESG's modelled scenarios demonstrate significant savings for both traditional enterprises and cloud service providers."
In other words, the performance and related savings per server are not enough at a server level to justify the Pensando card cost. But the total cost of ownership savings across large fleets of servers, 2,000 and upwards in ESG's scenarios, over three years make the expense of buying Pensando cards worthwhile.
The ESG researchers modelled two scenarios, an enterprise data centre with 2,000 servers each fitted with a DSC card, and a cloud services provider with 20,000 similarly equipped servers.
ESG calculated the costs of network adapters and monitoring appliances, east-west firewalls, load balancers, micro-segmentation nodes, and their associated license fees and operational expenditures (Opex). These were summed over three years and compared with the same servers fitted with Pensando DSCs and services software.
In the enterprise model, ESG's model predicts total three-year savings of $16,180,092, or 84 per cent. In the cloud services provider model, it predicts total three-year savings of $104,456,894, or 64 per cent. These are large numbers and suggest that enterprises and CSPs with a thousand-plus servers might be well advised to look at the costs and benefits of using Pensando DSCs in their data centres.
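For a rough sense of scale, the back-of-the-envelope sketch below simply divides ESG's quoted totals by server count and years; the report itself models many more cost categories, so treat these as ballpark per-server figures only.

```python
# Back-of-the-envelope per-server figures derived from the ESG totals above.
years = 3
scenarios = [
    ("enterprise", 16_180_092, 2_000),
    ("cloud service provider", 104_456_894, 20_000),
]

for label, savings, servers in scenarios:
    per_server_per_year = savings / servers / years
    print(f"{label}: ~${per_server_per_year:,.0f} saved per server per year")

# enterprise: ~$2,697 saved per server per year
# cloud service provider: ~$1,741 saved per server per year
```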
New SQL Monitor release gives organizations the opportunity to manage their on-premises and cloud databases from a single global dashboard – IT News…
RealWire, 2021-04-15
Cambridge, UK, Thursday, 15 April – To help organizations explore and manage the advantages the cloud provides, the latest release of Redgate's popular database monitoring tool, SQL Monitor, now supports Amazon EC2 and RDS, and Azure SQL Database and Azure Managed Instances, as well as on-premises SQL Server. A new global dashboard allows users to check the health of their entire SQL Server estate at a glance and pinpoint issues with individual servers and instances, wherever they are, however large the estate.
The new SQL Monitor keeps the user-experience consistent, and allows organizations to focus on responsiveness, improving performance and supporting business-critical areas, rather than trying to understand the complexity across database platforms. It also brings consistency and familiarity to database monitoring, and avoids the learning curve, cost and time involved in using multiple monitoring tools for different databases.
The release follows research from Redgate's 2021 State of Database DevOps report, showing that 58% of organizations now use the cloud either wholly or in combination with on-premises servers, compared to 46% in the same report a year earlier.
This accelerated move to the cloud reflects how organizations are looking to reap the benefits that cloud platforms offer, even if it means that managing and monitoring server estates become more complex and difficult. Different use cases and requirements make choosing a single cloud offering rare and many server estates now feature a changing mixture of on-premises servers and platforms like Amazon and Azure.
To support these business needs and keep up with the evolution of hybrid server estates, it's critical that organizations have the ability to monitor every type of server and instance with the same monitoring tool, using a consistent approach to minimize the time and effort involved. This ensures the availability, security and performance of all the databases across different hosts can be managed far more easily and effectively.
As Phil Grayson, CEO of Managed Service Provider xTEN, comments: "We've seen a big shift in the SQL Server space over the last few years, with hybrid estates growing in size and complexity. The latest version of SQL Monitor simplifies their management because organizations can now focus on choosing the right database solution for their business need, without worrying how they're going to monitor it."
The development team behind SQL Monitor are now looking to add more estate management capabilities to the tool like providing security related information on demand, and automating the discovery and inventory of entire SQL Server estates.
To find out how Redgate SQL Monitor offers a complete overview of hybrid SQL Server estates with fast deep-dive analysis, organizations can download a 14-day, fully functional free trial or see a live demo online at http://www.red-gate.com/sql-monitor.
About Redgate Software
Redgate makes ingeniously simple software used by over 800,000 IT professionals around the world and is the leading Database DevOps solutions provider. Redgate's philosophy is to design highly usable, reliable tools which elegantly solve the problems developers and DBAs face every day and help them to adopt compliant database DevOps. As well as streamlining database development and preventing the database being a bottleneck, this helps organizations introduce data protection by design and by default. As a result, more than 100,000 companies use Redgate tools, including 91% of those in the Fortune 100. For more information, visit http://www.red-gate.com.
Contacts
Meghana Shendrikar
Allison+Partners for Redgate Software
Redgate@allisonpr.com
Source: RealWire
A room with a view: a non-tech explanation of containers and Kubernetes – S&P Global
Introduction
Virtualization, containers and Kubernetes are big topics in tech, but the differences aren't always clear. Here, we present a simple analogy to aid understanding for a non-tech audience, and consider the role of these topics in a multicloud future.
The 451 Take
Containers are now a fundamental component of IT infrastructure: 53% of enterprises have at least some adoption today, with just 5% having no plans to implement, according to 451 Research's Voice of the Enterprise: DevOps, Organizational Dynamics 2020. Meanwhile, 43% of enterprises are using Kubernetes, at least at some level, to manage their IT estates. The crucial benefit of containers is that applications can be decomposed into self-managed components that can live for as long or as short as needed, depending on the demand at that time. These components can be updated independently across many physical servers without needing to rebuild the whole application, and components can be shared by multiple applications.
Orchestration platforms such as Kubernetes perform management of containers across many hosts, duplicating containers when needed to handle more requirements, and then scaling back down to save resources when not needed. Because of this, they are a logical enabler of multicloud applications. Being able to update and scale apps so users are always getting the best experience can make the difference between winning business and losing opportunities. In the post-COVID-19 era, the online experience has never been more important.
Virtualization
Imagine a building with a number of floors -- this represents a server in our analogy. An owner rents it out to a single tenant, but the tenant must pay a lot of rent for the whole building, even though they don't use it all. The landlord is only able to collect rent from a single person.
The owner could break the building into a number of apartments, all sharing the same physical building. Each apartment is a fully self-contained living space. Here, the landlord gets multiple revenue streams, and each tenant pays less than if they were to rent the whole house. This is akin to 'virtualization' in cloud computing, with tenants representing applications that are all hosted within the same building (server).
Virtualization enables the most basic attribute of a cloud service: a massive amount of computing or storage resources can be applied to many users simultaneously, with each user utilizing that service without regard to what other users are doing. Similarly, those users should be able to utilize the service without having to worry about the details of hardware-level implementation. Cloud suppliers virtualize compute, storage and other services on a massive scale using hardware virtualization, where a hypervisor layer (the abstraction layers) that runs on it enables operating systems (and their applications) to share hardware resources such as compute, storage and memory from a single asset, be it a server or even a pool of servers.
Originally, one server meant one operating system, typically delivering one workload. Through virtualization, one server (and its 'host' operating system) can hold multiple 'guest' operating systems, each one operating a logically separated workload, deployed together and contained within so-called virtual machines (the VM is the 'unit' of work in virtualization). Before virtualization, perhaps only a tiny fraction of the asset (the server and its resources) would be used at any one time. Through virtualization, multiple applications can be multiplexed together, so that resources are shared, and the asset is fully used. VMs contain a guest operating system, application payload, and anything else that may be needed (such as a database).
Common hypervisors include VMware ESX, Microsoft Windows Server (Hyper-V), Citrix Xen, Red Hat Enterprise Virtualization and Linux KVM. Cloud providers often have their own virtualization software. Increasingly, the hypervisors are less valuable than the software used to manage hypervisors across large numbers of servers, which allows virtual machines to rapidly be spun up and down, and configured to be resilient and performant -- the essence of cloud.
Containers
Back to our apartment building. Alternatively, the landlord could just break it up into a number of bedrooms, with a bathroom and kitchen shared among all tenants. In this model, the landlord is squeezing the most from the building, and the tenants are paying the least. Furthermore, a landlord that needs to perform maintenance on the kitchen or bathrooms only has to do it once, and will still satisfy all tenants. This is akin to 'containerization,' where each application isn't just sharing the physical server (the building), it is sharing code that is common across other applications (represented by tenants sharing some rooms). This code can be swapped out quickly across all applications at once, just like the kitchen can be repainted to satisfy all tenants. And if a room is unoccupied, the other tenants can take over that space temporarily, but can also vacate it quickly when a new tenant wants to move in.
The landlord in this case likes the predictability of tenants that have signed a contract to pay rent every month for a term commitment; and the tenants like knowing they have a roof over their head for the foreseeable future. But if there is a steady stream of people who only need a room for a few nights, the landlord might decide to turn the building into a hotel. Here, guests can stay for as long and as short as they need, and can rent as many rooms as they require. Containers can be spun up and consumed for a matter of seconds, to provide immediate capability where needed, but can be rapidly turned down to free up resources for other containers.
Container technology is essentially operating system virtualization -- workloads share operating system resources such as libraries and code. Containers have the same consolidation benefits as any virtualization technology, but with one major benefit: there is less need to reproduce operating system code. Hardware virtualization means each workload must have all its underlying operating system technology. If the operating system takes up 10% of a workload's footprint, then in a hardware-virtualized platform, 10% of the whole asset is spent on operating system code. This is regardless of the number of workloads being run. In the same environment utilizing containers, the operating system only takes up 10% divided by the number of workloads.
In this case, a server runs 10 workloads, but only one operating system in the container environment. In the virtualized environment, the server would be running 10 workloads and 10 operating systems. An application container consists of an entire runtime environment: an application plus all of its dependencies, libraries and other binaries, as well as the configuration files needed to run it, bundled into a virtual container that can run on a variety of infrastructures bare metal, traditional datacenter, virtual environment, or public, private or hybrid cloud. Application containerization represents a method of deploying software that is immune to changes in the underlying computing environment.
The software that actually runs these containers is called a container runtime; examples include Docker, containerd and CRI-O. The Open Container Initiative is an industry project to standardize container runtimes based on Docker, with partners that include AWS, CoreOS, Docker, Google, IBM, HP, Microsoft, VMware, Red Hat and Oracle. As with virtualization, these runtimes are less differentiated and valuable than the orchestration platforms that manage containers across multiple servers, and even clouds.
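As a small illustration of that lifecycle (a sketch, assuming a local Docker daemon and the Docker SDK for Python), the snippet below starts a throwaway container from a public image, captures its output and removes it; the image choice is arbitrary.

```python
# Sketch: run a short-lived container with the Docker SDK for Python
# (pip install docker). Assumes a local Docker daemon is running.
import docker

client = docker.from_env()

# Start a container from a small public image, capture its output, then
# remove it; the whole lifecycle typically takes a second or two.
output = client.containers.run(
    "alpine:3.19", ["echo", "hello from a container"], remove=True
)
print(output.decode().strip())
```

The same image can be launched many times, on any host with a container runtime, without caring what else is installed on that host -- which is exactly the "hotel room" behaviour of the analogy.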
Kubernetes and orchestration
At our hotel is a reception desk, staffed by a receptionist. The receptionist is responsible for keeping track of who is in what room, and allocating free rooms to new guests. Without the receptionist, the hotel would be static and underutilized. In containers, Kubernetes is the receptionist that manages the lifecycle of containers and automates the deployment, scaling and management of containerized applications.
Kubernetes is a standard today for container orchestration, due to its extensibility, powerful configuration options and ability to manage workloads independently of the underlying infrastructure. It is an open source container orchestration system for automating application deployment, scaling and management. Kubernetes was originally designed by Google and is now the flagship project of the Cloud Native Computing Foundation (CNCF). It load balances the application load, just as the receptionist would allocate a block booking across many rooms, and monitors resource consumption so the hotel isn't overbooked. Kubernetes also allows new resources to be added, and can redistribute containers to different hosts should resources struggle.
Kubernetes 'clusters' are groups of machines, referred to as nodes, that are responsible for running containerised applications. A 'pod' is the smallest deployable unit in a cluster: one or more containers deployed together onto a node. Each node in a cluster can run one or more pods. Kubernetes can automate the management of clusters so that they converge on a desired state.
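For a concrete taste of the "receptionist" at work, the sketch below uses the official Kubernetes Python client to inspect pods and declare a new desired replica count. It assumes a reachable cluster and a local kubeconfig, and the deployment name and namespace are hypothetical.

```python
# Sketch with the official Kubernetes Python client (pip install kubernetes).
# Assumes a reachable cluster; "web" and "default" are placeholder names.
from kubernetes import client, config

config.load_kube_config()                  # use the current kubectl context

# List the pods currently scheduled in the default namespace.
core = client.CoreV1Api()
for pod in core.list_namespaced_pod("default").items:
    print(pod.metadata.name, pod.status.phase)

# Declare a new desired state: 5 replicas of the hypothetical "web"
# deployment. Kubernetes then adds or removes pods across nodes until the
# observed state matches the declared one.
apps = client.AppsV1Api()
apps.patch_namespaced_deployment_scale(
    name="web", namespace="default", body={"spec": {"replicas": 5}}
)
```

Note that the caller only states the desired outcome; deciding which nodes host the extra pods is left entirely to the orchestrator, just as guests don't pick their own rooms.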
Many vendors are building Kubernetes compatibility into their own orchestration platforms, including Docker, Red Hat OpenShift, VMware Tanzu, IBM, Rancher Labs, Mirantis and Morpheus Data. Cloud providers, too, have created cloud services with support for Kubernetes, including AWS's Elastic Kubernetes Service, Google Kubernetes Engine, and Microsoft's Azure Kubernetes Services. There are also offerings from IBM, Oracle, Alibaba and OVH. But not everything is Kubernetes based: AWS's Elastic Container Service and Azure Container Instances are just two examples of container platforms that don't fit the mold.
Multicloud
If a receptionist can manage rooms in a hotel, why can't a centralized receptionist handle rooms at a range of locations, balancing capacity and availability across cities far apart? Kubernetes and containers are being touted as an enabler of multicloud deployments they provide a standard mechanism of managing and updating applications, regardless of the underlying infrastructure platform. This can help remove the risk of lock-in, because containers can be moved between venues without needing to be rewritten, and their lightweight nature (due to the sharing of code) means they can be moved from A to B far quicker than heavy-duty virtual machines.
However, it is unlikely we will see all applications being decomposed to containers. Just like the property and leisure markets sustain houses, apartments, hotels and hostels, there will be demand for physical hosts, virtual machines and containers depending on specific requirements. Some enterprises are happy to pay for a whole physical host, knowing it is isolated and has full access to physical resources. Others might prefer to share resources but isolate code, like in a virtual machine, while others might have the appetite to abstract apps as far as possible into containers. Lots of applications are still monolithic, and slimming them down to VM size or breaking them into containers isn't an easy task. And post-pandemic many enterprises will struggle to find the money and the motivation to rebuild applications from scratch, rather than just move them to more powerful servers.
Anyway, it is often the case that physical, virtual and containerized apps run nested and hand-in-hand. Containers can be deployed on virtual machines, providing flexibility with the benefits of isolation. And no one said containers and Kubernetes would be easy. Virtual machines have reached a point of maturity where they are easy to deploy and easy to manage. Right now, containers are a hodgepodge that is best deployed and managed by experts. This will change over time, but for now, the container hotel is open for business -- just make sure your receptionist knows what they're doing.
Managed Servers Market Expected to Witness a Sustainable Growth over 2026 & Key Analysis by Capgemini, TCS, XLHost, Albatross Cloud, Sungard…
This detailed assessment of all the factors and dynamics affecting the global Managed Servers market landscape provides the client with a critical overview and detailed insights for understanding the market. This document is well equipped with the resources and information essential to changing the growth course of an organization in the Managed Servers market.
Key Companies Covered in This Report: Capgemini, TCS, XLHost, Albatross Cloud, Sungard Availability Services, Hetzner, iPage, Viglan Solutions, Atos, LeaseWeb
Download Sample Copy of Managed Servers Market Report: https://www.reportsintellect.com/sample-request/1840084
NOTE: The Managed Servers report has been assessed while contemplating the COVID-19 Pandemic and its impact on the market.
Managed Servers market segmentation:
The Managed Servers market report has been divided into various sub-segments to make it easier to comprehend, thereby increasing productivity. The segmentation adds structure and ease of access to data that can otherwise prove overwhelming.
By Type:
Cloud-Based
On-Premises
By Application:
BFSI
IT & Telecommunication
Education
Government
Retail
Manufacturing
Consumer Goods
Energy & Utility
Others
By Region:
North America
Europe
Asia-Pacific
South America
The Middle East and Africa
Check discount for report @ https://www.reportsintellect.com/discount-request/1840084
Some of The Key Aspects Covered in This Report:
Report Highlights:
About Us:
Reports Intellect is your one-stop solution for everything related to market research and market intelligence. We understand the importance of market intelligence and its need in today's competitive world.
Our professional team works hard to fetch the most authentic research reports, backed with impeccable data figures, which guarantee outstanding results every time for you. So whether it is the latest report from the researchers or a custom requirement, our team is here to help you in the best possible way.
Contact Us:
sales@reportsintellect.com
Phone No: +1-706-996-2486
US Address: 225 Peachtree Street NE, Suite 400, Atlanta, GA 30303