Category Archives: Cloud Servers
Cloud-Native and Kubernetes-as-a-Service – Container Journal
On the road to digital transformation, companies seek a competitive edge that enables them to offer new digital experiences, products and services and to implement offensive and defensive business strategies. But this cannot be accomplished within the delivery timelines of traditional software development processes and technologies. In combination with DevOps, cloud-native offers business leaders both the technologies and the software processes to deliver dynamic software capabilities at much higher velocity and at scale.
Cloud-native is an umbrella term for applications created to take full advantage of the dynamic resources, scaling and delivery of cloud-native architecture: self-contained, independently deployable software components, typically operating on cloud services platforms. Cloud-native is achieved by containerizing smaller microservices, which can be scaled and distributed dynamically as needed.
Using DevOps, microservices packaged in containers can be individually created or enhanced in very rapid delivery cycles. Because of the dynamic nature and complexity of running large numbers of containerized microservices, container orchestration and workload management are required. Kubernetes is the most widely used container orchestration software today.
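To make the orchestration idea concrete, here is a minimal sketch using the official Kubernetes Python client to scale a workload; the deployment name, namespace and replica count are hypothetical placeholders, not anything prescribed above.

```python
# Minimal sketch: scale a Deployment with the official Kubernetes Python
# client (pip install kubernetes). All names below are hypothetical.
from kubernetes import client, config

def scale_deployment(name: str, namespace: str, replicas: int) -> None:
    """Patch a Deployment's replica count - the core orchestration primitive."""
    config.load_kube_config()  # use config.load_incluster_config() inside a pod
    apps = client.AppsV1Api()
    apps.patch_namespaced_deployment_scale(
        name=name,
        namespace=namespace,
        body={"spec": {"replicas": replicas}},
    )

if __name__ == "__main__":
    scale_deployment("checkout", "shop", replicas=5)  # hypothetical workload
```

In a real cluster the same patch is usually driven by a HorizontalPodAutoscaler rather than hand-rolled code; the sketch only shows the API surface that orchestration tooling builds on.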
Companies are shifting their workloads to containers and integrating container orchestration platforms to manage their containerized workloads. Now, workloads might be applications decomposed into microservices inside containers, backends, API servers or storage. To accomplish these tasks, companies may need expert resources and time to implement this transition. The operations team needs to deal with intermittent issues like scaling, upgrades of Kubernetes components and stacks, tracing, policy changes and security.
Kubernetes-as-a-service (KaaS) is a type of expertise and service that helps customers shift to cloud-native, Kubernetes-based platforms and manage the life cycle of Kubernetes clusters. This can include migration of workloads to Kubernetes clusters, and deployment, management and maintenance of Kubernetes clusters in the customer's cloud environment. It mainly covers Day 1 and Day 2 operations while moving to Kubernetes-native infrastructure, along with features like self-service, zero-touch provisioning, scaling and multi-cloud portability.
Companies cannot afford to spend excessive time or money on this transformation since the pace of innovation is so rapid. This is where Kubernetes-as-a-service becomes invaluable to companies; it offers customized solutions based on existing requirements and the scale of the cloud environment while keeping budget constraints in mind. Some of the benefits are:
Security: Deployment of a Kubernetes cluster can be easy once there is an understanding of the service delivery ecosystem and the cloud and data center configuration. But this can leave open avenues for external malicious attacks. With KaaS, we can have policy-based user management, so that users of the infrastructure get the proper permissions to access the environment based on their business needs and requirements. KaaS also provides security policies that can block most common attacks, much as a firewall does.
A typical Kubernetes implementation exposes the API server to the internet, inviting attackers to break into services. With KaaS, multiple security methods can be used to protect the Kubernetes API server.
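As an illustration of what policy-based user management looks like in practice, here is a hedged sketch using the Kubernetes Python client to create a read-only Role and bind it to a single user; the namespace and user name are hypothetical.

```python
# Sketch of the policy-based access control a KaaS platform automates:
# a namespaced read-only Role bound to one hypothetical user.
# Requires: pip install kubernetes
from kubernetes import client, config

config.load_kube_config()
rbac = client.RbacAuthorizationV1Api()

# Role: may only read pods and their logs in the "reporting" namespace.
role = client.V1Role(
    metadata=client.V1ObjectMeta(name="pod-reader", namespace="reporting"),
    rules=[client.V1PolicyRule(
        api_groups=[""],
        resources=["pods", "pods/log"],
        verbs=["get", "list", "watch"],
    )],
)
rbac.create_namespaced_role(namespace="reporting", body=role)

# RoleBinding: grant that Role to a single user, nothing more.
# (The subject class is named V1Subject on clients older than v26.)
binding = client.V1RoleBinding(
    metadata=client.V1ObjectMeta(name="analyst-pod-reader", namespace="reporting"),
    subjects=[client.RbacV1Subject(kind="User", name="analyst@example.com")],
    role_ref=client.V1RoleRef(
        api_group="rbac.authorization.k8s.io", kind="Role", name="pod-reader"),
)
rbac.create_namespaced_role_binding(namespace="reporting", body=binding)
```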
Effective Day 2 operations: This includes patching, upgrading, security hardening, scaling and cloud integration. These are all important as container-based workload management begins to grow. And Kubernetes may still not fit every data center use case, because most best practices are still evolving to keep pace with the rate of innovation.
Additionally, applying containers in infrastructure then results in forward progress instead of backtracking, with predefined policies and procedures that can be customized for each company to meet the ever-changing demands of working with Kubernetes.
Multi-cloud: Multi-cloud is a growing trend wherein containerized applications are portable across different public and private clouds, and access to existing applications is shared across a multi-cloud environment. Here, Kubernetes is useful because developers can focus on building applications without worrying about the underlying infrastructure, since management and portability are provided.
Central management: This gives operations the ability to create and manage Kubernetes clusters from a single management system. An operator has better visibility of all components across clusters and can get continuous health monitoring using tools like Prometheus and Grafana. Operators can upgrade the Kubernetes stack along with the different frameworks used in the setup.
It is also possible to remotely monitor Kubernetes clusters, check for configuration issues and send alerts, as sketched below. Additionally, the operator can apply patches to clusters if there are any security vulnerabilities in the technology stack deployed within them. An operator can reach any pod or container across a network of different clusters.
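A minimal sketch of such remote monitoring, assuming a kube-state-metrics-backed Prometheus server is reachable at a hypothetical internal URL: query its HTTP API for pods failing their readiness check and print an alert.

```python
# Remote cluster health check via the Prometheus HTTP API.
# The Prometheus URL is a hypothetical placeholder.
import requests

PROM_URL = "http://prometheus.example.internal:9090"

def unready_pods() -> list:
    """Return label sets for pods currently failing their readiness check."""
    resp = requests.get(
        f"{PROM_URL}/api/v1/query",
        params={"query": 'kube_pod_status_ready{condition="false"} == 1'},
        timeout=10,
    )
    resp.raise_for_status()
    return [r["metric"] for r in resp.json()["data"]["result"]]

for pod in unready_pods():
    print(f"ALERT: pod {pod.get('pod')} in {pod.get('namespace')} is not ready")
```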
Implementing Kubernetes is not a complete solution in itself; it can create new issues around security and resource consumption. A Kubernetes-as-a-service offering is a breather for companies, large and small, that have already shifted workloads to a containerized model.
Donald Lutz, senior cloud and software architect, Taos, an IBM company, co-authored this piece with Mitch Ashley.
Traefik Announces General Availability of Traefik Hub, First-of-its-Kind Cloud Native Networking Platform – Business Wire
SAN FRANCISCO--(BUSINESS WIRE)--Traefik Labs, creator of the open source Traefik Proxy, today announced the general availability of its new cloud service, available October 26 during KubeCon, which enables networking best practices in minutes and helps eliminate the complexity of managing Kubernetes and Docker networking at scale.
Cloud native emerged in 2015, the same year Traefik Proxy was open sourced. With more than 3 billion downloads and 40,000 GitHub stars, Traefik Proxy has grown alongside the cloud native ecosystem. Traefik Proxy revolutionizes cloud native networking for containerized applications by providing dynamic application-aware traffic management and deep integration with all the major container orchestrators.
Today, as the cloud native ecosystem is becoming more mature and adoption continues to accelerate, organizations still face pressing challenges coming from the multiplication of clusters and the need to extend traffic management up to the edge, to users, and third parties. Traefik Hub, a unified cloud native networking platform, redefines the publication and security of containers to the edge, at scale.
"Traefik Hub fills a gap in the industry for both small standalone teams and large distributed organizations," said Darren Shepherd, chief architect and co-founder of Acorn Labs, and former chief technology officer of Rancher.
Announced as a beta in June and already with over 6,000 users, Traefik Hub is a software-as-a-service (SaaS) platform that integrates with Traefik and Nginx and instantly provides a gateway to services running on any Kubernetes or Docker environment.
"In today's cloud native world, applications are more distributed than ever before, with services running across increasingly heterogeneous environments and a complex technical stack," said Emile Vauge, CEO and founder of Traefik Labs. "That complexity demands a new generation of cloud native networking solutions with greater operational agility and full GitOps readiness."
To learn more about Traefik Hub, visit Traefik Labs at KubeCon, booth S13, or go to https://traefik.io/traefik-hub. The Pro plan promotional launch price is available at $109 per month.
About Traefik Labs
Traefik Labs develops the world's most popular cloud-native application networking stack. Traefik's modern approach to networking helps developers and operations teams of all sizes build, deploy and run modern microservices applications quickly and easily across data centers, on-premises servers and public clouds from the origin to the edge. Used by the world's largest enterprises, Traefik Proxy is one of Docker Hub's top 10 projects, with over 3 billion downloads. Founded in 2016, Traefik Labs is backed by investors including Balderton Capital, Elaia, 360 Capital Partner, and Kima Ventures. For more information, visit traefik.io and follow @traefik on Twitter.
Supermicro Extends Best of Breed Server Building Block Solutions to Include OCP Technologies – HPCwire
SAN JOSE, Calif., Oct. 18, 2022 Supermicro, a Total IT Solution Provider for Cloud, AI/ML, Storage, and 5G/Edge, today announced expanded adoption of key open hardware and open source technologies into the core Supermicro server portfolio. These open technologies unlock innovation across a broad developer and supplier ecosystem and reduce proprietary lock-in.
"Supermicro continues its commitment to deliver best-of-breed solutions, like our 8U 8-GPU AI training system, while integrating key open technologies that unlock innovation and flexibility for our customers," stated Charles Liang, president and CEO of Supermicro. "The solutions are designed to support best-in-class features, including Intel, AMD, or ARM CPUs up to 400W, up to 700W GPUs, and 400 Gbps networking, while supporting open technologies like OpenBMC and Open BIOS, providing open systems that deliver superior performance, efficiency, and TCO."
The new and upcoming 8U 8-GPU Rack Optimized System delivers superior power and thermal capabilities for large-scale AI training and includes a host of open technologies. This system incorporates an OAM universal baseboard design for the GPU complex, with support for an open, OCP ORV2-compliant DC-powered rack bus bar and OCP 3.0-compliant AIOM cards. These open technologies enable future flexibility across multiple server and GPU options and allow the system to be more efficient, with reliable power delivery and additional cooling. The 8U 8-GPU system supports NVIDIA's latest H100 GPUs and provides best-in-class performance with support for up to 400W CPUs and 700W GPUs, up to 8TB of DDR4 memory across 32 DIMMs (with future support for DDR5), up to 6 NVMe all-flash SSDs, and up to 10 dedicated I/O modules. In addition, Supermicro offers an open-standard 5U 10-GPU server that is ideal for NVIDIA Omniverse applications.
Supermicro also expanded the use of OCP 3.0-compliant Advanced IO Module (AIOM) cards, which will provide up to 400 Gbps of bandwidth based on PCI-E 5.0. The open I/O modules are supported on the 8U Universal GPU System, the 1U Cloud DC with dual AIOM expansion slots, and the 2U Hyper and GrandTwin systems featuring next-gen CPUs and AIOM expansion slots.
In addition to the new hardware offerings, Supermicro will offer OpenBMC and Open BIOS (OCP OSF) software solutions for the next generation of Intel, AMD, and ARM-based systems. The Linux Foundation-based OpenBMC implementation enables developers to include new features and extend the existing implementation, while providing the full functionality of the base code, including IPMI 2.0, WebUI, iKVM/SOL, SEL, SSH, and Redfish.
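To show what Redfish support means in practice, here is a small sketch that reads basic system inventory and health from a BMC over Redfish's standard REST API; the BMC address and credentials are hypothetical placeholders.

```python
# Read system model, power state, and health over Redfish.
# Host and credentials below are hypothetical placeholders.
import requests

BMC = "https://10.0.0.42"          # hypothetical BMC address
AUTH = ("admin", "password")       # use a real credential store in practice

def system_health() -> None:
    # /redfish/v1/Systems lists the ComputerSystem resources on this BMC.
    # BMCs often ship self-signed certs; pin/verify them properly in production.
    systems = requests.get(f"{BMC}/redfish/v1/Systems", auth=AUTH, verify=False).json()
    for member in systems["Members"]:
        info = requests.get(f"{BMC}{member['@odata.id']}", auth=AUTH, verify=False).json()
        print(info["Model"], info["PowerState"], info["Status"]["Health"])

system_health()
```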
The new OCP-compliant servers will be demonstrated in the Supermicro booth (C9) at the 2022 OCP Global Summit. Showcased products will include a range of high-performance multi-GPU systems featuring OAM universal baseboards supporting a range of industry-standard form factors, as well as rackmount and multi-node architectures with OCP 3.0-compliant AIOM expansion slots.
To attend the 2022 OCP Global Summit, visit https://www.opencompute.org/summit/global-summit.
In addition, a demonstration of the Open BIOS (OCP OSF) software will be shown in the OCP Experience Center at the OCP Global Summit.
About Super Micro Computer, Inc.
Supermicro is a global leader in Application-Optimized Total IT Solutions. Founded and operating in San Jose, California, Supermicro is committed to delivering first to market innovation for Enterprise, Cloud, AI, and 5G Telco/Edge IT Infrastructure. We are transforming into a Total IT Solutions provider with server, AI, storage, IoT, and switch systems, software, and services while delivering advanced high-volume motherboard, power, and chassis products. The products are designed and manufactured in-house (in the US, Taiwan, and the Netherlands), leveraging global operations for scale and efficiency and optimized to improve TCO and reduce environmental impact (Green Computing). The award-winning portfolio of Server Building Block Solutions allows customers to optimize for their exact workload and application by selecting from a broad family of systems built from our flexible and reusable building blocks that support a comprehensive set of form factors, processors, memory, GPUs, storage, networking, power and cooling solutions (air-conditioned, free air cooling or liquid cooling).
Source: Supermicro
TransUnion Accelerates Cloud-Native Innovation with Red Hat Ansible Automation Platform – Business Wire
CHICAGO--(BUSINESS WIRE)--Red Hat, Inc., the world's leading provider of open source solutions, today announced that TransUnion has expanded its global automation capabilities with Red Hat Ansible Automation Platform to accelerate feature development for customers and migrate to the cloud at scale and velocity. Ansible Automation Platform helps TransUnion consolidate disparate tooling and increase the speed of delivery of new products and services to its customers.
TransUnion is a global information and insights company that makes trust possible in the modern economy by providing a comprehensive picture of each person so they can be reliably and safely represented in the marketplace. TransUnion is a leading presence in more than 30 countries across five continents where they provide solutions that help create economic opportunity, great experiences and personal empowerment for hundreds of millions of people. TransUnion has been using Red Hat solutions to modernize its IT infrastructure and improve IT performance and costs for several years.
Like many customers, TransUnion is expanding its use of Ansible Automation Platform to address growing demands for automation, including cloud migration and provisioning of on-premises and cloud infrastructure. Ansible Automation Platform integrates with platforms that they're already using, like AWS, so developers can automate right away with the skills they already have. Ansible Automation Platform provides a repeatable way to provision and deploy both middleware and applications right to servers in AWS, with an automation framework that provides a level of governance and compliance but still enables teams to provision infrastructure in AWS and deploy applications.
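This is not Ansible itself, but a short sketch of the kind of AWS provisioning call such automation ultimately drives, using boto3 directly; the AMI ID, instance type and tag are hypothetical.

```python
# Launch a tagged EC2 instance - the underlying AWS operation that
# provisioning automation orchestrates. Requires: pip install boto3
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

resp = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",   # hypothetical AMI
    InstanceType="t3.medium",
    MinCount=1,
    MaxCount=1,
    TagSpecifications=[{
        "ResourceType": "instance",
        "Tags": [{"Key": "provisioned-by", "Value": "automation"}],
    }],
)
print("Launched:", resp["Instances"][0]["InstanceId"])
```

Tools like Ansible wrap calls such as this in declarative, repeatable playbooks, which is where the governance and reuse benefits described above come from.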
Red Hat's expert consultants also worked with TransUnion to facilitate a broader culture of automation by integrating communities of practice, tools and processes and breaking down the silos that frequently exist when extending automation across an organization. IT teams now spend less time on repetitive work and more time on their core responsibilities, and no longer need specialized experts to fully realize the value of automation.
Prior to implementing Ansible Automation Platform organization-wide, developer teams in various regions were building bespoke pipelines for automation, using templates that were sparse and duplicated across its ecosystem. With the help of Red Hat Consulting, TransUnion streamlined their underlying automation framework to consolidate tooling and create key template playbooks that are reusable, so teams have a repository of automation content that can be configured in other technical scenarios. As a result, TransUnion shortened its migration pipeline development from months to minutes, reducing the cost of feature development and speeding up the delivery to customers.
TransUnion plans to continue expanding its use of Ansible Automation Platform to further consolidate and simplify its modules and frameworks in the cloud.
Supporting Quotes

Thomas Anderson, vice president, Ansible, Red Hat: "Our customers are operating in increasingly complex IT environments, spanning on-premises datacenters to public and private clouds, that rely on automation to effectively scale. With the organization-wide extension of Red Hat Ansible Automation Platform, TransUnion can accelerate workloads and migrate to the cloud quicker, allowing them to spend more time innovating and delivering new products to their customers."
Vishal Patel, director, GTP, TransUnion: "As we expand our automation use in our technology and across the organization, achieving organizational alignment and interoperability is a top priority for TransUnion. Red Hat has helped us establish a community-based framework for a more collaborative culture, with easily shareable and reusable templates that make automation accessible for various IT teams and lower the barrier to automation. Now, developers are freed up to advance broader strategic and organizational goals."
Ryan Searles, vice president, Global Technology, TransUnion: "Implementing Red Hat Ansible Automation Platform to expand our cloud initiatives enables us to scale and standardize application deployments quickly and efficiently, with consolidated tooling and an underlying foundation for future projects. Now, we can focus on innovation in the cloud, without having to worry about how we're going to get there."
About Red Hat, Inc.
Red Hat is the world's leading provider of enterprise open source software solutions, using a community-powered approach to deliver reliable and high-performing Linux, hybrid cloud, container, and Kubernetes technologies. Red Hat helps customers integrate new and existing IT applications, develop cloud-native applications, standardize on our industry-leading operating system, and automate, secure, and manage complex environments. Award-winning support, training, and consulting services make Red Hat a trusted adviser to the Fortune 500. As a strategic partner to cloud providers, system integrators, application vendors, customers, and open source communities, Red Hat can help organizations prepare for the digital future.
Forward-Looking Statements
Except for the historical information and discussions contained herein, statements contained in this press release may constitute forward-looking statements within the meaning of the Private Securities Litigation Reform Act of 1995. Forward-looking statements are based on the company's current assumptions regarding future business and financial performance. These statements involve a number of risks, uncertainties and other factors that could cause actual results to differ materially. Any forward-looking statement in this press release speaks only as of the date on which it is made. Except as required by law, the company assumes no obligation to update or revise any forward-looking statements.
Red Hat, Red Hat Enterprise Linux, the Red Hat logo and Ansible, are trademarks or registered trademarks of Red Hat, Inc. or its subsidiaries in the U.S. and other countries. Linux is the registered trademark of Linus Torvalds in the U.S. and other countries.
S&P Global migrates from Oracle EBS on premise to Fusion ERP in the cloud – Diginomica
Stock market index and financial data company S&P Global has migrated its on-premise Oracle E-Business Suite (EBS) to Oracle's Fusion ERP cloud-based platform, in a bid to streamline reporting and establish global consistency. The company decided to make the move as it faced end-of-life support for EBS, but the project has resulted in S&P consolidating on Oracle systems and pursuing a more cloud-focused strategy.
The migration also took place during the COVID-19 pandemic, during which the S&P team never actually met its implementation partner KPMG in real life, and the project was completed one month ahead of schedule.
Speaking at Oracle CloudWorld in Las Vegas this week, Christopher Craig, SVP Corporate Controller and Chief Accounting Officer at S&P Global, said that the 22,000-employee company had been using the Oracle EBS reporting environment for approximately a decade. The organization operates out of 180 legal entities, across 37 countries, with 64 ledgers. Craig said:
We had grown a lot in the last 10 years, we sold four major businesses in the portfolio and then we acquired somewhere around 15 companies over the same period. And the platform that we were operating on - Oracle EBS - we knew we had to introduce a new chart of accounts.
Our consistent reporting across all of our businesses was growing increasingly challenging. So we looked at doing a full Chart of Accounts remapping. If we had done it on Oracle EBS, which was reaching end of life on January 1st 2022, we were probably going to end up paying premium support.
And then we were going to spend a higher portion on maintenance responsibilities, versus the constant innovation that would come with an Oracle product that's not at end of life.
S&P weighed up the pros and cons of sticking with its Oracle EBS on-premise infrastructure versus going with Fusion, which Craig said had a more modern infrastructure.
Luckily, S&P had maintained a pretty well structured and integrated system with EBS, following its acquisitions and across its multiple territories. However, across all of its divisions, which was part of the legacy challenge, there were slightly different reporting structures. Craig said:
We had diverse billing platforms for each of the businesses. We'd streamlined our businesses a lot over the years, but there were still differences in terms of reporting and roll-up. So a consistent end-to-end view of natural class reporting was increasingly challenging.
To start, S&P went through and remapped its entire Chart of Accounts. Craig said that going through this process means conforming your company's reporting structures across the entire business, which aids consistent reporting and processes. He added:
We'd been working towards this, but this was an opportunity to take a fresh cut at that. Also, our financial reporting tool was a legacy product from IBM, and that was more or less like the lifeblood of the company. So we moved out of that to Oracle EPM. From an architecture perspective that was completely different and we had to really evolve how our reporting was performed across the organization.
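As a toy illustration of what a chart-of-accounts remapping amounts to in code (not S&P's actual process), consider mapping legacy divisional account codes onto one conformed chart; all codes here are invented.

```python
# Toy chart-of-accounts remap: every legacy code maps onto one conformed
# code so all divisions roll up the same way. Codes are hypothetical.
LEGACY_TO_CONFORMED = {
    "5010-US": "OPEX-TRAVEL",    # US division travel expense
    "5110-EU": "OPEX-TRAVEL",    # EU division used a different code for the same thing
    "6200-US": "OPEX-SOFTWARE",
}

def remap(entries: list) -> list:
    """Rewrite journal entries onto the conformed chart of accounts."""
    return [{**e, "account": LEGACY_TO_CONFORMED[e["account"]]} for e in entries]

journal = [{"account": "5010-US", "amount": 1200.00},
           {"account": "5110-EU", "amount": 800.00}]
print(remap(journal))  # both lines now post to OPEX-TRAVEL
```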
The project started shortly before the COVID-19 pandemic kicked in, with the implementation beginning in May 2020. Craig said it was nerve-wracking, given that the implementation was carried out remotely and it was a new way of working.
However, S&P completed its go live one month early, with the new Fusion ERP operating from 1st January 2022. Two months later it also completed a $44 billion acquisition, in March 2022, which was successfully brought into the system.
Craig noted that whilst the journey to a cloud-based ERP wasn't exactly an exciting prospect for S&P, it forms an essential part of the organization's digital plans. He said:
Nobody was necessarily thrilled at the prospect of a 16-month endeavour and a multimillion-dollar project that would dominate 200 people's time, with multiple vendors, over the course of other projects. I think there was a lot of hesitation.
We aren't in the business of implementing ERPs. And we were going into new technology we weren't familiar with, into the cloud. Going from EBS on premise, where you are completely in control, to a somewhat more standardized out-of-the-box solution was daunting. You're kind of ceding control and building a different kind of partnership with Oracle that you didn't have before.
But for the last ten years the company has evolved. In addition to all the acquisitions, we've rationalized our real-estate footprint to pursue a more asset-light strategy. We've moved from on-premise servers to a combination of AWS and Oracle to come up with a hybrid cloud strategy. This is all part of pursuing a more nimble operating model.
Craig said the move to the cloud has resulted in a range of changes taking place, from sunsetting applications to adopting new workflows. But what's evident is that consolidation onto Oracle systems has been a priority. He said:
We used to do all the patches on our own; now the patches are done by Oracle. We moved out of our AWS data warehouse to an Oracle autonomous warehouse. Our third-party feeds used to use Informatica; now we use Oracle. We got rid of IBM for reporting and now we use Oracle. So we've completely changed our architecture around how we get things done.
And we are still able to achieve a four day close.
Craig added that another key priority has been getting the teams comfortable with the new reporting systems and tools, so that they have the same level of confidence that they had with the legacy on-premise system. He said:
We managed the change and the transition okay, but now we want to create trust in the platform and the solution. That belief that we can be as nimble.
The implementation strategy used SAFe agile as part of the rollout, which builds user testing into the fabric of the project: users test as they go along in the new environments. For Craig, this is the key piece of advice from the project, ensuring that the team remained enthusiastic. He said:
It's the governance. You've got to have a team that believes in the vision and that's behind the vision. The most important thing is, what do you do to keep the team motivated, excited, focused and enthusiastic for 16 months? And that's the governance and accountability and alignment of success to the project itself.
There's going to be a tonne of problems, a tonne of pitfalls; no solution is perfect, every problem is different. But you've got to keep a team that's motivated to work through the solutions and drive success. That comes down to having the right governance structure that rewards behaviours, replicates success and identifies failures. Get that right and you can probably get any solution right.
The new essentials: IT budgets – TechRadar
When the pandemic changed work as we know it, IT spending fundamentally changed as well. As organizations continue balancing their remote and in-person workforce, it is up to IT leaders to rethink their company's approach to infrastructure and spending.
Cloud investments grew significantly in 2020 due to the new reality of virtual connections, and many experts believe cloud spending will only continue to grow going forward via such markets as the best cloud storage and best cloud hosting. At the same time, the remote workforce highlighted many enterprises' lack of secure, remote security systems and networks.
Now more than ever, IT teams need to focus on creating IT budgets that take into account the need for new essential initiatives.
Cloud investments have risen steadily over the last decade. Prior to the pandemic, 2020 was already a milestone year for the cloud, as the BVP Nasdaq Emerging Cloud Index market cap crossed $1 trillion that February, and aggregate SaaS and IaaS revenue both topped $100 billion.
When remote work accelerated in 2020, cloud investments kept growing as more workers than ever collaborated and accessed private data and information virtually. As shelter-in-place orders and quarantining led to a new remote-work normal, cloud software services had a record 2020 - Zoom's stock was up 293 percent over the summer, and AWS and Microsoft Azure had strong summer outings.
The pandemic accelerated what was widely understood: most companies are realizing that it's essential to update operations and technology to best equip the modern workforce. As companies manage and distribute private data across networks, it's important to invest in a cloud service that will distribute data safely and effectively.
Having the ability to access the cloud - and your data - from anywhere should be reason enough, but having cloud services that will securely store and distribute your information is essential in today's remote world. But how much cloud investment is too much investment?
The corporate shift to the cloud because of remote work also accelerated the amount companies spent on cloud infrastructure services. In 2019, businesses spent an estimated $96.4bn on cloud infrastructure services, surpassing their in-house data center hardware and software spending for the first time.
IT leaders believe that companies spent the equivalent of around $15 billion extra a week on technology during the first few months of remote work to enable safe and secure home working. To avoid bill shock, I recommend companies and financial departments take the time to understand their contracts with cloud service providers - they may not look the same as they did last year.
I recommend talking to your cloud provider about ways to contain costs, but also preparing budgets for cloud investments in the new year.
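One hedged sketch of how a team might watch for bill shock programmatically, assuming AWS as the provider: pull last month's spend per service from the Cost Explorer API with boto3.

```python
# Per-service spend for one month via AWS Cost Explorer.
# Requires: pip install boto3 (and Cost Explorer enabled on the account).
import boto3

ce = boto3.client("ce", region_name="us-east-1")

resp = ce.get_cost_and_usage(
    TimePeriod={"Start": "2021-01-01", "End": "2021-02-01"},
    Granularity="MONTHLY",
    Metrics=["UnblendedCost"],
    GroupBy=[{"Type": "DIMENSION", "Key": "SERVICE"}],
)

for group in resp["ResultsByTime"][0]["Groups"]:
    service = group["Keys"][0]
    cost = float(group["Metrics"]["UnblendedCost"]["Amount"])
    if cost > 0:
        print(f"{service}: ${cost:,.2f}")
```

Feeding a report like this into a monthly budget review is one simple way to catch a contract or usage change before it becomes a surprise invoice.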
When most companies moved to remote work in March, it became clear that many did not have a robust security plan to protect their data. Malicious activity has seen a massive spike, with hackers across the world breaking into stressed IT systems that were not built to withstand so many network log-ins.
While the pandemic is heightening the need for shared cloud environments, migration to the cloud can also increase risk. As such, cloud migration introduces the need for vulnerability assessments and management.
According to Trustwave, cloud services have become the third-most targeted environment by cybercriminals, accounting for 20 percent of investigated incidents in 2019, up from seven percent in 2018. Cyber resilience is thus top of mind for enterprises as they bring this technology, which is susceptible to attack, into the fold.
It's important for businesses to implement proper cloud security to prevent a potential organizational breach. Unfortunately, the pandemic served as a wake-up call for many IT teams who learned - the hard way - that a remote security strategy is prudent. Now, security has become many CIOs' top priority. In fact, in a survey conducted by InformationWeek, 47 percent of IT leaders said that security and privacy is the most important technology investment for their teams in 2021 and beyond.
One of the easiest and most cost-effective ways to prioritize security is to keep your IT teams educated, well-prepared, and focused. Help them by adding yearly or biannual training sessions - sharing threat knowledge and providing tips and tricks for how to thwart potential attacks.
Doubling down on security also entails incorporating privacy impact assessments into your organizational structure, using Center for Internet Security (CIS) hardening guidelines for your servers and infrastructure, and bringing in tools that help your organization think smarter, not harder, about security.
It's also important to invest in VPNs, VDI solutions, and privileged-user jump hosts. As employees work from at-home WiFi networks, it has become key to ensure they are actively connected to a corporate VPN when accessing the cloud, to eliminate any risk from potentially unsecured systems.
Remember that most employees did not set up their at-home WiFi networks with the knowledge that they would be using them for more than personal usage. While the networks are secure, they are certainly not set up to host, protect, and secure the organization's data. This is why the best VPN services for corporations are essential, and recommended to every IT team looking to secure their organization's data assets.
With security and cloud taking precedence, it's important for IT teams to re-evaluate the long-term plans they may have previously set for the coming years. While every company's individual priorities will be different, organizations should take a firm look at their upcoming IT budget to ensure efficient and secure remote work takes precedence over other activities that may be less pressing.
While it may be difficult for IT teams to focus budget only on secure remote work, organizations need to make sure to move as many non-essential initiatives as possible to 2022 or beyond. Those can wait; secure systems cannot.
While the pandemic has certainly been unprecedented, it's created a new IT reality around flexibility and resilience. Being flexible amidst future disruption is critically important and will ensure that data, assets and personal information cannot be compromised.
No one can predict what this year or beyond will look like, but with an updated IT budget and investments in place, organizations will be ready for any challenge or crisis that comes up.
Top identity and access trends and challenges when moving to the cloud – SC Media
As your organization undergoes its digital transformation, you'll want to include your identity and access management (IAM) systems among the systems that get migrated to the cloud. However, there are certain challenges that you'll need to consider before, during and after your cloud migration.
For most organizations, there are clear benefits in moving IAM systems to the cloud. Not only should total cost of ownership be reduced as staffers spend less time on maintaining data-center infrastructure, but identity-based applications should work more smoothly for staffers and customers alike.
"There's an increased demand in the customer-identity use case, i.e., customers who can quickly log in and sign out of smooth experiences as they can on Netflix or Amazon," said Yev Koup, a senior product marketing manager at Ping Identity. "Other companies want as smooth and convenient an experience for their own customers."
Koup also sees a trend toward greater demand for smooth and easy integration in the IT space, where IAM systems must work well with other applications and technologies.
"There's invisible networking that has to exist, such as with Microsoft Active Directory," he told us. "We're also seeing demand for increased orchestration capabilities."
That said, should you move your IAM systems to the cloud? There are several issues to consider before you even decide to begin your cloud migration.
"Financial institutions often don't want their data happening in the cloud," Koup said. "They want to be first rescuers if something goes wrong, and they have teams that are very capable of doing this. Governments also often want to keep things on-premises."
"Usually for larger organizations, this is a big project," Koup told us. "They need a process and a migration path and plan. Companies like Ping can help. Partially it's a fear of the unknown it helps to get a partner involved."
Once your company has weighed all those factors and decided to migrate its IAM systems to the cloud, then there are other decisions to make before the migration begins.
"Security's generally a little bit higher when you're in the cloud, even though you are relying on a third-party vendor," said Koup. "There are risks, but the risks of being in the cloud are lower than if you have to manage the configuration on your own premises."
"The nice thing about orchestration is that as things continue to change for the business, you can update back-end components a lot quicker and on the fly," Koup told us, adding that Ping's own Da Vinci orchestration software "lets you preview and see end-user experience before you deploy."
After the migration process begins, do it the right way:
"Work closely with your identity or cloud providers," said Koup. "Phase in applications slowly and work with the vendor's professional services team. It costs more to do it this way, but it will be a smoother process."
There are also challenges that may not present themselves until after the IAM cloud migration is complete, and you need to prepare for them.
Despite all these potential pitfalls, Ping Identity's Koup thinks it's well worth moving IAM systems to the cloud.
"You'll get greater scalability and resiliency," he told us. "There will be a lot less behind-the-scenes work to integrate new functions and features, which will improve your total cost of ownership. You may pay higher licensing fees [for your IAM solution], but you'll have less expenditure on manpower or infrastructure or scaling."
Meta shares latest hardware – you can't wear it on your face, so don't panic – The Register
At the 2022 Open Compute Project (OCP) Global Summit on Tuesday, Meta introduced its second-generation GPU-powered datacenter hardware for machine learning and inference – a system called Grand Teton.
"We're excited to announce Grand Teton, our next-generation platform for AI at scale that we'll contribute to the OCP community," said Alexis Bjrlin, VP of engineering at Meta, in a note to The Register. "As with other technologies, weve been diligently bringing AI platforms to the OCP community for many years and look forward to continued partnership."
Tuned for fast processing of large scale AI workloads in datacenters, Grand Teton boasts numerous improvements over its predecessor Zion, such as 4x the host-to-GPU bandwidth, 2x the compute and data network bandwidth, and a 2x better power envelope.
Meta's Grand Teton all-in-one box
Where the Zion-EX platform consisted of multiple connected subsystems, Grand Teton unifies those components in a single hardware chassis.
According to Björlin, Zion consists of a CPU head node, a switch sync system, and a GPU system, all linked via external cabling. Grand Teton is a single box with integrated power, compute, and fabric interfaces, resulting in better performance, signal integrity, and thermal performance. The design supposedly makes datacenter integration easier and enhances reliability.
Grand Teton has been engineered to better handle memory-bandwidth-bound workloads like deep learning recommendation models (DLRMs), which can require a zettaflop of compute power just to train. It's also optimized for compute-bound workloads like content understanding.
In the hope that someone wants to view its datacenter blueprints using the VR goggles it sells, Meta has created a website to host 3D models of its hardware designs, metainfrahardware.com. The biz is focused on pushing Metaverse, a galaxy of interconnected virtual-reality worlds, accessible using VR headsets.
OCP was founded in 2011 by Facebook, which reorganized last year under a parent company without scandal baggage called Meta. OCP aims to allow large consumers of computing power to share hardware designs for datacenter servers and related equipment optimized for enterprise and hyperscale work. OCP essentially allowed Facebook, Google, and others in the cloud to specify exactly the boxes they wanted, and have contract manufacturers turn them out on demand, rather than have server vendors dictate the designs. The project has since widened its community.
That means OCP is a collection of open specifications, best practices, and other things that people can follow or tap into if they want to build out interoperable gear or take inspiration from the cloud giants. The contributed designs are useful or interesting in seeing where the big players are headed in terms of their datacenter needs, and what design decisions are being taken to achieve the scale they want.
OCP's market impact has been fairly modest: companies spent more than $16 billion on OCP kit in 2020 and that figure is projected to reach $46 billion by 2025. The total datacenter infrastructure market is expected to be about $230 billion in 2025.
Meta is also talking up Open Rack v3 (ORV3), the latest iteration of its common rack and power architecture, which aims to make deploying and servicing rack-mounted IT gear easier. ORV3 features a power shelf that can be installed anywhere in the rack.
"Multiple shelves can be installed on a single busbar to support 30kW racks, while 48VDC output will support higher power transmission needs in the future," said Bjrlin in a blog post due to go live today. "It also features an improved battery backup unit, upping the capacity to four minutes, compared with the previous model's 90 seconds, and with a power capacity of 15kW per shelf."
ORV3 has been designed to accommodate assorted liquid cooling strategies, such as air-assisted liquid cooling and facility water cooling.
"The power trend increases we're seeing, and the need for liquid cooling advances, are forcing us to think differently about all elements of our platform, rack and power, and data center design," explained Bjrlin.
Global eClinical Solutions Market Garnered around USD 7.5 Billion in 2021 and Expected to Grow at ~15% CAGR during 2022-2031; Government Initiatives…
Kenneth Research
Key Companies Profiled in the Global eClinical Solutions Market Research Report by Kenneth Research are DATATRAK International, Inc., Oracle Corporation, Signant Health, Dassault Systèmes S.E., eClinical Solutions LLC, IBM Corporation, Veeva Systems Inc., Mednet, Saama Technologies, Inc., and Parexel International Corporation, among others.
New York, Oct. 18, 2022 (GLOBE NEWSWIRE) -- Kenneth Research has published a detailed market report on the Global eClinical Solutions Market for the forecast period, i.e., 2022-2031 which includes the following factors:
Market growth over the forecast period
Detailed regional synopsis
Market segmentation
Growth drivers
Challenges
Key market players and their detailed profiling
Global eClinical Solutions Market Size:
The global eClinical solutions market gathered approximate revenue of USD 7.5 billion in 2021 and is estimated to grow at a CAGR of ~15% over the forecast period. Government initiatives to encourage clinical trials and research are responsible for the expansion of the market. Moreover, increasing investments in R&D by numerous pharma & biotech businesses are anticipated to contribute to the market growth. According to the European Federation of Pharmaceutical Industries and Associations (EFPIA), the value of exports from the pharmaceutical business in Europe alone was around USD 469,450 million in 2018. Additionally, it is predicted that one of the key development factors for the market will be the increasing usage of software solutions by pharmaceutical businesses. In addition, the health sector's extensive data collection is thought to be a key factor in the market growth. Over 75% of the pharma sector clients polled believe that in 2022, smart technologies such as artificial intelligence (AI), machine learning (ML), and natural language processing (NLP) would have the greatest impact on drug development.
Get a Sample PDF of This Report @ https://www.kennethresearch.com/sample-request-10116988
Global eClinical Solutions Market: Key Takeaways
North American region gains the largest portion of the revenue
The cloud segment to influence the revenue graph
The pharma & biotech segment retains a sizable presence in the market
Rising R&D Investments, Government Funding, and Growing Medicaments Trade to Boost the Market
Some of the key reasons propelling the global eClinical solutions market include new regulations, government funding to assist clinical trials, and rising demand for pharma-biotech businesses to invest more in R&D for drug development. For instance, the UK government's net spending on research and development (R&D) increased by USD 2.0 billion from 2019 to a record high of USD 18.77 billion in 2020. Electronic data capture, clinical trial management systems, which frequently use electronic patient diaries, and other applications are all included in eClinical solutions. The Indian ministry of science and technology has been given a budget of over USD 147 billion for R&D activities in 2021.
Furthermore, the growing export value for medical drugs further drives market growth. For instance, the global export value for medicaments was USD 9,757,494 thousand in 2020.
Global eClinical Solutions Market: Regional Overview
The global eClinical solutions market is segmented into five major regions: North America, Europe, Asia Pacific, Latin America, and the Middle East and Africa.
Browse to access an in-depth research report on the Global eClinical Solutions Market with detailed charts and figures: https://www.kennethresearch.com/report-details/eclinical-solutions-market/10116988
Increasing Government funding for Clinical Trial Support to Fuel the North American Market
The market in North America had the biggest market share in 2021. The demand for eclinical solutions in this region is anticipated to increase as a result of rising government funding for clinical trial support, ongoing product development, and new product releases by vendors of eclinical solutions as well as an increase in the number of collaborations for novel drug development. The National Institute of Health (NIH) funded clinical research at a cost of about USD 17 billion in the fiscal year (FY) 2020.
Rising Chronic Diseases and Increasing Government Funding to Propel the Asia Pacific Market Growth
On the other hand, the market in the Asia Pacific region is anticipated to experience significant growth by expanding at a higher CAGR over the forecast period as a result of rising chronic diseases such as cancer, cardiovascular conditions, and other infectious diseases, as well as an aging population and growing medical needs. The Asia Pacific region alone accounts for 60% of the world's population, according to the United Nations Population Fund (UNFPA). A sizable fraction of the Chinese population had chronic illnesses in 2020: overweight or obesity affected more than 525 million people, and high blood pressure affected more than 410 million people. In China, chronic diseases were responsible for over 89 percent of deaths in 2019. Government programs for drug discovery and research, as well as investments in clinical infrastructure, are also predicted to play a significant role in the expansion of the market in the region. For instance, research and development expenditure in China was 2.4% of GDP in 2020, according to World Bank data.
The study further incorporates Y-O-Y growth, demand & supply and forecasts future opportunities in:
North America (U.S., Canada)
Europe (U.K., Germany, France, Italy, Spain, Hungary, Belgium, Netherlands & Luxembourg, NORDIC [Finland, Sweden, Norway, Denmark], Poland, Turkey, Russia, Rest of Europe)
Latin America (Brazil, Mexico, Argentina, Rest of Latin America)
Asia Pacific (China, India, Japan, South Korea, Indonesia, Singapore, Malaysia, Australia, New Zealand, Rest of Asia Pacific)
Middle East and Africa (Israel, GCC [Saudi Arabia, UAE, Bahrain, Kuwait, Qatar, Oman], North Africa, South Africa, Rest of Middle East and Africa).
Global eClinical Solutions Market, Segmentation by Delivery Mode
The cloud segment is predicted to hold the biggest market size in value and is estimated to grow at a significant CAGR over the forecast period. The preservation of client data on cloud servers, rather than on internal servers at the facility, is one benefit of cloud solutions that is expected to boost the expansion of this market. Over 2.5 billion users were utilizing the Google Drive storage service in 2020, and over 120 zettabytes of data are predicted to be in the cloud by 2025. Google Drive is currently the most prominent cloud storage platform in the world with a usage rate of 95.5 percent; Dropbox comes in second with a still impressive 64 percent, followed by OneDrive with 40 percent and iCloud with almost 38 percent. Furthermore, convenient customer access to data via the internet, requiring only computer hardware and an internet connection, is projected to contribute to the growth of the segment.
Global eClinical Solutions Market, Segmentation by End-Users
The pharma & biotech segment is predicted to hold a substantial market share over the forecast period. The pharmaceutical sector generates a variety of innovative products that provide valuable medicinal benefits. In 2019, the pharmaceutical sector spent USD 83 billion on research and development, as per the Congressional Budget Office. In addition, the number of new pharmaceuticals licensed for sale climbed by 60 percent over 2010-2019 compared with the prior decade, peaking at 59 new drug approvals in 2018. Increased sales of new drugs and smart technologies, as well as increased spending by the pharmaceutical industry on R&D, boost the growth of the segment.
Global eClinical Solutions Market, Segmentation by Product Type
Clinical Data Management Systems (CDMS)
Electronic Data Capture (EDC)
Clinical Trial Management Systems (CTMS)
Electronic Clinic Outcome Assessment (eCOA)
Randomization and Trial Supply Management (RTSM)
Electronic Patient-Reported Outcome (ePRO)
Electronic Trial Master File (eTMF)
Clinical Analytics Platform
A few of the well-known market leaders in the global eClinical solutions market profiled by Kenneth Research are DATATRAK International, Inc., Oracle Corporation, Signant Health, Dassault Systèmes S.E., eClinical Solutions LLC, IBM Corporation, Veeva Systems Inc., Mednet, Saama Technologies, Inc., and Parexel International Corporation.
Recent Developments in the Global eClinical Solutions Market
On 01 March 2021, a collaboration between Saama Technologies, Inc. and Oracle Corporation was made public to integrate with the Oracle Health Sciences Clinical One Platform. The alliance aims to give pharmaceutical businesses insights powered by artificial intelligence (AI).
On October 20, 2021, Parexel International Corporation and Kyoto University Hospital jointly announced a strategic partnership to expand the potential for clinical research and to provide successful clinical trials. The alliance intends to provide patients and pharmaceutical companies with improved clinical trial benefits.
Browse More Related Reports:
Veterinary Ultrasound Market Segmentation by Product (Portable, Cart Based, and Software Ultrasound Scanners); by Animal Type (Large, and Small Animal); by Type (2-D, 3-D, and Other Ultrasound Imaging); and by End-Use (Veterinary Hospitals, and Clinics)-Global Demand Analysis & Opportunity Outlook 2031
Intracranial Pressure (ICP) Monitoring Market Analysis by Application (Traumatic Brain Injury, and Subarachnoid Hemorrhage); by End Users (Hospitals, Clinics, and Trauma Centers); and by Technique (Invasive, and Non-Invasive Monitoring)-Global Supply & Demand Analysis & Opportunity Outlook 2022-2031
Integrated Operating Room Management Systems Market Segmentation by Component (Services, and Software); by Type (Audio & Video Management System, Anesthesia Information Management, Documentation Management System, Instrument Tracking System, and Others); and by End-Use (Ambulatory Surgical Centers, and Hospitals)-Global Demand Analysis & Opportunity Outlook 2031
Facial Injectors Market Segmentation by End-User (Hospitals, Cosmetic Centers, and Dermatology Clinics); by Application (Facelift, Wrinkle Reduction, Lip Enhancement, Acne Scar Treatment, and Others); and by Fillers Type (Collagen, Dermal, Polymer, Synthetic, Hyaluronic Acid, and Others)-Global Demand Analysis & Opportunity Outlook 2031
Injectable Drug Delivery Devices Market Analysis by Product Type (Conventional, and Self-Injection Devices); by End Users (Hospital, Homecare, Clinics, and Others); and by Application (Curative Pattern, and Immunization)-Global Supply & Demand Analysis & Opportunity Outlook 2022-2031
About Kenneth Research
Kenneth Research is a leading service provider for strategic market research and consulting. We aim to provide unbiased, unparalleled market insights and industry analysis to help industries, conglomerates and executives take wise decisions for their future marketing strategy, expansion, investment, and more. We believe every business can expand to its new horizon, provided the right guidance is available at the right time through strategic minds. Our out-of-the-box thinking helps our clients take wise decisions so as to avoid future uncertainties.
Contact for more Info:
AJ Daniel
Email: info@kennethresearch.com
U.S. Phone: +1 313 462 0609
How SMEs can get to grips with digital transformation – The Manufacturer
As an SME it is always important to recognise where your data is coming from. And, as companies grow, there is an exponential growth of data from various sources.
As part of their keynote at the upcoming SME Growth Summit, taking place as part of Digital Manufacturing Week, Rimsha Tariq, Continuous Improvement & Digital Transformation Technician, and Peter Lai, Continuous Improvement & 4IR Manager, NGF Europe (NGFE), will discuss digital transformation in manufacturing from the point of view of SMEs and the importance of cloud technology, IoT and Big Data. We caught up with them for a sneak preview.
PL: The premise will be on how NGF Europe, which is part of the NSG Pilkington Group, has gone from having zero smart digital technology infrastructure to having real-time dashboarding that has ultimately helped us improve production, operational efficiency and reduce energy consumption and waste.
RT: We'll also go into how an SME with a small team and limited resources can still really home in on becoming a smarter factory.
RT: Cloud computing refers to the access of data or information located in a virtual space, as opposed to the use of local servers or computers. IoT is hardware that's been made smart; five key components make up an IoT device.
Big Data then refers to larger, more complex datasets; the concept allows access to databases in real time. Put simply, IoT is the source of the data, Big Data is the analytic platform for the data, and cloud computing is the location for storage, scale and speed of access.
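A minimal sketch of the IoT-to-cloud leg of that pipeline: a device publishing a sensor reading over MQTT to a broker, where a Big Data platform can pick it up. The broker host, topic and reading are hypothetical.

```python
# Sensor-to-cloud sketch over MQTT. Requires: pip install paho-mqtt
import json
import time

import paho.mqtt.client as mqtt

# On paho-mqtt >= 2.0, construct with mqtt.Client(mqtt.CallbackAPIVersion.VERSION2).
client = mqtt.Client()
client.connect("broker.example.internal", 1883)  # hypothetical broker

for _ in range(3):  # a real device would loop indefinitely
    reading = {"sensor": "line1-temp", "celsius": 61.4, "ts": time.time()}
    client.publish("factory/line1/temperature", json.dumps(reading), qos=1)
    time.sleep(60)  # one reading per minute

client.disconnect()
```

On the other side of the broker, a cloud pipeline subscribes to the same topic, lands the readings in storage, and feeds the analytics described above.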
There's money to be saved in not having to buy physical servers and in becoming more efficient purely by looking at the data you already have. And again, it really helps bring a focus on sustainability via waste reduction. The relationship between cloud computing, IoT and Big Data creates a unique opportunity for businesses to become proactive by working smarter, not harder.
PL: A lack of workable infrastructure. At NGFE, we had bits and pieces of data coming from a variety of different sources, whether that was an SQL server within our corporate network, Excel data, or more importantly, industrial networks that included PLC data, etc.
How you go about linking that all together is a big problem. Usually, businesses are looking for short-term solutions, and how a problem is going to get fixed now. When we were setting up a factory many years ago with SCADA, we had SCADA, SAP and QA systems all in different places, which made everything much more difficult from an infrastructure perspective. Therefore, it's key to take a step back and look at the bigger picture and end goal. Then, work backwards from there: set your KPIs and then see how you're going to go about achieving them.
PL: It's all too easy for too much data to be collected. First and foremost, there's no value in collecting data which you're not going to use. Crucially, what's changed is that the cost of storing data is much lower, so it's easier for data to be stored, and this is being driven by cloud computing.
For example, we are currently in proof-of-concept for a service, available in the AWS cloud, that uses AI and ML to analyse huge amounts of data, looking for statistically significant changes.
So, it's easy to get bogged down with the sheer volume of data that's available; it's too much for someone to look at physically. But when you've got a service doing the same thing, then that task is one that manufacturers can let go of. So the solution is using intelligent services to look at huge amounts of data and pinpoint when there are statistically significant differences.
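A sketch of the sort of statistical check such a service runs under the hood: compare a recent window of sensor readings against a baseline window and flag a significant shift. The data here is illustrative only.

```python
# Flag a statistically significant shift between two windows of readings
# with a two-sample t-test. Requires: pip install scipy
from scipy import stats

baseline = [61.2, 61.5, 60.9, 61.4, 61.1, 61.3, 61.0, 61.6]  # e.g. last week
recent   = [62.8, 63.1, 62.9, 63.4, 63.0, 62.7, 63.2, 63.3]  # e.g. today

t_stat, p_value = stats.ttest_ind(baseline, recent)

if p_value < 0.01:
    print(f"Significant shift detected (p = {p_value:.4f}) - investigate the line")
else:
    print(f"No significant change (p = {p_value:.4f})")
```

A production service would run checks like this continuously over rolling windows, which is exactly the "too much for someone to look at physically" work that gets delegated.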
RT: For us, it was about understanding what was relevant to the business; cutting down the noise and focusing on what really matters. Not everything can be fixed with a sensor, and not all data is useful. Building dashboards and putting the architecture in place takes time; hence why we take our proof-of-concept approach where we can carry out experiments on a small scale, and play around and test as much as we can.
If it gives us something tangible, only then will we implement it on a larger scale. As an example, weve got digital projects in place for the next couple of years that we want to roll out across different areas of the business. We put CAPEX in place for only small projects.
Whether something is going to give tangible results is an important point for SMEs to remember. That was a hurdle for us to start with because we were just excited about having sensors across our site. So, it was important for us to focus on what is of real value.
PL: There's lots of different technologies out there, but what's important is whether it actually adds to the business. There's a lot of hype surrounding digitalisation, Industry 4.0 and IoT, but what is it that's going to deliver value? Once that can be seen to be having an impact, then a business can start to map what else might be required.
PL: The future for SMEs is about how they can become proactive, rather than reactive as a production site; making stepped improvements through the use of technology. And the technology is now cheap enough to allow access to even some of the smallest companies.
RT: IoT, cloud and Big Data are a key part of digital transformation, and SMEs need to make them part of their journey too. They shouldn't be scared of larger companies with big budgets. Something else that we'll cover at the SME Growth Summit is that we have a very small budget and an extremely small team. So, although it's a journey that can be daunting to start with, it's not as scary once you have taken the first step.
Tickets for SME Growth Summit on 16-17 November at Exhibition Centre Liverpool are still available. Why not join us for two days of engaging with your peers to build stronger manufacturing businesses, with three key streams: People, Platform and Processes.
*Tickets are for manufacturers only.
Rimsha Tariq joined NGF Europe two years ago, and as the company's first Continuous Improvement & Digital Transformation Technician, she has played a vital role in its journey to becoming a smarter digital factory. As well as being a certified Cloud Practitioner, she has a background in mathematics, finance, and management. Utilising her vast array of skills, Rimsha architects data and cloud solutions to make production more efficient. She believes that cloud, IoT and Big Data are the future of manufacturing.
Peter Lai started working at NGF Europe over 27 years ago and has held numerous roles over that time ranging from operator to production manager. These perspectives have given him the key insights into the data NGF needs to utilise as a business. As the Continuous Improvement & 4IR Manager, he combines practices from his lean six sigma certification and Level 7 Executive Business Management training to drive the production forward. He believes that digitalisation is key.