Category Archives: Cloud Servers
Cloud ROI: Getting Innovation Economics Right with FinOps – CIO
Is the cloud a good investment? Does it deliver strong returns? How can we invest responsibly in the cloud? These are questions IT and finance leaders are wrestling with today because the cloud has left many companies in a balancing act, caught somewhere between the need for cloud innovation and the fiscal responsibility to ensure they are investing wisely and getting full value out of the cloud.
One IDC study shows 81% of IT decision-makers expect their spending to stay the same or increase in 2023, despite anticipating storms of economic disruption. In another survey, 83% of CIOs say that despite increasing IT budgets they are under pressure to make their budgets stretch further than ever before, with a key focus on technical debt and cloud costs. Moreover, Gartner estimates that 70% overspending is common in the cloud.
The need for cloud innovation amid economic headwinds has companies shifting their strategies, putting protective parameters in place, and scrutinizing cloud value with concerted efforts to accelerate return on investment (ROI), specifically on technology.
While many companies are delaying new IT projects with ROI horizons of more than 12 months, others are reducing innovation budgets while they try to squeeze more value out of existing investments. Regardless of how pointed their endeavors are, most IT and finance leaders are looking for ways to better govern cloud transformation. That's because, in today's economic climate, leaders aren't just responsible for driving ingenuity; they are held accountable for ensuring the company is a good steward of its technology investments.
If the past three years were dedicated to accelerated cloud transformation, 2023 is being devoted to governing it. But it's not just today's tumultuous times calling for executives to heed fiduciary responsibility. The cloud also necessitates it, particularly when companies want to achieve ROI faster.
The cloud can make for an uneven balance sheet without proper oversight. It needs to be closely watched from a financial perspective. Why? The short answer: variable costs. When the cloud is infinitely scalable, costs are infinitely variable. Pricing structures are based on service usage fees and overage charges, where even marginal lifts in usage can incur steep increases in cost. While this structure favors cloud providers, it starkly contrasts with the needs of IT financial managers, most of whom have per-unit budgets and prefer predictable monthly costs for easier budgeting and forecasting.
Additionally, companies aren't always good at estimating what they need and using everything they pay for. As a result, cloud waste is now a thing. In fact, companies waste as much as 29% of their cloud resources.
As companies lift and shift their workloads to the cloud, they trade in-house management for outsourced services. But as IT organizations loosen their reins, financial management teams should be tightening their grip. Those who aren't actively right-sizing their cloud assets are typically paying more than necessary. Hence why overspending can easily reach 70%.
Achieving ROI in one year requires tracing where your cloud money goes to see how and where it is repaid. Budget dollars go down the drain when companies fail to pay attention to how they are using the cloud, don't take the time to correct misuse, or overlook service pausing features and discounting opportunities.
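Tracing spend usually starts with attributing every billed resource to an owner. The sketch below is a hypothetical illustration of that idea, not any provider's actual billing API: the records, tag names, and amounts are invented, and untagged spend is surfaced as its own bucket so it can be chased down.

```python
# Hypothetical illustration of spend tracing: the billing records, tag names,
# and amounts below are invented and do not come from any provider's billing API.
from collections import defaultdict

billing_records = [
    {"service": "compute", "cost": 1200.00, "tags": {"team": "payments"}},
    {"service": "storage", "cost": 310.50, "tags": {"team": "analytics"}},
    {"service": "compute", "cost": 95.25, "tags": {}},  # untagged spend
]

def allocate_by_team(records):
    """Sum cost per team tag; untagged spend is grouped under 'unallocated'."""
    totals = defaultdict(float)
    for record in records:
        team = record["tags"].get("team", "unallocated")
        totals[team] += record["cost"]
    return dict(totals)

print(allocate_by_team(billing_records))
# {'payments': 1200.0, 'analytics': 310.5, 'unallocated': 95.25}
```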
But cloud cost management is not always a simple task. The majority of IT and financial decision-makers report it's challenging to account for cloud spending and usage, with the C-suite citing tracing spend and chargebacks as particular concerns. The key to cost control is to pinpoint and track every cloud service cost across the IT portfolio, yes, even when companies have on average 11 cloud infrastructure providers, nine unified communications solutions, and a cacophony of unsanctioned applications consuming up to 30% of IT budgets in the form of Shadow IT.
When you factor in these dynamics and consider that cloud providers have little incentive to improve the service usage reports that would help clients better balance the one-sided financials of the relationship, you can see why ROI can be slow-moving.
FinOps comes in to bridge this gap.
Cloud services are now dominating IT expense sheets, and when increasing bills delay ROI, IT financial managers go looking for answers. This has given rise to the concept of FinOps (a word combining Finance and DevOps), which is a financial management discipline for controlling cloud costs. By driving fiscal accountability for the cloud, FinOps helps companies realize more business value and accelerate ROI from their cloud computing investments.
Sometimes described as a cultural shift at the corporate level, FinOps principles were developed to foster collaboration between business teams and IT engineers or software development teams. This allows for more alignment around data-driven spending decisions across the organization. But beyond simply a strategic model, FinOps is also considered a technology solution, a service enabling companies to identify, measure, monitor, and optimize their cloud spend, thus shortening the time to achieve ROI. Leading cloud expense management providers, for example, save cloud investors 20% on average and can deliver positive ROI in the first year.
FinOps Best Practices
As the cloud makes companies agile, managing dynamic cloud costs becomes more important. FinOps helps offset rising prices and inserts accountability into organizations focused on cloud economics. Best practices for maximizing ROI include reconciling invoices against cloud usage, making sure application licenses are properly deactivated when no longer needed or reassigned to other employees, and reviewing network servers to ensure they aren't spinning cycles without a legitimate business purpose. A minimal sketch of the invoice-reconciliation step follows.
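The line items, rates, usage figures, and tolerance in this sketch are invented for illustration; a real reconciliation would pull both sides from the provider's billing export and an internal metering system.

```python
# Hedged sketch of invoice reconciliation: line items, rates, usage figures,
# and the 2% tolerance are all invented for illustration.
invoice = {"vm-hours": 7400.00, "object-storage-gb": 505.00}   # $ billed
usage = {"vm-hours": 72000, "object-storage-gb": 10000}        # metered units
rates = {"vm-hours": 0.10, "object-storage-gb": 0.05}          # $ per unit

def reconcile(invoice, usage, rates, tolerance=0.02):
    """Flag line items whose billed cost differs from usage * rate by more than the tolerance."""
    discrepancies = {}
    for item, billed in invoice.items():
        expected = usage.get(item, 0) * rates.get(item, 0.0)
        if expected == 0 or abs(billed - expected) / expected > tolerance:
            discrepancies[item] = {"billed": billed, "expected": round(expected, 2)}
    return discrepancies

print(reconcile(invoice, usage, rates))
# {'vm-hours': {'billed': 7400.0, 'expected': 7200.0}}
```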
Is the cloud a good investment? Yes, as long as the company can effectively see and use its assets, monitor its expenses, and manage its services. The cloud started as a means to lower costs, minimize capital expenses, and gain infinite scalability, and that reputation should pay out even after being pressure tested by the masses. With a collaborative and disciplined approach to management, companies of every size can realize quick ROI without generating significant waste or adding unnecessary complexity.
To learn more about cloud expense management services, visit us here.
Read more:
Cloud ROI: Getting Innovation Economics Right with FinOps - CIO
Google Brings PostgreSQL-Compatible AlloyDB To Multicloud, Data Centers And The Edge – Forbes
Google is enabling AlloyDB, the PostgreSQL-compatible database, to run anywhere, including public cloud, on-premises servers, edge computing environments and even developer laptops. Branded as AlloyDB Omni, the engine is the same as AlloyDB, the cloud-based managed database announced last year.
AlloyDB Omni promises compatibility with PostgreSQL, enhanced performance and support delivered by Google Cloud. Compared to the standard, open source PostgreSQL, AlloyDB Omni delivers 2x faster performance and 100x faster analytical queries. This is possible due to how Google has tuned, enhanced and optimized the database engine.
By analyzing a query's components, such as subqueries, joins, and filters, the AlloyDB Omni index advisor reduces the guesswork involved in tuning query performance. By periodically analyzing the workload on the database, it finds queries that could benefit from indexes, and suggests new indexes that could significantly improve query performance.
Another unique feature of AlloyDB Omni is its columnar engine, which keeps frequently accessed data in an in-memory columnar format for quicker scans, joins, and aggregations. AlloyDB Omni automatically arranges the data and selects between columnar and row-based execution plans using machine learning. This capability delivers better performance without rewriting queries to target different formats.
AlloyDB Omni is packaged as a set of containers that can be deployed on a Debian-based or a Red Hat Enterprise Linux host. In its technical preview, Google is providing a set of shell scripts to automate the deployment. However, there is no guidance on deploying AlloyDB Omni in a Kubernetes cluster through a Helm chart or an operator. This may change when the software moves toward general availability.
Google recommends deploying AlloyDB Omni on a machine or a VM with at least two CPUs and 16GB of memory. The machine should have Docker and Google Cloud SDK installed to pull the images of AlloyDB from Google Cloud Container Registry and the shell scripts uploaded to Google Cloud Storage. On a machine with prerequisites installed, it takes a couple of minutes to get AlloyDB Omni up and running.
Interestingly, Google doesn't mention Anthos as the preferred infrastructure for deploying AlloyDB Omni. Though the software is packaged as containers, it can run on any Linux machine with Docker installed.
AlloyDB Omni also supports the creation of read replicas, dedicated database servers optimized for read-only access. A replica server provides a read-only clone of the primary database server while continuously updating its own data to reflect changes made to the primary server's data. Read replicas significantly increase the throughput and availability of the database.
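Because AlloyDB Omni is PostgreSQL-compatible, an application can treat a replica like any other PostgreSQL endpoint. The sketch below shows one way to split reads and writes using the standard psycopg2 driver; the hostnames, credentials, and table are placeholders rather than real AlloyDB endpoints, so treat it as an illustration of the pattern rather than Google's prescribed approach.

```python
# Read/write splitting against a PostgreSQL-compatible primary and read replica.
# The hostnames, credentials, and table below are placeholders, not real AlloyDB endpoints.
import psycopg2

PRIMARY_DSN = "host=primary.example.internal dbname=app user=app password=secret"
REPLICA_DSN = "host=replica.example.internal dbname=app user=app password=secret"

def run_write(sql, params=()):
    """Writes always go to the primary."""
    with psycopg2.connect(PRIMARY_DSN) as conn, conn.cursor() as cur:
        cur.execute(sql, params)

def run_read(sql, params=()):
    """Reads go to the replica, falling back to the primary if it is unreachable."""
    try:
        conn = psycopg2.connect(REPLICA_DSN)
    except psycopg2.OperationalError:
        conn = psycopg2.connect(PRIMARY_DSN)
    with conn, conn.cursor() as cur:
        cur.execute(sql, params)
        return cur.fetchall()

run_write("INSERT INTO events (payload) VALUES (%s)", ("sensor-reading",))
print(run_read("SELECT count(*) FROM events"))
```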
Google is investing in AlloyDB Omni to attract customers migrating their databases from legacy versions of Oracle and Microsoft SQL Server. With 100% compatibility with PostgreSQL, customers can take advantage of the migration tools and the expertise available in the ecosystem. The other use case is running an optimized database at the edge. Customers can ingest IoT device data into AlloyDB for querying and analyzing the telemetry data of various sensors. Similar to BigQuery Omni, enterprises can run a Google Cloud-managed database in other cloud environments such as AWS and Azure. It will simplify the integration of data services while reducing the bandwidth cost involved in moving the data across clouds.
Google is not the only public cloud provider to bring a cloud-based managed database to multicloud and on-premises. Microsoft announced Azure Arc-enabled SQL Server and Azure Arc-enabled PostgreSQL in 2020. Based on Azure Arc, Microsoft has packaged these databases as Kubernetes deployments. Enterprises with Arc-enabled Kubernetes can easily deploy SQL Server and PostgreSQL on their clusters.
Scaling AlloyDB Omni to the cloud-based version is straightforward. Like any other migration, customers can export the data in a CSV, DMP or SQL format and import that data into an AlloyDB instance running in Google Cloud. For lift-and-shift scenarios, Google recommends using the Database Migration Service, which is currently in preview.
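For small lift-and-shift jobs, that export/import round trip can be scripted with standard PostgreSQL tooling, since both ends speak the same protocol. The following is a hedged sketch using psycopg2's COPY support; the connection strings and table name are invented, and for production migrations the Database Migration Service mentioned above would be the managed route.

```python
# Hedged sketch of a CSV-based lift of one table from a local PostgreSQL-compatible
# instance into a cloud-hosted one. The DSNs and table name are placeholders.
import io
import psycopg2

SOURCE_DSN = "host=localhost dbname=app user=app password=secret"
TARGET_DSN = "host=cloud-instance.example dbname=app user=app password=secret"

def copy_table(table):
    """Stream one table out of the source as CSV and load it into the target."""
    buffer = io.StringIO()
    with psycopg2.connect(SOURCE_DSN) as src, src.cursor() as cur:
        cur.copy_expert(f"COPY {table} TO STDOUT WITH CSV", buffer)
    buffer.seek(0)
    with psycopg2.connect(TARGET_DSN) as dst, dst.cursor() as cur:
        cur.copy_expert(f"COPY {table} FROM STDIN WITH CSV", buffer)

copy_table("events")
```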
With a clear migration plan to the cloud-based AlloyDB based on the recently announced Database Migration Service, Google hopes to drive the adoption of its Data Cloud through AlloyDB Omni.
Janakiram MSV is an analyst, advisor and architect at Janakiram & Associates. He was the founder and CTO of Get Cloud Ready Consulting, a niche cloud migration and cloud operations firm that was acquired by Aditi Technologies. Through his speaking, writing and analysis, he helps businesses take advantage of emerging technologies.
Janakiram is one of the first few Microsoft Certified Azure Professionals in India. He is one of the few professionals with Amazon Certified Solution Architect, Amazon Certified Developer and Amazon Certified SysOps Administrator credentials. Janakiram is a Google Certified Professional Cloud Architect. He is recognised by Google as a Google Developer Expert (GDE) for his subject matter expertise in cloud and IoT technologies. He has been awarded the titles of Most Valuable Professional and Regional Director by Microsoft Corporation. Janakiram is an Intel Software Innovator, an award given by Intel for community contributions in AI and IoT. Janakiram is a guest faculty member at the International Institute of Information Technology (IIIT-H), where he teaches Big Data, Cloud Computing, Containers, and DevOps to students enrolled in the Master's course. He is an Ambassador for the Cloud Native Computing Foundation.
Janakiram was a senior analyst with Gigaom Research analyst network where he analyzed the cloud services landscape. During his 18 years of corporate career, Janakiram worked at world-class product companies including Microsoft Corporation, Amazon Web Services and Alcatel-Lucent. His last role was with AWS as the technology evangelist where he joined them as the first employee in India. Prior to that, Janakiram spent over 10 years at Microsoft Corporation where he was involved in selling, marketing and evangelizing the Microsoft application platform and tools. At the time of leaving Microsoft, he was the cloud architect focused on Azure.
See more here:
Google Brings PostgreSQL-Compatible AlloyDB To Multicloud, Data Centers And The Edge - Forbes
Strengthening Business Cybersecurity With CASB – WebProNews
The development of cloud computing technology has revolutionized business operations worldwide. Companies use cloud computing to process and store data so employees can access it anywhere. Unfortunately, this convenience is accompanied by security challenges that companies should address to keep sensitive information and intellectual property safe.
A Cloud Access Security Broker (CASB) is a reliable solution to this problem. Cloud Access Security Brokers can provide protection, visibility, and control to cloud-based data and applications.
These features are essential to business cybersecurity because they prevent unwanted parties from accessing vital company data. This reduces the risk of sensitive information leaking to the public or being stolen and sold to a competitor.
Features of a Cloud Access Security Broker
A Cloud Access Security Broker is an intermediary between an organization's IT infrastructure and its cloud-based applications and services. By using a CASB, companies will have visibility into their cloud usage. They will also be able to prevent security incidents like DDoS attacks, ransomware attacks, and data breaches. Here is a list of the features businesses gain by integrating a CASB into their cloud security framework.
Company executives will have oversight into the usage of their cloud-based applications so they can track employee activities on these applications. This oversight helps management teams identify security risks and take prompt actions to curb them before they get out of hand.
CASBs can detect, block, and report unauthorized entry and data exfiltration attempts to a company's cybersecurity team so it can take other precautionary measures if necessary. A CASB will also give an organization control over the usage of its cloud servers so it can enforce its cybersecurity policies. Controlling employee cloud server access will prevent data loss and help the organization adhere to government data regulations.
Cloud Access Security Brokers offer protection against cyber attacks. They use machine learning and behavioral analytics to detect suspicious activity and discover signs that indicate the presence of cyber threats. CASBs also scan traffic moving in and out of cloud servers for malware and other harmful content, so they can be blocked and quarantined before reaching their destination.
Governments require companies to protect consumer data. Using a CASB to prevent data breaches and unwanted data access ensures an organization complies with the regulations. This will help them avoid hefty fines and sanctions and preserve their reputation in the public eye.
As companies expand their operations, it might become challenging to maintain oversight and control of their cloud servers. Fortunately, CASBs allow for scalability so businesses of all sizes can get the cloud protection they need. They can be integrated with other security tools and service providers to create a more robust cybersecurity system.
Endnote
Many businesses use cloud-based services and applications to streamline their operations and make it easy for employees to access the files needed for their jobs. However, this can lead to data leaks and exposure to malware, which can endanger their systems. Using a security tool like a CASB will provide threat protection and guard companies against unauthorized access to their cloud servers.
Read this article:
Strengthening Business Cybersecurity With CASB - WebProNews
Analysis: Alibaba overhaul leaves fate of prized cloud unit up in the air – Reuters
SHANGHAI, March 31 (Reuters) - Alibaba's (9988.HK) six-way breakup plan has raised questions about the long-term shape of its profitable cloud unit, given that it will have to tackle heavy regulatory scrutiny at a time when competition is intensifying both in China and abroad.
While a split into a standalone unit will give investors a chance to make focused bets on a business estimated by analysts to be worth between $41 billion and $60 billion, the step could put Alibaba's cloud unit even more in the cross-hairs of Chinese and overseas regulators, likely slowing its growth.
Some analysts said external investment and separation from Alibaba's core ecommerce business could help it grow overseas, where it is far behind rivals such as Amazon Web Services. But others see the Chinese state investing in the cloud unit or it even going private, given its dominance in the domestic cloud computing industry.
Alibaba's planned Cloud Intelligence Group, which will house the cloud business AliCloud as well as the tech giant's artificial intelligence and semiconductor research, has a 36% market share in China's domestic cloud computing sector.
Its servers host reams of data from companies ranging from tech peers to retailers, the handling and sharing of which has in recent years drawn increasing scrutiny from Beijing.
"Alibabas business lines have different levels and types of regulatory sensitivity," said Gavekal Dragonomics analyst Thomas Gatley in a note this week.
"For cloud computing, data security is paramount."
Alibaba and China's commerce ministry did not immediately respond to queries sent on Friday.
Receiving state investment and drawing closer to the Chinese government could satisfy regulators in Beijing, who have rolled out new laws regulating the handling of data in China and set up a data bureau to underline their focus on the area.
It could also help AliCloud to compete more effectively in China, where overall demand for cloud computing from internet companies is slowing and growth is mainly coming from governments and state-owned enterprises which have not migrated to the cloud as quickly.
While government entities "will not completely reject" companies like Alibaba, Baidu, and Tencent Holdings (0700.HK) for their projects, "they will have a tendency to choose companies with government funding and backgrounds," said Zhang Yi, who tracks China's cloud computing sector at research firm Canalys.
In the first half of last year, China's top three telcos - China Mobile (0941.HK), China Unicom (0762.HK), and China Telecom (0728.HK) - collectively surpassed Alibaba's share in the domestic cloud market for the first time, according to brokerage Jefferies, underscoring Beijing's growing reliance on state-backed carriers for data management.
But growing closer to Beijing has a downside, said Michael Tan, a Shanghai-based partner of law firm Taylor Wessing.
"It could backfire at the international level, as it might then face even more attention from the U.S.," he said.
In January, Reuters reported that the Biden administration is reviewing Alibaba's cloud business to determine whether it poses a risk to U.S. national security.
The cloud unit has its own domestic problems to fix.
In 2021, China's Ministry of Industry and Information Technology suspended an information-sharing partnership with AliCloud on the grounds that Alibaba did not report a security vulnerability related to the open-source logging framework Apache Log4j2.
And in December 2022, Alibaba Cloud experienced what it called its "longest major-scale failure" in more than a decade after its Hong Kong and Macau servers suffered a serious outage that affected many services in the region, including ones belonging to crypto exchange OKX.
Weeks after the outage, Alibaba group Chairman and CEO Daniel Zhang took over as head of the cloud unit, a role he will continue to hold concurrently even after the split-up.
Another risk from the planned split of the cloud unit, which had sales of around $11.5 billion last year, is that previously captive in-house Alibaba clients start courting rivals, hurting its revenue.
But splitting the cloud unit away could also be a positive for the other Alibaba businesses, some analysts said.
"When all data was put in one basket at Alibaba, there could always be concern about misuse of data within the company to maximise profit," said Tan at Taylor Wessing.
"The restructuring will help avoid this."
($1 = 6.8902 Chinese yuan renminbi)
Reporting by Josh Horwitz; Editing by Brenda Goh and Muralikumar Anantharaman
Continued here:
Analysis: Alibaba overhaul leaves fate of prized cloud unit up in the air - Reuters
IBM Furthers Flexibility, Sustainability and Security within the Data Center with New IBM z16 and LinuxONE 4 Single Frame and Rack Mount Options -…
New IBM z16 and IBM LinuxONE Rockhopper 4 options are designed to provide a modern, flexible hybrid cloud platform to support digital transformation for a range of IT environments
Consolidating Linux workloads on an IBM LinuxONE Rockhopper 4 instead of running them on compared x86 servers with similar conditions and location can reduce energy consumption by 75% and space by 67%, and is designed to help clients reach their sustainability goals.1
ARMONK, N.Y., April 4, 2023 /PRNewswire/ -- IBM (NYSE: IBM) today unveiled new single frame and rack mount configurations of IBM z16 and IBM LinuxONE 4, expanding their capabilities to a broader range of data center environments. Based on IBM's Telum processor, the new options are designed with sustainability in mind for highly efficient data centers, helping clients adapt to a digitized economy and ongoing global uncertainty.
Introduced in April 2022, the IBM z16 multi frame has helped transform industries with real-time AI inferencing at scale and quantum-safe cryptography. IBM LinuxONE Emperor 4, launched in September 2022, features capabilities that can reduce both energy consumption and data center floor space while delivering the scale, performance and security that clients need. The new single frame and rack mount configurations expand client infrastructure choices and help bring these benefits to data center environments where space, sustainability and standardization are paramount.
"IBM remains at the forefront of innovation to help clients weather storms generated by an ever-changing market," said Ross Mauri, General Manager, IBM zSystems and LinuxONE. "We're protecting clients' investments in existing infrastructure while helping them to innovate with AI and quantum-safe technologies. These new options let companies of all sizes seamlessly co-locate IBM z16 and LinuxONE Rockhopper 4 with distributed infrastructure, bringing exciting capabilities to those environments."
Designed for today's changing IT environment to enable new use cases
Organizations in every industry are balancing an increasing number of challenges to deliver integrated digital services. According to a recent IBM Transformation Index report, among those surveyed, security, managing complex environments and regulatory compliance were cited as challenges to integrating workloads in a hybrid cloud. These challenges can be compounded by more stringent environmental regulations and continuously rising costs.
"We have seen immense value from utilizing the IBM z16 platform in a hybrid cloud environment," said Bo Gebbie, president, Evolving Solutions. "Leveraging these very secure systems for high volume transactional workloads, combined with cloud-native technologies, has enabled greater levels of agility and cost optimization for both our clients' businesses and our own."
The new IBM z16 and LinuxONE 4 offerings are built for the modern data center to help optimize flexibility and sustainability, with capabilities for partition-level power monitoring and additional environmental metrics. For example, consolidating Linux workloads on an IBM LinuxONE Rockhopper 4 instead of running them on compared x86 servers with similar conditions and location can reduce energy consumption by 75 percent and space by 67 percent.1 These new configurations are engineered to deliver the same hallmark IBM security and transaction processing at scale.
Designed and tested to the same internal qualifications as the IBM z16 high availability portfolio2, the new rack-optimized footprint is designed for use with client-owned, standard 19-inch racks and power distribution units. This new footprint opens opportunities to include systems in distributed environments with other servers, storage, SAN and switches in one rack, designed to optimize both co-location and latency for complex computing, such as training AI models.
Installing these configurations in the data center can help create a new class of use cases, including:
Sustainable design: Easier integration into hot or cold aisle thermal management data center configurations with common data center power and cooling
Optimizing AI solutions: With on-chip AI inferencing and the newest IBM z/OS 3.1, whether rack mount, single frame or multi frame configurations, clients can train or deploy AI models very close to where data resides, allowing clients to optimize AI
Data privacy: Support data sovereignty for regulated industries with compliance and governance restrictions on data location, routing local transactions through local data centers with optimized rack mount efficiency
Edge computing: Enable more efficient rack utilization in limited rack space near manufacturing, healthcare devices, or other edge devices
Securing data on the industry's most available systems3
For critical industries, like healthcare, financial services, government and insurance, a secure, available IT environment is key to delivering high quality service to customers. IBM z16 and LinuxONE 4 are engineered to provide the highest levels of reliability in the industry, with 99.99999% availability to support mission-critical workloads as part of a hybrid cloud strategy. These high availability levels help companies maintain consumer access to bank accounts, medical records and personal data. Emerging threats require protection, and the new configurations offer security capabilities that include confidential computing, centralized key management and quantum-safe cryptography to help thwart bad actors planning to "harvest now, decrypt later."
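To put seven nines into perspective, converting an availability percentage into the downtime it allows is a quick sanity check. The short calculation below is an illustrative aside, not part of IBM's announcement: 99.99999% availability works out to only about three seconds of downtime per year.

```python
# Converting an availability percentage into the maximum downtime it allows per year.
def downtime_seconds_per_year(availability_percent):
    seconds_per_year = 365 * 24 * 60 * 60  # 31,536,000 seconds
    return seconds_per_year * (1 - availability_percent / 100)

for nines in (99.99, 99.999, 99.99999):
    print(f"{nines}% availability allows {downtime_seconds_per_year(nines):.1f} s of downtime per year")
# 99.99%    -> ~3153.6 s (about 52.6 minutes)
# 99.999%   -> ~315.4 s  (about 5.3 minutes)
# 99.99999% -> ~3.2 s
```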
"IBM z16 and LinuxONE systems are known for security, resiliency and transaction processing at scale," said Matt Eastwood, SVP, WW Research, IDC. "Clients can now access the same security and resiliency standards in new environments with the single frame and rack mount configurations, giving them flexibility in the data center. Importantly, this also opens up more business opportunity for partners who will be able to reach an expanded audience by integrating IBM zSystems and LinuxONE capabilities to their existing footprints."
With the IBM Ecosystem of zSystems ISV partners, IBM is working to address compliance and cybersecurity. For clients that run data serving, core banking and digital assets workloads, an optimal compliance and security posture is key to protecting sensitive personal data and existing technology investments.
"High processing speed and artificial intelligence are key to moving organizations forward," said Adi Hazan, director ofAnalycat. "IBM zSystems and LinuxONE added the security and power that we needed to address new clients, use cases and business benefits. The native speed of our AI on this platform was amazing and we are excited to introduce the IBM LinuxONE offerings to our clients with large workloads to consolidate and achieve corporate sustainability goals."
IBM Business Partners can learn more about the skills required to install, deploy, service and resell single frame and rack mount configurations in this blog.
Complementary Technology Lifecycle Support Services
With the new IBM LinuxONE Rockhopper 4 servers, IBM will offer IBM LinuxONE Expert Care. IBM Expert Care integrates and prepackages hardware and software support services into a tiered support model, helping organizations to choose the right fit of services. This support for LinuxONE Rockhopper 4 will offer enhanced value to clients with predictable maintenance costs and reduced deployment and operating risk.
The new IBM z16 and LinuxONE 4 single frame and rack mount options, supported by LinuxONE Expert Care, will be generally available globally[4] from IBM and certified business partners beginning on May 17, 2023. To learn more:
On April 4, at 10 am ET, join IBM clients and partners for behind-the-scenes access to the new IBM z16 single frame and rack mount configurations
On April 17, at 10 am ET, join IBM clients and partners for a deep dive on industry trends, such as sustainability and cybersecurity during the IBM LinuxONE single frame and rack mount virtual event
Check out a preview of the newest version of z/OS, which is designed to scale the value of data and drive digital transformation powered by AI and intelligent automation
About IBM: IBM is a leading global hybrid cloud, AI, and business services provider, helping clients in more than 175 countries capitalize on insights from their data, streamline business processes, reduce costs and gain the competitive edge in their industries. Nearly 3,800 government and corporate entities in critical infrastructure areas such as financial services, telecommunications and healthcare rely on IBM's hybrid cloud platform and Red Hat OpenShift to effect their digital transformations quickly, efficiently, and securely. IBM's breakthrough innovations in AI, quantum computing, industry-specific cloud solutions and business services deliver open and flexible options to our clients. All of this is backed by IBM's legendary commitment to trust, transparency, responsibility, inclusivity, and service. For more information, visit www.ibm.com
Media Contact: Ashley Peterson, ashley.peterson@ibm.com
1 DISCLAIMER: Compared IBM Machine Type 3932 Max 68 model consisting of a CPC drawer and an I/O drawer to support network and external storage with 68 IFLs and 7 TB of memory in 1 frame versus compared 36 x86 servers (2 Skylake Xeon Gold Chips, 40 Cores) with a total of 1440 cores. IBM Machine Type 3932 Max 68 model power consumption was measured on systems and confirmed using the IBM Power estimator for the IBM Machine Type 3932 Max 68 model configuration. x86 power values were based on Feb. 2023 IDC QPI power values and reduced to 55% based on measurements of x86 servers by IBM and observed values in the field. The x86 server compared to uses approximately .6083 KWhr, 55% of the IDC QPI system watts value. Savings assume the Worldwide Data Center Power Utilization Effectiveness (PUE) factor of 1.55 to calculate the additional power needed for cooling. PUE is based on the Uptime Institute 2022 Global Data Center Survey (https://uptimeinstitute.com/resources/research-and-reports/uptime-institute-global-data-center-survey-results-2022). x86 system space calculations require 3 racks. Results may vary based on client-specific usage and location.
2 DISCLAIMER: All the IBM z16 Rack Mount components are tested via the same process requirements as the IBM z16 traditional Single Frame components. Comprehensive testing includes a wide range of voltage, frequency and temperature testing.
3 Source: Information Technology Intelligence Consulting Corp. (ITIC). 2022. Global Server Hardware, Server OS Reliability Survey. https://www.ibm.com/downloads/cas/BGARGJRZ
4 Check local availability for rack mount here.
View original content to download multimedia: https://www.prnewswire.com/news-releases/ibm-furthers-flexibility-sustainability-and-security-within-the-data-center-with-new-ibm-z16-and-linuxone-4-single-frame-and-rack-mount-options-301789108.html
SOURCE IBM
See the rest here:
IBM Furthers Flexibility, Sustainability and Security within the Data Center with New IBM z16 and LinuxONE 4 Single Frame and Rack Mount Options -...
AWS to boost Australian cloud infrastructure – iTWire
Amazon Web Services plans to spend $13.2 billion to expand its cloud infrastructure in Sydney and Melbourne between 2023 and 2027.
The company said the investment is needed to meet growing customer demand for its services in Australia.
To put the sum into perspective, AWS spent more than $9.1 billion in its Asia Pacific (Sydney) Region between 2012 and 2022.
In addition, AWS launched its Asia Pacific (Melbourne) Region in January 2023.
"For over a decade, AWS has invested billions of dollars into Australia through infrastructure and jobs, and worked closely with the public sector, and local customers and partners, to be a force multiplier across the nation," said AWS managing director for Australia and New Zealand Rianne Van Veldhuizen.
"We are committed to positive social and economic impact, investing in local community engagement programs, workforce development initiatives, cloud infrastructure, and renewable energy project investments. Our plan to invest more than $13 billion into the country over the next five years will help create more positive ripple effects, further solidifying Australia's position in the global economy."
Amazon has pledged to power its operations with 100% renewable energy by 2030, and is on track to achieve this goal by 2025.
In Australia, it has committed to taking a total of 262MW from utility-scale renewable projects located in Suntop and Gunnedah in New South Wales, and one that is under development in Hawkesdale, Victoria.
Prime Minister Anthony Albanese said "Economic and infrastructure investment from cloud providers like Amazon Web Services helps create jobs, advances digital skills, boosts innovation, and uplifts local communities and businesses. The Australian Government acknowledges AWS's investment into the nation over the past decade, and welcomes its planned investment over the next five years, the full-time jobs supported annually, and contribution to the nation's GDP."
Tech Council of Australia CEO Kate Pounder said "Investments from tech companies like AWS in Australia have an outsized positive impact on the wider economy. Not only do they bring the direct financing and jobs, but their cloud infrastructure has also enabled the growth of a globally competitive Australian software sector, which has become one of the most successful new industries created in Australia in decades. The support for digital skilling also enables our workforce to learn from leading tech companies, with spillover benefits across the Australian economy. The tech sector will be a key driver of future prosperity in Australia, and AWS's contribution will help propel us forward."
AWS's Australian customers include Atlassian, Australian Bureau of Statistics, National Australia Bank, NSW Health Pathology, Qantas, Swoop Aero, and WA Department of Education.
Read this article:
AWS to boost Australian cloud infrastructure - iTWire
Why Cloud Data Replication Matters – The New Stack
Modern applications require data, and that data usually needs to be globally available, easily accessible and served with reliable performance expectations. Today, much of the heavy lifting happens behind the scenes. Let's look at why the cloud factors into the importance of data replication for business applications.
What is data replication? Simply put, it is a set of processes to keep additional copies of data available for emergencies, backups or to meet performance requirements. Copies may be done in duplicate, triplicate or more depending on the potential risk of a failure or the geographic spread of an applications user base.
These multiple pieces of data may be chopped up into smaller pieces and spread around a server, network, data center or continent. This ensures data is always available and performance is unfailing in a scalable way.
There are many reasons for building applications that understand replication, with or without cloud support. These are basic topics that any developer has had to deal with, but they are even more important when applications go global and/or mobile. Then they need ways to keep data secure and located efficiently.
These particular areas are commonly discussed when talking about cloud data replication:
This refers to making sure all data is ready for use when requested, with the latest versions and updates. Availability is affected when concurrent sessions do not share or replicate their data effectively. By replicating the latest changes to other nodes or servers, data becomes instantly available to users who are accessing those other nodes.
Keeping a master copy is important, but it is equally important to keep that copy up to date as much as possible for all users. This means also keeping child nodes up to date with the master node so everyone stays up to date.
Data replication helps reduce the latency of applications by keeping copies of data close to the end user of the application. Modern cloud applications are built on top of different networks, often located in geographic regions where their user base is most active. While the overhead of keeping copies synchronized and copied might seem extreme, the positive impact on the end-user experience cannot be overstated: users expect their data to be close by and ready for use. If local servers have to go around the globe to fetch their data, the outcome is high latency and poor user experience.
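As a toy illustration of that idea, the sketch below routes each user's reads to whichever replica region currently has the lowest measured round-trip time. The region names and latency figures are invented and stand in for whatever a real client library or load balancer would measure.

```python
# Toy latency-aware routing: send each user's reads to the closest replica region.
# The regions and round-trip times (in milliseconds) are invented for the example.
REPLICA_REGIONS = ["us-east", "eu-west", "ap-south"]

RTT_MS = {
    "us-east": {"us-east": 5, "eu-west": 80, "ap-south": 210},
    "eu-west": {"us-east": 80, "eu-west": 6, "ap-south": 140},
    "ap-south": {"us-east": 210, "eu-west": 140, "ap-south": 8},
}

def pick_replica(user_region):
    """Choose the replica region with the lowest measured round-trip time."""
    rtts = RTT_MS[user_region]
    return min(REPLICA_REGIONS, key=lambda region: rtts[region])

print(pick_replica("eu-west"))  # eu-west
```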
Replication is especially important for backup and disaster management purposes, such as when a node goes down. Replicas that were synchronized can then help recover data on new nodes that may be added due to a recent failure. When a data infrastructure requires too much manual copying of data during a failure, there are bound to be issues.
Failover of broken resources can be automated more fully when there are multiple replicas available, especially in different geographic regions that may not be affected by a regional disaster. Applications that can leverage data replication can also take care to preserve user data; otherwise, they risk losing information when a device breaks or a data center is destroyed.
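A minimal failover sketch, assuming a list of replica endpoints ordered by preference, might look like the following. The fetch function is a stand-in for a real client call and the endpoint names are placeholders; the point is simply that reads keep succeeding as long as any replica answers.

```python
# Minimal failover sketch: try replicas in priority order until one answers.
# The fetch function and endpoint names are placeholders for a real client call.
import random

ENDPOINTS = ["primary.example", "replica-1.example", "replica-2.example"]

def fetch_from(endpoint, key):
    """Stand-in for a real network read; fails randomly to simulate an outage."""
    if random.random() < 0.3:
        raise ConnectionError(f"{endpoint} unreachable")
    return f"value-of-{key}@{endpoint}"

def read_with_failover(key):
    last_error = None
    for endpoint in ENDPOINTS:
        try:
            return fetch_from(endpoint, key)
        except ConnectionError as err:
            last_error = err  # try the next replica
    raise RuntimeError("all replicas unavailable") from last_error

print(read_with_failover("user:42"))
```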
Some see data replication as something nice to have, but as you can see, it's not only about backup and disaster management; it's also about application performance. There are other benefits as well that you can find as part of enterprise disaster management and performance plans.
The backend systems of a data replication system help keep copies of data spread around and redundant. This requires multiple nodes, in the form of clusters, that can communicate internally to keep data aligned. A new cluster, a new node or a new piece of data would then be automatically synchronized with other nodes to replicate it.
But the application level also needs to understand how the replication works. While a form-based app might just want a set of database tables, it must also understand that the source database has replicas available. Applications must know how to synchronize data they have just collected, as in a mobile app, so other users will have access to it.
The smaller pieces of data that are synchronized are often known as partitions. Different partitions go on different hardware storage pools, racks, networks, data centers, continents, etc., so they are not all exposed to a single point of failure.
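The sketch below is a deliberately simplified model of that arrangement: each key hashes to a partition, and each partition is placed on several nodes so a single failure does not lose the data. Real systems use more sophisticated placement (consistent hashing, rack awareness, and so on), and the node names and counts here are invented.

```python
# Deliberately simplified partition placement: each key hashes to a partition and each
# partition is copied to several nodes. The node names and counts are invented.
import hashlib

NODES = ["node-a", "node-b", "node-c", "node-d"]
PARTITIONS = 8
REPLICAS = 3

def partition_for(key):
    """Hash the key to a stable partition number."""
    digest = hashlib.sha256(key.encode()).hexdigest()
    return int(digest, 16) % PARTITIONS

def nodes_for(partition):
    """Place each partition on REPLICAS consecutive nodes (a simple placement rule)."""
    start = partition % len(NODES)
    return [NODES[(start + i) % len(NODES)] for i in range(REPLICAS)]

key = "order:1001"
p = partition_for(key)
print(f"{key} -> partition {p} -> replicas on {nodes_for(p)}")
```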
The potential for complexity is often the limiting factor for companies seeking to implement data replication. Having frontend and backend systems that handle it transparently is essential.
As you can see, data replication does not explicitly depend on using cloud resources. Enterprises have been using their internal networks for decades with some of the same benefits. But with the addition of cloud-based resources, the opportunity to have extremely high availability and performance is easier than ever.
Traditional data replication has now been extended beyond just replicating from a PC to a network or between two servers. Instead, applications can replicate to a global network of endpoints that serve multiple purposes.
Traditionally, replication was used to preserve data in case of a failure. For example, replicas could be copied to a node if there was a failure, but replicas could not be used directly by an application.
Cloud data replication extends the traditional approach by sending data to multiple cloud-based data services that stay in sync with one another.
Today's cloud services allow us to add yet another rung on this replication ladder, allowing replication between multiple clouds. This adds another layer of redundancy and reduces the risk of vendor lock-in. Hybrid cloud options also bring local enterprise data services into the mix, with the cloud-based providers serving as redundant copies of a master system.
As you can imagine, there are multiple ways to diagram all these interconnections and layers of redundancy. This diagram shows a few of the common models.
(Source: Couchbase)
Though the potential for an unbreakable data solution is more possible than ever, it can also become complicated quickly. Hybrid cloud-based architectures have to accommodate many edge cases and variables that make it challenging for developers to build on their own.
Ideally, your data management backend can already handle this for you. Systems must expose options in an easy-to-understand way so that architects and developers can have confidence and reduce risk.
For example, we built Couchbase from the ground up as a multinode, multicloud replication environment so you wouldn't have to. Built-in options include easily adding/removing nodes, failing over broken nodes easily, connecting to cloud services, etc. This allows developers to select the options and architectures they need for balancing availability and performance for their applications.
Couchbase's cross data center replication (XDCR) technology enables organizations to deploy geo-distributed applications with high availability in any environment (on premises, public and private cloud, or hybrid cloud). XDCR offers data redundancy across sites to guard against catastrophic data-center failures. It also enables deployments for globally distributed applications.
Read our whitepaper, High Availability and Disaster Recovery for Globally Distributed Data, for more information on the various topologies and approaches that we recommend.
Ready to try the benefits of cloud data replication with your own applications? Get started with Couchbase Capella:
Follow this link:
Why Cloud Data Replication Matters - The New Stack
Veeam and AWS team up to accelerate APJ cloud migrations – iTWire
Data protection specialist Veeam Software and Amazon Web Services are collaborating to help joint partners ease customers into the cloud.
Veeam and AWS's joint channel activation program is intended for traditional as well as born in the cloud partners, and aims to promote the use of Veeam Availability Suite to accelerate and simplify cloud migrations.
Veeam's software provides native, fully automated AWS backup and disaster recovery to protect, manage and control all customer data stored on AWS, while integrating with Veeam's other cloud backup solutions for organisations with hybrid cloud environments.
The joint program, which will run in ANZ and certain other markets in the APJ region, involves educating partners on the sales process and messaging, helping build services expertise to drive additional revenue and accelerating opportunities through AWS programs while increasing profitability for the partner.
"Veeam and AWS share a mutual customer obsession and an understanding that partners are critical to delivering great customer outcomes with the move to the cloud," said Veeam APJ vice president of channels, cloud and service providers Belinda Jurisic.
"We have been working closely with AWS to develop strategies and services that can help our joint partners grow their businesses through incremental and adjacent service lines, and ultimately help our customers accelerate transformation in a risk and cost-optimised way."
AWS head of APJ partner sales Corrie Briscoe said "We look forward to the expansion of our relationship with Veeam, an AWS Software Partner with a proven track record of supporting joint customers as they accelerate their cloud migration to digitally transform.
"Veeam's and AWS's new Channel Activation Program will support partners in scaling their cloud migration offerings across markets, to drive cost efficiencies, help solve real world customer challenges, and ultimately drive innovation."
DXC Technology emerging technology and marketplace practice lead Carl Marsaus said "DXC Technology partners closely with Veeam and AWS Marketplace, offering our customers the ability to consume a suite of public cloud offerings via an online catalogue. This provides clients with a strategic and commercial advantage in meeting their data protection requirements, including flexible payment models, seamless procurement and improved visibility and governance."
Continued here:
Veeam and AWS team up to accelerate APJ cloud migrations - iTWire
US sanctions and Chinese pandemic regulations take toll on … – iTWire
Telecommunications equipment provider Huawei generated 35.6 billion yuan (US$5.18 billion) in net profit for 2022, representing a 69% year-on-year decline, the biggest annual decline yet, as the US slapped sanctions on its business and tough pandemic regulations in China affected its operations, according to a CNBC report.
"In 2022, a challenging external environment and non-market factors continued to take a toll on Huawei's operations, said Huawei rotating chairman Eric Xu at the company's annual report press conference.
CNBC reported Huawei's revenue rose 0.9% to 642.3 billion yuan (US$93 billion) in 2022 as the company took measures to recover following a 28% sales plunge in 2021.
Huawei is diversifying its business into new segments including cloud computing and automotive after being battered over the past few years by US sanctions.
Throughout 2019 and 2020, Huawei was cut off from Google's Android operating system and the components it required, such as semiconductors. That crippled Huawei's smartphone business, which had been touted as the number one in the world.
Huawei has struggled to sell smartphones and smartwatches outside China since it was cut off from Android. It launched its own operating system, HarmonyOS, which was installed on 330 million devices at the end of 2022, up 113% year-on-year. While that is a huge leap, the operating system has failed to gain traction outside China.
"In times of pressure, we press on with confidence," said Huawei chief financial officer Sabrina Meng.
Huawei's carrier business, which includes equipment it sells to telco companies, generated 284 billion yuan (US$41.3 billion) in revenue, a 0.9% year-on-year rise. The small increase may be attributed to the US urging other countries to ban Huawei from their next-generation 5G networks. The UK has banned Huawei, and Germany may soon follow.
Huawei's enterprise business, which includes cloud computing, rose 30% year-on-year to 133.2 billion yuan (US$19.2 billion). The cloud computing business alone generated 45.3 billion yuan (US$6.5 billion) in 2022.
This first appeared in the subscription newsletter CommsWire on 03 April 2023.
More here:
US sanctions and Chinese pandemic regulations take toll on ... - iTWire
Singtel eyes integration with Telkomsel – iTWire
Singaporean telecommunications provider Singtel plans to integrate its Indonesian partner Telkom's fixed broadband business into its regional wireless associate provider Telkomsel, reported The Business Times.
In a stock market statement, Singtel said the integration represents a "rare opportunity for Telkomsel to enter Indonesia's high-growth fixed broadband market with an industry leader (Telkom) that has some 70% share of (the) market."
Singtel owns a 35% stake in Telkomsel.
The report noted negotiations are still ongoing but there is no certainty whether the integration will push through.
Last year, Singtel partnered with Telkom to expand its regional data centre strategy operations. Singtel will also support Telkomsel's "transformation" through a fixed mobile convergence strategy with Telkom.
According to analysts cited by The Business Times report, Telkom is among the few telcos in Indonesia that have started to introduce mobile broadband bundles as demand for video streaming in the region is strong.
Kenn Anthony Mendoza is the newest member of the iTWire team. Kenn is also a contributing writer for South China Morning Post Style, and has written stories on Korean entertainment, Asian and European royalty, millionaires and billionaires, and LGBTQIA+ issues. He has been published in Philippine newspapers, magazines, and online sites: Tatler Philippines, Manila Bulletin, CNN Philippines Life, Philippine Star, Manila Times, and The Daily Tribune. Kenn now covers all aspects of technology news for iTWire.com.
The rest is here:
Singtel eyes integration with Telkomsel - iTWire