Category Archives: Cloud Servers

Cloud servers are the most common way in for cyberattacks – BetaNews

New data unveiled by the Atlas VPN team shows that cloud servers are now the number one way in for cyberattacks on businesses, with 41 percent of companies reporting them as the first point of entry.

The data, based on the Cyber Readiness Report 2022 by insurer Hiscox, also shows a 10 percent increase in cloud server attacks over the year before.

Last year's top attack vector, corporate-owned servers, now occupies the third spot on the list, with 37 percent of businesses reporting them as the main cyberattack entry method. Second place belongs to business email, named as the main access point for attackers by 40 percent of businesses.

In total, 48 percent of companies report experiencing at least one cyberattack in the last 12 months. Even with 60 percent higher cybersecurity spending, cyberattacks rose by five percent compared to the year before.

The Netherlands is the most targeted, with 57 percent of companies having experienced cyberattacks in the last 12 months. Organizations in the Netherlands have also seen the most significant rise in cyberattacks, up by 16 percent.

France is next, with 52 percent suffering attacks, up three percent. Attacks on businesses in Spain dropped by two percent to 51 percent, while in the US 47 percent suffered attacks, an increase of seven percent.

UK businesses saw the lowest percentage of cyberattacks among the surveyed countries (42 percent), but the UK tops the charts for the median cost of all attacks ($28,000).

You can read more on the AtlasVPN blog.



State’s centralized Office of Information Technology begins cloud migration – kinyradio.com

Tuesday, September 6th, 2022 8:06am

Juneau, Alaska (KINY) - The Department of Administration's centralized Office of Information Technology will begin the first of two phases in migrating the state's data systems from state servers to the cloud this month.

According to a release from the Department, the migration to cloud-based servers represents an important part of the state's IT strategy and ongoing work to secure Alaskans' data, modernize state IT systems, and dramatically improve the state's resilience to unexpected disruption.

"Over the past two years, the Office of Information Technology has executed important work to design the cloud environment based upon the state's specific security and operational needs," said the State's Chief Information Officer, Bill Smith. "OIT has conducted significant training for state employees, performed pilot migrations, built a cloud governance model, and worked to prepare for an effective migration. This move will allow us to take full advantage of world-class security features and provide access to a robust computing environment built from the ground up to protect and secure the state's data."

In the coming months, the Office of Information Technology will be evaluating every server in the state and executing the large-scale migration.

This migration effort is the culmination of the preparation accomplished over the past two years and will utilize a rapid lift-and-shift solution unavailable until recently.

The project had its formal kickoff this month and will be conducted in two nine-month phases.


A Plan to Let Soldiers Interact with the Army Cloud Using Their Own Devices Got a Bit Clouded – Forbes

After years of proposing Bring Your Own Device strategies, the U.S. Army has embarked on Phase III of its BYOD Pilot.

The U.S. Army is testing a mobile device application that would let its Soldiers and DoD civilians access the Army Cloud using their personal cellphones or laptops. But there's some confusion about the app and the extent to which it will be used.

For context, it's worth explaining that the Army and other services have enabled service members and DoD civilians to work remotely via Government Furnished Equipment (GFE) for over 15 years. The once ubiquitous BlackBerry phones that Soldiers, Airmen, Sailors and Marines carried for years exemplified remote work.

Uncle Sam paid for and supplied these devices, and users were (and are) expected to conduct only official business on them; the resulting phone-in-each-hand was a common sight among service members and government officials physically segregating their professional and personal communications. But a lot has changed in the last decade.

The Army and the other branches followed suit and began to embrace the commercial technology evolution that has brought us digital cloud storage and software-as-a-service (SaaS). For the Army, and the rest of the world, that embrace became a bear hug when COVID-19 hit in 2020.

At the height of the pandemic, the Pentagon turned to a commercial solution for the vastly expanded telework it believed was necessary to continue to function, enabling Microsoft Office 365 mobile capability for the military/civilian workforce. The capability was well received, but in the span of less than a year DoD recognized it wasn't particularly secure. In June 2021, Office 365 mobile capability was turned off.

To work remotely and access the cloud, users reverted to their GFE. As they did so, the folks running the DoD cloud enterprise were already asking the question: do they have to use government-funded devices?

Bring Your Own Device

With Microsoft Office 365 connectivity disabled, the DoD CIO and the respective service CIOs established separate pilot programs to assess the potential for military personnel and civilians to work remotely using their own cellphones and laptops. The Pentagon refers to this strategy and to the separate service pilots as Bring Your Own Device or BYOD.

The Army, Navy and Air Force each have their own BYOD Pilots, though the Army's Pilot, now in Phase III, is likely the most mature. The goal of BYOD, the Army says, is to extend the convenience of teleworking on just one device to Soldiers and Army civilians. Essentially, it's another app on your phone. A service member can walk out of the Pentagon or off-base, go to the store, and still be connected to official business via his or her personal device.

BYOD may also save the service considerable money, Army CIO Dr. Raj Iyer says.

Army CIO Dr. Raj Iyer says its BYOD Pilot is demonstrating the convenience and potential cost savings of having Army personnel use their own devices for official business.

"We know that there are savings to be had. If you look at the total cost of ownership of government-furnished cellphones and how much we pay for data services from the telecom providers, there's an opportunity to reduce those costs by switching to BYOD."

How much potential savings from dropping GFEs/data could be realized is one of a number of issues relating to BYOD over which there has been some confusion. Chief among these has been what kind of work it will enable users to do.

Lieutenant General John B. Morrison, the Army's Deputy Chief of Staff for Command, Control, Communications, Cyber Operations and Networks (G-6), emphasizes that BYOD is largely for administrative work. Technically, it is cleared to carry up to Impact Level 5 (IL 5) information, including unclassified and controlled unclassified information, the Army says. It is not for use for classified work, communications or data sharing.

Moreover, the Army BYOD Pilot is limited to the strategic administrative level, typically for in-garrison users within the U.S. However, the G-6 is working through use cases outside the continental U.S., LTG Morrison says, so personnel in Europe, Africa or South Korea may theoretically be using their own devices through BYOD one day.

Deputy Chief of Staff, G-6 Lt. Gen. John B. Morrison, Jr. emphasizes that the Army's BYOD Pilot is evolving and will go forward based on its productivity, security and a cost-driven business case.

While General Morrison says there has been no discussion of using the Bring Your Own Device approach in tactical scenarios at this time, he does not rule out the possibility. That would surely raise additional security concerns, and Morrison adds, "We're very mindful of the capability some of our adversaries have to use cellphones to do direction-finding and identification."

But for now, BYOD is a tool that replaces the GFEs mostly carried by those at the Army leadership level, Morrison says. That includes a fair number of people: Phase III of the pilot will extend to 20,000 users.

Dr. Iyer says it can fully scale to over 20,000 users, including the National Guardsmen and Reservists whom the Army has also included in the Pilot. If, as LTG Morrison says, the Army will use Phase III to look at other use cases, BYOD may have to expand beyond that number.

The user population brings the BYOD proposition back to cost. If the Army can eliminate the need to provide 20,000 devices, it could probably save some coin. But this proposition has some wrinkles.

For one, both Gen. Morrison and Dr. Iyer stress that the Pilot (and ultimately a program) is strictly voluntary. However, if the user base is smaller than anticipated, the cost of acquiring the commercial license for the BYOD app and maintaining its link to the Army cloud may outweigh the savings from handing out fewer phones.

The participation of Guardsmen (both Army and Air Force) and Reservists introduces another nuance to the cost equation. In addition to LTG Morrison and Dr. Iyer, I spoke with Kenneth C. McNeill, CIO at the National Guard Bureau who affirmed that Phase II BYOD testing with Guard Soldiers and Airmen went quite well.

He points out that only a relative handful of Guardsmen (and Reservists) actually have GFEs. To communicate and conduct official business, they have to go to an Armory or other post. "When they respond to hurricanes, floods or [provide] whatever support they're asked to," McNeill said, "this will give them the capability to stay connected, pre and post mobilizing."

But since Guardsmen and Reservists who volunteer to use their own phones currently have no GFEs, their participation effectively represents no savings. The convenience may be welcome, but Morrison acknowledges, "We will do due diligence on whether it fiscally makes sense to move this forward."

Some in the cybersecurity community have already been asking whether moving forward with BYOD makes sense. While Army BYOD is not a classified system, penetrating it would still yield potential insights for U.S. adversaries like China, which has derived real benefit over the last three decades from open-source intel, let alone controlled information.

The Army is cognizant of this and with security foremost in mind, it has given BYOD a Halo.

A Security Halo

The key to BYOD is the ability to securely connect users' personal devices to the Army's enterprise cloud environment. Known as cArmy, the service's cloud currently offers shared services in the Amazon Web Services (AWS) and Microsoft Azure clouds at IL 2, 4 and 5.

To enable BYOD, the Army turned to Hypori, a Virginia-based SaaS firm that has developed Halo cloud-access software. Halo renders applications and data that reside inside the cArmy cloud on a user's device as pixels.

These virtual images allow users to interact and work within cArmy without any actual transfer of data. Raj Iyer describes Halo-enabled phones as dumb display units that show representations of email, scheduling, spreadsheets or other applications hosted by cArmy. None of it resides on the user's device.

This approach largely shifts security from the device to the cloud itself. It allows the service to focus its efforts on defending a single point, cArmy, rather than a collection of phones or laptops. The Army controls access to the cloud (right down to physical access to its servers) and constantly monitors the environment.

Hypori's Halo cloud software connects mobile devices to applications in the cloud via a pixel presentation. No data is actually transferred to or from the edge device.

If an anomaly pops up inside cArmy, the Army's Enterprise Cloud Management Agency tells me that it is confident it can rapidly detect and identify an intrusion and defend the BYOD environment. Halo-enabled BYOD has been repeatedly red-teamed, Iyer says, passing these evaluations with flying colors and outperforming the solutions the Navy and Air Force have chosen.

Despite their high level of confidence in Halo, both Iyer and General Morrison acknowledge that one can never say never in cybersecurity matters. The same centralization in the cloud allows U.S. adversaries to focus their own resources on a single target: cArmy.

While no data rests on the device, the vulnerabilities that always exist at the intersection of hardware, software and the internet remain, as does the threat of what the Army cannot control. That stretches from the industrial architecture underpinning the cloud and cloud vendors (Amazon, Microsoft) to the risk of insider exploits.

One of the most notable cloud breaches was publicly acknowledged last May, when news broke that in 2019 a former AWS employee exploited her knowledge of cloud server vulnerabilities at Capital One and more than 30 other companies to steal the personal information of over 100 million people, including names, dates of birth, and social security numbers. The possibility of such an insider breach of BYOD or other cloud systems rings as real to the Army as the name Bradley Manning.

Even though the Army BYOD is currently intended for non-classified work, LTG Morrison stresses that, "We've baked cybersecurity in early and often, and we'll do it again if we go live, and do continual assessments to ensure that we adequately secure the capability we're providing."

"What was interesting to us about Halo was that we could implement it on devices that were unmanaged," Dr. Iyer says.

Other BYOD solutions come with a Mobile Device Management (MDM) approach, which requires the environment (cloud) owner to take control of the device, typically to ensure security and compliance. For users, MDM raises privacy concerns that might prove a significant obstacle to adoption. But there is no MDM with Halo. The Army does not control the user's device and cannot see beyond its own cloud boundary.

"Before BYOD, one of the things we consistently heard from our users was that they didn't want their cellphones to be monitored or wiped if there was any potential [data] spillage," Iyer acknowledges.

The Army G-6 is confident enough in the privacy and security of Halo that I was told there would be no obstacle to users having it on their phones, right next to Tinder, Reddit, or even TikTok.

Convenience or Burden?

As noted, adoption will be key to BYOD. General Morrison notes that the cost savings it may help the Army realize rank in importance alongside the productivity gains and security expected with BYOD. Its success in delivering on this trio of elements will determine a path beyond the current Pilot.

"We will do due diligence on whether it fiscally makes sense to move this forward," Morrison affirms.

Users may ultimately have to weigh the convenience of using their own devices for official business against the cost. Some observers have already questioned whether BYOD simply shifts the burden of ownership of appropriate devices with sufficient data plans, identity security, and personal accountability from the government to the individual.

Having the right phone may or may not be a hurdle. In fact, my discussions with the G-6, General Morrison, Dr. Iyer and Hypori illustrated some cloudiness on the issue.

According to the G-6, there will be a list of approved devices, which would not include phones no longer supported by their original equipment manufacturers, like older Android and Apple models. An iPhone 6, for example, wouldn't be acceptable. (Nor, presumably, would a Huawei phone.) A signed user agreement for BYOD would also require that device owners maintain the latest security updates to remain eligible to work via the app.

However, Raj Iyer differed with the strict notion of approved devices, telling me that a user could bring just about anything to BYOD. "Because it is an unmanaged solution, there are no specific requirements for what cellphone you bring. God forbid if you have a BlackBerry somewhere, that might work too."

I was later told Dr. Iyer was joking about the BlackBerry, but the impression is that almost anything goes. To be sure, I checked with Hypori CEO Jared Shepard.

Shepard re-emphasized that Hypori Halo is a zero-trust platform which assumes that all edge devices are compromised. By design, it does not allow interaction of data from the protected environment with the device.

But he added, "As a security best practice, we recommend that only devices that are still supported [updated and patched] by the manufacturers be allowed. This allows a tremendous amount of flexibility for devices new and old [many 4-6 years old or more]. Currently iPhone 6 and 7 are still supported by Apple."

"We will learn how this capability reacts to different kinds of phones that are out there," Morrison concludes.

As with other aspects of BYOD, the Army will have to have consistent messaging on its user requirements. These include identity. According to Iyer, BYOD employs multi-factor authentication (MFA: passwords augmented by, for example, scanning a fingerprint or entering a code received by phone).
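The code-received-by-phone factor Iyer mentions is typically generated with a one-time-password scheme such as HOTP (RFC 4226) or its time-based variant, TOTP (RFC 6238). The sketch below is a general illustration of how such codes are derived, not a description of the Army's actual implementation:

```python
import hashlib
import hmac
import struct

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    """RFC 4226 HMAC-based one-time password."""
    msg = struct.pack(">Q", counter)           # 8-byte big-endian counter
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                 # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 4226 test secret; a TOTP variant would derive the counter from the clock.
secret = b"12345678901234567890"
print(hotp(secret, 0))  # RFC 4226 test vector: 755224
```

Because the published RFC test vectors are fixed, a sketch like this is easy to sanity-check against the standard.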

However, the user identification system employed may also limit the devices that can be used with BYOD. For example, Cisco Systems' Duo MFA device requirements include a Secure Startup mode and a Cisco-approved operating system (Android 7 or higher), among other things.

Dr. Iyer points out that the Army's enterprise IT management system not only identifies but tracks BYOD phone locations. If a phone operating in Washington DC pops up three hours later in China, something's obviously wrong. Devices will generally have to indicate active use inside the U.S. While the Army won't have access to personal data, dropping a GFE device won't allow users to go untracked.
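The scenario Iyer describes, a phone in Washington DC surfacing in China three hours later, is what security tooling commonly calls an "impossible travel" check: compute the great-circle distance between successive sign-in locations and flag any implied speed no airliner could reach. A minimal sketch, with a hypothetical 1,000 km/h threshold of my own choosing:

```python
from math import asin, cos, radians, sin, sqrt

EARTH_RADIUS_KM = 6371.0

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two lat/lon points, in kilometers."""
    p1, p2 = radians(lat1), radians(lat2)
    dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
    a = sin(dlat / 2) ** 2 + cos(p1) * cos(p2) * sin(dlon / 2) ** 2
    return 2 * EARTH_RADIUS_KM * asin(sqrt(a))

def impossible_travel(prev, curr, hours, max_kmh=1000.0):
    """Flag a sign-in pair whose implied travel speed exceeds max_kmh."""
    dist = haversine_km(*prev, *curr)
    return hours > 0 and dist / hours > max_kmh

# Washington DC at t=0, Beijing three hours later: flagged.
print(impossible_travel((38.9, -77.0), (39.9, 116.4), hours=3))  # True
# DC to New York in three hours is plausible: not flagged.
print(impossible_travel((38.9, -77.0), (40.7, -74.0), hours=3))  # False
```

Real systems layer this on top of IP geolocation and device telemetry, but the core test is just distance over time.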

Iyer says he has seen tremendous excitement about BYOD on social media, suggesting a population eager to embrace the scheme. But given its rollout largely to a group of more senior Army and civilian users, there may be less enthusiasm among this group for yoking one's personal device (and consumer data plan) to BYOD than there would be across a broader cross-section of the Army.

Indeed, one senior Army National Guard officer with a background in cybersecurity told me that while he thinks BYOD may be a useful convenience in the future, he'd likely stick with his GFE. Since BYOD is strictly voluntary, potentially eligible users could elect to stay with their government-furnished phones, prompting the question of whether personnel who decline to participate might worry about the career implications of taking a pass on BYOD.

"This is not going to be viewed favorably or unfavorably," Dr. Iyer assures. "I believe that the majority of our users will want it."

Kenneth McNeill thinks people will eventually get comfortable with the idea and says there's already a sizeable group of Guardsmen and Reservists volunteering. General Morrison characterizes early adopters as BYOD champions, people who are helping craft the tactics, techniques and procedures for its use. As Phase III progresses, the Army will evaluate its expanded mix of users, continually reassessing the Pilot and iterating the app. How BYOD will ultimately take shape isn't known yet, Morrison acknowledges.

"We're being very pragmatic," he stresses. That includes putting BYOD through several legal reviews. Army personnel and DoD civilians will have the last word, ultimately making it clear to the service whether they're comfortable enough with the privacy, security, cost and convenience of personal devices as a gateway to the Army cloud to bring their own.


The Bellwethers Of Enterprise IT Spending Fare Reasonably Well – The Next Platform

If you want to figure out what is going on with spending on the corporate datacenters of the world, a good place to start is to examine the financial results of Hewlett Packard Enterprise and Dell Technologies, which are still the two largest original equipment makers for servers in the world.

Lenovo and Inspur, both based in China with substantial operations in the United States and Europe, have aspirations to bypass HPE and Dell, but thus far that has not happened.

In any event, HPE and Dell reported their financial results before the Labor Day holiday in the United States and the bank holiday in Europe, and we are getting around to analyzing their numbers to get a better sense of what is happening with the server, storage, and networking supply chains and with the appetite for spending among organizations outside of the hyperscalers and major cloud builders.

We say outside of the hyperscalers and cloud builders, who represent about half the server volumes in the world and a little less than half the server revenues, because these companies have not bought machinery from Dell or HPE for a long time. The Super 8 datacenter operators (Amazon Web Services, Microsoft, Google, and Meta Platforms in the United States; Alibaba, Baidu, Tencent, and ByteDance in China) have long since designed their own machines, bought their own parts, and shipped them to original design manufacturers such as Quanta, Inventec, Foxconn, WiWynn, and the custom server arms of Lenovo and Inspur for the machines to be made at the lowest possible costs.

There is no way that HPE and Dell can ever get that business back, given the machine volumes the Super 8 require and their inability to pay any kind of reasonable margin to their server manufacturers. HPE and Dell walked away from sales of low-margin, minimalist machines because they can't afford to chase such deals and make it up in volume like the ODMs seem to be able to do, because the ODMs have so many other contract manufacturing deals for consumer, networking, and other gear.

This means that HPE and Dell ride the spending waves of the enterprise, service providers, telcos, HPC centers, and government and academic customers who individually have orders of magnitude lower volumes than a hyperscaler and cloud builder and therefore pay a premium for their machines. Not much of one, mind you, as the financials from HPE and Dell have shown for years.

What has been hard to discern for the past two and a half years is how much actual demand in its own right has been rising and falling, and how much supply has crimped demand because of parts and manufacturing capacity shortages. The counterbalance to this is that hyperscalers, cloud builders, service providers, and large enterprises have had to work with semiconductor makers and their ODM or OEM partners to plan their capacity two years out, with lead times on some parts stretching from 52 weeks to 72 weeks. So never before have the OEMs and ODMs had such justifiable cause to twist customer arms to tell them what their capacity plans are. Planning is easier, with an overlay of uncertainty that is always part of the economy during a pandemic and a war.

None of this is easy, and the organizations of the world that are dependent on compute, meaning all of them, should be thankful that someone is still willing and able to bend metal around silicon and provide support for it.

In the quarter ended July 31, which was HPE's third quarter of fiscal 2022, the company reported sales of $6.95 billion, up eight-tenths of a point from the year-ago period, and net income of $409 million, up 4.3 percent. That net income represented 5.9 percent of revenues, which is a little higher than average for HPE.

The company exited the quarter with $6.03 billion in cash in the bank, which is a healthy amount of cash.

Tarek Robbiati, chief financial officer at HPE, bragged on the call with Wall Street analysts that the 13.3 percent operating profit for the Compute division was by far the most profitable compute server business in the industry, and that it was 2.1 points higher than the year-ago quarter.

In the quarter, the Compute division, which is mostly sales of X86-based ProLiant servers, rose by six-tenths of a point sequentially to just a hair over $3 billion, but sales were actually off 3.2 percent year on year. Earnings before taxes for Compute were $400 million, up 15.3 percent, but down 3.6 percent sequentially.

Component pricing and shortages are still making it hard to tame margins, but HPE seems to be doing about as good of a job as any company could do under the circumstances. The company's strategy is what you would expect: It is pushing what it has on the truck and not talking about what it doesn't have, and pushing the richest mixes of CPUs, storage, peripherals, and financing that it can.

"The supply chain dynamics remain largely unchanged," Antonio Neri, HPE's chief executive officer, said on the call with Wall Street. "But what has changed for us over the months in the quarter is that we have taken actions to dual-source or to steer demand in our products and then obviously implement design changes. I think because of the combination of our portfolio and customer segments, we believe we are very well positioned to move forward through this challenge as we go into next quarter and into 2023. But we expect supply to remain challenged as we get into 2023."

HPE is not being specific about its Compute backlog, but says it is five times the normal level right now and that it grew sequentially from Q2 to Q3 of the fiscal 2022 year. Robbiati said that HPE expected a sequential revenue bump in Compute revenues in the final quarter of the fiscal year due to some multi-sourcing for components and the demand steering mentioned above.

The HPC & AI division had sales of $830 million, up 12 percent thanks to some big deals for its Shasta Cray EX supercomputing systems, but earnings fell by 3.4 percent to $28 million.

This division also sells big NUMA machines based on SGI and HPE chipsets and services the legacy Integrity line of machines that run OpenVMS, HP-UX, and Tandem NonStop operating systems, but we think at this point their contribution is nominal excepting NUMA machines for running large ERP systems and SAP HANA in-memory databases. The backlog for the HPC & AI division has held pretty steady at just under $3 billion, so it is booking revenue about as fast as it is recognizing it.

The core ProLiant business is just a tad under 4X more profitable per dollar of revenue than the HPC & AI business, which is consistent with our observation that it is very difficult indeed to make a profit out of HPC simulation and modeling. But, as we often say here at The Next Platform, someone has to build and support these systems. The NUMA iron is very profitable, we think. In any event, Robbiati said on the call that HPE would recognize some big deals in the fourth fiscal quarter and that its margins in HPC & AI would expand. But they will inevitably contract along the choppy curve we have observed in more than three decades of watching the HPC sector.

HPE's storage business, which comprises the acquired 3PAR, Nimble Storage, SimpliVity, LeftHand Networks, and other storage products as well as homegrown stuff, had a 2 percent decline in the quarter to $1.15 billion in sales, with earnings of $169 million, down 5.1 percent. Operating profit is 14.7 percent of revenue, also up 2.1 points year on year. HPE said it was running a record backlog in storage, but as with its Compute division, it did not provide a figure.

For as long as HPE has been selling systems, we have been keeping an eye on its core systems business, and our latest dataset has tracked system sales in the aggregate since the Great Recession in 2009. We reckon this was a kind of line of demarcation between an economy in one phase and then in another one, much as the Great Infection of 2020 will probably yield another line of demarcation.

Over the decades that core systems business has had many different components and architectures, and in recent years it has held pretty steady at around $5 billion a quarter, more or less. In the third quarter of fiscal 2022, that core systems business had $4.99 billion in sales, off seven-tenths of a point, and earnings of $597 million, up 7.8 percent. Ideally, companies want earnings to grow twice as fast as revenues, which are also growing, but in this environment, holding revenues steady and growing profits is the second best option.

Dell was taken private a bunch of years back and we lost some visibility into it, and so our dataset only runs from its fiscal 2016 year, when it went public again, until now, its fiscal 2023 year, which will end in January 2023.

Both Dell and HPE were IBM wannabes in the 2000s and 2010s, and they acquired all kinds of software and consulting businesses to try to mirror Big Blue. And when that did not raise the profits expected, HPE sold off its software, consulting, and PC businesses and Dell did similarly with its software and consulting businesses but kept its PC business. Argue what you might about this strategy, but Dell has a much larger supply chain and has been able to wield that to push its sales of servers, switches, and storage well beyond that of HPE. And that is not including VMware, which it took control of through its $67 billion acquisition of EMC back in 2015.

Last fall, VMware was spun off into a separate company, refilling the personal coffers of its largest shareholder, Michael Dell, and the core Dell server and switching and EMC storage businesses are still plugging along, albeit with different brand names. And Dell, the man, has long since reached his aspiration of bypassing IBM and then HPE as the dominant provider of hardware platforms in the datacenter.

What do you do as an encore to that?

In the second quarter of fiscal 2023, ended in July, Dell's Server & Networking division had sales of $5.21 billion, up 16.7 percent, and its Storage division had sales of $4.33 billion, up 9 percent. Operating income is not posted for either division. But the combined Infrastructure Solutions Group had sales of $9.54 billion, up 13.1 percent, and operating income of $1.05 billion, up 7.8 percent. That operating income represented 11 percent of datacenter hardware sales, which is about typical of Dell for the past five years.

We don't care about its PC business much, except that it gives Dell leverage with chip makers like Intel, AMD, and Nvidia, leverage that HPE gave up, possibly to its chagrin looking at the relative size of the datacenter businesses of the two companies at this point. The Dell Client Solutions Group has about half again as much revenue and roughly the same profitability as its Infrastructure Solutions Group, which is pretty good considering how messed up the world's supply chains have been.

With VMware out of the mix, Dell's presence in the datacenter is diminished, and it never made a lot of sense for Dell, the company, to let go of a VMware it paid so dearly for. But it surely made sense for Dell, the man, to take it public and reap that special dividend for shareholders. At the very least, Dell is back to being essentially a pure hardware player, so it has eliminated some difficulties for VMware and its many other hardware partners.

The sale of VMware also helped Dell eliminate some of the massive debt it was carrying as part of the EMC/VMware acquisition. In fact, Dell's debt level is roughly equal to a quarter of revenue, the same level it was at in fiscal 2016 before the EMC/VMware deal was completed. So with everything unwound except the EMC storage business, Dell is now twice as big with twice as much debt. Maybe that was the plan all along. But we suspect that Dell, the man, had higher aspirations than turned out to be possible.

As far as ISG goes, this was the seventh straight quarter of revenue growth for Servers & Networking and the sixth straight quarter of revenue growth for Storage, which means the coronavirus pandemic did not have much of an effect on Dell despite the supply chain woes and economic tumult. Interestingly, the company's APEX subscription pricing for hardware now has an annualized run rate of over $1 billion, and orders for APEX were up 78 percent in fiscal Q2.

We wonder how long it will be before half of HPE's core system sales come from GreenLake subscriptions and half of Dell's ISG sales come from APEX subscriptions. That is a new race that is afoot, and one that HPE has a head start on and can win. Some might say must win. With all of HPE's software, services, leases, and GreenLake infrastructure subscriptions together yielding only an $858 million annualized run rate as the July quarter ended, and only 27 percent of that, or about $232 million, being for GreenLake subscriptions, Dell, which started later with hardware subscriptions, has a business that is at least 4.3X bigger.

Given the benefits of hardware subscription pricing, we are surprised that both of these numbers are not higher.

Read more:
The Bellwethers Of Enterprise IT Spending Fare Reasonably Well - The Next Platform

Fine-Grained Visual Embedding by Amazon QuickSight Brings Visibility without Complexity – Database Trends and Applications

AWS, a leading cloud platform, is debuting its latest addition to Amazon QuickSight: Fine-Grained Visual Embedding. This feature lets users embed individual visualizations from Amazon QuickSight dashboards within high-traffic web pages and applications. Fine-Grained Visual Embedding gives end users increased visibility without requiring server or software setup, or infrastructure management.

Amazon QuickSight is a cloud-based, embeddable, ML-backed business intelligence (BI) platform that provides users with interactive data visualizations, analysis, and reporting to support data-driven decision-making, without the need to manage servers. Amazon QuickSight allows users to embed branded analytics, such as interactive dashboards, natural language querying (NLQ), or a BI-authoring experience, within internal portals or public sites.

"With Fine-Grained Visual Embedding powered by Amazon QuickSight, developers and ISVs now have the ability to embed any visuals from dashboards into their applications using APIs," said Donnie Prakoso, software engineer and senior developer advocate at AWS. "As for enterprises, they can embed visuals into their internal sites using 1-click embedding. For end-users, Fine-Grained Visual Embedding provides a seamless and integrated experience to access a variety of key data visuals to get insights."

Benefits of Fine-Grained Visual Embedding include automatic updates of embedded visuals, automatic scaling without server management, and optimized performance on high-traffic sites. Amazon QuickSight will also support 1-click embedding, allowing nontechnical users to embed visuals without writing code, via 1-click enterprise embedding or 1-click public embedding. Users can also employ visual embedding through the API, using the AWS CLI or SDK, for increased flexibility to configure allowed domains at runtime.
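To make the API route concrete, here is a minimal sketch of building a request for QuickSight's `GenerateEmbedUrlForRegisteredUser` operation with the `DashboardVisual` experience that Fine-Grained Visual Embedding introduces. All account IDs, ARNs, and dashboard/sheet/visual IDs below are hypothetical placeholders, and the exact parameter shape should be checked against the AWS documentation.

```python
# Sketch: request parameters for QuickSight's
# GenerateEmbedUrlForRegisteredUser API, scoped to a single visual
# via the DashboardVisual experience (Fine-Grained Visual Embedding).
# All IDs and ARNs here are hypothetical placeholders.

def build_visual_embed_request(account_id, user_arn,
                               dashboard_id, sheet_id, visual_id,
                               session_minutes=60):
    """Return kwargs for quicksight.generate_embed_url_for_registered_user."""
    return {
        "AwsAccountId": account_id,
        "UserArn": user_arn,
        "SessionLifetimeInMinutes": session_minutes,
        # The DashboardVisual experience scopes the embed URL to one
        # visual rather than a whole dashboard.
        "ExperienceConfiguration": {
            "DashboardVisual": {
                "InitialDashboardVisualId": {
                    "DashboardId": dashboard_id,
                    "SheetId": sheet_id,
                    "VisualId": visual_id,
                }
            }
        },
    }

# With boto3 installed and AWS credentials configured, you would call:
#   boto3.client("quicksight").generate_embed_url_for_registered_user(**params)
params = build_visual_embed_request(
    "123456789012",
    "arn:aws:quicksight:us-east-1:123456789012:user/default/analyst",
    "sales-dashboard", "revenue-sheet", "kpi-visual",
)
```

The returned URL is short-lived and tied to the registered user, which is how the service enforces per-user row-level security on the embedded visual.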

To learn more about this latest feature, please visit https://aws.amazon.com/.

Read the rest here:
Fine-Grained Visual Embedding by Amazon QuickSight Brings Visibility without Complexity - Database Trends and Applications

Underwater datacenter will open for business this year – The Register

A company called Subsea Cloud is planning to have a commercially available undersea datacenter operating off the coast of the US before the end of 2022, with other deployments planned for the Gulf of Mexico and the North Sea.

Subsea, which says it has already deployed its technology with "a friendly government faction," plans to put its first commercial pod into the water before the end of this year near Port Angeles, Washington.

The company claims that placing its datacenter modules underwater can reduce power consumption and carbon dioxide emissions by 40 percent, as well as lowering latency by allowing the datacenter to be located closer to metropolitan areas, many of which are located near the coast.

However, according to Subsea founder Maxie Reynolds, it can also deploy 1MW of capacity for as much as 90 percent less cost than it takes to get 1MW up and running at a land-based facility.

An illustration showing one of its commercial pods. Pic provided by Subsea

"The savings are the result of a smaller bill of materials, and less complexities in terms of deployment and maintenance," Reynolds told us. "It's complex and costly to put in the infrastructure in metropolitan areas, and in rural areas too: there are land rights and permits to consider and labor is slower and can be more expensive."

The Port Angeles deployment, known as Jules Verne, will comprise one 20ft pod, which is similar in size and dimensions to a standard 20-foot shipping container (a TEU or Twenty-foot Equivalent Unit). Inside, there is space for about 16 datacenter racks accommodating about 800 servers, according to Subsea. Additional capacity, if and when required, is delivered by adding another pod. The pod-to-shore link in this deployment provides a 100Gbps connection.

As it is a commercial deployment, Jules Verne will be open for any prospective clients or partners to come and check it out, virtually or otherwise, according to Reynolds. It will be sited in shallow water, visible from the port, whereas the Njord01 pod in the Gulf of Mexico and the Manannan pod in the North Sea are expected to be deeper, at 700-900ft and 600-700ft respectively.

However, Jules Verne will not likely be used by many customers, as Subsea expects to use it mostly to demonstrate compliance to organizations and advocates that will be inspecting the pod and site, and this may disrupt client operations.

"We are in talks with two of the well-known hyperscalers, though, so it's still a little up in the air," Reynolds said.

The Subsea pods are kept cool by being immersed in water, which is one reason for the reduced power and CO2 emissions. Inside, the servers are also immersed in a dielectric coolant, which conducts heat but not electricity. However, the Subsea pods are designed to passively disperse the heat, rather than using pumps as is typical in submersion cooling in land-based datacenters.

But what happens if something goes wrong, or a customer wants to replace their servers? According to Subsea, customers can schedule periodic maintenance, including server replacement, and the company says that would take 4-16 hours for a team to get to the site, bring up the required pod(s), and replace any equipment.

The viability of underwater datacenters has already been demonstrated by Microsoft, which has deployed several over the past decade as part of its Project Natick experiment. The most recent was recovered from the seabed off the Scottish Orkney islands in 2020, and contained 12 racks with 864 servers. Unlike the Subsea pods, the Project Natick enclosure was filled with nitrogen.

Microsoft reported that only a "handful" of servers failed during the course of its experiment, and Subsea expects its datacenters to require less maintenance due to the reduced risk of environmental contamination like dust and debris and reduced thermal shock.

Subsea said it plans to colocate datacenters at sites offering various types of renewable energy infrastructure, and that it aims for its datacenters to consume only renewable power by 2026.

Read more:
Underwater datacenter will open for business this year - The Register

Real-World Cloud Attacks: The True Tasks of Cloud Ransomware Mitigation – DARKReading

In Part 1 of our tales of real-world cloud attacks, we examined real-world examples of two common cloud attacks. The first started from a software-as-a-service (SaaS) marketplace, demonstrating the breadth of potential access vectors for cloud attacks and how they can enable lateral movement into other cloud resources, including a company's AWS environment. The second cloud attack demonstrated how attackers take over cloud infrastructure to inject cryptominers for their profit.

As we have witnessed, more attacks have moved onto the cloud, so it was only a matter of time before ransomware attacks did, too. Let's look at two scenarios where attackers leveraged ransomware to gain profits, and how unique cloud capabilities helped victims avoid paying the ransom.

The first case (or rather cases, as this attack has appeared numerous times) is the notorious MongoDB ransomware, which has been ongoing for years. The attack itself is simple: attackers use a script to scan the internet (and now, common cloud vendor address spaces) for hosts running MongoDB exposed to the internet. The attackers then try to connect to the MongoDB instance with an empty admin password. If successful, the attack erases the database and replaces it with a double-extortion ransom note: pay, and your data will be returned; don't pay, and your data will be leaked.
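The first step of that scan can be approximated with a few lines of standard-library Python, and defenders can use the same idea to audit their own address space. This is only a sketch: it checks TCP reachability of the default MongoDB port, not whether authentication is actually enforced; a fuller audit would follow up with an unauthenticated driver connection attempt. Hosts listed are placeholders.

```python
import socket

# Minimal exposure check: can we open a TCP connection to the default
# MongoDB port? This mirrors the first step of the attackers' scan.
# It does NOT verify whether authentication is enforced -- a fuller
# audit would follow up with an unauthenticated driver connection.
MONGO_PORT = 27017

def port_reachable(host, port=MONGO_PORT, timeout=1.0):
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Example: audit hosts you own (placeholder list, not real targets).
for host in ["127.0.0.1"]:
    status = "EXPOSED" if port_reachable(host) else "closed/filtered"
    print(f"{host}:{MONGO_PORT} -> {status}")
```

The defensive fix is equally simple: bind MongoDB to private interfaces, require authentication, and restrict port 27017 with security groups so this check fails from the internet.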

Intervention was necessary to address the second part of the extortion scheme: data leakage. Luckily, the affected company had data backups, so recovery was easy, but the database contained considerable amounts of personally identifiable information (PII) which, if leaked, would have been a major crisis for the company. This forced it into the position of either paying a hefty ransom or dealing with the press. MongoDB's default logging, unfortunately, cannot provide a definitive answer regarding the data accessed, as not all potential types of data collection commands are logged by default.

This is where the cloud infrastructure became an advantage. While MongoDB may not log every command, AWS logs the traffic going in and out of servers, because it charges for network costs. Correlating the network traffic going out of the attacked server with the times when the attackers were connected to the compromised MongoDB server provided proof that the data could not have been downloaded by the attackers.
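The correlation itself is straightforward once the logs are in hand: sum the outbound bytes from flow records that overlap the attacker's session windows and compare the total against the size of the data at risk. The sketch below uses a simplified `(start, end, bytes_out)` record layout as a stand-in for the real AWS VPC Flow Logs schema; all timestamps and sizes are hypothetical.

```python
# Sketch of the flow-log correlation: sum egress bytes from records
# that overlap the attacker's session windows, then compare the total
# against the size of the data at risk. The record layout is a
# simplified stand-in for the AWS VPC Flow Logs format.

def egress_during_sessions(records, sessions):
    """records: list of (start_ts, end_ts, bytes_out) outbound flows.
    sessions: list of (session_start, session_end) attacker windows.
    Returns total outbound bytes overlapping any attacker session."""
    total = 0
    for start, end, nbytes in records:
        # A flow overlaps a session if the intervals intersect.
        if any(start < s_end and end > s_start for s_start, s_end in sessions):
            total += nbytes
    return total

# Hypothetical numbers: two attacker sessions, a handful of flows.
flows = [
    (100, 110, 5_000),   # falls inside session 1
    (200, 260, 12_000),  # outside any session
    (305, 330, 8_000),   # falls inside session 2
]
attacker_sessions = [(95, 120), (300, 340)]

exfil_upper_bound = egress_during_sessions(flows, attacker_sessions)
db_size_bytes = 50_000_000  # size of the PII database (hypothetical)
# If the upper bound on egress during the attacker's sessions is far
# below the database size, the data cannot have been downloaded in full.
print(exfil_upper_bound, exfil_upper_bound < db_size_bytes)
```

This counts every overlapping flow in full, so it deliberately over-estimates: if even the upper bound is far smaller than the database, the exfiltration claim collapses.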

This allowed the company to avoid paying the ransom and ignore the threat. As expected, nothing further was heard from the attackers.

Another company experienced an attack on its main servers running on AWS EC2, where it was hit by a ransomware Trojan, not unlike those seen on on-premises servers. As often occurs these days, this was another double-extortion ransomware attack and the company needed help dealing with both issues.

Luckily, due to the company's cloud architecture and preparedness, there were AWS snapshots of the environment going back 14 days. The attackers were unaware of the snapshots and had not disabled them in their attack. This allowed the company to immediately revert to the day before the data encryption, resolving the first part of the attack with minimal effort. That still left two challenges to deal with: the potential data leak and the eradication of the attackers from the environment.

To address these challenges, there was a full investigation of the breach, which turned out to be quite complex due to the hybrid nature of their environment. The attackers compromised a single account with limited access, used by an IT person. They then identified a legacy on-premises server where that individual was an admin and used it to take over the Okta service account, allowing privilege escalation. Finally, using a decommissioned VPN service, they were able to hop to the cloud environment. Using the elevated privileges, they took over the EC2 servers and installed the malware.

The investigation yielded two significant findings. The first was the attack timeline. It showed that the compromise of all hosts occurred before the earliest snapshots were taken, indicating that the recovered servers were compromised and could not be used. New servers were installed, the data was transferred to them, and the original affected servers were purged.

The second finding was even more surprising. Malware analysis identified that the attackers used rclone.exe to copy the files to a remote location. The connection credentials were hardcoded in the malware, so the company was able to connect to the same location, identify its files, and remove them, eliminating the attackers' access to the data and eradicating the extortion aspect of the attack.

As these real-life scenarios reveal, attackers are infiltrating the cloud and cloud breaches are on the rise. It's time for organizations to prepare for cloud incidents. Cybercriminals are leveraging cloud capabilities in attacks, and you should use them, too, to protect your organization and prevent a crisis from hitting the headlines.

Read the original here:
Real-World Cloud Attacks: The True Tasks of Cloud Ransomware Mitigation - DARKReading

Finance Cloud Market to be Worth $101.71 Billion by 2030: Grand View Research, Inc. – Yahoo Finance

SAN FRANCISCO, Sept. 1, 2022 /PRNewswire/ --The global finance cloud market size is anticipated to reach USD 101.71 billion by 2030, according to a new report by Grand View Research, Inc. The market is expected to expand at a CAGR of 20.3% from 2022 to 2030. Financial organizations are modernizing their processes and embracing different aspects of digital transformation owing to the convenience offered by cloud solutions. Financial institutions using the cloud model benefit from improved disaster recovery, fault tolerance, and data protection.


Key Industry Insights & Findings from the report:

In terms of solution, the security segment accounted for the largest revenue share of USD 6.63 billion in 2021 and is projected to maintain its position during the forecast period. Rising security concerns due to organizations moving towards cloud-based services & tools and digital transformation strategy as part of their infrastructure development are driving the segment growth. The governance, risk & compliance segment is expected to register the highest CAGR of 22.6% during the projected period.

Based on service, the managed segment led the market with a 64.8% share in 2021 and is expected to retain its position during the forecast period. Managed services allow businesses to outsource all or a portion of their IT operations & infrastructure so they may concentrate on their main corporate objectives. By lowering operational expenditure (OPEX) and capital expenditure (CAPEX), outsourcing enables contact center-based businesses to lower the cost of network and IT spending.

The professional services segment is expected to register the highest CAGR of 23.2% in terms of revenue during the forecast period of 2022 to 2030.

In terms of deployment, the public cloud segment held the largest revenue share of USD 10.22 billion in 2021 and is projected to maintain its position during the forecast period. Organizations using the public cloud are not in charge of administering cloud hosting services; the management and upkeep of the data center where data is stored fall under the purview of the cloud service provider.

This eliminates protracted procurement procedures and the wait for operations teams to set up servers, install operating systems, and create connectivity. The public cloud also reduces expenditure because businesses only pay for the resources they use, thus cutting down on wasteful spending on idle resources. The private cloud segment is expected to register a CAGR of 22.9% during the assessment period.

Based on application, the wealth management segment held the largest revenue share of 29.6% in 2021 and is projected to maintain its position during the projected period. Moving wealth management systems to the cloud could assist in providing agile and flexible solutions that could help create a strategic competitive edge while positioning the business for long-term success. Companies are entering into partnerships for the adoption of cloud-based wealth management services.

The large enterprises segment dominated the market with a share of 68.1% in 2021. The small & medium enterprises segment is likely to register the highest CAGR of 24.0% during the forecast timeline. The growth of this segment is mainly due to the numerous benefits of cloud computing, including improved customer relationship management, regulatory compliance, data analysis, and assistance in detecting frauds in the financial sector.

According to a survey conducted by Ernst & Young Global Limited, a U.K.-based company, in March 2022, 39% of medium enterprises had made progress toward the cloud.

For instance, in January 2022, Avaloq, a provider of business process as a service (BPaaS) and software as a service (SaaS), announced that it is extending its long-standing partnership with RBC Wealth Management, part of the Royal Bank of Canada, throughout Asia, switching to a cloud-based SaaS model and updating the wealth management platform with cutting-edge solutions. The asset management segment is anticipated to register the highest CAGR of 23.3% during the assessment period.

In terms of end-use, the banking and financial services segment generated the largest revenue of USD 13.76 billion in 2021 and is projected to retain its dominance during the projected period. The need to distinguish and personalize services has made it essential for banks to modernize their core technology foundation to cloud-based infrastructure. This was further expedited by the pandemic's requirement for distant operations and the exponential growth of digital transactions.

For instance, in July 2020, Microsoft and Finastra, one of the largest fintech organizations, which offers solutions for the financial sector, announced a global strategic partnership to accelerate transformation in financial services. The insurance segment is anticipated to register the highest CAGR of 23.5% during the projected timeline.

North America dominated the market in 2021 with a revenue share of 35.0% and is expected to expand at a CAGR of 18.9% during the forecast period. Asia Pacific is likely to register the highest CAGR of 21.6% during this timeline, owing to the rapid increase in digitalization and sustained national investment in technological advancements. The rapid rise of banking and insurance organizations as well as the increasing demand for cloud services support the Asia Pacific market's expansion.

Read 100-page full market research report, "Finance Cloud Market Size, Share & Trends Analysis Report By Solution, By Service, By Deployment, By Enterprise, By Application, By End-use, By Region, And Segment Forecasts, 2022 - 2030", published by Grand View Research.

Finance Cloud Market Growth & Trends

The volume of data breaches has surged in recent years, forcing financial companies to step up their security measures. According to the Financial Services Sector Exposure Report 2018-2021 by Constella Intelligence, a global threat intelligence organization, there were 6,472 breaches and data leaks found between 2018 and 2021, with more than 3.3 million records stolen from 20 organizations of Fortune 500.

The COVID-19 pandemic had a positive effect on the market for finance cloud. The financial sector has significantly altered its existing business strategy, improving its business performance and modernizing the old product lines with more cost-effective strategies. To maintain effective internal operations in the event of a pandemic, banks and other financial institutions have embraced the cloud much more widely. As a result, there has been a significant increase in demand for financial cloud during this period.

The market is anticipated to benefit from strategies adopted such as frequent launches, developments, and innovations by market players in the finance cloud industry. For instance, in May 2021, Google Cloud officially confirmed the data share solution for financial services. The data share solution is created to enable sharing of market data with enhanced security and ease across the capital markets, including data consumers like asset managers, investment banks, and hedge funds, as well as market data issuers like exchanges and other providers.

Finance Cloud Market Segmentation

Grand View Research has segmented the global finance cloud market based on solution, service, deployment, enterprise, application, end-use, and region:

Finance Cloud Market - Solution Outlook (Revenue, USD Million, 2017 - 2030)

Financial Forecasting

Financial Reporting & Analysis

Security

Governance, Risk & Compliances

Others

Finance Cloud Market - Service Outlook (Revenue, USD Million, 2017 - 2030)

Professional Services

Managed Services

Finance Cloud Market - Deployment Outlook (Revenue, USD Million, 2017 - 2030)

Public Cloud

Private Cloud

Hybrid Cloud

Finance Cloud Market - Enterprise Outlook (Revenue, USD Million, 2017 - 2030)

Finance Cloud Market - Application Outlook (Revenue, USD Million, 2017 - 2030)

Finance Cloud Market - End-Use Outlook (Revenue, USD Million, 2017 - 2030)

Finance Cloud Market - Regional Outlook (Revenue, USD Million, 2017 - 2030)

North America

Europe

Asia Pacific

Latin America

Middle East & Africa

List of Key Players in Finance Cloud Market

Check out more related studies published by Grand View Research:

Fintech-as-a-Service Market - The global fintech-as-a-service market size is expected to reach USD 949.49 billion by 2030, growing at a CAGR of 17.2% from 2022 to 2030, according to a new report by Grand View Research, Inc. The increasing adoption of financial technology-based solutions and platforms globally is anticipated to drive the growth of the market. The increasing adoption of artificial intelligence, cloud-based software, and big data integrated with financial services is expected to drive the growth of the market for fintech-as-a-service.

Smart Finance Services Market - The global smart finance services market size is expected to reach USD 46.85 million by 2028 and is expected to grow at a CAGR of 2.9% from 2022 to 2028, according to a new report by Grand View Research, Inc. The crucial growth factors of the market include the growing demand for the various IoT-based ATM services, such as installation and management services, across the globe.

Artificial Intelligence In Fintech Market - The global artificial intelligence in fintech market size is expected to reach USD 41.16 billion by 2030, growing at a CAGR of 16.5% from 2022 to 2030, according to a new report by Grand View Research, Inc. Artificial intelligence (AI) is widely used in financial organizations to improve precision, enhance efficiency, and resolve queries instantly through digital banking channels. AI technology like machine learning can help organizations raise their value by improving loan underwriting and eliminating financial risk.

Browse through Grand View Research's Next Generation Technologies Industry Research Reports.

About Grand View Research

Grand View Research, a U.S.-based market research and consulting company, provides syndicated as well as customized research reports and consulting services. Registered in California and headquartered in San Francisco, the company comprises over 425 analysts and consultants, adding more than 1,200 market research reports to its vast database each year. These reports offer in-depth analysis of 46 industries across 25 major countries worldwide. With the help of an interactive market intelligence platform, Grand View Research helps Fortune 500 companies and renowned academic institutes understand the global and regional business environment and gauge the opportunities that lie ahead.

Contact: Sherry James, Corporate Sales Specialist, USA, Grand View Research, Inc. | Phone: 1-415-349-0058 | Toll Free: 1-888-202-9519 | Email: sales@grandviewresearch.com | Web: https://www.grandviewresearch.com | Grand View Compass | Astra ESG Solutions | Follow Us: LinkedIn | Twitter


View original content:https://www.prnewswire.com/news-releases/finance-cloud-market-to-be-worth-101-71-billion-by-2030-grand-view-research-inc-301616196.html

SOURCE Grand View Research, Inc.

The rest is here:
Finance Cloud Market to be Worth $101.71 Billion by 2030: Grand View Research, Inc. - Yahoo Finance

Everything as a Service?: Government and the Cloud – Government Technology

State and local IT leaders are galloping toward the cloud, whether they know it or not.

Thirty-seven percent say they moved on-premise infrastructure to public cloud this past year, according to the 2022 CompTIA Public Technology Institute (PTI) State of City and County IT National Survey. And 32 percent said that migrating systems and applications to the cloud will be a top priority in the next two years.

Yet much of state and local cloud adoption still goes unrecognized as such, said Alan Shark, vice president of public sector and executive director at PTI. "People say they're not in the cloud very much, and when you start asking questions, it turns out that they're very much in the cloud."

Taken together, formal migrations to the cloud and adoption of cloud-based SaaS indicate a steady shift in IT resources: away from on-prem legacy solutions and toward the cloud. "It's continuing to gain traction," Shark said.

In Utah, for example, the building that housed the state's main data center is slated to be demolished, and CIO Alan Fuller is seizing the moment to jump-start his cloud migration. "We're hoping to get better scalability, elasticity, security, redundancy and lower cost," he told GT at the NASCIO Midyear Conference in May. "We have a multi-vendor cloud strategy and we want to move as many services and applications as we can from on-premise to the cloud."

Here, a range of state and local leaders help to paint a picture of the state of cloud adoption: the early wins, and the challenges yet to be faced.

The push came from a Deloitte assessment, which found in part that the state could uncover a lot of savings by consolidating its data center investments. "Through cloud migration, we could avoid a $15 to $30 million investment that would have been necessary just to keep our primary data center facility going."

The state subsequently demolished its primary data center. (The site is now a parking lot.) Sloan reports that 80 percent of that activity moved directly to the cloud, and an additional 5 percent migrated to a shared data center with a much smaller physical footprint.

In the process, the IT team identified about 96 data centers across state agencies, including everything from formal data centers to servers running in spaces under desks. Ninety-two of those have since been decommissioned, with only a few outliers kept online.

In order to ensure the push to cloud aligns with specific agency needs, the IT team has established relationships with multiple cloud service providers, including Amazon Web Services, Microsoft Azure, Google Cloud and IBMs Z Cloud.

The way a particular agency's infrastructure has matured over time influences which cloud option is best suited for its needs. The child protection agency, for example, had already replaced an aging case management system with a solution that was based on Microsoft Dynamics. "That went into the Dynamics 365 cloud," Sloan said. "We've allowed the agencies to have a voice in selecting the cloud providers that best meet their needs."

It shouldn't come as a surprise that Sloan makes a strong case for the benefits of cloud adoption. As government services and data shift to the cloud, he said, "you inherently gain access to scalability and elasticity, as well as a whole set of feature functions that you don't inherently have in your current on-premise data center environment."

There's a benefit on the personnel side as well. "You get people out of having to do the managing, the administration, all the care and feeding for physical servers. Now the same number of people are able to do more things. They have more time to put into more valuable activities," he said.

One big adjustment has been the shift from capital to operational budgeting. Here, Sloan advocates for a gradual approach. "It takes time. If you try and flip a big switch, there can be some real challenges, depending upon where you are in your capex cycle," he said.

"If you're not at the end of a capex cycle, you potentially end up having duplicate costs," he said. "So we work with our people to plan ahead. We will say: You're not going to be buying new servers the next time this comes around, so if you're in year two of your five-year server cycle, plan now. You'll need to make the transition when those are up on their life cycle."

The city has its office productivity tools and online collaboration tools in the cloud, along with an e-signature platform and a records request tool. Next up: enterprise resource planning for human capital management and finance, followed by public safety.

"We will have a big ERP system launching here the first week of October, and right on the tail of that, probably in late January or early February, we're doing a major public safety systems swap-over to cloud," said CIO Chris Seidt.

While the transition has gone smoothly, Seidt is candid about the bumps he's hit along the road, and the need to manage thoughtfully through the details of a cloud migration. When subscribing to cloud services, for example, "it's important to pay close attention, to ensure you are subscribing to all of the things that you need," he said.

"Because there may be different tiers of subscription models for different cloud services, you really have to look carefully at what is included in that, because it may not include all of the pieces that you would need as an organization in order to maintain your security posture, in order to gain access to functionality," he said.

In fact, Louisville initially undersubscribed for its office productivity suite. "It lacked some of the practical security aspects that really needed to accompany it," Seidt said. "If you stepped up a tier, a lot of those security tools were just included. Of course, there's a cost jump there too."

In addition to working through those technical details, Seidt has also focused heavily on change management.

One big barrier is staff willingness to go along, he said. "I've got a couple systems engineers that have been with me for decades, and now we're asking them to do cloud. So one of the first steps was to make sure people got comfortable. When people are worried about losing their jobs, you have to get out ahead of that. We certainly didn't go into our cloud journey looking to reduce our headcount. If anything, we wanted to repurpose it."

To that end, the city's cloud journey has included not just messaging about the virtues of cloud, but also training to reskill IT for the task that lies ahead. Seidt said he's quadrupled the training spend since 2018, in an effort to upskill staff for the cloud transition.

"When you let them know that you're willing to train them in those new skill sets, it makes them more engaged," Seidt said. "Some of our systems guys are our biggest champions now, because they see the bigger picture."

While others work with multiple cloud providers, Seidt for now is focused solely on AWS engagements. With a single provider, he said, he can more easily control and manage his cloud resources.

At the same time, Seidt has been working with others in the city to repurpose the data center space he's no longer using.

"In a lot of city governments, the data center serves as more than just a place to house compute and storage. It may also have your network core redundancy. It may also have fiber terminations for municipally owned fiber. There's a lot of other things that live there," he said.

As he moves to decommission compute and storage, he's looking for opportunities to refresh the HVAC systems for downsized demand and potentially repurpose those spaces for other uses.

As for the shift from capex to opex spending, Seidt said good communication is the key to success. "It does require a lot of cooperation with your chief financial officer and your elected officials, making sure that they understand the business case for it and the benefits that come with it," he said.

"We have been slow to embrace the cloud," said Director of Information Services Eric Romero. "We're so busy here, it is hard to find the time to get up to speed, to learn the newer technologies and to get to that comfort level."

The city has nonetheless already shifted a few applications to the cloud, including 311, permitting and a nearly billion-dollar road program that's leveraging cloud-based tools for project management. Romero said he's already seeing the benefits of a modernized approach. "The upside is certainly that it's less taxing on my department, my resources, my staff," he said.

Still, Romero has questions. He's particularly concerned about data portability, or the lack thereof. "Once all the data is in the cloud, how do we get it out of the cloud in a way that we can either use it for historical purposes or convert it for use in another system?" he said, noting that his sense is that cloud service providers don't want to make this easy.
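Portability concerns like Romero's are often mitigated by keeping exports in open, widely supported formats. A minimal sketch of that idea in Python (the 311-style records and field names here are hypothetical, not taken from the article): the same records are serialized to both JSON and CSV, so they can later be reloaded into another system or archived for historical use.

```python
import csv
import io
import json

def export_records(records):
    """Serialize a list of record dicts to both JSON and CSV strings.

    Keeping exports in open formats (JSON, CSV) preserves the option
    to reload the data into another system later.
    """
    as_json = json.dumps(records, indent=2, sort_keys=True)

    buf = io.StringIO()
    # Union of all keys across records, sorted for a stable column order.
    fieldnames = sorted({key for rec in records for key in rec})
    writer = csv.DictWriter(buf, fieldnames=fieldnames)
    writer.writeheader()
    writer.writerows(records)
    return as_json, buf.getvalue()

# Example: two hypothetical 311 service-request records.
requests_311 = [
    {"id": 1, "type": "pothole", "status": "open"},
    {"id": 2, "type": "streetlight", "status": "closed"},
]
as_json, as_csv = export_records(requests_311)
```

Exporting on a regular schedule, rather than only at contract exit, keeps the "how do we get it out" question answered before it becomes urgent.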

He said he'd like to see more data on cloud reliability, too: downtime metrics and the like. He'd also like some assurance that a cloud provider would be able to respond quickly to an urgent need.

"When we have an issue, something that might not seem critical to them might be extremely critical to us. If we had it in-house, my guys would be working on it 24 hours a day until it got resolved," he said. "That might come through with the contract process, but will they really understand it?"

At this point, when he does make a move, it's typically a matter of cloud by necessity. "For example, we're running an old version of Exchange for email, and every time that we get a patch from Microsoft, we have issues getting the servers back online," he said. "We just finally came to the conclusion that we had to move."

In tandem with such efforts, Romero has been communicating with folks on the budgeting side to ensure a smooth transition to opex as needed.

American Rescue Plan money is helping bridge the gap between capex and opex for the moment, but eventually those new cloud-based contracts will come due again, perhaps two or five years down the road, and Romero said it will be important to have solidified the funding model in support of that spending.

"I was very upfront with our finance folks and the administration here, saying: These are my needs and we are going to use ARP funds for this, but we've got to address the budget or else in five years we're going to be cutting these services."

In the long run, he said, changes to the budget process and incremental steps toward the cloud together will put the IT team on a better footing. Cloud will ultimately lower the management burden, freeing talent to tackle higher-level tasks. And a move to the cloud could also make it easier to hire the people he needs.

"As candidates come in and they see that you're not fully embracing the cloud, that can be a detriment to their considering the position," he said. Seen in this light, cloud becomes increasingly a must-have for state and local governments looking to attract the best and brightest in a highly competitive talent market.

More to the point, Romero describes cloud as an inevitable technological evolution.

"We know that this is coming. We're seeing more and more: not just software applications, but solutions that are being pushed to a subscription-based model, most often cloud-based," he said. "We know that it's coming and at some point, we will get there."

More:
Everything as a Service?: Government and the Cloud - Government Technology

Ransomware attackers expand the attack surface. This Week in Ransomware Friday, Sept 2 – IT World Canada

Ransomware continues to grow and expand, both in the number of attackers and the number of potential victims. This week we feature some of the attackers' strategies described in recent news items.

What's next: ransomware in a box? New Agenda ransomware can be customized for each victim

A new ransomware strain called Agenda, written in Google's open source programming language Go (aka Golang), was detected and reported by researchers at Trend Micro earlier this week. There has been a trend toward using newer languages like Go and Rust to create malware, particularly ransomware.

The fact that many of these languages can operate cross-platform makes them a much greater threat. Go programs are cross-platform and standalone: they compile to self-contained binaries that can execute without a Go runtime installed on the host system.

In addition, the creators have added a new wrinkle, making this new variant easily customizable. The strain is being sold on the dark web as Ransomware as a Service (RaaS). Qilin, the threat actor selling it to its affiliates, claims it will allow them to easily customize the ransomware for each victim:

Finally, Agenda has a clever detection-evasion technique also used by the REvil ransomware variant: it changes the user password and enables automatic login with the new credentials. This allows the attacker to reboot into safe mode and control the victim's system.
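The autologon trick leaves an auditable trace: on Windows, automatic login is controlled by values such as AutoAdminLogon and DefaultPassword under the Winlogon registry key. A minimal defensive sketch, assuming those values have already been read into a Python dict (for example, from a registry export); the function name and example data are illustrative, not part of any published tooling:

```python
def flag_autologon_risk(winlogon_values):
    """Flag risky Windows autologon settings.

    `winlogon_values` is assumed to be a dict of values read from the
    HKLM Winlogon key (e.g., parsed from a registry export). Returns a
    list of human-readable findings; an empty list means nothing flagged.
    """
    findings = []
    if winlogon_values.get("AutoAdminLogon") == "1":
        findings.append("AutoAdminLogon is enabled")
        # Autologon plus a stored cleartext password is the combination
        # the safe-mode evasion technique described above relies on.
        if "DefaultPassword" in winlogon_values:
            findings.append("DefaultPassword is set in cleartext")
    return findings

# Example: a configuration resembling the post-compromise state.
suspicious = {
    "AutoAdminLogon": "1",
    "DefaultUserName": "Administrator",
    "DefaultPassword": "hunter2",
}
```

Alerting on unexpected changes to these values, rather than scanning once, is what would actually catch the technique mid-attack.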

Trend Micro reported that this allowed one attacker to move from reconnaissance to full-fledged attack in only two days. On the first day, the attacker scanned a Citrix server, and on the second day mounted a customized attack.

For more information you can review the original Trend Micro posting.

New Linux ransomware families

Another way that threat actors are expanding the attack surface is by targeting Linux, one of the predominant operating systems used on internet and cloud servers. RaaS offerings are increasingly targeting Linux systems.

Although regarded as a very secure operating system, and despite a consistent move to patch vulnerabilities, the large number of Linux distributions in use worldwide ensures there are a significant number of vulnerabilities at any given time. Failure to update and patch systems creates a large potential target base.

But software vulnerabilities are not the only area of weakness. Configuration mistakes are often the more likely factor in the breach of a Linux system, according to researchers at Trend Micro.

Remarkably, these include easily remedied issues such as:

To quote Trend's report, "given the prevalence of Linux, ransomware actors find the operating system to be a very lucrative target."
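Configuration checks of this kind are straightforward to automate. Below is a minimal sketch that scans an sshd_config file for a few commonly flagged directives; the specific checks (root SSH login, password authentication, empty passwords) are generic hardening items assumed for illustration, not Trend Micro's list:

```python
def audit_sshd_config(config_text):
    """Flag commonly risky sshd_config directives.

    Takes the text of an sshd_config file and returns a list of
    findings. The checks are illustrative, not an exhaustive audit.
    """
    risky = {
        "permitrootlogin": "yes",         # direct root login over SSH
        "passwordauthentication": "yes",  # allows password brute-forcing
        "permitemptypasswords": "yes",    # accounts with no password
    }
    findings = []
    for line in config_text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip blanks and comments
        parts = line.split(None, 1)
        if len(parts) != 2:
            continue
        key, value = parts[0].lower(), parts[1].strip().lower()
        if risky.get(key) == value:
            findings.append(f"{parts[0]} {parts[1]} is risky")
    return findings
```

Run against a real file with `audit_sshd_config(open("/etc/ssh/sshd_config").read())`; wiring such checks into a scheduled job turns a one-time review into continuous coverage.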

Ransomware going to the dogs is no joke

As RaaS and customizability become more and more prevalent, there's an increasing ability to target smaller and more specific groups. We are familiar with ransomware attacking health care organizations, but recently the United Veterinary Services Association has written to its members with recommendations to increase ransomware prevention after an attack that hit more than 700 animal health networks around the world.

It is a reminder that no group, regardless of size or type of business, is immune to ransomware. Every organization must communicate the need to have, at a minimum, the basics of ransomware protection in place:
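One basic that appears on virtually every such checklist is keeping backups and verifying that they are intact (this specific item is an assumption here, since the article's own list is not reproduced above). A minimal sketch of checksum-based backup verification:

```python
import hashlib

def sha256_of(data: bytes) -> str:
    """Return the hex SHA-256 digest of a byte string."""
    return hashlib.sha256(data).hexdigest()

def verify_backups(manifest, read_backup):
    """Compare backed-up files against a manifest of known-good digests.

    `manifest` maps file names to the SHA-256 digests recorded at backup
    time; `read_backup` is a callable returning a file's bytes from
    backup storage. Returns the list of files that fail the check, so an
    empty list means the backup set verified cleanly.
    """
    failed = []
    for name, expected in manifest.items():
        if sha256_of(read_backup(name)) != expected:
            failed.append(name)
    return failed
```

A verification pass like this, run against an offline or immutable copy, is what distinguishes a backup that survives a ransomware incident from one that was silently encrypted along with everything else.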

Read this article:
Ransomware attackers expand the attack surface. This Week in Ransomware Friday, Sept 2 - IT World Canada