Category Archives: Cloud Servers
The future of OT security in an IT-OT converged world – The Register
Paid Feature If you thought the industrial internet of things (IIoT) was the cutting edge of industrial control systems, think again. Companies have been allowing external access to sensors and controllers in factories and utilities for a while now, but forward-thinking firms are exploring a new development: operating their industrial control systems (ICS) entirely from the cloud. That raises a critical question: who's going to protect it all?
Dave Masson, Director of Enterprise Security at Darktrace, calls this new trend 'ICSaaS'. "ICS for the cloud is starting to happen now. That represents a whole new world for industrial technology and security."
This trend has been possible for the last decade or so, he explains, but the uptake has been slow. Now, Masson is hearing from clients who are actioning it.
The move to cloud-controlled ICS took this long to begin in part because of the cultural differences in the ICS world. One mistake configuring the operational technology (OT) underpinning ICS can have profound effects, Masson says. Opening this infrastructure up to access from the internet was a bold enough step on its own, and took a big cultural shift. Putting the means of control in the cloud takes a further shift in mentality.
"Although there are positives, it will still impact reliability," says Masson. "There are ramifications for ICS performance, security, and therefore safety." Many of these environments can't tolerate any downtime at all.
Operational technology admins may be nervous about allowing cloud-based control of their infrastructures, but they're attracted by the potential benefits, Masson asserts. The pandemic has been a strong driver, allowing operators to remotely control industrial systems when they haven't been able to come on-site.
Organizations could enable remote access without cloud-based systems by punching holes in on-premises firewalls, but doing so made cloud-based access more plausible, opening up the conversation.
If operators are accessing ICS remotely anyway, then it makes it easier to consider cloud-based interfaces, Masson says. These make the management infrastructure cheaper and easier to operate. He points out the arguments now familiar to IT decision makers, including the opportunity to reduce operators' own hardware investments and potentially cut their data center real estate. Companies are now seriously considering taking advantage of these operational benefits for the first time.
In this scenario, the hardware components that make up ICS stay where they are. We're not talking about virtualizing programmable logic controllers here. It's the data governing their operation that moves to the cloud. That means the applications, databases, and other services that operators rely on to keep those components running smoothly. Instead of handling planning and scheduling using on-premises data, they'll do it using cloud platforms that then tunnel communications to those legacy systems in the field - which still expect to be spoken to via specialized protocols like Modbus.
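The article doesn't name specific tooling for that tunneling, but to make the field side concrete, here is a minimal sketch of what polling a controller over Modbus/TCP looks like in Python with the pymodbus library. The host, unit ID, and register addresses are illustrative placeholders, and keyword names differ slightly between pymodbus releases.

```python
# Minimal Modbus/TCP read, sketched with pymodbus (3.x-style API).
# Host, port, unit id, and register addresses are placeholders.
from pymodbus.client import ModbusTcpClient

client = ModbusTcpClient("192.0.2.10", port=502)
client.connect()

# Read two holding registers starting at address 0 from unit 1.
result = client.read_holding_registers(0, count=2, slave=1)
if not result.isError():
    print("register values:", result.registers)

client.close()
```

A cloud-hosted scheduler sits on the other end of exactly this kind of exchange, which is why the protocol's lack of built-in authentication matters so much once the perimeter moves.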
Security is just as important in these new cloud-enabled environments as it was in the old legacy walled gardens, but the challenges facing defenders are different. The cloud is eroding the gap between IT and OT, explains Masson. OT is now part of what looks increasingly like a common IT network.
"Now, anybody can access this network from anywhere, so you've got to make sure you have good controls around who's got permission," he says. "This raises questions about data security, compliance, and regulation."
Security teams grappling with this face challenges including more complexity in their infrastructures as they bring different devices and protocols into the fray, with traffic running through different gateways. The number of OT devices can be staggering, far outnumbering the number of servers or endpoints that an IT security team has dealt with before.
OT admins, used to maintaining an iron grip on their infrastructure, now risk a loss of visibility and control, warns Masson. He calls the people looking after this management data in an ICS setting 'data historians'.
"That data is now over the horizon and you need to know what people are doing with it in the cloud," he warns, pointing to a litany of problems with misconfigured databases and storage resources. The prospect of exposing ICS management data to the general public due to a dashboard misstep would turn most data historians grey.
There are organizational worries to consider beyond the technological ones, Masson adds. Converging IT/OT infrastructures is only part of the story. You must also decide who is managing security for the expanded network. Is it the IT security team, or the OT team, or both? Do they speak the same language? Will the organization have to contend with political strife and territorial battles?
When all these challenges combine, it's easy for security problems to slip through the gaps. It takes a cohesive approach with multiple checks and balances to ensure protection that extends from the physical equipment in the field through to the infrastructure that controls it in the cloud. It takes a sharp focus on access controls and permissions at all points in the ecosystem.
This new, more complex environment demands a new approach to security, according to Masson.
Zero trust architecture is a common talking point today when discussing cloud-based security, and that will be important, he says. Its focus on identity-based access, backed by account controls like multi-factor authentication, is valuable. "But that won't tell you when you've misconfigured something providing you with access to your ICS from the cloud," he points out.
He warns that IT teams can't rely on the same protective measures they used in the past. "They'll have one product for this and another for that, all using hard-coded predefined rules and signatures that aren't really designed to adapt with sudden transformation. The rules-based firewalls that might have offered some protection in the past will no longer cut it in a converged IT/OT cloud-based environment."
Darktrace's AI technology flips this narrative, evaluating threats to complex systems not using a rigid set of rules, but instead leveraging unsupervised machine learning to constantly understand an organization's 'pattern of life'.
Instead of running every traffic pattern against a complex and often outdated series of signatures to detect malicious behaviour, Darktrace's tools look for activities that deviate from this pattern of life. If it detects communications between ICS systems that don't usually communicate, for example, or unusual access to ICS control systems in the cloud, its AI will investigate the activity in real time.
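Darktrace's models are proprietary, so the following is only a generic sketch of the underlying idea, unsupervised learning over traffic features rather than signature matching, using scikit-learn's IsolationForest on synthetic connection records:

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Synthetic 'pattern of life': one row per connection, with features such as
# (bytes sent, bytes received, session duration in seconds).
normal = rng.normal(loc=[500, 2000, 30], scale=[50, 200, 5], size=(1000, 3))

# Fit on observed traffic only -- no labels and no signatures.
model = IsolationForest(contamination=0.01, random_state=0).fit(normal)

# A connection that deviates from the learned baseline, e.g. a large
# outbound transfer from a host that normally sends very little.
suspect = np.array([[50_000, 100, 600]])
print(model.predict(suspect))  # [-1] means flagged as anomalous
```

Real deployments model far richer features per device and per protocol, but the shape of the decision, scoring against a learned baseline instead of a rule list, is the same.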
If granted permission, Darktrace's Antigena product will also take its own steps to contain the threat. It uses an AI-powered Autonomous Response mechanism that takes measured steps to neutralize malicious behaviour, all while allowing normal business operations to continue to run smoothly.
This approach has the advantage of not relying on deep packet inspection for its results. That's a big plus in an environment where tunnelled communications between cloud-based management systems and ICS components are often so obscure that they're effectively encrypted.
"There are tons of these protocols, some invented by people who are now dead," Masson says. "So we stay protocol agnostic."
While the company is learning some of the protocols for clients that demand it, the AI technology doesn't need to understand what's happening in a packet. Instead, Darktrace looks at what the packet is doing within the broader infrastructure, using its self-learning AI to assess deviations from the norm.
A number of cloud-first critical infrastructure organizations use Darktrace to defend their cloud environments, one being Mainstream Renewable Power, a major player in wind and solar energy.
ICSaaS is only one part of a broader shift towards OT/IT convergence, says Masson. The advent of 5G, along with the development of edge computing, will accelerate the trend still further.
"Right now people focus on protecting the data that's in the cloud, but with 5G and edge computing that data won't always stay there; it will be on the edge where the computation is actually taking place. Masson argues that self-learning AI, built to maintain a picture of normality in volatile environments, will be well-placed to cope with the speed and complexity of edge-based scenarios.
ICS will be deeply ingrained in this new computing model, which will see local 5G-based networks supporting edge facilities and sensors with software-defined network functions including network slicing. With the world on the cusp of this change, new approaches to protecting it all from attack will be crucial.
Masson is certain that AI will be squarely in the middle of the picture, protecting the network from logic controllers in the field through to virtual servers in hyperscale cloud architectures - and everything in between.
This article is sponsored by Darktrace.
Still reeling from the Great Facebook Blackout of 2021? Turns out Zuck is not the worst offender – The Register
UK-based price comparison and broadband swapping service Uswitch has totted up the figures and come up with a surprising candidate for most outage incidents in 2021.
Outages are a tricky thing to quantify, and the metric used by Uswitch was a simple count of the most visited websites against a total number of incidents reported by DownDetector. The "visited website" metric ruled out the likes of Azure and AWS, although both services lurk behind the scenes. However, hotspots like Facebook, Instagram, and TikTok all qualified.
At 180 incidents, according to Uswitch, Reddit was head and shoulders ahead of the pack. A whopping 60 per cent of issues were related to the forum's app. Following, at 107 incidents, was Discord (server connection woes afflicted 73 per cent of blackouts), just snatching second place from Instagram. The food and lifestyle happy snapper accounted for 106 incidents (the app accounted for 55 per cent of problems).
Close behind Instagram was its big brother Facebook, with 95 outages (of which the website accounted for 49 per cent).
However, a count of incidents does not really tell the whole story, and there was no indication from Uswitch which of Facebook's 95 borks was the big one that took out WhatsApp and Instagram in the blast radius and left the world bereft of angry uncle posts and shots of dinner plates for a happy few hours. The Register asked Uswitch for a breakdown by duration.
It told us: "The data we collected is the average count of outages. Unfortunately, the data for how long the outages lasted is not readily available, and trying to create stats from what we do have I believe will not be accurate enough to run with."
WhatsApp itself was in the better-behaved end of the table, with a mere 34 outages, just above TikTok's 30. The video streamers also fared better than social media, with Netflix and YouTube tied at 44 outages.
And as for that bastion of whingeing when things go wrong, Twitter, Uswitch put it quite some way behind Facebook and a gnat's whisker ahead of YouTube at 50 incidents.
While enterprises have service level agreements in place to punish providers for outages (although rarely more than a simple credit rather than the actual cost of lost business), consumers are not so fortunate. What does one get for handing over all that personal data? An outage-prone service, it appears.
Use smarter key management to secure your future anywhere in the cloud – PCR-online.biz
Marcella Arthur, Vice President Global Marketing at Unbound Security, explores using smarter key management in the cloud
Cloud computing remains a dominant trend in global business, boosted by the mass shift to remote working during the pandemic. A report from Deloitte highlights how investment in cloud infrastructure increased through 2020 with the scale of mergers and acquisitions indicating significant expectations of further growth.
Yet as organisations migrate workloads to the cloud in search of greater agility, innovation, and reduced costs, they are facing serious security challenges that conventional approaches fail to meet, particularly if they adopt hybrid approaches. By 2022, analyst firm IDC estimates, more than 90 per cent of enterprises worldwide will be relying on a mix of on-premises/dedicated private clouds, multiple public clouds, and legacy platforms to meet their infrastructure needs. As companies become more distributed and more complex than ever through their entry into the hybrid cloud, they find themselves with massively extended security perimeters while constantly exchanging high volumes of data.
Combined with the imposition of stricter demands by regulators, these developments make control of the encryption keys used to protect data more important than ever. For those with heavy investments in on-premises infrastructure, hardware security modules (HSMs), or apps partially in the cloud, the inability to secure and manage the cryptographic keys that protect their data across a multitude of scenarios has the potential to bring their organisations to an extremely costly standstill.
Whenever IT managers decide on a cloud shift that requires some existing hardware to remain intact, among the problems they face are the time-consuming tasks of maintaining multiple systems, implementing key management solutions, and creating multiple keys depending on the application supported and the authentication path. Developers and solution architects take on the biggest migration risk, because the painstaking work that it took to develop an application once may now have to be repeatedly refactored to ensure that keys work anywhere in the cloud, at any time.
For key management, organisations may feel they can rely on the solutions provided by the major cloud service providers (CSPs), who have made encryption simple to activate. Sadly, however, there is a basic security flaw in having the keys held by the same entity that holds the data. It is not just penetration by criminals we should worry about in this respect; it is the government warrants and subpoenas that may force CSPs to open up what they hold. Alongside this vulnerability is one of management. It becomes much harder to achieve consistency of data governance across an organisation's entire and varied infrastructure, including on-premises hardware, when keys are managed by the cloud provider. The way CSPs' solutions deliver a segmented picture of the key logs and usage reports makes it impossible for enterprises to manage their entire range of keys in one place with full visibility across all sites.
Time to market for new and existing applications suffers as they require keys to ensure the requisite security policies are met in each case. Security is potentially compromised when organisations are unable to manage keys across disparate sites because of dependencies on the applications they are looking to authenticate, each having been written to specific cloud requirements.
The way out of this tangle is to nail down security with a third-party solution that overrides the complexity of refactoring applications to ensure they work in each cloud environment. Enterprises need to write and manage their own keys on a separate, one-stop platform, using multiparty computation (MPC). MPC splits a secret key into two or more pieces and places them on different servers and devices. Because all the pieces are required to get any information about the key, but are never assembled, hackers have to breach all the servers and devices. Strong separation between these devices (different administrator credentials, environments, and so on) provides a very high level of key protection.
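Production MPC goes further than a few lines of Python can show: the shares are used to compute signatures and decryptions without ever being reassembled. The sketch below illustrates only the splitting property Arthur describes, using simple XOR-based n-of-n secret sharing, in which every share is needed and any incomplete subset reveals nothing about the key:

```python
import secrets

def split_key(key: bytes, n: int = 2) -> list[bytes]:
    """n-of-n XOR secret sharing: all n shares are required to recover the key."""
    shares = [secrets.token_bytes(len(key)) for _ in range(n - 1)]
    last = key
    for s in shares:
        last = bytes(a ^ b for a, b in zip(last, s))
    return shares + [last]

def combine(shares: list[bytes]) -> bytes:
    out = bytes(len(shares[0]))  # all-zero accumulator
    for s in shares:
        out = bytes(a ^ b for a, b in zip(out, s))
    return out

key = secrets.token_bytes(32)   # a 256-bit key
shares = split_key(key, n=3)    # place each share on a separate server
assert combine(shares) == key
# Any n-1 shares are uniformly random and independent of the key.
```

An attacker who compromises one server, or two, learns nothing; only a breach of every share holder exposes the key, which is the separation guarantee the article is pointing at.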
Adopting this approach gives enterprises using hybrid cloud or multi-cloud infrastructures the single-pane-of-glass visibility that is essential for security and surveillance, providing information about all keys and digital assets, how they are stored, who is using them and how they are programmed. The use of cloud crypto keys is no longer a leap of faith.
When organisations are moving into the cloud for greater innovation and efficiency, an MPC platform provides the most effective means of securing and managing encryption keys, being highly agile, adaptable, and easy to use without any compromise of safety.
Dutch newspaper accuses US spy agencies of orchestrating 2016 Booking.com breach – The Register
Jointly US-Dutch owned Booking.com was illegally accessed by an American attacker in 2016 and the company failed to tell anyone when it became aware of what happened, according to explosive revelations.
The alleged miscreant, named as "Andrew", is said to have stolen "details of thousands of hotel reservations in countries in the Middle East," according to a new book written by three Dutch journalists.
Their employer, Dutch title NRC Handelsblad, reported the allegations this week, claiming that Booking.com had relied on legal advice from London-based law firm Hogan Lovells saying it wasn't obliged to inform anyone of the attack.
The breach was said to have occurred after "Andrew" and associates stumbled upon a poorly secured server which gave them access to personal ID numbers (PINs), seemingly unique customer account identifier codes. From there the miscreants were able to steal copies of reservation details made by people living and staying in the Middle East. NRC Handelsblad linked this to espionage carried out by the US against foreign diplomats and other people of interest in the region.
Although the accommodation booking website reportedly asked the Dutch AIVD spy agency for help with the breach after its internal investigation identified "Andrew" as having connections to US spy agencies, it did not notify either its affected customers or data protection authorities in the Netherlands at the time, the newspaper alleged.
When we asked for comment about the allegations, a Booking.com spokesperson told us: "With the support of external subject matter experts and following the framework established by the Dutch Data Protection Act (the applicable regulation prior to GDPR), we confirmed that no sensitive or financial information was accessed.
"Leadership at the time worked to follow the principles of the DDPA, which guided companies to take further steps on notification only if there were actual adverse negative effects on the private lives of individuals, for which no evidence was detected."
The breach predated the EU's General Data Protection Regulation (GDPR), meaning the data protection rules everyone's familiar with today, which (mostly) make it illegal not to disclose data leaks to state authorities, did not exist at the time.
Booking.com was fined €475,000 earlier this year by Dutch data protection authorities after 4,100 people's personal data was illegally accessed by criminals. In that case employees of hotels in the UAE were socially engineered out of their account login details for the platform.
The apparent online break-in once again raises the spectre of European countries being targeted by Anglosphere intelligence agencies. The infamous Belgacom hack, revealed by Edward Snowden in 2013 and reignited in 2018 when Belgium attributed it to the UK, was carried out by British spies trying to gain access to data on people of interest in Africa.
Almost exactly eight years ago, Snowden also revealed the existence of a British spy-on-diplomats programme codenamed Royal Concierge, which on the face of it looks remarkably similar to the Booking.com breach reported this week.
While some readers might shrug and mutter "spies spy," evidence of the theft of bulk data by third parties who may or may not be subject to whatever lax controls spy agencies choose to create for themselves will be cold comfort to anyone who made a Booking.com reservation in the Middle East at the time.
Old Microsoft is back: If the latest Windows 11 really wants to use Edge, it will use Edge no matter what – The Register
Microsoft Windows 11 build 22494 appears to prevent links associated with the Microsoft Edge browser from being handled by third-party applications, a change one developer argues is anticompetitive.
Back in 2017, Daniel Aleksandersen created a free helper application called EdgeDeflector to counter behavioral changes Microsoft made in the way Windows handles mouse clicks on certain web links.
Typically, https:// links get handled by whatever default browser is set for the system in question. But operating systems and web browsers provide ways to register custom protocol handlers, which define the scheme used to access a given resource (URI).
Microsoft did just that when it created the microsoft-edge: URI scheme. By prefixing certain links as microsoft-edge:https://example.com instead of https://example.com, the company can tell Windows to use Edge to render example.com instead of the system's default browser.
Microsoft is not doing this for all web links; it hasn't completely rejected browser choice. It applies the microsoft-edge:// protocol to Windows 10 services like News and Interests, Widgets in Windows 11, various help links in the Settings app, search links from the Start menu, Cortana links, and links sent from paired Android devices. Clicking on these links will normally open in Edge regardless of the default browser setting.
When the microsoft-edge:// protocol is used, EdgeDeflector intercepts the protocol mapping to force affected links to open in the user's default browser like regular https:// links. That allows users to override Microsoft and steer links to their chosen browsers.
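EdgeDeflector itself is a compiled Windows program registered in the registry as the handler for the scheme; that registration step is beyond a short sketch, but the core rewrite it performs, receiving the microsoft-edge: URI and handing the embedded https:// target to the default browser, looks roughly like this (the ?url= query form is one variant such links use; treat the details as illustrative):

```python
import sys
import urllib.parse
import webbrowser

def deflect(uri: str) -> None:
    """Extract the real web target from a microsoft-edge: URI and open it
    in the system default browser instead of Edge."""
    prefix = "microsoft-edge:"
    if not uri.startswith(prefix):
        return
    rest = uri[len(prefix):]
    if urllib.parse.urlparse(rest).scheme in ("http", "https"):
        target = rest                      # microsoft-edge:https://example.com
    else:                                  # microsoft-edge:?url=https%3A%2F%2F...
        query = urllib.parse.urlparse(uri).query
        target = urllib.parse.parse_qs(query).get("url", [""])[0]
    if target.startswith(("http://", "https://")):
        webbrowser.open(target)

if __name__ == "__main__":
    deflect(sys.argv[1])
```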
This approach has proven to be a popular one: Brave and Firefox recently implemented their own microsoft-edge:// URI scheme interception code to counter Microsoft's efforts to force microsoft-edge:// links into its Edge browser.
But since Windows 11 build 22494, released last week, EdgeDeflector no longer works.
This is on top of Microsoft making it tedious to change the default browser on Windows 11 from Edge: in the system settings, you have to navigate to Apps, then Default apps, find your preferred installed browser, and then assign all the link and file types you need to that browser, clicking through the extra dialog boxes Windows throws at you. Your preferred browser may be able to offer a shortcut through this process when you install it or tell it to make it your default.
The Register has asked Brave and Mozilla whether their respective link interception implementations for the microsoft-edge:// URI scheme still work.
In an email to The Register, a Mozilla spokesperson confirmed the Windows change broke Firefox's Edge protocol workaround.
"People deserve choice," the spokesperson said. "They should have the ability to simply and easily set defaults and their choice of default browser should be respected. We have worked on code that launches Firefox when the microsoft-edge protocol is used for those users that have already chosen Firefox as their default browser.
"Following the recent change to Windows 11, this planned implementation will no longer be possible."
Brave CEO Brendan Eich told The Register his Windows 11 testers haven't yet provided an update, but allowed that Aleksandersen's post seems pretty dire. "[Microsoft] must figure [that the] antitrust Eye of Sauron is looking at [Google, Facebook, and Apple] only," he observed.
In an email to The Register, Aleksandersen said the change affects both Brave and Firefox.
"No program other than Microsoft Edge can handle the protocol," he said. "Ive tested Brave (stable release) and a version of Firefox with the patch to add the protocol. Theyre not allowed to support it either."
"Microsoft hasnt blocked EdgeDeflector specifically. Windows is just bypassing the normal protocol handling system in Windows and always uses Edge for this specific protocol."
According to Aleksandersen, the latest Windows 11 build allows only the Edge browser to handle the microsoft-edge:// protocol.
"No third-party apps are allowed to handle the protocol," he wrote in a blog post on Thursday. "You cant change the default protocol association through registry changes, OEM partner customizations, modifications to the Microsoft Edge package, interference with OpenWith.exe, or any other hackish workarounds."
Aleksandersen says Windows will force the use of Edge even if you delete it, opening an empty UWP window and presenting an error message rather than falling back on the default browser.
The change to Windows means EdgeDeflector will not receive any further updates unless this behavior is reverted, said Aleksandersen.
"These arent the actions of an attentive company that cares about its product anymore," said Aleksandersen. "Microsoft isnt a good steward of the Windows operating system. Theyre prioritizing ads, bundleware, and service subscriptions over their users productivity."
Aleksandersen advises those opposed to the change to raise the issue with their local antitrust regulator or to switch to Linux.
Ironically, as Aleksandersen tells it, vendor-specific URI schemes took off in February 2014 after Google introduced a googlechrome:// scheme for its mobile apps as a way to counter Apple's anticompetitive insistence that Safari should handle certain links on iOS devices.
"Microsoft just turned the racket on its head and changed more and more links in its operating system and apps to use its vendor-specific URL scheme," he said in a post last month.
The Register asked the US Justice Department whether it's aware of this change and, if so, whether it's concerned, given Microsoft's prior conviction for abusing its market dominance. We've not heard back.
"Microsofts use of the microsoft-edge:// protocol instead of regular https:// links is in itself an antitrust issue," Aleksandersen told The Register. "This annoyed me so much that I created EdgeDeflector to fight back on its monopolistic and user-hostile behavior".
"I believe Microsoft clearly doesnt fear antitrust regulators.
"Theyre putting up more barriers and are being more aggressive now than they were in the past when they were hit with antitrust fines. (E.g. removing the default browser settings from Windows Setting, making it more difficult to programmatically change the default browser, prompting the user to 'choose Edge' after every system update, hiding/unpinning other browsers from your taskbar.) On top of this, theyre using these horrid microsoft-edge:// links in very prominent places in the OS to bypass the default browser setting entirely."
Microsoft did not respond to a request for comment.
Analysing cloud providers' infrastructure management – the bank perspective – Finextra
Speaking with two banks with almost opposite roles and histories, we try to understand how cloud is playing a role in their overarching technology strategy, and the challenges that cloud migration has thrown in their path.
Gordon Mackechnie, chief technology officer at Deutsche Bank, explains that the bank effectively has three partners for cloud use: Microsoft is used extensively on the end-user side, Google is used as the strategic public cloud partner and for the bulk of the bank's migration, and finally, for databases that will remain on premises in the private cloud, the bank signed an agreement with Oracle to migrate the bulk of its Oracle Database estate to Oracle Exadata Cloud@Customer.
"We are absolutely multi-cloud, but we're not multi-cloud for the same purpose," Mackechnie explains. "We want to take best of breed in each instance, and so we will have multiple cloud providers that we don't use simultaneously for the same type of tasks. We think this is important.
"Each solution has its strengths in different areas and more than delivers on them."
The app-only Minna Bank, which officially began operations in May, boasts an in-house core banking system built by Zerobank Design Factory and Accenture, and is the first in Japan to run its core banking system on a public cloud, Google Cloud. Not only is the core banking system running Minna's retail operations, but it will also be made available to third parties who wish to offer discrete embedded finance offerings or to run comprehensive branded banking services.
CIO of Japan's Minna Bank, Masaaki Miyamoto, also lists a swathe of other cloud providers being leveraged by the bank, including Google Cloud, Azure, AWS, Oracle Fusion, DataDog, Salesforce, and PagerDuty.
Understanding the context for cloud migration
Explaining that a range of factors has driven the financial services industry's acceptance of cloud adoption, Mackechnie highlights that the investments being made, particularly by the large cloud providers, are significant.
"Looking at the alternative, building large-scale shared platforms internally, the economics just don't make sense."
This type of project is typically more expensive and more difficult to do than initially expected. Second, he adds that financial institutions can't compete with the level of investment that Google or Microsoft or Amazon are making to their cloud platforms.
Importantly, these projects aren't points of commercial differentiation for incumbents. "It's not as if we're going to build a shared service platform internally that's going to give us a benefit as an organisation." This removes the incentive to own anything or to self-develop.
He furthers that it's important to understand the history of the ecosystem of suppliers in which banks have typically existed. Banks never built their own database software, but relied on software and infrastructure providers to provide these services.
"In many cases we already have a complex supply chain and providers that we use to kind of build the applications that we run the bank on. The cloud is just an evolution of that, rather than a complete revolution."
Getting cloud strategy right from the outset
Now that the industry recognises the importance of cloud use, banks are looking to expedite their migration strategies in ways that will provide both scale and security quickly.
Straight to the point, Mackechnie states that the key mindset that banks should adopt is to go after this migration plan with strong intent.
"There's no point in doing this if you're just going to play around at the edges."
He recommends examining the more difficult problems up front: while it is possible to shift certain smaller, more discrete services and operations onto the cloud, tinkering around the edges isn't going to deliver the true benefit that cloud can provide.
He qualifies this by stating that it is essential to identify the areas where real value will be delivered by shifting to the cloud, and to lead with these.
"If you see the cloud as effectively an infrastructure or an infrastructure cost play, it doesn't necessarily make clear the added value potential to business processes. We're focusing on areas where we see incremental value which can only be achieved in the cloud."
Managing the cloud provider relationship
Minna Bank, while leveraging the services of numerous providers, tends to have a closer relationship with its core-banking cloud provider Google Cloud.
"We work very closely and have lots of support from Google Cloud, we're in constant contact with their support members, and the Google Cloud team knows how our system works too."
The Minna Bank team also schedules a monthly meeting with Google Cloud to share any error reports or new technologies available through the Google Cloud services which could assist their offering.
Aside from protocol which would involve close assistance with Google Cloud should any critical errors arise, Minna Bank only contacts its other cloud providers when need be.
Miyamoto adds that given its BaaS project currently under construction, it will need API connections to be able to deliver the offering at scale and with the ability to cater to high volumes.
"Clients using our APIs don't always notify us ahead of launching a marketing campaign, so we don't know whether our servers are prepared for high volumes of new customers. We need to make sure our servers can withstand the load. To scale up our servers using cloud, Google Cloud is the only provider who can make this possible."
When it comes to what banks should look out for in robust cloud infrastructure management strategies, Miyamoto explains that one factor (among many) is the ability to monitor services efficiently.
"It's very challenging to monitor all cloud services in one place. Usually core banking services reside in a data centre, so that when a problem occurs you just need to visit the data centre to understand what is going on. But on a cloud server, there isn't anyone who can contact you to tell you there is a problem going on. It's really important to get to know what has happened in real time, so that people can actually resolve issues quickly."
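The article mentions DataDog and PagerDuty but doesn't detail how Minna Bank wires them together; as a bare-bones illustration of the "one place" monitoring Miyamoto wants, a poller that checks health endpoints across providers could start as simply as this (the endpoints are hypothetical):

```python
import time
import urllib.request

# Hypothetical health endpoints spread across several cloud providers.
SERVICES = {
    "core-banking": "https://gcp.example.internal/healthz",
    "crm":          "https://salesforce.example.internal/healthz",
    "auth":         "https://azure.example.internal/healthz",
}

def poll_once() -> None:
    for name, url in SERVICES.items():
        try:
            with urllib.request.urlopen(url, timeout=5) as resp:
                status = "OK" if resp.status == 200 else f"HTTP {resp.status}"
        except Exception as exc:
            status = f"DOWN ({exc})"
        # In production this would raise a page rather than print.
        print(f"{time.strftime('%H:%M:%S')} {name}: {status}")

while True:  # runs forever, like a monitoring daemon
    poll_once()
    time.sleep(30)
```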
What risks come with poor cloud infrastructure management?
First and foremost, Mackechnie argues, safety and security of data is the critical consideration that needs to be addressed in any cloud execution process. "With anything that is still relatively new and developing, we must be careful about ensuring that at each stage we manage the risks."
In banking these risks typically manifest themselves as security, stability, and operational resilience risks. He furthers that in the same way cloud providers invest significantly in the functionality of their platforms, they also invest heavily in operational resilience and security, because, much like banks, security presents something of an existential threat for cloud providers, who must be able to demonstrate and maintain this high level of security to operate in the highly regulated financial space.
"That isn't to say that it can't be done safely, but I think we have seen instances of people having problems in the past, maybe because of a lack of experience and understanding, maybe a lack of configuration, so we must be careful we don't make those mistakes. We have to be very careful to manage those risks effectively as we adopt new solution types like the cloud."
Resolving inherent challenges of cloud use
Miyamoto explains that a key challenge faced by Minna Bank is tied to the management of cloud servicing protocol. He isn't concerned in a material sense about outages or cloud services going down per se, as Minna Bank has designed into its systems the ability to continue functioning even if one of its cloud services goes down.
It builds this reliability by separating its servers: in fact, Minna Bank holds its data on Google Cloud servers located on both the east and west sides of Japan. This guarantees availability so that business can continue in the unlikely case one of these centres goes down.
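The article doesn't spell out the failover mechanics, but the client-side effect of that east/west replication can be sketched simply: try the primary region and fall back to the standby. Both endpoints below are hypothetical.

```python
import urllib.request

# Hypothetical replicas of the same service in two regions of Japan.
ENDPOINTS = [
    "https://api-east.example.com",   # primary
    "https://api-west.example.com",   # standby
]

def fetch(path: str) -> bytes:
    last_error = None
    for base in ENDPOINTS:
        try:
            with urllib.request.urlopen(base + path, timeout=3) as resp:
                return resp.read()
        except Exception as exc:      # timeout, connection refused, 5xx...
            last_error = exc          # fall through to the next region
    raise RuntimeError(f"all regions unavailable: {last_error}")
```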
However, a challenge it is yet to entirely resolve is managing maintenance periods.
"As a cloud infrastructure, the most important thing remains the account-end administration management, security, and operation management. However, because cloud services have to stop their servers for periodic maintenance, there is naturally a period of downtime. When that happens, our services also have to stop for anywhere between a few seconds and a few minutes."
To manage this, Minna Bank tries to "control when these maintenance downtimes will occur and prepare our services accordingly. We not only have to negotiate with team members in our office and with the cloud providers, we also have to give notification to our customers, and all these updates cost a lot of money."
Ideally, Miyamoto explains, this would be solved by automating every process. Even then, as cloud providers onboard more and more services, Minna Bank has to spend more to continue operating and to manage the costs incurred through maintenance.
Are regulators on the right path for effective cloud use?
Insofar as operational resilience is concerned, Mackechnie believes that cloud definitely has a part to play, and Deutsche Bank is managing these resilience requirements as they pertain to cloud use.
"We retain full accountability for that resilience, and we are working with the cloud providers to make sure that it's happening in a way that would meet our regulatory obligations."
He explains that the cloud providers are very focused on these regulations too, and that they recognise that if they wish to be material players in supporting financial services, they must be able to meet regulatory expectations.
"Regulators are also on a journey, continuously evolving their approach to cloud use, but it's challenging because a cloud provider is effectively a combination of things [...] To some extent cloud providers are a hardware provider, to some extent they're an open source service provider, and to some extent they're a software provider."
Mackechnie adds that while the regulator historically looked at these providers separately, with different levels of oversight depending on the service, they are facing a new challenge today: "As you start to conflate these things you think, how do those different regulatory approaches come together in a way that's sensible and effective for regulating an activity that's taking place in the public cloud?"
According to Miyamoto, Japanese regulators have not ruled out the use of public cloud in banking systems. Rather, they encourage banks to evaluate cloud vendors as potential partners and promote the use of the cloud in line with the specific needs of their business and systems. "Banks understand that without cloud use, they won't be able to compete with emerging companies simply by maintaining systems that have been entrenched for years."
How quickly should banks finalise their cloud migration plans?
While there is momentum behind shifting to cloud, Mackechnie argues that despite the increasing prevalence of digital players like Minna Bank, which are entirely cloud native, there is not yet pressure for incumbents to finalise their cloud migration.
"If you're starting a bank from scratch today, would you build it all natively in the cloud? I'm sure you would. However, you've got to recognise the sheer scale of 50 years' worth of infrastructure, learning, and business logic that has been built into the systems of the larger incumbents.
"There's probably a tipping point somewhere down the line, where the cost of carrying a hybrid model becomes a bit painful, but we're a long way from that yet."
He notes that there are still material reasons to maintain on-premises infrastructure at reasonable scale, which will continue for the foreseeable future. These include security, data elements, regulatory elements, online transaction processing, and certain functional low-latency elements (tied to trading activity) that would be very difficult to move in any way.
On top of this sits the inhibitor of the substantial investment required to migrate properly. "To get the benefit you have to re-architect properly, therefore you're picking and choosing where to make those investments so they really have the most tangible impact."
"I don't think there is pressure to finalise; rather, I think right now there's pressure to get it right. The risk profile of this is that we have to be absolutely certain that we're taking the right steps and we're taking those steps in a safe and secure way. So I think that's more the pressure than there is to actually finalise."
MemVerge takes Big Memory to the cloud – TechTarget
MemVerge is taking its flagship memory product, Big Memory Machine, to the cloud, giving users the ability to efficiently move data-intensive applications.
Founded in 2017, MemVerge is an in-memory computing vendor. Its main product is Big Memory Machine, a software layer that enables users to create a shared pool of memory across multiple servers, said Eric Burgener, a senior analyst at IDC. He likened it to how VMware virtualizes compute, networking and storage.
The product uses Intel Optane persistent memory modules -- initially released around the time of MemVerge's founding -- under its covers to provide users with some flexibility to create a pool that might be terabytes of memory, bypassing the normal capacity limitations in traditional RAM, he said.
In the cloud, users can access features that enable them to use temporary instances without losing data and cloud bursting without interruptions. MemVerge's Big Memory achieves this through AppCapsules, a way of containerizing the application through integration with Kubernetes and its existing ZeroIO Snapshots. AppCapsules provide portability from on premises to cloud and between cloud environments.
Big Memory Cloud is interoperable with the major public cloud providers (AWS, Microsoft Azure and Google Cloud Platform), as it sits on top of the cloud and requires no integration.
Companies are increasingly moving to hybrid and multi-cloud environments, and applications need to be built to take advantage of the cloud's agility, flexibility and scalability, according to Charles Fan, CEO and co-founder of MemVerge.
For some applications, moving from an on-premises to a cloud environment has been seamless. For others, including applications that require a server to retain data and to scale easily -- such as those that perform artificial intelligence or video rendering tasks -- the story has been more complicated.
Data services like snapshots only work on persistent storage, which would be of no use to workloads running in memory, according to Burgener. If there is a process or server failure, recovery is time intensive or users need to start over.
ZeroIO Snapshots use persistent memory as opposed to storage; in other words, zero I/Os are sent to storage. Users can refer to the recent snapshot of memory, potentially saving hours on recovery.
"ZeroIO is a memory snapshot [that] captur[es] a running application state and encapsulates it," Fan said.
These snapshots are placed into AppCapsules, Fan said. From there, they can be loaded, replicated, recovered and transported across clouds.
"To move an application from one cloud to another, like Google to Azure, users have to understand all of the storage pieces related to the application," Burgener said. "AppCapsules make this easier by figuring out the underlying persistent storage needed and moving it all to a new environment."
The technology may sound similar to that of vMotion, a VMware product that enables live migration of running VMs from one server to another. But, Fan clarified, Big Memory Cloud enables users to move a point-in-time snapshot of applications. After the move, a user can use the AppCapsule to restart the application where it stopped before it was migrated.
Big Memory Cloud's AppCapsules enable customers to take advantage of spot instances. Cloud providers with spare compute instance capacity, or "spots," often provide deep discounts, up to 90%, until they need the capacity back.
The portability of AppCapsules means customers can use spot instances and if the instance is interrupted -- called back by the service provider -- the ZeroIO Snapshot ensures the data is easily recoverable.
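MemVerge implements this at the memory level with its own machinery, which isn't a public API; a much simpler application-level analogue of the same checkpoint-and-resume idea, persisting state to durable storage so an interrupted spot instance can pick up where it left off, looks like this (paths and intervals are illustrative):

```python
import os
import pickle

CHECKPOINT = "job.ckpt"  # in practice: a path on durable network storage

def run_job(total_steps: int = 1_000_000) -> dict:
    # Resume from the last checkpoint if a spot interruption killed the run.
    state = {"step": 0, "accum": 0}
    if os.path.exists(CHECKPOINT):
        with open(CHECKPOINT, "rb") as f:
            state = pickle.load(f)

    while state["step"] < total_steps:
        state["accum"] += state["step"]       # stand-in for real work
        state["step"] += 1
        if state["step"] % 10_000 == 0:       # checkpoint periodically
            with open(CHECKPOINT + ".tmp", "wb") as f:
                pickle.dump(state, f)
            os.replace(CHECKPOINT + ".tmp", CHECKPOINT)  # atomic swap

    return state

run_job()
```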
Portability also enables applications to use cloud bursting more efficiently, something in-memory apps can't normally do without long service interruptions. As performance needs outgrow the on-premises capabilities, apps extend or "burst" into the public cloud to use its resources, according to MemVerge.
Finally, AppCapsules together with ZeroIO provide added multi-cloud mobility, helping customers work where it makes the most sense while avoiding vendor lock-in, the memory storage vendor stated.
Big Memory Spot Cloud use cases are expected to be available in the next few months, with the cloud mobility and cloud bursting services becoming available next year.
MemVerge is not alone in its push to help customers move to the cloud, but its approach is unique in the market.
VMware unveiled its own software-defined memory tiering, also using Intel Optane persistent memory, as Project Capitola last month. It could go head to head with Big Memory Cloud, as both do tiering, but it is still unclear how it will fit into the cloud storage market. Project Capitola won't be available to customers until next year.
If it does push into the same space as Big Memory Cloud, IDC's Burgener said, the competition might heat up.
"When VMware decides it's important enough for them to get into it, that is a critical inflection point for the technology," he said.
At 2021 PASBA Fall Management Conference in Nashville, Infinitely Virtual Affirms Value of Managed IT No Matter the Host – PRUnderground
On the eve of the 2021 PASBA Fall Management Conference, set for Nov. 8-11 at the DoubleTree by Hilton Nashville Downtown, the CEO of leading cloud pioneer Infinitely Virtual is extolling the virtues of "no host" Managed IT.
"Two years ago, in a bid to dramatically simplify the complex workload confronting managed services providers and IT professionals, we rolled out IV Managed ITSM, a distinctive new take on the MSP model," said Adam Stern, Infinitely Virtual founder and CEO. "Now, as remote work/remote access has become more of a necessity than an option, the introduction seems prescient, and our IV Managed IT may be more appropriate for accounting firms than ever. PASBA's Fall Management Conference provides an ideal opportunity for organizations to get acquainted with this compelling way to have a better experience with Managed IT, whether hosted by AWS or Azure."
Offering comprehensive, premium remote monitoring and management, IV Managed IT is built around a modern, intuitive platform that enables Infinitely Virtual to easily look after customer devices, across all environments, from any location in the world. Given that IV is an MS Direct provider, IV's Managed IT solution puts the focus on people and business strategy as well as on hardware, delivering support from the desktop to the server and everything in between.
"With IV Managed IT, we're placing our distinctive stamp on IT management services, in part by answering the question: who monitors your local equipment? Our Managed IT suite recognizes that, for small and mid-size accounting firms, the cloud isn't pure play. While much of the compute environment may be off-premises, not all of it is, can be or even should be. Managed IT enables your IT resources to live seamlessly in both worlds."
IV Managed IT rests on six pillars: focusing resources on the core business; easily scaling with growth; aligning technology with strategic goals; remote management; IT help desk; and on-site IT management. Managed IT services are tailored to meet any accounting firm's needs, from handling employee endpoints to fully overseeing complex IT infrastructure. "Ultimately, we believe businesses should dedicate their resources to what they do best, not worrying about IT," Stern said. "IV Managed IT is designed to grow with every business."
PASBA represents Certified Public Accountants, Public Accountants, and Enrolled Agents who provide accounting services to small businesses throughout the United States. Members of the Association have built a nationwide network of accountants to benefit small business clients across the country. Using the collective resources of this network, Association members offer their clients a level of service and expertise that individual practices are unable to rival.
For more information, visit http://www.infinitelyvirtual.com.
About Infinitely Virtual
The World's Most Advanced Hosting Environment
Infinitely Virtual is a leading provider of high-quality and affordable Cloud Server technology, capable of delivering services to any type of business, via terminal servers, SharePoint servers and SQL servers, all based on Cloud Servers. Ranked #28 on the Talkin' Cloud 100 roster of premier hosting providers, Infinitely Virtual has earned the highest rating of Enterprise-Ready in Skyhigh Networks' CloudTrust Program for four of its offerings: Cloud Server Hosting, InfiniteVault, InfiniteProtect and Virtual Terminal Server. The company recently took the #1 spot in HostReview's ranking of VPS hosting providers. Founder and CEO Adam Stern is a member of the Forbes Technology Council. Infinitely Virtual was established as a subsidiary of Altay Corporation, and through this partnership, the company provides customers with expert 24/7 technical support. More information about Infinitely Virtual can be found at: http://www.infinitelyvirtual.com, @iv_cloudhosting, or call 866-257-8455.
LA County Assessor’s Office Looks to Oracle Cloud to Improve Operations – PRNewswire
AUSTIN, Texas, Nov. 4, 2021 /PRNewswire/ -- The Los Angeles County Office of the Assessor has successfully migrated its Assessor operations from a paper-based, legacy mainframe environment to Oracle Cloud Infrastructure (OCI). By moving to Oracle Cloud, the Office is able to speed up data processing, reduce risk, and improve the user experience. Using a series of OCI services, including Oracle Autonomous Data Warehouse, Oracle Analytics Cloud, Oracle Exadata Cloud Service, and Oracle Database Cloud Service, LA County is seeing dramatic improvements in performance and achieving significant cost savings by eliminating its on-premises infrastructure.
As the largest local assessment agency in the country, the Los Angeles County Office of the Assessor reviews more than 400,000 property documents and completes 500,000 physical property appraisals each year. This work was previously conducted with a paper-based process using 40-year-old mainframe technology that required manuals to interpret its code. With such a high volume of assessments, compounded by California's complex requirements, the Assessor's Office realized this model wasn't sustainable.
The Office began work on a five-phase Assessor Modernization Project (AMP) to develop an in-house custom application with Oracle Consulting. After three successful phases, the AMP application became the go-to production system for the Assessor. During the fourth phase of the project in February 2021, Oracle Consulting and the Assessor's Office extended AMP functionality and moved the application from on-premises to OCI with no disruptions, while eliminating 80 servers.
"The decision to move AMP to Oracle Cloud Infrastructure midway through the project had huge benefits, including cost savings, better performance, flexibility, and resource efficiencies," said Kevin Lechner, CIO, Los Angeles County Office of the Assessor. "We could now focus our attention on application development and increased productivity rather than infrastructure requirements and maintenance."
On OCI, data processing jobs that once took up to eight hours are now processed in four; built-in Disaster Recovery is helping mitigate risk; and end users are seeing faster page loads than ever before. Soon, the Los Angeles County Office of the Assessor plans to open access to AMP for other counties in the state.
"The Los Angeles County Office of the Assessor's modernization project is one of the most forward-thinking government agency endeavors we've seen in recent years," said Jeff Kane, group vice president, Oracle Consulting, North America. "We're looking forward to seeing additional performance enhancements and cost savings they'll realize on OCI services, freeing up their IT staff to innovate with new features and functionality on the platform."
"One of my top goals coming into this Office was to make sure that we provide public service that is both effective and cost-efficient," said Jeff Prang, Los Angeles County Assessor. "This milestone of the project with Oracle Cloud Infrastructure is our biggest success to date."
About Oracle
Oracle offers integrated suites of applications plus secure, autonomous infrastructure in the Oracle Cloud. For more information about Oracle (NYSE: ORCL), please visit us at oracle.com.
Trademarks
Oracle, Java, and MySQL are registered trademarks of Oracle Corporation.
SOURCE Oracle