
MJFChat: The Role of the IT Pro in a Microsoft 365 Cloud World – Petri.com

We're doing a twice-monthly interview show on Petri.com that is dedicated to covering topics of interest to our tech-professional audience. We have branded this show MJFChat.

In my role as Petri's Community Magnate, I will be interviewing a variety of IT-savvy technology folks. Some of these will be Petri contributors; some will be tech-company employees; some will be IT pros. We will be tackling various subject areas in the form of 30-minute audio interviews. I will be asking the questions, the bulk of which we're hoping will come from you, our Petri.com community of readers.

Readers can submit questions via Twitter, Instagram, Facebook and/or LinkedIn using the #AskMJF hashtag. Once the interviews are completed, we will post the audio and associated transcript in the forums for readers to digest at their leisure. (By the way, did you know MJFChats are now available in podcast form? Go here for MJF Chat on Spotify; here for Apple Podcasts on iTunes; and here for Google Play.)

Our latest MJFChat, recorded on October 26, is focused on the role of the IT Pro in an increasingly Microsoft 365-cloud-centric world. My special guest is Tom Arbuthnot, Principal Solutions Architect with Modality Systems.

Tom has lots to say about the new and evolving role of the IT pro, based on the work he does in his consultancy. He also answers a couple of reader/listener questions in this chat.

If you know someone you'd like to see interviewed on the MJFChat show, including yourself, just Tweet to me or drop me a line. (Let me know why you think this person would be an awesome guest and what topics you'd like to see covered.) We'll take things from there.

Transcript:

Mary Jo Foley (00:00):Hi, you're listening to Petri.com's MJF Chat show. I am Mary Jo Foley, AKA your Petri.com community magnate. And I am here to interview tech industry experts about various topics that you, our readers and listeners, want to know about. Today's MJF Chat is going to be all about the changing role of the IT Pro in a Microsoft 365 cloud world. And my special guest today is Tom Arbuthnot, Principal Solutions Architect with Modality Systems. Hi Tom, and thank you so much for doing this chat with me today.

Tom Arbuthnot (00:41):Yeah, hey Mary Jo, thanks for having me.

Mary Jo Foley (00:44):Great. Well, we went a little bit back and forth when we talked about having you on MJF Chat. We talked about doing a Teams chat, but I feel like we've had a lot of chats everywhere about Teams lately. And I feel like a bigger topic, a broader topic that people want to know a lot about, is what happens when you're an IT Pro and the world is increasingly a cloud world; specifically, we're going to talk MS 365 here. Things are changing really fast. And so I wanted to talk with you today about what that means for IT Pros and how they can keep up, or at least try to keep up with the new pace of things.

Tom Arbuthnot (01:25):Yeah, it is. It's a really hot topic, because it comes up all the time with our customers. And, you know, we're a consultancy and a managed service provider, and it's a constant fire hose of information and changes in what you need to do as an IT Pro as well.

Mary Jo Foley (01:40):Yeah, totally. So let's dig in. I get this question a lot still, which kind of surprises me, but with the increasing pervasiveness of the cloud, is there really even still a need for a traditional IT department at all?

Tom Arbuthnot (01:56):Yeah. So if by traditional you mean they should be doing all the things they were doing 10 years ago, a lot of those tasks are going away, but you still need a group of people who are looking after this stuff. Although "looking after," as we'll talk about, is probably changing in terms of what those requirements are. But yes, every customer needs to have an internal group of people responsible for this stuff; they just might not be doing what they were doing 10 years ago.

Mary Jo Foley (02:28):Exactly. So I agree with you, and I feel like, to your point, the requirements are what are changing more than the actual job definition. So what do you see as the new set of requirements for IT Pros? Let's focus specifically on Microsoft 365.

Tom Arbuthnot (02:47):Yeah. So Microsoft 365, that's the area I focus on. If you think about the olden days, quote unquote, you know, you'd be doing server patching. So you might look after the Exchange servers or the SharePoint servers, and you'd be doing major version upgrades and backup and restore and monitoring, and maybe even swapping hard disks, stuff like that. All that stuff is largely out of the window with the cloud. You're now buying it as a service. But lots of things have come along that weren't really as taxing before that now are. So now it's the fire hose of Microsoft changes and new features and features going away, and this changing and that changing, and it's really impacting your business or your organization. New abilities are coming; you want to make the most of them, you're paying for them. Things are going away and changing, and those could impact business process. So the number one thing is you're moving from a server patcher and maintainer and sliding more towards the business side: how does the business drive value out of this cloud investment?

Mary Jo Foley (03:54):Speaking of the business side, I hate to bring this up, but billing and pricing always comes up when you talk to people about the move from on-prem to cloud. And I don't want to get too deep in the weeds here, because I know opening the Pandora's box of billing, pricing and licensing is like, ahh. But if somebody is a traditional IT Pro moving into the Microsoft 365 world, do you have any kind of top-level advice or guidance for them about ways to think about billing and pricing now in the new world?

Tom Arbuthnot (04:27):Yeah. Yeah. Like, I don't get in the weeds of it either, fortunately, which probably tells you that it's something that most people avoid. But the very fact that it is avoided means there's an area to add value there. I mean, it's constantly moving. It's very hard to understand: there's the E5 and the M365 E5 and this add-on and that add-on and this feature and that feature. So there's definitely a role for someone within your organization to be understanding what's being used, what's being spent and why, and what the business requirements were. I mean, if you just listen to the Microsoft sales reps, you're going to buy the top level of everything, but then you're not really doing the job for your company of making sure they're getting the things they need and the value out of it they want. So there is a role there. It's a very unloved role of keeping the business honest about their requirements, or keeping Microsoft honest about which license you need, and keeping up with that. I mean, there's no easy silver bullet there. There are some good partners who dig into that stuff, so lean on those partners, I think. But other than that, yeah, it's just lots of Excel and keeping up with Microsoft.

Mary Jo Foley (05:35):On the topic of keeping up, I'm curious how you do this, because we were talking before we started the chat about how every week, if you're an admin, you get this long laundry list from Microsoft of things that are changing in Microsoft 365, you know, some longer term, some nearer term. How do you keep up in your own job with all of these things? I mean, like, today I got an email and, I'm not kidding, there were like 30 things in there that are changing. And I'm like, how does anybody keep up with this? Including reporters and customers, IT Pros, partners. How do people do it?

Tom Arbuthnot (06:09):Yeah, it's really tough. I don't think many people are. I think it's definitely recognized in the Microsoft world as getting a bit crazy now, the amount of change. I think there's a general keeping on top of the Office 365 Message Center and the news, but actually I'm finding customers are getting more value out of kind of summary information, like summary blog posts, and using things like that for picking out the key things. But again, it's one of those areas where, if you're getting into a cloud world as an IT Pro in an IT department for an organization, you need the organization to recognize that you're not just outsourcing all IT here. There are things changing that will impact the business, and you need time to comprehend them, understand them, understand the roadmap. But yeah, there's no easy answer other than the Microsoft blogs on Tech Community; generally lots of information comes out there, the Message Center, the Roadmap. I'm fortunate, I'm really focused on Microsoft Teams, so I spend a disproportionate amount of time in the area. But I'm not sure an end customer can have a specialist in every technology to keep up. So it's taking a more general overview of what you think is going to impact your organization.

Mary Jo Foley (07:22):Yeah. Okay. So I don't feel so bad now hearing your answer, 'cause I'm like, there must be a secret way that people are doing this. But it sounds like, no, there isn't.

Tom Arbuthnot (07:31):Like, end customers often think partners have some secret route in, where we get fed the exact information and the exact dates. It's so funny, right? Big orgs tend to get it, 'cause they work directly with Microsoft a lot. But with a kind of mid-tier org, they're like, well, you're partners with Microsoft, you just contact the PM and ask them what's going on. And I'm like, yeah, not so much. It's a fire hose for everybody, I'm afraid.

Mary Jo Foley (07:56):Indeed. So, you know, the other thing that I feel like complicates this a bit is that now we're talking about Microsoft 365 instead of just Office 365, meaning we're talking about Windows, we're talking about Office, and we're talking about Mobility and Security, so things like Config Manager and Intune. If you're an IT Pro, do you try to keep up with all of these things now because Microsoft bundles them in a single package? Or how do you advise customers to think about Microsoft 365 versus Office 365?

Tom Arbuthnot (08:30):Yeah. That's a great question. It's coming up a lot. So, my background: I started off in OCS, Lync, Skype. So I was really a UC or voice person, and Teams very much pulled me into data, which meant ethical walls, compliance, information barriers, like all the M365 governance story. So yeah, I've started to widen out because of that necessity. I think how deep or wide you go depends on your role. So if you're a customer and you, you know, have to be more generalist, then you have to go wider. If you're a specialist, you can go slightly deeper. But even as a specialist, you've got to have an appreciation of all the other workloads and what's going on, because, as you say, they're so interdependent now. You need to know that the M365 features, which tend to be security, governance, identity stuff, are increasingly affecting all applications, if you call them that, all the different abilities in M365. So I say, all things being equal, you definitely have to be wider than you used to be, or things are going to happen to your specialist workload that you don't understand, because they're affecting the whole of Office 365 or the whole of M365.

Mary Jo Foley (09:44):Yep. And you bring up an interesting topic, which could be a podcast all on its own, which is security. A lot of times, when a vendor, especially Microsoft, is trying to get somebody to go to the cloud, you hear people say, well, you know what? One thing we can do better than you can in your own org is manage security, because we've got all these resources, we do it for ourselves, we have a lot of expertise and depth. But if you're an IT Pro working in this space, what do you do? Do you just give in and say, okay, yeah, you can do this better than me? Or do you try to somehow kind of keep a hand in it? What do you suggest people do around security specifically?

Tom Arbuthnot (10:24):Yeah, it's coming up more and more. I think Microsoft had that pitch for a few years, but it's really coming to reality and fruition now, as in the features are just pretty untouchable, partly because you can only do certain things in the cloud at cloud scale. But there's still plenty of stuff to be done for the Security Pros. Actually, all these features Microsoft talk about, they don't just flick on because you've got the license. So we see a lot of customers thinking they're buying security; you're buying the abilities and the functions, but you still need to configure them, make sure they're aligned to your org's policies, check they're working, report on them, stuff like that. So actually, I think there's a great gap in IT at the moment for real M365 security specialists. There's plenty of old-school firewall security, perimeters, lock down my SharePoint server, but there's not tons of people who are keeping up. Again, there's a massive pace of change: there's the Compliance Portal and the Secure Score and the Compliance Center and the Message Center. And there's a hundred things going on that all affect security, compliance and governance that you need to be on top of.

Mary Jo Foley (11:33):Yep. Those are good points. Another topic I'm curious about your take on is outages. You know, when you're an IT Pro just dealing with on-prem and there's a problem in your data center, you have your own ways that you've communicated with your users in the past about this. But when Microsoft has an outage in Azure or in Microsoft 365, or just in one specific service like Teams or Exchange, they are kind of changing how they try to communicate with users and admins about it. So what do you think you, as an IT Pro, should be thinking about when it comes to outages and communication?

Tom Arbuthnot (12:15):Yeah. So the first thing is, your organization, your business (I'll use the terms interchangeably) is choosing to sign up to Office 365, Microsoft 365. So at that point, as an IT department, you need to make the business aware that they, slash we, are choosing to go down this route for all the benefits. Here's the SLA, here's how it works. The worst thing I see is when people go to the cloud and then they get an outage, and then the business shouts at them to fix it now. And it's like, really? We can't. We don't own it. We don't control it. We can phone up Microsoft, but to be honest, no one customer is big enough, and they're already on top of it, you know; they're not gonna move any faster because I phone them up.

Tom Arbuthnot (12:57):So that's the first thing: setting the expectation with your organization that this is how it works. It's out of our hands, but this, we think, is the better approach. In terms of communicating with users, it's an interesting one. We talk to customers about having a way to communicate with them that is out of band of Office 365. So some customers, for example, will have a website domain that only users know about, or a list of users that they can text via SMS message. So you should have a way to communicate with your users if you were to have a proper sustained outage, because it is a reality sometimes. But generally, it feels like with the cloud, people are getting more okay with that part of the cloud story: I'm getting all this benefit and all this productivity and all of these new features, and sometimes it will wobble.

Tom Arbuthnot (13:48):And so far there have been no real proper sustained outages. There's lots of impact in the short term, but it's kind of like, okay, well, I'll go and get a coffee for a few hours and it will be back. And so far it's proven to be so. It's surprising to me, having worked with big enterprises for a long time, how relatively relaxed they are about that compared to how they were when we were installing things like Skype servers. You know, when we were on the hook for keeping up Skype, minutes of outage were real problems. But in the cloud, it just seems to be part of the deal you're paying for: that's the reality of cloud scale.

Mary Jo Foley (14:27):I guess I know more unreasonable users than you do. You know what I feel like? Twitter has made it so that if there's even a blip in Microsoft 365, any service, or Azure, immediately people take to Twitter and they start tweeting to us as reporters, you know, or they start tweeting to Microsoft with outrage and, like, screaming in all caps. And I'm like, wow.

Tom Arbuthnot (14:55):Yeah, Twitter brings out the extremes in people. There's lots of people hanging on to the, you know, it was better when we had servers and we could control it. All the stats overwhelmingly suggest that's not true. If you look at the average uptime of these services and the level of security and productivity you're getting, even if you roll that up into the total cost of ownership and stuff, everything points to this being the direction. But there's always going to be someone saying, well, you know, when I had a well in my backyard, I could always get water whenever I wanted, and it never ran dry. The tides have turned. But yeah, it's about talking to the organization and having an honest conversation. The worst thing is a CTO that hasn't had that conversation with the organization, and somehow they don't understand, like, that's what we agreed to sign up for. That's the reality of it.

Mary Jo Foley (15:45):Yep. All right, now we've got a couple of reader/listener questions, and I'm going to use this one first. Dominic Kent asked on Twitter, and I don't know if he's asking this specifically of you as an individual or just in general, but his question is: how do you juggle work within your original role with demand for influencer/SME marketing?

Tom Arbuthnot (16:10):Yeah, it probably relates to what I do now. So I've definitely moved from kind of hardcore, hands-on techie to more and more blogging and events and speaking and that kind of thing. And I think that's another thing I would recommend for a certain subset of IT Pros: our job is becoming more and more about understanding and communicating, and that's something that can definitely add value. I'm lucky because I work for a partner, so there's kind of a halo effect, or a benefit to us, in me being in that space, doing the speaking, doing the blogging. So I get to kind of balance my time appropriately. If you're an IT Pro and you're thinking about career stuff, I would definitely recommend finding a place that respects and values that. And if you want to grow in your career, that's one of the things where an end-user organization probably won't get much value out of it, but at a partner or a consultancy, or if you go contracting or whatever you want to do, you could definitely spread your wings and blog and speak and get more involved in the community.

Mary Jo Foley (17:19):Yeah. I talk to a number of IT Pros who ask me, do you think I should start a blog? And I think they have a lot of really excellent lessons-learned kinds of things to share. But I agree with you that your company has to be on board with it, right? And has to give you the time and the space.

Tom Arbuthnot (17:35):You can actually get in trouble as well. Some companies are very strict on that stuff, particularly if you work for a big, big org; they'll just have a global policy where you can't talk to anybody about what you're doing. It's just, no, everything goes through the press office. But yeah, if that's something you're aspiring to do, there are partners that will actively encourage and appreciate it, and then that might be a route for you, potentially.

Mary Jo Foley (18:00):Good, good point. Another question from Twitter, from the Unified Comms Influencers account. This is a good, like, "what if" question: what if you're in mid-cloud-rollout as an IT Pro and you find users have already adopted their own cloud apps? And they used the hashtag #ShadowIT.

Tom Arbuthnot (18:22):Yeah. I'm not sure if that's an "if" or just the reality. It's more like, whether you discover it or not is the question.

Mary Jo Foley (18:27):I know, that's always happening, right? Like, it's inevitable.

Tom Arbuthnot (18:29):100%

Mary Jo Foley (18:31):Any thoughts or guidance for people who, you know, you're an IT Pro and suddenly you're like, oh wait, I'm trying to do this cloud thing over here, and I just found out all my users are already doing this over here, you know?

Tom Arbuthnot (18:43):Yeah. It's an interesting one. There are a few different answers there. So the first thing is, historically shadow IT was largely ignored. It's a kind of, if I can't see it, if I don't know about it, it's not happening. Things like GDPR and ISO have largely changed that attitude, because now you can't really turn up for a GDPR issue and say, I didn't know; that's less credible. For example, WhatsApp is the classic one; we see lots of people using WhatsApp in our space, moving to Teams. I think the key thing is getting the business on board with why you're doing what you're doing and trying to get business-level sponsorship. So the big thing is security, compliance, GDPR. If they're using a third-party product, they're probably breaching those rules, and users aren't trying to breach them.

Tom Arbuthnot (19:34):Theyre just trying to get their job done. So dont blame the users. Like theyre just trying to get their job done in the most efficient way. But if you can help them understand like, particularly with M365 now, like theres so many tools in the box that are similar in functionality and ability to the kind of consumer prosumer equivalence, you know, we didnt, five years ago, you couldnt really say dont use Dropbox. OneDrive is better because it wasnt. But like theyre there, or there abouts now, but with all the security and compliance benefits. So, but you need sponsorship from someone influential on the organization side, because people are slow to change if theyre already using a tool and they love it, they dont see the immediate benefit to them. But if you can say, well, look, we need to be compliant because of risks and security. Can you meet me halfway, easy to say, hard to sell, but that is a big part of a migration project is persuading the users to use your platform for the benefits.

Mary Jo Foley (20:31):Okay. We touched on this briefly earlier, the idea of skills and career topics, but any kind of high-level thoughts about career options, I guess I'd say, for people? You mentioned some people may want to go to a consultancy if they have the desire to be more public-facing, you know, but any other career guidance for people who are traditional IT Pros as the cloud kind of just takes over?

Tom Arbuthnot (21:01):Yeah, it's definitely a hot topic. As I said, I think there are different ways you can go. Certainly, if I look at our technical teams, there's started to be an interesting division, where the people who are more Modern Workplace tend to lean towards, they understand the fire hose of features, they're looking at business-value conversations and adoption conversations and transformation conversations, and they like all that value engagement stuff. And then there's another group of people who really like the IP protocols and packets and feeds and speeds, and they're starting to navigate more towards Azure stuff, where it's more building solutions and architecture and that kind of thing. So I think there's a general split there. If you're going down the Modern Workplace, M365 route, you definitely want to be aware you're moving into a world where the options are laid out for you and you're picking the right one for an organization. If you want to go down the technical architecture route, you're probably looking more at the Azure cloud world rather than the Modern Workplace cloud.

Mary Jo Foley (22:07):That's good. I hadn't ever really thought of the division that way, but yeah, that makes a lot of sense. Okay. And then, to close out, any guidance around resources for people who are struggling to stay current? You know, we talked about the emails that go out to admins on Mondays with all the myriad features coming to Microsoft 365, but anything else you would say that, if you're an IT Pro, you should definitely be thinking about or looking at?

Tom Arbuthnot (22:36):Yeah, I think the Microsoft Tech Community has a whole range of blogs for different technologies, and by and large, Microsoft are trying to do a better and better job of communicating that stuff. The Office 365 Roadmap is far from perfect, but it's a great source of information on what's coming up. But more generally, if you want to be, like, an expert in your area, look at what's going on online. There are so many online events, webinars, and communities (we talked about this at the start), and try to find peers to talk about this stuff with. We're all in it together with this fire hose of change and new features and new information. So don't feel like you're on your own. If you want to learn, if you want to do more: blogs, webinars, community groups. There's lots out there.

Mary Jo Foley (23:24):I agree. Even though we all joke about Twitter, I feel like Twitter is a really good place to just sometimes throw a question out there. Has anybody done blah? You'll find people will answer you.

Tom Arbuthnot (23:35):Definitely. Twitter is great, and it's good because it's got that hashtag system of connecting random people. I've met so many people on Twitter who I now talk to on a regular basis. And LinkedIn, actually, interestingly, is starting to pop even more so for that, I guess maybe more the business end of that conversation, but I've found I get more and more great engagement on LinkedIn around Office 365, M365 topics too.

Mary Jo Foley (24:02):That's great. All right, Tom. Well, thank you so much for all the great guidance and thoughts and inspirations. I appreciate you taking the time today to do this.

Tom Arbuthnot (24:11):No, likewise. I appreciate being on, and I love all the stuff you're doing as well and all your content. It keeps me up to date. So thanks back.

Mary Jo Foley (24:18):Oh, thanks. For everyone else listening to this right now, or reading this transcript, I'll be putting up more information soon on Petri.com about who my next guest is going to be. And once you see that, you can submit questions directly on Twitter for our guest using #MJFChat. In the meantime, if you know of anyone else, or even yourself, who might make a good guest for one of these MJF Chats, please don't hesitate to drop me a note. Thank you very much.


Wiwynn EP100 Participated in the Second Global O-RAN ALLIANCE Plugfest with Radisys – Business Wire

TAIPEI, Taiwan--(BUSINESS WIRE)--Wiwynn (TWSE: 6669), an innovative cloud IT infrastructure provider, today announced that the company successfully completed testing in the second global O-RAN ALLIANCE plugfest. In collaboration with Radisys, a global leader in open 5G solutions, the Wiwynn EP100 server platform works as an O-DU (O-RAN Distributed Unit) and an O-CU (O-RAN Central Unit) in the radio access network (RAN) to meet mobile operators' demand for 5G open RAN architecture.

In the 5G era, mobile operators and vendors have come together in the O-RAN ALLIANCE to enable the open RAN transformation. These new disaggregated and virtualized networks are built with open source software and white-box hardware to bring scalability, flexibility, reliability, and agility to the 5G network. Verification, testing and integration are critical for the open RAN ecosystem to develop commercially available solutions. The European branch of the second global O-RAN Plugfest, hosted by Tier-1 telecoms, the O-RAN ALLIANCE, and the Telecom Infrastructure Project (TIP), speeds up the development of interoperability within the ecosystem.

"We are excited to work with Radisys to turn Wiwynn's EP100 into the O-DU and O-CU serving in the open RAN architecture," said Steven Lu, Wiwynn's Senior Vice President of Product Development. "In the second global O-RAN plugfest, Wiwynn's EP100 was tested in the Deutsche Telekom-hosted Berlin lab. We are committed to further engaging with the community and demonstrating the capability of EP100 in the O-RAN architecture to address the burgeoning market."

The Radisys O-DU/O-CU software is robust, highly scalable, and feature-rich, and supports both the 5G NR SA and NSA modes of operation. Radisys also provides 5G Core Network functions as defined by 3GPP. It is compliant with 3GPP Release 15 and O-RAN standards, with a strong roadmap to evolve towards Release 16.

The Wiwynn EP100, an OCP Inspired OpenEDGE platform, is an energy-optimized 3U, 430mm-depth edge system. It is configured with five 1U half-width single-socket server sleds, and each sled supports one PCIe Gen3 x16 FHHL accelerator card. With the integration of Radisys' cutting-edge software, the EP100 serves as DU and CU in a disaggregated open 5G network architecture. This enables scaling network capacity in an agile manner in response to the growth of customer traffic.

Explore more of Wiwynn's vRAN solutions.

About Wiwynn

Wiwynn is an innovative cloud IT infrastructure provider of high quality computing and storage products, plus rack solutions for leading data centers. We aggressively invest in next generation technologies for workload optimization and best TCO (Total Cost of Ownership). As an OCP (Open Compute Project) solution provider and platinum member, Wiwynn actively participates in advanced computing and storage system designs while constantly implementing the benefits of OCP into traditional data centers.

For more information, please visit the Wiwynn website or contact sales@wiwynn.com.

Follow Wiwynn on Facebook and LinkedIn for the latest news and market trends.


Zerto beefs up backup, DR and in-AWS-cloud protection – Blocks and Files

DR specialist Zerto is converging backup and disaster recovery (DR) and hopes to strengthen both sides of that equation with stronger backup and DR facilities, using an expanded continuous data protection engine, as well as previewing an in-AWS cloud backup service.

Zerto Enterprise Cloud Edition (ECE) is at its core a DR product, providing DR facilities for on-premises and cloud applications. DR can be provided to the cloud, from the cloud, and between clouds. It features automation of both DR and backup functions, and ECE includes the Zerto Data Protection (ZDP) continuous data protection technology.

Gil Levonai, CMO and SVP of product at Zerto, offered a quote: "We are now delivering a new offering that I personally believe will change the backup market, an industry that hasn't evolved in more than 30 years. ZDP gives businesses a data protection strategy for all of their applications, with significant TCO savings, tailored to their unique needs."

ZDP delivers local continuous backup for day-to-day backup restores. Its local journaling technology enables customers to recover without the data loss, downtime, or production impact that Zerto says are inherent to traditional backup systems, ensuring business continuity and availability.

In fact, ZDP should, in Zerto's view, displace traditional backup, because it offers lower data loss rates and lower infrastructure costs in TCO terms, with an up to 50 per cent saving claimed. ZDP also provides long-term retention on-premises or in the public cloud, with both AWS and Azure as public cloud targets.

Updated: Oct 16

Zerto has ZDP, ECE and Zerto 8.5. How do they fit together? Zerto told us: "The core software, referred to as the Zerto platform, has now moved from v8.0 to v8.5; ZDP and ECE are the ways you consume/use the platform. They are the license types that you can have now with Zerto."

Zerto told us ZDP, which "is for backup and long term retention," is a new offering, still based on continuous data protection, that is focused and priced for backup. The reason it's priced for backup is that the disaster recovery capabilities (failback, failover, DR testing, re-IP) are removed, because customers don't need those capabilities for backup.

In effect, ZDP is ECE-lite. The company tells us: "This means that we will see most customers use ECE for their mission-critical applications and ZDP to back up the rest of their environment that only requires backup and not DR."

Zerto 8.5 is the latest version of Zerto's core protection software, and is used in both the ECE and ZDP products. It follows on from the Zerto 7.0 and Zerto 8.0 versions. At the time of its release Zerto said v7 "converges backup and disaster recovery using hypervisor-based replication and journalling for short- and long-term retention." And: "Its technology allows you to achieve RPOs of seconds using journal checkpoints up to 30 days ago, instead of a 24-hour recovery time frame."

Zerto 8 advanced this, as it brought continuous data protection (CDP) to VMware on Google Cloud. The company said: "Zerto's continuous data protection (CDP) offers the most effective protection for your business applications and data. Zerto automatically tracks and captures modifications, saving every version of your data locally or to a target repository, short and long term."

Zerto claimed its CDP and innovative journaling technology removes the need for snapshots and thus eliminates production impact and lengthy backup windows. Its recovery granularity reduces data loss to just seconds.

The 8.5 version takes this a step further by expanding CDP applicability beyond VMware on Google's Cloud. It is also now suited for lower-tier application backup, and not only DR for upper-tier applications. Zerto 8.5 includes:

The cmdlets enable specific tasks to be performed from a script rather than from within the Zerto user interface. This could be for retrieving information such as the Virtual Protection Groups (VPGs) defined for a site, working with checkpoints, or deleting a VPG.
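To illustrate the kind of scripted automation described above, here is a minimal Python sketch that retrieves and summarises VPGs over a REST call. The URL path, header name, and JSON field names are assumptions for illustration only, not Zerto's documented API; consult the vendor's documentation for the real endpoints.

```python
# Hypothetical sketch of scripted VPG retrieval, in the spirit of the
# cmdlets described above. Endpoint path, header name, and field names
# are assumed for illustration; they are not Zerto's documented API.
import json

def build_vpg_request(host: str, session_token: str) -> dict:
    """Assemble the pieces of a hypothetical 'list VPGs' REST call."""
    return {
        "method": "GET",
        "url": f"https://{host}/v1/vpgs",        # assumed path
        "headers": {
            "x-zerto-session": session_token,    # assumed header name
            "Accept": "application/json",
        },
    }

def summarise_vpgs(response_body: str) -> list:
    """Pull name and status out of a JSON list of VPGs."""
    return [(v["VpgName"], v["Status"]) for v in json.loads(response_body)]

# Example with a canned response, so no network access is needed:
req = build_vpg_request("zvm.example.local", "token123")
sample = '[{"VpgName": "Tier1-SQL", "Status": "MeetingSLA"}]'
print(summarise_vpgs(sample))  # prints: [('Tier1-SQL', 'MeetingSLA')]
```

The same shape of script would cover the other tasks mentioned (checkpoints, VPG deletion) by swapping the method and path.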

Zerto also previewed an in-cloud data protection and DR product on AWS, which protects applications across regions with cloud-native resilience.

The company said it will extend its platform to offer more simplicity and orchestration across all use cases. This will cover businesses requiring a recent data restore due to user error or DR from an infrastructure outage, cloud-first businesses, or businesses just starting out in the public cloud.

It wants to be the one supplier that fits all use cases across the data protection spectrum, from SMB backup to large enterprise DR, covering on-premises virtual servers, containerised servers and in-cloud applications in multiple public clouds.

Read the original:
Zerto beefs up backup, DR and in-AWS-cloud protection Blocks and Files - Blocks and Files

What is Elasticsearch and why is it involved in so many data leaks? – TechRadar

The term Elasticsearch is never far away from the news headlines and usually for the wrong reasons. Seemingly every week that goes by brings a new story about an Elasticsearch server that has been breached, often resulting in troves of data being exposed. But why are so many breaches originating from Elasticsearch buckets, and how can businesses that leverage this technology use it to its fullest extent while still preventing a data leak?

To answer these questions, firstly, one must understand what Elasticsearch is. Elasticsearch is an open source search and analytics engine as well as a data store developed by Elastic.

Regardless of whether an organization has a thousand or a billion discrete pieces of information, Elasticsearch gives it the capability to search through huge amounts of data, running calculations in the blink of an eye. Elasticsearch is available as a cloud-based service, but businesses can also run it locally or in tandem with another cloud offering.
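The searches described above are expressed in Elasticsearch's JSON query DSL and sent to a `_search` endpoint. Here is a minimal sketch that builds such a request body; the index and field names are assumptions for illustration, and the actual HTTP call is left as a comment so the sketch stays self-contained.

```python
# A minimal sketch of an Elasticsearch-style search request, expressed
# as its JSON query DSL. Index and field names are illustrative only.
import json

def build_search(field: str, term: str, size: int = 10) -> dict:
    """Build a query-DSL body matching documents where `field` contains `term`."""
    return {
        "size": size,
        "query": {"match": {field: term}},
    }

body = build_search("message", "login failure")
print(json.dumps(body))
# To execute against a (hypothetical) local dev instance, one would POST
# this body, e.g.:
#   requests.post("http://localhost:9200/my-index/_search", json=body)
```

Note that by default such a dev instance answers on port 9200 to anyone who can reach it, which is exactly the exposure discussed below.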

Organizations then use the platform to store all of their information in repositories (also known as buckets), and these buckets can include emails, spreadsheets, social media posts, and files: basically any raw data in the form of text, numbers, or geospatial data. As convenient as this sounds, it can be disastrous when mass amounts of data are left unprotected and exposed online. Unfortunately for Elastic, this has resulted in many high-profile breaches involving well-known brands from a variety of industries.

During 2020 alone, cosmetics giant Avon had 19 million records leaked from an Elasticsearch database. Another misconfigured bucket, involving Family Tree Maker, an online genealogy service, exposed over 25GB of sensitive data. The same happened with sports giant Decathlon, which saw 123 million records leaked. Then, more than five billion records were exposed after yet another Elasticsearch database was left unprotected. Surprisingly, it contained a massive database of previously breached user information from 2012 to 2019.

From what has been disclosed so far, clearly those who choose to use cloud-based databases must also perform the necessary due diligence to configure and secure every corner of the system. Just as clearly, this necessity is often overlooked or just plain ignored. A security researcher even went so far as to discover how long it would take for hackers to locate, attack, and exploit an unprotected Elasticsearch server left purposely exposed online: eight hours was all it took.

Digital transformation has definitely changed the mindset of the modern business, with cloud seen as a novel technology that must be adopted. While cloud technologies certainly have their benefits, improper use of them has very negative consequences. Failing or refusing to understand the security ramifications of this technology can have a dangerous impact on business.

As such, it is important to realize that in the case of Elasticsearch, just because a product is freely available and highly scalable doesn't mean you can skip the basic security recommendations and configurations. Furthermore, given that data is widely hailed as the new gold, demand for monetising up-to-date data has never been greater. Evidently, for some organizations, data privacy and security have played second fiddle to profit as they do their utmost to capitalize on the data-gold rush.

Is there only one attack vector for a server to be breached? Not really. In truth, there are a variety of ways for the contents of a server to be leaked: a password being stolen, hackers infiltrating systems, or even an insider breaching from within the protected environment itself. The most common, however, occurs when a database is left online without any security (even lacking a password), leaving it open for anyone to access the data.

If this is the case, then there is clearly a poor understanding of Elasticsearch's security features and of what is expected from organizations when protecting sensitive customer data. This could derive from the common misconception that responsibility for security automatically transfers to the cloud service provider. That is a false assumption, and it often results in misconfigured or under-protected servers. Cloud security is a shared responsibility between the organization's security team and the cloud service provider; at a minimum, the organization itself owns the responsibility to perform the due diligence to configure and secure every corner of the system properly and mitigate any potential risks.

To effectively avoid Elasticsearch (or similar) data breaches, a different mindset toward data security is required: one that ensures data is protected a) wherever it may exist, and b) whoever may be managing it on the organization's behalf. This is why a data-centric security model is more appropriate, as it allows a company to secure data and keep using it, while it remains protected, for analytics and data sharing on cloud-based resources.

Standard encryption-based security is one way to do this, but encryption methods come with sometimes-complicated administrative overhead to manage keys, and older or poorly implemented encryption schemes can be cracked. Tokenization, on the other hand, is a data-centric security method that replaces sensitive information with innocuous representational tokens. This means that, even if the data falls into the wrong hands, no clear meaning can be derived from the tokens. Sensitive information remains protected, leaving threat actors unable to capitalise on the breach and data theft.
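The core idea of vault-based tokenization can be sketched in a few lines: each sensitive value is replaced by a random token, and the real value lives only in a separately secured mapping. This is an illustration only; real tokenization products add format preservation, access control, and auditing.

```python
# A minimal sketch of vault-based tokenization as described above.
# The vault dict stands in for separately secured storage.
import secrets

class TokenVault:
    def __init__(self):
        self._vault = {}  # token -> original value

    def tokenize(self, value: str) -> str:
        token = "tok_" + secrets.token_hex(8)  # random, carries no meaning
        self._vault[token] = value
        return token

    def detokenize(self, token: str) -> str:
        return self._vault[token]

vault = TokenVault()
token = vault.tokenize("4111-1111-1111-1111")
# If a database holding only the token leaks, nothing sensitive leaks:
print(token.startswith("tok_"), vault.detokenize(token))
# prints: True 4111-1111-1111-1111
```

The point of the pattern is that a breached data store containing tokens is worthless without also breaching the vault.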

With GDPR and the new wave of similar data privacy and security laws, consumers are more aware of what is expected when they hand over their sensitive information to vendors and service providers, making protecting data more important than ever before. Had techniques like tokenization been deployed to mask the information in many of these Elasticsearch server leaks, that data would have been indecipherable to criminal threat actors; the information itself would not have been compromised, and the organization at fault would have remained compliant and avoided liability-based repercussions.

This is a lesson to all of us in the business of working with data - if anyone is actually day-dreaming that their data is safe while hidden in plain sight on an anonymous cloud resource, the string of lapses around Elasticsearch and other cloud service providers should provide the necessary wake-up call to act now. Nobody wants to deal with the fall-out when a real alarm bell goes off!

Link:
What is Elasticsearch and why is it involved in so many data leaks? - TechRadar

How to move your computer systems to the cloud – KnowTechie

Cloud computing has been around for many years now, but some people are still skeptical of its benefits to business operations. Does it bring more efficiency to your company than traditional local servers? Well, based on recent reviews, there's no doubt that cloud-based storage and services enhance the success of any business project. For instance, the fact that you can access your files from wherever you are makes it a must-have technology in the modern world.

If you're running a start-up business, then perhaps one of the stumbling blocks would be the process of migration. Luckily, there are professionals who specialize in this task and are focused on making it all smooth sailing. In your research, you'll come across many success stories, such as "Chicago MSP helps company move to the cloud." This is just proof that it's very possible to modernize your business and become even more competitive. But before you start strategizing on how to migrate to the new world of computing, it's important to understand what it entails.

This article will discuss all the fundamentals of moving your local servers to the cloud.

Simply put, the cloud is a series of servers that you can access over the internet. Others may define it as someone else's computer, which is also correct. Cloud computing, therefore, is the act of storing programs and data on a remote server and accessing them over the internet. Now that you have this basic knowledge, let's see how you can start the migration process.

What Do You Want To Move?

Before you can start your plans, it's important to know which parts of your current system can be moved to the cloud. Every business has desktop applications, data, internet, and some peripherals. But which of these can be moved?

The most obvious group of items that should be on your list is the programs used in various departments within the business. There are two categories of these applications: cloud-based and traditional desktop programs. Cloud-based applications are those whose data is held on the cloud servers. You can access them either through a web browser or an installed program.

Traditional desktop applications, on the other hand, are those which don't have a web-based alternative and might need to be integrated with other programs to be operational. However, the best option is to use a hosted desktop, which basically works like your physical computer. It can accommodate as many services as you want, provided you choose the right memory size.

The advantage of using a hosted desktop is that you don't need any special IT skills to operate it. In fact, all the maintenance procedures are done by the service provider. All you have to do is install the applications you have on your local server and start working. The best part is that you can connect it to your local server and transfer various files seamlessly. Continue reading to find out more about the migration of databases.

Of course, if you decide to use cloud-based applications or hosted desktops, you'll need to also move your database. Structured data, which includes names, geolocation, card numbers, and addresses, can be easily accessed via this system. All you need is to link your local server to the hosted desktop and you'll be good to go.

Unstructured data, on the other hand, are quite difficult to deconstruct since they lack a pre-defined model. As such, they cannot be accessed in a similar way to the structured data. They include satellite imagery, videos, and audio. The best way to manage these files is by using separate storage servers.

For instance, you can store your data in Dropbox, Google Drive, or OneDrive. There are many cloud-based storage services to choose from, but not all of them are ideal for your business. Before making a final decision, consider reliability, security, performance, and flexibility. The best thing about transferring your data to these servers is that you can access the files via both your local computer and the remote desktop.
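The structured/unstructured split described above can be captured in a simple routing rule: during migration, send files with no pre-defined model to separate object or file storage, and everything else to the database-backed hosted desktop. The file extensions and category names below are illustrative assumptions, not a prescription.

```python
# A small sketch of routing files during a cloud migration, based on the
# structured vs. unstructured distinction above. Extensions are assumed
# examples (video, audio, satellite imagery).
from pathlib import Path

UNSTRUCTURED = {".mp4", ".wav", ".tif", ".jpg", ".png"}

def pick_store(filename: str) -> str:
    """Decide where a file should live during migration."""
    ext = Path(filename).suffix.lower()
    return "object-storage" if ext in UNSTRUCTURED else "database/hosted-desktop"

for f in ["customers.csv", "site-flyover.mp4", "invoice-4012.txt"]:
    print(f, "->", pick_store(f))
```

In practice the rule would be driven by a data inventory rather than extensions alone, but the shape of the decision is the same.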

Peripherals such as your printers cannot be moved to the cloud. However, youll need to ensure that whatever cloud service you decide to use can be linked to your printers in case you need any printing done remotely.

As you already know, any cloud-based computing relies heavily on internet connectivity. In other words, without a good internet connection, there isn't much you can do. Therefore, before you even think of transferring your data to cloud servers, make sure you find a reliable internet service provider (ISP).

The whole point of moving your computer systems to the cloud is to enhance your operations. Slow internet will render all your efforts useless regardless of how hardworking your employees are. Remember, time is money, so every second you lose because of downtime can be very costly.

Unfortunately, there is no perfect ISP, and downtime can come when least expected. The best you can do is have backup internet links with other providers to create redundancy. One thing to consider, though, is that all providers should be using different underlying networks. That way, you'll rarely experience significant downtime. For instance, you can have a 4G dongle that can be switched on whenever your main ISP goes offline.
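The failover idea above amounts to probing each link in priority order and using the first one that responds. In the sketch below the health check is simulated with canned results so the example is self-contained; in practice you would test a real connection per link (for example with `socket.create_connection` against a known host).

```python
# A sketch of ISP failover: walk the links in priority order and pick
# the first one whose health check passes.
def choose_link(links, is_up):
    """Return the first link whose health check passes, else None."""
    for link in links:
        if is_up(link):
            return link
    return None

# Simulated outage: both wired providers are down, the 4G dongle is up.
status = {"primary-fibre": False, "backup-cable": False, "4g-dongle": True}
active = choose_link(["primary-fibre", "backup-cable", "4g-dongle"], status.get)
print("Active link:", active)  # prints: Active link: 4g-dongle
```

Because the links are probed in order, traffic returns to the primary automatically on the next check once it recovers.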

If done correctly, implementing cloud computing can be very beneficial for your business. However, any misstep can be quite costly and might even take you back to square one. Therefore, it's important to consult experienced professionals and check out the profiles of various cloud service providers.

The process of migration might seem intimidating, considering that some steps require special IT skills. However, all you need to do is understand which files can be moved and how you can move them. Also, find a cloud service that will fit your needs. The most important part of this project is to have a reliable service provider to enhance your operations. In addition, you should have a backup 4G dongle or another ISP that can be used during emergencies.

Have any thoughts on this? Let us know down below in the comments or carry the discussion over to our Twitter or Facebook.

Link:
How to move your computer systems to the cloud - KnowTechie

What is application hosting? – Techradar

Put simply, a hosted application is any piece of software that runs on someone else's infrastructure rather than on-premise. Such hosted applications are accessed over the Internet and provide a web-based user interface for users to interact with them. Hosted applications are usually offered as Software-as-a-Service (SaaS).

In other words, application hosting allows you to run your applications on servers, or in a cloud, hosted by a service provider such as Amazon Web Services (AWS) or Google Cloud Platform (GCP), which provide the necessary foundations to host your apps.

An example of a hosted application that everyone can identify with is WordPress. If you wanted to blog, the traditional method would be to download WordPress and spend time installing and configuring it before you could publish. However, thanks to application hosting on WordPress.com, you can simply visit the website and get started immediately after registering an account. That's because WordPress.com hosts a pre-installed, pre-configured version of WordPress.

Hosting applications on remote machines has several advantages. For starters, it reduces costs, since you don't have to spend money building and maintaining the underlying hardware, software, and general IT infrastructure. This is hugely beneficial since most of the time the underlying hardware would otherwise remain underutilized.

Furthermore, with application hosting, you only pay for the services you use. This also makes it very scalable as opposed to the traditional on-premise hosting, since you can provision additional resources to handle peak load requirements with just a few clicks. You can start small and grow as needed without incurring the costs of pre-purchasing excess server capacity.

Application hosting also provides improved availability by minimizing downtime as most of the reputable hosts provide enough redundancy to handle hardware failures and other faults. In the same vein, the cloud hosts also invest in enhancing the security of their servers. In fact, most of the reputable ones meet stringent ISO security standards.

Finally, since the hosted application is accessible from the Internet, all authorized users can access the app from anywhere and work remotely.

There's no dearth of cloud hosting providers that you can use to host your applications. Since they don't all offer their services at the same price, there are some important factors that influence the final cost of hosting applications.

The most important factor is the nature of the application to be hosted. Some applications take more processing power while others need a lot of storage. The final cost of application hosting will be based on these technical requirements of the hosted application.

Another factor that influences the cost is the type of server. The two most common are shared and dedicated servers. While dedicated servers are more expensive than shared hosting ones, both types have their advantages and drawbacks. You should evaluate both to determine which option works best for your application.

In addition to choosing the type of server, the duration of the plan will also have an impact on the final cost of hosting the application. Instead of charging an upfront cost, most application hosting providers charge on a recurring subscription. While most platforms offer flexible tenures, we'd advise you to choose a long-term plan, which will be comparatively cheaper than monthly plans.

Convinced about the benefits of application hosting? The next step is to hunt for the right hosting provider that meets all your requirements without breaking the bank. Here are some of the main features that you should look for while evaluating an application hosting vendor:

1. Application compatibility: Just as all applications are built differently, application hosting too isn't a one-size-fits-all solution. The software requirements of your application will dictate the features that the hosting platform must meet.

2. Onboarding process: Depending on the type of application you need to host, many hosting platforms will offer a one-click setup to simplify the deployment process. But deployment is only one piece of the puzzle; you'll also need to look into the platform's management tools and evaluate its documentation and other resources to help you get started with the platform.

3. Security features: You should always be proactive when it comes to cyber security, especially these days when data breaches happen at an alarming frequency. Keep your eyes peeled for hosting platforms that invest in the security of their infrastructure, against both physical and online attacks.

4. Reliability and uptime: Servers, whether hosted on-premise or online, do occasionally go offline for maintenance or for other reasons, such as faulty hardware and other disruptions. Make sure you check the amount of time a service is affected by these kinds of issues. Many reputable providers promise 99% uptime, and some even back their claims with a guarantee.
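It is worth doing the arithmetic on those uptime figures: an availability percentage translates into a surprising amount of allowed downtime per year, which is why the number of nines matters when comparing guarantees.

```python
# Quick arithmetic on uptime percentages: hours of permitted downtime
# per year at a given availability level.
HOURS_PER_YEAR = 365 * 24  # 8,760 hours (ignoring leap years)

def downtime_hours(availability_pct: float) -> float:
    return HOURS_PER_YEAR * (1 - availability_pct / 100)

for pct in (99.0, 99.9, 99.99):
    print(f"{pct}% uptime -> {downtime_hours(pct):.1f} hours of downtime/year")
```

So a "99% uptime" promise still permits roughly 87.6 hours (over three and a half days) of downtime a year, while 99.9% permits under nine hours.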

5. Support and service: Since most businesses service clients round the clock, you'll need the same kind of availability from your application hosting provider. Look for a platform that provides 24/7 customer support, and check the available avenues of communication, such as live chat, email, or phone.

6. Data export services: Although it's fairly common for providers to allow you to export your data, it'll still be a good idea to check for this function. Application providers can and do go out of business, which is a legitimate concern and often cited as one of the disadvantages of hosting applications on a remote platform rather than on-premise. Having the flexibility to export your data will help you migrate it to another platform without too much downtime.

See the original post here:
What is application hosting? - Techradar

The Role of Hybrid Cloud Technologies in Today’s Business Climate Wall Street Call – Reported Times

Oct 14, 2020 7:00 PM ET iCrowd Newswire Oct 14, 2020

You will agree with me that the cloud has incredibly transformed business computing dynamics. Cloud technologies come with a vast range of benefits, from low upfront costs to easy scalability and superior uptime availability.

Mike Shelah (Advantage Industries) is just one among the many Managed Service Providers who agree that using the cloud presents many benefits to their clients. He particularly singles out Microsoft SharePoint, which gives customers substantial storage for internal collaboration as part of their 365 licensing.

Our focus today is the role of hybrid cloud technologies in today's business climate. To put things into perspective, let's look at the different types of clouds.

According to Google, cloud computing is the practice of using a network of remote servers hosted on the Internet to store, manage, and process data, rather than a local server or a personal computer.

If you have been probing cloud computing, you must be well aware of the longstanding debate on private vs. public cloud. Before you make up your mind, it's essential to interrogate the differences between the two:

We would generally define the hybrid cloud as a combination of private and public clouds and on-premises (bare metal) infrastructure, often with some level of integration or orchestration between environments.

So, How Does It Work?

At SemTech IT Solutions, clients have the main physical server on-site, but the server is paired with Office 365 and a third party file sync solution. This, according to Nick Allo, allows users who are working remotely to still access their email/files from anywhere, anytime. Besides being cost-effective, this solution enables business executives to have more control over their data.

The main driver pushing people toward cloud computing is the remote accessibility of data from anywhere. Ilan Sredni works with Palindrome Consulting in South Florida. He admits that everyone is conscious of hurricane seasons and power outages, and so clients want solutions that give them the ability to get work done even when the office may be completely inaccessible. At the same time, administrators want to control their data and how it's stored and managed. The only way out is to integrate both private and public clouds. In Ilan's words, "Hybrid cloud seems to bring the best of both worlds."

Rick Crawford, MSP Tech News

Keywords:hybrid cloud, cloud computing, cloud services, public cloud, private cloud, cloud technologies, cloud migrations

Original post:
The Role of Hybrid Cloud Technologies in Today's Business Climate Wall Street Call - Reported Times

IBM Goes All-In On Hybrid Cloud – IT Jungle

October 12, 2020Timothy Prickett Morgan

Well, that was a bit of a surprise, and probably something that only obliquely matters to IBM i shops at the moment, but Big Blue's top brass has decided to carve out its managed infrastructure services business from Global Services and spin it out as a new, publicly traded company.

This business, which is tentatively being called NewCo until a real name is provided, is expected to be cut loose in a tax-free manner and distributed to IBM's shareholders by the end of 2021, so we have some time to assess the ramifications, if any, for the IBM i base. The core of the $19 billion NewCo is the outsourcing and hosting business that made Global Services gigantic and, in a very real sense that we have described many times, saved IBM because it gave the company a story to tell and then live up to in the very difficult 1990s.

Looking at the prior 12 months of sales, the NewCo business brought in about $19 billion in revenues, according to IBM, which hosted a briefing with Wall Street analysts on Thursday to go over the separation. The remaining IBM will be smaller, at $59 billion a year, but growth in its cloud sales, including Red Hat, will now seem that much larger against a smaller base. In the trailing 12 months, Red Hat revenues were up by 19 percent, from $3.5 billion to $4.2 billion, and that very good growth (smaller than many expected, mind you) gets lost in the noise of the much, much larger NewCo business that is declining.

IBM will pay pretty handsomely for spinning out this business, which will incur a $2.3 billion charge at the end of 2020 for structural actions plus another $200 million or so as the deal closes next year, including around $1.5 billion in cash charges and around $1 billion in balance sheet charges. Presumably the IBM stock split to form NewIBM and NewCo will be proportional to the revenue streams of the two pieces of Big Blue that remain.

There is a certain amount of Power Systems iron running within the piece of the current Global Technology Services business that forms the core of NewCo, and the customers who have IBM run their applications on outsourced iron (meaning IBM literally takes over your stuff and usually some of your people and moves them to its own datacenter) or hosted iron (meaning IBM owns the iron and runs your applications on it) do upgrade their machines every couple of years as workloads dictate. This is an important source of revenue for the Systems group, and the majority of internal sales for Systems group are for servers and storage sold to Global Technology Services for this purpose. According to our 2019 revenue model, we think Global Technology Services bought $250 million in Power machinery and the Storage division bought another $226 million of Power servers to underpin the DS series of SAN storage arrays, which are basically AIX servers running storage software. Customers outside of IBM, by contrast, bought $1.78 billion in Power Systems servers, so this internal Power sales number is not an insignificant one and one big piece of it will now come from NewCo and it will be booked as an external sale. It will be interesting to see if NewCo will stretch out the lifetimes of Power Systems and System z iron as real customers do. We suspect the Global Technology Services customers might have had shinier iron than customers buying their own gear.

The most important thing about this table above, which shows what is staying and what is going, is that IBM is keeping servers, storage, operating systems, middleware, databases, break/fix and other technical support services, and IBM Cloud, and it is also retaining the core IT consulting, systems integration, process services, and application management services that are part of the Global Business Services Edition, the latter of which has a $41.1 billion services backlog against something on the order of $23.8 billion in revenues. NewCo has a $60 billion backlog against a $19 billion revenue stream, so that ratio is higher. NewCo has 4,600 customers in 115 countries and around 90,000 employees will be leaving Big Blue to go to NewCo, leaving something on the order of 260,000 employees in what we will call Littler Blue to be funny.

So, why is IBM doing this? Aside from getting slowly declining businesses out of its revenue stream, it is also because IBM wants to focus entirely on hybrid cloud. That means enterprise customers who have on-premises, mission-critical systems and want to extend out into one or more public clouds, or maybe even run solely across one or more public clouds.

Here is what IBM is really focused on: making money from hybrid cloud. It's not sexy, and some have argued that no one is going to get excited about plumbing, and they are right. But if IBM can make money with hybrid cloud, it doesn't matter if it is more boring than being Amazon Web Services, Google, or Microsoft. The way that IBM sees it, for every $1 that customers spend on core Red Hat infrastructure software for hybrid cloud (Enterprise Linux operating systems and OpenShift Kubernetes container controllers, with perhaps some storage and virtualization), they spend another $1 to $2 for the physical infrastructure, another $3 to $5 for middleware and applications, and another $6 to $8 for various kinds of cloud transformation services. IBM knows that others sell servers, storage, cloud infrastructure, applications, middleware, and cloud transformation services, so it cannot capture all of that revenue, but if it could, then that $4.2 billion annualized revenue rate from Red Hat would translate into somewhere between $40 billion and $80 billion in total addressable market. If IBM can get half of that, then the $34 billion Red Hat deal can pay for itself all the more quickly and, presumably, IBM can grow that $19 billion back like a salamander regrowing a tail.
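As a back-of-the-envelope check on the multipliers quoted above, summing the per-dollar components gives an 11x to 16x multiple on Red Hat's software revenue, which indeed lands inside the article's rounded $40 billion to $80 billion range.

```python
# Arithmetic check on the TAM figures in the article: $1 of Red Hat
# software pulls $1-2 of infrastructure, $3-5 of middleware/apps, and
# $6-8 of transformation services (plus the $1 itself).
red_hat_annualized = 4.2  # $ billions, trailing twelve months

low = red_hat_annualized * (1 + 1 + 3 + 6)    # 11x multiplier
high = red_hat_annualized * (1 + 2 + 5 + 8)   # 16x multiplier
print(f"Implied TAM: ${low:.1f}bn to ${high:.1f}bn")
```

The computed $46.2bn-$67.2bn range is narrower than the article's round numbers, suggesting the "$40 billion to $80 billion" figure is a loose rounding of these same multipliers.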

IBM i Tries On a Red Hat

Red Hats Ansible Automation Comes To IBM i

How Big Blue Stacks Up IBM i On Premises And On Cloud

Big Blue Finally Brings IBM i To Its Own Public Cloud

IBM Takes A Hands Off Approach With Red Hat

The Impact On IBM i Of Big Blues Acquisition Of Red Hat

Tags: Global Services, Global Technology Services, IBM i, Kubernetes, Linux, NewCo, OpenShift, Power Systems, Red Hat, SAN, System z


Originally posted here:
IBM Goes All-In On Hybrid Cloud - IT Jungle

Is your college in a severe wildfire zone? – CALmatters

In summary

Of California's nearly 150 public colleges and universities, 18 are within areas Cal Fire deems at high risk from wildfires. In addition to evacuation plans, colleges have different fire mitigation tactics they can employ to minimize risk.

As a wall of flame drew closer to the northernmost reaches of the UC Santa Cruz campus, Saxon Stahl knew an evacuation order was imminent.

Stahl, a student living on campus during summer session, had been following the progress of the CZU Lightning Complex fires that started Aug. 16. By the time the email announcing voluntary evacuations reached Stahl's inbox on the afternoon of Aug. 20, they leapt at the chance, accepting a voucher to stay at a hotel four miles south.

They fled the ash raining from the sky, but the smell of campfire lingered still.

Hours later on the 20th, campus police sent all 1,200 students and staff packing under a mandatory evacuation order that was only fully lifted nearly three weeks later.

"It was kind of chaotic in retrospect," the senior told CalMatters. An assignment due at 5 p.m. the day they relocated to a hotel was extended only to midnight. After eight days of hotel dwelling, Stahl and several dozen other students lived out the rest of their evacuations at San Jose State University.

That the CZU Fires came within a mile of the northern end of UC Santa Cruz's borders shouldn't come as a surprise. The photogenic campus, nestled in a forest of redwoods, is one of several dozen public universities and community colleges near or in a fire hazard severity zone as designated by the state's fire authority, the Department of Forestry and Fire Protection.

A CalMatters analysis found that 18 public higher-education institutions in California, out of 148, have addresses in these zones. That number excludes campuses whose territories partially stretch into hazard zones but have addresses outside of them or are within a few miles of the zones. Cal Fire ranks these zones by severity (moderate, high and very high) and bases the labels on signs of fire danger, such as topography, weather, wind, fire history and flammable forest debris. The Cal Fire zones also exclude federal lands and local areas that aren't deemed a very high hazard.


Already this year California has endured its largest fire season in recorded history, with more than 4 million acres burned in fires that have claimed 8 lives and damaged or destroyed almost 5,500 structures. Fires, predicted to intensify, could threaten numerous college dorms and school buildings.

And there's a lot of wood to burn. Before the Gold Rush, California's forests had 50 to 70 trees per acre. In 2009, there were 400 trees per acre, the result of decades of fire suppression and public policies that abandoned the purposeful fires practiced by Native American tribes to limit the intensity of forest fires.

"Campuses close to a fire hazard severity zone should definitely look at ways that they can reduce the risk around the campus," said Steven Hawks, staff chief of Cal Fire's wildfire planning and engineering division. That means clearing out fallen leaves, removing brush and committing to expensive retrofitting, among other actions, especially for campuses built before the fire-resistant building codes that came into effect in 2008.

While Cal Fire's hazard zones show severity, they don't show risk. Hawks and others interviewed for this story stressed that campuses can put in the work to limit the damage caused by fires. Expanding roads for emergency vehicles, swapping single-pane glass for dual-pane, and installing new roofs are other mitigation techniques campuses near or in fire zones could pursue, Hawks and others said. Meanwhile, some campuses close to fire zones enjoy favorable conditions that can keep wildfires at bay.

Being prepared can only get a campus so far, however, especially with the increasing menace of recent fires. The town of Paradise had "a pretty good evacuation plan," Hawks said, but the deadly 2018 Camp Fire burned so intensely and so rapidly that it cut off some of the town's evacuation routes, forcing officials to alter plans on the go.

Just how close is too close before calling an evacuation is impossible to say. The conditions determine the response. "Most structures are destroyed because of an ember," said Hawks, and embers are carried by winds ahead of fires, sometimes for several miles. Fire burns faster uphill, so campuses atop an ignited slope stand a greater risk of damage than colleges in valleys where fires burn in the hills above. Of course, if the wind shifts, "all bets are off," Hawks said. The drier the season, the greater the risk.

At UCs, wildfires are a local campus response. Cal Fire issues the evacuation orders but UC campuses implement those orders in conjunction with first responders and regional emergency management personnel. Each campus also has an emergency management director who coordinates emergency planning. During emergency events the director will be at the response center and may lead it, depending on the campus. But campuses keep the UC Office of the President informed. The office knew UC Santa Cruz would declare a campus emergency before it happened, said Amina Assefa, the UC system's director of emergency management and business continuity.

"When I look at those maps, I see a lot of the state is in the fire hazard zones," said Assefa. "We are aware of this reality and the challenge that poses."

For the 23-campus California State University system, wildfire responses are largely the domain of campus chancellors and their staff, plus input from the system chancellor, said spokesperson Michael Uhlenkamp.

UC Santa Cruz is bounded by a horseshoe of public land susceptible to wildfire. The CZU fire that licked the terrain within a mile of campus burned to the north and west in mostly high hazard zones.

The sylvan landscape requires constant upkeep to reduce the risk of fire damage. The campus maintains a series of fire roads in the northern campus, where it's more heavily wooded, for fire truck access. In collaboration with Cal Fire, the university annually clears out excess leaves, branches and trees to reduce the fuel load, or material that can burn during a wildfire.

A report on lessons learned about the university's response is in the works, though university officials shared some details. For one, the campus needs to increase the sheer tonnage of material that it clears off the land, said Jean Marie Scott, an associate vice chancellor at UC Santa Cruz who oversees a budget of nearly $15 million in risk and safety services, including the campus police, transportation and fire departments.


Next, the website containing fire updates was initially hosted on physical servers until it was moved to a cloud server in case flames torched the IT equipment. UCSC also wants to ink a memorandum of understanding with the company that runs the coastal boardwalk. That's where evacuated students and staff waited for resettlement, but the campus wants the relationship formalized to move people to safety more quickly next time.

The evacuation coincided with a global pandemic, further complicating the campus's response. Buses that brought students to San Jose State normally fit 40 people but in the era of COVID-19 could carry only 10 each, requiring more vehicles. Students waiting for rides to hotels stood masked in marked-off squares measuring 10 feet by 10 feet to keep space from one another.

The evacuation didn't include just people. UCSC sent several mammals from its marine lab down to SeaWorld San Diego and another location for safe harbor after ash from the fire littered the animals' saltwater pools. The two dolphins, Donley and Rain, rode in a refrigerated truck on their way to San Diego, squeaking and whistling at each other the whole trip.

Other colleges close to a hazard zone are at a low risk of sustaining wildfire damage. Scan the CalMatters interactive map and Humboldt State University sits less than two miles west of an expanse of high fire hazard woodlands. But thanks to a marine layer cloaking the university and the quilt of redwoods surrounding it, "the fuel that sits on the ground of the forest is just constantly moist as a result, and so it's not a great conductor of fire," said Cris Koczera, emergency management coordinator at Humboldt State. She can't recall a single fire in the 20 years she's worked in disaster planning in the area that came close to the ridge line separating the damp area of the university from the drier forest to the east.

If an emergency strikes, Koczera says, Humboldt State has agreements with a fairground in Crescent City and a conference hall in Eureka to temporarily shelter evacuated campus students and staff. If an incident affects those two cities, other CSU campuses can help out, Koczera said.

Chico State also appears just a few miles from a hazard severity zone, but the campus is relatively safe because it's surrounded by the city.

"I don't think that Chico State as a campus is at risk," said Jacquelyn Chase, a professor of geography and planning at the university who studies fires. A wildfire that jumped the wildlands boundary into the city would run out of steam before it got far enough in to damage the campus, she said.

The campus maintains its land in a way that reduces the risk of a wildfire consuming the university, Chase said. Plants are juicy because they're watered often, ground crews pick up the leaves that fall to the ground, and the buildings are not that close together, all of which limits ignition and fire spread.

If a crisis does strike, Chico State's evacuation plan is largely to follow the orders of county and state disaster response officials, said J Marvin Pratt, director of environmental health and safety at Chico State. While the campus could order an evacuation before the rest of the city is issued one, Pratt says that's unlikely. It didn't during the 2018 Camp Fire, one of the most destructive in recorded state history.

"It wasn't directly threatening us. So that's where it gets back to listening to the professionals and what they have to suggest," said Pratt, who added that the campus has never needed to evacuate because of a fire.

The Cal State campus also follows UC guidance on managing campus events during days with poor air quality caused by fire. The 2019 report includes a table indicating when events need to move inside or when UC employees working outside should put on masks. The number of required actions grows the higher the Air Quality Index climbs. The UC has tweaked the guidance some this year, recommending that outdoor events be cancelled rather than moved inside because of COVID-19.

The College of the Siskiyous, which is encircled by fire hazard zones, came closer to ruin. The campus was evacuated during the September 2014 Boles Fire. The inferno ultimately damaged or destroyed more than 100 structures in the small town of Weed, which sits at the base of Mt. Shasta an hour south of the Oregon border. Campus spokesperson Dawn Slabaugh told CalMatters that the campus president made the call because initially it seemed the fire was gunning for the college. "You can step outside into our parking lots and see the fire on the hill that is just directly across the freeway coming in our direction," she said. "Do you keep it open or wait too long?"

But the fire shifted, barreling toward town and away from the campus. Spared, the college served as a community anchor. Slabaugh said Cal Fire officials conducted town halls for the community of 2,700 residents in the college's theater for a few days immediately following the blaze. A food assistance program for the area and a Catholic church temporarily relocated to the campus.

California Polytechnic State University, San Luis Obispo is a sprawling 9,000-acre campus that lies partly in fire-hazard wildland. Conflagrations have approached the campus several times; had the wind shifted, they could have readily threatened buildings on campus, including dorms and classrooms, said Christopher A. Dicus, a renowned professor of wildland fire at Cal Poly San Luis Obispo. More recently, the 2020 CZU August fire torched a remote campus site after students, faculty and livestock were safely relocated.

In recent years Dicus and other fire professionals have argued for stricter rules removing combustible material within five feet of buildings. "Anytime you have anything combustible that is touching a building, that is a really, really bad idea," he said, adding that vegetation, mulch and flammable lawn furniture near buildings should either be removed or made unable to catch fire.

A 2020 law creates a new ember-resistant zone within five feet of a structure in a fire hazard area. The law could go a long way toward making some of Dicus' recommendations a reality. What won't be permitted in the zone is still to be determined; the law's rules now have to be fleshed out by Cal Fire and the Board of Forestry and Fire Protection. The campus relies on Cal Fire and the San Luis Obispo fire department, "but we can't just rely completely on the cavalry to come rescue us," said Dicus. "We, like all California campuses, have to work with the fire service to shape that battlefield, to be such that the firefighters have a much easier chance at saving our buildings."

More:
Is your college in a severe wildfire zone? - CALmatters

Fujitsu Verifies Effectiveness of Private 5G in Manufacturing Sites with Microsoft Japan – Latest Digital Transformation Trends | Cloud News – Wire19

Fujitsu today announced that, in collaboration with Microsoft Japan Ltd., it has recently verified the effectiveness of a system that uses private 5G to visualize real-time data within the facility, with a view toward manufacturers' digital transformation (DX).

Using Microsoft Azure IoT Edge(1) in the Fujitsu Collaboration Lab, a private 5G verification facility in Kawasaki, Japan, this system analyzes high-definition images of people moving within the private 5G network's coverage area, along with operating data from cameras, mobile terminals, servers, and other equipment. This enables integrated visualization of the status of people, unmanned vehicles, and equipment with the Fujitsu Manufacturing Industry Solution COLMINA(2), unifying private 5G and cloud environments to deliver a system optimized for network and processing load.
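The division of labor described here (analyze video at the edge, forward only compact status summaries to the cloud for visualization) can be sketched in a few lines. Everything in the example below, including the function name, the message schema and the change-detection rule, is an illustrative assumption rather than Fujitsu's or Microsoft's actual implementation:

```python
import json
import time

def summarize_at_edge(frame_detections, last_count):
    """Return a compact summary message only when the people count changes.

    Sending summaries instead of raw high-definition video keeps the
    private-5G uplink and cloud processing load low. The schema and the
    change-detection rule are illustrative assumptions, not the actual
    system's design.
    """
    count = len(frame_detections)
    if count == last_count:
        return None, last_count          # nothing new: send no message
    payload = json.dumps({
        "timestamp": time.time(),
        "peopleDetected": count,         # e.g. from camera image analysis
    })
    return payload, count

# Simulated detections for three consecutive frames: only the first and
# third frames (where the count changes) produce an uplink message.
last = -1
for frame in [["p1"], ["p1"], ["p1", "p2"]]:
    msg, last = summarize_at_edge(frame, last)
    if msg:
        print(msg)
```

In a real deployment this logic would run inside an Azure IoT Edge module, with the summaries routed on to the cloud for the visualization layer to consume.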

Based on the findings of the verification test, Fujitsu will collaborate with Microsoft Japan to conduct verification tests at Fujitsu's plant in Oyama, Japan, Fujitsu's manufacturing base for network products, by the end of fiscal 2020, and will jointly develop solutions with a view to achieving global expansion going forward.

This verification will be showcased at Fujitsu ActivateNow, to be held as an online virtual conference on October 14, 2020.

Background

In the new-normal society, the manufacturing industry is being called upon to improve the efficiency, automation, and remote capabilities of its operations through digitization, while maintaining quality, in order to transform manufacturing sites and make them more resilient to change. Private 5G is attracting attention as one of the key technologies supporting this.

Private 5G enables enterprises to flexibly construct and operate 5G networks in their own buildings and premises and is expected to be used for unmanned and remote controls at manufacturing sites. On the other hand, in order to achieve these goals, a large amount of sensor data and high-definition video must be utilized to construct an optimal system according to the requirements of network and application processing load.

As the very first achievement of the Private 5G Partnership Program, a co-creation program that combines Fujitsu's expertise and technologies such as private 5G with advanced technology from partners, a system integrating Fujitsu's private 5G technology with Azure IoT Edge and Azure was constructed and its effectiveness verified.

Summary of Verification

Future Developments

Fujitsu plans to deploy this verification system with Microsoft Japan at the Oyama Plant by the end of FY 2020 and to verify it on-site. In this verification test, Fujitsu will use an AI technology for video-based behavioral analysis developed by Fujitsu Laboratories to recognize various human behaviors and to improve the quality and efficiency of operations at manufacturing sites.

In addition, Fujitsu will consider jointly developing an edge computing solution utilizing 5G with Microsoft Japan from a global perspective.

References:

(1) Azure IoT Edge: Locally deployed cloud intelligence on IoT Edge devices.
(2) COLMINA: A digital data solution that connects various information on manufacturing, from design to manufacturing and maintenance.

See the article here:
Fujitsu Verifies Effectiveness of Private 5G in Manufacturing Sites with Microsoft Japan - Latest Digital Transformation Trends | Cloud News - Wire19