Category Archives: Cloud Hosting

Zadara Hosts Webinar With Leading Analyst Firm for MSPs Looking to ‘Live on the Edge’ – Business Wire

IRVINE, Calif.--(BUSINESS WIRE)--Zadara, the recognized leader in edge cloud services, is once again hosting a webinar with a guest speaker from leading analyst firm IDC, catered to MSPs and hosting providers. The webinar, titled "Living on the Edge: How MSPs and Hosting Providers Can Leverage the Edge Without the Public Cloud," will take place on September 29 at 11am PST / 2pm EST and will detail how MSPs and hosting providers can take advantage of edge computing to scale their business by moving to an IT-as-a-service model.

Edge computing is being driven by the abundance of data created by a distributed workforce and the continuous growth of IoT devices. The volume, velocity and variety of this data has challenged centralized computing paradigms. A key benefit of edge platforms is bringing websites, applications, media, security and a multitude of virtual infrastructures and services closer to end devices, which public or private clouds fail to offer.

In this webinar, Dave McCarthy, IDC's research vice president for cloud and edge infrastructure services, will be joined by Steve Bohac, Zadara's head of product marketing, to discuss how MSPs and hosting providers can deploy edge cloud workloads anywhere in the world, achieve compliance across regulated industries (like healthcare, BFSI and federal/state government agencies), and optimize cost per performance with a 100% OpEx-based, IT-as-a-service business model. McCarthy will also highlight the latest research on the evolving edge computing market, including growth trends, why the edge is more important now than ever, and how this affects MSPs and hosting providers.


"It often makes more sense for MSPs to move intelligence to where the data is located, rather than the other way around," noted Bohac. "But that is easier said than done unless you have the right tools and technology in place. We're excited to share how MSPs can scale their business, resources and billing by taking advantage of edge computing without the need for the public cloud."

To register for the webinar, please visit the registration page; to learn more about Zadara, please visit the company's website.

About Zadara:

Since 2011, Zadara's Cloud Platform (ZCP) has simplified operational complexity through automated end-to-end infrastructure provisioning of compute, storage and network resources. Backed by an industry-best NPS rating of 71, Zadara Edge Cloud users are supported by Zadara's team of battle-tested cloud experts and backed by our 100% SLA guarantee. With solutions available on-premises and through cloud and colocation providers, Zadara's turnkey hardware/software, combined with its pay-only-for-what-you-use model, helps companies gain agility without sacrificing the features and functionality that enterprise IT teams demand. Zadara operates worldwide, including clouds in hundreds of data centers at public- and private-cloud partners, with an expert team that provides follow-the-sun services and support, and is the official cloud supplier of Alfa Romeo Racing ORLEN in the Formula One world championship. Zadara is headquartered in Irvine, California, with locations in Cirencester (UK), Tokyo, Tel Aviv, Yokneam (Israel), Bangalore and Brazil.


ORock Technologies Makes History as the First & Only OpenStack FedRAMP-Compliant Cloud to Join OpenInfra Foundation – PRNewswire

RESTON, Va., Sept. 15, 2021 /PRNewswire/ -- ORock Technologies, Inc., a high-performance hybrid cloud service provider built on OpenStack and certified by FedRAMP and the Department of Defense, today announced that it has joined the Open Infrastructure (OpenInfra) Foundation as the market's first and only open-source cloud services provider that is FedRAMP-compliant and built on OpenStack.

"We are pleased to welcome ORock to the Open Infrastructure community," said Mark Collier, COO at OpenInfra Foundation. "Together with ORock, OpenInfra members are working to accelerate the accessibility of cloud infrastructure resources with contributions that unlock resilient and reliable hosting environments for commercial enterprises and government agencies alike. ORock not only shares the Foundation's mission and dedication to open infrastructure, but its solutions also provide a solid framework for organizations seeking performance, security, compliance and price predictability in an open-source cloud solution."

ORock is a cloud IaaS and PaaS provider of choice given its unique ability to deliver government-grade security on OpenStack, the market's de facto open-source platform for operating cloud infrastructure around the world. ORock's OpenStack-based cloud reduces development to one orchestration model for deploying applications seamlessly to the cloud and on-premises, something not possible with closed-source proprietary platforms. Since 2010, the OpenStack community of companies, individuals and developers has driven the build-out and operation of open infrastructure. Technology contributions to the most recent OpenStack release have delivered enhanced security and integration with other open-source technologies, strengthening open infrastructure for cloud-native applications.
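To make the "one orchestration model" claim concrete: OpenStack's native orchestration service, Heat, consumes the same declarative template whether the target cloud is on-premises or hosted. The template below is a minimal illustrative sketch; the resource name and parameters are placeholders, not taken from ORock's materials.

```yaml
heat_template_version: 2018-08-31
description: >
  Minimal illustrative Heat template. The same file can be applied
  unchanged to any OpenStack cloud, with the image and flavor
  supplied per environment.
parameters:
  image:
    type: string
  flavor:
    type: string
resources:
  app_server:
    type: OS::Nova::Server
    properties:
      image: { get_param: image }
      flavor: { get_param: flavor }
```

The same template would be launched against each cloud's API endpoint with `openstack stack create -t app.yaml --parameter image=IMAGE --parameter flavor=FLAVOR`, which is what makes a single orchestration model portable across environments.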

"We are delighted to join the OpenInfra Foundation and look forward to making measurable contributions to the OpenStack architecture," said Gregory Hrncir, Co-Founder, CEO and President, ORock Technologies. "Our FedRAMP-compliant hybrid cloud solutions place us in a unique position to anchor the broadest range of compute, storage and container solutions as well as support diverse use cases across artificial intelligence, machine learning and high-performance computing. ORock powers the full spectrum of mission-critical cloud implementations and enables our customers to rapidly and efficiently scale their cloud environments."

ORock's artificial intelligence and machine learning, cloud hosting, storage, containers, and hardened security solutions deliver a smarter way for companies to modernize their data operations, while receiving the security and compliance of a military-grade offering with 24/7 cloud customer support throughout the lifecycle of infrastructure deployments. To learn more, visit ORock's website.

About OpenStack

OpenStack is the only open source integration engine that provides APIs to orchestrate bare metal, virtual machines, and container resources on a single network. The same OpenStack code powers a global network of public and private clouds, backed by the industry's largest ecosystem of technology providers, to enable cost savings, control, and portability. A global community of more than 110,000 individuals in over 180 countries works together on the OpenStack platform.

About the Open Infrastructure Foundation

The OpenInfra Foundation builds communities that write open source infrastructure software that runs in production. With the support of over 110,000 individuals in over 180 countries, the OpenInfra Foundation hosts open source projects and communities of practice, including infrastructure for AI, container native apps, edge computing and datacenter clouds. Join the OpenInfra Foundation.

About ORock Technologies

ORock Technologies is a high-performance cloud services company deploying hardened enterprise-grade open-source architectures designed for artificial intelligence, machine learning and high-performance computing. Leading organizations across highly regulated industries, federal agencies and private sector companies choose ORock solutions for their storage, compute, containers, and security needs. Our solutions support hybrid, multi-cloud and edge environments with superior compliance, security, cost predictability and 24/7/365 NOC and SOC services managed by U.S. citizens. To learn more, visit our website, contact [emailprotected] and follow us on LinkedIn.

Contact: Claudia Cahill, ORock Technologies, 571-386-0201, [emailprotected]

SOURCE ORock Technologies, Inc.



After eight years, SPEC delivers a new virtualisation benchmark – The Register

The Standard Performance Evaluation Corporation (SPEC) has released its first new virtualisation benchmark in eight years.

The new SPECvirt Datacenter 2021 benchmark succeeds SPEC VIRT_SC 2013. The latter was designed to help users understand performance in the heady days of server consolidation, so it required just one host. The new benchmark requires four hosts, a recognition of modern datacentre realities.

The new tests are designed to test the combined performance of hypervisors and servers. For now, only two hypervisors are supported: VMware's vSphere (versions 6.x and 7.x) and Red Hat Virtualisation (version 4.x). David Schmidt, chair of the SPEC Virtualization Committee, told The Register that Red Hat and VMware are paid-up members of the committee, hence their inclusion. But the new benchmark can be used by other hypervisors if their vendors create an SDK. He opined that Microsoft, vendor of the Hyper-V hypervisor that has around 20 per cent market share, didn't come to play because it's busy working on other SPEC projects.

SPECvirt Datacenter 2021 runs in three phases. For starters, it assumes that most hosts are in maintenance mode, then brings more hosts online to test load balancing. A third phase of tests saturates all four hosts and gives them a solid workout. The benchmark tests how hypervisors manage resources across a datacentre, and simulates performance under five distinct workloads.

One set of results using the new benchmark has already been published, featuring vSphere 7.0U2a, Lenovo ThinkSystem SR665 servers, and AMD EPYC 7763 CPUs.

That CPU is a 64-core beast that Lenovo has used to make merry licensing mischief in single-socket servers. However, the server-maker chose to use it in a two-socket machine for its benchmark run.

Lenovo and HPE are also paid-up members of SPEC's virtualisation committee, while Intel and Oracle have made contributions. Schmidt said the long period between benchmarks is attributable to the complexities of designing a valid test. He added that he expects committee members will soon publish more benchmark results but hopes that organisations that put the test to work will also share numbers they generate.


VMware: 5 Factors to Drive Widespread Adoption of JWCC

One of the greatest challenges that all large-scale cloud programs across Federal Government and DOD have faced is driving adoption once the full operating capability (FOC) is reached. Often, these clouds sit idle for months awaiting application transformation, re-platforming, and migration.

Next generation warfighting capabilities rely on cloud-agnostic providers like VMware to seamlessly connect information domains that span tactical edge, base infrastructure, and multiple public clouds with a unified approach that is simple to manage and highly effective.

This article outlines ways that the DOD can drive rapid consumption of the Joint Warfighter Cloud Capability (JWCC). There are five key capabilities that will drive widespread adoption of JWCC.

This vast DoD VMware install base, coupled with cloud service providers' VMware-based virtual infrastructure, enables rapid VMware-to-VMware migration. More importantly, it eliminates application refactoring. Porting workloads rapidly to VMware-based hyper-scaler clouds will drive immediate JWCC consumption. It will also allow each cloud service provider (CSP) to rapidly acquire its share of JWCC workloads, which will help the overall viability of the JWCC multi-cloud marketplace. Figure 1 below presents the various CSPs that are capable of natively hosting VMware workloads for JWCC.

2. Enabling Rapid Migration Between Clouds - A multi-cloud without the ability to easily move between CSPs is not a true multi-cloud. It is simply a collection of distinct siloed clouds. A true multi-cloud takes advantage of each CSP's common VMware-based computing, network, and storage cloud services to rapidly migrate applications between clouds. The latter will enable JWCC consumers to move workloads immediately between clouds. Consumers will need this capability to meet mission requirements, counter security threats, reduce cost, or to gain access to unique CSP services.

Furthermore, this common VMware-based infrastructure will enable JWCC consumers to host applications across multiple cloud providers, in DOD data and operations centers, and/or the tactical edge. This will give consumers access to best-of-breed services from each CSP. It will allow application owners to host data where it is most accessible and will enable computing capability at the tactical edge where access and bandwidth may be limited.

3. Migrating and Maintaining Security Posture from Source to JWCC - Preserving the networking, computing, and storage architecture, configuration, and security policies during a migration can be difficult. However, doing so when migrating from an existing VMware environment to an identical VMware-based cloud is quite simple. One can simply transfer all servers, networking, storage, configurations, and security policies as-is. No transformation or re-architecture is needed. This preservation is critical to minimizing the risk and cost associated with maintaining compliance and accreditation of legacy workloads. Migration of existing secure, accredited mission workloads is depicted in Figure 2 below.

4. A Fully Compatible VMware-based 'Smart' Tactical Edge - Tactical edge capabilities from Army WIN-T to Navy CANES and countless others across the DOD leverage VMware Cloud Foundation and software-defined datacenter technology to meet their missions today. This significant DOD investment in VMware at the edge, coupled with greenfield VMware hyperconverged software-defined data centers, enables DOD to immediately achieve tactical edge goals.

In addition to existing DOD tactical edge capabilities, VMware also offers DOD-ready VMware Cloud Foundation-based tactical hybrid cloud capabilities from all leading hardware manufacturers such as Dell, HPE, and Cisco.

Leading cloud service providers such as Amazon Web Services and Oracle also offer VMware-based tactical edge solutions, namely VMware Cloud for Amazon Outpost and Oracle Private Cloud Appliance X8.

VMware's partnership with NVIDIA can also be leveraged to perform AI/ML operations on-mission with virtualized hardware accelerators or GPUs. When more processing power is required, tactical units transmit observational data through SD-WAN encrypted tunnels and SASE security gateways to one or more public CSPs. Once the data is securely transmitted, the hyperscale capabilities of public clouds can be leveraged to rapidly mine data and create impactful warfighting intelligence. The produced intelligence products can be pulled down to on-premises DoD data and operations centers at CONUS/OCONUS base locations for Data Decrees and management, and then transmitted back to the tactical edge units or Mission Partners.

In all cases, these VMware-based tactical edge solutions are fully compatible with each hyper-scaler cloud's VMware-based virtualized computing, network, and storage capabilities. Thus, establishing a hybrid cloud from hyperscaler to tactical edge couldn't be easier. For additional coverage of tactical edge solutions for JWCC, see 'Learn the fastest route to AI/ML capabilities at the tactical edge.'

5. Security from Cloud to Edge - VMware is a long-standing cybersecurity partner with DoD and is often on the forefront of accreditation and authorization efforts. JWCC can leverage VMware's intrinsic security and zero trust capabilities to create a defense in depth strategy for the cloud. Intrinsic security is a fundamentally different approach to cyber security. It is a strategy for leveraging the infrastructure and control plane to provide consistent and ubiquitous cyber security, across any cloud, app, or device. Other security vendors utilize products, tools, or bundles that are loosely coupled to the infrastructure as an afterthought. VMware intrinsic security is built-in, which reduces add-on products, agents, and complexity. This decreases the chances of misconfiguration and vulnerabilities. It is also unified across the security, IT, and operations teams to improve visibility and identify threats. Unified security policies follow the application wherever it goes, whether it is on-prem or in a public cloud. Finally, Zero Trust-enabling security leverages the infrastructure for real-time context and control points, to help JWCC better detect and respond to threats.

In conclusion, VMware adds significant capabilities to accelerate the adoption of JWCC across the DoD. We ease migration by eliminating the need to refactor applications. We facilitate inter-cloud movement of workloads and cross-cloud architectures to maximize access to best of breed commercial cloud services. We allow complete applications (network, computing and storage) to be migrated along with security policies and accredited configurations to ease the certification and accreditation burden on JWCC and the application owner. We enable a fully compatible tactical edge and hybrid cloud capability to all major CSPs. Finally, we provide Zero Trust-enabling security throughout our software offerings to meet the unique requirements of the DoD.

If you are a VMware partner, Cloud Service Provider, or DoD customer and would like to know more about how VMware can support JWCC and the tactical edge, contact us. VMware product specialists, architects, and engineers are available to meet with your team for a deep-dive technology discussion or whiteboarding session. Similarly, VMware Pursuit and Capture team members are available to discuss teaming agreements or bid arrangements upon request. We look forward to working with your team.


Marie Cloud is High Point x Design’s first diversity and inclusion officer. Here’s what she wants to do – Business of Home

High Point x Design, an organization with the mission of making the North Carolina city a year-round destination, has tapped Marie Cloud, principal of Charlotte-based Indigo Pruitt Design Studio, as its first diversity and inclusion officer.

"These topics have always been part of our narrative, but to be honest, there weren't specific actions behind them," says Tom Van Dessel, the chairman of HPxD. "We began having conversations with Marie earlier this summer, and she displayed her passion for being a spokesperson and an ambassador for diversity in the industry. Those talks led to us eventually asking if she had time to serve on our board and to help us develop specific, intentional actions around diversity and inclusion."

High Point x Design (HPxD) was founded in 2020, born out of like-minded showroom owners and local industry entrepreneurs meeting amid the cancellation of that spring's High Point Market to chart a path toward opening the town's showrooms in a more consistent manner. Initially a consortium of fewer than two dozen businesses, the organization's membership swelled to more than 50 companies in February, when it merged with the High Point Showroom Association, which had been hosting its own events to drive showroom traffic off-Market.

The relationship between HPxD and Cloud began at a panel event earlier this summer. The designer, who started her business in 2017 after a four-year stint at Sherwin-Williams, has always made community engagement a priority in her work. "I want to branch off with my business so that my hands and my talents are actually contributing to the community in a way that is tangible," she told Business of Home in an interview for the 50 States Project series in 2020. "During COVID, I've been dreaming and journaling about what that looks like, and how I can turn that into actually bringing awareness of these issues in the Black community, not just through talking about it on social media, although I've been very active in that regard, but also: What do we do with these hands of ours?"

High Point x Design's board of directors has been taking shape in recent months, with the appointment of fellow North Carolina designer Don Ricardo Massenburg, who joins as design chair to focus on establishing deeper partnerships with the interior design community. But the announcement of Cloud's appointment also comes just weeks after the High Point Market Authority and Esteem Media faced a public outcry upon revealing a campaign highlighting 10 design influencers, all of whom were white. Though High Point x Design and the HPMA are separate organizations, their audience of designers and showrooms is largely intertwined. "It was really unfortunate, but it also proves what we're talking about in our organization: we need more communication, more intention and more action to effect change," says Van Dessel.

BOH spoke with Cloud to discuss her vision for the role, why she accepted it and how the design industry can create meaningful, lasting change when it comes to race and inclusivity.

How did this role come to be?
I attended a panel discussion in June hosted by High Point x Design at the Universal Furniture showroom. At the end, they asked for questions, and I raised my hand and said, "I'm very excited about all that you're doing. But I have to ask: What intentionality is being placed in this organization in reference to diversity and inclusion? What efforts are you guys putting in place to ensure that is addressed and valued as you continue to grow?"

There were various responses and a little dialogue. Afterwards, Kathy Devereux, the communications chair of HPxD, approached me and expressed her appreciation for me asking about that. We stayed in contact and had a phone call where we got to know each other. I wanted to convey the urgency of bringing diversity and inclusion to the table. It's a priority now for a lot of organizations because it's trendy and cool, and companies are feeling the pressure. I want to ensure that these organizations are genuinely valuing the importance of diversity and inclusion and understand that diversity means creating a more interesting tapestry for your organization. It is not a trend.

Creating meaningful change, not just checking a box.
Absolutely. And honestly, when Kathy and I had the conversation, we weren't even talking about a role. She just seemed intrigued by how passionate I was, and she introduced me to HPxD chairman Tom Van Dessel. We spoke several times, and he conveyed to me that he's a part of a board full of changemakers. Outside of the fact that they want to open these showrooms and create opportunities for designers, [he knows] change has to be deeper than that, which aligned with what I was trying to convey from the start. I'm all about community and [asking] how we can reflect the community in what we're doing. If you go outside of the downtown area in High Point, you're going to see Black people, so it's very hypocritical for us to come to High Point two times a year and ask people of color to hold the doors for us as we mosey in and not welcome their opinions and experiences. I said all of that up top, and Tom was very receptive to my thoughts.

As a result of those conversations, we agreed that there needed to be a board position dedicated to this. But I made it clear that if I was to be a part of this, it is not a project. This isn't a little committee where we're going to get together and check a box. This needs to be an intentional focus, and we have to start from the leadership down. I said, "I need your board to be reflective of the diversity that you want to bring. I need your meetings and the people that you're bringing to the table to be reflective. It has to look like the future. It has to look like where you're going."

I don't know exactly what it's going to look like in the future, but if you know anything about me, you know that there are going to be a lot of hard and difficult conversations, both inside and outside of High Point x Design. My hope is that we will partner with other organizations and hold them to the same fire that we're holding ourselves to. We're going to get very uncomfortable and spend time with people who don't look like us and who don't have the same experiences. I think it is going to be very hard for me to stay within the parameters and boundaries that are probably expected of me, but there's a lot of work to do.

One thing that's interesting about High Point x Design is that the organization is as much about the High Point and North Carolina communities as it is about the design community. How are you thinking about those audiences?
One of the things that really intrigued me when I went to the panel discussion during Market in June was all of the newness that is coming to High Point as a city. There's a lot of new construction: hotels, restaurants, the baseball stadium, and they're going to do a food hall. And the first thing that came to my mind was, "What does this mean for the locals?" I can see where this goes: There's going to be a lot of gentrification, and I foresee a lot of people being overlooked.

My hope is that High Point x Design is going to stretch their arms far and wide beyond the design community, and that there's going to be a presence of change for High Point, North Carolina, and the people that live there. What does that mean? My hope is that we're going to spend some time with school administrators, that we're going to partner with organizations that maybe aren't directly tied to the design community. It's very nuanced, but at the same time, it's very simple once you put community and valuing people at the heart of it. It makes decision-making very easy.


Have the recent conversations about High Point Market and Esteem Media's all-white influencer tour changed how you view this new role?
It's interesting that it happened, because it gave a prime example of what happens when you don't have diverse voices in the room making decisions. That is what happens, and to be honest, I don't think they realize the impact of it. They're going to see the impact of that decision for a while. There's a whole community of influencers and bloggers who are calling for, for lack of a better term, a boycott of High Point Market, because it does not lend toward Black and Brown faces. And it's unfortunate, but that's why these spaces have to be created.

I've been connecting with various friends that play some diversity and inclusion roles in corporate America, but I feel that I'm probably going to be a rebel compared to how these roles typically play out. Because I'm going to give it to you straight with no chaser: We're talking about people's lives and the impact of decision-making. It's crucial, and it's so much bigger than design. That's what I want to keep telling people. And if you don't think so, then you should not be in the room where decisions are made.

Were those discussions disheartening in terms of the change that diversity and inclusion officers are empowered to make?
Completely. Based on conversations that I've had, it just comes up as a check box, like, "Let's throw this in the policy, let's tweak this to meet a quota or to appear more liberal." But you know when it's right and you know when it's wrong. Or at least, marginalized people do. I know when I feel welcome in a space. I know when my perspective is heard. My hope is that I can create a different feeling and actually cause change. Worst-case scenario, I'm going to rock the boat.

What are the basic changes that High Point as a town and design destination needs to make so that it does feel safe and welcoming to all?
There's this word that I use pretty consistently, and that's intentionality. You have to put effort into inclusion and not assume that things are just going to happen organically. In this world, diversity and inclusion do not happen like that. If High Point Market does not root itself in being intentional, creating representation and diversity, and ensuring that the decision-makers are a diverse group, change won't happen.

For example, if you host an event and you're only marketing to the typical white designer, as a Black woman, I don't want to go, because all of your marketing and content is geared toward that particular profile. That's why it's not going to happen organically: it's because we don't feel welcome. So you have to intentionally market toward those people. You have to have conversations with them. And I'm not just referring to race, color or creed; experiences matter, as well. We have designers from the full spectrum. You have designers showing up to Market for the very first time, and you have some coming to every single Market. How do you speak to every person and every experience? Some would probably say, "It's so hard," or "We can't." Yes, you can. That's what you signed up for. Put the effort in, get the team together and make sure it happens. There's no excuse.

What is your experience having those kinds of conversations in the design world?
I'll give you a specific example. After I asked that question back at Market, there were two individuals that approached me. One was Kathy, and the other one was an individual from the furniture manufacturing world. Between calls and emails when we connected after the panel, I felt as though there was a lot of fluff in their responses, a lot of, "Yeah, yeah. I hear you. That was great. We really need to do this." And after those conversations, I was given many promises about following up, but I have not heard from that individual in the four to six months since. That is typical, and it is a practical example of what I mean when I say taking advantage of people, people of color, to utilize what they can produce for your advantage.


There's something so complicated about the position that puts you in; not to speak for you, but I'd imagine you want them to take action after your conversation and follow through, but getting there also seems to require a lot of one-sided giving.
It's very transactional; even if I can step outside of the discriminatory pieces of it, that's the world we live in. It's very transactional. You do something for me, I do something for you. It's, "Let me get as much as I can out of this situation to move my chip forward." Look, at the end of the day, I'm not responsible for what you do with what I share with you. I can only own my portion of it. In the end, I know it's never going to be fruitful when it's not rooted in truth and honesty.

I'm so excited to follow what you do in this role.
Thank you. I don't really know exactly what this role is going to morph into. What I can say is that I promise to ask very hard, challenging questions and consistently advocate for marginalized people within the design community, and advocate for the people of High Point as best as I can. I think I have to ensure that I am creating a pathway for individuals that have not gotten a chance or a seat at the table, to ensure that there is space and a comfy seat for them at that table, that they have space to advance within the industry, and that they are compensated equitably for their expertise and skills. I want to put it all in writing, and then partner with other organizations and encourage them to do the same.

I genuinely believe we're better together. I don't want to be in a space that looks just like me. My music, my friend circle: they're diverse, because I feel like I'm lacking if I don't stretch myself far and wide across new experiences. Besides the fact that I'm a minority, that's where the passion comes from: There really is an appreciation for unity paired with my advocacy for the betterment of my people. It's a topic that is hard for some, and I get why it's hard, but the worst thing you could do is not have the conversation. Let's figure this thing out, and let's figure it out together. And it may be hard, but doesn't it feel good when you go through something hard and you get through it? It's the best feeling ever. I don't think we'll ever get [all the way] there, but we're going to work our butts off to make sure that we care about people along the way.

Homepage photo: Marie Cloud | Courtesy of High Point x Design

Read the original here:
Marie Cloud is High Point x Design's first diversity and inclusion officer. Here's what she wants to do - Business of Home

Aviation-themed phishing campaign pushed off-the-shelf RATs into inboxes for 5 years – The Register

A phishing campaign that mostly targeted the global aviation industry may be connected to Nigeria, according to Cisco Talos.

The malicious campaigns centred around phishing emails linking to "off-the-shelf malware" being sent to people around the world, even those with a marginal interest in commercial aviation.

Although Talos couldn't confirm the threat actor behind the campaign was actually based in Nigeria or associated with the Nigerian state, Cisco's infosec arm was able to say with confidence that the campaign had been running for at least three years.

It compiled a list of IPs used by the threat actor's domain and concluded that 73 per cent of those were based in the African nation, "further strengthening the theory that the actor in question is based in Nigeria."
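That 73 per cent figure is, mechanically, a frequency count over geolocated IP addresses. A minimal sketch of the tally, assuming the IPs have already been resolved to country codes (the sample data below is hypothetical, not Talos's actual list; a real analysis would use a geolocation database):

```python
from collections import Counter

def country_share(ip_to_country):
    """Return each country's share of the observed IPs, as a fraction."""
    counts = Counter(ip_to_country.values())
    total = sum(counts.values())
    return {country: n / total for country, n in counts.items()}

# Hypothetical, pre-geolocated sample -- not Talos's actual data.
observed = {
    "203.0.113.5": "NG",
    "203.0.113.9": "NG",
    "203.0.113.14": "NG",
    "198.51.100.7": "US",
}
shares = country_share(observed)
print(f"NG share: {shares['NG']:.0%}")  # NG share: 75%
```

The same approach scales to thousands of IPs; the only real work is the geolocation lookup itself.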

"Our research shows that actors that perform smaller attacks can keep doing them for a long period of time under the radar," said Talos, adding that these seemingly small-fry attacks "can lead to major incidents at large organizations."

The firm added: "These are the actors that feed the underground market of credentials and cookies, which can then be used by larger groups."

Building on previous research from Microsoft, Cisco Talos dived into malicious emails that contained a link purporting to lead to a PDF file containing aviation-related information.

One example was themed to appear as if it had been sent by aviation safety authorities in Dubai. The PDF link, however, took unwary readers to a site hosting a .vbs script, hosted on Google Drive.

An example of the aviation-themed phishing campaign seen by Cisco Talos

Other lures included mentions of Bombardier, the well-known business jet manufacturer, and "Trip Itinerary Details". Those were associated with the domain kimjoy[.]ddns[.]net as well as akconsult[.]linkpc[.]net.

"Analysis of the activity associated with the domain reveals that this actor has used several RATs and that, since August 2018, there are samples communicating with this domain with names that indicate the adversary wanted to target the aviation industry," said Talos.

The malicious script eventually downloaded the CyberGate remote-access trojan (RAT) onto the victim's machine. CyberGate, aka Rebhip, allows complete control of the target device, including remote shell interaction and keylogging functionality. It is also freely available, as one Briton allegedly discovered, not that it did him much good.

Another domain Talos associated with the malware campaign was delivering the AsyncRAT trojan. VMware's security unit Carbon Black defines it as a run-of-the-mill RAT, saying it "can perform many harmful activities such as disabling Windows Defender".

Cisco Talos concluded: "In this case, we have shown that what seemed like a simple campaign is, in fact, a continuous operation that has been active for three years, targeting an entire industry with off-the-shelf malware disguised with different crypters."

Even the least complicated of threats can still be meaningful if you're not careful enough.

Excerpt from:
Aviation-themed phishing campaign pushed off-the-shelf RATs into inboxes for 5 years - The Register

Disaster Recovery in the Cloud | TV Tech – TV Technology

Every major broadcaster acknowledges that they have to consider disaster recovery. Apart from meeting audience expectations, if a channel is off air, it cannot transmit commercials. Without commercials, it has no income. Getting the station back on air (and broadcasting commercials) is clearly vital.

But, given today's very reliable technology, a large investment in replicating the primary playout center could be seen as wasted money: a lot of hardware (and real estate) that will never go to air.

The question, then, is how to ensure business continuity through a disaster recovery site that gets the channel on air in the shortest possible time, that can be operated from anywhere, and involves the least amount of engineering support to launch. And the answer that broadcasters are increasingly turning to is the cloud.

On Demand

Start-up costs aside, it can be extremely cost-effective to keep a standby system in the cloud: ready to start when you need it; dormant when you do not. For many, cloud-based disaster recovery serves as a good, practical first experience of media in the cloud.

Whichever provider you choose, what you buy from them is access to effectively infinite amounts of processing power and storage space. We have worked extensively with AWS and other cloud suppliers, but AWS also offers some media-specific services (through their acquisition of Elemental) like media processing, transcoding and live streaming.

It is important to bear in mind that moving to the cloud is not an all or nothing, irreversible decision. The very nature of the cloud means it is simple to flex the amount of processing you put there, so if you should decide to back away it is simple to do so.

The cloud is an element within the IP transitionyou decide when and how to make that transition, and when and how much to use the cloud. For many broadcasters, disaster recovery is an excellent way to try out cloud services.

Keeping it Familiar

With today's software-defined architectures, systems should perform identically whether they are in dedicated computers in the machine room, virtualized in the corporate data center, or in the cloud. Consistent operation is especially important in disaster recovery deployments; if disaster strikes, the last thing you want is for operators to scrabble around trying to make sense of an unfamiliar system.

That does not mean that the primary system and the disaster recovery site must be identical. But with a well-designed cloud solution, you should be able to emulate the same user interfaces. This makes it easy for the operators to switch back and forth between the two different environments.

It also means you can set resilience and availability by channel. You might want your premium channels to switch over to disaster recovery in seconds, for example, while some of your secondary channels can be left for a while. That is a business decision.

Content is Still King

One of the common misconceptions about cloud playout is that synchronizing content between premises and the cloud demands a lot of bandwidth and potentially high costs. This need not be the case.

Faced with the imminent obsolescence of video tape libraries, and wary of the eternal cost of maintaining an LTO data tape library, many broadcasters are looking to archive in the cloud. You load the content once, confident that all the technology migration and maintenance will be carried out, flawlessly, by someone else.

You may have collaborative post-production by hosting content and decision lists in the cloud. Content (programs and commercials) can be delivered direct to the cloud.

Playout, archiving, post and traffic may be managed as separate departments, but if you combine them content is only delivered to the cloud once. It is then available for playout without the high egress costs, and is securely stored at significant cost savings.

Outsourcing Security

Broadcasters have traditionally sought very high availability from the technology delivering premium channels. Five nines used to be regarded as the gold standard: 99.999% uptime. Even that, though, is equivalent to about 5 minutes of dead air a year.
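The arithmetic behind those availability figures is worth making explicit: annual downtime is one minus the availability, multiplied by the minutes in a year. A quick sketch:

```python
MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600

def downtime_minutes(availability_pct):
    """Annual minutes of dead air implied by an availability percentage."""
    return (1 - availability_pct / 100) * MINUTES_PER_YEAR

# Five nines: about 5.26 minutes of dead air a year.
print(f"{downtime_minutes(99.999):.2f} min/year")
# Nine nines: well under a second a year.
print(f"{downtime_minutes(99.9999999):.4f} min/year")
```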

AWS offers its broadcast clients unimagined availability, up to maybe nine nines: effectively zero downtime. And it achieves that without any maintenance effort on your part: no disk replacement, no routine cleaning of air conditioning, no continual updates of operating systems and virus protection.

If the disaster is that your building has to be evacuated because of detected cases of a communicable disease, playout operators can work from home with exactly the same user interface and functionality as if they were sitting in the MCR.

If you want hot standby (complete parallel running in the cloud for almost instantaneous failover), then the technology allows it, if you choose to pay for the processing time. Alternatively, pick your own level of cold or warm standby, confident that, even from cold, loading and booting the channel playout instances can be accomplished in just a couple of minutes.
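The choice between hot, warm, and cold standby comes down to how many compute-hours you are willing to pay for. A back-of-the-envelope comparison, using an entirely hypothetical $2/hour per channel instance (real cloud pricing varies by instance type, region, and media services used):

```python
HOURS_PER_YEAR = 365 * 24  # 8,760

def annual_compute_cost(hourly_rate, hours_running):
    """Yearly compute spend for a standby channel, excluding storage and egress."""
    return hourly_rate * hours_running

rate = 2.00  # hypothetical $/hour for one playout channel instance
hot = annual_compute_cost(rate, HOURS_PER_YEAR)  # parallel running, 24/7
cold = annual_compute_cost(rate, 12)             # e.g. a one-hour rehearsal per month
print(f"hot standby:  ${hot:,.0f}/year")
print(f"cold standby: ${cold:,.0f}/year")
```

Content storage is paid for in either model, but the compute gap is what makes cold standby so attractive when a couple of minutes of boot time is acceptable.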

Cyberattacks are becoming an all-too-familiar headline. Other industries have seen crippling incursions and software systems held to ransom. Developing a business continuity strategy that protects from such attacks is paramount.

Again, the cloud is the right solution. A good cloud provider will deliver better data security than you can do yourself. AWS has thousands of staff with the word security on their business cards. While no organization can hope to be perfect, a good cloud provider will give you your best shot at complete protection, because that is their business. The alternative is to build your own data security team: an unnecessary overhead and a challenge to develop, recruit and manage.

AWS is even used by the U.S. Intelligence Community, which suggests that it is probably working.

Doing it Live

One comment that is often heard is that you cannot run live channels or live content from the cloud. This is simply not true. At Imagine, we have implemented primary playout systems that feature live content.

In the United States, we recently equipped a SMPTE ST 2110 operations center and cloud-hosted disaster recovery channels for Sinclair's regional sports networks (RSN: Bally Sports Regional Networks). For Sinclair's Tennis Channel, we provided core infrastructure for a large-scale ST 2110 on-premises broadcast center and a cloud-based live production center for pop-up live events.

The biggest requirement for sports television is that live should be absolutely live: no one wants to hear their neighbors cheer and wait to find out why. Minimum latency is also critical for the big money business of sports books.

Sinclair spun up live channels around the 2021 Miami Open tennis tournament in March, and again for the French Open from Roland Garros. All the playout, including the unpredictable live interventions associated with fitting commercial breaks into tennis matches, was hosted in the cloud, with operators sitting wherever was convenient and safe for them.

As consumer preferences move from broadcast to streaming, what happens after the master control switcher becomes ever more complicated in preparing the output for all the different platforms. That level of signal processing is better done in the cloud, especially with transcoding-as-a-service providing high-performance, affordable delivery.

Stepping-Stone to Next-Gen Playout

Disaster recovery is fundamentally a business issue, a strategic decision. Using the cloud can deliver the best total cost of ownership, but it can also be a valuable stepping-stone in the broadcaster's transition to IP connectivity and outsourced hosting.

The technical and operational teams gain experience and confidence in the cloud as a suitable broadcast platform. Routine rehearsals of business continuity mean that operators will learn how similar the performance of the cloud and on-premises systems is, and how seamlessly the user interface switches from one to the other.

This experience gives confidence to move on towards a completely cloud future. Pop-up channels can be created in minutes not months, so it is easy to service sports events or music festivals, while only paying for processor time when you need it.

The cloud is infinitely scalable, so you can add channels or services, support new delivery platforms, and test market 4K and HDR. The direct linkage between the cost of delivery and the revenue won makes for easier business management.

As the legacy playout network reaches the end of its life, broadcasters will know what the cloud can do, and will have built up solid information on the costs of operating in the cloud. That knowledge will be invaluable in evaluating proposals for the next generation of playout.

View original post here:
Disaster Recovery in the Cloud | TV Tech - TV Technology

Grafana Labs and Alibaba Cloud Bring Pervasive Visualization and Dashboarding to Asia-Pacific Region – GlobeNewswire

NEW YORK, Sept. 14, 2021 (GLOBE NEWSWIRE) -- Grafana Labs, the company behind the open source project Grafana, the world's most ubiquitous open and composable operational dashboards, today announced a new strategic partnership with Alibaba Cloud, the digital technology and intelligence backbone of Alibaba Group. Through the partnership, the companies are introducing Grafana on Alibaba Cloud, a fully managed data visualization service that enables customers to instantly query and visualize operational metrics from various data sources.

"Our goal at Grafana Labs is to make sure Grafana's dashboarding capabilities are available however it makes the most sense for our users, whether that's on their own infrastructure or in a public cloud platform like Alibaba Cloud," said Raj Dutt, Co-founder and CEO at Grafana Labs. "Partnering with public cloud platforms like Alibaba Cloud further cements Grafana as the best-in-class solution for open source visualizations, and gives Alibaba's millions of cloud users instant access to dashboarding capabilities in a way that is uniquely integrated with Alibaba Cloud and easier to get started with than self-hosting, while opening the door to a brand-new market for Grafana Labs."

"We hope that our cooperation with Grafana Labs can let Alibaba Cloud users worldwide leverage Grafana products more conveniently and efficiently, so that they can focus on business efficiency by reducing the need for strenuous operations and maintenance activities," said Jiangwei Jiang, Partner of Alibaba Group, Head of Alibaba Cloud Intelligence Infrastructure Products. "While putting more effort into open source fields, Alibaba Cloud will continue to cooperate with more open source vendors to launch complete cloud native products and solutions, providing new momentum for enterprise digital innovation."

To learn more about Grafana on Alibaba Cloud, visit

About Grafana Labs

Grafana Labs provides an open and composable monitoring and observability stack built around Grafana, the leading open source technology for dashboards and visualization. There are over 1,500 Grafana Labs customers, including Bloomberg, JP Morgan Chase, eBay, PayPal, and Sony, and more than 750,000 active installations of Grafana around the globe. Grafana Labs helps companies manage their observability strategies with full-stack offerings that can be run fully managed with Grafana Cloud, or self-managed with Grafana Enterprise Stack, both featuring extensive enterprise data source plugins, dashboard management, alerting, reporting and security, scalable metrics (Prometheus & Graphite), logs (Grafana Loki) and tracing (Grafana Tempo). Grafana Labs is backed by leading investors Lightspeed Venture Partners, Lead Edge Capital, GIC, Sequoia Capital, and Coatue. Follow Grafana on Twitter at @grafana or

Media Contact: Dan Jensen, PR for Grafana

Grafana Labs and Alibaba Cloud Bring Pervasive Visualization and Dashboarding to Asia-Pacific Region - GlobeNewswire

Setting up and troubleshooting multiple thin client monitors – TechTarget

The use of two or more monitors is the norm for many business workstations, and users expect excellent performance when accessing virtual resources on these monitors, regardless of the endpoint.

Unlike a traditional desktop, users can't resolve issues with their thin client monitors and display settings on their own locally because thin clients do not host the OS. Therefore, IT administrators must deliver these resources to the end users' devices and ensure that they have the proper configuration to handle multiple monitors.

There is no universal method to configure thin clients with every virtual desktop management and delivery platform. Still, Citrix Virtual Apps and Desktops (CVAD) provides a reasonable example that IT administrators can use to learn the general process for enabling multiple monitors on a thin client endpoint.

CVAD user sessions, also known as HDX sessions, present resources to users based on a series of bitmaps on the screen, and organizations often rely on thin clients to provide access to these resources at a low hardware and management cost.

When a user launches or modifies a virtual application or desktop within an HDX session, the server or cloud hosting the virtual resources modifies the bitmaps and sends the updated info to the end-user device. Whether a user is accessing these resources via a Windows, Mac, iPad, Chromebook or thin client device, the session processes are the same. However, once multiple monitors are in the mix, IT has to take administrative action to ensure users have a quality experience. Several challenges exist with multiple monitor deployments on thin clients, and IT admins will need to troubleshoot certain issues.

If any use cases require multiple monitors or extremely high resolution, IT will have to carefully review the model specifications of any thin clients the organization deploys. Many environments that benefit from thin clients, such as call centers, hot desks and general business workers, also benefit from dual monitors with about 1920x1080 resolution.

Citrix Virtual Apps and Desktops can support up to eight monitors, but most thin client devices can only support single or dual monitors. In addition, screen resolution support varies from thin client to thin client. Thin client hardware capabilities are typically far more limited than those of full Windows or macOS devices. To address this, thin client devices can run a dedicated OS, such as Igel OS. This stripped-down OS can run on a physical device or from a UD Pocket USB drive, and can support eight monitors and 4K resolution. Long gone are the days of Video Graphics Array displays.

Where dual or multiple monitors are in use, it is best for IT admins to deploy monitors based on the same size and resolution. However, this is not always possible, and issues may arise because of these discrepancies.

In many cases, multiple monitors will function properly by default, but issues can still arise. Problems with session presentation on multiple monitors focus on two main areas: the CVAD setup and the thin client configuration. Most often, issues will occur on the local thin client device, but the CVAD configuration may require modification.

The most common issue is that users can only see CVAD sessions on a single monitor. If an individual user reports this issue, it is most likely related to the end-user device. This may be a hardware issue or thin client configuration issue. For example, on the HP t430 thin client device, dual displays require HDMI and DisplayPort connections; a display connected via just the HDMI cannot serve as both connections for dual display.

IT admins must allocate sufficient memory for the CVAD environment, whether based on a server or workstation. This is especially important for 4K monitors. In addition, users running GPU-enabled Virtual Delivery Agents need sufficient GPU capabilities as well.

The graphics policy settings are key items within Citrix policies. In particular, the display memory limit setting may affect screen resolution for multiple monitors. The default setting is 65,536 KB, which may not be sufficient; numerous high-resolution HDX sessions require more memory than this.
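A rough way to sanity-check that limit is to estimate the frame buffer a session needs: pixels times bytes per pixel, summed across monitors. The sketch below assumes 32-bit colour (4 bytes per pixel); Citrix's own sizing guidance remains the authoritative reference:

```python
def display_memory_kb(monitors, bytes_per_pixel=4):
    """Estimate frame-buffer KB for a list of (width, height) monitors.

    Assumes 32-bit colour by default; use bytes_per_pixel=2 for 16-bit.
    """
    total_bytes = sum(w * h * bytes_per_pixel for w, h in monitors)
    return total_bytes / 1024

dual_hd = display_memory_kb([(1920, 1080)] * 2)
dual_4k = display_memory_kb([(3840, 2160)] * 2)
print(f"dual 1080p: {dual_hd:,.0f} KB")  # 16,200 KB -- comfortably under 65,536
print(f"dual 4K:    {dual_4k:,.0f} KB")  # 64,800 KB -- brushing the default limit
```

By this estimate, dual 1080p fits easily within the 65,536 KB default, while dual 4K nearly exhausts it, which is why high-resolution, multi-monitor sessions often need the limit raised.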

HDX session presentation on a single monitor may also stem from thin client configuration issues. If the thin client is not in multimonitor mode, a virtual desktop admin can remedy this with a configuration setting. For example, on Igel devices, admins can go to the window settings and ensure that the multimonitor configuration is not set to restrict full-screen sessions to one monitor.

Where the size or resolution of the monitors differs, it is possible that the alignment of screens does not display correctly. Most users will want their CVAD session to be aligned horizontally along the top, but they may prefer center or bottom alignment. Users can adjust monitor alignment appearance on their own via the on-device display settings.

The local settings need to be adjusted if the user sees one or more screens rotated improperly -- upside down or at a 90-degree angle. Using Igel devices as an example, admins can select screen rotation on the client settings and rotate right or left.

Read this article:
Setting up and troubleshooting multiple thin client monitors - TechTarget

Taking The Long View On Open Computing – The Next Platform

COMMISSIONED Software changes like the weather, and hardware changes like the landscape; each affects the other over geologic timescales to create a new climate. And this largely explains why it has taken so long for open-source computing to spread its tentacles into the hardware world.

With software, all you need is a couple of techies and some space on GitHub and you can change the world with a few hundred million mouse clicks. Hardware, on the other hand, is capital intensive: you have to buy parts and secure manufacturing for it. While it is easy enough to open up the design specs for any piece of hardware, it is not necessarily easy to get such hardware specs adopted by a large enough group of people for it to be manufactured at scale.

However, from slow beginnings, open computing has been steadily adopted by the hyperscalers and cloud builders. And now it is beginning to trickle down to smaller organizations.

In a world where hardware costs must be curtailed and compute, network, and storage efficiency is ever more important, it is reasonable to expect that sharing hardware designs and pooling manufacturing resources at a scale that makes economic sense but does not require hyperscale will happen. We believe, therefore, that open computing has already brought dramatic changes to the IT sector, and that these will only increase over time.

The term open computing is often used interchangeably with the Open Compute Project, created in 2011 by Facebook in conjunction with Intel, Rackspace Hosting and Goldman Sachs. However, OCP is just one of four open-source computing initiatives in the market today. Let's see how they all got started.

More than a decade ago, Facebook, growing by leaps and bounds, bought much of its server and storage equipment from Dell, and then eventually Dell and Facebook started to customize equipment for very specific workloads. By 2009, Facebook decided that the only way to improve IT efficiency was to design its own gear and the datacenters that house it. In January 2014, Microsoft joined the OCP, opening up its Open Cloud Server designs and creating a second track of hardware to complement the Open Rack designs from Facebook. Today, OCP has more than 250 members, with around 5,000 engineers working on projects and another 16,000 participants who are members of the community and who often are implementing its technology.

Six months after Facebook launched the OCP, the Open Data Center Committee, formerly known as Project Scorpio, was created by Baidu, Alibaba, and Tencent to come up with shared rack-scale infrastructure designs. ODCC opened up its designs in 2014 in conjunction with Intel. (Baidu and Alibaba, the two hyperscalers based in China, are members of both OCP and ODCC, and, significantly, buy a lot of their equipment from Inspur.)

In 2013, IBM got together with Google to form what would become the OpenPower Foundation, which sought to spur innovation in Power-based servers through open hardware designs and open systems software that runs on them. (Inspur also generates a significant portion of its server revenues, which are growing by leaps and bounds, from Power-based machinery.)

And finally, there is the Open19 Foundation, started by LinkedIn, Hewlett Packard Enterprise, and VaporIO to create a version of a standard, open rack that is more like the standard 19-inch racks that large enterprises are used to in their datacenters and less like the custom racks that have been used by Facebook, Microsoft, Baidu, Alibaba, and Tencent. Starting this year, and in the wake of LinkedIn being bought by Microsoft, the Linux Foundation is now hosting the Open19 effort, and the datacenter operator Equinix and server and switch vendor Cisco Systems are now on its leadership committee.

Inspur is a member of all four of these open computing projects and is among the largest suppliers of open computing equipment in the world, with about 30 percent of its systems revenue based on open computing designs. Given this, we had a chat with Alan Chang, vice president of technical operations, who, prior to joining Inspur, worked at both Wistron and Quanta selling and defining their open computing-inspired rack solutions.

"It depends on how broadly you define open computing, but I would say that somewhere between 25 percent and 30 percent of the server market today could be using at least some open computing standards. It is not in the hundreds of large customers yet, but in the tens, and that is the barrier that Inspur wants to break through with open computing," Chang tells The Next Platform. He points out that two top-tier hyperscalers consumed somewhere around two million servers last year against a total market of 11.9 million machines. "With just those two companies alone, you are at 18.5 percent, which sounds like a very large number, but it is concentrated in just two players."

Tens of customers may not seem like a lot, but the server market changes at a glacial pace and it is very hard to make big changes in hardware. For starters, customers have long-standing buying preferences, and outside of the hyperscalers and cloud builders, many large enterprises and service providers are dependent on the baseboard management controllers, or BMCs, that handle the lights-out remote management of their server infrastructure. The BMC is a control point, just like the proprietary BIOS microcode inside of servers was in days gone by.

But this is going to change, says Chang. And with that change, those who adopt the system management styles of the hyperscalers and cloud builders will reap the benefits as they force a new kind of management overlay onto systems, and in particular onto the open computing systems they install.

"The BIOS and the BMC are programmed in a kind of Assembly language, and only the big OEMs have the skills and the experience to write that code," explains Chang. "Even if a company like Facebook wants to help, they don't have the Assembly language skills. But such companies are looking for a different way to create the BIOS and the BMC, something similar to the way they create Java or Python programs, and these companies have a lot of Java and Python programmers. And this is where we see OpenBMC and Redfish all starting to evolve and come together, all based on open-source software, to replace the heart of the hardware."

To put it bluntly, for open computing to take off, the management of individual servers has to be as good as the BMCs on OEM machinery because in a lot of cases in the enterprise, one server runs one workload, and they are not scaled out with replication or workload balancing to avoid downtime. This is what makes those BMCs so critical in the enterprise. Enterprises have a lot of pet servers running pet applications, not interchangeable massive herds of cattle and scale-out, barn-sized applications. And even large enterprises are, at best, a hybrid of these. But if enough of them gang together their scale, then they can make a virtual hyperscaler.

That, in essence, is what all of the open computing projects have been trying to do: find that next bump of scale. Amazon Web Services and Google do a lot of their own design work and get the machines built by original design manufacturers, or ODMs. Quanta, Foxconn, Wistron, and Inventec are the big ones, of course. Microsoft and Facebook design their own and then donate to OCP and go to the ODMs for manufacturing. Baidu, Alibaba, and Tencent work together through ODCC and co-design with ODMs and OEMs, and increasingly rely on Inspur for design and manufacturing. And frankly, there are only a few companies in the world that can build at the scale and at the cost that the hyperscalers and large cloud builders need.

Trying to scale down is one issue, but so is the speed of opening up designs.

"When Facebook, for instance, has a design for a server or storage, and they open it up, they do it so late," says Chang. "Everyone wants a jump on the latest and greatest technology, and sometimes they might like 80 percent of the design and need to change 20 percent of it. So in the interest of time, companies who want to adopt that design have to figure out if they can take the engineering change or just suck it up and use the Facebook design. And as often happens in the IT business, if they do the engineering change and go into production, then there is a chance that something better will come out by the time they get their version to market. So what people are looking for OCP and ODCC and the other open computing projects to do is to provide guidance, and get certifications for independent software vendors like SAP, Oracle, Microsoft, and VMware quickly. All of the time gaps have to close in some way."

The next wave of open computing adoption will come from smaller service providers: various telcos, Uber, Apple, Dropbox, and companies of this scale. Their infrastructure is getting more expensive, and they are at the place that Facebook was at a decade ago, when the social network giant launched the OCP effort to try to break the 19-inch infrastructure rack and so drive up efficiencies, drive down costs, and create a new supply chain.

The growth in open computing has been strong and steady, thanks in large part to the heavy buying by Facebook and Microsoft, but the market is larger than that and getting larger.

As part of the tenth-year anniversary celebration for the OCP, Inspur worked with market researcher Omdia to case the open computing market, and recently put out a report, which you can get a copy of here. Here are the interesting bits from the study. The first is a table showing the hardware spending by open computing project:

The OCP designs accounted for around $11 billion in server spending (presumably at the ODM level) in 2020, while the ODCC designs accounted for around $3 billion. Open19, being just the racks and a fledgling project by comparison, had relatively small revenues. Omdia did not talk about OpenPower iron in its study, but it might have been on the same scale a few years back, and higher if Google or Inspur is doing some custom OpenPower machinery on their own. Rackspace had an OpenPower motherboard in an Open Compute chassis, for instance.

Add it all up over time, and open computing is a bigger and bigger portion of server spending, and it is reasonable to assume that some storage and networking will be based on open computing designs, following a curve much like the one below for server shipments:

Back in 2016, open computing platforms accounted for a mere seven percent of worldwide server shipments. But Omdia projects that open computing platforms will account for 40 percent by 2025, through steady growth after a step-function increase in 2020. As we have always said, recessions don't cause technology transitions, but they do accelerate them. We would not be surprised if those magenta bars get taller faster than the Omdia projection, particularly if service providers start merging and capacity needs skyrocket in a world that stays stuck in a pandemic for an extended amount of time.

Commissioned by Inspur

Original post:
Taking The Long View On Open Computing - The Next Platform