Category Archives: Cloud Hosting

CEO of Weather Network parent Pelmorex departs to lead Google’s Canadian cloud business – The Globe and Mail

People attending Collision 2022 at the Enercare Centre, chat inside the Google Cloud booth, Toronto June 22, 2022. The event is the largest international in-person gathering in Toronto in more than two years.Eduardo Lima/The Globe and Mail

The CEO of the company that owns The Weather Network is heading to the cloud: Google Cloud, to be precise.

Sam Sebastian said on LinkedIn Tuesday that he is leaving Pelmorex Corp., the Oakville, Ont.-based weather information company, to rejoin Google as vice-president and country manager of its cloud business in Canada. He previously spent 12 years at Google, including the last three as head of Google Canada, before leaving for Pelmorex in 2017.

"The move to the cloud was a big part of our transformation at Pelmorex, so I'm excited to partner with the incredible Google Cloud team to help other Canadian businesses and organizations digitally transform, move faster, be more secure and grow," Mr. Sebastian stated.

He thanked Pelmorex controlling shareholder and executive chairman Pierre Morrissette, who "entrusted me to lead the company he founded," calling him "a great friend and mentor." Google said Mr. Sebastian was declining interviews until he starts Nov. 7.

Mr. Sebastian will focus on the cloud sales business strategy in Canada, where Google is one of three leading providers of cloud-hosting services to businesses and governments that have increasingly trusted their data to third parties and their vast server-laden warehouses. It competes here with Microsoft and Amazon as well as Snowflake and others. Google's Canadian corporate clients include Canadian National Railway, Bell Canada and Telus.

Mr. Sebastian, 51, grew up in Columbus, Ohio, and worked in online classified advertising before joining Google in 2006. He worked to ingrain himself in his new community after arriving in Canada in 2014, joining the board of Tennis Canada and becoming a director of Kitchener, Ont., startup Bridgit.

He had originally planned to spend a few years here before taking another post with Google. Those plans were blown off course when he succeeded Mr. Morrissette as chief executive officer of Pelmorex.

The 33-year-old company is best known for its Weather Network station and its digital properties, including its smartphone apps. Its internet and smartphone properties account for most of its $100-million-plus in revenue. Pelmorex also has operations in Britain, other parts of Europe and India.

It made three acquisitions during Mr. Sebastian's tenure, buying weather information company Otempo.pt in Portugal in 2018 and a majority of Weather Source, a provider of weather data products in the United States, in 2019. Pelmorex also bought Addictive Mobility, a Toronto mobile data management and automated media-buying platform, in 2017.


The Global Cloud Computing Market size is expected to reach $1143.2 Billion by 2028, rising at a market growth of 15.0% CAGR during the forecast…

ReportLinker

Cloud computing helps in running business operations quickly and effectively in response to changing market conditions. It has opened up previously unimaginable opportunities to develop a very engaging consumer experience.

New York, Sept. 29, 2022 (GLOBE NEWSWIRE) -- Reportlinker.com announces the release of the report "Global Cloud Computing Market Size, Share & Industry Trends Analysis Report By Service Type, By Deployment, By Enterprise Size, By End-use, By Regional Outlook and Forecast, 2022 - 2028" - https://www.reportlinker.com/p06321915/?utm_source=GNW With the help of cloud computing, people and businesses have changed their behavior, and multiple business lines are now able to get things done by skirting IT regulations.

Fundamentally, corporate spending, digital business decisions, and vendor & technology selection are all being impacted by cloud developments. Emerging technologies like artificial intelligence (AI) and ML (machine learning) make it possible for businesses to use AI capabilities, which promotes cloud expansion. Rapid digitalization is forcing organizations to change their application and infrastructure landscapes in order to improve cost effectiveness along with business agility.

By integrating cloud solutions and services, businesses may support their new core business operations, move corporate workloads to a cloud platform, and lower network latency. Data security and privacy are organizations' top priorities, which necessitates digital protection for information storage, use, and transmission. Some of the crucial security services provided by the vendors include data encryption, authorization management, cloud integration, access control, communication security, monitoring & auditing, and business continuity services.

Because cloud computing services offer insights into partnership methods, go-to-market plans, alliances, investments, alliance and acquisition strategies, and best operational practices, businesses are embracing them. Cloud computing services also make it easier to track, compare, and evaluate business activities and make sure that business operations are in accordance with client requests.

COVID-19 Impact Analysis

One of the biggest changes in the workplace is anticipated to result from the pandemic. In order to speed up Industry 4.0, the fourth industrial revolution, it is changing how companies use smart technologies like mobile supercomputing, big data, IoT, and artificial intelligence. In Q3 2020, the cloud computing market saw an increase in demand as businesses continued to move workloads from analog to digital formats. To maintain employee well-being along with operational efficiency, many businesses from a number of industries have switched to the work-from-home model, which has raised the demand for Software-as-a-Service (SaaS)-based solutions.

Market Growth Factors

Reduced Infrastructure And Storage Costs & Increased Return On Investments

The upfront setup and ongoing maintenance costs of on-premises data hosting are a matter of concern for businesses. Additional worries for businesses include downtime issues, staff costs, and electricity costs. The adoption of cost-effective strategies to rebuild business models has intensified due to the existing competitive environment and economic conditions of the world. Other variables that would support the acceptance of cloud computing services and, eventually, reduce business costs include shifting business priorities toward digital transformation and accelerating consumer-experience expectations.

More People Are Using Hybrid Cloud Services

Businesses with existing infrastructure are embracing cloud computing services and are prepared to use a hybrid strategy so they can profit from both on-premises and cloud services. Due to certain advantages, such as no upfront infrastructure setup fees and the availability of computing services on demand, SMEs are widely adopting cloud computing services. These elements are supporting the surge in demand for cloud services across organizations. Improved workload management, stronger security & compliance, and effective integration within DevOps teams are all advantages of the hybrid cloud.

Market Restraining Factors

Critical Data Loss And Corporate Operations Being Damaged By Cyberattacks

Cloud computing services assist businesses in increasing operational effectiveness and cutting costs. Additionally, these services have a number of benefits, such as scalability, flexibility, and agility. The data stored in the cloud is still vulnerable to hackers even though the cloud provides a number of advantages and security precautions. The amount of data being produced is growing, and businesses are starting to take more steps toward digital transformation. Enterprise data is exposed to risk from cyberattacks like Spectre, Meltdown, cloud malware injection attacks, account or service hijacking, and man-in-the-cloud attacks.

Service Type Outlook

On the basis of service type, the cloud computing market is classified into infrastructure as a service (IaaS), platform as a service (PaaS), and software as a service (SaaS). The IaaS segment covered a substantial revenue share in the cloud computing market in 2021. The demand for IaaS is increasing due to the rising desire to minimize IT complexity, engage a qualified workforce to manage the IT infrastructures, and lower deployment costs for data centers.

Deployment Outlook

By deployment, the cloud computing market is divided into public, private, and hybrid. The hybrid segment acquired a significant revenue share in the cloud computing market in 2021. The hybrid model has been the most popular implementation methodology across sectors. Many businesses are putting more emphasis on creating hybrid cloud models and clever strategies to help improve business operations, resource consumption, cost efficiency, user experience, and application modernization while maximizing the benefits.

Enterprise Size Outlook

On the basis of enterprise size, the cloud computing market is fragmented into large enterprises, and small & medium enterprises. The small and medium-sized enterprises (SMEs) segment acquired a significant revenue share in the cloud computing market in 2021. The rise is attributable to the expansion of SMEs in developing nations like China and India. The market is also expected to grow as a result of an increase in SMEs' need for cloud computing services to streamline workflow and reduce operating expenses.

End-User Outlook

Based on end-user, the cloud computing market is segmented into BFSI, IT & telecom, retail & consumer goods, manufacturing, energy & utilities, healthcare, media & entertainment, government & public sectors and others. The BFSI segment witnessed the highest revenue share in the cloud computing market in 2021. Moneylenders have adopted digital transformation as a result of a growth in online banking activity in the BFSI industry, with cloud computing playing a crucial part in this strategy.

Regional Outlook

Region-wise, the cloud computing market is analyzed across North America, Europe, Asia Pacific and LAMEA. In 2021, the North America segment accounted for the largest revenue share in the cloud computing market. Companies in the United States prioritize digital transformation, and they are frequently seen as early adopters of cutting-edge technologies like the Internet of Things (IoT), big data analytics, additive manufacturing, connected industries, AI, augmented reality (AR), machine learning (ML), and virtual reality (VR), as well as the newest telecommunications technologies like 4G, 5G, and LTE.

The major strategies followed by the market participants are partnerships. Based on the analysis presented in the Cardinal matrix, Google LLC and Microsoft Corporation are the forerunners in the Cloud Computing Market. Companies such as Amazon.com, Inc., IBM Corporation, and Adobe, Inc. are some of the key innovators in the Cloud Computing Market.

The market research report covers the analysis of key stakeholders of the market. Key companies profiled in the report include Google LLC, IBM Corporation, Oracle Corporation, Amazon.com, Inc., Microsoft Corporation, SAP SE, Salesforce.com, Inc., Adobe, Inc., Alibaba Group Holding Limited, and Workday, Inc.

Recent Strategies Deployed in Cloud Computing Market

Partnership, Collaboration and Agreement:

Jul-2022: Oracle collaborated with Claro, a Mexican telecom group. This collaboration focused on jointly offering Oracle Cloud Infrastructure (OCI) services to the public as well as private sector organizations and enterprises in Colombia. In addition, the collaboration would accelerate the technology modernization of businesses and customers across Latin America. The collaboration with Claro would also accelerate cloud adoption, stimulate economic recovery, and spur competitiveness in these nations.

Jun-2022: Oracle entered into a partnership with Kyndryl, an IT infrastructure services provider. Through this partnership, the companies aimed at helping customers accelerate their journey to the cloud by delivering managed cloud solutions to enterprises all over the world. This partnership would expand the company's reach, helping more customers across the world move critical workloads to the cloud.

Jun-2022: AWS signed an agreement with Redington India, an information technology (IT) provider. From this agreement, the companies focused on driving cloud technology adoption in India. Also, this agreement would enable AWS to extend the power of the AWS Cloud to more partners and customers across the metros, and tier-2 and 3 cities in India.

May-2022: IBM joined hands with Amazon Web Services, a subsidiary of Amazon that provides on-demand cloud computing platforms. The company focused on offering a broad array of its software catalog as Software-as-a-Service (SaaS). Through this collaboration, IBM took another major step in giving organizations the ability to choose the hybrid cloud model that works best for their own needs and workloads, freeing them up to instead focus on solving their most pressing business challenges.

May-2022: Oracle partnered with Informatica, an enterprise cloud data management leader. This partnership would bring compelling value to the companies' joint customers with the fastest, most cost-effective path to OCI.

Mar-2022: Google teamed up with Adani Group, an Indian multinational conglomerate. Under this collaboration, the Adani Group would drive the next phase of digital innovation across its diversified business portfolio. The collaboration would tap into the companies' expertise across best-in-class infrastructure, technology, and industry solutions to modernize the Adani Group's IT operations at scale.

Mar-2022: Microsoft partnered with FD Technologies, a group of data-driven businesses. Through this partnership, the companies focused on expanding the reach of the KX Insights streaming data analytics platform. By combining Microsoft's Intelligent Cloud capabilities with KX technology and expertise, the companies aim to empower capital markets and financial services customers with the latest, compelling solutions for faster decision-making and innovation.

Feb-2022: Microsoft signed an MoU to collaborate with Larsen & Toubro, an Indian multinational conglomerate company. The collaboration focused on developing a regulated sector-focused cloud offering. In addition, the collaboration aimed to support the public sector and the other regulated industries as they seek to accelerate digital services to benefit all parts of India.

Feb-2022: Google Cloud collaborated with Elisa, a Finnish telecommunications & digital services provider. This collaboration aimed at accelerating Elisa's cloud transformation journey and working together on joint innovations in several areas. Further, this collaboration would enable Elisa to leverage Google Cloud's infrastructure, advanced data analytics, storage, and hybrid cloud management services to speed up its go-to-market activities, explore the latest edge computing possibilities, and build improved experiences for Elisa's customers.

Feb-2022: AWS came into a partnership with Kyndryl, the world's largest IT infrastructure services provider. The partnership aimed at empowering, educating, and enabling thousands of AWS-certified practitioners and developing joint solutions that would accelerate customers' journeys and help them innovate on the world's leading cloud.

Jun-2021: Amazon Web Services extended its existing partnership with Salesforce, an American cloud-based software company. Through the expansion of the partnership, the companies would make it easy for customers to utilize the whole set of Salesforce and AWS capabilities together to quickly build and deploy powerful new business applications that accelerate digital transformation.

Jun-2021: Amazon Web Services signed an agreement to partner with Axis Bank, an Indian banking and financial services company. The Amazon subsidiary provides on-demand cloud computing services to organizations across the world and would help accelerate the bank's transformation in the face of growing demand for digital services. Under this partnership, AWS would help Axis Bank build and scale a suite of digital banking services that evolve with technology changes, introduce the latest payment modes, and support evolving customer and business requirements in India.

Dec-2020: Microsoft teamed up with Johnson Controls International (JCI), an American multinational corporation. The collaboration aimed to digitally transform how buildings and spaces are conceived, built, and managed. By integrating the power of Azure Digital Twins with JCI's OpenBlue Digital Twin platform, this collaboration would serve customers with a digital replica and actionable insights to better meet their evolving requirements.

Product Launch and Product Expansion:

Jun-2022: Salesforce introduced Sales Cloud Unlimited, a unified platform with everything sales teams need in one place. The company aimed to drive growth and turn sales reps into trusted advisors. Sales Cloud is an all-in-one platform for sales where AI (powered by Einstein), automation, and analytics come standard, allowing every sales rep to be more efficient.

Jun-2022: Salesforce launched new Customer 360 innovations. This launch aimed to help companies tap into the power of automation so they can focus on what matters most: driving productivity and building trusted relationships with customers.

Dec-2021: IBM launched Cloud Modernization Center, a digital front door to a vast array of tools, training, resources, and ecosystem partners. The launch aimed at helping IBM clients accelerate the modernization of their applications, data, and processes in an open hybrid cloud architecture. As part of the IBM Z Cloud and Modernization Center, clients could access a digital journey showcasing comprehensive resources and guidance for business professionals, IT executives, and developers alike.

Dec-2021: Adobe unveiled Creative Cloud Express, a simple, template-based tool. This latest tool would allow drag-and-drop content creation, empowering every user to express their creativity with just a few clicks.

May-2021: Google Cloud launched Vertex AI, its latest managed machine learning platform. Vertex AI is designed to make it easier for developers to deploy and maintain their AI models. This latest product allows better deployments for a new generation of AI that would empower data scientists and engineers to do fulfilling and creative work. Ultimately, the goal with Vertex is to reduce the time to ROI for these enterprises, making sure they can not just build a model but also get real value from the models they're building.

Mar-2021: IBM released IBM Cloud Satellite, an extension of the IBM Public Cloud. IBM Cloud Satellite would enable enterprise clients to launch consistent cloud services anywhere and in any environment, across any cloud, on-premises, or at the edge.

Acquisition and Merger:

Mar-2022: SAP SE acquired Taulia, a leading provider of working capital management solutions. This acquisition aimed to expand SAP's business network and strengthen SAP's solutions for the CFO office. Taulia's solutions would be tightly integrated into SAP software as well as continue to be available standalone. In addition, Taulia would operate as an independent company with its own brand within the SAP Group.

Mar-2022: Microsoft took over Nuance Communications, a leader in conversational AI and ambient intelligence industries. This acquisition aimed to bring together Nuance's best-in-class conversational AI and ambient intelligence with Microsoft's secure and trusted industry cloud offerings. The acquisition would also help providers offer more affordable, effective, and accessible healthcare, and help businesses in every industry create more personalized and meaningful customer experiences.

Feb-2022: IBM acquired Sentaca, a provider of telco consulting services and solutions. This acquisition focused on improving its hybrid cloud capabilities. Through this acquisition, Sentaca would become part of IBM Consulting and would be integrated into its Hybrid Cloud Services business in North America.

Nov-2021: IBM completed the acquisition of SXiQ, an Australian digital transformation services company. This acquisition aimed to bring additional hybrid and multi-cloud expertise that is at the core of open innovation for clients. SXiQ would improve IBM Consulting's capabilities in Australia and New Zealand to modernize applications and technology infrastructure in the cloud.

Jun-2021: IBM took over Turbonomic, an Application Resource Management (ARM) and Network Performance Management (NPM) software provider. This acquisition aimed to enable IBM to become the only company providing a one-stop shop of AI-powered automation capabilities, all built on Red Hat OpenShift to run anywhere.

Dec-2020: IBM acquired FinTech Expertus Technologies, a Montreal-based fintech company. Through the acquisition, IBM would gain consulting experience from Expertus on addressing the latest challenges in payments coming in the next several years. The acquisition would also broaden IBM's capability to deal with complicated integrations of technologies, people, and processes.

Dec-2020: Adobe acquired Workfront, the leading work management platform for marketers. The acquisition aimed to give leading brands access to a single system to support planning, collaboration, and governance, to unlock organizational productivity.

Dec-2020: Google signed an agreement to acquire Actifio, a privately held information technology firm. Under this acquisition, Actifio's business continuity solutions would help Google Cloud customers prevent data loss and downtime caused by network failures, external threats, human errors, and other disruptions.

Feb-2020: Google Cloud took over Looker, a Santa Cruz data analytics company. The acquisition aimed to strengthen the company's analytics and data warehouse capabilities, which include BigQuery, allowing customers to address some of their toughest business challenges faster, all while maintaining complete control of their data.

Scope of the Study

Market Segments covered in the Report:

By Service Type

Software as a Service (SaaS)

Platform as a Service (PaaS)

Infrastructure as a Service (IaaS)

By Deployment

Public

Private

Hybrid

By Enterprise Size

Large Enterprises

Small & Medium Enterprises

By End-use

BFSI

IT & Telecom

Retail & Consumer Goods

Manufacturing

Media & Entertainment

Energy & Utilities

Healthcare

Government & Public Sector

Others

By Geography

North America

o US

o Canada

o Mexico

o Rest of North America

Europe

o Germany

o UK

o France

o Russia

o Spain

o Italy

o Rest of Europe

Asia Pacific

o China

o Japan


Microsoft Claims Reduction in Cloud Cost from Migrating Internal Services to .NET 6 – InfoQ.com

Microsoft has migrated several internal services running on the Azure cloud from .NET Framework to .NET 6, which the company claims has reduced cloud infrastructure costs by 29% while improving performance and reducing latency by up to 50%.

Microsoft released .NET 6 in November 2021, announcing massive performance improvements across the board. The performance improvements in .NET 6 are mainly due to optimisations in the JIT (just-in-time) compiler and garbage collector, moving threading code from unmanaged to managed code, optimising async operations in several scenarios, and improving the performance of data structures such as arrays and file system access classes.

Since the release, Microsoft and other companies have shared experiences and results from migrating from older versions of .NET to .NET 6.

The Azure Active Directory gateway service moved to .NET 6 in September 2021, when the release candidate versions were available. The team claimed a 30% decrease in CPU usage while hosting the same workload of requests per second. During the migration, they found some minor issues and coordinated with the .NET team to fix them. One of the biggest changes was removing the previous dependency on IIS and serving HTTP requests with HTTP.sys directly from the Windows operating system.

Microsoft Commerce, a collection of around 700 revenue-related microservices, went through a long migration towards .NET Core starting in 2019. Over time, the team migrated from Azure Windows VMs to Linux Kubernetes clusters, also moving from .NET Framework to .NET Core 3.1, then .NET 5, and finally .NET 6. They observed latency reductions of up to 78% in some cases, while the final Azure cost savings were around 30% in CPU usage. During the migration, the Microsoft Commerce team also removed some implicit Windows dependencies and moved away from IIS towards the cross-platform Kestrel web server.

Microsoft's Teams infrastructure platform, called IC3 (Intelligent Conversations and Communications Cloud), also migrated to .NET 6 in May 2022. They claimed a 29% reduction in Azure compute costs due to getting the same throughput with fewer virtual machines, and a 30-50% latency reduction while increasing the stability and reliability of the services. While the migration is not yet complete, more than a third of the 200 services already run on the latest long-term supported version of .NET. The team invested heavily in analysing the dependencies of their .NET code and mitigated the risk of migration using shims and running code side-by-side.

The Azure CosmosDB API gateway migrated to .NET 6 in January 2022. The team claimed significant CPU usage reduction, memory footprint reduction, and latency reduced to a fifth of its previous level. The team highlighted improvements in HTTP request handling in the Kestrel server, ValueTask optimisations for asynchronous operations, and support for memory-intensive operations with Span structures in .NET 6.

Azure Web Applications, one of the most used services in Azure for web application developers, migrated its implementation during the first half of 2022 from IIS to Kestrel and YARP (an open-source reverse proxy) with .NET 6. The team claimed an almost 80% increase in throughput and a significant CPU usage reduction. Removing Windows dependencies with Kestrel and .NET 6 enabled them to use the same codebase for their Windows and Linux web application services, reducing the cost of maintenance.

Open-source .NET projects are also benefiting from the .NET 6 performance improvements. A service that reads and processes AIS messages, broadcast by maritime traffic, claims a 20% performance improvement with no code changes at all, simply by migrating from .NET Core 3.1 to .NET 6.


StorPool takes its software-defined storage to the AWS cloud – ComputerWeekly.com

One-tenth the latency of other storage solutions and one million IOPS from a single node: those are the claims of StorPool, which sells distributed and virtualised software-defined storage, until now for datacentre deployment, but now also available on the AWS cloud.

"It was AWS that came to us to propose the offer of extremely performant storage alongside its online storage services," said Boyan Ivanov, CEO of StorPool, in a conversation with ComputerWeekly.com's French sister publication LeMagIT during a recent IT Press Tour event.

In fact, a StorPool unit on AWS allows online applications to achieve 1,200 IOPS, which compares to the 250 IOPS from the AWS Elastic Block Storage service that attaches directly to VMs [virtual machines].

Ivanov said AWS doesn't market StorPool among its services, but StorPool can install its system on AWS VMs.

StorPool claims that its software-defined storage's high performance comes from not being burdened by too many storage functions. For example, block access has been the main focus, as used by transactional databases (i.e., the applications that most need high IOPS) and by OSs that read and write their volumes for virtual machines or persistent containers.

"Traditional storage arrays are too complex and not elastic enough for modern use cases," said Ivanov. "The best measure of performance now is latency, in other words, the speed at which your storage responds to your application or your systems. It's in that way that we have developed our software-defined storage."

Ivanov said enterprises can add third-party file services to StorPool so a portion of the disk works as NAS. "What is important is that you have a pool of storage that is faster than the basic offer," he added.

StorPools software is installed on at least three servers and these present their drives like a virtual SAN to other machines on the LAN. That is pretty similar to software-defined storage like VMware vSAN and DataCore SANsymphony. But StorPool claims its code is better optimised and that its performance depends on an emphasis on the RAM of each node in the cluster.

"We use 1GB of RAM and a complete virtual machine per node to manage up to 1PB of data," said Boyan Krosnov, technical director at StorPool. "That's the key to offering better performance than all-flash arrays from Pure Storage or NetApp."

When applications are deployed on the same server as the StorPool VM, latency to data on another node can be as low as 70. That's just 1.5x the latency seen when the application directly accesses NVMe on the same server. And when data is on the same server, latency under StorPool is halved compared with direct access. That's down to parallelised NVMe access with StorPool, not the host OS.

"StorPool doesn't use host OS drivers," said Krosnov. "It uses ones we've developed that allow for an optimised RAID for NVMe SSDs, but also for the network cards that connect the nodes."

In its most recent version, v20, StorPool supports NVMe-over-TCP, which can connect nodes to external disk shelves or have them operate as a target for other servers on the LAN.

NVMe-over-TCP offers low-cost storage networking, but at speeds to match NVMe SSDs. StorPool claims that an application connected via 100Gbps Ethernet can actually move data at 10GBps, which is the maximum authorised on such a connection.

Elsewhere in v20, StorPool has broken with former habits to offer NFS-based NAS functionality, with a maximum capacity of 50TB.

StorPool's headline customers include NASA, the European Space Agency and CERN. Last year, integrator Atos announced StorPool would be deployed as storage for its supercomputer projects.

StorPool is also available on AWS for its i3en.metal bare-metal storage and r5n compute instances. According to a series of tests carried out by the software maker, such services, as measured by UK-based hosting provider Katapult, stayed under 4 milliseconds of response time with databases at 10,000 requests per second. By comparison, AWS's native block storage service, EBS, is limited to 4,000 requests, and other competitors to 2,000.

"The problem with block storage services in the cloud is that they are not elastic," said Krosnov. "After a certain level of access requests, the server uses other SSDs, meaning SSDs that aren't directly connected to the PCIe bus. So, your application then passes through the bottleneck of the host OS."


How smart hardware and cloud-based software increase efficiency – Water Technology Online

As acquisitions, mergers and enterprises increasingly emerge, expanding organizations well beyond the fences of a single facility, industrial water users and producers require new ways to monitor process conditions throughout their expansive domains. Additionally, managing assets and data over these vast reaches can quickly become a challenge.

Historically, it was normal for large teams of plant personnel to carry out time-consuming tasks, like manual measurements, to ensure plants and networks were running safely. But today, industrial stakeholders can monitor operations and execute many tasks remotely, improving productivity, accuracy, efficiency and profitability.

In the modern data-centric landscape, smart instrumentation provides a wealth of diagnostic and other information, enabling plant staff to get more from their instruments than just 4-20mA primary variable measurements. This information, transmitted via digital communication protocols to central monitoring software solutions, helps plant personnel improve plant efficiency and avoid unplanned shutdowns by empowering them to implement proactive maintenance via predictive monitoring and analysis.

As water systems become more heavily regulated, there is a rapidly increasing list of data points to monitor. Real-time measurement and control remain vital to the health of any industrial system, but many more variables are required for reporting to regulatory agencies. Furthermore, efficiently tracking equipment diagnostic and process data, in addition to corresponding insights, can identify opportunities for treatment process optimization, making it easier to justify and get upgrade projects approved.

Obtaining and analyzing this data, in addition to performing comprehensive asset management of complex systems, is nearly impossible with the solutions of yesteryear. But the industrial internet of things (IIoT) and connected instruments make these types of upgrades feasible for organizations of all sizes, helping them improve operational efficiency.

By incorporating smart instrumentation into water system designs, facility operation and optimization become much more manageable tasks. These instruments incorporate digital communication protocols, sometimes in place of, and other times on top of (in the case of HART), traditional analog communication protocols, greatly increasing capabilities and value (Figure 1).

These systems regularly use flow, pressure, temperature, level and other process data to monitor and control water quality and availability, but they often discard status and diagnostic data. By passing over this data, plant personnel may miss out on opportunities to optimize, simplify and safeguard their operations.

When this data is ingested by intelligent plant analysis systems, facilities increase their ratio of proactive to reactive maintenance, thus reducing unplanned downtime, as well as equipment and human safety hazards. For example, instead of waiting for a high-temperature pump bearing failure, process data can be traced to issue an alert when anomalies are detected that would lead to this type of issue, such as steady motor temperature increase over time.
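
To make the proactive-versus-reactive idea concrete, here is a minimal sketch, in Python, of the kind of trend check described above: it flags a pump motor whose temperature is drifting steadily upward long before a hard high-temperature trip. The tag layout, threshold values, and alert wording are hypothetical illustrations, not part of any vendor's product.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Reading:
    timestamp_s: float   # seconds since start of the observation window
    temp_c: float        # motor temperature in degrees Celsius

def temperature_trend_c_per_hour(readings: List[Reading]) -> float:
    """Least-squares slope of temperature over time, expressed in C per hour."""
    n = len(readings)
    mean_t = sum(r.timestamp_s for r in readings) / n
    mean_y = sum(r.temp_c for r in readings) / n
    num = sum((r.timestamp_s - mean_t) * (r.temp_c - mean_y) for r in readings)
    den = sum((r.timestamp_s - mean_t) ** 2 for r in readings)
    return (num / den) * 3600.0 if den else 0.0

def check_pump_motor(readings: List[Reading],
                     max_trend_c_per_hour: float = 2.0,   # assumed drift limit
                     max_temp_c: float = 85.0) -> List[str]:  # assumed hard limit
    """Return maintenance alerts for a slow drift (proactive) or a limit breach (reactive)."""
    alerts = []
    trend = temperature_trend_c_per_hour(readings)
    if trend > max_trend_c_per_hour:
        alerts.append(f"Proactive alert: temperature rising {trend:.1f} C/h; schedule bearing inspection.")
    if readings and readings[-1].temp_c > max_temp_c:
        alerts.append(f"Reactive alert: temperature {readings[-1].temp_c:.1f} C exceeds {max_temp_c} C limit.")
    return alerts

# Example: a gentle upward drift (3 C/h) that has not yet tripped the hard limit.
window = [Reading(t * 600, 60.0 + 0.5 * t) for t in range(12)]  # one reading every 10 minutes
for alert in check_pump_motor(window):
    print(alert)
```

In this sketch, only the proactive alert fires, which is exactly the window of time a maintenance team would want for planned intervention.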

When this diagnostic data is integrated into host systems, it can be analyzed to provide advance warning of instrument or equipment failure, or troubleshooting insight in the event of a fault. Because calibration and nameplate information are also internally stored in each instrument, tracking and managing assets is easier throughout plant lifecycles.

Moving process and diagnostic data into host systems is key, but especially in enterprise settings, it is difficult to make sense of data without context. With cloud-based insight generation software, the IIoT provides users with access to instrument and plant process insights so they can contextualize data and make better decisions (Figure 2).

These cloud software solutions help organizations automate tasks that were previously manual and inefficient, providing productivity gains and continuous, real-time reporting.

This empowers facilities to:

This information and more are made available to plant staff in configurable dashboards, making it easy to understand operational states at a glance and make adjustments where necessary.

Control of waste effluent release into a sanitary sewer is a common need for industrial users, but few sites are equipped with water quality sensors to provide early indication of pollutants. Instead, composite liquid samples are typically taken at weekly or longer intervals, and then measured in the lab. This method can miss significant pollution events, each of which may adversely impact downstream wastewater treatment plants, or the environment where effluent is discharged.

A private wastewater treatment facility servicing industrial users was experiencing out-of-compliance flows into its facility, but based on lab data from its customers, it could not identify the offenders. To effectively run the business and adequately protect its own environmental discharge, the facility needed a way to single out the source of contaminants exceeding limit values.

By installing Endress+Hauser smart flow, analytical and temperature instrumentation at multiple points throughout its influent pipe network, and uploading the generated data to the Netilion cloud, it was able to securely monitor process conditions around the clock from anywhere via a web connection. The instrumentation was connected to the internet using 4G cellular gateways. With web-based visualization, reporting and alarming, the facility is now able to identify customers out of compliance, and bill them accordingly (Figure 4).

An American multinational beverage corporation with interests in the manufacturing, retailing and marketing of nonalcoholic beverages produces concentrates and syrups, and it was looking for a way to monitor its water abstraction more closely. Of primary concern, its global team needed to maintain compliance with international regulations and evaluate the productivity of its bore holes.

By implementing smart monitoring instruments at its drilling locations, connecting these instruments to the Netilion cloud, and linking this database to local water authorities, the company ensured regulatory compliance (Figure 5).

Additionally, the insights from Netilion initiated a pump efficiency optimization project by enabling clearer access to key performance indicators. The software dashboards included these indicators, along with reports, variable limits, warnings and alarms for 22 facilities around the world.

IIoT-based software solutions provide significant benefits for industrial water and wastewater stakeholders by enabling reliable monitoring of water quality, flow, pressure, temperature and level. And the cloud can connect all aspects of a water system, providing users with easy access to data and insights from a single source.

These capabilities enable better asset tracking of field devices, and more reliable data transfer, recording and archiving. With the high value placed on water in today's world, IIoT technology is boosting operational efficiency, empowering users to improve their processes and bottom lines, while conserving water.

Nick Hanson is the water and wastewater industry marketing manager for Endress+Hauser USA. In this role, he is responsible for strategic market planning and industry outreach events. As part of the Endress+Hauser Global Strategic Industry Group, he acts as the voice of the U.S. market to guide solutions specific to the region. Nick has a Bachelor of Science degree in Mechanical Engineering from the University of Colorado Boulder, and over 10 years of experience in the process instrumentation and control industry.


Join us for the Intelligent Application Summit Madrona – Madrona Venture Group

Innovation has a way of appearing to be an overnight success while actually being a transformation taking a decade or more to emerge. Intelligent applications are one such area. In 2012, consumers were already experiencing the first generation of intelligent apps through Google and Bing search engines, recommender systems for services like Amazon.com, Netflix and Spotify, and nascent voice and image services like Siri and Alexa. Then the early days of enterprise and functional applications architected with intelligence inside began.


A decade later, we are fully immersed with intelligent applications as consumers, employees, citizens, and patients. In fact, any application that is just software will struggle to survive in the coming years. Since intelligent applications are the future, we are co-hosting our first-ever Intelligent Application Summit to meet, share perspectives, and discuss with other leaders the future of applications and the data-driven services that enable them to be intelligent.

Madrona has been interested in and invested in ML-enabling technologies for years, including but not limited to Turi (acquired by Apple in 2016), Lattice (acquired by Apple in 2017) and Algorithmia (acquired by DataRobot in 2021). And the pace of company formation and progress in intelligent apps has become torrid. Last year, we were inspired to work with the broader venture community and other partners to identify the top 40 private intelligent application companies. We then revealed the group, which included 10 intelligent application enablers and 30 intelligent apps across early, mid, and later stages, at AWS re:Invent in late November 2021. Since then, those 40 companies have raised over $3 billion in new capital, including several successful up-rounds after the 2022 tech correction.

Be sure to request an invite to the Intelligent Application Summit:

Request an invitation

We define intelligent applications to be software services with contextually relevant machine/deep learning models embedded in the application. Of course, the precursor to these models is access to and preparation of multiple data sets and the use of algorithmic techniques to train, build and deploy the models in software. For example, Amperity combines first-party data major brands have about their customers to help better identify and personalize communications with those customers. SeekOut helps employers identify candidates inside and outside their company with the best attributes to fill specific roles. And there are endless examples like these in life sciences, financial services, creative arts, and process automation.

Many factors have had to come together for the intelligent applications era to fully arrive. Infrastructure capabilities around model training, such as GPUs and other specialized processors, have become so cost-effective and available through cloud services that model creation and iteration are much more approachable. Data is far more ubiquitous but is also more digitized and portable. Even though data has gravity, it can increasingly be aggregated in a data store, lake, or warehouse to enable the preparation and training of AI/ML models. SaaS software is also ubiquitous in the world of hybrid work, which allows new data to be easily captured and turned into predictive recommendations and insights. And with the emergence of foundational models such as transformers, the next generation of intelligent and accessible services are being enabled.

Our Intelligent Applications Summit will host a curated group of leaders from cloud computing companies, rapidly growing companies, venture capitalists, corporate development and research institutions. The event starts the evening of Nov. 1 with a reception and runs all day on Nov. 2. The day-long agenda combines keynotes, short company overviews/showcases, breakout sessions, and networking. We will also be celebrating this year's IA 40 winners, which we will announce ahead of the summit. We encourage anyone interested to request an invite at the event page.

Intelligent Applications as a theme resonates with the industry, and we are honored to work with Summit sponsors: Microsoft, AWS, Goldman Sachs, and PitchBook. We are also thrilled that confirmed speakers so far include Charles Lamanna and Peter Lee from Microsoft, Matt Garman from AWS, Sidd Srinivasa from the UW, Maddison Masaeli from Deepcell, Craig Hanson from Gong, Oren Etzioni from the Allen Institute for AI, Justin Borgman from Starburst, Alex Ratner from Snorkel, Michelle Yi from Relational AI, Zayd Enam from Cresta, Geoffrey von Maltzahn from Flagship Pioneering, Kyle Coleman from Clari, Anoop Gupta from SeekOut, Diego Oppenheimer from DataRobot, Amanda Marrs from AMP Robotics, Prasad Raje from Outreach, Leo Dirac from Groundlight AI, Anu Sharma from Statsig, and Zoe Hillenmeyer from Peak. We're still finalizing the agenda, but keep an eye on the Summit event page. A few key conference topics include:


Infor Partners with Fontainebleau Las Vegas for Cloud-Based Front and Back-of-House Hospitality Solutions – Hospitality Net

Infor, the industry cloud company, today announced that Fontainebleau Las Vegas, a vertically integrated 67-story hotel, gaming, entertainment and meeting destination conceived by Fontainebleau Development, will partner with Infor to implement key front- and back-of-house hospitality solutions to automate critical business functions. Through this partnership, the Fontainebleau Las Vegas team can utilize cloud-based applications specifically built for the hospitality industry to unify and refine hotel operations, create scalable processes, and share real-time data, empowering business leaders to make more-informed decisions as the property prepares for its global debut in late 2023.

"Infor's Hospitality solutions are built to help hoteliers better manage all facets of the business, so they can make more impactful decisions to amplify success and take the business further," says Infor General Manager Jason Floyd. "Infor's hospitality-specific cloud solutions will provide Fontainebleau Las Vegas with the tools to combat fluctuating variables, mitigate day-to-day challenges, and eliminate redundancies in the day-to-day workflow."

Fontainebleau Las Vegas will utilize Infor's Hospitality Management System (HMS), a robust, integrated, and scalable hotel property management system built specifically for hospitality and gaming. This cloud-based system will provide centralized guest profile management to enable better personalization; support a digital guest journey with mobile-enabled check-in and check-out, guest services and housekeeping; and offer customizable fields and screens by user type, allowing hotel team members to deliver extraordinary guest service with precision.

"The next-level technology that will be showcased throughout Fontainebleau Las Vegas will extend behind the scenes as we adopt modern solutions to capitalize on critical data and intelligence," says Fontainebleau Las Vegas Chief Technology Officer Marc Guarino. "Infor's technology solutions will allow us to automate time-consuming back-of-house processes so that we can further focus on delivering unforgettable experiences at the property."

Upon opening, Fontainebleau Las Vegas will feature approximately 3,700 uniquely designed hotel rooms, more than 550,000 square feet of customizable convention and meeting space, and a world-class collection of gaming, dining, retail, lifestyle, and health and wellness experiences.

Learn more about Infor HMS.

Fontainebleau Las Vegas is a vertically integrated, luxury 67-story hotel, gaming, entertainment, and meeting destination scheduled to open in the fourth quarter of 2023. Created by Fontainebleau Development, which designs, builds, and operates premier hospitality, commercial, retail and luxury properties, in partnership with Koch Real Estate Investments, Fontainebleau Las Vegas brings full circle the company's longtime vision of hosting its iconic brand on the Las Vegas Strip. Located at 2777 S. Las Vegas Blvd. adjacent to the acclaimed Las Vegas Convention Center expansion, Fontainebleau Las Vegas will feature approximately 3,700 uniquely designed hotel rooms, more than 550,000 square feet of convention space, and a world-class collection of restaurants and shops, pool experiences, vibrant nightlife options, and coveted spa and wellness offerings. Visit fontainebleaulasvegas.com.

Infor is a global leader in business cloud software specialized by industry. Infor's mission-critical enterprise applications and services are designed to deliver sustainable operational advantages with security and faster time to value. We are obsessed with delivering successful business outcomes for customers. Over 60,000 organizations in more than 175 countries rely on Infor's 17,000 employees to help achieve their business goals. As a Koch company, our financial strength, ownership structure, and long-term view empower us to foster enduring, mutually beneficial relationships with our customers. Visit www.infor.com.

Christina Ledger, +1 312 662 2135


What Is Sandbox Security and Do You Need It in Your Business? – TechGenix

With sandbox security, cybercriminals think they're attacking the real thing when they're only playing with a decoy. Source: Markus Spiske via Unsplash.com

Sandbox security is a virtualization-based security (VBS) solution to protect systems from intrusions. You can use a sandbox to test security and solutions, including catastrophic attacks. The sandbox allows these tests without endangering the original system.

A sandbox effectively determines which attack vectors your system is vulnerable to. You can then patch them before anything becomes available to the public.

I'll first go into the details of what sandboxing is and how it works. Later, we'll consider a few scenarios that show you what to focus on if you want to use sandbox security.

Sandbox security is an approach to testing and developing cybersecurity systems. It creates a model on an on-site or cloud server and attacks it with Advanced Persistent Threats (APTs). It's also a way to test unknown threats that might enter the system from the outside.

You can choose from three sandbox types. The one you select depends on which systems you believe malware would attack. These choices also use different amounts of system resources. So, in the end, it's a calculation of what is most useful for your needs.

With full system emulation, you copy everything, including the hardware you use. At completion, you have two identical systems. The only difference is that the sandbox has its software dependent on and backed up by the master system.

Because these systems are alike, malware can't detect a sandbox unless it's instructed not to act for unreasonable lengths of time. Even through side-channel attacks, malware can't determine that it's attacking a trap instead of the real thing.

But, these systems are also expensive, as they need double the hardware and maintenance. The expense is worth it for large companies with remote workers sending information through the system.

The minimal increase in security wont be worth the added overhead for smaller companies.

Operating system (OS) emulation offers very good protection without needing a whole new hardware setup. Also, it works with cloud servers such as Microsoft Azure and AWS.

For on-premise servers, the added resource expenditure can be significant. But, the virtual device requires no hardware maintenance or added purchasing costs.

This setup is ideal for service industries with customers sending in information. People working in a field that would otherwise create a weak security point will benefit too.

In these cases, the only thing emulated is the access point, which can be the entire app, drop box, or inbox. It's also possible to set up a sandbox instance for emails: it emulates the person receiving the message and clicking on the link, and it can check whether the link or document sent is legitimate or phishing and respond accordingly.

Using sandbox security for email can be useful for any enterprise. But the most common use is to test apps and web-based programs where customers import data. For this purpose, it's cheap, effective, and scalable.
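
As a rough illustration of the email-style check described above, the sketch below fetches a link from inside an isolated, throwaway environment and applies a few simple heuristics (long redirect chains, executable downloads) before flagging it as suspicious. The heuristics, thresholds, and use of the `requests` HTTP client are illustrative assumptions; a production email sandbox would detonate the link in a disposable VM or container rather than in the host process.

```python
import requests  # third-party HTTP client; run this only inside a disposable sandbox

# Content types that rarely belong in a legitimate email link (illustrative list).
SUSPICIOUS_TYPES = ("application/x-msdownload", "application/x-dosexec", "application/octet-stream")

def detonate_link(url: str, timeout_s: float = 10.0) -> dict:
    """Follow the link as a sandboxed 'recipient' would and collect simple verdict signals."""
    verdict = {"url": url, "suspicious": False, "reasons": []}
    try:
        resp = requests.get(url, timeout=timeout_s, allow_redirects=True)
    except requests.RequestException as exc:
        verdict["suspicious"] = True
        verdict["reasons"].append(f"request failed: {exc}")
        return verdict
    if len(resp.history) > 3:  # assumed threshold for a redirect chain worth flagging
        verdict["suspicious"] = True
        verdict["reasons"].append(f"long redirect chain ({len(resp.history)} hops)")
    content_type = resp.headers.get("Content-Type", "").split(";")[0].strip().lower()
    if content_type in SUSPICIOUS_TYPES:
        verdict["suspicious"] = True
        verdict["reasons"].append(f"executable payload ({content_type})")
    return verdict

if __name__ == "__main__":
    print(detonate_link("https://example.com"))
```

Because the check runs in an environment that can be deleted afterwards, a malicious payload never touches the recipient's real mailbox or workstation.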

Although different in scope, these sandboxing options share many benefits in different capacities. I'll now list those benefits and discuss how they apply to different businesses.

In the next section, I'll go through how to create a sandbox and how sandboxes work.

You've got two main methods to create a sandbox.

The first method uses one set of hardware. It usually has a higher capacity to run both the main and sandbox mirror systems.

The second method has separate hardware, and the main system controls both systems. This method performs better but increases component, maintenance, and power costs. This option is better for demanding businesses.

For many businesses, this cost increase isn't worth it. It's optimal to use the same system and lower the requirements for both the main OS and the sandbox.

I'll now go through the operational process. Whether you're using full system emulation or mimicking one instance, the rundown looks similar.

It's possible to make a sandbox more intricate depending on the requirements. But in most situations, the process of building the sandbox, detecting malware, trapping it, and restarting looks like this:

The same server that copies the important parts to a sandbox on a functional system makes a new instance. It then creates a new virtual environment.

For anyone inside this new environment, it seems as if they're in the main system. With full system emulation, businesses can see hardware, power consumption, and OS information.

Regardless of whether it's part of a test or an actual attack attempt, a sandbox is made to be attacked and taken down. The system records the attack, quarantines the malware, shuts down, and restarts.

A good sandbox destroys malware and knows when the data is safe or beneficial. The tested files are copied to the main server while the sandbox is refreshed for other data.
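
A minimal sketch of that create-inspect-quarantine-refresh loop might look like the following. The `Sandbox` class, the signature list, and the directory names are stand-ins for whatever hypervisor API and anti-malware engine an organization actually uses; they are assumptions for illustration only, not a real product's interface.

```python
import shutil
import tempfile
from pathlib import Path

# Naive signature list standing in for a real anti-malware engine (illustrative only).
KNOWN_BAD_SIGNATURES = (b"EICAR-STANDARD-ANTIVIRUS-TEST-FILE",)

class Sandbox:
    """Disposable environment that mirrors the production intake point."""

    def __init__(self) -> None:
        self.root = Path(tempfile.mkdtemp(prefix="sandbox-"))
        self.quarantine = self.root / "quarantine"
        self.quarantine.mkdir()

    def inspect(self, incoming_file: Path) -> bool:
        """Return True if the file looks safe; move it to quarantine otherwise."""
        data = incoming_file.read_bytes()
        if any(sig in data for sig in KNOWN_BAD_SIGNATURES):
            shutil.move(str(incoming_file), self.quarantine / incoming_file.name)
            return False
        return True

    def promote(self, safe_file: Path, production_dir: Path) -> None:
        """Copy a verified file out of the sandbox into the main system."""
        production_dir.mkdir(parents=True, exist_ok=True)
        shutil.copy2(safe_file, production_dir / safe_file.name)

    def refresh(self) -> "Sandbox":
        """Destroy the current environment and hand back a fresh one."""
        shutil.rmtree(self.root, ignore_errors=True)
        return Sandbox()

# Example lifecycle: inspect an uploaded file, promote it if clean, then reset the sandbox.
sandbox = Sandbox()
upload = sandbox.root / "customer_upload.csv"
upload.write_text("id,value\n1,42\n")
if sandbox.inspect(upload):
    sandbox.promote(upload, Path("./verified_uploads"))
sandbox = sandbox.refresh()
```

The important design choice is that nothing reaches the main system until the sandbox copy has been inspected, and the sandbox itself is thrown away after every run rather than cleaned in place.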

Now I'll go through some common use cases for sandboxing. If you recognize your business in the examples, you likely need to consider it.

Situations where sandboxing, including sandbox development and security, can be useful are plentiful. In almost every security situation you can think of, you want to have a decoy to use.

Here, I'll list four of the most frequent use cases. While your business might not fit these exactly, explore sandboxing options if you recognize the situation.

Because websites are almost always cloud-based through professional hosting, virtualization is often integrated. When using sandbox security, the interactive pages would run as a sandbox.

If the sandbox finds malware someone is trying to upload, the anti-malware software will start. It records the attack and flushes the entire web browser environment. The pages are still available for everyone else, but no malware can find its way into the website's back end.

Software protection works like web protection. The main difference is that, rather than a third party, the business runs the server, even if cloud-based.

The first step for protection is determining which components interact with the outside. Then, you must predict possible attack vectors to determine which sandbox you need to emulate. These include side-channel attacks.

Once you have the preparations and predictions, you can set up a sandbox system. It serves as the front end for communication with the outside. Here, you can allow people to send files and other types of code, including executable code.

The virtual machine runs internal and external anti-malware software. This software makes it hard for common threats to hide. If it finds anything malicious, it deletes the virtual machine and the threats.

Developing a security system isn't easy. You can't know how the features will work together unless you use proven solutions. It's better to have a virtual machine to test malware attacks before malicious attacks occur.

Sandboxes certainly work more like containerization than virtualization in this regard. But, as you have full control, test it with more risks, attacks, and resource consumption.

In cybersecurity, it's much better to be a pessimist proven wrong than an optimist proven wrong.

Virtual instances encompass the scenarios where many sandboxes run the same thing repeatedly. The primary resource consumption is on malware detection software and not the sandbox.

For mobile and browser apps, you only set up the communication point, without OS information or dedicated hardware. For apps, it's usually just the inbox page, a shared folder, or a similar access point.

On the outside, it seems identical to the main system because it is. But if anyone tries to send malware, it's detected and recorded, and the sandbox gets deleted. Plus, this virtualization solution works seamlessly on the cloud because it isn't resource-intensive.
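Because these instances are so light, spinning one up per message and throwing it away is cheap; the cost sits in the scanner. Below is a small Python sketch of that pattern, with the scan stub and message data invented purely for illustration.

```python
import shutil
import tempfile
from concurrent.futures import ThreadPoolExecutor
from pathlib import Path

def scan(payload: bytes) -> bool:
    """Placeholder detector; in practice this is where most of the resources go."""
    return b"EVIL" in payload

def handle_message(name: str, payload: bytes) -> str:
    # Each message gets its own tiny, identical sandbox: just the "inbox" access point,
    # with no OS image or dedicated hardware behind it.
    inbox = Path(tempfile.mkdtemp(prefix="app_inbox_"))
    try:
        (inbox / name).write_bytes(payload)
        return "rejected" if scan(payload) else "delivered"
    finally:
        shutil.rmtree(inbox)   # deleting and recreating the instance is nearly free

messages = {"hello.txt": b"hi there", "invoice.exe": b"EVIL payload"}
with ThreadPoolExecutor(max_workers=8) as pool:
    for name, verdict in zip(messages, pool.map(handle_message, messages, messages.values())):
        print(name, verdict)
```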

The main difference between cases is the resources needed for optimal results. In most cases, creating a sandbox is rather inexpensive and quick. But you'll find that investing more in this security offers excellent benefits for the money.

Now, let's summarize what we have covered about sandbox security.

Sandbox security is a solution using virtual machines. It creates a mock system that takes on the risk of interacting with external information. Sandboxing has three options: full system emulation, operating system emulation, and single instance virtualization.

For many companies, sandboxing reduces intrusions and allows for easier testing and innovation. While it can be resource-intensive, careful gauging can make it more than worth the added cost.

Sandboxing can prevent attacks, and it is especially useful against Advanced Persistent Threats and other cybercrime.

Additionally, complex systems can use sandboxing for software protection and security research. It's also used with web browsers and online apps where it can protect only one instance inside the system.

Do you have more questions about sandboxing? Check out the FAQ and Resources sections below!

It depends. Above all, a sandbox isn't safer than any other system for stopping malware. Virtualization security allows the malware to attack, then traps it inside so it can't cause damage.

Yes. If the malware recognizes it's in a sandbox or stays dormant for a long time, it can circumvent sandbox protections. Also, it's possible to miss malware if there's a new attack vector.

Yes, sandboxes are virtual machines. You can set up a sandbox security system if you know how to boot up your virtual machine. Unlike regular virtual machines, sandboxes with full system emulation can have dedicated hardware.

Microsoft Azure offers several native options for virtual machines, which you can turn into sandbox security systems. While Azure doesn't offer a dedicated sandboxing service, new instances are affordable and easy to set up.

Yes, AWS offers EC2 virtual machines, and with dedicated servers they're indistinguishable from regular on-premises servers. From the AWS Management Console you can launch sandboxing and QA instances: create a new instance, then dedicate it to a sandbox.
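If you go the EC2 route, a minimal boto3 sketch looks something like the following. The AMI ID, region, and tags are placeholders, and AWS credentials are assumed to be configured already; this is one way to launch and later discard a dedicated sandbox instance, not AWS's prescribed sandboxing workflow.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")   # assumed region

# Launch a small, throwaway instance to act as the sandbox host.
response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",   # placeholder AMI; replace with a real one
    InstanceType="t3.micro",
    MinCount=1,
    MaxCount=1,
    TagSpecifications=[{
        "ResourceType": "instance",
        "Tags": [{"Key": "role", "Value": "sandbox"}],
    }],
)
sandbox_id = response["Instances"][0]["InstanceId"]
print("sandbox instance:", sandbox_id)

# Once the sandbox has served its purpose (or been compromised), discard it entirely:
# ec2.terminate_instances(InstanceIds=[sandbox_id])
```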

See how you can create Linux Virtual Machines and learn more about how they work.

Learn how to host Hyper-V virtual machines on Azure.

Explore Virtualization-Based Security (VBS) and how you can use it.

Learn how to prepare your VM for Windows 11 with PowerShell.

Find out how to troubleshoot a non-responsive Microsoft Hyper-V virtual machine.

The rest is here:
What Is Sandbox Security and Do You Need It in Your Business? - TechGenix

United States Expands Sanctions Authorization of Internet-Based Activities in Wake of Protests in Iran – Gibson Dunn

October 3, 2022


On September 23, 2022, the U.S. Treasury Department's Office of Foreign Assets Control (OFAC) issued General License D-2 (GL D-2), expanding a prior authorization to further facilitate the free flow of information over the internet to, from, and among residents of Iran. GL D-2 authorizes the exportation to Iran of certain services, software, and hardware incident to the exchange of internet-based communications. GL D-2 supersedes and replaces an existing license, General License D-1 (GL D-1), that had been in place without update for over eight years. According to the Treasury Department, the updated license is designed to bring the scope of the license in line with modern technology and ultimately to expand internet access for Iranians, providing them with more options of secure, outside platforms and services. As noted below, even though GL D-2 certainly expands the types of software and services that may be exported, one of its principal effects will likely be the enhanced comfort parties may have in providing such technology to Iran. GL D-1 was often not fully leveraged by an exporting community concerned about the extent of its coverage. GL D-2 is an evident attempt to right this balance, making sure that exporters remain aware of the limitations while providing more certainty to those who wish to rely on the license.

GL D-2 is the latest Biden Administration effort to support the Iranians protesting the death of Mahsa Amini, a 22-year-old woman who was arrested by Iran's Morality Police for allegedly violating the country's laws on female dress and who died in police custody on September 16. Iranian protest-related videos and messages on the internet and social media have captured local and global attention. The United Nations Secretary-General, among others, has called for an independent investigation into Amini's death. The Iranian government, meanwhile, has violently responded to protests and cut off internet access for most of its nearly 80 million citizens. OFAC's initial response on September 22 was to impose blocking sanctions on Iran's Morality Police and on seven senior leaders of Iran's security organizations that have overseen the suppression of peaceful protests. The next day, OFAC issued GL D-2 and published accompanying FAQ guidance. The State Department highlighted the development as a step toward ensuring that the Iranian people are not kept isolated and in the dark.

GL D-2 retains much of the core operative language from GL D-1, including certain limitations. For example, the expansion in authorized services does not apply to the Government of Iran, for which there is still no authorization for fee-based services, and the license does not authorize services to most Iranian Specially Designated Nationals (SDNs). At the same time, as described by the Treasury Department, GL D-2 has modernized and broadened the authorization originally granted in GL D-1 in a number of meaningful ways. We highlight below the most notable updates.

Authorized Communications No Longer Need to Be Personal

As mentioned, GL D-2 authorizes the exportation to Iran of certain services, software, and hardware incident to the exchange of communications over the [i]nternet. Notably absent throughout the license is the requirement, previously present in GL D-1, that the internet-based communications be personal. This is a significant change, because what exactly qualified as a personal communication under GL D-1 was a gray area that caused many compliance questions. Indeed, a Treasury Department official confirmed that the change was motivated by feedback from industry that the personal limitation was a sticking point. This update makes clear that technology companies need not assess the personal nature of communications, which may make such companies and others more comfortable relying on the license.

This is also not the first time OFAC has omitted or removed the personal requirement from a communications-related general license authorizing internet-based activities. This past July, for example, OFAC issued General License No. 25C under its Russia Harmful Foreign Activities Program (promulgated in response to Russia's invasion of Ukraine). That General License also allows the exportation of communications-related services to Russia without requiring that such communications be personal. Another example is OFAC's 2015 amendment of the Cuba sanctions regulations, which likewise dropped the personal requirement from that program's internet-communications license.

Casting a Wider Net Over Supporting Software

The new license has also further expanded the authorization of software. GL D-2 now allows the export of certain supporting software that is incident to or enables internet communications. Previously, GL D-1 only permitted software necessary to enable internet communications. By removing the requirement that software be necessary to support authorized services, GL D-2 expands the types of software covered by the exemption and provides an additional measure of confidence to exporters of internet-based communications software.

Additional Activities Are Now Expressly Covered

Compared to its predecessor license, the authorizing language in GL D-2 is broader and more explicit regarding the types of services that U.S. persons may offer to people in Iran. Previously, OFAC's guidance listed only six examples of permitted activities: instant messaging, chat and email, social networking, sharing of photos and movies, web browsing, and blogging. GL D-2 expands on that illustrative list, adding social media platforms, collaboration platforms, video conferencing, e-gaming, e-learning platforms, automated translation, web maps, and user authentication services. In our view, many of the newly added activities were likely already authorized under the prior GL D-1, but their addition to GL D-2 helps to confirm that they are indeed covered.

Entirely New Cloud-Based Authorization That Extends Beyond GL D-2

GL D-2 authorizes the provision of cloud-based services in support of both the activities enumerated in GL D-2 and any other transaction authorized or exempt under the [Iranian Transactions and Sanctions Regulations (ITSR)]. As a Treasury Department official explained, cloud-based services are key to aiding Iranians' access to the internet because today so many VPNs and other [sorts] of anti-surveillance tools are delivered via cloud.

This cloud-based authorization is among the most significant expansions of GL D-1's original authorization, because it applies to a variety of transactions, parties, and services beyond those listed in GL D-2. For instance, as described in FAQ 1087, the cloud-based services provision applies to news outlets and media websites covered by the exemption for information or informational materials in section 560.210(c) of the ITSR. Treasury also highlights that the cloud-based services provision applies to other transactions authorized under the ITSR, including:

Coverage of No-Cost Services and Software

Under the prior regime, GL D-1 authorized fee-based services and software, while the general license at ITSR section 560.540 contained a parallel authorization for services and software provided at no cost. GL D-2 explicitly covers both fee-based and no-cost activity, but no-cost services to the Government of Iran continue to be limited to those described in Section 560.540, which retains the personal communications requirement that has been dropped from GL D-2. This combination of restrictions means that it is permissible, for example, to provide the Government of Iran with a no-cost instant messaging service, but not with a fee-based collaboration platform supporting a commercial endeavor.

Clarification of Providers' Due Diligence Obligations

In conjunction with the expansion of permitted activities under GL D-2, the Treasury Department released guidance regarding cloud-based providers' due diligence obligations under the new license. In FAQ 1088, OFAC explained that providers whose non-Iranian customers provide services or software to persons in Iran may rely on GL D-2 as long as the provider conducts due diligence based on information available to it in the ordinary course of business. This ordinary course of business formulation is not new, and OFAC has increasingly used this standard in describing its due diligence expectations. See, for example, FAQ 901 on complying with the Chinese Military Companies Sanctions under Executive Order 13959.

In FAQ 1088, OFAC provides several hypotheticals to further articulate its expectations. If a U.S.-based provider supports non-Iranian customers that supply access to activities authorized under GL D-2 (such as providing access to Iranian news sites or VPNs), then the U.S.-based provider need not evaluate whether providing access to Iranian end users is related to communications. On the other hand, if a U.S.-based provider supports non-Iranian customers providing services or software not incident to communications under GL D-2 (for instance, if the non-Iranian customer provides payroll-management software to Iran), then the U.S.-based provider must evaluate whether the service or software is a prohibited export.

Expansion of Specific Licensing Policy

GL D-2 also expands OFAC's policy for reviewing applications for specific licenses for activities not authorized by the license. In FAQ 1089, the Treasury Department encourages specific license applications by those seeking to export items or conduct other activities in support of internet freedom in Iran that are not authorized by GL D-2.

In particular, GL D-2 expands OFAC's specific licensing policy by encouraging applications for specific licenses for activities to support internet freedom in Iran, including development and hosting of anti-surveillance software by Iranian developers. A Treasury Department official described the agency's specific licensing policy under GL D-2 as forward-leaning and supportive, noting that OFAC will expedite [specific license applications] by working with the State Department for foreign policy guidance.

Key License Features That Have Remained the Same

While GL D-2 uses a number of mechanisms to increase internet access for Iranians, certain exceptions and other limitations have carried over from GL D-1:

* * *

GL D-2 is a welcome upgrade and enhancement to GL D-1, and should encourage the private sector to be more forward leaning with respect to tools and technologies incident to internet-based communications that are now listed or otherwise covered by the license. It remains to be seen whether corresponding changes will be made to communications-related licenses under other OFAC programs. We will continue to report on, and advise on, these nuances as well as any further developments in this evolving area of sanctions law.

The following Gibson Dunn lawyers prepared this client alert: Audi Syarief, Samantha Sewall, Lanie Corrigan*, Judith Alison Lee, Adam M. Smith, and Stephenie Gosnell Handler.


* Lanie Corrigan is a recent law graduate practicing in the firm's Washington, D.C. office and not yet admitted to practice law.

© 2022 Gibson, Dunn & Crutcher LLP

Attorney Advertising: The enclosed materials have been prepared for general informational purposes only and are not intended as legal advice.

Read this article:
United States Expands Sanctions Authorization of Internet-Based Activities in Wake of Protests in Iran - Gibson Dunn

Section’s Distributed GraphQL Hosting Allows Organizations to Quickly Launch and Scale Location-Optimized, Multi-Cloud API Servers – Business Wire

BOULDER, Colo.--(BUSINESS WIRE)--Section, the leading cloud-native distributed compute provider, today announced its new Distributed GraphQL Service, allowing organizations to quickly launch and easily scale location-optimized, multi-cloud API servers. Organizations can host GraphQL in datacenters across town or around the world to improve API performance and reliability, lower costs, decrease impact on back-end servers, and improve scalability, resilience, compliance, security, and other factors, all without impacting their current cloud-native development process or tools. Section handles day-to-day server operations, as its clusterless platform automates orchestration of the GraphQL servers across a secure and reliable global infrastructure network.

"Distributing API servers and other compute resources makes all the sense in the world for developers, as long as it's easy to do," said Stewart McGrath, Section's CEO. "Our new Distributed GraphQL service is simple to start, gives you immediate access to a global network, and automates orchestration so developers can simply focus on their application and business logic."

GraphQL is a query language and server-side runtime for cloud APIs that improves the efficiency of data delivery. According to a report by Akamai, API calls represent 83% of all web traffic, and InfoQ considers GraphQL to have reached early majority usage in its 2022 architecture trends report.
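For readers who have not worked with it, a GraphQL server is essentially a schema plus resolver functions: the client names exactly the fields it wants and gets back exactly that shape, which is what makes data delivery efficient. Below is a minimal, invented example using the Python graphene library; the weather field and data are assumptions for illustration, not part of Section's service.

```python
import graphene

class Query(graphene.ObjectType):
    # One field is enough to show the idea: the client names this field and its
    # arguments, and receives only the data it asked for.
    forecast = graphene.String(city=graphene.String(required=True))

    def resolve_forecast(root, info, city):
        return f"Sunny in {city}"   # a real resolver would query a data source

schema = graphene.Schema(query=Query)

result = schema.execute('{ forecast(city: "Boulder") }')
print(result.data)   # {'forecast': 'Sunny in Boulder'}
```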

With Section, GraphQL servers can be quickly deployed and immediately benefit from multi-cloud, multi-provider distribution. Application users will experience an instant performance boost from reduced latency, while API service availability and resilience are dramatically improved by Section's automated service failure/re-routing capabilities. Organizations will benefit from decreased costs versus hyperscalers or roll-your-own distribution solutions, and can even run other containers alongside the GraphQL server, such as Redis caches, security solutions, etc., to further improve the cost/performance/availability equation.

Section's distributed cloud-native compute platform allows application developers worldwide to focus only on business logic, yet enables their software to behave as if it runs everywhere, is infinitely scalable, always available, maximally performant, completely compliant, and efficient with compute resources and cost. DevOps teams can use existing Kubernetes tools and processes to deploy to Section and set simple policy-based rules to control its clusterless global platform.

Benefits of Section's Distributed GraphQL service include:

To learn more about the benefits of Section's Distributed GraphQL or how to get started, visit: https://www.section.io/blog/turbocharge-graphql/.

About Section

Section is a Cloud-Native Hosting system that continuously optimizes orchestration of secure and reliable global infrastructure for application delivery. Section's sophisticated, distributed, and clusterless platform intelligently and adaptively manages workloads around performance, reliability, compliance, cost, or other developer intent to ensure applications run at the right place and time. The result is simple distribution of applications across town or to the edge, while teams use existing tools, workflows, and familiar rules-based policies. To find out more about how Section is revolutionizing application delivery, please visit section.io.

Read more:
Section's Distributed GraphQL Hosting Allows Organizations to Quickly Launch and Scale Location-Optimized, Multi-Cloud API Servers - Business Wire