Category Archives: Cloud Servers

After GigaOm’s Radar comes Sonar, as it dives deeper into emerging tech – Blocks and Files

GigaOm has developed a diagrammatic lens and research document through which to look at emerging startup technologies with few suppliers but much promise: the Sonar diagram in its Emerging Technology Insight reports.

Its first Sonar report looks at decentralised storage with peer-to-peer technology, and uses a three-axis triangle scheme rating companies' positioning in terms of their roadmap, technology and strategy. There are two concentric triangles, the outer one for Challenger suppliers and the inner one for Leaders. In general, suppliers have higher overall value the closer they are located to the centre of this triple-axis space.

Each vendor's position has an arrow attached showing its expected direction of travel over the next 12 to 18 months.

Analyst Enrico Signoretti writes that the report's analysis is focused on highlighting core technology, use cases and differentiating features, rather than drawing feature comparisons. He argues: "The goal is to define the basic features the user should expect from products that correctly implement the technology, while keeping an eye on characteristics that will have a role in building the differentiating value over time."

The emerging technologies covered may stay niche, or develop and become mainstream. In that case they could be covered by GigaOm's Radar reports. A chart in the report shows the technology evolution involved.

Distributed storage using a peer-to-peer scheme has multiple storage nodes -- hundreds if not thousands of servers or PCs -- storing redundant chunks of files using an error-correcting and/or erasure-coding scheme, so the original data can be reconstructed if one or more nodes fail.
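
To make the redundancy mechanism concrete, here is a minimal Python sketch of single-parity erasure coding, the simplest member of this family: a file is split into k data chunks plus one XOR parity chunk, and any single lost chunk can be rebuilt from the survivors. This is illustrative only; production networks like those covered here use stronger Reed-Solomon-style codes that survive many simultaneous node failures.

```python
from typing import List, Optional

def xor(a: bytes, b: bytes) -> bytes:
    """Byte-wise XOR of two equal-length chunks."""
    return bytes(x ^ y for x, y in zip(a, b))

def encode(data: bytes, k: int) -> List[bytes]:
    """Split data into k chunks and append one XOR parity chunk."""
    size = -(-len(data) // k)                    # ceiling division
    padded = data.ljust(size * k, b"\0")         # pad so all chunks match in length
    chunks = [padded[i * size:(i + 1) * size] for i in range(k)]
    parity = chunks[0]
    for c in chunks[1:]:
        parity = xor(parity, c)
    return chunks + [parity]                     # k + 1 pieces, spread across nodes

def repair(pieces: List[Optional[bytes]]) -> List[bytes]:
    """Rebuild at most one missing piece (marked None) from the rest."""
    missing = [i for i, p in enumerate(pieces) if p is None]
    if len(missing) > 1:
        raise ValueError("single parity tolerates only one lost node")
    if missing:
        survivors = [p for p in pieces if p is not None]
        rebuilt = survivors[0]
        for p in survivors[1:]:
            rebuilt = xor(rebuilt, p)            # XOR of survivors equals the lost piece
        pieces[missing[0]] = rebuilt
    return pieces

pieces = encode(b"peer-to-peer storage demo", k=4)
pieces[2] = None                                 # simulate a failed node
restored = b"".join(repair(pieces)[:4]).rstrip(b"\0")
assert restored == b"peer-to-peer storage demo"
```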

A central management system controls and manages the whole network, maintaining and updating metadata to detail which nodes store which chunks of which original data. This system accepts incoming data, processes it and deploys it across the network, and also retrieves data upon request.
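
The metadata such a coordinator keeps can be as simple as a map from chunk identifiers to the nodes holding copies. A toy sketch follows; the node names and layout are invented for illustration, not taken from any vendor's design.

```python
# Toy placement map a coordinator might maintain: chunk -> replica nodes.
placement = {
    "file42/chunk0": ["node-17", "node-903"],
    "file42/chunk1": ["node-56", "node-2048"],
    "file42/parity": ["node-7", "node-311"],
}

def locate(chunk_id: str, alive: set) -> str:
    """Return any live node holding the chunk so a read can be routed there."""
    for node in placement[chunk_id]:
        if node in alive:
            return node
    raise LookupError(f"all replicas of {chunk_id} are offline")

print(locate("file42/chunk0", alive={"node-903", "node-56"}))  # -> node-903
```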

Client systems export data to be stored and also request data retrieval, using APIs or protocols such as S3. Blockchain technology may be used as a data integrity and security system, and also to underpin a payment system. Data storage providers can be paid in Bitcoin and storage consumers can be billed in Bitcoin.
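
Because several of these services expose an S3-compatible interface, a stock S3 client can talk to them simply by overriding the endpoint. A sketch using Python's boto3; the endpoint URL and credentials below are placeholders, not any specific vendor's values.

```python
import boto3

# Point a standard S3 client at a decentralised-storage gateway instead of AWS.
# The endpoint and keys here are illustrative placeholders.
s3 = boto3.client(
    "s3",
    endpoint_url="https://gateway.example-decentralised-storage.io",
    aws_access_key_id="YOUR_ACCESS_KEY",
    aws_secret_access_key="YOUR_SECRET_KEY",
)

s3.create_bucket(Bucket="archive")
s3.upload_file("backup.tar.gz", "archive", "backup.tar.gz")      # chunked and spread behind the gateway
s3.download_file("archive", "backup.tar.gz", "restored.tar.gz")  # reassembled on retrieval
```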

Example use cases for technology like this with indeterminate network access speed include archives, backup, content distribution, video storage, secure collaboration and private data sharing. Mobile and cloud-native app developers may find such distributed storage attractive because it is generally much less expensive than a mainstream public cloud.

Five suppliers are covered: 0Chain, Filebase, Protocol Labs (with its Filecoin product), the Sia Foundation and Storj. We have mentioned Storj and its Tardigrade technology in several storage round-up articles, as well as Filecoin.

In November last year we wrote: "Peer-to-peer-based cloud storage provider Filecoin says its mainnet blockchain-based public storage cloud has reached 1.2EB (1.2 exabytes) of capacity and claims to offer a hyper-competitive alternative to AWS, Azure and Google." The service is priced in Bitcoin and the capacity is available across thousands of servers and PCs worldwide whose owners assign spare capacity to Filecoin.

We have covered a sixth one, Cubbit, only recently. It's so new it's not yet in the Sonar report.

The GigaOm report describes each vendor and its product, analyses the technology as a whole, and reviews considerations for enterprise adoption. Signoretti writes: "It helps organisations of all sizes to understand a technology, its strengths and weaknesses, and its fit within an overall IT strategy." Read it to keep an eye on newly emergent storage technologies.

Read more:
After GigaOm's Radar comes Sonar, as it dives deeper into emerging tech - Blocks and Files

Google Drive tips and tricks: 9 features you might have missed – CNET

Google Drive has some hidden features that make it even more useful.

I use Google's services every day, including Google Drive. I use them for work. I wrote this story in Google Docs, in fact. I use them at home, whether using Sheets to map out the summer schedule for my kids, or adding to my ever-expanding folder of recipes -- it makes it easy to share favorites with friends or access needed ingredients on my phone when I'm at the grocery store. It's hard to imagine my digital life before Google.

I've used Google Drive long enough that I've discovered a few hidden gems along the way that make Google's cloud service an even better tool. Here are nine features that I use that might also help you.


Need to work during your commute or other times when you're not connected to the internet? No problem. Google Drive lets you access your files while you're offline, and then it'll sync your changes when you get back online. You'll need to use the Chrome browser and be signed in to your Google account.

First, install the Google Docs Offline extension for Chrome. Then sign in to Google Drive, open Settings and check the box for Create, open, and edit your recent Google Docs, Sheets, and Slides files on this device while offline.

You can edit Microsoft Word documents in Google Drive, but sometimes the formatting looks weird. If you'd rather convert any Word doc you add to Google Drive to the native Google Docs format, you can set that up with a couple clicks. Open Settings and check the box for Convert uploaded files to Google Docs editor format. Done and done -- no more Word docs in your Google Drive library.

Sharing is baked into Google Drive: Up to 100 people can work on a Google Doc, Sheet or Slide at the same time. I doubt you'll come close to reaching triple digits for the number of simultaneous collaborators, but with even a handful of people editing the same doc at the same time, it can be difficult to see who's changing what.

To keep tabs on the various edits being made to a Google Doc, go to File > Version History > See Version History to open a panel on the right side that shows who updated the doc and when. You can click each version to see what changes were made, and you can click the triple-dot button to rename an earlier version to make it easier to keep track. You can also make a copy of earlier versions, if you want to keep a draft you fear you might lose track of.

Previously called Quick Access and now labeled Suggested, there's a belt of thumbnails across the top of the My Drive view showing your recently modified files. I find it a huge time-saver, but let's say you use Drive at work and were updating your resume. You might not want your boss looking over your shoulder and seeing that you were recently working on your CV. You can hide this belt of thumbnails in Drive's settings. Scroll to the end of the General settings page and, next to Suggestions, uncheck the box for Make relevant files handy when you need them in Suggested and hit Done.

There's a little Drive icon at the bottom of Gmail's compose window. It lets you attach files you have stored in Drive or simply send a link. For Google Drive formats -- Docs, Sheets, Slides and so on -- your only option is to send a link to the file. For other file types -- like PDFs, Word docs and images -- you have the option of sending them as an attachment or a Drive link, which lets you share files larger than Gmail's 25MB size limit for attachments.

This one's hiding in plain sight. In the search box at the top of Google Drive, there's a filter button along the right edge. Click it and you'll get a panel of search options to filter your search results with. If you've used Google Drive for years and have accumulated a large library of files, then these search options are hugely useful for narrowing your results. You can filter by file type, date modified and owner. For shared documents, you can filter by someone with whom you've shared a file. And so you don't leave someone hanging, you can also filter by files that have an action item assigned to you or that have suggestions waiting for you.

You've got a few options for clearing the formatting for text you paste into Docs. You can highlight the text and select Normal text from the toolbar at the top. Or you can go to Format > Clear formatting. (For the latter, you can use the keyboard shortcut Ctrl-\, or Command-\ on a Mac.)

You can avoid the format-removal process by holding down Shift when you paste text. Yep, Ctrl-Shift-V pastes without any formatting. That's Command-Shift-V for Mac users.

Want to back up your phone's important data to Drive? You can! And with a single tap. On the mobile app, go to Settings > Backup and choose what you want to back up -- contacts, calendar events or photos and videos (or all three). Just tap the Start Backup button to get rolling. It'll likely take a while, so you might want to start the process overnight. Your phone will need to be plugged in and connected to Wi-Fi.

With Google's Backup and Sync app, you can back up the contents of your Mac or PC -- or just selected folders. And you can go the other way and sync the files you have stored on Google Drive with your computer for easy, offline access.

To get started, download Backup and Sync to your Mac or PC and follow the instructions to install it and sign in. The app will install a folder on your computer called Google Drive, and you can drag photos and documents onto the folder to sync its contents with Google Drive on Google's servers. To sync other folders on your computer with Drive, open Backup and Sync's settings and select the folders you'd like to sync, such as Documents or Pictures.

For more, check out 10 Gmail tricks you'll use every day and our list of the best Android phones to buy for 2021.

Link:
Google Drive tips and tricks: 9 features you might have missed - CNET

Robin.io Platform on QCT Servers Accelerates Cloud-native Transformation – Business Wire

SAN JOSE, Calif.--(BUSINESS WIRE)--Quanta Cloud Technology (QCT), a global data center solution provider, announced IronCloud Robin Cloud Platform, the latest addition to its 5G solutions.

The partnership between QCT and Robin helps customers accelerate their cloud-native transformations. The solution is built on Robin.io's Multi-Data Center Automation Platform (MDCAP) and Cloud Native Platform (CNP), a comprehensive carrier-grade bare-metal-to-services orchestration and enhanced Kubernetes platform for 5G and Multi-access Edge Computing (MEC) applications. The solution harmonizes virtual machines and containers, enabling unprecedented resource sharing with easy-to-use, unified workflows and lifecycle automation that is customer-proven to reduce both CAPEX and OPEX. All of this is deployable on QCT servers using 3rd Gen Intel Xeon Scalable processors.

To accelerate cloud-native transformations, QCT and Robin have developed a centralized automation platform using hardware, network acceleration technologies and best practices for orchestration, automation and lifecycle management. The result is an optimized solution that utilizes a cloud-native infrastructure for Telco workloads that supports both virtual machines and containers, in the same resource sharing cluster, from regional data center to far edge. Operators and enterprises can reliably reach the high throughput and low latency required by cloud-native 5G applications (i.e., Core, RAN and CDN). This partnership reduces the challenges of deploying and managing networks, while providing an optimized, cost-efficient infrastructure.

IronCloud Robin Cloud Platform features the following 3rd Gen Intel Xeon Scalable processor-based servers:

"Lifecycle management and automation are keys to reducing 5G infrastructure and operation costs," said Mike Yang, president of QCT. "By partnering with Robin.io, we have created an automated cloud-native platform, IronCloud, for our mutual customers to boost 5G application time to market."

"Our partners are creating optimized, performant and automated solutions that accelerate the path to cloud-native for 5G," stated Keate Despain, Intel Network Builders and Ecosystem Programs Director. "The IronCloud Robin Cloud Platform is a solution that will enable companies to deliver an end-to-end cloud-native platform and a 5G service delivery network at cost savings."

"Cloud-native technology has proven benefits for 5G economics," said Partha Seetala, CEO and founder of Robin.io. "Our close relationships with QCT and Intel have delivered a production-ready, open and extensible platform for deployment and life cycle management. It's an entire Telco Network Stack, with both CNFs and VNFs, that offers industry-leading TCO."

About QCT

QCT is a global data center solution provider that combines the efficiency of hyperscale hardware with infrastructure software from a diversity of industry leaders to solve next-generation data center design and operational challenges. QCT serves cloud service providers, telecoms and enterprises running public, hybrid and private clouds. The parent of QCT is Quanta Computer, Inc., a Fortune Global 500 corporation. http://www.qct.io

About Robin.io

Robin.io, the 5G and application automation platform company, delivers products that automate the deployment, scaling and life cycle management of data- and network-intensive applications and for 5G service chains across edge, core and RAN. The Robin platform is used globally by companies including BNP Paribas, Palo Alto Networks, Rakuten Mobile, SAP, Sabre and USAA. Robin.io is headquartered in Silicon Valley, California. More at http://www.robin.io and Twitter: @robin4K8S.

Intel, the Intel logo and other Intel marks are trademarks of Intel Corporation or its subsidiaries. Other names and brands may be claimed as the property of others.

See more here:
Robin.io Platform on QCT Servers Accelerates Cloud-native Transformation - Business Wire

Do the costs of the cloud outweigh the benefits? – The Economist

Jul 3rd 2021

FOR THE past decade few aspects of modern life have made geeks drool more than the cloud, the cumulus of data centres dominated by three American tech giants, Amazon, Microsoft and Google, as well as Alibaba in China. In America some liken their position of impregnability to that of Detroit's three big carmakers, Ford, General Motors and Chrysler, a century ago. During the covid-19 pandemic they have helped transform people's lives, supporting online medical appointments, Zoom meetings and Netflix binges. They attract the brightest engineering talent. Amazon Web Services (AWS), the biggest, is now part of business folklore. So it is bordering on heresy to argue, as executives at Andreessen Horowitz, a venture-capital firm, have done recently, that the cloud threatens to become a weight around the necks of big companies.


That possibly explains the defensiveness of Andreessen Horowitz's Martin Casado, co-author of the blog post titled "The cost of cloud: a trillion-dollar paradox". On June 24th he described it in a gathering on Clubhouse, a social-media app, as "one of the more misread, misquoted things I've ever done". At the risk of further mischaracterisation, Schumpeter would summarise it as follows. It uses paltry evidence and baffling numbers (where, for instance, does the trillion dollars come from?) to propose an excessively all-or-nothing business conundrum: you're crazy if you don't start in the cloud; you're crazy if you stay on it. Yet for all its flaws, it is well-timed. It poses a question that businesses will have to think about for years to come. If they entrust all their data, the lifeblood of the digital economy, to an oligopoly of cloud providers, what control do they have over their costs?

It is a problem many companies are already grappling with. On June 29th The Information, an online tech publication, reported that Apple, maker of the iPhone, is poised to spend $300m on Google Cloud this year, a 50% increase from 2020. It is also using AWS and its own data centres to handle overflowing demand for services such as iCloud, a data-storage app. On the same day the chief operating officer of a big software firm told your columnist that the current trajectory of cloud costs is unsustainable, but that it does not make sense just to leave the cloud. "It is very hard. One can't be so simplistic as to say it's all cloud for ever or it's no cloud." Jonathan Chaplin of New Street Research likens acquiring flexible data storage on the cloud to renting flexible office space from the likes of WeWork. Both are similarly expensive, he says. He knows: his boutique firm of analysts is considering renting both.

One reason Andreessen Horowitz has stirred up a storm is because it went a step further. The blog post raises the prospect of repatriation, arguing that companies could save considerable sums of money by bringing back their data from the cloud to their own servers. It uses the example of Dropbox, a file-sharing firm that in 2017 said it had saved $75m in the two years before its initial public offering chiefly by clawing back workloads from the cloud. Mr Casado and his colleague, Sarah Wang, estimate that a group of 50 such publicly traded software firms could halve their cloud bills by doing the same, collectively saving $4bn a year. That could, using generous price-earnings multiples, improve their market value by around $100bn. You dont have to be a super-sleuth to suspect an ulterior motive: if Silicon Valley unicorns take the hint, higher valuations could make venture capitalists like Andreessen Horowitz more money when they go public.

This is an oversimplification, however, in several ways. First, the cloud is not just a cost. It can also boost revenues by providing young companies with the flexibility to scale up rapidly, accelerate new product launches and expand internationally without having to build their own mishmash of racks, servers, wires and plugs. Moreover, cloud providers offer more than storage and spare capacity. Increasingly their most valuable services are data analytics, prediction and machine learning, made possible by the vast troves of data they can crunch. They may also be more difficult to hack. The question is whether a company gets a better return on its investment by paying for cloud services, or by paying to bring data centres, engineers and cyber-security in house.

Second, the supply of engineers is finite. Whereas in the past coders were trained to work with on-premise servers, the latest generation knows more about working with cloud providers. That makes repatriation harder. In a recent podcast about its decision in 2015 to shift entirely from its own servers to Google Cloud, Spotify, a music-streaming app, highlighted the opportunity costs of having engineers tied up managing its own data centres rather than working on new products. (As a geeky relic, it keeps pieces of its last big server in an urn.)

Third, profits are in the eye of the beholder. A company may hope to improve margins by reducing the cost of renting cloud servers. But building its own data centres requires investment. Labour costs will also rise to pay for engineers to manage them.

There is little to suggest that the stampede into the cloud is slowing. Gartner, a data-gatherer, predicts that worldwide spending on cloud services will increase by almost a quarter this year, to more than $330bn. "Repatriation is an urban myth," says Sid Nag, Gartner's research vice-president. "We just don't see it."

Continuing to write blank cheques to cloud providers is not sustainable, either. The more firms embrace cloud computing, the more carefully they must manage its costs. The biggest users, such as Apple, bargain for huge discounts. Smaller ones lack the clout. To keep costs down, they may need to run basic storage in house, diversify into the "multicloud" by spreading computing across several clouds, and make engineers responsible for cloud expenditures. With luck, a low-cost alternative to the biggest clouds will emerge, much as Japanese car companies challenged Detroit's big three. That took half a century, though.

This article appeared in the Business section of the print edition under the headline "Raining on the parade"

Read the rest here:
Do the costs of the cloud outweigh the benefits? - The Economist

Top Edge Computing Companies of 2021 – IT Business Edge

Edge computing is a distributed information technology (IT) architecture that not only addresses the shortcomings of traditional cloud computing, but also works hand-in-hand with it. The networking philosophy focuses on bringing computing as close to the source of client data as possible.

In essence, edge computing refers to bringing computation to the network's edge (a user's computer, an Internet of Things (IoT) device, an edge server, etc.) and running fewer processes in the cloud. By moving cloud-intensive processes to local places, edge computing minimizes long-distance communication between a client and a server.

Also read: How AI Will Be Pushed to the Very Edge

Cloud computing offers several advantages over on-premises computing. Cloud service providers offer centralized access to the cloud via any device over the internet. Cloud computing, however, can lead to network latency, given the sheer distance between users and the collection of data centers where cloud services are hosted.

This is where edge computing comes into the picture. By bringing computation closer to end users, edge computing minimizes the distance client data has to travel while keeping the centralized nature of cloud computing. For example, let's say a bank incorporates 50 high-definition (HD) IoT video cameras to secure the premises. The cameras stream a raw video signal to a cloud server. On the cloud server, the video output is put through a motion-detection application to segregate clips that feature movement. These clips are saved on the database server. Due to the high volume of video footage being transferred, significant bandwidth is consumed.

By introducing the motion sensor computation to the network edge, much of the video footage will never have to travel to the cloud server. This will lead to a significant reduction in bandwidth use. This can be implemented by introducing an internal computer to each video camera.
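
As a rough sketch of that edge-side filter, the loop below (Python with OpenCV) could run on each camera's internal computer, forwarding frames to the cloud only when enough pixels change between frames. The threshold and the upload function are illustrative placeholders, not part of any vendor's product.

```python
import cv2

MOTION_PIXELS = 5000          # illustrative threshold; tune per camera and scene

def forward_to_cloud(frame) -> None:
    """Placeholder: a real deployment would push this frame/clip to the cloud server."""
    print("motion detected -- forwarding footage")

cap = cv2.VideoCapture(0)     # the camera's local video source
ok, prev = cap.read()
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    delta = cv2.absdiff(prev_gray, gray)                    # frame-to-frame change
    _, mask = cv2.threshold(delta, 25, 255, cv2.THRESH_BINARY)
    if cv2.countNonZero(mask) > MOTION_PIXELS:              # enough movement?
        forward_to_cloud(frame)                             # only this footage leaves the edge
    prev_gray = gray

cap.release()
```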

The cloud server can now communicate with a larger number of video cameras, as only important footage is transferred to the server. Edge computing has several use cases, some of which include self-driving cars, medical monitoring devices, and video conferencing. In this guide, we will traverse all you should know about the best edge computing companies.

Also read: The Impact of 5G on Cloud Computing

Here are the best edge computing providers:

Mutable is a public edge cloud platform that operates like an "Airbnb for servers." Mutable's public cloud deployment on the edge brings the internet closer to users by turning the underutilized servers of network operators, at a physical distance of 40 kilometers or less from users, into new revenue streams.

By bringing the processing closer to the makers of next-generation IoT, robotics, autonomous vehicle, artificial reality/virtual reality (AR/VR) and cloud gaming applications, Mutable enables developers to focus on continuous product iteration and development.

Key Differentiators

MobiledgeX is building a marketplace of edge computing services and resources that will connect developers with leading telecom operators like British Telecom, Telefonica, and Deutsche Telekom, in the quest to power the next generation of devices and applications.

The edge computing company develops edge cloud software to help telco operators run a telco edge cloud on their infrastructure, and provides a device-native software development kit (SDK) and matching engines that developers use to bring their cloud-native applications to the network edge and take advantage of the telco edge cloud.

Key Differentiators

Affirmed Networks' technology is based on an open architecture and reduces operational costs, provides flexibility in the application of services and delivers high performance in a small footprint to help you meet customer requirements.

Affirmed Networks' mobile edge computing (MEC) solution, Affirmed Cloud Edge, offers enterprises and communication service providers (CSPs) the ability to keep data local and host applications on customers' premises, to maximize efficiency and reduce latency.

Affirmed Cloud Edge can be utilized and deployed at the network edge or used as part of the cloud edge offerings from Microsoft Azure, Amazon Web Services (AWS) or Google Cloud.

Key Differentiators

EdgeConneX builds and operates proximate, powerful, purpose-built data centers for customers in any deployment or scale -- edge, far edge, hyperscale and edge cable landing stations. At the time of writing, EdgeConneX operates over 40 data centers.

EdgeConneX designs and deploys purpose-built data centers to ensure the most efficient placement of customer network and IT infrastructure, in order to reduce latency and heighten performance. This includes rapidly built and specially tailored large-scale data center solutions for hyperscale customers.

Key Differentiators

Section helps accelerate your path to the network edge. The edge computing platform powers next-generation applications for software as a service (SaaS), platform as a service (PaaS), and application providers to deliver more secure and quicker digital experiences.

The edge computing provider allows you to deploy, scale, and protect your applications at the network edge, with the necessary flexibility, control, simplicity, and familiarity.

Key Differentiators

Mutable harnesses the power of underutilized servers at a physical distance of less than 40 kilometers and ensures latency rates of less than 20 milliseconds. The edge computing company transforms these servers into instant micro-data centers and promotes security, proximity, speed and sustainability.

Overall, Mutable is the best edge computing platform for enterprises. MobiledgeX simplifies deployment, management and analytics for app developers and telco operators and is a sublime platform. Affirmed Networks supports low latency, high-bandwidth applications and helps reduce OpEx with automated service provisioning and orchestration.

EdgeConneX is ideal for hyperscale customers. Section is a worthwhile option in its own right. Go through the offerings of each of the edge computing providers mentioned in this guide and select one that corresponds to your enterprise requirements and expectations.

Read next: AIOps Trends & Benefits for 2021

Originally posted here:
Top Edge Computing Companies of 2021 - IT Business Edge

The Basics of End-to-End Cloud Media Production – TV Technology

A paradigm shift in media-production technologies is changing how the cloud is perceived, used, presented and applied to media production. The lines between ground-based and cloud-based media production are becoming blurred.

One of the first steps in getting to a cloud-based production environment is understanding how the requirements and components differ, and how physical hard interfaces are being replaced by dashboards and virtualized desktops.

We start with cloud computing, an application-based solution also known as infrastructure in the cloud. Cloud computing is divided into a front-end part and a back-end part. To the user, these details don't need to be thoroughly understood, but it is helpful to know that the end-to-end ecosystem is changing so that the acceptability of these differences can be evaluated and adopted.

Users needing access to the cloud will typically employ a browser and will utilize a (public) internet service provider (ISP) for that access. Sometimes, instead of an ISP, there may be a direct connection portal offered by the cloud service provider as an added-cost feature that provides faster, more secure connectivity.

The primary component of a cloud computing solution is its back-end, which has at its core the responsibility for securing, storing and/or processing data on its often proprietary central servers, compute stacks, databases and storage sets. Cloud computing is multifaceted, employing databases, servers, applications and other elements including orchestration, storage and monitoring.

For years this Cloudspotter's Journal has identified the advantages of cloud capabilities, including scalability, virtualization, availability and the like (Fig. 1). It goes without saying that services available in the cloud continue to grow.

Yesteryear had the cloud focused on storage. Today, cloud providers offer hundreds of specific services ranging from compute and storage to cloud consulting (through partners) and management. Each provider aims to enable users to deploy their compute and storage requirements in the cloud, offering various competitive platforms, all eager for users to experiment in any way conceivable.

MEDIA-SPECIFIC AND CLOUD FORWARD

In more recent times the capabilities typically exposed in cloud services have started to reach deeper and farther into media-specific offerings. Global connectivity coupled with the rapid exchange of content throughout the world has strengthened those capabilities, with the provisioning of services increasing at an almost exponential rate. Applications for media production in the cloud are no longer just a unique opportunity; they are becoming a way of operating.

Cloud-forward initiatives are definitely expanding beyond simple storage and compute functions or alternatives to back-office software consolidation. Once thought of mainly as backup storage, cloud services now endeavor to provide full-time playout of programming on a channelized basis that includes sports, gaming, OTT services and delivery, and even end-to-end production using core products from providers who had previously only utilized ground-based server architectures.

Major media organizations are steadily adopting and combining technologies that take the hardware out of the shop and place it in an entirely software-centric environment connected by on-ramps and off-ramps located almost anywhere. Dynamic scalability and high-performance storage/compute capabilities are enabling this fundamental change in how content is assimilated into the production ecosystem.

Today, GPU-based virtual machines are enabled using infrastructure-as-code for software applications that were formerly run on dedicated pizza-box servers. As a result, organizations are already shifting away from in-house central equipment rooms and past the outsourced data center directly into public cloud environments. Media workflows are now being developed as cloud-native solution sets, ignoring how things used to be done and placing them into unconstrained, non-interdependent environments that are treated almost the way a greenfield facility might have been engineered for single-purpose occupation as recently as five years ago.
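
To illustrate what "infrastructure-as-code" means in practice, here is a hedged sketch using AWS's boto3 SDK: one API call stands in for what used to be a physical rack install. The AMI ID, instance type and tag are illustrative placeholders, not a recommendation.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Declare the former "pizza-box server" in code; the values are placeholders.
response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",      # machine image carrying the application
    InstanceType="c5.4xlarge",            # sized for the media workload
    MinCount=1,
    MaxCount=1,
    TagSpecifications=[{
        "ResourceType": "instance",
        "Tags": [{"Key": "role", "Value": "media-render-node"}],
    }],
)
instance_id = response["Instances"][0]["InstanceId"]
print(f"provisioned {instance_id}")
```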

Automation is a key factor in the success of cloud-native media production. Back-office-like servers are no longer mainstream. Individual sets of configurations coupling a single-function device into the next single operation are evaporating. Plug-in management that was custom-tailored to the product and then tweaked to meet operational needs is now orchestrated to rapidly adapt to multiple functional requirements without discrete, complex or time-consuming adaptations.

Once the automation process is confirmed and the capability requirements are established, the rest just happens. Through a dashboard abstraction of the functions, users need no longer concern themselves with all the nuances of manually moving files around various services that are typically steeped in numerous interfaces, which must be individually accessed and configured for each successive use or application. Flows are continuous, repeatable (if necessary) and able to be monitored.

Using configuration management tool sets, images of the application programming interfaces (APIs) are landed on a resource pool of compute servers that operationally never see the light of tech-administrators. Systems are booted up, configured for the applications per the dashboard, and the artist/editor starts their creative tasks.

COLLAPSE AND REDEPLOY

Once the production, show or activity is completed and confirmed, automation then collapses the environment, which stops the process and halts the billing charges. Ground-based users don't do anything other than confirm the "end" or "stop" command and walk away. If there is a requirement to change or adjust something, the exact configurations can be re-established and the workflows can continue as before.
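
Continuing the provisioning sketch above, the collapse step is equally scriptable: find everything tagged for the production and terminate it, which is what stops the billing. Again, the tag name is an assumed placeholder.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Find every instance tagged for this production and shut it down.
reservations = ec2.describe_instances(
    Filters=[{"Name": "tag:role", "Values": ["media-render-node"]}]
)["Reservations"]

ids = [inst["InstanceId"] for r in reservations for inst in r["Instances"]]
if ids:
    ec2.terminate_instances(InstanceIds=ids)   # the meter stops once these terminate
```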

New capabilities were brought into accelerated use as a consequence of Covid-19 and are being applied to next-gen production ecosystems. Content supply chains can now be adapted to the cloud, assuming the feature sets are available. Previously ground-based features, such as analysis, transformation and quality control, now become exception-based background tasks in the new cloud model.

Virtual desktop infrastructures (VDI) essentially take all the previously required elements of media production and wrap them into a solution that is secure, features high-quality and fast-reacting actions and can be accessible anywhere there is a stable internet connection with sufficient bandwidth (Fig. 2).

VDI technologies, which utilize functionality known as zero-clients, offer users a variety of advantages including mobility, accessibility, flexibility and improved security. The latter, security, is accomplished because the data now lives only on the servers and does not need to be transported to the active user's workspace. Cloud-enabled platforms can replicate data using other secure technologies only to specific locations, even employing transparent-ledger technology known as blockchain.

Microservices and containerization are the keys to this future cloud infrastructure for production (Fig. 3). Calling up only what is needed, only when it is needed, is what production services in the cloud are moving to.

Entire catalogs of capabilities are growing out of these cloud-native services, which up until now could not be established except through discrete sets of hardware and software that were purpose-built to do one and usually only one specific function or operation.

Reliable, secure, scalable, protected and cost-effective media production, without the annoyances of managing a complex local infrastructure, is changing the face of media from one end to the other. Whether the production services are hosted in a public cloud, a regional co-lo site or even in your own private data center, the concepts being developed (and perfected all the time) are real, available and here today.

If you're not currently using these kinds of services, you probably will in short order. Stay tuned.

Karl Paulsen is chief technology officer at Diversified and a frequent contributor to TV Tech in storage, IP and cloud technologies. Contact him at kpaulsen#diversifiedus.

Go here to read the rest:
The Basics of End-to-End Cloud Media Production - TV Technology

How Virtual, Cloud-Based Technologies Are Powering the Next Industrial Revolution – Total Retail

Companies in many industries, including technology, construction, and healthcare, are completely revamping the way in which their manufacturing arms are designing, building, producing and servicing the goods they need for projects and customers.

Just a short five years ago, these manufacturers began to embark on the second coming of their own industrial revolution. It wasn't enough that the internet and even mobile technology created a wealth of efficiencies in the production cycle.

Instead, a few years ago, these manufacturers began to see how virtual technologies could completely change the way they operated, interacted with design teams, and provided more timely responses to customer inquiries.

Gains in these areas as a result of virtual technologies have been quick to illustrate early and noticeable returns for the executives running these businesses.

In fact, nearly half of the executives polled in a recent survey (44 percent) said they're experiencing approximately 10 percent in operational savings by using immersive mixed reality technologies in the design, training, production or customer service areas of their business. A year ago, only a quarter of businesses (26 percent) were seeing similar results in savings.

In terms of overall production efficiency, 45 percent of enterprises are seeing at least a 10 percent increase today, up from only 11 percent a year ago.

However, these increases don't tell the whole story. When these virtualized technologies (such as augmented reality and virtual reality, AR/VR) were initially utilized by manufacturers, they were leveraged in an on-premise environment. Today they're utilized in a cloud environment, bringing even more efficiencies and returns to the business.

The basic difference between cloud and on-premise data is where it lives. On-premise software and data are installed locally, on a manufacturer's computers and servers inside the actual facility, whereas cloud software and data are hosted on a server and accessible via a web browser over the internet.

On-premise infrastructures limit the speed and scalability needed for today's virtual designs, and they also limit the ability to conduct knowledge sharing between organizations, which can be critical when designing new products and understanding the best way for virtual buildouts.

Manufacturers are overcoming these limitations by leveraging cloud-based (or remote server-based) virtual platforms powered by distributed cloud architecture and 3D vision-based artificial intelligence. These cloud platforms provide the desired performance and scalability to drive innovation in the industry at speed and scale.

Imagine what it would be like to virtually design an airplane using the different eyeglass filters used by an ophthalmologist during a typical eye exam. Some filters allow you to read only the larger print because they restrict your ability to read -- this would be designing virtually in an on-premise software environment. Other filters allow you to see fine print with pinpoint accuracy -- this is what's possible in a cloud environment.

One of the key requirements for virtual applications is to precisely overlay an object's model, or digital twin, on the object itself. This helps in providing work instructions for assembly and training, as well as catching any errors or defects in manufacturing.

Most on-device object tracking systems use 2D image and/or marker-based tracking. This severely limits overlay accuracy in a 3D environment because 2D tracking cannot estimate depth with high accuracy, and consequently neither the scale nor the pose. This means that even though users can achieve what looks like a good match when looking from one angle and/or position, the overlay loses accuracy during alignment.
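
The depth-scale ambiguity has a simple geometric root. Under an ideal pinhole camera model with focal length $f$, a 3D point $(X, Y, Z)$ projects to image coordinates

$$x = f\,\frac{X}{Z}, \qquad y = f\,\frac{Y}{Z}$$

Replacing $(X, Y, Z)$ with $(sX, sY, sZ)$ for any scale factor $s > 0$ leaves $(x, y)$ unchanged, so a single 2D view cannot distinguish a small, near object from a large, distant one; absolute depth, scale and therefore pose remain ambiguous. This is precisely the gap the deep 3D approaches described next are meant to close.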

Deep learning-based 3D AI allows users to identify 3D objects of arbitrary shape and size in various orientations with high accuracy in the 3D space. This approach is scalable with any arbitrary shape and is amenable to use in enterprise use cases requiring rendering overlay of complex 3D models and digital twins with their real-world counterparts.

Cloud technology is pivotal to achieving this level of detail because the technology and hardware used in an on-premise environment easily overheat from the compute power needed. Virtual technology requires a precise and persistent fusion of the real and virtual worlds. This means rendering complex models and scenes in photorealistic detail, at the correct physical location (with respect to both the real and virtual worlds), with the correct scale and accurate pose.

This is only achieved today by using discrete GPUs from one or more cloud-based servers and delivering the rendered frames wirelessly or remotely to head-mounted displays (HMDs) such as the Microsoft HoloLens and the Oculus Quest.

An increasing number of manufacturers are moving their virtual solutions away from on-premise data centers. Today, 48 percent of enterprises are leveraging cloud-hosted environments, and another 21 percent say they will leverage the cloud when they implement immersive reality solutions in the future.

Dijam Panigrahi is co-founder and COO of GridRaster, a leading provider of cloud-based AR/VR platforms that power compelling high-quality AR/VR experiences on mobile devices for enterprises.

Link:
How Virtual, Cloud-Based Technologies Are Powering the Next Industrial Revolution - Total Retail

The 10 Coolest Cloud Security Tools Of 2021 (So Far) – CRN

Locking Down The Cloud

Vendors have made great advances thus far in 2021 around securing cloud applications, data, and workloads, rolling out tools that can discover critical data located in the public cloud, centrally manage Amazon Web Services EC2 instances in real time, and visualize and prioritize cloud security risks to reduce the attacker's blast radius and facilitate faster investigations.

Securing customer and workforce identity has been a major area of investment, with companies provisioning just-in-time access to hybrid and cloud servers and making it possible for all users and devices to securely access cloud, mobile, SaaS and on-premises applications and APIs. Other vendors, meanwhile, have focused on automatically discovering and preventing risks for new SaaS applications.

Four of the coolest new cloud security tools come from companies based in California, four come from companies based in the Northeastern United States, and companies based in Colorado, Michigan and France each contributed one offering to the list. Read on to learn what new cloud security features and functionality partners are now able to enjoy.

For more of the biggest startups, products and news stories of 2021 so far, click here.

See the article here:
The 10 Coolest Cloud Security Tools Of 2021 (So Far) - CRN

How Much Is Hybrid Work Really Going to Cost? – PCMag

If one segment of the tech industry absolutely loves COVID-19, it's definitely cloud vendors. Whether it's public cloud infrastructure or all the managed services that run on them, the pandemic has companies accelerating cloud adoption just to keep the business working with no one in the office. In fact, a study by Synergy Research Group has business cloud spending growing by 35% in 2020, exceeding spending on on-prem data centers and software, which dropped by 6% in the same year.

Now factor in that the pandemic has us all fired up about hybrid work, with IDC research predicting that up to 90% of enterprises will be permanently using that model by the end of 2022. That doesn't just mean employees working on their couch. It's that plus multi-cloud for mission-critical apps that now need to span hybrid workers. This is where some resources are at HQ, some are in the cloud, some managed services are on one cloud while others are in another cloud, some workers are at home, some are in the office. If you're an IT pro, that's giving you a headache. If you're a CFO, it's about to give you a migraine.

Just because SMBs are moving back to the office doesn't mean they're going to scale back cloud spend. Hybrid work necessitates new kinds of managed services to bridge that couch-to-office gap. You'll need that in the new normal because empowering remote workers using cloud apps is simply more cost-effective, but it's still a new cost. And it's probably not the only cloud cost you need to look at.

But that doesn't mean you shouldn't take a close look at what you've got clouding around out there. A recent survey by CloudCheckr found that 35% of companies reported unused virtual servers that were never disabled, 34% said employee adoption of cloud infrastructure and services was often low, and 32% said that apps they tried to migrate to the cloud were blocked because the realities of cloud architecture weren't taken into account prior to the move.

And the CloudCheckr survey doesn't even take into account managed services. Shadow IT, a pandemic, and a cloud computing universe that has a marketing answer for pretty much any kind of workload, without your IT guys ever needing to wander into a data center, make a bad recipe for your wallet.

And don't kid yourself, it's happened. The pandemic didn't just hit us hard, it hit us fast. So more than a few SMB operators simply told teams to tap into whatever cloud services they needed to deal with remote work and left IT to play catch-up. So where does that leave you now that things are starting to return to sanity?

It leaves you doing math, but you need to do more than just the basics. Sure, do an audit first. What's in the cloud now? Why did someone put it there? Tally that up and see what you're spending now and what you were spending in 2019. Now you know how much more money is floating up into the cloud and why. But the next step takes more homework.

Is the cost justified or is there a cheaper way? That's not just math, that's some real deep investigation into not only what kinds of easy peasy cloud services are relevant, but also what's still possible in that musty, dusty data closet. Think about that for a bit, and you'll realize it's a lot of work, but it's also necessary work that you need to finish as soon as possible.

And there are other ouch factors to consider:

Your IT skillset. Clouds like Azure and AWS aren't lightweight environments. Yeah, you can spin up a Windows server in a few minutes, but spinning up dozens of them, connecting them, clustering them, connecting on- and off-prem clients to them, installing whatever workloads you need on them, and then making sure they can handle a sudden scaling need or disaster recovery problem...no, that's not easy and it's only part of a much longer list of requirements. You can't be an IT generalist to handle all that. You need full-on devops personnel, which means you need to factor in that cost.

Security. Every public cloud server that talks to clients or another server in a different cloud is a target both on its own and when that communication happens. Every client talking to any cloud service is a potentially vulnerable data exchange. All that data stored in the cloud is open to all kinds of attacks or simple carelessness from the service manager. Sure it's convenient, but even with a surge in cloud security, it's still a vulnerable environment and you need to secure yours as much as you can. That'll need a budget infusion, too.

The real cost of remote workers. We all know hybrid work is here to stay, but how much is that really going to cost? IT has been managing remote problems, like client, router, and bandwidth issues, on an ad hoc basis. Solve the problem any way you can till things return to normal. Ok, but we're almost there, and now the remote scenario looks permanent. That means you need to come up with a long-term plan, and that covers a lot of ground. Probably new management software, but what's it need to do exactly? And do you give everyone a new router so IT can standardize; how much will that cost? How do you need to update your help desk? Do you need to pay for home worker bandwidth just to keep a hybrid workload moving smoothly? That's a lot of questions, and it's only a few of the ones you'll need to ask. Others will include space management software for hot-desking, employee monitoring, and new "hybrid optimized" software, like say, Windows 11.

Cloud services are a great resource for a hybrid work model. But considering cost is a necessary step if you want to keep your IT budget in the black even in 2021. It's not just about the bill. It's matching the bill to how it affects your business' capabilities. Time to sit your IT team down, figure out the questions, and start coming up with the answers. 2022 isn't that far off. Don't agree? Hit me in the comments.

Robinhood Gets a $70 million ticket for harming millions of consumers. This is a lesson for those who think a little consumer misinformation to drive up revenue is just a white lie. Organizations like FINRA, which whacked Robinhood, are looking for you.

Zoom wants to add real-time translation to your video calls. Trying to hang on to that pandemic growth, Zoom acquired Kites, which has been developing real-time translation software. It'll be a while before it's actually in your Zoom client, but for multinational companies, it'll be a real boon.

Facebook hits $1 trillion. It's finally happened. Two antitrust cases get dismissed and Facebook shares surged so hard the company has passed the big T.

Yet More PCMag Business News

What Exactly Is The 'Hybrid Work Model'? Opinions Vary Widely

Microsoft Gets Fuzzy On When You Can Actually Upgrade to Windows 11

What Will Businesses Get From Windows 11?

GitHub Enlists AI to Help Your Devs Code


View post:
How Much Is Hybrid Work Really Going to Cost? - PCMag

The Origin of Technosignatures – Scientific American

The search for extraterrestrial intelligence stands out in the quest to find life elsewhere because it assumes that certain kinds of life will manipulate and exploit its environment with intention. And that intention may go far beyond just supporting essential survival and function. By contrast, the general search for other living systems, or biosignatures, really is all about eating, reproducing and, not to put too fine a point on it, making waste.

The assumption of intention has a long history. Back in the late 1800s and early 1900s the American astronomer Percival Lowell convinced himself, and others, of non-natural features on the surface of Mars, and associated these with the efforts of an advanced but dying species to channel water from the polar regions. Around the same time, Nikola Tesla suggested the possibility of using wireless transmission to contact Mars, and even thought that he might have picked up repeating, structured signals from beyond the Earth. Nearly a century earlier, the great mathematician and physicist Carl Friedrich Gauss had also thought about active contact, and suggested carving up the Siberian tundra to make a geometric signal that could be seen by extraterrestrials.

Today the search for intention is represented by a still-coalescing field of cosmic technosignatures, which encompasses the search for structured electromagnetic signals as well as a wide variety of other evidence of intentional manipulation of matter and energy -- from alien megastructures to industrial pollution, or nighttime lighting systems on distant worlds.

But there's a puzzle that really comes ahead of all of this. We tend to automatically assume that technology in all of the forms known to us is a marker of advanced life and its intentions, but we seldom ask the fundamental question of why technology happens in the first place.

I started thinking about this conundrum back in 2018, and it leads to a deeper way to quantify intelligent life, based on the external information that a species generates, utilizes, propagates and encodes in what we call technology -- everything from cave paintings and books to flash drives and cloud servers and the structures sustaining them. To give this a label, I called it the dataome. One consequence of this reframing of the nature of our world is that our quest for technosignatures is actually, in the end, about the detection of extraterrestrial dataomes.

A critical aspect of this reframing is that a dataome may be much more like a living system than any kind of isolated, inert, synthetic system. This rather provocative (well, okay, very provocative) idea is one of the conclusions I draw in a much more detailed investigation in my new book, The Ascent of Information. Our informational world, our dataome, is best thought of as a symbiotic entity to us (and to life on Earth in general). It genuinely is another "ome," not unlike the microbiomes that exist in an intimate and inextricable relationship with all multicellular life.

As such, the arrival of a dataome on a world represents an origin event. Just as the origin of biological life is, we presume, represented by the successful encoding of self-propagating, evolving information in a substrate of organic molecules. A dataome is the successful encoding of self-propagating, evolving information into a different substrate, and with a seemingly different spatial and temporal distribution routing much of its function through a biological system like us. And like other major origin events it involves the wholesale restructuring of the planetary environment, from the utilization of energy to fundamental chemical changes in atmospheres or oceans.

In other words, I'd claim that technosignatures are a consequence of dataomes, just as biosignatures are a consequence of genomes.

That distinction may seem subtle, but it's important. Many remotely observable biosignatures are a result of the inner chemistry of life; metabolic byproducts like oxygen or methane in planetary atmospheres, for example. Others are consequences of how life harvests energy, such as the colors of pigments associated with photosynthesis. All of these signatures are deeply rooted in the genomes of life, and ultimately that's how we understand their basis and likelihood, and how we disentangle these markers from challenging and incomplete astronomical measurements.

Analogous to biosignatures, technosignatures must be rooted in the dataomes that coexist with biological life (or perhaps that had once coexisted with biological life). To understand the basis and likelihood of technosignatures, we therefore need to recognize and study the nature of dataomes.

For example, a dataome and its biological symbiotes may exist in uneasy Darwinian balance, where the interests of each side are not always aligned, but coexistence provides a statistical advantage to each. This could be a key factor for evaluating observations about environmental compositions and energy transformations on other worlds. We ourselves are experiencing an increase in the carbon content of our atmosphere that can be associated with the exponential growth of our dataome, yet that compositional change is not good for preserving the conditions that our biological selves have thrived in.

Projecting where our own dataome is taking us could provide clues to the scales and qualities of technosignatures elsewhere. If we only think about technosignatures as if they're an arbitrary collection of phenomena rather than a consequence of something Darwinian in nature, it could be easy to miss what's going on out there in the cosmos.

Read the rest here:
The Origin of Technosignatures - Scientific American