Category Archives: Cloud Storage

Cloudflare R2 Storage Introduces Event Notifications and Infrequent Access Storage Tier – InfoQ.com

During the recent Developer Week, Cloudflare announced that its R2 object storage now supports event notifications, which automatically trigger Workers in response to data changes. Additionally, the Super Slurper migration service now supports Google Cloud Storage as a source, and a new infrequent access storage tier is available in private beta.

Presently in open beta, event notifications dispatch messages to a queue whenever there's a change to data within a bucket. These messages are subsequently received by a consumer Worker, allowing developers to define any subsequent actions needed. Matt DeBoard, Mengqi Chen, Siddhant Sinha, systems engineers at Cloudflare, and Erin Thames, product designer at Cloudflare, write:

The lifecycle of data often doesn't stop immediately after upload to an R2 bucket: event data may need to be transformed and loaded into a data warehouse, media files may need to go through a post-processing step, etc. We're releasing event notifications for R2 in open beta to enable building applications and workflows driven by your changing data.

Source: Cloudflare blog
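As a rough illustration of the consumer side, here is a minimal sketch of a Worker that reads R2 event notification messages from a queue. The message fields used below are assumptions for illustration, not Cloudflare's exact payload.

```typescript
// Minimal sketch of a consumer Worker for R2 event notifications (open beta).
// The message body fields here are assumptions for illustration only.
export default {
  async queue(batch: MessageBatch, _env: unknown): Promise<void> {
    for (const msg of batch.messages) {
      // Each message describes a change to an object in the watched R2 bucket.
      const event = msg.body as { action: string; object: { key: string } };
      console.log(`R2 event: ${event.action} on ${event.object.key}`);
      // ...kick off follow-up work here: transform the object, load it into a
      // data warehouse, run a post-processing step, etc.
      msg.ack(); // acknowledge so the queue does not redeliver the message
    }
  },
};
```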

Designed for data lakes, storage for cloud-native applications, and web content, Cloudflare R2 enables developers to store unstructured data using S3-like APIs. Dubbed the zero egress fee object storage platform by Cloudflare to emphasize its main differentiator from competing globally distributed object storage services, R2 offers dynamic functionalities that integrate with Cloudflare Workers.
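To show what that Workers integration looks like in practice, here is a minimal sketch of a Worker reading and writing objects through an R2 bucket binding; the binding name and routes are placeholders, not a definitive implementation.

```typescript
// Sketch of a Worker storing and serving objects via an R2 bucket binding.
// MY_BUCKET is a hypothetical binding name configured in wrangler.toml.
export interface Env {
  MY_BUCKET: R2Bucket;
}

export default {
  async fetch(request: Request, env: Env): Promise<Response> {
    const key = new URL(request.url).pathname.slice(1); // use the path as the object key

    if (request.method === "PUT") {
      // Store the request body as an unstructured object, S3-style.
      await env.MY_BUCKET.put(key, request.body);
      return new Response("stored", { status: 201 });
    }

    const object = await env.MY_BUCKET.get(key);
    return object
      ? new Response(object.body)
      : new Response("not found", { status: 404 });
  },
};
```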

Initially released last year with support exclusively for Amazon S3, Super Slurper is a migration service that enables developers to move all their data to R2 in "one giant slurp" or "sip by sip," and it now also supports Google Cloud Storage as a source. Migration jobs preserve custom object metadata from the source bucket by copying it onto the migrated objects in R2, and they do not delete any objects from the source bucket.

The private beta release of the Infrequent Access storage class, a cost-effective option with comparable performance and durability, marked the third feature announcement for R2 during Developer Week. This new storage class can be assigned either through APIs or lifecycle policies and is tailored for scenarios involving infrequently accessed data, such as long-tail user-generated content or logs. DeBoard, Chen, Sinha, and Thames add:

In the future, we plan to automatically optimize storage classes for data so you can avoid manually creating rules and better adapt to changing data access patterns.
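Until that automation arrives, the class is assigned per object or via lifecycle rules. Below is a minimal sketch using R2's S3-compatible endpoint; the account ID, bucket, credentials, and the `StorageClass` value are assumptions for illustration, so check R2's documentation for the exact identifier the Infrequent Access tier expects.

```typescript
// Sketch: upload an object directly into the infrequent access tier via the
// S3-compatible API. "STANDARD_IA" is an assumed identifier for illustration.
import { S3Client, PutObjectCommand } from "@aws-sdk/client-s3";

const r2 = new S3Client({
  region: "auto",
  endpoint: "https://<ACCOUNT_ID>.r2.cloudflarestorage.com", // placeholder account
  credentials: {
    accessKeyId: process.env.R2_ACCESS_KEY_ID!,
    secretAccessKey: process.env.R2_SECRET_ACCESS_KEY!,
  },
});

await r2.send(
  new PutObjectCommand({
    Bucket: "archive-logs",           // placeholder bucket
    Key: "logs/2024/04/app.log",
    Body: "sample log contents",
    StorageClass: "STANDARD_IA",      // assumed value for the Infrequent Access class
  })
);
```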

On Hacker News, user thrixton questions the pricing of the new tier:

So pricing is 1c / GB-month, compared to S3 IA at 1.25c / GB-month, a decent saving but not massive, no archive or deep archive options though, I wonder if / when these will come. What sort of negotiated rates can you get from AWS for bandwidth I wonder, at the moment, that seems like the only real benefit from CF I think.

While the Infrequent Access storage class does not incur egress fees, a data retrieval fee of USD 0.01/GB (the same amount as AWS S3-IA) is charged when data within this tier is retrieved.

More here:
Cloudflare R2 Storage Introduces Event Notifications and Infrequent Access Storage Tier - InfoQ.com

Audacity Adds Cloud Backups and Device Syncing in 3.5 Update – How-To Geek

Everybody's favorite open-source audio editor just gained some cool new features. The Audacity 3.5 update introduces cloud backups, device syncing, automatic loop tempo detection, and a mess of other improvements.

Audacity's newfound cloud syncing capability relies on audio.com, a free SoundCloud-like platform for sharing, discovering, and collaborating on audio projects. It's a pretty solid solution for backing up or syncing Audacity projects, particularly for users who own multiple PCs. However, you'll need to link your audio.com account with each of your Audacity installations.

If you want to sync Audacity projects to a cloud service like Dropbox or Google Drive, you'll have to do it the old-fashioned way: manually save the project files to your cloud storage platform of choice. Audacity's built-in syncing functionality only supports the audio.com platform.

The 3.5 update also improves some of Audacity's music production capabilities with a new non-destructive pitch shifting tool, automatic tempo detection for imported loops (through audio and metadata analysis), and a refined plugin manager with search functionality.

Audacity says that automatic tempo detection will work best when loops have their BPM listed in the file name ("drum-loop-120-bpm.wav," for example), though audio analysis should detect the correct BPM when importing simple loops.
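A hypothetical sketch of that naming convention (an illustration only, not Audacity's code): pull the BPM out of a loop's file name, and fall back to audio analysis when no number is present.

```typescript
// Illustration of the "BPM in the file name" convention described above.
function bpmFromFileName(name: string): number | null {
  // Look for a 2-3 digit number immediately before "bpm", e.g. "120-bpm" or "90bpm".
  const match = name.match(/(\d{2,3})[-_ ]?bpm/i);
  return match ? parseInt(match[1], 10) : null;
}

console.log(bpmFromFileName("drum-loop-120-bpm.wav")); // 120
console.log(bpmFromFileName("ambient-pad.wav"));       // null (fall back to audio analysis)
```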

Note that some "niche features" were removed from Audacity in this release. The only notable removals are the "EQ XML to TXT Converter," which can be downloaded as a plugin, and "Vocal Reduction and Isolation" effect. Audacity recommends using Intel OpenVINO plugins in place of the Reduction and Isolation effect, though you can download the original effect if you still need it for old projects.

For a full list of changes and bug fixes in Audacity 3.5, check the changelog. You can also view update notes and track development at GitHub. While Audacity isn't known for rapid development, we've experienced more frequent updates since the open-source software was acquired by Muse Group. Muse Group also owns audio.com, by the way.

The Audacity 3.5 update supports Windows, macOS, and Linux installations. It also boasts improved compatibility with BSD operating systems. Audacity doesn't support automatic updates, so you must install Audacity 3.5 manually.

Source: Audacity

Here is the original post:
Audacity Adds Cloud Backups and Device Syncing in 3.5 Update - How-To Geek

Hitachi Vantara brings VSP One hybrid cloud storage to AWS Blocks and Files – Blocks and Files

Hitachi Vantara's Virtual Storage Platform One (VSP One), a unified hybrid cloud storage product, has moved into the realm of the public cloud with AWS.

Update: SDS File and object info added. 18 April 2024.

The high-end and mid-range VSP arrays were built on proprietary hardware until a few years ago, when Hitachi Vantara added software-defined features and support for commodity x86-based hardware. It then announced Virtual Storage Software Block, layered on top of SVOS and presenting a single data plane across Hitachi Vantara's mid-range, enterprise, and software-defined storage portfolio.

Hitachi Vantara said it would eventually extend into the public cloud. In February this year, it brought all its storage products together under the hybrid VSP (Virtual Storage Platform) One brand. VSP One running on AWS now fulfills that aim of extending into the public cloud.

Octavian Tanase, chief product officer at Hitachi Vantara, said: "Virtual Storage Platform One is transformational in the storage landscape because it unifies data and provides flexibility regardless of whether your data is in an on-premises, cloud, or software-defined environment.

"Additionally, the platform is built with resiliency in mind, guaranteeing 100 percent data availability, modern storage assurance, and effective capacity across all its solutions, providing organizations with simplicity at scale and an unbreakable data foundation for hybrid cloud."

There is a single control plane, data plane, and data fabric with VSP One, and three products available initially:

We were told by a Hitachi V spokesperson: "The Virtual Storage Platform One File is an appliance. We plan to offer SDS File in 2025. Note that Hitachi Content Software for File is based on an OEM relationship, whereas Virtual Storage Platform One is home to our own IP only. Also, longer term we will be offering Virtual Storage Platform One Object that will integrate file services, as will our block offerings. In 2025 we will be launching the Virtual Storage Platform One Community that will be the home of our OEM and 3rd party offerings to build out a data platform into a custom solution."

Hitachi Vantara says VSP One features include:

Dan McConnell, Hitachi Vantara SVP for product management, said in a blog late last year: "This announcement signals a major strategic direction for our company. Imagine a single data plane that spreads neatly across your organization's structured and unstructured data, from traditional hardware-optimized arrays to scalable software-defined, to cloud-hosted."

The unstructured data includes files and also objects and mainframe data, according to an eBook.

McConnell says VSP One will be infused with Hitachi Vantara machine learning models that enable administrators to not only query and pull insights from the infrastructure but to automate and augment processes, such as determining the best deployment architecture for an application's data.

Additional VSP One products will be available later this year. Various links off the Hitachi Vantara VSP One web page tell you more.

Sheila Rohra's Hitachi Vantara is catching up with Dell, HPE, and NetApp as a long-term incumbent storage supplier embracing software-defined storage, commodity hardware, unified block, file and object storage, hybrid on-premises and public cloud availability, control planes, and a cloud-like operating model. We can expect VSP One to appear in the Azure and Google clouds and to support GenAI and retrieval-augmented generation.

Here is the original post:
Hitachi Vantara brings VSP One hybrid cloud storage to AWS Blocks and Files - Blocks and Files

OneDrive will finally let you import your Google Drive, Dropbox files (APK teardown) – Android Authority

Edgar Cervantes / Android Authority

TL;DR

OneDrive is one of the better Google Drive competitors, offering competitive prices as part of Microsoft 365 subscriptions. What if you want to transfer your files from a rival cloud storage service to OneDrive, though? There's no easy way to do this right now, but our own APK teardown suggests Microsoft is working on a solution.

An APK teardown helps predict features that may arrive on a service in the future based on work-in-progress code. However, it is possible that such predicted features may not make it to a public release.

A teardown of the OneDrive v7.4 Beta 1 app for Android reveals strings suggesting that Microsoft is working on the ability to let you import files from other cloud-sharing services. The strings all contain the name import_cloud_files, while some of them even name Google Drive, Google Photos, and Dropbox as supported services.

"OneDrive imports your photos and files without using your device's mobile data plan. Imports continue even if you close the app," reads an excerpt from one of the strings.

The aforementioned strings give us a good idea of the setup process too, as you'll need to sign into the desired account. This step suggests you won't have to actually leave OneDrive to get the ball rolling. Furthermore, a string in the previous version of the app points to the import cloud files option being available in the main settings menu. So you won't have to dig deeply to find the feature.

We also found a few strings that shed more light on the import process. Perhaps the most prominent tidbit is that OneDrive will alert you when your import takes you over your quota limit.

This would be a long-overdue feature for OneDrive, as it would definitely make things much easier for people transitioning from rival cloud storage services. It's not the only prominent feature we've recently spotted in OneDrive, as we saw references to a Magic Eraser-style feature in the app last month.

In any event, there's no word on when OneDrive's cloud import functionality will be available to users. We've asked Microsoft for a statement and will update the article if/when it gets back to us.

See the original post:
OneDrive will finally let you import your Google Drive, Dropbox files (APK teardown) - Android Authority

Storage 100: The Digital Bridge Between The Cloud And On-Premises Worlds – CRN

The storage industry continues to rapidly change, and many vendors are indeed looking to provide ways to better extend and manage storage regardless of where it exists. Here are 100 vendors solution providers should have on their radar across software-defined storage; data recovery, observability and resiliency; and components.

Go back a few years ago, and measuring the growth of the data storage industry was easy. Industry watchers could just check the latest quarterly IDC or Gartner reports to see the storage hardware revenue or capacity of the storage industry as a whole and for each of the top players in the market.

Today, it's not so simple. Data today may reside on a traditional dedicated on-premises storage array. It may be sitting on an industry-standard server configured via software to act as a storage array. It may be sitting on a public cloud, using either the cloud provider's own technology or a traditional storage vendor's cloud-native version of its array software. Or the data may be in between some on-premises and some cloud infrastructures, maybe even temporarily based on an application's needs.

All these changes have made data storage much more capable than in the past, said John Woodall, vice president and CTO of hybrid cloud at Dallas-based solution provider General Datatech.

"First there was file storage, then file and block," Woodall said. "Then it was hybrid marketplace offerings, and then it was hybrid multi-cloud and then a redefined unified, which is file, block, object and cloud. Cisco recently reported that 82 percent of enterprises are operating in a hybrid cloud model, meaning on-prem and one or more hyperscalers. Those in the cloud, I think it was 92 percent, operate in a multi-cloud model. So every time you have a different technology, a different set of APIs, a different set of services, even though they might be in the same storage category, they're different."

"Now multiply that across compute, hypervisors, networking, security, on-premises, Amazon Web Services, Microsoft Azure, Google Cloud, and the idea of a hybrid cloud becomes the Nirvana," Woodall said.

"The promise is simplified operations, easier-to-do infrastructure as code, a more consistent set of services, observability and all these other things," he said. "To deliver on that compute, you can extend it using maybe VMware, VMware Cloud, or containers, to create a consistent model and operating model observability around the compute layer. But if you can't expand your storage, meaning the operations thereof, and the APIs and automation and infrastructure as code capabilities, to make on-prem and cloud storage the same and extend the services of snapshots, replication, quality, etc., then you really have only extended your fabric at the compute and network layer, but the storage layer is still left to more variability."

Furthermore, Woodall said, users are now showing a preference for cloud-native technologies, doing things like going to their Chrome or other browser, clicking on their cloud console, and consuming native services for everything. There are options for storage, but they are not as clean as for compute and networking, he said.

"If we look back over the last 10, 20 years, storage vendors in general have responded with the ability to either provide observability and manageability of cloud-native storage resources, or provide their own version with primary or secondary storage either via a marketplace or via first-party technologies," he said. "And so it's a move in the right direction. It is an essential dynamic where we must see more maturity and less siloing. And that's where third parties or established vendors are providing an overlay for command and control irrespective of the underlying technology."

Here are 100 vendors solution providers should have on their radar across software-defined storage; data recovery, observability and resiliency; and components.

The 50 Coolest Software-Defined Storage Vendors: The 2024 Storage 100. As part of CRN's 2024 Storage 100, here are 50 vendors bringing software capabilities, services and cloud connectivity to storage technology.

The 40 Coolest Data Recovery/Observability/Resiliency Vendors: The 2024 Storage 100. As part of CRN's 2024 Storage 100, here are 40 vendors taking their data management offerings to new heights.

The 10 Coolest Storage Component Vendors: The 2024 Storage 100. These 10 storage component vendors give software and data the right base on which to operate.

More:
Storage 100: The Digital Bridge Between The Cloud And On-Premises Worlds - CRN

Protecting and Managing Sensitive Customer Data with Skyflow and Cloud Storage Security | Amazon Web Services – AWS Blog

By Ashok Mahajan, Sr. Partner Solutions Architect, Startups, AWS; Ed Casmer, CTO, Cloud Storage Security; Gokhul Srinivasan, Sr. Partner Solutions Architect, Startups, AWS; and Sean Falconer, Head of Marketing, Skyflow

Securing personally identifiable information (PII) while maintaining compliance can be a daunting task for organizations. Despite best intentions, PII often finds itself scattered across various repositories such as databases, data warehouses, log files, and backups. This makes the maintenance of robust security and compliance measures an uphill battle.

File management only adds to the complexity, requiring stringent security measures, strict access controls, and compliance-oriented storage practices. The risk of data loss and malware threats further intensifies when organizations receive files from external sources such as customers. Organizations must scan such external files for viruses and malware before processing them to mitigate potential threats.

To minimize risk and de-scope existing upstream and downstream systems, organizations use Skyflow, which is available in AWS Marketplace. Skyflow Data Privacy Vault delivers security, compliance, and data residency for your Amazon Web Services (AWS) workloads.

Skyflow, an AWS Partner, uses Cloud Storage Security (CSS) to automatically and asynchronously scan uploaded files for malicious code and malware. CSS is an AWS Specialization Partner with the Security Competency, and it helps to further protect your infrastructure and ease the burden of sensitive file management.

In this post, we'll show how to secure PII data using Skyflow Data Privacy Vault and add malware protection using Cloud Storage Security on AWS.

Skyflow is a software-as-a-service (SaaS) offering that supports multi-tenant and single-tenant deployment models. Skyflow Data Privacy Vault isolates, protects, and governs access to sensitive customer data, which is transformed by the vault into opaque tokens that serve as references to this data. The non-sensitive tokens can be safely stored in any application storage systems or used in data warehouses.

A Skyflow vault can keep sensitive data in a specific geographic location, and tightly controls access to this data. Other systems only have access to non-sensitive tokenized data.

In the example below, a phone number (555-1212) is collected by a frontend application. This phone number, along with any other PII, is transformed by the vault, which is isolated outside of your company's existing infrastructure.

Any downstream services (such as a database) store only the token representation of the data (e.g. ABC123), and are removed from the scope of compliance. The token representation can preserve formatting as needed and be consistently generated to not break analytics and machine learning (ML) workflows.

Figure 1: Reducing compliance and security scope with a data privacy vault.
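To make the tokenization idea concrete, here is a conceptual sketch, not Skyflow's implementation, of a deterministic, format-preserving token: the same phone number always maps to the same token, separators are kept so downstream systems keep working, and the original digits are not recoverable without the key.

```typescript
// Conceptual sketch of deterministic, format-preserving tokenization (illustration only).
import { createHmac } from "node:crypto";

function tokenizePhone(phone: string, secret: string): string {
  // Derive a keyed digest from the plaintext so the same input yields the same token.
  const digest = createHmac("sha256", secret).update(phone).digest();
  let i = 0;
  // Replace each digit with a pseudorandom digit derived from the digest,
  // keeping separators ("-", " ") so the token preserves the original format.
  return phone.replace(/\d/g, () => String(digest[i++ % digest.length] % 10));
}

console.log(tokenizePhone("555-1212", "vault-key")); // e.g. "314-0592" (stable per key)
```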

A data privacy vault serves as core infrastructure for PII, and Skyflow Data Privacy Vault provides this core infrastructure as a service, which includes compute, storage, and network. The core architectural block is simplified to an API call, and Skyflow uses polymorphic encryption, which combines multiple forms of encryption to secure PII and make it usable. This allows you to perform operations over fully encrypted data.

You can build any PII-specific workload on a Skyflow vault for data sharing, analytics, and encrypted operations. This way you could find all records with the same area code without decrypting the data or calculate the average income of your customers, again without exposing yourself, your employees, or your infrastructure to PII.

While a data privacy vault isn't a database, Skyflow Data Privacy Vault was designed to have some similar properties. For example, a Skyflow vault supports a schema that can consist of tables, columns, and rows (see image below).

Figure 2: Vault schema with four tables.

The vault is specially designed for supporting the full lifecycle of sensitive data, and it understands the structure of PII and its uses. For example, a Skyflow vault understands a social security number as a data type, not simply a string. This means the vault natively supports use cases like showing only the last four digits of a social security number based on the roles and policies you set up, or securely sharing the full social security number with a third-party vendor of identity verification.

The vault not only transforms sensitive data into non-sensitive data, but it also tightly controls access to sensitive data through a zero-trust model where no user account or process has access to data unless it's granted by explicit access control policies. These policies are built from the bottom up, granting access to specific columns and rows of PII. This allows you to control who sees what, when, where, for how long, and in what format.

To store, manage, and retrieve data with Skyflow, you can use APIs directly or software development kits (SDKs). Skyflow supports both frontend and backend SDKs. Depending on your needs and where you choose to integrate, that will impact which SDK you use.

To learn more about the Skyflow SDKs and APIs, check out the documentation.

To demonstrate secure file storage and management through Skyflow, let's look at how this solution de-scopes both the frontend and backend application from touching the sensitive documents.

The following architecture diagram illustrates the file upload flow with Skyflow, the AWS services mentioned above, and CSS.

Figure 3: Example of file upload processed through Skyflow and CSS.

To control access to the customer's vault, policies are created in Skyflow to allow programmatic writes into the vault table for client records.

Read and update access needs to be restricted to the single record owned by the currently logged-in user. Skyflow customers can use an authentication service like Auth0, and the customer application knows who the user is based on the Auth0 token.

The Skyflow vault respects the identity of the user and restricts access based on this identity. To support this requirement, customers use Skyflow's context-aware authorization.

Programmatic access to Skyflow APIs is controlled through a service account created within your Skyflow account. The service account's roles, and the policies attached to those roles, decide the level of access a service account has to a vault. The creation of Skyflow roles, policies, and service accounts is controlled programmatically through Skyflow's management APIs or through Skyflow Studio, Skyflow's web-based vault administration portal (see image below).

Figure 4: Example of creating a policy from Skyflow Studio.

Context-aware authorization lets your backend insert an additional claim for end-user context into the JWT assertion. You can use any string that uniquely identifies the end user, such as the token provided by Auth0 after a client successfully logs in.

After the additional claim is added, the vault verifies the request and returns a bearer token with the context identifier. The diagram in Figure 5 below illustrates authentication with contextual information for the Skyflow customer and data retrieval.

Figure 5: Context-aware authorization flow diagram using Auth0 token for context.

Using the returned bearer token with the context restriction, the frontend customer application is able to retrieve the PII and files owned by the currently logged-in user and only that user (Step 6).

Further, the time-to-live (TTL) on the bearer token can be controlled, so the token can be set to live only long enough to retrieve the record for the client.
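A general-pattern sketch, not Skyflow's SDK or token format, of the two ideas above: a signed token that carries an extra claim identifying the end user and that expires quickly. The claim name, secret source, and TTL below are assumptions for illustration.

```typescript
// Sketch: mint a short-lived signed token carrying an end-user context claim,
// e.g. the subject taken from an Auth0 token (claim name and TTL are assumptions).
import jwt from "jsonwebtoken";

const SIGNING_KEY = process.env.SERVICE_ACCOUNT_KEY!; // placeholder secret

function mintContextToken(auth0UserId: string): string {
  return jwt.sign(
    {
      ctx: auth0UserId, // context claim restricting the token to one user's records
    },
    SIGNING_KEY,
    {
      algorithm: "HS256",
      expiresIn: "60s", // short TTL: just long enough to retrieve the client's record
    }
  );
}
```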

When collecting and managing sensitive data like files containing PII, it's best practice to take the entire application infrastructure out of security and compliance scope, including the frontend.

Skyflow Elements provides a secure way to collect and reveal sensitive data including files. It offers several benefits, including complete programmatic isolation from your frontend applications, end-to-end encryption, tokenization, and the ability to customize the look and feel of the data collection form.

When users interact with Skyflow Elements, various components work together to collect and reveal sensitive data. Here's how it works:

After uploading a file, Skyflow automatically scans the file for viruses leveraging the CSS integration within the vault. You can retrieve the status of a scan using the Get Status Scan API.

If the file doesn't contain a virus, a status of SCAN_CLEAN is returned and the file is available for downloading or in-page retrieval. Otherwise, a status of SCAN_INFECTED is returned and the file is moved into quarantine.
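Here is a sketch of what polling that status might look like from a backend. The URL path, response shape, and function name are hypothetical; consult the Get Scan Status API documentation for the real request format.

```typescript
// Sketch: poll a scan-status endpoint until the file is cleared or quarantined.
// The endpoint URL and response fields below are hypothetical.
async function waitForScan(fileId: string, bearerToken: string): Promise<string> {
  while (true) {
    const res = await fetch(`https://vault.example.com/v1/files/${fileId}/scan-status`, {
      headers: { Authorization: `Bearer ${bearerToken}` },
    });
    const { status } = (await res.json()) as { status: string };
    if (status === "SCAN_CLEAN" || status === "SCAN_INFECTED") return status;
    await new Promise((r) => setTimeout(r, 2000)); // scan runs asynchronously; poll again
  }
}
```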

To reveal an uploaded file, the file is embedded into the web frontend as an iframe so the file never touches the customer's servers.

Skyflow enables a business to offload the security, privacy, and compliance responsibilities of sensitive file and PII handling so it can focus resources on its core business.

In this post, we discussed the challenges businesses face with managing sensitive customer data. We reviewed how to secure personally identifiable information (PII) using Skyflow Data Privacy Vault and add malware protection using Cloud Storage Security (CSS) on AWS.

We also showed how Skyflow Data Privacy Vault can securely collect, manage, and use sensitive data. Skyflow integrates with CSS to support automatic virus and malware detection and protection for files.

To learn more, contact Skyflow or try out Skyflow in AWS Marketplace. For additional information regarding Cloud Storage Security, check out CSS in AWS Marketplace.

Read the rest here:
Protecting and Managing Sensitive Customer Data with Skyflow and Cloud Storage Security | Amazon Web Services - AWS Blog

Hitachi Vantara Announces Availability of Virtual Storage Platform One, Providing the Data Foundation for Unified … – PR Newswire

Unbreakable hybrid cloud platform seamlessly integrates structured and unstructured data, redefining data management efficiency and flexibility for enterprises

SANTA CLARA, Calif., April 16, 2024 /PRNewswire/ -- As businesses face unprecedented challenges managing data among the proliferation of generative AI, cloud technologies, and exponential data growth, Hitachi Vantara, the data storage, infrastructure, and hybrid cloud management subsidiary of Hitachi, Ltd. (TSE: 6501), today announced the availability of Hitachi Virtual Storage Platform One. The hybrid cloud platform is poised to transform how organizations manage and leverage their data in today's rapidly evolving technological landscape.

To learn more about Hitachi Vantara Virtual Storage Platform One, visit: https://www.hitachivantara.com/en-us/products/storage-platforms/data-platform

With organizations struggling to scale data and modernize applications across complex, distributed, multi-cloud infrastructure, the need for a comprehensive data management solution across all data types has never been more critical. Results from a recent TDWI Data Management Maturity Assessment (DMMA) found that while 71% of IT experts agreed that their organization values data, only 19% said a strong data management strategy was in place. Complicating matters are rising cyberattacks, which leave business leaders increasingly worried about security and resiliency. A recent survey showed 68% of IT leaders are concerned about whether their organization's data infrastructure is resilient enough.

Virtual Storage Platform One products available now include:

Virtual Storage Platform One simplifies infrastructure for mission critical applications, with a focus on data availability and strong data resiliency and reliability measures, including mitigation of risks such as downtime, productivity losses, and security threats.

"Virtual Storage Platform One is transformational in the storage landscape because it unifies data and provides flexibility regardless of whether your data is in an on-premises, cloud, or software-defined environment," said Octavian Tanase, chief product officer, Hitachi Vantara. "Additionally, the platform is built with resiliency in mind, guaranteeing 100% data availability, modern storage assurance, and effective capacity across all its solutions, providing organizations with simplicity at scale and an unbreakable data foundation for hybrid cloud."

Virtual Storage Platform One tackles the challenges of modern data management by eliminating the constraints of data silos and allowing every piece of information to work cohesively and with the flexibility to scale up or down with ease, empowering data to thrive in an environment that prioritizes efficiency and speed. Additionally, Virtual Storage Platform One SDS Cloud is available in AWS Marketplace, a digital catalog with thousands of software listings from independent software vendors that make it easy to find, test, buy, and deploy software that runs on Amazon Web Services (AWS). This offers businesses seamless integration and accessibility to leverage Hitachi Vantara's cutting-edge data management solutions within AWS.

A New Era of Data Management

At the heart of Virtual Storage Platform One lies a unified data ecosystem that seamlessly integrates block and file storage, eliminating data silos and fragmented landscapes. Powered by Hitachi Storage Virtualization Operating System (SVOS), Virtual Storage Platform One ensures every piece of information is collected, integrated, and accessible from any device or location, making it easier to access, view, and fuel their business.

"For more than a decade, we've forged a partnership with Hitachi Vantara, finding great satisfaction in their services, expertise, and product durability," Deniz Armen Aydn, cloud data storage technologies manager, Garanti BBVA. "The introduction of Virtual Storage Platform Onepromises to revolutionize data management and efficiency within the GarantiBBVA, igniting anticipation for the cutting-edge automation and resiliency features it offers."

The hybrid cloud platform sets itself apart from competitors with key differentiators that redefine the data management landscape:

Additional Virtual Storage Platform One products will be available later this year. For more information about Virtual Storage Platform One and its suite of solutions, please visit https://www.hitachivantara.com/VirtualStoragePlatformOne.

Additional Resources

Connect With Hitachi Vantara

About Hitachi Vantara

Hitachi Vantara is transforming the way data fuels innovation. A wholly owned subsidiary of Hitachi Ltd., Hitachi Vantara provides the data foundation the world's leading innovators rely on. Through data storage, infrastructure systems, cloud management and digital expertise, the company helps customers build the foundation for sustainable business growth. To learn more, visit http://www.hitachivantara.com.

About Hitachi, Ltd.

Hitachi drives Social Innovation Business, creating a sustainable society through the use of data and technology. We solve customers' and society's challenges with Lumada solutions leveraging IT, OT (Operational Technology) and products. Hitachi operates under the business structure of "Digital Systems & Services" - supporting our customers' digital transformation; "Green Energy & Mobility" - contributing to a decarbonized society through energy and railway systems; and "Connective Industries" - connecting products through digital technology to provide solutions in various industries. Driven by Digital, Green, and Innovation, we aim for growth through co-creation with our customers. The company's consolidated revenues for fiscal year 2022 (ended March 31, 2023) totaled 10,881.1 billion yen, with 696 consolidated subsidiaries and approximately 320,000 employees worldwide. For more information on Hitachi, please visit the company's website at https://www.hitachi.com.

HITACHI is a trademark or registered trademark of Hitachi, Ltd. All other trademarks, service marks, and company names are properties of their respective owners.

SOURCE Hitachi Vantara

Read the original:
Hitachi Vantara Announces Availability of Virtual Storage Platform One, Providing the Data Foundation for Unified ... - PR Newswire

From the NAB Floor | Amove by Jose Antunes – ProVideo Coalition – ProVideo Coalition

Visit link:
From the NAB Floor | Amove by Jose Antunes - ProVideo Coalition - ProVideo Coalition

Backblaze introduces Event Notifications for enhanced workflow automation Blocks and Files – Blocks and Files

Backblaze has added Event Notification data change alerts to its cloud storage so that such events can be dealt with faster by triggering automated workflows.

The fast-growing B2 Cloud Storage provides S3-compliant storage for less money than Amazon S3 and with no egress charges. AWS offers a Simple Queue Service (SQS) designed for microservices, distributed systems, and serverless applications, enabling customers to connect components together using message queues. An S3 storage bucket can be configured to send notifications for specific events, such as object creation, to SQS, and from there to queue-reading services, which in turn can inform upstream applications to trigger processing.
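A minimal sketch of that AWS-side pattern, assuming placeholder bucket and queue names: configure an S3 bucket to publish object-created events to an SQS queue.

```typescript
// Sketch: configure an S3 bucket to send object-created event notifications to SQS.
// Bucket name and queue ARN are placeholders for illustration.
import {
  S3Client,
  PutBucketNotificationConfigurationCommand,
} from "@aws-sdk/client-s3";

const s3 = new S3Client({ region: "us-east-1" });

await s3.send(
  new PutBucketNotificationConfigurationCommand({
    Bucket: "my-app-bucket",
    NotificationConfiguration: {
      QueueConfigurations: [
        {
          QueueArn: "arn:aws:sqs:us-east-1:123456789012:uploads-queue",
          Events: ["s3:ObjectCreated:*"], // fire on every object creation
        },
      ],
    },
  })
);
```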

Gleb Budman, Backblaze CEO and chairperson, said: "Companies increasingly want to leverage best-of-breed providers to grow their business, versus being locked into the traditional closed cloud providers. Our new Event Notifications service unlocks the freedom for our customers to build their cloud workflows in whatever way they prefer."

This statement was a direct shot at AWS, as evidenced by an allied customer quote from Oleh Aleynik, senior software engineer and co-founder at CloudSpot, who said: "With Event Notifications, we can eliminate the final AWS component, Simple Queue Service (SQS), from our infrastructure. This completes our transition to a more streamlined and cost-effective tech stack."

Event Notifications can be triggered by data upload, update, or deletion, with alerts sent to users or external cloud services. Backblaze says this supports the expanding use of serverless architecture and specialized microservices across clouds, not just its own.

It can trigger services such as provisioning cloud resources or automating transcoding and compute instances in response to data changes. This can accelerate content delivery and responsiveness to customer demand with automated asset tracking and streamlined media production. It also helps IT security teams monitor and respond to changes, with real-time notifications about changes to important data assets.

Event Notifications is now available in private preview with general availability planned later this year. Interested parties can join a private preview waiting list.

Read this article:
Backblaze introduces Event Notifications for enhanced workflow automation Blocks and Files - Blocks and Files

The 10 Coolest Storage Component Vendors: The 2024 Storage 100 – CRN

As part of CRN's 2024 Storage 100, here are 10 storage component vendors taking their offerings to new heights.

The software-defined storage and data recovery/observability/resiliency sections of the Storage 100 emphasized the impact software has on the storage features and services offered to businesses small and large.

However, the focus on software should not by any means take away from the value of the hardware. No matter how wonderful and valuable storage software is, it still requires a solid base of hardware on which to run.

That hardware may be an industry-standard server or a purpose-built appliance. Or it may be purpose-built hardware stacked in a cloud provider's data center. But it needs components: flash storage, SSDs, hard disk drives, memory. So for all the focus on software, hardware still counts.

As part of CRN's 2024 Storage 100, here are 10 storage component vendors taking their offerings to new heights.

Apacer

Chia-Kun Chang

CEO

Apacer develops a wide range of digital storage and sharing products and services, including SSDs and DRAM. Its SSD lineup includes PCIe, SATA and PATA, industrial, eMMC and specialty models, while its DRAM lines include embedded memory, server and workstation memory, specialized memory, and memory with specific characteristics including ruggedized, wide temperature and lead-free.

Kingston

John Tu

Co-Founder, CEO

Kingston, one of the world's leading manufacturers of memory products, was an early pioneer in developing memory modules for computers. The company has since expanded to offer a wide range of memory and memory cards, SSDs, USB flash drives, memory card readers, and embedded and industrial embedded flash and DRAM components.

Kioxia

Hideyuki Namekawa

President, CEO, Kioxia America

Kioxia is a global leading developer and manufacturer of flash memory and SSDs. The company, which was spun out of Toshiba as Toshiba Memory Corp. before getting its current name, produces a wide range of memory and SSDs for both business and personal computing requirements, in addition to enterprise, data center and client storage applications.

Micron Technology

Sanjay Mehrotra

President, CEO

Micron is one of the world's largest producers of computer memory, as well as a major developer of flash storage technologies. Its memory products include DRAM modules and components as well as high-bandwidth memory and CXL modules for data center memory expansion. The company also develops a wide range of data center, client and industrial SSDs.

Pliops

Ido Bukspan

CEO

Pliops develops what it calls extreme data processors. These XDPs combine multiple data and storage technologies including a hardware-based storage engine, in-line transparent compression, RAID 5+ data protection, and built-in application integration into a single device that works with any server or SSD to improve application performance while cutting overall infrastructure cost.

Samsung

Kye Hyun Kyung

President and CEO, Device Solutions Division

Samsung is one of the world's largest producers of semiconductor components and products, including DRAM components and modules and SSDs for PC, data center, enterprise and consumer applications. The company is also a major provider of semiconductor foundry services. In addition, Samsung develops microprocessor, image sensor, display, security and power technologies.

ScaleFlux

Hao Zhong

Co-Founder, CEO

ScaleFlux builds what it calls a better SSD by embedding computational storage technology into its flash drives. Its system-on-a-chip technology is behind the company's Computational Storage Engine technology that embeds intelligent storage processing capabilities into NVMe SSDs, which the company says helps reduce data movement, enhance performance and improve efficiency.

Seagate Technology

Dave Mosley

CEO

Seagate manufactures external and internal SSDs and hard drives for cloud, edge, data center and personal storage. The company also develops integrated mass storage for business and personal use, including arrays and expansion devices for managed block storage and hybrid storage applications as well as the Lyve edge-to-cloud storage service.

Solidigm

David Dixon, Kevin Noh

Co-CEOs

Solidigm, founded when Korea-based SK hynix acquired Intel's NAND and SSD business, is a major developer of SSDs for data center and client device use. Its data center SSD line ranges from standard-endurance SATA drives to high-end models with varying performance and capacity levels. Its consumer line includes both PCIe 3.0 and PCIe 4.0 NVMe SSDs.

Western Digital

David Goeckeler

CEO

Western Digital has long been a leader in the development of hard drive, SSD, flash drive and memory card technologies, as well as NAS and other storage products. However, the company is currently in the process of separating into two independent, publicly traded companies by year end, one focused on hard drives and the other on flash storage.

Follow this link:
The 10 Coolest Storage Component Vendors: The 2024 Storage 100 - CRN