Category Archives: Cloud Storage
iPad productivity tips: Keyboard tricks, shortcuts, and more – Fast Company
Every year, the idea that the iPad is insufficient as a productivity device becomes staler.
That's because Apple keeps making the iPad increasingly laptop-like, with features like trackpad support, a full-blown file browser, and multitasking. At the same time, the company has imbued its tablet with capabilities that you won't find on a laptop, such as Pencil support for sketching and the Shortcuts app for automation.
Now that Apple has released new iPads along with iPadOS 14, let's look at all the ways you can turn the tablet into a productivity powerhouse.
Add a keyboard and cursor: Got a Bluetooth keyboard and mouse handy? Pair them with your iPad to turn it into a miniature workstation. Got a 2018 or 2020 iPad Pro (or are you planning to get the upcoming fourth-gen iPad Air)? Apple's Magic Keyboard accessory lets you snap on a keyboard and trackpad for using your tablet like a laptop, while Logitech's Folio Touch provides a cheaper alternative. For other iPads, like the baseline iPad, 2017 iPad Pro, and third-generation iPad Air, Logitech's Combo Touch gives you a keyboard, trackpad, and kickstand for the tablet all in one package.
Know your shortcuts: Once you've connected a keyboard, use shortcuts to get around faster.
Get a grasp on trackpad gestures: If your keyboard has a trackpad, you can also use multitouch gestures to navigate around the iPad:
Tweak your cursor settings: If your mouse or trackpad doesn't feel quite right, head to Settings > General > Trackpad (or Trackpad & Mouse), where you can adjust tracking speed, reverse the direction of scrolling, enable trackpad tap-to-click gestures, and reverse your mouse buttons.
Modify your modifiers: Can't stand Apple's layout of modifier keys such as Ctrl and Cmd? Change their behavior under Settings > General > Keyboard > Hardware Keyboard > Modifier Keys. Making the globe key behave like Cmd for things like copy and paste will make Windows users feel right at home again.
Unlock more keyboard controls: With a setting called Full Keyboard Access, you can set up your iPad keyboard to perform all kinds of additional shortcuts. To enable this feature, head to Settings > Accessibility > Keyboards > Full Keyboard Access, then set the toggle to On.
Right away, you'll be able to navigate the entire system with arrow keys and use new shortcuts such as Tab-C for Control Center, Tab-N for notifications, and Tab-S for Siri. You can also view, modify, and create new keyboard shortcuts under the Commands section in the Full Keyboard Access menu.
Use desktop websites: If you're feeling constrained by a particular iOS app, try using the web version instead. Apple's Safari browser on the iPad can load full desktop versions of sites such as Gmail, Tweetdeck, and Airtable. Those sites are often more capable than their mobile app counterparts, especially when paired with a trackpad and keyboard for right-click menus and shortcuts.
To make those sites more readily accessible, add them as bookmarks on your iPad's home screen. In Safari, press the Share button, then select Add to Home Screen.
Scribble anywhere: With the new Scribble feature in iPadOS 14, you can use the Apple Pencil to write anywhere that accepts text entry, and the iPad will convert your handwriting to text. It's great for quickly entering text without putting your Pencil down. Once the iPad has converted some handwriting, here are some extra tricks to know:
If you'd rather switch to the keyboard while scribbling, just hit the keyboard icon in the floating menu at the bottom of the screen. And to disable Scribble entirely, head to Settings > Apple Pencil.
Take a note faster: To jump straight into Apple's Notes app from the lock screen, just tap in the middle of the screen with your Pencil. By default, this will create a new note every time, but you can change this under Settings > Notes > Access Notes from Lock Screen. You can also add a Notes shortcut in Control Center by heading to Settings > Control Center and hitting the green + icon next to Notes.
Make perfect shapes: Apple's Notes app can automatically recognize shapes such as squares, circles, arrows, and lines. Just keep holding the Pencil down for a moment after making a shape, and your sloppy drawings will turn into perfect geometry.
Newer Apple Pencil tricks: With the second-generation Apple Pencil, you can double-tap on it to switch between drawing tools in supported apps. This will switch to an eraser by default, but you can change this under Settings > Apple Pencil.
Add a battery widget: To avoid getting stuck with a dead Pencil right when you need it, add a battery status indicator to your widgets list. At the bottom of the list, hit Edit, then hit +, then select Batteries from the list and hit Add Widget. Now you can see exactly how much charge is left at a glance.
Customize your dock: On the home screen, press, hold, and drag apps down to the bottom row to keep them in your dock. By default, some recently used apps will appear in the dock as well, but you can disable this under Settings > Home Screen & Dock.
View two apps in split-screen: To use the iPad's Split View feature (which works with most popular apps but isn't universally supported), make sure you have one app open, then press and hold another app in your dock and drag it up so the icon moves with your finger. (If the app isn't in your dock, you can hit Cmd-Space to search for it, then drag the icon in the search results.) Move it to either side of the screen, then let go once the other app slides over to make room. Adjust the split-screen by dragging the black bar between the two apps, or drag the bar to either edge of the screen to close the other app.
Open a mini app: Instead of moving your second app to the edges of the screen, try dropping it into the middle. This will open a miniature app window (called Slide Over) that appears on top of your main app. Dismiss this app by swiping the top bar to the right of the screen, or drag on the top bar to move it around. Dragging it to the top edge will open it in full screen, and dragging it to the sides will open it in Split View.
Stack up your apps: Slide Over really becomes useful when you stack up several apps on top of one another. Try dragging a second or third app on top of your first Slide Over app, then swipe on the bottom bar to switch between them. Or, flick up gently on the bottom bar to view all your Slide Over apps side by side. You can then dismiss any of them by swiping up.
Use the same app twice: Multitasking doesn't merely apply to separate apps. In some cases, you can also have two instances of the same app running side by side. Not every app supports this, but it's great for viewing separate web pages in Safari or comparing documents in Word. Opening a second instance of one app works the same way as multitasking with two separate apps: Just drag the app onto your screen from the dock or from Cmd-Space search results.
Get familiar with Files: For full-blown file management on an iPad, use Apple's Files app. Here you'll find files saved by your apps, located under either On my iPad or iCloud Drive, depending on how the app stores files. Long-press or right-click any file for a menu of options, such as duplicating the file, moving it to another location, or adding it to a compressed Zip file.
Take note of the icons at the top-right corner of the Files app as well; these will let you create new folders and switch between different views.
Connect cloud storage: The Files app isn't just for documents you've stored on the iPad or in iCloud. You can also link other cloud storage services such as Dropbox, OneDrive, and Google Drive. That way you can move files between locations or open them in other apps.
To link a service, make sure that you've installed and signed into the corresponding app on your iPad. Then, in the Files app, hit the button in the left sidebar and select Edit Sidebar. From here you can toggle on the services you want to access through the Files app.
Add favorite folders: Instead of digging through endless directories to access your files, you can mark certain folders as favorites to make them show up in the left sidebar. This even works with cloud storage sources you link (per the tip above). Just long-press any folder and select Favorite for quicker access.
Scan your paper documents: While there's no shortage of paper scanner apps on the iPad, Apple also has a free one built into the Files app. In the left sidebar, tap the icon and select Scan Documents, then use the camera to scan each page of your document. Apple will automatically crop each image, and you can save the resulting PDF file to the directory of your choosing.
Use some basic shortcuts: Apple's Shortcuts app is a powerful way to automate actions within apps on your iPad. On the most basic level, you can have Shortcuts for things like creating a new email, launching Google Assistant, texting a favorite contact, or shortening a link. You can then launch these actions through the Shortcuts app, create icons or widgets to launch them from your home screen, or in some cases access them via the Share menu within apps.
The easiest way to get started is to visit the Gallery section of the Shortcuts app, where you'll find suggested Shortcuts from Apple. Take a look at the Shortcuts from your apps section in particular, which will list some quick actions you can take within the apps you use most.
Search for more advanced shortcuts: Apple's Gallery only begins to cover what's possible with Shortcuts. For more advanced automation, check out some online communities such as RoutineHub, ShortcutsGallery, and Shortcut Hub, which host all kinds of Shortcuts you can add to your own iPad. One particularly impressive example: A Shortcut that automatically adds the weather, calendar events, and reminders to your lock screen.
Run shortcuts with your keyboard: After setting up some useful Shortcuts, you can map them to your keyboard for even quicker access. Just head back to the Full Keyboard Access menu (under Settings > Accessibility > Keyboards), then select Commands and scroll to the bottom of the list, where you'll find all your shortcuts. Tap on any Shortcut, then enter the key combination you'd like to associate with it. You might, for instance, use Ctrl-G to launch Google Assistant, or use keyboard shortcuts to open certain apps such as Safari or Gmail.
The only quirk to this trick is that you must engage keyboard mode first by pressing any key, then type the shortcut you want to use. But once you've started mapping custom actions to the keyboard, you might wonder how you went so long without it.
Check out Jared's Advisorator newsletter for more tech advice, tips, and app recommendations.
Cloud At The Edge, GPU Storage And LTO Gen 9 – Forbes
This piece looks at developments and digital storage partners for public cloud company edge services, in particular AWS Outposts. We also look at a VAST Data GPU-oriented AI storage offering, as well as the introduction of LTO 9 magnetic tape technology in late 2020.
Public cloud companies have created services to bring their offerings to the edge of the network as well as into their hyperscale data centers. One example is AWS Outposts. Outposts was first announced in 2018, with general availability following in December 2019.
Recently, a number of storage companies made announcements about AWS Outposts partnerships. AWS Outposts extends AWS infrastructure, services, APIs, and tools to customer data centers, co-location spaces, or on-premises facilities. It is meant to provide low-latency access to on-premises applications or systems, and local data processing for local storage needs in a hybrid cloud storage environment.
Among these AWS Outposts partners is Zadara, which announced a partnership with data management provider Storage IT to offer a storage-as-a-service solution in the AWS Marketplace. Qumulo also launched on AWS Outposts to enable file storage and data management; Qumulo on AWS Outposts allows customers to connect their file data to AWS and run AWS services.
In 2019, VAST Data introduced its Universal Storage Platform, which used Intel's Optane SSDs in the front end of a storage system as a cache for data stored on quad-level cell (QLC) SSDs. The company said that by using NVMe-based Optane SSDs and QLC flash, it could bring the cost of a flash memory storage system close to that of an HDD storage system. The company recently announced the availability of its next-generation storage architecture, which it calls LightSpeed. LightSpeed combines VAST's NAS appliance with NVIDIA GPU-based and AI processor-based computing for AI applications.
VAST's announcement says that GPUDirect enables customers running NVIDIA GPUs to accelerate access to data and avoid extra data copies between storage and the GPU by bypassing the CPU and CPU memory altogether, as shown in the image below. In initial testing, VAST demonstrated over 90GB/s of peak read throughput via a single NVIDIA DGX-2 client, nearly 3X the performance of VAST's NFS-over-RDMA and nearly 50X the performance of standard TCP-based NFS.
GPU Direct storage access for AI applications using GPUs
The company says that LightSpeed uses a disaggregated, shared-everything (DASE) architecture (using elements from its Universal Storage platform) to lower the costs of SSD storage and thus eliminate the need for storage tiering. LightSpeed doubles the performance of prior VAST storage solutions. It also provides NFS support for NVIDIA GPUDirect Storage.
The LTO program technology provider companies (HPE, IBM and Quantum), which manage the most popular digital magnetic recording tape format, officially announced the LTO 9 specification. LTO 9 tape cartridges support 18 TB of native storage capacity (less than the 24 TB native capacity for LTO 9 that was on prior LTO roadmaps). Whereas recent generations generally doubled storage capacity about every 2.3 years, this is a 50% increase from the 12 TB native storage capacity of LTO 8. The LTO program says that it redid the LTO roadmap to reflect the changed capacity for LTO 9, and that following generations will double capacity with each generation, as shown below.
Updated LTO Magnetic Tape Roadmap
The LTO generation 9 specifications include previously introduced features, such as multi-layer security support via hardware-based encryption, WORM (Write-Once, Read-Many) functionality and support for the Linear Tape File System (LTFS). The new LTO generation 9 specifications include full backward read and write compatibility with LTO generation 8 cartridges. Quantum said that it will make LTO 9 tape drives available for its Scalar Tape Libraries and StorNext AEL archive systems beginning in December 2020. Other tape storage system vendors will be announcing LTO 9 support for products in late 2020.
To sum up: AWS Outposts storage partners Zadara and Qumulo enable on-premises storage partnered with public cloud services. VAST Data introduces high-performance GPU storage for AI. The LTO program introduces LTO 9 tape technology, with vendors providing products by late 2020.
Zoom is being sued over its cloud storage practices – TechRadar
Popular video conferencing platform Zoom has been hit with a lawsuit alleging patent infringement around its cloud storage practices for recorded content.
Specifically, Zoom is accused of running afoul of patent law because it enables users to record meetings, save the video to cloud storage, and then download the content later.
The suit has been filed by Rothschild Broadcast Distribution Systems, which filed the patent (US Patent No. 8,856,221) in 2011, long after the technology for storing multimedia content in the cloud and distributing it on demand had been developed. The company has so far filed more than 25 suits against companies including Disney and World Wrestling Entertainment.
Rothschild is seeking both an award for damages as well as a court order halting Zoom from continuing to infringe on its patent for cloud storage and distribution.
The company is based in the Eastern District of Texas, a favorite jurisdiction for so-called patent trolls because of its favorable legal protections for plaintiffs.
The lawsuit, filed in the District of Colorado, puts Zoom in a difficult position. On the one hand, it is highly unlikely that Rothschild could win if the lawsuit were challenged in court. The Supreme Court has ruled that abstract ideas are not eligible to be patented if they simply move existing technology onto a computer.
In a similar lawsuit that Rothschild filed concerning the same patent, the company dismissed its claim as soon as the defendant in that case challenged the suit.
However, challenging the lawsuit is likely more costly for Zoom than simply settling with Rothschild out of court. It remains unclear how much such a settlement could cost, since the company has not disclosed previous agreements with those it has sued. Whether Zoom decides to stand its ground or make the lawsuit disappear quickly, the whole affair is likely to be expensive.
Via LawStreetMedia
Microsoft's storage dream: a hard disk drive the size of a wardrobe with Samsung Galaxy S20 parts – TechRadar
At the company's annual Ignite event for developers, Microsoft shed more light on the work it's doing with holographic storage.
The firm's research arm has gone back to the drawing board to rethink storage at a hyperscale level, starting by exploding the first dogma: that storage had to come in a 2.5-inch or 3.5-inch form factor.
After all, there's no hard and fast rule saying that data center storage has to be based on consumer hard disk drives - or even enterprise SSDs. New formats like the ruler SSD form factor offer some innovation, but don't really break the mould.
The smallest unit of deployment in cloud storage, say the researchers, is actually the storage rack, which is about the size of a cupboard and allows the designers to think of new hardware at rack scale.
According to a Microsoft blog post, this allows components to be efficiently shared across the entire rack and could end up shifting the paradigm for web hosting, IaaS and PaaS.
While Project Silica - another of Microsoft's moonshot storage projects - looked at storing data for a long time using a write-once, read-many archival format, project HSD (for Hologram Storage Device) looks at how so-called hot data can be accessed faster and stored in even smaller volumes.
In the blog post, Microsoft shared an illustration highlighting the formidable rise in resolution of commodity camera sensors, which has grown from 1-megapixel to more than 100-megapixels in less than two decades.
Project HSD rides on the coat tails of this improvement, exploiting the resolution growth to simplify the (optical) hardware and moving the complexity to the software.
The 108-megapixel ISOCELL Bright HMX camera sensor was introduced more than one year ago by Samsung, in partnership with Xiaomi. It not only has a large image sensor but was also the first to break the 100-megapixel barrier; it is used in phones such as the Xiaomi Mi CC9 Pro Premium, while Samsung's closely related 108-megapixel ISOCELL HM1 appears in the Samsung Galaxy S20 Ultra.
But Samsung wants to reach even greater heights and executive Yongin Park has already confirmed that a 600-megapixel sensor is the goal.
Someone at Microsoft Research will certainly take note, given that pairing consumer optics and Azure-based AI can significantly increase not only the storage density of HSD but also read/write speeds and access times.
How to Access S3 Buckets from Windows or Linux – ITPro Today
S3, Amazon's cloud-based object storage service, is designed primarily for storing data that is used by applications running directly in the cloud. However, there are situations where you may want to access S3 buckets directly from your PC. You might want to do this to upload files from your PC to S3 without using the AWS Console, for example. Or, you may want to be able to monitor changes to S3 data from an application running on your local PC.
For purposes such as these, being able to access S3 data directly from your PC comes in handy. This article explains how to make S3 files available locally on both Windows and Linux using rclone, a free and open source tool for syncing cloud storage to local computers.
There are various other tools available for achieving this goal. I like rclone, however, both because it's open source and because it works with any major operating system. Although there are some minor differences in the way you use the tool to access S3 data on Windows as compared to Linux, the basic process is the same regardless of which operating system you're running.
Following are the steps for using rclone to access S3 data from Linux or Windows.
Installing rclone is quite simple. You can download it for Windows or any version of Linux. On most Linux distributions, you also have the option of installing directly from your package manager using a command such as the following (for Ubuntu):
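For Ubuntu, that package-manager command is simply:

```shell
# Install rclone from Ubuntu's package repositories.
sudo apt install rclone
```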
Or, you can download and run a Bash script to install rclone for you:
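The script referred to here is rclone's official install script, which fetches the latest release:

```shell
# Download and run rclone's install script (requires curl and sudo).
curl https://rclone.org/install.sh | sudo bash
```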
The latter approach may be desirable if you want a later version of rclone than the one offered in your distribution's package repositories; otherwise, it's better to use an official package from the repositories, because rclone will then be updated automatically for you whenever a new package version becomes available.
Rclone is a command-line tool, so you'll need to open up a command shell or terminal to run it.
Once in the shell, you can run rclone directly with a simple rclone (or rclone.exe on Windows) command if the application is in your path (which it probably is if you installed it on Linux using a package).
If instead you just downloaded rclone as a ZIP file, you will have to unpack it, then use the cd command to navigate to the directory where the rclone files are located.
Once there, a simple ./rclone config (on Linux) or .\rclone.exe config (on Windows) will start the program.
Rclone will then ask you a variety of configuration questions, including your AWS credentials for the S3 bucket you want to access. The configuration data will vary depending on how your S3 bucket is set up, but in general the default options should work.
After you've completed configuration, you're ready to use rclone to access S3 buckets.
Rclone offers about a dozen commands that you can use to interact with S3 data. For example, to list the contents of a bucket, use the ls command:
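As a sketch of that command, assuming you named your remote "remote" during rclone config (both the remote name and "bucket-name" are placeholders):

```shell
# List all files in the bucket, with their sizes.
rclone ls remote:bucket-name
```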
(If you're on Windows, replace rclone with rclone.exe.)
In this example, bucket-name is the name of your S3 bucket.
Likewise, to copy a file, use the copy command:
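A sketch of a copy, with hypothetical local and remote paths (the remote name again comes from your rclone config):

```shell
# Copy a local file into the top level of the bucket.
rclone copy /home/user/report.pdf remote:bucket-name
```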
A full list of rclone commands is available on the rclone website. Keep in mind, however, that not all of them will work with S3 data. For example, you can't use the mkdir command (which would create a new directory) with an S3 bucket because S3 doesn't support directories.
Rclone's built-in commands for interacting with data are handy if you just need to copy or access some files manually. But what if you want to automate interaction with your S3 data, or access it using commands that are not supported by rclone?
In that case, you can use the rclone mount command to mount your S3 bucket as a directory. That way, you can interact with your S3 data just as you would any other data stored locally on your computer.
To mount an S3 bucket with rclone on Linux, use a command like:
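A sketch of the mount, using a hypothetical remote name and the /mnt/some-dir mount point:

```shell
# Mount the bucket at /mnt/some-dir; --daemon backgrounds the process
# so the shell isn't tied up (omit it to run in the foreground).
rclone mount remote:bucket-name /mnt/some-dir --daemon
```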
Note that you may need to run this command as root. You'll also need to make sure the mount point (/mnt/some-dir in the example above) exists before you run the command. (If it doesn't, use mkdir on Linux to create it.)
The process is similar on Windows, with one major difference: You first need to install WinFsp (find the installer here) before you can mount an S3 bucket. Once WinFsp has been installed, you can mount your S3 bucket as a directory with:
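A sketch of the Windows equivalent (remote and directory names are placeholders; WinFsp must already be installed):

```shell
# Mount the bucket at C:\somedir, which must not already exist.
rclone.exe mount remote:bucket-name C:\somedir
```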
In this case, your mount point (C:\somedir in the example) should be a directory that does not yet exist.
Whether you use Windows or Linux, rclone offers a free and straightforward way to access S3 buckets from your local computer. However, there are some caveats to keep in mind.
One is that Amazon charges you a fee every time you create or modify a file in an S3 bucket. This means that, if you perform a large number of file operations via rclone on S3 data, you may end up with a substantial cloud bill.
A second consideration to weigh is that the performance of your S3 data when you access it from your PC may be limited due to network latency. Even if you mount your bucket as a local directory, expect a delay when you interact with the data.
For both of these reasons, you may end up shooting yourself in the foot if you try to use S3 buckets as a cheap way to back up all of the data from your PC, or as a personal file-sharing service. In other words, don't try to use the method described above to turn S3 into something like Dropbox or Google Drive, which are better suited to situations where you need fast, cost-efficient integration between your local file system and cloud storage. Even though accessing S3 data from a local computer is relatively easy, the performance and cost implications make it impractical to do this on a large-scale or recurring basis.
Still, if you need a fast and simple way to access S3 data from your computer in order to copy files or use a certain application on a one-off basis, rclone makes it easy to do so on Windows, Linux and virtually every other operating system you can find.
There is a hole in my cloud bucket – Fudzilla
Dear Liza, dear Liza
A Comparitech security report claims that nearly six percent of all Google Cloud buckets are vulnerable to unauthorised access due to misconfiguration issues.
Buckets, in cloud storage, are the basic containers that are used to hold the data. Everything that a user stores in cloud storage must be contained in a bucket. Admins can use these containers to organise their data and to control access to it. However, unlike folders and directories, they cannot nest one bucket into another bucket.
Writing in his blog, Comparitech's Paul Bischoff revealed that the Comparitech team attempted to search for open buckets on the web. It started by scanning the web using a tool which is easily available to admins and hackers.
In its web search, the researchers looked for Alexa's top 100 web domains, in combination with some common words, such as "db", "database", and "bak" used by admins when naming their buckets.
Through this web scan, the research team was able to discover 2,064 Google Cloud buckets in about 2.5 hours.
After analysing all 2,064 buckets, the researchers found that 131 of them - nearly six percent - were misconfigured and vulnerable to unauthorised access.
According to Comparitech, the exposed data included nearly 6,000 scanned documents containing confidential information, such as passport details and birth certificates of children in India. A database belonging to a Russian web developer was also found, leaking the developer's chat logs and email server credentials.
Bischoff warns that uncovering exposed cloud databases on the internet is not difficult. Google Cloud Storage has naming guidelines that make open buckets easy to find. Such buckets can contain sensitive files, source code, credentials and databases, which can be illegally accessed by malicious actors.
According to Bischoff, admins can check if their bucket is exposed by using gsutil (Google's official command-line tool) or the BucketMiner tool to scan the web. Scanning for a company's name on Google and Amazon infrastructure will display some filenames, images, or other stats, suggesting the bucket is open.
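As a hedged sketch of that kind of check (the bucket name here is hypothetical): with gsutil you would run gsutil ls gs://example-bucket, and even without Google's tooling you can probe the public Cloud Storage JSON API to see whether anonymous listing is allowed:

```shell
# Ask Cloud Storage's public JSON API to list the bucket's objects
# without any credentials; the HTTP status code tells the story.
curl -s -o /dev/null -w "%{http_code}\n" \
  "https://storage.googleapis.com/storage/v1/b/example-bucket/o"
# 200 means objects are listable anonymously; 401/403 means access is restricted.
```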
Red Hat shifts automated data pipeline into OpenShift – Blocks and Files
Red Hat today released OpenShift Container Storage 4.5 to deliver Kubernetes services for cloud-native applications via an automated data pipeline.
Mike Piech, Red Hat's GM of cloud storage and data services, said in his launch statement: "As organizations continue to modernise cloud-native applications, an essential part of their transformation is understanding how to unlock the power of data these applications are generating.
"With Red Hat OpenShift Container Storage 4.5 we've taken a significant step forward in our mission to make enterprise data accessible, resilient and actionable to applications across the open hybrid cloud."
OpenShift is Red Hat's container orchestrator, built atop Kubernetes. Ceph open source storage provides a data plane for the OpenShift environment.
The automated data pipeline is based on notification-driven architectures, and integrated access to Red Hat AMQ Streams and OpenShift Serverless. AMQ Streams is a massively scalable, distributed, and high-performance data streaming platform based on the Apache Kafka project.
OpenShift Serverless enables users to build and run applications so that, when an event trigger occurs, the application automatically scales up based on incoming demand, or scales to zero after use.
Red Hat says that, with the recent release of OpenShift Virtualization, users can host virtual machines and containers on a single, integrated platform which includes OpenShift Container Storage. This is what VMware is doing with its Tanzu project.
New features in OpenShift Container Storage 4.5 include:
Seagate gets into object storage with new CORTX software – Blocks and Files
Seagate is entering the object storage business with brand new CORTX software.
The disk drive maker aims to build a developer community for the open source software and has published a reference architecture for use in a Lyve Drive Rack.
Announcing the news today at Seagate Datasphere, the company said CORTX gives developers and partners access to mass capacity-optimised data storage architectures. CORTX use cases include artificial intelligence, machine learning, hybrid cloud, the edge and high-performance computing.
The object storage market has seen two entrants in two weeks: Dell EMC joined in with ObjectScale software.
So why does the world need another object storage software technology? Seagate's Ken Claffey, GM for Enterprise Data Solutions, said: "CORTX brings something different to other object stores in that it will uniquely leverage HDD innovations such as REMAN to reduce the likelihood of rebuild storms, HAMR to enable the largest capacity/lowest cost per bit next gen devices, and multi-actuator to retain IOPS per capacity ratios. CORTX and the community are focused on such capabilities that are required in mass capacity deployments."
HAMR is Seagate's Heat-Assisted Magnetic Recording drive, due to ship at 20TB capacity by year-end, and a pathway towards 40TB HDD capacities. Multi-actuator drives have two sets of read-write heads and logically divide a disk drive into two halves that perform read/write operations concurrently to increase overall IO bandwidth.
Lyve Drive is a series of integrated, modular data storage drives, carriers and receivers for multi-stage workflow processes.
Jacques-Charles Lafoucriere, program manager at the French Alternative Energies and Atomic Energy Commission (CEA) and an early CORTX adopter, said: "CORTX can very nicely work with storage tools and many different types of storage interfaces. We have effectively used CORTX to implement a parallel file system interface (pNFS) and hierarchical storage management tools. CORTX architecture is also compatible with artificial intelligence and deep learning (AI/DL) tools such as TensorFlow."
Gary Grider, HPC Division Leader at Los Alamos National Lab, said: "I am very excited to see what Seagate is doing with CORTX and am optimistic about its ability to lower costs for data storage at the exabyte scale. We will be closely following the open source CORTX and will participate in the community built around it, because we share Seagate's goal of economically efficient storage optimised for massive scalability and durability."
Toyota and Fujitsu are also early CORTX adopters.
Shipments of Lyve Drive Rack and the 20TB HAMR drives are scheduled to begin in December.
Seagate gets into object storage with new CORTX software – Blocks and Files
Kioxia’s Ethernet SSD stirs into EBOF life as architects dream – Blocks and Files
Kioxia claims direct-attached performance from network-attached devices is no longer just a thing of storage architects' dreams.
The company has planted a Marvell Ethernet controller directly onto an SSD and fitted 24 of these drives into an EBOF (Ethernet Bunch of Flash drives) chassis.
The drives are accessed as Ethernet devices, with an NVMe twist. An EBOF is simpler to deploy than a JBOF (just a bunch of flash) platform, Kioxia argues, because it needs only an integrated Ethernet switch. A JBOF, by contrast, requires a box-controlling CPU, DRAM and Fibre Channel host bus adapters.
The Ethernet SSD and EBOF systems are intended for applications and workloads that need disaggregated, low-latency, high-bandwidth and highly available storage. Kioxia is positioning the EBOF as an affordable, well-performing box rather than a top-end high-performance system, aimed at edge computing, enterprise and cloud data centres.
Thad Omura, VP for marketing at Marvell's Flash Business Unit, supplied a quote: "The native Ethernet SSD combined with our switches and controllers offers data centres an EBOF solution that lowers their total cost of ownership, increases performance and reduces power as compared to alternative JBOF solutions."
Alvaro Toledo, VP of SSD marketing and product planning at Kioxia America, said: "The Ethernet-attached storage ecosystem is an idea whose time has come. We are enabling the true potential of NVMe over Fabrics. This opens up a new world of possibilities for cloud data centre operators, software-defined storage providers, and server and storage system OEMs."
Kioxia needs the EBOF to help build demand for its Ethernet SSDs and is collaborating with Foxconn-Ingrasys and Accton to bring EBOF systems to market. We suspect NVMe over TCP/IP will be supported in the future, enabling the use of cheaper, ordinary non-converged Ethernet.
In the Kioxia setup, Ethernet SSDs are addressed as NVMe devices using RoCE (RDMA over Converged Ethernet). Any NVMe-oF-capable host server can use them.
Information about the Ethernet SSD is sketchy. The devices are supplied in 1.92TB, 3.84TB and 7.68TB capacities and output 670,000 random read IOPS, equivalent to 16 million-plus IOPS per chassis.
The SSD has single or dual 25GbitE links and supports RoCE v2 RDMA, NVMe-oF 1.1 and NVMe 1.4. The drive has a Marvell 88SN2400 controller and supports IPv4 and IPv6 architecture plus Redfish and NVMe-MI storage management specifications.
We suspect it has a latency of around 20μs but do not know what type of NAND it uses; we suspect it is 64-layer 3D NAND in TLC (3 bits/cell) format, because this was the case in a demo at the FMS 2018 event by Toshiba, Kioxia's precursor company.
The chassis is a 2U x 24-slot box supporting 2.5-inch form factor drives. Each chassis supports 2.4 Tbit/s of connectivity throughput which can be split between network connectivity and daisy chaining additional EBOFs.
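A quick back-of-envelope check ties the per-drive and per-chassis figures above together:

```python
# Sanity check of the chassis-level figures from the stated per-drive numbers.

drives = 24
iops_per_drive = 670_000                  # random read IOPS per Ethernet SSD

chassis_iops = drives * iops_per_drive    # 16,080,000: the "16 million-plus"
assert chassis_iops > 16_000_000

# Drive-side bandwidth with dual 25GbitE links per drive:
drive_side_gbit = drives * 2 * 25         # 1,200 Gbit/s of drive connectivity
# which fits inside the 2.4 Tbit/s chassis budget, leaving headroom for
# network uplinks and daisy-chained EBOFs.
assert drive_side_gbit <= 2_400
```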
Kioxia, formerly called Toshiba Memory, promoted direct Ethernet-addressed drives and an Ethernet-accessed chassis in 2018. Each drive was rated at 666,666 IOPS and the chassis achieved 16 million 4K random read IOPS from its 24 drives claimed at the time to be the fastest random read IOPS rate recorded by an all-flash array.
Two years later that performance level looks good-ish but not great, especially when compared to Kioxia's own CD6 NVMe SSDs, which use its 96-layer 3D NAND in TLC format. These have the same capacities, as well as a 960GB entry level and 15.36TB and 30.72TB upper levels. They operate at up to 1.4 million random read IOPS across their PCIe 4.0 interface, more than twice the Ethernet SSD's speed.
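The "more than twice" claim checks out against the quoted figures:

```python
# Comparing the stated random read IOPS of the two drive families.
cd6_iops = 1_400_000          # Kioxia CD6, PCIe 4.0 interface
ethernet_ssd_iops = 670_000   # Ethernet SSD, per the spec above

ratio = cd6_iops / ethernet_ssd_iops   # ~2.09
assert ratio > 2                       # i.e. more than twice as fast
```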
We have seen the Ethernet-addressed storage drive concept before, notably with Seagate's Kinetic disk drives, which implemented an object storage scheme using GETs and PUTs to read and write data. This technology failed to take off, partly because the drives required host application software changes.
Kioxia is sampling the Ethernet SSD with customers. There is no word on general availability for the component or the EBOF.
Kioxia's Ethernet SSD stirs into EBOF life as architects dream – Blocks and Files
Ceph scales to 10 billion objects – Blocks and Files
Ceph, the open source integrated file, block and object storage software, can support one billion objects. But can it scale to 10 billion objects and deliver good and predictable performance?
Yes, according to Russ Fellows and Mohammad Rabin of the Evaluator Group who set up a Ceph cluster lab and, by using a huge metadata cache, scaled from zero to 10 billion 64KB objects.
In their soon-to-be-published white paper commissioned by Red Hat, Massively Scalable Cloud Storage for Cloud Native Applications, they report that setting up Ceph was complex, without actually using that word: "We found that, because of the many Ceph configuration and deployment options, it is important to consult with an experienced Ceph architect prior to deployment."
The authors suggest smaller organisations with smaller needs can use Ceph reference architectures. Larger organisations with larger needs should work with Red Hat or other companies with extensive experience in architecting and administering Ceph.
Analysis of unstructured data (files and objects) is required to discern patterns and gain actionable insights into a business's operations and sales.
These patterns can be discovered through analytics and by developing and applying machine learning models. Very simply, the more data points in an analysis run, the better the resulting analysis or machine learning model.
It is a truism that object data scales more easily than file storage because it has a single flat address space, whereas files exist in a file-folder structure. As the number of files and folders grows, the file access metadata also grows in size and complexity, and more so than object access metadata.
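The flat-vs-hierarchical distinction can be sketched in a few lines. The keys and data below are invented for illustration: an object store keeps one flat key space and fakes "folders" with key prefixes, so there is no per-directory metadata tree to walk.

```python
# Illustrative flat object key space (names invented for the example).
objects = {
    "logs/2020/10/app.log": b"...",
    "logs/2020/11/app.log": b"...",
    "images/cat.png": b"...",
}

# "Listing a folder" is just a prefix filter over the single flat map:
november = [k for k in objects if k.startswith("logs/2020/11/")]
assert november == ["logs/2020/11/app.log"]

# A filesystem, by contrast, must maintain and traverse directory
# metadata at every level (logs -> 2020 -> 11), and that metadata grows
# with both the file count and the depth of the hierarchy.
```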
File storage is generally used for applications that need faster data access than object storage. Red Hat wants to demonstrate both the scalability of object storage in Ceph and its speed. The company has shown Ceph can scale to a billion objects and perform well at that level via metadata caching on NVMe SSDs.
However, Red Hat wants to go further and has commissioned the Evaluator Group to scale Ceph tenfold, to 10 billion objects, and see how it performed.
The Evaluator test set-up had six workload-generating clients driving six object servers. Each pair of these accessed, in a split/shared-nothing configuration, a Seagate JBOD containing 106 x 16TB Exos nearline disk drives; 5PB of raw capacity in total, spread across the three storage JBODs.
Each object server had dual 18-core Xeon Gold 6154 processors, 384GB of DRAM, six Intel DC P4610 NVMe 7.6TB write-optimised NAND SSDs for metadata caching, and Intel memory DIMMs.
Ceph best practice recommends not exceeding 80 per cent capacity and so the system was sized to provide 4.5PB of usable Ceph capacity. Each 64KB object required about 10KB of metadata, meaning around 95TB of metadata for the total of 10 billion objects.
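The metadata sizing is easy to verify as a back-of-envelope calculation (assuming the 10KB figure means roughly 10 KiB per object):

```python
# Back-of-envelope check of the metadata total quoted above.
objects = 10_000_000_000
metadata_per_object = 10 * 1024            # ~10KB per 64KB object, in bytes

total_bytes = objects * metadata_per_object
total_tib = total_bytes / 2**40            # ~93 TiB: the "around 95TB" figure
assert 90 < total_tib < 100
```

Once that total exceeds what the NVMe cache can hold, metadata reads spill to the disk drives, which is exactly the GET performance cliff the testers observed.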
The Evaluator Group testers ran multiple test cycles, each performing PUTs to add to the object count, then GETs and, thirdly, a mixed workload test. The performance of each successive workload was measured to show the trends as object counts and capacity both increased.
The measurements of GET (read) and PUT (write) performance showed a fairly linear pattern as the object count increased. PUT operations showed linear performance up to 8.8 billion objects (80 per cent of the system's usable Ceph capacity) and then dropped off slightly. GET operations showed a dip to a lower level around 5 billion objects and a more pronounced decline after the 8.8 billion object level.
GET performance declined once the metadata cache capacity was exceeded (yellow line on chart) and the cluster's usable capacity surpassed 80 per cent of actual capacity. Once the cache's capacity was surpassed, the excess metadata had to be stored on disk drives, and accesses were consequently much slower.
Performance linearity at this level would require a larger metadata cache.
The deep scrubbing dip on the chart occurred because a Ceph parameter set for deep scrubbing, to help with data consistency, came into operation at 500 million objects. Ceph was reconfigured to stop this.
The system exhibited nearly 2GB/s of sustained read throughput and more than 1GB/s of sustained write throughput.
The Evaluator Group also tested how Ceph performed with up to 20 million 128MB objects. In this test the metadata cache capacity was not exceeded, and performance was linear for reads and near-linear for writes as the object count increased.
There is less metadata with the smaller number of objects, meaning no spill-over of metadata to disk. The GET and PUT performance lines are both linear-ish ("deterministic" is the Evaluator Group's term), with performance of 10GB/s for both operation types.
Suppliers like Iguazio talk about operating at the trillion-plus file level. That's extreme, but today's extremity is tomorrow's normality in this time of massive data growth. That suggests Red Hat will have to keep going further to establish and then re-establish Ceph's scalability credentials.
Next year we might see a 100 billion object test and, who knows, a trillion object test could follow some day.
Ceph scales to 10 billion objects – Blocks and Files