



I have > 100 million image files (book covers) stored as a flat list under a single "directory":

- /images/000000093e7d1825b346e9fc01387c7e449e1ed7
- /images/0000005ae12097d69208f6548bf600bd7d270a6f

A long time ago, these were stored on Amazon S3; they are now on Backblaze B2 (which is S3-compatible). Retrieving an existing file is very quick.

I'm in the process of migrating once again, to iDrive E2 (also S3-compatible). I'm experimenting with moving the files using rclone, but after 30 minutes of waiting for `rclone copy` to start, I realized that rclone does not begin transferring files until it has received the whole file list:

- a quick benchmark of `rclone ls` on the /images/ directory tells me that retrieving the whole file list would take almost 10 hours;
- any problem during the transfer (which will take many days) would restart it from zero, forcing rclone to download the whole file list again.

So I tried configuring rclone to copy only a batch of files at a time:

- `rclone copy "backblaze:/images/0000*"`, with or without the `*`, does not find any files;
- `rclone copy "backblaze:/images/" --include "/0000*"` seems to download the whole file list as well, and filter it on the client.

Strangely, rclone has no problem retrieving from the server the list of files under a given "directory", for example /images/, but cannot do the same with a bare prefix, such as /images/0000.

I thought that S3, and by extension all S3-compatible storages, stored file paths as a flat structure, that / was just a character like any other, and that you could easily list the files under any prefix, whether or not it ends with a /.

In my next storage (E2), should I instead store the files under sub-directories, such as images/0/0/0/0/, images/0/0/0/1/, etc.?
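For what it's worth, the underlying S3 ListObjectsV2 API does accept an arbitrary key prefix — "images/0000" is just as valid a `Prefix` as "images/" — so server-side prefix listing is possible even where a tool's path syntax doesn't expose it. A minimal sketch (the `list_keys` and `hex_prefixes` helpers, the bucket name, and the endpoint URL are my own illustrations, not part of any tool):

```python
from itertools import product


def list_keys(client, bucket: str, prefix: str):
    """Yield every object key starting with `prefix`.

    S3 keys form a flat namespace: ListObjectsV2's `Prefix` parameter is a
    plain string match, so "images/0000" works just like "images/".
    `client` is expected to behave like a boto3 S3 client.
    """
    paginator = client.get_paginator("list_objects_v2")
    for page in paginator.paginate(Bucket=bucket, Prefix=prefix):
        for obj in page.get("Contents", []):
            yield obj["Key"]


def hex_prefixes(length: int = 2):
    """All hex key prefixes of the given length ('00' .. 'ff' for length 2).

    Each prefix can be listed and copied as an independent batch, so a
    failed transfer only restarts that batch, not the whole bucket.
    """
    for combo in product("0123456789abcdef", repeat=length):
        yield "".join(combo)


# Hypothetical usage with boto3 against a B2 S3 endpoint (endpoint URL and
# bucket name are illustrative):
#
#   import boto3
#   client = boto3.client("s3", endpoint_url="https://s3.us-west-002.backblazeb2.com")
#   for batch in hex_prefixes():
#       for key in list_keys(client, "my-covers-bucket", f"images/{batch}"):
#           ...  # copy this object; checkpoint the batch once it completes
```

Splitting the keyspace into 256 two-hex-digit prefixes turns one ten-hour listing into 256 resumable batches, each small enough to re-list cheaply after a failure.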

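If sub-directories are adopted anyway — they do help tools that enumerate one "directory" per request — a common scheme is one directory level per leading character of the hash-like filename. A sketch under that assumption (the `shard_key` helper and the depth of 4 are hypothetical choices, not an established convention):

```python
def shard_key(name: str, depth: int = 4) -> str:
    """Map a flat hash-named file to a nested key, one directory level per
    leading character, e.g. depth=4:
    '000000093e...' -> 'images/0/0/0/0/000000093e...'

    With depth=4 over hex names this gives 16**4 = 65,536 leaf directories,
    roughly 1,500 files each for ~100 million objects.
    """
    if len(name) < depth:
        raise ValueError("name is shorter than the shard depth")
    return "images/" + "/".join(name[:depth]) + "/" + name
```

Note that because S3 prefixes are plain strings, sharding changes nothing about what the server can list; it only changes which listings map onto a tool's notion of a "directory".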