diff --git a/MANUAL.html b/MANUAL.html
index 1327ff595edab..b29676b376108 100644
--- a/MANUAL.html
+++ b/MANUAL.html
@@ -81,7 +81,7 @@
Sep 24, 2024 Nov 15, 2024rclone(1) User Manual
-
--filter-from
Adds path/file names to an rclone command based on rules in a named file. The file contains a list of remarks and pattern rules. Include rules start with +
and exclude rules with -
. !
clears existing rules. Rules are processed in the order they are defined.
This flag can be repeated. See above for the order filter flags are processed in.
Arrange the order of filter rules with the most restrictive first and work down.
+Lines starting with # or ; are ignored, and can be used to write comments. Inline comments are not supported. Use -vv --dump filters
to see how they appear in the final regexp.
E.g. for filter-file.txt
:
# a sample filter rule file
- secret*.jpg
+ *.jpg
+ *.png
+ file2.avi
+- /dir/tmp/** # WARNING! This text will be treated as part of the path.
- /dir/Trash/**
+ /dir/**
# exclude everything else
@@ -10418,7 +10420,7 @@ Features
pCloud
MD5, SHA1 ⁷
-R
+R/W
No
No
W
@@ -11968,7 +11970,7 @@ Networking
--tpslimit float Limit HTTP transactions per second to this
--tpslimit-burst int Max burst of transactions for --tpslimit (default 1)
--use-cookies Enable session cookiejar
- --user-agent string Set the user-agent to a specified string (default "rclone/v1.68.1")
+ --user-agent string Set the user-agent to a specified string (default "rclone/v1.68.2")
Performance
Flags helpful for increasing performance.
--buffer-size SizeSuffix In memory buffer size when reading files for each --transfer (default 16Mi)
@@ -17168,7 +17170,7 @@ Scaleway
upload_cutoff = 5M
chunk_size = 5M
copy_cutoff = 5M
-C14 Cold Storage is the low-cost S3 Glacier alternative from Scaleway and it works the same way as on S3 by accepting the "GLACIER" storage_class
. So you can configure your remote with the storage_class = GLACIER
option to upload directly to C14. Don't forget that in this state you can't read files back after, you will need to restore them to "STANDARD" storage_class first before being able to read them (see "restore" section above)
+Scaleway Glacier is the low-cost S3 Glacier alternative from Scaleway and it works the same way as on S3 by accepting the "GLACIER" storage_class
. So you can configure your remote with the storage_class = GLACIER
option to upload directly to Scaleway Glacier. Don't forget that in this state you can't read files back; you will need to restore them to the "STANDARD" storage_class first before you can read them (see the "restore" section above)
Seagate Lyve Cloud
Seagate Lyve Cloud is an S3 compatible object storage platform from Seagate intended for enterprise use.
Here is a config run through for a remote called remote
- you may choose a different name of course. Note that to create an access key and secret key you will need to create a service account first.
@@ -24613,7 +24615,7 @@ Making your own client_id
Click on the "+ CREATE CREDENTIALS" button at the top of the screen, then select "OAuth client ID".
Choose an application type of "Desktop app" and click "Create". (the default name is fine)
It will show you a client ID and client secret. Make a note of these.
-(If you selected "External" at Step 5 continue to Step 9. If you chose "Internal" you don't need to publish and can skip straight to Step 10 but your destination drive must be part of the same Google Workspace.)
+(If you selected "External" at Step 5 continue to Step 10. If you chose "Internal" you don't need to publish and can skip straight to Step 11 but your destination drive must be part of the same Google Workspace.)
Go to "Oauth consent screen" and then click "PUBLISH APP" button and confirm. You will also want to add yourself as a test user.
Provide the noted client ID and client secret to rclone.
@@ -36964,6 +36966,51 @@ noop
"error": return an error based on option value
Changelog
+v1.68.2 - 2024-11-15
+
+
+- Security fixes
+
+- local backend: CVE-2024-52522: fix permission and ownership on symlinks with
--links
and --metadata
(Nick Craig-Wood)
+
+- Only affects users using
--metadata
and --links
and copying files to the local backend
+- See https://github.com/rclone/rclone/security/advisories/GHSA-hrxh-9w67-g4cv
+
+- build: bump github.com/golang-jwt/jwt/v4 from 4.5.0 to 4.5.1 (dependabot)
+
+- This is an issue in a dependency which is used for JWT certificates
+- See https://github.com/golang-jwt/jwt/security/advisories/GHSA-29wx-vh33-7x7r
+
+
+- Bug Fixes
+
+- accounting: Fix wrong message on SIGUSR2 to enable/disable bwlimit (Nick Craig-Wood)
+- bisync: Fix output capture restoring the wrong output for logrus (Dimitrios Slamaris)
+- dlna: Fix loggingResponseWriter disregarding log level (Simon Bos)
+- serve s3: Fix excess locking which was making serve s3 single threaded (Nick Craig-Wood)
+- doc fixes (Nick Craig-Wood, tgfisher, Alexandre Hamez, Randy Bush)
+
+- Local
+
+- Fix permission and ownership on symlinks with
--links
and --metadata
(Nick Craig-Wood)
+- Fix
--copy-links
on macOS when cloning (nielash)
+
+- Onedrive
+
+- Fix Retry-After handling to look at 503 errors also (Nick Craig-Wood)
+
+- Pikpak
+
+- Fix cid/gcid calculations for fs.OverrideRemote (wiserain)
+- Fix fatal crash on startup with token that can't be refreshed (Nick Craig-Wood)
+
+- S3
+
+- Fix crash when using
--s3-download-url
after migration to SDKv2 (Nick Craig-Wood)
+- Storj provider: fix server-side copy of files bigger than 5GB (Kaloyan Raev)
+- Fix multitenant multipart uploads with CEPH (Nick Craig-Wood)
+
+
v1.68.1 - 2024-09-24
diff --git a/MANUAL.md b/MANUAL.md
index a0451e3bb99c3..c9bfc3c454c2f 100644
--- a/MANUAL.md
+++ b/MANUAL.md
@@ -1,6 +1,6 @@
% rclone(1) User Manual
% Nick Craig-Wood
-% Sep 24, 2024
+% Nov 15, 2024
# Rclone syncs your files to cloud storage
@@ -17008,6 +17008,8 @@ processed in.
Arrange the order of filter rules with the most restrictive first and
work down.
+Lines starting with # or ; are ignored, and can be used to write comments. Inline comments are not supported. _Use `-vv --dump filters` to see how they appear in the final regexp._
+
E.g. for `filter-file.txt`:
# a sample filter rule file
@@ -17015,6 +17017,7 @@ E.g. for `filter-file.txt`:
+ *.jpg
+ *.png
+ file2.avi
+ - /dir/tmp/** # WARNING! This text will be treated as part of the path.
- /dir/Trash/**
+ /dir/**
# exclude everything else
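+
+For example, to list the files selected by these rules and see how they
+appear in the final regexp (assuming a remote named `remote:`):
+
+    rclone ls remote: --filter-from filter-file.txt -vv --dump filters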
@@ -19825,7 +19828,7 @@ Here is an overview of the major features of each cloud storage system.
| OpenDrive | MD5 | R/W | Yes | Partial ⁸ | - | - |
| OpenStack Swift | MD5 | R/W | No | No | R/W | - |
| Oracle Object Storage | MD5 | R/W | No | No | R/W | - |
-| pCloud | MD5, SHA1 ⁷ | R | No | No | W | - |
+| pCloud | MD5, SHA1 ⁷ | R/W | No | No | W | - |
| PikPak | MD5 | R | No | No | R | - |
| Pixeldrain | SHA256 | R/W | No | No | R | RW |
| premiumize.me | - | - | Yes | No | R | - |
@@ -20532,7 +20535,7 @@ Flags for general networking and HTTP stuff.
--tpslimit float Limit HTTP transactions per second to this
--tpslimit-burst int Max burst of transactions for --tpslimit (default 1)
--use-cookies Enable session cookiejar
- --user-agent string Set the user-agent to a specified string (default "rclone/v1.68.1")
+ --user-agent string Set the user-agent to a specified string (default "rclone/v1.68.2")
```
@@ -27871,8 +27874,8 @@ chunk_size = 5M
copy_cutoff = 5M
```
-[C14 Cold Storage](https://www.online.net/en/storage/c14-cold-storage) is the low-cost S3 Glacier alternative from Scaleway and it works the same way as on S3 by accepting the "GLACIER" `storage_class`.
-So you can configure your remote with the `storage_class = GLACIER` option to upload directly to C14. Don't forget that in this state you can't read files back after, you will need to restore them to "STANDARD" storage_class first before being able to read them (see "restore" section above)
+[Scaleway Glacier](https://www.scaleway.com/en/glacier-cold-storage/) is the low-cost S3 Glacier alternative from Scaleway and it works the same way as on S3 by accepting the "GLACIER" `storage_class`.
+So you can configure your remote with the `storage_class = GLACIER` option to upload directly to Scaleway Glacier. Don't forget that in this state you can't read files back; you will need to restore them to the "STANDARD" storage_class first before you can read them (see the "restore" section above)
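+
+For example, a minimal remote configuration along these lines (the section
+name `scaleway` is a placeholder; the usual endpoint and credential options
+are omitted for brevity):
+
+```
+[scaleway]
+type = s3
+provider = Scaleway
+storage_class = GLACIER
+```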
### Seagate Lyve Cloud {#lyve}
@@ -37936,9 +37939,9 @@ then select "OAuth client ID".
9. It will show you a client ID and client secret. Make a note of these.
- (If you selected "External" at Step 5 continue to Step 9.
+ (If you selected "External" at Step 5 continue to Step 10.
If you chose "Internal" you don't need to publish and can skip straight to
- Step 10 but your destination drive must be part of the same Google Workspace.)
+ Step 11 but your destination drive must be part of the same Google Workspace.)
10. Go to "Oauth consent screen" and then click "PUBLISH APP" button and confirm.
You will also want to add yourself as a test user.
@@ -54650,6 +54653,36 @@ Options:
# Changelog
+## v1.68.2 - 2024-11-15
+
+[See commits](https://github.com/rclone/rclone/compare/v1.68.1...v1.68.2)
+
+* Security fixes
+ * local backend: CVE-2024-52522: fix permission and ownership on symlinks with `--links` and `--metadata` (Nick Craig-Wood)
+ * Only affects users using `--metadata` and `--links` and copying files to the local backend
+ * See https://github.com/rclone/rclone/security/advisories/GHSA-hrxh-9w67-g4cv
+ * build: bump github.com/golang-jwt/jwt/v4 from 4.5.0 to 4.5.1 (dependabot)
+ * This is an issue in a dependency which is used for JWT certificates
+ * See https://github.com/golang-jwt/jwt/security/advisories/GHSA-29wx-vh33-7x7r
+* Bug Fixes
+ * accounting: Fix wrong message on SIGUSR2 to enable/disable bwlimit (Nick Craig-Wood)
+ * bisync: Fix output capture restoring the wrong output for logrus (Dimitrios Slamaris)
+ * dlna: Fix loggingResponseWriter disregarding log level (Simon Bos)
+ * serve s3: Fix excess locking which was making serve s3 single threaded (Nick Craig-Wood)
+ * doc fixes (Nick Craig-Wood, tgfisher, Alexandre Hamez, Randy Bush)
+* Local
+ * Fix permission and ownership on symlinks with `--links` and `--metadata` (Nick Craig-Wood)
+ * Fix `--copy-links` on macOS when cloning (nielash)
+* Onedrive
+ * Fix Retry-After handling to look at 503 errors also (Nick Craig-Wood)
+* Pikpak
+ * Fix cid/gcid calculations for fs.OverrideRemote (wiserain)
+ * Fix fatal crash on startup with token that can't be refreshed (Nick Craig-Wood)
+* S3
+ * Fix crash when using `--s3-download-url` after migration to SDKv2 (Nick Craig-Wood)
+ * Storj provider: fix server-side copy of files bigger than 5GB (Kaloyan Raev)
+ * Fix multitenant multipart uploads with CEPH (Nick Craig-Wood)
+
## v1.68.1 - 2024-09-24
[See commits](https://github.com/rclone/rclone/compare/v1.68.0...v1.68.1)
diff --git a/MANUAL.txt b/MANUAL.txt
index be7c7687a79d4..c39e2268b570a 100644
--- a/MANUAL.txt
+++ b/MANUAL.txt
@@ -1,6 +1,6 @@
rclone(1) User Manual
Nick Craig-Wood
-Sep 24, 2024
+Nov 15, 2024
Rclone syncs your files to cloud storage
@@ -16432,6 +16432,10 @@ processed in.
Arrange the order of filter rules with the most restrictive first and
work down.
+Lines starting with # or ; are ignored, and can be used to write
+comments. Inline comments are not supported. Use -vv --dump filters to
+see how they appear in the final regexp.
+
E.g. for filter-file.txt:
# a sample filter rule file
@@ -16439,6 +16443,7 @@ E.g. for filter-file.txt:
+ *.jpg
+ *.png
+ file2.avi
+ - /dir/tmp/** # WARNING! This text will be treated as part of the path.
- /dir/Trash/**
+ /dir/**
# exclude everything else
@@ -19273,7 +19278,7 @@ Here is an overview of the major features of each cloud storage system.
OpenDrive MD5 R/W Yes Partial ⁸ - -
OpenStack Swift MD5 R/W No No R/W -
Oracle Object Storage MD5 R/W No No R/W -
- pCloud MD5, SHA1 ⁷ R No No W -
+ pCloud MD5, SHA1 ⁷ R/W No No W -
PikPak MD5 R No No R -
Pixeldrain SHA256 R/W No No R RW
premiumize.me - - Yes No R -
@@ -20080,7 +20085,7 @@ Flags for general networking and HTTP stuff.
--tpslimit float Limit HTTP transactions per second to this
--tpslimit-burst int Max burst of transactions for --tpslimit (default 1)
--use-cookies Enable session cookiejar
- --user-agent string Set the user-agent to a specified string (default "rclone/v1.68.1")
+ --user-agent string Set the user-agent to a specified string (default "rclone/v1.68.2")
Performance
@@ -27368,13 +27373,13 @@ rclone like this:
chunk_size = 5M
copy_cutoff = 5M
-C14 Cold Storage is the low-cost S3 Glacier alternative from Scaleway
+Scaleway Glacier is the low-cost S3 Glacier alternative from Scaleway
and it works the same way as on S3 by accepting the "GLACIER"
storage_class. So you can configure your remote with the
-storage_class = GLACIER option to upload directly to C14. Don't forget
-that in this state you can't read files back after, you will need to
-restore them to "STANDARD" storage_class first before being able to read
-them (see "restore" section above)
+storage_class = GLACIER option to upload directly to Scaleway Glacier.
+Don't forget that in this state you can't read files back; you will
+need to restore them to the "STANDARD" storage_class first before you
+can read them (see the "restore" section above)
Seagate Lyve Cloud
@@ -37328,9 +37333,9 @@ Here is how to create your own Google Drive client ID for rclone:
9. It will show you a client ID and client secret. Make a note of
these.
- (If you selected "External" at Step 5 continue to Step 9. If you
+ (If you selected "External" at Step 5 continue to Step 10. If you
chose "Internal" you don't need to publish and can skip straight to
- Step 10 but your destination drive must be part of the same Google
+ Step 11 but your destination drive must be part of the same Google
Workspace.)
10. Go to "Oauth consent screen" and then click "PUBLISH APP" button and
@@ -54294,6 +54299,52 @@ Options:
Changelog
+v1.68.2 - 2024-11-15
+
+See commits
+
+- Security fixes
+ - local backend: CVE-2024-52522: fix permission and ownership on
+ symlinks with --links and --metadata (Nick Craig-Wood)
+ - Only affects users using --metadata and --links and copying
+ files to the local backend
+ - See
+ https://github.com/rclone/rclone/security/advisories/GHSA-hrxh-9w67-g4cv
+ - build: bump github.com/golang-jwt/jwt/v4 from 4.5.0 to 4.5.1
+ (dependabot)
+ - This is an issue in a dependency which is used for JWT
+ certificates
+ - See
+ https://github.com/golang-jwt/jwt/security/advisories/GHSA-29wx-vh33-7x7r
+- Bug Fixes
+ - accounting: Fix wrong message on SIGUSR2 to enable/disable
+ bwlimit (Nick Craig-Wood)
+ - bisync: Fix output capture restoring the wrong output for logrus
+ (Dimitrios Slamaris)
+ - dlna: Fix loggingResponseWriter disregarding log level (Simon
+ Bos)
+ - serve s3: Fix excess locking which was making serve s3 single
+ threaded (Nick Craig-Wood)
+ - doc fixes (Nick Craig-Wood, tgfisher, Alexandre Hamez, Randy
+ Bush)
+- Local
+ - Fix permission and ownership on symlinks with --links and
+ --metadata (Nick Craig-Wood)
+ - Fix --copy-links on macOS when cloning (nielash)
+- Onedrive
+ - Fix Retry-After handling to look at 503 errors also (Nick
+ Craig-Wood)
+- Pikpak
+ - Fix cid/gcid calculations for fs.OverrideRemote (wiserain)
+ - Fix fatal crash on startup with token that can't be refreshed
+ (Nick Craig-Wood)
+- S3
+ - Fix crash when using --s3-download-url after migration to SDKv2
+ (Nick Craig-Wood)
+ - Storj provider: fix server-side copy of files bigger than 5GB
+ (Kaloyan Raev)
+ - Fix multitenant multipart uploads with CEPH (Nick Craig-Wood)
+
v1.68.1 - 2024-09-24
See commits
diff --git a/VERSION b/VERSION
index 4dac69b9afc80..8546614eb93b6 100644
--- a/VERSION
+++ b/VERSION
@@ -1 +1 @@
-v1.69.0-3
+v1.69.0-4
\ No newline at end of file
diff --git a/backend/115/115.go b/backend/115/115.go
index 5f4856b6920a2..790bac5ac0fc4 100644
--- a/backend/115/115.go
+++ b/backend/115/115.go
@@ -33,6 +33,7 @@ import (
"github.com/pierrec/lz4/v4"
"github.com/rclone/rclone/backend/115/api"
+ "github.com/rclone/rclone/backend/115/dircache"
"github.com/rclone/rclone/fs"
"github.com/rclone/rclone/fs/config"
"github.com/rclone/rclone/fs/config/configmap"
@@ -40,7 +41,6 @@ import (
"github.com/rclone/rclone/fs/fserrors"
"github.com/rclone/rclone/fs/fshttp"
"github.com/rclone/rclone/fs/hash"
- "github.com/rclone/rclone/lib/dircache"
"github.com/rclone/rclone/lib/encoder"
"github.com/rclone/rclone/lib/pacer"
"github.com/rclone/rclone/lib/rest"
@@ -52,7 +52,7 @@ const (
rootURL = "https://webapi.115.com"
defaultUserAgent = "Mozilla/5.0 115Browser/27.0.6.3"
- defaultMinSleep = fs.Duration(250 * time.Millisecond) // 4 transactions per second
+ defaultMinSleep = fs.Duration(1000 * time.Millisecond) // 1 transaction per second
maxSleep = 2 * time.Second
decayConstant = 2 // bigger for slower decay, exponential
@@ -258,6 +258,7 @@ type Fs struct {
srv *rest.Client
dirCache *dircache.DirCache // Map of directory path to directory id
pacer *fs.Pacer
+ rootFolder string // path of the absolute root
rootFolderID string
appVer string // parsed from user-agent; // https://appversion.115.com/1/web/1.0/api/getMultiVer
userID string // for uploads, adding offline tasks, and receiving from share link
@@ -303,10 +304,10 @@ func shouldRetry(ctx context.Context, resp *http.Response, info interface{}, err
if !apiInfo.State && apiInfo.Errno == 990009 {
time.Sleep(time.Second)
// 删除[subdir]操作尚未执行完成,请稍后再试! (990009)
- return true, fserrors.RetryErrorf("API State false: %s (%d)", apiInfo.Error, apiInfo.Errno)
+ return true, fserrors.RetryErrorf("API Error: %s (%d)", apiInfo.Error, apiInfo.Errno)
} else if !apiInfo.State && apiInfo.Errno == 50038 {
- // can't download: API State false: (50038)
- return true, fserrors.RetryErrorf("API State false: %s (%d)", apiInfo.Error, apiInfo.Errno)
+ // can't download: API Error: (50038)
+ return true, fserrors.RetryErrorf("API Error: %s (%d)", apiInfo.Error, apiInfo.Errno)
}
}
return false, nil
@@ -469,22 +470,6 @@ func newFs(ctx context.Context, name, path string, m configmap.Mapper) (*Fs, err
return f, nil
}
-// wraps dirCache.FindRoot() with warm-up cache
-func (f *Fs) dirCacheFindRoot(ctx context.Context) (err error) {
- if f.rootFolderID != "0" || f.isShare {
- return f.dirCache.FindRoot(ctx, false)
- }
- for dir, dirID := f.root, "-1"; dirID != f.rootFolderID && dir != ""; {
- dirID, err = f.getDirID(ctx, dir)
- if err != nil {
- return err
- }
- f.dirCache.Put(dir, dirID)
- dir, _ = dircache.SplitPath(dir)
- }
- return f.dirCache.FindRoot(ctx, false)
-}
-
// NewFs constructs an Fs from the path, container:path
func NewFs(ctx context.Context, name, root string, m configmap.Mapper) (fs.Fs, error) {
f, err := newFs(ctx, name, root, m)
@@ -493,19 +478,23 @@ func NewFs(ctx context.Context, name, root string, m configmap.Mapper) (fs.Fs, e
}
// mod - parse object id from path remote:{ID}
- var srcFile *api.File
if rootID, receiveCode, _ := parseRootID(root); len(rootID) == 19 {
- srcFile, err = f.getFile(ctx, rootID)
+ info, err := f.getFile(ctx, rootID, "")
if err != nil {
return nil, err
}
- f.opt.RootFolderID = rootID
- if !srcFile.IsDir() {
- fs.Debugf(nil, "Root ID (File): %s", rootID)
- } else {
- fs.Debugf(nil, "Root ID (Folder): %s", rootID)
- srcFile = nil
+ if !info.IsDir() {
+ // When the parsed `rootID` points to a file,
+ // commands requiring listing operations (e.g., `ls*`, `cat`) are not supported,
+ // but `copy` has been verified to work correctly
+ f.dirCache = dircache.New("", info.ParentID(), f)
+ _ = f.dirCache.FindRoot(ctx, false)
+ obj, _ := f.newObjectWithInfo(ctx, info.Name, info)
+ f.root = "isFile:" + info.Name
+ f.fileObj = &obj
+ return f, fs.ErrorIsFile
}
+ f.opt.RootFolderID = rootID
} else if len(rootID) == 11 {
f.opt.ShareCode = rootID
f.opt.ReceiveCode = receiveCode
@@ -515,40 +504,37 @@ func NewFs(ctx context.Context, name, root string, m configmap.Mapper) (fs.Fs, e
f.isShare = f.opt.ShareCode != "" && f.opt.ReceiveCode != ""
// Set the root folder ID
- if f.opt.RootFolderID != "" {
+ if f.isShare {
+ // should be empty to let dircache run with forward search
+ f.rootFolderID = ""
+ } else if f.opt.RootFolderID != "" {
// use root_folder ID if set
f.rootFolderID = f.opt.RootFolderID
} else {
f.rootFolderID = "0" //根目录 = root directory
}
- f.dirCache = dircache.New(f.root, f.rootFolderID, f)
-
- // mod - in case parsed rootID is pointing to a file
- if srcFile != nil {
- tempF := *f
- newRoot := ""
- tempF.dirCache = dircache.New(newRoot, f.rootFolderID, &tempF)
- tempF.root = newRoot
- f.dirCache = tempF.dirCache
- f.root = tempF.root
-
- obj, _ := f.newObjectWithInfo(ctx, srcFile.Name, srcFile)
- f.root = "isFile:" + srcFile.Name
- f.fileObj = &obj
- return f, fs.ErrorIsFile
+ // Set the root folder path if it is not the absolute root
+ if f.rootFolderID != "" && f.rootFolderID != "0" {
+ f.rootFolder, err = f.getDirPath(ctx, f.rootFolderID)
+ if err != nil {
+ return nil, err
+ }
}
+ f.dirCache = dircache.New(f.root, f.rootFolderID, f)
+
// Find the current root
- err = f.dirCacheFindRoot(ctx)
+ err = f.dirCache.FindRoot(ctx, false)
if err != nil {
// Assume it is a file
newRoot, remote := dircache.SplitPath(f.root)
tempF := *f
tempF.dirCache = dircache.New(newRoot, f.rootFolderID, &tempF)
tempF.root = newRoot
+ tempF.dirCache.Fill(f.dirCache)
// Make new Fs which is the parent
- err = tempF.dirCacheFindRoot(ctx)
+ err = tempF.dirCache.FindRoot(ctx, false)
if err != nil {
// No root so return old f
return f, nil
@@ -640,8 +626,8 @@ func (f *Fs) FindLeaf(ctx context.Context, pathID, leaf string) (pathIDOut strin
}
}
// Find the leaf in pathID
- found, err = f.listAll(ctx, pathID, func(item *api.File) bool {
- if item.Name == leaf && item.IsDir() {
+ found, err = f.listAll(ctx, pathID, f.opt.ListChunk, false, true, func(item *api.File) bool {
+ if item.Name == leaf {
pathIDOut = item.ID()
return true
}
@@ -650,6 +636,11 @@ func (f *Fs) FindLeaf(ctx context.Context, pathID, leaf string) (pathIDOut strin
return pathIDOut, found, err
}
+// GetDirID wraps `getDirID` to implement the DirCacher interface
+func (f *Fs) GetDirID(ctx context.Context, dir string) (string, error) {
+ return f.getDirID(ctx, f.rootFolder+"/"+dir)
+}
+
// List the objects and directories in dir into entries. The
// entries can be returned in any order but should be for a
// complete directory.
@@ -666,7 +657,7 @@ func (f *Fs) List(ctx context.Context, dir string) (entries fs.DirEntries, err e
return nil, err
}
var iErr error
- _, err = f.listAll(ctx, dirID, func(item *api.File) bool {
+ _, err = f.listAll(ctx, dirID, f.opt.ListChunk, false, false, func(item *api.File) bool {
entry, err := f.itemToDirEntry(ctx, path.Join(dir, item.Name), item)
if err != nil {
iErr = err
@@ -708,6 +699,9 @@ func (f *Fs) CreateDir(ctx context.Context, pathID, leaf string) (newID string,
// will return the object and the error, otherwise will return
// nil and the error
func (f *Fs) Put(ctx context.Context, in io.Reader, src fs.ObjectInfo, options ...fs.OpenOption) (fs.Object, error) {
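+ // Honour --no-check-dest by uploading unconditionally without looking for an existing object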
+ if fs.GetConfig(ctx).NoCheckDest {
+ return f.PutUnchecked(ctx, in, src, options...)
+ }
existingObj, err := f.NewObject(ctx, src.Remote())
switch err {
case nil:
@@ -741,8 +735,8 @@ func (f *Fs) putUnchecked(ctx context.Context, in io.Reader, src fs.ObjectInfo,
}
var info *api.File
- found, err := f.listAll(ctx, o.parent, func(item *api.File) bool {
- if strings.ToLower(item.Sha) == o.sha1sum && !item.IsDir() {
+ found, err := f.listAll(ctx, o.parent, 32, true, false, func(item *api.File) bool {
+ if strings.ToLower(item.Sha) == o.sha1sum {
info = item
return true
}
@@ -778,7 +772,7 @@ func (f *Fs) MergeDirs(ctx context.Context, dirs []fs.Directory) (err error) {
for _, srcDir := range dirs[1:] {
// list the objects
var IDs []string
- _, err = f.listAll(ctx, srcDir.ID(), func(item *api.File) bool {
+ _, err = f.listAll(ctx, srcDir.ID(), f.opt.ListChunk, false, false, func(item *api.File) bool {
fs.Infof(srcDir, "listing for merging %q", item.Name)
IDs = append(IDs, item.ID())
// API doesn't allow to move a large number of objects at once, so doing it in chunked
@@ -1018,7 +1012,7 @@ func (f *Fs) purgeCheck(ctx context.Context, dir string, check bool) error {
}
if check {
- found, err := f.listAll(ctx, rootID, func(item *api.File) bool {
+ found, err := f.listAll(ctx, rootID, 32, false, false, func(item *api.File) bool {
fs.Debugf(dir, "Rmdir: contains file: %q", item.Name)
return true
})
@@ -1130,8 +1124,8 @@ func (f *Fs) readMetaDataForPath(ctx context.Context, path string) (info *api.Fi
}
// checking whether fileObj with name of leaf exists in dirID
- found, err := f.listAll(ctx, dirID, func(item *api.File) bool {
- if item.Name == leaf && !item.IsDir() {
+ found, err := f.listAll(ctx, dirID, f.opt.ListChunk, true, false, func(item *api.File) bool {
+ if item.Name == leaf {
info = item
return true
}
@@ -1402,7 +1396,7 @@ func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, op
// updating object with the same contents(sha1) simply updates some attributes
// rather than creating a new one. So we shouldn't delete old object
newO := newObj.(*Object)
- if !(newO.id == o.id && newO.pickCode == o.pickCode && newO.sha1sum == o.sha1sum) {
+ if o.hasMetaData && !(newO.id == o.id && newO.pickCode == o.pickCode && newO.sha1sum == o.sha1sum) {
// Delete duplicate after successful upload
if err = o.Remove(ctx); err != nil {
return fmt.Errorf("failed to remove old version: %w", err)
diff --git a/backend/115/dircache/dircache.go b/backend/115/dircache/dircache.go
new file mode 100644
index 0000000000000..e94bfa5ea8c33
--- /dev/null
+++ b/backend/115/dircache/dircache.go
@@ -0,0 +1,500 @@
+// Package dircache provides a simple cache for caching directory ID
+// to path lookups and the inverse.
+//
+// mostly based on lib/dircache, customized for 115 as follows:
+// * GetDirID() for the DirCacher interface
+// * backward dir tree traversal
+// * Fill() for populating the cache from another cache
+package dircache
+
+// _methods are called without the lock
+
+import (
+ "bytes"
+ "context"
+ "errors"
+ "fmt"
+ "path"
+ "strings"
+ "sync"
+
+ "github.com/rclone/rclone/fs"
+)
+
+// DirCache caches paths to directory IDs and vice versa
+type DirCache struct {
+ cacheMu sync.RWMutex // protects cache and invCache
+ cache map[string]string
+ invCache map[string]string
+
+ mu sync.Mutex // protects the below
+ fs DirCacher // Interface to find and make directories
+ trueRootID string // ID of the absolute root
+ root string // the path the cache is rooted on
+ rootID string // ID of the root directory
+ rootParentID string // ID of the root's parent directory
+ foundRoot bool // Whether we have found the root or not
+}
+
+// DirCacher describes an interface for doing the low level directory work
+//
+// This should be implemented by the backend and will be called by the
+// dircache package when appropriate.
+type DirCacher interface {
+ FindLeaf(ctx context.Context, pathID, leaf string) (pathIDOut string, found bool, err error)
+ CreateDir(ctx context.Context, pathID, leaf string) (newID string, err error)
+ GetDirID(ctx context.Context, dir string) (dirID string, err error) // mod
+}
+
+// New makes a DirCache
+//
+// This is created with the true root ID and the root path.
+//
+// In order to use the cache FindRoot() must be called on it without
+// error. This isn't done at initialization as it isn't known whether
+// the root and intermediate directories need to be created or not.
+//
+// Most of the utility functions will call FindRoot() on the caller's
+// behalf with the create flag passed in.
+//
+// The cache is safe for concurrent use
+func New(root string, trueRootID string, fs DirCacher) *DirCache {
+ d := &DirCache{
+ trueRootID: trueRootID,
+ root: root,
+ fs: fs,
+ }
+ d.Flush()
+ d.ResetRoot()
+ return d
+}
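+
+// A typical usage sketch (the backend value is hypothetical):
+//
+//	dc := dircache.New("a/b/c", "0", backendFs) // backendFs implements DirCacher
+//	if err := dc.FindRoot(ctx, false); err != nil {
+//		// the root doesn't exist and create was false
+//	}
+//	dirID, err := dc.FindDir(ctx, "sub/dir", true) // creates missing dirs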
+
+// String returns the directory cache in string form for debugging
+func (dc *DirCache) String() string {
+ dc.cacheMu.RLock()
+ defer dc.cacheMu.RUnlock()
+ var buf bytes.Buffer
+ _, _ = buf.WriteString("DirCache{\n")
+ _, _ = fmt.Fprintf(&buf, "\ttrueRootID: %q,\n", dc.trueRootID)
+ _, _ = fmt.Fprintf(&buf, "\troot: %q,\n", dc.root)
+ _, _ = fmt.Fprintf(&buf, "\trootID: %q,\n", dc.rootID)
+ _, _ = fmt.Fprintf(&buf, "\trootParentID: %q,\n", dc.rootParentID)
+ _, _ = fmt.Fprintf(&buf, "\tfoundRoot: %v,\n", dc.foundRoot)
+ _, _ = buf.WriteString("\tcache: {\n")
+ for k, v := range dc.cache {
+ _, _ = fmt.Fprintf(&buf, "\t\t%q: %q,\n", k, v)
+ }
+ _, _ = buf.WriteString("\t},\n")
+ _, _ = buf.WriteString("\tinvCache: {\n")
+ for k, v := range dc.invCache {
+ _, _ = fmt.Fprintf(&buf, "\t\t%q: %q,\n", k, v)
+ }
+ _, _ = buf.WriteString("\t},\n")
+ _, _ = buf.WriteString("}\n")
+ return buf.String()
+}
+
+// Get a directory ID given a path
+//
+// Returns the ID and a boolean as to whether it was found or not in
+// the cache.
+func (dc *DirCache) Get(path string) (id string, ok bool) {
+ dc.cacheMu.RLock()
+ id, ok = dc.cache[path]
+ dc.cacheMu.RUnlock()
+ return id, ok
+}
+
+// GetInv gets a path given a directory ID
+//
+// Returns the path and a boolean as to whether it was found or not in
+// the cache.
+func (dc *DirCache) GetInv(id string) (path string, ok bool) {
+ dc.cacheMu.RLock()
+ path, ok = dc.invCache[id]
+ dc.cacheMu.RUnlock()
+ return path, ok
+}
+
+// Put a (path, directory ID) pair into the cache
+func (dc *DirCache) Put(path, id string) {
+ dc.cacheMu.Lock()
+ dc.cache[path] = id
+ dc.invCache[id] = path
+ dc.cacheMu.Unlock()
+}
+
+// Flush the cache of all data
+func (dc *DirCache) Flush() {
+ dc.cacheMu.Lock()
+ dc.cache = make(map[string]string)
+ dc.invCache = make(map[string]string)
+ dc.cacheMu.Unlock()
+}
+
+// SetRootIDAlias sets the rootID to that passed in. This assumes that
+// the new ID is just an alias for the old ID so does not flush
+// anything.
+//
+// This should be called from FindLeaf (and only from FindLeaf) if it
+// is discovered that the root ID is incorrect. For example some
+// backends use "0" as a root ID, but it has a real ID which is needed
+// for some operations.
+func (dc *DirCache) SetRootIDAlias(rootID string) {
+ // No locking as this is called from FindLeaf
+ dc.rootID = rootID
+ dc.Put("", dc.rootID)
+}
+
+// FlushDir flushes the map of all data starting with the path
+// dir.
+//
+// If dir is empty string then this is equivalent to calling ResetRoot
+func (dc *DirCache) FlushDir(dir string) {
+ if dir == "" {
+ dc.ResetRoot()
+ return
+ }
+ dc.cacheMu.Lock()
+
+ // Delete the root dir
+ ID, ok := dc.cache[dir]
+ if ok {
+ delete(dc.cache, dir)
+ delete(dc.invCache, ID)
+ }
+
+ // And any sub directories
+ dir += "/"
+ for key, ID := range dc.cache {
+ if strings.HasPrefix(key, dir) {
+ delete(dc.cache, key)
+ delete(dc.invCache, ID)
+ }
+ }
+
+ dc.cacheMu.Unlock()
+}
+
+// SplitPath splits a path into directory, leaf
+//
+// Path shouldn't start or end with a /
+//
+// If there are no slashes then directory will be "" and leaf = path
+func SplitPath(path string) (directory, leaf string) {
+ lastSlash := strings.LastIndex(path, "/")
+ if lastSlash >= 0 {
+ directory = path[:lastSlash]
+ leaf = path[lastSlash+1:]
+ } else {
+ directory = ""
+ leaf = path
+ }
+ return
+}
+
+// FindDir finds the directory passed in returning the directory ID
+// starting from pathID
+//
+// Path shouldn't start or end with a /
+//
+// If create is set it will make the directory if not found.
+//
+// It will call FindRoot if it hasn't been called already
+func (dc *DirCache) FindDir(ctx context.Context, path string, create bool) (pathID string, err error) {
+ dc.mu.Lock()
+ defer dc.mu.Unlock()
+ err = dc._findRoot(ctx, create)
+ if err != nil {
+ return "", err
+ }
+ return dc._findDir(ctx, path, create)
+}
+
+// Unlocked findDir
+//
+// Call with a lock on mu
+func (dc *DirCache) _findDir(ctx context.Context, path string, create bool) (pathID string, err error) {
+ // If it is the root, then return it
+ if path == "" {
+ return dc.rootID, nil
+ }
+
+ // If it is in the cache then return it
+ pathID, ok := dc.Get(path)
+ if ok {
+ return pathID, nil
+ }
+
+ // mod - only when the Fs is not shared
+ if dc.trueRootID != "" {
+ dirPath := path
+ if dc.foundRoot && dc.rootParentID != "" {
+ dirPath = dc.root + "/" + dirPath
+ }
+ pathID, err = dc.fs.GetDirID(ctx, dirPath)
+ if err == nil {
+ dc.Put(path, pathID)
+ return
+ }
+ // not found, and we don't need to create it, so return the error
+ if !create {
+ return "", err
+ }
+ // proceed to search for the parent recursively
+ }
+
+ // Split the path into directory, leaf
+ directory, leaf := SplitPath(path)
+
+ // Recurse and find pathID for parent directory
+ parentPathID, err := dc._findDir(ctx, directory, create)
+ if err != nil {
+ return "", err
+
+ }
+
+ // mod - only when the Fs is not shared
+ if dc.trueRootID != "" {
+ pathID, err = dc.fs.CreateDir(ctx, parentPathID, leaf)
+ if err != nil {
+ return "", fmt.Errorf("failed to make directory: %w", err)
+ }
+ dc.Put(path, pathID)
+ return
+ }
+
+ // Find the leaf in parentPathID
+ pathID, found, err := dc.fs.FindLeaf(ctx, parentPathID, leaf)
+ if err != nil {
+ return "", err
+ }
+
+ // If not found create the directory if required or return an error
+ if !found {
+ if create {
+ pathID, err = dc.fs.CreateDir(ctx, parentPathID, leaf)
+ if err != nil {
+ return "", fmt.Errorf("failed to make directory: %w", err)
+ }
+ } else {
+ return "", fs.ErrorDirNotFound
+ }
+ }
+
+ // Store the leaf directory in the cache
+ dc.Put(path, pathID)
+
+ // fmt.Println("Dir", path, "is", pathID)
+ return pathID, nil
+}
+
+// FindPath finds the leaf and directoryID from a path
+//
+// If called with path == "" then it will return the ID of the parent
+// directory of the root and the leaf name of the root in that
+// directory. Note that it won't create the root directory in this
+// case even if create is true.
+//
+// If create is set parent directories will be created if they don't exist
+//
+// It will call FindRoot if it hasn't been called already
+func (dc *DirCache) FindPath(ctx context.Context, path string, create bool) (leaf, directoryID string, err error) {
+ if path == "" {
+ _, leaf = SplitPath(dc.root)
+ directoryID, err = dc.RootParentID(ctx, create)
+ } else {
+ var directory string
+ directory, leaf = SplitPath(path)
+ directoryID, err = dc.FindDir(ctx, directory, create)
+ }
+ return leaf, directoryID, err
+}
+
+// FindRoot finds the root directory if not already found
+//
+// If successful this changes the root of the cache from the true root
+// to the root specified by the path passed into New.
+//
+// Resets the root directory.
+//
+// If create is set it will make the directory if not found
+func (dc *DirCache) FindRoot(ctx context.Context, create bool) error {
+ dc.mu.Lock()
+ defer dc.mu.Unlock()
+ return dc._findRoot(ctx, create)
+}
+
+// Fill populates the cache from source
+//
+// This is particularly useful when the source cache has already been partially or fully populated
+func (dc *DirCache) Fill(src *DirCache) *DirCache {
+ dc.mu.Lock()
+ defer dc.mu.Unlock()
+ return dc._fill(src)
+}
+
+// Call with mu held
+func (dc *DirCache) _fill(src *DirCache) *DirCache {
+ dc.cacheMu.Lock()
+ defer dc.cacheMu.Unlock()
+ dc.cache = src.cache
+ dc.invCache = src.invCache
+ return dc
+}
+
+// _findRoot finds the root directory if not already found
+//
+// Resets the root directory.
+//
+// If create is set it will make the directory if not found.
+//
+// Call with mu held
+func (dc *DirCache) _findRoot(ctx context.Context, create bool) error {
+ if dc.foundRoot {
+ return nil
+ }
+ rootID, err := dc._findDir(ctx, dc.root, create)
+ if err != nil {
+ return err
+ }
+ dc.foundRoot = true
+ dc.rootID = rootID
+
+ // Find the parent of the root while we still have the root
+ // directory tree cached
+ rootParentPath, _ := SplitPath(dc.root)
+ if rootParentID, ok := dc.Get(rootParentPath); ok {
+ dc.rootParentID = rootParentID
+ }
+
+ // Reset the tree based on dc.root
+ dc.Flush()
+ // Put the root directory in
+ dc.Put("", dc.rootID)
+ return nil
+}
+
+// FoundRoot returns whether the root directory has been found yet
+func (dc *DirCache) FoundRoot() bool {
+ dc.mu.Lock()
+ defer dc.mu.Unlock()
+ return dc.foundRoot
+}
+
+// RootID returns the ID of the root directory
+//
+// If create is set it will make the root directory if not found
+func (dc *DirCache) RootID(ctx context.Context, create bool) (ID string, err error) {
+ dc.mu.Lock()
+ defer dc.mu.Unlock()
+ err = dc._findRoot(ctx, create)
+ if err != nil {
+ return "", err
+ }
+ return dc.rootID, nil
+}
+
+// RootParentID returns the ID of the parent of the root directory
+//
+// If create is set it will make the root parent directory if not found (but not the root)
+func (dc *DirCache) RootParentID(ctx context.Context, create bool) (ID string, err error) {
+ dc.mu.Lock()
+ defer dc.mu.Unlock()
+ if !dc.foundRoot || dc.rootParentID == "" {
+ if dc.root == "" {
+ return "", errors.New("is root directory")
+ }
+ // Find the rootParentID without creating the root
+ rootParent, _ := SplitPath(dc.root)
+ rootParentID, err := dc._findDir(ctx, rootParent, create)
+ if err != nil {
+ return "", err
+ }
+ dc.rootParentID = rootParentID
+ } else if dc.rootID == dc.trueRootID {
+ return "", errors.New("is root directory")
+ }
+ if dc.rootParentID == "" {
+ return "", errors.New("internal error: didn't find rootParentID")
+ }
+ return dc.rootParentID, nil
+}
+
+// ResetRoot resets the root directory to the absolute root and clears
+// the DirCache
+func (dc *DirCache) ResetRoot() {
+ dc.mu.Lock()
+ defer dc.mu.Unlock()
+ dc.foundRoot = false
+ dc.Flush()
+
+ // Put the true root in
+ dc.rootID = dc.trueRootID
+
+ // Put the root directory in
+ dc.Put("", dc.rootID)
+}
+
+// DirMove prepares to move the directory (srcDC, srcRoot, srcRemote)
+// into the directory (dc, dstRoot, dstRemote)
+//
+// It does all the checking, creates intermediate directories and
+// returns leafs and IDs ready for the move.
+//
+// This returns:
+//
+// - srcID - ID of the source directory
+// - srcDirectoryID - ID of the parent of the source directory
+// - srcLeaf - leaf name of the source directory
+// - dstDirectoryID - ID of the parent of the destination directory
+// - dstLeaf - leaf name of the destination directory
+//
+// These should be used to do the actual move then
+// srcDC.FlushDir(srcRemote) should be called.
+func (dc *DirCache) DirMove(
+ ctx context.Context, srcDC *DirCache, srcRoot, srcRemote, dstRoot, dstRemote string) (srcID, srcDirectoryID, srcLeaf, dstDirectoryID, dstLeaf string, err error) {
+ var (
+ dstDC = dc
+ srcPath = path.Join(srcRoot, srcRemote)
+ dstPath = path.Join(dstRoot, dstRemote)
+ )
+
+ // Refuse to move to or from the root
+ if srcPath == "" || dstPath == "" {
+ // fs.Debugf(src, "DirMove error: Can't move root")
+ err = errors.New("can't move root directory")
+ return
+ }
+
+ // Find ID of dst parent, creating subdirs if necessary
+ dstLeaf, dstDirectoryID, err = dstDC.FindPath(ctx, dstRemote, true)
+ if err != nil {
+ return
+ }
+
+ // Check destination does not exist
+ _, err = dstDC.FindDir(ctx, dstRemote, false)
+ if err == fs.ErrorDirNotFound {
+ // OK
+ } else if err != nil {
+ return
+ } else {
+ err = fs.ErrorDirExists
+ return
+ }
+
+ // Find ID of src parent
+ srcLeaf, srcDirectoryID, err = srcDC.FindPath(ctx, srcRemote, false)
+ if err != nil {
+ return
+ }
+
+ // Find ID of src
+ srcID, err = srcDC.FindDir(ctx, srcRemote, false)
+ if err != nil {
+ return
+ }
+
+ return
+}
diff --git a/backend/115/helper.go b/backend/115/helper.go
index a3f41b04cef56..5b72b31cf8c31 100644
--- a/backend/115/helper.go
+++ b/backend/115/helper.go
@@ -7,6 +7,7 @@ import (
"fmt"
"net/http"
"net/url"
+ "path"
"strconv"
"strings"
"time"
@@ -47,7 +48,7 @@ func (f *Fs) listOrder(ctx context.Context, cid, order, asc string) (err error)
if err != nil {
return
} else if !info.State {
- return fmt.Errorf("API State false: %s (%d)", info.Error, info.Errno)
+ return fmt.Errorf("API Error: %s (%d)", info.Error, info.Errno)
}
return
}
@@ -55,42 +56,17 @@ func (f *Fs) listOrder(ctx context.Context, cid, order, asc string) (err error)
// Lists the directory required calling the user function on each item found
//
// If the user fn ever returns true then it early exits with found = true
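+//
+// limit sets the listing chunk size, while filesOnly and dirsOnly restrict
+// the results to files or directories respectively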
-func (f *Fs) listAll(ctx context.Context, dirID string, fn listAllFn) (found bool, err error) {
+func (f *Fs) listAll(ctx context.Context, dirID string, limit int, filesOnly, dirsOnly bool, fn listAllFn) (found bool, err error) {
if f.isShare {
- return f.listShare(ctx, dirID, fn)
+ return f.listShare(ctx, dirID, limit, fn)
}
order := "user_ptime"
asc := "0"
// Url Parameters
- params := url.Values{}
- params.Set("aid", "1")
- params.Set("cid", dirID)
- params.Set("o", order) // following options are avaialbe for listing order
- // * file_name
- // * file_size
- // * file_type
- // * user_ptime (create_time) == sorted by tp
- // * user_utime (modify_time) == sorted by te
- // * user_otime (last_opened) == sorted by to
- params.Set("asc", asc) // ascending order "0" or "1"
- params.Set("show_dir", "1") // this is not for showing dirs_only. It will list all files in dir recursively if "0".
- params.Set("limit", strconv.Itoa(f.opt.ListChunk))
- params.Set("snap", "0")
- params.Set("record_open_time", "1")
- params.Set("count_folders", "1")
- params.Set("format", "json")
- params.Set("fc_mix", "0")
-
- opts := rest.Opts{
- Method: "GET",
- RootURL: "https://webapi.115.com/files",
- Parameters: params,
- }
- if order == "file_name" {
- params.Set("natsort", "1")
- opts.RootURL = "https://aps.115.com/natsort/files.php"
- }
+ params := listParams(dirID, limit)
+ params.Set("o", order)
+ params.Set("asc", asc)
offset := 0
retries := 0 // to prevent infinite loop
@@ -98,18 +74,17 @@ OUTER:
for {
params.Set("offset", strconv.Itoa(offset))
- var info api.FileList
- var resp *http.Response
- err = f.pacer.Call(func() (bool, error) {
- resp, err = f.srv.CallJSON(ctx, &opts, nil, &info)
- return shouldRetry(ctx, resp, &info, err)
- })
+ info, err := f.getFiles(ctx, params)
if err != nil {
- return found, fmt.Errorf("couldn't list files: %w", err)
- } else if !info.State {
- return found, fmt.Errorf("API State false: %q (%d)", info.Error, info.ErrNo)
+ return found, fmt.Errorf("couldn't get files: %w", err)
}
- if len(info.Files) == 0 {
+ if info.Count == 0 {
+ break
+ }
+ if filesOnly && info.FileCount == 0 {
+ break
+ }
+ if dirsOnly && info.FolderCount == 0 {
break
}
if order != info.Order || asc != info.IsAsc.String() {
@@ -123,13 +98,19 @@ OUTER:
continue // retry with same offset
}
for _, item := range info.Files {
+ if filesOnly && item.IsDir() {
+ continue
+ }
+ if dirsOnly && !item.IsDir() {
+ continue
+ }
item.Name = f.opt.Enc.ToStandardName(item.Name)
if fn(item) {
found = true
break OUTER
}
}
- offset = info.Offset + f.opt.ListChunk
+ offset = info.Offset + len(info.Files)
if offset >= info.Count {
break
}
@@ -137,6 +118,73 @@ OUTER:
return
}
+// listParams generates a default parameter set for the list API
+func listParams(dirID string, limit int) url.Values {
+ params := url.Values{}
+ params.Set("aid", "1")
+ params.Set("cid", dirID)
+ params.Set("o", "user_ptime") // following options are avaialbe for listing order
+ // * file_name
+ // * file_size
+ // * file_type
+ // * user_ptime (create_time) == sorted by tp
+ // * user_utime (modify_time) == sorted by te
+ // * user_otime (last_opened) == sorted by to
+ params.Set("asc", "0") // ascending order "0" or "1"
+ params.Set("show_dir", "1") // this is not for showing dirs_only. It will list all files in dir recursively if "0".
+ params.Set("limit", strconv.Itoa(limit))
+ params.Set("snap", "0")
+ params.Set("record_open_time", "1")
+ params.Set("count_folders", "1")
+ params.Set("format", "json")
+ params.Set("fc_mix", "0")
+ params.Set("offset", "0")
+ return params
+}
+
+// getFiles fetches a single chunk of the file listing filtered by the given parameters
+func (f *Fs) getFiles(ctx context.Context, params url.Values) (info *api.FileList, err error) {
+ opts := rest.Opts{
+ Method: "GET",
+ RootURL: "https://webapi.115.com/files",
+ Parameters: params,
+ }
+ if params.Get("o") == "file_name" {
+ params.Set("natsort", "1")
+ opts.RootURL = "https://aps.115.com/natsort/files.php"
+ }
+
+ var resp *http.Response
+ err = f.pacer.Call(func() (bool, error) {
+ resp, err = f.srv.CallJSON(ctx, &opts, nil, &info)
+ return shouldRetry(ctx, resp, &info, err)
+ })
+ if err != nil {
+ return
+ } else if !info.State {
+ return nil, fmt.Errorf("API Error: %q (%d)", info.Error, info.ErrNo)
+ }
+ return
+}
+
+// getDirPath returns the absolute path of the directory with ID dirID
+func (f *Fs) getDirPath(ctx context.Context, dirID string) (dir string, err error) {
+ if dirID == "0" {
+ return "", nil
+ }
+ info, err := f.getFiles(ctx, listParams(dirID, 32))
+ if err != nil {
+ return "", fmt.Errorf("couldn't get files: %w", err)
+ }
+ for _, p := range info.Path {
+ if p.CID.String() == "0" {
+ continue
+ }
+ dir = path.Join(dir, f.opt.Enc.ToStandardName(p.Name))
+ }
+ return
+}
+
func (f *Fs) makeDir(ctx context.Context, pid, name string) (info *api.NewDir, err error) {
form := url.Values{}
form.Set("pid", pid)
@@ -159,7 +207,7 @@ func (f *Fs) makeDir(ctx context.Context, pid, name string) (info *api.NewDir, e
if info.Errno == 20004 {
return nil, fs.ErrorDirExists
}
- return nil, fmt.Errorf("API State false: %s (%d)", info.Error, info.Errno)
+ return nil, fmt.Errorf("API Error: %s (%d)", info.Error, info.Errno)
}
return
}
@@ -184,7 +232,7 @@ func (f *Fs) renameFile(ctx context.Context, fid, newName string) (err error) {
if err != nil {
return
} else if !info.State {
- return fmt.Errorf("API State false: %s (%d)", info.Error, info.Errno)
+ return fmt.Errorf("API Error: %s (%d)", info.Error, info.Errno)
}
return
}
@@ -211,7 +259,7 @@ func (f *Fs) deleteFiles(ctx context.Context, fids []string) (err error) {
if err != nil {
return
} else if !info.State {
- return fmt.Errorf("API State false: %s (%d)", info.Error, info.Errno)
+ return fmt.Errorf("API Error: %s (%d)", info.Error, info.Errno)
}
return
}
@@ -239,7 +287,7 @@ func (f *Fs) moveFiles(ctx context.Context, fids []string, pid string) (err erro
if err != nil {
return
} else if !info.State {
- return fmt.Errorf("API State false: %s (%d)", info.Error, info.Errno)
+ return fmt.Errorf("API Error: %s (%d)", info.Error, info.Errno)
}
return
}
@@ -266,7 +314,7 @@ func (f *Fs) copyFiles(ctx context.Context, fids []string, pid string) (err erro
if err != nil {
return
} else if !info.State {
- return fmt.Errorf("API State false: %s (%d)", info.Error, info.Errno)
+ return fmt.Errorf("API Error: %s (%d)", info.Error, info.Errno)
}
return
}
@@ -286,7 +334,7 @@ func (f *Fs) indexInfo(ctx context.Context) (data *api.IndexInfo, err error) {
if err != nil {
return
} else if !info.State {
- return nil, fmt.Errorf("API State false: %s (%d)", info.Error, info.Errno)
+ return nil, fmt.Errorf("API Error: %s (%d)", info.Error, info.Errno)
}
if data = info.Data.IndexInfo; data == nil {
return nil, errors.New("no data")
@@ -316,7 +364,7 @@ func (f *Fs) _getDownloadURL(ctx context.Context, input []byte) (output []byte,
if err != nil {
return
} else if !info.State {
- return nil, nil, fmt.Errorf("API State false: %s (%d)", info.Error, info.Errno)
+ return nil, nil, fmt.Errorf("API Error: %s (%d)", info.Error, info.Errno)
}
if info.Data.EncodedData == "" {
return nil, nil, errors.New("no data")
@@ -377,7 +425,7 @@ func (f *Fs) getDirID(ctx context.Context, dir string) (cid string, err error) {
if err != nil {
return
} else if !info.State {
- return "", fmt.Errorf("API State false: %s (%d)", info.Error, info.Errno)
+ return "", fmt.Errorf("API Error: %s (%d)", info.Error, info.Errno)
}
cid = info.ID.String()
if cid == "0" && dir != "/" {
@@ -386,13 +434,18 @@ func (f *Fs) getDirID(ctx context.Context, dir string) (cid string, err error) {
return
}
-// getFile gets information of a file or directory by its ID
-func (f *Fs) getFile(ctx context.Context, fid string) (file *api.File, err error) {
+// getFile gets information of a file or directory by its ID or pickCode
+func (f *Fs) getFile(ctx context.Context, fid, pc string) (file *api.File, err error) {
if fid == "0" {
return nil, errors.New("can't get information about root directory")
}
params := url.Values{}
- params.Set("file_id", fid)
+ if fid != "" {
+ params.Set("file_id", fid)
+ }
+ if pc != "" {
+ params.Set("pick_code", pc)
+ }
opts := rest.Opts{
Method: "GET",
Path: "/files/get_info",
@@ -408,7 +461,7 @@ func (f *Fs) getFile(ctx context.Context, fid string) (file *api.File, err error
if err != nil {
return
} else if !info.State {
- return nil, fmt.Errorf("API State false: %s (%d)", info.Message, info.Code)
+ return nil, fmt.Errorf("API Error: %s (%d)", info.Message, info.Code)
}
if len(info.Data) > 0 {
file = info.Data[0]
@@ -524,13 +577,13 @@ func parseShareLink(rawURL string) (shareCode, receiveCode string, err error) {
// listing filesystem from share link
//
// no need user authorization by cookies
-func (f *Fs) listShare(ctx context.Context, dirID string, fn listAllFn) (found bool, err error) {
+func (f *Fs) listShare(ctx context.Context, dirID string, limit int, fn listAllFn) (found bool, err error) {
// Url Parameters
params := url.Values{}
params.Set("share_code", f.opt.ShareCode)
params.Set("receive_code", f.opt.ReceiveCode)
params.Set("cid", dirID)
- params.Set("limit", strconv.Itoa(f.opt.ListChunk))
+ params.Set("limit", strconv.Itoa(limit))
opts := rest.Opts{
Method: "GET",
@@ -552,7 +605,7 @@ OUTER:
if err != nil {
return found, fmt.Errorf("couldn't list files: %w", err)
} else if !info.State {
- return found, fmt.Errorf("API State false: %q (%d)", info.Error, info.Errno)
+ return found, fmt.Errorf("API Error: %q (%d)", info.Error, info.Errno)
}
if len(info.Data.List) == 0 {
break
@@ -598,7 +651,7 @@ func (f *Fs) copyFromShare(ctx context.Context, shareCode, receiveCode, fid, cid
if err != nil {
return
} else if !info.State {
- return fmt.Errorf("API State false: %s (%d)", info.Error, info.Errno)
+ return fmt.Errorf("API Error: %s (%d)", info.Error, info.Errno)
}
return
}
diff --git a/backend/115/mod.go b/backend/115/mod.go
index ccdbd96fb73f3..5d686c5069254 100644
--- a/backend/115/mod.go
+++ b/backend/115/mod.go
@@ -36,7 +36,7 @@ func parseRootID(s string) (rootID, receiveCode string, err error) {
// get an id of file or directory
func (f *Fs) getID(ctx context.Context, path string) (id string, err error) {
if id, _, _ := parseRootID(path); len(id) == 19 {
- info, err := f.getFile(ctx, id)
+ info, err := f.getFile(ctx, id, "")
if err != nil {
return "", fmt.Errorf("no such object with id %q: %w", id, err)
}
diff --git a/backend/115/upload.go b/backend/115/upload.go
index 22f161888efdb..f2a0d3724cf7d 100644
--- a/backend/115/upload.go
+++ b/backend/115/upload.go
@@ -50,7 +50,7 @@ func (f *Fs) getUploadBasicInfo(ctx context.Context) (err error) {
if err != nil {
return
} else if !info.State {
- return fmt.Errorf("API State false: %s (%d)", info.Error, info.Errno)
+ return fmt.Errorf("API Error: %s (%d)", info.Error, info.Errno)
}
userID := info.UserID.String()
if userID == "0" {
@@ -224,7 +224,7 @@ func (f *Fs) postUpload(v map[string]any) (*api.CallbackData, error) {
return nil, err
}
if !info.State {
- return nil, fmt.Errorf("API State false: %s (%d)", info.Message, info.Code)
+ return nil, fmt.Errorf("API Error: %s (%d)", info.Message, info.Code)
}
return info.Data, nil
}
@@ -380,6 +380,9 @@ func (f *Fs) upload(ctx context.Context, in io.Reader, src fs.ObjectInfo, remote
acc.ServerSideTransferStart()
acc.ServerSideCopyEnd(size)
}
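+ // Best effort: refresh the object's metadata via its pick code after the upload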
+ if info, err := f.getFile(ctx, "", ui.PickCode); err == nil {
+ return o, o.setMetaData(info)
+ }
return o, nil
case 7:
signKey = ui.SignKey
diff --git a/backend/local/lchmod.go b/backend/local/lchmod.go
new file mode 100644
index 0000000000000..823718dfe969c
--- /dev/null
+++ b/backend/local/lchmod.go
@@ -0,0 +1,16 @@
+//go:build windows || plan9 || js || linux
+
+package local
+
+import "os"
+
+const haveLChmod = false
+
+// lChmod changes the mode of the named file to mode. If the file is a symbolic
+// link, it changes the link, not the target. If there is an error,
+// it will be of type *PathError.
+func lChmod(name string, mode os.FileMode) error {
+ // Can't do this safely on this OS - chmoding a symlink always
+ // changes the destination.
+ return nil
+}
diff --git a/backend/local/lchmod_unix.go b/backend/local/lchmod_unix.go
new file mode 100644
index 0000000000000..f1fdc474507fc
--- /dev/null
+++ b/backend/local/lchmod_unix.go
@@ -0,0 +1,41 @@
+//go:build !windows && !plan9 && !js && !linux
+
+package local
+
+import (
+ "os"
+ "syscall"
+
+ "golang.org/x/sys/unix"
+)
+
+const haveLChmod = true
+
+// syscallMode returns the syscall-specific mode bits from Go's portable mode bits.
+//
+// Borrowed from the syscall source since it isn't public.
+func syscallMode(i os.FileMode) (o uint32) {
+ o |= uint32(i.Perm())
+ if i&os.ModeSetuid != 0 {
+ o |= syscall.S_ISUID
+ }
+ if i&os.ModeSetgid != 0 {
+ o |= syscall.S_ISGID
+ }
+ if i&os.ModeSticky != 0 {
+ o |= syscall.S_ISVTX
+ }
+ return o
+}
+
+// lChmod changes the mode of the named file to mode. If the file is a symbolic
+// link, it changes the link, not the target. If there is an error,
+// it will be of type *PathError.
+func lChmod(name string, mode os.FileMode) error {
+ // NB linux does not support AT_SYMLINK_NOFOLLOW as a parameter to fchmodat
+ // and returns ENOTSUP if you try, so we don't support this on linux
+ if e := unix.Fchmodat(unix.AT_FDCWD, name, syscallMode(mode), unix.AT_SYMLINK_NOFOLLOW); e != nil {
+ return &os.PathError{Op: "lChmod", Path: name, Err: e}
+ }
+ return nil
+}
diff --git a/backend/local/lchtimes.go b/backend/local/lchtimes.go
index c8f03ef467fce..fcabdcc34b777 100644
--- a/backend/local/lchtimes.go
+++ b/backend/local/lchtimes.go
@@ -1,4 +1,4 @@
-//go:build windows || plan9 || js
+//go:build plan9 || js
package local
diff --git a/backend/local/lchtimes_windows.go b/backend/local/lchtimes_windows.go
new file mode 100644
index 0000000000000..a6dec9a121212
--- /dev/null
+++ b/backend/local/lchtimes_windows.go
@@ -0,0 +1,19 @@
+//go:build windows
+
+package local
+
+import (
+ "time"
+)
+
+const haveLChtimes = true
+
+// lChtimes changes the access and modification times of the named
+// link, similar to the Unix utime() or utimes() functions.
+//
+// The underlying filesystem may truncate or round the values to a
+// less precise time unit.
+// If there is an error, it will be of type *PathError.
+func lChtimes(name string, atime time.Time, mtime time.Time) error {
+ return setTimes(name, atime, mtime, time.Time{}, true)
+}
diff --git a/backend/local/local_internal_test.go b/backend/local/local_internal_test.go
index ea0fdc765e401..b3b9b8ba16145 100644
--- a/backend/local/local_internal_test.go
+++ b/backend/local/local_internal_test.go
@@ -268,22 +268,66 @@ func TestMetadata(t *testing.T) {
r := fstest.NewRun(t)
const filePath = "metafile.txt"
when := time.Now()
- const dayLength = len("2001-01-01")
- whenRFC := when.Format(time.RFC3339Nano)
r.WriteFile(filePath, "metadata file contents", when)
f := r.Flocal.(*Fs)
+ // Set fs into "-l" / "--links" mode
+ f.opt.TranslateSymlinks = true
+
+ // Write a symlink to the file
+ symlinkPath := "metafile-link.txt"
+ osSymlinkPath := filepath.Join(f.root, symlinkPath)
+ symlinkPath += linkSuffix
+ require.NoError(t, os.Symlink(filePath, osSymlinkPath))
+ symlinkModTime := fstest.Time("2002-02-03T04:05:10.123123123Z")
+ require.NoError(t, lChtimes(osSymlinkPath, symlinkModTime, symlinkModTime))
+
// Get the object
obj, err := f.NewObject(ctx, filePath)
require.NoError(t, err)
o := obj.(*Object)
+ // Get the symlink object
+ symlinkObj, err := f.NewObject(ctx, symlinkPath)
+ require.NoError(t, err)
+ symlinkO := symlinkObj.(*Object)
+
+ // Record metadata for o
+ oMeta, err := o.Metadata(ctx)
+ require.NoError(t, err)
+
+ // Test symlink first to check it doesn't mess up file
+ t.Run("Symlink", func(t *testing.T) {
+ testMetadata(t, r, symlinkO, symlinkModTime)
+ })
+
+ // Read it again
+ oMetaNew, err := o.Metadata(ctx)
+ require.NoError(t, err)
+
+ // Check that operating on the symlink didn't change the file it was pointing to
+ // See: https://github.com/rclone/rclone/security/advisories/GHSA-hrxh-9w67-g4cv
+ assert.Equal(t, oMeta, oMetaNew, "metadata setting on symlink messed up file")
+
+ // Now run the same tests on the file
+ t.Run("File", func(t *testing.T) {
+ testMetadata(t, r, o, when)
+ })
+}
+
+func testMetadata(t *testing.T, r *fstest.Run, o *Object, when time.Time) {
+ ctx := context.Background()
+ whenRFC := when.Format(time.RFC3339Nano)
+ const dayLength = len("2001-01-01")
+
+ f := r.Flocal.(*Fs)
features := f.Features()
- var hasXID, hasAtime, hasBtime bool
+ var hasXID, hasAtime, hasBtime, canSetXattrOnLinks bool
switch runtime.GOOS {
case "darwin", "freebsd", "netbsd", "linux":
hasXID, hasAtime, hasBtime = true, true, true
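+ // Linux does not allow user.* extended attributes on symlinks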
+ canSetXattrOnLinks = runtime.GOOS != "linux"
case "openbsd", "solaris":
hasXID, hasAtime = true, true
case "windows":
@@ -306,6 +350,10 @@ func TestMetadata(t *testing.T) {
require.NoError(t, err)
assert.Nil(t, m)
+ if !canSetXattrOnLinks && o.translatedLink {
+ t.Skip("Skip remainder of test as can't set xattr on symlinks on this OS")
+ }
+
inM := fs.Metadata{
"potato": "chips",
"cabbage": "soup",
@@ -320,18 +368,21 @@ func TestMetadata(t *testing.T) {
})
checkTime := func(m fs.Metadata, key string, when time.Time) {
+ t.Helper()
mt, ok := o.parseMetadataTime(m, key)
assert.True(t, ok)
dt := mt.Sub(when)
precision := time.Second
- assert.True(t, dt >= -precision && dt <= precision, fmt.Sprintf("%s: dt %v outside +/- precision %v", key, dt, precision))
+ assert.True(t, dt >= -precision && dt <= precision, fmt.Sprintf("%s: dt %v outside +/- precision %v want %v got %v", key, dt, precision, when, mt))
}
checkInt := func(m fs.Metadata, key string, base int) int {
+ t.Helper()
value, ok := o.parseMetadataInt(m, key, base)
assert.True(t, ok)
return value
}
+
t.Run("Read", func(t *testing.T) {
m, err := o.Metadata(ctx)
require.NoError(t, err)
@@ -341,13 +392,12 @@ func TestMetadata(t *testing.T) {
checkInt(m, "mode", 8)
checkTime(m, "mtime", when)
- assert.Equal(t, len(whenRFC), len(m["mtime"]))
assert.Equal(t, whenRFC[:dayLength], m["mtime"][:dayLength])
- if hasAtime {
+ if hasAtime && !o.translatedLink { // symlinks generally don't record atime
checkTime(m, "atime", when)
}
- if hasBtime {
+ if hasBtime && !o.translatedLink { // symlinks generally don't record btime
checkTime(m, "btime", when)
}
if hasXID {
@@ -371,6 +421,10 @@ func TestMetadata(t *testing.T) {
"mode": "0767",
"potato": "wedges",
}
+ if !canSetXattrOnLinks && o.translatedLink {
+ // Don't change xattr if not supported on symlinks
+ delete(newM, "potato")
+ }
err := o.writeMetadata(newM)
require.NoError(t, err)
@@ -380,7 +434,11 @@ func TestMetadata(t *testing.T) {
mode := checkInt(m, "mode", 8)
if runtime.GOOS != "windows" {
- assert.Equal(t, 0767, mode&0777, fmt.Sprintf("mode wrong - expecting 0767 got 0%o", mode&0777))
+ expectedMode := 0767
+ if o.translatedLink && runtime.GOOS == "linux" {
+ expectedMode = 0777 // perms of symlinks always read as 0777 on linux
+ }
+ assert.Equal(t, expectedMode, mode&0777, fmt.Sprintf("mode wrong - expecting 0%o got 0%o", expectedMode, mode&0777))
}
checkTime(m, "mtime", newMtime)
@@ -390,7 +448,7 @@ func TestMetadata(t *testing.T) {
if haveSetBTime {
checkTime(m, "btime", newBtime)
}
- if xattrSupported {
+ if xattrSupported && (canSetXattrOnLinks || !o.translatedLink) {
assert.Equal(t, "wedges", m["potato"])
}
})
diff --git a/backend/local/metadata.go b/backend/local/metadata.go
index 7ab69af309f73..75b195e64049e 100644
--- a/backend/local/metadata.go
+++ b/backend/local/metadata.go
@@ -105,7 +105,11 @@ func (o *Object) writeMetadataToFile(m fs.Metadata) (outErr error) {
}
if haveSetBTime {
if btimeOK {
- err = setBTime(o.path, btime)
+ if o.translatedLink {
+ err = lsetBTime(o.path, btime)
+ } else {
+ err = setBTime(o.path, btime)
+ }
if err != nil {
outErr = fmt.Errorf("failed to set birth (creation) time: %w", err)
}
@@ -121,7 +125,11 @@ func (o *Object) writeMetadataToFile(m fs.Metadata) (outErr error) {
if runtime.GOOS == "windows" || runtime.GOOS == "plan9" {
fs.Debugf(o, "Ignoring request to set ownership %o.%o on this OS", gid, uid)
} else {
- err = os.Chown(o.path, uid, gid)
+ if o.translatedLink {
+ err = os.Lchown(o.path, uid, gid)
+ } else {
+ err = os.Chown(o.path, uid, gid)
+ }
if err != nil {
outErr = fmt.Errorf("failed to change ownership: %w", err)
}
@@ -132,7 +140,16 @@ func (o *Object) writeMetadataToFile(m fs.Metadata) (outErr error) {
if mode >= 0 {
umode := uint(mode)
if umode <= math.MaxUint32 {
- err = os.Chmod(o.path, os.FileMode(umode))
+ if o.translatedLink {
+ if haveLChmod {
+ err = lChmod(o.path, os.FileMode(umode))
+ } else {
+ fs.Debugf(o, "Unable to set mode %v on a symlink on this OS", os.FileMode(umode))
+ err = nil
+ }
+ } else {
+ err = os.Chmod(o.path, os.FileMode(umode))
+ }
if err != nil {
outErr = fmt.Errorf("failed to change permissions: %w", err)
}
diff --git a/backend/local/setbtime.go b/backend/local/setbtime.go
index 5c946348fb1f2..bb37b173511cb 100644
--- a/backend/local/setbtime.go
+++ b/backend/local/setbtime.go
@@ -13,3 +13,9 @@ func setBTime(name string, btime time.Time) error {
// Does nothing
return nil
}
+
+// lsetBTime changes the birth time of the link passed in
+func lsetBTime(name string, btime time.Time) error {
+ // Does nothing
+ return nil
+}
diff --git a/backend/local/setbtime_windows.go b/backend/local/setbtime_windows.go
index 510a4da6a084c..8ae4998268ee3 100644
--- a/backend/local/setbtime_windows.go
+++ b/backend/local/setbtime_windows.go
@@ -9,15 +9,20 @@ import (
const haveSetBTime = true
-// setBTime sets the birth time of the file passed in
-func setBTime(name string, btime time.Time) (err error) {
+// setTimes sets any of atime, mtime or btime on the named file.
+// If link is set it acts on the symlink itself rather than its target.
+func setTimes(name string, atime, mtime, btime time.Time, link bool) (err error) {
pathp, err := syscall.UTF16PtrFromString(name)
if err != nil {
return err
}
+ fileFlag := uint32(syscall.FILE_FLAG_BACKUP_SEMANTICS)
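+ // FILE_FLAG_OPEN_REPARSE_POINT opens the symlink (reparse point) itself
+ // rather than following it to its target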
+ if link {
+ fileFlag |= syscall.FILE_FLAG_OPEN_REPARSE_POINT
+ }
h, err := syscall.CreateFile(pathp,
syscall.FILE_WRITE_ATTRIBUTES, syscall.FILE_SHARE_WRITE, nil,
- syscall.OPEN_EXISTING, syscall.FILE_FLAG_BACKUP_SEMANTICS, 0)
+ syscall.OPEN_EXISTING, fileFlag, 0)
if err != nil {
return err
}
@@ -27,6 +32,28 @@ func setBTime(name string, btime time.Time) (err error) {
err = closeErr
}
}()
- bFileTime := syscall.NsecToFiletime(btime.UnixNano())
- return syscall.SetFileTime(h, &bFileTime, nil, nil)
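+ // Build a Filetime for each time that is set; a nil pointer tells
+ // SetFileTime to leave that timestamp unchanged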
+ var patime, pmtime, pbtime *syscall.Filetime
+ if !atime.IsZero() {
+ t := syscall.NsecToFiletime(atime.UnixNano())
+ patime = &t
+ }
+ if !mtime.IsZero() {
+ t := syscall.NsecToFiletime(mtime.UnixNano())
+ pmtime = &t
+ }
+ if !btime.IsZero() {
+ t := syscall.NsecToFiletime(btime.UnixNano())
+ pbtime = &t
+ }
+ return syscall.SetFileTime(h, pbtime, patime, pmtime)
+}
+
+// setBTime sets the birth time of the file passed in
+func setBTime(name string, btime time.Time) (err error) {
+ return setTimes(name, time.Time{}, time.Time{}, btime, false)
+}
+
+// lsetBTime changes the birth time of the link passed in
+func lsetBTime(name string, btime time.Time) error {
+ return setTimes(name, time.Time{}, time.Time{}, btime, true)
}
diff --git a/backend/onedrive/api/types.go b/backend/onedrive/api/types.go
index 0332f1c12c5d8..7fe032c23dbf7 100644
--- a/backend/onedrive/api/types.go
+++ b/backend/onedrive/api/types.go
@@ -202,9 +202,14 @@ type SharingLinkType struct {
type LinkType string
const (
- ViewLinkType LinkType = "view" // ViewLinkType (role: read) A view-only sharing link, allowing read-only access.
- EditLinkType LinkType = "edit" // EditLinkType (role: write) An edit sharing link, allowing read-write access.
- EmbedLinkType LinkType = "embed" // EmbedLinkType (role: read) A view-only sharing link that can be used to embed content into a host webpage. Embed links are not available for OneDrive for Business or SharePoint.
+ // ViewLinkType (role: read) A view-only sharing link, allowing read-only access.
+ ViewLinkType LinkType = "view"
+ // EditLinkType (role: write) An edit sharing link, allowing read-write access.
+ EditLinkType LinkType = "edit"
+ // EmbedLinkType (role: read) A view-only sharing link that can be used to embed
+ // content into a host webpage. Embed links are not available for OneDrive for
+ // Business or SharePoint.
+ EmbedLinkType LinkType = "embed"
)
// LinkScope represents the scope of the link represented by this permission.
@@ -212,9 +217,12 @@ const (
type LinkScope string
const (
- AnonymousScope LinkScope = "anonymous" // AnonymousScope = Anyone with the link has access, without needing to sign in. This may include people outside of your organization.
- OrganizationScope LinkScope = "organization" // OrganizationScope = Anyone signed into your organization (tenant) can use the link to get access. Only available in OneDrive for Business and SharePoint.
-
+ // AnonymousScope = Anyone with the link has access, without needing to sign in.
+ // This may include people outside of your organization.
+ AnonymousScope LinkScope = "anonymous"
+ // OrganizationScope = Anyone signed into your organization (tenant) can use the
+ // link to get access. Only available in OneDrive for Business and SharePoint.
+ OrganizationScope LinkScope = "organization"
)
// PermissionsType provides information about a sharing permission granted for a DriveItem resource.
@@ -236,10 +244,14 @@ type PermissionsType struct {
type Role string
const (
- ReadRole Role = "read" // ReadRole provides the ability to read the metadata and contents of the item.
- WriteRole Role = "write" // WriteRole provides the ability to read and modify the metadata and contents of the item.
- OwnerRole Role = "owner" // OwnerRole represents the owner role for SharePoint and OneDrive for Business.
- MemberRole Role = "member" // MemberRole represents the member role for SharePoint and OneDrive for Business.
+ // ReadRole provides the ability to read the metadata and contents of the item.
+ ReadRole Role = "read"
+ // WriteRole provides the ability to read and modify the metadata and contents of the item.
+ WriteRole Role = "write"
+ // OwnerRole represents the owner role for SharePoint and OneDrive for Business.
+ OwnerRole Role = "owner"
+ // MemberRole represents the member role for SharePoint and OneDrive for Business.
+ MemberRole Role = "member"
)
// PermissionsResponse is the response to the list permissions method
diff --git a/backend/pikpak/pikpak.go b/backend/pikpak/pikpak.go
index 24b5af2c48570..e5df9f910d899 100644
--- a/backend/pikpak/pikpak.go
+++ b/backend/pikpak/pikpak.go
@@ -575,6 +575,7 @@ func newFs(ctx context.Context, name, path string, m configmap.Mapper) (*Fs, err
if strings.Contains(err.Error(), "invalid_grant") {
return f, f.reAuthorize(ctx)
}
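+ // Any other error is fatal - returning it avoids continuing with a
+ // broken Fs and crashing later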
+ return nil, err
}
return f, nil
diff --git a/backend/s3/s3.go b/backend/s3/s3.go
index 3c62dd3c527dc..7413e8c1f66ea 100644
--- a/backend/s3/s3.go
+++ b/backend/s3/s3.go
@@ -6054,8 +6054,8 @@ func (f *Fs) OpenChunkWriter(ctx context.Context, remote string, src fs.ObjectIn
chunkSize: int64(chunkSize),
size: size,
f: f,
- bucket: mOut.Bucket,
- key: mOut.Key,
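+ // Use the bucket and key from the original request - CEPH multitenant
+ // setups may not return them in the same form in mOut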
+ bucket: ui.req.Bucket,
+ key: ui.req.Key,
uploadID: mOut.UploadId,
multiPartUploadInput: &mReq,
completedParts: make([]types.CompletedPart, 0),
diff --git a/cmd/bisync/bilib/output.go b/cmd/bisync/bilib/output.go
index ccd85c1259565..c35abde860f0d 100644
--- a/cmd/bisync/bilib/output.go
+++ b/cmd/bisync/bilib/output.go
@@ -5,20 +5,13 @@ import (
"bytes"
"log"
- "github.com/rclone/rclone/fs"
"github.com/sirupsen/logrus"
)
// CaptureOutput runs a function capturing its output.
func CaptureOutput(fun func()) []byte {
logSave := log.Writer()
- logrusSave := logrus.StandardLogger().Writer()
- defer func() {
- err := logrusSave.Close()
- if err != nil {
- fs.Errorf(nil, "error closing logrusSave: %v", err)
- }
- }()
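+ // Save the logrus output writer directly - Writer() returns a new pipe
+ // rather than the original writer, so restoring that would point logrus
+ // at the wrong output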
+ logrusSave := logrus.StandardLogger().Out
buf := &bytes.Buffer{}
log.SetOutput(buf)
logrus.SetOutput(buf)
diff --git a/cmd/bisync/log.go b/cmd/bisync/log.go
index 0d7f4b2f71993..7dce791763e5b 100644
--- a/cmd/bisync/log.go
+++ b/cmd/bisync/log.go
@@ -66,7 +66,8 @@ func quotePath(path string) string {
return escapePath(path, true)
}
-var Colors bool // Colors controls whether terminal colors are enabled
+// Colors controls whether terminal colors are enabled
+var Colors bool
// Color handles terminal colors for bisync
func Color(style string, s string) string {
diff --git a/docs/content/changelog.md b/docs/content/changelog.md
index 3f04c6746a1b3..5e31f318e32bb 100644
--- a/docs/content/changelog.md
+++ b/docs/content/changelog.md
@@ -5,6 +5,36 @@ description: "Rclone Changelog"
# Changelog
+## v1.68.2 - 2024-11-15
+
+[See commits](https://github.com/rclone/rclone/compare/v1.68.1...v1.68.2)
+
+* Security fixes
+ * local backend: CVE-2024-52522: fix permission and ownership on symlinks with `--links` and `--metadata` (Nick Craig-Wood)
+ * Only affects users using `--metadata` and `--links` and copying files to the local backend
+ * See https://github.com/rclone/rclone/security/advisories/GHSA-hrxh-9w67-g4cv
+ * build: bump github.com/golang-jwt/jwt/v4 from 4.5.0 to 4.5.1 (dependabot)
+ * This is an issue in a dependency which is used for JWT certificates
+ * See https://github.com/golang-jwt/jwt/security/advisories/GHSA-29wx-vh33-7x7r
+* Bug Fixes
+ * accounting: Fix wrong message on SIGUSR2 to enable/disable bwlimit (Nick Craig-Wood)
+ * bisync: Fix output capture restoring the wrong output for logrus (Dimitrios Slamaris)
+ * dlna: Fix loggingResponseWriter disregarding log level (Simon Bos)
+ * serve s3: Fix excess locking which was making serve s3 single-threaded (Nick Craig-Wood)
+ * doc fixes (Nick Craig-Wood, tgfisher, Alexandre Hamez, Randy Bush)
+* Local
+ * Fix permission and ownership on symlinks with `--links` and `--metadata` (Nick Craig-Wood)
+ * Fix `--copy-links` on macOS when cloning (nielash)
+* Onedrive
+ * Fix Retry-After handling to look at 503 errors also (Nick Craig-Wood)
+* Pikpak
+ * Fix cid/gcid calculations for fs.OverrideRemote (wiserain)
+ * Fix fatal crash on startup with token that can't be refreshed (Nick Craig-Wood)
+* S3
+ * Fix crash when using `--s3-download-url` after migration to SDKv2 (Nick Craig-Wood)
+ * Storj provider: fix server-side copy of files bigger than 5GB (Kaloyan Raev)
+ * Fix multitenant multipart uploads with CEPH (Nick Craig-Wood)
+
## v1.68.1 - 2024-09-24
[See commits](https://github.com/rclone/rclone/compare/v1.68.0...v1.68.1)
diff --git a/docs/content/commands/rclone.md b/docs/content/commands/rclone.md
index 7651ff0e9d249..4c861f498762e 100644
--- a/docs/content/commands/rclone.md
+++ b/docs/content/commands/rclone.md
@@ -929,7 +929,7 @@ rclone [flags]
--use-json-log Use json log format
--use-mmap Use mmap allocator (see docs)
--use-server-modtime Use server modified time instead of object metadata
- --user-agent string Set the user-agent to a specified string (default "rclone/v1.68.1")
+ --user-agent string Set the user-agent to a specified string (default "rclone/v1.68.2")
-v, --verbose count Print lots more stuff (repeat for more)
-V, --version Print the version number
--webdav-bearer-token string Bearer token instead of user/pass (e.g. a Macaroon)
diff --git a/docs/content/flags.md b/docs/content/flags.md
index ff4cd09556dd6..3e77f5c7f7c9d 100644
--- a/docs/content/flags.md
+++ b/docs/content/flags.md
@@ -115,7 +115,7 @@ Flags for general networking and HTTP stuff.
--tpslimit float Limit HTTP transactions per second to this
--tpslimit-burst int Max burst of transactions for --tpslimit (default 1)
--use-cookies Enable session cookiejar
- --user-agent string Set the user-agent to a specified string (default "rclone/v1.68.1")
+ --user-agent string Set the user-agent to a specified string (default "rclone/v1.68.2")
```
diff --git a/docs/layouts/partials/version.html b/docs/layouts/partials/version.html
index 4dac69b9afc80..8546614eb93b6 100644
--- a/docs/layouts/partials/version.html
+++ b/docs/layouts/partials/version.html
@@ -1 +1 @@
-v1.69.0-3
+v1.69.0-4
\ No newline at end of file
diff --git a/go.mod b/go.mod
index 095496fecdb5c..8d70dde25c0a6 100644
--- a/go.mod
+++ b/go.mod
@@ -232,7 +232,7 @@ require (
github.com/ProtonMail/go-crypto v1.0.0
github.com/aead/ecdh v0.2.0
github.com/aliyun/aliyun-oss-go-sdk v3.0.2+incompatible
- github.com/golang-jwt/jwt/v4 v4.5.0
+ github.com/golang-jwt/jwt/v4 v4.5.1
github.com/golang/groupcache v0.0.0-20210331224755-41bb18bfe9da // indirect
github.com/pkg/xattr v0.4.9
golang.org/x/mobile v0.0.0-20240716161057-1ad2df20a8b6
diff --git a/go.sum b/go.sum
index acd43c1eeb105..7fbe964669eeb 100644
--- a/go.sum
+++ b/go.sum
@@ -282,8 +282,8 @@ github.com/gofrs/flock v0.8.1 h1:+gYjHKf32LDeiEEFhQaotPbLuUXjY5ZqxKgXy7n59aw=
github.com/gofrs/flock v0.8.1/go.mod h1:F1TvTiK9OcQqauNUHlbJvyl9Qa1QvF/gOUDKA14jxHU=
github.com/gogo/protobuf v1.3.2 h1:Ov1cvc58UF3b5XjBnZv7+opcTcQFZebYjWzi34vdm4Q=
github.com/gogo/protobuf v1.3.2/go.mod h1:P1XiOD3dCwIKUDQYPy72D8LYyHL2YPYrpS2s69NZV8Q=
-github.com/golang-jwt/jwt/v4 v4.5.0 h1:7cYmW1XlMY7h7ii7UhUyChSgS5wUJEnm9uZVTGqOWzg=
-github.com/golang-jwt/jwt/v4 v4.5.0/go.mod h1:m21LjoU+eqJr34lmDMbreY2eSTRJ1cv77w39/MY0Ch0=
+github.com/golang-jwt/jwt/v4 v4.5.1 h1:JdqV9zKUdtaa9gdPlywC3aeoEsR681PlKC+4F5gQgeo=
+github.com/golang-jwt/jwt/v4 v4.5.1/go.mod h1:m21LjoU+eqJr34lmDMbreY2eSTRJ1cv77w39/MY0Ch0=
github.com/golang-jwt/jwt/v5 v5.2.1 h1:OuVbFODueb089Lh128TAcimifWaLhJwVflnrgM17wHk=
github.com/golang-jwt/jwt/v5 v5.2.1/go.mod h1:pqrtFR0X4osieyHYxtmOUWsAWrfe1Q5UVIyoH402zdk=
github.com/golang/glog v0.0.0-20160126235308-23def4e6c14b/go.mod h1:SBH7ygxi8pfUlaOkMMuAQtPIUF8ecWP5IEl/CR7VP2Q=
diff --git a/lib/multipart/multipart.go b/lib/multipart/multipart.go
index c388acc01a9ff..1545f4ced311c 100644
--- a/lib/multipart/multipart.go
+++ b/lib/multipart/multipart.go
@@ -17,7 +17,8 @@ import (
)
const (
- BufferSize = 1024 * 1024 // BufferSize is the default size of the pages used in the reader
+ // BufferSize is the default size of the pages used in the reader
+ BufferSize = 1024 * 1024
bufferCacheSize = 64 // max number of buffers to keep in cache
bufferCacheFlushTime = 5 * time.Second // flush the cached buffers after this long
)
diff --git a/rclone.1 b/rclone.1
index 092f42a08a2a4..a16e4d03fefe1 100644
--- a/rclone.1
+++ b/rclone.1
@@ -1,7 +1,7 @@
.\"t
.\" Automatically generated by Pandoc 2.9.2.1
.\"
-.TH "rclone" "1" "Sep 24, 2024" "User Manual" ""
+.TH "rclone" "1" "Nov 15, 2024" "User Manual" ""
.hy
.SH Rclone syncs your files to cloud storage
.PP
@@ -20995,6 +20995,12 @@ See above for the order filter flags are processed in.
Arrange the order of filter rules with the most restrictive first and
work down.
.PP
+Lines starting with # or ; are ignored, and can be used to write
+comments.
+Inline comments are not supported.
+\f[I]Use \f[CI]-vv --dump filters\f[I] to see how they appear in the
+final regexp.\f[R]
+.PP
E.g.
for \f[C]filter-file.txt\f[R]:
.IP
@@ -21005,6 +21011,7 @@ for \f[C]filter-file.txt\f[R]:
+ *.jpg
+ *.png
+ file2.avi
+- /dir/tmp/** # WARNING! This text will be treated as part of the path.
- /dir/Trash/**
+ /dir/**
# exclude everything else
@@ -25005,7 +25012,7 @@ pCloud
T}@T{
MD5, SHA1 \[u2077]
T}@T{
-R
+R/W
T}@T{
No
T}@T{
@@ -27759,7 +27766,7 @@ Flags for general networking and HTTP stuff.
--tpslimit float Limit HTTP transactions per second to this
--tpslimit-burst int Max burst of transactions for --tpslimit (default 1)
--use-cookies Enable session cookiejar
- --user-agent string Set the user-agent to a specified string (default \[dq]rclone/v1.68.1\[dq])
+ --user-agent string Set the user-agent to a specified string (default \[dq]rclone/v1.68.2\[dq])
\f[R]
.fi
.SS Performance
@@ -37433,11 +37440,12 @@ copy_cutoff = 5M
\f[R]
.fi
.PP
-C14 Cold Storage (https://www.online.net/en/storage/c14-cold-storage) is
+Scaleway Glacier (https://www.scaleway.com/en/glacier-cold-storage/) is
the low-cost S3 Glacier alternative from Scaleway and it works the same
way as on S3 by accepting the \[dq]GLACIER\[dq] \f[C]storage_class\f[R].
So you can configure your remote with the
-\f[C]storage_class = GLACIER\f[R] option to upload directly to C14.
+\f[C]storage_class = GLACIER\f[R] option to upload directly to Scaleway
+Glacier.
Don\[aq]t forget that in this state you can\[aq]t read files back after,
you will need to restore them to \[dq]STANDARD\[dq] storage_class first
before being able to read them (see \[dq]restore\[dq] section above)
@@ -50256,9 +50264,9 @@ It will show you a client ID and client secret.
Make a note of these.
.RS 4
.PP
-(If you selected \[dq]External\[dq] at Step 5 continue to Step 9.
+(If you selected \[dq]External\[dq] at Step 5 continue to Step 10.
If you chose \[dq]Internal\[dq] you don\[aq]t need to publish and can
-skip straight to Step 10 but your destination drive must be part of the
+skip straight to Step 11 but your destination drive must be part of the
same Google Workspace.)
.RE
.IP "10." 4
@@ -72562,6 +72570,87 @@ Options:
.IP \[bu] 2
\[dq]error\[dq]: return an error based on option value
.SH Changelog
+.SS v1.68.2 - 2024-11-15
+.PP
+See commits (https://github.com/rclone/rclone/compare/v1.68.1...v1.68.2)
+.IP \[bu] 2
+Security fixes
+.RS 2
+.IP \[bu] 2
+local backend: CVE-2024-52522: fix permission and ownership on symlinks
+with \f[C]--links\f[R] and \f[C]--metadata\f[R] (Nick Craig-Wood)
+.RS 2
+.IP \[bu] 2
+Only affects users using \f[C]--metadata\f[R] and \f[C]--links\f[R] and
+copying files to the local backend
+.IP \[bu] 2
+See
+https://github.com/rclone/rclone/security/advisories/GHSA-hrxh-9w67-g4cv
+.RE
+.IP \[bu] 2
+build: bump github.com/golang-jwt/jwt/v4 from 4.5.0 to 4.5.1
+(dependabot)
+.RS 2
+.IP \[bu] 2
+This is an issue in a dependency which is used for JWT certificates
+.IP \[bu] 2
+See
+https://github.com/golang-jwt/jwt/security/advisories/GHSA-29wx-vh33-7x7r
+.RE
+.RE
+.IP \[bu] 2
+Bug Fixes
+.RS 2
+.IP \[bu] 2
+accounting: Fix wrong message on SIGUSR2 to enable/disable bwlimit (Nick
+Craig-Wood)
+.IP \[bu] 2
+bisync: Fix output capture restoring the wrong output for logrus
+(Dimitrios Slamaris)
+.IP \[bu] 2
+dlna: Fix loggingResponseWriter disregarding log level (Simon Bos)
+.IP \[bu] 2
+serve s3: Fix excess locking which was making serve s3 single-threaded
+(Nick Craig-Wood)
+.IP \[bu] 2
+doc fixes (Nick Craig-Wood, tgfisher, Alexandre Hamez, Randy Bush)
+.RE
+.IP \[bu] 2
+Local
+.RS 2
+.IP \[bu] 2
+Fix permission and ownership on symlinks with \f[C]--links\f[R] and
+\f[C]--metadata\f[R] (Nick Craig-Wood)
+.IP \[bu] 2
+Fix \f[C]--copy-links\f[R] on macOS when cloning (nielash)
+.RE
+.IP \[bu] 2
+Onedrive
+.RS 2
+.IP \[bu] 2
+Fix Retry-After handling to look at 503 errors also (Nick Craig-Wood)
+.RE
+.IP \[bu] 2
+Pikpak
+.RS 2
+.IP \[bu] 2
+Fix cid/gcid calculations for fs.OverrideRemote (wiserain)
+.IP \[bu] 2
+Fix fatal crash on startup with token that can\[aq]t be refreshed (Nick
+Craig-Wood)
+.RE
+.IP \[bu] 2
+S3
+.RS 2
+.IP \[bu] 2
+Fix crash when using \f[C]--s3-download-url\f[R] after migration to
+SDKv2 (Nick Craig-Wood)
+.IP \[bu] 2
+Storj provider: fix server-side copy of files bigger than 5GB (Kaloyan
+Raev)
+.IP \[bu] 2
+Fix multitenant multipart uploads with CEPH (Nick Craig-Wood)
+.RE
.SS v1.68.1 - 2024-09-24
.PP
See commits (https://github.com/rclone/rclone/compare/v1.68.0...v1.68.1)