The HTTPS certificate can be reloaded on demand by sending a SIGHUP signal on
Unix-based systems and a "paramchange" request to the running service on
Windows.
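A minimal sketch of how such an on-demand reload can work, assuming a small
certificate manager wired into `tls.Config.GetCertificate` (names, paths and
the server wiring are illustrative, not SFTPGo's actual code):

```go
package main

import (
	"crypto/tls"
	"net/http"
	"os"
	"os/signal"
	"sync"
	"syscall"
)

// certManager keeps the current certificate behind a lock so it can be
// swapped while the HTTPS listener keeps running.
type certManager struct {
	mu   sync.RWMutex
	cert *tls.Certificate
}

func (m *certManager) load(certFile, keyFile string) error {
	cert, err := tls.LoadX509KeyPair(certFile, keyFile)
	if err != nil {
		return err
	}
	m.mu.Lock()
	m.cert = &cert
	m.mu.Unlock()
	return nil
}

// getCertificate is called on every TLS handshake, so new connections
// pick up a reloaded certificate without restarting the listener.
func (m *certManager) getCertificate(*tls.ClientHelloInfo) (*tls.Certificate, error) {
	m.mu.RLock()
	defer m.mu.RUnlock()
	return m.cert, nil
}

func main() {
	m := &certManager{}
	if err := m.load("cert.pem", "key.pem"); err != nil {
		panic(err)
	}
	sigs := make(chan os.Signal, 1)
	signal.Notify(sigs, syscall.SIGHUP) // on Windows the "paramchange" request triggers the same reload
	go func() {
		for range sigs {
			m.load("cert.pem", "key.pem") // reload on SIGHUP
		}
	}()
	server := &http.Server{
		Addr:      ":8443",
		TLSConfig: &tls.Config{GetCertificate: m.getCertificate},
	}
	server.ListenAndServeTLS("", "") // empty paths: certs come from GetCertificate
}
```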
We upload a file while receiving it via SFTP, not a file stored on a local
disk. We use concurrent uploads only to be able to send files of arbitrary
size, so concurrency is not really useful here. With the concurrency set to
2 we have a max difference of 10 MB between the writer (the SFTP client) and
the reader (the AWS SDK); with the default concurrency value this difference
is 25 MB.
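The numbers follow from the uploader buffering up to Concurrency * PartSize
bytes ahead of the reader. A minimal sketch, assuming the aws-sdk-go
`s3manager` package with its default 5 MB part size:

```go
package uploader

import (
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/s3/s3manager"
)

// newUploader shows the trade-off: the uploader buffers up to
// Concurrency * PartSize bytes ahead of the reader, so a lower
// concurrency keeps the S3 upload closely in sync with the SFTP
// client write.
func newUploader(sess *session.Session) *s3manager.Uploader {
	return s3manager.NewUploader(sess, func(u *s3manager.Uploader) {
		u.Concurrency = 2                            // two parts in flight: max 10 MB ahead
		u.PartSize = s3manager.DefaultUploadPartSize // 5 MB per part
		// with the default Concurrency of 5 the gap could reach 25 MB
	})
}
```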
We have to rework TestRelativePaths and TestResolvePaths if we want to run
them for Cloud Storage on Windows too: we use filesystem paths while Cloud
Storage providers expect Unix paths.
On Windows it is important to check the local filesystem, so skip the Cloud
Storage provider test cases for now.
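The mismatch is the usual Go distinction between `path` (always "/"
separators, which Cloud Storage object keys expect) and `path/filepath`
(the OS separator). A minimal illustration:

```go
package main

import (
	"fmt"
	"path"          // always "/" separators, as Cloud Storage object keys expect
	"path/filepath" // OS-dependent separator: "\" on Windows
)

func main() {
	fmt.Println(path.Join("vdir", "file.txt"))     // vdir/file.txt on every OS
	fmt.Println(filepath.Join("vdir", "file.txt")) // vdir\file.txt on Windows
}
```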
The `memory` provider can load users from a dump obtained using the
`dumpdata` REST API. This dump file can be configured using the
dataprovider `name` configuration key. It will be loaded at startup
and can be reloaded on demand by sending a `SIGHUP` signal on Unix-based
systems and a `paramchange` request to the running service on Windows.
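For example, a configuration along these lines (the exact config layout here
is an assumption, not a verbatim sample) would point the `memory` provider at
a dump file:

```json
{
  "data_provider": {
    "driver": "memory",
    "name": "/var/lib/sftpgo/users_dump.json"
  }
}
```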
Fixes #66
Here are the main improvements:
- unlinked files work on Windows too
- uploads are now synced to the slower of the SFTP client write speed and
the upload speed to S3
This commit increases the external auth timeout to 60 seconds too.
... now that it contains all the needed patches.
Remove a hack for setstat with empty attrs; it is now handled in pkg/sftp.
Update other dependencies too.
Use os.Environ() as a base instead of an empty slice. Currently the
environment of the executed external auth program only contains the
SFTPGO_AUTHD* variables, so the program lacks additional context when
started.
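A minimal sketch of the change, assuming a hook invoked via os/exec (the
program path and the exact variable set are illustrative):

```go
package auth

import (
	"fmt"
	"os"
	"os/exec"
)

// runExternalAuth starts from os.Environ() instead of an empty slice,
// so the program inherits PATH, HOME and the rest of the parent context,
// with the SFTPGO_AUTHD* variables appended on top.
func runExternalAuth(program, username, password string) error {
	cmd := exec.Command(program)
	cmd.Env = append(os.Environ(),
		fmt.Sprintf("SFTPGO_AUTHD_USERNAME=%s", username),
		fmt.Sprintf("SFTPGO_AUTHD_PASSWORD=%s", password),
	)
	return cmd.Run()
}
```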
Login can be restricted to specific ranges of IP addresses or to a specific
IP address.
Please apply the appropriate SQL upgrade script to add the filter field to your
database.
The filter database field will allow adding other filters without requiring
a new database migration.
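A minimal sketch of how such a filter can be evaluated (function and field
names are illustrative, not SFTPGo's actual implementation), using CIDR
notation so a /32 matches a single address and a wider prefix matches a
range:

```go
package main

import (
	"fmt"
	"net"
)

// isLoginAllowed reports whether remoteIP falls inside any of the
// allowed CIDR networks.
func isLoginAllowed(remoteIP string, allowed []string) bool {
	ip := net.ParseIP(remoteIP)
	if ip == nil {
		return false
	}
	for _, a := range allowed {
		if _, ipNet, err := net.ParseCIDR(a); err == nil && ipNet.Contains(ip) {
			return true
		}
	}
	return false
}

func main() {
	// 192.168.1.0/24 allows a range, 10.0.0.1/32 a single address
	fmt.Println(isLoginAllowed("192.168.1.50", []string{"192.168.1.0/24", "10.0.0.1/32"}))
}
```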
Sometimes we can get this error:
read |0: file already closed
while reading from the command's standard error. This means that the command
has already finished, so we don't need to do anything.
This happens randomly while running the test cases on Travis.
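A minimal sketch of tolerating the race (names are illustrative):

```go
package cmdutil

import (
	"errors"
	"io"
	"os"
)

// readStderr drains the command's stderr pipe; a "file already closed"
// error just means the command has already exited, so it is not treated
// as a failure.
func readStderr(stderr io.Reader) ([]byte, error) {
	data, err := io.ReadAll(stderr)
	if err != nil && !errors.Is(err, os.ErrClosed) {
		return data, err
	}
	return data, nil
}
```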
The git push test sometimes fails when running on Travis.
The issue cannot be replicated locally, so print the logs to try to
understand what is happening.
Currently we support:
- Linux/Unix users stored in shadow/passwd files
- Pure-FTPd virtual users generated using the `pure-pw` CLI
- ProFTPD users generated using the `ftpasswd` CLI
Using something like this:
update-user <user-id> <username> --public-keys ''
public key auth will be disabled.
Using something like this:
update-user <user-id> <username> --password ''
password auth will be disabled.
SysProcAttr.Credential is not available on Windows, so we need to move the
WrapCmd test to a separate file to be able to build the test cases on
Windows; skipping the test is not enough.
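A minimal sketch of the separate file (package and test names are
illustrative): a build constraint excludes the file from Windows builds
entirely, which a runtime skip cannot do because the file would still have
to compile:

```go
// +build !windows

package utils

import (
	"syscall"
	"testing"
)

// TestWrapCmd references syscall.SysProcAttr.Credential, which does not
// exist in the Windows syscall package; the build tag above keeps this
// whole file out of Windows builds.
func TestWrapCmd(t *testing.T) {
	attr := &syscall.SysProcAttr{
		Credential: &syscall.Credential{Uid: 1000, Gid: 1000},
	}
	if attr.Credential == nil {
		t.Fatal("credential not set")
	}
}
```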
It was possible to remove an empty root dir or create a symlink to it.
We now return a Permission Denied error if we detect an attempt to remove,
rename, or symlink the root directory.
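A minimal sketch of the check (names are illustrative; the same guard
applies to rename and symlink requests):

```go
package fsutil

import (
	"os"
	"path/filepath"
)

// removePath refuses to remove the user's root directory itself and
// returns Permission Denied in that case.
func removePath(userRootDir, requestedPath string) error {
	if filepath.Clean(requestedPath) == filepath.Clean(userRootDir) {
		return os.ErrPermission
	}
	return os.Remove(requestedPath)
}
```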
We can now have permissions such as these:
{"/":["*"],"/somedir":["list","download"]}
The old permissions are automatically converted to the new structure;
no database migration is needed.
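A minimal sketch of the shape of the new structure (not SFTPGo's internal
type): a map from virtual paths to the permissions granted under them.

```go
package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	// per-directory permissions: virtual path -> granted permissions
	permissions := map[string][]string{
		"/":        {"*"},                // every permission on the root
		"/somedir": {"list", "download"}, // restricted access under /somedir
	}
	data, _ := json.Marshal(permissions)
	fmt.Println(string(data)) // {"/":["*"],"/somedir":["list","download"]}
}
```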
It seems that there are some clients that send Setstat requests with
no attrs:
https://github.com/pkg/sftp/issues/325
I have never seen this myself; anyway, we now return ErrSSHFxBadMessage
and log the client version in such cases.
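A minimal sketch of the check, assuming pkg/sftp request handlers (the
handler wiring and logging are illustrative):

```go
package handlers

import (
	"log"

	"github.com/pkg/sftp"
)

// handleSetstat rejects Setstat requests that carry no attributes and
// logs the client version to help identify the offending clients.
func handleSetstat(request *sftp.Request, clientVersion string) error {
	if request.AttrFlags() == (sftp.FileAttrFlags{}) {
		log.Printf("Setstat with no attrs, client %q", clientVersion)
		return sftp.ErrSSHFxBadMessage
	}
	// ... apply the requested attributes ...
	return nil
}
```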
and better document quota management issues for system commands.
rsync and git are not enabled in the default config, so don't install
them in the sample Dockerfiles; simply add a comment to facilitate their
installation if needed.
Fixes #44
We only need to wait for the write from the local command to
the SSH channel. There is no need to wait for the write from the SSH
channel to the local command's stdin.
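A minimal sketch of the idea (names are illustrative): `channel` stands for
the SSH channel, `stdin` and `stdout` for the local command's pipes.

```go
package shell

import (
	"io"
	"sync"
)

// pipeCommand waits only on the command-output -> channel copy; the
// channel -> stdin copy finishes on its own when the channel is closed.
func pipeCommand(channel io.ReadWriter, stdin io.WriteCloser, stdout io.Reader) {
	var wg sync.WaitGroup
	wg.Add(1)
	go func() {
		defer wg.Done()
		io.Copy(channel, stdout) // wait for this: the client must get all output
	}()
	go func() {
		io.Copy(stdin, channel) // no need to wait for this direction
		stdin.Close()
	}()
	wg.Wait()
}
```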