"proxy_allowed" setting allows to specify the allowed IP address and IP
ranges that can send the proxy header. This setting combined with
"proxy_protocol" allows to ignore the header or to reject connections
that send the proxy header from a non listed IP
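A minimal sketch of the resulting behavior, assuming the
github.com/pires/go-proxyproto package: the proxy header is accepted only from
the listed addresses, while for any other source it is either ignored or the
connection is rejected. The listener address, the sample allowed IP and the
exact-match check are illustrative only (the real setting also accepts IP ranges).

```go
package main

import (
	"log"
	"net"

	"github.com/pires/go-proxyproto"
)

// wrapListener accepts the proxy header only from the listed IPs; for every
// other source the header is ignored or, if reject is true, the connection
// is dropped.
func wrapListener(l net.Listener, allowed []string, reject bool) net.Listener {
	policy := func(upstream net.Addr) (proxyproto.Policy, error) {
		host, _, err := net.SplitHostPort(upstream.String())
		if err != nil {
			return proxyproto.REJECT, err
		}
		for _, ip := range allowed {
			if ip == host {
				return proxyproto.USE, nil
			}
		}
		if reject {
			return proxyproto.REJECT, nil
		}
		return proxyproto.IGNORE, nil
	}
	return &proxyproto.Listener{Listener: l, Policy: policy}
}

func main() {
	l, err := net.Listen("tcp", "127.0.0.1:2022")
	if err != nil {
		log.Fatal(err)
	}
	wrapped := wrapListener(l, []string{"10.0.0.1"}, true)
	defer wrapped.Close()
	// connections accepted from wrapped now expose the real client address
	// whenever the header comes from 10.0.0.1
}
```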
scp now properly handles virtual folders.
rsync is disabled for users with virtual folders: we execute a system
command, which is not aware of virtual folders.
git is not allowed if the repo path is inside a virtual folder
A custom program can be executed before the user logs in to modify the
configuration for the user trying to log in.
You can, for example, allow login based on time range.
Fixes #77
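A minimal sketch of such a program, assuming the user under authentication is
passed as JSON in a SFTPGO_LOGIND_USER environment variable, that empty output
leaves the configuration unchanged and that a non-zero exit status denies the
login. The variable name, the JSON field and the exit-code convention are
assumptions for illustration, not a documented contract.

```go
package main

import (
	"encoding/json"
	"fmt"
	"os"
	"time"
)

func main() {
	// the user trying to log in, serialized as JSON (assumed env variable)
	var u struct {
		Username string `json:"username"`
	}
	if err := json.Unmarshal([]byte(os.Getenv("SFTPGO_LOGIND_USER")), &u); err != nil {
		fmt.Fprintln(os.Stderr, "invalid user JSON:", err)
		os.Exit(1)
	}
	hour := time.Now().Hour()
	if hour < 8 || hour >= 18 {
		// outside the allowed time range: deny the login (assumed convention)
		fmt.Fprintf(os.Stderr, "login for %q denied outside 08:00-18:00\n", u.Username)
		os.Exit(1)
	}
	// inside the allowed range: write nothing, leaving the user configuration
	// unchanged; printing an updated user JSON here would modify it instead
}
```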
This simplifies the common pattern where the user password and a one-time
token are both requested: the external program can now delegate the password
check to SFTPGo and verify the token itself.
Some clients, for example rclone, can request only part of a file. We have
no way to detect this, so we don't return an error if the downloaded size
does not match the file size.
added the "initprovider" command to initialize the database structure.
If we change the database schema the required changes will be checked
at startup and automatically applyed.
The HTTPS certificate can be reloaded on demand by sending a SIGHUP signal on
Unix-based systems or a "paramchange" request to the running service on
Windows.
We upload a file while receiving it via SFTP, not a file stored on a local
disk. We use concurrent uploads only to be able to send files of arbitrary
size, so concurrency is not really useful here. With the concurrency set to
2 the maximum difference between the writer (the SFTP client) and the reader
(the AWS SDK) is 10 MB; with the default concurrency value the difference
is 25 MB.
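A minimal sketch of the idea, assuming the aws-sdk-go s3manager uploader: the
SFTP data is written to one end of a pipe and uploaded from the other, so the
transfer proceeds at the slower of the two speeds, and with Concurrency set to
2 and 5 MB parts at most about 10 MB can sit between the writer and the reader.
The region, bucket, key and the sample write are placeholders, not the actual
SFTPGo code.

```go
package main

import (
	"io"
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/s3/s3manager"
)

func main() {
	sess := session.Must(session.NewSession(aws.NewConfig().WithRegion("us-east-1")))
	uploader := s3manager.NewUploader(sess, func(u *s3manager.Uploader) {
		u.Concurrency = 2                        // keeps the buffered window at ~2 parts
		u.PartSize = s3manager.MinUploadPartSize // 5 MB parts
	})

	r, w := io.Pipe()
	go func() {
		// SFTP packets would be written here as they arrive; writing blocks
		// once the uploader falls about two parts behind
		defer w.Close()
		_, _ = w.Write(make([]byte, 1<<20))
	}()

	if _, err := uploader.Upload(&s3manager.UploadInput{
		Bucket: aws.String("example-bucket"),
		Key:    aws.String("example-key"),
		Body:   r,
	}); err != nil {
		log.Fatal(err)
	}
}
```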
We have to rework TestRelativePaths and TestResolvePaths if we want to run
them for Cloud Storage on Windows too: we use filesystem paths while Cloud
Storage providers expect Unix paths.
On Windows it is important to check the local filesystem, so skip the Cloud
Storage provider test cases for now.
The `memory` provider can load users from a dump obtained using the
`dumpdata` REST API. The dump file path can be set using the
dataprovider `name` configuration key. The dump will be loaded at startup
and can be reloaded on demand using a `SIGHUP` on Unix-based systems
or a `paramchange` request to the running service on Windows.
Fixes #66
Here are the main improvements:
- unlinked files work on Windows too
- uploads are now synced to the slower of the SFTP client write speed and
the S3 upload speed
This commit increases the external auth timeout to 60 seconds too.