Also, GCS credentials are now encrypted, both on disk and inside the
data provider.
The data provider is automatically migrated, and loading data will
accept the old format too, but you should upgrade to the new format to
avoid future issues.
When true, users' Google Cloud Storage credentials will be written to
the data provider instead of disk.
Pre-existing credentials on disk will be used as a fallback.
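As a sketch, assuming the new boolean data provider key is named
"prefer_database_credentials" (the actual key name may differ), the
setting could look like this in the configuration file:

    "data_provider": {
      "prefer_database_credentials": true
    }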
Fixes #201
This way you can force the user to log in again and thus use the
updated configuration.
A deleted user will be automatically disconnected.
Fixes #163
Improved some docs too.
The common package defines the interfaces that a protocol must implement
and contains code that can be shared among supported protocols.
This way it should be easier to support new protocols.
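As an illustrative sketch only (the interface name and method set here
are assumptions, not the real definitions), the kind of contract a
protocol handler must satisfy might look like this:

    package common

    // ActiveConnection sketches the contract a protocol connection is
    // expected to satisfy; the real interface may differ.
    type ActiveConnection interface {
        GetID() string       // unique identifier for the connection
        GetUsername() string // authenticated username
        Disconnect() error   // force-close the underlying connection
    }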
HTTP clients are used for executing hooks such as the ones used for custom
actions, external authentication and pre-login user modifications.
This allows, for example, using self-signed certificates without
defeating the purpose of using TLS.
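A minimal sketch of the idea, using only the Go standard library (the
function name and CA path are hypothetical): build an http.Client that
trusts an extra self-signed CA in addition to the system pool, so hook
endpoints can use TLS without disabling certificate verification.

    package main

    import (
        "crypto/tls"
        "crypto/x509"
        "net/http"
        "os"
    )

    // newHookClient returns an http.Client that trusts the CA bundle
    // at caPath in addition to the system certificate pool.
    func newHookClient(caPath string) (*http.Client, error) {
        pool, err := x509.SystemCertPool()
        if err != nil {
            // fall back to an empty pool if the system pool is unavailable
            pool = x509.NewCertPool()
        }
        pem, err := os.ReadFile(caPath)
        if err != nil {
            return nil, err
        }
        // ignore the returned bool: an unparsable PEM simply adds nothing
        pool.AppendCertsFromPEM(pem)
        return &http.Client{
            Transport: &http.Transport{
                TLSClientConfig: &tls.Config{RootCAs: pool},
            },
        }, nil
    }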
Profiling is now available via the HTTP base URL /debug/pprof/.
For example, use this URL to start and download a 30-second CPU profile:
/debug/pprof/profile?seconds=30
use this URL to profile used memory:
/debug/pprof/heap?gc=1
use this URL to profile allocated memory:
/debug/pprof/allocs?gc=1
Full docs here:
https://golang.org/pkg/net/http/pprof/
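You can also fetch and inspect a profile interactively with the Go
toolchain, for example (assuming the HTTP server listens on
127.0.0.1:8080):

    go tool pprof http://127.0.0.1:8080/debug/pprof/profile?seconds=30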
Please note that if the upload bandwidth between the SFTP client and
SFTPGo is greater than the upload bandwidth between SFTPGo and S3, then
the SFTP client has to wait for the upload of the last parts to S3
after it finishes uploading the file to SFTPGo, and it may time out.
Keep this in mind if you customize part size and upload concurrency.
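For example, a sketch of the relevant S3 settings (the key names and
the MB unit for the part size are assumptions):

    "s3config": {
      "upload_part_size": 10,
      "upload_concurrency": 4
    }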
The HTTPS certificate can be reloaded on demand by sending a SIGHUP
signal on Unix-based systems and a "paramchange" request to the running
service on Windows.
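For example, on Linux (assuming the service process is named sftpgo):

    kill -HUP $(pidof sftpgo)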
The `memory` provider can load users from a dump obtained using the
`dumpdata` REST API. The path to this dump file can be set using the
dataprovider `name` configuration key. It will be loaded at startup
and can be reloaded on demand using a `SIGHUP` on Unix-based systems
and a `paramchange` request to the running service on Windows.
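For example, a minimal sketch of the relevant configuration (the dump
path is hypothetical):

    "data_provider": {
      "driver": "memory",
      "name": "/var/lib/sftpgo/dump.json"
    }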
Fixes #66
Login can be restricted to specific ranges of IP addresses or to a
specific IP address.
Please apply the appropriate SQL upgrade script to add the filter field to your
database.
The filter database field will allow adding other filters without
requiring a new database migration.
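As a sketch, assuming the filters are expressed as CIDR lists under
"allowed_ip" and "denied_ip" keys (the field names are assumptions), a
user's filters could look like:

    "filters": {
      "allowed_ip": ["192.168.1.0/24", "10.8.0.2/32"],
      "denied_ip": []
    }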
We can now have per-directory permissions such as these:
{"/":["*"],"/somedir":["list","download"]}
The old permissions are automatically converted to the new structure;
no database migration is needed.
We use the system commands "git-receive-pack", "git-upload-pack" and
"git-upload-archive". They need to be installed and in your system's
PATH. Since we execute system commands we have no direct control over
file creation/deletion, so the quota check is suboptimal: if quota is
enabled, the number of files is checked when the command begins, not
while new files are created.
The allowed size is calculated as the difference between the maximum
quota and the used one. The command is aborted if it uploads more bytes
than the remaining allowed size calculated at the command start. Quotas
are recalculated at the command end with a full home directory scan,
which could be heavy for big directories.
A user can now be disabled or expired.
If you are using an SQL database as the data provider, please remember
to execute the SQL update script inside the "sql" folder.
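As a sketch, assuming a numeric "status" field (1 enabled, 0 disabled)
and an "expiration_date" expressed in milliseconds since the Unix epoch
(field names and units are assumptions), a disabled, expiring user
could look like:

    {
      "status": 0,
      "expiration_date": 1577836800000
    }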
Fixes #57