This simplifies the common pattern where the user password and a one-time
token are requested: the external program can now delegate the password check
to SFTPGo and verify only the token itself.
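A minimal sketch of such an external program, assuming the delegation is
signaled by returning the user with an empty password field and that the
client-supplied secret reaching SFTPGO_AUTHD_PASSWORD carries the token
(`verifyToken` is a hypothetical placeholder for your token backend):

```go
// External auth sketch: verify the one-time token here and let SFTPGo
// check the stored password itself.
package main

import (
	"encoding/json"
	"fmt"
	"os"
)

// verifyToken is a hypothetical placeholder: validate the OTP against
// your token backend here.
func verifyToken(username, token string) bool {
	return token != ""
}

func main() {
	username := os.Getenv("SFTPGO_AUTHD_USERNAME")
	token := os.Getenv("SFTPGO_AUTHD_PASSWORD")
	if !verifyToken(username, token) {
		fmt.Print(`{"username":""}`) // empty username signals auth failure
		return
	}
	resp, _ := json.Marshal(map[string]string{
		"username": username,
		"password": "", // empty: SFTPGo performs the password check itself
	})
	fmt.Print(string(resp))
}
```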
Added the "initprovider" command to initialize the database structure.
If we change the database schema, the required changes will be checked
at startup and automatically applied.
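A sketch of how such a startup check can work, assuming a `schema_version`
table and an ordered set of migration statements (both illustrative, not
the provider's real schema):

```go
package dataprovider

import "database/sql"

// migrations maps a schema version to the SQL needed to reach it
// (illustrative statements, not the real ones).
var migrations = map[int]string{
	2: "ALTER TABLE users ADD COLUMN filters TEXT",
}

// checkAndUpdateSchema reads the current schema version at startup and
// applies any newer migrations in order.
func checkAndUpdateSchema(db *sql.DB, target int) error {
	var current int
	if err := db.QueryRow("SELECT version FROM schema_version").Scan(&current); err != nil {
		return err
	}
	for v := current + 1; v <= target; v++ {
		if _, err := db.Exec(migrations[v]); err != nil {
			return err
		}
		if _, err := db.Exec("UPDATE schema_version SET version = ?", v); err != nil {
			return err
		}
	}
	return nil
}
```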
The `memory` provider can load users from a dump obtained using the
`dumpdata` REST API. The path to the dump file can be set using the
dataprovider `name` configuration key. The dump will be loaded at startup
and can be reloaded on demand, using a `SIGHUP` on Unix-based systems
or a `paramchange` request to the running service on Windows.
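A sketch of the Unix reload wiring (the Windows `paramchange` path goes
through the service control handler instead; names are illustrative):

```go
//go:build !windows

package service

import (
	"os"
	"os/signal"
	"syscall"
)

// handleSighup invokes reload, e.g. re-reading the users dump file,
// every time the process receives a SIGHUP.
func handleSighup(reload func()) {
	c := make(chan os.Signal, 1)
	signal.Notify(c, syscall.SIGHUP)
	go func() {
		for range c {
			reload()
		}
	}()
}
```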
Fixes #66
Here are the main improvements:
- unlinked files now work on Windows too
- uploads are now synced to the lower of the SFTP client write speed
  and the S3 upload speed
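A sketch of how this pacing falls out naturally from an `io.Pipe` feeding
the AWS SDK uploader (assuming the aws-sdk-go `s3manager` package; function
names are illustrative):

```go
package s3backend

import (
	"io"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/service/s3/s3manager"
)

// startUpload wires the SFTP write path to S3 through a pipe: a write
// to w blocks until the uploader has consumed the data, so the transfer
// runs at the lower of the client write speed and the S3 upload speed.
func startUpload(uploader *s3manager.Uploader, bucket, key string) (io.WriteCloser, <-chan error) {
	r, w := io.Pipe()
	done := make(chan error, 1)
	go func() {
		_, err := uploader.Upload(&s3manager.UploadInput{
			Bucket: aws.String(bucket),
			Key:    aws.String(key),
			Body:   r,
		})
		r.CloseWithError(err)
		done <- err
	}()
	return w, done
}
```

The SFTP write handler writes into the returned `io.WriteCloser` and closes
it when the client is done; the error channel reports the final upload result.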
This commit also increases the external auth timeout to 60 seconds.
Use os.Environ() as a base instead of an empty environment. Currently the environment of the executed external auth program contains only the SFTPGO_AUTHD* variables, so the program lacks additional context when started.
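The change in a nutshell (a sketch; the function and variable names are
illustrative):

```go
package command

import (
	"os"
	"os/exec"
)

// buildAuthCmd starts from the parent environment and appends the
// SFTPGO_AUTHD* variables instead of replacing everything, so the
// external program keeps PATH, HOME and the rest of its context.
func buildAuthCmd(authProgram, username, password string) *exec.Cmd {
	cmd := exec.Command(authProgram)
	cmd.Env = append(os.Environ(),
		"SFTPGO_AUTHD_USERNAME="+username,
		"SFTPGO_AUTHD_PASSWORD="+password,
	)
	return cmd
}
```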
Login can be restricted to specific ranges of IP addresses or to a specific IP
address.
Please apply the appropriate SQL upgrade script to add the filter field to your
database.
The filter database field will allow adding other filters without requiring a
new database migration.
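A sketch of how such a filter can be evaluated at login time, assuming the
filter holds plain IPs and/or CIDR ranges (field and function names are
illustrative):

```go
package dataprovider

import "net"

// isLoginAllowed returns true if remoteIP matches one of the allowed
// entries; an empty filter allows everyone.
func isLoginAllowed(remoteIP string, allowed []string) bool {
	if len(allowed) == 0 {
		return true
	}
	ip := net.ParseIP(remoteIP)
	for _, entry := range allowed {
		if _, ipNet, err := net.ParseCIDR(entry); err == nil {
			if ipNet.Contains(ip) {
				return true
			}
		} else if allowedIP := net.ParseIP(entry); allowedIP != nil && allowedIP.Equal(ip) {
			return true
		}
	}
	return false
}
```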
The git push test sometimes fails when running on Travis.
The issue cannot be replicated locally, so print the logs to try to
understand what is happening.
Currently we support:
- Linux/Unix users stored in shadow/passwd files
- Pure-FTPd virtual users generated using `pure-pw` CLI
- ProFTPD users generated using `ftpasswd` CLI
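Such users typically come with their original password hashes, so a
compatibility check has to dispatch on the hash prefix. A sketch, with only
the bcrypt branch implemented (MD5-crypt, Apache MD5 and classic DES crypt
would need a crypt(3)-compatible library):

```go
package dataprovider

import (
	"errors"
	"strings"

	"golang.org/x/crypto/bcrypt"
)

// verifyCompatPassword checks a password against a hash produced by
// one of the supported external tools, dispatching on the hash prefix.
func verifyCompatPassword(password, hash string) (bool, error) {
	switch {
	case strings.HasPrefix(hash, "$2a$"), strings.HasPrefix(hash, "$2b$"),
		strings.HasPrefix(hash, "$2y$"):
		err := bcrypt.CompareHashAndPassword([]byte(hash), []byte(password))
		return err == nil, nil
	case strings.HasPrefix(hash, "$1$"), strings.HasPrefix(hash, "$apr1$"):
		return false, errors.New("MD5-crypt verification not shown in this sketch")
	default:
		return false, errors.New("unsupported hash format")
	}
}
```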
We can now have per-directory permissions such as:

```
{"/":["*"],"/somedir":["list","download"]}
```

The old permissions are automatically converted to the new structure;
no database migration is needed.
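The closest configured parent directory determines the effective
permissions. A sketch of the lookup (names are illustrative):

```go
package dataprovider

import "path"

// permsForPath walks from the requested path up toward "/" and returns
// the permissions of the closest configured entry.
func permsForPath(perms map[string][]string, virtualPath string) []string {
	p := path.Clean(virtualPath)
	if !path.IsAbs(p) {
		p = path.Join("/", p)
	}
	for {
		if granted, ok := perms[p]; ok {
			return granted
		}
		if p == "/" {
			return nil
		}
		p = path.Dir(p)
	}
}
```

With the example above, `/somedir/file.txt` resolves to the `/somedir`
entry, while any other path falls back to `/`.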
We use the system commands "git-receive-pack", "git-upload-pack" and
"git-upload-archive". They need to be installed and in your system's
PATH. Since we execute system commands we have no direct control over
file creation/deletion, so the quota check is suboptimal: if quota is
enabled, the number of files is checked when the command starts, not
while new files are created.
The allowed size is calculated as the difference between the max quota
and the used one. The command is aborted if it uploads more bytes than
the remaining allowed size calculated at the command start. Quotas are
recalculated when the command ends with a full home directory scan; this
can be expensive for large directories.
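A sketch of the bound computed at command start (the package and function
names are illustrative):

```go
package vfs // illustrative package name

// remainingQuotaSize returns how many bytes the command may still
// write, based on the quota snapshot taken when the command starts.
// A negative result means the quota is unlimited.
func remainingQuotaSize(quotaSize, usedSize int64) int64 {
	if quotaSize <= 0 {
		return -1 // unlimited
	}
	if remaining := quotaSize - usedSize; remaining > 0 {
		return remaining
	}
	return 0
}
```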
Added matching permissions too, and a new "setstat_mode" setting.
Setting setstat_mode to 1 keeps the previous behaviour of
silently ignoring setstat requests.
A user can now be disabled or expired.
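A sketch of the resulting login gate, assuming `status == 0` means disabled
and the expiration date is a Unix timestamp in milliseconds with 0 meaning
"never expires" (names are illustrative):

```go
package dataprovider

import (
	"errors"
	"time"
)

// checkUserValid rejects logins for disabled or expired users.
func checkUserValid(status int, expirationDate int64) error {
	if status == 0 {
		return errors.New("user is disabled")
	}
	if expirationDate > 0 && expirationDate < time.Now().UnixMilli() {
		return errors.New("user is expired")
	}
	return nil
}
```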
If you are using an SQL database as dataprovider, please remember to
execute the SQL update script inside the "sql" folder.
Fixes #57
The update is atomic, so no transaction is needed.
Additionally, a transaction would require a new connection from the pool,
and this can deadlock if the pool's max connection limit is too low.
Also make the pool size configurable instead of hard-coding it to the CPU count.
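A sketch of the configurable pool using the standard `database/sql` knobs
(the fallback default shown is illustrative):

```go
package dataprovider

import "database/sql"

// setupPool applies the configured pool size, falling back to a default
// when pool_size is unset (it used to be hard-coded to the CPU count).
func setupPool(db *sql.DB, poolSize int) {
	if poolSize <= 0 {
		poolSize = 10 // illustrative default
	}
	db.SetMaxOpenConns(poolSize)
	db.SetMaxIdleConns(poolSize)
}
```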
Fixes #47
* created a "Log" function for type "Connection"
* created a "log" function for type "Provider"
* replaced logger calls with Log/log where possible
I also renamed PGSSQL to PGSQL, as this seemed to be a typo
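A sketch of what the Connection-scoped helper can look like (in the real
code it forwards to the shared logger; this version just prints):

```go
package sftpd

import "fmt"

// Connection carries the per-connection context every log line should
// include.
type Connection struct {
	ID       string
	Username string
}

// Log prepends the connection ID and username so call sites don't have
// to repeat them.
func (c *Connection) Log(level, format string, v ...interface{}) {
	fmt.Printf("%s connection_id=%q username=%q %s\n",
		level, c.ID, c.Username, fmt.Sprintf(format, v...))
}
```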
Signed-off-by: Jo Vandeginste <Jo.Vandeginste@kuleuven.be>
This will show the key fingerprint and the associated comment, or
"password" when password authentication was used, during login.
E.g.:
```
message":"User id: 1, logged in with: \"public_key:SHA256:FV3+wlAKGzYy7+J02786fh8N8c06+jga/mdiSOSPT7g:jo@desktop\",
```
or
```
message":"User id: 1, logged in with: \"password\",
...
```
Signed-off-by: Jo Vandeginste <Jo.Vandeginste@kuleuven.be>
Added a compatibility layer that converts newline-delimited keys to an array
when the user is fetched from the database.
This code will be removed in future versions, so please update your public keys:
you only need to re-save the users using the REST API.
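A sketch of the conversion applied when loading a user (names are
illustrative):

```go
package dataprovider

import "strings"

// convertLegacyPublicKeys turns the old newline-delimited public keys
// column into the new array form.
func convertLegacyPublicKeys(stored string) []string {
	if stored == "" {
		return nil
	}
	return strings.Split(stored, "\n")
}
```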