The common package defines the interfaces that a protocol must implement
and contains code that can be shared among the supported protocols.
This should make it easier to support new protocols.
Some external auth users want to map multiple login usernames to a single
SFTPGo account.
For example, an SFTP user logs in using "user1" or "user2" and the external auth
returns "user" in both cases, so we use the username returned by the external
auth and not the one used to log in.
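As an illustration (paths and values are made up), the external auth program could print the same SFTPGo user JSON for both logins, for example:
```json
{
  "username": "user",
  "home_dir": "/srv/sftpgo/user",
  "permissions": {
    "/": ["*"]
  }
}
```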
Fixes #125
HTTP clients are used for executing hooks such as the ones used for custom
actions, external authentication and pre-login user modifications.
This allows, for example, using self-signed certificates without defeating the
purpose of using TLS.
This way we can import the default password format used in 389ds.
See the TestPasswordsHashPbkdf2Sha256_389DS test case to learn how to convert
389ds passwords.
Please note that if the upload bandwidth between the SFTP client and
SFTPGo is greater than the upload bandwidth between SFTPGo and S3, then
the SFTP client has to wait for the upload of the last parts to S3
after it finishes the file upload to SFTPGo, and it may time out.
Keep this in mind if you customize the part size and upload concurrency.
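As a sketch only, a per-user filesystem configuration tuning these values might look like the following; the field names and the assumption that the part size is expressed in MB should be verified against the current docs:
```json
{
  "filesystem": {
    "provider": 1,
    "s3config": {
      "bucket": "my-bucket",
      "region": "us-east-1",
      "upload_part_size": 10,
      "upload_concurrency": 2
    }
  }
}
```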
A custom program can be executed before a user logs in to modify the
configuration for the user trying to log in.
You can, for example, allow login based on a time range.
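A minimal sketch, assuming the hook prints back the (possibly modified) user as JSON and that returning the user with status set to 0 denies the login outside the allowed time range (values are illustrative):
```json
{
  "status": 0,
  "username": "user1",
  "home_dir": "/srv/sftpgo/user1",
  "permissions": {
    "/": ["*"]
  }
}
```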
Fixes #77
This simplifies the common pattern where the user password and a one-time
token are requested: now the external program can delegate the password check
to SFTPGo and verify the token itself.
added the "initprovider" command to initialize the database structure.
If we change the database schema the required changes will be checked
at startup and automatically applyed.
The `memory` provider can load users from a dump obtained using the
`dumpdata` REST API. This dump file can be configured using the
dataprovider `name` configuration key. It will be loaded at startup
and can be reloaded on demand using a `SIGHUP` on Unix-based systems
and a `paramchange` request to the running service on Windows.
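For example, a dataprovider section pointing the memory provider to a dump file could look like this (the path is illustrative):
```json
{
  "data_provider": {
    "driver": "memory",
    "name": "/var/lib/sftpgo/dump.json"
  }
}
```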
Fixes #66
Here are the main improvements:
- unlinked files now work on Windows too
- uploads now proceed at the slower of the SFTP client write speed and the
upload speed to S3
This commit increases the external auth timeout to 60 seconds too.
Use os.Environ() as a base instead of an empty environment. Currently the environment of the executed external auth program only contains SFTPGO_AUTHD* variables, so the program lacks additional context when started.
Login can be restricted to specific IP address ranges or to a specific IP
address.
Please apply the appropriate SQL upgrade script to add the filter field to your
database.
The filter database field will allow adding other filters without requiring a
new database migration.
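As an illustration, the new per-user filters could look like this in the user JSON (the addresses are examples and the field names are assumptions to verify against the docs):
```json
{
  "filters": {
    "allowed_ip": ["192.168.1.0/24", "10.8.0.2/32"],
    "denied_ip": []
  }
}
```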
The git push test sometimes fails when running on Travis.
The issue cannot be replicated locally, so print the logs to try to
understand what is happening.
currently we support:
- Linux/Unix users stored in shadow/passwd files
- Pure-FTPd virtual users generated using `pure-pw` CLI
- ProFTPD users generated using `ftpasswd` CLI
We can now have permissions such as:
{"/":["*"],"/somedir":["list","download"]}
The old permissions are automatically converted to the new structure;
no database migration is needed.
We use the system commands "git-receive-pack", "git-upload-pack" and
"git-upload-archive". They need to be installed and in your system's
PATH. Since we execute system commands we have no direct control over
file creation/deletion, so the quota check is suboptimal: if quota is
enabled, the number of files is checked when the command starts, not
while new files are created.
The allowed size is calculated as the difference between the max quota
and the used one. The command is aborted if it uploads more bytes than
the remaining allowed size calculated at the command start. Quotas are
recalculated at the command end with a full home directory scan; this
can be heavy for big directories.
Added matching permissions too and a new setting, "setstat_mode".
Setting setstat_mode to 1 keeps the previous behaviour of
silently ignoring setstat requests.
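For example, a sketch of the relevant configuration, assuming the option lives in the sftpd section:
```json
{
  "sftpd": {
    "setstat_mode": 1
  }
}
```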
A user can now be disabled or expired.
If you are using an SQL database as the dataprovider, please remember to
execute the SQL update script inside the "sql" folder.
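As a sketch, the new fields could be set like this (values are illustrative; the expiration date is assumed to be a timestamp in milliseconds since the epoch):
```json
{
  "username": "user1",
  "status": 0,
  "expiration_date": 1577836800000
}
```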
Fixes #57
The update is atomic, so no transaction is needed.
Additionally, a transaction would ask for a new connection from the pool,
and this can deadlock if the pool's max connection limit is too low.
Also make the pool size configurable instead of hard-coding it to the number of CPUs.
Fixes #47
* created a "Log" function for type "Connection"
* created a "log" function for type "Provider"
* replaced logger calls with Log/log where possible
I also renamed PGSSQL to PGSQL, as this seemed to be a typo
Signed-off-by: Jo Vandeginste <Jo.Vandeginste@kuleuven.be>
This will show the key fingerprint and the associated comment, or
"password" when password was used, during login.
E.g.:
```
message":"User id: 1, logged in with: \"public_key:SHA256:FV3+wlAKGzYy7+J02786fh8N8c06+jga/mdiSOSPT7g:jo@desktop\",
```
or
```
message":"User id: 1, logged in with: \"password\",
...
```
Signed-off-by: Jo Vandeginste <Jo.Vandeginste@kuleuven.be>
Added a compatibility layer that will convert newline delimited keys to an array
when the user is fetched from the database.
This code will be removed in future versions, so please update your public keys:
you only need to re-save the users using the REST API.
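After re-saving, the public_keys field is stored as a JSON array, e.g. (the keys are truncated placeholders for illustration):
```json
{
  "public_keys": [
    "ssh-rsa AAAA... user@host1",
    "ssh-ed25519 AAAA... user@host2"
  ]
}
```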
When the SQLite path starts with a `/`, we consider it to be an absolute path.
E.g.:
```json
{
  "data_provider": {
    "name": "/var/lib/sftpgo/sftpgo.db"
  }
}
```